Domain: Machine Learning & AI (MLOps)
Skill profile: W&B, ClearML, Neptune, hyperparameter optimization, model comparison
Roles including this skill: 7
Levels: 5 (a structured growth path)
Requirements: 25 required, the remaining 10 optional
2026/3/17
The tables below show how the expected depth of this skill grows from junior to principal, one table per level; compare your current level against the next to see what must be mastered for promotion.
Level 1 (Junior)

| Role | Requirement | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Logs basic experiment parameters and metrics (accuracy, loss) using MLflow or W&B under supervision; follows team conventions for run naming and tagging in computer vision projects |
| Data Analyst | Optional | Records analysis parameters, data snapshots, and result summaries in a shared experiment tracker; follows established templates for documenting exploratory analyses and A/B test configurations |
| Data Scientist | Optional | Tracks model training runs with hyperparameters, metrics, and dataset versions using MLflow or Neptune; follows team guidelines for organizing experiments into projects and comparing baseline results |
| LLM Engineer | Optional | Logs prompt templates, model versions, and evaluation scores (BLEU, ROUGE, human ratings) in experiment tracking tools; follows team standards for recording LLM fine-tuning and inference experiments |
| ML Engineer | Required | Logs ML experiments: parameters, metrics, and models. Uses MLflow or W&B to compare experiments. Understands the importance of reproducibility. |
| MLOps Engineer | Optional | Understands the importance of experiment tracking for ML result reproducibility. Logs hyperparameters, metrics, and training artifacts in MLflow or Weights & Biases and compares results of different runs via the UI. Follows team conventions for experiment naming and run tagging. |
| NLP Engineer | Required | Knows experiment tracking basics (MLflow, Weights & Biases). Logs NLP experiment metrics and parameters: F1, precision, and recall for NER and text classification models. |
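The junior-level rows above all reduce to one habit: record each run's parameters, metrics, and team-convention tags so results stay comparable. A minimal stdlib sketch of what MLflow's `log_param`/`log_metric` or `wandb.log` automate (the project name, tag keys, and file layout here are hypothetical, chosen only for illustration):

```python
import json
import time
from pathlib import Path

def log_run(root: Path, project: str, params: dict, metrics: dict, tags: dict) -> Path:
    """Record one training run as a JSON file — a hand-rolled stand-in for
    what MLflow/W&B automate: parameters, final metrics, and convention tags."""
    run_id = f"{project}-{int(time.time() * 1000)}"  # simple unique run name
    record = {"run_id": run_id, "params": params, "metrics": metrics, "tags": tags}
    root.mkdir(parents=True, exist_ok=True)
    out = root / f"{run_id}.json"
    out.write_text(json.dumps(record, indent=2))
    return out

# Usage: log a baseline classifier run (values are illustrative)
path = log_run(
    Path("runs"),
    project="cv-detector",
    params={"lr": 3e-4, "epochs": 10},
    metrics={"accuracy": 0.91, "loss": 0.27},
    tags={"stage": "baseline", "owner": "alice"},  # hypothetical tag convention
)
```

The point of the sketch is the shape of the record, not the storage: real trackers add UIs, artifact stores, and run comparison on top of exactly this data.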
Level 2

| Role | Requirement | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Independently structures experiment tracking for CV pipelines: versions datasets with DVC, logs augmentation configs and model architectures in W&B, and builds comparison dashboards for detection/segmentation metrics |
| Data Analyst | Optional | Independently designs experiment tracking workflows for analytical projects: versions SQL queries and feature definitions, logs statistical test parameters, and maintains reproducible analysis trails with clear lineage |
| Data Scientist | Optional | Designs structured experiment hierarchies in MLflow or Neptune: groups related runs into parent experiments, tracks feature engineering decisions alongside model metrics, and understands trade-offs between tracking granularity and overhead |
| LLM Engineer | Optional | Builds comprehensive experiment tracking for LLM workflows: versions prompt chains and RAG configurations, logs token usage and latency alongside quality metrics, and compares fine-tuning runs across different base models in W&B or MLflow |
| ML Engineer | Required | Designs the experiment tracking workflow. Organizes experiments by projects and tasks. Configures hyperparameter sweeps (Optuna, W&B Sweeps). Analyzes results to support decision making. |
| MLOps Engineer | Optional | Configures experiment tracking for ML projects: organizes experiments by task and sets up automatic logging via callbacks in PyTorch Lightning/Keras. Implements model comparison across multiple metrics, training visualization via W&B plots, and dataset and config tracking for full reproducibility of every experiment in the team. |
| NLP Engineer | Required | Independently organizes NLP model experiments: dataset versioning, configuration comparison, and artifact tracking. Configures dashboards for monitoring training progress. |
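Several Level 2 rows mention configuring hyperparameter sweeps (Optuna, W&B Sweeps). At its core a sweep is a loop over sampled configurations with the best result retained; the sketch below shows that core loop with a synthetic objective standing in for a real training run (the search space and objective are invented for illustration — Optuna's `study.optimize()` wraps this loop with smarter samplers and pruning):

```python
import random

def objective(lr: float, batch_size: int) -> float:
    # Stand-in for a training run's validation loss; a real sweep would
    # train and evaluate a model here. Optimum is near lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 + 0.001 * abs(batch_size - 64)

def random_search(n_trials: int, seed: int = 0):
    """Minimal random search over a small space: sample a config, score it,
    keep the best (loss, params) pair seen so far."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        trial = {
            "lr": rng.uniform(1e-4, 1e-1),
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        loss = objective(**trial)
        if best is None or loss < best[0]:
            best = (loss, trial)
    return best

best_loss, best_params = random_search(50)
```

In a tracked sweep, every trial — not just the winner — would also be logged as a run, which is what makes the later comparison dashboards possible.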
Level 3

| Role | Requirement | Description |
|---|---|---|
| Computer Vision Engineer | Required | Designs end-to-end experiment tracking architecture for CV teams: integrates MLflow with CI/CD for automated metric collection, establishes artifact versioning for large image datasets, and mentors engineers on reproducible experiment practices |
| Data Analyst | Required | Designs experiment tracking standards for analytics teams: builds reusable templates for A/B tests and causal analyses, integrates tracking with BI dashboards for stakeholder visibility, and mentors analysts on maintaining auditable experiment histories |
| Data Scientist | Required | Architects scalable experiment tracking infrastructure: optimizes MLflow/Neptune deployments for high-throughput training, designs custom metric visualizations for complex model comparisons, and establishes governance policies for experiment metadata and artifact retention |
| LLM Engineer | Required | Architects experiment tracking systems for LLM platforms: designs custom logging for multi-stage pipelines (retrieval, generation, ranking), optimizes storage for large prompt/response artifacts, and mentors teams on tracking evaluation drift across model versions |
| ML Engineer | Required | Designs experiment tracking infrastructure. Automates experiment analysis. Integrates tracking with CI/CD for automated model promotion. |
| MLOps Engineer | Required | Architects experiment tracking infrastructure for the MLOps platform: CI/CD integration for automatic experiment execution and lineage tracking from data to model. Implements automatic hyperparameter tuning with tracking via Optuna/Ray Tune, configures model comparison pipelines, and defines criteria for automatic promotion of the best models. |
| NLP Engineer | Required | Designs infrastructure for large-scale NLP experiment tracking. Automates training pipelines, implements A/B model testing, and builds systems for automatic selection of the best configuration. |
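The Level 3 rows repeatedly mention CI/CD integration and criteria for automatic promotion of the best model. A minimal sketch of such a promotion rule — the run must beat the production baseline by a margin, or nothing is promoted — assuming a simple `Run` record (the metric name, run IDs, and thresholds are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Run:
    run_id: str
    metrics: dict

def select_for_promotion(runs, metric: str, baseline: float, min_improvement: float = 0.0):
    """Pick the run that beats the current production baseline by at least
    min_improvement on the given metric — the kind of gate a CI/CD job
    applies before registering a model. Returns None if no run qualifies."""
    candidates = [
        r for r in runs
        if r.metrics.get(metric, float("-inf")) >= baseline + min_improvement
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r.metrics[metric])

# Usage: three tracked runs compete against a production baseline of 0.84 F1
runs = [
    Run("a1", {"f1": 0.82}),
    Run("b2", {"f1": 0.87}),
    Run("c3", {"f1": 0.85}),
]
winner = select_for_promotion(runs, metric="f1", baseline=0.84, min_improvement=0.01)
```

Returning `None` when nothing clears the bar is the important design choice: an automated pipeline should prefer keeping the current model over promoting a marginal one.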
Level 4

| Role | Requirement | Description |
|---|---|---|
| Computer Vision Engineer | Required | Defines experiment tracking strategy for CV teams: standardizes tracking of dataset evolution, annotation quality metrics, and model performance across deployment targets; ensures experiment lineage supports regulatory compliance for safety-critical vision systems |
| Data Analyst | Required | Defines experiment tracking strategy for analytics teams: standardizes documentation of hypothesis testing, statistical methodologies, and decision outcomes; builds frameworks connecting analytical experiments to business impact measurement |
| Data Scientist | Required | Defines experiment tracking strategy for the data science function: selects and standardizes tooling (MLflow, W&B, Neptune) across teams, establishes cross-team experiment sharing protocols, and ensures tracking practices support model governance and audit requirements |
| LLM Engineer | Required | Defines experiment tracking strategy for LLM product teams: standardizes tracking of prompt engineering iterations, model evaluations, and cost/quality trade-offs; builds team dashboards connecting experiment outcomes to product metrics and release decisions |
| ML Engineer | Required | Defines experiment tracking standards. Builds a culture of experimentation. Standardizes metrics and evaluation. |
| MLOps Engineer | Required | Defines experiment tracking standards for the MLOps team: mandatory metadata for each experiment, a tag taxonomy, and naming conventions. Implements experiment review processes before production deployment, configures automated result reports, and ensures linkage between experiments, Git PRs, and model deployments. |
| NLP Engineer | Required | Defines experiment tracking standards for the NLP team. Establishes reproducibility processes, metrics guidelines, and criteria for promoting models from experiment to production. |
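The MLOps Engineer row above calls for mandatory metadata, a tag taxonomy, and a review gate before production deployment. Such standards are only real if they are machine-checkable; a minimal sketch of the check, with an invented taxonomy (`owner`, `dataset_version`, `git_pr`, `stage` are hypothetical tag names, not a standard):

```python
# Hypothetical team taxonomy: every run must carry these tags, and "stage"
# must take one of a fixed set of values.
REQUIRED_TAGS = {"owner", "dataset_version", "git_pr", "stage"}
ALLOWED_STAGES = {"exploration", "candidate", "production"}

def validate_run_tags(tags: dict) -> list:
    """Return a list of policy violations for one run's tags — the check an
    experiment-review gate would run before a model may be deployed."""
    errors = [f"missing required tag: {t}" for t in sorted(REQUIRED_TAGS - tags.keys())]
    stage = tags.get("stage")
    if stage is not None and stage not in ALLOWED_STAGES:
        errors.append(f"unknown stage: {stage}")
    return errors

# Usage: one compliant run, one that would be rejected by the gate
ok = validate_run_tags(
    {"owner": "bob", "dataset_version": "v3", "git_pr": "#142", "stage": "candidate"}
)
bad = validate_run_tags({"owner": "bob", "stage": "testing"})
```

Requiring a `git_pr`-style tag is what makes the experiment-to-PR-to-deployment linkage mentioned above queryable rather than tribal knowledge.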
Level 5 (Principal)

| Role | Requirement | Description |
|---|---|---|
| Computer Vision Engineer | Required | Shapes organization-wide experiment tracking standards for computer vision: architects unified platforms linking dataset governance, training experiments, and deployment validation across all CV products; drives industry best practices for reproducibility and regulatory traceability in vision AI |
| Data Analyst | Required | Shapes organization-wide experiment tracking culture for data-driven decision making: architects unified frameworks linking analytical experiments to strategic outcomes, establishes cross-departmental standards for hypothesis documentation and result reproducibility, and drives adoption of systematic experimentation as a core organizational capability |
| Data Scientist | Required | Shapes the organization-wide experiment tracking vision: drives adoption of unified experiment management platforms across ML, analytics, and engineering; defines metadata standards enabling cross-functional experiment discovery, reproducibility audits, and institutional knowledge preservation |
| LLM Engineer | Required | Shapes organization-wide experiment tracking standards for AI/LLM initiatives: architects centralized systems capturing prompt evolution, model lineage, and evaluation benchmarks across all LLM products; drives integration of experiment tracking with compliance frameworks and cost optimization at scale |
| ML Engineer | Required | Defines the enterprise experimentation strategy. Designs the experiment platform. Evaluates novel experimentation approaches. |
| MLOps Engineer | Required | Shapes the experiment management strategy at the organizational level: a unified platform for all ML teams with reproducibility and audit standards. Designs the scaling architecture: a multi-tenant experiment store with Model Registry and Feature Store integration for complete ML lineage. Defines storage policies, compliance requirements, and governance for experimental data. |
| NLP Engineer | Required | Shapes the enterprise ML experiment management strategy for the NLP platform. Defines reproducibility standards, governance, and the audit trail for all organizational NLP models. |
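The Level 5 rows emphasize complete ML lineage from data to model and an audit trail. The data structure behind such lineage is simple: each artifact records its direct parents, and an audit query walks the graph. A stdlib sketch under invented artifact names (the `model:`/`run:`/`dataset:` identifiers are hypothetical conventions, not those of any particular registry):

```python
# Minimal lineage store: each artifact maps to its direct parents, letting
# an auditor walk from a deployed model back to the data that produced it.
LINEAGE = {
    "model:fraud-v7": ["run:exp-311"],
    "run:exp-311": ["dataset:txns-2026-02", "feature_set:fs-v4"],
    "feature_set:fs-v4": ["dataset:txns-2026-02"],
    "dataset:txns-2026-02": [],
}

def upstream(artifact: str) -> set:
    """All transitive ancestors of an artifact — the audit-trail query
    'which data and runs does this model depend on?'."""
    seen = set()
    stack = [artifact]
    while stack:
        for parent in LINEAGE.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

ancestors = upstream("model:fraud-v7")
```

Production platforms keep this graph in the experiment store and registry rather than a dict, but the governance questions they answer reduce to exactly this traversal.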