Domain: Machine Learning & AI
Skill profile: MLflow tracking, model registry, serving, projects
Roles: 6 (where this skill appears)
Levels: 5 (structured growth path)
Required: 22 (the other 8 are optional)
Machine Learning & AI / MLOps · 17/3/2026
Select your current level and compare the expectations.
The tables show how depth grows from Junior to Principal.
Level 1 (Junior)

| Role | Required | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Logs computer vision experiments in MLflow, tracking metrics such as mAP, IoU, and loss curves. Registers trained model artifacts (YOLO, ResNet checkpoints) and tags runs with dataset versions. Follows team conventions for CV experiment naming and parameter logging. |
| Data Scientist | Optional | Tracks ML experiments in MLflow, logging hyperparameters, metrics, and model artifacts for reproducibility. Uses the MLflow UI to compare runs across different algorithms (XGBoost, LightGBM, sklearn). Follows team standards for experiment naming, tagging, and artifact storage conventions. |
| LLM Engineer | Optional | Logs LLM fine-tuning runs in MLflow, tracking training loss, perplexity, and evaluation scores. Records prompt templates, tokenizer configs, and adapter weights (LoRA, QLoRA) as artifacts. Uses the MLflow UI to compare runs across different base models and hyperparameter sets. |
| ML Engineer | Required | Uses MLflow for experiment logging: parameters, metrics, artifacts. Compares experiments in the MLflow UI. Saves models through mlflow.log_model. |
| MLOps Engineer | Optional | Uses MLflow Tracking to log basic training metrics: accuracy, loss, F1-score. Can run mlflow ui to view experiments, log model parameters via mlflow.log_param, and save training artifacts. Understands the MLflow structure (experiments, runs, artifacts) and navigates previous run results. |
| NLP Engineer | Required | Knows MLflow basics: experiments, runs, metrics, parameters, artifacts. Logs NLP model training results: F1-score, accuracy, confusion matrix for text classification and NER. |
Level 2

| Role | Required | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Designs MLflow experiment structures for multi-stage CV pipelines: augmentation tuning, backbone selection, and detection-head optimization. Implements custom metrics logging for CV tasks (Dice scores, MOTA). Configures the Model Registry for versioning production CV models. |
| Data Scientist | Optional | Builds structured MLflow workflows for feature engineering, model selection, and hyperparameter optimization. Integrates with Optuna/Hyperopt for automated tuning with full trial logging. Manages the model lifecycle in the Model Registry, transitioning through staging and production stages. |
| LLM Engineer | Optional | Structures MLflow experiments for LLM evaluation pipelines, logging BLEU, ROUGE, and benchmark scores across prompt variations. Integrates with Hugging Face Trainer and DeepSpeed for automatic metric capture. Manages LLM versions in the Model Registry with training-data and quantization metadata. |
| ML Engineer | Required | Designs the MLflow workflow: experiment naming, run tags, artifact storage. Uses the Model Registry for versioning. Configures autologging for sklearn/PyTorch. Writes custom MLflow plugins. |
| MLOps Engineer | Optional | Configures an MLflow Tracking Server for the team: remote backend store on PostgreSQL, artifact store in S3. Implements automatic logging via mlflow.autolog for PyTorch/TensorFlow/XGBoost and configures custom metrics and tags for experiment filtering. Integrates MLflow into training pipelines for result reproducibility. |
| NLP Engineer | Required | Independently manages MLflow for NLP projects: experiment organization, model registry, artifact store. Configures automatic logging from training scripts and comparison views. |
Level 3

| Role | Required | Description |
|---|---|---|
| Computer Vision Engineer | Required | Architects MLflow tracking for large-scale CV research: distributed GPU training, experiment lineage for distillation chains, and automated model promotion on COCO/ImageNet benchmarks. Builds custom MLflow plugins for CV-specific artifacts such as annotation overlays and inference visualizations. |
| Data Scientist | Required | Designs MLflow tracking architectures for production ML, integrating with CI/CD and automated validation gates. Implements custom plugins for domain-specific metrics and artifact stores (S3, GCS). Establishes model registry governance with approval workflows and rollback procedures. |
| LLM Engineer | Required | Architects MLflow infrastructure for enterprise LLM development: tracking RLHF reward-model iterations, managing adapter registries, and automating evaluation suites (MT-Bench, HumanEval). Builds custom MLflow flavors for serving quantized LLMs (GPTQ, AWQ) with latency and throughput tracking. |
| ML Engineer | Required | Designs MLflow infrastructure for the team. Configures MLflow on Kubernetes. Integrates MLflow with CI/CD for automated model promotion. Creates custom model flavors. |
| MLOps Engineer | Required | Architects MLflow for production: a highly available Tracking Server with load balancing and artifact-storage optimization for large models. Implements custom MLflow plugins for integration with internal systems and configures Model Registry workflows with stage transitions and automated quality checks before production promotion. |
| NLP Engineer | Required | Designs MLflow infrastructure for the NLP team. Configures a remote tracking server, an S3 artifact store, and automated pipelines with MLflow Projects. Integrates with CI/CD for model deployment. |
Level 4

| Role | Required | Description |
|---|---|---|
| Computer Vision Engineer | Required | Defines MLflow strategy at the team/product level. Establishes standards and best practices. Conducts reviews. |
| Data Scientist | Required | Defines MLflow tracking standards across data science teams: naming conventions, metric taxonomies, and experiment organization. Drives the Model Registry as the single source of truth for governance. Reviews experiment design and tracking practices, ensuring reproducibility across ML projects. |
| LLM Engineer | Required | Sets MLflow strategy for LLM teams, standardizing tracking across fine-tuning, RLHF, and prompt engineering workflows. Establishes model registry policies for LLM versioning with adapter compatibility matrices. Reviews tracking practices for data lineage compliance and model card generation. |
| ML Engineer | Required | Defines the experiment tracking strategy for the organization. Evaluates MLflow vs W&B vs ClearML. Designs the model governance workflow. Standardizes the ML lifecycle. |
| MLOps Engineer | Required | Defines MLflow usage standards for the MLOps team: mandatory metrics and tags for each experiment, naming conventions, project structure. Implements best practices for experiment organization: parent/child runs for hyperparameter tuning, nested runs for ensemble models. Standardizes MLflow Projects for training reproducibility. |
| NLP Engineer | Required | Defines MLflow usage standards for the NLP team. Establishes naming conventions, tagging strategy, a model promotion workflow, and a dashboard for monitoring NLP experiments. |
Level 5 (Principal)

| Role | Required | Description |
|---|---|---|
| Computer Vision Engineer | Required | Defines MLflow strategy at the organizational level. Establishes enterprise approaches. Mentors leads and architects. |
| Data Scientist | Required | Defines enterprise MLflow strategy across business units, designing unified tracking infrastructure at scale. Architects integration with Databricks, Snowflake, and feature stores. Establishes company-wide model governance with compliance checks, audit trails, and cross-team sharing protocols. |
| LLM Engineer | Required | Defines organizational MLflow strategy for LLM initiatives, unifying fine-tuning, RAG evaluation, and agent benchmarking across teams. Establishes registry standards for LLM assets: base models, adapters, and prompt libraries. Drives adoption for LLM governance, cost tracking, and compliance. |
| ML Engineer | Required | Defines the ML platform strategy. Designs enterprise experiment tracking. Evaluates and integrates ML lifecycle tools. |
| MLOps Engineer | Required | Shapes the experiment tracking strategy at the organizational level: MLflow as a unified platform for all ML teams, integrated with corporate SSO and RBAC. Designs a multi-tenant MLflow architecture with data isolation between teams, defines retention policies for experiments and artifacts, and plans scaling for thousands of simultaneous experiments. |
| NLP Engineer | Required | Shapes enterprise MLflow strategy for the organizational NLP platform. Defines the multi-team setup, a centralized model registry, and experiment reproducibility standards. |