Model Monitoring

Domain: Machine Learning & AI / MLOps
Skill profile: data drift, concept drift, performance monitoring, alerting, retraining
Roles: 6 (roles in which this skill appears)
Levels: 5 (a structured growth path)
Mandatory requirements: 22 (the other 8 are optional)
17/3/2026
Select your current level and compare the expectations. The tables below show how expected depth grows from Junior to Principal.
Level 1 (Junior)

| Role | Mandatory | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Understands basic model monitoring concepts for computer vision pipelines. Tracks prediction accuracy, image quality drift, and inference latency using dashboards. Follows team guidelines for setting up alerts on object detection and classification models in staging environments. |
| Data Scientist | Optional | Understands basic model monitoring principles for ML experiments and deployed models. Tracks key metrics such as accuracy, precision, and recall using tools like MLflow or Weights & Biases. Follows team standards for logging predictions, detecting data drift, and reporting model degradation. |
| LLM Engineer | Optional | Understands basic monitoring concepts for large language models in production. Tracks response quality, token usage, latency, and hallucination rates using logging tools. Follows team practices for setting up alerts on LLM endpoint performance and prompt regression detection. |
| ML Engineer | Mandatory | Understands core model monitoring concepts: prediction quality, data drift, latency. Configures basic model metrics (accuracy, latency). Visualizes model performance. |
| MLOps Engineer | Optional | Understands basic ML model monitoring concepts: why tracking prediction quality in production matters, and what data drift and concept drift are. Can view dashboards with model metrics, read quality degradation alerts, and perform simple input data distribution checks using pandas and matplotlib. |
| NLP Engineer | Mandatory | Knows the basics of NLP model monitoring: quality metrics, data drift, concept drift. Sets up basic alerts for metric degradation of text classification and NER models in production. |
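The simple input distribution check mentioned for the MLOps Engineer can be sketched with pandas alone (a minimal illustration on synthetic data; the `distribution_summary` helper is hypothetical, and the resulting frame could equally be plotted with matplotlib histograms):

```python
import numpy as np
import pandas as pd

def distribution_summary(reference: pd.Series, current: pd.Series) -> pd.DataFrame:
    """Compare basic statistics of one feature between reference and production data."""
    stats = pd.DataFrame({
        "reference": reference.describe(),
        "current": current.describe(),
    })
    stats["abs_shift"] = (stats["current"] - stats["reference"]).abs()
    return stats

# Synthetic example: the production sample has a shifted mean
rng = np.random.default_rng(0)
ref = pd.Series(rng.normal(0.0, 1.0, 5_000), name="feature")
cur = pd.Series(rng.normal(0.5, 1.0, 5_000), name="feature")

summary = distribution_summary(ref, cur)
print(summary.loc[["mean", "std"]])
```

A large `abs_shift` on `mean` with a stable `std` is exactly the kind of signal a junior engineer would escalate per team guidelines.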
Level 2

| Role | Mandatory | Description |
|---|---|---|
| Computer Vision Engineer | Optional | Independently configures model monitoring for computer vision systems: tracks distribution shifts in input images, bounding box accuracy degradation, and segmentation quality. Sets up Evidently- or Prometheus-based dashboards for real-time detection of concept drift across visual inference pipelines. |
| Data Scientist | Optional | Independently implements model monitoring across the ML lifecycle. Configures data drift detection with Evidently or Great Expectations, sets up feature distribution tracking, and builds automated retraining triggers. Understands trade-offs between statistical tests for drift detection in tabular and time-series data. |
| LLM Engineer | Optional | Independently builds monitoring pipelines for LLM applications. Tracks semantic drift, response quality scores, token cost trends, and prompt effectiveness using LangSmith or custom evaluation harnesses. Configures alerting on latency spikes, safety filter triggers, and output format violations. |
| ML Engineer | Mandatory | Configures data drift detection (Evidently, NannyML). Monitors feature distributions. Configures alerting on model degradation. Implements automated retraining triggers. |
| MLOps Engineer | Optional | Configures ML model monitoring in production: collecting prediction logs, calculating quality metrics on new data, and detecting data drift via statistical tests (KS test, PSI). Integrates Evidently AI or whylogs for automated distribution monitoring, configures Grafana dashboards with model metrics, and sets up accuracy/F1 degradation alerts. |
| NLP Engineer | Mandatory | Independently configures NLP model monitoring: data drift detection for text data, performance tracking, and error analysis. Builds dashboards for tracking NLP service quality. |
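The PSI statistic referenced above can be sketched in plain NumPy (a minimal illustration; the bin count and the rule of thumb that PSI below ~0.1 means stable and above ~0.25 means significant drift are common conventions, not fixed standards):

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference and a current sample of one numeric feature.

    Bins come from reference quantiles; production values outside the reference
    range are clipped into the outer bins, and a small epsilon avoids log(0).
    """
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    eps = 1e-6
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)
no_drift = rng.normal(0.0, 1.0, 10_000)
drifted = rng.normal(1.0, 1.2, 10_000)

print(round(population_stability_index(baseline, no_drift), 3))  # close to 0
print(round(population_stability_index(baseline, drifted), 3))   # well above 0.25
```

Tools like Evidently compute this (and KS tests) out of the box; the value of writing it once by hand is understanding what the alert threshold actually measures.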
Level 3

| Role | Mandatory | Description |
|---|---|---|
| Computer Vision Engineer | Mandatory | Designs production-grade monitoring systems for computer vision at scale. Implements automated detection of dataset shift, label quality degradation, and model staleness across multi-model serving architectures. Optimizes monitoring overhead for high-throughput video and image processing pipelines. Mentors the team on SLO-driven alerting strategies. |
| Data Scientist | Mandatory | Designs end-to-end model monitoring architectures for production ML systems. Implements advanced drift detection combining statistical tests, model performance proxies, and business KPI correlation. Builds automated feedback loops from monitoring signals to retraining pipelines. Mentors the team on observability best practices and incident response. |
| LLM Engineer | Mandatory | Designs comprehensive monitoring platforms for LLM systems in production. Implements automated evaluation pipelines with LLM-as-judge, human feedback loops, and A/B test instrumentation. Builds cost optimization dashboards tracking token usage across model versions and prompt variants. Mentors the team on LLMOps observability patterns. |
| ML Engineer | Mandatory | Designs the model monitoring architecture. Configures custom monitoring for specific ML tasks. Integrates monitoring with ML pipelines for closed-loop retraining. |
| MLOps Engineer | Mandatory | Architects the ML model monitoring system: real-time drift detection via streaming pipelines, automatic retraining triggers on metric degradation. Implements monitoring for complex scenarios such as multi-model pipelines, concept drift with delayed ground truth, and fairness and bias monitoring. Configures A/B testing with automatic decisions based on statistical significance. |
| NLP Engineer | Mandatory | Designs the monitoring system for a production NLP platform. Implements automatic degradation detection, root cause analysis, and automated remediation for NLP models. |
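The closed-loop retraining triggers described at this level can be sketched as a rolling-accuracy check (the `DegradationTrigger` class and its thresholds are hypothetical; a production system would also handle delayed ground truth, debounce alerts, and persist state):

```python
from collections import deque
from typing import Callable

class DegradationTrigger:
    """Fires a callback (e.g. a retraining job launcher) when rolling accuracy
    over the last `window` labeled predictions drops below `threshold`."""

    def __init__(self, threshold: float, window: int, on_degradation: Callable[[float], None]):
        self.threshold = threshold
        self.window = deque(maxlen=window)
        self.on_degradation = on_degradation
        self.triggered = False

    def observe(self, prediction, ground_truth) -> None:
        self.window.append(prediction == ground_truth)
        if len(self.window) == self.window.maxlen:
            accuracy = sum(self.window) / len(self.window)
            if accuracy < self.threshold and not self.triggered:
                self.triggered = True  # fire once until reset after retraining
                self.on_degradation(accuracy)

events = []
trigger = DegradationTrigger(threshold=0.9, window=100, on_degradation=events.append)

for i in range(100):
    trigger.observe(1, 1)                   # healthy traffic: 100% accuracy
for i in range(100):
    trigger.observe(1, 1 if i % 2 else 0)   # degraded traffic: ~50% accuracy

print(events)  # → [0.89]
```

In a real pipeline `on_degradation` would enqueue a retraining run and page the on-call engineer rather than append to a list.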
Level 4

| Role | Mandatory | Description |
|---|---|---|
| Computer Vision Engineer | Mandatory | Defines the model monitoring strategy at the team/product level. Establishes standards and best practices. Conducts reviews. |
| Data Scientist | Mandatory | Defines model monitoring strategy at the team and product level. Establishes standards for drift detection thresholds, alerting SLOs, and incident escalation across deployed models. Conducts reviews of monitoring coverage and drives adoption of observability tooling such as Evidently or Arize across teams. |
| LLM Engineer | Mandatory | Defines the model monitoring strategy for LLM products at the team level. Establishes evaluation frameworks combining automated metrics, human review workflows, and safety monitoring. Sets standards for cost tracking, latency budgets, and quality gates across prompt versions. Conducts reviews of monitoring coverage for all LLM-powered features. |
| ML Engineer | Mandatory | Defines the model monitoring strategy. Standardizes monitoring practices. Designs the model observability platform. |
| MLOps Engineer | Mandatory | Defines model monitoring standards for the MLOps team: mandatory metrics for each model type, prediction quality SLAs, and drift response procedures. Builds an ML observability culture: stakeholder dashboards, automated model health reports, and runbooks for quality degradation incidents. |
| NLP Engineer | Mandatory | Defines NLP model monitoring standards for the team. Establishes SLOs/SLIs for NLP services, incident response processes, and guidelines for interpreting drift signals. |
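The SLO/SLI standards at this level can be made concrete with a simple error-budget calculation (a minimal sketch; the 99.9% target, traffic volume, and failure count are hypothetical):

```python
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget left for a given SLO target (e.g. 0.999)."""
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / allowed_failures

# Hypothetical month of traffic against a 99.9% prediction-quality SLO:
# 1,000 failures allowed, 400 consumed, so 60% of the budget remains.
remaining = error_budget_remaining(slo_target=0.999, total_requests=1_000_000, failed_requests=400)
print(round(remaining, 2))  # → 0.6
```

Standards at this level typically state what counts as a "failed" prediction per model type and what happens when the remaining budget crosses zero (e.g. freezing model rollouts until quality recovers).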
Level 5 (Principal)

| Role | Mandatory | Description |
|---|---|---|
| Computer Vision Engineer | Mandatory | Defines the model monitoring strategy at the organizational level. Establishes enterprise approaches. Mentors leads and architects. |
| Data Scientist | Mandatory | Defines model monitoring strategy at the organizational level across ML platforms and business units. Establishes enterprise standards for model governance, drift detection, and automated remediation. Drives centralized monitoring infrastructure integrating with MLOps and DataOps. Mentors leads on building monitoring cultures. |
| LLM Engineer | Mandatory | Defines the organization-wide model monitoring strategy for LLM and generative AI systems. Establishes enterprise standards for evaluation benchmarks, safety monitoring, and cost governance. Drives unified monitoring platforms across teams using different LLM providers. Mentors leads on building robust LLMOps monitoring practices. |
| ML Engineer | Mandatory | Defines the enterprise model observability strategy. Designs the ML observability platform. Evaluates monitoring technologies. |
| MLOps Engineer | Mandatory | Shapes the ML model monitoring strategy at the organizational level: a unified observability platform for all production models, plus SLA and SLO standards. Designs centralized monitoring architecture for hundreds of models, defines automatic retraining and rollback policies, and implements ML governance with bias, fairness, and regulatory compliance tracking. |
| NLP Engineer | Mandatory | Shapes the enterprise NLP model monitoring strategy. Defines observability standards, degradation governance, and automated model lifecycle management at the organizational level. |