# Machine Learning & AI: Skill Profile

This skill profile defines expectations across roles and levels.
- Roles: 1 (where this skill appears)
- Levels: 5 (structured growth path)
- Mandatory requirements: 0 (the other 5 are optional)
## LLM & Generative AI

Machine Learning & AI, 2/22/2026
Choose your current level and compare expectations. The items below show what to cover to advance to the next level. The tables below show how skill depth grows from Junior (Level 1) to Principal (Level 5).
| Level | Role | Required | Description |
|---|---|---|---|
| 1 (Junior) | LLM Engineer | Optional | Knows LLM deployment basics: REST API endpoint, model loading, basic serving. Deploys a simple inference server on vLLM or text-generation-inference under mentor guidance. |
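At this level the key idea is the shape of an inference endpoint: accept a prompt over HTTP, run generation, return JSON. A minimal sketch using only the Python standard library is below; the `generate` function is a stub that echoes the prompt, standing in for a real model loaded via vLLM or text-generation-inference, and the `/v1/completions` path and response shape mimic the OpenAI-style API that vLLM exposes.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str, max_tokens: int = 16) -> str:
    # Stub "model": a real deployment would load an LLM (e.g. via vLLM)
    # and run actual generation; here we echo to show the endpoint shape.
    return f"echo: {prompt}"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        text = generate(req.get("prompt", ""), req.get("max_tokens", 16))
        body = json.dumps({"choices": [{"text": text}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging; production serving would emit
        # structured logs instead.
        pass

def serve(port: int = 0) -> HTTPServer:
    # port=0 binds an ephemeral port; the server runs in a daemon thread.
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice a junior engineer would not hand-roll this server; the point is to understand what `vllm serve <model>` provides before relying on it.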
| Level | Role | Required | Description |
|---|---|---|---|
| 2 | LLM Engineer | Optional | Independently deploys LLMs to production: configures vLLM with continuous batching, quantization (GPTQ/AWQ), and health checks. Implements monitoring of latency, throughput, and error rates. |
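The monitoring expectation at this level reduces to three numbers per window: latency percentiles, throughput, and error rate. A minimal sketch of computing them from per-request records is below; the `RequestRecord` name and the nearest-rank percentile method are illustrative choices, not a prescribed tool (production systems would typically use Prometheus histograms or similar).

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    latency_ms: float
    ok: bool

def percentile(sorted_vals, p):
    # Nearest-rank percentile over an already-sorted, non-empty list.
    k = max(0, min(len(sorted_vals) - 1,
                   round(p / 100 * (len(sorted_vals) - 1))))
    return sorted_vals[k]

def summarize(records, window_s):
    # Aggregate one monitoring window into the three headline metrics.
    lats = sorted(r.latency_ms for r in records)
    errors = sum(1 for r in records if not r.ok)
    return {
        "p50_ms": percentile(lats, 50),
        "p95_ms": percentile(lats, 95),
        "p99_ms": percentile(lats, 99),
        "error_rate": errors / len(records),
        "throughput_rps": len(records) / window_s,
    }
```

Tracking p95/p99 rather than the mean matters for LLM serving because continuous batching makes tail latency, not average latency, the user-visible cost.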
| Level | Role | Required | Description |
|---|---|---|---|
| 3 | LLM Engineer | Optional | Designs production LLM serving infrastructure: multi-model serving, A/B testing, canary deployments, auto-scaling. Optimizes latency (p50/p95/p99) and throughput under high load. |
| Level | Role | Required | Description |
|---|---|---|---|
| 4 | LLM Engineer | Optional | Defines the LLM deployment strategy for the team. Establishes SLAs for inference services, monitoring standards, and rollback and incident response processes for LLM production systems. |
| Level | Role | Required | Description |
|---|---|---|---|
| 5 (Principal) | LLM Engineer | Optional | Shapes the enterprise LLM serving platform. Defines approaches to multi-model inference at scale, cost optimization, capacity planning, and disaster recovery for critical LLM services. |
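Capacity planning at this level often starts from a simple sizing rule: replicas needed = peak traffic divided by usable per-replica throughput, with headroom for load spikes and a redundancy floor for failover. A minimal sketch is below; the 30% headroom and two-replica minimum are assumed policy values, and `per_replica_rps` would come from load-testing a single GPU replica.

```python
import math

def replicas_needed(peak_rps: float, per_replica_rps: float,
                    headroom: float = 0.3, min_replicas: int = 2) -> int:
    """GPU replicas required to serve peak_rps.

    headroom=0.3 keeps each replica at ~70% of its measured throughput so
    tail latency stays stable under spikes; min_replicas>=2 preserves
    redundancy for failover (both are assumed policy choices).
    """
    usable = per_replica_rps * (1 - headroom)
    return max(min_replicas, math.ceil(peak_rps / usable))
```

The same formula, multiplied by GPU cost per replica-hour, is the starting point for the cost-optimization and disaster-recovery (N+1 across zones) conversations this level owns.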