Domain: Machine Learning & AI
Category: LLM & Generative AI
Skill profile: LoRA, QLoRA, PEFT, RLHF, instruction tuning, evaluation
Roles: 2 (where this skill appears)
Levels: 5 (structured growth path)
Mandatory requirements: 5 (the other 5 are optional)
Date: 3/17/2026
Choose your current level and compare expectations; the items at each level show what to cover to advance to the next. The tables below show how skill depth grows from Junior to Principal.
**Level 1 — Junior**

| Role | Required | Description |
|---|---|---|
| Data Scientist | Optional | Understands the concept of LLM fine-tuning: full fine-tuning vs parameter-efficient methods. Uses the Hugging Face API to fine-tune small models on custom data. Prepares training data in the correct format for various LLM platforms. |
| LLM Engineer | Required | Knows LLM fine-tuning basics: full fine-tuning vs LoRA, instruction-tuning data format. Runs basic fine-tuning of a small model via the Hugging Face Trainer under mentor guidance. |
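The full fine-tuning vs parameter-efficient distinction at this level comes down to trainable parameter counts: LoRA freezes the base weight and trains only two low-rank factors. A minimal sketch with illustrative dimensions (not tied to any specific model):

```python
def trainable_params(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Trainable parameters for one weight matrix: full fine-tuning
    updates the whole d_in x d_out matrix, while LoRA trains only the
    low-rank factors A (d_in x rank) and B (rank x d_out)."""
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora

# Example: a 4096x4096 attention projection with LoRA rank 8.
full, lora = trainable_params(4096, 4096, 8)
print(full)   # 16777216 parameters for full fine-tuning
print(lora)   # 65536 parameters for LoRA (~0.4% of full)
```

This is why a junior engineer can run LoRA fine-tuning of a small model on a single GPU: only a fraction of a percent of the weights receive gradients and optimizer state.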
**Level 2**

| Role | Required | Description |
|---|---|---|
| Data Scientist | Optional | Independently conducts LLM fine-tuning using LoRA, QLoRA, and prefix-tuning. Configures training hyperparameters and monitors loss curves. Evaluates fine-tuned model quality through domain-specific benchmarks and human evaluation. |
| LLM Engineer | Required | Independently conducts LLM fine-tuning: LoRA/QLoRA, instruction dataset preparation, hyperparameter tuning. Monitors training via W&B and evaluates results on held-out datasets. |
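At this level it helps to know what a LoRA layer actually computes: y = xW₀ + (α/r)·xAB, with W₀ frozen. A dependency-free sketch (pure Python, toy shapes) showing why the standard initialization B = 0 leaves the model's output unchanged at the start of training:

```python
def matmul(X, Y):
    """Naive matrix product, adequate for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W0, A, B, alpha=16, r=2):
    """y = x @ W0 + (alpha / r) * x @ A @ B, with W0 frozen."""
    base = matmul(x, W0)
    delta = matmul(matmul(x, A), B)
    s = alpha / r
    return [[b + s * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]

# Toy shapes: x is 1x4, W0 is 4x4 (identity), A is 4x2 (rank 2), B is 2x4.
x = [[1.0, 2.0, 3.0, 4.0]]
W0 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
A = [[0.1] * 2 for _ in range(4)]
B_zero = [[0.0] * 4 for _ in range(2)]   # standard LoRA init: B = 0

# With B = 0 the adapter contributes nothing: output == x @ W0.
print(lora_forward(x, W0, A, B_zero))  # [[1.0, 2.0, 3.0, 4.0]]
```

The α/r scaling is also why LoRA rank and alpha are tuned together: changing r without adjusting α rescales the adapter's contribution.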
**Level 3**

| Role | Required | Description |
|---|---|---|
| Data Scientist | Optional | Designs fine-tuning pipelines for production LLM systems. Applies RLHF and DPO for model alignment. Optimizes training with DeepSpeed and FSDP. Conducts systematic evaluation via automated benchmarks and red-teaming. |
| LLM Engineer | Required | Designs production fine-tuning pipelines: data curation, multi-stage training (SFT → DPO), distributed fine-tuning. Optimizes LoRA rank, learning rate, and batch size for maximum quality. |
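The DPO stage of an SFT → DPO pipeline optimizes a simple pairwise objective: push the policy's log-probability margin on the chosen response above the reference model's margin. A sketch of the per-pair loss; the log-probability values are made-up numbers for illustration:

```python
import math

def dpo_loss(policy_chosen, policy_rejected,
             ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l))),
    where the arguments are sequence log-probabilities under the
    trainable policy and the frozen reference model."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy prefers the chosen response more strongly than the reference does:
good = dpo_loss(policy_chosen=-10.0, policy_rejected=-14.0,
                ref_chosen=-12.0, ref_rejected=-12.0)
# Policy prefers the rejected response: the loss is higher.
bad = dpo_loss(policy_chosen=-14.0, policy_rejected=-10.0,
               ref_chosen=-12.0, ref_rejected=-12.0)
print(good < bad)  # True
```

Note the role of beta: it controls how hard the loss pushes the policy away from the reference, which is why it is a first-class hyperparameter in DPO training configs.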
**Level 4**

| Role | Required | Description |
|---|---|---|
| Data Scientist | Optional | Defines the LLM fine-tuning strategy for the organization. Establishes data preparation, training, and evaluation standards for custom LLMs. Coordinates GPU infrastructure and budgets for LLM experiments. |
| LLM Engineer | Required | Defines the fine-tuning strategy for the LLM team. Establishes best practices for data preparation, training configuration, and evaluation. Coordinates fine-tuning experiments and the model-selection process. |
**Level 5 — Principal**

| Role | Required | Description |
|---|---|---|
| Data Scientist | Optional | Shapes custom LLM development strategy at the organizational level. Defines buy vs. build for LLMs and evaluates open-source vs. proprietary models. Influences the industry through publications and open-source contributions. |
| LLM Engineer | Required | Shapes the enterprise fine-tuning platform. Defines approaches to automated fine-tuning, model versioning, and A/B testing of fine-tuned models. Optimizes cost and speed of fine-tuning at scale. |