Skill profile

Apache Spark

PySpark, Spark SQL, DataFrames, partitioning, optimization, Spark on K8s

Domain: Data Engineering
Group: Batch Processing
Roles: 3 (roles where this skill appears)
Levels: 5 (a structured growth path)
Mandatory requirements: 13 (the other 2 are optional)
Last updated: 17/3/2026

How to use

Select your current level and compare the expectations.

What is expected at each level

The tables below show how depth grows from Junior to Principal.

Level 1 (Junior)

| Role | Required | Description |
| --- | --- | --- |
| Data Engineer | Mandatory | Understands Apache Spark fundamentals for data engineering: RDD/DataFrame APIs, basic transformations and actions, reading/writing Parquet/CSV/JSON. Follows team patterns for PySpark job structure, SparkSession configuration, and cluster resource allocation. |
| Data Scientist | Optional | Understands Apache Spark fundamentals for data science: Spark DataFrames for large-scale data analysis, basic Spark SQL queries, and MLlib for distributed model training. Follows team patterns for notebook-based Spark workflows and feature engineering at scale. |
| ML Engineer | Mandatory | Understands Apache Spark fundamentals for ML engineering: Spark MLlib pipelines, feature transformers, and distributed model training/inference. Follows team patterns for PySpark ML workflows, model serialization, and integration with MLflow tracking. |
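The cluster resource allocation expected at this level is usually set at submit time. A minimal sketch of a batch-job submission, assuming YARN; the script name and resource values are placeholders chosen for illustration, not a sizing recommendation:

```shell
# Hypothetical submission; my_job.py and all resource values are illustrative.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 4g \
  --executor-cores 2 \
  my_job.py
```

A team would typically standardize these flags (or the equivalent SparkSession settings) so junior engineers can follow the pattern without re-deriving the numbers.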
Level 2

| Role | Required | Description |
| --- | --- | --- |
| Data Engineer | Mandatory | Independently implements Spark data pipelines: optimizes shuffle operations and partitioning strategies, implements Structured Streaming for real-time ETL, manages Delta Lake tables with ACID transactions. Tunes Spark configurations for memory, parallelism, and cost efficiency. |
| Data Scientist | Optional | Independently uses Spark for large-scale analysis: writes optimized Spark SQL for complex aggregations, implements distributed feature engineering with window functions, and uses MLlib for hyperparameter tuning at scale. Manages Spark resource allocation for interactive analytics. |
| ML Engineer | Mandatory | Uses PySpark for large-scale feature engineering. Optimizes Spark jobs (partitioning, caching, broadcast joins). Uses Spark ML for distributed model training. |
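The partition tuning expected at this level often starts from a rule of thumb: size shuffle partitions so each lands near a fixed byte target (128 MiB is a common choice). A minimal sketch of that heuristic; the function name and the 128 MiB default are assumptions for illustration, not a Spark API:

```python
import math

def shuffle_partition_count(shuffle_bytes: int,
                            target_bytes: int = 128 * 1024 ** 2,
                            minimum: int = 1) -> int:
    """Rule-of-thumb partition count so each shuffle partition
    is roughly target_bytes. A heuristic, not an official formula."""
    return max(minimum, math.ceil(shuffle_bytes / target_bytes))

# Example: a ~10 GiB shuffle at a 128 MiB target.
print(shuffle_partition_count(10 * 1024 ** 3))  # → 80
```

The resulting count would then be applied before the wide transformation runs, e.g. via `spark.conf.set("spark.sql.shuffle.partitions", n)`.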
Level 3

| Role | Required | Description |
| --- | --- | --- |
| Data Engineer | Mandatory | Designs Spark-based data platform architecture: multi-tenant cluster management, cost-optimized workload scheduling with YARN/Kubernetes, and lakehouse architecture with Delta Lake/Iceberg. Implements data quality frameworks, CDC pipelines, and Spark application performance monitoring. |
| Data Scientist | Mandatory | Designs Spark-based analytical frameworks: custom MLlib transformers for domain-specific features, distributed experiment pipelines, and Spark integration with GPU-accelerated training (RAPIDS). Optimizes end-to-end ML workflows from data preparation to model serving at petabyte scale. |
| ML Engineer | Mandatory | Designs Spark-based ML pipelines for production. Optimizes Spark for ML workloads: memory tuning, shuffle optimization. Integrates Spark with the ML platform (MLflow, feature store). |
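Cost-optimized scheduling on Kubernetes, mentioned above, is typically expressed through dynamic allocation settings. A sketch of the relevant `spark-defaults.conf` entries; the bounds and image name are illustrative, not a recommendation:

```
# Illustrative values; real bounds depend on workload and cluster size.
spark.dynamicAllocation.enabled                 true
spark.dynamicAllocation.shuffleTracking.enabled true
spark.dynamicAllocation.minExecutors            1
spark.dynamicAllocation.maxExecutors            20
spark.kubernetes.container.image                my-registry/spark:latest
```

Shuffle tracking is the usual companion to dynamic allocation on K8s, since there is no external shuffle service to preserve shuffle files when executors are released.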
Level 4

| Role | Required | Description |
| --- | --- | --- |
| Data Engineer | Mandatory | Defines Spark standards: coding guidelines, job submission patterns, resource allocation policies. Chooses between PySpark and Spark SQL by scenario. Implements unit testing for Spark jobs with chispa. |
| Data Scientist | Mandatory | Defines data engineering strategy. Shapes the data platform. Coordinates data teams. Optimizes data mesh/data fabric approaches. |
| ML Engineer | Mandatory | Defines Spark strategy for ML data processing. Evaluates Spark vs alternatives (Dask, Ray) for ML workloads. Designs distributed computing architecture for ML. |
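The chispa-based unit testing mentioned for the Data Engineer could look like the sketch below. It assumes pyspark and chispa are installed and a local Java runtime is available; `add_flag` is a made-up transformation for illustration, not part of any standard:

```python
# Sketch only: requires a local Spark runtime (pyspark + Java) and chispa.
from pyspark.sql import SparkSession, functions as F
from chispa import assert_df_equality

def add_flag(df):
    # Hypothetical transformation under test.
    return df.withColumn("is_big", F.col("value") > 10)

def test_add_flag():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    source = spark.createDataFrame([(5,), (15,)], ["value"])
    expected = spark.createDataFrame([(5, False), (15, True)],
                                     ["value", "is_big"])
    assert_df_equality(add_flag(source), expected)
```

Keeping transformations as plain functions of DataFrame → DataFrame, as here, is what makes them testable without submitting a job to a cluster.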
Level 5 (Principal)

| Role | Required | Description |
| --- | --- | --- |
| Data Engineer | Mandatory | Designs the platform-wide Spark strategy: EMR vs Databricks vs self-hosted, cluster sizing, dynamic allocation. Defines when to use Spark vs DuckDB vs Polars. Plans the migration to Spark 4.0. |
| Data Scientist | Mandatory | Defines the organizational data strategy. Designs the enterprise data platform. Establishes a data governance framework. |
| ML Engineer | Mandatory | Defines the distributed processing strategy for enterprise ML. Designs the data processing layer for the ML platform. Evaluates novel distributed frameworks. |
