Senior Data Engineer with Databricks

emagine Polska

Poland (Remote)
💼 B2B
Senior Data Engineer
📊 Data Architect
📊 Databricks
Lakehouse
Spark
ETL/ELT
🐍 Python
SQL
Scala
☁️ Azure
☁️ AWS
GCP
CI/CD
Terraform
Kafka
🤖 Airbyte
Unity Catalog
🌐 Remote

Summary

Senior Data Engineer / Data Architect: designing data architecture in Databricks Lakehouse, building batch and streaming pipelines (Spark, PySpark, SQL, Scala), implementing governance (Unity Catalog), integrating data sources (Airbyte, Kafka), and mentoring the team. Requirements: expertise in Databricks, Spark, data modelling, cloud platforms (Azure/GCP/AWS), CI/CD, and Terraform. Remote work, B2B contract, full time.

Keywords

Senior Data Engineer, Data Architect, Databricks, Lakehouse, Spark, ETL/ELT, Python, SQL, Scala, Azure, AWS, GCP, CI/CD, Terraform, Kafka, Airbyte, Unity Catalog, remote, B2B

Benefits

  • No benefits listed

Job description

We are looking for an experienced Senior Data Engineer / Data Architect with a strong background in designing and implementing modern data platforms using Databricks Lakehouse, Data Fabric, and large-scale distributed data systems. The ideal candidate has extensive hands-on experience with enterprise-grade data environments, advanced data modelling, scalable ETL/ELT pipelines, and cloud-native architectures.

Main Responsibilities

  • Design and develop end-to-end data architectures based on Databricks Lakehouse and Data Fabric principles.
  • Build scalable batch and streaming data pipelines using Spark (Structured Streaming, PySpark, SQL, Scala).
  • Implement the medallion architecture (Bronze/Silver/Gold) and optimize compute workloads using Delta Lake, Z-Ordering, cluster tuning, and performance best practices.
  • Define and implement data governance, lineage, and access control using Unity Catalog, RBAC, and enterprise security standards.
  • Integrate diverse data sources using Airbyte, Kafka, REST APIs, and CDC frameworks (e.g., Debezium).
  • Collaborate with stakeholders to translate business requirements into high-quality data products.
  • Drive adoption of data engineering best practices, coding standards, CI/CD automation, and data quality frameworks (e.g., Great Expectations).
  • Mentor team members and contribute to architectural decisions, roadmaps, and long-term platform strategy.

Key Requirements

  • Strong expertise in Databricks (clusters, workflows, DLT, SQL, notebooks, Unity Catalog).
  • Hands-on experience with Spark and distributed processing at scale.
  • Deep understanding of modern data architectures: Lakehouse, Data Fabric, Data Mesh, event-driven workflows.
  • Proficiency in building ETL/ELT pipelines using Python, SQL, and/or Scala.
  • Knowledge of data modelling, metadata management, data cataloging, and domain-oriented design.
  • Experience with cloud platforms (Azure, GCP, or AWS) and object storage systems.
  • Familiarity with DevOps practices, Git-based workflows, CI/CD, Infrastructure as Code (Terraform), and data testing.
  • Strong analytical and problem-solving skills, with the ability to operate in complex enterprise environments.

Nice to Have

  • Experience with streaming platforms: Kafka, Pub/Sub, Event Hubs.
  • Exposure to dbt for analytics engineering.
  • Knowledge of MLOps concepts or integration with ML pipelines.
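The medallion (Bronze/Silver/Gold) pattern mentioned in the responsibilities can be sketched with a toy bronze-to-silver cleaning step. This is an illustrative sketch in plain Python with hypothetical record shapes, not part of the posting; a real Databricks pipeline would express the same deduplication and quality gates with PySpark DataFrames and Delta Lake tables.

```python
# Illustrative bronze -> silver step in the medallion pattern, sketched
# with plain Python records. Field names ("id", "amount") are
# hypothetical; a production pipeline would use PySpark + Delta Lake.

def bronze_to_silver(bronze_records):
    """Drop records failing basic quality checks, then deduplicate."""
    seen = set()
    silver = []
    for rec in bronze_records:
        # Quality gate: require a non-null id and a positive amount.
        if rec.get("id") is None or rec.get("amount", 0) <= 0:
            continue
        # Deduplicate on the business key, keeping the first arrival.
        if rec["id"] in seen:
            continue
        seen.add(rec["id"])
        silver.append(rec)
    return silver

bronze = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 10.0},    # duplicate of the first record
    {"id": None, "amount": 5.0},  # fails the quality gate (null id)
    {"id": 2, "amount": -3.0},    # fails the quality gate (non-positive)
    {"id": 3, "amount": 7.5},
]
print(bronze_to_silver(bronze))  # -> [{'id': 1, 'amount': 10.0}, {'id': 3, 'amount': 7.5}]
```

The same idea (filter invalid rows, deduplicate on a business key, write to a curated table) is what Delta Lake `MERGE` statements and data-quality frameworks such as Great Expectations automate at scale.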


Published: 27 days ago
Expires: in 2 months
Contract type: B2B
