Advanced Distributed Data Platform Engineer

Advanced Distributed Data Platform Engineer (Remote)

Relativity

Gdańsk +6 more locations
146,000 - 218,000 PLN / year
PERMANENT, B2B

Summary

Advanced Data Platform Engineer – design & implement cloud‑native data pipelines, lakehouse (Delta Lake/Iceberg), Spark, Python, SQL; build dbt‑based analytics, optimize Databricks/Snowflake, ensure governance & performance. Requires strong Python, SQL, Spark, lakehouse expertise. Benefits: health/dental/vision, parental leave, flexible remote work, two week‑long breaks, long‑term incentives.

Keywords

Advanced Data Platform Engineer, cloud‑native, data pipelines, lakehouse, Delta Lake, Iceberg, Apache Spark, Python, SQL, dbt, Databricks, Snowflake, governance, performance optimization

Benefits

  • Comprehensive health, dental, and vision plans
  • Parental leave for primary and secondary caregivers
  • Flexible work arrangements
  • Two week‑long company breaks per year
  • Additional time off
  • Long‑term incentive program
  • Training investment program

Job Description

We are building a specialized team focused on enabling advanced analytics and reporting capabilities across our internal data ecosystem. As an Advanced Data Platform Engineer, you will design and implement scalable, cloud-native data platforms that integrate modern lakehouse technologies, distributed compute frameworks, and cloud-native services to support diverse analytical use cases and enterprise-scale insights.

You will work on systems leveraging Apache Spark, Delta Lake, and Iceberg to process large-scale datasets efficiently, while enabling internal users to build reporting and analytics through curated data models, optimized query performance, and reliable data pipelines. This role emphasizes technical depth, performance optimization, and governance best practices to deliver secure and reliable solutions.

Relativity's scale and breadth provide significant opportunities for rich data exploration and insights. Our data infrastructure ensures that vast datasets remain accessible, secure, and compliant, while enabling innovation across the organization. We are making substantial investments in data lake technology and distributed systems to support future growth and advanced analytics.

Your Role in Action

  • Design and implement complex data pipelines and distributed systems using Spark and Python.
  • Apply software engineering best practices: clean code, modular design, CI/CD, automated testing, and code reviews.
  • Develop and maintain lakehouse capabilities with Delta Lake and Iceberg, ensuring reliability and performance.
  • Enable analytics workflows by integrating dbt for SQL transformations running on Spark.
  • Collaborate with internal teams to deliver curated datasets and self-service analytics capabilities.
  • Optimize data warehousing solutions such as Databricks and Snowflake for performance and scalability.
  • Implement observability and governance frameworks, including data lineage and compliance controls.
  • Drive performance tuning, scalability strategies, and cost optimization across Spark jobs and cloud-native environments.
  • Participate in on-call rotations as part of a team responsibility.

Core Requirements

  • Strong programming skills in Python and SQL; experience with Apache Spark for distributed data processing.
  • Expertise in Delta Lake and/or Apache Iceberg for lakehouse architecture.
  • Familiarity with dbt, Databricks, and Snowflake for analytics workflows.
  • Solid understanding of software engineering principles, CI/CD, and automated testing.
  • Familiarity with Kubernetes, Docker, and infrastructure-as-code tools.
  • Understanding of performance tuning, scalability strategies, and cost optimization for large-scale systems.

Nice to Have

  • Exposure to event-driven architectures and advanced analytics platforms.
  • Experience enabling self-service analytics for internal stakeholders.
  • Experience in any of the following languages: Java, Scala, Rust.

The short sketches below illustrate, at a small scale, the kind of pipeline, modeling, and tuning work described above.
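To make the pipeline and lakehouse responsibilities more concrete, here is a minimal PySpark and Delta Lake sketch. The paths, table names, and event schema are hypothetical and not taken from the posting; it assumes PySpark with the delta-spark package installed.

    # Minimal sketch: clean raw events and publish a partitioned Delta table.
    # Paths, table names, and columns are hypothetical; assumes PySpark plus
    # the delta-spark package providing the Delta Lake extension classes.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("events-raw-to-curated")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Read raw JSON events from a (hypothetical) landing zone.
    raw = spark.read.json("s3://example-landing/events/")

    # Basic cleaning: derive a partition column, drop duplicates and null types.
    curated = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
           .filter(F.col("event_type").isNotNull())
    )

    # Write a partitioned Delta table that downstream analytics can query.
    (
        curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.events_curated")
    )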
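The dbt item above usually means curated SQL models that dbt compiles and runs on Spark (for example through the dbt-spark or dbt-databricks adapters). A rough, hypothetical equivalent of one such model, expressed directly through Spark SQL:

    # Rough stand-in for a curated dbt model, run here directly through Spark SQL.
    # In practice dbt would own this SELECT as a model file and materialize it;
    # the table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # assumes a Delta-enabled session

    daily_activity_sql = """
        SELECT
            event_date,
            event_type,
            COUNT(*)                AS event_count,
            COUNT(DISTINCT user_id) AS active_users
        FROM analytics.events_curated
        GROUP BY event_date, event_type
    """

    # Materialize the aggregate as a Delta table for reporting tools to query.
    (
        spark.sql(daily_activity_sql)
        .write.format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.daily_activity")
    )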
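For the performance tuning and cost optimization items, routine lakehouse table maintenance is one concrete example. A small sketch, assuming Delta Lake 2.0+ or Databricks (where OPTIMIZE and ZORDER BY are available) and the hypothetical table from the sketches above:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # assumes a Delta-enabled session

    # Compact small files and co-locate rows by a frequent filter column to
    # reduce the number of files scanned per query.
    spark.sql("OPTIMIZE analytics.events_curated ZORDER BY (event_type)")

    # Remove data files no longer referenced by the table (default 7-day retention).
    spark.sql("VACUUM analytics.events_curated")

    # Inspect the physical layout to confirm the compaction helped.
    detail = spark.sql("DESCRIBE DETAIL analytics.events_curated")
    detail.select("numFiles", "sizeInBytes").show()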
Relativity is a diverse workplace with different skills and life experiences, and we love and celebrate those differences. We believe that employees are happiest when they're empowered to be their full, authentic selves, regardless of how you identify.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.

Relativity is committed to competitive, fair, and equitable compensation practices. This position is eligible for total compensation which includes a competitive base salary, an annual performance bonus, and long-term incentives. The expected salary range for this role is between 146,000 and 218,000 PLN. The final offered salary will be based on several factors, including but not limited to the candidate's depth of experience, skill set, qualifications, and internal pay equity. Hiring at the top end of the range would not be typical, to allow for future meaningful salary growth in this position.

Suggested Skills: Engineering Principle, Hardware Integration, Innovation, Problem Solving, Process Improvements, Quality Assurance (QA), Research and Development, System Designs, Technical Documents, Troubleshooting

Published: 20 days ago
Expires: in 10 days
Contract type: PERMANENT, B2B
