Data Engineer with Hadoop

Antal Sp. z o.o.

Kraków, Kobierzyńska +1 more
180 - 220 PLN
hybrid
b2b
full time
Jenkins
Kafka
Big Data
Hadoop
Ansible
CI/CD
Linux
Scala
Hive
Apache Spark
HDFS
BigQuery
Dataflow
Dataproc
SQL

Job description

Data Engineer with Hadoop
Location: Cracow (on-site 6 days per month)

About the Role: We are currently looking for a Data Engineer with Hadoop to join a dynamic Data Platform team. The role offers the opportunity to work on large-scale, global data solutions designed to enable innovation and improve data accessibility across business units. You'll contribute to the modernization and automation of a hybrid platform that spans on-premises and multi-cloud environments (GCP and private cloud).

This role focuses on enhancing platform resilience, building automation tools, and improving developer experience for data engineering teams. It involves both back-end and front-end work, including integration with CI/CD tools, service management systems, and internal applications.

Key Responsibilities:

  • Develop automation tools and integrate existing solutions within a complex platform ecosystem

  • Provide technical support and design for Hadoop Big Data platforms (Cloudera preferred; a minimal batch-job sketch follows this list)

  • Manage user access and security (Kerberos, Ranger, Knox, TLS, etc.)

  • Implement and maintain CI/CD pipelines using Jenkins and Ansible

  • Perform capacity planning, performance tuning, and system monitoring

  • Collaborate with architects and developers to design scalable and resilient solutions

  • Deliver operational support and improve engineering tooling for platform management

  • Analyze existing processes and design improvements to reduce complexity and manual work
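
To make the platform work above concrete, here is a minimal Scala/Spark sketch of the kind of batch job such a Hadoop platform serves: reading a Hive table and compacting it onto HDFS. It assumes a Cloudera-style cluster with Hive support enabled; the database, table, and output path are hypothetical placeholders, not details from this posting.

  import org.apache.spark.sql.SparkSession

  object DailyCompaction {
    def main(args: Array[String]): Unit = {
      // enableHiveSupport lets the session read managed Hive tables on the cluster.
      val spark = SparkSession.builder()
        .appName("daily-compaction")
        .enableHiveSupport()
        .getOrCreate()

      // Hypothetical database and table; real names depend on the platform.
      val events = spark.table("analytics.events")
        .where("event_date = date_sub(current_date(), 1)")

      // Coalesce many small files into a few larger ones before writing to HDFS.
      events.coalesce(16)
        .write
        .mode("overwrite")
        .parquet("hdfs:///data/warehouse/events_compacted")

      spark.stop()
    }
  }

On a Kerberized cluster such as the one described above, a job like this would typically run under a keytab-authenticated service account.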

Challenges You’ll Tackle:

  • Building scalable automation in a diverse ecosystem of tools and frameworks

  • Enhancing service resilience and reducing operational toil

  • Supporting the adoption of AI agents and real-time data capabilities (a streaming sketch follows this list)

  • Integrating with corporate identity, CI/CD, and service management tools

  • Collaborating with cross-functional teams in a global environment
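
As a hedged illustration of the real-time capabilities mentioned above, the sketch below uses Spark Structured Streaming (Scala) to read a Kafka topic and land it on HDFS. The broker address, topic, and paths are placeholders, and the job assumes the spark-sql-kafka connector is available on the classpath.

  import org.apache.spark.sql.SparkSession

  object KafkaIngest {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("kafka-ingest")
        .getOrCreate()

      // Placeholder broker and topic; real values come from the platform.
      val stream = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "events")
        .load()

      // Kafka delivers key/value as binary; cast the payload before processing.
      val payloads = stream.selectExpr("CAST(value AS STRING) AS payload")

      // Checkpointing gives the file sink exactly-once output semantics.
      payloads.writeStream
        .format("parquet")
        .option("path", "hdfs:///data/streams/events")
        .option("checkpointLocation", "hdfs:///checkpoints/events")
        .start()
        .awaitTermination()
    }
  }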

Required Skills & Experience:

  • Minimum 5 years of experience in engineering Big Data environments (on-prem or cloud)

  • Strong understanding of Hadoop ecosystem: Hive, Spark, HDFS, Kafka, YARN, Zookeeper

  • Hands-on experience with Cloudera distribution setup, upgrades, and performance tuning

  • Proven experience with scripting (Shell, Linux utilities) and Hadoop system management

  • Knowledge of security protocols: Apache Ranger, Kerberos, Knox, TLS, encryption

  • Experience in large-scale data processing and optimizing Apache Spark jobs (see the tuning sketch after this list)

  • Familiarity with CI/CD tools like Jenkins and Ansible for infrastructure automation

  • Experience working in Agile or hybrid development environments (e.g. Kanban)

  • Ability to work independently and collaboratively in globally distributed teams
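
On the Spark optimization point, one standard tuning pattern is to broadcast a small dimension table so the join runs map-side instead of shuffling the large table across the cluster. The sketch below assumes hypothetical warehouse tables and is illustrative only:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.broadcast

  object JoinTuning {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("join-tuning")
        // Shuffle parallelism is workload-specific; 200 is only Spark's default.
        .config("spark.sql.shuffle.partitions", "400")
        .enableHiveSupport()
        .getOrCreate()

      val facts = spark.table("warehouse.transactions") // large fact table (hypothetical)
      val dims  = spark.table("warehouse.merchants")    // small dimension table (hypothetical)

      // Broadcasting the small side avoids shuffling the fact table.
      val joined = facts.join(broadcast(dims), Seq("merchant_id"))

      joined.write.mode("overwrite").parquet("hdfs:///data/marts/tx_enriched")
      spark.stop()
    }
  }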

To learn more about Antal, please visit www.antal.pl
