Margo
Join our team working on advanced Big Data solutions in an enterprise environment. We are looking for someone to help develop a platform based on Spark and Hadoop as part of one of our key projects.

Cooperation model: hybrid (remote + on-site in the Warsaw office, 2 times per month)

Required Skills (must have):
- Minimum 2 years of professional experience with Spark
- Technical background (IT/Engineering studies)
- Solid understanding of Big Data concepts, Data Warehousing, and Data Management
- Experience with Hadoop platforms (Cloudera/Hortonworks)
- Knowledge of engineering best practices for large-scale data processing: design standards, data modeling techniques, coding, documentation, testing, and deployment
- Hands-on experience with data formats: JSON, Parquet, ORC, Avro
- Understanding of database types and usage scenarios (Hive, Kudu, HBase, Iceberg, etc.)
- Advanced SQL skills
- Experience integrating data from multiple sources
- Familiarity with project/application build tools (e.g., Maven)

Nice to Have:
- Practical knowledge of Agile methodologies and tools (Jira, Confluence, Kanban, Scrum)
- Experience with Kubeflow
- Knowledge of streaming technologies such as Kafka and Apache NiFi
- Familiarity with CI/CD automation processes and tools

Margo Offers:
- Salary range per month or rate per day
- Permanent contract or B2B cooperation
- Collaboration with highly skilled specialists in a large enterprise environment
- Co-financing for training, certification exams, and postgraduate studies
- Internal training and the opportunity to use our know-how
- Excellent working atmosphere and integration events
| Published | 1 day ago |
| Expires | in 29 days |
| Contract type | B2B |
| Work mode | Hybrid |