ITDS
This is a [Wrocław]-based hybrid opportunity – [3] days working from home per week.

As a [Data Platform Engineer] you will be working for our client – [a global quantitative & systematic investment manager using a scientific, tech-driven approach across all liquid asset classes]. You will be responsible for [building the tools, pipelines and observability that power data analysis, monitoring and system automation for low-latency trading at scale].

Your main responsibilities:

- Design and implement systems to collect, process and manage large-scale market and internal datasets
- Build and maintain performant, reliable data pipelines and services (Python, Airflow, AWS/S3)
- Develop monitoring, alerting and observability for data platforms (Grafana/Prometheus/ELK)
- Integrate data delivery and compute workflows into production alongside engineers and researchers
- Improve CI/CD and infrastructure-as-code to streamline deployments and operations
- Deepen expertise in distributed computing, storage architectures and performance engineering

You're ideal for the role if you have:

- 3–7 years' experience with strong Python and shell scripting
- A solid grasp of data engineering, ETL and time-series data (SQL/Pandas)
- Hands-on Linux, networking and performance-troubleshooting skills
- Familiarity with containers and orchestration; exposure to Airflow and distributed compute
- Experience with observability/metrics stacks (Grafana, ELK, Prometheus)
- Exposure to CI/CD and IaC; bonus: AWS at scale (e.g., ECS/EKS) and ClickHouse
- An ownership mindset, problem-solving drive and eagerness to work close to trading
| Published | a day ago |
| Expires | in 29 days |
| Contract type | Contract of mandate (umowa zlecenie) |
| Work mode | Hybrid |
| Source |