DHL

Unfortunately, this job is no longer active

Original vacancy text

Senior Open Source Engineer (m/f/x) Data Lake

BONN, PERMANENT AND FULL TIME

About the role:
Would you like to be part of DHL IT Services? Our Data Lake Platform team is looking for a Senior DevOps Engineer (m/f/x) star! You will develop and operate Elasticsearch clusters and other Open Source services as part of the DHL Data Lake ecosystem, which sits at the heart of DHL's digitalization effort. You will join a platform DevOps team on a mission to catapult DHL into a data-driven enterprise. Apply and be part of one of the most impactful and upstream IT teams in DHL.

Your work:
As part of a DevOps product team, you will take end-to-end care of Elasticsearch (OpenSearch), Airflow, Prometheus/Grafana and other Open Source technologies that are part of the Data Lake ecosystem. This includes operating, scaling, supporting and engineering challenges:
• Development: further engineer and automate the platform across technologies and infrastructures, with a strong focus on network, servers and monitoring
• Harden & Scale: grow and stabilize the platform to meet rapidly growing demand and load
• Operations: oversee daily operations, maintenance, monitoring and capacity for a 24x7, business-critical on-prem platform
• Support: help, troubleshoot and consult on use cases, solve incidents, coordinate changes

The Team:
Work in a highly skilled, highly motivated international team of unique professionals. Learn very fast, have a high impact and enjoy a start-up feeling from day one. We follow Scrum principles, track work in Jira, communicate via Slack and live a high-trust DevOps culture.
Our Tech:
• Elasticsearch (OpenSearch distribution) with multiple clusters running on Docker Swarm on-prem, as well as clusters on GCP
• Airflow to orchestrate data ingestion & processing
• Prometheus & Grafana for infrastructure & application monitoring
• Rancher Kubernetes as shared container infrastructure, soon also for OpenSearch
• Jenkins, Ansible and Git as the automation engine for our GitOps approach
• MapR (HPE) Hadoop cluster as Big Data storage with Hive, Spark, Drill and Hue ecosystem services
• Microsoft Azure and Google Cloud (GCP) for hybrid scenarios
• Red Hat Enterprise Linux as the OS layer (virtual & physical machines)
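To give a flavor of how the monitoring pieces of this stack fit together, a minimal Prometheus scrape configuration for an Elasticsearch/OpenSearch cluster might look like the sketch below. The job name, hostnames and port are illustrative assumptions (the port 9114 is the default of the community `elasticsearch_exporter`), not details taken from this posting:

```yaml
# Hypothetical Prometheus scrape config for Elasticsearch/OpenSearch metrics.
# Assumes an elasticsearch_exporter sidecar is running next to each cluster node.
scrape_configs:
  - job_name: "elasticsearch"          # illustrative job name
    scrape_interval: 30s
    static_configs:
      - targets:
          - "es-exporter-1.internal:9114"   # hypothetical exporter endpoints
          - "es-exporter-2.internal:9114"
        labels:
          platform: "data-lake"        # example label for Grafana dashboards
```

Grafana would then query these metrics (e.g. cluster health, JVM heap, shard counts) from Prometheus as its data source.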

You should have:
• Master's degree in computer science or a related field
• Very good experience with network, security and infrastructure administration
• Deep knowledge of Linux, preferably Red Hat Enterprise Linux
• Practical knowledge of infrastructure automation, preferably using Ansible, Jenkins and Git
• Working experience with Docker / Docker Swarm / Kubernetes
• Hands-on experience working within Open Source communities, ideally as a contributor
• Experience with technologies like Elasticsearch (OpenSearch), Airflow, Prometheus/Grafana, ScyllaDB, Kudu & Impala is a plus but not mandatory; you will get trained
• Some knowledge of GCP / Microsoft Azure
• Hands-on experience running 24x7 critical, high-load, large-scale production platforms is a plus
• Fair knowledge of Hadoop (nice to have)
• Ability to use English in daily communication
• Readiness to learn extremely fast in a very agile, high-pace environment with a high level of autonomy
We are looking primarily for the right attitude, character and drive; technologies can be picked up quickly.

What we offer:
• A young team of motivated professionals from across the globe, always ready to support you
• A great opportunity to advance your career to a global level, with lots of learning included
• Professional and industry-recognized technical trainings and certifications
• Flexible working time and the possibility to work from home
• Free parking spaces and a job ticket
• Company sponsorship of various sports and social clubs
• A huge number of internal growth opportunities

For more details feel free to contact Tomek Ziarko: +49 6151 908 4447.
Hours:
Full time
Type of job offer:
Internal
