As a DevOps Engineer, you will be responsible for the design, development, implementation, maintenance, and automation of (Big) Data solutions, architectures, platforms, and services. You will join a team of highly skilled specialists in the Data and Technology domain (data scientists, developers, analysts, and product owners), and together you will ensure the organisation succeeds in its transition to a more data-driven business.
Among your responsibilities are:
- Manage the automation of the (data) infrastructure and initiate the development of new functionality
- Implement and administer the Hadoop infrastructure
- Maintain the Hadoop cluster
- Develop and maintain infrastructure monitoring software
The DevOps Engineer we are looking for is, above all, a natural innovator: proactive and pragmatic. In addition, you should have:
- At least 5 years of relevant experience in (leading) DevOps roles;
- Knowledge of and experience with Big Data, specifically Hadoop (clustering, HDFS, etc.);
- Unix/Linux experience;
- Experience with Open Source technologies, OpenStack and AWS;
- Experience designing and working with APIs;
- Experience in Agile environments and with continuous integration and deployment;
- Fluency in English, both written and oral.
Big Data, Hadoop, open source, AWS. Are these the kind of terms that make your heart skip a beat? And do you want to play a vital part in ensuring that new and innovative (Big) Data concepts are created, maintained, and automated in the best possible way? Then read on!
Our client, a leading global player in integrated telecom services, is looking for a passionate and driven DevOps Engineer. You will work on scalable and automated solutions that leverage telecom and data technologies.