#10439 | 2019-01-15 Malmö area, Sweden

Big Data Operations Engineer (Hadoop, Spark)

Job Summary:
We are seeking an experienced Big Data Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and the services that run alongside them. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and of the applications and middleware that run on it. This is an onsite role in Malmö.

Job Description:

  • Maintain and scale production Hadoop, HBase, Kafka, and Spark clusters.
  • Implement and administer the Hadoop infrastructure, including ongoing monitoring, tuning, and troubleshooting.
  • Provide hardware architecture guidance; plan and estimate cluster capacity; deploy Hadoop clusters.
  • Improve scalability, service reliability, capacity, and performance.
  • Triage production issues as they occur, together with other operational teams.
  • Conduct ongoing maintenance across our large-scale deployments.
  • Write automation code for managing large Big Data clusters.
  • Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.
  • Participate in the occasional on-call rotation supporting the infrastructure.
  • Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down the possibilities to find the root cause.

Competence demands:
  • Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
  • Strong development/automation skills; very comfortable reading and writing Python and Java code.
  • 10+ years of overall experience, including at least 5 years of production Hadoop experience on medium to large clusters.
  • A tools-first mindset: you build tools for yourself and others to increase efficiency and to make hard or repetitive tasks quick and easy.
  • Experience with configuration management and automation.
  • Organized, focused on building, improving, resolving, and delivering.
  • A good communicator in and across teams, willing to take the lead.

Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.


Start: as soon as a suitable candidate is found
Duration: long-term assignment
Work location: Malmö area, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance

This project is closed

Sorry, we are no longer looking for consultants for this project.
