Data Engineer with Scala

Capgemini Polska

  • Kraków, Lesser Poland
  • offer expired over a month ago
  • contract of employment
  • full-time
  • specialist (Mid / Regular)
  • hybrid work
  • Immediate employment
  • remote recruitment
  • we welcome applicants from Ukraine
The employer is open to employing citizens of Ukraine

Technologies we use


  • Spark

  • Scala

  • Cloud

  • AWS

  • Azure

  • GCP

  • Python

  • SQL

About the project

Capgemini FS - Business Unit (Fintech) is hiring!

Our Insights & Data practice delivers cutting-edge, data-centric solutions.

Most of our projects involve Cloud & Big Data engineering. We develop solutions to process large, often unstructured, datasets using dedicated Cloud Data services on AWS, Azure or GCP.

We are responsible for the full SDLC of the solution: apart from using data processing tools (e.g., ETL), we code a lot in Python, Scala or Java and use DevOps tools and best practices. The data is either exposed to downstream systems via APIs and outbound interfaces, or visualized in reports and dashboards.

Within our AI CoE we deliver Data Science and Machine Learning projects with focus on NLP, Anomaly Detection and Computer Vision.

Additionally, we are exploring the area of Quantum Computing, searching for practical growth opportunities for both us and our clients.

Currently, over 250 of our Data Architects, Engineers and Scientists work on exciting projects for over 30 clients from different sectors (Financial Services, Logistics, Automotive, Telco and others).

Come on Board! :)

Your responsibilities

  • Translating complex functional and technical requirements into detailed designs;

  • Implementation of scalable and high-performance data processing solutions using Spark and Scala;

  • Design and implementation of software to process large and unstructured datasets (noSQL, Data Lake Architecture);

  • Optimization and testing of modern Big Data solutions, also in cloud and Continuous Delivery / Continuous Integration environments.

Our requirements

  • At least 3 years of commercial experience on Data Engineering, Big Data and/or Cloud projects using Apache Spark and Scala;

  • Knowledge of at least one relational or non-relational database system, and of SQL;

  • Familiarity with one or more of the following (or similar) technologies and tools: Oozie, Hive, Hadoop, Sqoop, Kafka, Flume, HBase;

  • Very good command of English (willingness to learn German would be an advantage).

What we offer


  • sharing the costs of sports activities

  • private medical care

  • sharing the costs of professional training & courses

  • life insurance

  • remote work opportunities

  • integration events

  • corporate library

Capgemini Polska

Capgemini is active in the areas of insight & data and social and digital marketing, working in close collaboration with its clients.
