Linux/Hadoop Engineer

  • Katowice, śląskie
  • Specialist
  • 22.03.2019
  • Valid until 21.04.2019

    The employer reserves the right to end the recruitment process earlier.

    Place of work: Katowice

    Candidate’s profile:

    Essential:

    • Very good Unix/Linux experience (preferably Red Hat)
    • Good troubleshooting skills 
    • Networking know-how (DNS/TCP/IP)
    • Basic knowledge of virtualization (VMware)
    • Experience with application support
    • Basic knowledge of Active Directory
    • A great sense of humor


    Nice to have:

    • Familiarity with the Hadoop ecosystem
    • Basic knowledge of Cloud services (AWS, Azure)
    • Experience with scripting (Bash, Python, PowerShell) would be appreciated
    • Good interpersonal skills


    An asset would be:

    • Readiness for on call duty

    Job description

     

    We are looking for a professional experienced with Linux support who wants to gain experience with some of the latest technologies.
    You will provide support for operating systems (Linux / Windows) and get trained on Big Data applications support.
    You will have the chance to collaborate with data scientists and architects on a daily basis.
    You should be motivated, have a “can do” attitude and be willing to keep on developing your skills (no routine).

     

    Main accountabilities:

    • Perform day-to-day Hadoop cluster activities.
    • Install, deploy and maintain Hadoop clusters.
    • Analyse Hadoop cluster performance and provide troubleshooting support.
    • Monitor Hadoop cluster connectivity and security.
    • Automate manual tasks for better performance.
    • Manage user access.
    • Manage backup and recovery solutions for the platform and databases.
    • Cooperate with other teams, including external suppliers.
    • Develop and maintain existing documentation.

    Your team


    The Big Data Lake as a Service team offers a dynamic environment with a great variety of services and tools.

    Once you are part of the Infrastructure team, you will have the opportunity to manage infrastructure components based on DELL EMC VxBLOCK, automate all operational tasks with vRealize Automation, and use the rest of the time to learn new technologies on our internal learning platform.

    If you prefer working directly with Hadoop clusters, want to manage the environment at both the application and OS level, have good troubleshooting skills and are willing to explore new technologies, apply to the Application team.

    Join us and be part of a great team that shapes the future of Big Data Services!

    What we offer

    • Working in a close-knit team and a friendly atmosphere
    • Development of expert or leader competences
    • Bonuses, including those for recommending new employees
    • A wide range of training and co-financing of courses
    • Attractive package of additional benefits (fitness, gym, cinema, etc.) – you choose what you want
    • Integration events and joint celebrations
    • An annual family picnic
    • Employee volunteering opportunities and interesting CSR projects
    • Additional life insurance
    • Disability inclusion: assistive technologies and reasonable accommodations
    • Private medical care, also for your family
    • Carpooling and bicycle parking

    About us

    Capgemini is one of the leading global providers of consulting and IT services.

    The Cloud is fashionable - everyone’s talking about it, many use it, but few know what it consists of, how it works, how to access it, and how to take care of it.
    It is us, Cloud Infrastructure Services, who understand the subject thoroughly: from high-level services, through managing equipment and operating systems, and internal or access networks, to managing applications, IT operations, availability, configurations, and changes. By working in an international environment… we use a number of foreign languages.