
ITEAM

Senior Data Engineer


The listing is published in the following categories:

  • Anywhere

    Tech Stack / Requirements

    We believe in innovation!

    We believe in constant change!

    We believe the creation of the future started yesterday!

    We challenge you to bring the change in the world and join us on an adventurous journey to the depths of modern technology.

     

    ITEAM is a Professional Services provider with a clear focus on today’s cutting-edge IT technologies.

     

    We are looking for a Senior Data Engineer to architect, develop, and enhance a modern Lakehouse data platform. This role is highly hands-on and centers on converting fragmented, legacy data into reliable, well-structured data assets that support analytics, reporting, and operational use cases.

     

    Job description:

    • Design and maintain scalable data pipelines using Databricks, PySpark, and distributed processing
    • Orchestrate workflows with Airflow and apply Delta Lake best practices (CDC, schema evolution)
    • Develop and optimize SQL and Python, integrating data from APIs and cloud storage
    • Implement data quality, CI/CD, and governance solutions (e.g. Unity Catalog)
    • Improve legacy datasets and guide AI-assisted development while collaborating with stakeholders
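    The CDC work mentioned above can be pictured with a small sketch. This is an illustrative toy in plain Python, not Databricks or PySpark code; in practice the equivalent is Delta Lake's MERGE INTO over a change feed. The field names ("id", "op") and the change-record shape are hypothetical.

```python
# Toy sketch of the upsert/delete semantics a Delta Lake MERGE applies
# when consuming a CDC feed. Plain Python; "id" and "op" are
# hypothetical field names, not a real Databricks schema.

def apply_cdc(table: dict, changes: list[dict]) -> dict:
    """Apply a batch of change records to a keyed table.

    table   -- current state, mapping key -> row dict
    changes -- list of {"op": "upsert" | "delete", "id": ..., **fields}
    """
    for change in changes:
        key = change["id"]
        if change["op"] == "delete":
            table.pop(key, None)  # delete the row if it exists
        else:
            row = {k: v for k, v in change.items() if k != "op"}
            # Merge new fields over the existing row; unknown columns
            # simply appear, which mimics additive schema evolution.
            table[key] = {**table.get(key, {}), **row}
    return table

table = {1: {"id": 1, "name": "a"}}
changes = [
    {"op": "upsert", "id": 1, "name": "a2", "email": "a@x"},  # update + new column
    {"op": "upsert", "id": 2, "name": "b"},                   # insert
    {"op": "delete", "id": 1},                                # delete
]
result = apply_cdc(table, changes)
# result == {2: {"id": 2, "name": "b"}}
```

    In a real pipeline the same merge-by-key logic runs as a distributed MERGE on Delta tables, with schema evolution handled by the table format rather than a dict union.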

     

    Requirements:

    • 5+ years of experience in data engineering within cloud-based, distributed data environments
    • Strong hands-on expertise with Databricks, PySpark, and Delta Lake, including CDC and schema evolution
    • Advanced skills in SQL performance tuning and Python development
    • Practical experience with Airflow or similar orchestration frameworks
    • Familiarity with core AWS services used in data platforms
    • Solid understanding of CI/CD pipelines for data workloads and data quality tooling
    • Experience with streaming technologies (e.g. Kafka or Kinesis) and exposure to data governance tools such as Unity Catalog
    • Experience guiding AI tools or agents for code generation while maintaining high engineering standards
    • Professional level of English

     

    Does it sound like a challenging opportunity for you?

    Fasten your seat belt and send us your CV!

    All job applications will be treated with strict confidentiality!