
HR agency Recruitment.bg

Data Engineer – Core Platform (Lakehouse & Streaming)


The listing is published in the following categories

  • Anywhere

    Who We Are

    Recruitment.bg is a boutique IT recruitment company based in Bulgaria. We aim to work with the top employers in the industry, companies that we thoroughly vet and trust. Our mission is to guide IT professionals toward improved career paths by understanding their skills, crafting employment strategies, and supporting them every step of the way. Placing emphasis on honesty, respect, and reliability while delivering exceptional service by ‘going the extra mile’, we build long-term relationships with the people and organizations we work with.


    We are currently partnering with a large-scale product company building a modern data platform that supports multiple business domains, high transaction volumes, and real-time decision-making.


    The team is expanding its Core Platform & Payments data function and is looking for a Data Engineer who enjoys working close to infrastructure, distributed systems, and analytical data modeling.

    This is not a pure reporting role — it is platform engineering with strong ownership.


    Your Responsibilities

    • Build and maintain batch and streaming pipelines feeding lakehouse and warehouse layers
    • Develop and optimize dbt models and SQL transformations
    • Create Python utilities, services, and APIs supporting data workflows
    • Work with Airflow DAGs for orchestration
    • Write Spark or distributed processing jobs when needed
    • Contribute to data modeling decisions (partitioning, normalization, optimization)
    • Implement data validation and quality checks
    • Improve platform reliability, observability, and automation
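    To give a concrete sense of the data-validation responsibility above, here is a minimal standard-library sketch of the kind of batch quality checks a pipeline might run. The field names (`event_id`, `amount`, `ts`) and the specific rules are illustrative assumptions, not details of the actual platform.

```python
def validate_rows(rows, required=("event_id", "amount", "ts")):
    """Return a list of (index, reason) tuples for rows failing basic checks.

    Hypothetical checks: completeness, within-batch uniqueness, and a
    simple range rule for payment amounts.
    """
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: every required field must be present and non-null
        missing = [k for k in required if row.get(k) is None]
        if missing:
            failures.append((i, f"missing fields: {missing}"))
            continue
        # Uniqueness: event_id must not repeat within the batch
        if row["event_id"] in seen_ids:
            failures.append((i, "duplicate event_id"))
            continue
        seen_ids.add(row["event_id"])
        # Range check: payment amounts should be positive
        if row["amount"] <= 0:
            failures.append((i, "non-positive amount"))
    return failures
```

    In a production pipeline, checks like these would typically run as a task after ingestion, with failures routed to a quarantine table or alerting rather than returned in-process.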


    Your Background

    • 3+ years of experience with Python in data or backend environments
    • 3+ years of strong SQL skills (joins, CTEs, window functions, performance tuning)
    • Experience with Kafka or other streaming systems
    • Hands-on experience with Airflow
    • Experience with dbt
    • Docker knowledge and container workflows
    • Understanding of analytical data modeling
    • Nice to have: Spark, ClickHouse/PostgreSQL, lakehouse architectures, Kubernetes basics, CI/CD pipelines.
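    As a quick illustration of the window-function requirement, this plain-Python sketch computes what `SUM(amount) OVER (PARTITION BY user_id ORDER BY ts)` would produce in SQL. The schema and field names are assumptions made for the example.

```python
from collections import defaultdict

def running_sum_per_user(events):
    """Annotate each event with a per-user running total of amount,
    ordered by timestamp (the Python analogue of a partitioned
    cumulative SUM window function)."""
    totals = defaultdict(float)
    out = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        totals[ev["user_id"]] += ev["amount"]
        out.append({**ev, "running_total": totals[ev["user_id"]]})
    return out
```

    Unlike a plain `GROUP BY` aggregate, a window function keeps one output row per input row, which is what the running-total annotation above mimics.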


    What’s Offered

    • Hybrid model (3 days office / 2 remote)
    • Twice-yearly salary reviews
    • Annual performance bonus
    • Premium health insurance
    • Modern office & long-term product stability
    • Structured engineering environment


    All applications will be treated as strictly confidential.

    Only short-listed candidates will be contacted.

    [GV]