
Sigma Software
Middle/Senior Big Data Engineer (Big Data Competence Center)

The listing is published on the following mini-boards:

  • Anywhere


    We are inviting you to join our Big Data Competence Center, which is part of Sigma Software’s complex organizational structure that combines various clients, interesting projects, and activities to grow your professional skills.

    The Big Data Competence Center is a place where we collect the best engineering practices and unite them into one knowledge base to provide the best technical excellence services to our clients.

    We are not just a team gathered to write code. We are all willing to contribute to the field, either by participating in the life of the Big Data Unit or by constantly growing our own skills. We take an unusual approach to hiring: we hire people for our team first, not for a specific project. This gives us a chance to get to know you better and ensures a good match between the client’s needs and your professional interests.

    We operate in various business domains and work with top-tier clients (please see the ones not under NDA here: sigma.software/case-studies).

     

    Responsibilities:

    • Contributing to investigations of new technologies and the design of complex solutions, supporting a culture of innovation with attention to security, scalability, and reliability
    • Working with a modern data stack, producing well-designed technical solutions and robust code, and implementing data governance processes
    • Working and professionally communicating with the customer’s team
    • Taking up responsibility for delivering major solution features
    • Participating in requirements gathering & clarification process, proposing optimal architecture strategies, leading the data architecture implementation
    • Developing core modules and functions, designing scalable and cost-effective solutions
    • Performing code reviews and writing unit and integration tests (a brief example follows this list)
    • Scaling the distributed system and infrastructure to the next level
    • Building data platform using the power of modern cloud providers (AWS/GCP/Azure)
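
    To make the testing bullet above concrete, here is a hypothetical example (the transform helper, its mapping, and the test are illustrative assumptions, not part of the role): a small data-cleaning function with a pytest-style unit test.

        # Hypothetical example: a tiny transform helper and a pytest unit test for it.
        def normalize_status(raw: str) -> str:
            """Map free-form status strings to a small canonical set."""
            value = raw.strip().lower()
            mapping = {"ok": "success", "success": "success", "fail": "failure"}
            return mapping.get(value, "unknown")


        def test_normalize_status():
            # pytest discovers and runs functions named test_*.
            assert normalize_status(" OK ") == "success"
            assert normalize_status("fail") == "failure"
            assert normalize_status("???") == "unknown"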

     

    Extra Responsibilities:

    • Developing micro-batch/real-time streaming pipelines (Lambda architecture; a brief sketch follows this list)
    • Working on POCs for validating proposed solutions and migrations
    • Leading the migration to the modern technology platform, providing technical guidance
    • Adhering to CI/CD methods, helping to implement best practices in the team
    • Contributing to unit growth, mentoring other members in the team (optional)
    • Owning the whole pipeline and optimizing the engineering processes
    • Designing complex ETL processes for analytics and data management, and driving their implementation at scale
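
    As a rough sketch of the micro-batch style mentioned in the streaming bullet above (the listing names Lambda architecture but no specific engine, so PySpark Structured Streaming, the socket source, and the 10-second trigger below are assumptions):

        # Illustrative micro-batch streaming job: word counts over a socket stream,
        # recomputed once per trigger (micro-batch).
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("micro_batch_sketch").getOrCreate()

        # Speed layer: read an unbounded text stream from a local socket.
        lines = (
            spark.readStream.format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .load()
        )

        # Running word counts, updated with each micro-batch.
        counts = (
            lines.select(F.explode(F.split(F.col("value"), " ")).alias("word"))
            .groupBy("word")
            .count()
        )

        # Every 10-second trigger processes one micro-batch and refreshes the output.
        query = (
            counts.writeStream.outputMode("complete")
            .format("console")
            .trigger(processingTime="10 seconds")
            .start()
        )
        query.awaitTermination()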

     

    Requirements:

    • 3+ years of experience with Python and SQL
    • Experience with AWS, specifically API Gateway, Kinesis, Athena, RDS, and Aurora
    • Experience building ETL pipelines for analytics and internal operations (a brief sketch follows this list)
    • Experience building internal APIs and integrating with external APIs
    • Experience working with the Linux operating system
    • Effective communication skills, especially for explaining technical concepts to nontechnical business leaders
    • Desire to work on a dynamic, research-oriented team
    • Experience with distributed application concepts and DevOps tooling
    • Excellent writing and communication skills
    • Troubleshooting and debugging ability
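
    To illustrate the Python + SQL + API combination the requirements describe, a minimal extract-and-load script follows; the endpoint, table, and columns are hypothetical and not part of the listing (SQLite is used only to keep the sketch self-contained).

        # Hypothetical ETL sketch: pull JSON records from an internal API and load
        # them into a relational table using plain SQL.
        import json
        import sqlite3
        import urllib.request

        API_URL = "https://example.com/api/orders"  # placeholder for an internal API


        def extract(url):
            """Fetch raw JSON records from the API endpoint."""
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)


        def load(records, db_path="analytics.db"):
            """Insert or replace the records in a simple analytics table."""
            conn = sqlite3.connect(db_path)
            conn.execute(
                "CREATE TABLE IF NOT EXISTS orders ("
                "id INTEGER PRIMARY KEY, amount REAL, status TEXT)"
            )
            conn.executemany(
                "INSERT OR REPLACE INTO orders (id, amount, status) "
                "VALUES (:id, :amount, :status)",
                records,
            )
            conn.commit()
            conn.close()


        if __name__ == "__main__":
            load(extract(API_URL))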

     

    Would be a plus:

    • 2+ years of experience with Hadoop, Spark and Airflow
    • Experience with DAGs and orchestration tools (a brief sketch follows this list)
    • Experience with developing Snowflake-driven data warehouses
    • Experience with developing event-driven data pipelines
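
    For the DAG/orchestration bullet above, a minimal Apache Airflow sketch; the DAG id, daily schedule, and no-op tasks are illustrative assumptions rather than anything prescribed by the role.

        # Minimal Airflow DAG sketch: three placeholder tasks chained extract -> transform -> load.
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator


        def extract():
            print("extracting...")    # placeholder: pull source data


        def transform():
            print("transforming...")  # placeholder: clean and reshape


        def load():
            print("loading...")       # placeholder: write to the warehouse


        with DAG(
            dag_id="example_etl",
            start_date=datetime(2024, 1, 1),
            schedule_interval="@daily",  # named "schedule" in newer Airflow releases
            catchup=False,
        ) as dag:
            t_extract = PythonOperator(task_id="extract", python_callable=extract)
            t_transform = PythonOperator(task_id="transform", python_callable=transform)
            t_load = PythonOperator(task_id="load", python_callable=load)

            t_extract >> t_transform >> t_load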