Adastra is a global IT company offering premier-class services and solutions across various sectors, facilitating the transition to the digital era.
We work in international teams on projects for leading global financial, automotive, telecommunications, and insurance companies.
We pride ourselves on taking up the challenge of mastering cutting-edge technologies to empower our clients to achieve great outcomes. What gives us satisfaction is the opportunity to apply innovation in our daily work, in areas such as Cloud, Managed Services, Big Data, AI, IoT, Blockchain, Data Monetization, and RPA.
We are looking for a Big Data Developer with AWS experience who is passionate about working with huge data sets.
Responsibilities:
Design, build, and deploy large-scale enterprise cloud solutions using AWS services
Analyze and re-architect existing cloud solutions to meet target performance goals
Design and build data pipelines from ingestion to consumption within a big data architecture, using Spark and Scala
Design and implement data ingestion, data transformation and data governance functions on AWS using AWS native or custom programming
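As a rough illustration of the pipeline work described above, a minimal Spark/Scala job might ingest raw events from S3, apply a transformation, and write partitioned Parquet for downstream consumption. This is a sketch only; the bucket paths and column names are hypothetical, and it assumes a configured Spark environment with S3 access.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline")
      .getOrCreate()

    // Ingest: raw JSON events landed in S3 (hypothetical bucket/path)
    val raw = spark.read.json("s3://raw-events-bucket/events/")

    // Transform: basic cleansing plus derivation of a partition column
    // (event_id and event_ts are assumed field names)
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Consume: partitioned Parquet readable by engines such as Athena or Redshift Spectrum
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-events-bucket/events/")

    spark.stop()
  }
}
```

Partitioning the output by date keeps downstream queries that filter on `event_date` from scanning the full data set.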
Requirements:
Experience in designing, coding, and tuning big data processes using Apache Spark and Scala
Proven experience with AWS services including Glue, S3, Redshift, CloudWatch, DynamoDB, Lambda, and Athena
Experience operating very large data warehouses or data lakes in a cloud environment
Good knowledge of distributed systems and of how to optimize data distribution, partitioning, and massively parallel processing (MPP)
Excellent skills in writing and optimizing SQL
Experience with CI/CD pipelines on AWS, CloudFormation and AWS CDK is a plus
Excellent interpersonal skills and the ability to work with business owners to understand process and data requirements
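On the distribution and partitioning point in the requirements above, tuning in Spark often comes down to controlling shuffle parallelism and join strategy. The sketch below (table paths, join key, and the partition count are all hypothetical, and a Spark environment is assumed) shows two common levers: repartitioning a large fact table on its join key and broadcasting a small dimension table.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, broadcast}

object PartitionTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-tuning")
      // Size shuffle parallelism to the cluster instead of the default of 200
      .config("spark.sql.shuffle.partitions", "400")
      .getOrCreate()

    val orders = spark.read.parquet("s3://warehouse-bucket/orders/")

    // Repartition the large fact table on the join key so the shuffle is spread evenly
    val byCustomer = orders.repartition(400, col("customer_id"))

    // Broadcast the small dimension table so it is not shuffled at all
    val customers = spark.read.parquet("s3://warehouse-bucket/customers/")
    val joined = byCustomer.join(broadcast(customers), Seq("customer_id"))

    joined.write.mode("overwrite").parquet("s3://warehouse-bucket/orders_enriched/")
    spark.stop()
  }
}
```

Which lever helps depends on data skew and table sizes; the broadcast hint only pays off when the dimension table fits comfortably in executor memory.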