Please let Increasingly know you found this job
on RemoteYeah.
This helps us grow 🌱.
Description:
The Data Engineer will be responsible for working on data integration and pipeline development.
The role requires a strong background in AWS Cloud, specifically with tools such as Databricks, Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda, within S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
The candidate should have over 3 years of relevant experience in these areas.
The position involves designing, developing, testing, deploying, maintaining, and improving data integration pipelines.
Strong hands-on experience in Python development, particularly in PySpark within the AWS Cloud environment, is essential.
The role also requires strong analytical skills for writing complex queries, query optimization, debugging, and working with user-defined functions, views, and indexes.
Experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins, is necessary.
Requirements:
Candidates must have a minimum of 3 years of relevant experience with AWS Cloud and data integration tools.
Proficiency in Python development, especially in PySpark, is required.
Strong analytical skills and experience with databases, including writing complex queries and optimizing them, are essential.
Familiarity with source control systems like Git and Bitbucket, as well as continuous integration tools such as Jenkins, is necessary.
Benefits:
Employees will have the opportunity to work at one of the fastest-growing retail technology companies in Europe.
A competitive salary will be offered, along with the chance to work directly with a highly experienced team.
The work environment is designed to be varied, complex, and challenging, fostering a great culture that employees can help shape.
Group Health Insurance is provided as part of the benefits package.
Apply now