Remote Sr Data Engineer Databricks and Python

Description:

  • We are looking for a Data Engineer with experience in Databricks and Python to join our data team.
  • The ideal candidate should be able to build, optimize, and maintain large-scale data processing solutions using cutting-edge tools.
  • It is important that the candidate has a good level of English, as they will work with international teams and must communicate effectively in a global environment.
  • Responsibilities include designing, developing, and maintaining efficient and scalable data pipelines in Databricks.
  • The candidate will utilize Python and other associated tools to process and analyze large volumes of data.
  • They will extract, transform, and load (ETL) data from various sources into storage platforms such as Data Lakes and Data Warehouses (see the illustrative sketch after this list).
  • Collaboration with data scientists, business analysts, and other technical teams is essential to understand project requirements and provide appropriate solutions.
  • The candidate will optimize the performance of data pipelines and ensure the quality and consistency of processed data.
  • Participation in the design and maintenance of cloud data infrastructure using platforms like AWS, Azure, or Google Cloud is required.
  • They will implement data validation tests, perform data quality analysis, and ensure the integrity of databases.
  • Documentation of processes, architectures, and data integration procedures is necessary.
  • Staying updated with best practices and new tools related to data engineering, including advancements in Databricks and Python, is expected.
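
To illustrate the kind of pipeline this role involves, here is a minimal ETL sketch in PySpark on Databricks. It is a sketch only: the lake path, table name, and columns (order_id, amount, analytics.orders_clean) are hypothetical assumptions for illustration, not details from the posting.

  from pyspark.sql import SparkSession, functions as F

  # On Databricks a SparkSession already exists as `spark`; getOrCreate()
  # reuses it, and also works when running this sketch elsewhere.
  spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

  # Extract: read raw JSON files from a data lake location (hypothetical path).
  raw = spark.read.json("/mnt/datalake/raw/orders/")

  # Transform: drop rows missing key fields, normalize types, stamp the load date.
  clean = (
      raw.dropna(subset=["order_id", "amount"])
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("load_date", F.current_date())
  )

  # Load: write to a warehouse table, partitioned by load date.
  (clean.write
        .mode("overwrite")
        .partitionBy("load_date")
        .saveAsTable("analytics.orders_clean"))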

Requirements:

  • Solid experience in Databricks, especially in creating workflows and data pipelines, is required.
  • Extensive experience in Python for data processing, including libraries such as pandas, NumPy, and PySpark, is necessary (a brief sketch follows this list).
  • Strong knowledge of data architecture and data storage (Data Lakes, Data Warehouses) is essential.
  • Experience working with Apache Spark or similar technologies is required.
  • Familiarity with implementing ETL (Extract, Transform, Load) solutions in production environments is necessary.
  • Knowledge of SQL and NoSQL databases is required.
  • A good level of English (oral and written) is necessary, with the ability to work in an international environment and communicate effectively with global teams.
  • Knowledge of cloud platforms (AWS, Azure, Google Cloud) is highly desirable.
  • The ability to solve complex problems related to data processing and performance optimization is required.
  • Experience working with version control tools (such as Git) is necessary.
  • The candidate should be able to work independently and collaboratively on team projects.
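
As a brief example of the PySpark data processing named above, and of the data validation work mentioned in the description, here is a hedged sketch of basic quality checks; the table and column names are the same hypothetical ones used in the pipeline sketch earlier.

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
  df = spark.table("analytics.orders_clean")  # hypothetical table

  # Check 1: the key column must never be null.
  null_keys = df.filter(F.col("order_id").isNull()).count()

  # Check 2: the primary key must be unique.
  dupes = df.count() - df.dropDuplicates(["order_id"]).count()

  # Check 3: amounts must be non-negative.
  bad_amounts = df.filter(F.col("amount") < 0).count()

  assert null_keys == 0, f"{null_keys} rows with a null order_id"
  assert dupes == 0, f"{dupes} duplicate order_id values"
  assert bad_amounts == 0, f"{bad_amounts} rows with a negative amount"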

Benefits:

  • The position is 100% remote from any country in Latin America.
  • Payments will be made in US dollars (USD).
  • A 60% discount on courses in English, French, German, Portuguese, and Italian is provided through our collaboration with a recognized learning platform.
  • Special discounts on health, psychology, nutrition, and physical training plans are available.
  • Personalized support from an Account Manager throughout the project is included.
  • Upon completing the first project, candidates gain access to our freelancer community and a list of exclusive projects in more than 5 countries, including the USA.
  • The project is challenging and involves a major technology company.