Remote Data Engineer

Description:

  • 2Brains is a company dedicated to building the Digital Future of its clients, integrating strategy, design, and technology to drive growth.
  • The Data Engineer will participate in designing and developing new management information models and in maintaining existing ones.
  • The role involves supporting advanced analytics initiatives by exploring internal and external information models (Data Discovery).
  • The Data Engineer will gather historical data from multiple internal information sources to support these initiatives.
  • Responsibilities include building and optimizing data pipelines for efficient data ingestion, transformation, and loading.
  • The Data Engineer will manage cloud infrastructure (AWS, GCP, Azure) to ensure scalability and cost efficiency.
  • The role requires automating and monitoring processes using DevOps tools such as Airflow, Terraform, or Kubernetes.
  • Implementing quality controls and governance to ensure data integrity and availability is essential.
  • Collaboration with Data Science, Product, and Development teams to design solutions aligned with business needs is expected.

Requirements:

  • Experience working with BI technologies is mandatory.
  • Experience in building and operating distributed systems for extracting, ingesting, and processing large datasets with high availability is necessary.
  • Demonstrable capability in data modeling, ETL development, and data storage is required.
  • Experience using business intelligence reporting tools, specifically Power BI, is essential.
  • Knowledge of consuming REST API microservices is mandatory.
  • Knowledge of Git, Bitbucket, Docker, Jenkins, and webhooks is mandatory.
  • Proficiency in Python programming and solid software engineering foundations are required.
  • Skills in automation and scripting are necessary.
  • Experience using Python libraries for data manipulation and analysis, as well as Apache Spark, is required.
  • Knowledge of SQL and NoSQL databases is essential.
  • Familiarity with CI/CD and Dataflow is required.
  • Knowledge of AWS services such as S3, Redshift, and Glue is necessary.

Benefits:

  • The opportunity to work with a high-performance team where learning and development are prioritized.
  • Access to large clients and challenging projects is provided.
  • Continuous learning and growth opportunities, including meetups, training, and cultural activities, are offered.
  • A flexible and dynamic work environment is available.
  • Special benefits include a day off for your birthday and four weeks of vacation.