This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
We are seeking a skilled and experienced Data Engineer to join our dynamic team.
In this role, you will be responsible for designing, building, and maintaining our data infrastructure to support data-driven decision making across the organization.
You will design, develop, and maintain scalable data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
You will architect and optimize data storage solutions, including data warehouses and data lakes.
You will ensure data quality and integrity through data validation, cleansing, and error handling.
You will collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver relevant datasets.
You will implement data security measures and access controls to protect sensitive information.
You will automate and improve data processes and workflows for scalability and efficiency.
You will monitor the data infrastructure for performance and reliability, addressing issues promptly.
You will stay current with industry trends and emerging technologies in data engineering.
You will document data pipelines, processes, and best practices for knowledge sharing.
You will participate in data governance and compliance efforts to meet regulatory requirements.
You will provide technical support and mentoring to junior data engineers.
You will continuously optimize data architecture to support the company's evolving data needs.
Requirements:
A Bachelor's degree in Computer Science, Information Technology, or a related field is required; a Master's degree is a plus.
You must have 3+ years of experience as a Data Engineer or in a similar role.
Strong programming skills in languages such as Python, Java, or Scala are necessary.
Expertise in SQL and experience with both relational and NoSQL databases are required.
Proficiency in data modeling and database design is essential.
Experience with big data technologies such as Hadoop, Spark, and Hive is necessary.
Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud) is required.
Knowledge of data warehousing concepts and ETL tools is essential.
Experience with version control systems (e.g., Git) is required.
Strong problem-solving and analytical skills are necessary.
Excellent communication and teamwork abilities are required.
Benefits:
You will receive a competitive salary commensurate with experience.
Health, dental, and vision insurance will be provided.
A 401(k) retirement plan with company match is included.
Flexible work arrangements are available.
Professional development opportunities will be offered.
You will work on exciting projects with cutting-edge data technologies.