Please let Sojern know you found this job on RemoteYeah. This helps us grow 🌱.
Description:
We are seeking a talented and motivated Staff Data Engineer to join our growing data team.
In this role, you will design, build, and maintain data pipelines that deliver high-quality data to our data warehouse and other analytics platforms.
You will work closely with data scientists, analysts, and other engineers to ensure that our data infrastructure meets the needs of the business.
Responsibilities include designing, developing, and implementing scalable data pipelines using ETL processes.
You will write and execute complex SQL queries to extract and transform data from various sources.
You will develop and maintain Python scripts for data manipulation, automation, and data quality checks.
You will design and implement data models to optimize data storage and retrieval.
You will monitor and maintain data pipelines to ensure data quality, accuracy, and timeliness.
You will collaborate with data scientists and analysts to understand their data needs and develop solutions.
You will document data pipelines and processes for future reference and maintainability.
You will stay up to date on the latest data engineering tools and technologies.
You will attend communities of practice to keep the team current on best practices and org-wide initiatives.
You will unblock and mentor developers to keep stories moving.
You will participate in the entire Agile development lifecycle, including sprint planning, stand-ups, and retrospectives.
Requirements:
Experience in designing, developing, and implementing ETL pipelines is essential.
Proficiency in SQL and experience with relational databases such as MySQL or PostgreSQL is required.
Strong programming skills in Python, including libraries like Pandas and NumPy, are necessary.
Extensive knowledge of agile methodologies is expected.
The ability to work independently and as part of a team is crucial.
Strong knowledge of cloud computing platforms, such as AWS, Azure, or Google Cloud, is required.
Experience with CI/CD practices for data engineering workflows is necessary.
Nice to have: Experience with Google Cloud, BigQuery, CloudSQL, gRPC, and Kubernetes.
Nice to have: Experience with orchestration/scheduling tools such as Apache Airflow, Prefect, or similar.
Nice to have: Experience in AdTech, including knowledge about Ad networks, ad exchanges, programmatic advertising, DMPs, DSPs, and Audiences.
Benefits:
Competitive compensation packages, stock options offered to every employee, and a Bonusly program to reward and recognize team wins and performance are provided.
Employees can take up to 40 hours of paid time per year to volunteer and give back to the community.
Flexibility is offered through a Flexi-Friday benefit, hybrid or remote work options for most roles, and time-zone friendly work hours with async collaboration.
Team offsites are planned annually, and there are six employee resource groups, regular virtual and in-office team building events, and monthly company All Hands & leadership Q&As.
PTO allowance to recharge, comprehensive healthcare options, and paid parental leave (16 weeks for birthing parents; 12 weeks for non-birthing parents) are included.
Retirement contributions and investment options are available for applicable locations, along with travel benefits (hotel stay benefit & IATA membership) and mental health, wellness & financial health resources.
A learning & development stipend, mentorship program, career development programs, and leadership training are provided for growth.
Home office tech setup (laptop, monitor, keyboard, mouse), monthly internet and phone allowance, and modern tools to communicate and collaborate (Slack, Google Suite) are included for productivity.
Apply now