Remote Data Engineer

This job is closed

This job post is closed and the position is probably filled. Please do not apply. It was automatically closed by a robot after the apply link was detected as broken.

Description:

  • The Business Data Infrastructure (BDI) Team at Red Canary is responsible for the business data and analytics ecosystem, ensuring insights, decisions, and innovations are powered by consistent, accurate, and accessible data.
  • The BDI team adopts a data product approach, treating data products as discrete, reusable datasets or pipelines curated for various stakeholders.
  • Data products are designed with clear purposes, defined ownership, and measurable quality standards to deliver tangible value to different departments.
  • The Data Engineer will play a crucial role in designing, building, and maintaining the data infrastructure that supports the organization's insights and decisions.
  • Responsibilities include ensuring data is ingested, transformed, and stored securely and efficiently, promoting consistency, scalability, and accessibility across teams.
  • The Data Engineer will act as a bridge between raw data and actionable insights, enabling departments to leverage reliable datasets for innovation and growth.
  • Key tasks include building and maintaining scalable data infrastructure, developing and managing data products, ensuring data quality and governance, collaborating with cross-functional teams, and continuously optimizing for scalability and efficiency.

Requirements:

  • A Bachelor's or Master's degree or equivalent work experience in Data Engineering, Mathematics, Statistics, Computer Science, or a related engineering discipline is required.
  • A minimum of 3 years of proven experience as a Data Analyst, Data Engineer, data-focused Software Engineer, or similar role is necessary, preferably in a startup or fast-paced environment.
  • Strong knowledge of relational databases, distributed storage, and semi-structured and unstructured data is essential.
  • Substantial experience with cloud computing platforms such as AWS, GCP, or Azure is required.
  • Strong experience in data pipeline design and execution using tools like AWS SageMaker, AWS Glue, Apache Airflow, Prefect, or similar is necessary.
  • Exposure to distributed computing frameworks, such as PySpark, is a plus.
  • Experience with primary data sources like Intacct, Salesforce, Zendesk, or Jira is advantageous.
  • Excellent problem-solving and critical-thinking skills are required, with the ability to distill complex problems into understandable data products.
  • Strong communication skills are necessary to effectively present complex findings to non-technical stakeholders.
  • The candidate must be self-motivated and proactive, capable of working independently and collaborating effectively within a team and across a distributed customer base.

Benefits:

  • The base salary for this role ranges from $117,000 to $135,000 per year.
  • This position is eligible for participation in the company's bonus program.
  • The role includes eligibility for a grant of stock options, subject to the approval of the company's board of directors.
  • The application deadline for this position is February 7, 2025.