Remote Security Research Engineer

at Maze

Posted 3 days ago

Description:

  • As a Security Research Engineer at Maze, you will define what constitutes real security risk in AI-powered vulnerability detection.
  • You will analyze cloud vulnerability findings from AI systems, conducting deep research to validate and contextualize threats.
  • Your role involves creating authoritative labels that train AI models to distinguish critical risks from noise.
  • You will serve as the expert human-in-the-loop, embedding your security judgment into the AI platform to protect thousands of organizations.
  • Your contributions will help establish new standards for AI-powered vulnerability assessment.
  • You will conduct comprehensive research into cloud vulnerabilities affecting EC2 images, Docker containers, and cloud infrastructure.
  • You will create detailed technical writeups about exploitation techniques, attack vectors, and remediation strategies for cloud vulnerabilities.
  • You will leverage CVE databases and threat intelligence sources to enrich vulnerability findings with broader context.
  • You will contribute to thought leadership through technical blog posts, security videos/podcasts, and conference presentations.
  • You will collaborate closely with engineering and product teams to translate security research insights into product improvements.

Requirements:

  • You must have 5+ years of hands-on security experience with a proven vulnerability research background.
  • You should possess deep knowledge of AWS security, cloud infrastructure vulnerabilities, and container security.
  • Strong coding and scripting abilities in Python, Go, or similar languages are required for automating research tasks.
  • You must demonstrate the ability to analyze complex security data and communicate findings to both technical and business audiences.
  • Experience working with vulnerability databases and threat intelligence sources is essential for contextualizing security findings.
  • You should have strong communication skills and the ability to work effectively with AI/ML teams.
  • You must be comfortable in a fast-paced startup environment where your research directly shapes product development.
  • Nice-to-haves include experience with AI/ML security, a background at security tooling companies, and expertise in vulnerability research methodologies.

Benefits:

  • You will tackle ambitious challenges using generative AI to address pressing issues in cloud security.
  • You will work with an expert team whose members bring experience from Big Tech companies and scale-ups, including leadership roles behind acquisitions and IPOs.
  • Your work will have a direct impact on how thousands of organizations understand and respond to cloud security threats.
  • You will help establish the gold standard for AI-assisted vulnerability research, enhancing machine learning models with human security expertise.
  • Opportunities to present your work at major security conferences will allow you to establish yourself as a thought leader in AI-powered security.