The role involves designing and building scalable search ranking, indexing, and AI-based systems.
You will integrate user behavior signals, session data, and content metadata to optimize relevance.
Strong analytical and quantitative problem-solving skills are required.
Collaboration with product, data, and infrastructure teams is essential to deploy experiments and measure impact.
You will drive and shape architectural decisions that align with long-term scalability and maintainability.
Experience with LLM technologies is necessary, including generative and embedding techniques, modern model architectures, retrieval-augmented generation (RAG), LLM fine-tuning and pre-training, deep reinforcement learning, and evaluation benchmarks.
The role includes designing real-time and batch ML models using embeddings, collaborative filtering, and deep learning.
You will optimize retrieval, filtering, and ranking algorithms in production search pipelines.
Monitoring model performance and iterating continuously using A/B testing and offline evaluation metrics is part of the job.
You will help shape MLOps practices and model governance.
Requirements:
You must have experience integrating AI into work processes, decision-making, or problem-solving, or thinking critically about how to do so.
A minimum of 8 years of experience in the full software development life cycle is required, including coding standards, code reviews, source control management, build processes, testing, and operations.
At least 3 years of experience building ML-powered search or recommendation systems is necessary.
You should be able to mentor and guide engineering teams, fostering a culture of technical excellence and innovation.
Strong programming skills in Python and Java are required.
A working knowledge of ML frameworks like TensorFlow and PyTorch is essential.
Hands-on experience working on AI search (text, vector, and hybrid search) is needed.
Knowledge of embedding models, user/item vectorization, or session-based personalization is required.
Experience with large-scale distributed systems such as Spark, Kafka, or Kubernetes is necessary.
Hands-on experience with real-time ML systems is required.
A background in NLP, graph neural networks, or sequence modeling is preferred.
Experience with A/B testing frameworks and metrics like NDCG, MAP, or CTR is necessary.
Benefits:
ServiceNow offers a flexible work environment, allowing for remote or in-office work depending on the nature of the role.
The company is committed to creating an accessible and inclusive experience for all candidates.
ServiceNow is an equal opportunity employer, ensuring all qualified applicants receive consideration for employment without discrimination.
Reasonable accommodations are available for candidates who require assistance during the application process.
Employment is contingent upon obtaining any necessary export control approvals for positions requiring access to controlled technology.