Description:
Pavago is hiring a Database Engineer to support a client project that integrates advanced algorithms with dynamic user interactions.
The project centers on the analysis and visualization of large-scale engineering datasets and requires deep expertise in database architecture and optimization.
Key responsibilities include designing, building, and optimizing database systems to efficiently manage and process large-scale data operations.
The role involves implementing robust data pipelines for seamless data exchange between algorithms and the database.
The Database Engineer will enhance database performance by optimizing SQL queries, indexing, and architecture.
Collaboration with the development team is essential to ensure real-time interactivity between users and graph data.
The position also includes assisting in refining proprietary algorithms to improve statistical and economic insights, as well as contributing to the integration of machine learning frameworks for enhanced data processing and analysis.
Requirements:
Candidates must have strong knowledge of SQL, C#, and Python for database optimization and algorithm integration, along with familiarity with big data technologies, cloud services such as Azure, NoSQL databases such as MongoDB, and machine learning frameworks like TensorFlow or PyTorch.
Exceptional analytical and problem-solving abilities are required, as well as strong communication skills.
Proficiency with project management and collaboration tools such as Asana, Azure Boards, Jira, or Slack is necessary.
Candidates should have 3-4 years of proven experience designing and optimizing databases, building data pipelines for large-scale operations, and working with cross-functional teams in fast-paced environments.
Benefits:
The position offers the opportunity to work on a cutting-edge project at the intersection of data optimization and innovation.
Candidates will have the chance to tackle complex data challenges and improve system efficiency and scalability.
The role allows remote work with flexibility in working hours.
The interview process includes an initial phone call, a technical test, a Zoom interview, a final interview with the client, and background checks to verify references and past employment.