This job post is closed and the position is probably filled. Please do not apply.
🤖 Automatically closed by a robot after the apply link was detected as broken.
Description:
Design, develop, and launch strategic data mining solutions using AI and analytical techniques at scale.
Take ownership of the end-to-end software development lifecycle, including design, testing, deployment, and operations.
Participate in technical discussions, strategy, design reviews, code reviews, and implementation.
Develop highly scalable platforms for extracting, analyzing, and processing large amounts of contextual data in real-time and batch modes.
Create large-scale knowledge graphs by linking information and datasets from multiple domains.
Maintain high standards of technical rigor, build resilient and scalable systems, and drive operational and process improvements.
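One responsibility above, linking datasets from multiple domains into a knowledge graph, can be sketched in miniature. This is a hypothetical illustration in Python, not the team's actual stack: the record shapes, ID scheme, and the `build_graph` helper are all assumptions for the sketch.

```python
from collections import defaultdict

def build_graph(people, companies):
    """Link person records to company records by shared employer name,
    producing a tiny knowledge graph as an adjacency mapping."""
    graph = defaultdict(set)
    by_name = {c["name"]: c for c in companies}
    for p in people:
        employer = by_name.get(p["employer"])
        if employer:
            graph[p["id"]].add(employer["id"])  # edge: person -> employer
            graph[employer["id"]].add(p["id"])  # reverse edge for traversal
    return dict(graph)

# Hypothetical records from two domains (people, organizations)
people = [
    {"id": "person:ada", "employer": "Acme"},
    {"id": "person:bob", "employer": "Globex"},
]
companies = [
    {"id": "org:acme", "name": "Acme"},
    {"id": "org:globex", "name": "Globex"},
]

graph = build_graph(people, companies)
```

At production scale the same entity-linking idea would run over a distributed engine and handle fuzzy matches, but the core step is identical: resolve a shared key across domains and emit edges.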
Requirements:
Degree in mathematics/computer science or related discipline.
3+ years of experience in software development lifecycle, including design, coding, testing, deployments, and operations.
Proficiency in at least one programming language, preferably Python or Java.
Experience with distributed and big data technologies such as MapReduce, Spark, Flink, Kafka, PySpark, and NoSQL databases.
MS or PhD in Computer Science or equivalent experience.
Experience with large-scale distributed systems, preferably on cloud platforms such as AWS, Azure, or Google Cloud.
Familiarity with real-world large-scale datasets.
Strong understanding of, and passion for, statistical/mathematical modeling and data analysis.
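The MapReduce experience asked for above boils down to three phases: map, shuffle, and reduce. A single-machine word-count sketch in plain Python shows the shape of the paradigm; a real deployment would use Spark or Flink, and the function names here are illustrative only.

```python
from collections import defaultdict

def map_phase(doc):
    # Map: emit (word, 1) pairs for each word in a document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big graphs", "data pipelines"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))  # {"big": 2, "data": 2, ...}
```

The separation into phases is what lets frameworks parallelize the map and reduce steps across machines while handling the shuffle over the network.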
Benefits:
Autonomous and empowered work culture promoting ownership and rapid growth.
Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture.
Positive environment with regular celebrations of success, fostering inclusivity, diversity, and authenticity.