✨ About The Role
- This role involves leading the development of trust and safety models that detect fraud and policy violations at scale on the Scale AI platform.
- The successful candidate will be responsible for ensuring that contributors on the platform are trustworthy and provide high-quality data.
- The position requires expertise in classical machine learning as well as familiarity with neural networks and large language models.
- Candidates will need strong intuition for testing detection systems, particularly under extreme class imbalance.
- Hands-on production experience developing models specifically for detecting trust and safety violations is a strong plus.
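
The class-imbalance point above is worth making concrete. The following is a minimal sketch with hypothetical data (not part of the posting): in fraud detection, positives are rare, so raw accuracy can look excellent for a detector that catches nothing, which is why imbalance-aware metrics like recall and precision matter.

```python
# Minimal sketch (hypothetical data): why accuracy misleads under extreme
# class imbalance, as in fraud detection where positives are rare.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred):
    # Fraction of actual positives (fraud cases) that the detector catches.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1,000 contributors, only 10 of them fraudulent (1% positive class).
y_true = [1] * 10 + [0] * 990

# A degenerate "detector" that flags nobody still scores 99% accuracy...
y_pred = [0] * 1000
print(accuracy(y_true, y_pred))  # 0.99
print(recall(y_true, y_pred))    # 0.0 -- it misses every fraud case
```

In practice one would report precision/recall or average precision (all available in scikit-learn, which the role lists) rather than accuracy when positives are this rare.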
⚡ Requirements
- The ideal candidate should have at least three years of experience addressing sophisticated machine learning problems in either a research or product development setting.
- A strong foundation in machine learning is essential, along with practical experience in deploying models to production in a microservices cloud environment.
- Familiarity with large language models (LLMs) and proficiency in frameworks such as scikit-learn, PyTorch, JAX, or TensorFlow is highly desirable.
- Candidates should possess strong written and verbal communication skills, enabling them to operate effectively in cross-functional teams.
- Experience working with cloud technology stacks, such as AWS or GCP, is important for this role.