Machine Learning Security Research Fellowship
Founded in 2012 by three expert hackers with no investment capital, Trail of Bits is the premier place for security experts to boldly advance security and address technology's newest and most challenging risks. It has helped secure some of the world's most targeted organizations and devices. Our combination of novel research with practical solutions reduces the security risks that our clients face from emerging technologies. Our work helps drive the security industry and the public understanding of the technology underlying our world.
Cybersecurity preparedness is a moving target. Companies like ours are the tip of the spear in the fight against attackers. Our research-based and custom-engineering approach ensures that our clients' capabilities are at the forefront of what's available. For companies and technologies that live and die by their security, a proactive, tailored approach is required to keep one step ahead of attackers.
Democratizing security information is essential. As part of our business, we provide ongoing informational support through blogs, whitepapers, newsletters, meetups, and open-source tools. The more the community understands security, the better it can appreciate why a company like ours is so unique and valuable.
Role
Trail of Bits is launching a Machine Learning Security Research Fellowship designed for researchers seeking high-impact industry experience. This one-year fellowship places the fellow at the intersection of cutting-edge AI/ML research and real-world security, working hands-on with some of the most advanced AI/ML systems deployed by leading AI organizations. The fellow will conduct original security research on frontier AI/ML systems while collaborating with our AI Assurance team on high-stakes client engagements.
This fellowship offers the intellectual rigor of academic research combined with direct impact on production AI/ML systems at scale, making it ideal for PhD candidates exploring alternatives to academic careers or recent graduates seeking industry research experience. No traditional security background is required: we're looking for exceptional AI/ML researchers who can think adversarially about complex systems.
What You'll Achieve
- Independent Research Agenda: Pursue your own AI/ML security research interests with support from Trail of Bits' research team, with opportunities to publish findings and present at leading conferences.
- Frontier System Assessment: Gain hands-on experience evaluating the security of state-of-the-art AI/ML systems deployed by top AI organizations, working on problems that represent the cutting edge of AI/ML security.
- Novel Attack & Defense Development: Design and implement new attack methodologies, defensive techniques, and evaluation frameworks for adversarial AI/ML scenarios including model poisoning, adversarial examples, jailbreaks, and data extraction.
- Open-Source Impact: Build and release AI/ML security tools and frameworks that benefit the broader research community, with support for open-source contribution as a core fellowship objective.
- Mentorship & Collaboration: Work alongside Trail of Bits' security research team, gaining exposure to security engineering practices while maintaining focus on research excellence.
- Research Output: Produce publishable research, technical blog posts, and open-source tools that advance the state of AI/ML security understanding—with explicit support for academic publication.
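To make one of the attack classes above concrete: an adversarial example perturbs an input just enough to flip a model's prediction. For a linear classifier the FGSM-style perturbation is easy to see, since each feature is simply nudged in the direction that increases the loss. The toy model, weights, and input below are invented purely for illustration:

```python
# Toy FGSM-style adversarial example against a linear classifier.
# All weights and inputs are hypothetical, chosen for illustration.

def predict(w, b, x):
    """Linear score: positive => class +1, negative => class -1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, y, eps):
    """Shift each feature one eps step in the direction that pushes
    the score away from the true label y (+1 or -1). For a linear
    model, the loss gradient w.r.t. x_i has the sign of -y * w_i."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - y * eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.9, -0.4, 0.2], 0.1   # hypothetical trained weights
x = [1.0, 0.5, -0.3]           # input the model scores positive
y = +1                         # its true label

x_adv = fgsm_perturb(w, x, y, eps=0.8)
print(predict(w, b, x))        # positive: correctly classified
print(predict(w, b, x_adv))    # negative: prediction flipped
```

Real attacks on deep models follow the same idea, but compute the gradient through the network (e.g., with PyTorch's autograd) rather than reading it off the weights.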
What You'll Bring
- PhD-Level AI/ML Expertise: Currently pursuing or recently completed (within 2 years) a PhD in machine learning, computer science, statistics, or a related field, with strong research credentials.
- Research Excellence: Track record of high-quality research through publications, preprints, workshop papers, or significant open-source contributions that demonstrate deep AI/ML expertise.
- AI/ML Systems Proficiency: Strong hands-on experience with modern AI/ML frameworks (PyTorch, JAX, TensorFlow), foundation models, and the full AI/ML research workflow including experimentation, training, and evaluation.
- Security Mindset: Demonstrated ability to think adversarially about systems, identify edge cases, or explore failure modes—even without formal security training. Interest in adversarial AI/ML, robustness, or AI safety highly valued.
- Strong Programming Skills: Proficient in Python and comfortable with systems programming. Experience implementing research prototypes and experimental frameworks.
- Intellectual Independence: Self-directed researcher capable of defining research questions, designing experiments, and driving projects to completion with minimal supervision.
- Communication Ability: Can explain complex technical concepts clearly to diverse audiences and synthesize research findings into actionable insights.
Fellowship Structure
- Duration: One-year commitment with potential pathway to full-time position.
- Research Time: Dedicated time allocated for independent research and publication.
- Conference Support: Travel funding for conference presentations and research community engagement.
- Mentorship: Regular collaboration with Trail of Bits researchers and exposure to client work.
- Flexibility: Opportunity to shape the fellowship around your research interests within AI/ML security.
Reporting Manager: Dan Guido, CEO
The base salary for this full-time position ranges from $100,000 to $120,000, excluding benefits and potential bonuses. Various factors influence our salary ranges, including the specific role, level of seniority, geographic location, and the nature of the employment contract. An individual's specific work location, unique skills, experience, and relevant educational background will determine the final offer within this range. The presented salary range encompasses the starting salaries for all U.S. locations. For a precise salary estimate tailored to your preferred location, please discuss it with your recruiter during the hiring process.