Machine Learning Engineer
Automation Anywhere is the leader in Agentic Process Automation (APA), transforming how work gets done with AI-powered automation. Its APA system, built on the industry's first Process Reasoning Engine (PRE) and specialized AI agents, combines process discovery, RPA, end-to-end orchestration, document processing, and analytics—all delivered with enterprise-grade security and governance. Guided by its vision to fuel the future of work, Automation Anywhere helps organizations worldwide boost productivity, accelerate growth, and unleash human potential.
Key responsibilities:
- Develop and optimize machine learning models leveraging NLP, Computer Vision, and GenAI.
- Architect and implement scalable ML pipelines for training, validation, deployment, and monitoring of production models.
- Drive the development of large-scale ML infrastructure, ensuring low-latency inference and efficient resource utilization across cloud and hybrid environments.
- Implement MLOps best practices, automating model training, validation, deployment, and performance monitoring.
- Work closely with data engineers, software engineers, and product teams to ensure seamless integration of ML solutions into production systems.
- Optimize ML models for performance, scalability, and efficiency, leveraging techniques like quantization, pruning, and distributed training.
- Enhance model reliability by implementing automated monitoring, CI/CD pipelines, and versioning strategies.
- Lead efforts in data acquisition and preprocessing, including annotation and refinement of datasets to improve model accuracy.
- Stay current with state-of-the-art ML research, identifying opportunities to integrate new techniques and technologies into production systems.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field; an advanced degree is a plus.
- 6+ years of hands-on experience in building and deploying machine learning models, with a focus on NLP, Computer Vision, or GenAI solutions.
- Proven experience deploying machine learning models into production environments, ensuring high availability, scalability, and reliability.
- Proficiency with modern ML frameworks (e.g., TensorFlow, PyTorch).
- Experience in building ML pipelines and implementing MLOps for automating and scaling machine learning workflows.
- Strong programming skills in Python, R, and SQL, plus experience with big data technologies (e.g., Spark, Hadoop) for data processing and analytics.
- Basic proficiency with at least one cloud-based ML service (e.g., AWS SageMaker, Azure ML, Google AI Platform) for training, deploying, and scaling machine learning models.
- Hands-on experience with containerization (Docker), orchestration (Kubernetes), and model serving platforms (e.g., Triton Inference Server, ONNX Runtime) for production-ready ML deployments.
- Familiarity with end-to-end ML pipelines, including data collection, feature engineering, model training, and model evaluation.
- Knowledge of model optimization techniques (e.g., quantization, pruning) to improve inference performance on cloud or edge devices.
- Excellent problem-solving skills, with the ability to break down complex challenges in document extraction and transform them into scalable ML solutions.
- Strong communication skills, with the ability to articulate ML problems clearly and work autonomously.
Nice to have:
- Experience in fine-tuning large language models (LLMs) and applying GenAI techniques.
- Experience with distributed training techniques to optimize large-scale model training across multiple GPUs or cloud environments.
- Familiarity with CI/CD pipelines for ML, automated model versioning, and monitoring tools for performance and drift in production models.
All unsolicited resumes submitted to any @automationanywhere.com email address, whether submitted by an individual or by an agency, will not be eligible for an agency fee.