
[Remote] AI/ML Research Engineer, LLM Post-Training & Evaluation

Innodata Inc. · via Jobright

Remote · US · Senior · CLT · 17 days ago

Estimated Salary

R$ 12,870.00 – R$ 19,305.00


Job Description

Note: This is a remote position open to candidates in the USA.


Innodata Inc. is a leading data engineering company specializing in AI technology solutions.


They are seeking an AI/ML Research Engineer to design and optimize training and evaluation systems for large language models, working closely with various technical stakeholders to ensure robust model improvements.


Responsibilities

  • Lead or co-lead technically complex ML engineering projects from initial customer discussions through implementation and delivery
  • Design, build, and improve LLM training and post-training pipelines, including data ingestion, preprocessing, fine-tuning, evaluation, and experiment tracking
  • Implement and optimize evaluation systems for LLMs and multimodal models, including offline benchmarks and task-specific test harnesses
  • Integrate human-in-the-loop and AI-augmented evaluation signals into model development workflows
  • Build robust infrastructure and tooling for reproducible experimentation, metrics logging, and regression monitoring
  • Diagnose model behavior and pipeline failures, including data issues, training instability, metric inconsistencies, and evaluation drift
  • Collaborate with Language Data Scientists and Applied Research Scientists to translate evaluation frameworks into executable systems
  • Work closely with customer technical stakeholders to understand goals, constraints, and success criteria; propose and implement technically sound solutions
  • Contribute to internal research and platform development, including benchmark frameworks, evaluation tooling, and post-training workflow improvements
  • Contribute to best practices and standards for LLM training, evaluation, and quality assurance across projects
  • Mentor junior engineers and contribute to technical design reviews, documentation, and engineering rigor across the team

Skills

  • BS/MS/PhD in Computer Science, Machine Learning, AI, Applied Mathematics, or a related quantitative technical field (MS/PhD preferred)
  • 2–3 years of relevant industry or research engineering experience in ML/AI systems
  • Hands-on experience with LLM training / fine-tuning / post-training, including at least one of: supervised fine-tuning (SFT), preference optimization (e.g., DPO or related methods), RLHF / RLAIF-style workflows, task- or domain-adaptation of foundation models
  • Strong programming skills in Python and experience building production-quality ML code
  • Experience with modern ML frameworks (e.g., PyTorch, JAX, TensorFlow) and model libraries/tooling (e.g., Hugging Face ecosystem, vLLM, distributed training stacks)
  • Experience designing and implementing evaluation pipelines for LLM/ML systems, including metrics computation, dataset handling, and experiment comparisons
  • Strong understanding of data pipelines and ML systems engineering, including reproducibility, observability, and debugging
  • Experience with large-scale distributed ML systems and performance optimization for training/evaluation workloads (GPU/accelerator environments preferred)
  • Experience with large-scale data processing and workflow orchestration in support of model training/evaluation
  • Ability to collaborate directly with technical stakeholders including research scientists, ML engineers, data engineers, and customer technical leads
  • Strong written and verbal communication skills, including the ability to explain complex technical tradeoffs to both technical and non-technical audiences
  • Experience with multimodal model training/evaluation (text + image/audio/video)
  • Experience with long-context evaluation and/or model adaptation for long-context tasks
  • Experience with agentic or multi-turn evaluation harnesses, tool-use simulation, or interactive environment testing
  • Experience working in customer-facing technical consulting, solutions engineering, or applied research delivery
  • Familiarity with LLM safety, alignment, robustness, or red-teaming evaluation approaches
  • Contributions to open-source ML/LLM tooling or published technical work in relevant areas

Company Overview

Innodata (NASDAQ: INOD) is a global data engineering company.

We believe that data and AI are inextricably linked.


Founded in 1988, the company is headquartered in Hackensack, New Jersey, USA, with a workforce of 5,001–10,000 employees.


Its website is http://www.innodata.com.


Company H1B Sponsorship

Innodata Inc. has a track record of offering H1B sponsorships, with 2 in 2024.


Please note that this does not guarantee sponsorship for this specific role.
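One of the post-training methods named in the skills list, Direct Preference Optimization (DPO), can be sketched in a few lines. This is an illustrative scalar version of the per-pair loss with made-up log-probabilities; a real implementation operates on batched tensors from a policy model and a frozen reference model.

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the policy (pi_*) and the frozen reference model (ref_*).
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # Negative log-sigmoid of the scaled margin; lower loss means the
    # policy more strongly prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Illustrative numbers: the policy already leans toward the chosen response.
loss = dpo_loss(pi_chosen=-12.0, pi_rejected=-20.0,
                ref_chosen=-14.0, ref_rejected=-18.0)
```

With a zero margin the loss is ln 2 (the policy is indifferent); as the margin grows the loss falls toward zero, which is what drives training.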
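The evaluation-pipeline and regression-monitoring requirements can likewise be illustrated with a minimal offline harness. The benchmark data, function names, and tolerance here are hypothetical; a production system would add dataset versioning, experiment tracking, and richer metrics than exact match.

```python
def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def evaluate(outputs: dict, benchmark: dict) -> float:
    """Mean exact-match accuracy of model outputs over a benchmark."""
    scores = [exact_match(outputs[k], ref) for k, ref in benchmark.items()]
    return sum(scores) / len(scores)

def regression_check(baseline: float, candidate: float,
                     tolerance: float = 0.01) -> bool:
    """True unless the candidate drops more than `tolerance` below baseline."""
    return candidate >= baseline - tolerance

# Toy benchmark and two experiment runs (illustrative data only).
benchmark = {"q1": "Paris", "q2": "4", "q3": "blue"}
run_a = {"q1": "Paris", "q2": "4", "q3": "green"}   # baseline outputs
run_b = {"q1": "paris", "q2": "5", "q3": "green"}   # candidate outputs

acc_a = evaluate(run_a, benchmark)  # 2 of 3 correct
acc_b = evaluate(run_b, benchmark)  # 1 of 3 correct
print(f"baseline={acc_a:.2f} candidate={acc_b:.2f} "
      f"ok={regression_check(acc_a, acc_b)}")
```

The same compare-against-baseline pattern generalizes to any scalar metric, which is what makes regression monitoring cheap to bolt onto an existing evaluation pipeline.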




Information

  Level: Senior
  Contract: CLT
  Location: US
  Remote: Yes
  Currency: BRL
  Posted: 17 days ago
  Source: Jobright
