Data & Test Engineer – Robotics & ML (m/f/d)

Full-time
Your Tasks

As a Data & Test Engineer for Robotics & ML Evaluation, you will own the ecosystem that measures how well our robot learning models perform, both in simulation and on real robots. You will build datasets, metrics, tools, and testing workflows that enable ML researchers and robotics engineers to evaluate models reliably, reproducibly, and at scale.

Your work ensures that every model deployed on our robots is backed by clear, high-quality evaluation signals: robust datasets, well-defined metrics, automated test flows, and consistent test procedures. If you thrive at the intersection of data engineering, QA, simulation, and robotics, this role will give you ownership of a core pillar of our learning stack.

Your Responsibilities
  • Build evaluation infrastructure – Develop and maintain reproducible test frameworks for robot learning models and integrate them into CI/CD and release pipelines.

  • Develop tools for model testing – Enable ML engineers to run evaluations easily and obtain standardized performance metrics (success rates, robustness, generalization, latency, regressions).

  • Manage datasets & test sets – Organize, annotate, and version multimodal datasets including demonstrations, trajectories, logs, and sensor data.

  • Coordinate simulation & real-world tests – Define and maintain scenes, assets, and procedures for simulation testing; align real-world test setups to ensure reproducibility and safety.

  • Define metrics & reporting – Establish evaluation metrics, build dashboards or analytics tools, and track performance trends and regressions over time.

  • Collaborate cross-functionally – Work with ML, robotics, autonomy, simulation, and product teams to align evaluation with real-world requirements and maintain data quality standards.

Your Profile

  • Academic background in Data Engineering, Data Science, QA Engineering, Simulation, Technical Test Engineering, or a related field

  • Strong experience managing datasets, data pipelines, versioning, and quality control

  • Proficiency in Python and common ML/data tooling (NumPy, Pandas, and PyTorch for evaluation; Spark, Ray Data, or similar frameworks for running evaluations at scale)

  • Experience creating metrics, analytics, dashboards, or performance reporting tools

  • Familiarity with simulation frameworks (Unity, Unreal, Isaac Sim, or equivalents)

  • Excellent documentation, organization, and communication skills

  • Comfortable working across multiple engineering disciplines and aligning on evaluation criteria

Why Us?

  • Own a central, high-impact component of RobCo’s robot learning pipeline

  • Work closely with ML researchers, robotics engineers, and simulation experts

  • Define best practices for evaluation in a fast-evolving, high-growth robotics environment

  • Shape the reliability, rigor, and scalability of our robot learning stack

  • Hybrid work model, flexible hours, and modern equipment