Yash Prabhu

yashprabhu19@gmail.com | 267-356-9767 | www.yashprabhu.com | https://www.linkedin.com/in/yash-prabhu-b9695520a/

Education

Massachusetts Institute of Technology (MIT)

GPA: 4.5/5.0

  • Candidate for Bachelor’s in Electrical Engineering and Computer Science; expected graduation Dec 2026 or Jun 2027

Relevant coursework:

  • Computer Architecture, Design/Analysis of Algorithms, Machine Learning, Computer Vision
  • Linear Algebra/Optimization, Statistics, Probability/Random Variables, Robotics & Autonomy
  • Low-Level Programming, Fundamentals of Programming, Multivariable Calculus

Extracurriculars:

Men’s Varsity Soccer, Undergraduate Research

North Penn High School, Lansdale, PA

Jun 2021

Awards: International Science and Engineering Fair Finalist, Pennsylvania All-State Bb Clarinet | SAT: 1570

GPA: 4.21/4.0

Work/Internship Experience

Nyro Robotics, San Francisco, CA

Robot Learning Assistant • Jun 2025 – Present

  • Refactored OpenHOMIE training pipeline to support multi-GPU simulation and visualization on remote GCP clusters; enabled concurrent use of multiple CUDA devices by decoupling graphics and simulation hardware
  • Designed a fully automated visualization and video-recording system for headless GPU nodes using X11 forwarding and Xvfb (X virtual framebuffer)
  • Wrote bash scripts and parameterizable tools that generate benchmark videos of policy rollouts from arbitrary checkpoints and robot-state inputs, with automatic upload to Google Drive and Weights & Biases
  • Integrated Slack and Weights & Biases alerts for reward drops and key training milestones (sketched below); improved observability and reduced idle GPU time during long runs by scheduling visualization and training concurrently across four GPUs
  • Tuned reward functions and control parameters for Proximal Policy Optimization (PPO) training runs to stabilize sim-to-real locomotion under the OpenHOMIE action curriculum and distributional shift (Booster vs. Unitree robot morphologies); emphasized reproducibility via branch-tracked reward diffs and configs
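
A minimal sketch of the alerting hook, assuming a hypothetical SLACK_WEBHOOK_URL environment variable and an illustrative drop threshold (not the production values):

    import os
    import requests  # Slack incoming-webhook POST
    import wandb

    SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # hypothetical env var
    DROP_FRACTION = 0.8  # illustrative: alert when reward falls below 80% of the best so far

    def check_reward(step: int, mean_reward: float, best_reward: float) -> None:
        """Log the reward; fire Slack + W&B alerts on a sharp drop (assumes positive rewards)."""
        wandb.log({"mean_reward": mean_reward}, step=step)
        if best_reward > 0 and mean_reward < DROP_FRACTION * best_reward:
            msg = f"Reward dropped to {mean_reward:.3f} (best {best_reward:.3f}) at step {step}"
            wandb.alert(title="Reward drop", text=msg)            # W&B in-app alert
            requests.post(SLACK_WEBHOOK_URL, json={"text": msg})  # Slack notification

Called once per evaluation interval inside the training loop, after wandb.init().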

MIT CS & AI Lab, Cambridge, MA

Robotics Research Intern (Improbable AI Group)

  • Adapting a MuJoCo-based monocular ball-tracking system I wrote to IsaacLab by integrating photorealistic rendering and reducing visual distractors to improve robustness of egocentric object detection under sim-to-real transfer
  • Used an existing IsaacLab pipeline to explore humanoid catching of a medicine ball on the Unitree H1; used PPO and vectorized environments to parallelize rollouts across multiple GPUs
  • Conducted systematic environment modification and reward shaping for a velocity-based locomotion configuration on the Unitree H1; reverse-engineered IsaacLab’s manager-based task API to customize observations, command spaces, URDF properties, and PD control parameters
  • Developed tools to inject custom objects (e.g. thrown ball) into IsaacSim scenes with tunable initial states, enabling egocentric catch trajectories for humanoids
  • Trained humanoid walking with both a pure RL controller and a procedurally generated RL controller derived from the LAFAN1 dataset
  • Experimented with data-scaling techniques and developed custom infrastructure to run the Universal Manipulation Interface (UMI) pipeline on a bimanual robot setup; mirrored the existing driver construction to support a Robotiq gripper (2024)
  • Porting real-world policy-evaluation code for UMI to simulation by modifying low-level RTDE commands; refactored environment and controller infrastructure (eval_real.py, RTDEInterpolationController) for sim compatibility
  • Debugged and reconfigured the video-input pipeline for UMI deployment; resolved multi-camera device conflicts by filtering /dev/v4l paths for Elgato cameras (sketched below) to enable real-time synchronized capture
  • Replaced UMI’s hardcoded WSG50 gripper interface, which parsed serial commands from an action buffer, with a mock-compatible Robotiq driver using the Python API; preserved the original control pipeline by replicating WSG50 method signatures and the memory-queuing structure for seamless integration
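A minimal sketch of the Elgato device filtering, relying on the fact that /dev/v4l/by-id symlinks embed the vendor string and stay stable across reboots (exact device names vary by product):

    import glob

    def find_elgato_devices() -> list[str]:
        """Return stable /dev/v4l/by-id paths for Elgato capture devices only.

        Filtering on the vendor substring avoids grabbing other webcams whose
        /dev/video* indices shuffle between boots; index0 is the capture stream.
        """
        return sorted(p for p in glob.glob("/dev/v4l/by-id/*Elgato*")
                      if p.endswith("video-index0"))

Each returned path can be handed directly to cv2.VideoCapture for synchronized capture.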

Tangible Robotics, San Francisco, CA

Robotics Intern • Jan 2025 – Feb 2025

  • Built a UDP-based streaming system to transmit MetaQuest controller poses from Unity to a ROS node over WiFi for real-time robot teleoperation
  • Evaluated multiple ROS integration strategies for VR control, including direct ROS publishing, TCP bridging, and Unity-ROS socket communication
  • Analyzed end-to-end round-trip latency with a custom UDP benchmark (sketched below); measured a median pose-streaming latency of 2.05 ms under local network conditions
  • Developed a simulated ROS1 environment for remote control of robot arms from VR joystick data; tuned servo timing to balance responsiveness and stability, using inverse kinematics and end-effector mapping
  • Built and wired a full differential inverse kinematics controller stack in Drake for bimanual Realman RM-75 6F arm teleoperation in simulation; learned Drake’s diagram-based system abstraction by connecting PID, inverse dynamics, and MeshCat visualization components from scratch
  • Researched and benchmarked low-cost teleoperation exoskeletons (e.g., GELLO, AirExo) for integration with custom bimanual robot platforms; evaluated hardware/software tradeoffs and encoder specs; also evaluated teleoperation hardware such as Inspire Hands and Manus teleoperation gloves
  • Co-presented Tangible Robotics’ live demo at Founder’s Inc. “Cold Start” Demo Day, showcasing remote VR teleoperation of a mobile base and robotic arm; presented alongside the CTO and Head of Research to an audience of 750+ investors, engineers, and VCs as one of only 5 startups selected for the Founder’s Inc. portfolio
  • Met regularly with partners at the Founder’s Inc. accelerator alongside Tangible Robotics’ founders; learned how to form a business plan, fundraise, and launch an early product
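
A minimal sketch of the UDP round-trip benchmark; the echo-server address and payload size are illustrative stand-ins for the real pose stream:

    import socket
    import statistics
    import time

    ECHO_ADDR = ("192.168.1.50", 9999)  # hypothetical echo server on the robot host

    def median_rtt_ms(n: int = 1000) -> float:
        """Send n timestamped UDP packets to an echo server; return median RTT in ms."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(1.0)
        rtts = []
        for _ in range(n):
            t0 = time.perf_counter()
            sock.sendto(b"x" * 64, ECHO_ADDR)   # pose-sized payload
            sock.recvfrom(2048)                 # block until the echo returns
            rtts.append((time.perf_counter() - t0) * 1000.0)
        return statistics.median(rtts)

    if __name__ == "__main__":
        print(f"median round-trip latency: {median_rtt_ms():.2f} ms")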

AutoUpLink

Software Engineering Intern • Jan 2024 – Feb 2024

  • Provided initial UI/UX designs in Figma for the revamp of AutoUpLink’s car-inventory app; presented the design flow and feature rationale directly to the (Fill in positions)

Projects

Egocentric Baseball Detection using Simulation @ Improbable AI

May 2025

  • Fine-tuned YOLOv8 on domain-randomized MuJoCo data (from Polycam scans) to detect baseballs from monocular RGB input
  • Improved mAP@0.5 by 20 points on real test data via early stopping (training sketch below); currently porting to IsaacLab for photorealistic rendering and fewer visual distractors
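
A minimal sketch of the fine-tuning run with the Ultralytics API; the dataset YAML name and hyperparameters are illustrative, with patience providing the early stopping:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")            # start from a pretrained checkpoint
    model.train(
        data="mujoco_baseballs.yaml",     # hypothetical domain-randomized dataset config
        epochs=100,
        imgsz=640,
        patience=20,                      # early stopping when val mAP stalls
    )
    metrics = model.val()                 # reports mAP@0.5 among other metrics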

RISC-V Computer Processor

May 2025

  • Designed a 4-stage pipelined RISC-V processor in Minispec; learned digital logic, caching, virtual memory, and parallelization
  • Accelerated MNIST inference via hardware multiplication, loop unrolling, and custom packed multiplication; received highest achievable score

A CNN-Based Automated Stuttering Identification System (First author, IEEE ICMLA ’22)

2022

  • Designed and trained multiple CNN classifiers in a full TensorFlow pipeline (sketched below) using spectrograms derived from stuttered speech clips in the SEP-28k dataset, achieving F1 scores above 0.92 across disfluency types including blocks, repetitions, and prolongations
  • Proposed framework to support speech pathologists in low-resource settings; paper presented at 21st IEEE International Conference on Machine Learning and Applications (ICMLA)
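
A minimal sketch of one such classifier; the input shape and layer sizes are illustrative, not the published architecture:

    import tensorflow as tf

    def build_cnn(num_classes: int, input_shape=(128, 128, 1)) -> tf.keras.Model:
        """Small CNN over spectrograms of short speech clips."""
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model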

Autonomous COVID-19 Screening with Deep Learning and Thermal Imaging (ISEF)

2021

  • Built a low-cost thermal mask/fever-detection system using Faster R-CNN and a FLIR Lepton 3.5 (fever-check sketch below); evaluated multiple object-detection models (SSD MobileNet, Faster R-CNN) and thermal sensors (MLX90614, AMG8833, FLIR) for performance and accuracy
  • Deployed prototype at public library; selected as ISEF Finalist in Robotics & Intelligent Machines
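
A minimal sketch of the fever check downstream of the face/mask detector, assuming the Lepton 3.5 runs in TLinear mode (per-pixel temperatures in centikelvin); the cutoff is a tunable assumption:

    import numpy as np

    FEVER_C = 38.0  # illustrative screening cutoff in degrees Celsius

    def max_temp_in_box(frame_ck: np.ndarray, box: tuple[int, int, int, int]) -> float:
        """Max temperature (deg C) inside a detector bounding box.

        frame_ck: radiometric Lepton frame in centikelvin (TLinear mode).
        box: (x1, y1, x2, y2) pixel coordinates from the detector.
        """
        x1, y1, x2, y2 = box
        roi = frame_ck[y1:y2, x1:x2]
        return float(roi.max()) / 100.0 - 273.15

    def has_fever(frame_ck: np.ndarray, box) -> bool:
        return max_temp_in_box(frame_ck, box) >= FEVER_C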

Low-Cost Firearm Detection with Mixed Deep Learning and Computer Vision

2020

  • Developed and benchmarked three weapon detection pipelines (OpenCV, TensorFlow SSD, and hybrid), achieving up to 91% detection rate with low-latency real-time performance
  • Engineered a custom mixed model combining classical CV and deep learning (gating sketch below); improved detection rate by 10% while minimizing GPU usage and computation time, and validated results with bounding-box accuracy, detection rate, and statistical-significance testing
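
A minimal sketch of the hybrid gating idea: a cheap classical-CV motion stage proposes regions so the deep-learning detector (a hypothetical run_ssd call here) runs only on candidates, cutting GPU load:

    import cv2

    bg = cv2.createBackgroundSubtractorMOG2()

    def candidate_rois(frame, min_area: int = 500):
        """Classical-CV stage: return moving regions worth passing to the DL model."""
        mask = bg.apply(frame)                                   # foreground mask
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]

    # for (x, y, w, h) in candidate_rois(frame):
    #     run_ssd(frame[y:y+h, x:x+w])   # hypothetical deep-learning stage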

Skills