Software Engineer, Perception (Robotics)
Pony.ai
Founded in 2016 in Silicon Valley, Pony.ai has quickly become a global leader in autonomous mobility, pioneering autonomous driving technologies and services across a rapidly expanding footprint of sites around the world. Operating Robotaxi, Robotruck, and Personally Owned Vehicle (POV) business units, Pony.ai is an industry leader in the commercialization of autonomous driving and is committed to developing the safest autonomous driving capabilities on a global scale. This leading position has been widely recognized: CNBC ranked Pony.ai #10 on its 2022 Disruptor 50 list of the most innovative and disruptive tech companies, and in June 2023 Pony.ai ranked #12 globally on the inaugural “XB100” list of the world’s top 100 private deep tech companies from XPRIZE and Bessemer Venture Partners. As of August 2023, Pony.ai had accumulated nearly 21 million miles of autonomous driving globally. Pony.ai went public on Nasdaq in November 2024.
About The Role
As part of the Perception team, you will help design and build the sensor data pipeline that powers our self-driving vehicles. Our team is responsible for turning raw sensor signals into reliable, real-time information that enables advanced perception models. You’ll work across multiple sensing modalities — cameras, lidars, radars, IMUs, microphones, and more — and help ensure that our autonomous driving system can perceive the world with accuracy and robustness. This role is a great fit for engineers excited about robotics, sensor systems, and building the bridge between hardware and AI models.
Responsibilities
- Work on algorithms, tools, and models that extract critical information from multi-modal sensors in real time.
- Develop and validate systems that ensure sensor data is accurate, synchronized, and reliable, including calibration, error detection, and health monitoring.
- Integrate sensor data into the perception stack and build efficient data flows that power real-time algorithms.
- Preprocess multi-sensor inputs to improve perception performance, such as time synchronization and ground detection.
- Contribute to the overall perception pipeline, from raw sensor integration to AI-ready features.
Qualifications
- Bachelor’s, Master’s, or PhD degree in Computer Science, Robotics, Computer Vision, or a related field.
- Solid programming skills in C++ and/or Python.
- Strong problem-solving and debugging skills, with exposure to real-time or systems-level software a plus.
- Familiarity with one or more areas: robotics, computer vision, signal processing, or deep learning.
- Excellent communication skills and ability to work in a collaborative, fast-paced environment.
Compensation and Benefits
Base Salary Range: $120,000 - $200,000 Annually
Compensation may vary outside of this range depending on many factors, including the candidate’s qualifications, skills, competencies, experience, and location. Base pay is one part of total compensation; this role may also be eligible for bonuses/incentives and restricted stock units.
We also provide the following benefits to eligible employees:
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (Traditional and Roth 401k)
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation & Public Holidays)
- Family Leave (Maternity, Paternity)
- Short Term & Long Term Disability
- Free Food & Snacks