We are looking for a Perception Engineer to play a critical role in developing and optimizing perception algorithms and systems for autonomous vehicles. Leveraging your expertise in computer vision, sensor fusion, and machine learning, you will design and implement state-of-the-art perception solutions that enable AVs to understand and interpret their surroundings in real time. This role offers an exciting opportunity to work on cutting-edge technology, collaborate with cross-functional teams, and shape the future of autonomous driving.

Key Responsibilities:

  • Design, develop, and optimize perception algorithms and systems for autonomous vehicles, including object detection, tracking, classification, and scene understanding.
  • Research and implement computer vision techniques, sensor fusion strategies, and machine learning models to extract meaningful information from sensor data (e.g., LiDAR, cameras, radar).
  • Collaborate with hardware engineers to integrate and calibrate sensors, cameras, and other perception components into AV platforms.
  • Collect, annotate, and preprocess large-scale datasets for training and evaluation of perception models, ensuring high quality and diversity.
  • Develop simulation environments and tools for testing and validation of perception algorithms under various driving scenarios and conditions.
  • Conduct performance analysis, optimization, and tuning of perception systems to meet accuracy, speed, and reliability requirements for real-world deployment.
  • Collaborate with software engineers, system architects, and researchers to integrate perception capabilities into AV software stacks and platforms.
  • Stay up to date with the latest advancements in perception, computer vision, and AI research to drive innovation and best practices within the team.

Qualifications:

  • Bachelor’s, Master’s, or Ph.D. degree in Computer Science, Electrical Engineering, Robotics, or a related field.
  • Proven experience in perception, computer vision, or sensor fusion for autonomous vehicles or robotics applications.
  • Strong proficiency in programming languages such as C++, Python, or MATLAB, as well as experience with libraries/frameworks such as OpenCV, TensorFlow, or PyTorch.
  • Solid understanding of machine learning techniques, including deep learning, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
  • Experience with sensor technologies such as LiDAR, cameras, radar, and IMUs, as well as familiarity with sensor calibration and synchronization.
  • Excellent problem-solving skills and the ability to work independently and collaboratively in a team environment.
  • Effective communication skills with the ability to present technical concepts and solutions to stakeholders.

Preferred Qualifications:

  • Experience with perception software frameworks and platforms such as ROS (Robot Operating System) or Autoware.
  • Familiarity with software development practices, version control systems (e.g., Git), and continuous integration/continuous deployment (CI/CD) pipelines.
  • Previous work on AV projects or in the automotive industry.
  • Contributions to open-source projects or publications in perception and autonomous driving conferences/journals.

Location: San Francisco, CA

Salary: $120,000 + 15% bonus

Please email your resume to info@vnasearch.com or call us at 646 661 2971.