Long Chen

Staff Scientist

Wayve

Long Chen is a distinguished research leader with a proven track record of developing disruptive AI technologies. He is currently a Staff Scientist at Wayve, where he is at the forefront of building vision-language-action (VLA) models for the next wave of autonomous driving, including Driving-with-LLMs and LINGO. Previously, he was a Research Engineer at Lyft Level 5, where he led the development of data-driven planning models trained on crowd-sourced data for Lyft’s self-driving cars. His experience also spans applying AI in domains such as mixed reality, surgical robotics, and healthcare.

Interests
  • Artificial Intelligence
  • Computer Vision
  • Multi-modal Large Language Models (LLMs)
  • Robotics
Education
  • PhD in Computer Vision / Machine Learning, 2015 - 2018

    Bournemouth University, UK

  • MSc in Medical Image Computing, 2013 - 2014

    University College London (UCL), UK

  • BSc in Biomedical Engineering, 2009 - 2013

    Dalian University of Technology (DUT), China

Experience

Wayve
Staff Scientist
August 2021 – Present · London, UK
AV2.0: building the next generation of self-driving cars with end-to-end machine learning and Vision-Language-Action (VLA) models.
Lyft Level 5
Research Engineer
May 2018 – July 2021 · London, UK
Autonomy 2.0: data-driven planning models for Lyft’s self-driving vehicles.

Recent Publications

Full publication list can be found on Google Scholar.
LingoQA: Video Question Answering for Autonomous Driving
Autonomous driving has long faced a challenge with public acceptance due to the lack of explainability in the decision-making process. …
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Large Language Models (LLMs) have shown promise in the autonomous driving sector, particularly in generalization and interpretability. …
One Thousand and One Hours: Self-driving Motion Prediction Dataset
Motivated by the impact of large-scale datasets on ML systems, we present the largest self-driving dataset for motion prediction to …
SimNet: Learning reactive self-driving simulations from real-world observations
In this work, we present a simple end-to-end trainable machine learning system capable of realistically simulating driving experiences. …
What Data Do We Need for Training an AV Motion Planner?
We investigate what grade of sensor data is required for training an imitation-learning-based AV planner on human expert demonstrations. …
Recent Developments and Future Challenges in Medical Mixed Reality
Mixed Reality (MR) is of increasing interest within technology-driven modern medicine but is not yet used in everyday practice. This …