LAformer

Trajectory Prediction for Autonomous Driving with Lane-Aware Scene Constraints

Authors
Mengmeng Liu, Hao Cheng, Lin Chen, Hellward Broszio, Jiangtao Li, Runjiang Zhao, Monika Sester, Michael Ying Yang
Abstract

Existing trajectory prediction methods for autonomous driving typically rely on one-stage prediction models, which condition future trajectories on observed trajectories combined with fused scene information. However, they often struggle with complex scene constraints, such as those encountered at intersections. To address this, we present a novel method called LAformer. It uses an attention-based, temporally dense lane-aware estimation module to continuously estimate the likelihood of alignment between motion dynamics and scene information extracted from an HD map. Additionally, unlike one-stage prediction models, LAformer uses the predictions from the first stage as anchor trajectories and applies a second-stage motion refinement module to further exploit temporal consistency across the complete time horizon. Extensive experiments on nuScenes and Argoverse 1 demonstrate that LAformer achieves excellent generalization performance in multimodal trajectory prediction. The source code of LAformer is available at github.com/mengmengliu1998/LAformer.
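
For readers who want a concrete picture of the two-stage design summarized in the abstract, the following minimal PyTorch sketch illustrates one possible reading of it: a cross-attention module scores how well the encoded motion history aligns with encoded lane segments from the HD map, a first stage decodes anchor trajectories from the fused context, and a second stage refines those anchors over the full horizon. The class name LaneAwareTwoStageSketch, all layer choices, and the tensor dimensions are illustrative assumptions and do not reproduce the released implementation.

import torch
import torch.nn as nn


class LaneAwareTwoStageSketch(nn.Module):
    def __init__(self, d_model=64, num_modes=6, horizon=12):
        super().__init__()
        self.num_modes, self.horizon = num_modes, horizon
        # Encoders for the observed trajectory and lane centerline points (assumed).
        self.motion_enc = nn.GRU(input_size=2, hidden_size=d_model, batch_first=True)
        self.lane_enc = nn.Sequential(
            nn.Linear(2, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Cross-attention: the motion encoding attends to lane segments; the attention
        # weights act as a rough stand-in for the paper's lane-likelihood estimates.
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Stage 1: propose anchor trajectories from the fused motion/lane context.
        self.anchor_head = nn.Linear(2 * d_model, num_modes * horizon * 2)
        # Stage 2: predict per-mode residual offsets that refine the anchors.
        self.refine_head = nn.Sequential(
            nn.Linear(2 * d_model + horizon * 2, d_model),
            nn.ReLU(),
            nn.Linear(d_model, horizon * 2),
        )

    def forward(self, past_traj, lane_pts):
        # past_traj: (B, T_obs, 2) observed positions; lane_pts: (B, L, 2) lane points.
        B = past_traj.size(0)
        _, h = self.motion_enc(past_traj)                   # h: (1, B, D)
        motion = h.transpose(0, 1)                          # (B, 1, D)
        lanes = self.lane_enc(lane_pts)                     # (B, L, D)

        # Lane-aware estimation: lane context vector plus per-lane alignment scores.
        ctx, lane_scores = self.attn(motion, lanes, lanes)  # (B, 1, D), (B, 1, L)
        fused = torch.cat([motion.squeeze(1), ctx.squeeze(1)], dim=-1)  # (B, 2D)

        # Stage 1: anchor trajectories.
        anchors = self.anchor_head(fused).view(B, self.num_modes, self.horizon, 2)

        # Stage 2: refine each anchor with a residual offset over the full horizon.
        fused_rep = fused.unsqueeze(1).expand(-1, self.num_modes, -1)
        refine_in = torch.cat([fused_rep, anchors.flatten(2)], dim=-1)
        offsets = self.refine_head(refine_in).view(B, self.num_modes, self.horizon, 2)
        return anchors + offsets, lane_scores.squeeze(1)    # (B, K, H, 2), (B, L)

A forward pass with, for example, past_traj of shape (8, 20, 2) and lane_pts of shape (8, 50, 2) returns six 12-step candidate trajectories per agent together with per-lane attention scores. Note that the full model described in the abstract estimates the lane-alignment likelihoods densely over time rather than once, as this simplified sketch does.
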

Organisation(s)
Institute of Cartography and Geoinformatics
Graduiertenkolleg 2159: Integrity and Collaboration in Dynamic Sensor Networks
External Organisation(s)
University of Twente
VISCODA GmbH
PhiGent Robotics
University of Bath
Type
Conference contribution
Pages
2039-2049
No. of pages
11
Publication date
17 June 2024
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Computer Vision and Pattern Recognition, Electrical and Electronic Engineering
Electronic version(s)
https://doi.org/10.48550/arXiv.2302.13933 (Access: Open)
https://doi.org/10.1109/CVPRW63382.2024.00209 (Access: Closed)