Trajectory Prediction for Autonomous Driving Cars
Here, we propose a graph-based trajectory prediction approach for autonomous driving. The following figure gives a high-level overview of our scheme.
Framework
Experimental Results
1. Datasets
We evaluate our scheme on two well-known trajectory prediction datasets:
- NGSIM I-80 [1]
- NGSIM US-101 [2]
2. Comparison Results
We consider the following baselines:
- Constant Velocity (CV): A baseline that uses a constant-velocity Kalman filter to predict future trajectories.
- Vanilla LSTM (V-LSTM): A baseline that feeds the track history of the predicted object into an LSTM model to predict a distribution over its future positions.
- C-VGMM + VIM: In [3], Deo et al. propose a maneuver-based variational Gaussian mixture model with a Markov random field based vehicle interaction module.
- GAIL-GRU: Kuefler et al. [4] use a generative adversarial imitation learning model for vehicle trajectory prediction. Note, however, that they use ground-truth data of the surrounding vehicles as input during the prediction phase.
- CS-LSTM(M): An LSTM model with convolutional social pooling layers, proposed by Deo et al. in [5], that includes a maneuver classifier.
- CS-LSTM: The CS-LSTM model of [5] without the maneuver classifier.
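To make the simplest baseline concrete, the constant-velocity prediction step can be sketched in a few lines of plain Python. This is our own illustrative sketch, not code from any of the cited papers: the function and variable names are ours, and we show only the straight-line extrapolation, not the Kalman-filter smoothing a full CV baseline would apply to the observed track first.

```python
# Constant Velocity (CV) baseline sketch: extrapolate the last observed
# velocity forward at a fixed sampling interval dt (seconds).
def cv_predict(history, horizon, dt=0.2):
    """history: list of (x, y) positions sampled every dt seconds.
    Returns `horizon` future (x, y) positions under constant velocity."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # last observed velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# Example: a car moving mostly along y; predict the next 3 steps.
preds = cv_predict([(0.0, 0.0), (0.2, 1.0)], horizon=3, dt=0.2)
```

The rapid error growth of CV over longer horizons in the table below reflects how quickly real trajectories deviate from straight-line motion.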
Our model predicts the future trajectories of all observed objects simultaneously, whereas the other methods predict only a single traffic agent (the one in the central location) at a time.
Root Mean Square Error (RMSE) for trajectory prediction on the NGSIM I-80 and US-101 datasets, in meters. All results except ours are taken from [5]; lower is better.
| Model | Pred. 1(s) | Pred. 2(s) | Pred. 3(s) | Pred. 4(s) | Pred. 5(s) |
|---|---|---|---|---|---|
| CV | 0.73 | 1.78 | 3.13 | 4.78 | 6.68 |
| V-LSTM | 0.68 | 1.65 | 2.91 | 4.46 | 6.27 |
| C-VGMM + VIM [3] | 0.66 | 1.56 | 2.75 | 4.24 | 5.99 |
| GAIL-GRU [4] | 0.69 | 1.51 | 2.55 | 3.65 | 4.71 |
| CS-LSTM(M) [5] | 0.62 | 1.29 | 2.13 | 3.20 | 4.52 |
| CS-LSTM [5] | 0.61 | 1.27 | 2.09 | 3.10 | 4.37 |
| Ours (vs. CS-LSTM) | 0.37 (−0.24, 40%↑) | 0.86 (−0.41, 32%↑) | 1.45 (−0.64, 31%↑) | 2.21 (−0.89, 29%↑) | 3.16 (−1.21, 28%↑) |
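The per-horizon RMSE metric used above can be computed as in the following sketch (plain Python; the function and variable names are our own, and the feet-to-meter factor reflects that NGSIM positions are logged in feet):

```python
import math

FEET_TO_METERS = 0.3048  # NGSIM coordinates are recorded in feet

def rmse_per_horizon(preds, truths):
    """preds, truths: lists of trajectories, each a list of (x, y) points
    in feet, all of the same length T (one point per future time step).
    Returns T RMSE values in meters, one per prediction horizon."""
    T = len(preds[0])
    out = []
    for t in range(T):
        # squared Euclidean error at horizon t, averaged over trajectories
        se = [(p[t][0] - g[t][0]) ** 2 + (p[t][1] - g[t][1]) ** 2
              for p, g in zip(preds, truths)]
        out.append(math.sqrt(sum(se) / len(se)) * FEET_TO_METERS)
    return out

# Tiny example: one trajectory, off by 3 ft at step 1 and 4 ft at step 2.
errs = rmse_per_horizon([[(0.0, 0.0), (1.0, 0.0)]],
                        [[(0.0, 3.0), (1.0, 4.0)]])
```

Because the conversion factor is linear, converting positions to meters first or scaling the final RMSE gives the same result.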
3. Visualization of Prediction Results
In the following figure, we visualize several prediction results in mild, moderate, and congested traffic conditions (from left to right).
- Blue rectangles are the centrally located cars, i.e., the cars whose trajectories CS-LSTM [5] tries to predict.
- Black boxes are surrounding cars.
- Black-solid lines are the observed history.
- Red-dashed lines are the ground truth in the future.
- Yellow-dashed lines are the predicted results (5 seconds) of our scheme.
- Green-dashed lines are the predicted results (5 seconds) of CS-LSTM [5].
- The region from −90 to 90 feet is the observed area.
PS: More details and experimental results are coming soon.
References
- J. Colyar and J. Halkias, “Us highway 80 dataset,” Federal Highway Administration (FHWA), Tech. Rep. FHWA-HRT-07-030, 2007.
- J. Colyar and J. Halkias, “Us highway 101 dataset,” Federal Highway Administration (FHWA), Tech. Rep. FHWA-HRT-07-030, 2007.
- N. Deo, A. Rangesh, and M. M. Trivedi, “How would surround vehicles move? a unified framework for maneuver classification and motion prediction,” IEEE Transactions on Intelligent Vehicles, vol. 3, no. 2, pp. 129–140, 2018.
- A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer, “Imitating driver behavior with generative adversarial networks,” in 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 204–211.
- N. Deo and M. M. Trivedi, “Convolutional social pooling for vehicle trajectory prediction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1468–1476.