About Me

I received my Ph.D. from NLPR, Institute of Automation, Chinese Academy of Sciences in June 2024, supervised by Prof. Zhaoxiang Zhang. From May 2020 to April 2023, I was an intern at TuSimple developing perception algorithms for autonomous trucks, supervised by Dr. Naiyan Wang and Dr. Feng Wang. I received my bachelor's degree from Xi'an Jiaotong University (XJTU) in 2019, majoring in automation.

My research interests focus on perception, decision, and generation algorithms for autonomous driving. My representative work is a series of algorithms for LiDAR-based fully sparse detection, which support super-long-range perception and enhance driving safety.

Selected Work

*: Equal Contribution; †: Corresponding Author

Trim 3D Gaussian Splatting for Accurate Geometry Representation
Lue Fan*, Yuxue Yang*, Minxing Li, Hongsheng Li†, Zhaoxiang Zhang†.
Code (fully released!), Project Page

We propose a contribution-based trimming strategy that refines messy Gaussians into a geometrically accurate representation. This strategy has the potential to be integrated with any Gaussian-based method!
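The core idea of contribution-based trimming can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a per-Gaussian score (e.g., accumulated rendering contribution) has already been computed, and the function name and quantile-based threshold are hypothetical choices for the sketch.

```python
import numpy as np

def trim_gaussians(contributions: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Return a boolean keep-mask: drop Gaussians whose accumulated
    contribution falls below the trim_ratio quantile of all scores."""
    threshold = np.quantile(contributions, trim_ratio)
    return contributions > threshold

# Toy example: 8 Gaussians with hypothetical contribution scores.
scores = np.array([0.9, 0.01, 0.5, 0.002, 0.7, 0.3, 0.05, 0.8])
keep = trim_gaussians(scores, trim_ratio=0.25)
print(f"{keep.sum()} of {len(scores)} Gaussians kept")
```

In practice such a mask would be applied periodically during optimization so that low-contribution Gaussians are pruned before they distort the recovered geometry.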

Enhancing End-to-End Autonomous Driving with Latent World Model
Yingyan Li, Lue Fan, Jiawei He, Yuqi Wang, Yuntao Chen, Zhaoxiang Zhang, Tieniu Tan.
Code (stay tuned)

We propose a practical LAtent World Model (LAW) to enhance end-to-end autonomous driving in a self-supervised manner without relying on time-consuming video generation, achieving strong real-time performance on both open-loop and closed-loop benchmarks.

Voxel Mamba: Group-Free State Space Models for Point Cloud based 3D Object Detection
Guowen Zhang, Lue Fan, Chenhang He, Zhen Lei, Zhaoxiang Zhang, Lei Zhang.
Code (coming soon)

The first voxel-based state space model for LiDAR-based 3D object detection in driving scenes, achieving state-of-the-art performance on both the Waymo and nuScenes benchmarks.

Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving
Yuqi Wang*, Jiawei He*, Lue Fan*, Hongxin Li*, Yuntao Chen†, Zhaoxiang Zhang†.
CVPR, 2024  
Code / Project Page

Drive-WM is the first multi-view world model for planning in autonomous driving.

MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection
Yuxue Yang, Lue Fan†, Zhaoxiang Zhang†.
ICLR, 2024   Code

MixSup achieves strong performance with a few accurate box labels and cheap cluster labels. PointSAM is developed to generate the cluster labels. It achieves performance on par with SoTA 3D segmentation methods on nuScenes without any 3D annotations!

FSD V2: Improving Fully Sparse 3D Object Detection with Virtual Voxels
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang.
Code

FSDv2 is an improved version of FSD that removes the handcrafted heuristics in FSD. FSDv2 achieves strong performance on the Waymo, nuScenes, and Argoverse 2 datasets, and is fully open-sourced!

Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR based 3D Object Detection (CTRL)
Lue Fan, Yuxue Yang, Yiming Mao, Feng Wang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang.
ICCV, Oral, 2023  
Code

CTRL is the first open-sourced LiDAR-based 3D object autolabeling system, surpassing the performance of human annotators!

Fully Sparse Fusion for 3D Object Detection (FSF)
Yingyan Li, Lue Fan, Yang Liu, Zehao Huang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang, Tieniu Tan
Code

FSF explores multi-modal 3D object detection with a fully sparse architecture, seamlessly integrating 2D instance segmentation and 3D instance segmentation in a unified framework.

Super Sparse 3D Object Detection (FSD++)
Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
TPAMI, 2023  
Code

FSD++ extends FSD to the multi-frame setting. In addition to spatial sparsity, FSD++ emphasizes temporal sparsity.

Fully Sparse 3D Object Detection (FSD)
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
NeurIPS, 2022  
Code

FSD first proposes the concept of LiDAR-based "fully sparse detection", achieving state-of-the-art performance on both conventional benchmarks and a long-range (>200 m) LiDAR detection benchmark.
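The motivation for fully sparse detection can be shown with back-of-the-envelope arithmetic: a dense BEV feature map grows quadratically with perception range, while the number of LiDAR points (and hence non-empty voxels) per sweep stays roughly constant. The cell size and voxel counts below are illustrative assumptions, not figures from the paper.

```python
def dense_bev_cells(range_m: float, cell_m: float = 0.2) -> int:
    """Number of cells in a square dense BEV grid covering [-range, range]^2."""
    side = int(2 * range_m / cell_m)
    return side * side

# Doubling the perception range quadruples the dense grid...
for r in (75, 150, 300):
    print(f"range {r:>3} m -> {dense_bev_cells(r):>12,} dense BEV cells")
# ...while a fully sparse detector still touches only the occupied voxels,
# typically on the order of 10^5 per LiDAR sweep regardless of range.
```

This quadratic-vs-constant gap is why dense BEV detectors become impractical beyond roughly 200 m, and why a fully sparse architecture scales to super-long-range perception.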

Embracing Single Stride 3D Object Detector with Sparse Transformer (SST)
Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang
CVPR, 2022  
Code

SST emphasizes the small object sizes and the sparsity of point clouds. Its sparse transformer design inspires new backbones for outdoor LiDAR-based detection.

RangeDet: In Defense of Range View for Lidar-based 3D Object Detection
Lue Fan*, Xuan Xiong*, Feng Wang, Naiyan Wang, Zhaoxiang Zhang.
ICCV, 2021  
Code

RangeDet greatly narrows the performance gap between range-view-based and voxel/BEV-based LiDAR detection.

Contact