About Me
I received my Ph.D. from NLPR, Institute of Automation, Chinese Academy of Sciences in June 2024, supervised by Prof. Zhaoxiang Zhang. From May 2020 to April 2023, I was an intern at TuSimple developing perception algorithms for autonomous trucks, supervised by Dr. Naiyan Wang and Dr. Feng Wang. I received my bachelor's degree in automation from Xi'an Jiaotong University (XJTU) in 2019.
My research interests focus on perception, decision, and generation algorithms for autonomous driving. My representative research is a series of algorithms for LiDAR-based fully sparse detection, which supports super-long-range perception and enhances driving safety.
Selected Work
*: Equal Contribution; †: Corresponding Author
Lue Fan*, Yuxue Yang*, Minxing Li, Hongsheng Li†, Zhaoxiang Zhang†. Code (fully released!), Project Page We propose a contribution-based trimming strategy that refines messy Gaussians into geometrically accurate ones. This strategy has the potential to be integrated with any Gaussian-based representation!
Yingyan Li, Lue Fan, Jiawei He, Yuqi Wang, Yuntao Chen, Zhaoxiang Zhang, Tieniu Tan. Code (stay tuned) We propose a practical LAtent World model (LAW) that enhances end-to-end autonomous driving in a self-supervised manner, without relying on time-consuming video generation, achieving strong real-time performance on both open-loop and closed-loop benchmarks.
Guowen Zhang, Lue Fan, Chenhang He, Zhen Lei, Zhaoxiang Zhang, Lei Zhang. Code (coming soon) The first voxel-based state space model for LiDAR-based 3D object detection in driving scenes, achieving state-of-the-art performance on both the Waymo and nuScenes benchmarks.
Yuqi Wang*, Jiawei He*, Lue Fan*, Hongxin Li*, Yuntao Chen†, Zhaoxiang Zhang†. CVPR, 2024   Code / Project Page Drive-WM is the first multi-view world model for planning in autonomous driving. |
Yuxue Yang, Lue Fan†, Zhaoxiang Zhang†. ICLR, 2024   Code MixSup achieves strong performance with a few accurate box labels and cheap cluster labels. PointSAM is developed to generate the cluster labels. It achieves performance on par with SoTA 3D segmentation methods on nuScenes without any 3D annotations!
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. Code FSDv2 is an improved version of FSD that removes FSD's handcrafted heuristics. FSDv2 achieves strong performance on the Waymo, nuScenes, and Argoverse 2 datasets and is fully open-sourced!
Lue Fan, Yuxue Yang, Yiming Mao, Feng Wang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang. ICCV, Oral, 2023   Code CTRL is the first open-sourced LiDAR-based 3D object auto-labeling system, surpassing the performance of human annotators!
Yingyan Li, Lue Fan, Yang Liu, Zehao Huang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang, Tieniu Tan. Code FSF explores multi-modal 3D object detection with a fully sparse architecture, seamlessly integrating 2D and 3D instance segmentation in a unified framework.
Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. TPAMI, 2023   Code FSD++ extends FSD to the multi-frame setting. In addition to spatial sparsity, FSD++ emphasizes temporal sparsity.
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. NeurIPS, 2022   Code FSD first proposes the concept of LiDAR-based "fully sparse detection", achieving state-of-the-art performance on both conventional benchmarks and a long-range (>200m) LiDAR detection benchmark.
Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. CVPR, 2022   Code SST emphasizes the small object sizes and the sparsity of outdoor point clouds. Its sparse transformer architecture inspires new backbones for outdoor LiDAR-based detection.
Lue Fan*, Xuan Xiong*, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. ICCV, 2021   Code RangeDet greatly narrows the performance gap between range-view-based and voxel/BEV-based LiDAR detection.
Contact
- Email: lue.fan@ia.ac.cn