About Me
I am currently an assistant professor at NLPR, Institute of Automation, Chinese Academy of Sciences. I received my Ph.D. from this lab in June 2024, supervised by Prof. Zhaoxiang Zhang, and my bachelor's degree in automation from Xi'an Jiaotong University (XJTU) in 2019. From May 2020 to April 2023, I was a research intern at TuSimple, developing perception algorithms for autonomous trucks under the supervision of Dr. Naiyan Wang and Dr. Feng Wang.
My research interests center on perception, decision-making, and simulation algorithms for autonomous driving. During my Ph.D., my representative research was a series of algorithms for LiDAR-based fully sparse detection, which enable super-long-range perception and enhance driving safety. Currently, I mainly focus on driving simulation and work closely with Prof. Hongsheng Li @ MMLab.
Selected Work
*: Equal Contribution; †: Corresponding Author
Lue Fan*, Hao Zhang*, Qitai Wang, Hongsheng Li†, Zhaoxiang Zhang†. CVPR 2025   Project Page
After FreeVS and FlexDrive, we propose FreeSim, a generation-reconstruction hybrid method for free-viewpoint camera simulation, taking the best of both worlds!
Jingqiu Zhou*, Lue Fan*, Linjiang Huang, Xiaoyu Shi, Si Liu, Zhaoxiang Zhang†, Hongsheng Li†. CVPR 2025
After FreeVS, we propose FlexDrive, a purely geometry-driven method for novel-trajectory camera simulation in driving scenes.
Qitai Wang, Lue Fan, Yuqi Wang, Yuntao Chen†, Zhaoxiang Zhang†. ICLR 2025   Project Page
FreeVS is the first method that supports high-quality generative view synthesis on free driving trajectories, a crucial capability for driving simulators.
Yingyan Li, Lue Fan, Jiawei He, Yuqi Wang, Yuntao Chen, Zhaoxiang Zhang, Tieniu Tan. ICLR 2025   Code
We propose a practical LAtent World Model (LAW) that enhances end-to-end autonomous driving in a self-supervised manner without resorting to time-consuming video generation, achieving strong, real-time performance on both open-loop and closed-loop benchmarks.
Lue Fan*, Yuxue Yang*, Minxing Li, Hongsheng Li†, Zhaoxiang Zhang†. Code (fully released!), Project Page
We propose a contribution-based trimming strategy that refines messy Gaussians to be geometrically accurate. This strategy has the potential to be integrated with any Gaussian representation!
Guowen Zhang, Lue Fan, Chenhang He, Zhen Lei, Zhaoxiang Zhang, Lei Zhang. NeurIPS 2024   Code
The first voxel-based state space model for LiDAR-based 3D object detection in driving scenes, achieving state-of-the-art performance on both the Waymo and nuScenes benchmarks.
Yuqi Wang*, Jiawei He*, Lue Fan*, Hongxin Li*, Yuntao Chen†, Zhaoxiang Zhang†. CVPR 2024   Code / Project Page
Drive-WM is the first multi-view world model for planning in autonomous driving.
Yuxue Yang, Lue Fan†, Zhaoxiang Zhang†. ICLR 2024   Code
MixSup achieves strong performance with a few accurate box labels and cheap cluster labels. PointSAM is developed to generate the cluster labels, and it achieves on-par performance with SoTA 3D segmentation methods on nuScenes without any 3D annotations!
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. Code
FSDv2 is an improved version of FSD that removes FSD's handcrafted heuristics. It achieves strong performance on the Waymo, nuScenes, and Argoverse 2 datasets, and is fully open-sourced!
Lue Fan, Yuxue Yang, Yiming Mao, Feng Wang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang. ICCV 2023 (Oral)   Code
CTRL is the first open-sourced LiDAR-based 3D object autolabeling system, surpassing the performance of human annotators!
Yingyan Li, Lue Fan, Yang Liu, Zehao Huang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang, Tieniu Tan. Code
FSF explores multi-modal 3D object detection with a fully sparse architecture, seamlessly integrating 2D and 3D instance segmentation in a unified framework.
Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. TPAMI 2023   Code
FSD++ extends FSD to the multi-frame setting. In addition to spatial sparsity, FSD++ emphasizes temporal sparsity.
Lue Fan, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. NeurIPS 2022   Code
FSD first proposes the concept of LiDAR-based "fully sparse detection", achieving state-of-the-art performance on both conventional benchmarks and the long-range (>200 m) LiDAR detection benchmark.
Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. CVPR 2022   Code
SST emphasizes the small object sizes and the sparsity of point clouds in outdoor scenes. Its sparse transformers inspire new backbones for outdoor LiDAR-based detection.
Lue Fan*, Xuan Xiong*, Feng Wang, Naiyan Wang, Zhaoxiang Zhang. ICCV 2021   Code
RangeDet greatly narrows the performance gap between range-view-based and voxel/BEV-based LiDAR detection.
Contact
- Email: lue.fan@ia.ac.cn