About me

I am currently a MEXT-funded Ph.D. candidate at The University of Tokyo, working with Prof. Toshihiko Yamasaki. Prior to that, I received a Master’s degree from Peking University in 2021, advised by Prof. Chao Zhang. My research focuses on efficient representation learning for vision-language and pure-vision foundation models, with two key aspects: 1) designing compute-efficient foundation models, e.g., efficient Transformers; and 2) devising label-efficient learning algorithms that leverage self-supervision. Throughout my academic journey, I have been fortunate to collaborate with esteemed researchers including Prof. Hongyang Zhang, Dr. Yuhui Yuan, Dr. Shan You, and Dr. Zichuan Liu.

I am actively seeking a full-time research scientist position. Please find my CV here, and do not hesitate to contact me if there is a good fit.

News

  • 2023/07: One paper (SimMatchv2) was accepted at ICCV 2023.
  • 2023/05: I started an internship at Adobe, San Jose.
  • 2023/03: Gave a talk on SAT (slides) at Waves in AI seminar.
  • 2022/10: One paper (SAT) was accepted by T-PAMI.
  • 2022/09: One paper (GreenMIM) was accepted at NeurIPS 2022.
  • 2022/03: Two papers (LEWEL and SimMatch) were accepted at CVPR 2022.