Amanda Shao
Jing Shao is currently a research scientist at Shanghai Artificial Intelligence Laboratory, focusing on multi-modal foundation models and agents, with special interest in understanding properties of current models beyond their accuracy, such as explainability, robustness, safety, and generalization, towards the reliability of AI systems. She is also an Adjunct Ph.D. supervisor at Shanghai Jiao Tong University, and a Co-PI of S-Lab at Nanyang Technological University.
She received her Ph.D. (2016) in Electronic Engineering from The Chinese University of Hong Kong (CUHK), supervised by Prof. Xiaogang Wang, and worked closely with Prof. Chen Change Loy and the Multimedia Lab (MMLab) led by Prof. Xiaoou Tang. She has published 40+ peer-reviewed articles (including 20 first/co-first/corresponding author papers) in top-tier conferences and journals such as TPAMI, IJCV, ICML, ICLR, NeurIPS, and CVPR, with 7300+ citations on Google Scholar. She serves as a reviewer for IJCV, T-PAMI, T-CSVT, T-MM, T-ITS, and CVIU, and has reviewed for CVPR, ICCV, ECCV, NeurIPS, ICLR, AAAI, IJCAI, and ACM MM.
To Prospective Students: We are actively hiring full-time researchers/interns to work together on safety/robustness/explainability of generative models and agents. I am also looking for talented students pursuing a Master's or Ph.D. degree. Please drop me an email with your resume if you are interested.
News
Jul 27, 2024 | ICML TiFA Workshop
Jul 26, 2024 | ICML MFM-EAI Workshop
Jul 06, 2024 | WAIC (World AI Conference & High-Level Meeting on Global AI Governance) 2024
Selected Publications
- SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model. arXiv, 2024
- CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion. ACL, 2024
- Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models. ACL, 2024
- SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL, 2024