Amanda Shao

Shanghai Artificial Intelligence Laboratory


Jing Shao is a research scientist at Shanghai Artificial Intelligence Laboratory, focusing on multi-modal foundation models and agents, with a special interest in understanding properties of current models beyond their accuracy, such as explainability, robustness, safety, and generalization, towards the reliability of AI systems. She is also an Adjunct Ph.D. supervisor at Shanghai Jiao Tong University and a Co-PI of S-Lab at Nanyang Technological University.

She received her Ph.D. (2016) in Electronic Engineering from The Chinese University of Hong Kong (CUHK), supervised by Prof. Xiaogang Wang, and worked closely with Prof. Chen Change Loy and the Multimedia Lab (MMLab) led by Prof. Xiaoou Tang. She has published 40+ peer-reviewed articles (including 20 first/co-first/corresponding-author papers) in top-tier conferences and journals such as TPAMI, IJCV, ICML, ICLR, NeurIPS, and CVPR, with 7300+ citations on Google Scholar. She serves as a reviewer for IJCV, T-PAMI, T-CSVT, T-MM, T-ITS, and CVIU, and has reviewed for CVPR, ICCV, ECCV, NeurIPS, ICLR, AAAI, IJCAI, and ACM MM.


To Prospective Students: We are actively hiring full-time researchers and interns to work on the safety, robustness, and explainability of generative models and agents. I am also looking for talented students targeting a Master's or Ph.D. degree. Please drop me an email with your resume if you are interested.


News

Selected Publications

  1. SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model
    Yongting Zhang, Lu Chen, Guodong Zheng, and 10 more authors
    arXiv, 2024
  2. Assessment of Multimodal Large Language Models in Alignment with Human Values
    Zhelun Shi, Zhipin Wang, Hongxing Fan, and 7 more authors
    arXiv, 2024
  3. CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
    Qibing Ren, Chang Gao, Jing Shao, and 4 more authors
    ACL, 2024
  4. Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models
    Chen Qian, Jie Zhang, Wei Yao, and 5 more authors
    ACL, 2024
  5. SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models
    Lijun Li, Bowen Dong, Ruohui Wang, and 5 more authors
    ACL, 2024