Xilie Xu


[Google Scholar] [DBLP] [Patent]

Research Statement

I have a strong interest in both the theory and practical applications of trustworthy machine learning, with a particular focus on adversarial robustness. Overall, my research falls into the following three categories:
(1) Towards developing and fine-tuning trustworthy foundation models: [NeurIPS’24], [Preprint’24], [ICLR’24 Blogpost], [ICLR’24a], [NeurIPS’23a, Spotlight], [NeurIPS’23b].
(2) Towards evaluating and enhancing adversarial robustness of AI-powered applications (e.g., statistical tools, LLMs, diffusion models): [ICLR’24b], [AAAI’24 Workshop], [ICML’22].
(3) Towards enhancing supervised adversarial training: [TMLR’22], [TDSC’22], [ICML’20].

Preprints & Workshop Papers & Blogposts

(* denotes equal contribution)

  1. Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models.
    Zihao Luo*, Xilie Xu*, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang.
    arXiv preprint, 2024.
    [PDF] [Code]
  2. Towards Robust Foundation Models: Adversarial Contrastive Learning.
    Jingfeng Zhang, Xilie Xu.
    The Third Blogpost Track at ICLR 2024 (BT@ICLR 2024), Vienna, Austria, 2024.
    [Blogpost]
  3. AdvGLUE-GPT: Towards Effective and Efficient Robustness Evaluation of Large Language Models.
    Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli.
    AAAI Workshop on Responsible Language Models, Vancouver, Canada, 2024.
    [PDF] [Code]

Publications

  1. Perplexity-aware Correction for Robust Alignment with Noisy Preferences.
    Keyi Kong*, Xilie Xu*, Di Wang, Jingfeng Zhang, Mohan Kankanhalli.
    The 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver, Canada, 2024.
  2. An LLM can Fool Itself: A Prompt-Based Adversarial Attack.
    Xilie Xu*, Keyi Kong*, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli.
    The 12th International Conference on Learning Representations (ICLR 2024), Vienna, Austria, 2024.
    [PDF] [Code] [Poster] [Project Page]
  3. AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework.
    Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli.
    The 12th International Conference on Learning Representations (ICLR 2024), Vienna, Austria, 2024.
    [PDF] [Code] [Poster]
  4. Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection.
    Xilie Xu*, Jingfeng Zhang*, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli.
    The 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, US, 2023.
    [PDF] [Code] [Poster] [Zhihu (Chinese Quora)] [Spotlight, top 3.06%]
  5. Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization.
    Xilie Xu*, Jingfeng Zhang*, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli.
    The 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, US, 2023.
    [PDF] [Code] [Poster] [Project Page]
  6. Adversarial Attack and Defense for Non-Parametric Two-Sample Tests.
    Xilie Xu*, Jingfeng Zhang*, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli.
    The 39th International Conference on Machine Learning (ICML 2022), Baltimore, US, 2022.
    [PDF] [Code]
  7. NoiLin: Improving Adversarial Training and Correcting Stereotype of Noisy Labels.
    Jingfeng Zhang*, Xilie Xu*, Bo Han, Tongliang Liu, Lizhen Cui, Gang Niu, Masashi Sugiyama.
    Transactions on Machine Learning Research (TMLR 2022).
    [PDF] [Code]
  8. Decision Boundary-aware Data Augmentation for Adversarial Training.
    Chen Chen*, Jingfeng Zhang*, Xilie Xu, Lingjuan Lyu, Chaochao Chen, Tianlei Hu, Gang Chen.
    IEEE Transactions on Dependable and Secure Computing (TDSC 2022).
    [PDF]
  9. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
    Jingfeng Zhang*, Xilie Xu*, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli.
    The 37th International Conference on Machine Learning (ICML 2020), online, 2020.
    [PDF] [Code]