Zhiqi Huang

Ph.D. in computer science

photo @ Yellowstone

Hi, I am an Applied Researcher at Capital One. I completed my Ph.D. under the advisement of Prof. James Allan at the Center for Intelligent Information Retrieval (CIIR), Manning College of Information and Computer Sciences, University of Massachusetts Amherst. My research lies at the intersection of information retrieval, natural language processing, and machine learning (IR/NLP/ML).

I received my master’s in Statistics from the University of Maryland, College Park, and my bachelor’s in Applied Math from Sun Yat-sen University.

news

Oct 29, 2025 Our paper, Distillation versus Contrastive Learning: How to Train Your Rerankers, has been accepted at IJCNLP-AACL 2025.
Oct 15, 2025 I will be presenting our work, Confidence-Based Response Abstinence: Improving LLM Trustworthiness via Activation-Based Uncertainty Estimation, at the 2nd UncertaiNLP Workshop at EMNLP 2025. See you in Suzhou, China!
Aug 29, 2025 Check out our work on uncertainty quantification, Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question Answering, which will be presented at the Reliable ML Workshop at NeurIPS 2025.
Jul 10, 2025 Looking for a tool for uncertainty quantification in LLMs? Check out TruthTorchLM (code) — to be presented at EMNLP 2025.
Feb 20, 2025 Check out our survey, A Survey of Model Architectures in Information Retrieval, which traces the development of information retrieval (IR) model architectures and discusses the challenges they face in the era of large language models (LLMs).

selected publications

  1. Soft Prompt Decoding for Multilingual Dense Retrieval
    Zhiqi Huang, Hansi Zeng, Hamed Zamani, and James Allan
    In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023
  2. Language Concept Erasure for Language-invariant Dense Retrieval
    Zhiqi Huang, Puxuan Yu, Shauli Ravfogel, and James Allan
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  3. Confidence-Based Response Abstinence: Improving LLM Trustworthiness via Activation-Based Uncertainty Estimation
    Zhiqi Huang, Vivek Datla, Chenyang Zhu, Alfy Samuel, Daben Liu, Anoop Kumar, and Ritesh Soni
    In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), Nov 2025