Zhiqi Huang

Ph.D. in computer science

photo @ Yellowstone

Hi, I am an Applied Researcher at Capital One. I completed my Ph.D. under the advisement of Prof. James Allan at the Center for Intelligent Information Retrieval (CIIR), Manning College of Information and Computer Sciences, University of Massachusetts Amherst. My research lies at the intersection of information retrieval, natural language processing, and machine learning (IR/NLP/ML).

I received my master’s in Statistics from the University of Maryland, College Park, and my bachelor’s in Applied Math from Sun Yat-sen University.

news

Apr 04, 2026 I’m excited to co-organize MeLLM 2026: The 1st Workshop on Multilinguality in the Era of Large Language Models, to be held at ACL 2026 in San Diego. We welcome paper submissions on all topics related to multilingual LLMs.
Mar 01, 2026 Our paper, Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question-Answering, has been accepted at ICLR 2026.
Oct 29, 2025 Our paper, Distillation versus Contrastive Learning: How to Train Your Rerankers, has been accepted at IJCNLP-AACL 2025.
Oct 15, 2025 I will be presenting our work, Confidence-Based Response Abstinence: Improving LLM Trustworthiness via Activation-Based Uncertainty Estimation, at the 2nd UncertaiNLP Workshop at EMNLP 2025. See you in Suzhou, China!
Aug 29, 2025 Check out our work on uncertainty quantification, Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question Answering, which will be presented at the Reliable ML Workshop at NeurIPS 2025.

selected publications

  1. Soft Prompt Decoding for Multilingual Dense Retrieval
    Zhiqi Huang, Hansi Zeng, Hamed Zamani, and James Allan
    In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023
  2. Language Concept Erasure for Language-invariant Dense Retrieval
    Zhiqi Huang, Puxuan Yu, Shauli Ravfogel, and James Allan
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  3. Confidence-Based Response Abstinence: Improving LLM Trustworthiness via Activation-Based Uncertainty Estimation
    Zhiqi Huang, Vivek Datla, Chenyang Zhu, Alfy Samuel, Daben Liu, Anoop Kumar, and Ritesh Soni
    In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), Nov 2025