| Date | Venue | Title | Tags |
| --- | --- | --- | --- |
| 2024.08.13 | 23’ EMNLP | Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning | Language, Interpretability |
| 2024.08.13 | 24’ | Not All Layers of LLMs Are Necessary During Inference | Language, Decoding, Interpretability |
| 2024.07.30 | 24’ CVPR | OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation | Multimodal, Hallucination, Interpretability |
| 2024.07.30 | 24’ CVPR | HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models | Multimodal, Benchmark, Hallucination |
| 2024.07.26 | 24’ ICML-WS | Transformers need glasses! Information over-squashing in language tasks | Language, Hallucination, Interpretability |