2024.06.28 [22’ ECCV] BioViL: Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing | Tags: Medical, Contrastive Learning
2024.06.28 [23’] BiomedCLIP: A Multimodal Biomedical Foundation Model Pretrained from Fifteen Million Scientific Image-text Pairs | Tags: Medical, Contrastive Learning
2024.06.27 [21’] PubMedCLIP: Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain? | Tags: Medical, Contrastive Learning, Dataset
2024.06.27 [22’ EMNLP] MedCLIP: Contrastive Learning from Unpaired Medical Images and Text | Tags: Medical, Contrastive Learning
2024.06.27 [Summary] List of Pre-trained CLIP Models for the Medical Domain | Tags: Medical, Contrastive Learning, Summary