Overview
MMLL investigates fundamental challenges in developing trustworthy machine learning models, addressing three core problems:
- Model robustness under distributional shifts and data imperfections
- Privacy-preserving learning in distributed settings
- Effective multimodal representation learning across heterogeneous data types
Our priority areas include computer vision, medical image analysis, energy, agriculture, and low-resource language processing.
Technical Focus
- Model Robustness: Out-of-distribution (OOD) detection and robust learning under distributional shifts and data imperfections (a minimal OOD-scoring sketch follows this list).
- Privacy-preserving Learning: Federated learning in distributed settings (a minimal federated-averaging sketch also follows this list).
- Multimodal Learning: Learning across heterogeneous inputs including images, text, and other forms of structured data.
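As a concrete illustration of what OOD detection involves, the sketch below implements the standard maximum softmax probability (MSP) baseline: inputs whose highest predicted class probability is low are flagged as likely out-of-distribution. This is a generic textbook example, not MMLL's method; the function name `msp_ood_score` and the 0.5 threshold are hypothetical choices for illustration.

```python
import numpy as np

def msp_ood_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) baseline: a lower max class
    probability suggests the input is more likely out-of-distribution."""
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # OOD score = 1 - max class probability (higher => more likely OOD).
    return 1.0 - probs.max(axis=1)

# Toy usage with made-up logits for two inputs:
# the first is confidently classified, the second is ambiguous.
logits = np.array([[6.0, 0.5, -1.0],
                   [0.4, 0.3, 0.2]])
scores = msp_ood_score(logits)
flagged = scores > 0.5  # hypothetical threshold; tuned on validation data in practice
print(scores, flagged)
```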
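The federated learning setting can likewise be illustrated with federated averaging (FedAvg), the standard server-side aggregation rule in which clients train locally and only model parameters, weighted by local dataset size, are combined on the server. This is a minimal sketch of that generic baseline, not a description of MMLL's privacy-preserving methods; the `fedavg` helper and the client dataset sizes are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client parameter vectors on the server,
    weighting each client by its local dataset size."""
    total = float(sum(client_sizes))
    # Weighted sum of each client's parameters; raw data never leaves the clients.
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Toy aggregation round with three simulated clients, each holding a
# 4-parameter model trained locally (random values stand in for updates).
clients = [np.random.randn(4) for _ in range(3)]
sizes = [120, 80, 200]  # hypothetical local dataset sizes
global_model = fedavg(clients, sizes)
print(global_model)
```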
Impact Domains
While our methods are designed to be domain-agnostic, we frequently validate them on medical imaging datasets, including endoscopic images, histopathology slides, and chest radiographs.
Team
Research Scientist
Research Assistants
Research Interns: Aavash Chhetri, Bibek Niroula, Niyoj Oli, Pratik Shrestha