Research

Pushing the frontiers of AI for equitable innovation through autonomous research groups.


Research Areas

Knowledge Representation & Reasoning
Computer Vision
Natural Language Processing
Federated & Privacy-Preserving Learning
Multimodal AI
Speech & Audio Processing
Learning Algorithms & Optimization
Scientific Progress

Expand human knowledge through high-integrity research that pushes the frontiers of AI.

Sectoral Impact

Address critical real-world challenges where AI can create valuable impact.

Global Equity

Ensure benefits of AI reach everyone and its progress is shaped by all.

Our Impact

0

publications in top-tier venues

$0.0M+

research funding across 9 international grants

0%

international collaboration rate

0+

open-source repositories impacting the AI community

Featured Projects

Featured Publications

2023
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Karim Lekadir, Bishesh Khanal, Martijn Starmans
arXiv preprint arXiv:2309.12325, 2023
View PDF
2025
Transforming healthcare through just, equitable and quality driven artificial intelligence solutions in South Asia
Sushmita Adhikari, Iftikhar Ahmed, Deepak Bajracharya, Bishesh Khanal, Chandrasegarar Solomon, Kapila Jayaratne, Khondaker Abdullah Al Mamum, Muhammad Shamim Hayder Talukder, Sunila Shakya, Suresh Manandhar, Zahid Ali Memon, Moinul Haque Chowdhury, Ihtesham ul Islam, Noor Sabah Rakhshani & M. Imran Khan
npj Digital Medicine (nature)
Peer Reviewed Journal article · TOGAI (Transforming Global Health with AI)
View PDF
2025
Assistive Artificial Intelligence in Epilepsy and Its Impact on Epilepsy Care in Low- and Middle-Income Countries
Nabin Koirala, Shishir Raj Adhikari, Mukesh Adhikari, Taruna Yadav, Abdul Rauf Anwar, Dumitru Ciolac, Bibhusan Shrestha, Ishan Adhikari, Bishesh Khanal, Muthuraman Muthuraman
Brain Sciences (MDPI) 2025
Peer Reviewed Journal article · TOGAI (Transforming Global Health with AI)
View PDF
2025
Multimodal Federated Learning With Missing Modalities through Feature Imputation Network
Pranav Poudel, Aavash Chhetri, Prashnna Gyawali, Georgios Leontidis, Binod Bhattarai
Medical Image Understanding and Analysis (MIUA) 2025
Peer Reviewed Conference article · BBMMLL (B Bhattarai Multi-Modal Learning Lab)
View PDF
2024
NLPineers@NLU of Devanagari Script Languages 2025: Hate speech detection using ensembling of BERT-based models
Anmol Guragain, Nadika Poudel, Rajesh Piryani, Bishesh Khanal
CHiPSAL: Challenges in Processing South Asian Languages. COLING 2025
Peer Reviewed Workshop article · TOGAI (Transforming Global Health with AI)
View PDF · Source Code

News & Updates

Out-of-Distribution Detection
April 20, 2026
Author: Anju Chhetri

In a now-famous study, expert radiologists were asked to scan CT images of lungs for nodules. Hidden in one of those scans was something no one expected: a gorilla, 48 times the size of the average nodule. Interestingly, more than half the radiologists never noticed it. Their attention was so finely tuned to detecting tumors that they overlooked an unexpected and obvious anomaly. This phenomenon is known as inattentional blindness, where focused expertise can paradoxically limit perception [1].

While this finding is surprising, its implications extend far beyond human cognition. It offers a powerful analogy for a critical challenge in machine learning. Consider a model trained to detect malignant cancer cells. It performs well when the input data resembles what it has seen during training. But what happens when it encounters a completely new disease, something outside its learned distribution? This scenario involves out-of-distribution data, whose statistical properties differ from those of the training set.

In such cases, the model does not recognize its own uncertainty. Instead, it forcefully maps the unfamiliar input to one of its known categories. The result can be a confident but incorrect prediction, potentially leading to dangerous misdiagnoses. This limitation arises because most machine learning systems operate under what is called a closed-world assumption: they assume that every input belongs to one of the predefined classes [2].

This challenge is not limited to healthcare. In domains like self-driving cars, encountering unexpected objects or rare environmental conditions can lead to similarly flawed decisions. A plastic bag drifting across the road or an unusual vehicle shape may not fit neatly into the model's learned categories, yet the system must still respond.

To address this, researchers focus on out-of-distribution detection.
The goal is to identify when an input does not belong to the training distribution and flag it instead of forcing a classification. But this raises a key question: if a model is trained only to assign inputs to known classes, how can it recognize something unfamiliar?

One approach is to look beyond final predictions. Instead of relying solely on class labels, we can analyze intermediate signals within the model, such as internal representations, logits, and probability scores. Patterns in these signals can help distinguish familiar inputs from anomalous ones, sometimes using a combination of multiple indicators [3, 4, 5].

But OOD detection alone isn't enough. As models grow more complex, a deeper question emerges: how do we understand and trust the decisions they make? This demands collaboration beyond machine learning robustness, drawing in interpretability research and explainable AI to make model behavior transparent, not just accurate.

[1] Drew, Trafton, Melissa L.-H. Võ, and Jeremy M. Wolfe. "The invisible gorilla strikes again: sustained inattentional blindness in expert observers." Psychological Science 24.9 (2013): 1848–1853.
[2] Hou, Mun. "Detecting Out-of-Distribution Samples with kNN." Mun Hou's Blog, 2022, blog.munhou.com/2022/12/01/Detecting-Out-of-Distribution-Samples-with-Knn/.
[3] Wang, Haoqi, et al. "ViM: Out-of-distribution with virtual-logit matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[4] Hendrycks, Dan, and Kevin Gimpel. "A baseline for detecting misclassified and out-of-distribution examples in neural networks." arXiv preprint arXiv:1610.02136 (2016).
[5] Lee, Kimin, et al. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." Advances in Neural Information Processing Systems 31 (2018).
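To make the idea of using probability scores concrete, here is a minimal NumPy sketch of the maximum softmax probability baseline proposed in [4]: score each input by its largest softmax probability and flag low-confidence inputs as potentially out-of-distribution. The threshold of 0.7 is an illustrative assumption, not a tuned value; in practice it would be calibrated on held-out in-distribution data.

```python
import numpy as np

def softmax(logits):
    # Numerically stabilized softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher means "looks in-distribution".
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.7):
    # Flag inputs whose top-class confidence falls below the threshold.
    return msp_score(logits) < threshold

# A sharply peaked prediction vs. a near-uniform, uncertain one.
logits = np.array([
    [8.0, 1.0, 0.5],   # confident: treated as in-distribution
    [1.1, 1.0, 0.9],   # flat: flagged as possibly out-of-distribution
])
print(flag_ood(logits))  # prints [False  True]
```

Note that this only repurposes signals the classifier already produces; the more elaborate detectors in [3] and [5] combine logits with internal feature statistics for stronger separation.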
3-Day AI-driven System for Post-Disaster Agricultural Damage Assessment Workshop Concludes
November 30, 2025
Date: 19–21 November 2025
Venue: Alice Reception, Lalitpur (Day 1) & Alliance Française, Lalitpur (Days 2 and 3)

NAAMII, with support from FAO Nepal, concluded a three-day workshop designed to bring key stakeholders together around a unified approach to post-disaster agricultural damage assessment. The sessions were part of the ongoing A2 Innovation Lab project with FAO Nepal, which aims to strengthen Nepal's capacity to use drones, geospatial systems, and AI for timely and transparent crop loss estimation.

The workshop centered on aligning perspectives from farmers, local governments, technical agencies, and national institutions to validate and refine standard operating procedures. Alongside these discussions, it also trained students and young practitioners in UAV operations, GIS workflows, and AI-based image analysis.

Day 1

The first day focused on establishing a shared technical foundation. Bal Kumar Lamsal, Drone and GIS Data Officer at NAXA, provided an overview of UAV types, autonomous systems, and mapping software including Picterra and PIX4Dreact. Er. Moti Ram Itani, Aeronautical Engineer and Founder of Pushpak Udaan Aviation Academy, shared his expertise on aircraft design, drone policies, and licensing regulations in Nepal. Practical demonstrations by Sishir Lamsal helped participants understand flight planning, geospatial data collection, and software applications, providing foundational knowledge of drones, mapping, and AI integration and setting the stage for practical problem-solving.

Day 2

The second day brought stakeholders into problem-driven discussions. Suyog Chalise from Impact 477 guided students through design thinking and problem identification. A representative from Amit Drone Consultancy demonstrated LiDAR applications for analyzing Nepal's land and resources. Tashi Bista and Sajal Pradhan from Salt, a wildlife media company, shared their experiences in wildlife photography and videography, including collaborations with the BBC, and highlighted challenges in acquiring filming rights. The day concluded with Sishir Lamsal and Lalit BC demonstrating AI/ML model training for image analysis, helping students understand the intersection of drones and AI.

Day 3

The final day centered on feedback, real-world constraints, and pathways to scale. Arun GC from FAO Nepal delivered opening remarks and feedback, followed by Ritesh Jha from the Department of Hydrology, who discussed challenges in precision agriculture and flood damage assessment, emphasizing how drone-based AI/ML solutions can improve accuracy and decision-making. Participants explored scaling prototypes, integrating with existing systems, and using baseline data for predictive analysis, gaining practical insight into turning technology into tangible impact. The sessions concluded with discussions of how these projects could be further developed to benefit farmers, local governments, and disaster management efforts across Nepal.

The workshop strengthened coordination among institutions, technical experts, and young practitioners, creating momentum for a national system that links drone data, geospatial analysis, and AI into a unified, reliable process for agricultural damage assessment across Nepal.
Dr. Bipendra Basnyat on AI and the Future of Farming in Nepal
November 1, 2025
Dr. Bipendra Basnyat, Adjunct Research Scientist leading NAAMII's Agri AI (A²) Innovation Lab, recently shared his insights on AI-driven agriculture in a podcast hosted by Sushant Pradhan. With over two decades of experience in AI and machine learning, Dr. Basnyat discussed the challenges and opportunities of integrating advanced technologies into Nepali farming, sustainable agriculture, and climate-smart practices.

NAAMII's A² Innovation Lab combines cutting-edge technology with traditional farming knowledge to develop resilient, scalable, and sustainable agricultural systems. Its work spans climate-smart agriculture, regenerative practices, and permaculture, using AI, IoT, satellite imagery, and computer vision to optimize resource management, improve crop resilience, and preserve local knowledge.

Dr. Basnyat's conversation also addressed misconceptions around AI, data security, and the role of technology in empowering farmers. He highlighted practical applications of AI in Nepali agriculture, from precision farming to intelligent systems for real-world use, demonstrating how innovation can bridge the technology gap and strengthen local farming communities.

The A² Innovation Lab continues to develop tools and insights that support farmers, researchers, and communities, while advancing AI for conscious living and sustainable agriculture.