Wednesday, March 25, 2026

How to create a “humble” AI

MIT Scientists Advocate for Humble AI in Healthcare

An international assembly of researchers, spearheaded by scientists from the Massachusetts Institute of Technology (MIT), has raised concerns about the potential pitfalls of overreliance on artificial intelligence (AI) in healthcare. They argue that current AI models in medicine may inadvertently guide doctors towards erroneous diagnoses because the models are overconfident in their decisions. The team has proposed a solution: developing more ‘modest’ AI systems. Such systems could recognize their own uncertainty and encourage users to seek additional information when a diagnosis is ambiguous. This move could transform AI from an oracle into a coach, enhancing our ability to retrieve information and see connections, according to Leo Anthony Celi, senior research scientist at MIT’s Institute for Medical Engineering and Science. [source]

Designing AI Systems with Curiosity and Humility

Celi and his colleagues have devised a framework aimed at helping AI developers design systems that exhibit traits of curiosity and humility. This approach could foster a partnership between doctors and AI systems, preventing AI from exerting excessive influence over medical decisions. The study, published in BMJ Health & Care Informatics, suggests that AI systems should be programmed to communicate human values, rather than presenting themselves as infallible authorities. This shift in perspective could reduce errors in the medical field. [source]

Introducing the Epistemic Virtue Score

To facilitate this shift, the researchers have designed a framework with several computing modules that can be integrated into existing AI systems. One of these modules, the Epistemic Virtue Score, requires an AI model to evaluate its own confidence in making diagnostic predictions. By ensuring that the system’s confidence is tempered by the inherent uncertainty and complexity of each clinical scenario, the model can adapt its response to the situation. If the system determines that its confidence surpasses the available evidence, it can pause, flag the mismatch, request specific tests, or recommend specialist advice. The ultimate goal is to create an AI system that not only provides answers, but also signals when those answers should be treated with caution. [source]
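The paper does not publish an implementation, but the confidence-versus-evidence check described above can be sketched in a few lines. Everything here is hypothetical: the function name, the numeric scores, and the fixed margin are illustrative stand-ins, not the researchers' actual module.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    diagnosis: str
    confidence: float         # model's self-reported confidence, 0..1
    evidence_strength: float  # how well the available data supports it, 0..1

def epistemic_virtue_check(a: Assessment, margin: float = 0.15) -> dict:
    """Flag cases where the model's confidence outruns its evidence.

    Hypothetical logic: if confidence exceeds evidence strength by more
    than `margin`, the system pauses and asks for more information
    instead of presenting the diagnosis as settled.
    """
    if a.confidence - a.evidence_strength > margin:
        return {
            "action": "defer",
            "message": (
                f"Confidence ({a.confidence:.2f}) exceeds evidence "
                f"({a.evidence_strength:.2f}); request further tests or "
                f"specialist review before acting on '{a.diagnosis}'."
            ),
        }
    return {"action": "report", "message": f"Proceed with '{a.diagnosis}'."}

# An ambiguous case: high confidence but weak supporting evidence.
print(epistemic_virtue_check(Assessment("pneumonia", 0.92, 0.55))["action"])  # defer
# A well-supported case: confidence and evidence roughly agree.
print(epistemic_virtue_check(Assessment("pneumonia", 0.80, 0.78))["action"])  # report
```

The key design point, mirroring the article, is that the module never blocks a diagnosis outright; it only signals when an answer should be treated with caution.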

Working Towards a More Inclusive AI

This study is part of a larger initiative by Celi and his team to develop AI systems that are built by and for those who will be most impacted by these tools. The researchers acknowledge the potential biases found in many AI models, which are often trained on publicly available data from the US. This can lead to a skewed approach to thinking about medical issues and the exclusion of other viewpoints. By bringing in additional perspectives, the team aims to overcome these potential biases, with each member of the global consortium bringing a unique perspective to a broader, collective understanding. [source]

Another challenge with existing AI systems for diagnostics is that they are typically trained on electronic health records that were not originally intended for this purpose. This means the data lacks much of the context that would be useful in making diagnoses and treatment recommendations. Furthermore, many patients, such as those living in rural areas, are often excluded from these datasets due to lack of access. At data workshops hosted by MIT Critical Data, diverse groups work together to develop new AI systems, ensuring that they don’t inadvertently encode existing structural inequalities into their models. [source]

The research was funded by the Boston-Korea Innovative Research Project through the Korea Health Industry Development Institute. [source]
