Try : Insurtech, Application Development

AgriTech(1)

Augmented Reality(21)

Clean Tech(9)

Customer Journey(17)

Design(45)

Solar Industry(8)

User Experience(68)

Edtech(10)

Events(34)

HR Tech(3)

Interviews(10)

Life@mantra(11)

Logistics(5)

Manufacturing(3)

Strategy(18)

Testing(9)

Android(48)

Backend(32)

Dev Ops(11)

Enterprise Solution(33)

Technology Modernization(9)

Frontend(29)

iOS(43)

Javascript(15)

AI in Insurance(38)

Insurtech(66)

Product Innovation(58)

Solutions(22)

E-health(12)

HealthTech(24)

mHealth(5)

Telehealth Care(4)

Telemedicine(5)

Artificial Intelligence(152)

Bitcoin(8)

Blockchain(19)

Cognitive Computing(8)

Computer Vision(8)

Data Science(23)

FinTech(51)

Banking(7)

Intelligent Automation(27)

Machine Learning(48)

Natural Language Processing(14)

expand Menu Filters

Why Interoperability is Key To Unlocking India’s Digital Healthcare Ecosystem

India’s mammoth hospital landscape accounts for nearly 60% of the overall health ecosystem’s revenues. The COVID-19 Pandemic has escalated digital health-seeking behaviour within the public consciousness and renewed India’s impetus towards healthcare innovation. Traditional modes of healthcare delivery are being phased out, in favour of new and disruptive models. The creation of the National Health Stack (NHS), a digital platform with the aim to create universal health records for all Indian citizens by 2022, will bring both central & state health verticals under a common banner.

Yes, progress is slow, but the addition of new frameworks for Health ID, PHR, telemedicine, and OPD insurance will create macro-level demand beyond local in-patient catchment zones. India’s Healthcare ecosystem is now slowly but surely moving towards a wellness-driven model of care delivery from its historically siloed & episodic intervention approach. This streamlining of healthcare creates a new wealth of opportunities for healthcare enterprises. 

But at the core of this approach lies the biggest challenge yet for Indian healthcare — Interoperability or the lack thereof as of now. The ability of health information systems, applications, and devices to send or receive data is paramount to the success of this new foundational framework.

What does the NDHM blueprint have for us? 

By design, the NDHM envisions the healthcare ecosystem to be a comprehensive set of digital platforms—sets of essential APIs, with a strong foundational architecture framework—that brings together multiple groups of stakeholders enabled by shared interfaces, reusable building blocks, and open standards. 

The Blueprint underlines key principles which include the domain perspective—Universal Health Coverage, Security & Privacy by Design, Education & Empowerment, and Inclusiveness of citizens; and the technology perspective—Building Blocks, Interoperability, a set of Registries as single sources of truth, Open Standards, and Open APIs. 

For ‘Technical interoperability’ considerations, all participating health ecosystem entities will need to adopt the standards defined by the IndEA framework. This will allow the integration of all disparate systems under one roof to securely achieve the exchange of clinical records and patient-data portability across India.

The NDHM Ecosystem will allow healthcare providers to gain better reach to new demand pools in OPD & IPD care. India’s OPD rates are currently only at 4 per day per 1000 population. For the patient, this means more preventive check-ups, lower out-of-pocket expenses, timely access to referrals, follow-up care, and improved health-seeking behavior. 

Centralized ID systems across International Territories 

All of this is being tied to a unique health ID for each citizen (or patient in a healthcare setting). What’s unique about health IDs is that each health ID is linked to ‘care contexts’ which carry information about a person’s health episode and can include health records like out-patient consultation notes, diagnostic reports, discharge summaries, and prescriptions. They are also linked to a health data consent manager to help manage a person’s privacy and consent. 

Centralised ID systems, although they come with great privacy & security-related risks, are essential to expanding coverage and strengthening links to service delivery for underprivileged citizens. India’s Unique Identification (UID) project, commonly known as Aadhaar, has also spurred interest in countries like Russia, Morocco, Algeria, Tunisia, Indonesia, Thailand, Malaysia, Philippines, and Singapore – who are now looking to develop Aadhaar-like identification systems for their territories.

By tying together unique IDs that are carefully secured with our health records, health systems can ‘talk’ with each other through secure data exchanges and facilitate optimization of innovative healthcare delivery models. For instance, a patient with a chronic condition (like diabetes, heart disease, etc.) can choose to send their health data to their practitioner of choice and have medical information, treatment, and advice flow to them, instead of them having to step into a doctor’s office.

Platforms that help add richness to existing Medical Information Systems

Distribution in healthcare will get a new and long-awaited facelift with the influx of health startups and other innovative solutions being allowed to permeate the market. Modern EHRs play a significant role in enhancing these new business models — by pulling information that has been traditionally siloed into new systems built on top of the EHRs, that can draw ‘patient-experience changing’ insights from them. For instance, Epic’s App Orchard and Cerner’s Code, and Allscripts’ Development Program — have opened up their platforms to encourage app development in this space. Data that flows into EHR systems, like Orchard or Allscripts, can then be fed into a clinical decision support system (CDSS) — from where developers can train models and provide inferences. For example, take the case of a patient who has a specific pattern of disease history. With the aid of Machine learning trained models, a CDSS can prompt the clinician with guidance about diagnosis options based on the patient’s previous history.

Let’s look at another example, where traditional vital signs and lab values are used to signal alarms for a patient’s health condition. A patient who has previously been treated for chronic bronchitis may come in because they are experiencing an unknown allergic reaction. In a typical scenario, the clinician has to depend on lab values, extensive tests, and context-less medical history reports — to get to the root of the issue. 

But this can be replaced by continuously monitoring AI tools that detect early patterns in health deterioration. In this example case, it could have helped the clinician identify immediately that the patient’s condition may be caused by exposure to allergy triggers, causing ‘allergic bronchitis’. Curated data from EHRs can be used to train models that help risk-stratify patients and assist decision-makers in classifying preoperative & non-operative patients into multiple risk categories.

Data warehouses contain the valuable oil, that is EHR data, but are also enriched with other types of data – like claims data, imaging data, genetic information-type, patient-generated data such as patient-reported outcomes, and wearable-generated data that includes nutrition, at-home vitals monitoring, physical activity status – collected from smartphones and watches. 

Today, data sharing is far from uncommon. For example, The OneFlorida Clinical Research Consortium uses clinical data from twelve healthcare organizations that provide care for nearly fifteen million Florida residents in 22 hospitals. Another example is the European Medical Information Framework (EMIF) which contains EHR data from 14 countries, blended into a single data model to enable new medical discovery and research.

Unsurprisingly, EHR companies were amongst the first to comply with interoperability rules. To that effect, EHR APIs are used for extracting data elements and other patient information from health records stored within one health IT system. With this data, healthcare organizations can potentially build a broad range of applications from patient-facing health apps, telehealth platforms, patient management solutions for treatment monitoring to existing patient portals. 

What’s Next?

In the next ten years, Cisco predicts that 500 billion sensory devices with 4-5 signals each will be connected to the Internet of Everything. This will create about 250 sensory data points per person on average. This wealth of data is ushering in a new wave of opportunities within healthcare. Deriving new interactions from the patient’s journey can be quite arduous. As the health consumer is being ushered into the ‘age of experiences’, the onus is on digital healthcare enterprises to make them more relevant, emotional, and personalized. 

By preparing for ‘Integration Readiness’, healthcare providers can access new patient demand pools from tier-2 & tier-3 cities, identify insights about the health consumer’s life cycle needs, and leverage new technologies to draw in more value from these interactions than ever before. Consequently, hospitals will be able to drive improved margins from reduced administrative costs and gain higher utilization through increased demand.

Parag Sharma, CEO & Founder, Mantra Labs featured in CXO Outlook. Read More – CXO Outlookhttps://www.cxooutlook.com/why-interoperability-is-key-to-unlocking-indias-digital-healthcare-ecosystem/

Cancel

Knowledge thats worth delivered in your inbox

Machines That Make Up Facts? Stopping AI Hallucinations with Reliable Systems

There was a time when people truly believed that humans only used 10% of their brains, so much so that it fueled Hollywood Movies and self-help personas promising untapped genius. The truth? Neuroscientists have long debunked this myth, proving that nearly all parts of our brain are active, even when we’re at rest. Now, imagine AI doing the same, providing information that is untrue, except unlike us, it doesn’t have a moment of self-doubt. That’s the bizarre and sometimes dangerous world of AI hallucinations.

AI hallucinations aren’t just funny errors; they’re a real and growing issue in AI-generated misinformation. So why do they happen, and how do we build reliable AI systems that don’t confidently mislead us? Let’s dive in.

Why Do AI Hallucinations Happen?

AI hallucinations happen when models generate errors due to incomplete, biased, or conflicting data. Other reasons include:

  • Human oversight: AI mirrors human biases and errors in training data, leading to AI’s false information
  • Lack of reasoning: Unlike humans, AI doesn’t “think” critically—it generates predictions based on patterns.

But beyond these, what if AI is too creative for its own good?

‘Creativity Gone Rogue’: When AI’s Imagination Runs Wild

AI doesn’t dream, but sometimes it gets ‘too creative’—spinning plausible-sounding stories that are basically AI-generated fake data with zero factual basis. Take the case of Meta’s Galactica, an AI model designed to generate scientific papers. It confidently fabricated entire studies with fake references, leading Meta to shut it down in three days.

This raises the question: Should AI be designed to be ‘less creative’ when AI trustworthiness matters?

The Overconfidence Problem

Ever heard the phrase, “Be confident, but not overconfident”? AI definitely hasn’t.

AI hallucinations happen because AI lacks self-doubt. When it doesn’t know something, it doesn’t hesitate—it just generates the most statistically probable answer. In one bizarre case, ChatGPT falsely accused a law professor of sexual harassment and even cited fake legal documents as proof.

Take the now-infamous case of Google’s Bard, which confidently claimed that the James Webb Space Telescope took the first-ever image of an exoplanet, a factually incorrect statement that went viral before Google had to step in and correct it.

There are more such multiple instances where AI hallucinations have led to Human hallucinations. Here are a few instances we faced.

When we tried the prompt of “Padmavaat according to the description of Malik Muhammad Jayasi-the writer ”

When we tried the prompt of “monkey to man evolution”

Now, if this is making you question your AI’s ability to get things right, then you should probably start looking have a checklist to check if your AI is reliable.

Before diving into solutions. Question your AI. If it can do these, maybe these will solve a bit of issues:

  • Can AI recognize its own mistakes?
  • What would “self-awareness” look like in AI without consciousness?
  • Are there techniques to make AI second-guess itself?
  • Can AI “consult an expert” before answering?

That might be just a checklist, but here are the strategies that make AI more reliable:

Strategies for Building Reliable AI

1. Neurosymbolic AI

It is a hybrid approach combining symbolic reasoning (logical rules) with deep learning to improve factual accuracy. IBM is pioneering this approach to build trustworthy AI systems that reason more like humans. For example, RAAPID’s solutions utilize this approach to transform clinical data into compliant, profitable risk adjustment, improving contextual understanding and reducing misdiagnoses.

2. Human-in-the-Loop Verification

Instead of random checks, AI can be trained to request human validation in critical areas. Companies like OpenAI and Google DeepMind are implementing real-time feedback loops where AI flags uncertain responses for review. A notable AI hallucination prevention use case is in medical AI, where human radiologists verify AI-detected anomalies in scans, improving diagnostic accuracy.

3. Truth Scoring Mechanism

IBM’s FactSheets AI assigns credibility scores to AI-generated content, ensuring more fact-based responses. This approach is already being used in financial risk assessment models, where AI outputs are ranked by reliability before human analysts review them.

4. AI ‘Memory’ for Context Awareness

Retrieval-Augmented Generation (RAG) allows AI to access verified sources before responding. This method is already being used by platforms like Bing AI, which cites sources instead of generating standalone answers. In legal tech, RAG-based models ensure AI-generated contracts reference actual legal precedents, reducing AI accuracy problems.

5. Red Teaming & Adversarial Testing

Companies like OpenAI and Google regularly use “red teaming”—pitting AI against expert testers who try to break its logic and expose weaknesses. This helps fine-tune AI models before public release. A practical AI reliability example is cybersecurity AI, where red teams simulate hacking attempts to uncover vulnerabilities before systems go live 

The Future: AI That Knows When to Say, “I Don’t Know”

One of the most important steps toward reliable AI is training models to recognize uncertainty. Instead of making up answers, AI should be able to respond with “I’m unsure” or direct users to validated sources. Google DeepMind’s Socratic AI model is experimenting with ways to embed self-doubt into AI.

Conclusion:

AI hallucinations aren’t just quirky mistakes—they’re a major roadblock in creating trustworthy AI systems. By blending techniques like neurosymbolic AI, human-in-the-loop verification, and retrieval-augmented generation, we can push AI toward greater accuracy and reliability.

But here’s the big question: Should AI always strive to be 100% factual, or does some level of ‘creative hallucination’ have its place? After all, some of the best innovations come from thinking outside the box—even if that box is built from AI-generated data and machine learning algorithms.

At Mantra Labs, we specialize in data-driven AI solutions designed to minimize hallucinations and maximize trust. Whether you’re developing AI-powered products or enhancing decision-making with machine learning, our expertise ensures your models provide accurate information, making life easier for humans

Cancel

Knowledge thats worth delivered in your inbox

Loading More Posts ...
Go Top
ml floating chatbot