
The Growth of Usage-Based Insurance in India

Usage-based insurance (UBI), or telematics insurance, is a type of auto insurance policy that differs from traditional auto insurance, which relies on general demographic information and historical claims data to determine premiums.

UBI instead uses real-time data from telematics devices or smartphone apps to assess risk and calculate premiums.
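To make the idea concrete, here is a minimal sketch of how a telematics premium could be computed from driving data. The scoring weights, discount cap, and figures below are invented for illustration and do not reflect any insurer's actual formula:

```python
# Hypothetical sketch: score driving behavior from telematics events and apply
# a discount to a base premium. Weights and numbers are illustrative only.

def driving_score(harsh_brakes, night_trips, speeding_events, total_trips):
    """Return a 0-100 safety score; higher means safer driving."""
    if total_trips == 0:
        return 100.0
    # Penalize risky events per trip; cap the penalty so the score stays >= 0
    penalty = (5 * harsh_brakes + 2 * night_trips + 8 * speeding_events) / total_trips
    return max(0.0, 100.0 - min(penalty, 100.0))

def ubi_premium(base_premium, score, max_discount=0.30):
    """Discount scales linearly with the safety score, up to max_discount."""
    discount = max_discount * (score / 100.0)
    return round(base_premium * (1 - discount), 2)

# A driver with mostly safe trips earns close to the maximum discount
score = driving_score(harsh_brakes=4, night_trips=10, speeding_events=1, total_trips=50)
premium = ubi_premium(base_premium=12000, score=score)
```

The key property is that the premium responds to individual behavior rather than only to demographics, which is what distinguishes UBI from conventional pricing.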

While it is still a relatively niche insurance product, several prominent insurance companies in India offer UBI:

  1. Bharti AXA General Insurance: Bharti AXA offers a telematics-based motor insurance policy called “DriveSmart.” This policy uses a smartphone app to collect data on driving behavior and offers discounts based on safe driving habits.
  2. ICICI Lombard General Insurance: ICICI Lombard offers a usage-based motor insurance policy called “Pay as You Drive.” It uses a telematics device installed in the insured vehicle to monitor driving behavior and provides premium discounts based on safe driving.
  3. HDFC ERGO General Insurance: HDFC ERGO provides a telematics-based motor insurance policy called “My: Health Drive.” 
  4. Reliance General Insurance: Reliance General Insurance offers a usage-based motor insurance policy called “Pay-As-You-Drive.” It uses a telematics device to track driving behavior and offers discounts based on the collected data.

How has the adoption of usage-based insurance grown in India?

The adoption of usage-based insurance (UBI) in India has steadily grown in recent years. While it is still a relatively new concept in the Indian insurance market, several factors have contributed to its increasing popularity:

  1. Technological Advancements: The widespread availability of smartphones and the advancement of telematics technology have made it easier and more cost-effective for insurance companies to implement UBI programs in India. Telematics devices and smartphone apps can now accurately collect and transmit driving data, enabling insurers to assess risk and calculate premiums based on individual driving behavior.
  2. Cost Savings Potential: One of the critical drivers for adopting UBI in India is the potential cost savings for policyholders. By incentivizing safe driving habits, UBI policies offer the opportunity for individuals to lower their premiums based on their driving behavior. This appeals to cost-conscious consumers who are looking for personalized insurance options.
  3. Increasing Awareness of Road Safety: India has been actively promoting road safety initiatives and campaigns in recent years to address the country’s high number of road accidents. UBI aligns with these efforts by encouraging responsible driving behaviors and offering rewards for safe driving. As individuals become more aware of the importance of road safety, the appeal of UBI policies grows.
  4. Shift in Consumer Preferences: With the advent of digital transformation and changing consumer expectations, there has been a shift in the way people perceive and interact with insurance. Customers now seek personalized and flexible insurance options that align with their lifestyles and preferences. UBI caters to this demand by offering tailored coverage and potential cost savings based on individual driving patterns.

While the adoption of UBI in India is still relatively modest compared to traditional insurance policies, it is expected to grow further as more insurance companies introduce UBI offerings and as consumer awareness and acceptance continue to increase.

Here are some suggestions to increase user adoption and usage of UBI in India:

  1. Consumer Awareness: Educate customers about the benefits of UBI, such as personalized premiums, safe-driving incentives, reduced fraud, and better claims management.
  2. Subscription Options: At the initial stages of adoption, it is essential to help users with various payment structures to assuage fears. Similar to the “try and buy” and “cash on delivery” models adopted at the beginning of e-commerce shopping in India, companies can provide various types of UBI products to suit different customer segments and preferences, such as Pay as You Drive (PAYD), Pay How You Drive (PHYD), Pay as You Go (PAYG), and Distance-based Insurance.
  3. Transparency: This form of insurance relies on the free flow of data, collected and analyzed through mobile apps, plug-in devices, GPS devices, onboard sensors, mileage detection, and similar technologies. Communicating how the data is used, through videos, informational widgets, or notifications, helps ensure the customer is aware of the data privacy and security measures undertaken by the insurer.
  4. Leveraging Channel Partners: UBI requires a robust ecosystem for easy adoption. Companies can partner with OEMs, dealers, aggregators, and other stakeholders for UBI distribution and service.
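A distance-based product such as Pay As You Drive could, for instance, be priced as a base fee plus a per-kilometre charge. The fees, rates, and cap below are hypothetical, not any insurer's actual pricing:

```python
# Illustrative Pay As You Drive pricing: a fixed base fee plus a per-kilometre
# charge, capped at a conventional flat premium. All figures are hypothetical.

def payd_premium(base_fee, rate_per_km, km_driven, cap=None):
    """Distance-based premium; never exceeds `cap` when one is given."""
    premium = base_fee + rate_per_km * km_driven
    return min(premium, cap) if cap is not None else premium

# A low-mileage driver pays far less than a high-mileage one
low_mileage = payd_premium(base_fee=4000, rate_per_km=0.8, km_driven=3000, cap=16000)
high_mileage = payd_premium(base_fee=4000, rate_per_km=0.8, km_driven=15000, cap=16000)
```

Capping the premium at the flat rate makes the product an easy "try and buy" proposition: a customer can never pay more than a conventional policy would cost.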

UBI is relatively new in India but is gaining popularity among car owners who want more control over their insurance costs. The way forward for UBI in India depends on several factors, such as the adoption of telematics technology, the regulatory framework, consumer awareness, and market competition. UBI has the potential to transform the car insurance industry in India by making it more transparent, fair, and customer-centric.


Machines That Make Up Facts? Stopping AI Hallucinations with Reliable Systems

There was a time when people truly believed that humans only used 10% of their brains, so much so that it fueled Hollywood movies and self-help personas promising untapped genius. The truth? Neuroscientists have long debunked this myth, proving that nearly all parts of our brain are active, even when we’re at rest. Now, imagine AI doing the same, confidently providing information that is untrue; unlike us, it doesn’t have a moment of self-doubt. That’s the bizarre and sometimes dangerous world of AI hallucinations.

AI hallucinations aren’t just funny errors; they’re a real and growing issue in AI-generated misinformation. So why do they happen, and how do we build reliable AI systems that don’t confidently mislead us? Let’s dive in.

Why Do AI Hallucinations Happen?

AI hallucinations happen when models generate errors due to incomplete, biased, or conflicting data. Other reasons include:

  • Human oversight: AI mirrors human biases and errors in its training data, leading it to reproduce false information.
  • Lack of reasoning: Unlike humans, AI doesn’t “think” critically—it generates predictions based on patterns.

But beyond these, what if AI is too creative for its own good?

‘Creativity Gone Rogue’: When AI’s Imagination Runs Wild

AI doesn’t dream, but sometimes it gets ‘too creative’—spinning plausible-sounding stories that are basically AI-generated fake data with zero factual basis. Take the case of Meta’s Galactica, an AI model designed to generate scientific papers. It confidently fabricated entire studies with fake references, leading Meta to shut it down in three days.

This raises the question: Should AI be designed to be ‘less creative’ when AI trustworthiness matters?

The Overconfidence Problem

Ever heard the phrase, “Be confident, but not overconfident”? AI definitely hasn’t.

AI hallucinations happen because AI lacks self-doubt. When it doesn’t know something, it doesn’t hesitate—it just generates the most statistically probable answer. In one bizarre case, ChatGPT falsely accused a law professor of sexual harassment and even cited fake legal documents as proof.

Take the now-infamous case of Google’s Bard, which confidently claimed that the James Webb Space Telescope took the first-ever image of an exoplanet, a factually incorrect statement that went viral before Google had to step in and correct it.

There are many more instances where AI hallucinations have, in turn, led to human misconceptions. Here are a few prompts we tried ourselves:

  • “Padmavaat according to the description of Malik Muhammad Jayasi, the writer”
  • “monkey to man evolution”

Now, if this is making you question your AI’s ability to get things right, you should probably keep a checklist to assess whether your AI is reliable.

Before diving into solutions, question your AI. If it can do the following, many of these issues may already be addressed:

  • Can AI recognize its own mistakes?
  • What would “self-awareness” look like in AI without consciousness?
  • Are there techniques to make AI second-guess itself?
  • Can AI “consult an expert” before answering?

A checklist is only a start, though; here are the strategies that make AI more reliable:

Strategies for Building Reliable AI

1. Neurosymbolic AI

It is a hybrid approach combining symbolic reasoning (logical rules) with deep learning to improve factual accuracy. IBM is pioneering this approach to build trustworthy AI systems that reason more like humans. For example, RAAPID’s solutions utilize this approach to transform clinical data into compliant, profitable risk adjustment, improving contextual understanding and reducing misdiagnoses.
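As a toy sketch of the idea (not IBM's or RAAPID's actual systems), a symbolic rule layer can veto a neural prediction that violates known constraints. Every name, rule, and threshold below is invented for illustration:

```python
# Toy neurosymbolic check: a "neural" prediction is accepted only if it
# satisfies a symbolic rule; otherwise it is flagged. Purely illustrative.

def neural_guess(patient):
    """Stand-in for a learned model's prediction (which may be wrong)."""
    return "diabetes" if patient["glucose"] > 125 else "healthy"

RULES = {
    # Symbolic constraints a diagnosis must satisfy to be accepted
    "diabetes": lambda p: p["glucose"] > 125,
    "healthy": lambda p: p["glucose"] <= 125,
}

def check(patient, guess):
    """Accept the guess only when the matching symbolic rule holds."""
    return guess if RULES[guess](patient) else "flag for clinician review"

consistent = check({"glucose": 140}, neural_guess({"glucose": 140}))
inconsistent = check({"glucose": 90}, "diabetes")  # an unsupported guess is caught
```

The logical rules act as a hard backstop: the model can be as creative as it likes, but a prediction that contradicts the rules never reaches the user unreviewed.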

2. Human-in-the-Loop Verification

Instead of random checks, AI can be trained to request human validation in critical areas. Companies like OpenAI and Google DeepMind are implementing real-time feedback loops where AI flags uncertain responses for review. A notable AI hallucination prevention use case is in medical AI, where human radiologists verify AI-detected anomalies in scans, improving diagnostic accuracy.
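A minimal sketch of such a triage step, assuming the model exposes a per-output confidence (the threshold and confidence values here are illustrative):

```python
# Sketch: route low-confidence AI outputs to a human reviewer instead of
# auto-approving everything. Threshold and confidences are illustrative.

def triage(predictions, threshold=0.8):
    """Split (label, confidence) outputs into auto-approved vs. flagged."""
    approved, needs_review = [], []
    for label, confidence in predictions:
        (approved if confidence >= threshold else needs_review).append(label)
    return approved, needs_review

scan_findings = [("no anomaly", 0.97), ("possible lesion", 0.55), ("no anomaly", 0.91)]
auto_approved, for_radiologist = triage(scan_findings)
```

Only the uncertain cases consume human attention, which is what makes the approach scale better than random spot checks.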

3. Truth Scoring Mechanism

IBM’s FactSheets AI assigns credibility scores to AI-generated content, ensuring more fact-based responses. This approach is already being used in financial risk assessment models, where AI outputs are ranked by reliability before human analysts review them.
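As a toy illustration of the idea (not IBM's actual FactSheets mechanism), a credibility score could be the fraction of a response's claims found in a verified knowledge base:

```python
# Toy credibility score: the fraction of a response's claims that appear in a
# verified knowledge base. Real systems are far more sophisticated.

def credibility_score(claims, verified_facts):
    """Return 0.0-1.0: the share of claims backed by the knowledge base."""
    if not claims:
        return 0.0
    supported = sum(1 for claim in claims if claim in verified_facts)
    return supported / len(claims)

verified = {"JWST launched in December 2021", "Hubble launched in 1990"}
response_claims = [
    "JWST launched in December 2021",            # verifiable
    "JWST took the first-ever exoplanet image",  # the Bard-style fabrication
]
score = credibility_score(response_claims, verified)
```

Ranking outputs by such a score lets analysts review the least-supported content first.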

4. AI ‘Memory’ for Context Awareness

Retrieval-Augmented Generation (RAG) allows AI to access verified sources before responding. This method is already being used by platforms like Bing AI, which cites sources instead of generating standalone answers. In legal tech, RAG-based models ensure AI-generated contracts reference actual legal precedents, reducing AI accuracy problems.
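A stripped-down sketch of the retrieval step, using toy word-overlap scoring in place of the embedding index a real RAG stack would use:

```python
# Stripped-down sketch of RAG's retrieval step: rank documents by word overlap
# with the query and keep the top k. Real systems use embeddings, not overlap.

def retrieve(query, corpus, k=2):
    """Return the k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

corpus = [
    "UBI premiums are calculated from telematics driving data.",
    "The James Webb Space Telescope launched in December 2021.",
    "Retrieval grounds generated answers in verified source documents.",
]
sources = retrieve("When did the James Webb Space Telescope launch?", corpus)
# The generator would then answer only from `sources`, citing them.
```

Because the answer is constrained to retrieved text, the model has far less room to invent facts, and each claim can carry a citation.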

5. Red Teaming & Adversarial Testing

Companies like OpenAI and Google regularly use “red teaming”—pitting AI against expert testers who try to break its logic and expose weaknesses. This helps fine-tune AI models before public release. A practical AI reliability example is cybersecurity AI, where red teams simulate hacking attempts to uncover vulnerabilities before systems go live.
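In miniature, a red-team harness simply runs a suite of adversarial prompts against the model and records the failures. The "model" below is a trivial stand-in that agrees with leading questions, a toy version of a sycophantic LLM:

```python
# Miniature red-team harness: run adversarial prompts against a model and
# record which ones break it. The "model" is a trivial illustrative stand-in.

def toy_model(prompt):
    """Agrees with any leading question instead of checking the facts."""
    return "Yes, that is correct." if prompt.endswith("right?") else "I cannot confirm that."

adversarial_prompts = [
    "Humans only use 10% of their brains, right?",
    "Tell me the launch date of a telescope that does not exist.",
]

failures = [p for p in adversarial_prompts if toy_model(p) == "Yes, that is correct."]
```

The failure list then drives targeted fine-tuning before release.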

The Future: AI That Knows When to Say, “I Don’t Know”

One of the most important steps toward reliable AI is training models to recognize uncertainty. Instead of making up answers, AI should be able to respond with “I’m unsure” or direct users to validated sources. Google DeepMind’s Socratic AI model is experimenting with ways to embed self-doubt into AI.
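A minimal sketch of abstention, assuming the model exposes a confidence for each candidate answer (the threshold is an illustrative assumption):

```python
# Sketch of abstention: if no candidate answer clears a confidence threshold,
# the system says so instead of guessing. Threshold and values are illustrative.

def answer_or_abstain(candidates, threshold=0.75):
    """candidates: list of (answer, confidence) pairs."""
    best_answer, best_confidence = max(candidates, key=lambda c: c[1])
    if best_confidence < threshold:
        return "I'm unsure; please consult a validated source."
    return best_answer

confident = answer_or_abstain([("Paris", 0.98), ("Lyon", 0.01)])
unsure = answer_or_abstain([("Paris", 0.40), ("Lyon", 0.35)])
```

The design trade-off is deliberate: a refusal is a worse answer than a correct one, but a far better answer than a confident fabrication.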

Conclusion:

AI hallucinations aren’t just quirky mistakes—they’re a major roadblock in creating trustworthy AI systems. By blending techniques like neurosymbolic AI, human-in-the-loop verification, and retrieval-augmented generation, we can push AI toward greater accuracy and reliability.

But here’s the big question: Should AI always strive to be 100% factual, or does some level of ‘creative hallucination’ have its place? After all, some of the best innovations come from thinking outside the box—even if that box is built from AI-generated data and machine learning algorithms.

At Mantra Labs, we specialize in data-driven AI solutions designed to minimize hallucinations and maximize trust. Whether you’re developing AI-powered products or enhancing decision-making with machine learning, our expertise ensures your models provide accurate information, making life easier for humans.
