
Africa: The Hidden Workforce Behind AI

The machines are learning. Slowly, sure, but they are learning, and we humans are the ones teaching them. We tell machines how to learn through the algorithms we write, then feed them enormous amounts of data so they can train endlessly. Data labeling, the process of augmenting unlabeled data with meaningful and informative tags, is a necessary part of machine learning, and sadly there is a simple reason a lower-wage workforce is used to train Machine Learning (ML) models: you only pay them half as much. The market for AI data preparation is projected to leap from $500M in 2018 to $1.2B by 2023.

Data is the only real fodder for any type of AI system; the more it trains on large amounts of ‘good data’, the faster it learns. Behind every piece of machine learning code intended to solve real issues is a network of digital construction workers bearing the burden of building the foundation for AI: preparing data. For example, to train AI systems to recognize objects, data labelers upload, categorize, and cluster millions of images of just about everything, from people, animals, buildings, and plants to cars, signs, and shapes. In doing so, they produce an AI system that can begin to recognize these objects in the real world.

Take, for example, an algorithm meant to classify images of animals: it uses a large volume of images of different types of animals (dogs, leopards, giraffes, zebras, etc.) to train the model. These images must be labeled and classified for the model to work. A data labeler typically performs this essential function, annotating the images with the right answers and transforming the dataset into a format suitable for machine or deep learning.
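As a rough illustration, a single labeled record might look something like the sketch below. The file path, categories, and field names are hypothetical, loosely modeled on common bounding-box annotation styles such as COCO:

```python
# A minimal, illustrative sketch of one labeled image record.
# Field names loosely follow common bounding-box annotation styles (e.g. COCO);
# the image path and categories are hypothetical.
import json

annotation = {
    "image": "images/0001.jpg",
    "labels": [
        {"category": "zebra",   "bbox": [34, 50, 210, 180]},   # [x, y, width, height]
        {"category": "giraffe", "bbox": [260, 12, 140, 300]},
    ],
}

# Labelers produce millions of such records; training code later reads them
# back to pair each image with its "right answers".
print(json.dumps(annotation, indent=2))
```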


Data Enrichment for Training ML Models

The real underlying aspect of machine intelligence is ‘the human’ in the AI loop, and it isn’t going away anytime soon. Functions like data labeling are vital for AI quality control. Big Tech firms readily outsource these tasks to parts of the world where the minimum wage is significantly lower in order to meet extremely ambitious goals within budget. Data preparation and engineering tasks represent over 80% of the time consumed in most AI and machine learning projects.

For instance, small data labeling companies in Kenya (and others spread across Africa) are working with large American & European firms to help them classify and organize millions of datasets. The task involves highlighting and labeling images of vehicles, traffic lights, landmarks, road signs and pedestrians captured by cameras fixed on autonomous vehicles so that these machines can become aware of the objects around them.


Bounding Boxes (tagging images for machine or deep learning models)

Image Segmentation (recognizing objects of different shapes, sizes, and positions) (source: clickworker)

Automation (the precursor to true AI) has put low-skilled jobs at supposed “extinction-level” risk for several decades now, and self-driving cars, rules-based process bots, and speech recognition will continue to exacerbate this trend. In reality, the advances of digital industrialism are not new, and neither is the elimination or replacement of low-skill jobs with newer low-skill jobs.

Sebenz.ai, a South African AI firm, is trying to create job opportunities for people throughout Africa by leveraging the growing local demand for data labelers. It has produced a machine learning ‘labeling game’ that allows people to earn money on their phones by labeling training data for ML models. Using this approach, Sebenz can gather labeled data from thousands of players in parallel, in near real time, to train these models accurately.

According to the firm, it takes 10,000 hours of audio to train a speech-to-text model. With one data labeler, the job would take 65 months; with 10,000 people, it would be ready in a few hours. In return, the data labelers are compensated around $16 per day (against a minimum wage on the continent of only a paltry $3 per day), affording them the opportunity to make a better living. Most of the people drawn to data labeling jobs are unskilled workers living below the poverty line.
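As a back-of-envelope check of those figures, assume roughly seven productive labeling hours per person per day and 22 working days per month (our assumptions, not the firm’s):

```python
# Back-of-envelope check of the 65-month claim (assumptions are ours).
AUDIO_HOURS = 10_000         # hours of audio needed to train the model
HOURS_PER_DAY = 7            # assumed productive labeling hours per person per day
WORKDAYS_PER_MONTH = 22      # assumed working days per month

days_solo = AUDIO_HOURS / HOURS_PER_DAY        # ~1,429 working days
months_solo = days_solo / WORKDAYS_PER_MONTH   # ~65 months for a single labeler
print(f"One labeler: ~{months_solo:.0f} months")

# Across 10,000 labelers the same workload is ~1 hour of audio per person,
# so the job compresses into a single short session.
print(f"10,000 labelers: ~{AUDIO_HOURS / 10_000:.0f} hour of audio each")
```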

According to a 2018 KPMG research report, 5% or more of the global workforce will be replaced by automation within the next two years.

When Silicon Valley first began importing ‘cleaned’ data in bulk at a fraction of what it would otherwise cost in its own markets, the practice wasn’t initially recognized as the competitive advantage it is considered today. However, looking ahead at the ‘future of work’ and the role of Big Tech in shaping the informal economy, the low-skill jobs fueling automation and AI will soon become automated themselves, creating newer jobs and roles for people en masse to move into, yet again.

Webinar: AI for Data-driven Insurers

Join our webinar, AI for Data-driven Insurers: Challenges, Opportunities & the Way Forward, hosted by our CEO, Parag Sharma, as he addresses insurance business leaders and decision-makers on April 14, 2020.

AI is shaping the future of enterprises and consumer-services in affordable and scalable ways. To learn more about how we can transform your AI journey, reach out to us at hello@mantralabsglobal.com


Lake, Lakehouse, or Warehouse? Picking the Perfect Data Playground

In 1997, the world watched in awe as IBM’s Deep Blue, a machine designed to play chess, defeated world champion Garry Kasparov. This moment wasn’t just a milestone for technology; it was a profound demonstration of data’s potential. Deep Blue analyzed millions of structured moves to anticipate outcomes. But imagine if it had access to unstructured data—Kasparov’s interviews, emotions, and instinctive reactions. Would the game have unfolded differently?

This historic clash mirrors today’s challenge in data architectures: leveraging structured, unstructured, and hybrid data systems to stay ahead. Let’s explore the nuances between Data Warehouses, Data Lakes, and Data Lakehouses—and uncover how they empower organizations to make game-changing decisions.

Deep Blue’s triumph was rooted in its ability to process structured data—moves on the chessboard, sequences of play, and pre-defined rules. Similarly, in the business world, structured data forms the backbone of decision-making. Customer transaction histories, financial ledgers, and inventory records are the “chess moves” of enterprises, neatly organized into rows and columns, ready for analysis. But as businesses grew, so did their need for a system that could not only store this structured data but also transform it into actionable insights efficiently. This need birthed the data warehouse.

Why Was the Data Warehouse the Best Move on the Board?

Data warehouses act as the strategic command centers for enterprises. By employing a schema-on-write approach, they ensure data is cleaned, validated, and formatted before storage. This guarantees high accuracy and consistency, making them indispensable for industries like finance and healthcare. For instance, global banks rely on data warehouses to calculate real-time risk assessments or detect fraud, a necessity when billions of transactions are processed daily; here, tools like Amazon Redshift, Snowflake Data Warehouse, and Azure Data Warehouse are vital. Similarly, hospitals use them to streamline patient care by integrating records, billing, and treatment plans into unified dashboards.
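To make “schema-on-write” concrete, here is a minimal, illustrative sketch; sqlite3 stands in for a real warehouse engine, and the table and column names are hypothetical:

```python
# Schema-on-write in miniature: the schema is declared up front, and every
# row is validated and typed *before* it lands in the table.
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        txn_id   INTEGER PRIMARY KEY,
        account  TEXT NOT NULL,
        amount   REAL NOT NULL,
        txn_date TEXT NOT NULL
    )
""")

def write_transaction(row: dict) -> None:
    # Reject nonconforming rows at write time, keeping the table clean.
    if not isinstance(row.get("amount"), (int, float)):
        raise ValueError("amount must be numeric")
    conn.execute(
        "INSERT INTO transactions (account, amount, txn_date) VALUES (?, ?, ?)",
        (row["account"], float(row["amount"]), row["txn_date"]),
    )

write_transaction({"account": "ACC-001", "amount": 129.50,
                   "txn_date": date.today().isoformat()})
```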

The impact is evident: according to a report by Global Market Insights, the global data warehouse market is projected to reach $30.4 billion by 2025, driven by the growing demand for business intelligence and real-time analytics. Yet, much like Deep Blue’s limitations in analyzing Kasparov’s emotional state, data warehouses face challenges when encountering data that doesn’t fit neatly into predefined schemas.

The question remains—what happens when businesses need to explore data outside these structured confines? The next evolution takes us to the flexible and expansive realm of data lakes, designed to embrace unstructured chaos.

The True Depth of Data Lakes 

While structured data lays the foundation for traditional analytics, the modern business environment is far more complex; organizations today recognize the untapped potential in unstructured and semi-structured data. Social media conversations, customer reviews, IoT sensor feeds, audio recordings, and video content—these are the modern equivalents of Kasparov’s instinctive reactions and emotional expressions. They hold valuable insights but exist in forms that defy the rigid schemas of data warehouses.

The data lake is the system designed to embrace this chaos. Unlike warehouses, which demand structure upfront, data lakes operate on a schema-on-read approach, storing raw data in its native format until it’s needed for analysis. This flexibility makes data lakes ideal for capturing unstructured and semi-structured information. For example, Netflix uses data lakes to ingest billions of daily streaming logs, combining semi-structured metadata with unstructured viewing behaviors to deliver hyper-personalized recommendations. Similarly, Tesla stores vast amounts of raw sensor data from its autonomous vehicles in data lakes to train machine learning models.
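By contrast, a schema-on-read sketch might look like the following (illustrative only; the records and field names are made up):

```python
# Schema-on-read in miniature: raw records are stored as-is, and a schema
# is imposed only at query time. Records and field names are hypothetical.
import json

# Raw events as they might land in the lake: heterogeneous JSON, no upfront schema.
raw_events = [
    '{"user": "u1", "action": "play",  "title": "Show A", "device": "tv"}',
    '{"user": "u2", "action": "pause", "title": "Show B"}',  # no device field
]

def read_with_schema(lines, fields):
    """Project each raw record onto whatever schema this analysis needs."""
    for line in lines:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}  # absent fields become None

# Two different read-time schemas over the same raw data.
for row in read_with_schema(raw_events, ["user", "action"]):
    print(row)
for row in read_with_schema(raw_events, ["title", "device"]):
    print(row)
```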

However, this openness comes with challenges. Without proper governance, data lakes risk devolving into “data swamps,” where valuable insights are buried under poorly cataloged, duplicated, or irrelevant information. Forrester analysts estimate that 60%-73% of enterprise data goes unused for analytics, highlighting the governance gap in traditional lake implementations.

Is the Data Lakehouse the Best of Both Worlds?

This gap gave rise to the data lakehouse, a hybrid approach that marries the flexibility of data lakes with the structure and governance of warehouses. The lakehouse supports both structured and unstructured data, enabling real-time querying for business intelligence (BI) while also accommodating AI/ML workloads. Tools like Databricks Lakehouse and Snowflake Lakehouse integrate features such as ACID transactions and unified metadata layers, ensuring data remains clean, compliant, and accessible.
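As one concrete flavor of this pattern, here is a minimal PySpark sketch using the open-source Delta Lake format, a common lakehouse building block. The bucket path and table name are hypothetical, and the Spark session assumes the delta-spark package is available:

```python
# A minimal lakehouse-style flow with PySpark + Delta Lake (open source).
# Assumes the delta-spark package is installed; paths and names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Raw, semi-structured events land in the lake as-is (schema-on-read)...
events = spark.read.json("s3://example-bucket/raw/events/")

# ...and are then written out as a governed Delta table with ACID guarantees,
# queryable by BI tools and ML pipelines alike (warehouse-style structure).
events.write.format("delta").mode("append").saveAsTable("analytics.events")
```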

Retailers, for instance, use lakehouses to analyze customer behavior in real time while simultaneously training AI models for predictive recommendations. Streaming services like Disney+ integrate structured subscriber data with unstructured viewing habits, enhancing personalization and engagement. In manufacturing, lakehouses process vast IoT sensor data alongside operational records, predicting maintenance needs and reducing downtime. According to a report by Databricks, organizations implementing lakehouse architectures have achieved up to 40% cost reductions and accelerated insights, proving their value as a future-ready data solution.

As businesses navigate this evolving data ecosystem, the choice between these architectures depends on their unique needs. Below is a comparison table highlighting the key attributes of data warehouses, data lakes, and data lakehouses:

| Feature | Data Warehouse | Data Lake | Data Lakehouse |
|---|---|---|---|
| Data Type | Structured | Structured, Semi-Structured, Unstructured | Both |
| Schema Approach | Schema-on-Write | Schema-on-Read | Both |
| Query Performance | Optimized for BI | Slower; requires specialized tools | High performance for both BI and AI |
| Accessibility | Easy for analysts with SQL tools | Requires technical expertise | Accessible to both analysts and data scientists |
| Cost | High | Low | Moderate |
| Scalability | Limited | High | High |
| Governance | Strong | Weak | Strong |
| Use Cases | BI, Compliance | AI/ML, Data Exploration | Real-Time Analytics, Unified Workloads |
| Best Fit For | Finance, Healthcare | Media, IoT, Research | Retail, E-commerce, Multi-Industry |

Conclusion

The interplay between data warehouses, data lakes, and data lakehouses is a tale of adaptation and convergence. Just as IBM’s Deep Blue showcased the power of structured data but left questions about unstructured insights, businesses today must decide how to harness the vast potential of their data. From tools like Azure Data Lake, Amazon Redshift, and Snowflake Data Warehouse to advanced platforms like Databricks Lakehouse, the possibilities are limitless.

Ultimately, the path forward depends on an organization’s specific goals—whether optimizing BI, exploring AI/ML, or achieving unified analytics. The synergy of data engineering, data analytics, and database activity monitoring ensures that insights are not just generated but are actionable. To accelerate AI transformation journeys for evolving organizations, leveraging cutting-edge platforms like Snowflake combined with deep expertise is crucial.

At Mantra Labs, we specialize in crafting tailored data science and engineering solutions that empower businesses to achieve their analytics goals. Our experience with platforms like Snowflake and our deep domain expertise make us the ideal partner for driving data-driven innovation and unlocking the next wave of growth for your enterprise.
