
Google I/O 2018, Day 2 Highlights: AI for Healthcare Sector

The second day of Google I/O 2018 featured several talks, covering AI and ML in the healthcare sector, WebAssembly, Polymer, Chrome DevTools, and Flutter. Let’s dive in to see what’s new and what improvements have been made to the web!

Artificial Intelligence (AI)

Google’s AI to Improve Healthcare

Many of us have, unfortunately, heard a doctor express with despair that they could have done better had the patient been brought to medical attention a little earlier. Google thinks it can help solve this problem. At last year’s Google I/O, Sundar Pichai demonstrated Google’s custom Tensor Processing Units (TPUs); since then, Google has been working with eye hospitals to help doctors use deep learning, a machine learning technique, to screen for diabetic retinopathy more effectively.

Google’s AI analyzes about 100,000 data points per patient, far more than any human could review, to determine whether a patient’s health is likely to deteriorate in the near future and whether readmission may be needed. Its predictions are currently about 10% more accurate than traditional methods, and the aim is to give doctors a warning 24 to 48 hours before a patient falls sick again.

Google’s focus is to make healthcare better by helping doctors become more efficient and patients receive better-quality care; as Google I/O 2018 made clear, this is only the beginning of humans and machines working together.

Do It Yourself Artificial Intelligence

Google introduced the AIY Kits: a series of open source projects that include hardware and software tools, showcasing on-device artificial intelligence. With AIY Kits, makers can use artificial intelligence to make human-to-machine interaction feel more like human-to-human interaction.

The first open source project is the Voice Kit. Its speech recognition capability lets you add voice control to assistive robots, use speech to control devices such as light bulbs, and replace physical buttons on household appliances and consumer electronics.
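The Voice Kit itself is driven from Python on a Raspberry Pi. As a rough, illustrative sketch of the kind of voice-command loop it enables (using the generic speech_recognition package rather than the kit’s own API, and a hypothetical toggle_light helper):

```python
import speech_recognition as sr  # generic library, not the AIY kit's own API


def toggle_light(on: bool) -> None:
    """Hypothetical stand-in for driving a smart bulb or relay."""
    print("light on" if on else "light off")


recognizer = sr.Recognizer()
with sr.Microphone() as source:
    # Calibrate against background noise, then capture one utterance.
    recognizer.adjust_for_ambient_noise(source)
    print("Say 'light on' or 'light off'...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    if "light on" in command:
        toggle_light(True)
    elif "light off" in command:
        toggle_light(False)
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```

The same recognize-then-dispatch pattern generalizes from light bulbs to any appliance whose physical buttons you want to replace.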

For Developers:

1. Chrome DevTools

There is a new shortcut, Ctrl + F, that pulls up a search sidebar in the Network panel of Chrome DevTools. With this sidebar, you can search through headers and their values. Make sure Site Isolation is enabled by heading to chrome://flags#enable-site-per-process and activating it.

  • Performance Isolation – The Performance panel has been improved to provide flame charts for every process.
  • Certificate Transparency – The Security panel can now show the certificate transparency information of a secure website.
  • Sources Panel – The Network tab in the Sources panel has been renamed the Page tab.

2. Web Performance

Google analyzes a lot of sites and has learned over time how to make them extremely fast. The “Web Performance Made Easy” talk by Ewa Gasperowicz and Addy Osmani showed how to fix common web performance bottlenecks and how to take advantage of the latest browser APIs to improve the loading experience:

  • New Lighthouse web performance audits.
  • Optimize caching strategies – cache as many resources as possible efficiently, remove unnecessary bytes, and don’t send things twice.
  • Remove unused JavaScript and CSS from the critical path.
  • Eliminate unnecessary downloads.
  • Don’t serve unoptimized or unnecessary images to your users.
  • Help browsers deliver critical resources.
  • Have a web font loading strategy.

Google announced a new experimental browser feature called Priority Hints. It allows you to specify the relative importance of a resource, and the browser loads high-importance resources before the others.

3. WebAssembly

Google is working hard to let developers import WebAssembly modules into their JavaScript apps and have Chrome run them efficiently. With WebAssembly, software like AutoCAD and Complex3 has delivered complex yet fast web UI experiences.
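In the browser this revolves around JavaScript APIs such as WebAssembly.instantiateStreaming. As a language-neutral sketch of the same load-and-call flow, here is a hedged example using the wasmtime Python package (API assumed from recent wasmtime-py releases):

```python
from wasmtime import Store, Module, Instance

# A tiny module defined inline in WebAssembly text format: it exports
# an `add` function that sums two 32-bit integers.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

store = Store()
module = Module(store.engine, WAT)      # compile the module
instance = Instance(store, module, [])  # instantiate with no imports
add = instance.exports(store)["add"]    # look up the exported function

print(add(store, 2, 3))  # -> 5
```

Whatever the host language, the flow is the same: compile a module, instantiate it, then call its exports like ordinary functions.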


Lake, Lakehouse, or Warehouse? Picking the Perfect Data Playground

In 1997, the world watched in awe as IBM’s Deep Blue, a machine designed to play chess, defeated world champion Garry Kasparov. This moment wasn’t just a milestone for technology; it was a profound demonstration of data’s potential. Deep Blue analyzed millions of structured moves to anticipate outcomes. But imagine if it had access to unstructured data—Kasparov’s interviews, emotions, and instinctive reactions. Would the game have unfolded differently?

This historic clash mirrors today’s challenge in data architectures: leveraging structured, unstructured, and hybrid data systems to stay ahead. Let’s explore the nuances between Data Warehouses, Data Lakes, and Data Lakehouses—and uncover how they empower organizations to make game-changing decisions.

Deep Blue’s triumph was rooted in its ability to process structured data—moves on the chessboard, sequences of play, and pre-defined rules. Similarly, in the business world, structured data forms the backbone of decision-making. Customer transaction histories, financial ledgers, and inventory records are the “chess moves” of enterprises, neatly organized into rows and columns, ready for analysis. But as businesses grew, so did their need for a system that could not only store this structured data but also transform it into actionable insights efficiently. This need birthed the data warehouse.

Why was Data Warehouse the Best Move on the Board?

Data warehouses act as the strategic command centers for enterprises. By employing a schema-on-write approach, they ensure data is cleaned, validated, and formatted before storage. This guarantees high accuracy and consistency, making them indispensable for industries like finance and healthcare. For instance, global banks rely on data warehouses to calculate real-time risk assessments or detect fraud, a necessity when billions of transactions are processed daily; here, tools like Amazon Redshift, Snowflake Data Warehouse, and Azure Data Warehouse are vital. Similarly, hospitals use them to streamline patient care by integrating records, billing, and treatment plans into unified dashboards.
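To make schema-on-write concrete, here is a minimal, illustrative sketch using Python’s built-in sqlite3 module as a stand-in for a real warehouse engine; the table and column names are invented for the example. The schema is declared and enforced before any row is stored:

```python
import sqlite3

# Schema-on-write: structure and constraints are declared up front,
# and every insert is validated against them at write time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        txn_id      INTEGER PRIMARY KEY,
        account_id  TEXT NOT NULL,
        amount      REAL NOT NULL CHECK (amount > 0),
        occurred_at TEXT NOT NULL
    )
""")

# A well-formed row is accepted...
conn.execute(
    "INSERT INTO transactions VALUES (?, ?, ?, ?)",
    (1, "ACC-42", 250.0, "2025-01-09T10:00:00"),
)

# ...while a row violating the schema is rejected before storage.
try:
    conn.execute(
        "INSERT INTO transactions VALUES (?, ?, ?, ?)",
        (2, "ACC-42", -99.0, "2025-01-09T10:05:00"),  # negative amount
    )
except sqlite3.IntegrityError as err:
    print("Rejected at write time:", err)
```

That up-front validation is exactly what buys the warehouse its accuracy and consistency guarantees.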

The impact is evident: according to a report by Global Market Insights, the global data warehouse market is projected to reach $30.4 billion by 2025, driven by the growing demand for business intelligence and real-time analytics. Yet, much like Deep Blue’s limitations in analyzing Kasparov’s emotional state, data warehouses face challenges when encountering data that doesn’t fit neatly into predefined schemas.

The question remains—what happens when businesses need to explore data outside these structured confines? The next evolution takes us to the flexible and expansive realm of data lakes, designed to embrace unstructured chaos.

The True Depth of Data Lakes 

While structured data lays the foundation for traditional analytics, the modern business environment is far more complex; organizations today recognize the untapped potential in unstructured and semi-structured data. Social media conversations, customer reviews, IoT sensor feeds, audio recordings, and video content—these are the modern equivalents of Kasparov’s instinctive reactions and emotional expressions. They hold valuable insights but exist in forms that defy the rigid schemas of data warehouses.

The data lake is the system designed to embrace this chaos. Unlike warehouses, which demand structure upfront, data lakes operate on a schema-on-read approach, storing raw data in its native format until it’s needed for analysis. This flexibility makes data lakes ideal for capturing unstructured and semi-structured information. For example, Netflix uses data lakes to ingest billions of daily streaming logs, combining semi-structured metadata with unstructured viewing behaviors to deliver hyper-personalized recommendations. Similarly, Tesla stores vast amounts of raw sensor data from its autonomous vehicles in data lakes to train machine learning models.
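By contrast, a schema-on-read sketch in PySpark might look like the following (the S3 path and field names are hypothetical): the raw JSON sits in the lake untouched, and structure is inferred only when the data is read for analysis.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# Schema-on-read: the raw events were dumped into the lake as-is;
# a schema is only inferred now, at read time.
events = spark.read.json("s3://example-lake/raw/viewing-events/")  # hypothetical path

events.printSchema()  # inspect whatever structure the raw data happens to have

# Impose just enough structure for this one analysis.
events.where("event_type = 'play'") \
      .groupBy("title") \
      .count() \
      .show()
```

The same files can later be re-read with a different schema for a different question, which is precisely the flexibility the warehouse gives up.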

However, this openness comes with challenges. Without proper governance, data lakes risk devolving into “data swamps,” where valuable insights are buried under poorly cataloged, duplicated, or irrelevant information. Forrester analysts estimate that 60%-73% of enterprise data goes unused for analytics, highlighting the governance gap in traditional lake implementations.

Is the Data Lakehouse the Best of Both Worlds?

This gap gave rise to the data lakehouse, a hybrid approach that marries the flexibility of data lakes with the structure and governance of warehouses. The lakehouse supports both structured and unstructured data, enabling real-time querying for business intelligence (BI) while also accommodating AI/ML workloads. Tools like Databricks Lakehouse and Snowflake Lakehouse integrate features like ACID transactions and unified metadata layers, ensuring data remains clean, compliant, and accessible.
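As a rough sketch of the lakehouse pattern, assuming a Spark cluster with the open-source Delta Lake package available (paths, table, and column names are hypothetical): each write becomes an ACID transaction over files in the lake, and the same table serves both SQL-style BI and ML feature preparation.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session configured with the Delta Lake package
# (e.g. launched via spark-submit --packages io.delta:delta-spark_2.12:<version>).
spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

raw = spark.read.json("s3://example-lake/raw/sensor-feed/")  # hypothetical path

# Writing as Delta turns plain files into an ACID table: concurrent
# readers never observe a half-finished append.
raw.write.format("delta").mode("append").save("s3://example-lake/tables/sensors")

sensors = spark.read.format("delta").load("s3://example-lake/tables/sensors")
sensors.createOrReplaceTempView("sensors")

# The same governed table serves a BI-style SQL query...
spark.sql(
    "SELECT device_id, AVG(temperature) AS avg_temp FROM sensors GROUP BY device_id"
).show()

# ...and a feature set for an AI/ML workload.
features = sensors.select("device_id", "temperature", "vibration")
```

One copy of the data, one metadata layer, two kinds of consumers: that is the lakehouse bargain in miniature.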

Retailers, for instance, use lakehouses to analyze customer behavior in real time while simultaneously training AI models for predictive recommendations. Streaming services like Disney+ integrate structured subscriber data with unstructured viewing habits, enhancing personalization and engagement. In manufacturing, lakehouses process vast IoT sensor data alongside operational records, predicting maintenance needs and reducing downtime. According to a report by Databricks, organizations implementing lakehouse architectures have achieved up to 40% cost reductions and accelerated insights, proving their value as a future-ready data solution.

As businesses navigate this evolving data ecosystem, the choice between these architectures depends on their unique needs. Below is a comparison table highlighting the key attributes of data warehouses, data lakes, and data lakehouses:

| Feature | Data Warehouse | Data Lake | Data Lakehouse |
| --- | --- | --- | --- |
| Data Type | Structured | Structured, Semi-Structured, Unstructured | Both |
| Schema Approach | Schema-on-Write | Schema-on-Read | Both |
| Query Performance | Optimized for BI | Slower; requires specialized tools | High performance for both BI and AI |
| Accessibility | Easy for analysts with SQL tools | Requires technical expertise | Accessible to both analysts and data scientists |
| Cost | High | Low | Moderate |
| Scalability | Limited | High | High |
| Governance | Strong | Weak | Strong |
| Use Cases | BI, Compliance | AI/ML, Data Exploration | Real-Time Analytics, Unified Workloads |
| Best Fit For | Finance, Healthcare | Media, IoT, Research | Retail, E-commerce, Multi-Industry |

Conclusion

The interplay between data warehouses, data lakes, and data lakehouses is a tale of adaptation and convergence. Just as IBM’s Deep Blue showcased the power of structured data but left questions about unstructured insights, businesses today must decide how to harness the vast potential of their data. From tools like Azure Data Lake, Amazon Redshift, and Snowflake Data Warehouse to advanced platforms like Databricks Lakehouse, the possibilities are limitless.

Ultimately, the path forward depends on an organization’s specific goals—whether optimizing BI, exploring AI/ML, or achieving unified analytics. The synergy of data engineering, data analytics, and database activity monitoring ensures that insights are not just generated but are actionable. To accelerate AI transformation journeys for evolving organizations, leveraging cutting-edge platforms like Snowflake combined with deep expertise is crucial.

At Mantra Labs, we specialize in crafting tailored data science and engineering solutions that empower businesses to achieve their analytics goals. Our experience with platforms like Snowflake and our deep domain expertise make us the ideal partner for driving data-driven innovation and unlocking the next wave of growth for your enterprise.
