
Google I/O 2021: What’s in it for Developers and Consumers this Year


After a year-long hiatus due to the COVID-19 pandemic, Google’s developer conference, Google I/O, returned in a virtual avatar this year with several new announcements. The event is hosted annually by the company to announce new products and services. While the search giant did not mention its upcoming Pixel devices, it did announce the upgrades expected on those phones.

From AI in digital health and wearable technologies to a brand-new Android build, better security across websites via Google Chrome’s password manager, a new digital friend called LaMDA, and a carbon-intelligent cloud computing platform, Google has a lot in store for both developers and businesses this year.

Sundar Pichai, CEO of Google and Alphabet, said on Google’s blog, “The last year has put a lot into perspective. At Google, it’s also given renewed purpose to our mission to organize the world’s information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google, for everyone. That means being helpful to people in the moments that matter and giving everyone the tools to increase their knowledge, success, health, and happiness.” 

Let’s take a look at what’s expected to make waves this year, categorized by their respective fields: 

Android 12

With brand-new privacy features and other useful experiences, like improved accessibility features for people with impaired vision, scrolling screenshots, and conversation widgets, Android 12 focuses on building a secure operating system that adapts to you and makes all your devices work better together. Google has described this update as “the biggest design change in Android’s history”.

Android 12 will first be introduced on Pixel devices and will allow users to completely personalize their phones with a custom color palette and redesigned widgets. This Android build also unifies the entire software and hardware ecosystem under a single design language called Material You.

It also introduces a new Privacy Dashboard offering a single view into your permission settings, as well as what data is being accessed, at what intervals, and by which apps. A new indicator at the top right of the status bar will tell the user when apps are accessing the phone’s microphone or camera.

Project Starline: A revolutionary 3D video conferencing 

The pandemic has led to a surge in video conferencing, video-based meetings, webinars, and more. Google had previously announced it was working on a new video chat system that lets you see the person you’re chatting with in 3D.

The project, titled Starline, aims to create uber-realistic projections for video chats, using 3D imaging to make video calls feel like speaking with someone in person, just as you would in a pre-pandemic world. While video conferencing apps such as Zoom, Google Meet, and Microsoft Teams have allowed us to stay in touch with family, friends, colleagues, and peers even as we all stayed home, Project Starline arrives at a time when, despite eased restrictions, the need for better remote conferencing tools is likely to keep growing so the world stays effectively connected.

LaMDA: Your new digital friend

LaMDA, a conversational language model built on Google’s neural network architecture called Transformer, is one of the most fascinating introductions at Google I/O 2021. Unlike other pre-existing language models which are trained to answer queries, LaMDA is being trained on dialogue to engage in free-flowing conversations on nearly any topic under the sun (or solar system). 

During the keynote address, Google gave a demo of LaMDA acting first as the planet Pluto and then as a paper airplane, both strikingly lifelike. LaMDA, currently in its R&D phase, is likely to power Google Assistant and other Google products in the future, including key aspects of Google’s new smart-home plans.

Pushing the frontier of computing with TPU v4

TPUs, Google’s custom-built machine-learning processors, enable advancements in translation, image recognition, and voice recognition via LaMDA and multimodal models. TPU v4, which debuted at Google I/O 2021, is powered by the v4 chip and touted to be twice as fast as the previous generation. A single pod can deliver more than one exaflop, which is equivalent to the computing power of 10 million laptops combined. “This is the fastest system we’ve ever deployed, and a historic milestone for us. Previously to get to an exaflop, you needed to build a custom supercomputer. And we’ll soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. They’ll be available to our Cloud customers later this year,” explained Google on their official blog.
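The laptop comparison holds up as back-of-the-envelope arithmetic, assuming a consumer laptop sustains roughly 100 GFLOPS (our assumption here, not Google’s stated figure):

```python
# Sanity check of the "one exaflop ≈ 10 million laptops" comparison.
# Assumption: a typical consumer laptop sustains ~100 GFLOPS (1e11 FLOP/s).
EXAFLOP = 1e18          # FLOP/s delivered by one TPU v4 pod (per Google)
LAPTOP_FLOPS = 1e11     # assumed sustained FLOP/s of a consumer laptop

laptops_equivalent = EXAFLOP / LAPTOP_FLOPS
print(f"{laptops_equivalent:,.0f} laptops")  # 10,000,000 laptops
```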

Google has opened a new state-of-the-art Quantum AI campus with their first quantum data center and quantum processor chip fabrication facilities, with a multi-year plan in the pipeline.  

No language barrier: Multitask Unified Model (MUM) 

MUM, or the Multitask Unified Model, is Google’s latest milestone in transferring knowledge to the user without language barriers, making it far more powerful than BERT, the Transformer-based AI model Google launched in 2019. MUM can learn across 75 languages at the same time, where most AI models train on one language at a time. It can also understand information across text, images, video, and other media.

“Every improvement to Google Search undergoes a rigorous evaluation process to ensure we’re providing more relevant, helpful results. Human raters, who follow our Search Quality Rater Guidelines, help us understand how well our results help people find information,” says Google on their blog. 

Digital Health: Google AI to help identify skin conditions

An AI tool by Google will be able to spot skin, hair, and nail conditions, based on images uploaded by patients. Expected to launch later this year, this ‘dermatology assist tool’ has been awarded a CE mark for use as a medical tool in Europe. 

The app took three years to develop and has been trained on a dataset of 65,000 images of diagnosed conditions, marks people were concerned about, and pictures of healthy skin, in all shades and tones. It is said to recognize 288 skin conditions, but it is not designed to substitute for medical diagnosis and treatment.

This app builds on previously developed tools for spotting the symptoms of certain types of cancer and tuberculosis.

Digital Health & Lifestyle: Wear OS in collaboration with Samsung and Fitbit

Google’s Wear OS and Samsung’s Tizen are merging to form one super platform, Wear. The merger will likely bring solid boosts in battery life, smoother-running apps, and up to 30% faster app load times. Other updates, such as a standalone version of Google Maps, offline Spotify and YouTube downloads, and a few of Fitbit’s best features, will be part of this platform. Wear OS will also be getting a fresh coat of Material You.

AI for Lifestyle: A better shopping experience

Google has also announced that it is working with Shopify to help merchants feature their products across Google. From a customer’s point of view, Google will be introducing a new feature in Chrome to help you continue shopping where you left off. On a new tab, Chrome will display all open shopping carts from across different shopping sites.

On Android, on the other hand, Google Lens in Photos will soon be getting a “Search inside screenshot” button to help scan things like shoes, t-shirts, and other objects in a photo and suggest relevant products.

AI for Lifestyle: AI-driven Google Maps

Google Maps, powered by AI, will now be able to save users from “hard-braking” moments by providing relevant information about routes so drivers can avoid unnecessary roadblocks. “We’ll automatically recommend that route if the ETA is the same or the difference is minimal. We believe that these changes have the potential to eliminate 100 million hard-braking events in routes driven with Google Maps each year,” Google said on their official blog. The Live View tool will also get a renewed display with detailed information.
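The trade-off Google describes, preferring the route with fewer hard-braking events when the ETA penalty is minimal, can be sketched as a simple heuristic. The data shapes and the two-minute threshold below are illustrative assumptions, not Google’s actual logic:

```python
# Toy sketch of the routing heuristic: among routes whose ETA is close to
# the fastest, pick the one with the fewest predicted hard-braking events.
# The route dicts and 2-minute threshold are illustrative assumptions.

def pick_route(routes, max_eta_penalty_min=2):
    """routes: list of dicts with 'eta_min' and 'hard_brakes' keys."""
    fastest = min(routes, key=lambda r: r["eta_min"])
    # Keep only routes whose ETA penalty is within the allowed window.
    candidates = [r for r in routes
                  if r["eta_min"] - fastest["eta_min"] <= max_eta_penalty_min]
    # Of those, recommend the route with the fewest hard-braking events.
    return min(candidates, key=lambda r: r["hard_brakes"])

routes = [
    {"name": "highway", "eta_min": 30, "hard_brakes": 5},
    {"name": "surface", "eta_min": 31, "hard_brakes": 1},
]
print(pick_route(routes)["name"])  # surface
```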

AI for Lifestyle: Curated albums on Google Photos

Google Photos will use AI to curate collections of similar images, landscapes, and more to share with the user, much like Memories on Apple and Facebook.

Google says it has taken people’s feelings about events and memories into consideration and will avoid resurfacing anything a user might want to forget. This new update allows users to control which photos they do or don’t see by letting them remove images, people, or time periods.

Another feature that will be introduced, called “little patterns”, will use AI to scan pictures and create albums based on similarities within them.

Lastly, Google is also using machine learning to create “cinematic moments” which will analyze two or three pictures taken within moments of each other to create a moving image, akin to Apple’s Live Photos.

ARCore

ARCore, Google’s augmented reality platform, is gaining two new APIs: the ARCore Raw Depth API and the ARCore Recording and Playback API. The Raw Depth API will enable developers to capture more detailed representations of surrounding objects, while the Recording and Playback API allows developers to capture video footage with AR metadata.

TensorFlow.js: how it’s being used and what developers can expect

Google is releasing a new ML inference stack for Android to provide developers with an integrated platform, with a common set of tools and APIs, to deliver a seamless ML experience across Android devices and other platforms. As part of this project, Google will also roll out TensorFlow Lite through the Google Play Store, so developers don’t have to bundle it with their own apps, reducing overall APK size.

The update also touches on ethics and accessibility in machine learning via Project Shuwa, which is being built to understand sign language and show how ML can solve everyday problems. An updated version of Face Mesh is also due for release, enabling iris support and more robust tracking. Conversation Intent Detection, based on the BERT architecture, identifies user intent along with the related entities needed to fulfill that intent.
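A production intent detector would be a BERT-based classifier; purely to illustrate the input and output shape (an intent label plus the entities needed to act on it), here is a toy keyword-based stand-in. All names and patterns are hypothetical:

```python
# Illustrative stand-in for intent detection. A real system would use a
# BERT-based model; this regex matcher only mirrors the output shape:
# an intent label plus an extracted entity. All names are assumptions.
import re

INTENT_PATTERNS = {
    "set_alarm": re.compile(r"\balarm\b.*?(\d{1,2}(:\d{2})?\s*(am|pm)?)", re.I),
    "play_music": re.compile(r"\bplay\b\s+(.+)", re.I),
}

def detect_intent(utterance):
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return {"intent": intent, "entity": match.group(1).strip()}
    return {"intent": "unknown", "entity": None}

print(detect_intent("please set an alarm for 7 am"))
```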

Google I/O 2021 also gave a close look into building cyber awareness through the Auto-Delete function and increased privacy; better camera features, including a new selfie algorithm for a more inclusive camera experience; and how TPU v4, the custom-built machine-learning processor, and the Multitask Unified Model (MUM) help make Google Search a lot smarter.

What piqued your interest the most at this year’s Google I/O?


Lake, Lakehouse, or Warehouse? Picking the Perfect Data Playground

In 1997, the world watched in awe as IBM’s Deep Blue, a machine designed to play chess, defeated world champion Garry Kasparov. This moment wasn’t just a milestone for technology; it was a profound demonstration of data’s potential. Deep Blue analyzed millions of structured moves to anticipate outcomes. But imagine if it had access to unstructured data—Kasparov’s interviews, emotions, and instinctive reactions. Would the game have unfolded differently?

This historic clash mirrors today’s challenge in data architectures: leveraging structured, unstructured, and hybrid data systems to stay ahead. Let’s explore the nuances between Data Warehouses, Data Lakes, and Data Lakehouses—and uncover how they empower organizations to make game-changing decisions.

Deep Blue’s triumph was rooted in its ability to process structured data—moves on the chessboard, sequences of play, and pre-defined rules. Similarly, in the business world, structured data forms the backbone of decision-making. Customer transaction histories, financial ledgers, and inventory records are the “chess moves” of enterprises, neatly organized into rows and columns, ready for analysis. But as businesses grew, so did their need for a system that could not only store this structured data but also transform it into actionable insights efficiently. This need birthed the data warehouse.

Why was Data Warehouse the Best Move on the Board?

Data warehouses act as the strategic command centers for enterprises. By employing a schema-on-write approach, they ensure data is cleaned, validated, and formatted before storage. This guarantees high accuracy and consistency, making them indispensable for industries like finance and healthcare. For instance, global banks rely on data warehouses to calculate real-time risk assessments or detect fraud, a necessity when billions of transactions are processed daily; here, tools like Amazon Redshift, Snowflake Data Warehouse, and Azure Data Warehouse are vital. Similarly, hospitals use them to streamline patient care by integrating records, billing, and treatment plans into unified dashboards.
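Schema-on-write means validation happens before a row is ever accepted into the store. As a rough sketch (not any particular warehouse’s API; the schema and record shapes are assumptions):

```python
# Minimal sketch of schema-on-write: records are validated against a
# declared schema BEFORE being accepted into the store. The schema and
# record shapes are illustrative assumptions, not a real warehouse API.
SCHEMA = {"txn_id": int, "amount": float, "currency": str}

warehouse = []

def load(record):
    # Reject records with missing or unexpected columns.
    if set(record) != set(SCHEMA):
        raise ValueError(f"unexpected columns: {set(record) ^ set(SCHEMA)}")
    # Reject records whose values have the wrong type.
    for field, expected in SCHEMA.items():
        if not isinstance(record[field], expected):
            raise TypeError(f"{field} must be {expected.__name__}")
    warehouse.append(record)  # only clean, conformant rows are stored

load({"txn_id": 1, "amount": 99.5, "currency": "USD"})        # accepted
try:
    load({"txn_id": "2", "amount": 10.0, "currency": "USD"})  # rejected
except TypeError as e:
    print("rejected:", e)
```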

The impact is evident: according to a report by Global Market Insights, the global data warehouse market is projected to reach $30.4 billion by 2025, driven by the growing demand for business intelligence and real-time analytics. Yet, much like Deep Blue’s limitations in analyzing Kasparov’s emotional state, data warehouses face challenges when encountering data that doesn’t fit neatly into predefined schemas.

The question remains—what happens when businesses need to explore data outside these structured confines? The next evolution takes us to the flexible and expansive realm of data lakes, designed to embrace unstructured chaos.

The True Depth of Data Lakes 

While structured data lays the foundation for traditional analytics, the modern business environment is far more complex. Organizations today recognize the untapped potential in unstructured and semi-structured data. Social media conversations, customer reviews, IoT sensor feeds, audio recordings, and video content—these are the modern equivalents of Kasparov’s instinctive reactions and emotional expressions. They hold valuable insights but exist in forms that defy the rigid schemas of data warehouses.

The data lake is the system designed to embrace this chaos. Unlike warehouses, which demand structure upfront, data lakes operate on a schema-on-read approach, storing raw data in its native format until it’s needed for analysis. This flexibility makes data lakes ideal for capturing unstructured and semi-structured information. For example, Netflix uses data lakes to ingest billions of daily streaming logs, combining semi-structured metadata with unstructured viewing behaviors to deliver hyper-personalized recommendations. Similarly, Tesla stores vast amounts of raw sensor data from its autonomous vehicles in data lakes to train machine learning models.
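Schema-on-read is the mirror image: heterogeneous events land untouched, and structure is imposed only when a query runs. A minimal sketch (the event fields and the query are illustrative assumptions):

```python
# Sketch of schema-on-read: raw, heterogeneous events land in the "lake"
# as-is; a schema is applied only at query time. Fields are assumptions.
import json

# Raw events stored untouched, in whatever shape they arrived.
lake = [
    '{"user": "a", "event": "play", "title": "Stranger Things"}',
    '{"user": "b", "event": "pause"}',        # missing "title" field
    '{"sensor": 42, "speed_kmh": 88.1}',      # entirely different source
]

def query_plays(raw_records):
    """Impose a schema at read time, skipping records that do not fit."""
    for line in raw_records:
        record = json.loads(line)
        if record.get("event") == "play" and "title" in record:
            yield record["user"], record["title"]

print(list(query_plays(lake)))  # [('a', 'Stranger Things')]
```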

However, this openness comes with challenges. Without proper governance, data lakes risk devolving into “data swamps,” where valuable insights are buried under poorly cataloged, duplicated, or irrelevant information. Forrester analysts estimate that 60%-73% of enterprise data goes unused for analytics, highlighting the governance gap in traditional lake implementations.

Is the Data Lakehouse the Best of Both Worlds?

This gap gave rise to the data lakehouse, a hybrid approach that marries the flexibility of data lakes with the structure and governance of warehouses. The lakehouse supports both structured and unstructured data, enabling real-time querying for business intelligence (BI) while also accommodating AI/ML workloads. Tools like Databricks Lakehouse and Snowflake Lakehouse integrate features like ACID transactions and unified metadata layers, ensuring data remains clean, compliant, and accessible.
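The ACID-over-a-lake idea can be illustrated with a toy transaction log: writes are staged first, then made visible all at once by a single atomic log append, loosely in the spirit of Delta-style table formats. This is a greatly simplified sketch, and every name in it is an assumption:

```python
# Toy illustration of ACID-style commits over a data lake: staged files
# become visible only through one atomic append to a transaction log,
# loosely inspired by Delta-style formats. Greatly simplified sketch.
log = []          # ordered commit log: each entry lists files added
staged = {}       # staged-but-uncommitted data files

def stage(filename, rows):
    staged[filename] = rows

def commit(filenames):
    # A single log append makes all staged files visible together.
    log.append({"add": [(f, staged.pop(f)) for f in filenames]})

def snapshot():
    # Readers see only committed files, never half-written batches.
    return [row for entry in log for _, rows in entry["add"] for row in rows]

stage("part-0.json", [{"id": 1}])
stage("part-1.json", [{"id": 2}])
assert snapshot() == []          # nothing is visible before the commit
commit(["part-0.json", "part-1.json"])
print(snapshot())
```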

Retailers, for instance, use lakehouses to analyze customer behavior in real time while simultaneously training AI models for predictive recommendations. Streaming services like Disney+ integrate structured subscriber data with unstructured viewing habits, enhancing personalization and engagement. In manufacturing, lakehouses process vast IoT sensor data alongside operational records, predicting maintenance needs and reducing downtime. According to a report by Databricks, organizations implementing lakehouse architectures have achieved up to 40% cost reductions and accelerated insights, proving their value as a future-ready data solution.

As businesses navigate this evolving data ecosystem, the choice between these architectures depends on their unique needs. Below is a comparison table highlighting the key attributes of data warehouses, data lakes, and data lakehouses:

| Feature | Data Warehouse | Data Lake | Data Lakehouse |
| --- | --- | --- | --- |
| Data Type | Structured | Structured, Semi-Structured, Unstructured | Both |
| Schema Approach | Schema-on-Write | Schema-on-Read | Both |
| Query Performance | Optimized for BI | Slower; requires specialized tools | High performance for both BI and AI |
| Accessibility | Easy for analysts with SQL tools | Requires technical expertise | Accessible to both analysts and data scientists |
| Cost Efficiency | High | Low | Moderate |
| Scalability | Limited | High | High |
| Governance | Strong | Weak | Strong |
| Use Cases | BI, Compliance | AI/ML, Data Exploration | Real-Time Analytics, Unified Workloads |
| Best Fit For | Finance, Healthcare | Media, IoT, Research | Retail, E-commerce, Multi-Industry |
Conclusion

The interplay between data warehouses, data lakes, and data lakehouses is a tale of adaptation and convergence. Just as IBM’s Deep Blue showcased the power of structured data but left questions about unstructured insights, businesses today must decide how to harness the vast potential of their data. From tools like Azure Data Lake, Amazon Redshift, and Snowflake Data Warehouse to advanced platforms like Databricks Lakehouse, the possibilities are limitless.

Ultimately, the path forward depends on an organization’s specific goals—whether optimizing BI, exploring AI/ML, or achieving unified analytics. The synergy of data engineering, data analytics, and database activity monitoring ensures that insights are not just generated but are actionable. To accelerate AI transformation journeys for evolving organizations, leveraging cutting-edge platforms like Snowflake combined with deep expertise is crucial.

At Mantra Labs, we specialize in crafting tailored data science and engineering solutions that empower businesses to achieve their analytics goals. Our experience with platforms like Snowflake and our deep domain expertise make us the ideal partner for driving data-driven innovation and unlocking the next wave of growth for your enterprise.
