Google I/O 2021: What’s in it for Developers and Consumers this Year

After a year-long hiatus due to the COVID-19 pandemic, Google's developer conference, Google I/O, returned in a virtual avatar this year with several new announcements. The event is hosted annually by the company to announce new products and services. While the search giant did not unveil its upcoming Pixel devices, it did announce upgrades expected on the phones.

From AI in digital health, wearable technologies, a brand-new Android build, and better security across websites via Google Chrome's password manager, to a new digital friend called LaMDA and a carbon-intelligent cloud computing platform, Google has a lot in store for both developers and businesses this year.

Sundar Pichai, CEO of Google and Alphabet, said on Google’s blog, “The last year has put a lot into perspective. At Google, it’s also given renewed purpose to our mission to organize the world’s information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google, for everyone. That means being helpful to people in the moments that matter and giving everyone the tools to increase their knowledge, success, health, and happiness.” 

Let’s take a look at what’s expected to make waves this year, categorized by their respective fields: 

Android 12

With brand-new privacy features and other useful experiences, such as improved accessibility for people with impaired vision, scrolling screenshots, and conversation widgets, Android 12 focuses on building a secure operating system that adapts to you and makes all your devices work better together. Google has described this update as "the biggest design change in Android's history".

Android 12 will first be introduced on Pixel devices, and allow users to completely personalize their phones with a custom color palette and redesigned widgets. This Android build also unifies the entire software and hardware ecosystems under a single design language called Material You. 

It also introduced a new Privacy Dashboard offering a single view into your permissions settings as well as what data is being accessed, at what intervals, and by which apps. A new indicator to the top right of the status bar will tell the user when apps are accessing the phone’s microphone or camera.

Project Starline: A revolutionary 3D video conferencing 

The pandemic has led to a surge in video conferencing, video-based meets, webinars, and more. Google had previously announced it was working on a new video chat system that enables you to see the person you’re chatting to in 3D.

The project, titled Starline, aims to create ultra-realistic projections for video chats, using 3D imaging to make video calls feel like speaking with someone in person, just as you would in a pre-pandemic world. Video conferencing apps including Zoom, Google Meet, and Microsoft Teams have allowed us to stay in touch with family, friends, colleagues, and peers even as we all stayed home. Project Starline arrives at a time when, despite eased restrictions, the need for better remote conferencing tools is still rising, so the world can stay effectively connected.

LaMDA: Your new digital friend

LaMDA, a conversational language model built on Google’s neural network architecture called Transformer, is one of the most fascinating introductions at Google I/O 2021. Unlike other pre-existing language models which are trained to answer queries, LaMDA is being trained on dialogue to engage in free-flowing conversations on nearly any topic under the sun (or solar system). 

During the keynote address, Google demoed LaMDA speaking first as the planet Pluto and then as a paper airplane, both remarkably convincing. LaMDA, currently in its R&D phase, is likely to power Google Assistant and other Google products in the future, including a key aspect of Google's new smart home.

Pushing the frontier of computing with TPU v4

TPUs, Google’s custom-built machine learning processes, enable advancements including Translation, image recognition, and voice recognition via LaMDA and multimodal models. TPU v4, which debuted at Google I/O 2021 is powered by the v4 chip and touted to be twice as fast as its previous generation. A single pod can deliver more than one exaflop, which is equivalent to the computing power of 10 million laptops combined. “This is the fastest system we’ve ever deployed, and a historic milestone for us. Previously to get to an exaflop, you needed to build a custom supercomputer. And we’ll soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. They’ll be available to our Cloud customers later this year,” explained Google on their official blog. 

Google has opened a new state-of-the-art Quantum AI campus with their first quantum data center and quantum processor chip fabrication facilities, with a multi-year plan in the pipeline.  

No language barrier: Multitask Unified Model (MUM) 

MUM, or the Multitask Unified Model, is Google's latest milestone in transferring knowledge across language barriers for the user, making it far more powerful than BERT, the Transformer-based AI model Google launched in 2019. MUM can learn across 75 languages at once, whereas most AI models train on one language at a time. It can also understand information across text, images, video, and other media.

“Every improvement to Google Search undergoes a rigorous evaluation process to ensure we’re providing more relevant, helpful results. Human raters, who follow our Search Quality Rater Guidelines, help us understand how well our results help people find information,” says Google on their blog. 

Digital Health: Google AI to help identify skin conditions

An AI tool by Google will be able to spot skin, hair, and nail conditions, based on images uploaded by patients. Expected to launch later this year, this ‘dermatology assist tool’ has been awarded a CE mark for use as a medical tool in Europe. 

The app took three years to develop and was trained on a dataset of 65,000 images of diagnosed conditions, marks people were concerned about, and pictures of healthy skin, across all shades and tones. It is said to recognize 288 skin conditions, but it is not designed to substitute for medical diagnosis and treatment.

This app is based on previously developed tools for learning to spot the symptoms of certain types of cancers and tuberculosis. 

Digital Health & Lifestyle: Wear OS in collaboration with Samsung and Fitbit

Google’s Wear OS and Samsung’s Tizen are merging to form one super platform, Wear. It will likely lead to solid boosts in battery life, smoother running apps, and up to 30% faster app load times. Other updates such as a standalone version of Google Maps, offline Spotify and YouTube downloads, and a few of Fitbit’s best features will be a part of this platform. Wear OS will also be getting a fresh coat of Material You. 

AI for Lifestyle: A better shopping experience

Google has also announced that they are working with Shopify to aid merchants to feature their products across Google. From a customer’s POV, Google will be introducing a new feature in Chrome to help you continue shopping where you left off. On a new tab, Chrome will display all open shopping carts from across different shopping sites. 

On Android, on the other hand, Google Lens in Photos will soon be getting a “Search inside screenshot” button to help scan things like shoes, t-shirts, and other objects in a photo and suggest relevant products.

AI for Lifestyle: AI-driven Google Maps

Google Maps, powered by AI, will now be able to save users from hard-braking moments by providing relevant information about routes to avoid unnecessary roadblocks. "We'll automatically recommend that route if the ETA is the same or the difference is minimal. We believe that these changes have the potential to eliminate 100 million hard-braking events in routes driven with Google Maps each year," Google said on their official blog. The Live View tool will also get a renewed display with detailed information.
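Google hasn't published its selection logic, but the rule it describes, preferring the route with fewer hard-braking events when the ETAs are close, can be sketched as follows; the field names and the two-minute tolerance are assumptions for illustration:

```python
# Toy sketch of the routing rule Google describes: among candidate routes,
# recommend the one with the fewest predicted hard-braking events, but only
# if its ETA matches the fastest route or exceeds it by a minimal margin.
# Data shapes and the threshold are illustrative, not Google's implementation.

def pick_route(routes, max_extra_minutes=2):
    """routes: list of dicts with 'eta_minutes' and 'hard_braking_events'."""
    fastest = min(routes, key=lambda r: r["eta_minutes"])
    # Keep only routes whose ETA is within the tolerance of the fastest one.
    close = [r for r in routes
             if r["eta_minutes"] - fastest["eta_minutes"] <= max_extra_minutes]
    # Among those, prefer the route with the fewest hard-braking events.
    return min(close, key=lambda r: r["hard_braking_events"])

routes = [
    {"name": "highway", "eta_minutes": 30, "hard_braking_events": 12},
    {"name": "surface", "eta_minutes": 31, "hard_braking_events": 3},
]
print(pick_route(routes)["name"])  # surface
```

With a one-minute ETA difference, the sketch picks the calmer route, mirroring the "same or minimal difference" behavior Google describes.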

AI for Lifestyle: Curated albums on Google Photos

Google Photos will use AI to curate collections of similar images, landscapes, and more to share with the user, quite similar to Memories on Apple and Facebook.

Google says it has taken people's emotions around events and memories into consideration and will avoid surfacing anything they might want to forget. The new update lets users control which photos they do or don't see by removing images, people, or time periods.

Another feature that will be introduced is called “little patterns” which will use AI to scan pictures and create albums based on similarities within them. 

Lastly, Google is also using machine learning to create “cinematic moments” which will analyze two or three pictures taken within moments of each other to create a moving image, akin to Apple’s Live Photos.

ARCore

ARCore, Google’s augmented reality platform, is gaining two new APIs namely, ARCore Raw Depth API and ARCore Recording and Playback API. ARCore Raw Depth API will enable developers to capture more detailed representations of surrounding objects. Additionally, the ARCore Recording and Playback API allow developers to capture video footage with AR metadata.  

Tensorflow.js: How’s it being used and what developers can expect? 

Google is releasing a new ML interface stack for Android to provide developers an integrated platform with a common set of tools and APIs to deliver a seamless ML experience across Android devices and other platforms. As part of this project, Google will also roll out TensorFlow Lite through Google Play Store, so developers don’t have to bundle it with their own apps, and thus can reduce the overall APK size.

Google also highlighted work on ethics and accessibility in machine learning, including Project Shuwa, which is being built to understand sign language and show how ML can solve everyday problems. An updated version of Face Mesh, adding iris support and more robust tracking, is also due for release. Conversation Intent Detection, based on the BERT architecture, identifies user intent along with the related entities needed to fulfill that intent.
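As a rough illustration of the intent-plus-entities output such a system produces, here is a deliberately simplified keyword-scoring sketch; Google's actual Conversation Intent Detection uses a BERT encoder, which this toy code does not attempt to reproduce:

```python
# Deliberately simplified stand-in for intent detection. Real systems like the
# BERT-based one Google describes use a neural encoder; this keyword scorer
# only illustrates the "intent plus related entities" output shape.

INTENT_KEYWORDS = {
    "set_alarm": {"alarm", "wake", "remind"},
    "play_music": {"play", "song", "music"},
}

def detect_intent(utterance):
    words = set(utterance.lower().split())
    # Score each intent by keyword overlap; the highest overlap wins.
    intent = max(INTENT_KEYWORDS,
                 key=lambda name: len(words & INTENT_KEYWORDS[name]))
    entities = [w for w in words if w.isdigit()]  # toy entity: bare numbers
    return {"intent": intent, "entities": entities}

print(detect_intent("wake me at 7"))
```

A production model would score intents with contextual embeddings and extract typed entities (times, titles, contacts) rather than matching keywords, but the returned structure is analogous.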

Google I/O 2021 also gave a close look at building cyber awareness through the Auto-Delete function, increased privacy, and better camera features, including a new selfie algorithm for a more inclusive camera experience for everyone. Meanwhile, TPU v4, Google's custom-built machine learning processor, and the Multitask Unified Model (MUM) help make Google Search a lot smarter.

What piqued your interest the most at this year’s Google I/O?

AI Code Assistants: Revolution Unveiled

AI code assistants are revolutionizing software development, with Gartner predicting that 75% of enterprise software engineers will use these tools by 2028, up from less than 10% in early 2023. This rapid adoption reflects the potential of AI to enhance coding efficiency and productivity, but also raises important questions about the maturity, benefits, and challenges of these emerging technologies.

Code Assistance Evolution

The evolution of code assistance has been rapid and transformative, progressing from simple autocomplete features to sophisticated AI-powered tools. GitHub Copilot, launched in 2021, marked a significant milestone by leveraging OpenAI's Codex to generate entire code snippets. Amazon Q, introduced in 2023, further advanced the field with its deep integration into AWS services and impressive code acceptance rates of up to 50%. GPT (Generative Pre-trained Transformer) models have been instrumental in this evolution, with GPT-3 and its successors enabling more context-aware and nuanced code suggestions.


  • Adoption rates: By 2023, over 40% of developers reported using AI code assistants.
  • Productivity gains: Tools like Amazon Q have demonstrated up to 80% acceleration in coding tasks.
  • Language support: Modern AI assistants support dozens of programming languages, with GitHub Copilot covering over 20 languages and frameworks.
  • Error reduction: AI-powered code assistants have shown potential to reduce bugs by up to 30% in some studies.

These advancements have not only increased coding efficiency but also democratized software development, making it more accessible to novice programmers and non-professionals alike.

Current Adoption and Maturity: Metrics Defining the Landscape

The landscape of AI code assistants is rapidly evolving, with adoption rates and performance metrics showcasing their growing maturity.

Amazon Q stands out with its specialized capabilities for software developers and deep integration with AWS services. It offers a range of features designed to streamline development processes:

  • Highest reported code acceptance rates: Up to 50% for multi-line code suggestions
  • Built-in security: Secure and private by design, with robust data security measures
  • Extensive connectivity: Over 50 built-in, managed, and secure data connectors
  • Task automation: Amazon Q Apps allow users to create generative AI-powered apps for streamlining tasks

The tool’s impact is evident in its adoption and performance metrics. For instance, Amazon Q has helped save over 450,000 hours from manual technical investigations. Its integration with CloudWatch provides valuable insights into developer usage patterns and areas for improvement.

As these AI assistants continue to mature, they are increasingly becoming integral to modern software development workflows. However, it’s important to note that while these tools offer significant benefits, they should be used judiciously, with developers maintaining a critical eye on the generated code and understanding its implications for overall project architecture and security.

AI-Powered Collaborative Coding: Enhancing Team Productivity

AI code assistants are revolutionizing collaborative coding practices, offering real-time suggestions, conflict resolution, and personalized assistance to development teams. These tools integrate seamlessly with popular IDEs and version control systems, facilitating smoother teamwork and code quality improvements.

Key features of AI-enhanced collaborative coding:

  • Real-time code suggestions and auto-completion across team members
  • Automated conflict detection and resolution in merge requests
  • Personalized coding assistance based on individual developer styles
  • AI-driven code reviews and quality checks

Benefits for development teams:

  • Increased productivity: Teams report up to 30-50% faster code completion
  • Improved code consistency: AI ensures adherence to team coding standards
  • Reduced onboarding time: New team members can quickly adapt to project codebases
  • Enhanced knowledge sharing: AI suggestions expose developers to diverse coding patterns

While AI code assistants offer significant advantages, it’s crucial to maintain a balance between AI assistance and human expertise. Teams should establish guidelines for AI tool usage to ensure code quality, security, and maintainability.

Emerging trends in AI-powered collaborative coding:

  • Integration of natural language processing for code explanations and documentation
  • Advanced code refactoring suggestions based on team-wide code patterns
  • AI-assisted pair programming and mob programming sessions
  • Predictive analytics for project timelines and resource allocation

As AI continues to evolve, collaborative coding tools are expected to become more sophisticated, further streamlining team workflows and fostering innovation in software development practices.

Benefits and Risks Analyzed

AI code assistants offer significant benefits but also present notable challenges. Here’s an overview of the advantages driving adoption and the critical downsides:

Core Advantages Driving Adoption:

  1. Enhanced Productivity: AI coding tools can boost developer productivity by 30-50%. Google AI researchers estimate that these tools could save developers up to 30% of their coding time.
  2. Economic Impact: Generative AI, including code assistants, could potentially add $2.6 trillion to $4.4 trillion annually to the global economy across various use cases. In the software engineering sector alone, this technology could deliver substantial value. Estimated potential annual value by industry:
    • Banking: $200 billion – $340 billion
    • Retail and CPG: $400 billion – $660 billion
  3. Democratization of Software Development: AI assistants enable individuals with less coding experience to build complex applications, potentially broadening the talent pool and fostering innovation.
  4. Instant Coding Support: AI provides real-time suggestions and generates code snippets, aiding developers in their coding journey.
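The time-saving estimate can be turned into concrete numbers; the 20 hours of hands-on coding per week and 48 working weeks per year below are illustrative assumptions, not figures from the cited research:

```python
# Illustrative arithmetic for the "up to 30% of coding time saved" estimate.
# Hours-per-week and weeks-per-year are assumed figures for illustration.

coding_hours_per_week = 20
savings_rate = 0.30           # upper-bound estimate cited above
working_weeks_per_year = 48

hours_saved_weekly = coding_hours_per_week * savings_rate
hours_saved_yearly = hours_saved_weekly * working_weeks_per_year
print(f"{hours_saved_weekly:.0f} h/week, {hours_saved_yearly:.0f} h/year")
```

Under those assumptions, the upper-bound estimate works out to roughly six hours a week, or about 288 hours a year, per developer.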

Critical Downsides and Risks:

  1. Cognitive and Skill-Related Concerns:
    • Over-reliance on AI tools may lead to skill atrophy, especially for junior developers.
    • There’s a risk of developers losing the ability to write or deeply understand code independently.
  2. Technical and Ethical Limitations:
    • Quality of Results: AI-generated code may contain hidden issues, leading to bugs or security vulnerabilities.
    • Security Risks: AI tools might introduce insecure libraries or out-of-date dependencies.
    • Ethical Concerns: AI algorithms lack accountability for errors and may reinforce harmful stereotypes or promote misinformation.
  3. Copyright and Licensing Issues:
    • AI tools heavily rely on open-source code, which may lead to unintentional use of copyrighted material or introduction of insecure libraries.
  4. Limited Contextual Understanding:
    • AI-generated code may not always integrate seamlessly with the broader project context, potentially leading to fragmented code.
  5. Bias in Training Data:
    • AI outputs can reflect biases present in their training data, potentially leading to non-inclusive code practices.

While AI code assistants offer significant productivity gains and economic benefits, they also present challenges that need careful consideration. Developers and organizations must balance the advantages with the potential risks, ensuring responsible use of these powerful tools.

Future of Code Automation

The future of AI code assistants is poised for significant growth and evolution, with technological advancements and changing developer attitudes shaping their trajectory towards potential ubiquity or obsolescence.

Technological Advancements on the Horizon:

  1. Enhanced Contextual Understanding: Future AI assistants are expected to gain deeper comprehension of project structures, coding patterns, and business logic. This will enable more accurate and context-aware code suggestions, reducing the need for extensive human review.
  2. Multi-Modal AI: Integration of natural language processing, computer vision, and code analysis will allow AI assistants to understand and generate code based on diverse inputs, including voice commands, sketches, and high-level descriptions.
  3. Autonomous Code Generation: By 2027, we may see AI agents capable of handling entire segments of a project with minimal oversight, potentially scaffolding entire applications from natural language descriptions.
  4. Self-Improving AI: Machine learning models that continuously learn from developer interactions and feedback will lead to increasingly accurate and personalized code suggestions over time.

Adoption Barriers and Enablers:

Barriers:

  1. Data Privacy Concerns: Organizations remain cautious about sharing proprietary code with cloud-based AI services.
  2. Integration Challenges: Seamless integration with existing development workflows and tools is crucial for widespread adoption.
  3. Skill Erosion Fears: Concerns about over-reliance on AI leading to a decline in fundamental coding skills among developers.

Enablers:

  1. Open-Source Models: The development of powerful open-source AI models may address privacy concerns and increase accessibility.
  2. IDE Integration: Deeper integration with popular integrated development environments will streamline adoption.
  3. Demonstrable ROI: Clear evidence of productivity gains and cost savings will drive enterprise adoption.
Looking further ahead:

  1. AI-Driven Architecture Design: AI assistants may evolve to suggest optimal system architectures based on project requirements and best practices.
  2. Automated Code Refactoring: AI tools will increasingly offer intelligent refactoring suggestions to improve code quality and maintainability.
  3. Predictive Bug Detection: Advanced AI models will predict potential bugs and security vulnerabilities before they manifest in production environments.
  4. Cross-Language Translation: AI assistants will facilitate seamless translation between programming languages, enabling easier migration and interoperability.
  5. AI-Human Pair Programming: More sophisticated AI agents may act as virtual pair programming partners, offering real-time guidance and code reviews.
  6. Ethical AI Coding: Future AI assistants will incorporate ethical considerations, suggesting inclusive and bias-free code practices.

As these trends unfold, the role of human developers is likely to shift towards higher-level problem-solving, creative design, and AI oversight. By 2025, it's projected that over 70% of professional software developers will regularly collaborate with AI agents in their coding workflows. However, the path to ubiquity will depend on addressing key challenges such as reliability, security, and maintaining a balance between AI assistance and human expertise.

The future outlook for AI code assistants is one of transformative potential, with the technology poised to become an integral part of the software development landscape. As these tools continue to evolve, they will likely reshape team structures, development methodologies, and the very nature of coding itself.

Conclusion: A Tool, Not a Panacea

AI code assistants have irrevocably altered software development, delivering measurable productivity gains but introducing new technical and societal challenges. Current metrics suggest they are transitioning from novel aids to essential utilities—63% of enterprises now mandate their use. However, their ascendancy as the de facto standard hinges on addressing security flaws, mitigating cognitive erosion, and fostering equitable upskilling. For organizations, the optimal path lies in balanced integration: harnessing AI’s speed while preserving human ingenuity. As generative models evolve, developers who master this symbiosis will define the next epoch of software engineering.
