AWS, IBM, Google, and Microsoft are taking AI from 1.0 to 2.0, according to Forrester

A new report says that the hyperscalers are using reinforcement learning and transformer networks to make AI smarter and more portable.

A new report on AI 2.0 from Forrester describes the five advancements in the technology that address portability, accuracy, and security challenges.

Image: Forrester

While many companies are still in the early stages of implementing artificial intelligence platforms, the early adopters are moving on to AI 2.0. A new report from Forrester, “AI 2.0: Upgrade Your Enterprise With Five Next-Generation AI Advances,” explains what these changes are and why they are important.

These new capabilities include:

  • Transformer networks
  • Synthetic data
  • Reinforcement learning 
  • Federated learning
  • Causal inference

Authors Kjell Carlsson, Brandon Purcell, and Mike Gualtieri describe how these advances impact AI in terms of technical feasibility and business applications. 

These changes address some of the limitations of AI 1.0, such as constraints on data, accuracy, speed, and security that have made it hard for businesses to develop robust use cases. The authors describe AI 1.0 as focused on pattern recognition, task-specific models, and centralized training and deployment, while AI 2.0 is characterized by general models for language, vision, and data generation that are embedded everywhere. This is a discontinuous change for AI, meaning that these new capabilities are a significant break with the history of AI to date, according to the report.

SEE: Natural language processing: A cheat sheet (TechRepublic)

In addition to automatically generating content and software code, summarizing articles, and generating questions, AI 2.0 capabilities can be deployed anywhere. The authors state that AI models can be placed and trained at the edge, meaning that new applications can be cheaper, faster, and more secure.

According to the authors, companies already have access to most of the tools and services needed to start building AI 2.0 solutions from hyperscalers such as Amazon Web Services, Google, IBM, and Microsoft. Here’s a look at each of the five technologies.

Transformer networks

These networks can handle tasks with a time or context element, such as natural language processing and generation. This advancement makes it possible to train giant models to conduct multiple tasks at once with higher accuracy and less data than individual models operating separately. According to the report, Microsoft uses these transformer networks in business applications such as natural language search, auto-captioning of images, moderating inappropriate gamer language, and automated customer support. Photon from Salesforce Research uses these networks to turn questions from business users into automatically generated SQL queries. 
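To make that concrete, here is a minimal sketch (not from the Forrester report) of calling a pretrained transformer for summarization through the open-source Hugging Face transformers library; the model name is just one publicly available example chosen for the demo.

```python
# Illustrative only, not from the Forrester report. Requires:
#   pip install transformers torch
from transformers import pipeline

# Load a small, publicly available summarization model (an assumption for this demo).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Forrester describes five next-generation AI advances: transformer networks, "
    "synthetic data, reinforcement learning, federated learning, and causal inference. "
    "Together they aim to make AI models more accurate, more portable, and more secure."
)

# One pretrained model handles the whole task; no task-specific training needed here.
summary = summarizer(article, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```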

Synthetic data

AI runs on data, and it’s not easy or cheap to get the volume of data needed to train models and build enterprise use cases. Synthetic data solves that problem and improves the accuracy, robustness, and generalizability of models, according to the report. Companies like MDClone are using synthetic data in healthcare settings to fill data gaps and protect patient privacy. This is one example of the new ecosystem of vendors providing this service to companies that don’t want to create synthetic data in-house.
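As a rough illustration of the idea (my sketch, not MDClone’s actual approach), the snippet below fits simple per-column statistics to “real” records and samples statistically similar synthetic rows; commercial tools use far more sophisticated generative models.

```python
# Minimal sketch: release rows that look statistically like the originals,
# rather than the originals themselves. Column names and values are invented.
import numpy as np

rng = np.random.default_rng(42)

# Pretend "real" patient data: age and systolic blood pressure.
real = np.column_stack([
    rng.normal(55, 12, size=200),   # age
    rng.normal(130, 15, size=200),  # blood pressure
])

# Fit simple marginal statistics and draw synthetic rows from them.
means, stds = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(means, stds, size=(200, 2))

print("real means:     ", np.round(means, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```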

Reinforcement learning

This new functionality makes it easier for companies to react swiftly to changes in data. Reinforcement learning learns from interacting with a real or simulated environment through trial and error instead of relying on historical data. An oil and gas exploration company is using Microsoft’s Project Bonsai to find the most promising paths for horizontal drilling underground, the report authors said.
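The sketch below (my illustration, not Project Bonsai) shows the basic trial-and-error loop with tabular Q-learning on a toy one-dimensional corridor; the “drilling” framing is only an analogy.

```python
# Minimal Q-learning sketch: the agent learns from interaction, not historical data.
import numpy as np

n_states, goal = 6, 5            # states 0..5, reward only at state 5
q = np.zeros((n_states, 2))      # actions: 0 = move left, 1 = move right
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):             # episodes of trial and error in the environment
    s = 0
    while s != goal:
        a = rng.integers(2) if rng.random() < eps else int(q[s].argmax())
        s_next = max(0, min(goal, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else 0.0
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

print("learned policy (0=left, 1=right):", q.argmax(axis=1))
```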

Federated learning

One barrier to wider distribution of learnings from AI is the need to transfer data from multiple sources. Transferring this data can be “costly, difficult, and often risky from a security, privacy, or competitiveness perspective.” Federated learning allows separate parties to share trained models instead of the underlying data, which means intelligence can be shared “quickly, cheaply, and more securely” within a single organization and across several organizations. The report authors state that Google’s Android 11 uses federated learning to generate smart replies and suggest emojis.
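A minimal sketch of the core idea, federated averaging, appears below: each client runs a few training steps on its own private data and only the resulting model weights are averaged centrally. This is my illustration, not Google’s implementation.

```python
# Federated averaging sketch: raw data never leaves a client; only weights are shared.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_update(w, X, y, lr=0.1, steps=20):
    """A few local gradient steps on private data, run on the client."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding private data drawn from the same underlying model.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)   # only weights cross the wire

print("global model after federated averaging:", np.round(w_global, 2))
```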

Causal inference

This technique identifies likely cause-and-effect relationships between variables that are supported by the data. It can’t prove causality, but it can make it easier to avoid faulty business decisions based on poorly performing models. This capability is at an earlier stage of development than the other four advances.
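As a simple illustration of confounder adjustment (one common causal-inference technique, not a method named in the report), the sketch below compares a naive regression estimate of a treatment effect with one that controls for a confounder; as noted above, this rests on strong assumptions and does not prove causation.

```python
# Invented example: the "true" treatment effect is 0.5, but a confounder biases
# the naive estimate. Adjusting for it recovers something close to the truth.
import numpy as np

rng = np.random.default_rng(7)
n = 2000

confounder = rng.normal(size=n)                       # e.g., customer affluence
treatment = confounder + rng.normal(size=n)           # e.g., exposure to a campaign
outcome = 0.5 * treatment + 2.0 * confounder + rng.normal(size=n)

# Naive estimate: regress outcome on treatment alone (biased by the confounder).
naive = np.linalg.lstsq(np.column_stack([treatment, np.ones(n)]), outcome, rcond=None)[0][0]

# Adjusted estimate: include the confounder in the regression.
X = np.column_stack([treatment, confounder, np.ones(n)])
adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][0]

print(f"naive effect:    {naive:.2f}")    # inflated, roughly 1.5
print(f"adjusted effect: {adjusted:.2f}") # close to the true 0.5
```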

Forrester recommends that companies take these steps to incorporate these new capabilities into existing AI efforts:

  1. Continue the AI 1.0 journey while laying the groundwork for 2.0 functionality.
  2. Invest in training existing staff because people with AI 2.0 expertise don’t exist yet.
  3. Look for use cases that score highly on both business value and technical feasibility.
  4. Look for AI 2.0 offerings from Amazon Web Services, Google, IBM, and Microsoft.
  5. Watch for a killer use case or technological breakthrough.

Source: TechRepublic
