Executive interview: AWS’s GenAI innovation opportunity


Source is ComputerWeekly.com

Amazon has been building artificial intelligence (AI) systems and using machine learning for well over 20 years. Personalisation and recommendations were among the early innovations introduced on the ecommerce site, and these and other technologies, such as the Alexa voice assistant, help drive forward AI innovation in the Amazon Web Services (AWS) public cloud service, which in turn becomes available to its enterprise IT customers.

Getting started with AI can be a daunting task, given the almost constant industry chatter that seems to dominate IT conversations. Computer Weekly recently discussed the challenges of AI in the enterprise with Francessca Vasquez, vice-president of professional services and the GenAI Innovation Center for AWS.

When asked how IT and business leaders can develop a viable AI strategy amid the industry hype surrounding the technology, Vasquez urges enterprises building AI into their business strategy to start by assessing whether their IT infrastructure has the capabilities needed to build and train foundation models.

With all that seems to be going on with AI, Vasquez believes that for many organisations, machine learning remains a very useful tool.

“You don’t necessarily need some of the complex deep learning input and outputs that generative AI [GenAI] provides,” says Vasquez, adding that companies are prioritising use cases for AI and machine learning that they feel are the most meaningful and impactful. Such projects, she says, generally have a good return on investment.

“They are typically a lower risk and they allow organisations to get started faster,” says Vasquez. This is a bit like when automation was being deployed to solve the so-called “low-hanging fruit” inefficiencies that organisations faced.

Providing a level of intelligence in the automation of such tasks allows the organisation to run faster, in terms of streamlining inefficient steps in business processes.

“What I both get most excited about and think every single customer can drive tangible results is when you look at things like how developers produce software and that whole software development life cycle,” she says. “That is, for me, a great case of automation plus AI plus humans all being used to drive greater efficiency.”

AI services on AWS

Looking at AWS’s AI offerings, Vasquez says: “We’ve been investing very heavily in our own compute and custom silicon.”

Above the hardware, AWS operates a platform layer known as Bedrock for GenAI. “This is really the managed services where we allow organisations to use large language models (LLMs) and foundation models,” she says.

Bedrock offers what AWS calls a foundation for building and scaling secure GenAI applications. Specifically, it aims to provide a single platform, via a single application programming interface (API), that Vasquez says gives access to the company’s Titan LLM along with several third-party foundation models, including those from AI21 Labs, Cohere, Stability AI, Anthropic, Meta and Mistral AI.
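The idea of one API across many providers can be sketched as follows. This is a minimal illustration, not AWS’s documented usage: the model identifiers below are assumptions (real Bedrock model IDs vary by region and account access), and the actual network call is shown only as a comment, since it requires AWS credentials.

```python
import json

# Illustrative model identifiers from two different providers --
# assumptions for this sketch, not verified production IDs.
MODEL_IDS = [
    "amazon.titan-text-express-v1",
    "anthropic.claude-3-haiku-20240307-v1:0",
]

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build one request shape reusable across Bedrock-hosted models.

    The point of a single API is that the same message structure is
    accepted regardless of which provider's model is named in model_id.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.2},
    }

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**build_converse_request(model_id, prompt))
for model_id in MODEL_IDS:
    request = build_converse_request(model_id, "Summarise our Q3 sales notes.")
    print(json.dumps(request)[:60], "...")
```

Swapping models then means changing only the `modelId` string, leaving the surrounding application code untouched.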

“What I get really excited about is at the top of our stack for generative AI is where you see innovation happening with the ability to build GenAI applications,” she says.

One of these AI applications is Amazon Q, a GenAI-powered assistant that can answer questions, provide summaries, generate content and complete tasks based on data and information in enterprise systems. AWS says this can all be achieved securely.

Managing AI models and data access

There is always a balance to strike between locking down data access to meet compliance and cybersecurity requirements, and using company-specific data to drive innovation and generate value. There have been a number of high-profile examples of data being inadvertently leaked through public LLMs.

When asked about the advice she would give enterprises considering LLMs, Vasquez says: “The first thing I will say is that data is growing at an exponential rate. We should all be grounded in that.”

Most organisations store terabytes of data; some hold petabytes; and, in rare cases, some store exabytes. “Information scale is growing and information creation is coming in more formats beyond what you would think of as structured data,” she says.

For Vasquez, to get value out of all of the organisation’s different data stores that hold troves of data in a multitude of formats, businesses need the power of GenAI. “Most organisations are first going to have to get to the public cloud to take advantage of generative AI,” she says.

Vasquez explains: “At AWS, if I just think about our cloud, security, as in data privacy, is a pretty big priority.”

This means that as AWS develops and releases new services, security is not considered separately. “We believe that all information has to be encrypted and governed,” she says. “We still apply the same shared responsibility constructs. You have to be able to build applications in a virtual public cloud [VPC], and that information never leaves the VPC.”

This thinking is, according to Vasquez, evolving to support what AWS customers expect from LLM services. “Customers need stronger guardrails on access controls and governance on models that can automatically filter out unwanted concepts or speech or profanity, or things you don’t want feeding into the model,” she says.

AWS’s approach is to build such capabilities into Bedrock.
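The kind of filtering Vasquez describes can be illustrated with a toy pre-model check. This is only a sketch of the concept: Bedrock Guardrails are configured as a managed service, not hand-rolled rules like these, and the denied topics and word list below are invented for the example.

```python
import re

# Hypothetical policy lists for illustration only.
DENIED_TOPICS = {"insider trading", "weapons"}
PROFANITY = {"damn"}

def apply_guardrail(prompt: str) -> tuple[bool, str]:
    """Screen a prompt before it reaches the model.

    Returns (allowed, text): disallowed topics block the request
    outright, while profanity is redacted and the request passes.
    """
    lowered = prompt.lower()
    for topic in DENIED_TOPICS:
        if topic in lowered:
            return False, "Request blocked: disallowed topic."
    redacted = prompt
    for word in PROFANITY:
        redacted = re.sub(word, "****", redacted, flags=re.IGNORECASE)
    return True, redacted

print(apply_guardrail("Damn, summarise this report"))
```

The same check can run on model output as well as input, which is how guardrails keep unwanted content from feeding into, or leaking out of, the model.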

Training to avoid confusion

Vasquez acknowledges that LLMs can easily become confused, such as when a chatbot answers with nonsense to an ambiguous question. “As we look at how these models get applied globally, that becomes even more important,” she says. “We don’t envision a world where there’ll just be one single foundation model that can do everything.”

Vasquez urges businesses deploying LLMs to focus on optimising the foundation models they use. A common example is retrieval augmented generation (RAG), where the model’s answers are grounded in additional data retrieved at query time.
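The retrieval-then-generate loop can be sketched in a few lines. The documents and the word-overlap scoring below are toy assumptions: in practice retrieval would hit an embedding index or vector store, and the augmented prompt would be sent to an LLM rather than printed.

```python
# Toy document store standing in for a real vector index.
DOCUMENTS = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Our warehouse ships orders Monday to Friday, excluding public holidays.",
    "Premium members get free shipping on all orders over 25 euros.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context before generation."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```

Because the extra data is pulled in per query, the underlying model’s weights never change, which is what separates RAG from the fine-tuning and continuous pre-training discussed next.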

Some businesses may need to go further. “We do think some customers will want the ability to fine-tune, and you’ll see some customers who will want continuous pre-training of the models as new information comes in,” she says.

For Vasquez, there will always be an element of AI models working with humans to make sense of those situations where the model’s training is inadequate. “It’s all reasoning at the end of the day,” she says. “You can call it human logic or human intelligence, maybe.”

Listen to the podcast with AWS’s Francessca Vasquez here.
