Amazon Web Services (AWS) is making several features of its generative AI application-building service, Amazon Bedrock, generally available, the company announced on Tuesday.
These features include guardrails for AI, a model evaluation tool, and new large language models (LLMs).
The guardrails feature, known as Guardrails for Amazon Bedrock, was introduced last year and has been available in preview since then.
Integrated as a wizard within Bedrock, Guardrails for Amazon Bedrock can help block up to 85% of harmful content, the company said. It can be used with fine-tuned models, AI agents, and all of the LLMs offered through Bedrock.
These LLMs include Amazon Titan Text, Anthropic Claude, Meta Llama 2, AI21 Jurassic, and Cohere Command.
Businesses can use the Guardrails wizard to create and deploy customized safeguards based on their company policies.
These safeguards encompass denied topics, content filters, and personally identifiable information (PII) redaction.
According to a company blog post, enterprises can specify a set of topics that are undesirable in the context of their application using a brief natural language description. The company also noted that a guardrail can be tested to ensure it is working as expected.
The content filters, meanwhile, provide toggles that businesses can use to filter out harmful content across categories such as hate speech, insults, sexual content, and violence.
The PII redaction feature within Guardrails for Amazon Bedrock, still in development, is expected to let businesses redact personal information such as email addresses and phone numbers from LLM responses.
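For developers, these policies map onto Bedrock's CreateGuardrail operation. The following is a minimal sketch using boto3, assuming the request shape in recent SDK releases; the guardrail name, topic definition, and blocked-request messages are illustrative, not from AWS's announcement.

```python
import boto3

# Guardrail management lives on the Bedrock control-plane client,
# not on the bedrock-runtime client used for inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="support-bot-guardrail",  # illustrative name
    description="Safeguards for a customer-support assistant",
    # Denied topic, defined with a brief natural-language description.
    topicPolicyConfig={
        "topicsConfig": [{
            "name": "investment-advice",
            "definition": "Recommendations about specific stocks, funds, "
                          "or other financial investments.",
            "type": "DENY",
        }]
    },
    # Content filters for the harmful-content categories, with per-category strengths.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # PII redaction: anonymize email addresses and phone numbers in responses.
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
print(response["guardrailId"], response["version"])
```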
In addition, Guardrails for Amazon Bedrock integrates with Amazon CloudWatch, enabling businesses to monitor and analyze user inputs and model responses that violate the policies set in the guardrails.
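Once created, a guardrail is applied at inference time by passing its identifier and version to the InvokeModel API, and invocation logs can be routed to CloudWatch. Below is a minimal sketch assuming a Titan Text model; the guardrail ID, log group, and IAM role ARN are placeholders that must already exist.

```python
import json
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Route model invocation logs (prompts and responses) to CloudWatch Logs.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",  # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
    }
)

# Apply the guardrail created earlier to a Titan Text invocation.
response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    guardrailIdentifier="<guardrail-id>",  # from create_guardrail
    guardrailVersion="DRAFT",
    body=json.dumps({"inputText": "Which stock should I buy right now?"}),
)
print(json.loads(response["body"].read()))
```

If the guardrail intervenes, the caller receives the configured blocked-request message instead of a model completion, and the violating input or output surfaces in the CloudWatch logs.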
AWS is catching up with IBM and others
Several other model providers, including IBM, Google Cloud, Nvidia, and Microsoft, offer comparable features to help businesses manage AI bias.
According to Amalgam Insights’ chief analyst Hyoun Park, AWS is following the lead of IBM, Google, Microsoft, Apple, Meta, Databricks, and other AI service providers in offering governed guardrails.
“It is increasingly evident that the real value in AI will be related to governance, trust, security, semantic accuracy, and subject matter expertise of the responses provided. AWS cannot compete in AI solely by being faster and larger; it also needs to offer similar or superior guardrails compared to other AI vendors to deliver a customer-centric experience,” Park explained.
However, he noted that IBM has a significant advantage over other model providers and AI vendors in developing AI guardrails, having built them for its Watson AI assistant for more than a decade.
“While IBM’s efforts were not entirely successful, the experience gained by IBM in working with challenging datasets in various sectors has given them a head start in developing AI guardrails,” Park added. AWS, he said, is in the early stages of introducing AI guardrails as it works to catch up and make progress in LLMs and generative AI.
Custom model import capability for Bedrock
As part of the updates, AWS is introducing a new custom model import capability that allows businesses to bring their own customized models to Bedrock, aiming to reduce operational overhead and expedite application development.
The capability is being added in response to demand from businesses that develop their own models, or fine-tune publicly available ones with industry-specific data, and want access to Bedrock tools such as knowledge bases, guardrails, model evaluation, and agents, explained Sherry Marcus, director of applied science at AWS.
However, Amalgam Insights’ Park suggested that AWS is more likely adding the API to help businesses that already store significant amounts of data on AWS and have used its SageMaker service to train their AI models.
The move also lets businesses consolidate all of their services under one bill rather than maintaining multiple vendor relationships, Park noted, describing it as a strategy aimed at demonstrating that AI-related workloads are best supported on AWS.
The custom model import capability, currently in preview, can be accessed via a managed API within Bedrock and supports three open model architectures: Flan-T5, Llama, and Mistral.
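In the boto3 SDK, this surfaces as a model import job pointed at model weights stored in Amazon S3. The sketch below assumes the CreateModelImportJob operation as it appears in later SDK releases; the preview-era API shape may differ, and the bucket, role, and model names are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Import fine-tuned weights (e.g., a Mistral or Llama variant) from S3.
# All names and ARNs below are placeholders.
job = bedrock.create_model_import_job(
    jobName="import-my-fine-tuned-mistral",
    importedModelName="my-fine-tuned-mistral",
    roleArn="arn:aws:iam::123456789012:role/BedrockImportRole",
    modelDataSource={
        "s3DataSource": {
            "s3Uri": "s3://my-model-bucket/mistral-7b-finetuned/"
        }
    },
)
print(job["jobArn"])
```

Once the import job completes, the imported model is invoked through the same InvokeModel interface as Bedrock's hosted models, which is what makes the surrounding tools, guardrails, knowledge bases, and agents, available to it.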
Model evaluation capability and LLMs transition to general availability
AWS is transitioning the model evaluation capability of Bedrock, showcased at re:Invent last year, to general availability.
Named Model Evaluation on Amazon Bedrock, this feature aims to simplify tasks such as identifying benchmarks, setting up evaluation tools, and running assessments, ultimately saving time and costs, according to the company.
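Programmatically, an automatic evaluation is kicked off as a job against built-in datasets and metrics. The sketch below assumes the CreateEvaluationJob operation; the task type, dataset, metric names, role, and bucket are illustrative assumptions, and the exact field shapes may vary by SDK version.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start an automatic evaluation of a hosted model; dataset and metric
# names are illustrative built-ins, and the role/bucket are placeholders.
job = bedrock.create_evaluation_job(
    jobName="claude-qa-eval",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "QuestionAndAnswer",
                "dataset": {"name": "Builtin.BoolQ"},
                "metricNames": ["Builtin.Accuracy", "Builtin.Robustness"],
            }]
        }
    },
    inferenceConfig={
        "models": [{
            "bedrockModel": {
                "modelIdentifier": "anthropic.claude-v2",
                "inferenceParams": "{}",  # JSON string of inference settings
            }
        }]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},
)
print(job["jobArn"])
```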
Updates to Bedrock also include new LLMs, such as Meta’s Llama 3 and Cohere’s Command models.
Simultaneously, the cloud service provider is moving the Amazon Titan Image Generator model to general availability.
During its preview, the model offered invisible watermarking as a feature. The generally available version now adds an invisible watermark to every image it generates, Marcus said.
“We will also be introducing a new watermark detection API in preview to determine if an image provided contains an AWS watermark,” Marcus added.
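Image generation itself goes through the standard InvokeModel call, with the watermark embedded by the service automatically rather than via any request parameter. A minimal sketch follows; the prompt and generation settings are illustrative, and the detection API Marcus mentioned was still in preview and unnamed, so it is not shown.

```python
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Generate an image with Amazon Titan Image Generator; the service
# embeds an invisible watermark in the output automatically.
response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": "A lighthouse on a rocky coast at dusk"},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
        },
    }),
)
payload = json.loads(response["body"].read())
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))  # base64-encoded PNG
```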
Another notable addition is the Amazon Titan Text Embeddings V2 model, optimized for retrieval-augmented generation (RAG) use cases such as information retrieval, question-and-answer chatbots, and personalized recommendations.
The V2 model, set to launch soon, reduces storage and compute costs by supporting flexible embedding sizes, according to AWS.
“Flexible embeddings reduce overall storage by up to 4x, significantly cutting operational costs while maintaining 97% accuracy for RAG use cases,” Marcus explained.
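The flexible embeddings Marcus refers to are selectable output dimensions: per AWS's documentation, Titan Text Embeddings V2 can return 256-, 512-, or 1024-dimension vectors, so dropping from 1024 to 256 yields the 4x storage reduction cited. A minimal sketch, using the model ID as published at launch:

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request a compact 256-dimension embedding instead of the full 1024,
# shrinking the vector-store footprint for RAG workloads.
response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({
        "inputText": "How do I reset my router?",
        "dimensions": 256,   # 256, 512, or 1024
        "normalize": True,   # unit-length vectors for cosine similarity
    }),
)
payload = json.loads(response["body"].read())
print(len(payload["embedding"]))  # -> 256
```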
Current Amazon Bedrock customers include Salesforce, Dentsu, Amazon, Pearson, and others.
Copyright © 2024 IDG Communications, Inc.