*Update 4/3/2024: We have updated this post to add information about the new Paris Region availability for Amazon Bedrock and to provide a new resource link for guidance on inference requests.*

Last month, we announced the availability of two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, on Amazon Bedrock. Mistral 7B, Mistral AI's first foundation model, supports English text generation tasks with natural coding capabilities. Mixtral 8x7B is a popular, high-quality, sparse Mixture-of-Experts (MoE) model that is ideal for text summarization, question answering, text classification, text completion, and code generation.

Today, we're announcing the availability of Mistral Large on Amazon Bedrock. Mistral Large is ideal for complex tasks that require substantial reasoning capabilities, or ones that are highly specialized, such as synthetic text generation or code generation. At the AWS Paris Summit today, we also announced that Amazon Bedrock is now available in the AWS Europe (Paris) Region.

What you need to know about Mistral Large:

- It's natively fluent in English, French, Spanish, German, and Italian, with a nuanced understanding of grammar and cultural context.
- It has a 32K token context window, allowing precise information recall from large documents.
- Its precise instruction-following enables you to design your own moderation policies; the folks at Mistral AI used it to set up the system-level moderation of their beta assistant demonstrator, le Chat.

Your first interaction with large language models (LLMs) revolves around prompts, so the art of crafting effective prompts is essential for generating desirable responses. In this Amazon Bedrock guide, we provide further details on how to make inference requests to Mistral AI models.

## Getting started with Mistral Large

To get started with Mistral Large on Bedrock, you first need to get access to the model. On the Amazon Bedrock console, select **Model access**, and then select **Manage model access**. Next, select **Mistral Large**, and then select **Save changes**. Once you have access to Mistral Large, you can use the model on Bedrock. Refresh the base models table to view the updated status.
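Once access is granted, you can also confirm programmatically that Mistral models are visible to your account. This is a minimal sketch using the Bedrock control-plane client in boto3; the `byProvider` filter value of `"mistral"` is an assumption, so check the Amazon Bedrock documentation for the exact provider name:

```python
import boto3

# Control-plane client for listing models (distinct from the
# "bedrock-runtime" client used later for inference requests)
bedrock_control = boto3.client(service_name="bedrock", region_name="us-east-1")

# byProvider="mistral" is an assumed filter value; the provider
# may be listed as "Mistral AI" in your Region
response = bedrock_control.list_foundation_models(byProvider="mistral")
for model in response["modelSummaries"]:
    print(model["modelId"], model["modelName"])
```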
To see the complex reasoning aspect of Mistral Large in action, I tried the following prompt in the Playgrounds section:

**Prompt**

```
<s>[INST]Calculate the difference in payment dates between the two customers whose payment amounts are closest to each other in the given dataset: '{
  "transaction_id":{"0":"T1001","1":"T1002","2":"T1003","3":"T1004","4":"T1005"},
  "customer_id":{"0":"C001","1":"C002","2":"C003","3":"C002","4":"C001"},
  "payment_amount":{"0":125.5,"1":89.99,"2":120.0,"3":54.3,"4":210.2},
  "payment_date":{"0":"2021-10-05","1":"2021-10-06","2":"2021-10-07","3":"2021-10-05","4":"2021-10-08"},
  "payment_status":{"0":"Paid","1":"Unpaid","2":"Paid","3":"Paid","4":"Pending"}
}'[/INST]
```

**Output**

To find the difference in payment dates between the two customers whose payment amounts are closest to each other, we first need to parse the given dataset. We can use the pandas library in Python to convert the dataset into a DataFrame for easier manipulation. Here's the Python code to achieve this:

```python
import pandas as pd
from datetime import datetime

# Given dataset
data = {
    "transaction_id": ["T1001", "T1002", "T1003", "T1004", "T1005"],
    "customer_id": ["C001", "C002", "C003", "C002", "C001"],
    "payment_amount": [125.5, 89.99, 120.0, 54.3, 210.2],
    "payment_date": ["2021-10-05", "2021-10-06", "2021-10-07", "2021-10-05", "2021-10-08"],
    "payment_status": ["Paid", "Unpaid", "Paid", "Paid", "Pending"]
}

# Convert dataset to DataFrame
df = pd.DataFrame(data)

# Convert payment_date column to datetime format
df["payment_date"] = pd.to_datetime(df["payment_date"])

# Calculate the difference between all pairs of payment amounts
amount_diffs = df["payment_amount"].sub(df["payment_amount"], axis=0).abs().combine_first(
    df["payment_amount"].sub(df["payment_amount"], axis=1).abs()
)

# Find the minimum difference
min_diff = amount_diffs.min().min()

# Find the indices of the two customers with the closest payment amounts
closest_indices = amount_diffs[amount_diffs == min_diff].stack().index.tolist()

# Get the customer indices
customer_index1, customer_index2 = closest_indices[0]

# Get the payment dates
```
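The model's generation trails off at this point, and its pairwise subtraction via `Series.sub` would not run as written. For comparison, here is a minimal working sketch of the computation the model describes; this is my own code rather than the model's output, using only the dataset from the prompt:

```python
import pandas as pd
from itertools import combinations

# Dataset from the prompt
data = {
    "customer_id": ["C001", "C002", "C003", "C002", "C001"],
    "payment_amount": [125.5, 89.99, 120.0, 54.3, 210.2],
    "payment_date": ["2021-10-05", "2021-10-06", "2021-10-07", "2021-10-05", "2021-10-08"],
}
df = pd.DataFrame(data)
df["payment_date"] = pd.to_datetime(df["payment_date"])

# Examine every pair of rows and keep the pair whose payment amounts are closest
i, j = min(
    combinations(df.index, 2),
    key=lambda pair: abs(df.loc[pair[0], "payment_amount"] - df.loc[pair[1], "payment_amount"]),
)

# Difference between the two payment dates
date_diff = abs(df.loc[i, "payment_date"] - df.loc[j, "payment_date"])
print(df.loc[i, "customer_id"], df.loc[j, "customer_id"], date_diff)
```

The closest amounts are 125.5 and 120.0 (customers C001 and C003), paid on 2021-10-05 and 2021-10-07, so the difference is two days.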
Earlier in this post I mentioned that Mistral Large is natively fluent in several languages, with a nuanced understanding of grammar and cultural context. I tested the model's understanding of the French language:

**Prompt**

```
<s>[INST]Lequel est le plus lourd une livre de fer ou un kilogramme de plume[/INST]
```

(In English: "Which is heavier, a pound of iron or a kilogram of feathers?")

**Output**

Une livre de fer pèse environ 0.453 kilogramme, tandis qu'un kilogramme de plumes pèse 1 kilogramme. Donc, un kilogramme de plumes est plus lourd qu'une livre de fer. Cependant, il est important de noter que le volume occupé par un kilogramme de plumes serait beaucoup plus grand que le volume occupé par une livre de fer, car les plumes sont beaucoup moins denses que le fer.

(In English: "A pound of iron weighs about 0.453 kilograms, while a kilogram of feathers weighs 1 kilogram. So a kilogram of feathers is heavier than a pound of iron. However, it is important to note that a kilogram of feathers would occupy a much larger volume than a pound of iron, because feathers are much less dense than iron.")

## Programmatically interact with Mistral Large

You can also use the AWS Command Line Interface (AWS CLI) and AWS SDKs to make various calls using Amazon Bedrock APIs. The following is sample Python code that interacts with the Amazon Bedrock Runtime API through the AWS SDK. If you specify in the prompt that "You will only respond with a JSON object with the key X, Y, and Z.", you can easily use the JSON-formatted output in downstream tasks:

```python
import boto3
import json

bedrock = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

prompt = """
<s>[INST]You are a summarization system that can provide summaries with associated confidence scores.
In clear and concise language, provide three short summaries of the following essay, along with their confidence scores.
You will only respond with a JSON object with the key Summary and Confidence. Do not provide explanations.[/INST]

# Essay:
The generative artificial intelligence (AI) revolution is in full swing, and customers of all sizes and across industries are taking advantage of this transformative technology to reshape their businesses. From reimagining workflows to make them more intuitive and easier to enhancing decision-making processes through rapid information synthesis, generative AI promises to redefine how we interact with machines. It's been amazing to see the number of companies launching innovative generative AI applications on AWS using Amazon Bedrock.

Siemens is integrating Amazon Bedrock into its low-code development platform Mendix to allow thousands of companies across multiple industries to create and upgrade applications with the power of generative AI. Accenture and Anthropic are collaborating with AWS to help organizations—especially those in highly regulated industries like healthcare, public sector, banking, and insurance—responsibly adopt and scale generative AI technology with Amazon Bedrock. This collaboration will help organizations like the District of Columbia Department of Health speed innovation, improve customer service, and improve productivity, while keeping data private and secure. Amazon Pharmacy is using generative AI to fill prescriptions with speed and accuracy, making customer service faster and more helpful, and making sure that the right quantities of medications are stocked for customers.

To power so many diverse applications, we recognized the need for model diversity and choice for generative AI early on. We know that different models excel in different areas, each with unique strengths tailored to specific use cases, leading us to provide customers with access to multiple state-of-the-art large language models (LLMs) and foundation models (FMs) through a unified service: Amazon Bedrock. By facilitating access to top models from Amazon, Anthropic, AI21 Labs, Cohere, Meta, Mistral AI, and Stability AI, we empower customers to experiment, evaluate, and ultimately select the model that delivers optimal performance for their needs.

Announcing Mistral Large on Amazon Bedrock

Today, we are excited to announce the next step on this journey with an expanded collaboration with Mistral AI. A French startup, Mistral AI has quickly established itself as a pioneering force in the generative AI landscape, known for its focus on portability, transparency, and its cost-effective design requiring fewer computational resources to run. We recently announced the availability of the Mistral 7B and Mixtral 8x7B models on Amazon Bedrock, with weights that customers can inspect and modify. Today, Mistral AI is bringing its latest and most capable model, Mistral Large, to Amazon Bedrock, and is committed to making future models accessible to AWS customers. Mistral AI will also use AI-optimized AWS Trainium and AWS Inferentia chips to build and deploy its future foundation models on Amazon Bedrock, benefitting from the price, performance, scale, and security of AWS.

Along with this announcement, starting today, customers can use Amazon Bedrock in the AWS Europe (Paris) Region. At launch, customers will have access to some of the latest models from Amazon, Anthropic, Cohere, and Mistral AI, expanding their options to support various use cases from text understanding to complex reasoning.

Mistral Large boasts exceptional language understanding and generation capabilities, making it ideal for complex tasks that require reasoning capabilities or ones that are highly specialized, such as synthetic text generation, code generation, Retrieval Augmented Generation (RAG), or agents. For example, customers can build AI agents capable of engaging in articulate conversations, generating nuanced content, and tackling complex reasoning tasks. The model's strengths also extend to coding, with proficiency in code generation, review, and comments across mainstream coding languages. And Mistral Large's exceptional multilingual performance, spanning French, German, Spanish, and Italian, in…
"""
```
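The essay in the prompt is truncated above, and the sample stops before the request is actually sent. As a minimal sketch of the remaining steps: the model ID `mistral.mistral-large-2402-v1:0` and the body fields below follow the Bedrock request format for Mistral models as I understand it, so verify both against the current Amazon Bedrock documentation:

```python
# Sketch: wrap the prompt in the Mistral request body and invoke the model.
# The model ID and body fields are assumptions; confirm them in the Bedrock docs.
body = json.dumps({
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="mistral.mistral-large-2402-v1:0",
    body=body,
)

# Mistral models on Bedrock return generations under the "outputs" key
result = json.loads(response["body"].read())
print(result["outputs"][0]["text"])
```

Because the prompt constrains the model to a JSON object with `Summary` and `Confidence` keys, the printed text can typically be passed straight to `json.loads` for downstream processing.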