Resilience is crucial for any workload, and generative AI workloads bring engineering considerations of their own. In this post, we walk through the components of a generative AI workload and the resilience factors to weigh for each.
Full stack generative AI involves more than just the models themselves. It requires a combination of people, skills, and tools from various domains. For example, in addition to model builders and model integrators, there is now a need for model tuners. The traditional MLOps stack may not fully cover the experiment tracking and observability requirements for generative AI workloads.
One important aspect of generative AI is agent reasoning, often paired with Retrieval Augmented Generation (RAG), a pattern that grounds responses in external knowledge sources to make them more accurate and contextually relevant. When using RAG, set appropriate timeouts on model calls to protect the customer experience, and validate prompt input data and size so requests stay within the character limits the model defines. Prompt engineering should also include persisting prompts to a reliable data store so they can be recovered in a disaster.
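As a minimal sketch of input validation and client-side timeouts, assuming Amazon Bedrock as the model endpoint and an illustrative character limit (check your model's actual documented quota):

```python
import boto3
from botocore.config import Config

# Illustrative limit; the real quota depends on the model you call.
MAX_PROMPT_CHARS = 20_000

# Tight connect/read timeouts fail fast instead of leaving the
# customer waiting on a slow model invocation.
bedrock = boto3.client(
    "bedrock-runtime",
    config=Config(connect_timeout=5, read_timeout=30,
                  retries={"max_attempts": 2, "mode": "adaptive"}),
)

def validate_prompt(prompt: str) -> str:
    """Reject empty or oversized prompts before spending an inference call."""
    if not prompt.strip():
        raise ValueError("Prompt is empty")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(f"Prompt exceeds {MAX_PROMPT_CHARS} characters")
    return prompt
```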
Data pipelines are necessary when providing contextual data to the foundation model using the RAG pattern. These pipelines ingest the source data, convert it to embedding vectors, and store them in a vector database. In batch pipelines, challenges may arise when ingesting data from different sources such as PDF documents, CRM tools, or existing knowledge bases. Throttling, error handling, and retry logic should be implemented to account for potential issues. The embedding model used in the pipeline can also be a performance bottleneck, requiring careful workload management.
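A hedged sketch of the retry logic such a pipeline might wrap around its embedding step; `embed_fn` is a placeholder for whichever embedding client your pipeline actually uses:

```python
import random
import time

def embed_with_retry(embed_fn, text: str, max_attempts: int = 5):
    """Call an embedding function with exponential backoff and jitter.

    embed_fn stands in for your embedding client (a Bedrock model,
    a self-hosted endpoint, etc.); in real code, narrow the except
    clause to that client's throttling exception.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return embed_fn(text)
        except Exception:
            if attempt == max_attempts:
                raise
            # Backoff with jitter spreads retries out so a throttled
            # embedding model is not hammered by synchronized clients.
            time.sleep(min(2 ** attempt, 30) + random.random())
```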
Vector databases serve as storage systems for embedding vectors and provide similarity search capabilities. Factors to consider when choosing a vector database include latency, scalability, and high availability. The database should be able to handle high or unpredictable loads and replicate data for disaster recovery purposes.
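To make the similarity-search requirement concrete, here is a brute-force cosine-similarity lookup in NumPy; a vector database performs the same computation at scale using approximate nearest-neighbor indexes:

```python
import numpy as np

def top_k_similar(query_vec: np.ndarray, index_vecs: np.ndarray, k: int = 5):
    """Return indices and scores of the k stored vectors most similar to the query.

    Brute force is fine for a sketch; production vector databases trade
    exactness for speed with ANN indexes (HNSW, IVF, and similar).
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per stored vector
    top = np.argsort(scores)[::-1][:k]  # highest scores first
    return top, scores[top]
```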
The application tier in a generative AI solution requires special consideration. Foundation models often run on large GPU instances and can respond with high or variable latency, so best practices such as rate limiting, backoff and retry, and load shedding should be implemented to protect both the model and its callers. Security posture also matters when integrating agents, tools, and plugins with other systems: follow least-privilege access principles and treat prompts that originate from external systems as untrusted input.
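A minimal sketch of one of those practices, a token-bucket rate limiter that sheds load once the bucket is empty (the rate and burst values are illustrative):

```python
import threading
import time

class TokenBucket:
    """Simple token-bucket rate limiter.

    When the bucket is empty, the caller sheds the request (for example,
    by returning HTTP 429) instead of queueing work against an already
    saturated GPU fleet.
    """
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        with self.lock:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # shed this request
```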
Capacity planning is crucial for both inference and model training data pipelines. Generative AI workloads have specific compute (often GPU) and memory requirements, and the instance types that support them can be scarce. Reserving or pre-provisioning those instance types helps ensure they are available when needed.
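If you reserve capacity with EC2 On-Demand Capacity Reservations, the call looks roughly like this; the instance type, Availability Zone, and count are placeholders for your own sizing:

```python
import boto3

ec2 = boto3.client("ec2")

# Reserve GPU capacity ahead of time so instances are available
# when the workload needs them. Values below are illustrative.
response = ec2.create_capacity_reservation(
    InstanceType="p4d.24xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=2,
)
print(response["CapacityReservation"]["CapacityReservationId"])
```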
Observability is essential for monitoring and troubleshooting generative AI workloads. In addition to traditional resource metrics, GPU utilization should be closely monitored, since exhausting GPU memory or compute can destabilize the workload. Tracing the flow of calls between agents and tools helps identify performance issues and new error scenarios, and services such as Amazon GuardDuty can detect security risks and threats.
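One way to get GPU utilization into your existing dashboards is to publish it as a custom CloudWatch metric. This sketch assumes NVIDIA GPUs and the nvidia-ml-py (pynvml) bindings; the metric namespace is illustrative:

```python
import boto3
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
cloudwatch = boto3.client("cloudwatch")

def publish_gpu_utilization(namespace: str = "GenAI/Workload") -> None:
    """Sample per-GPU utilization and publish it as a custom CloudWatch metric."""
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        cloudwatch.put_metric_data(
            Namespace=namespace,
            MetricData=[{
                "MetricName": "GPUUtilization",
                "Dimensions": [{"Name": "GpuIndex", "Value": str(i)}],
                "Value": float(util.gpu),
                "Unit": "Percent",
            }],
        )
```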
Having a disaster recovery strategy is a must for any workload, including generative AI workloads. Understanding the failure modes specific to your workload will guide your strategy. If you are using AWS managed services, ensure that they are available in your recovery AWS Region. Currently, AWS services like Amazon Bedrock and SageMaker do not natively support data replication across AWS Regions, so alternative solutions should be considered.
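As one alternative, you can replicate the source data feeding your RAG pipeline yourself, for example with S3 Cross-Region Replication, and rebuild embeddings in the recovery Region. A sketch, with bucket names and the role ARN as placeholders (both buckets must have versioning enabled):

```python
import boto3

s3 = boto3.client("s3")

# Replicate the RAG source-data bucket to a bucket in the recovery Region.
# Bucket names and the IAM role ARN below are illustrative.
s3.put_bucket_replication(
    Bucket="genai-source-data",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter replicates every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::genai-source-data-replica"},
        }],
    },
)
```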