Developers can enhance AI models by incorporating guardrails and adjusting user prompts to promote inclusivity. OpenAI has adopted this approach, as demonstrated by my interaction with DALL-E 3 via ChatGPT. When I requested a cartoon of a queer couple in the Castro District, the AI not only fulfilled the request but also added details about gender, race, and background that I never asked for. The expanded prompt produced a vivid scene: a Caucasian woman with red hair and a Black man enjoying a night out in a diverse, vibrant setting.
While this approach can be beneficial, it can backfire when implemented poorly. Google faced backlash when its Gemini model, rewriting prompts behind the scenes, generated historically inaccurate images such as Black Nazi soldiers. The incident highlights how difficult it is to balance inclusivity with accuracy in AI development.
Even with improved data and safeguards, AI models may struggle to accurately represent the complexity of human experiences. William Agnew, a Queer in AI organizer, emphasizes the risk of perpetuating stereotypes and limiting the understanding of queer communities through AI-generated content.
Despite these challenges, generative AI continues to evolve rapidly. OpenAI's Sora model, for example, produces visually realistic video clips from text prompts. I explored how Sora represents queer individuals by requesting various scenarios, which yielded distinctive and sometimes imperfect video clips.
For all their flaws, Sora's videos offer a fascinating glimpse into the capabilities and limits of AI representation, especially when depicting nonbinary individuals in unconventional settings.
Feedback from members of Queer in AI underscores the importance of diverse and accurate representation in AI-generated content. As the technology advances, ensuring inclusive and respectful portrayals of marginalized communities remains a crucial challenge.