The Trustworthy Language Model draws on multiple techniques to calculate its scores. First, each query submitted to the tool is sent to one or more large language models. According to Northcutt, the tool works with any model, whether closed-source, like OpenAI’s GPT series, or open-source, like DBRX from Databricks. If the responses from these models agree closely with one another, that pushes the score up.
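To make the idea concrete, here is a rough sketch of how such cross-model agreement could be scored in Python. The `ask_model` helper is a hypothetical placeholder for a call to whichever provider is used, and the simple string-similarity average is only a stand-in for Cleanlab’s actual scoring.

```python
from itertools import combinations
from difflib import SequenceMatcher


def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM (wire up to any provider)."""
    raise NotImplementedError


def agreement_score(prompt: str, models: list[str]) -> float:
    """Rough consensus score: average pairwise similarity of the models' answers.

    This is only an illustration; Cleanlab's real scoring is more sophisticated.
    """
    answers = [ask_model(m, prompt) for m in models]
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    similarities = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(similarities) / len(similarities)
```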
The Trustworthy Language Model also sends variations of the original query to each model, swapping in words that have the same meaning. If the responses to these synonymous queries are consistent, that too raises the score. “We modify them in various ways to obtain different outputs and assess their agreement,” Northcutt explains.
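A sketch of that paraphrase check might look like the following. Both `paraphrase` and `ask_model` are hypothetical placeholders, and averaging string similarity against the original answer is a simplification rather than Cleanlab’s actual method.

```python
from difflib import SequenceMatcher


def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM."""
    raise NotImplementedError


def paraphrase(query: str, n: int = 3) -> list[str]:
    """Hypothetical helper: produce n reworded versions of the query,
    e.g. by asking a model to swap in synonyms."""
    raise NotImplementedError


def consistency_score(query: str, model: str) -> float:
    """Ask the same model the original query plus reworded variants and
    measure how closely the follow-up answers match the original answer."""
    prompts = [query] + paraphrase(query)
    answers = [ask_model(model, p) for p in prompts]
    baseline = answers[0]
    similarities = [SequenceMatcher(None, baseline, a).ratio() for a in answers[1:]]
    return sum(similarities) / len(similarities)
```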
The tool can also get multiple models to interact, passing their responses back and forth and weighing one another’s answers. These exchanges are tracked and factored into the score as well.
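One simplified way to picture that exchange is a single “debate” round in which each model sees its peers’ draft answers before responding again. The sketch below reuses the hypothetical `ask_model` helper and is not Cleanlab’s implementation.

```python
def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM."""
    raise NotImplementedError


def debate_round(prompt: str, models: list[str]) -> dict[str, str]:
    """One round of cross-review: each model sees the others' draft answers
    and may revise its own. A simplified illustration of the idea."""
    drafts = {m: ask_model(m, prompt) for m in models}
    revised = {}
    for m in models:
        peers = "\n".join(f"- {answer}" for other, answer in drafts.items() if other != m)
        follow_up = (
            f"{prompt}\n\nOther assistants answered:\n{peers}\n"
            "Considering these answers, give your final answer."
        )
        revised[m] = ask_model(m, follow_up)
    return revised
```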
Nick McKenna, a computer scientist at Microsoft Research, is hopeful about the approach but acknowledges that it is unlikely to be perfect, noting that model hallucinations can surface in subtle ways that are hard to catch.
Across numerous tests with different large language models, Cleanlab has shown that its trustworthiness scores correlate well with the accuracy of model responses: scores close to 1 line up with correct responses, while scores near 0 line up with incorrect ones. Using the Trustworthy Language Model together with GPT-4 has also been shown to produce more reliable responses than using GPT-4 alone.
Large language models generate text by predicting the most probable next word in a sequence. In future versions of the tool, Cleanlab plans to make its scores more accurate by drawing on the probabilities that models use to make those predictions. It also wants access to the numerical values a model assigns to each word in its vocabulary, which are used to calculate those probabilities, a level of detail offered by some platforms, such as Amazon’s Bedrock.
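With an open-weights model, those per-word probabilities are directly visible. The snippet below is a minimal illustration using the Hugging Face transformers library, with GPT-2 standing in as an example model: it prints the probability assigned to each of the top candidate next words.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model only; any open-weights causal language model exposes its logits.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the entire vocabulary for the next word.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob:.3f}")
```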
Cleanlab tested its approach on data provided by Berkeley Research Group, where the Trustworthy Language Model significantly reduced the work of identifying health-care compliance issues in corporate documents. Similarly, a large bank (unnamed, but a competitor to Goldman Sachs) saw a substantial drop in the number of documents that had to be reviewed by hand for references to insurance claims.