The new tokenizer has a total of 200,000 tokens, approximately 25% of which are in non-English languages, according to Deedy Das, an AI investor at Menlo Ventures. He used language filters to count the tokens in different languages and found that, after English, Russian, Arabic, and Vietnamese have the most.
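A count like Das's can be approximated with OpenAI's open-source tiktoken library, which ships the o200k_base encoding used by GPT-4o. The sketch below is a rough approximation, not Das's actual method: the Unicode ranges and the one-script-per-token rule are simplifying assumptions.

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # the encoding GPT-4o uses

# Rough Unicode block ranges for a few scripts (assumed heuristics,
# not Das's exact filters).
SCRIPTS = {
    "cyrillic":   (0x0400, 0x04FF),  # Russian
    "arabic":     (0x0600, 0x06FF),
    "devanagari": (0x0900, 0x097F),  # Hindi
    "bengali":    (0x0980, 0x09FF),
    "cjk":        (0x4E00, 0x9FFF),  # Chinese
}

def classify(token_text: str) -> str:
    """Label a token by the first non-Latin script it contains."""
    for ch in token_text:
        cp = ord(ch)
        for name, (lo, hi) in SCRIPTS.items():
            if lo <= cp <= hi:
                return name
    return "latin/other"

counts: dict[str, int] = {}
for token_id in range(enc.n_vocab):
    try:
        text = enc.decode_single_token_bytes(token_id).decode("utf-8")
    except (KeyError, UnicodeDecodeError):
        continue  # skip special-token gaps and raw byte fragments
    label = classify(text)
    counts[label] = counts.get(label, 0) + 1

print(counts)
```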
“The main impact of the tokenizer, in my view, is the reduction in cost in these languages, rather than a significant improvement in quality,” Das explains. With better and longer tokens in non-English languages, an LLM can analyze prompts more quickly and charge users less for the same answer. “The new tokenizer could lead to an almost fourfold reduction in costs,” he adds.
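The cost argument follows directly from per-token pricing: if the same sentence encodes into a quarter as many tokens, it costs roughly a quarter as much. A minimal comparison of the older cl100k_base encoding (used by GPT-4) against o200k_base, on an arbitrary Hindi sentence, illustrates the effect; the exact ratio varies with the text.

```python
import tiktoken

old = tiktoken.get_encoding("cl100k_base")  # used by GPT-4 and GPT-3.5
new = tiktoken.get_encoding("o200k_base")   # used by GPT-4o

sample = "नमस्ते, आप कैसे हैं?"  # "Hello, how are you?" in Hindi

n_old = len(old.encode(sample))
n_new = len(new.encode(sample))
print(f"cl100k_base: {n_old} tokens | o200k_base: {n_new} tokens")
print(f"~{n_old / n_new:.1f}x fewer tokens, so proportionally cheaper")
```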
Das, who is also proficient in Hindi and Bengali, examined the longest tokens in those languages (a sketch of this kind of inspection follows below). These tokens reflect ongoing discussions in those languages, containing words like “Narendra” or “Pakistan,” along with common English terms like “Prime Minister,” “university,” and “international.” Unlike the Chinese tokens discussed below, they do not exhibit any obvious issues.
This likely reflects the training data in those languages, Das suggests: “My theory is that websites in Hindi and Bengali are quite basic, mainly consisting of news articles. Therefore, I would expect this outcome. There are fewer spam bots and porn websites in these languages, with English content prevailing.”
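The “longest tokens” inspection that Das describes, and that researchers later applied to Chinese, can be sketched the same way: decode the whole vocabulary and sort the entries of a given script by length. The code-point ranges below are again assumed heuristics, not anyone's published filters.

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

def longest_tokens(lo: int, hi: int, top: int = 10) -> list[str]:
    """Return the longest vocabulary entries whose characters fall
    entirely within the code-point range [lo, hi]."""
    hits = []
    for token_id in range(enc.n_vocab):
        try:
            text = enc.decode_single_token_bytes(token_id).decode("utf-8")
        except (KeyError, UnicodeDecodeError):
            continue
        stripped = text.strip()
        if stripped and all(lo <= ord(c) <= hi for c in stripped):
            hits.append(stripped)
    return sorted(hits, key=len, reverse=True)[:top]

print(longest_tokens(0x0900, 0x097F))  # Devanagari (Hindi)
print(longest_tokens(0x0980, 0x09FF))  # Bengali
print(longest_tokens(0x4E00, 0x9FFF))  # CJK (Chinese)
```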
Polluted data and a lack of cleaning
However, the situation is starkly different in Chinese. According to several researchers who have examined the new library of tokens for GPT-4o, the longest Chinese tokens consist mostly of spam phrases related to pornography, gambling, and scams. Even shorter tokens of just three Chinese characters are dominated by the same topics.
“The issue is evident: the corpus used to train the tokenizer is not clean. While the English tokens appear to be fine, the Chinese tokens are not,” notes Cai from Princeton University. While it’s not uncommon for language models to encounter spam during data collection, efforts are typically made to clean the data before use. “It’s possible that proper data cleaning was not conducted for Chinese,” he suggests.
The content of these Chinese tokens suggests a specific source of pollution: spam operations that embed Chinese-language promotional text in unrelated websites, whether in Chinese or other languages, to spread their messages.
These messages often advertise pornography videos and gambling websites, which could be legitimate businesses or scams. The text is embedded in content farms or, occasionally, legitimate sites so that it evades spam filters and surfaces in random searches. For instance, Google indexed a search results page on a US National Institutes of Health website that listed a Chinese porn site. The same site name also appeared in at least five Chinese tokens in GPT-4o.