
China’s Dictators Created an AI Art Tool, but It Hides Symbols of Revolution!
China’s new AI art tool censors terms associated with political activism and revolution
While AI experts disagree on many things, they all concur on one: artificial intelligence technology is revolutionizing our society and businesses. Today, we are surrounded by a technology-driven world that produces new inventions every day. The Chinese AI art tool called ERNIE-ViLG is one of them.
AI image generators are all the rage now, and their many potential benefits can already be witnessed among emerging artists.
With the growing popularity of DALL-E 2 and many other AI art tools, the future of art remains exciting. Now, the Chinese tech giant Baidu has developed its own AI called ERNIE-ViLG. MIT Technology Review claims that ERNIE-ViLG can make better anime art than its Western counterparts.
But this new AI image generator has some other issues. For instance, the AI art tool will not display Tiananmen Square, a key symbol of political activism in China.
In 2021, Chinese tech giant Baidu unveiled its image synthesis model, ERNIE-ViLG, and while testing public demos, many users noticed that it censors political phrases. Following MIT Technology Review’s comprehensive report, we ran our own test of an ERNIE-ViLG demo hosted on Hugging Face and confirmed that phrases such as “democracy in China” and “Chinese flag” fail to generate imagery. Instead, they produce a Chinese-language warning that translates roughly to, “The input content does not meet the relevant rules, please adjust and try again!”
Restrictions in AI art tools aren’t unique to China, although elsewhere they have taken a different form than state censorship. In the case of DALL-E 2, American firm OpenAI’s content policy restricts certain kinds of content, such as nudity, violence, and political material. But that is a voluntary choice on OpenAI’s part, not the result of pressure from the US government. Midjourney also voluntarily filters some content by keyword. Stable Diffusion, from London-based Stability AI, comes with a built-in “Safety Filter” that can be disabled thanks to its open-source nature, so more or less anything goes with that model, depending on where you run it. Notably, Stability AI head Emad Mostaque has spoken about wanting to keep government or corporate censorship out of image synthesis models. “I think folk should be free to do what they think best in making these models and services,” he wrote in a Reddit AMA answer last week.
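To make that opt-out concrete, here is a minimal sketch of how a user can drop Stable Diffusion’s filter. It assumes the Hugging Face diffusers library and the public runwayml/stable-diffusion-v1-5 checkpoint; exact arguments can vary between library versions.

```python
# Minimal sketch: loading Stable Diffusion with and without its safety filter.
# Assumes the Hugging Face diffusers library and the public
# runwayml/stable-diffusion-v1-5 checkpoint; details vary by library version.
from diffusers import StableDiffusionPipeline

# Default load: the pipeline includes a safety_checker component that blanks
# out any generated image it flags as violating the filter.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Because the code is open source, the filter is just a swappable component:
pipe.safety_checker = None  # disables post-generation filtering entirely

image = pipe("a watercolor landscape").images[0]
image.save("landscape.png")
```

This is exactly the difference Mostaque points to: in an open-source model, the filter is a local setting the user controls, not a server-side rule the vendor or a government enforces.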
China is a heavyweight in censorship. Anything you comment on social media may be held against you in the country, especially if you’re critical of China’s top leadership and policies. So, when the demo of the AI art tool arrived in August, the same censorship plagued the software. For starters, no explicit mentions of political leaders are permitted, and politically sensitive words are labelled “sensitive” and blocked from generating any result. Such censorship is not new to AI, but the Chinese case stands out because this AI functions much like local social media apps, where content is deliberately moderated by Chinese authorities to quell dissent at the root. Even with its censorship problems, the AI art tool itself is very capable: tech giant Baidu trained the 10-billion-parameter model on a data set of 145 million image-text pairs.
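The behavior the demo exhibits is consistent with a simple keyword blocklist applied before any image is generated. The actual terms and matching logic inside ERNIE-ViLG are not public, so the sketch below is purely illustrative; BLOCKED_TERMS and check_prompt are hypothetical names, seeded only with phrases reported to fail.

```python
# Illustrative sketch only: a naive keyword blocklist of the kind the demo's
# behavior suggests. ERNIE-ViLG's real filter terms and matching logic are
# not public; BLOCKED_TERMS and check_prompt() here are hypothetical.
BLOCKED_TERMS = {"democracy in china", "chinese flag", "revolution"}

def check_prompt(prompt: str) -> str:
    """Return the demo's rejection notice if the prompt contains a blocked term."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Mirrors the translated warning the demo returns.
        return "The input content does not meet the relevant rules, please adjust and try again!"
    return "OK: prompt passed the filter; image generation would proceed."

print(check_prompt("a painting of the Chinese flag"))  # rejected
print(check_prompt("a painting of a mountain lake"))   # passes
```

Filtering at the prompt stage like this quells dissent “at the root”: the blocked idea never reaches the model at all, which is the same pattern Chinese social media moderation follows.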
As MIT Tech Review noted, the major difference between ERNIE-ViLG and Western tools is that the former can understand prompts written in Chinese natively, without the mistakes its Western counterparts make. Just don’t use names like Xi Jinping or Mao Zedong, or words like “revolution”, in your prompts, for they will be censored.