AI Becomes an Invisible Cage! China’s New Prosecution System is Destructive and Unjust
The Chinese government is using AI to target citizens for prosecution, even before a crime is committed
Any good thing can be used and misused, and AI is no exception. Matters become more complicated when the misuse is aimed at the future. China, which has long pursued a deliberate strategy of harnessing AI's power to its advantage, has become a classic case. The Chinese government is preparing to deploy an AI-driven system to prosecute people for crimes they have yet to commit, built on an AI model operating within the Chinese tech market. Although there is no official confirmation, leaked documents in the possession of The New York Times strongly support the claim.
Surveillance has long been a preoccupation of the Chinese government, officially defended on the grounds that it protects public interest and public order in a highly regimented country. China has for years been comfortable with profiling citizens in everyday life, even though such practices provoke protests in many other countries. Now it is pursuing a more intensive approach: building a database of 2.5 billion facial images. With AI as the driving force of the Chinese tech market, the government believes the system can preempt criminal activity by identifying gatherings of certain people in a specific location and dispatching police to stop them before any crime occurs.
Human rights activists have aptly described the phenomenon as an "invisible cage." The description fits because the AI-built cage that Chinese citizens live inside is not visible to them; it becomes apparent only when government officials single them out as "potential criminals." No crime need be committed: it is enough to prosecute people if AI-driven profiling leads state officials to believe a crime is "going to be committed." For its part, Megvii, the Chinese tech start-up behind the system, has offered assurances that the preemptive approach is "benevolent." This assurance does not convince human rights activists in the least; they predict the system will intensify the repression and discrimination Chinese citizens already face. China has repeatedly come under critical scrutiny, not only for repressing those it considers political dissidents but also for targeting people whose ethnic, religious, and racial identities the government views with suspicion. It has already deployed an advanced AI-powered mechanism to detect people who watch pornography, which is banned in the country.
In any case, many AI researchers are candid enough to point out that no AI model is 100 percent accurate in every case it encounters. The Chinese government and the company sponsoring the system apparently think otherwise. Beyond such competing claims, what remains most important, and most disturbing, is that ordinary people will be subject to greater surveillance and a greater risk of prosecution by a technological process they are not even supposed to know exists. Put plainly, imposing a new AI-driven prosecution system is a frightening proposition. It is a destructive act that makes a mockery of justice.