MIT Might Not Really Be Interested in Ethics as Its Innovations Scream Racist AI!
Whether MIT is supporting ethical AI or racist AI, only time and further experiments will tell.
MIT has claimed to build a new artificial intelligence (AI) model that can identify a person’s race from medical images alone. The model is framed as ethical AI for predicting a person’s self-reported race, but to many observers it reads instead as racist AI. Deep learning models trained to predict a person’s race from medical images across different imaging modalities are gaining attention.
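To make the approach concrete, here is a minimal sketch of the kind of pipeline described above: a small convolutional network trained with supervised labels on image tensors. Everything here is an illustrative assumption (the architecture, the four-way label set, and the random stand-in data), not MIT’s actual model or dataset.

```python
# Illustrative sketch only. The architecture, NUM_CLASSES label set,
# and random tensors below are assumptions standing in for a real
# medical-imaging dataset; this is NOT MIT's actual code.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # hypothetical number of self-reported race categories


class TinyXrayClassifier(nn.Module):
    """A toy CNN over 1-channel (grayscale) X-ray-like images."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a 32-dim feature vector
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


model = TinyXrayClassifier()
batch = torch.randn(8, 1, 224, 224)   # stand-in for preprocessed X-rays
labels = torch.randint(0, NUM_CLASSES, (8,))  # stand-in self-reported labels

logits = model(batch)                 # shape: (8, NUM_CLASSES)
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()                       # gradients for one training step
```

The point of the sketch is only that the training signal is the label itself: the network is rewarded for matching whatever labels the dataset contains, which is exactly why dataset shortcuts and label quality matter so much in this debate.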
AI models are known for predicting almost anything humans ask of them, such as market trends, concepts, and many other things. But there is a difference between prediction and identification in AI models. This MIT AI model has reportedly achieved nearly 99% accuracy on labeled data in its assessments.
The main concern is that even if the MIT AI model is nearly 99% accurate at identifying a person’s race from medical images, misinterpretation could still lead to it being branded racist AI. Millions of images are available to serve as the model’s training database, but no database contains an image of every living human. It is very difficult to verify what this AI system will claim based on its training data, because MIT cannot visit every person in the world to confirm the model’s assessment of a particular image.
AI models can be taught to identify the labels in a database, or be tricked by those labels themselves. In that way, a model can slip from ethical to racist: rather than genuinely determining race, it may simply be predicting the labels baked into a specific dataset.
That being said, innovating such a disruptive form of tech could be very harmful to the human race. What if artificial intelligence models are miseducated, with algorithms trained to identify the wrong race? Removing sensitive features from the underlying data may be a viable tweak, but it could also strip out essential information.
All of this suggests that MIT is indirectly launching a racist AI rather than being ethical about it. MIT claims the model can use imaging data such as chest X-rays, chest CT scans, and limb X-rays to identify races such as Black, white, and Asian. Clearly, the ethics of innovating tech is the part the team of researchers at MIT has missed!