
Microsoft Finally Realizes Its Tech Autocracy Has Gotten Out of Hand!
Calling emotion recognition unethical, Microsoft backs away from toxic tech domination
Two issues are central to this article. The first is the declaration that AI-powered ‘emotion recognition’ is “unscientific”. The second is the blow that this declaration, in turn, has dealt to Microsoft and its tech autocracy. One can even argue that the declaration marks a major turning point in the tech world.
For years, Microsoft has consistently been at the forefront of the commercial use of cutting-edge technology like AI. One of the latest feathers in its cap is its AI-powered facial analysis tools, enhanced by ‘emotion recognition’ algorithms that claim to identify a subject’s emotions from pictures and videos.
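For a sense of what this capability looked like in practice, here is a hedged Python sketch against Microsoft’s Azure Face SDK (azure-cognitiveservices-vision-face), whose emotion attribute is the feature being retired; the endpoint, key, and image URL below are placeholders, and exact signatures varied across SDK versions.

```python
# Hedged sketch: requesting emotion scores from the Azure Face API via the
# azure-cognitiveservices-vision-face SDK. Endpoint, key, and image URL are
# placeholders; the "emotion" attribute is the capability Microsoft retired.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",
    return_face_attributes=["emotion"],
)

for face in faces:
    # `emotion` holds confidence scores for a fixed set of categories:
    # anger, contempt, disgust, fear, happiness, neutral, sadness, surprise.
    print(face.face_attributes.emotion.as_dict())
```

Note the fixed category set: whatever the subject is actually feeling, the service can only ever answer with one of eight predefined labels.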
Despite these advancements, tech critics have not been impressed with this tech autocracy in facial analysis tools. The problem arises at two levels. First, the assumption that facial expressions are universal across populations lacks scientific validity: there is no sound basis for treating specific facial movements, such as the stretching or twisting of the eyes, eyebrows, and mouth, as fixed markers of anger, happiness, fear, or doubt. Second, the fixed formula by which external expressions are mapped to internal feelings is considered grossly “unscientific” by critics, many of them psychologists, who argue that such codification of expressions ignores the variation and nuance of real expressions. Several studies show, on the basis of scientific research, that how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation.
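To make the codification critique concrete, here is a deliberately simplistic and entirely hypothetical Python sketch of the kind of fixed expression-to-emotion lookup that critics object to; the marker names and the mapping are invented for illustration, not drawn from any real system.

```python
# A hypothetical "emotion codebook": fixed facial-movement markers mapped
# one-to-one onto inner states -- exactly the rigid codification that
# critics, many of them psychologists, call unscientific.
EMOTION_CODEBOOK = {
    frozenset({"brows_lowered", "lips_pressed"}): "anger",
    frozenset({"lip_corners_raised"}): "happiness",
    frozenset({"brows_raised", "eyes_widened"}): "fear",
    frozenset({"nose_wrinkled", "upper_lip_raised"}): "disgust",
}

def classify(observed: set[str]) -> str:
    """Return the first emotion whose markers all appear in `observed`."""
    for markers, emotion in EMOTION_CODEBOOK.items():
        if markers <= observed:
            return emotion
    return "unknown"

# The scheme's weakness is built in: the same outward movements can signal
# very different inner states across cultures, situations, and individuals,
# yet the codebook admits exactly one reading per pattern.
print(classify({"lip_corners_raised", "eyes_widened"}))  # always "happiness"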
Beyond the promotion of ‘emotion recognition’ lies the commercial interest of companies like Microsoft, since it helps them sell software for purposes as different as the surveillance of ‘suspicious behaviour’ and the screening of ‘interested’, ‘indifferent’, or ‘callous’ candidates in job interviews. What such commercial ventures never bring to the fore is that, however strong a case is made for the sophistication of the technology on the basis of advanced machine learning, and however loudly the success rate of facial recognition or facial profiling is proclaimed, ‘emotion recognition’ is more complex, with a much greater margin of error in analysing facial gestures and movements. Experts note that it is comparatively easier to detect behaviour than to detect a state of mind or intent.
The automation of human behaviour and feelings has always raised the question of whether technology should serve human beings or be used to control them.
The ‘emotion recognition’ claim further sharpened the questions surrounding facial analysis tools. The denial of scientific recognition to this high-profile claim is a setback for a dominant trend in the tech world: using cutting-edge technology like AI to automate human emotions, and thereby to prove that machines are going to be as good as human beings.
At least for now, it is clear that such claims may have commercial grounds, but no scientific basis at all. What Microsoft has learned the hard way about tech autocracy is also a lesson for big tech firms as a whole.