
Why Are ML Chatbots Afraid of Conversational Nuances and Subtleties?


Chatbots are now part and parcel of online conversations: they are AI-powered programmes specifically designed to simulate conversation with human users. In an era when time and speed drive competition among companies, ML chatbots are widely used to deliver efficient and effective services. By enabling interaction between machines and human users, conversational chatbots play a vital role in providing customer service support and enhancing customer engagement. However, even though ML chatbots are products of highly sophisticated machine learning, they are not free from limitations. One major limitation that has gradually become evident is their difficulty in dealing with idiom, metaphor, simile, rhetoric, sarcasm, and humour. The problem surfaces in text-based conversations, where these chatbots seem to fear conversational nuance.

It has been found that while chatbots are proficient at producing grammatically correct constructions, they tend to falter when faced with nuance and subtlety. This is a problem because human users frequently inject idioms, metaphors, similes, rhetoric, sarcasm, and humour into their conversations. That is only natural: human communication is not mechanical like an ML chatbot's, nor is it linear or entirely predictable. The numerical and mathematical formulas that drive conversational chatbots, in which words assume the form of numbers, fall short of capturing body language such as facial expressions or hand movements. Nor can they comprehend the specific context that steers a conversation in certain directions, often with sudden if not abrupt shifts. The 'culprit' here is the meaning of expressions, which chatbots must negotiate heavily. Thus, machine learning algorithms leave ML chatbots afraid of the nuances and subtleties of human communication.
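The point about words assuming the form of numbers can be made concrete with a toy example. The lexicon, sentences, and scoring rule below are hypothetical illustrations, not anything from the research discussed here: a bag-of-words sentiment scorer sees only isolated word counts, so a sarcastic complaint that happens to contain positive words still scores as positive.

```python
# Toy illustration (hypothetical lexicon and sentences): a bag-of-words
# scorer treats words as isolated numbers, so it cannot see the sarcasm
# that a human reader catches from context.

POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"crash", "bug", "hate"}

def naive_sentiment(text: str) -> int:
    """Score = (# positive words) - (# negative words)."""
    words = text.lower().replace(",", "").replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

literal = "I love this update, it is wonderful."
sarcastic = "Oh great, the app crashed again. I just love losing my work."

print(naive_sentiment(literal))    # positive, as a human would agree
print(naive_sentiment(sarcastic))  # also positive: the sarcasm is invisible
```

Both sentences score identically because 'great' and 'love' count the same regardless of context, which is exactly the kind of nuance the paragraph above describes.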

The issue of figurative language occupies a central role in this development and has accordingly received much attention in contemporary research. In a joint paper presented at the Conference on Empirical Methods in Natural Language Processing in October 2021, four researchers investigate the robustness of dialog models against popular figurative language and reveal "large gaps". These gaps, they show, substantially affect the performance of the machine learning models concerned: performance declines by 10 to 20 per cent compared with straightforward, easily understandable conversations. They make a plea for "future research in dialog modeling to separately analyze and report results on figurative language in order to better test model capabilities relevant to real-world use". They go on to prescribe "lightweight solutions" that make existing machine learning models more robust to figurative language by using an external resource to translate the figurative language into literal (non-figurative) forms while preserving the meaning as far as possible.
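The "lightweight solution" the researchers describe, translating figurative language into a literal form before the dialog model sees it, can be sketched as a simple preprocessing step. This is a minimal sketch only: the idiom dictionary below is a hypothetical stand-in for the external resource the paper relies on, not its actual method or data.

```python
# Sketch of figurative-to-literal preprocessing: rewrite known figurative
# phrases into literal paraphrases before passing the text to a dialog model.
# The dictionary below is a hypothetical placeholder, not the paper's resource.

IDIOM_TO_LITERAL = {
    "kick the bucket": "die",
    "break the ice": "start a conversation",
    "under the weather": "unwell",
}

def literalize(utterance: str) -> str:
    """Replace known figurative phrases with literal equivalents."""
    text = utterance.lower()
    for idiom, literal in IDIOM_TO_LITERAL.items():
        text = text.replace(idiom, literal)
    return text

print(literalize("I'm feeling a bit under the weather today."))
# → "i'm feeling a bit unwell today."
```

A real system would need far broader coverage and context-sensitive matching, since many idioms also have literal readings; the point here is only the shape of the pipeline.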

Various responses have emerged to the problem facing conversational chatbots. It has been observed that 'understanding' is not something artificial intelligence, machine learning, or neural networks are comfortable with. There is also a view that the problem is not merely one of interpretation but of judgment, of which only human agents are capable. A more sceptical view, prompted by the problem under consideration here, is that little effective progress has been made in computer intelligence since ELIZA in the 1960s. There is also the interesting observation that humour and sarcasm are "not calculable". Some have been sympathetic to the computer, noting that not even human beings fully understand conversational nuances and subtleties.

Notwithstanding the growing awareness and the emerging efforts to address the core problem of figurative language, researchers more or less agree that more fruitful solutions will require experiments with larger models. Until then, it remains fair to observe that ML chatbots are afraid of conversational nuances and subtleties.