
Wish to Make Some Bad Life Choices? Ask Reddit’s AI Clone for Advice

If you wish to make some really bad life choices, you can definitely ask Reddit’s AI clone for advice

A necessary disclaimer: first, do not use Reddit’s AI clone to settle real ethical issues. Second, the results are interesting enough that you may want to share them anyway; they read like some of the worst AITA stories on Reddit, generated by AI-powered simulations of the forum.

Are You The Asshole? (AYTA), as the name implies, is built to mimic Reddit’s crowdsourced advice forum r/AmItheAsshole (AITA). Created with the support of Digital Void, the site lets you enter a scenario and ask for advice, then generates a series of feedback posts responding to it. The feedback is very good at capturing the style of real human-generated responses, but it carries the strange, slightly off-kilter bias that many AI language models produce. Fed the plot of the classic science fiction novel Roadside Picnic, for instance, the bots drift toward judgments that don’t exactly match the prompt, quite apart from the strangeness of the premise itself, yet the writing and content are surprisingly convincing.

The first two bots were even more confused by last year’s contentious “Bad Art Friend” debate, though, to be fair, so were a lot of humans. You can find more examples in a subreddit dedicated to the site. AYTA is actually the result of three different language models, each trained on a different subset of data. As explained on the website, the creators captured approximately 100,000 AITA posts since 2020, along with the comments associated with them, and then trained a custom text-generation system on different slices of that data. One bot was fed a set of comments concluding that the original poster was NTA (Not The Asshole), one was given posts arguing the opposite, and the third got a combination of data that included both previous sets plus comments declaring that everyone involved, or no one at all, was in the wrong. Oddly enough, a few years ago someone created an all-bot version of Reddit whose prompts had a much more surreal effect.
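To make that split concrete, here is a minimal Python sketch of this kind of data partitioning. Everything in it is hypothetical: the record layout, the sample posts, and the keyword-matching rule are illustrative assumptions, not the project’s actual pipeline.

```python
# Hypothetical sketch of splitting (post, comment) pairs into the three
# training subsets described above; not AYTA's actual code.

def verdict_of(comment):
    """Return the AITA verdict tag a comment opens with, if any."""
    head = comment.strip().upper()
    for tag in ("NTA", "YTA", "ESH", "NAH"):
        if head.startswith(tag):
            return tag
    return None

def split_corpus(pairs):
    """Partition pairs into NTA-only, YTA-only, and mixed subsets."""
    subsets = {"nta_only": [], "yta_only": [], "mixed": []}
    for post, comment in pairs:
        tag = verdict_of(comment)
        if tag == "NTA":
            subsets["nta_only"].append((post, comment))
        elif tag == "YTA":
            subsets["yta_only"].append((post, comment))
        if tag is not None:                           # the mixed set keeps
            subsets["mixed"].append((post, comment))  # every verdict, ESH/NAH too
    return subsets

# Hypothetical sample records:
corpus = [
    ("AITA for eating my roommate's leftovers?", "YTA, it wasn't yours."),
    ("AITA for leaving a party early?", "NTA, you owe nobody your time."),
    ("AITA for arguing with my brother?", "ESH, you both escalated."),
]
print({name: len(rows) for name, rows in split_corpus(corpus).items()})
# {'nta_only': 1, 'yta_only': 1, 'mixed': 3}
```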

AYTA is similar to an earlier tool called Ask Delphi, which also used AI to analyze the morality of user prompts. It too was trained on AITA posts, but paired with responses from hired respondents rather than Redditors. The framing of the two systems, however, is quite different. Ask Delphi implicitly highlighted the many flaws of using AI for moral judgment, particularly how often it responds to a post’s tone rather than its content. AYTA is more explicit about the absurdity: for one thing, it is not a dispassionate referee but mimics the snappy style of Reddit commenters; for another, it doesn’t deliver a single verdict, letting you see instead how differently trained models reach different conclusions.
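The contrast is easy to state in code. The toy sketch below uses stub generator functions; neither Ask Delphi nor AYTA actually exposes a Python API like this.

```python
# Toy contrast between the two framings, using stand-in generators.

def delphi_style(prompt, judge):
    """Ask Delphi's framing: one referee, one verdict."""
    return judge(prompt)

def ayta_style(prompt, judges):
    """AYTA's framing: every model answers, and the verdicts can disagree."""
    return {name: judge(prompt) for name, judge in judges.items()}

# Stubs standing in for the three differently trained models:
judges = {
    "nta_bot": lambda p: "NTA, you did nothing wrong.",
    "yta_bot": lambda p: "YTA, this one is entirely on you.",
    "mixed_bot": lambda p: "ESH, nobody comes out of this well.",
}
print(ayta_style("AITA for skipping my friend's wedding?", judges))
```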

The project is ultimately about the bias and motivated reasoning that bad data teaches an AI. The three models aren’t weighing the ethical nuances of each situation: one has only ever seen comments from people calling the poster an asshole, and another only comments from people insisting the poster was absolutely right. It may look like analysis, but like other recent text generators, these models don’t really understand language; they are very good at imitating human style, though not perfectly, and that is where the fun comes in. Some of the weird answers are clearly wrong in ways a human obviously wouldn’t be.
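As a toy illustration of that point, the sketch below “trains” two stub models by letting each memorize only the verdicts in its filtered corpus. The corpora are invented stand-ins, but the effect mirrors what the project demonstrates: a model that has only ever seen one verdict can only echo it back, whatever the scenario.

```python
import random

def train(corpus):
    """'Training' reduced to its essence: remember the verdicts seen."""
    verdicts = [comment.split(",")[0] for comment in corpus]
    return lambda prompt: random.choice(verdicts)  # the prompt is ignored!

# Hypothetical one-sided corpora, like the NTA-only and YTA-only subsets:
yta_bot = train(["YTA, that was selfish.", "YTA, apologize now."])
nta_bot = train(["NTA, you set a boundary.", "NTA, not your problem."])

scenario = "AITA for reporting a coworker who once covered for me?"
print("yta_bot:", yta_bot(scenario))  # always some flavor of YTA
print("nta_bot:", nta_bot(scenario))  # always some flavor of NTA
```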