This 4Chan Trained Racist AI will Hurt Your Sentiments More Than You Could Imagine

A YouTuber trained an AI on 4chan posts, making it extremely offensive and hurtful

A YouTuber named Yannic Kilcher has sparked controversy in the AI world after training a bot on posts collected from the Politically Incorrect board (/pol/) of 4chan, a board infamous for its racism and other forms of bigotry and widely known as the site's most toxic corner. 4chan itself is a collection of wholly anonymous, anything-goes forums; in layout and basic operation it has changed little since its first boards were created for posting images and discussion about anime. On /pol/, posters share racist, misogynistic, and antisemitic messages, which the bot, named GPT-4chan after the GPT family of language models developed through OpenAI's research, learned to imitate.
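To give a rough sense of what training a bot like this involves, the sketch below shows the general fine-tuning recipe: take a pretrained GPT-style language model and continue training it on scraped forum posts so that it learns to imitate their style. This is only a minimal illustration using the Hugging Face transformers library, not Kilcher's actual code; the base model name ("gpt2") and the data file path ("forum_posts.txt") are placeholder assumptions.

```python
# Minimal sketch of fine-tuning a GPT-style causal language model on scraped
# forum posts so it imitates them. Illustrative only; not Kilcher's code.
# "gpt2" and "forum_posts.txt" are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One forum post per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "forum_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-forum-finetune",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False -> standard next-token (causal) language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```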


AI Trained on 4chan

After training his model, Kilcher released it back onto 4chan as multiple bots, and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content, representing more than 10% of the posts on /pol/ that day.

Kilcher describes the project in a YouTube video, saying the model was good in a terrible sense: it perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/. He also took care to dodge 4chan's defenses against proxies and VPNs, and used a VPN to make it look like the bot's posts originated from the Seychelles.

It is interesting to compare Kilcher's work with the most famous earlier example of bots gone bad: Microsoft's Tay. Tay was a reminder that a trained AI is only as good as its source material; Kilcher's project shows that far more advanced tools are now within reach of any one-person coding team. The project raised clear ethical concerns, and Kilcher himself has said he should focus on “much more positive” work in the future.