AI Invents a Language on Its Own! Experts Say It Sounds Demonic
Researchers have discovered that an AI appears to have generated its own strange secret language!
A popular AI tool that turns text into images appears to be creating its own language! DALL-E, an AI program that generates images from text descriptions, seems to be producing some nonsense text too. It is a 12-billion-parameter version of GPT-3 trained to generate images from text descriptions using a dataset of text-image pairs. The tool has sparked debate among AI experts, who claim it is creating a secret language to categorize images.
AI Now Generates Its Own Secret Language
AI researchers can prompt the development of language in multi-agent systems when sufficiently capable AI agents have an incentive to cooperate on a task and the ability to exchange a set of symbols that can serve as tokens in a generated language. Computer science Ph.D. student Giannis Daras claimed in a viral thread that if you feed the gibberish words created by the AI back into the system, it will generate images linked to those phrases. Researchers found that this AI-produced text is not random; rather, it reveals a hidden vocabulary that the model seems to have developed internally.
When researchers fed the text "Vicootes" from an earlier image back into DALL-E 2, the result was shocking: it produced pictures of dishes with vegetables. They suspect that the advanced image-generating AI may have developed a hidden vocabulary that operates in parallel with its primary function. The findings highlight the many difficulties of understanding the inner workings of AI systems.
This secret vocabulary appears robust for simple, unbiased prompts but breaks down on harder ones. The tokens may elicit low confidence from the generator, and small perturbations can push it in seemingly random directions. If a system behaves unpredictably, even one time in ten, that is still a serious security and interpretability issue worth understanding.
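The perturbation testing described above can be sketched in code. The helper below is a hypothetical illustration (not from Daras's thread): it produces slightly altered versions of a gibberish token such as "Vicootes", which one could then feed back to an image generator to check whether nearby strings still yield the same kind of image.

```python
import random
import string

def perturb_prompt(prompt: str, n_edits: int = 1, seed: int = 0) -> str:
    """Return a copy of `prompt` with `n_edits` random single-character
    substitutions; spaces are left untouched."""
    rng = random.Random(seed)
    chars = list(prompt)
    # Only non-space positions are eligible for substitution.
    positions = [i for i, c in enumerate(chars) if c != " "]
    for i in rng.sample(positions, k=min(n_edits, len(positions))):
        # Pick a replacement guaranteed to differ from the current character.
        chars[i] = rng.choice(
            [c for c in string.ascii_lowercase if c != chars[i].lower()]
        )
    return "".join(chars)

# Generate a handful of one-character perturbations of the token
# reported in the article, to probe how stable its meaning is.
original = "Vicootes"
variants = [perturb_prompt(original, n_edits=1, seed=s) for s in range(5)]
print(variants)
```

Each variant differs from the original by exactly the requested number of characters, so a batch of them gives a simple way to measure how far the generator's output drifts under small edits.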