AI-Generated Deepfakes Are Soliciting on LinkedIn! Beware!
Here’s all you need to know about the deepfake profiles soliciting businesses and professionals on LinkedIn
Deepfake technology uses artificial intelligence (AI) to generate convincing images or videos of real or invented people. It is surprisingly accessible and has been put to many uses, including entertainment, misinformation, harassment, propaganda, and pornography. Researchers Renée DiResta and Josh Goldstein of the Stanford Internet Observatory have found that the technology is now being used to boost sales for companies on LinkedIn, according to an NPR report. But deepfakes are also widely used by malicious actors to solicit businesses and professionals on the platform.
Steps Taken by LinkedIn
LinkedIn says it has since investigated and removed profiles that broke its policies, including rules against creating fake profiles or falsifying information, the publication reported. The investigation began when DiResta received a message from a seemingly fake account. Its profile photo showed telltale signs of an AI-generated face: the eyes were aligned perfectly in the middle of the image, the background was blurry and indistinct, and the person was wearing only one earring.
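The clues DiResta noticed are consistent with known artifacts of GAN-generated portraits: StyleGAN-family models tend to place the eyes at nearly fixed positions in the frame. As a rough illustration only (this is not LinkedIn's actual detection pipeline, and the canonical eye position and tolerance below are assumed values), a simple heuristic over eye-landmark coordinates from any face-landmark detector might look like:

```python
def eyes_suspiciously_centered(left_eye, right_eye, width, height,
                               tolerance=0.05):
    """Flag a portrait whose eyes sit at the near-fixed positions
    typical of GAN-generated faces.

    left_eye / right_eye are (x, y) pixel coordinates; tolerance is a
    fraction of the image dimensions (an assumed threshold).
    """
    # Midpoint between the two eyes, normalized to [0, 1]
    mid_x = (left_eye[0] + right_eye[0]) / 2 / width
    mid_y = (left_eye[1] + right_eye[1]) / 2 / height

    # GAN portraits typically center the eye line horizontally and
    # place it slightly above the vertical midpoint (~0.42 here is an
    # assumed canonical value, purely for illustration).
    horizontally_centered = abs(mid_x - 0.5) < tolerance
    at_canonical_height = abs(mid_y - 0.42) < tolerance

    # The eye line also tends to be unnaturally level
    level = abs(left_eye[1] - right_eye[1]) < tolerance * height

    return horizontally_centered and at_canonical_height and level

# Example: a 1024x1024 portrait with eyes at StyleGAN-like positions
print(eyes_suspiciously_centered((400, 430), (624, 430), 1024, 1024))
```

A heuristic like this would only flag candidates for review; real detection systems combine many signals, since ordinary ID-style photos can also have centered eyes.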
LinkedIn stated that its policies make clear that every LinkedIn profile must represent a real person, and that it is constantly updating its technical defenses to better identify and remove fake profiles from its community, as it did in this case.
Recent Use Cases
One of the lead-generation companies that LinkedIn reportedly removed after the Stanford Internet Observatory’s research was Delhi-based LIA. The company offered hundreds of “ready-to-use” AI-generated avatars for US$300 a month each, according to LIA’s website, from which all information has since been removed, the report mentioned.
Recently, a deepfaked video appeared on social media in which Ukrainian president Volodymyr Zelenskyy seemed to ask Ukrainian troops to lay down their arms. A recent study published in the Proceedings of the National Academy of Sciences found that people have only about a 50 percent chance of correctly guessing whether a face was generated by artificial intelligence. AI-synthesized faces were found to be indistinguishable from real faces and, strikingly, were even rated as more trustworthy.