How Can We Protect AI from Adversarial Attacks?

Today, the widespread use of machine learning, and deep learning in particular, has inevitably led to adversarial attacks.

From the facial recognition lock on smartphones to Alexa's voice recognition and the spam filters in our email, machine learning and AI have become an essential part of the applications we use every day.

However, the widespread use of machine learning, and deep learning in particular, has inevitably led to adversarial attacks, a class of exploits that manipulate an algorithm's behavior by feeding it carefully crafted input data.

 

What is an Adversarial Attack?

According to Y Combinator, machine learning algorithms accept inputs as numeric vectors. Designing an input in a specific way to get a wrong result from the model is called an adversarial attack.

How is this possible? No machine learning algorithm is perfect; they all make mistakes, albeit very rarely. However, machine learning models consist of a series of specific transformations, and most of these transformations turn out to be very sensitive to slight changes in input. Harnessing this sensitivity and exploiting it to modify an algorithm's behavior is an important problem in AI security.
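
To make this concrete, the sketch below shows one well-known way to craft such an input, the Fast Gradient Sign Method (FGSM), which nudges each pixel of an image in the direction that increases the model's loss. The PyTorch classifier, image tensor, and label used here are illustrative assumptions, not part of any specific system discussed in this article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` slightly so the classifier is pushed toward a wrong prediction."""
    # Work on a copy of the input and track gradients with respect to its pixels.
    image = image.clone().detach().requires_grad_(True)

    # Compute the loss of the model's prediction against the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Step each pixel in the direction that increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in a valid range
```

Because the perturbation is capped at a small epsilon per pixel, the altered image usually looks identical to a human while the model's output changes.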

Adversarial machine learning is one of the most difficult problems challenging today's artificial intelligence systems. Adversarial attacks can cause machine learning models to malfunction in unexpected ways or leave them vulnerable to cyberattacks.

 

Defending Against Adversarial Examples

Adversarial training is one of the most common ways to defend machine learning models against adversarial examples. Machine learning engineers train their models on adversarial examples during adversarial testing to make them resilient against data anomalies.

However, adversarial training is a time-consuming and costly operation. Every training example must be examined for adversarial flaws, and the model must then be retrained on all of them. Scientists are working on ways to speed up the process of identifying and fixing adversarial flaws in machine learning models.
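
As a rough illustration of the idea, the sketch below augments each training batch with adversarial counterparts generated by the fgsm_attack helper shown earlier. The model, optimizer, and data loader are assumed to be ordinary PyTorch objects; real adversarial training pipelines typically use stronger attacks and more careful tuning.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of training on clean batches augmented with FGSM counterparts."""
    model.train()
    for images, labels in loader:
        # Craft adversarial versions of the current batch (see fgsm_attack above).
        adv_images = fgsm_attack(model, images, labels, epsilon)

        # Combine clean and adversarial inputs so the model learns from both.
        inputs = torch.cat([images, adv_images])
        targets = torch.cat([labels, labels])

        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

The extra forward and backward passes needed to craft the adversarial inputs are exactly why this defense is so much more expensive than standard training.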

TechTalks mentioned, “Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision, one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system. Using this approach, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. In a paper published on the bioRxiv preprint server, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks.”