Artificial intelligence improves with each passing day. Systems that use it can already diagnose some illnesses more accurately and quickly than physicians with years of experience.
AI-driven tools are now essential for combating cybercrime, because attackers are already using AI to launch sophisticated attacks. One survey found that 69% of organizations agreed AI had become an essential tool for preventing breaches and hacks.
For all its advantages, however, AI is not free of flaws. It has recently come to light that AI has blind spots, known as adversarial examples, which illustrate the risk AI can pose when it miscalculates.
Some bugs can make AI overlook information it would otherwise recognize easily. People familiar with AI and machine learning know that an adversarial example can be produced simply by making small tweaks to an image.
This can lead to other problems. However, some researchers believe they can use AI blind spots to protect users' privacy. The rest of this article explains how artificial intelligence's imperfections can be helpful in today's world.
Influencing AI Data to Cause Mistakes:
Neil Gong, who recently joined Duke University, analyzed how effective inserting false information into a person's profile is at protecting their privacy. His work also examined which kinds of false information work best and how much is required to keep the data safe from prying eyes.
Another Duke researcher, Jinyuan Jia, worked with Gong on a dataset similar to the one at the center of the Cambridge Analytica scandal, which exposed Facebook profile information to a third party without users' consent.
The team used information collected from ratings that users had submitted to the Google Play Store, focusing on users who had revealed their locations while reviewing different apps.
The researchers trained a machine-learning algorithm on those users and found it could predict a person's city from their Google Play likes with 44% accuracy on the first try.
They then tested minor adjustments designed to defeat the model. For instance, when they removed some app ratings or added three ratings listing an incorrect city, the algorithm's accuracy dropped to the level of random guessing.
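The idea behind this defence can be illustrated with a toy sketch. The data, app names, and classifier below are all hypothetical stand-ins, not the Duke team's actual model: a naive "location classifier" guesses the city whose signature apps overlap most with a user's profile, and a few decoy ratings are enough to flip its guess.

```python
# Toy sketch (hypothetical data and apps): a profile is the set of apps
# a user has rated. A naive "location classifier" guesses the city whose
# signature apps overlap most with the profile -- a stand-in for the
# real machine-learning model described above.

CITY_SIGNATURES = {
    "Durham":  {"transit_durham", "local_news_nc", "bbq_finder"},
    "Seattle": {"transit_seattle", "rain_alert", "ferry_times"},
}

def predict_city(profile):
    # Pick the city whose signature apps overlap most with the profile.
    return max(CITY_SIGNATURES, key=lambda c: len(CITY_SIGNATURES[c] & profile))

# An honest profile leaks the user's real city.
profile = {"transit_durham", "local_news_nc", "chess_app"}
print(predict_city(profile))  # -> Durham

# Defence in the spirit of the experiment: drop one revealing rating and
# add three decoy ratings that point at the wrong city.
protected = (profile - {"local_news_nc"}) | {"rain_alert", "ferry_times", "transit_seattle"}
print(predict_city(protected))  # -> Seattle: the guess is now wrong
```

The point is not the toy classifier itself but the asymmetry: a handful of carefully chosen false entries can undo what the model learned from many genuine ones.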
Using Adversarial Examples to Prevent Privacy Leaks:
Researchers from the University of Texas and the Rochester Institute of Technology developed another privacy-protection method based on adversarial examples.
Cybercriminals frequently use a technique known as web fingerprinting to identify the websites people visit. The team found that adding noise in the form of adversarial examples cut the technique's accuracy from 95% to somewhere between 29% and 57%.
They then blended the adversarial changes into decoy web traffic using a randomized method that would be difficult for an attacker to notice. This matters because cybercriminals can use adversarial training to defeat algorithms they suspect are in place to protect privacy.
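A minimal sketch can show why noise hurts fingerprinting. Everything here is invented for illustration (the traces, site names, and nearest-match classifier are assumptions, not the researchers' actual system): a fingerprint is modeled as a sequence of packet sizes, and randomly injected decoy packets push an observed trace away from its true fingerprint.

```python
import random

# Toy sketch (hypothetical traces): a "fingerprint" is the sequence of
# packet sizes a page load produces. An eavesdropper matches an observed
# trace to the closest known fingerprint.

FINGERPRINTS = {
    "site_a": [1500, 600, 1500, 300],
    "site_b": [400, 400, 1500, 1500],
}

def distance(a, b):
    # Sum of absolute size differences, with the shorter trace zero-padded.
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def identify(trace):
    # Guess the site whose fingerprint is closest to the observed trace.
    return min(FINGERPRINTS, key=lambda s: distance(FINGERPRINTS[s], trace))

# An unprotected visit to site_a is identified correctly.
trace = [1500, 600, 1500, 300]
print(identify(trace))  # -> site_a

# Defence: inject decoy packets at random positions so the observed
# trace no longer resembles the true fingerprint.
rng = random.Random(0)
noisy = list(trace)
for _ in range(4):
    noisy.insert(rng.randrange(len(noisy) + 1), rng.choice([400, 1500]))
print(identify(noisy))  # the match may now point at the wrong site
```

Real fingerprinting attacks use far richer features, but the principle is the same: the defender only needs to add enough randomized noise that traces for different sites start to look alike.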
Seeing AI Flaws from a Different Perspective:
Nowadays, people are becoming increasingly interested in learning about AI and how it might shape the future. That fascination opens up opportunities for people such as Tim Hwang to weigh in on what to expect.
Tim Hwang spent time at Google and MIT and participated in a $26 million AI initiative. He now spends part of his time as a guest speaker, educating people on machine learning and related topics.
As the general public becomes more familiar with AI and how it operates, people may realize that, in circumstances like those described above, imperfections in AI are not always a bad thing. Using AI flaws to enhance privacy is one example; AI's mistakes can also remind developers to slow down and remember that some advances may carry unintended outcomes.
For example, researchers already know that AI algorithms can carry unintended bias. When this happens, responsible developers shut those projects down and return to the drawing board.
A common belief is that AI itself is not the real threat; the biases humans build into it pose the greater danger. In some cases, though, the instances that cause AI to make mistakes can remind people not to develop AI tools too rapidly or place too much faith in them.
Improving Privacy with Another AI-Based Method:
Applying adversarial examples to machine learning is an exciting way to enhance privacy, but it is not the only way AI can help with privacy concerns. A program called Deep Privacy uses generative adversarial networks (GANs) to swap someone's face with features drawn from a database of 1.47 million faces.
The result is a mask-like composite of constantly shifting face parts that appears in place of someone's actual face, making it extremely difficult to recognize the person by their facial features.
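The core idea can be sketched in a few lines. This is not the actual DeepPrivacy GAN (which generates realistic faces rather than copying pixels); it is only a hypothetical stand-in showing the shifting-replacement principle: each frame, the face region is rebuilt from features drawn from other faces, so what appears on screen never matches the real person.

```python
import random

# Minimal conceptual sketch (NOT the real DeepPrivacy GAN): each frame,
# every value in the detected face region is replaced by the matching
# value from a randomly chosen face in a pool, so the displayed face
# keeps shifting and never equals the real one.

FACE_POOL = [  # hypothetical stand-in for a large face database
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def anonymize(face_pixels, rng):
    # For each position, borrow that position's value from a random pool face.
    return [rng.choice(FACE_POOL)[i] for i in range(len(face_pixels))]

rng = random.Random(1)
real_face = [111, 112, 113]
print(anonymize(real_face, rng))  # a different mix every frame
print(anonymize(real_face, rng))
```

A real system generates plausible new faces conditioned on pose and background instead of splicing raw values, but the privacy property is the same: nothing of the original face survives in the output.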
Deep Privacy is still in development and does not yet anonymize every part of a person's face, such as the ears. Even so, the research could lead to better ways of disguising someone's features when they speak on camera while providing confidential or sensitive information.
To sum up, AI has come a long way, but it is not perfect, and that is okay. The examples above should encourage people to broaden their view of what they deem flawed AI. Even when AI does not perform exactly as intended, it can still be valuable.