"Seeing is believing," or is it? There was once a time when we could be confident that what we saw in photos and videos was real. Even when Photoshopping images became popular, we still knew the images started as originals. Now, with advances in artificial intelligence, the world is becoming more artificial, and you can no longer be sure whether what you see or hear is real or a fabrication of artificial intelligence and machine learning. In many cases this technology is used for good, but now that it exists, it can also be used to deceive.
When AI Fabrication Is Acceptable
Typically, viewers will accept the fabrications of artificial intelligence as long as they are aware of them. Over the years, many of us have come to accept, for the sake of entertainment, representations of real life on movie and television screens. Now, however, Hollywood is getting an AI assist with scriptwriting. With the growth of machine learning, algorithms can sort through extensive amounts of data to learn which elements are more likely to make a movie an award winner, a commercial success or a hit with viewers. This is just another example of AI making the creative process more efficient for humans, even though in some cases the AI is doing the creating all on its own.
With just snippets of audio, machine learning can now mimic someone’s voice, blurring the line between real and fake. This would certainly be helpful in some instances, such as fixing flubbed lines in a movie without calling the actor back on location to re-record, but the opportunity for abuse is just as easily imagined.
The advent of personalised or "smart" content is double-edged, and like any other AI manipulation, it should be transparent to users so they are empowered by the technology rather than misled. Smart content is content that itself changes depending on who is seeing, reading, watching or listening to it, and it is being tested by Netflix and TikTok, a short-form video app, among others. We are accustomed to search and recommendation engines suggesting ideas based on who we are, but until now each piece of content was the same for every individual who viewed it. Smart content gives every user a different experience.
GPT-2, an AI model created by OpenAI, a nonprofit research organisation backed by Elon Musk and others, is a text generator capable of producing content in the style and tone of the data it was fed, whether that is a news feed, a work of fiction or another form of writing. The group did not release its research publicly because the results were so realistic that it feared the technology would be misused.
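GPT-2 itself is a large neural network, but the core idea of generating text "in the style of the data it was fed" can be illustrated, at a vastly smaller scale, with a word-level Markov chain: learn which words tend to follow which in the training text, then sample new sequences from those statistics. The toy corpus and the `generate` helper below are illustrative assumptions, not OpenAI's code.

```python
import random
from collections import defaultdict

# A made-up toy corpus; a real system would train on far more text.
corpus = ("seeing is believing and believing is seeing "
          "what you see is not always what is real").split()

# Learn the style of the corpus: which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample up to `length` words, each chosen from observed successors."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:          # dead end: no observed successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("seeing", 8))
```

The output only ever contains word-to-word transitions seen in the corpus, which is why it reads like the training text; modern models such as GPT-2 replace the lookup table with a neural network that generalises far beyond literal transitions.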
Fake Images So Real They’ll Fool You
In an effort to raise awareness of how powerful AI technology has become, Phillip Wang created the website “This Person Does Not Exist.” Every human face on the site looks real, but all of them are AI generated. A similar site, Whichfaceisreal.com, was created to show how easy it has become to fool people about what is real and what is artificial. Both sites showcase the power of technology developed by software engineers at NVIDIA Corporation, who used a Generative Adversarial Network (GAN), in which two neural networks compete: one generates artificial images while the other tries to detect which images are fake. Although a few tell-tale signs in some of the generated faces give away that they are artificial, many are quite convincing.
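The adversarial game can be sketched in miniature. The toy example below is an illustrative assumption, not NVIDIA's implementation: instead of images, a tiny linear "generator" (parameters `a`, `b`) learns to turn random noise into numbers resembling samples from a target distribution, while a logistic "discriminator" (parameters `w`, `c`) learns to tell real samples from generated ones. Scaled up enormously, the same competition produces photorealistic faces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: fake = a*z + b, with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), the probability x is "real".
# "Real" data here is drawn from N(4, 1.25); all names are illustrative.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch, steps = 0.02, 64, 5000

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for _ in range(steps):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step (gradient ascent): push D(real) toward 1,
    # D(fake) toward 0.
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step (gradient ascent): push D(fake) toward 1, i.e.
    # learn to fool the discriminator.
    s_f = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# After training, generated samples should cluster near the real mean.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generated mean after training: {fake_mean:.2f} (real mean is 4.0)")
```

The generator starts out producing samples centred at 0, and the only signal it ever receives is the discriminator's verdict; the tug-of-war between the two updates is what drags its output toward the real distribution.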
Fake Videos Could Be Dangerous
At first blush, the capabilities of artificial intelligence to create voices, images and videos so lifelike it’s difficult to tell they are artificial are exciting, intriguing and mind-boggling. But before getting too caught up in how amazing this technology is, we must pause to consider its more nefarious uses.
“Deepfake” technology uses computer-generated audio and video to depict something that never actually occurred. It has been used to swap the faces of Scarlett Johansson and other notable figures into pornographic films, making it appear that they were the performers.
Aside from the personal misrepresentation and possible damage to individuals, some lawmakers are concerned about the misinformation this technology could spread on the internet.
One manipulated video of President Obama shows how audio and video can be doctored to make it appear that a person of authority said something they never did. This kind of deceit could have negative consequences for national security and elections, in addition to damaging personal reputations.
Xinhua, China’s state-run press agency, has already created AI news anchors that look like regular humans as they report the day’s news. To the general population, and even to experienced experts, these AI anchors appear real, so viewers would assume they are human unless told otherwise.
As AI grows more sophisticated, it will become even more challenging to distinguish what is real from what is artificial. If "fake" information, whether conveyed in text, photos, audio or video, is dispersed and accepted as real, it could be used for malicious purposes. We can no longer be certain that "seeing is believing."
Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world's best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5, just behind Bill Gates and Richard Branson). He writes on the topic of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentations have millions of readers.