In class yesterday, we discussed artificial intelligence and the risks it poses. AI has been a subject of discussion for a while, but I believe the technology is still in its infancy. AI research is growing exponentially, and every day we learn about new developments. Not all of these developments are bad, and I will first highlight some very useful and interesting articles. AI algorithms can produce striking results, like the one below (Diaz). The photo on the left is the original image of a bird, the photo in the middle is the image that was fed to the AI, and the photo on the right is the image the AI returned. This particular AI can reconstruct blurry photos in extreme detail, much like the image-enhancement software from crime shows. This has many practical uses, such as recognizing faces in blurry CCTV footage to identify criminals, or simply improving image quality in general.
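To see why this is impressive, it helps to understand what pixelation actually destroys. Here is a toy illustration of my own (not the model from the Diaz article): block-averaging an image and blowing it back up loses detail that no naive upscaler can recover, which is why super-resolution models must *learn* plausible detail from training data instead.

```python
import numpy as np

def pixelate(img: np.ndarray, factor: int) -> np.ndarray:
    """Downsample by block-averaging, then blow back up with
    nearest-neighbor repetition -- the 'blurry input' a
    super-resolution model would start from."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
img = rng.random((16, 16))   # toy stand-in for a photo
blurry = pixelate(img, 4)

# Naive upscaling cannot undo the averaging: each 4x4 block is now a
# single flat value, so fine detail is gone for good.
err = np.mean((img - blurry) ** 2)
print(f"reconstruction error: {err:.3f}")
```

A learned model fills those flat blocks with detail that is statistically plausible given its training photos, which is also why its output is a guess, not a recovery of the true pixels.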
AI can be beneficial in saving lives as well. One algorithm can detect Alzheimer's in patients before doctors can (Haridy). The machine-learning model was trained on over 2,000 scans of human brains and learned to make predictions from patterns in the images. The network outperformed human readers, flagging Alzheimer's cases six years before any symptoms appeared. There are other logistical issues, such as these particular brain scans not being easily available, but this still shows the large room for growth AI has in the medical field. Humans often make small errors, but a well-calibrated machine can perform an action with pinpoint precision. Still, would you trust a robot to perform a life-saving operation?
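The core idea behind such a system is ordinary supervised learning: show the model many labeled examples and let it find the pattern. The sketch below is purely illustrative (synthetic made-up data, not brain scans, and a much simpler model than the deep network in the article), training a logistic-regression classifier with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for scan-derived feature vectors (NOT real data):
# "healthy" examples cluster near 0, "at-risk" examples near 1.
n, d = 200, 5
healthy = rng.normal(0.0, 0.5, size=(n, d))
at_risk = rng.normal(1.0, 0.5, size=(n, d))
X = np.vstack([healthy, at_risk])
y = np.array([0] * n + [1] * n)

# Logistic regression trained by gradient descent on the log loss.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient w.r.t. weights
    grad_b = np.mean(p - y)                 # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The real system uses a deep network over full 3-D scans, but the principle is the same: labeled examples in, a decision rule out.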
Now I want to discuss the risks associated with rapidly improving AI. Discussion of these risks has become more prevalent recently: politicians, CEOs, and academics are bringing to light the dangers AI can pose. One prominent example of AI used with malicious intent is the deepfake. Photoshopping someone else's face onto a person's picture is not a new idea, but deepfakes go further, using machine learning to convincingly map one person's likeness onto another in photos and video. This has scary implications for areas such as politics, where anyone can create a video of anyone saying anything. It's not hard to imagine a world on the brink of conflict, where all it takes to spark a war is a fabricated video of a world leader saying something unsavory.
This video shows an example of a deepfake, one of many that Google released. This woman's face is not her own; it has been replaced with someone else's. If I hadn't told you it was a manipulated video, would you have noticed any difference? But how do these giant media companies plan to pick these videos out from real ones? Unfortunately, not well.
The fact that Google and Facebook "are releasing deepfake datasets shows they are struggling to develop technical solutions themselves" (Knight). Few people know how to identify especially convincing deepfakes. We need more data and more human researchers tackling this problem in order to reach a solution. We live in a society that has accepted "fake news" as a byproduct of our 24-hour news cycle. How will we begin to address deepfakes before they get out of control?
I think that in order to curb this technology before things go wrong, we need more research in the field of AI. We study how to create AI, but to my knowledge there isn't much research on how to regulate it. Using AI itself to sift deepfakes out from real videos would be a fighting-fire-with-fire way of attacking the problem. There could be benefits to deepfake videos too: maybe a sick toddler or young child in a hospital could watch a video of their favorite superhero talking specifically to them. As we research this technology, we can figure out how to use it to our benefit.
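To give a flavor of what "fighting fire with fire" could look like, here is one toy heuristic of my own, not a real detector: crudely splicing content into a frame tends to leave unusual high-frequency energy, so comparing how much of an image's spectral energy sits outside the low-frequency band can hint at tampering. Real detectors are learned models far more sophisticated than this.

```python
import numpy as np

def high_freq_energy(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of the image's spectral energy outside a central
    low-frequency box -- a crude proxy for splicing artifacts."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = power[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return 1.0 - low / power.sum()

rng = np.random.default_rng(1)
# A smooth synthetic "real" frame vs. the same frame with a noisy
# patch spliced in, mimicking a crude face swap (toy data only).
x = np.linspace(0, 1, 64)
real = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
faked = real.copy()
faked[20:44, 20:44] += rng.normal(0, 0.5, size=(24, 24))

print(high_freq_energy(real), high_freq_energy(faked))
```

The spliced frame scores noticeably higher than the smooth original. Modern deepfakes are specifically trained to suppress such artifacts, which is exactly why detection needs learned models and, as argued above, more research.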
Diaz, Jesus. “This AI Turns Unrecognizable Pixelated Photos Into Crystal-Clear Images.” Fast Company, Fast Company, 9 July 2018, https://www.fastcompany.com/90149773/this-ai-turns-unrecognizable-pixelated-photos-into-crystal-clear-images.
Haridy, Rich. “Deep Learning Algorithm Detects Alzheimer’s up to Six Years before Doctors.” New Atlas, 9 Nov. 2018, https://newatlas.com/ai-algorithm-pet-scan-alzheimers-diagnosis/57138/.
Knight, Will. “Even the AI Behind Deepfakes Can’t Save Us From Being Duped.” Wired, Conde Nast, 2 Oct. 2019, https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/.