Facebook scientists said on Wednesday that they have developed artificial intelligence software that can not only identify deepfake images, but also trace where they came from.
Deepfakes are photos, videos, or audio clips altered with artificial intelligence to appear authentic. Experts warn that they can be misleading or entirely fabricated.
Facebook research scientists Tal Hassner and Xi Yin said their team, working with Michigan State University, has developed software that can reverse-engineer deepfake images to determine how and where they were made.
The scientists said in a blog post: “Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information the detector has to work with.”
They added: “This work will give researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, and open up new directions for future research.”
Facebook’s new software runs deepfake images through a network to search for imperfections left during the generation process. The scientists say these imperfections alter an image’s digital “fingerprint.”
“In digital photography, fingerprints are used to identify the digital camera used to produce an image,” the scientists said.
“Similar to device fingerprints, image fingerprints are unique patterns left on images that can likewise be used to identify the generative model the image came from.”
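The blog post does not include code, but the general idea behind residual-based fingerprinting can be sketched in a few lines. The snippet below is a toy illustration, not Facebook’s actual method: it treats the high-frequency residual left after a simple box blur as an image’s “fingerprint,” and uses normalized correlation to compare a suspect image’s residual against known model fingerprints. All function names and data here are illustrative.

```python
import numpy as np

def fingerprint(img):
    """Crude image 'fingerprint': the high-frequency noise residual left
    after subtracting a 3x3 box blur (np.roll wraps at the borders)."""
    blurred = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return img - blurred

def match(residual, known):
    """Normalized correlation between a residual and a known fingerprint;
    higher values suggest the image came from the matching source."""
    a = residual.ravel() - residual.mean()
    b = known.ravel() - known.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In this toy setup, an image carrying source A’s noise pattern correlates more strongly with A’s fingerprint than with an unrelated one, which is the intuition behind attributing a deepfake to the generative model that produced it.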
“Our research pushes the boundaries of understanding in deepfake detection,” they said.
Late last year, Microsoft released software that can help detect deepfake photos or videos, adding to an arsenal of programs designed to combat the hard-to-detect images ahead of the US presidential election.
The company’s Video Authenticator software analyzes an image or each frame of a video, looking for evidence of manipulation that is invisible to the naked eye.