Browser plug-ins that spot fake news show the difficulty of tackling the “information apocalypse”

(Source: www.theverge.com)

According to some, the world is headed toward an information apocalypse. A combination of AI-generated fakes, fake news gone wild, and faltering confidence in the media means soon, no one will be able to trust what they see or hear online. But don’t panic yet, says a subset of these same prophets, for a solution is already at hand: more technology.

This week, two projects were unveiled that are intended to act as buffers between the world and fake news. The first, SurfSafe, was created by a pair of UC Berkeley undergrads, Ash Bhat and Rohan Phadte. The second, Reality Defender, is the work of the AI Foundation, a startup founded in 2017 that has yet to release a commercial product. Both projects are browser plug-ins that will alert users to misinformation by scanning images and videos on the webpages they’re looking at and flagging any doctored content.

“a free society depends on people having some sort of agreement on what objective reality is.”

Lars Buttler, CEO of AI Foundation, tells The Verge that his team was motivated to create the plug-in because of escalating fears over misinformation, including AI-generated fakes. “We felt we were at the threshold of something that could be very powerful but also very dangerous,” says Buttler. “You can use these tools in a positive way, for entertainment and fun. But a free society depends on people having some sort of agreement on what objective reality is, so I do think we should be scared about this.”

They’re not the only ones. Over the past year, a growing number of initiatives have been launched with the aim of helping us navigate the “post-truth” world. In many ways, the fears they express are just a continuation of a trend that emerged in the mid-2000s under the presidency of George W. Bush. (Think Stephen Colbert satirizing Fox News’ love of “truthiness.”) But this latest iteration also has a sharper edge, honed by the ascendancy of President Trump and hype surrounding new technology like AI.

Indeed, most players involved namecheck machine learning somewhere in their pitch. These range from startups like Factmata, which raised $1 million to create “automated machine journalism that makes the most unbiased articles,” to DARPA’s upcoming deepfake competition, which will pit expert against expert in a battle to generate and detect AI fakes. As you might expect, the credibility of these projects varies, both in terms of apparent motivation and how technologically feasible their plans are. And looking at the Reality Defender and SurfSafe plug-ins in more detail makes for a good case study in this regard.

Different ways to spot a fake

Of the two plug-ins, SurfSafe’s approach is simpler. Once installed, users can click on pictures, and the software will perform something like a reverse-image search. It will look for the same content that appears on trusted “source” sites and flag well-known doctored images. Reality Defender promises to do the same (the plug-in has yet to launch fully), but in a more technologically advanced manner, using machine learning to verify whether or not an image has been tinkered with. Both plug-ins also encourage users to help out with this process, identifying pictures that have been manipulated or so-called “propaganda.”

The two approaches are very different. SurfSafe’s leans heavily on the expertise of established media outlets. Its reverse-image search is basically sending readers to look at other sites’ coverage in the hope that they have spotted the fake. “We think there are groups doing a great job of [fact-checking content], but we want users to get that information at the click of a mouse,” says SurfSafe’s Ash Bhat. Reality Defender, meanwhile, wants to use technology to automate this process.

Going down the latter route is undoubtedly harder, as spotting doctored images with software isn’t something we’re able to reliably automate. Although there are a number of methods that can help (like looking for inconsistency in compression artifacts), humans still have to make the final check. The same is true of newer types of fakes made using artificial intelligence. One promising technique to identify AI face swaps compares skin color frame by frame to spot an active pulse, but it’s yet to be tested at a wide scale.
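Compression checks of this kind are at least easy to illustrate. The sketch below, written in Python with the Pillow imaging library, shows the basic idea behind error level analysis: re-save a JPEG at a known quality and see which regions resist recompression. The filename and settings are placeholders, neither plug-in has said it works this way, and as noted above, the output still needs a human to interpret it.

```python
# A minimal sketch of error level analysis (ELA), one way to look for
# inconsistent compression artifacts. It uses the Pillow library; the
# filename and quality value are illustrative, not taken from either plug-in.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")

    # Re-save the picture as a JPEG at a fixed quality, then reload it.
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)

    # Regions edited after the photo's original save tend to recompress
    # differently, so the per-pixel difference highlights them.
    diff = ImageChops.difference(original, resaved)

    # Brighten the (usually faint) differences so a reviewer can see them.
    extrema = diff.getextrema()               # per-channel (min, max) values
    max_diff = max(high for _, high in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# error_level_analysis("suspect.jpg").show()
# Bright, blocky regions are worth a closer look, but a person still has
# to decide whether they actually indicate manipulation.
```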

Any automated checks are more than likely to fail, says Dartmouth College professor and forensics expert Hany Farid. Speaking to The Verge, Farid says he’s “extremely skeptical” of Reality Defender’s plans. Even ignoring the technical challenges, says Farid, there are far broader questions to address when it comes to deciding what is fake and what is not.

“Images don’t fall neatly into categories of fake and real,” says Farid. “There is a continuum; an incredibly complex range of issues to deal with. Some changes are meaningless, and some fundamentally alter the nature of an image. To pretend we can train an AI to spot the difference is incredibly naïve. And to pretend we can crowdsource it is even more so.”

As for the crowdsourcing component, Farid notes that a number of studies show that humans are very bad at spotting fake images. They miss subtle things like shadows pointing the wrong way as well as more obvious changes like extra limbs Photoshopped onto bodies. He points out that with crowdsourcing, there’s also the threat of groups manipulating the truth: voting politically in line with personal convictions, for example, or just trolling.

Bhat says SurfSafe will sidestep some of these problems by letting users choose their own trusted sources. So, they can use The New York Times to tell them what images might be doctored or “propaganda,” or they can use Breitbart and Fox News. When asked what’s to stop this from simply leading users to follow their own existing biases, Bhat says the team thought about this a lot but found that news outlets on different sides of the political spectrum agree on the majority of stories. This minimizes the potential for users to fall into echo chambers, suggests Bhat.

These challenges don’t mean we’re helpless, however. We can certainly call out obvious faked viral content, like the doctored video of Parkland shooting survivor Emma González appearing to tear up a copy of the US Constitution (it was actually a shooting range target), or the crudely Photoshopped image of a Seattle Seahawks player dancing with a burning American flag (the dance was real; the flag was not).

But while such images don’t force us to deal with philosophical quandaries, they can still be tricky to nail down. In our tests, for example, the SurfSafe plug-in recognized the most widely circulated version of the Seahawks picture as a fake, but it couldn’t spot variants shared on Facebook where the image had been cropped or was a screenshot taken from a different platform. The plug-in fared even worse with stills from the González video, failing to flag a number of screenshots hosted on Snopes.

Some of these difficulties are due to how SurfSafe catalogs images. Bhat says it can only verify pictures that have been seen by a user at least once (it has collected “a million” images in just a few days of use), which could explain the lapse. But another reason might be the plug-in’s use of hashing, a mathematical process that turns images and videos into unique strings of numbers. Both SurfSafe and Reality Defender use this method to create their indexes of real and doctored images, as searching (and storing) strings of numbers is much quicker than using full-sized images.
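Neither team has published its implementation, but the general shape of such an index is easy to sketch. The Python example below uses the open-source imagehash library to reduce a hypothetical catalog of known doctored images to perceptual hashes and match a newly seen picture against them; the filenames, labels, and distance threshold are all made up for illustration.

```python
# A sketch of a perceptual-hash index, assuming the open-source "imagehash"
# library. The catalog, filenames, and threshold below are hypothetical.
from PIL import Image
import imagehash

# Hypothetical catalog of images that have already been flagged as doctored.
KNOWN_FAKES = {
    "seahawks_flag.jpg": "photoshopped burning flag",
    "gonzalez_still.png": "doctored Parkland video still",
}

def build_index(catalog):
    """Reduce every known fake to a 64-bit perceptual hash plus its label."""
    return [(imagehash.phash(Image.open(path)), label)
            for path, label in catalog.items()]

def check_image(path, index, max_distance=8):
    """Return the label of the closest known fake, or None if nothing matches.

    Subtracting two hashes gives a Hamming distance; near-duplicate images
    land within a few bits of each other, so comparing short hashes is far
    cheaper than comparing full-size files.
    """
    candidate = imagehash.phash(Image.open(path))
    for known_hash, label in index:
        if candidate - known_hash <= max_distance:
            return label
    return None

# index = build_index(KNOWN_FAKES)
# print(check_image("image_from_current_page.jpg", index))
```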

But creating a hash is an art. The key questions are: how different does a picture have to be before it gets its own unique code? If it’s cropped, does it count as the same image? What about small changes to color or compression? These are decisions without easy answers. If a hash covers too broad a range of images, it risks overlooking key changes; if it’s too sensitive, then you have to verify or fact-check a much larger number of images.
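A rough way to see that trade-off is to hash the same picture before and after the kinds of changes SurfSafe missed in our tests. The sketch below, again assuming the open-source imagehash library and a placeholder filename, crops and re-saves a photo and measures how far each variant drifts from the original hash.

```python
# Assumes the open-source "imagehash" library; "suspect.jpg" is a placeholder.
from PIL import Image
import imagehash

photo = Image.open("suspect.jpg").convert("RGB")

# Simulate what happens when an image gets shared around: a light crop,
# and a lower-quality re-save of the kind many platforms apply.
width, height = photo.size
cropped = photo.crop((20, 20, width - 20, height - 20))
photo.save("resaved.jpg", "JPEG", quality=60)
resaved = Image.open("resaved.jpg")

original_hash = imagehash.phash(photo)
for name, variant in [("cropped", cropped), ("re-saved", resaved)]:
    # Distance is the number of differing bits out of 64.
    distance = original_hash - imagehash.phash(variant)
    # A strict threshold (say 4) tends to miss crops and screenshots; a loose
    # one (say 20) starts matching unrelated photos and piles more images onto
    # whoever has to verify them.
    verdict = "same image" if distance <= 10 else "treated as new"
    print(f"{name}: {distance} bits apart, {verdict}")
```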

When asked about these challenges, both the AI Foundation and SurfSafe err on the side of caution. They say their products are still being developed (and not yet even released in the case of Reality Defender), and they don’t expect such problems to be solved overnight.

A plug-in that you tune out

This last point is certainly true, and experts suggest that our current malaise is here to stay. Sarah Roberts, an assistant professor at UCLA who specializes in digital information and media, tells The Verge that products like Reality Defender are an important “instantiation of contemporary anxieties,” but they don’t address underlying issues.

“I think people are sensing a vacuum.”

“I think people are sensing a vacuum,” says Roberts. “They sense a void, a lack of trust in institutions.” She adds that declining readership of established news media and lack of government support for libraries and public schools are “worrying trends.” These were places where people could teach themselves and learn how information in society is produced and disseminated, says Roberts, and now they’re being forgotten.

“It’s not like people woke up without a desire to be informed or without a desire to have trusted, vetted information sources,” she says. “In fact, it’s quite the opposite. In the era of abundant information, people need that expertise now more than ever.”

That expertise, arguably, hasn’t gone away. It’s just been drowned out by louder voices, happy to throw misinformation chum into the waters of online media just to cause a frenzy. And this brings us to a larger question for both Reality Defender and SurfSafe. Even if the products can achieve their stated aims, how will they encourage people to actually use them? How do they make themselves heard in the din? As Wired notes in its coverage of SurfSafe, it’s a lack of digital literacy that makes people vulnerable to fake news and viral hoaxes in the first place. Getting those individuals to install such a plug-in won’t be easy.

But for Roberts, there’s just a certain “paucity of imagination” in these solutions. They’re doing something, she says, but it’s not nearly enough. “What people are panicking about is a political situation, an economic situation. And how do you have an intervention at that level? A browser-level intervention… What will that achieve?”
