Preview: A look at the demos participating in Fake News Horror Show!

Fake News Horror Show! will convene researchers, digital media experts and technologists, and those concerned with the political and social implications of computational propaganda to imagine the worst. Picture a science fair of terrifying propaganda tools: some real and some imagined, but all based on plausible technologies. Below, please find a preview of the demos to be on view during the event. The demo application is still open on a rolling basis here.


State of the Medium. Jaycee Holmes, Akmyrat Tuyliyev, and Richard Lapham; NYU ITP. An AR experience that explores the evolution of news and media consumption over the past 50 years and the next 50, focusing on the annual State of the Union Address.

Nara: Deepfakes for Audio. By founder Christian Flores. Cloning voices to create the first deepfakes for audio. Learn more about this startup here.

"War of the Worlds" panic on turbo. Alex Kaplan of Media Matters. The demo shows how radio stations around the country have become a conduit for fake news. Radio stations already regularly report articles with a satire label as real, along with hoaxes from internet trolls and fake news websites during breaking news situations. The demo aims to present how susceptible radio stations are to the worst case scenario of where radio stations are falling for deepfakes and airing fake audio as real, particularly related to elections, constantly, without researching it first and rarely issuing a correction. 

Fear Uncertainty and Doubt. NYU IDM faculty R. Luke DuBois. The demo will show off a range of unethical facial recognition software in the context of targeted marketing and microaggression, as a sort of horrific selfie booth: the booth will try to get you to register an emotion (fear, anger, happiness, disgust, etc.) in response to visual prompts and obnoxious verbal cues (e.g. 'smile, honey').
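
For a rough sense of the plumbing behind a booth like this, here is a minimal webcam-loop sketch using OpenCV's bundled face detector. The emotion classifier is a stub placeholder; the installation's actual software is not described here, so everything beyond the face-detection loop is an assumption.

```python
# Minimal sketch of a selfie-booth loop: detect faces from the webcam and
# label them with an emotion. Assumes OpenCV; the classifier is a stub.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_img):
    # Placeholder: a real booth would run a trained expression model here
    # (fear, anger, happiness, disgust, ...). We just return a stub label.
    return "neutral"

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label = classify_emotion(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2.imshow("selfie booth", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```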

Reddit Hole. Mikaela Brown and Devon Bain; Cornell Tech. The potential to find extremist content simply by clicking on recommendations has been well-documented on sites like YouTube and Facebook. We wanted to see if toxic content is also only a few clicks away on Reddit, which has not been studied as extensively. Can the sidebar links created by subreddit moderators lead users down rabbit holes of hateful content?
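
For readers curious how such a crawl might be set up, below is a minimal sketch using the PRAW library: a breadth-first walk over the subreddit links found in each sidebar. The credentials, seed subreddits, and depth limit are placeholders and assumptions, not the team's actual methodology.

```python
# Minimal sketch of a sidebar-link crawl on Reddit, assuming PRAW.
# Credentials and seed subreddits below are placeholders.
import re
from collections import deque

import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="reddit-hole-sketch",
)

SUBREDDIT_LINK = re.compile(r"/r/([A-Za-z0-9_]+)")

def crawl_sidebars(seeds, max_depth=3):
    """Breadth-first walk over sidebar links, recording how many clicks
    each discovered subreddit is from the seed subreddits."""
    seen = {name.lower(): 0 for name in seeds}
    queue = deque((name, 0) for name in seeds)
    while queue:
        name, depth = queue.popleft()
        if depth >= max_depth:
            continue
        try:
            sidebar = reddit.subreddit(name).description or ""
        except Exception:
            continue  # private, banned, or nonexistent subreddit
        for linked in SUBREDDIT_LINK.findall(sidebar):
            key = linked.lower()
            if key not in seen:
                seen[key] = depth + 1
                queue.append((linked, depth + 1))
    return seen

# Example: start from a couple of mainstream seeds and inspect what turns up.
# distances = crawl_sidebars(["news", "politics"])
```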

Wikibabel. Zach Coble; NYU ITP. Wikibabel examines contemporary epistemology through an alternate version of Wikipedia. The site, aesthetically and functionally similar to Wikipedia, is created using a machine learning model (specifically, a long short-term memory network) trained on a Wikipedia data dump, which then generates new articles in the linguistic and structural style of the original entries. The project employs parody and satirical critique to explore how Wikipedia, as a volunteer-edited site representing all of human knowledge, embodies a shift in epistemology brought about by the social web and has become an unwitting influence on the fake news moment.
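
To give a concrete sense of the technique, here is a minimal sketch of a character-level LSTM generator in PyTorch. The model size, sampling temperature, and the `stoi`/`itos` character mappings (built from whatever corpus is used) are assumptions for illustration, not Wikibabel's actual code.

```python
# Minimal sketch: a character-level LSTM that, once trained on a text
# corpus (e.g. a Wikipedia dump), samples new "articles" one character
# at a time. stoi/itos are char-to-index and index-to-char mappings
# built from the training corpus.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def generate(model, stoi, itos, seed="== ", length=500, temperature=0.8):
    """Sample new text from a trained model, starting from a seed string."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    state, text = None, list(seed)
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            text.append(itos[nxt])
            idx = torch.tensor([[nxt]])
    return "".join(text)
```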

Talking Points. Barak Chamo; NYU ITP. Talking Points is an interactive media installation that exposes hidden agendas, media bias, subjective coverage and the "talking point bubble" in broadcast media. The project focuses on how talking points are pushed and homogenized across networks, forcing ideas and beliefs into the minds of audiences through the repetitive recitation of promoted narratives.