Tag: astroturf

Social media research

Increased data access would enable researchers to perform studies on a broader scale, allow for improved characterization of misinformation in real-world contexts, and facilitate the testing of interventions to prevent the spread of misinformation. The current paper highlights 15 opinions from researchers detailing these possibilities and describes research that could hypothetically be conducted if social media data were more readily available. As scientists, our findings are only as good as the data at our disposal, and with the current misinformation crisis, it is urgent that we have access to real-world data from the platforms where misinformation is wreaking the most havoc.

Information warfare attack

As DHS, DNI, FBI, and the Pentagon come together before the public to say Russia is actively attacking our midterm elections, as we have long been warned they’d do, please remember that exactly 2.5 weeks ago Donald Trump stood next to Russian President Vladimir Putin, refused to confront him on the 2016 infowar campaign our intelligence officials all say happened, and called Putin’s denial of the 2016 infowar “strong and powerful.”

Seeing all the intel chiefs on stage say one thing, while knowing the President (who wasn’t there) believes another, was weird.

All of the directors seemed to be saying they believe the attacks were overwhelmingly psyops, that is, online campaigns intended to influence opinion and voting choices, rather than direct attacks on voting infrastructure.

80s climate mitigations

In the decade that ran from 1979 to 1989, we had an excellent opportunity to solve the climate crisis. The world’s major powers came within several signatures of endorsing a binding, global framework to reduce CO2 emissions — far closer than we’ve come since. During those years, the conditions for success could not have been more favorable. The obstacles we blame for our current inaction had yet to emerge. Almost nothing stood in our way — nothing except ourselves.

Automated Crowdturfing

Malicious crowdsourcing forums are gaining traction as vehicles for spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable from real ones by state-of-the-art statistical detectors. We conduct a survey-based user study to show that these reviews not only evade human detection, but also score highly on “usefulness” metrics as rated by users. Finally, we develop novel automated defenses against these attacks by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our defenses, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
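The defense the abstract alludes to exploits that lossy transformation: a fixed-size RNN cannot perfectly reproduce the low-level statistics of its training corpus, so generated text drifts measurably from human text even when it reads naturally. Below is a minimal sketch of that idea, assuming a detector that compares character-frequency distributions with KL divergence; the alphabet, smoothing, and threshold are illustrative assumptions, not the paper’s actual implementation.

```python
# A minimal sketch, not the paper's implementation: flag reviews whose
# character-frequency distribution diverges from that of known-human text.
# The alphabet, smoothing, and threshold are all illustrative assumptions.
from collections import Counter
import math

ALPHABET = set("abcdefghijklmnopqrstuvwxyz .,!?'")

def char_distribution(text):
    """Smoothed character-frequency distribution over ALPHABET."""
    counts = Counter(c for c in text.lower() if c in ALPHABET)
    total = sum(counts.values()) + len(ALPHABET)  # add-one smoothing
    return {c: (counts[c] + 1) / total for c in ALPHABET}

def kl_divergence(p, q):
    """KL(p || q); both distributions share the same support."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in p)

def looks_generated(review, human_reference, threshold=0.05):
    """True if the review's character statistics sit unusually far from
    the human reference corpus. A real detector would calibrate the
    threshold on labeled data rather than hard-coding it."""
    return kl_divergence(char_distribution(review),
                         char_distribution(human_reference)) > threshold

# Usage: human_reference would be a large string of known-human reviews, e.g.
# looks_generated("Great food! The pasta was amazing!", human_reference)
```

The design intuition is that an attacker optimizing for surface fluency is not optimizing for these low-level statistics; forcing them to match both at once is what produces the unattractive cost-benefit tradeoffs the abstract mentions.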

Astroturfing by Authoritarians

Someone needs to do the obvious follow-up in the US, given all the sockpuppet accounts here:

If these estimates are correct, a large proportion of government website comments, and ~0.6% of social media posts on commercial sites, are fabricated by the government. The posts are not randomly distributed but, as we show in Figure 2, are highly focused and directed, all with specific intent and content.
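For scale, and assuming the figures from the study this excerpt appears to quote (King, Pan, and Roberts estimate roughly 448 million fabricated posts per year, about one of every 178 posts on commercial social media sites), the ~0.6% is just the ratio:

\[
\frac{4.48 \times 10^{8}\ \text{fabricated posts/yr}}{\sim 8 \times 10^{10}\ \text{total posts/yr}} \approx \frac{1}{178} \approx 0.6\%
\]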

FB Fake News Defeat

Some of the posts on the fake news sites’ pages went extremely viral many months after Facebook announced its crackdown. An Empire News story reporting that Boston Marathon bombing suspect Dzhokhar Tsarnaev sustained serious injuries in prison received 240k likes, 43k shares, and 28k comments on its Facebook page. The incident was pure fiction, but the story still spread like wildfire on the platform. An even less believable post about a fatal gang war sparked by a “blood moon” was shared 22k times from the Facebook page of Huzlers, another fake news site.

Rise of the trollbot

Trolling will soon be automated, and in the case of Gamergate may already have happened: Welcome to 2018 or so. Half your social media friends are probably robots – and they’re probably the half that you like the most. Every so often one of the remaining humans gets driven off the Internet thanks to a furious 24/7 Twitter assault that might be a zeitgeist moment, or might just be a bot assault. And you can’t even tell if what you think is the zeitgeist is entirely manufactured by one guy with an overheating graphics card and a Mission.

2023-02-03: Scott Alexander considers these rampant fears and concludes that they won’t be a big deal

Overall I think it will happen to a very limited degree or not at all:

  1. There Are Already Plenty Of Social Anti-Bot Filters
  2. …And Technological Anti-Bot Filters
  3. Fear Of Backlash Will Limit Adoption
  4. Propagandabots Spreading Disinformation Is Probably The Opposite Of What You Should Worry About
  5. Realistically This Will All Be Crypto Scams
  6. I Do Think This Might Decrease Serendipitous Friendship, Though
  7. You Can Solve For The Equilibrium

End of the uncanny valley

The era of “I can tell from the pixels” is coming to an end.

As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated characters.

With photo retouching, postproduction in film, plastic surgery, and increasingly effective makeup & skin care products, we’re being bombarded with a growing amount of imagery featuring people who don’t appear naturally human.

bye bye uncanny valley.
2021-10-17: Things are now at the point where you can win prestigious photography prizes for fake images:

The Book of Veles: How Jonas Bendiksen hoodwinked the photography industry. The photographer explains the many layers of intrigue that went into the creation of his book about misinformation in the contemporary media landscape. If computer-generated fake news pictures are accepted by the curators who have to pick the highlights of all the year’s best photojournalism, it shows that the whole industry is quite vulnerable. The big tech companies regularly recruit top-level hackers, even criminal ones, to try to break into their systems. They are called penetration testers. They are paid top dollar to hack as much as they can and search for weaknesses in a company’s system architecture, so that they can go fix the loopholes and protect themselves against being taken advantage of. I guess I see what I did as a similar service for documentary photography and photojournalism, just on a volunteer basis.