Last Week in AI #48

Fighting fake news bots, algorithmic bias, and more!

Image credit: Annette Zimmermann, Elena Di Rosa, Hochan Kim / Boston Review

Mini Briefs

We’re fighting fake news AI bots by using more AI. That’s a mistake.

It is widely believed that machine learning gives political bots the ability to learn from their surroundings and interact with people in sophisticated ways. In events such as Brexit, where researchers believe political bots and disinformation played a key role, many also assume that AI allowed computers to pose as humans and manipulate the public conversation. In reality, AI has played little role in computational propaganda campaigns thus far; it is only beginning to be used.

Recently, tech leaders such as Mark Zuckerberg have claimed that AI will be the solution to the problem of digital disinformation. In certain cases, machine learning can help social-media firms surface potentially false content and match it against existing fact-checks, though there is debate over whether identifying false information this way is actually effective, since such efforts kick in only after false articles have already gone viral. Facebook, Google, and others like them have also not been especially zealous in their efforts to root out disinformation.
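To make the fact-check-matching idea concrete, here is a minimal sketch, assuming a hypothetical database of claims already rated false by human fact-checkers and using off-the-shelf TF-IDF similarity from scikit-learn. It is our own illustration, not any platform’s actual pipeline.

```python
# Illustrative sketch only: match incoming posts against claims that human
# fact-checkers have already rated false, so likely repeats can be routed
# to human review. The claims and posts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checked_claims = [
    "Drinking bleach cures the flu",
    "Voting machines secretly changed millions of votes",
]
new_posts = [
    "A new study shows bleach can cure flu symptoms overnight!",
    "Here is my cat wearing a tiny hat.",
]

# Fit TF-IDF on all text so both sets share one vocabulary.
vectorizer = TfidfVectorizer().fit(fact_checked_claims + new_posts)
claim_vecs = vectorizer.transform(fact_checked_claims)
post_vecs = vectorizer.transform(new_posts)

# For each post, report its closest known false claim; a deployed system
# would queue posts above some tuned similarity threshold for human review.
for post, sims in zip(new_posts, cosine_similarity(post_vecs, claim_vecs)):
    best = sims.argmax()
    print(f"{post!r} -> {fact_checked_claims[best]!r} (similarity {sims[best]:.2f})")
```

Even a matcher like this only catches repeats of claims humans have already checked, which is exactly the after-the-fact limitation described above.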

Combating computational propaganda will take a combination of human labor and AI, though it is not yet clear exactly how. While AI has accomplished a great deal, it still lacks common sense and general intelligence; the propaganda and disinformation we attribute to AI are therefore fundamentally human-driven. To address the problem, we will need to focus on the people behind the tools.

Technology Can’t Fix Algorithmic Injustice

Worries about risks to humanity from “strong” AI have been expressed by many, including Stephen Hawking and Elon Musk. While these are legitimate long-term worries, prioritizing them distracts from the ethical questions “weak” AI is already raising. AI is already working behind the scenes of many of our social systems, influencing high-stakes domains from criminal justice to healthcare. Deploying today’s “weak” AI will require making consequential choices that demand greater democratic oversight not just from AI developers, but from all members of society.

While there has been discussion of designing better algorithms to deal with bias, the data those algorithms are trained on are themselves biased; even an algorithm that is neutral in itself is unlikely to behave neutrally once trained and deployed. For example, predictive policing systems are fed data we know to be biased, and using these data can drive overpolicing of marginalized communities precisely because those communities are overrepresented in existing policing records.
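To see the feedback loop in miniature, consider a toy simulation with made-up numbers (our own sketch, not from the article): two neighborhoods have identical true crime rates, but one starts out overrepresented in the historical records, and patrols are allocated in proportion to recorded incidents.

```python
# Toy feedback-loop simulation with made-up numbers. Both neighborhoods
# have the same underlying crime rate; only the starting records differ.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.1              # identical in both neighborhoods
TOTAL_PATROLS = 100
recorded = {"A": 60, "B": 40}      # A starts overrepresented in the data

for year in range(1, 6):
    total = sum(recorded.values())
    for hood in recorded:
        # Patrols follow past records, not true crime.
        patrols = round(TOTAL_PATROLS * recorded[hood] / total)
        # Every stop has the same chance of recording an incident, so any
        # difference in new records comes from patrol allocation alone.
        stops = patrols * 10
        new_incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(stops))
        recorded[hood] += new_incidents
    print(f"year {year}: recorded incidents {recorded}")
```

The gap in recorded incidents persists and grows in absolute terms even though nothing about the underlying neighborhoods differs, and a model trained on these records would “learn” that neighborhood A has more crime.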

While countermeasures that correct for bias in data may be a step in the right direction, they cannot correct for what kind of data was collected in the first place. Any approach that optimizes for procedural fairness without attending to the social context in which these systems operate is going to be insufficient. If our society were just, then neutral algorithms might secure just outcomes; in the unjust society we actually live in, they will serve only to reinforce the status quo. Even with the “weak” AI we have today, then, the road to solving algorithmic bias is not a purely technical one. We will have to treat it as a moral and political problem in which all of us have a stake.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

  • The simplest explanation of machine learning you’ll ever read - You’ve probably heard of machine learning and artificial intelligence, but are you sure you know what they are? If you’re struggling to make sense of them, you’re not alone.

  • 10 ML & NLP Research Highlights of 2019 - This post gathers ten ML and NLP research directions that I found exciting and impactful in 2019. For each highlight, I summarise the main advances that took place this year, briefly state why I think it is important, and provide a short outlook to the future.

  • Challenges of real-world reinforcement learning - Last week we looked at some of the challenges inherent in automation and in building systems where humans and software agents collaborate. When we start talking about agents, policies, and modelling the environment, my thoughts naturally turn to reinforcement learning (RL). Today’s paper choice sets out some of the current (additional) challenges we face getting reinforcement learning to work well in many real-world systems.


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe