Machine Learning and Misinformation
The creative aspects of machine learning are overshadowed by visions of an autonomous future, but machine learning is also a powerful tool for communication. Most machine learning in today’s products is used for understanding content rather than generating it.
Two similar images tell drastically different stories.
FaceApp [6] is a recent mobile app that uses generative models to alter facial features in photos. The two images above tell very different stories. The second image is Migrant Mother [7], a photograph documenting the harsh conditions of the Great Depression. Knowing that the image comes from the Great Depression helps you identify which of the two is real, because it fits the context of the historical period it reflects. Propaganda is used by groups to obscure the reality of a period. If the doctored version of Migrant Mother had been published during the Great Depression alongside other images that hid the hardship beneath a lacquer of happiness, we might not know the period as the Great Depression today. Spreading misinformation can change the way today’s events are written into history.
Images are just one example where generative models can produce realistic results. Amateurs will soon be able to generate realistic voice [8] and expert writing [9]. Taken together, these tools herald a future where an individual troll can wreak havoc by spreading disinformation and hijacking reality. Consider the following scenario, which ties together generative systems for text, voice, and photos. A malicious person seeds a text-generating model with a few pieces of false data, which are used to generate an entire story at the level of a professional writer. A fake quotation from the story is fed into a voice-generating system, which produces a counterfeit statement. Another sentence is fed into an image-generating system, which creates an image reflecting the malicious opinion. All of this media together supports a breaking-news report whose individual pieces are increasingly difficult to separate from reality. The immediate dangers of machine learning are not robot uprisings, but the destabilizing effects that disruptive technologies have in a fragile social and economic climate that is slow to adapt.
Some people hope that the ease of creating misinformation will cause people to question all media. Unfortunately, this ignores the reality of misinformation and media consumption. When you encounter information, it has an immediate unconscious effect on your attitude and memory. Even once misinformation is discredited, it persists in your attitudes and beliefs, an effect known as Belief Echoes [10]. Other psychological tendencies bode poorly for how misinformation is consumed. People constantly seek out information that confirms an existing belief or desire, a tendency known as confirmation bias [11]. The problem is exacerbated by motivated reasoning [12], a tendency to readily absorb confirming information and discount opposing information.
Understanding human perception provides additional background for the effects of misinformation. The Necker Cube is an optical illusion that presents an ambiguous narrative — there are two ways to interpret the orientation of the cube.
The Necker Cube.
Despite containing ambiguous information, your perception forces you to believe one reality at a time. Your perception may flip back and forth between the two orientations, but it is impossible to see both at once. The nature of human perception is to form a stable version of reality out of what is presented. In the case of misinformation, your mind tries to figure out how to incorporate the new information into its model of reality, even when that information does not belong.
With the ease of creation that machine learning brings to content generation, it will be easier than ever to communicate effectively. The question underlying any new technology is whether people will use it for benevolent or malicious ends. We have explored the benefits and dangers that machine learning brings to the evolving media landscape. It is naive to create these tools without considering the disastrous impact they can have. Members of the technology, academic, and news communities must begin discussing how to navigate this new landscape. Cooperation is necessary to defend society from the perverse agenda of those determined to hijack reality.
- Man-Computer Symbiosis
- Mental representations
- Generative Adversarial Text-to-Image Synthesis
- The Computer as a Communication Device
- pix2pix (an interactive demo is available online)
- Migrant Mother
- Adobe Voco ‘Photoshop-for-voice’ causes concern
- I taught a computer to write like Engadget
- Belief Echoes: The Persistent Effects of Corrected Misinformation
- Confirmation bias
- Motivated reasoning
Bio: Paul Soulos is an engineer and interaction designer. His goal is to enhance human-computer interaction by researching the correspondences between representations learned by humans and those by machine learning algorithms. He is interested in extended cognition and intelligence amplification.
Original. Reposted with permission.