J. Scott Christianson

January 8, 2020


J. Scott Christianson is a successful instructor and business owner with more than 24 years of experience in networking, videoconferencing technology, and project management. Scott currently serves as an Assistant Teaching Professor at the University of Missouri, where his interests are focused on the impact of technology on society and human well-being. He shares teaching and learning resources at his website and publishes The Free-Range Technologist, a monthly newsletter of resources and lifehacks. You can subscribe here.

1. Why have you grown concerned about Artificial Intelligence, and how do your concerns differ from the mainstream thinking surrounding these technologies?

Concern about Artificial Intelligence usually focuses either on the development of some great intelligence that might turn against us or on some form of AI-driven automation that will take our jobs. But we already have massive problems caused by AI, and most people don’t even realize it.

The feeds that billions of people consume on social media are all curated by a form of AI called machine learning, and none of it is designed to benefit humans. The concept of machine learning is easy to understand. Give a computer algorithm a large set of data, some users to experiment on, and a goal. Then sit back and let the machine figure out the best way to get people to click on an ad, spend more time on your site, “like” more posts, or share particular articles. The platform will figure out how to do this quickly, and it will only become more effective as more “learning” accumulates in its system.

Unfortunately, a machine learning algorithm doesn’t care how it reaches its goal. If presenting fake, highly emotional, or hateful speech gets people to click on an ad, then that is what the algorithm will do. Its goal is not to increase the number of ad clicks while also fostering constructive political dialogue. It just drives clicks. We don’t want to admit it, but we humans are easily hacked. Marketers have long known that we can be nudged into purchasing decisions, political opinions, and even relationships based on the messages and cues we are given. But in the war to manipulate our choices, traditional marketing has been using sticks and stones. In comparison, AI is a hydrogen bomb!
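To make that loop concrete, here is a minimal sketch of an engagement optimizer, written as an epsilon-greedy bandit whose only reward signal is a click. The item names and click-through rates are invented for illustration; this is not any platform's real code, only the shape of the objective:

```python
import random

# Three candidate items a feed could show. The click-through rates are
# invented numbers for illustration; a real platform estimates them from
# live user behavior.
ITEMS = {
    "balanced news article": 0.03,
    "emotional clickbait": 0.08,
    "outrage-bait post": 0.12,
}

def user_clicks(item):
    """Simulate a user: click with the item's (hidden) probability."""
    return random.random() < ITEMS[item]

shows = {item: 0 for item in ITEMS}
clicks = {item: 0 for item in ITEMS}

# Epsilon-greedy loop: explore 10% of the time, otherwise show whatever
# has earned the best click rate so far. The only signal the algorithm
# ever sees is "clicked or not": nothing about truth, tone, or harm.
for _ in range(100_000):
    if random.random() < 0.1:
        choice = random.choice(list(ITEMS))
    else:
        choice = max(ITEMS, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
    shows[choice] += 1
    clicks[choice] += user_clicks(choice)

for item in ITEMS:
    rate = clicks[item] / max(shows[item], 1)
    print(f"{item:22s} shown {shows[item]:6d} times, observed CTR {rate:.3f}")
# The outrage-bait item ends up shown the overwhelming majority of the
# time, simply because nothing in the objective penalizes how the clicks
# were obtained.
```

Swap the reward for time-on-site or shares and the same loop will chase those instead, which is exactly the point above: the algorithm converges on whatever content pays off, because no term in the objective rewards truth or civility.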

2. How do your classes and talks inform people about the perils of AI and machine learning?

Students don’t realize how much time they spend on these platforms. In my classes, I ask for student volunteers to guess how much time they spend on social media. We then unlock their phones and look at the data. The actual time spent is usually about twice what they guessed. Then we list the data and metadata that are collected about their behavior in a given day; the list goes on for several pages. That is enough for students to start realizing that free email, maps, and search services are not really designed to provide a service. The service is just a way to gain access to the user for surveillance and data collection.

I am currently developing a classroom exercise on machine learning to show students how powerful this technology is and how easy it is to use.
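As a flavor of what such an exercise might involve, here is a minimal sketch assuming the scikit-learn library; the dataset, labels, and task are invented for illustration and are not the actual exercise:

```python
# A taste of how little code a working machine-learning model takes today.
# Hypothetical exercise sketch: train a classifier to guess whether a short
# post reads "angry" or "calm" from a handful of hand-labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "I can't believe they did this. Absolutely outrageous!",
    "This makes my blood boil. Share before they delete it!",
    "Lovely walk along the river this morning.",
    "Here are the meeting notes from Tuesday.",
]
labels = ["angry", "angry", "calm", "calm"]

# Bag-of-words features piped into a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)

print(model.predict(["You won't BELIEVE what they are hiding from you!"]))
# Likely prints ['angry']: with four training posts this is a toy, but the
# workflow is identical when a platform fits it to billions of examples.
```

Four labeled examples make this a toy, of course, but the workflow of collecting labeled behavior, fitting a model, and predicting is exactly what the platforms run at the scale of billions of data points.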

3. What are some immediate steps that people can take right now, individually and communally, to counteract the impact of AI and machine learning on our social media streams?

Individually, there is a lot you can do. First, restrict or stop your use of social media. I have deleted Facebook and Instagram and limit my use of Twitter to mornings and evenings.

Second, don’t allow your children on these platforms. That might sound harsh, but the people who investigate crimes against children will tell you that these platforms are a tool used by child predators. Adolescent self-harm and suicide are on the rise, and several recent studies have directly linked social media to these increases.

As a society, we need regulation to protect our children and ourselves. As we learn more about the effects of these platforms, the calls for regulating them are becoming louder and louder. I wish it were as easy as breaking up or regulating one company. Unfortunately, Facebook, Twitter, and Google are just examples of the problem: humans using AI against other humans. We need an FDA for AI. We should be able to know when we are just browsing a page and when an AI is tracking our behavior to drive us toward some goal. And we should be able to find out what that goal is: making a purchase, supporting a candidate, etc.

At the global scale, I think that now is the time to develop an international framework for regulating all types of AI. We certainly don’t need a spate of different regulations in different countries for different platforms. That is not helpful to any society or any company. Just as the nuclear powers developed a framework for the international regulation of nuclear energy and nuclear weapons, the current AI “powers” must now work to establish a global framework for AI proliferation and regulation.
