07.29.19 | Guest Host: The Social Consequences of Artificial Intelligence Systems

With Dr. Terry Winograd

At our Member Monday discussion on Feb. 25, we asked whether artificial intelligence (AI) is, or can be, real intelligence in the manner of human beings, or merely a mechanical replication of it. We found no simple answer to that question, with passionate opinions on both sides.

This week, we consider the social consequences of AI systems. As governments and commercial companies rush to implement these systems for a wide variety of purposes, do they take into account the consequences of delegating what have always been human decisions and actions to machines?

Our first background article for this week (see list below) points out uses of AI that could be considered either benign or harmful to society:

Clarifai specializes in technology that instantly recognizes objects in photos and video. Policymakers call this a “dual-use technology.” It has everyday commercial applications, like identifying designer handbags on a retail website, as well as military applications, like identifying targets for drones.

This and other rapidly advancing forms of artificial intelligence can improve transportation, health care and scientific research. Or they can feed mass surveillance, online phishing attacks and the spread of false news.

Other questions raised in the background articles include:

  • Do AI systems reflect the biases of the people who create them?

  • Should we create autonomous weapons?

  • Will AI displace large numbers of human workers?

  • Are AI systems so complex that even the people who design them can’t explain their actions?

  • Can we design AI systems to respect and fit with our social norms?

  • What happens if AI systems bypass human strategies and act in ways entirely unknown to us?

  • Can AI augment human intelligence and intuition, and perhaps even shed light on what it means to be human?

Please refer to these background articles:

Is Ethical A.I. Even Possible?

How to Make A.I. That’s Good for People

The Dark Secret at the Heart of AI

Why AI is a threat to democracy—and what we can do to stop it

Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear

Our lead participant for this meeting will be Terry Winograd. Dr. Winograd is Professor of Computer Science Emeritus at Stanford University, where he created the Human-Computer Interaction Group and directed it for 20 years, along with the teaching and research program in Human-Computer Interaction Design.

His early research on natural language understanding by computers (SHRDLU) was the basis for numerous books and articles. His book Understanding Computers and Cognition: A New Foundation for Design, co-authored with Fernando Flores, took a critical look at work in artificial intelligence and suggested new directions for the integration of computer systems into human activity. He edited Bringing Design to Software, which introduced a design thinking approach into the design of human-computer systems.

Dr. Winograd was a founding member of Computer Professionals for Social Responsibility, of which he was national president from 1987 to 1990. He was elected to the CHI (Computer-Human Interaction) Academy of the Association for Computing Machinery in 2003, became an ACM Fellow in 2010, and received the CHI Lifetime Research Achievement Award in 2011.

HCC member Steve Winograd, Terry’s brother, will facilitate the meeting, while Steve Smith takes a well-deserved week off.

Ian Reid