The disability community has long struggled with “useful” technologies – lessons for everyone in dealing with AI

You may have heard that artificial intelligence is going to revolutionize everything, save the world and give everyone superhuman powers. Or you may have heard that it’s going to take away your job, make you lazy and stupid, and turn the world into a cyberpunk dystopia.

Here is another way to look at AI: as an assistive technology – something that helps you function.

With that perspective, consider a community of experts in giving and receiving help: the disability community. Many disabled people make extensive use of technology, both special assistive technologies like wheelchairs and general technologies like smart home devices.

Likewise, many disabled people receive professional and informal help from other people. And despite stereotypes to the contrary, many disabled people regularly provide help to the disabled and non-disabled people around them.

People with disabilities have a wealth of experience in receiving and giving social and technical support, making them a valuable source of insight into how everyone might interact with AI systems in the future. This potential is a major driver of my work as a disabled person and researcher in AI and robotics.

Learning to live actively with help

Although almost everyone values independence, no one is completely independent. We all depend on others to grow our food, care for us when we are ill, give us advice and emotional support, and help us in a thousand interconnected ways. Being disabled means having support needs that go beyond the norm, which makes those needs far more visible. As a result, the disability community has had to think more explicitly about what it means to need help in order to live than most people without disabilities have.

This perspective from the disability community can be invaluable when approaching new technologies that can help both disabled and non-disabled people. Pretending to be disabled is no substitute for the experience of actually being disabled, but accessibility can benefit everyone.

This is sometimes called the ‘curb-cut effect’: a ramp cut into a curb to help a wheelchair user get onto the sidewalk also benefits people with strollers, wheeled suitcases and bicycles.

Collaboration in aid provision

You’ve probably experienced someone trying to help you without listening to what you really need. A parent or friend might “help” you tidy up, for example, and end up hiding everything you actually need.

Disability advocates have long fought against this kind of well-intentioned but intrusive assistance. They do so, for example, by putting pins on wheelchair handles to discourage people from pushing someone’s wheelchair without permission. They also advocate for services that keep the disabled person in control.

Instead, the disability community offers a model of helping as a collaborative effort. Applying this to AI can help ensure that new AI tools support human autonomy rather than usurp it.

A key goal of my lab’s work is to develop AI-driven assistive robotics that treat the user as an equal partner. We’ve shown that this model is not only valuable, but also unavoidable. For example, most humans find it difficult to use a joystick to move a robotic arm: The joystick can only move forward and back and side to side, but the arm can move in almost as many ways as a human arm.

To help, AI can predict what someone is trying to do with the robot and move the robot accordingly. Previous research assumed people would simply ignore this help, but we found that people quickly figured out that the system was doing something, actively tried to understand what it was doing, and worked with the system to get it to do what they wanted.
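To make that idea concrete, here is a minimal sketch, in Python, of one common way to build this kind of assistance, often described as shared control or shared autonomy: the system guesses which goal the user is steering toward and blends its own helping command with the user’s joystick command. The goal positions, function names and numbers below are illustrative assumptions, not my lab’s actual system.

```python
import numpy as np

# Candidate objects the user might be reaching for (illustrative positions).
GOALS = {
    "cup":   np.array([0.6, 0.2]),
    "spoon": np.array([0.4, -0.3]),
}

def predict_goal(hand_pos, joystick):
    """Guess which goal the user is steering toward: pick the goal whose
    direction best matches the joystick direction (cosine similarity)."""
    u = joystick / (np.linalg.norm(joystick) + 1e-8)
    scores = {}
    for name, goal in GOALS.items():
        to_goal = goal - hand_pos
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-8)
        scores[name] = float(np.dot(to_goal, u))
    best = max(scores, key=scores.get)
    return best, scores[best]

def blend(hand_pos, joystick, alpha_max=0.7):
    """Blend the user's command with an assistive command toward the
    predicted goal. Confidence scales how much the robot helps."""
    goal_name, confidence = predict_goal(hand_pos, joystick)
    assist = GOALS[goal_name] - hand_pos
    assist = assist / (np.linalg.norm(assist) + 1e-8)
    alpha = alpha_max * max(confidence, 0.0)   # 0 = all user, alpha_max = mostly robot
    return (1 - alpha) * joystick + alpha * assist

# Example: the user nudges the joystick roughly toward the cup.
print(blend(hand_pos=np.array([0.0, 0.0]), joystick=np.array([0.9, 0.3])))
```

The design choice that matters here is that the blending weight never reaches 1, so the person always retains some direct influence over the robot – the collaborative framing described above, rather than the robot simply taking over.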

Most AI systems don’t make this easy, but new approaches to AI in my lab allow people to influence how robots behave. We have shown that this leads to better interactions in creative tasks such as painting. We’ve also started exploring how people can use this control to solve problems beyond the ones the robots were designed for. For example, people can take a robot trained to carry a cup of water and pour it out, and use it to water their plants instead.

Training AI on human variability

The disability-focused perspective also raises concerns about the massive data sets that power AI. The very nature of data-driven AI is to look for common patterns. In general, the better something is represented in the data, the better the model works.

If being disabled means having a body or mind outside the norm, then being disabled also means being poorly represented in that data. Whether it’s exam-proctoring AI that mistakes students’ disabilities for cheating, or robots that fail to account for wheelchair users, disabled people’s interactions with AI reveal just how brittle those systems are.
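A toy example, with purely illustrative numbers, shows the underlying pattern: a model that learns the “typical” value from data in which one group is rare will serve that group poorly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Imagine measuring some bodily trait: 950 samples from a well-represented
# group (true value 1.0) and only 50 from an underrepresented group (true
# value 2.0). The numbers are made up for illustration.
majority = rng.normal(loc=1.0, scale=0.1, size=950)
minority = rng.normal(loc=2.0, scale=0.1, size=50)
data = np.concatenate([majority, minority])

model = data.mean()   # the simplest possible "pattern-finding" model

print(f"model output:             {model:.2f}")
print(f"error for majority group: {abs(model - 1.0):.2f}")
print(f"error for minority group: {abs(model - 2.0):.2f}")
```

The model lands close to the majority’s value and far from the minority’s – the same dynamic, in miniature, that plays out in large data-driven AI systems.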

One of my goals as an AI researcher is to make AI more responsive and adaptable to real human variation, particularly in AI systems that learn directly from interactions with humans. We have developed frameworks to test how robust those AI systems are to real human teaching, and have investigated how robots can learn better from human teachers, even as those teachers change over time.
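As a toy illustration of what “robust to real human teaching” can mean in practice – not my lab’s actual framework – the sketch below stress-tests a very simple imitation learner against simulated teachers whose demonstrations carry different amounts of bias and noise, and measures how far the learned behavior drifts from the intended one. All names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1.0, 2.0])   # the behavior we hope the robot learns

def simulate_teacher(bias_scale, noise_scale, n_demos=20):
    """Generate demonstrations from a teacher with systematic bias and random noise."""
    bias = rng.normal(scale=bias_scale, size=2)
    return TARGET + bias + rng.normal(scale=noise_scale, size=(n_demos, 2))

def learn(demos):
    """A deliberately simple learner: average the demonstrations."""
    return demos.mean(axis=0)

# Sweep over increasingly "messy" teachers and report how far the learned
# behavior drifts from the intended one, averaged over many simulated teachers.
for bias_scale, noise_scale in [(0.0, 0.1), (0.1, 0.3), (0.3, 0.5), (0.5, 1.0)]:
    errors = []
    for _ in range(100):
        demos = simulate_teacher(bias_scale, noise_scale)
        errors.append(np.linalg.norm(learn(demos) - TARGET))
    print(f"bias={bias_scale:.1f} noise={noise_scale:.1f} "
          f"mean error={np.mean(errors):.2f}")
```

The point of a test like this is not the particular learner but the sweep: checking how gracefully performance degrades as the human input departs from the idealized teacher most algorithms assume.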

By viewing AI as an assistive technology and learning from people with disabilities, we can ensure that the AI systems of the future respond to people’s needs, with people at the helm.

This article is republished from The Conversation, a nonprofit, independent news organization that provides you with facts and analysis to help you understand our complex world.

It was written by: Elaine Short, Tufts University.

Elaine Short is co-PI of AccessComputing, an organization working to increase the representation of people with disabilities in computing careers, and the co-chair of AccessSIGCHI, an organization dedicated to improving accessibility of the ACM Special Interest Group on Human-Computer Interaction (SIGCHI). She receives funding from the US National Science Foundation, Amazon Robotics, and the Henry Luce Foundation’s Clare Boothe Luce Program.