
ChatGPT, Ethics & Careers in Computational Linguistics

Q&A With Program Director Emily M. Bender

[Photo: Emily M. Bender]

Language technology is all around us, whether it’s predictive text in an email, digital voice assistants like Alexa, translation tools such as Google Translate or countless other applications. The Master of Science in Computational Linguistics program at the University of Washington has been preparing students to make an impact on this dynamic field for nearly two decades.

But when the generative chatbot ChatGPT was released to great fanfare in late 2022, it seemed as if the whole world suddenly took notice. Everyone — from tech experts to pundits to government officials — began speculating about the great promise and potential perils of these kinds of powerful language tools. Many of these observers also referred to ChatGPT as a form of artificial intelligence, or AI, a label that some experts have questioned. Technically, ChatGPT is a large language model: a system trained on an enormous corpus of text to predict and generate plausible sequences of words.

To help us understand the current moment for language technology, and how the UW's computational linguistics program prepares students to work in the field, we spoke with Professor Emily M. Bender. She’s a nationally recognized expert on the subject and has been director of the graduate program since its 2005 launch.


It seems like computational linguistics is becoming more and more important to society, with the development of new tools like ChatGPT. What’s your take on why prospective students should think about studying technology and language?

My take on this actually hasn’t changed with the advent of ChatGPT. The idea all along has been that computational linguistics is about getting computers to deal with natural language. There are two main paths for doing this kind of work: You can use it to do linguistic research and build better models of how language works, or — and this is what most of our students are interested in — you can build language technology.

We use language in every single domain of human activity, so anywhere that language is being captured electronically there’s scope for language technology to help analyze and classify that data. To get a sense of how broad the domain of application is, consider things like biomedical language processing, used with electronic health records, or helping find appropriate precedent case law during litigation.

Of course, in consumer-facing areas there are lots of places where having computers deal with natural language is incredibly useful. Things like web search are driven by language processing. Spell-check is also language technology, and something like the language learning tool Duolingo is language technology, and virtual digital assistants are language technology. So, there are many different ways that our students can make an impact on the world through their work.

We hear the phrase AI used so much these days in relation to language technology. Is that the correct way to think about a tool like ChatGPT?

I think the term AI is unhelpful because it's what researchers Drew McDermott and Melanie Mitchell call a “wishful mnemonic.” That’s when the creator of a computer program names a function after what they wished it was doing, rather than what it's actually doing.

It’s more valuable to talk in terms of automation rather than artificial intelligence, and to talk specifically about what task is being automated. So you might talk about automatic transcription — it’s not that AI is writing down your words, it’s that you have an automatic transcription system that goes from an audio signal to, say, English orthography. We need to ask ourselves: Why do we want to use automation for this task? Who’s going to be using it? Who's possibly being harmed by its use? And how well will it perform in this specific place that we're planning to use it?

ChatGPT sprang into everybody's consciousness, not because there was a big technological breakthrough, but because OpenAI made an interface that anybody could go play with. And even though they put some disclaimers on there about how it's wrong some of the time, they presented it to the world as an “artificially intelligent agent” that knows lots of things, and you can ask it questions, rather than as a text synthesis machine trained on some unknown training corpus. A more honest presentation of it would have been that it’s just playing auto-complete with the prefixes that you give it.
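To make the "auto-complete with a prefix" framing concrete, the sketch below shows next-token generation with the openly released GPT-2 model via the Hugging Face transformers library. This is an illustrative stand-in only (ChatGPT's own model, training corpus and decoding settings are not public), and the prompt and parameters here are assumptions chosen purely for demonstration.

```python
# A minimal sketch of "auto-complete with a prefix": a causal language model
# repeatedly predicts the most likely next token given the text so far.
# GPT-2 is used here as an openly available stand-in; ChatGPT's own model
# is not publicly released.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "Computational linguistics is the study of"
inputs = tokenizer(prefix, return_tensors="pt")

# Greedy decoding: at each step, append the single most probable next token.
# The output is a plausible-sounding continuation, not retrieved knowledge.
output_ids = model.generate(
    **inputs,
    max_new_tokens=25,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The point of the sketch is simply that generation works by statistically continuing whatever prefix it is given, which is why fluent-sounding output is no guarantee of factual accuracy.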

We hear a lot in the media about how AI threatens to have major adverse impacts on society. How do you feel about that potential, and how do you think these risks should be framed?

I’ve actually published a blog post recently that talks about this. A lot of what you're hearing is coming from this very strange “AI doomerism” place, which belongs in a tradition of thought that includes things like eugenics and racism. It’s really fringe, but it has a lot of money behind it. And my blog post is asking people to stop holding up these fringe theories as equal to the body of scholarship around the actual risks of so-called AI technologies.

The actual risks are things like this: you create a system that sounds so plausibly human that people believe they can get useful medical or legal advice from it, when they can’t. Or you get situations where instead of doing our duty as a society to provide health care and education and legal representation to everyone, people with means still get the real version of that and everybody else is fobbed off on these text synthesis machines like ChatGPT that give a facsimile of it.

There's another set of risks that have to do with the fact that in the training data for these large language models, they pick up all kinds of garbage. The training data has to be enormous, and it’s simply not possible to go through and make sure that it doesn't contain terribly toxic stuff. Even with smaller data sets where you could do that, there's still going to be all kinds of subtle biases in there. And you can't filter the data enough to get rid of all those kinds of stereotypes. So, that’s reproducing systems of oppression. And that’s not to mention the exploitative labor practices involved in trying to clean up system output after the fact. 

How does your program help prepare students to address these challenges?

There are several things that we do. One thing we have is what we call crosscutting themes — topics that get touched on across all the core classes. One of these is ethics — how do we think about how these language systems will impact society? That’s something that we introduce starting with our program orientation, and it gets called out along the way in our core classes. 

We also have some more focused activities. Every year since 2016–17, we’ve offered an elective course on ethics — it’s currently called Societal Impacts of Language Technology — where students explore these issues further.

And then we have a weekly lab meeting called the Treehouse, where I bring up ethics issues for discussion. For a couple years now I've been doing a lab session on whistleblowing, where we think as a group about what you might need to do to prepare to be a whistleblower, should it become necessary. That's always a really interesting discussion.

And how will the students in your program actually combat these problems?

Students who are trained in how to think about these issues can be in a position to raise alternatives. Outside of work, they would be well-positioned to be advocates who can say, “Hey, I think this might not be a very good path. We need stronger policy and stronger application of existing regulation.” On the job, they can be the ones in the room who say, “If this technology could lead to a scenario like the one at Google, where the Google Photos app was automatically labeling pictures of Black people as gorillas, that would be a huge problem for the company. We really don’t want to be putting racist technology out into the world. Let's do this other thing instead.”

There are also programs like TechCongress that take people trained in technology and place them as staffers in Congress, to bring more technological savvy into policymaking. We haven't had a grad from our program do that yet, but I would love to see that, and to find other ways in which people with our kind of training might end up on the policy side of things rather than the technology side.

One of the clearest lessons that I've learned about this topic is that it’s much more effective to reason about what's going on if you keep the people in the frame. That includes the people making decisions about how to build the technology, the people who are doing the low-paid data labeling work, and the people who are using the technology. If we keep all of them in the frame, the conversation becomes much more sensible.

In the computational linguistics program, we aim to train all our students to do this, so that no matter what professional role they go on to have, they can help steer the development of technology away from harm.


Photo credit: Corinne Thrash, University of Washington College of Arts & Sciences