
    Culture shapes how our brains learn (transcript)

    BRENDAN LYNCH, HOST:  We live in a time where nothing is true. An era where reality and hoax look the same on the internet. Whoa, wait a second. There are people who actually know what they're talking about — dangerous people. We call them experts. We're giving these experts a megaphone to drop some truth bombs. If you can handle the truth. I'm Brendan Lynch, and I'm the host of “When Experts Attack!.” We used to think you could teach math to a student in the American Midwest just the same as you'd instruct a student on the other side of the world. But Michael Orosco, professor of educational psychology at the University of Kansas, says culture shapes how our brains learn. New culturally responsive studies in neuroscience show working memory, executive function and other cognitive functions all are influenced by how we grew up, where we were raised and the language we speak. Along the way, Orosco tells “When Experts Attack!” correspondent Mike Krings about teaching math to English learners, why U.S. schools and teachers are ill-equipped to reach diverse learners and ways he and his colleagues are working to better understand how culture and the brain work together.

     

    THEME MUSIC

     

    MIKE KRINGS, CORRESPONDENT: Michael Orosco, welcome to the “When Experts Attack!” podcast. Thanks for joining us.

     

    MICHAEL OROSCO: Nice meeting you, Mike. Good to be here.

     

    MIKE KRINGS: You do a lot of research into educational neuroscience, how the brain learns, how English learners learn specifically, especially in topics like reading and math. So we're going to talk about a lot of those sorts of things today. But I want to start off by asking you about English learners. What sort of challenges do these students have that we might not think about?

     

    MICHAEL OROSCO: Something to think about might be just the fact that I work with the federal definition of English learners, and that bilingual or dual language learners actually had one of the most important aspects of cognitive development taken away — their native language. But because of the school districts that I work in, we work with the term “English learner.” And so really, the challenge is that these children come to school prepared to learn, but our school system isn't equipped to deal with their learning needs, the type of teaching they need to be receiving.

     

    MIKE KRINGS: As I mentioned, you look a lot at math, reading and writing. But we don't often think of math and reading as linked. Why is it important to understand the link for English learners?

     

    MICHAEL OROSCO: If you go back to what I was just talking about — that when we look at English learners in the schools — I would have to say that the majority, let's say more than 90% of the kids here in the United States who are English learners, receive some type of English-only instruction. Well, when you connect math and reading, math is very language-heavy. Imagine you and I are trying to learn math in Russian. Could you imagine trying to learn in a language that you're not quite familiar with, and then trying to learn it in an academic setting? This is why you have to combine both, integrate the reading, the language and the math. Then I look at problem-solving within these kids, which is a more abstract form of thinking. So math is very language-heavy. We used to think that math was universal. That was a common stereotype. But now the research has shown that math is very specific to your culture, to your native language. And if we're taking away the native language, it becomes more challenging to learn in the second language without the first.

     

    MIKE KRINGS: That's not just for word problems or anything, because that's for math in general. So it's not like saying, “Johnny has three of something and you take away two…”

     

    MICHAEL OROSCO: Think about what you just said: “Johnny had something and you take away two.” That's very phonemic. But it's phonemic relevant to the language you're speaking, which is English. That's that phonological loop we look at in working memory. These kids are having to learn through the phonological loop. Their brains are having to process this in a second language, but without being able to borrow or use the first language.

     

    MIKE KRINGS: I'm glad you mentioned working memory, because that's something I wanted to ask you about. That's one of the cognitive functions that you study. What is the role that plays in learning to read and write? And how does it affect that specific type of learning?

     

    MICHAEL OROSCO: I'm going to give you an example. First, to contextualize it with you: You have high working memory efficiency, and I'll tell you why. Every time I publish an article or I send you an article to process, we do an interview. Well, this is something new to you. For example, a month ago something came out on bilingual cognitive writing that we published in a really good journal. I did an experiment. So you and I were doing the interview, you're asking me questions, I have to give you this information, you have to hold this information — which is abstract — and then you have to think about it after we get done with the interview and then put it on paper. That's your working memory. Working memory is your mental workspace. With children, working memory becomes vital because we use working memory to learn. It gives us the ability to comprehend. It's your mental workspace that allows you to hold information, then begin to process and manipulate that information so you can begin to transfer it into long-term memory, which essentially becomes comprehension.

     

    MIKE KRINGS: Many people, myself included, are what you could call math-averse. You know, we're not crazy about doing math and get a little uncomfortable or anxious when it comes to that. When we're talking about English learners, or some of this population of students that you work with a lot, what kind of additional stresses and challenges are there for the students when they or the teachers struggle with math?

     

    MICHAEL OROSCO: I'll give you an example. I do a lot of classroom observations. On any given day, let's say the elementary school classroom does 50 minutes of math. We might do maybe 12 to 15 minutes of problem-solving with these kids, because problem-solving takes a lot more teaching and it's a heavier cognitive load on the brain. If you have teachers who have a math phobia — we find that a lot of our teachers grew up with that math phobia, math anxiety — and then they go on to be teachers, can you imagine what happens in the classroom? That's really one of the biggest challenges in classrooms: trying to equip our teachers to become better instructors of problem-solving and math.

     

    MIKE KRINGS: I imagine this happens a lot with elementary teachers, who are teaching a wide variety of subjects, not like a high school or college teacher who's homed in on one subject?

     

    MICHAEL OROSCO: Well, that's one of the challenges our elementary teachers have. I was an elementary school teacher, and on any given day, how many subjects do I have to touch on — four or five? Well, if you think about neuroplasticity and how we get good at something through practice: if you don't give these teachers enough professional development and enough practice relevant to the math, and then they go out and try to perform it, you can see where the anxiety and motivation problems happen with all kids, especially English learners who are having to learn in a second language and not in their first one.

     

    MIKE KRINGS: Speaking of your research, you're also the director and founder — is that right? — of the Center for Culturally Responsive Educational Neuroscience. First of all, what is educational neuroscience? And then, why is it important for that concept to be culturally responsive?

     

    MICHAEL OROSCO: One of the reasons this has come about is that as I sat in classrooms — I usually sit in the teachers' lounges writing up my notes after observing teachers — every now and then I would start to have conversations like the one we're having now, with teachers. This could be in any elementary classroom; they would talk about the study. “How's the study going?” And then I would get comments like, “Dr. Orosco, I don't understand this working memory. Why are you guys looking at working memory? What is executive functioning? What is controlled attention? Why do we need to be looking at this? Why didn't I get this type of training? And now I'm working with second language learners, and I don't understand this idea of why we need to be more culturally responsive.” I've had these conversations not only with teachers. The last conversation I had was with an assistant superintendent who had close to 500,000 kids in Southern California. She was assistant superintendent over instruction and learning. Same idea: “Dr. Orosco, I don't get this. We're testing these kids. We're doing these evidence-based practices. We're doing response to intervention, these multi-tiered support systems. I realize it's brain-based, but I can't make the connection.” So the idea with Culturally Responsive Educational Neuroscience is to begin to fuse what we're learning about the brain, what components are impacted by learning and then by the environment. You can't separate development and culture. Your culture, your development — you can't separate that. If you think about a lot of our kids, especially our 5 million kids in the system who are English learners, we take a big aspect of that away: their native language. Then as neuroscience continues to evolve and make its way back into our field, I have a worry that we're just going to teach it from a monolithic English, white developmental perspective.
We have to begin to paint a picture in our educators' minds as they go out: we're beginning to learn about neuroscience, about brain development, but we also need to look at how our environment and our culture can impact this development.

     

    MIKE KRINGS: You're starting to address the question I was going to ask now. You said before that there's a gap in many practitioners' understanding of how the brain learns. Could you tell us a little bit more about what that gap is? Is it just culture, or is there a little bit more to it? And then, how can we address that?

     

    MICHAEL OROSCO: It starts here in higher ed. Ninety percent of all social science research: What population is this done on?

     

    MIKE KRINGS: Probably with college students.

     

    MICHAEL OROSCO: And then what racial demographic?

     

    MIKE KRINGS: Probably largely white students.

     

    MICHAEL OROSCO: So that's the problem. As we develop evidence-based practices, the strategies we develop are on what population? Those white, predominantly English-speaking students. When we do research on public schools, a lot of our interventions and strategies in the last 20 years have been developed on one population: not English learners, but English-dominant, white students. We make the assumption that what is going to work for this population…. And that's where the challenge has been. Also, it's a higher ed thing that we have to begin to have this discussion we're having today. OK, we're going to use these practices in schools, but they may not necessarily work right now. Or we need to begin to differentiate these practices and see how they can work for this population in a second language apart from their native language. But then, as I just told you, your culture is very important to development, and a big part of culture is its language. If you're taking away the native language from these children, what are we doing to their development, their cognition?

     

    MIKE KRINGS: Oh, that's a good question.

     

    MICHAEL OROSCO: I'm talking about seeing how this plays out in schools. It's been going on for decades, and this isn’t a silent minority. It's a big part of our school systems. You have about 5 million kids who are learning English as a second language.

     

    MIKE KRINGS: That's across the country, right?

     

    MICHAEL OROSCO: Yeah. Kansas is a prime example of a state with a growing ELL population. Historically, we see it in the Southwest. I get calls from Tennessee. Texas has over 1 million English learners. So you see, this isn't just a demographic trend in one area. It's throughout the United States now.

     

    MIKE KRINGS: So it seems like it's almost a certainty that every teacher who works in a school system, public or otherwise, is probably going to work with English language learners.

     

    MICHAEL OROSCO: If you're a public school teacher, yes. The demographics right now are 51% minority, 49% white in public education. Some of the larger urban areas are predominantly English language learners, with about 80% of the population being Hispanic. So it's the fastest growing demographic in public education.

     

    MIKE KRINGS: At the center that we mentioned earlier, some of the students there can take what's called the “mind brain education certificate.” So what is it that they're learning in that certificate program? And then how can that extend to their teaching and beyond?

     

    MICHAEL OROSCO: This actually has turned out to be much bigger now. When I developed this certificate, one course was on human development, and another course was on theories of psychology. The third course is on behavioral neuroscience. We're actually teaching the anatomy of the brain, but then we begin to connect it, contextualize it with environmental experiences, to see how that part of the brain may be impacted. For example, an area that we're beginning to look at is the limbic system. The limbic system is emotional; it's our motivation. You have newer theories emerging, like Carol Dweck at Stanford. She has growth mindset. So that's the behavioral neuroscience course. And then I added a neuroscience of motivation course, because we're learning that you have to have motivation in order to drive yourself and want to learn. And then the fifth course is the elective in the spring on executive functioning. We're using the term executive function heavily. However, when I was thinking about this, I wanted it structured over the lifespan, because we have to look at studies longitudinally. How is this impacting this child? In my case (you've seen my research on bilingual working memory), how have the experiences these kids are having at the elementary level been impacting them, and how might that be impacting them over the lifespan? So the way I've developed this is how these courses develop over the lifespan. This certificate rolled out two years ago. I'm also beginning to pick up students from other departments — speech and language, architecture, places like that — who are interested in this because it might help their field move forward. I'm also picking up KU employees. For example, I have one employee who lost her father to Alzheimer's who's interested in brain development. The certificate that originally was tailored for educators, school psychologists, counseling psychologists — that area — is now becoming much bigger.

     

     

    MIKE KRINGS: What I was thinking when you were telling me earlier is that with different cultures, there's obviously not a one-size-fits-all for education, or just about anything. But it seems like this approach really could be applicable across many different areas.

     

     

    MICHAEL OROSCO: Think about it this way: When I developed the culturally responsive teaching center, culture was central to development, but I also give examples outside of my experience. For example, one student opened up to me — they were brought up in the Appalachians, very poor. They were white. He asked me, “Do I have a culture, Dr. O?” I said, “Yeah, you do. You have to think about how your culture, being brought up in poverty in the Appalachian Mountains, impacted your development.” You see what I'm showing you? Once he saw how that impacted him, he opened up. He talked about it in some papers, about just growing up poor, how that impacted his memory development and things like that. We all have a culture, and that culture is developmental. How does that culture and developmental environment impact our development?

     

    MIKE KRINGS: I see. So it goes far beyond just learning a second language, too. This is just understanding differences from one human being to another.

     

    MICHAEL OROSCO: Yeah, it's educational neuroscience. But this certificate — the way I wanted to develop it — was to reach us over the lifespan and make us all realize that we have a culture, even though in schools, what I see is that the predominance of the research, the types of interventions and practices we use, come from a predominantly English, white-dominant perspective. That impacts my field. But for these courses, I realized I wanted to have some type of collaboration and a larger group of students coming from different departments so we could have a better conversation. And so we do get into these conversations about just how the environment can impact our neurological development. Thus, that's why we're moving into this field: educational neuroscience, or culturally responsive educational neuroscience.

     

    MIKE KRINGS: That kind of addresses the question I wanted to ask next, which is that, just beyond language, why is considering culture in education such a vital component?

     

    MICHAEL OROSCO: Because your culture is your development, your home environment, how you're brought up. You know, if you look at the example of the poor Appalachian white student, how would you be programmed to learn? What were the behaviors you experienced at home that may not have matched or meshed with what the school is wanting you to do? Culture is really the behaviors; it's the script that's been given to you on how to live life. Well, the life you grew up living in your environment, your behaviors — that is very impactful on how you're going to be able to learn in schools. If school instruction or practices don't necessarily align with what you're learning at home, you're going to have a misalignment.

     

    MIKE KRINGS: One of the questions we often ask on our podcasts here is what we're getting wrong or what we don't understand about a certain topic. So I was just kind of curious: What do you most hope to understand or learn about how the brain learns in your work with educational neuroscience?

     

    MICHAEL OROSCO: The biggest challenge right now, as we develop better technology to map out the brain — and we're learning so much within the last 10 years — is how the environment shapes that, and then beginning to train future teachers, future doctorates, with that type of lens. So when they go out and do their research, they can begin to conceptualize brain development, but then look at how the environment is impacting brain development. One thing that I want to point out is that this is just a new area in education. In a sense, education is a democracy. It's an experiment. We don't know how much this information can really help us, but this is the direction that some of us and other experts believe we need to be going. Like last month, you had Jamie Basham on, and Jamie was talking about artificial intelligence and about how AI is going to impact brain development. And then think about how AI is going to go into our classrooms. So how is that AI going to impact our teachers and our students? And what is that going to do to their brain development? That's why we need to bring this into our sphere. That's really what I'm trying to do right now: help students, help people who take these courses, just to pick something up conceptually so when they go out and do their work, they can begin to apply it. All this started with just simple conversations like the one we're having now, in the lounge. I began to realize I had teachers and special ed teachers and administrators asking me, “Doctor, I don't know what working memory is. We test it, we assess it, but nobody's ever given me any coursework on where this is in the brain. We're using executive functioning — what do we mean by executive functioning?” I'll finish there, Mike.

     

    MIKE KRINGS: So we all could stand to benefit from learning more about how our brain works.

     

    MICHAEL OROSCO: Oh, yeah. If you'd like to talk about this all day, we can. We're learning so much. Really, the one thing in the human development course, as I teach it through the lifespan, is we're learning how to take care of ourselves. One lady who took my course was in her 70s, and she was worried about getting dementia, Alzheimer's, because her husband had gotten it. I said, “Look, I'm not an expert in Alzheimer's, I'm not an M.D., but I know right now we can't cure it.” And she says, “Is there anything, any courses you'd offer, that could help me on this?” And I said, “I offer a human development course through the lifespan.” I covered that specific topic as we got to the topic of aging. Also, I can give you examples of how we do prevention. Really, a lot of the educational neuroscience research over the lifespan is beginning to look at prevention: how to improve our well-being, how to improve our mental capacity, how to offset this idea of dementia or Alzheimer's. And I joke with people. I use the term “sin,” Mike, and I'll finish up with this. I'm not talking about the Las Vegas type of sin, S-I-N. It's S-E-N: sleep, exercise, nutrition. If you do that constantly over the lifespan, you might be one of those who hits the blue zone.

     

    MIKE KRINGS: All right, so in this case, it might work out in our favor. Thank you so much, Michael Orosco. It's a pleasure as always talking with you. Thanks for joining us.

     

    MICHAEL OROSCO: And likewise, Mike. Thanks for all the great work you do with the press and everything. It's my pleasure.

     

    HOST: We've come to the end of this glorious episode of “When Experts Attack!.” If you like what you hear, subscribe to our humble podcast and tell a friend about us. We'd love to know what you think. So if you have questions, comments or ideas for future episodes, let us know about it. You can reach us by dropping a line to whenexpertsattack@ku.edu. We're a co-production of the University of Kansas News Service and Kansas Public Radio. Music was provided by Sack of Thunder. Until next time, this is Brendan Lynch, signing off.

     

    THEME MUSIC

     

    Transcribed by https://otter.ai and edited for clarity and accuracy.

    January 22, 2024

    Wrongful convictions are political (transcript)

    BRENDAN LYNCH, HOST:  We live in a time where nothing is true. An era where reality and hoax look the same on the internet. Whoa, wait a second. There are people who actually know what they're talking about — dangerous people. We call them experts. We're giving these experts a megaphone to drop some truth bombs. If you can handle the truth. I'm Brendan Lynch, and I'm the host of “When Experts Attack!.” If you've watched “When They See Us,” listened to the podcast “Serial” or learned about local cases in the news, maybe you've noticed more stories about innocent people being exonerated of crimes they didn't commit. Kevin Mullinix, associate professor of political science at the University of Kansas, examines how this growing issue is shaped by people's ideological differences in his new book, titled “The Politics of Innocence: How Wrongful Convictions Shape Public Opinion.” He tells “When Experts Attack!” correspondent Jon Niccum that policy reforms to reduce wrongful convictions really depend on the political sentiments in any given state, along with the leanings of the governor, and any influence held by innocence advocacy groups.

     

    THEME MUSIC

     

    JON NICCUM, CORRESPONDENT: Is there any one aspect of these wrongful conviction cases that you see over and over again?

     

    KEVIN MULLINIX: Not necessarily over and over again, but there are common threads that lead to wrongful convictions. Usually when we see a wrongful conviction and then subsequently an exoneration, there tend to be common problems that lead to them. Eyewitness testimony is one of the most common ones. It's really problematic. A substantial number of cases also involve false confessions. There are also a number of cases that have problems with forensic evidence. A lot of times people assume, “Oh, if there's forensic evidence, then that obviously means they got the right person for a crime.” But that's not always the case. And so there are multiple factors that contribute to wrongful convictions, and you see some common threads between them.

     

    JON NICCUM: One of the first things we learn in journalism is eyewitness accounts mean absolutely nothing. It's like “Twelve Angry Men,” how the entire court case is dismantled by the fact that the witnesses didn't see anything or didn't see what they thought they saw.

     

    KEVIN MULLINIX: On the eyewitness stuff, I think we're seeing increased awareness of the problems with eyewitness testimony among the public, police, prosecutors, jurors, judges. But the thing that's kind of fascinating to me is I think people are more understanding of the fact that the eyewitnesses themselves make mistakes. There are problems with their perception, their memory, their recall. Maybe the person had bad vision, the time of day, how long they saw the person. I think there's still a little bit of a lack of awareness that there are also problems with how we handle eyewitness identification. What I mean by that is there are a lot of things that go into the procedures that law enforcement walk someone through as they go to identify someone that can sometimes lead to mistakes. Like the number of photos in an array. How similar the different people are in a lineup. The specific instructions that are given to somebody. The feedback that's being given to someone as they're engaging in identification. So I think people are increasingly aware that there are problems in eyewitness testimony, but I'm not sure everyone fully understands all of the problems that contribute to it.

     

    JON NICCUM: There's an assumption that only guilty people confess to crimes. Is this the reality?

     

    KEVIN MULLINIX: It is not the reality. That's something that I run into a lot when I talk to my students or my family and friends about wrongful convictions. You talk about how sometimes innocent people confess to crimes they didn't commit, and I think people have this knee-jerk reaction: “I would never confess to a crime I didn't commit.” But I think that reflects a little bit of a lack of really understanding what it is somebody's going through in this process. There's a lot of research on why it is that sometimes people who are innocent confess to crimes they didn't commit. I think you have to recognize, too, that this can happen at a couple of stages in the process. You can have a false confession during police interrogation, but you can also have someone plead guilty to a crime they didn't commit. There are factors that go into both of these. A lot of the social science research tends to focus on two categories of reasons why people confess to crimes they didn't commit. One of those is the system-level variables, or the situation — the environmental factors, like features of the interrogation. But then there's also research on the characteristics of the individuals, because some people are more vulnerable to confessing to a crime they didn't commit than others. On the situational side of things, I think we have to put ourselves in the perspective of someone that's being brought in for interrogation. This usually involves isolation. This is extremely stressful. It's an emotional experience. If they're being brought in as a suspect, there's a good chance that law enforcement thinks the person did it, and so the whole process is essentially designed to get this person to confess to a crime they think the person committed. And police engage in a number of tactics to try to get someone to confess to a crime.
This often involves what we call maximization techniques, and then also minimization tactics. Maximization tactics involve emphasizing the worst-case scenario and punishment to somebody. Telling someone, “This is how many years you could potentially be in jail or in prison if you don't confess to this crime.” That's countered with minimization tactics, which may involve law enforcement trying to be almost sympathetic to the person, to say, “Hey, you know, we know you're involved in this crime. We're sympathetic. You were just at the wrong place at the wrong time.” It's this combination of the two tactics — being sympathetic to somebody, trying to understand maybe why you did what you did, but then also maximization techniques of threatening the worst-case scenario. That's a pretty tough experience for people. In all 50 states, it is still legal for law enforcement to lie to people, to deceive them and present them with false evidence — at least to adults. There are, I think, three states now that say that you can't do this to juveniles. You can be presented with false evidence that makes you look guilty. If you put yourself in this perspective of being interrogated — you're isolated, this is stressful, this is exhausting, this might go on for several hours — you start almost to have feelings and experiences akin to sleep deprivation. You're being told this is the punishment. You're being presented with all of this evidence that makes you look guilty. And then if you're engaged in this cost-benefit decision-making strategy, that's a tough choice. It's easy for us to sit here and say, “I would never confess to a crime,” but I've never been given the choice of: you look really guilty, you can confess and go to prison for a year, or you could face 10 years. I think when we put it in that perspective, then all of a sudden that looks a little different.
On the individual characteristics, there's also a lot of research showing that some people are more vulnerable to confessing to a crime that they didn't commit. I think some of the most notable things that people point out are that juveniles and minors are more likely to do so. The Innocence Project tracks verified DNA exoneration cases, and they look at the percentage of those that involved a false confession. The percentage among minors is much higher than when you look at adults. They're more susceptible to that. But also, people with different types of mental illness, people with intellectual and developmental disabilities — they're simply more vulnerable to some of these more coercive interrogation tactics.

     

    JON NICCUM: Another assumption is that most people support reforms for wrongful convictions, but how is it divided by political ideology?

     

    KEVIN MULLINIX: I think that we, both academics and people in the public, oftentimes assume that this is a nonpolitical issue.

     

    JON NICCUM: Because everything is so nonpolitical these days.

     

    KEVIN MULLINIX: But like so many other things, in reality it really is somewhat politicized. The way that it's different is we see pretty substantial differences between liberals and conservatives in their support for reforms to mitigate the likelihood of wrongful convictions. That is, liberals are more likely to support different types of policy changes to reduce the chances that we have wrongful convictions. This is something we talk quite a bit about in our book, about how liberals and conservatives differ on these issues. But it's not just that they differ in their support for the reforms. What we see, at a more foundational level, is that liberals and conservatives differ in their awareness of the problems. We did several surveys with nationally representative samples in which liberals report that they hear about wrongful convictions at higher rates than conservatives, particularly through news-based stories. We also see big differences between liberals and conservatives in their beliefs about how frequently wrongful convictions occur, both with respect to misdemeanors and felonies, and even in beliefs about whether or not the justice system has executed somebody who was innocent. Understanding that they differ in those beliefs and in their awareness of the issue really informs our understanding of why we see ideological differences in support for reforms. Liberals think this is happening more often. They think it's a bigger problem, so they want to do more about it. Conservatives don't see it as a pervasive problem, so they're less inclined to support the reforms. But what we found in our book, and what I think is really interesting and reassuring, is that when people learn more about the problem, they become more supportive of reforms. And that is across the ideological spectrum. 
So it's not that conservatives are entrenched in their opposition to passing policies to reduce the likelihood of wrongful convictions. When they are given information about how many people have been exonerated in the last couple of decades, all of a sudden they start to become much more supportive of saying, “Let's do something to change the likelihood that this happens.”

     

    JON NICCUM: It's surprisingly reassuring. What's a movie or TV show that really captures how wrongful convictions unfold?

     

    KEVIN MULLINIX: We're almost inundated with the number of movies and TV shows and stories about wrongful convictions and so on. They're all kind of blending together, and I don't know that I have one that really captures it. I think across some of them, you see a lot of the common stories and problems that lead to wrongful convictions and also the…

     

    JON NICCUM: Let me rephrase. I was thinking about how many movies and TV shows hinge on someone being wrongfully accused. Is it hard for you to watch shows like that?

     

    KEVIN MULLINIX: It is. It's especially hard to watch documentaries and TV shows, and even more fictional stories, movies and novels. The more I've studied this issue, the more you realize the scope of the problem. It's pretty overwhelming to think about, not just in terms of the thousands of people affected by wrongful convictions. We have over 3,000 verified exonerations, so when you watch a movie about one person, that is one of those thousands of stories, and it doesn't encapsulate all the potential wrongful convictions. When we say there are over 3,000 verified exonerations in the United States since 1989, that's only the ones that meet the criteria, which are tough to meet. That doesn't capture everyone who's potentially been wrongfully convicted. Even within each of those individual cases, you're sometimes talking about years of their lives spent behind bars, and then there's a ripple effect to each wrongful conviction. It doesn't just impact the person who is innocent and convicted of a crime. It impacts their family, their friends, the people in their network. It impacts the victim and the victim's family. Sometimes it impacts the law enforcement that were involved in the case and the prosecutors. It is this massive effect. So when I see these movies or these TV shows, it's really hard for me even to watch them, because when you start to think about the scale of the problem, it's pretty emotional.

     

    JON NICCUM: A lot of people convicted of crimes claim they're innocent. On a percentage basis, how many are?

     

    KEVIN MULLINIX: Well, that's a number that we can't know for sure. There's been a lot written and a lot of debate about the rate at which people are wrongfully convicted. I guess to be clear here, there's this belief that everybody who's convicted of a crime says they're innocent. That is not true. There are surveys that have been done of incarcerated individuals, and they consistently find that a majority of people will actually say that they were correctly convicted. There were some studies done in the 1970s that were some of the earlier research on this. They found that only about 15% of incarcerated individuals would say that they were innocent. There was a survey done in 2015 and 2016, and I think it estimated that around 6% of people who are incarcerated say that they were innocent. This idea that everybody who's convicted says they're innocent is simply not true. It's out of step with reality. But the bigger question about what percentage of people who are incarcerated actually are wrongfully convicted is tough to answer. If we think about it as a fraction, with the numerator being the number of people who are innocent and the denominator being the number of people who are convicted, both of those numbers are harder to calculate than a lot of people assume. Most expert estimates put this at around 3% to 6%. But there's a lot of variation due to the type and nature of the conviction.

     

    JON NICCUM: We typically see exonerated people being compensated for their years spent in prison, but that's not always the case, is it?

     

    KEVIN MULLINIX: No, it is not. A number of states have passed legislation in the last couple of decades that is supposed to compensate people, based on time incarcerated, for when they were wrongfully convicted. In theory, this sounds really great in the sense that people who have had an injustice imposed upon them are going to receive some sort of compensation from the state. In reality, it doesn't happen that way. Even after someone has a verified exoneration (and there's a whole definition of what counts as an exoneration, and each state has different laws about what falls under wrongful-compensation eligibility), even when they seemingly meet the criteria, it can be a multi-year legal battle for them to actually get those funds. This has popped up even in Kansas in recent years. There was a case just a few weeks ago that caught some news headlines, and I think it went to the Kansas Supreme Court. It dealt with a person who had been incarcerated in a county jail. They believed that they were entitled to some of these wrongful-compensation funds. The state has said that the funds are only available to people who have been placed in a state prison. So is someone in the state of Kansas eligible for wrongful-compensation funds if they were in a county jail and not a state prison? You have the laws on the books, but then you also have a bit of a legal fight to actually access those funds, even if you have been exonerated.

     

    JON NICCUM: Should a prosecutor or judge face penalties for wrongfully convicting someone?

     

    KEVIN MULLINIX: I've never thought deeply about that, but I would probably say no. The reality is there are so many variables that go into a wrongful conviction, and it's usually tough to pinpoint where the actual problem, or the main problem, is. It's usually a multitude of errors contributing to this. The justice system is made of human beings, and errors can be made all along the process, whether we think about eyewitnesses, or maybe it does involve law enforcement, or maybe it does involve prosecutors that have made decisions, and it might involve forensic experts that have made errors in how they're interpreting certain lab results. So to hold a single person accountable, like a judge or something, I don't know that it's the right move.

     

    JON NICCUM: Have you made any personal contacts with people who were wrongfully convicted?

     

    KEVIN MULLINIX: I was teaching at a different university at the time. We had a couple of speakers come to the university to talk to faculty and students about the issues associated with wrongful convictions. One of them was a lawyer who had worked on multiple cases. Another was an individual who had been wrongfully convicted. I heard these two individuals talk on this issue several years ago, when I was just getting interested in this topic. Hearing the lawyer talk about the problems associated with wrongful convictions in the legal system was fascinating. But hearing the exoneree tell their personal story was really powerful. I mean, it was moving. This also informed some of the subsequent research that we were doing about how wrongful convictions impact public opinion. One of the things we ultimately started to study was the power of stories and narratives to transform people's attitudes on this issue. A lot of that can be traced back to sitting in that room hearing this guy talk about what he went through. It was just so painful to hear. And you could see the tears in people's eyes as they listened to his story. That would probably be the main interaction I've had with an exoneree. One of my co-authors, though, does a lot of interviews with exonerees and a lot of research on the difficulties they face when they are no longer incarcerated. Because it's not like they leave and then everything in their life just goes back to normal.

     

    JON NICCUM: Doesn't any kind of academic research work better when it's not abstract and you have a face to put to it?

     

    KEVIN MULLINIX: Yeah, I think so. I think the best research comes when we stop thinking just about our academic jargon and theories and start to connect things to the real world. For myself, a lot of times a research project comes from observing things in the world around me and wanting to make sense of them. I study public opinion and why people have the attitudes they do. I'm always wrestling with why people have the opinions that they do. Why do they think what they do? Or, what are the effects of certain types of information? When I got interested in this project, it was partly hearing speakers on this, but also, there were so many podcasts, so many documentaries, so many movies coming out. It led to conversations with the people who ultimately became my co-authors on these projects about what the effect is of this information about wrongful convictions that we're seeing everywhere in the media. What is that doing to public opinion? Then I turned to more academic theories to try to explain and understand it, and then set up empirical studies to really isolate evidence about what the effects of this information are on people's attitudes.

     

    JON NICCUM: Final question: Do wrongful convictions happen more in the U.S. than in other countries?

     

    KEVIN MULLINIX: I don't know that answer, and I'm not sure anyone has empirically tried to estimate it. It would be really tough to do, in part because, as I told you before, it's really tough to even estimate what percentage of people are wrongfully convicted. And countries don't keep track of conviction rates the same way. They might not have the same types of legal processes to challenge a conviction. Calculating the number of exonerees would be really tough to do across countries, though not necessarily impossible. My guess, though, is that this problem isn't unique to the United States. This is something that we see around the world. But I do think that we probably see variation. There may be some countries that handle eyewitness testimony in a very different way. There might be different policies guiding the handling of forensic evidence and forensic expert testimony. When you start to see differences in the institutional variables and policies that are at work, then all of a sudden you're probably going to see differences in the rates at which people are being wrongfully convicted.

     

    JON NICCUM: Well, any final thoughts about this topic?

     

    KEVIN MULLINIX: I want people to start to think about this as an issue and not just, “Oh, that's another problem. OK, forget about it.” When people hear these news stories about somebody who has been wrongfully convicted, or when they see a documentary on it or hear a podcast, I want them to pause and think about the ripple effect of how many people are hurt by an innocent individual being convicted of a crime they didn't commit. Also, start to think about whether or not we can do things differently and better. The reality is, there are policies that can reduce the likelihood of wrongful convictions, and we should try to move toward those. I think we can always do better.

     

    JON NICCUM: Is there anything you, as a powerless individual, can do to help solve this?

     

    KEVIN MULLINIX: I would encourage people to check out the Innocence Project. They have opportunities for people to donate or to get involved and to learn more about the issue. It also helps just to become more aware of it and to talk about it.

     

    HOST: We've come to the end of this glorious episode of “When Experts Attack!.” If you like what you hear, subscribe to our humble podcast and tell a friend about us. We'd love to know what you think. If you have questions, comments or ideas for future episodes, let us know: Just drop a line to whenexpertsattack@ku.edu. We're a co-production of the University of Kansas News Service and Kansas Public Radio. Music was provided by Sack of Thunder. Until next time, this is Brendan Lynch, signing off.

     

    THEME MUSIC

     

    Transcribed by https://otter.ai and edited for clarity and accuracy.

    When Experts Attack!
    January 09, 2024

    AI is an elephant in the classroom (transcript)


    BRENDAN LYNCH, HOST:  We live in a time where nothing is true. An era where reality and hoax look the same on the internet. Whoa, wait a second. There are people who actually know what they're talking about — dangerous people. We call them experts. We're giving these experts a megaphone to drop some truth bombs. If you can handle the truth. I'm Brendan Lynch, and I'm the host of “When Experts Attack!.” For Kathryn Conrad, artificial intelligence is the elephant in the classroom, one that can no longer be safely ignored. It's better, she believes, to try to establish some parameters for its use. That's why, just before the school year started, the University of Kansas English professor published a blueprint for an AI Bill of Rights in education in the new scholarly journal Critical AI. For Conrad, it's an attempt to establish guideposts in a wild and woolly frontier. While her scholarly expertise in modernism lately centers on 19th-century Irish writers like Oscar Wilde, she tells “When Experts Attack!” correspondent Rick Hellman her AI proposal comes from months of study and discussion of the challenges posed by the disruptive technology.

     

    THEME MUSIC

     

    RICK HELLMAN, CORRESPONDENT: You noted in your article that we haven't even reached the one-year anniversary of the day that a private company called OpenAI introduced ChatGPT and thereby kicked off a national debate about so-called generative AI. And we were just discussing that futurists have already raced ahead to imagine artificial intelligence as sentient robots, as they have in sci-fi movies. And yet, in real life, it's creeping into every facet of our lives today, and a lot of people don't know what it is, what it means, what it implies. Let's talk about some of the major terms and entities that are out there in the field of artificial intelligence, and hopefully people can see their relevance to the field of education, where your expertise lies. For instance, what is a large language model, like ChatGPT? Is it a chat bot? What can it do and what can't it do?

     

    KATHRYN CONRAD: That's a great question. I love that you started with the robots, because of course, that's our first thought when we hear AI. There's H.A.L. and there’s Skynet. We've seen a lot of alarmist language around AI, especially since last year. I think it's important to note there's debate about whether what we have right now — things like ChatGPT, large language models and other kinds of generative AI — actually has anything to do with those fantasies of a sentient robot future. I would say that OpenAI — and I can say this because I've read their self-descriptions in more detail than Anthropic's and Google's Bard and some of the other models — are hoping for something like AGI. That's artificial general intelligence. It's not clear what the relationship is between that dreamed-up future and what we have right now. You asked what ChatGPT and large language models are. They are text-generating models. Generative AI includes other kinds of generation models, but to your question, yes, it is a chat bot. That's important to know, and I'll try to remember to come back to that issue. They are text-generating models that use what's called a transformer architecture. That's a kind of processing algorithm that considers the context and weighs the elements in the sequence of tokens, or words, in order to create a kind of natural-sounding answer. Chat bots have actually been around for a long time. I was teaching with some chat bots back in 2015, just to talk about how we interact with technology. Around 2017, there was a real shift. This transformer architecture really made the difference between something that was kind of quirky and something that's much more natural, with the kind of impressive outputs that we get with something like ChatGPT or Bard or Claude. Chat bots are an interface; that's how we interact with the architecture. 
If you've ever messed around with ChatGPT, experimented with it, you'll see that it often refers to itself as “I.” It talks about understanding. Sometimes it will use language that makes it sound like an intelligence. That's a deliberate choice. It's a deliberate choice to have us engage with something that seems like a personality. They didn't have to use that kind of model. Chat bots are an interesting choice for companies that have artificial general intelligence as their arc, what they hope to get. If you want to pretend that you're interacting with Jarvis or with Data from “Star Trek,” you can, because it uses an “I.” But there are other ways you can use artificial intelligence that wouldn't do that. It's a deliberate choice to create something known as the Eliza effect. That's when you interact with a computer and you just sort of assume an intelligence and interact with it as if it were human.

     

    HELLMAN: But they want us to think that it's a sci fi robot.

     

    CONRAD: I would just say, if you go to image-generating models, if you were to enter “artificial intelligence” into something like Midjourney or DALL-E 2, it's pretty likely you're going to get a human face, usually female, with some wires coming out of its head. That's definitely part of the image. It's part of the cultural imagination that they were trained on. They're trained on large datasets. That's a really important other part that we'll probably get to. They're trained on large datasets, and then they're tweaked. They're trained and tweaked by human data workers. There are definitely humans behind the veil throughout the process. That's important to remember, even though when you're interacting with it, there's not a person right behind the screen fixing things. They do fix outputs, for sure, and guardrail and reinforce and direct.

     

    HELLMAN: Interesting. Well, I think another thing that people think about when they think about artificial intelligence these days — and certainly in the field of education — is its ability to plagiarize and cheat. This gets to some of the ethical issues with AI that we'll be getting to later. You write that there is some truth to the media's obsessive focus on plagiarism and cheating, which is the ease with which students can generate ostensibly passable work on a range of assignments. You say teachers have already been forced to adapt to this. How so?

     

    CONRAD: Pretty quickly, it was clear that this would have potential impacts on what students might produce. As teachers, we were used to adapting. We adapted to the internet. We adapted to Wikipedia. We adapted to the pandemic most recently. We're used to pivots. But this was dropped in our laps, fully formed, without any consideration for its impact on education. Teachers have been figuring out whether and how to work with it, how to change policies, how to change assessments, so that we ultimately get what we want. If there are any students listening: What we're looking for when we give you assignments is not so much the right answer as an opportunity for you to learn. If you're giving me something that was generated by ChatGPT, then I need to reconsider that assignment. I don't want a perfect paper. I want you to learn how to write a paper. I don't want you to give me perfect code. I want you to learn how to code. One of the things teachers from K-12 through graduate school have had to deal with is how to create assessments that allow us to give students the competencies as well as the content that we want them to have. That's how we've been adapting.

     

    HELLMAN:  What are you seeing in the college classroom today in regard to the use of AI, should we say chat bots, by students?

     

    CONRAD:  That's a good question. I mean, I've talked to a lot of students over the last 10 months. Certainly there are some teachers who are working with it. There are some teachers who've said students shouldn't use it in their classes. And I think a lot of teachers are still trying to figure out whether there are ethical uses of it and how we might use them in the classroom. What I do sense — again, through students, both in and outside my own classrooms, and talking to other educators — is that students are sort of at sea. They don't even necessarily know, if there's no policy, whether that means they can use it or they can't. That's part of the reason I think it's really important to help people understand, to give them opportunities to think about policy and to think about principles for policy. That helps to protect students as well as teachers.

     

    HELLMAN: Well, that's why you wrote this article about a Bill of Rights for AI in education, right? Let's talk about some of the things that you raised in the article. You start by saying AI entails a host of ethical problems, from scraping of data to amplifying stereotypes and bias to surveillance. Can you talk about some of those basic root-cause issues? And then we'll get to your Bill of Rights itself.

     

    CONRAD: There are a whole lot of ethical issues. And if people are interested in following conversations about it on social media, the hashtag is usually #AIethics. One of the main issues is that the datasets that have been used to train these models, whether they're visual media generators or whether they're textual, consist of data that was not specifically consented to by its creators. When I say data, I'd like to remind people that data doesn't just mean your Social Security number or your medical records. It means poems. It means pictures you may have posted on Instagram or an art website that you have copyright over. Because they're publicly available, they were often scraped. When we talk about scraping, that's what we're talking about: taking that material as data to train on. And the implications for artists are quite profound. Several models have been shown to reproduce artwork that's very close to the original, even with the signature sort of garbled in the corner. That's part of the ethical question: Is that fair use? Most artists, I would argue, at least from my research over the last year or so, are not consenting to that. They feel that they should be remunerated for that training data. So that's one of the ethical questions: consent.

     

    HELLMAN: Because there's an analogous issue with regard to written work, is there not?

     

    CONRAD: Absolutely. It’s been clear to a lot of us who've been experimenting with it for the last several months that you can get responses that include copyrighted text, for sure. But now there have been people who've done deeper probing that makes it really clear that some very large datasets of pirated works have been used to train these models, which is kind of significant.

     

    HELLMAN: Not a little bit creepy.

     

    CONRAD:  Yeah, for sure. And so you've asked about a couple of other ones: bias. It's important to recognize that the dataset is what's available on the web. It's available in data clusters. On the one hand, it's a whole lot of data that's been scraped. On the other hand, it's limited to what can be on the internet. There are definitely places where there are data gaps, and that reinforces a worldview based on what's available on the internet. That's one thing that reinforces bias. The other is the people who train the models to make sure that they're aligned. That's a very charged word, “alignment,” so I'm not going to use that anymore. But I will say they train them so that outputs look like what we think the people asking the questions want on the other end. There are lots of embedded biases, and there are a lot of people who have written about algorithmic bias in ways that are really important. I'll just mention a couple of names if people want to have some fun reading over the holidays. Safiya Noble’s “Algorithms of Oppression” is one. Cathy O'Neil's “Weapons of Math Destruction.” You heard that — math. I love that one. It's a good “dad joke.” And Joy Buolamwini, who is the founder of the Algorithmic Justice League, has a book coming out called “Unmasking AI” at the end of this month. Joy, for instance, talks about how facial recognition software was trained primarily on white faces. When she was a grad student, she was trying to work on some of the software, and while testing it, she put her face in front of it on her screen. It wouldn't read her face. As a Black woman, she was like, “Huh, I wonder what's going on here.” And so she brought in one of her roommates and put her roommate in front of it, and it read her face fine. She found that it would only read her face when she put on a white Halloween mask, and she's like, “Yeah, this is a problem.” These are used for policing. 
They're used for identification. And I'm also gesturing toward AI that's beyond generative AI. AI is a big tent, and it's involved in everything from generating marketing copy to surveillance policing. It's important to consider these all in a larger network of kinds of technology.

     

    HELLMAN: Next, I wanted to go to the meat of your article itself — the proposed bill of rights for education. And you say that you were impressed by and modeled it after the work of the Biden administration's Office of Science and Technology Policy, which in 2022 released a blueprint for an AI Bill of Rights. Can you talk about that and some of the main points that you think teachers and students need to be aware of or empowered to apply to their lives?

     

    CONRAD: Absolutely. I started talking to people on social media about what our responsibilities were as educators and trying to figure out a way to start a conversation that we could all be part of. I curated an AI feed; now everything on my social media seems to be about AI. Somebody had mentioned earlier in the spring this “Blueprint for an AI Bill of Rights” — that's what it's called — that the White House had put forward last year. I was like, “Oh, that's great. Let me look at that.” So I'll just read the main points of it. There's more detail if you go to the website — it's still online — and you can read some of their examples and a fuller description. I think it's important for people to know that this is out there. The first principle is safe and effective systems: You should be protected from unsafe or ineffective systems — probably infected systems, too. Then algorithmic discrimination protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. That gets to some of the things we were just talking about. Data privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used. Notice and explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. And human alternatives, consideration and fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. Those are great. I loved those, and then I clicked through, and there are all of these caveats on a second page, hidden, saying, “You know, this isn't law. We may not have to use these when national security is at stake,” and so forth. Ultimately, there is a blueprint. 
It’s guidelines, and they're great principles, and I think the Office of Science and Technology Policy that came up with them was really committed to these principles. Ultimately, I was disappointed to learn, for instance, that Congress had already paid for ChatGPT licenses and that the White House had invited a lot of the big tech giants in to talk about how to regulate themselves, which is, I think, what we call regulatory capture, and I was a little disappointed about that. But I still think the principles are great to start with. So that's that. That was a good starting point and a framework, and that's where the title of my “Blueprint for an AI Bill of Rights for Education” comes from. Although I don't love the title, because it sounds like we're giving AI rights, and that's a whole different…

     

    HELLMAN: Let's talk about how you divided into rights for educators and rights for students.

     

    CONRAD: First I wanted to talk about our role as teachers and protecting our role as teachers. One of the things that I wanted to protect was our own input on purchasing and implementation. I will say that when I first got to KU — and this was back before Blackboard, before other learning management systems — faculty were consulted on which systems might make more sense, and I'd like to get back to a more active sort of consultation with faculty around ed tech. Maybe it's just that I'm not the one being consulted, but I do think it's important, since our responsibility as educators is over curriculum. I mean, that's part of what faculty governance is for. It's really important to have domain experts being able to say, “Hey, this technology is or is not appropriate for use.” And I will say that while KU has tended to be open about this, I do know a lot of K-12 educators are frustrated, because they don't necessarily have those inputs. That's what I was trying to build in.

     

    HELLMAN: There's not a lot of guidance from above yet. Is that what you're saying?

     

    CONRAD: There's not a lot of guidance, or the guidance is sort of like, "Stop. Don't use it at all." This is in K-12. I would say I don't really want to see a blanket policy anywhere. I want blanket guidance and protections. That's what I would say. It's really important, and it's way past time to discuss it. As I say, educators have been pivoting. That's what we do. But is it good? There are so many responsibilities that educators — whether they're eighth grade teachers or whether they're college or university faculty — we have a lot of expectations. Building in some time and space and room at the table to have these conversations is important. That's the next two things in my policies: input on policy, and also professional development, which at KU has been great. We're talking in October, and just last week the IDRH did its digital jumpstart workshops. I spent an hour and a half talking to people about critical AI literacy in the classroom. I think KU has had maybe seven that I've been involved in. And I know there's more conversation. There's certainly interest, there's certainly room, but it's also about incentivizing that for people who are busy and have so many things on their plate; giving us opportunities to talk to each other, but also making it valuable when you only have so many waking hours. You have to figure out how to get people who are definitely plenty busy to consider these issues.

     

    HELLMAN: It's jumped up on the priority scale, has it not?

     

    CONRAD: I would say yeah, for sure. The other thing: I think people are frustrated because AI developments are changing all the time. But I think we're at a place now — this fall especially — where we have some sense of the impacts of it. We have people who are trying to do research on it, so we do have a little bit more information than we had, say, last spring. It's sort of like the difference between the spring of the pandemic, when we were just trying to put Band-Aids over the situation, and the fall of the pandemic. Now we can actually build more actively, and there are plenty of great people out there trying to do that. I'm thinking of my colleague Anna Mills in California, Maha Bali in Egypt, lots of other folks who are having conversations about this nationally and internationally and talking to each other. We've got some stuff to build on — something across various fields that will be useful for students for sure.

     

    HELLMAN: Have we covered all the Bill of Rights for educators?

     

    CONRAD: The last one is autonomy. Ultimately, I may have a very different opinion about how to use AI in my classroom — from class to class, even — than somebody else in another field. As long as we're protecting student rights — and that's big — autonomy is important and, I'd say, inherent. You should be able to decide whether and how to use automated or generative systems in your courses. But that comes with what we're moving on to next, which is student rights and the real importance of protecting those rights if we're going to use these systems in our classrooms.

     

    HELLMAN: Students are wanting direction, it sounds like.

     

    CONRAD: That's my sense, and that's the first right I list: You should be able to expect clear guidance from your instructor on whether and how automated or generative systems are being used in any of your work for a course. And I don't just say AI; I don't just say ChatGPT. One of the things I think is really important is that people recognize the difference between a specific model, like ChatGPT, and a class of things, like LLMs or visual generators — or even AI. AI is a big term, as we've just suggested. At some level, Siri is AI. At some level, spellcheck is a kind of AI, right? You want to be really clear about that so students feel confident. Part of this is giving them the confidence to learn and build skills. Clear delineations — not punitive ones, just explanatory ones — are really important. Students need to be able to ask. I sort of thought it was obvious that students could ask if there was no guidance provided, but I've gotten a sense again over the last year, and had specific students say, "Listen, I don't feel comfortable asking, because if I say something…"

     

    HELLMAN: Why? They're afraid their reputation will be impugned by the very notion that they might use AI to write something?

     

    CONRAD: They look to us. They feel that way especially if they care, or if they're vulnerable. This is really important to remember: There are going to be students who are, say, in college on a scholarship that they're very concerned about losing and who wouldn't be here otherwise. There are students all across the different levels who are non-native speakers of English, for whom AI detection software actually performs really poorly and is much more likely to accuse them falsely of using AI. There's a lot of anxiety. And students can't even agree about whether no policy means "I can use it" or "I can't use it." Or what can I use? Take Grammarly, for instance — a very popular system I used to encourage students to use, but it now has a plugin or component that is generative, which ticks me off to no end. I liked Grammarly until this, because it really confuses things. So I'm trying to be very clear in my syllabi, and in my policies I actively encourage students to ask, tell them that it's OK to ask if they don't know and that I will not assume the worst. I think we need to do that now, especially at this early stage when things are still so up in the air: really make sure students know that they can ask and make it really clear. Don't just assume that they will ask if they don't know for sure.

     

    HELLMAN: Some of the other points you wanted to make: Do you think students have rights with regard to AI?

     

    CONRAD: I've heard plenty of people say, "Well, students give away their data all the time. We all do when we're on the internet." I'm just going to generalize and say most of us are probably not as attentive to privacy policies and Terms of Use as we should be. We didn't read the fine print. I think maybe most of us didn't read the fine print. If you're on any social media, I feel flayed open. I used to work in surveillance studies a little bit, and so I'm very much aware of what happens to our data. One of the rights that I suggest here is privacy and creative control. They're not necessarily the same thing, but both have to do with sharing data consensually: making sure that if you put something into the system, you know it may be used to train that system, that you're not getting money back from them and that it's your choice — and making sure that students know to read the Terms of Use. I've even done an assignment where students did that, which was lots of fun. It surprised all of us, actually. I was expecting it to be, "Yeah, yeah, they give this away." But we were digging in, and it was pretty — I'd say entertaining, if it weren't so scary. Also, privacy. Ultimately, you can choose to give your data away. You can choose to give your information away. We do it a lot, but it is not my role as a teacher to give away my students' data. It's legally suspect, because we do have FERPA, the Family Educational Rights and Privacy Act, which we actually are beholden to. More to the point, I'll use this metaphor: Just because my students may have had their apartment robbed doesn't mean I'm going to assign them leaving their door open. It's not my role to say, "Well, you put yourself at risk, so I'm going to make you be at risk for an assignment." I do think it's important to have opt-out options as well for using generative AI, because generative AI is also — to use some surveillance studies lingo — a leaky container.
You can put stuff in there that will come out disaggregated and mushed up, and you won't have to worry about it — but sometimes it will come back looking a lot like it went in. And that's what we call a leaky container: a data container that isn't safe. It's like when you put your credit card number in: you really want it to be encrypted. Well, guess what? These models are a lot leakier than your credit card encryption when you're buying something on the internet.
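The "leaky container" point — that a generative model can sometimes return training data nearly verbatim instead of "disaggregated and mushed up" — can be illustrated with a toy sketch. The corpus, output and function names here are hypothetical illustrations; real memorization audits and extraction attacks are far more involved.

```python
# Toy sketch of a "leaky container": does a model's output reproduce
# long spans of its training data verbatim?
# NOTE: corpus, output and function names are hypothetical; real
# memorization audits (extraction attacks) are far more involved.

def ngrams(text, n):
    """All n-word spans in a text."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def leaked_spans(training_docs, model_output, n=8):
    """n-word spans of the output that appear verbatim in the training data."""
    train = set()
    for doc in training_docs:
        train |= ngrams(doc, n)
    return ngrams(model_output, n) & train

corpus = ["my account number is 4242 4242 4242 please keep it private"]
output = "sure, here you go: my account number is 4242 4242 4242 please keep it"

print(leaked_spans(corpus, output, n=6))  # non-empty: private training text leaked out
```

Most outputs won't overlap the training set this directly, which is exactly why the leaks that do occur are hard to anticipate.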

     

    HELLMAN: Anything else that students have a right to expect?

     

    CONRAD: Ultimately, appeal is a big one. I've got two major ones besides a general catchall: Your legal rights should always be respected, regardless. One is the right to appeal. People need to realize AI detection software is itself AI, and it isn't perfect. It isn't even terribly good. There are structural reasons why it probably will never, never be perfect. Even people who are incredibly enthusiastic about using AI in the classroom have been coming out really strongly against AI detection software because of the false positive rate. It's not super easy, but it is definitely possible — and sometimes easy — to work around it so that AI-generated work looks human-generated when it's submitted, on the one hand. On the other hand, there's a Stanford study in which researchers took a bunch of TOEFL essays and other essays by non-native speakers of English, written before ChatGPT existed, and tested them against this detection software. It was astounding how many were identified as being AI-generated. So that's really, really awful. For equity reasons, that's more important than missing AI-generated work. If you miss some, that's bad enough, because it creates an environment in which students who want to cheat think, "Oh, my teacher's just relying on this, and I found a workaround." That's not good. People have always cheated, right? We're not going to fix that, but you don't want to incentivize cheating. On the other hand, I think it's incredibly important that students not be falsely accused. The right to appeal is really about being able to have a conversation, and that's tied to the last major right that I listed: You should know if an automated system is judging or assessing you, and you should always be able to ask for a human to make those calls.
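The false-positive worry is, at bottom, a base-rate problem: even a detector with a seemingly small false positive rate will falsely flag many honest students once essays are screened at scale. A minimal back-of-the-envelope sketch, with purely hypothetical numbers (not figures from the Stanford study mentioned above):

```python
# Back-of-the-envelope look at why false positive rates matter at scale.
# All numbers below are hypothetical illustrations.

def accusation_stats(n_essays, cheat_rate, tpr, fpr):
    """Return (cheaters caught, honest students falsely flagged)."""
    cheaters = n_essays * cheat_rate
    honest = n_essays - cheaters
    return cheaters * tpr, honest * fpr

# 1,000 essays, 10% actually AI-written; a detector that catches 90%
# of those but also flags 5% of honest work:
caught, falsely_flagged = accusation_stats(1000, 0.10, 0.90, 0.05)
print(caught, falsely_flagged)  # 90.0 caught vs. 45.0 honest students falsely accused
```

Half as many students are falsely accused as are caught — and if the detector's false positive rate is higher for non-native speakers, those accusations fall disproportionately on them.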

     

    HELLMAN: Almost none of these things we've been discussing are yet U.S. law or university policy. You're trying to give guidelines, right? So let's end this on a positive note, because in the article you do say that you think universities can lead the way to a better, more ethical AI. Can you explain what you mean by that, please?

     

    CONRAD: The problems we have with AI right now have to do with the particular big tech companies that created these systems, with the particular landscape in which they emerged, and with the economic, social and ethical landscapes in which they've embedded themselves. That doesn't mean that AI as a technology doesn't have great potential. It's obviously awesome. I started to experiment with it last fall, in part because I've talked with chatbots before. I've talked with them in order to help students and myself figure out what the nature of our relationship to technology is — less about the technology and more about us. I started to experiment with them because I was interested in what their capabilities and affordances are, and there's lots of interesting potential there. One of the things is user design. We talked a little bit earlier about chatbots. There are certainly scholars and researchers out there working with technologists to create better interfaces that are more purpose-built for education. There's also the possibility — and this may be the pie-in-the-sky part — of training not on these big tech models but on ethically obtained datasets that have been ethically checked. One of the things I didn't mention was how many human crowd workers were behind the scenes, trying to scrub the data to take out the horrible things that people put into the datasets and that the AI might generate as a result. I won't describe them, but they're really horrible things that give a lot of folks who work in these fields, many of them in the gig economy, PTSD. Literal PTSD. Just think about what drone operators deal with, and it's like that, because those are the images and texts that are being generated by some of these systems or that are part of the data going in.
So ultimately, it's about trying to create datasets and outputs ethically. This is the great thing about higher education: We have researchers in computer science. We have researchers in philosophy, anthropology, English. We have scientists. There are lots of people doing great work who are trying to think about these things ethically in this larger context, and that's where I think the potential lies for the future of AI: in systems that are made in consultation with all the stakeholders and with a broader perspective — maybe a few heads from outside the Silicon Valley environment — that can think more broadly about impacts.

     

    HOST: We've come to the end of this glorious episode of “When Experts Attack!” If you like what you hear, subscribe to our humble podcast and tell a friend about us. We'd love to know what you think, so if you have questions, comments or ideas for future episodes, let us know. You can reach us at whenexpertsattack@ku.edu. We're a co-production of the University of Kansas News Service and Kansas Public Radio. Music was provided by Sack of Thunder. Until next time, this is Brendan Lynch, signing off.

     

    THEME MUSIC

     

    Transcribed by https://otter.ai and edited for clarity.

    When Experts Attack!
    December 04, 2023