Stephen Wilson 0:06 Welcome to Episode Six of the Language Neuroscience Podcast, a podcast about the scientific study of language and the brain. I'm Stephen Wilson. Thanks for listening. My guest today is Karen Emmorey, who is Professor of Speech, Language and Hearing Sciences at San Diego State University. Karen has made seminal contributions to the study of sign language, especially its neural basis, and she is one of the world's leading researchers in this area. Her work is centered around deaf people and the deaf community, and she has mentored many deaf and hearing researchers alike. She wrote a very influential book on this topic called "Language, Cognition, and the Brain: Insights From Sign Language Research", which I'll link in the show notes, and which I highly recommend to anyone interested in this field. Given the topic of today's episode, sign language, I wanted to mention that the transcripts of all episodes of the Language Neuroscience Podcast are available on the podcast website, which is langneurosci.org. The transcripts are also embedded in the RSS feed, which means that with certain players, you can see the transcript like captions as you listen to the show. The app that seems to work best for this, in my experience, is podfriend.com. Okay, let's get started. Hi, Karen. How are you? Karen Emmorey 1:14 I'm great. How are you? Stephen Wilson 1:15 I'm good. So it's a lovely spring day in Nashville. The sun's out, the birds are chirping. How about you? I think you told me that you are working from home in Maui? Karen Emmorey 1:29 That's right. We decided since everything is online, why not be online in a beautiful place? So we're just renting here for like a month and enjoying Maui and working during the day and snorkeling in the early morning and that kind of thing. Stephen Wilson 1:44 So it's not really officially a vacation. It's just like you've kind of relocated your home and you're still working? Karen Emmorey 1:48 Exactly. Although I have to admit, we are taking a few vacation days to do some fun things. Stephen Wilson 1:54 Well, that's allowed. And so what part of Maui are you staying in? Karen Emmorey 1:59 We're in the south part of Maui in Makena Wailea area. It's gorgeous. And Maui is small. You can drive all over and there's great hikes and waterfalls, and great beaches. It's just beautiful. So it's just nice, when we go for our walks, we're just in a beautiful place. Stephen Wilson 2:17 Yeah and it's not like, you know, San Diego is too shabby either, right? If you live in San Diego, it's kind of hard to find somewhere to go that's gonna be nicer. Karen Emmorey 2:27 It is true. It is true. We do live in paradise. And we're now visiting paradise. Stephen Wilson 2:32 All right. Before we get started with our conversation really, I just wanted to kind of congratulate you on your Distinguished Career Award from the Society for Neurobiology of Language. I thought that was really great. Karen Emmorey 2:44 Thanks. Thanks. It was really it was a fun talk to give. And it was really an honor, particularly from that society. Stephen Wilson 2:50 Yeah, I think that your work definitely falls very much in line with what that society is interested in. Right? Karen Emmorey 2:57 Yeah. I was the president for a year and also promoted the new journal. And so it's just a great organization. Stephen Wilson 3:04 Yeah, I agree. So I looked at your CV, and I noticed that all your degrees are from UCLA where I also went to graduate school. So are you a native Angelino? 
Karen Emmorey 3:17 Well, no. So I lived in northern California, and then I moved south. So I went to undergrad at UC Santa Barbara for two years. And then I transferred to UCLA to finish out my undergrad degree, and then stayed there for my graduate degree. And I can tell you the story of how that happened, if that's part of what you're interested in. Stephen Wilson 3:40 Yeah, I'm really interested in people's backgrounds. I guess I want to, even before we even get to that, like just back it up even more, right. So you know, you're obviously a language researcher, a linguist, I would say. I don't know if you call yourself a linguist anymore. A linguist and a cognitive neuroscientist, pretty clearly. Karen Emmorey 3:56 Right. Stephen Wilson 3:58 But I was wondering, when you were a kid, were you interested in language? Or did that interest come later? Karen Emmorey 4:06 A little bit. I was interested in taking languages like in high school. I took French and German and I visited Germany as a summer student. And so I really liked languages. And when I went to college, I took more French and German, but I didn't really want to be a language major. And that's how a lot of people discover linguistics: you're interested in languages, but not any one particular one. And then you discover the science of languages. And that's kind of what happened to me. Stephen Wilson 4:40 So you kind of discovered your interest in grammar in the process of studying French and German? Karen Emmorey 4:48 Yeah. I discovered that Vicki Fromkin, who wrote this great introduction to linguistics, which lots of people have in Linguistics 1, right. Well, she was actually a professor at UCLA. And I said, well, I'm going to the horse's mouth. I'm going to transfer to UCLA and really study linguistics there. And also, the linguistics program had combined majors like linguistics and psychology, which UC Santa Barbara didn't. And I was really interested in the psychology of language. So like a lot of people, the Whorf-Sapir hypothesis, that language can influence how you think, I was really fascinated. And so the language and psychology link worked really well. Stephen Wilson 5:30 Right? Yeah, because Vicki Fromkin is really a psycholinguist. And it's kind of funny. It's always been a little funny to me that like a psycholinguist wrote the most widely used intro linguistics textbook. And it was also the very first book that I ever read about linguistics when I was an undergrad. I remember it well. Yeah, so you transferred to UCLA just because you kind of were interested in Vicki. Karen Emmorey 5:54 And so I was then taking, you know, hardcore syntax and semantics and phonology. And I realized that what I was really interested in was how knowing about the structure of language can tell you something else. About language disorders, actually. I took a class on language disorders from a linguist. And that was really different, because we knew all these things about language, but you were applying it to a different question. And I really liked that. And so I actually, as a senior, even applied to go into speech-language pathology at UC San Francisco. But at the same time at UCLA, they were developing this interdisciplinary PhD program that they called Brain, Language, and Cognition. You would take courses from psychology, from neuroscience, from computer science, and linguistics. They were developing this PhD program.
And as an undergraduate, I was actually on the committee, they involved undergraduates in this discussion about what this program might be. And I thought, that's really what I want to do. I want to be this interdisciplinary person. And so I actually applied to the linguistics graduate program the day before the deadline for applications. I told them, I want to do an individual PhD. And you have to be in a home department to do that PhD. So of course, linguistics made sense. People in the phonetics lab knew who I was. As an undergrad, I'd been working as a research assistant. And so they accepted me into the program. And I had to be in my home department for a year before I could apply to do an individual PhD. And so I took courses, and I designed the whole program that would follow this PhD program that wasn't in existence yet. I mean, I could have waited, except that would have been a mistake, because it's never been realized. So I went to the head of department, Bob Stockwell. And I laid out the program and said, "See, I need to do an individual PhD, I can't complete all these courses in linguistics." And he kind of took it a little bit as an affront that I couldn't do it in linguistics. And so he waived historical linguistics for me, so that I could take my statistics courses and neuroscience courses. So to this day I don't really know much about historical linguistics. And so I followed that program. But my PhD was in linguistics. And so it worked really well because I had that solid training in linguistics, but also at the same time, knew about statistics and psychology and neuroscience. So it was a very individual program, but with the UCLA Linguistics label. Stephen Wilson 8:34 Yeah. And this was in the 80s. Right? Karen Emmorey 8:37 That's right. Stephen Wilson 8:38 Right. Yeah. I mean, I don't think that very many people were getting serious interdisciplinary training in language and brain in the 80s. Like, I mean, even now, it's kind of hard to come by PhD programs in that. Do you think it's important to have like a solid grounding in a discipline? Do you think that makes you better as a scientist? Karen Emmorey 8:58 I think so. Because then you can branch off from there. But you need to have solid training in one domain, so that you have the expertise there. And then I'm all for collaborating with people who have expertise in different domains. And that's how the collaborations work really well. If you just have like a surface knowledge of everything, that's not enough. You do need a core. Stephen Wilson 9:20 Yeah, I kind of agree with that. I guess there's many kinds of cores that can work for our field, right? I mean, I think a core in linguistics can be great, a core in psychology, a core in neuroscience. Speech pathology, I think, is another great core. But it's good to have something where you feel really at home. Karen Emmorey 9:36 Yeah. Stephen Wilson 9:37 So I think our listeners probably will be aware that you're a sign language researcher. I don't think you've done much that wasn't about sign language. So I hope it's okay to call you a sign language researcher. Karen Emmorey 9:47 That's fine. I will also just say really quickly that I didn't do any work on sign language as a PhD student. That started in my postdoc. Stephen Wilson 9:57 Yeah, so I was gonna ask how you got into that. Karen Emmorey 9:59 So I did an NIH F32, one of these postdoctoral NRSA proposals, with Ursula Bellugi at the Salk Institute. And these NIH proposals are really nice.
What you can do is take what your expertise is. So mine at that time was in psycholinguistics. I'd been doing work on morphology and the psycholinguistics of word recognition. And I wanted to apply those concepts to sign language, which I knew nothing about. So that was a nice juxtaposition of my mentor, my sponsor's expertise in sign language, and my expertise in psycholinguistics. Stephen Wilson 10:40 Where did that idea come from? Your idea that you wanted to apply these things to sign language? Karen Emmorey 10:46 Vicki Fromkin. Stephen Wilson 10:47 Oh okay. Karen Emmorey 10:48 Vicki Fromkin and Ursula Bellugi were colleagues and friends. And she promoted me to Ursula, said "I have this great student, and here's some ideas." And then I went to visit the Salk Institute, Ursula Bellugi, and Ed Klima. We talked about ideas, and it was just really fun. It was a great experience. Lots of ideas flying all over the place. And so it's like, okay, let's put something together and put the proposal together. Stephen Wilson 11:14 That's so cool. Yeah, I had the chance to meet Ursula, about probably 10 years ago. She was so cool. I think she was well over 80. But like, you know, very active and you wouldn't have known it. Karen Emmorey 11:28 She asks great questions and she makes you ask great questions. She really makes you think. The other really wonderful thing about Ursula was she allowed me to be really very independent. So once I got this NRSA, I started working on those projects, doing sort of the first lexical decision experiments with sign language, or gating experiments with sign language. And then I stayed on just as a research scientist, helping with the grants that she was writing. She just said, "Oh, these are great ideas, just put them in and we'll do these," and then let me run with anything I was interested in. So it was an unusual position to be in these days, where you can just do research that you want. Stephen Wilson 12:10 Right? Yeah, these days, everything's gotten so big, it's hard to carve out independence as a young person. So I presume that you would have learned to, do you say 'speak' sign language? Like, what's the appropriate verb there? Karen Emmorey 12:25 Yeah, I usually say 'use' sign language. Stephen Wilson 12:27 Okay, you learn to 'use' sign language? Karen Emmorey 12:28 So, a sign language user. You're a 'signer'. Stephen Wilson 12:30 So how did that go? Like, was that your first encounter with sign language? What was that like? And how did you learn? Karen Emmorey 12:36 This would have been in the late 80s, early 90s. So there were no ASL classes at either San Diego State University or UCSD. You had to take courses at a community college. And in fact, my first courses were from a deaf research assistant who worked in the lab, but also taught adult education in the evenings. And so I started taking those courses. I took some courses at the community college. Because it sort of stopped at like ASL 4, and I wanted more, I tried taking some interpreting classes. And my respect for interpreters went through the roof, because it is really hard. Interpreting is really hard. I just don't have the cognitive capacity to interpret, or really the language skills at that time either. I've always worked with deaf researchers and deaf colleagues. And so I've learned through them, communicating with them. My lab now is a signing lab. So everyone signs. Our lab meetings are in ASL, our project meetings are in ASL, so everyone has straight communication.
And so my signing over the years has improved to the point now where I can give, you know, a lecture in neuroscience in ASL. But it was a long trajectory with a lot of stumbles. So the very first time I gave a lecture in ASL at the Rochester Institute of Technology, I have to admit, it was just horrible. It was horrible because I thought I could just, I'd given the talk in English many times, I thought I could just translate on the fly into ASL. And listeners out there, if you're bilingual and you think about this, that you are going to give a science talk in another language or maybe even your native language, but you don't give the talk in that language very often, you realize you have to work on it. And so I learned a lot then. I really have to prepare. I have to be thinking ASL in my head in order to give a good talk in ASL. I can't just translate on the fly. Stephen Wilson 14:45 Yeah, that's really interesting. So can we just talk a little bit about the status of ASL. Just how the language is these days. So how many deaf individuals are there in the US, and how many of them use ASL? Karen Emmorey 15:03 So these statistics are not easy to come by. I always get them wrong. But there's definitely, you know, hundreds of thousands of deaf users out there and hearing people who sign with their kids, sign with their families. So how you figure out these numbers is not easy. I think some people are worried that with cochlear implants or something like this, the language is going to die out. And that's clearly not happening. In part because cochlear implants don't make children hearing, and they're variable in their outcomes. And so it's really critical that you get language input early on. And so there's been more movement toward bilingual, bicultural programs for deaf kids so that they get sign language early. There's more opportunities for parents to learn to sign with their kids. So I think it's actually growing, in terms of ASL use. I know back in the early 90s, I only study adults, but people would fill out their background form and we would always ask, "Do your parents sign?" If they had hearing parents, it was almost always "no". Now there are lots of deaf adults in their early 20s whose parents are hearing, but they started signing as infants or in preschool. And that never occurred in the early 90s. So now we have very fluent signers, who we call early signers because they don't have deaf parents. So they weren't signed to from birth, but very early on. Stephen Wilson 16:38 Oh, sorry. I was gonna say, do you think that's because of the research that you and others did 20, 30 years ago, showing that you'd have a greater command of the language if you learned it as an infant? Did that kind of make it through to the public? Karen Emmorey 16:52 I think so. I think so. I think there's two aspects. One is just showing that sign language is not pantomime, it's a real language. It's on a par with other languages. That's still hard to sometimes get out there in the lay public. I hear in conversations often, "Oh, sign language isn't universal? There are different sign languages?" But the other thing is the research done by the people who are studying child development showing language deprivation. So if you don't get either a sign language or spoken language, because you're just not getting enough spoken input, you really have no language. And that's the worst possible scenario.
And so a lot of work showing that over the years has made it really critical that we need to advocate for sign language input for deaf kids. Stephen Wilson 17:42 Right. Yeah, that's really important. It's funny. As I was thinking about this podcast, I was thinking "Well, shall we talk about how signed languages are languages?" And then I was like, "Okay, I don't think so." With the audience of people that listen to this, I think everybody knows that. But I was wondering, is that something you still come up against? And it sounds like it is. Karen Emmorey 18:01 Yeah, it's less in academic circles. But still, it's still kind of amazing. So I was just recently asked to review a paper where 'deaf and dumb' was in the title. In the abstract, they referred to "spoken languages used by normal people". I refused to review and said "You need to not send this out for review and send it back to the authors to change the language. It's just not appropriate." We're in 2021, and that kind of stuff is coming across my desk still. Stephen Wilson 18:37 Yeah, that's pretty crazy. Yeah, so I was thinking, we were just going to take that for granted. I wanted to talk a bit about the structure of American Sign Language and sign languages in general, and then kind of talk about the neural correlates. And in terms of structure, what I'm interested in is like, all of the ways in which sign languages are analogous, if not identical to spoken languages, and also the ways in which they're interestingly different. So can we talk about phonology? In your book, your book is really great, by the way. It's 20 years old, but I think it's very current still. I'm sure a lot has been learned since then. But I think it really lays out this field of research really effectively. So if anybody wants more of a background, I would definitely recommend that. So in your book, you kind of talk about this debate about whether it should be called phonology, right? And you kind of talk about, why would you call it phonology, why might you not. Can we talk about that a bit? What do you think about that these days? Karen Emmorey 19:43 Well, I think it was the wise choice to decide to call it phonology, as opposed to early proposals like cherology or something, making it sound really different. And the reason I think it was the right choice to talk about sign language phonology is because I think the parallels really say something fundamental about the nature of human language. The fact that we have a level of structure that is just based on form and that you can see clear parallels in terms of units, rhythmic units. These are all just form-based patterns and rules and structures. I think that's really important to recognize, that human languages work that way even if you have a completely different set of articulators and a different perceptual system. So those parallels are really what's interesting, and seeing the overlap both in linguistic structure but also in regions of the brain that might be processing the phonological structure. There are overlaps. But then of course it also makes it interesting to look at the differences. You see much more serial structure for spoken languages because the articulators are much quicker, the auditory system is good at perceiving fast differences. Sign languages have more parallel structure, simultaneous structure. The hands are slower articulators, so you layer a lot of information. That has interesting consequences both for processing and also for linguistic structure.
There's lots of other things you can kind of look at to tease apart, do we see neural patterns that are really linked to the speech articulators, or auditory processing, compared to neural processing that's really linked to the fact that you're using your hands and you're using spatial distinctions. So you can see what's specific to speech and what's specific to sign. Stephen Wilson 21:35 Right. So whereas in spoken language you've kind of got these phonemes that are kind of arranged sequentially, in sign language like you mentioned there's multiple channels. So I think one really salient channel is handshape. And then you've got movement and then location? Are those kind of the fundamental primitives? Karen Emmorey 21:57 Those are the basic parameters that we look at in terms of combination. They clearly need to be assembled. So you see things like slips of the hand and those are the parameters that slip. You substitute one handshape for another or one location or movement for another. You also see 'tip of the fingers' states. You have a separation between meaning, you know the meaning of the sign that you want to retrieve, but you're not able to get the form, the phonology of it. Stephen Wilson 22:27 And do people in those circumstances come up with partial knowledge of the form like in English? Or, you know, we know that you have better than average access to the first phoneme, for instance. And in languages with grammatical gender you, generally speaking, know the grammatical gender of the word you're going for. Do you see that kind of phenomenon in sign language too? Karen Emmorey 22:49 Yeah, so what you see is people tend to recall the handshape or the location. They're less likely to recall the movement. And so what's interesting is that we suggest then that the handshape and location, they often occur simultaneously. At least they're perceived that way often. So that is equivalent to the onset for spoken language, the first sound. So it's kind of like the onset of the sign. The thing that is more difficult to retrieve, then, is the movement, the thing that spans over time. Stephen Wilson 23:24 So can you give me an example of a phonological rule in ASL that looks like a phonological rule in English? Maybe with similar principles involved? Karen Emmorey 23:36 So one principle that's really clear is assimilation. So within compounds, for example, you can combine two signs together to create a compound and you will have handshape assimilation, so the second handshape becomes the same as the first handshape. Something like that. Just like when you combine a prefix with a root you'll get assimilation for bilabials, for example. So that's a very similar type of process. Stephen Wilson 24:03 Okay. And these sorts of similar processes are what led you and others to conceive of sign phonology as another form of phonology, on a par with spoken language phonology? Karen Emmorey 24:16 I think the way to think about it is that there are universal constraints, universal rules that apply both to spoken language and to sign language. Those kinds of unit combinations, those types of representations, are similar. But there's still differences. There is one interesting difference. There are very few environmentally conditioned rules in sign languages. That may have to do with the serial nature of speech versus the simultaneous nature of sign language. So, the kind of thing where you have an unaspirated /p/ after an /s/.
That kind of environmentally conditioned rule is relatively rare in sign languages. And I think that's just kind of an interesting phenomenon. Stephen Wilson 25:06 Yeah, I mean, I guess that must reflect the fact that sign languages are not really, it's not about strings of phonemes, right? Because you rarely have more than one syllable in a word, right. So it's really more about the simultaneous information on different tiers, rather than kind of linearly arranged, meaningless units, right? Is that where that derives from? Karen Emmorey 25:29 Yeah, it's definitely relative. So originally, it was thought there was no linear structure, everything was simultaneous. And then Scott Liddell and others came along and showed that there are signs that have a linear structure, location-movement-location. So you do have linear structure, it's just much less than for spoken languages. Stephen Wilson 25:50 Right. Okay. That makes sense. So, let's talk about syntax. So again, I think in syntax, which is the most interesting part of language clearly, again, I think there are really interesting similarities and some really interesting differences. So do you think you could talk about what are some of the striking similarities and differences between sign languages and spoken languages? Karen Emmorey 26:16 I think there's pretty good evidence that the type of phrase structure combinations and creating phrases is very similar in signed and spoken languages. So that type of phrase structure and constraints on reference, and the neural underpinnings for creating phrases and for basic syntactic processing, seem to be very parallel for sign and spoken languages. And so that actually is nice, because then it gives us a way to, again, look at the differences. So the things I've been interested in, in particular, are how space is used for co-reference, for example. For pronouns, for verbs to indicate who did what to whom. And that's really different than spoken languages, where you tend to have an affix that can mark reference, or can mark verb reference. And those can be stored in the lexicon. You have grammatical morphemes that carry out those functions. Whereas for sign language, you set up reference with locations in space, and then I can direct a pronoun toward that location or a verb towards that location to indicate 'object', for example. And that location itself is really not a linguistic representation. That's what makes it really interesting, because there's any number of them. And the way sign languages work is to use space semantically. So if I'm talking about a tall person in my discourse, I'm going to direct the sign at that person, say to 'ask' the tall person, toward a high location in space. Stephen Wilson 27:50 Okay, so I'm just going to kind of fill in a bit, because our listeners are not going to have access to the video. You're placing the tall person up high in the signing space. And that's going to be where they're going to be located in the discourse. Right? Karen Emmorey 28:06 Right. Stephen Wilson 28:07 And that's really fascinating to me, this whole use of space as a kind of reference space, I guess. How many participants can you locate simultaneously in a conversation in it? Karen Emmorey 28:21 So my view on that is it's constrained cognitively. That is, you can't remember all of these different places. And so just pragmatically, if you're just talking about individuals in a discourse, maybe three or four.
But, for example, if you're describing the location of items in a room, a layout, you can have lots of different locations, because you use signing space as an analogue to the locations of the pieces of furniture in your room that you're describing, for example. This is why I think these locations, you can think of them as gestural. That is, it's an interface between a gestural representation and a linguistic representation. And there's lots of theories now that kind of agree on that basic idea, that you have a linguistic representation of a verb or pronoun, and you have slots that get filled in when you're using the language. Does that make sense? Stephen Wilson 29:14 Yeah, I think so. So do all sign languages make use of space in this way, where it kind of is a place where you can put referents and then refer to them subsequently? Karen Emmorey 29:26 That's a great question. So lots of sign languages do. There is some emerging research on sign languages that may not use space in that same way. So I believe Kata Kolok, which is a sign language used in Bali, does not use signing space in the same way, in part because it's within a smaller community. And so space can be used to point to where actual things are. So you can use pointing towards the city you're talking about, pointing towards the things you're talking about, rather than using these, what we call abstract locations, that you set up and you associate a referent with that location, and then you refer back to that referent. You see it more with spatial descriptions, but not so much with referents. And we're not sure if that's just an unusual sign language. There's a lot of people that are doing research to try to understand how space is used in different sign languages. So most sign languages do use space that way, but not all. Stephen Wilson 30:33 So that's like a really interesting difference, where you have this availability of this spatial frame that we don't have in spoken language, unless we're gesturing. But that's a different thing. What about similarities? Karen Emmorey 30:46 We actually have an experiment planned, when we're able to run participants again, to try to look at this. To try to see when space is being used in this referential way. Do you recruit different regions? There's reason to suspect that you might actually get right parietal regions engaged when signers are using this, or comprehending this type of spatial usage compared to just when you're using word order, for example, to indicate who did what to whom. Stephen Wilson 31:13 Right. Right. Yeah. Okay. So how about word order? I think in ASL it's SVO like in English, is that right? Karen Emmorey 31:23 Yes, for basic word order. What's different is that more different word orders are allowed in ASL than in English. So ASL uses a lot of topic-comment structure. So you can move things to the front of the sentence as the topic, which allows you to have more things like OSV orders. So originally, people thought "Oh, there's no word order in ASL" until Scott Liddell discovered facial expressions that mark the topic. So you can't just move things around, they have to be linguistically marked. And once facial expressions were recognized as a grammatical marking, then it was recognized that we have a basic order, but there's lots of variability that can be marked. Stephen Wilson 32:08 Cool. Okay, and then so we've talked about phonology and syntax. How about the lexicon?
So I think that maybe one of the most interesting differences is that there's a lot more potential for iconicity. Can you talk about the similarities and differences between the lexicon in sign language compared to spoken language? Karen Emmorey 32:28 Yeah, so we've done a lot of work trying to understand the nature of iconicity in sign languages. Part of it is, there's iconicity in spoken languages, so things like onomatopoeia. There are other spoken languages, like Japanese, that have ideophones, and can have a whole systematic sound symbolism system. It's just a little bit more reduced compared to sign languages, partly because it may be harder to make things sound like what they mean. It's a lot easier to make things look like what they mean. You have the hands to show actions, visual representations, tracing of shapes, so there's just a potential for iconicity. I think if spoken languages could do more, they would do more. Stephen Wilson 33:08 I bet they would. Karen Emmorey 33:09 Now the question is, what's the role? So this is a good place, sign languages are a good place to study iconicity. So does it help language acquisition, for example? It's pretty clear adult learners learn iconic signs faster and better than non-iconic signs, because they can use the form-meaning mapping to remember the meanings of signs. It used to be thought that iconicity didn't play a very big role in first language acquisition by kids. But there's a lot of work now suggesting that, no, actually, along with many other variables, including phonology, whether it's a frequent phonological form, and frequency, iconicity also plays a role in early language acquisition. So iconic signs are learned earlier than non-iconic signs. But now we're also looking to see, does it make any difference in processing? And I would say that evidence is a little bit mixed. For production, we've definitely seen, there are now at least five or six studies that have shown, if you do a picture naming experiment, where you're naming pictures that correspond to iconic signs versus non-iconic signs, you're faster at producing the iconic signs than the non-iconic signs. And we see a nice ERP signature for that too. It turns out that when you're naming iconic signs, it's like naming concrete items. You actually get more negativity than for non-iconic signs. Stephen Wilson 34:37 Yeah, I was gonna ask you if that holds after sort of covarying out concreteness, because I assume there would be a pretty strong correlation there. And we know that has an effect on naming. Karen Emmorey 34:46 There is, but these are all concrete in the sense that they're all nameable, and we can covary for concreteness, and you still see an effect of iconicity. Because you're right, there's a strong correlation between concreteness and iconicity. But even when you take concreteness into account, you still see this effect of iconicity. At least in production. I'm not so convinced about comprehension. So we recently just did an ERP study just looking at sign recognition of, you know, a large group of signs that varied in frequency, that varied in iconicity, that varied in concreteness. We saw a big effect of frequency, we saw a big effect of concreteness, controlling for all the other factors. But not for iconicity. So at least it's not being tracked in the same way that frequency and concreteness are. I don't think we really understand exactly the role of iconicity in language processing. But it does affect behavior.
Stephen Wilson 35:44 I mean, maybe it just kind of speaks to the fact that sign languages are languages, right? Just because things can have an iconic basis, once they start being processed as language, they probably start to become bleached of that and then start to be metaphorically extended. And pretty soon, it doesn't really matter too much the origin. I don't think that we think about which words are onomatopoeic when we're speaking, you know? Karen Emmorey 36:07 That's right. That's right. In fact, more frequent signs tend to be less iconic. So the idea is that, with form constraints and changes, a sign can lose its iconicity. But the other interesting thing about metaphorical extensions, there's a beautiful paper by Irit Meir, where she proposes something called the double mapping constraint, where iconicity actually constrains the way signs can be used metaphorically. So there's actually a linguistic structural impact of iconicity that you don't see, or it's not clear you would see in spoken languages. Her examples are something like "the acid ate the metal". In lots of spoken languages, you can use the verb 'eat' to mean that the acid is sort of devouring metal or a key or something. The sign for 'eat' in many sign languages is you're holding something in your hand and you bring it to your mouth. The idea is that the iconic mapping in the 'eat' sign is depicting holding something and bringing it to the mouth. But the metaphorical mapping doesn't involve that type of mapping. It involves something being consumed, which is not highlighted in that sign. And so you can't use the sign 'eat' in ASL to mean the acid ate something. You have to use a different sign that has a different iconic mapping, a sign which we sometimes gloss as 'nibble', where one hand sort of clenches and moves across the palm. So you can see this movement that's indicating something is being consumed or devoured. That sign can be used in this metaphorical sense of acid eating something. So that's where the double mapping comes in. You have mapping from the articulators to the source domain, the concrete domain, and then a mapping from the concrete domain to the abstract domain. And what Irit Meir discovered is that you have to maintain the structure throughout those mappings. So iconicity constrains the type of metaphorical mappings that sign languages use, or the signs that are being used in those metaphorical mappings. So I like that example because it really shows a linguistic impact of iconicity. Stephen Wilson 38:24 Yeah, that's really interesting. Okay, so now I kind of want to ask you, before we start talking about the brain more, there's just one silly question that I've always wanted to know and never had a chance to ask anybody. It's because I have small children. And the question is, is it possible to whine in sign language? Karen Emmorey 38:45 To whine? Yes, yes, you can whine in sign language. A lot of it is facial expression, which is really important, you're going to get a whiny face. Stephen Wilson 38:55 Okay. But does it have like a grating sound to it that makes you want to, like rip your ears off? I guess it doesn't, but does it have any equivalent of generating that emotion? Karen Emmorey 39:09 It's a visual response. So when kids are whining, it's just like, okay, I just look away. And then I don't see the whining. Stephen Wilson 39:18 That's like an advantage, right.
Okay, so let's talk about the neural basis of sign language, which I think was investigated by Ursula Bellugi and her colleagues in the 90s. And I think you were involved in some of these studies. So can you talk about what was found with sign aphasia, first of all? Karen Emmorey 39:46 So the first question when people were thinking about the brain organization for sign language was, what hemisphere of the brain is involved in signing? So this is before fMRI. The only data came from signers with stroke or brain injury. We knew from spoken language that if you have injury to the left hemisphere you have frank aphasias. If you have damage to the right hemisphere, you don't have aphasia but you can have spatial impairments, so wayfinding and spatial cognition. Sign language is really interesting because of course the work has shown these are languages, they have the same linguistic structure as spoken languages, syntax, phonology, morphology. But they use space. They use space at every level. So, in phonology, you have location differences on the body. We've already talked about syntax using space for referents. So, maybe sign languages are represented in the right hemisphere because of the signal, the medium. Maybe they're more bilateral, or maybe they're in the left hemisphere. And what the data from the stroke patients clearly showed was that damage to the left hemisphere created sign language aphasias, so nonfluent aphasias, fluent aphasias, whereas right hemisphere damage did not create aphasia. You did see spatial impairments with right hemisphere damage but they didn't come out in the language. They didn't appear aphasic. So that really told us that what the brain cares about is language. The left hemisphere is really the language hemisphere, and it's not there because speech is auditory and needs fast processing, or because the vocal tract is being used, because you see the same organization for sign languages. It's telling us, again, something about why the brain is organized the way it is. Stephen Wilson 41:38 Right, yeah, it really kind of is problematic for a lot of the theories as to why language is left lateralized at that point. Was it surprising to you at that time that the findings were so clear? That sign aphasia was associated with left and not right hemisphere damage? Or were there enough hints in the prior literature that that was what you were expecting? Karen Emmorey 42:00 I think it was kind of surprising, because in part I was really interested in the spatial aspects of sign language. My research has focused on that and tried to see, well, what is the right hemisphere doing? What aspects of sign language are controlled by the right hemisphere and those kinds of questions. So it was a bit surprising to me. It may not have been surprising to others who were really thinking "Okay, well language is language, and so the left hemisphere should be processing language." Stephen Wilson 42:29 Right, you were really focused on that use of space and it seemed very logical that that could make a difference to its neural underpinnings. Karen Emmorey 42:37 Exactly. Stephen Wilson 42:39 And those studies with aphasia, they also showed that within the left hemisphere, the layout of sign language processing was kind of analogous to spoken language, right? Karen Emmorey 42:50 At a basic level. If you look at frontal damage, what you see are nonfluent aphasias. The signers with frontal damage, they comprehend pretty well but have very effortful signing, effortful articulation.
Frustrating, as it is with spoken language aphasia. With more posterior damage, temporal damage, what you see is fluent signing, but with lots of grammatical errors, it doesn't always make sense. Just like what you see with spoken language. Stephen Wilson 43:22 Yeah, it's very cool. I think it really just kind of shows us what that basic layout is. It's tempting to think that Broca's area is where it is because it needs to be near mouth motor cortex, and Wernicke's area is where it is because it needs to be near auditory cortex. And you kind of realize that's not the explanation, right? Karen Emmorey 43:39 Exactly, exactly. When I talk about this I say these are language areas, they're not speech areas. Stephen Wilson 43:45 Right. Okay, so in some of your work in the last 20 years you've definitely documented some differences in the neural correlates of sign language processing and spoken language processing. I'm especially interested in the PET and fMRI studies. Can you talk about the big picture findings there? Karen Emmorey 44:10 So one of the things I've been interested in is spatial language. So, talking about spatial relationships, because that's where sign languages and spoken languages are really different. In spoken languages, often to talk about space around you, the layout of a room, you're going to use prepositions. These closed-class grammatical items, 'on', 'under', 'around'. Whereas for sign languages, that's not the way spatial language is produced. Basically you have handshapes that represent the objects that you're talking about. So flat surfaces, cylindrical objects. And then it's the location of where I place that flat hand or that curved hand in space, one on top of the other, one under it. That's telling you where those items are located. That's much more of a gradient type of representation than 'on' or 'under'. And so in looking at the neural representation for that, or the neural regions that are involved in producing those types of expressions, what we see is bilateral superior parietal involvement in the production of those expressions, which you don't see for spoken language. For spoken language, it tends to involve parietal regions, but it's usually left lateralized, and it's in the supramarginal gyrus. So it does seem that to produce these types of expressions, you have to recruit bilateral parietal regions. What's interesting, though, is the handshapes themselves, which are morphemic, are stored in the lexicon. So a particular handshape for curved objects or flat surfaces. When you look at the neural regions that are involved in just retrieving the object classifiers, or the object handshapes, then you see language regions. So left inferior frontal, left inferior temporal, are engaged when you're retrieving those types of expressions. In contrast to the spatial aspects of those expressions. I've tried to tease apart the linguistic aspects of those constructions from the sort of more analog or gestural aspects of them. Stephen Wilson 46:21 Okay, so maybe this is just a philosophical question, but do you see that whole parietal involvement in the spatial aspect of sign language, do you see that as being linguistic or not? Like, what's your take on that? Karen Emmorey 46:33 So I actually am thinking of it as more of a cognitive system. So, in order to do that kind of spatial mapping, you need to recruit the spatial system. It's the interface between spatial cognition and language.
So whether you think about interfaces as linguistic or not kind of depends on your point of view. But I think of it as that's where spatial cognition and language come together, and they come together in a different way for sign languages than for spoken languages because of the mapping between language and spatial cognition. Stephen Wilson 47:02 Yeah. You also mentioned the supramarginal gyrus there. I think you said for spoken language, right? Left supramarginal gyrus? But you also see that for sign too, right? You see pretty clear involvement of supramarginal gyrus and kind of motor control of the hands. Karen Emmorey 47:18 And in fact, we did an experiment just looking at the production of different types of signs, so one-handed or two-handed signs, or body-anchored signs, signs that are produced at locations on the body, to try to understand what neural regions were involved in production of different types of signs. And the supramarginal gyrus was the one, it was left supramarginal gyrus. A little bit on the right, a little bit bilateral. But it was the region that was engaged for the production of all sign types. And so we are hypothesizing, along with other data, that the left supramarginal gyrus is particularly involved in the assembly of phonology for sign languages. It may also be involved for spoken languages as well. It's probably not exactly the same region, but the area is involved in combining these units. These, again, abstract phonological units for sign. Stephen Wilson 48:08 I definitely think it has the same role in spoken language. It's a little harder to image. Some work from our lab, led by Melodie Yen, a recent graduate, kind of clearly established there's supramarginal involvement in phonological assembly, in encoding. And then there's definitely other fMRI papers showing this, too, from other groups. And I'm curious about whether it will be the exact same region or not, because looking at your papers, it looks very similar. But you know, it's hard to tell without actually doing a direct comparison. Karen Emmorey 48:41 Yeah. And I do think that there's a cascading effect from the sensory or motor regions that are clearly different to the lexical representations and phonological encoding. That's why I think it's probably not exactly the same. I feel like, I mean, this is just a hypothesis, that there's remnants of the articulators that you see in the representations. That's going to then affect how words and signs are represented. Stephen Wilson 49:09 Yeah, I mean, my guess would be that it's not identical, right? I mean, like fronto-parietal connections, you know, different parts of the frontal lobe connect to different parts of the parietal lobe. So you would expect it to differ based on what articulator it has to hook up with. But it's cool that that same region has basically the same function in the two modalities. Karen Emmorey 49:38 One of the contrasts that really made me think that there may be differences that carry over from the sensorimotor regions is if you look at bimodal bilinguals. So hearing signers, and you just compare comprehending speech versus comprehending sign, without a lot of subtracting out low level baselines. You see these huge differences in brain regions that are involved in sign comprehension and speech comprehension, because of the nature of the sensory systems. And the same thing for production, in terms of what regions are engaged. If you do that for two spoken languages, you hardly see anything.
If you subtract Spanish from English processing, there's tremendous overlap for bilinguals. And that's not the case for bimodal bilinguals, because of the huge differences in sensory and motor processing. And I just feel like there's going to be cascading effects of those early sensory differences. Now, they may not be huge. This is where we need more sensitive measures. It may be within the same region that's doing that type of computation. But for example within supramarginal gyrus, it may be that the sign computation is a little bit more posterior, closer to the articulators, and speech more anterior. That would be a hypothesis if you still have a reflection of those motor systems in the computation. Stephen Wilson 51:05 Yeah, what happens when you do control, I guess it's really hard to control for the sort of sensory motor aspects of sign. But to the best extent that you're able to do that, what happens if you compare signed and spoken language while fully controlling as much as possible the sensorimotor differences? Karen Emmorey 51:23 Then you see very similar regions. Of course with anything it depends what that control baseline is. For example, something that Ted Supalla's group has used is superimposing a signer. So you have like three different hands. There's lots of visual and facial expression motion going on as the contrast for comprehending signs, and then you get rid of all parietal activation. You don't see the differences that others have seen when they use a simpler baseline, like just a model at rest. So when you include lots of movement in your baseline, you don't see movement areas active during sign comprehension. You see much more of these very core regions active. Stephen Wilson 52:14 Yeah, can you just say again what paper that is and where that's published? Karen Emmorey 52:18 It's Aaron Newman, Elissa Newport, Ted Supalla. Those are the authors. But I can send it to you. Stephen Wilson 52:26 Cool. So what do you think are the big outstanding questions about sign language in the brain, and do you have projects that you're particularly excited about right now that you're working on to get at the big questions? Karen Emmorey 52:40 So one of the questions is again the effects of modality, so thinking about, we talked about syntax and figuring out what the role of spatial processing is in syntactic processing. What regions are engaged for that? Another question I'm interested in is the relation between production and perception. This is because, again, I'm interested in things that are different about signed and spoken language. So for speech, you hear yourself speak and you use the same computational system when you're listening to someone else's speech. There's a really strong link between auditory feedback and production. For signers, it's different. I don't see myself sign. I can't parse my own signing because I have a very different view of it in the periphery. I'm not using the same comprehension system that I am using when I watch someone else sign, where I can see their face. I can't see my own face when I sign. So I'm interested in that coupling, that interface. And what does it say about inner signing and the efferent copies that are produced when I sign? Is it a visual prediction that I make? Is it motoric prediction? So thinking about how signers monitor their signing, it's most likely proprioceptive rather than visual. How does that change the way perception and production are linked? What are the brain mechanisms that connect production and perception?
So we're working on trying to do some experiments where we use an adaptation paradigm to try to look at adapting stimuli, whether they're motoric or visual processing, to try to understand what those connections are. Stephen Wilson 54:26 Wow, those are really big and very complicated questions. Karen Emmorey 54:29 They are complicated. Stephen Wilson 54:32 Yeah, so is that the main focus of your research right now? Or one of the primary focuses? Karen Emmorey 54:38 Yeah. So just last year we were awarded a grant to look at some of those issues. Along with that, the other thing we're going to be looking at is trying to understand the phonological processing, and how that happens for signs. Looking at handshape recognition, location recognition, and trying to determine whether regions like the extrastriate body area, motion areas, are those engaged in decomposing a sign during recognition? So, thinking about for speech, right, you can look at phoneme representations in superior temporal gyrus. Well, what's the equivalent for sign? Do we see representations of handshape that are tuned and stored maybe either in parietal regions or in extrastriate body area? Stephen Wilson 55:33 Or MT, right? Sort of posterior middle temporal gyrus? Karen Emmorey 55:37 It's possible. And what's really cool about signed languages is we can compare processing of signers with people who don't know a sign language. So their brain has not been tuned to these linguistic structures. And so we can have this really nice comparison between signers and nonsigners to see, okay, is this just the way the brain works? We see it in both groups. Or is there really an effect of language in tuning these representations? That's something you can't do for speech because if people can hear, they acquire a spoken language. You can't really get at that specificity within auditory processing because everyone knows a spoken language then. Stephen Wilson 56:17 Right, and even if you're hearing some language that you don't understand, you can still make out phonemes in it, right? There's going to be like enough phonemes overlapping that basic processing is going to be shared. Yeah, very cool. Well, I should probably let you get to your work day in Maui. And I really appreciate you taking the time to talk with me and make this episode of the podcast. Karen Emmorey 56:43 Oh, I really enjoyed it. I can tell listeners out there, if you have questions about sign language, I'm very responsive to emails. So please email me if you have a question. Stephen Wilson 56:55 That must make you unique among all people in academia. Karen Emmorey 57:00 I'd like to encourage sign language research. There's not enough of us out there. So if somebody has a question that I think is really interesting, I want to try to put them in touch with somebody who can help them answer it, or encourage them to do the work themselves. One thing is, if you're thinking about doing sign language work, you want to hook up with a deaf researcher to work with, because you sort of need to understand the people that you're studying. Stephen Wilson 57:26 Absolutely, I think like having a connection to the deaf community would be absolutely critical for this research to succeed, right? Well, I appreciate you joining me today. And I hope to catch up with you soon. Karen Emmorey 57:41 It was my pleasure. I hope to see you soon too. Stephen Wilson 57:43 All right, bye. Karen Emmorey 57:45 Okay, bye. Stephen Wilson 57:47 Okay, well, that's it for Episode Six.
Please subscribe to the podcast on your favorite podcast app. And if you have time, rate and review the show on Apple Podcasts. If you'd like to learn more about Karen's work, I've linked her lab website and some of the papers we discussed on the podcast website, which is langneurosci.org/podcast. I'd be grateful for any feedback; you can reach me at smwilsonau@gmail.com. I'd like to thank Sam Harvey for assistance with audio engineering and Latané Bullock for assistance with editing the transcript of this episode. Thank you all for listening. Bye for now.