Stephen Wilson 0:06 Welcome to Episode Two of the Language Neuroscience Podcast, a podcast about the scientific study of language and the brain. I'm Stephen Wilson and I'm a neuroscientist at Vanderbilt University Medical Center in Nashville, Tennessee. This podcast is aimed at an audience of scientists and future scientists who are interested in the neural substrates of language. I'd like to thank the many people who listened to the first episode, and have come back to listen again. I'd also like to thank everyone who provided feedback, ideas for topics, and kind words on our first episode. My guest today is a legendary cognitive neuroscientist, Sophie Scott, CBE. That's Commander of the British Empire, for those of you who might not be familiar with the British honor system. Sophie is Professor of Cognitive Neuroscience and Director of the Institute of Cognitive Neuroscience at University College London. She has done seminal work on the neurobiology of vocal communication, including investigating the functional roles of different streams of auditory processing, exploring homologies with non-human primates, and studying not only speech perception, but also the perception of emotion in the voice, nonverbal vocalizations, and many other topics. Hi, Sophie, how are you? Sophie Scott 1:12 I'm fine, thank you. It's deeply weird in the UK at the moment. We're back into lockdown. So everything's got very small, you know, you're indoors most of the day and gazing at outside and hoping that there's sunshine or a bird you can see or something. It is odd, but no, we're all fine. Stephen Wilson 1:28 So the first thing I'd like to ask you is, how did you become a cognitive neuroscientist who studies language and communication? Like, did you have any interests as a kid that pointed to this future career? Or what's your history there? Sophie Scott 1:41 I was, when I was a kid, I was very interested in biology. And I liked biology and enjoyed studying biology. 
And I started a degree in biology. And I discovered through that... I did a course on animal behavior, which I thought was incredible. And that made me realize you could study psychology. This, you know, wasn't as popular a degree back in the 1980s as it is now. So I changed courses and started a new biology degree. But basically, I could take a psychology pathway. And what I really wanted to do was music, what I liked was music psychology, and that's what I wanted to study. And that's what I did, whenever I had a chance to do any projects, or anything where I got to choose a subject, I would sort of study that. And then I, very luckily, was accepted onto a PhD place at University College London, where the guy who'd accepted me on as PhD supervisor, Peter Howell, had done this work on music that I'd based my third year dissertation on. And I thought that's what I was going to be doing. And then I got to UCL, and his interests had really shifted in the long time since that paper had been published. And he cared a lot more about speech. So he just moved me sideways onto speech and I was like, okay. And, somehow... no, it's not true, I have kind of got back. But you know, I was lucky, I got to study, you know, signal processing and phonetics, and that was, actually, really useful. So having the biology background, and then having this kind of quite fine grained, detailed learning about speech and sound was very useful in terms of... Yeah, I've always been more interested in the sound end of communication. And it's not that I'm not interested in the wider language system. But you know, that front end is interesting to me. And that came about because of my PhD supervisor's interests. So it meant that when people started doing brain imaging, around the time of the end of my PhD, you started to see people talking about PET scans and things like that.
But because I had the sort of biology background, and then this kind of fairly intense signal level interest in speech, I got involved in some early functional imaging studies. And that was very interesting. It seemed like a very pleasing technique, to be able to ask almost observational questions about the nature of the systems that you're looking at. And after that, I never really looked back, actually. So I was very fortunate in that the sequence of things I didn't want to happen actually got me completely embedded in an area that I love. Stephen Wilson 4:28 Yeah, that's really great. I guess you were just, what a fortuitous time to be right there with that particular training. And yeah, I'd noticed that your dissertation is very speech science, you know, very serious speech science. And that's obviously something that did end up playing a role as you moved forward. And were you a musician back then when you were a kid? Are you still a musician? Sophie Scott 4:50 Yeah, I was. I was a, it sounds odd now. I come from a family of musicians, but they were all singers. So there were musical instruments in the house. But there would always be, like, you know, things that you would do to accompany singing. So actually, I think I'm probably... My father was very musical, but we didn't have any way of sort of recorded music being played in the house. When I was a kid, we didn't have a record player, and I was quite old before we got a cassette player in the car. Because music was a live thing. Probably one of the last people of that kind of generation for whom music was something that you did, rather than something you'd go off and listen to. You might go to a performance, but you know, the thought of buying it and listening to it that way, would just, I just never saw it happen. So yes, I was lucky in that I had a very musical family. And there was a lot of music around.
And I think that, as I've got more interested in thinking about speech, and voices, it does kind of take you back to music, because there is a great musicality to the voice. It's just, you know, when it's being used for language, it has a form that sometimes tricks you away from noticing the musicality. So I have found, I've found a way back to kind of get to think about music sometimes, for which I'm grateful. Stephen Wilson 6:06 That's really cool. What kind of music did your family play and listen to? What style of music? Or styles? Sophie Scott 6:16 My father had been a singer and his father was a singer. And they were church singers. You know, my grandfather was a vicar choral at St. Paul's Cathedral. He's buried in St. Paul's Cathedral. So, you know, if my father took you to see singing, it would be, you know, a Bach passion or something; that was the sort of singing that he'd grown up singing. He was a choir boy. And then his voice wasn't quite good enough to go into it professionally. So he sang a lot. And he'd sung as a child, but it wasn't something you could kind of carry on into an adult career. So he'd moved off into sales, basically. So he was always singing. And that was, it was interesting. When my son was born, I found that I kind of did the same thing. I would, you know... and you don't realize, I suppose, how much you kind of want to share these things until you find yourself doing it. But it clearly was quite a, you know... for people to feel comfortable singing, I think, is a nice thing in life. So I'm grateful that I had that experience. I mean, I'm not in any way a good singer. But I enjoy it. And I take pleasure in it. And it's nice to be able to listen to other stuff, but also to get to do it yourself. Stephen Wilson 7:23 Yeah, we like to sing in our family too. But in that respect, I'm different. My parents would never sing. I don't think... Well, my mom sings.
But I've never, I don't think I've ever heard my dad sing a note. He's just not the kind of guy that would, like, you know, sing a note of a song, or eat ice-cream in public or anything like that. Sophie Scott 7:43 There's something so kind of, um, exposing about the singing voice. People are genuinely very anxious about it very often. And I completely understand it, because there's something sort of very truthy about it. And I think everyone's got a really lovely voice. You know, very rarely isn't there something interesting or pleasant about somebody's voice, but people are so worried about it. Like, you say, "I will sing", and everyone's expecting, "Okay, then, I need Mariah Carey, do it now", you know, and it's fine. Doing it at all is normally just the fun part. Stephen Wilson 8:13 Yeah, I do agree. Okay, so I do want to talk about some science. But before we quite get into that, I understand that you recently became a Commander of the British Empire. I was wondering if you could kind of explain, for all of us non-British people, or maybe even for British people, what that is? Sophie Scott 8:34 Well, I've been on a steep learning curve, learning this myself. So in 1917, the then king, I forget which one, it might have been one of the last Georges... there'd always been a system in the British Army for acknowledging valor and bravery and contributions made by people in the army. And they brought out this order of medals for civilians, initially to sort of recognize things people had done in the First World War. So it's structured like a military rank. At the top level, you've got dames. And then you go down a bit, and you get to commanders. And then there's officers and then members. And the title is "of the British Empire". And it's just as if the British Empire was a military order. That's basically it. So it's very nice. It's basically a sort of national honor. So obviously I'm very delighted to receive it.
It has long lost its kind of connotations. No one ever really thinks too much about the fact that it's based on the British Empire and the British Army. You know, but people do turn it down for that reason. So I think the reason I got nominated for one was because I did the Royal Institution Christmas lectures in 2017. And if you look at the list of people who've done Royal Institution Christmas lectures, quite a lot of them have an MBE or an OBE or a CBE. So someone has to nominate you. And I strongly suspect I was nominated by the Royal Institution, which is very nice of them. Stephen Wilson 10:24 That's really nice. I mean, what a great honor that is. Was there, like, a fun ceremony where you got a medal and that kind of thing? Sophie Scott 10:31 At some point there will be, because it's all happened under COVID. It hasn't happened yet, so. But there's a dress code and everything. You have to wear a hat. I don't know what I'm going to do. Stephen Wilson 10:41 I mean, you do have considerable interest in fashion for a neuroscientist. I know this from following you on Twitter. Sophie Scott 10:47 Yeah, I have been thinking quite a lot about the hats, and trying to work out how I can make that work in a way that's not sort of too Hyacinth Bouquet. But it'll be interesting. I mean, who knows when it will happen. But basically, they have an event at Buckingham Palace and a royal pins a medal on you. And, you know, you get to take your mum. I'm greatly looking forward to taking my mum to that. Stephen Wilson 11:13 Yeah, I was pretty chuffed when I saw that. I just kind of thought, you know, someone in our field is a Commander of the British Empire. I just kind of thought that was very cool. Sophie Scott 11:23 Well, it's very kind of you. And it is, you know, it's a tremendous honor. It really is a tremendous honor. And it does, you know, it's for services to neuroscience.
So that's the kind of, you know, it is an acknowledgment for the field as well. But obviously, you know, I was hugely delighted with it. Stephen Wilson 11:41 Yeah, well, you surely have earned it. So yeah, let's talk about... I kinda wanted to start with one of your papers that I know very well, because I've used it for teaching for a lot of years. And it's your 2000 paper in Brain. And I think this is a good paper to maybe discuss in some depth, because I think it kind of sets the stage for a lot of the themes that you've gone further with in your career after that. So I was wondering, can you kind of tell me and tell our listeners about that paper? Yeah, I guess I'll just kind of leave it to you to structure it how you want to structure it. And I'll ask more questions if I need to. Sophie Scott 12:15 It sounds really silly now. But when I first got involved in doing functional imaging, there was this problem with some of the first functional imaging studies people did that involved playing people speech or getting them to speak, because we had this very good idea, from neuropsychology, of what brain areas we should see recruited. So we would expect to see something representing Wernicke's area and something to do with Broca's area. It definitely would have to be on the left. It kind of worked when someone was talking. But when you played someone speech, you didn't see a left lateralized system. It was thoroughly and unambiguously bilateral. And because you're always looking at relative patterns of activation with these brain scans, we got very caught up in how that might be influenced by what we were using as our baseline comparison. So obviously, if you use rest as a comparison, well, you could be seeing loads of bilateral activation, because actually, most of that has nothing to do with the fact it's speech. It's because you're hearing a sound. So what comparisons should you use?
And people tried reversed words and clicks. And there's always some reason, some wiggle room, where the acoustics could still be explaining these differences. And I was having a conversation with Stuart Rosen, who'd come up to Cambridge, where I was based at the time, to give a seminar, and he was giving a talk with noise vocoded speech. And there was a lot of stuff about noise vocoded speech, which is speech where you basically replace a lot of the fine spectral detail of speech with noise filtered to different bandwidths. So it sounds like a harsh whisper. And it's interesting, because you've got this very reduced spectro-temporal profile, but people can very quickly learn to understand it. And I was saying, you know, that would be a really interesting study to do. And he said, "Yes". And of course, you could use a different baseline to what he did. So I kind of started saying we should do something with noise vocoded speech, and we were talking about this problem of the baselines. And he said, "Well, I've got an idea for the baseline, how about using rotated speech?" And rotated speech is a technique that was developed by a guy called Barry Blesser in the 1960s, which basically involves flipping the spectrum of a sound upside down, like inverting a face. And like inverting a face, it becomes very hard to decode that information, although you haven't taken anything away. It's now all in the wrong... For the face, it's in the wrong spatial location, and for the sound, it's in the wrong spectro-temporal point. But a lot of information is preserved. So you tend to preserve the pitch and the pitch variation, and over a very long period of time, particularly if you only ever listen to one person's voice that's been flipped in this way, you can learn to remap it, to understand it, because you haven't taken stuff away. Stephen Wilson 15:16 Right.
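The "flipping the spectrum upside down" idea can be sketched in a few lines of Python: reversing the FFT bins of a real signal mirrors its spectrum, so energy at frequency f lands at (Nyquist - f). This is only an illustrative toy, assuming a plain full-band flip; the published stimuli were band-limited and rotated about a pivot near 2 kHz, so treat the function and parameters below as a sketch, not the original processing chain.

```python
import numpy as np

def rotate_spectrum(signal):
    """Flip a real signal's spectrum upside down: energy at frequency f
    moves to (Nyquist - f). A toy, FFT-based stand-in for Blesser-style
    spectral rotation; the 'inverted spectrogram' idea is the same."""
    spectrum = np.fft.rfft(signal)
    # Reversing the rfft bins mirrors the spectrum end to end.
    return np.fft.irfft(spectrum[::-1], n=len(signal))

# A 1 kHz tone at a 16 kHz sample rate comes back as a 7 kHz tone
# (Nyquist is 8 kHz, and 8000 - 1000 = 7000).
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
rotated = rotate_spectrum(tone)
peak_hz = np.argmax(np.abs(np.fft.rfft(rotated))) * fs / len(rotated)
```

Applied to a sentence, this kind of flip preserves the rhythm and rough pitch movement while scrambling where each formant sits, which is why the rotated example played later in the episode sounds speech-like but unintelligible.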
I'm interrupting from the future to play examples of the stimuli from this study. First, I'll play a spectrally rotated item, and then I'll play the clear speech version of the same item. Example stimuli 15:27 [Rotated speech.] The clown had a funny face. Sophie Scott 15:32 So we ended up with this study where we had two different kinds of speech. We had the speech that is normal, clear, intelligible speech, and noise vocoded speech, which people have had very little experience with, and we gave them a quick training beforehand, so they can understand it. But it doesn't sound like someone talking. You couldn't tell if it was a male or female voice, you would have great difficulty. In fact, I'd go as far as to say it would be very, very hard, certainly with our stimuli, to be able to recognize the talker at all. And now, for both of them, we would compare them to their rotated baseline. So something that is a flipped version that people can't hear as something that's intelligible, but which is technically matched. And there are ways in which, in fact, this comparison does break down. But certainly, as far as we could see, in terms of how you represent the information on a spectrogram, there was no huge difference in terms of the acoustics. And when we did this... Stephen Wilson 16:26 Oh, sorry, can I just, kind of for clarity for our listeners, can I just summarize what I'm understanding and make sure it's all above board. So you've got regular normal speech, you've got noise vocoded speech, which is fully intelligible, that kind of, as you said, sounds like a harsh whisper, it's got like a lot of the spectral detail taken out. Then you've got the control for normal speech, which is going to be spectrally rotated speech, where you've kind of flipped the spectrogram upside down on the y axis, and so it's got all the spectro-temporal complexity of speech, but is completely unintelligible without a great deal of training.
And then you also do that reversal of the noise vocoded condition as well. Is that it? Sophie Scott 17:12 Exactly, so, exactly. And it's a weird design. People used to call them conjunction designs, and it sounds like one bit of a two by two ANOVA, whereas in fact it isn't really, because what you're saying is, I'm going to look for brain areas that are activated in both comparisons of the intelligible over the unintelligible, where you're using the rotated speech as the baseline for the normal speech and the rotated noise vocoded speech as the baseline for the noise vocoded speech. And that was interesting, because when we did that, we did get a very strongly left lateralized response. And interestingly... Stephen Wilson 17:54 Sorry, can you, for what contrast? Yeah, so can you kind of lay out the key contrasts and what you saw for each one? Sophie Scott 18:00 So this is if you say, well, we just analyzed it, initially, just as a conjunction design. So the planned study was to say, these two intelligible conditions over the two unintelligible baselines, what areas are commonly activated by both? And this is a PET study. So it's a very crude study. You've got a limited number of scans for every participant. But the advantage of PET is, if there is something there, you will see it. It's either there or it's not there. So when we followed this up with fMRI, you do see a more extended network, but it has the same basic emphasis, which is that you see a largely left lateralized response with a strong emphasis in the front end of the left temporal lobe. And this was not what we were expecting at all. So we saw something we were expecting to find, which was that we were seeing something on the left, showing a strongly dominant response on the left. But it wasn't sitting where we thought it should be.
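The conjunction logic described above can be made concrete with a toy example: threshold each intelligible-over-baseline contrast separately, then keep only the voxels that pass in both. The t-values and threshold below are invented for illustration; real analyses work on whole brain volumes with properly corrected statistics.

```python
import numpy as np

# Hypothetical t-values at five voxels for the two contrasts:
# clear speech > rotated speech, and vocoded speech > rotated vocoded.
speech_vs_rotated = np.array([4.2, 0.5, 3.9, 1.1, 5.0])
vocoded_vs_rotated = np.array([3.8, 3.5, 0.2, 0.9, 4.4])

threshold = 3.0  # invented cutoff standing in for a corrected threshold

# Conjunction: a voxel survives only if BOTH contrasts exceed threshold,
# i.e. it responds to intelligibility regardless of acoustic form.
conjunction = (speech_vs_rotated > threshold) & (vocoded_vs_rotated > threshold)
# Here only the first and last voxels survive both contrasts.
```

This is why the design is not one cell of a two by two ANOVA: it is an intersection of two separate contrasts, each with its own acoustically matched baseline.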
And where we thought it would be was where the dominant view in the 80s and 90s placed the core of Wernicke's area. So the idea of Wernicke's area is the brain area that Karl Wernicke described, which is sort of functionally defined: damage to that area is associated with problems with the reception of speech. There can be other problems as well. But the core of Wernicke's area is that the disorder associated with damage to this area is a difficulty understanding speech. And at the time, the clear view in the literature of where that would map onto the brain was the posterior superior temporal sulcus on the left side. Right at the other end of the temporal lobe is where we were expecting to see a leftwards asymmetry, and we didn't see it there anything like as sharply as we did at the front end of the temporal lobe. Which was surprising, until we were all at the Human Brain Mapping conference in Düsseldorf. And there was a paper being given by Joseph Rauschecker. It was a poster talking about streams of information processing in the non-human primate brain applied to the processing of vocalizations. And he was suggesting there was this sort of anterior-going stream associated with the recognition of sounds for non-human monkeys, albeit from conspecific vocalizations. And we thought, oh, maybe that's what we're seeing. Maybe the direction of flow is what's driving this. We're obviously not seeing the whole language system. We're kind of emphasizing the acoustic end of this. But maybe that is why we're seeing something that runs anteriorly rather than posteriorly. And there's interesting stuff that goes posteriorly, but it didn't show this initial response. So that was the thing that we kind of wrote the paper up around, sort of arguing that you can see something left lateralized.
But actually, the dominant element of this is something running forwards, not backwards. And we'd been expecting something to go backwards. And it's one of those things in science where, you know, sometimes you do get a result that makes you change your mind, because it's not giving you what you thought you were gonna get. And so it was an interesting journey, and that was very, very useful, because it then kind of introduced me to thinking more seriously about the value of engaging with these emerging themes about how information is processed in the non-human primate brain. We'd always sort of ignored it. Stephen Wilson 21:37 Sorry. Yeah, no, I definitely want to go right into that. I just want to kind of summarize so that, you know, we can communicate this information solely through the auditory modality. So just to summarize, like, what you found was that the normal speech and the noise vocoded speech would activate the left anterior temporal region, specifically, I think it was in the superior temporal sulcus. Sophie Scott 22:04 Yep. Stephen Wilson 22:05 As opposed to the two rotated conditions, which did not. And so that's a contrast of intelligibility. It's kind of like, you're just kind of throwing away this whole manipulation of noise vocoding really, and just comparing intelligible to non-intelligible, and what you found was that it was left anterior temporal, whereas you'd been expecting left posterior temporal.
Sophie Scott 22:52 And in fact, you could sort of see, as you move forward down the temporal lobe, the response to, say, the rotated speech dropping away, and the rotated speech does sound like something talking. And you can sort of see, like, the system is doing what it can with it. And then, in the end, it's just not making it down into this more abstract level, when you probably are moving into something that is interfacing with a more abstract representation of linguistic information, potentially. Now, the other thing that's true about that is that, and you're correctly calling it this, we talked about intelligibility in speech, because we don't know what different elements of the information in the speech have contributed to this. And we haven't even tried to, we've just had people listening to sentences. So we know there were words, we know there were sentences, there was syntactic information, there was semantic information, there's phonetic information, you know, and all of that we've just collapsed down into intelligibility. And subsequent studies have done a really good job of pulling out different elements of how things are getting processed along that pathway, you know. You can see almost certainly a lot of parallel processing of the information in speech, because it's so fast, and there's so much of it. And it's obviously connecting with a wider language system as well. But, you know, I'm deliberately just collapsing it all down into intelligibility so we can do this very crude comparison, which is, I think... it's been criticized for that. And I think that's an acceptable criticism. It is a very, very crude comparison.
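The noise-vocoding manipulation discussed throughout this section can likewise be sketched: split the signal into frequency bands, extract each band's slow amplitude envelope, and use it to modulate band-limited noise. The band edges, brick-wall FFT filter, and smoothing window below are illustrative choices, not the parameters of the published stimuli.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude brick-wall bandpass: zero out FFT bins outside [lo, hi) Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spectrum[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def noise_vocode(signal, fs, band_edges, env_win_s=0.01):
    """Toy noise vocoder: per band, modulate band-limited noise with the
    signal's amplitude envelope in that band, then sum across bands.
    The fine spectral detail is destroyed; the slow envelopes survive."""
    rng = np.random.default_rng(0)
    win_len = max(1, int(env_win_s * fs))
    win = np.ones(win_len) / win_len  # moving-average envelope smoother
    out = np.zeros(len(signal))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = bandpass_fft(signal, fs, lo, hi)
        envelope = np.convolve(np.abs(band), win, mode="same")
        noise = bandpass_fft(rng.standard_normal(len(signal)), fs, lo, hi)
        out += envelope * noise
    return out

# Stand-in for speech: a 500 Hz tone whose loudness wobbles at 4 Hz.
fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 500 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(carrier, fs, band_edges=[100, 800, 1600, 3200])
```

Because only the envelopes carry over, voice quality and talker identity are lost, which is why the vocoded stimuli sound like a harsh whisper while remaining learnable.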
Stephen Wilson 24:26 Well yeah, sort of, but at the same time, it's a seminal paper, because it was the first really well controlled study of language comprehension, really well controlled for auditory features relative to the baseline. And I think that's why it's one of the first studies to show a really clear left lateralized response, right? Sophie Scott 24:50 Yeah, and I think that's fair. And I think, I have to say, full credit for this goes to Stuart Rosen, who not only had the insight about the baselines, but also said, you know, "you should be using sentences". Because before then we'd always used words, you'd hear like a sequence of people going "apple, table". And actually you hardly ever hear speech like that. You know, the normal job of the perceptual system is not dealing with these isolated words said with a very strange intonation. Stephen Wilson 25:21 Yeah, and I'm gonna sort of edit into the podcast later some of your stimuli, which you sent me many years ago. And the one that I remember goes like this: "the clown had a funny face". Yes. And then there's a spectrally rotated version of it, which goes [imitation of rotated speech]. And it's kind of amazing that, you know, when you see the spectrogram, it looks almost identical to the normal spectrogram. And yet, when you listen to it, it's, you know, absolutely unintelligible. Sophie Scott 25:53 I can hear it now. Only Quentin Summerfield, who was the speaker we were using... when his voice is spectrally rotated, I can understand that. And it sounds to me like someone talking with their jaw really clenched. A very, very odd way of articulating. There's a strong element of that.
So there's something, there's some cue about what the articulators are doing that you do start to pick up on, and it's obviously wrong, because it's not, you know, what the speaker was doing. It's what the rotation has done. Stephen Wilson 26:20 Yeah, it's funny that you've, you know, pointed out that eventually you can learn to understand rotated speech, because I've listened to a fair bit of it, and I've never been able to. I don't understand it at all. Sophie Scott 26:31 There may be an accent thing in there as well. But certainly, I mean, I have worked with it for... it took years before it started jumping out. But I do realize you've worked with it as well. Stephen Wilson 26:40 Yeah, only a little, but yeah. So you were sort of starting to, before I interrupted you to clarify basic findings, you were starting to talk about how you realized how important it was going to be to interpret these findings in humans in the context of what was known from animals and animal models. So do you want to talk a little more about that? Sophie Scott 27:01 Yeah, for a very long time, the people in the neuroscience end of things would look for non-human models for, you know, different phenomena that you're interested in, because, you know, different sorts of studies are possible and different kinds of ways of investigating are possible. And the dominant view in speech and language was that we didn't have any feasible animal models for language, because no other animal spoke. And that seems, you know, at one level, that's true. But it is also true that while other animals don't talk, other animals do still communicate. And, you know, what's the best way of phrasing it? Well, I think there's just been a very dominant, very human-centric view about what that would mean.
But what I was very struck by with Joseph Rauschecker's work and Kaas and Hackett's work looking at the wiring of the perceptual system in sound processing in non-human primates was that they did relate it to the complexity of the vocalizations that these animals did use. So rather than saying, "well, do they speak or not?", they say, "well, what do they do do?" And they do use vocal information for communication. And you do have, to some degree, and again, the cutoff is difficult to say, but certainly in primates, you can argue that you have a strong element of a different perceptual organization, an anatomical organization associated with perception. It's very clearly seen in the visual system. And it's almost certainly there for some somatosensory processing. And people hadn't been looking at it from the view of auditory processing. So I thought, and I wasn't the only person to think this, but I suddenly saw the value. And actually, we can learn a lot about the properties of this system that we're asking questions about by asking, in a very crude way, how does the wiring work? You know, we're not the same as other monkeys, but we are still primates. And that approach has worked very well in the visual system. So can we, you know, take something from that into what we do when we ask questions about auditory processing of communication calls? So that has been very interesting to me. And I think more generally, really, you know, functional imaging techniques like Positron Emission Tomography, which I was using initially, and then functional magnetic resonance imaging, they are anatomical techniques. It does make sense to try and use anatomy to guide your interpretation and to think about what this could mean in terms of how the system is organized. Stephen Wilson 29:39 Yeah, cool.
So, in some of your work following up from this paper, you did work with Joseph Rauschecker, and with Carolyn McGettigan and others, and you've got a number of papers that kind of lay out these models of speech perception. And they all involve multiple streams of processing, that's an important feature of your models. And you know, you've kind of got this anterior stream that you've been talking about just now, you know, kind of identifying... maybe it's like the auditory equivalent of the "what" stream. You've talked about a posterior stream that's important more for auditory localization. I'm wondering if you see there as being additional language relevant streams, apart from the anteriorly directed one that we've been talking about so far? Sophie Scott 30:27 Well, I think one of the things that is fairly clear is that in these posterior stream or streams, there's also some sensory-motor integration, so that you get very clear links between, you know, perception and action; one of the links between those phenomena seems to be occurring via this posterior route. Now, the extent to which that's related to localization of sounds, I think it's possibly not totally different. We don't tend to think of speech production as a spatially organized task, but actually it is. You know, your articulators, and the movement of the tongue around the fixed points in your articulators, is an incredibly well determined spatial organization of structures. We sort of forget that, because it's all in one place relative to the rest of the world. But I think that it's possible that there's some similarity of that coding, although we don't know for sure. Stephen Wilson 31:26 Because wouldn't like a sound, wouldn't sound location in space... I mean, it's a very different space from the space of your articulators, right? Sophie Scott 31:33 It is, but that's still spatial.
Your voice actually, you never hear any other sound coming from where your voice comes from. It has a spatial location, and then you're organized, you know, the way that you filter and change that sound by speaking... it's like thinking about using visual information to guide your hands, you can think about using auditory information and somatosensory information to guide your articulators. But that guidance is still spatial, as it would be for your hands, that's played out in a spatial configuration. Now, you know, but it is possible they're different. You know, it's possible there's a somatosensory pathway, and it's possible there's a spatial localization of sound pathway, because we know that's happening out there, that does happen in that route. And also, there is something funny that we've never really got to grips with, but in the visual pathways, there are more than two that people talk about, that have different sorts of... so there is that pathway from visual information up to biological motion areas. So this would be, in terms of auditory processing, something that's kind of going backwards a bit and down towards the back end of the middle temporal gyrus, that I think we quite often see. But I don't think we always, you know, because it's so strongly linked to visual biological motion. And I don't think we've always been good about thinking about what that would translate into in sound. But you do see something happening there. And I wonder a lot about what that would mean for, you know, does that translate into something that's to do with language? Is it important in language? I think it could well be very important in language. But it might be... Stephen Wilson 33:13 I mean, Wernicke was no fool, right? Sophie Scott 33:15 Well, exactly, exactly. And he, and in fact, if you go back, well, this is... Joseph Rauschecker did this, not me.
But if you go back to Wernicke's original paper, he is talking about the superior temporal gyrus, along its length. You know, he wasn't putting a cross at one end and saying it's there. So it is definitely, you know, he was, he was embracing the entirety of the stuff that we've taken apart in more pieces. But he was saying it's all important. And of course, you know, in the bigger picture, it is all important. Stephen Wilson 33:44 Yeah, no, Wernicke didn't really have the kind of, like, you know, he didn't have the kind of spatial resolution that we have available to us nowadays. I mean, he only had an autopsy on one of his two patients who he originally based his theory on, so. Sophie Scott 33:57 It's kind of dizzying actually, when you look at those original pictures and think "you've got so much right from that". Stephen Wilson 34:03 Yeah, his intuitions were pretty spectacular. Yeah, so you're kind of open to the idea that there might be, like, a posteriorly directed stream as well, that could be relevant for language in some way? Sophie Scott 34:17 I mean, I think it's entirely possible. I think that you get so caught up in, like, minimal arguments about this does this or that does that. I think it is really helpful to sort of take a step back and say, well, in the visual system, there are definitely three. So you know, what does that mean in the auditory system, or why would it be constrained to three, maybe there are more. Maybe there's this kind of spatial stuff I'm talking about, that disambiguates that way. But you do occasionally see it. So in a paper that we never published, we did an analysis, or Zarinah Agnew did a very interesting analysis of, we had people listening to sounds, we're trying to reanalyze some data at the moment to see if we find this again. People listened to sounds made with different effectors. So, sounds from the mouth, sounds from the hands, sounds from the feet, footsteps.
And we had people hearing the sounds, and we had people seeing the effectors making the sounds. And actually where we found the clearest spatial mapping of an overlap between the visual input and the auditory input, and it actually mapped out separately through effectors, was running down into that posterior middle temporal gyrus. And it looked almost like a body map. And, you know, that was just one of those things that ended up, wasn't what we were looking for, we didn't quite have a way of thinking about what it meant. And we've never published it. And as I say, with some data collected by Saloni Krishnan at the moment, we're trying to go back in, because there was some funny... One of the reasons why we were a little bit hmmm about that was the footsteps were very different from the other noises. So we just worried about that. So we've gone in just asking questions about hand and mouth sounds, and whether or not there's a tool involved. And that will be one of the areas we look at. So, you know, I think it's so easy with functional imaging as well to think that you design your study to, you know, have a particular contrast in it. But we know that these kinds of baseline-based tasks, for want of a better phrase, do just give you a snapshot of the particular stuff revealed by that design. It doesn't tell you the bigger picture, because if it's something commonly activated, then you won't see it, you know. So I think that's, um, you know, it's easy to kind of get caught up in what you do see, and be a bit blinder to what you haven't seen. Stephen Wilson 36:39 Right, because yeah, I mean, the baseline is so critical, as you kind of mentioned in your, in that first study we were talking about. So what do you think about... Oh, I was gonna say, um, you know, there's a lot of studies of audiovisual speech perception that kind of, like, tend to focus on that posterior superior temporal sulcus as well.
So that kind of that, you know, where the auditory and the visual come together, you know, there's a lot of evidence that that's back there. Sophie Scott 37:02 And certainly, like Lynne Bernstein has argued, that's where you get visemes, you know, the phonetic representation that you get from the face. That's there. And I think that would be, you know, that sounds entirely plausible, based on where it is in terms of the brain, and where it would sit in terms of the visual and the auditory system being able to speak to each other, in an area we know cares about biological motion. Absolutely. Stephen Wilson 37:29 I think that's, that's right. So what do you think about lateralization of speech perception? I know that you've argued against kind of attempts to derive it from acoustic properties of speech. What do you think are the differential roles of the left and right hemispheres in speech perception? Sophie Scott 37:48 I think we still haven't got a good way of describing it. All we know for sure is it's not simple. I think it is really interesting that you can drive the right temporal lobe acoustically. It likes longer sounds. It likes sounds that have pitch variation. It likes things that sound like voices, it doesn't care if you understand it or not. There is some evidence that it has a dominant role in processing affective information in sound. So all of those are things you can actually put acoustic labels on. It's not one acoustic label, but acoustic labels. But very rarely do you see something that shows an easily acoustically defined characteristic that drives signal responses, or sorry, drives cortical responses in the left temporal lobe, with that same sort of purely acoustic element. So it's like, it's very interesting, because it's definitely there, there is an acoustic basis to it, but it's going in the rightwards direction.
And I think, I mean, Tim Shallice, a kind of classic cognitive scientist, has always argued that the left side of the brain is the stuff that does, you know, more abstract computational representations. And that's why it has this role in categorization, and anything you know. And of course, one of the very abstract elements of the representation of information in a linguistic system is categorizing stuff into labels and different kinds of information like that. So it could be that. It could be that there is some other way of characterizing it, you know, is the left temporal lobe just really, or part of a bigger system that's very good at sort of dealing with meaning information in that more abstract way. There are a lot of possibilities and I don't... Or even down to, you could say, well, do you know what, it's speech production that's left lateralized, and everything else gets pulled over as a result of that. You know, and then why is speech production left lateralized? Well, maybe because attention systems are lateralized as well. You know, when you get to a higher level, you do find other things than speech are lateralized in this way, so there's, you know, there is some... You know, I wouldn't want to, I don't think we know. We've done, you know, people have come up with great ideas, and none of them really have stood up, you know, none of the simplest stuff has worked, because why would it be, I guess. But I think I wouldn't want to rule anything out, because I think we still don't absolutely know. Stephen Wilson 40:21 Okay. Yeah, it sounds like, you know, you've kind of, yeah, you think we're pretty early on, and we don't understand too much about that yet. Sophie Scott 40:29 I think because we've also asked the wrong questions often, by focusing so much on speech, and not all the other information that's there when you hear a voice. Stephen Wilson 40:39 Yeah.
So you've, you know, you've really emphasized in your work that speech is social, and that there's a lot of information being conveyed apart from linguistic information. So you think that's been underemphasized in language neuroscience research? Sophie Scott 40:52 Well, I think, I mean, I can see why, because there are so many applied things for which you would want to know how speech works, you know, speech synthesis, clinical problems, speech recognition systems. There's a great history of focusing in on that level, because, you know, someone has an aphasic stroke, well, that is what they will complain about, you know. A stroke that robs you of your ability to enjoy music frequently can be silent, you know. So, I can totally understand that. And it's what a lot of our, you know, that's what Wernicke and Broca were interested in. They were interested in the faculty of language, and that's what we meant by language. Um, but I think that, for example, we know that the speech information, the linguistic information in speech, is not entirely independent of all the other information in there. So for example, if you know a talker, it will be easier for you to understand that talker. If you are confronted with a novel speaker who has an idiosyncratic way of producing certain phonemes. So in British English, it's not particularly unusual to meet people who say, like, an "r" sound slightly differently. So they say something like, wed wobin, rather than red robin. People immediately adapt to that, just for that speaker. They don't remap what an "r" sound is for anybody else. So you are doing a very speaker, a very talker-specific adaptation. And if you don't speak a language, people can find it very hard to tell talkers apart. So although we can treat them as separate, in fact, the more you look at how they're used, the more you start to realize that whatever is happening, it is really interacting.
And again, the techniques we use for looking at this, for identifying these networks, are not good for showing us where stuff comes together in a higher order way. But actually, I suspect that's really kind of baked in. So even if it is lateralized earlier on in the system, that's not telling you the end of the story. That's telling you something about how that information is getting pulled out, maybe for lots of different reasons, before it's all being pulled back together. And it must be having influences right back down. If it is easier for you to understand a speaker when you know them, that does imply that that must be having kind of backwards effects into the perceptual system. Stephen Wilson 43:10 Right, yeah. Okay. So they're a lot more integrated. And it's not easy to kind of study them by separating them out in the way that we usually force ourselves to. Sophie Scott 43:19 And I, you know, I'm a big fan of separating things out. I will happily defend everything we've ever done that did this. But you only get so far by looking at it that way. So these kinds of network approaches that, well, you know, as the field has developed, and we're no longer constrained to just like... When did you last see a conjunction design? Which felt very sophisticated in the 90s, I can tell you. But you know, starting to actually ask questions about how these much more complex interactions that are at the heart of how we're normally using speech when we hear it, and when we talk to people, how that's actually reflecting the two, you know, the left and right systems interacting, and how that's influencing stuff all the way back down. You know, that's gonna be the next thing that would be really, really interesting to know about. Where does it happen? Does it happen in only one place? Is it the whole system it's happening in? Stephen Wilson 44:10 Yeah. Cool. So over the last few years, you've been working a lot on laughter. I don't know much about that.
I'd like to learn more. Do you think you'd be able to tell me a bit about it? I noticed there was a recent study in Current Biology that you guys published that showed that laughter can influence how funny we find humor to be. Could you tell me, tell us about that study? Sophie Scott 44:32 Yes. So this was, we've done quite a lot of work with laughter, which, I never set out to study laughter. It's just because, from the 90s onwards, I've always had like a sort of side game of looking at emotional vocalizations, because most of the work in emotion recognition systems is with faces. And I had colleagues who needed another modality, so I started helping them with that. And then, so I ended up, and you know, that's how I came to the laughter. I never set out to look at laughter. And it's a very interesting behavior because it's a very social emotion. You only really find it in social interactions. It's highly behaviorally contagious. And in a lot of our studies with laughter, we play people a laugh and then we ask them questions about it. You know, how real do you think that laugh was? How much do you want to join in with that laugh? And my PhD student Ceci Cai, who's the first author on the paper, she got a bit worried about these like direct questions, because they are a bit odd. In particular, do you want to go into, like, a group of people, say, with autism? What does it mean to say, is that laughter real or not? And you know, do they, does it even mean what we think it means to people? So we wanted an indirect test of how people hear laughter. And she had the idea of adding the laughter onto jokes. And the question is, how funny is the joke? And people are comfortable telling you how funny they think jokes are. I think we're reasonably comfortable that people understand what you mean, if you say "How funny did you find that? Was it a little bit funny? Not a lot funny? Not at all funny?"
So we recorded a set of terrible jokes, which we deliberately chose so that they could be made funnier. And then we edited the laughter onto the end. And we got different groups of people to rate the jokes. So we've got the jokes rated with no laughter. And then we have them rated when we've added laughter onto the end, and it doesn't sound like, you know, a joke told to a theater or anything like that. It sounds highly artificial. But just the same question, "How funny is the joke?" And what we find is adding any laugh to a joke makes the joke seem funnier. And the more spontaneous the laughter, the funnier it makes the joke. So this was really, it worked for what Ceci wanted, which is it does seem to give you a signal that people are sensitive to the laughter without asking them about the laughter. But it also was interesting, because it suggests that you can't ignore laughter. Even when you're thinking about something as apparently just about the joke itself, if someone else is laughing, it's a very strong influence on your perception of the humor. So maybe there's, maybe that's still contagion. Maybe it's a sort of social approval. We don't know. But it was, it was very interesting. Stephen Wilson 47:11 That's so cool. Do you think it would be possible for me to make my own jokes funnier by laughing at them? Sophie Scott 47:17 Definitely. And in fact, if you look at comedians, they quite often do that. I was talking to a comedian from the US, Abigoliah Schamaun, who works mostly in the UK. And I was telling her about this result. And she said that she, and I've seen other comedians do this, she said that she had a couple of jokes that are really quite dark. And if she smiles when she tells them, the audience are much more likely to laugh.
And I know he's nobody's favorite comedian at the moment, but Louis CK, I noticed when he tells particularly dark material, he quite often, like, properly laughs afterwards. He's kind of telling the audience it's okay, and it's funny. You know, so it is actually, lots of comedians do some variety of this, but they're just like a little "This is so funny, it's made me laugh." Stephen Wilson 48:03 Yeah. Yeah, I remember like Billy Connolly always laughing at his own jokes, when they got to a certain level of funniness. And it was just, the ones where he would burst out, like he seemed that he couldn't contain himself, those are the ones which I also remember from his old... like, I used to have this Billy Connolly tape that I would listen to just every single night and just, like, crack myself up, when I was probably about thirteen or something. Sophie Scott 48:27 Oh, that's lovely. But it's also true, because what he would then do, he would laugh on the way to getting to the punchline. Like, I know, this is making me laugh. You're gonna love it. It's a fantastic feeling. Stephen Wilson 48:38 Yeah. It works so well. It's just, yeah, it's great. So can you tell me, what are your current projects that you're most excited about, that you're working on right now? Sophie Scott 48:49 Well, trying to get funding, like we all are at the moment. Stephen Wilson 48:52 That's not exciting. That's not exciting at all. Sophie Scott 48:54 Pretty boring. So recently, I've been working with my former PhD student, Kyle Jasmin, and a colleague in Portugal, Cesar Lima, trying to get a bit more computational about what we think might be happening in these auditory streams of processing. And that's been, that's a journey. And I found that engaging and interesting, and I think, you know, when we get out of the current lockdown, and we can actually... No one's been able to really collect data in the UK for nearly a year now.
So it's, it's hard, but when we get back to it, actually trying to test these networks out in a way that's asking questions about, like, the properties of processing, rather than "how is this speech processed?", is something I'm looking forward to. And I'm also getting very interested in, well, trying to find ways of actually characterizing the neural systems involved in the social use of language. So what we're doing right now, we're talking to each other, aligning our voices, but also, again, you know, we've always, I say we, but people like me, we've studied: here is the perception of speech, here is the production of speech. And that's not how it works in the wild. And also, you know, people have made arguments, like you have, that, well, you can think of motor cortex as perceptual cortex, because it has a strong involvement in, it responds to the perception of things out there, not visual, auditory. It's got this information coming in, and trying to sort of think about the fact that if you think of the job of the brain as being perception, you're either perceiving stuff that's out there or stuff that you're doing, and how does that get implemented in conversation? How does it actually work? That's, I mean, it's hard to look at, but that's one of the things I'm really, really keen on finding out more about. And we're getting better at doing, like, you know, using different techniques for doing hyperscanning, where you can scan two people in a conversation and that kind of thing. So I'm really interested in that. Stephen Wilson 50:54 Oh that's cool. Sophie Scott 50:55 Yeah. And the dynamics of that. Stephen Wilson 50:57 Yeah. So really just kind of getting into conversation and building, you know, the social aspect that has been understudied and the, you know. Sophie Scott 51:08 There are journals about social neuroscience, and they never include communication.
Stephen Wilson 51:14 Well, I think that's kind of, I think that's our fault. So I mean, we should, we should take responsibility for it, right? The language neuroscience field should say "okay, that's gonna be our, we, you know, we need to like be on that". The social people are not going to, you know, because language is really daunting to people that are not language people, you know. It's just like a huge black box. And linguists are so intimidating to people who are not linguists. Sophie Scott 51:42 So I did a study on syntax with one of my colleagues, Zarinah Agnew, and a linguist colleague, Hans van de Koot. And it was on some sort of syntactic movement. And it was one of those things where when I was talking to him, I knew what we'd done. And as soon as he stopped talking, it was gone. I thought, like, something to do with syntax, the paper on syntax, there's a response to syntax. With a gun trained on me, I could not tell you actually what we'd done. And it is, it is, you know, that's what I thought language was before I started working with speech. I thought you're always drawing tree diagrams and looking at sort of theta movement and things like that. Yeah. Stephen Wilson 52:15 Oh yeah. So, I found with my first episode of the podcast, a lot of students listened to it. So I was kind of wondering, what advice would you give to a young scientist who's interested in language in the brain? Like, what do you think they should be doing right now? Sophie Scott 52:36 I suppose my general advice would be "good call". It's a fascinating area. Language takes you in so many directions, it takes you to motor control, it takes you to auditory processing, it takes you to the social world. It is the dominant mode for social interactions around the world. When we talk about social networks, we mean the people we talk to. So it has this incredible power.
It's a, you know, as a topic to study, you've got so many different ways you can go into this, and so much we still don't know. So I think it's a fantastically exciting area. And I wouldn't be afraid to follow your interests, you know. It is entirely valuable and valid to read a paper and think "well, I wonder this, I'd like to know more about that". Ask those questions. I mean, like my colleague John Duncan in Cambridge always makes this point, people tend to get interested in anything you give them to study, because humans are good at that. And that is what happened with me and my PhD. I did not want to study speech in that way. And then I actually loved it. But also, you know, so don't turn your nose up at stuff you end up having to do. You'll still learn something from that. But also do feel free to, you know, kind of make a note of stuff that you think is interesting. And, you know, one of the joys of science is the pleasure of finding things out and getting to ask questions. So, you know, it's always interesting. But it's even more interesting if it's something that really kind of captures your interest. You want to know more about that. What will we find when we do this? Don't shy away from that. Stephen Wilson 54:09 Cool. And I have one last question, which is maybe a bit of a silly question for you, because you actually have some side gigs. But my question is, like, do you have any fantasy careers besides being a cognitive neuroscientist? What would you be if you weren't a cognitive neuroscientist? Sophie Scott 54:26 Well, I think practically, if I hadn't gone into academia... by the time I was born, my wider family was largely people working in carpeting. And that's probably either working in, you know, the sort of sales or management side of carpets, or some other trade. Like, my father was really keen for me to go into a company that doesn't exist anymore, called ICI. Which would mean I wouldn't be working now.
Britain doesn't have a manufacturing sector any longer. But that was, you know, that's what my grandparents did on my mother's side. That's how my parents met. That's, you know, my father went off to work for his uncle. It's a big family of carpet people. Stephen Wilson 55:07 Okay, that's not the answer I was expecting at all. Sophie Scott 55:11 My heart would have been lying in that. But that's, that's, you know, like, that's what family interests would take me into. I've been lucky enough, because of academia, not because I ever set out to do it thinking I'd be good at it, to get into doing stand up comedy. And I love doing that, that is an absolute joy. I would recommend it to anybody, particularly if you're thinking "I could never do that". You definitely should do that. Because you will learn so much. And it's an absolute wild ride. And you just get, it is like the most in-at-the-deep-end experience in rebooting confidence, and a sort of sense of, you know, "if I can do that, I can do anything". So, I definitely recommend that. Stephen Wilson 55:50 Do you have some of your stuff up online? Your stand up? Sophie Scott 55:53 I do. People are not fooled, you know. Stephen Wilson 55:59 Okay, can you send the link to your best online stand up so that I can put it in the show notes and people can follow up with you? Sophie Scott 56:07 There's one of me doing stand up comedy at the Royal Institution, which is not the natural home for laughter. But definitely, there's definitely some of it out there. But other than that, you know, as scientists, we actually do really interesting jobs and we have interesting stuff to talk about. And we're used to giving talks. And that already puts you ahead of a lot of people who might have stuff out. You already have a lot of the skills at your disposal. So my absolute advice to anybody, not just young scientists, is really do consider giving it a go, because I learnt so much.
And I did it for the first time when I was in my mid-40s. And it sort of changed everything for me. So I wouldn't have got the CBE if I hadn't done the stand up comedy, because that's what got me a TED talk. And that's what got me the Royal Institution Christmas lectures, and that's what got me this, you know, it's... Stephen Wilson 56:52 Connecting to a wider audience. Sophie Scott 56:56 I owe that, yeah, exactly. It's given me a lot of opportunities. And, yeah, it's been a, you know, a much more productive exercise than I would have imagined. I thought it was something I'd try once and then, you know, go "well, that's it, did that, never going back there". When in fact, it wasn't. It led to lots of other stuff, and it has been really useful. Stephen Wilson 57:16 That's so cool. Well, thanks so much for taking the time to talk with me today. I think our listeners are gonna really enjoy listening to everything that you had to say. Sophie Scott 57:26 No problem at all, and what a good idea, the podcast. Good luck with this. And thank you very much for inviting me. It's been really enjoyable talking with you. And lovely seeing you again. Stephen Wilson 57:33 Yeah, you too. Take care. Sophie Scott 57:35 You take care. Bye bye. Stephen Wilson 57:37 Bye. Okay, well, that's it for this episode. Please subscribe to the podcast on your favorite podcast app, and if you have time, rate and review the show on Apple Podcasts. If you'd like to learn more about Sophie's work, I've put some relevant links and notes on the podcast website, which is langneurosci.org/podcast. I'd be grateful for any feedback. You can reach me at smwilsonau@gmail.com or smwilsonau on Twitter. Okay, thanks for listening and see you again soon for Episode Three. Bye for now.