Stephen Wilson 0:06 Welcome to Episode Four of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at Vanderbilt University Medical Center. This podcast is aimed at an audience of scientists and future scientists who are interested in the neural substrates of speech and language. Thanks for listening. Today, my guest is Jonas Obleser, who is Professor of Psychology at the University of Lübeck in Germany. I'm sure there's a different way of pronouncing his name in German, but when I asked in advance, he didn't really give me a straight answer, just made various jokes and told me to call him "O'Blazer", which I guess would be the Irish form of it. Jonas's lab is called the Auditory Cognition Research Group. And he has a tagline, which is, "Sounds and speech as a window into the brain". He studies speech perception, auditory attention, auditory perception, and cognition. And he's been doing a lot of translational research looking at hearing loss and aging. A lot of his work involves investigating neural dynamics such as neural oscillations and entrainment or tracking. And that's what we're going to talk about today. I've been following his work for a long time, and I'm always struck by the intricacy, complexity, and methodological rigor of his studies. To be honest, I have always wanted to understand his work better than I do. And so I thought, this is my chance. Hey, Jonas, how are you today? Jonas Obleser 1:19 I'm fine. Thanks, Stephen, for having me. Stephen Wilson 1:21 So just to set the scene, I'm in Nashville, Tennessee. I'm in the front room of my house. I still don't go into work all that often because of the situation. How about yourself? Jonas Obleser 1:31 I'm usually at home as well in Lübeck, Germany, near the Baltic Sea. It's late afternoon here. Today, actually, I did go to my office; there's hardly anybody here, we're all working from home. But my office is like three minutes from home. And it's a nice place to get some work done or to record a podcast in quiet. So I decided to do this here today. Stephen Wilson 1:55 So I think we've had the occasional email back and forth over the years, ever since I was doing my dissertation, which was probably about the time that you were doing your dissertation, too. But we really only got to hang out one time, which was when you were a group leader at the Max Planck Institute in Leipzig, and I got the chance to visit. Jonas Obleser 2:18 And that was in 2011, or something, right? Stephen Wilson 2:20 I think it was, yeah, something like that. I remember we got together and had a few beers. And I met some people from your lab. And that was cool. I've always been really interested in your work, but I want to confess something, which is that I've never understood it terribly well. Because... Jonas Obleser 2:39 You're probably not alone in that. Stephen Wilson 2:41 And so one of the reasons why I wanted to do this podcast was to kind of just put myself out of my comfort zone and try and learn new things by getting to talk to people whose work I admire, but don't fully understand. And I think one of the reasons why I don't understand your work very well is because I work primarily with hemodynamic imaging, and so I've never been particularly attuned to things that are happening at finer timescales, whereas your work is all about that.
Jonas Obleser 3:12 So you think we're basically examples of how anatomists, or people who think in the anatomical, spatial domain, and people who think in a more temporal domain, can really talk to each other? Is this what we're trying to overcome maybe today here? Stephen Wilson 3:34 Yeah, exactly. I mean, well, I mean, I don't want to teach you anything but I want you to teach- I want to learn- Jonas Obleser 3:40 It's a two-way street. I mean, I admired your early work. I remember that talk actually very well, when you came to Leipzig. I think it involved some lesion work, something that's, I think, very close to your heart also. And so it's interesting that you think of me as an EEG/MEG, or as you said, finer-timescale person, because I don't even think of that myself. I realize that my output is probably nowadays, to a large extent, dominated by EEG/MEG papers, and thinking in this domain, but... yeah, no, it's interesting, because I personally also consider myself a hemodynamic person, what did you call this earlier? So, you know, I'm happy to, I dunno, share whatever you're interested in. Sure. Stephen Wilson 4:32 Cool, yeah. No, I mean, I certainly know you do imaging fMRI papers as well. It's just that those ones I sort of understand. It's the other ones that get me. So before we get into that, I just want to get a bit of background about how you became the kind of scientist you are. So you're a Professor of Psychology now. Is that your title? Jonas Obleser 4:56 That's correct. I'm a professor in a pretty classic, if young, psychology department, and I'm a trained psychologist. I studied psychology. Although it's interesting that when I started studying, I came from having done some civil service with hearing-impaired people, or actually, deaf people. And so somehow my interest in audition (having been a semi-professional musician, I guess my interest in audition was there) got fueled in the time just before I studied. And then I went into psychology as a sort of backup, all-around field. I thought they do all kinds of things, and something will interest me, I thought, but I wasn't particularly drawn towards either audition or hearing or language, or the brain for that matter. But then over the years, through those studies in Konstanz, I got very drawn into imaging, if you will, or into MEG actually, at the time. And then, more by coincidence than anything, around the time I finished my studies, my professor, Thomas Elbert that was at the time (he was more of a plasticity researcher then), wanted to get into what they called phonemotopic maps, basically. He wanted to get something done; he had worked on tonotopy a lot, so basically the cochleotopic representation in auditory cortex. They had done fantastic papers on that, or, for the time, fantastic papers, and interesting work. And then they wanted to do this with vowels, or with sounds. So I ended up basically doing my PhD on mapping vowels in auditory cortex, using MEG, using N100 source localization. This is how I got in, and then I got drawn deeper and deeper into, well, psycholinguistic territory and other levels of speech and language representation.
But the vowel, for me as an auditory-interested person, was an interesting starting point. And yeah, I'm very thankful for that. It was an interesting entry into that field. Stephen Wilson 7:12 That's really interesting. So you're saying there were these two pre-existing interests, one being a musician and one having worked with hard of hearing or deaf people? And then you kind of came back to those interests after having gotten, I guess, undergrad training in psychology, is that what you're...? Jonas Obleser 7:30 Exactly, that's pretty much it. So then I did this PhD and early postdoc work in Konstanz on, you know, pretty psycholinguistic topics. At the time, I was paid by a linguist, Aditi Lahiri, who's sort of a phonologist, one should say. And she supported my research at the time, so I was being very loyal. And I think it was only when I really started my own lab that these low-level auditory themes came back to the fore, and I realized I'm not really a psycholinguist, I think. At the end of the day, I always said at the time, and I would still think so, that I'm interested in speech and not in language. I always like that the English language so easily offers us this distinction between those two terms. We don't even have that in German; we could say "gesprochene Sprache", which would be spoken language. But we don't have that speech term which carries, you know, these nice phonetic or acoustic and auditory undercurrents. So that's very much where I feel at home these days. Stephen Wilson 8:41 Sure, but you know, you've definitely got at least a good handful of papers that look at top-down effects on speech perception as well, where you're kind of necessarily delving, you know, putting your foot solidly in the language realm. Jonas Obleser 8:54 That's true. That is true. And again, those papers that you mentioned are where my lab and I touch upon concepts that others would call clearly linguistic or cognitive. That's probably where, I think, you know, I'm interested in the brain. And I'm using language, or comprehension would be a good term, basically as a model to understand brain function. This is also something that's quite dear to me, or an important point to make maybe early on: I really don't think I'm interested that much, honestly, in language. I'm interested in the brain. I consider it a crazy, crazy case of solving a problem to listen, or also to represent something like language. But ultimately, I think, if I had to choose, I would always say I'm a brain scientist or psychologist who's interested in how such crazy tasks are solved, basically. Stephen Wilson 10:01 That's very interesting. Well, you know, you can still be on the Language Neuroscience Podcast, even if you disavow any interest in language. And I wish... you know, our podcast is audio-only, so nobody is going to get to see the awesome image that you have behind you, which is The Smiths in their heyday. Jonas Obleser 10:21 At Glastonbury in 1984. If you want to Google the image, you should google "Glastonbury 1984 Smiths". Stephen Wilson 10:29 It's a very beautiful image. Are they one of your favorite bands? Or am I just- Jonas Obleser 10:33 I think they are. I think whatever is coming out of my mouth, currently faking as English language, is, to a large extent, also shaped by Morrissey lyrics in my late teens. I, all of a sudden, discovered them.
And if you would give me a guitar, which I'm not very good on, whatever I could sort of fret out would have a Johnny Marr flavor to it. So I think they've been very influential. Yeah. Stephen Wilson 11:01 So when you said you were a musician, what did you do? Jonas Obleser 11:05 I was a drummer mainly. So that's what I sort of learned. I played drums in a series of soul, funk, independent, ska-like bands in the mid 90s. And yeah, learned from the other musicians how to treat those other instruments, a bit of guitar here and there or a bit of piano. But I was essentially a drummer. Yeah. Stephen Wilson 11:28 So who do you like best? Morrissey or Johnny Marr? Jonas Obleser 11:32 That's a tough one. I would think- I don't know. I mean, it's a bit like the Yin and Yang. I mean, that's exactly the problem, that they don't really make great material separately, I think, honestly. But I think Johnny Marr is probably definitely the more talented musician and is probably also maybe the more, how should you say this, maybe more adult person of the two? But he's nothing without the absolute sort of borderline insanity of Morrissey, of course. Stephen Wilson 12:05 I do agree. When I was a teenager, I had a DYMO label maker sticker in my car. It was like a homemade label and it said, "Marr is a god, Morrissey is a sod." But I absolutely agree with you that without Morrissey, it wouldn't have been what it was. And you know, when Johnny Marr went and did his other projects later, I mean, there's some okay stuff, but you know, it wasn't the Smiths. Jonas Obleser 12:33 It's not like the old days anymore, as they say in Still Ill. Stephen Wilson 12:36 Yeah. Alright, so now that we've covered some of these important topics. So one of the themes of your work, as far as I understand it, is kind of like neural oscillations and neural entrainment, and the importance of these concepts for speech perception and comprehension. So I don't even know if that's a fair framing of the topic. But- Jonas Obleser 13:06 Yeah, no, I do think that's a fair framing. I mean, that's certainly something that I have been very interested in for, I don't know, I would think maybe about 10 years or so. So this all starts at a time when I was back in Leipzig and had EEG back under my belt. So coming from London, coming from fMRI work with Sophie Scott, who was also a guest on your podcast, I think, being back in Leipzig and realizing 'hey, we have all the EEG here and hey, I have been trained as an EEG/MEG person', I sort of rediscovered that temporal domain that you talked about in the beginning. So around that time, my friend Nathan Weisz, whom I did my PhD with, but sort of, we were more friends hanging out, we got in touch science-wise at the time. So around 2008-2009, I thought I want to do something for the topics I was interested in very much at the time, which was speech and speech degradation. I thought, there must be something in this EEG signal, there must be more than an evoked potential, so to speak. At the time, and still to this day, I'm sometimes thinking of evoked potentials as really clever, but ultimately very simple readouts of brain activity.
You send something to the brain and the brain gives you sort of an impulse back and says yes, thank you, and it might say this yes, thank you in various ways with various little deflections. But it was a very simplistic and, at the time, also a bit well-trodden concept, and neural oscillations, to be perfectly frank with you, were just a very exciting topic at the time. It was about 10 years ago. It really sort of lifted off, I think. Then again, neural oscillations as an idea, the idea that the human brain is expressing itself to us almost, if I may say so, in oscillations, in rhythmic activity, is also a very old observation. But at the time things came together. So in my hands, in my lab, I wanted to use that. And I quite naively started out with, basically... I think we started where you start, where the light is when you're looking for something. And in the human brain, the light is clearly in the alpha range, the alpha oscillations. Even you as a non-EEG person have sort of a vague idea about alpha states, right? I mean, Hans Berger, about 100 years ago. They are so prominent, you can see them with the naked eye. So it's always good to start somewhere where you have a good signal-to-noise ratio. And to this day, I think that alpha oscillations are a really, really interesting little creature, or actually more like a monster. But my understanding at the time was a very naive one. I thought we could easily, you know... speech has rhythm, or some people say so, and the brain, this also has rhythm, so they should team up nicely. And maybe speech comprehension... I'm exaggerating, of course, but only slightly. At the time, this was maybe all our thinking, a bit like, oh, wow, if they match up, then you have comprehension or something. Right? Things got really, really complicated from there. And I don't know how much has already been solved since then, to be honest- Stephen Wilson 16:28 So just, you know, just in case: what is the frequency range of alpha oscillations? Jonas Obleser 16:33 Right. So the field has agreed to call "alpha" brain activity that is rhythmic and cycles in this orderly fashion between, say, 8 cycles per second to maybe 12-13 cycles per second. So 8 to 12 was always the thing. I'm saying cycles per second, but of course, hertz, as we came to know it. So actually, it's important, because it's true to this day that, now, for example, through beautiful work from Brad Voytek and others in San Diego, we have a good technical repertoire to show which frequencies in the brain are actually oscillatory. So where there really is rhythmicity. And it's really a fact that from the human scalp, if you record activity, this activity in the alpha range, so between, say, 8 and 13 hertz, this is really rhythmic. So it's sort of lifted up, it peaks out of this typical frequency spectrum of brain activity, it really peaks out as an oscillation. So by all accounts, I think everybody would agree that alpha activity is oscillatory, which is not true for other things that people claim to be neural oscillations. And it has also shifted things: nowadays, if people want to study neural oscillations, or want to make a case for the involvement of neural oscillations in a cognitive process, they first have to show that they're actually talking about oscillatory activity. Stephen Wilson 18:14 Right.
Do you have any idea why the brain oscillates at 8 to 12 hertz? Jonas Obleser 18:23 I'm not in a very good position to tell you this. I should say, as much as I'm interested in alpha oscillations, I'm not the best expert when it comes to this. I'm not a biophysicist, by any means, who has enough understanding of the biophysics of how oscillations are generated. I think it's pretty clear that it's not just a local cortical phenomenon. This frequency being, by comparison in the brain, a sort of, I don't know, slow or middle-range frequency also means it probably involves distances that are sort of middle range. So I wouldn't think of alpha as sort of the long-haul frequency, but maybe circuits involving certain cortical regions, with the thalamus added. Actually, it's an old idea, going back many decades, that the thalamus is involved in generating those rhythms. And I think this is an idea that has also gained new prominence, partly through work in the cognitive domain that people like Sabine Kastner have done a few years back, 10 years ago, Saalmann and Kastner. They showed that alpha, and also the more cognitive aspects of alpha in attention, for example, seemed to involve the thalamus. So probably we're talking about loops that are not strictly local. So it's maybe unlike some fast gamma activity, let's say, for example, in the 70-80 hertz range or even higher. Even the work of Eddie Chang, whom maybe you had on the podcast recently: if they record this high-frequency gamma activity quite locally from a certain region, then they can be pretty sure that it's mostly this local cortical area that produces that signal. And that's probably very different for slower frequencies, like alpha. Stephen Wilson 20:21 Right. Okay, cool. So, thanks for that. So before I kind of distracted us by asking about those sort of basic definition things, you were talking about how initially you had thought there might be a fairly straightforward correspondence between the alpha rhythm and the rhythms of speech. Jonas Obleser 20:38 Exactly. Or, exactly not, for the alpha rhythm. I think this was pretty clear pretty early on. I mean, David Poeppel, and many other colleagues, but he was probably at the forefront of this, made a case early on, which starts as an acoustic observation, that human speech, like you and I talking here to each other, involves a lot of so-called theta activity. Now, calling this theta denotes frequencies maybe between four and eight hertz, traditionally, so four to eight events per second. And that, of course, for one, would just be an observation: that I'm opening and closing my jaw now at such frequencies, and that this dominates the spectrum, the acoustic spectrum, of what I'm producing here. So the interest shifted to theta, and it's still there for all things related to speech and language. We're talking about frequencies in the theta and delta range, delta being even below that. So I broadly think of the speech-relevant frequency range as somewhere between one and eight hertz.
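(For the technically curious, here is a minimal sketch of the acoustic observation Jonas describes: extract a crude broadband amplitude envelope from a speech recording and ask how much of its modulation energy falls in that 1-8 hertz range. The file name and all parameters are illustrative assumptions, not anyone's published pipeline.)

```python
# A minimal sketch, assuming a mono speech recording "speech.wav" exists;
# the file name and all parameters are illustrative, not from any paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, resample_poly, welch

fs, audio = wavfile.read("speech.wav")   # hypothetical mono recording
audio = audio.astype(float)
audio /= np.max(np.abs(audio))

# Crude broadband amplitude envelope: magnitude of the analytic signal.
envelope = np.abs(hilbert(audio))

# Downsample the envelope so the slow 1-8 Hz modulations are easy to see.
fs_env = 100
envelope = resample_poly(envelope, fs_env, fs)

# Modulation spectrum of the envelope itself.
freqs, power = welch(envelope, fs=fs_env, nperseg=fs_env * 4)

# Energy in the delta/theta range discussed above (roughly 1-8 Hz).
band = (freqs >= 1) & (freqs <= 8)
print(f"Fraction of modulation power in 1-8 Hz: {power[band].sum() / power.sum():.2f}")
```

For natural connected speech, most of the envelope's modulation energy typically lands in this delta-theta range, which is the empirical starting point for the tracking work discussed next.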
So people more linguistically inclined might immediately think, and it's this parallel that people make between the acoustics and the brain that is so alluring, but also probably misleading sometimes, you think immediately of slow prosodic intonation processes in your language. And you have these faster syllables. So the obvious questions are: where, in the brain activity, do I find signatures of the brain tracking what somebody else is saying? Now tracking is a very charged and loaded term, but that's a term we're using a lot, right? We're thinking: when you listen to me, is your brain sort of using that register, or its own endogenous activity in that frequency range, in the delta-theta range, to reorganize? Wouldn't it be clever, if you happen to have endogenous delta-theta oscillations ongoing, for whatever reason, wouldn't it be very clever if you would bring those oscillations in sync with the stuff I'm providing you with? Basically, I mean, the logic is very simple. If we think of an oscillation as indicating a change between a state of relative excitation, so relative ease of neural communication, and then a state of relative inhibition, where it's relatively hard to do anything to that piece of cortex, because it's inhibited, broadly speaking, then, of course, having those excitatory elements in phase (so in sync), well lined up with the interesting bits in speech and language, that would be cool, wouldn't it? And that's where the problems start. I think thus far, everybody would agree, I hope, that this would be the idea. And this would be cool. But how is this supposed to work? And how effective is it, and how much evidence do we actually already have that this is how comprehension works? Stephen Wilson 23:58 Yeah, it seems... Jonas Obleser 24:00 That's the bigger, the harder question. Stephen Wilson 24:02 Yeah. I mean, it seems logical, up to a point. But I'll tell you the thing that I've never understood, right, and it might be just naive, or maybe it's a good question, I don't know. But, you know, a rhythm is rhythmic, right? You know, you're a drummer. I mean, every single beat, if it's four hertz, it's going to be four per second. It's not going to be randomly shifting between three and five, or you'd probably get kicked out of whatever band you're in, ska or otherwise. But even the most rhythmic language, I don't think, has that level of rhythmicity. Right? Even though there's a beat, it's not like that, unless you're actually rapping or deliberately doing that. Jonas Obleser 24:45 That is a problem for that kind of theory. Stephen Wilson 24:50 How useful is it to be predicting where the next beat's gonna fall when it's- can you really make good predictions? Jonas Obleser 24:56 People would have very different answers to that. I must say, I don't want to duck away from that question. But I have to, to some level, because I totally agree with that. I think this is a problem, especially for everybody who moves beyond the syllable, into the more complex linguistic domain. So my friend Lars Meyer in Leipzig, for example, and I admire him for that, is very invested in thinking about how neural oscillations can support more complex phrase-structure parsing, for example.
And so I think one potential answer to your absolutely correct observation that it's not that rhythmic is, of course, you need to say, 'Well, okay, if there is an endogenous oscillation, then that oscillation is rhythmic.' So what would not work is the sort of perfect lining up that I talked about earlier, because by definition that's then actually not entrainment, because this external signal is not really rhythmic. I mean, if you look up the definition of phase synchrony, or synchronization, or entrainment in any proper physics textbook, you will find that it requires two oscillators to sort of get in phase with each other, right? We're having a very obvious problem here already with this external part. So what could still be interesting is to say you use endogenous oscillations as a sort of measuring stick. So you cannot force everything into this excitable 200 milliseconds of your delta phase or something, but maybe it's helpful to organize it, nevertheless. You could think of delta oscillations, nevertheless, as an internal click track or something, you know. And I think this is more alluring, but it's also, I mean, it's probably what Popper or somebody would call an auxiliary hypothesis, right, or Lakatos. We're already gluing something on. We have to extend the framework already, because it's obviously not so simple. We have to continue it. But that is true. And for example, my former postdoc, Molly Henry, who's now a great and imaginative researcher on her own in Frankfurt, she's interested in those kinds of things from a rhythmic perspective. Also, because you mentioned music, I will now mention a study by her where the actual beats you're provided with are very non-rhythmic in a strict sense, but there's still a groove, there's an underlying groove. So basically, you shift the beats around a bit. [clapping]. Stephen Wilson 27:49 Mhm. Jonas Obleser 27:50 So still, there is an underlying meter, basically, the music people would think. So when we come to language and neural oscillations, maybe the meter, so a bit of a slow-paced marker that you're using, could still be something that slow oscillations can provide on some level. But again, we're already trying to save the strict neural oscillations hypothesis, and I do think it has severe problems, which is partly why I'm a bit disenchanted with it. So I don't think that we can say, 'This is how language comprehension works'. I do think that neural oscillations might have an auxiliary, supporting role in achieving the robustness of language comprehension, but I don't think it can be the be-all-end-all mechanism. Stephen Wilson 28:50 Right, it sort of doesn't have enough degrees of freedom, you know, does it? Jonas Obleser 28:55 And also just the sheer, you know, when we talk hard science at the end of the day, the sheer effect sizes. Where we can find those phenomena is in near-threshold psychophysics experiments, etc. And I love near-threshold psychophysics experiments, don't get me wrong, but we're obviously going after phenomena that are rather ephemeral in a way. You have to work hard to get at them. And that's usually not the low-hanging fruit, obviously, in language research.
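(Since the physics-textbook definition keeps coming up, here is a minimal simulated sketch of the phase-locking computation that underlies synchrony claims. Everything in it is invented for illustration; note that a high value only shows a consistent phase relation, which is exactly why, with a quasi-rhythmic stimulus, it cannot by itself separate narrow-sense entrainment from a brain merely responding to the input.)

```python
# A minimal sketch of phase locking between a rhythmic "stimulus" and a
# band-limited "brain" signal. All signals and parameters are simulated
# and illustrative, not taken from any study discussed here.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                  # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

stim = np.cos(2 * np.pi * 4 * t)          # perfectly rhythmic 4 Hz stimulus
eeg = 0.5 * np.cos(2 * np.pi * 4 * t - 0.6) + rng.standard_normal(t.size)  # noisy, lagged response

# Band-pass both signals around the stimulus rate before extracting phase.
b, a = butter(3, [3, 5], btype="bandpass", fs=fs)
phase_stim = np.angle(hilbert(filtfilt(b, a, stim)))
phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))

# Phase-locking value: length of the mean resultant of the phase differences.
# 1 = perfectly constant phase lag, 0 = no consistent phase relation.
plv = np.abs(np.mean(np.exp(1j * (phase_stim - phase_eeg))))
print(f"PLV: {plv:.2f}")
```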
Stephen Wilson 29:29 So in your recent TICS paper, you've distinguished, in a very clear way that I found really helpful, between neural oscillations and neural entrainment, these concepts that often seem to blend together. Jonas Obleser 29:41 Sorry for interrupting, but yeah, no, go ahead. I interrupted you too early. Stephen Wilson 29:47 I was just going to ask you to, kind of like, expand on that distinction, or kind of explain that distinction. Jonas Obleser 29:53 Right. So I think we've maybe set the stage for that a bit already, because this paper that Christoph Kayser and I wrote a bit more than a year ago on this topic was exactly an attempt to make this very clear: what is neural entrainment? And for that we also suggested calling this "entrainment in the narrow sense". So when you have an oscillator that changes its phase relation to an external rhythm in order to sort of be in sync, this is, strictly speaking, entrainment. And we do think that there's not too much evidence for this being in place. And that's partly because it's incredibly hard to prove, in the little study that you're doing, that you have observed neural entrainment. However, there is a lot of interesting work ongoing, and it has been ongoing for the last 5, 6, 10 years, that comes from this kind of angle of saying, 'Here are ongoing environmental stimuli like speech, and here is brain activity, and look, they are correlated'. And I can learn something about the brain by looking at this correlation. For example, by seeing in which areas, under which conditions, one signal gets, and here's the word, gets "tracked" more strongly than another one. And what occurred to me over the years, after I really struggled with this entrainment concept and found it really dissatisfying that it's so hard to prove... I had this weird phase for many years where, in every review I wrote, I sort of grumpily complained about this not being entrainment, and I know that many other people also did this. And sometimes you, as an author, write entrainment, and as a reviewer, you complain about it. So there was a problem. And so what we tried to say is, okay, you call it entrainment if you like, but it's entrainment, if at all, in the broad sense. So you have some features of entrainment in your study. And with these features, which we define in the TICS paper, it's probably cleaner to call it tracking. There's something in the brain response that responds to something in the stimulus in a sort of, you know, correlated fashion. So you have a readout. And this could be, for example, the strength of some gamma band activity or something. But it frees you and me from having that academic, utterly academic, discussion over whether this is a true neural oscillation or something. So neural tracking, as a tool, is incredibly helpful to study, I don't know, all kinds of phenomena, particularly attention and attentive listening, but probably also to study, you know, patients, or subjects like infants, who are not very good at giving us behavioral data. I think it's a great readout. It's fantastic to have it. But it's not necessarily neural entrainment in the narrow sense. Stephen Wilson 33:05 I think that's so helpful, just to kind of think about the different ways in which, you know, relatively high temporal resolution brain activity can be associated with language, or speech, whatever you want to call it.
And so I think that brings us to the paper that we talked about over email before we met today. I wanted to kind of explore one of your papers in more depth, kind of just as a way of getting into seeing what this kind of work looks like on the ground. And we picked a paper that's first-authored by Sarah Tune. It's currently a preprint. As far as I- is that Jonas Obleser 33:42 It's in revision at an undisclosed journal. I'm sitting here pressing thumbs while we work on a revised version of this. Yeah, but the public is invited to access this as a preprint. Currently, yes. Stephen Wilson 33:56 Okay. So I'll link the preprint in the podcast notes, if anyone wants to read the paper. I enjoyed reading it. I'm going to kind of summarize it from my perspective and then kind of just ask you to walk us through the paper and the ideas in it. Sound good? Jonas Obleser 34:17 Ambitious, but yes. Stephen Wilson 34:21 Okay, so the premise for this paper is that successful speech comprehension relies on filtering of relevant and irrelevant auditory inputs. So I think it's kind of about things like the cocktail party problem, but more generally, perception of speech in noise. And of course, speech in noise is normal speech perception. It's very rare that we actually get the chance to have an unencumbered conversation. Jonas Obleser 34:48 Thanks for saying that, yeah. Stephen Wilson 34:51 So you and your colleagues explain that there are two filter mechanisms that have been studied, essentially independently. And you're going to compare them directly for the first time in a relatively naturalistic context. And these two filtering mechanisms that have been discussed in the literature: one of them is lateralization of alpha power, the other is what you call in this paper "neural tracking" of speech, which is also often called entrainment. So- Jonas Obleser 35:21 Exactly, but in the sense of this TICS paper that we just talked about, just for clarity, this would be basically neural entrainment in the broad sense. So I would personally refrain from calling it entrainment; we also call it neural tracking in the TICS paper. Stephen Wilson 35:33 And I think in this paper, you always call it neural tracking. So, can you help our listeners and me understand these two different mechanisms? The first one, lateralization of alpha power. What do we know about that? Jonas Obleser 35:46 Right. So maybe it's nice that we talk about this now, because we've touched upon both of those mechanisms in a way. We talked about alpha oscillations briefly in the beginning. So alpha oscillations have been around for a long, long time, essentially the first observation ever, hence the name, for almost 100 years now. But it was only like 15-20 years ago that Wolfgang Klimesch, and particularly Ole Jensen, made an important contribution by giving us a testable framework for what alpha oscillations might actually be useful for on a more cognitive, or one might even say practical, level. And the idea here is, it's a parsimonious idea. It's oversimplified, but it's extremely helpful in weeding through the evidence out there. And the idea is that alpha oscillations, when expressed locally in a piece of cortex somewhere, or for example over a large part of the left or the right parietal cortex, might indicate, basically, inhibition. Ole Jensen called it functional inhibition.
But this morning, I thought, well, every inhibition is... maybe some reviewer made him do it. I don't know. Every inhibition is functional, I hope, on some level. But so the framework is called functional inhibition. And what showed up pretty quickly in that research, long before us, but which is what we're using here, is that spatial attention, so your amazing ability to focus on some part of your environment and ignore other chunks of it, basically the other hemifield, to keep it simple, is accompanied (and mind the non-causal language I'm using now) by lateralized alpha activity. Broadly speaking, we all know roughly from the visual system: if you have contralateral input, you expect that hemisphere, contralateral to a hemifield, to be particularly involved in processing that input. And so this is actually where you would have relatively decreased alpha power when you're actively attending there, all in line with this idea that a lot of information can flow in and get processed locally. And in the other hemisphere, so in the ipsilateral hemisphere, you have strong alpha, a relative enhancement of alpha power. So what my lab has been doing for the last 10 years, ever since we got busy on alpha, is to ask: how could this work in speech? And we're not the only ones, of course, to have seen that this idea of alpha lateralization works pretty well in all kinds of sensory domains. It works for somatosensory, works for vision. But it also works for audition. And in audition, it's a bit more tricky, because the alpha activity you're seeing is not strictly auditory. It involves the STG, so the superior temporal gyrus, the posterior parts. But it's also usually working in concert with parietal cortex. Yeah, so I call it a neural filter mechanism, because I think of it as something... So on a psychological level, you have attention that you want to implement; you want to achieve a goal of attending, you know, favorably to one side. And we're thinking of alpha as one potential filtering mechanism helping you to do that. So that's one thing. Nice. But is it really doing that, in a large sample as we have here, 150 people, and even on single trials? Can I use that alpha signal, that alpha lateralization, that I can pretty reliably measure in EEG, to predict whether or not you will perform sort of adequately on that very trial? That was the question we were after here. And that's what we pitted against neural tracking as the other proposed mechanism, coming more from the speech and auditory domain, used more by colleagues who are interested in building brain-computer interfaces. So the idea, more akin to the work by Eddie Chang and others and Nima Mesgarani, is to say: whenever you're listening to one competing track against another and you try to ignore that other track, this prioritized processing should yield, or should be accompanied by, or some people might even say, is caused by, better tracking of that speech part, basically. So that's the idea. And we wanted to build one experiment where all of this is in there. Where we can do nice alpha lateralization, then present sentences to the left and right ears so we can get nice neural tracking. And then we get a behavioral readout to see whether this worked for you. And we give you some Posner-like cue up front to tell you, 'Listen to the left, or listen to the right, or listen to... we don't know, either.'
And then there were also some semantic cues, which our linguist friends would be more interested in, but which only make a cameo in this paper, I should say. Stephen Wilson 41:07 Yeah, that's true. I mean, we could almost skip talking about the semantic cues, just to keep it clear, because... Jonas Obleser 41:15 The data will all be open source, and the data will all be out there. And I hope that generations to come find interest in those data and can do something with it. Stephen Wilson 41:24 Yeah, maybe they can glean something there, because it wasn't that clear in your analysis that there was-. So just to kind of recap, because, again, communicating about experiments in the auditory modality only is, I think, very challenging, and I'm just trying to find ways of doing that. So with these two filtering mechanisms, one is lateralization of alpha power. And the way that it plays out in the auditory context is: if you're attending to the right ear, which somewhat favors the left hemisphere, although the auditory system is hardly completely crossed, you would expect attenuation of alpha power in the left hemisphere, and vice versa. And that's one possible filtering mechanism for attention. The other is neural tracking, or having any kind of signal, whether it's high gamma or anything else, kind of resembling the envelope of the speech. Jonas Obleser 42:24 Exactly. And this resembling aspect is actually, we do this by actually testing for resemblance. So we basically correlate a predicted, or in this sense a reconstructed, speech envelope with the plain speech envelope. So resembling is a good word here. That's exactly what's happening. Stephen Wilson 42:44 Okay, so yes. So then the structure of the trials: you had a lot of participants, I think over 150, of various ages and various hearing statuses. And they listened to two simultaneous sentences, one to each ear. And before the sentences, they're given a cue to either attend left, attend right, or they're given an uninformative cue. And then eventually, they have to detect a word in the right-ear sentence or the left-ear sentence, which either matches the cue, or, if there wasn't an informative cue, then they don't know which one they're gonna- Jonas Obleser 43:24 Then they don't know; then they're left guessing a bit. Because then later, again, a jitter period later, a few seconds later, there's this prompt. Basically, there's this probe. They have four words on the screen, and they have to pick the word. And basically, if you didn't have a very informative cue up front about where to listen, what you do is you probably try to wing it. And you sort of divide your attention, however that's possible, and try to take in both of those sentences. And then see, because you don't know yet whether we will be asking about the word that the left sentence ended in, or the word the right sentence ended in. Exactly. Stephen Wilson 44:05 Exactly. And so this kind of gives you the setup that you need to look at attention to one sentence, attention to the other sentence, divided attention. You can look at success, you can look at reaction time, and you can look at these neural measures of lateralization of alpha power as well as neural tracking.
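(To make the first of those two measures concrete, here is a minimal sketch of one common way to quantify single-trial alpha lateralization, on simulated data. The channel grouping and the index formula below are generic conventions assumed for illustration, not the exact definitions used in the Tune et al. paper.)

```python
# A minimal sketch of an alpha lateralization index on one simulated trial.
# Channel groupings, band limits, and the index convention are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
n_samples = 5 * fs
rng = np.random.default_rng(0)

# Fake single-trial data for a handful of left- and right-hemisphere channels.
left_chans = rng.standard_normal((4, n_samples))
right_chans = rng.standard_normal((4, n_samples))

def alpha_power(x, fs, band=(8, 12)):
    """Mean alpha-band power via band-pass filtering plus Hilbert envelope."""
    b, a = butter(3, band, btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, x, axis=-1), axis=-1)
    return np.mean(np.abs(analytic) ** 2)

p_left = alpha_power(left_chans, fs)
p_right = alpha_power(right_chans, fs)

# Positive when alpha is relatively stronger over the left hemisphere, which
# on an "attend left" trial is the ipsilateral (inhibited) side by assumption.
ali = (p_left - p_right) / (p_left + p_right)
print(f"Alpha lateralization index: {ali:+.3f}")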
Jonas Obleser 44:24 And fans might recognize somewhere in there, we're going a bit overboard, but we like to call it a sort of linguistic Posner paradigm. So psychologists might recognize this flavor of some elements of a classic Posner attention task, where you're basically cued to one hemifield. And this is what we're trying to do here also. That's there for the alpha, because, you know, this whole idea of functional inhibition, alpha filtering, should be beautifully in place, and actually is beautifully in place, after the cue. So while you're preparing for what's coming, that's when these alpha filters, may I call them this way, are basically set up; you see beautiful alpha lateralization. Stephen Wilson 45:09 So, behaviorally, did you get the effects you were expecting from the cues? In terms of their effects on accuracy and reaction time? Jonas Obleser 45:17 Yeah, they're totally behaving the way we were expecting them to. So a valid spatial cue, and this is almost like a sanity check in this kind of paradigm, a valid spatial cue to say, 'Hey, attend to the left', or attend to the right for that matter, gives you better performance. So you're more likely to get it right on that trial. And you're also faster. And the interesting part, if I already might say something about results here, the interesting part really concerns all the participants. So we were very interested in attention because it's still this contentious thing, whether part of the problems that older listeners have results from a cognitive, attention-implementation problem, that maybe they're not able to use such a cue the way you and I, or, for that matter, a 20-year-old- Stephen Wilson 46:14 Are we young people for these purposes? Jonas Obleser 46:16 Well, yeah, we are in this study, because we started at 40. But we're not in terms of any other undergrad study you're normally doing. Right. That's a good point. No, so in that sense, it was interesting to see that this is a largely, by and large, preserved function, pretty independent of age. And that worked well. So that's important for this whole study. Because if there would already be a problem here, then you could say, well, the whole attention setup was a bit weird. And since it was a new study, you always want to have those basic effects in place. And I think they're there. Stephen Wilson 46:51 Certainly, yeah. You got the behavioral effect. And then how about- were you able to induce lateralized alpha power the way you were hoping with your cues? Jonas Obleser 47:01 Beautifully so, I think. So in this study, if you look up the paper, you could look at figure three if you're interested in that. But you would see a sort of alpha power lateralization. Basically, as the cue comes on (the cue is a symbolic cue, but you have learned over a few trials that this means, 'Oh, they will ask me about the left sentence in a second'), you see this alpha lateralization in the expected direction. So this is set up quite neatly. It then breaks down again, because we had the audacity of also showing this other cue that we don't want to talk about, the semantic cue. So that breaks it a bit. But it's interesting, because it seems to break this-.
It's interesting, because it's circumstantial evidence linking alpha lateralization to these sort of behavioral goals. You want to listen to the left or not, let's say to the left for a second. So you bring down your right alpha already in preparation. And then when something else happens, this might go away. And then when the actual sentence comes on, this lateralization is reinstated, though it's not as strong. And that makes sense, because bottom-up, we are driving the system with left and right input. So if you think of it really as a top-down filter, then we're making it hard for this top-down filter to operate, because we're pushing with equal volume, essentially, into both channels, right. But then towards the end, and I find this really fascinating, because the subjects seem really, really good, they've learned those tasks over hundreds of trials. When you average, they know where the money is: at the end of the sentence. That's when you see, beautifully, how alpha lateralization increases and actually exceeds the levels that we had in the cue phase. So I think that works. That's almost, I would think, in my little world, textbook-like alpha lateralization. So that's cool. Stephen Wilson 49:00 Yeah, and just to recap that temporal pattern, which I found really striking too, when I looked at the figures: the cue induces alpha suppression in the expected hemisphere. But then, as you said, it goes away for whatever reason. But then it comes back in the sentence, and it especially comes back when you get to the final word, which is the one that they're going to have to report on. So it's like they are allocating their attention as desired, but in a very strategic way. They're not just kind of attending to one side the whole time. Jonas Obleser 49:27 Exactly. So my colleague Malte Wöstmann and I, we tend to call this a spatiotemporal filter. Currently, we think of alpha as really something that helps you to-. Whether it helps you is the other big question that we might be getting to in a minute. But it's obviously telling us, as researchers, something about what the listener intends to do with her attentional resources, where and when she wants to deploy them. I think this can be nicely read off as alpha lateralization. Yes. Stephen Wilson 50:00 Cool. And so how about neural tracking? You sort of started to answer this already. But I guess it's a two-part question: firstly, how do you measure neural tracking? And secondly, did you see the neural tracking differences that you anticipated? Jonas Obleser 50:17 So neural tracking, again: when you think of the math behind it, you're wondering why it took so long for it to really become a household technique. It's really due to colleagues like Jonathan Simon from Maryland, or Ed Lalor from Dublin, or now Rochester, who have championed this technique for us. It's essentially just, what you do is you correlate, in plain terms, a signal of interest, for example a broadband envelope or some form of acoustic representation of what you play to your subject, with the ongoing, essentially raw, EEG signal, or MEG signal. And by doing so, it's just like a little regression problem you're solving. So you end up with a regression weight. But now the little trick is that you do this in a time-shifted fashion.
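(Here is a minimal sketch of that time-shifted regression in its backward, decoding direction: reconstruct the speech envelope from lagged copies of the EEG with ridge regression. The simulated data, lag range, and penalty are illustrative assumptions, not the settings of the study under discussion.)

```python
# A minimal sketch of time-shifted (ridge) regression for envelope
# reconstruction, on simulated data. Lags, penalty, and dimensions are
# illustrative assumptions.
import numpy as np

fs = 100                                        # envelope/EEG sampling rate (Hz)
n_chan, n_samples = 16, 60 * fs
rng = np.random.default_rng(1)

envelope = rng.standard_normal(n_samples)       # stand-in speech envelope
eeg = rng.standard_normal((n_chan, n_samples))  # stand-in EEG
eeg[0] += np.roll(envelope, 10)                 # channel 0 "tracks" the envelope at a 100 ms lag

# Design matrix of lag-shifted EEG copies (0-240 ms), so that the envelope
# at time t is predicted from EEG samples following t.
lags = np.arange(0, int(0.25 * fs))
X = np.vstack([np.roll(eeg, -lag, axis=1) for lag in lags]).T   # (samples, chans * lags)

# Ridge regression: w = (X'X + lambda I)^-1 X'y.
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

# "Reconstruction accuracy" is simply the correlation between the
# reconstructed and the true envelope; on a held-out trial, the candidate
# envelope (attended vs. ignored) with the higher correlation wins.
r = np.corrcoef(X @ w, envelope)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```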
So, to be fair, it's essentially a technique called "reverse correlation" in the single-cell spiking literature. It's been around for much longer; the original paper is probably from around 1970, where the term reverse correlation was used for the first time. So you come up with basically a time-shifted regression. And this gives you a little kernel. It's also to be seen in many of those figures. It looks like an ERP. When you're not from our field, you think this looks like a typical waveform; they've been showing this for 40 years. But the difference is that this is now gained from a little mathematical trick done to essentially ongoing speech materials. And this is the breakthrough element, I think, for our field, for psycholinguistics. Right? You can take what we're doing here and apply it to much more interesting linguistic questions, like people are beginning to do now. People are beginning to study phonetics, phonology, semantic representation with those very same techniques: you use some features in your signal to basically see how they've been encoded in the brain signal, by estimating these responses. And anyway, long story short, when you have these kernels or response functions, you can easily take a new trial. You can also call it machine learning, if you're into those kinds of terms. It's also machine learning, because you take a single trial that Sarah has sort of put aside during the analysis. And she takes that trial now and says, look, with your little estimated fancy kernel, what do you think? Was this the attended sentence playing, or was it the ignored sentence? And then the decoder says, I think it rather looks like this one, so this was probably the attended one, because this is stronger. So you're playing these little decoder games there. And we're then basically saying: okay, we think that the listener attended to the sentence where we can reconstruct the envelope, so basically the audio, just a tad bit better. And to make this short, yes, it worked. You can basically show with that technique, again, that listeners follow your instructions. That's what you find out here. Basically, you tell them to attend left, and in those trials, you also get better reconstruction accuracy for that side. So that works. So I think both measures are neatly in place. And yes, you can say at that point, both filtering mechanisms are in place and work fine, also in older age. They're well behaved in that sense. Stephen Wilson 54:05 Okay. So yeah, they followed the rules. And they did what they were supposed to do. They performed as expected. You saw the neural evidence for both kinds of filters. But, you know, they weren't perfect on every trial, right? Their accuracy varied, sometimes they got it wrong, and also reaction times varied. So maybe the central question of the paper was, you know, which of the two filters, or both, are going to actually predict trial-by-trial performance, or individual differences in participant performance? What did you find there? Jonas Obleser 54:42 Well, if I may, before I get to the behavior: the thing is, those filters could also be, and I think some colleagues might well have thought so, they could also be very closely related. They could be just basically different sides of the same coin or something, the same metal.
I didn't think so. I do think that the whole dynamics and the whole generation of those are very, very different. But, to be very plainly honest here, I was actually expecting, and we show this in some of the graphs, we were thinking that alpha power, as it's also present earlier, already present before the audio starts, is sort of implementing a filter. So if you are, either as a person or in a given moment, implementing a strong alpha filter, it should also be predictive of how strongly your neural tracking works. So that was, to me, the big surprise, actually challenging a lot of my thinking over the last years, and challenging the thinking that went into the funding for this very paper: these are pretty unrelated. And so, having Sarah as a colleague, statistically very well versed and really pushing these statistical analyses on the single-trial level, we could convince ourselves that those are pretty unrelated. If at all, there is sort of a time-lagged relationship between them. But for now, let's say they're surprisingly unrelated. Which is also nice, because you can think of them as two independent ways of implementing auditory attention. And now, when it comes to predicting whether or not you get a trial correct, it turns out that alpha, and this might be a letdown for some people, it was for me in a way, but that's science, alpha is surprisingly unhelpful in predicting whether or not that works. Stephen Wilson 56:32 Yeah, I would have assumed that was an unexpected finding, right. Jonas Obleser 56:39 It was an unexpected finding, at least before the whole project started. So if this had been a pre-registered study, you would now read in the pre-registered hypotheses: alpha should predict... basically, the degree of alpha lateralization just prior to the trial, or during the trial, should be predictive of performance. And I want to turn this around, though, and really say: if you're an alpha researcher, I do think those are the kinds of analyses you have to show before you call your papers "alpha supports attention" or "alpha does x, y to attention" or something. Because I think, by all accounts, we measured the behavioral readout of attention. And we've been unsuccessful in showing that alpha is, in that sense, causally important. So it might be nice to have a nice, strong alpha lateralization, and it might tell us a lot about the way you deploy attention. But it's not directly related to whether or not you will succeed on the trial. So that is an interesting lesson for me. Stephen Wilson 57:49 Yeah. And it's quite surprising. Neural tracking- Jonas Obleser 57:53 Neural tracking does it. It works. We've tried a lot of things. Stephen Wilson 57:59 What you're saying is neural tracking was predictive of behavioral performance, accuracy and reaction time. Jonas Obleser 58:06 To say this in one plain sentence: the stronger your neural tracking index, that means the relative neural tracking being stronger for the attended side, so if we're really better able to reconstruct your attended envelope compared to the ignored envelope, if this distinction is cleaner while you listen to the sentence, then you're also more likely to get it correct. And you're faster in doing so. So for both classic behavioral readouts, accuracy and speed, your neural tracking is predictive.
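(The statistical logic of that last claim can be caricatured in a few lines: regress single-trial accuracy on a trial-wise tracking index and check whether the slope is positive. This toy sketch runs on simulated data; the actual paper fits far richer generalized linear mixed-effects models across listeners, so treat this as the shape of the test, not the test itself.)

```python
# A toy sketch of the single-trial logic: does a trial-wise neural tracking
# index predict accuracy? Data are simulated; all numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials = 240

# Trial-wise attended-minus-ignored tracking index, plus binary accuracy
# generated so that stronger tracking raises the probability of a hit.
tracking = rng.standard_normal(n_trials)
p_correct = 1 / (1 + np.exp(-(0.8 + 0.6 * tracking)))
accuracy = rng.binomial(1, p_correct)

model = LogisticRegression().fit(tracking[:, None], accuracy)
print(f"Log-odds of a correct trial per unit tracking: {model.coef_[0][0]:.2f}")
```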
So I think this is an interesting finding, when pitting those two against each other. And I should mention, a lot of colleagues have been skeptical about our neural tracking, because we're maybe among the first to try this in a very short, you know, in a typical trial paradigm. Normally, speech tracking is currently done on audiobooks, which is fantastic: you have these long stories, a lot of data. We tried to push it down to a more simple, alpha-like trial structure. And still, we got one and a half seconds or two seconds to work with for the neural tracking analysis. And so it might well be that we're underestimating a bit the power of neural tracking, the explanatory power, I think, because our neural tracking per se is maybe a bit more noisy than what you would get if you would listen for longer. But so yeah, I think that's where the money is right now. Those two filters are there, but ascribing causal meaning to them, in the sense of: are they really needed to get behavior, to get your cocktail party going? I don't know. That's less clear after that. And I think we have the power, with 160 people of varying age, of the interesting age, the cocktail-party age, where it's supposed to get difficult. And A, they're doing fine, and B, their neural tracking seems also in place. Yeah. Stephen Wilson 1:00:04 Yeah. So I feel kind of excited that, like, I actually read and understood an EEG paper. Doesn't happen very often. Did I kind of ask you the-. Did we get to all the critical things that you would want to share as the most important aspects of your findings, or is there anything that we missed? Jonas Obleser 1:00:23 No, yeah no, I think so. I think you did it good justice. Thanks for highlighting that work. I should really say that Sarah Tune is a fantastic psycholinguist, experimenter, and statistician by now, and I'm very happy to have her, and the other co-authors, who actually, you know, shouldered all the work in this. What you're seeing here is the work of our basically five-year European Research Council Consolidator Grant that is now coming to a close. And so these are now the papers coming out from that, and we're satisfied with having those data, because I think the sample is big enough, and the questions have been principled enough, to help settle, maybe now and in the future, a few points of contention. Stephen Wilson 1:01:07 Yeah. I thought the paper was really well written. I mean, I think it's a testament to you and your co-authors that, from outside of this field, I was able to understand it, and kind of follow along with what the questions were and how the data answered the questions. And we left out a lot of detail in our discussion, just for practical purposes. But yeah, there's even more in there. Okay, so that was great. There was just one other thing. I wouldn't want to, you know, let you go without touching upon your interest in, let's say, the aesthetics of science. I know that you are the person that I follow on Twitter who is most likely to complain about something like one line of a paragraph going on to the next page in a manuscript draft, or something like that. Often headings. Yeah. And I really share that, although I don't share it with the world like you do. So I guess, tell me: in 2021, what does your ideal manuscript format look like for a submission? Jonas Obleser 1:02:22 Ah, that's so interesting that you asked.
Thanks for bringing that up. I don't have an answer, but I just love airtime for typography. I think there should be more airtime for it. I'm not a big podcast follower, so maybe there are lots of podcasts on it already. But anyway, I do think that the scientist's mind is expressed, on some level, in some twisted way, in the copy they hand in. I mean, I think you might agree that, when you get something to review, clean thought somehow also expresses itself in clean copy, in one way or the other. And I don't want to get Stalinist about it. I can also really, you know, bitch a lot about typography and all that, and that's fine, but that's not what I intend to do here. Look, focusing on classical typographical rules: it's easier for the eye if you have shorter lines, if you have paragraph breaks, if you respect whitespace in a clever way, so that the whitespace helps the eye grasp what's going on within a figure or on a page. This really has, at first, nothing to do with science, and I can only apologize to the people who tuned in for the science and now get lectured on whitespace. But I do think, on a deeper level, that those things are related. And when it comes to those old papers, which is maybe what you're referring to, what I post sometimes on Twitter, I'm also very nostalgic, of course. I'm thinking of the better days of scientific publishing, which were probably not better days. They must have been awful, honestly, when you had to hand everything in by mail and had to wait for proofs. It must have been awful in a way. But I do think that in the technical limitations of a well-typeset manuscript lies a lot of beauty, you know. Why was this? Partly because it was left to professionals. The manuscripts were typeset by professionals. They knew the difference between an en dash and a hyphen. They just did. Nowadays everybody is a typesetter, and that's rather for the worse. So maybe that's a lesson in there: it does cost money to get good typesetting. I can totally understand if people are upset about Elsevier or the big companies pressing a lot of money out of them and then not doing a great job on the typesetting. But I'm very, very willing to pay a lot of money for good typesetting, and for good copy editing in that sense. And that's why I also clash sometimes with people in the open science movement who just want to do away with all kinds of journals, or with any of that. I do think there is value in the professional presentation of science. And I don't think that every scientist, including me, should think, 'Oh, I can do that myself. I can make a nice manuscript.' Yeah, maybe. Stephen Wilson 1:05:39 Mostly no, based on my reviewing experience. I mean, I think what stings with, you know, the publishing companies is when they charge so much and then they don't do a good job, you know? Jonas Obleser 1:05:50 Exactly. Yeah, I agree. Stephen Wilson 1:05:51 If you see it done well, then I think you can see the value there. Are there any journals that you think have a particularly good layout these days? Jonas Obleser 1:06:02 They don't pay me, but I do think that Cell Press is still looking great.
I mean, they're going full-on 90s. Look up the graphic design company called Experimental Jetset. They're from Amsterdam, and they only work in Helvetica. And that's a bold move, I mean, just using Helvetica, but it's a way of restricting yourself. And Cell Press, funnily enough, is doing that also. So you must imagine my dismay when Christof and I finally had this TICS paper out, and ours was the one issue of Trends in Cognitive Sciences that was typeset in a different font, because, for whatever reason, due to technical problems at Cell Press, they changed to a different template, essentially, for a few issues. And ours was in there. Stephen Wilson 1:06:52 That must have been devastating for you. Jonas Obleser 1:06:54 It was devastating. But generally, I think their papers do look pretty great. For one, they're doing a good job in typesetting, and they are respecting the whitespace. So I think Cell Press is good. It's also interesting what the Nature journals did last year or so. Particularly Nature itself, they went totally old school. They stripped away all the colors, and it was very black and white. Stephen Wilson 1:07:21 Did you like that? Jonas Obleser 1:07:22 I did like that, yeah, but that's just me, probably. Stephen Wilson 1:07:26 You have a minimalist kind of aesthetic? Jonas Obleser 1:07:28 Yeah, that's also true. But that's, yeah, that's the way I am. Stephen Wilson 1:07:31 Have you seen the Neurobiology of Language journal layout? Jonas Obleser 1:07:36 I'm not fully aware of the current layout. I was following the phase when the journal was being set up, but I think it's only now that the fully typeset articles are coming out, right? I should check that out. Stephen Wilson 1:07:48 I wanted to get you to comment on it, if you had already formed an opinion on it. It has a very nice greeny-blue color that I kind of like. But there are many other aspects that give me pause, and I'm sure they'll give you pause too. Jonas Obleser 1:08:04 I remember, though, complimenting the people behind the journal on the logo when that came out. So I thought there was some attention to detail in there. But I haven't seen the full... Stephen Wilson 1:08:15 Yeah, there are some controversial choices that will probably not be noticed by most of the readership. But anyway, it's a great journal, layout aside. Jonas Obleser 1:08:26 It is. Stephen Wilson 1:08:29 Yeah, not meaning to critique the journal. Okay, so last thing. You're an editor at the Journal of Neuroscience. Can you share a bit about what that experience has been like? Jonas Obleser 1:08:43 It's been great. I've been doing it for about two years now; 2019 and 2020, basically, have been the years that I've been fully involved as a so-called reviewing editor, what at other journals would be called a handling editor or action editor. And it's a great experience. I should say that really a lot of work goes into the papers that don't make it into peer review, which is always devastating for an author, but at JNeurosci in particular it's met with great care. It involves at least three people, so at least three editors entering a comparably detailed discussion about a paper. That's a lot of work, but it also means that you learn a lot.
Because other editors might drag me into these discussions, basically. So they've been assigned a paper, but they want to suggest it for editorial rejection, and then they're making a case for editorial rejection, and the other colleagues are sometimes also playing a bit of devil's advocate and saying, 'Yeah, but I don't know, if it had landed on my desk, for these and these reasons, quite technical ones already, we wouldn't be doing this.' So this has been a quite joyful experience, and actually very instructive, of course. I learned a lot, and you really see the breadth of the field. I mean, I'm mostly seeing auditory, EEG, and language work, fMRI work here and there, but often everything with a bit of a language twist. So of course I don't see everything, but there's what I see in working with the different senior editors and the different styles they have. So I consider it a great learning experience, honestly. Stephen Wilson 1:10:33 That's really cool. Are you going to be doing that for a while? Is there, like, a term? Jonas Obleser 1:10:37 There's a three-year term, and I'm now ending that three-year term. Then there's sometimes an extension for another three years, so I might be doing it for a few more years. Stephen Wilson 1:10:48 That's great. Cool. Okay, well, thank you so much for your time. It's been really good talking with you. I hope we get a chance to catch up in person and have a beer again before too long. Jonas Obleser 1:11:00 Yeah, when this pandemic is over. Stephen Wilson 1:11:04 I just want to travel to every conference. Like, I want to go to every conference. I want to, you know, stay at random hotels and eat at random restaurants for the rest of my life. I just didn't realize that I'm not quite as much of an introvert as I thought I was. Jonas Obleser 1:11:23 Just to say it in the words of Snoop Dogg, as he sat in his car and listened to Frozen, 'We will be out soon.' Stephen Wilson 1:11:31 We will. All right. Well, take care, and I'll talk to you soon. Jonas Obleser 1:11:36 Thank you, Stephen. Thanks for having me. Bye-bye. Stephen Wilson 1:11:38 Bye. Okay, well, that's it for Episode Four. Please subscribe to the podcast on your favorite podcast app, and if you have time, rate and review the show on Apple Podcasts. If you'd like to learn more about Jonas's work, I've linked his lab website and the paper we discussed on the podcast website, which is langneurosci.org/podcast. I'd like to thank Sam Harvey for assistance with audio engineering, and Latane Bullock for editing the transcript of this episode. I'd be grateful for any feedback. You can reach me at smwilsonau@gmail.com or @smwilsonau on Twitter. And thanks, everyone, for listening. See you next time.