I want to tell you guys something about neuroscience. I'm a physicist by training. About three years ago, I left physics to come and try to understand how the brain works. And this is what I found. Lots of people are working on depression. And that's really good; depression is something that we really want to understand.
Here's how you do it: you take a jar and you fill it up, about halfway, with water. And then you take a mouse, and you put the mouse in the jar, OK? And the mouse swims around for a little while and then at some point, the mouse gets tired and decides to stop swimming. And when it stops swimming, that's depression. OK? And I'm from theoretical physics, so I'm used to people making very sophisticated mathematical models to precisely describe physical phenomena, so when I saw that this is the model for depression, I thought to myself, "Oh my God, we have a lot of work to do."
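To spell out just how crude that model is: the standard readout of the jar assay, the forced swim test, is a single number, the time the mouse spends immobile. Here is a minimal sketch, in Python, of that scoring; the speed trace, frame rate, and threshold below are invented for illustration.

```python
# Minimal sketch of how the jar assay (the forced swim test) is scored:
# the readout is just time spent immobile. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(3)
fps = 30                                        # video frames per second
n_frames = 6 * 60 * fps                         # a 6-minute trial
speed = np.abs(rng.standard_normal(n_frames))   # tracked swim speed
speed[n_frames // 2:] *= 0.01                   # the mouse "gives up" halfway

immobile = speed < 0.05                         # below-threshold movement
print(f"immobility: {immobile.sum() / fps:.0f} s of 360 s")
```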
But this is a kind of general problem in neuroscience. So for example, take emotion. Lots of people want to understand emotion. But you can't study emotion in mice or monkeys, because you can't ask them how they're feeling or what they're experiencing. So instead, people who want to understand emotion typically end up studying what's called motivated behavior, which is code for "what the mouse does when it really, really wants cheese." OK, I could go on and on. I mean, the point is, the NIH spends about 5.5 billion dollars a year on neuroscience research. And yet there have been almost no significant improvements in outcomes for patients with brain diseases in the past 40 years. And I think a lot of that is basically due to the fact that mice might be OK as a model for cancer or diabetes, but the mouse brain is just not sophisticated enough to reproduce human psychology or human brain disease. OK?
So if the mouse models are so bad, why are we still using them? Well, it basically boils down to this: the brain is made up of neurons, which are these little cells that send electrical signals to each other. If you want to understand how the brain works, you have to be able to measure the electrical activity of these neurons. But to do that, you have to get really close to the neurons with some kind of electrical recording device or a microscope. And so you can do that in mice and you can do it in monkeys, because you can physically put things into their brain, but for some reason we still can't do that in humans, OK? So instead, we've invented all these proxies. The most popular one is probably this, functional MRI, fMRI, which allows you to make these pretty pictures like this one, showing which parts of your brain light up when you're engaged in different activities. But this is a proxy. You're not actually measuring neural activity here. What you're doing is measuring, essentially, blood flow in the brain: where there's more blood. It's actually where there's more oxygen, but you get the idea, OK?
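To make the "proxy" point concrete, here is a toy sketch, in Python, of why fMRI blurs neural activity: to a first approximation, the BOLD signal is neural activity convolved with a slow hemodynamic response. The numbers and the double-gamma response shape below are illustrative, not a real fMRI analysis, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                       # time step, seconds
t = np.arange(0, 30, dt)       # 30 s of simulated time

# Idealized double-gamma hemodynamic response (peaks ~5 s after the event).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 15)
hrf /= hrf.sum()

# Two brief bursts of neural activity, 200 ms each, one second apart.
neural = np.zeros_like(t)
neural[(t >= 5.0) & (t < 5.2)] = 1.0
neural[(t >= 6.0) & (t < 6.2)] = 1.0

# The "fMRI view": the bursts convolved with the slow response.
bold = np.convolve(neural, hrf)[: len(t)]
print(f"two neural events -> one BOLD bump, peaking at t = "
      f"{t[np.argmax(bold)]:.1f} s")
```

Two bursts a second apart come out as one slow bump several seconds later, which is roughly why fMRI can't resolve individual neurons or fast events.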
The other thing that you can do is electroencephalography: you can put these electrodes on your head, OK? And then you can measure your brain waves. And here, you're actually measuring electrical activity. But you're not measuring the activity of neurons. You're measuring these electrical currents sloshing back and forth in your brain. So the point is just that these technologies we have are really measuring the wrong thing. Because for most of the diseases that we want to understand, Parkinson's is the classic example: in Parkinson's, there's one particular kind of neuron deep in your brain that is responsible for the disease, and these technologies just don't have the resolution that you need to get at it. And so that's why we're still stuck with the animals. Not that anyone wants to be studying depression by putting mice into jars, right? It's just that there's this pervasive sense that it's not possible to look at the activity of neurons in healthy humans.
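Here is a similar toy sketch for EEG. It is not a real forward model, just an illustration of the resolution problem: a scalp electrode records a distance-weighted sum over enormous numbers of neurons at once, so deep populations, like the ones that matter in Parkinson's, contribute almost nothing you could single out. All the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100_000

depth = rng.uniform(0.5, 8.0, n_neurons)           # cm from the scalp
signals = rng.standard_normal((n_neurons, 1000))   # per-neuron currents

# Contribution falls off steeply with distance (1/r^2, for illustration).
weights = 1.0 / depth**2

# What one electrode sees: a single weighted sum over all 100,000 sources.
eeg = weights @ signals
print(f"electrode trace: {eeg.shape[0]} samples, one channel")

deep = depth > np.quantile(depth, 0.9)             # deepest 10% of neurons
print(f"deep neurons' share of the electrode signal: "
      f"{weights[deep].sum() / weights.sum():.2%}")
```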
So here's what I want to do. I want to take you into the future, to have a look at one way in which I think it could potentially be possible. And I want to preface this by saying, I don't have all the details, so I'm just going to provide you with a kind of outline. But we're going to go to the year 2100. Now, what does the year 2100 look like? Well, to start with, the climate is a bit warmer than what you're used to.
And that robotic vacuum cleaner that you know and love went through a few generations, and the improvements were not always so good.
It was not always for the better. But actually, in the year 2100, most things are surprisingly recognizable. It's just that the brain is totally different. For example, in the year 2100, we understand the root causes of Alzheimer's, so we can deliver targeted genetic therapies or drugs to stop the degenerative process before it begins. So how did we do it? Well, there were essentially three steps. The first step was that we had to figure out some way to get electrical connections through the skull, so we could measure the electrical activity of neurons. And not only that, it had to be easy and risk-free, something that basically anyone would be OK with, like getting a piercing. Because back in 2017, the only way that we knew of to get through the skull was to drill these holes the size of quarters. You would never let someone do that to you.
So in the 2020s, people began to experiment with drilling microscopic holes, no thicker than a piece of hair, rather than these gigantic ones. And the idea here was really for diagnosis: there are lots of times in the diagnosis of brain disorders when you would like to be able to look at the neural activity beneath the skull, and being able to drill these microscopic holes would make that much easier for the patient. In the end, it would be like getting a shot. You just go in and you sit down, and there's a thing that comes down on your head, and a momentary sting, and then it's done, and you can go back about your day. So we were eventually able to do it, using lasers to drill the holes. And with the lasers, it was fast and extremely reliable; you couldn't even tell the holes were there, any more than you could tell that one of your hairs was missing. And I know it might sound crazy, using lasers to drill holes in your skull, but back in 2017, people were OK with surgeons shooting lasers into their eyes for corrective surgery. So once you're already OK with that, a few microscopic holes in the skull is not that big of a step. OK?
So the next step, which happened in the 2030s, was realizing that it's not just about getting through the skull. To measure the activity of neurons, you have to actually make it into the brain tissue itself. And the risk, whenever you put something into the brain tissue, is essentially that of stroke: that you would hit a blood vessel and burst it, and that causes a stroke. So, by the mid-2030s, we had invented these flexible probes that were capable of going around blood vessels rather than through them. And thus, we could put huge batteries of these probes into the brains of patients and record from thousands of their neurons without any risk to them. And what we discovered, sort of to our surprise, is that the neurons that we could identify were not responding to things like ideas or emotion, which was what we had expected. They were mostly responding to things like Jennifer Aniston or Halle Berry or Justin Trudeau. I mean—
In hindsight, we shouldn't have been that surprised. I mean, what do your neurons spend most of their time thinking about?
But really, the point is that this technology enabled us to begin studying neuroscience in individuals. So much like the transition in genetics to the single-cell level, we started to study neuroscience at the single-human level.
But we weren't quite there yet. Because these technologies were still restricted to medical applications, which meant that we were studying sick brains, not healthy brains. Because no matter how safe your technology is, you can't stick something into someone's brain for research purposes. They have to want it. And why would they want it? Because as soon as you have an electrical connection to the brain, you can use it to hook the brain up to a computer. Oh, well, you know, the general public was very skeptical at first. I mean, who wants to hook their brain up to their computer? Well, just imagine being able to send an email with a thought.
Imagine being able to take a picture with your eyes, OK?
Imagine never forgetting anything anymore, because anything that you choose to remember will be stored permanently on a hard drive somewhere, able to be recalled at will.
The line here between crazy and visionary was never quite clear. But the systems were safe. So when the FDA decided to deregulate these laser-drilling systems, in 2043, commercial demand just exploded. People started signing their emails, "Please excuse any typos. Sent from my brain."
Commercial systems popped up left and right, offering the latest and greatest in neural interfacing technology. A hundred electrodes. A thousand electrodes. High bandwidth, for only $99.99 a month.
Soon, everyone had them. And that was the key. Because, in the 2050s, if you were a neuroscientist, you could have someone come into your lab essentially off the street. And you could have them engage in some emotional task or social behavior or abstract reasoning, things you could never study in mice. And you could record the activity of their neurons using the interfaces that they already had. And then you could also ask them about what they were experiencing. So this link between psychology and neuroscience that you could never make in the animals was suddenly there.
So perhaps the classic example of this was the discovery of the neural basis for insight. That "Aha!" moment, the moment it all comes together, it clicks. And this was discovered by two scientists in 2055, Barry and Late, who observed, in the dorsal prefrontal cortex, how in the brain of someone trying to understand an idea, different populations of neurons would reorganize themselves (you're looking at neural activity here in orange) until finally their activity aligned in a way that led to positive feedback. Right there. That is understanding.
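The 2055 discovery is fiction, of course, but here is one toy way, of my own invention, to picture "populations reorganizing until their activity aligns into positive feedback": in a linear recurrent network, activity drifts toward a pattern embedded in the connectivity, and once aligned, the recurrence amplifies that pattern. Everything below (the network size, the embedded pattern, the dynamics) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # hypothetical population of 50 neurons

# Random symmetric connectivity plus one embedded "assembly" pattern.
pattern = rng.standard_normal(n)
pattern /= np.linalg.norm(pattern)
W = rng.standard_normal((n, n))
W = (W + W.T) / (2 * np.sqrt(n)) + 2.0 * np.outer(pattern, pattern)

x = rng.standard_normal(n)   # unaligned starting activity
for step in range(31):
    x = W @ x                # recurrent feedback
    x /= np.linalg.norm(x)   # track direction only, not magnitude
    if step % 10 == 0:
        print(f"step {step:2d}: alignment with embedded pattern = "
              f"{abs(pattern @ x):.2f}")
```

The alignment climbs toward 1.0 as the population settles into the embedded pattern, a cartoon of activity "clicking" into a self-reinforcing state.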
So finally, we were able to get at the things that make us human. And that's what really opened the way to major insights from medicine. Because, starting in the 2060s, with the ability to record the neural activity in the brains of patients with these different mental diseases, rather than defining the diseases on the basis of their symptoms, as we had at the beginning of the century, we started to define them on the basis of the actual pathology that we observed at the neural level. So for example, in the case of ADHD, we discovered that there are dozens of different diseases, all of which had been called ADHD at the start of the century, that actually had nothing to do with each other except that they had similar symptoms. And they needed to be treated in different ways. So it was kind of incredible, in retrospect, that at the beginning of the century we had been treating all those different diseases with the same drug: we were basically just giving people amphetamine. And schizophrenia and depression are the same way. So rather than prescribing drugs to people essentially at random, as we had, we learned how to predict which drugs would be most effective in which patients, and that just led to this huge improvement in outcomes.
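As a cartoon of what "splitting one symptom label into many diseases" could look like computationally: given per-patient neural measurements, an unsupervised clustering step can separate subtypes that share a diagnosis. The data here are synthetic, the five "neural features" are hypothetical, and it assumes scikit-learn is installed; this is an illustration, not the method the talk describes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Three hidden subtypes, each with its own pattern across five
# hypothetical neural features (say, firing rates in five circuits).
centers = rng.standard_normal((3, 5)) * 4.0
patients = np.vstack([c + rng.standard_normal((40, 5)) for c in centers])

# Cluster patients by neural measurements alone, ignoring the shared label.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(patients)
print(f"one symptom label, {kmeans.n_clusters} neural subtypes, "
      f"sizes: {np.bincount(kmeans.labels_)}")
```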
OK, I want to bring you back now to the year 2017. Some of this may sound satirical or even far-fetched. And some of it is. I mean, I can't actually see into the future. I don't actually know if we're going to be drilling hundreds or thousands of microscopic holes in our heads in 30 years. But what I can tell you is that we're not going to make any progress towards understanding the human brain or human diseases until we figure out how to get at the electrical activity of neurons in healthy humans. And almost no one is working on figuring out how to do that today. That is the future of neuroscience. And I think it's time for neuroscientists to put down the mouse brain and to dedicate the thought and investment necessary to understand the human brain and human disease.