And I'll start out by saying this will rankle many of my science friends, and in fact it would have rankled me some years ago. My point is that even science needs to find a way to move beyond its historical boundaries. At least if it's going to address the big issues coming in the 21st century - issues that, ultimately, science itself will create. But before you start to worry, science people: this is not about the gooey-headed notion, often repeated, that science can't teach us about feelings, or morality, or the meaning of life. In fact I believe just the opposite - science has an enormous amount to teach us about these issues. I'd even go so far as to say that what science has to say here may one day be the only thing that saves us from ourselves. That is, if we find the intellectual and emotional courage to absorb those lessons. Usually we don't do that kind of courage well, but in fairness, it's a very hard thing for our naturally selected brains to do. OK, that issue is also for a later posting.
At this point I should probably explain why - as much as I love art and music - I think science is the greatest thing the human mind puts out. Yes, science weaves breathtakingly beautiful narratives about nature - that's easy to love. But science is also the only human enterprise courageous enough to thrive on not knowing. It's the courage of curiosity that makes science both deeply humble and incredibly bold. It's why the story of science is the triumph of honest, transparent reason over naive intuition and wishful thinking. Again, something that is really, really hard for brains to do. Even when individual scientists may not be, science as a whole is open, dynamic, adaptable, and sure as hell never starts a meaningful line of inquiry with "Newton's sacred Principia teaches that..."
But where I believe science, at least as it is currently practiced and understood, hits a wall is in dealing with us. Us, as in the originators of science itself. It is, after all, a human activity whose outputs are the product of human minds. And it depends on finite, objective human language to formulate and communicate its content. That arrangement has worked wonders, of course, and spectacularly so since the Scientific Revolution of the 17th century. We can now sequence a human genome for $1000, and we even have satellites that correct for the curvature of space-time just to help us find our friends at the nearest pizza place. And did I mention the robots on Mars? They're actually not quite as 'smart' as the iPhone 4.
But even one of the founding figures of the revolution, René Descartes, knew there was a problem at the core of the whole thing. A real hard problem, as it's now sometimes formally called. How could any objective language description of the world, Descartes asked, possibly account for our subjective experience of consciousness? The job of science is to account for observed phenomena, and there ain't nothing more observable than the fact of our own subjective experience. Strictly speaking, it's the only thing we ever observe. Except that subjective experience is not even a 'thing' - and if you think it is, just try describing it to a piece of computer software. And the more you try to imagine how something that is not even a describable 'thing' could even in principle show up as a 'thing' in any scientific language description, the more frustrating it gets.
It's a hotly disputed question today, and learned opinion is all over the map. Some, like Daniel Dennett, claim subjective experience is just a 'user illusion' of self-referential information processing. Others, like David Chalmers, think subjectivity could be a sort of fundamental property of matter, like, say, an electron's spin and charge. Descartes himself thought subjective experience really was a different kind of 'thing' that's somehow attached to the normal 'things' in the world via the pineal gland in the brain. (OK, let's give Descartes a break here - it was the 17th century, folks.) Getting into the weeds of these and many other opinions worth mentioning will take several future postings, and I certainly have my own two cents to put in. The only point I want to make here, though, is that science and philosophy are a looong way from any kind of consensus. In particular, we really have no bloody clue about when and why another entity is subjectively conscious. So far, science is not really helping us, and I think there are structural reasons for that. More future postings to come.
The thing for now, though, is that we are not that far from being confronted with exactly this question in everyday life. OK, we may never have a cranky hologram-doctor who sings opera like in Star Trek: Voyager. Or an empathic Bicentennial Man android who makes wood carvings and falls in love. But artificial systems are clearly getting better at simulating human-like behavior. In limited contexts, software agents have already passed the Turing Test. Given that you and I judge each other to be conscious only through our behavior, how many more decades will it be until our adapted empathy mechanisms cringe at the thought of turning off our computers at night? Sure, it's all still science fiction stuff. But it's also unavoidably what this century will have to deal with. And if that still seems too futuristic, then think about the recent findings in animal intelligence. Never mind great apes - we share 98% of our DNA with them anyway. Watch some videos of parrots, crows, octopuses, dolphins, whales, even fish, and then tell me you don't see someone seriously at home. If you're a vegetarian: bless your intentions, but for perspective, you may want to watch a little plant intelligence in action. Now tell me there's no cause for all of us to wonder if we should look at our food sources differently. The problem of consciousness is moving into prime time a lot faster than we're ready to deal with.
Other conceptual issues I have with Iron Age science - issues that, again, have to do with us - are at bottom problems of self-reference. Here's a bullet-point version of some controversial claims I'm making:
- I'm a big believer that quantum mechanics is a much bigger deal than is generally appreciated. Feynman of course famously said that nobody understands quantum mechanics. Maybe he was more right than he realized. Personally, I'm not bothered by randomness or even entanglement - or that somehow things don't 'exist' until they're measured. I'm cool with all that for reasons that, yup, will be in future postings. What really bugs me is: what the hell is a measurement? Yes, it's a question as old as quantum theory itself. I just think we've made zero progress on it. Well, maybe not zero - there are some modern theories which keep Schroedinger's equation valid forever with no collapse. Many-worlds and decoherence are solid, plausible ideas, I think. My problem with those versions, though, is that they imply that the universe as a whole is ultimately linear and deterministic. Ultimately that may turn out to be the case, but a) there seem to be experimental hints that quantum randomness is real and not just appearances, and b) more importantly (to me), I have other philosophical reasons (read: prejudices ;-) for thinking otherwise. In fact, I think the schizophrenic structure of quantum mechanics (nice linear wave function until suddenly - oops - it ain't anymore) is somehow trying to tell us something about us. I have no clue what that secret message is, and I'm not yet willing to believe in a connection between consciousness and quantum mechanics. I also worry that Wheeler's participatory universe is more poetry than science. On the other hand, hell, maybe it doesn't go far enough. In any case, I do believe that if we really understood quantum mechanics, it would completely, utterly, totally reorder the way we look at science, ourselves and the universe. Just sayin'.
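For what it's worth, that 'schizophrenic structure' is easy to see even in a toy model. Below is a minimal sketch - my own illustration, not anyone's theory - of a single qubit: a linear, deterministic rotation (the Schroedinger side), followed by a Born-rule 'measurement' that is neither. The specific rotation, seed, and labels are all just illustrative choices.

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, just for reproducibility

# Unitary evolution: a Hadamard-like rotation. Perfectly linear and
# deterministic - exactly what the Schroedinger equation prescribes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ np.array([1.0, 0.0])   # superposition: (|0> + |1>)/sqrt(2)

# 'Measurement': the Born rule turns amplitudes into probabilities
# and picks one outcome - a discontinuous, non-unitary step bolted on.
probs = np.abs(psi) ** 2          # [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)
psi_after = np.eye(2)[outcome]    # collapsed basis state |outcome>

print(probs, outcome, psi_after)
```

The first half is the wave-function story; the last three lines are the 'oops' - and nothing in the formalism itself says where one ends and the other begins.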
- Another self-reference problem I have is kind of Goedelian: Iron Age science imagines 'laws of nature out there' that account for everything. Inasmuch as human minds 'discover' those laws, those same laws have to account for their own discovery by humans. So a complete Theory of Everything has to have the information content to contain itself among everything else it's accounting for. From an information-theoretic standpoint that may not work out too well, folks, unless the theory has infinite algorithmic complexity. Of course in that case, it's not discoverable anyway. None of that needs to be a strict logical paradox, but at the least it does make a conceptual mess. Again, just sayin'.
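An aside on why this is 'not a strict logical paradox': finite self-description is possible in principle, and the classic demonstration is a quine - a program whose output is exactly its own source. Here's a minimal Python one (my own toy example, nothing to do with physics):

```python
# The two code lines below form a quine: run them and they print
# themselves verbatim. The trick: '%r' inserts the template's own
# repr, and '%%' escapes to a literal '%'.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

A quine dodges paradox because self-reference only costs a fixed overhead; whether a Theory of Everything gets the same free pass is exactly the information-theoretic question above.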
- On the subject of conceptual messes: the more we learn about our brains, the more we learn how many and how deep our cognitive biases are. We can label our thinking 'logical' and 'objective' all day long if we want. Truth is, our brains keep us alive by cherry-picking the data they take in from the world (and from ourselves). We then generate selective interpretations of the selected data for all kinds of weird, opaque 'reasons' of convoluted neural circuitry that would humble us to our bones if we knew how arbitrary the whole thing is. None of this is terribly new. But think of the irony in the context of science. Science is, again, a product of the human mind. So science itself is teaching us not to trust science too much. That's pretty cool and exactly why I love science. But it also leaves me with an odd feeling that if I really took that message to heart, I would wind up with a totally different view of the whole thing.