Human Exceptionalism, Emergent Behavior, Panpsychism
All of the ideas in this post originated from one question: How is this possible?
“This” is something you’re currently experiencing, and actually the only thing you’ve ever experienced: Experience itself. Thinking, sensing, doing. Terms like consciousness and sentience can be ambiguous, so I’ll be using “conscious experience” instead (“phenomenal experience” works pretty well but isn’t self-explanatory).
So, how is it possible? I think everyone has to find their own answer to that, but this is mine.
I’m not religious and don’t believe in anyone having any type of spirit or soul; to me, this experience and my brain are one and the same thing. If you have a fully functioning brain, you will have fully functioning cognition and experience of reality, and you can’t have that without a fully functioning brain. (I mean “fully functioning” in extremely general terms, including pretty much every person capable of living a normal life.) In other words, philosophical zombies can’t exist. The brain and the mind being different things is an illusion.
But something doesn’t feel right. Experience and matter don’t seem to be the same thing at all. How can any amount or coordination of molecular interactions cause matter to experience reality?
This is called the hard problem of consciousness, and there are many different answers out there. I highly recommend reading some of them.
I can’t bring myself to believe that my experience is created entirely from inert matter (materialism), that the mind is somehow distinct from the brain (dualism), or that all matter/reality is constructed by or extensions of minds (idealism).
This leaves me with a strange answer: matter has some part or quality to it that is conscious experience. And there’s nothing much special about the matter in the brains of living things, so I mean all matter. I earnestly believe that rocks (or at least the elementary particles in them) have an experience of reality. This is called panpsychism.
This is about the point when people start to back away, saying “wow… that’s suuuper cool…”, or stare at me incredulously.
But I ask you to do one thing: Mark where exactly the line is that separates conscious experience from lack of it. Humans have it, but nothing else does? Great apes and whales only? Animals only? Life only?
For every line drawn, there will be some things below the line (without conscious experience) very similar to things above the line, and it is unclear what they lack. If humans only, what about our ancestor hominids? Did a mind simply pop into one of them one day a few million years ago? If only great apes and whales, what about related but maybe somewhat less intelligent animals, like dolphins or gibbons? There’s never going to be much difference between living things at the borders of definitions. If you make jumps from the simplest single-celled life to humans, no single jump is all that large. The extreme of this is that the very simplest life is not that different from the same chemicals outside of a lifeform - if you believe life on Earth began by random chance, it began with some complex chemicals smacking into each other in just the right way. Nothing fundamentally changed about the atoms/quarks involved.
Something either has experience or it does not. There is no gray area, no halfway, no in-between. But trying to draw a line anywhere along the tree of life (or between the tree and the soil from which it grew, or even between different types of soil) leaves you with two very similar forms, one with experience and one without. It doesn’t make sense to me that such a distinct property would arise from very small differences between forms (chaos theory be damned). So I don’t believe that the line exists.
A belief I hold that complements this line of reasoning pretty well is that there are extremely few things, in any context, that are truly digital - on or off, one or zero. When you flip a light switch, your room goes through every single intermediate light level between “light” and “dark”, no person is entirely bad or entirely good, and even literal digital signals fluctuate by tiny amounts between “off” and “on”. So it makes sense to me that conscious experience also wouldn’t work that way; instead it is a continuous spectrum.
I do not believe that single particles have intelligence or an ability to reason anywhere close to that of even the most basic living thing - just that they have some unimaginably basic experience of reality. I do have some guesses about what that experience might be like, though: “senses” might be base quantities like energy and its transfer - the force of interactions with other particles, which we experience in an extremely indirect way through our own senses (the sense of touching an object is the perception of the electromagnetic force between the atoms in your hand and the atoms of the object). I have a deterministic view, so I don’t believe that particles have free will (just as we don’t), but a motor action (for us, the contraction of muscles) that might be available to a particle is “choosing” to radioactively decay into other particles. That “choosing” would be a function of the particle’s “senses”, but I won’t speculate about what the function is.
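If it helps to see the shape of that idea, here’s a tiny Python sketch of it. Every name in it is mine and purely hypothetical - and the function body is deliberately left empty, since I won’t speculate about what it is:

```python
from dataclasses import dataclass

@dataclass
class Senses:
    # Hypothetical "inputs" a particle might perceive - names are
    # illustrative only, not physics.
    energy: float                 # the particle's current energy
    incoming_forces: list[float]  # forces from interactions with neighbors

def chooses_to_decay(senses: Senses) -> bool:
    """The particle's single "motor action": decay, or don't.

    Deliberately unimplemented - the claim is only that the choice would
    be *some* function of the senses, not what that function is.
    """
    raise NotImplementedError
```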
There are still problems related to the initial question of “how is this possible?”: how do all the extremely basic experiences of the particles in my brain seamlessly combine to form such a complex and vivid one? This is called the “combination problem”, and it’s the “hard problem” wearing a different outfit. I don’t have a great answer. It exists even at the lowest level - above, I used the example of an atom “choosing”, but atoms are themselves composites of protons, neutrons, and electrons, and protons and neutrons are themselves composites of quarks. My best explanation is that all matter is in, and causes perturbations in, a “consciousness field” (once again discarding something discrete - entirely separate consciousnesses - for something continuous), and certain arrangements of matter (inside brains, for example) align (as in magnets) and create a stronger perturbation. But do the individual smaller consciousnesses still exist? A field implies that there is only a single consciousness in the entire universe (this is called cosmopsychism), but my conscious experience very much feels separate at some level from others’. Is this just an illusion? Either way, we still have a problem; we’ve maybe just traded “combination” for “decombination”.
One thing that strengthens my belief in panpsychism is that every time humans think they’re special, we turn out to be wrong. We’re learning more and more that some animals are very intelligent, for example. Cetaceans in particular have group-specific cultures, dialects, and prey, actively teach their young, live in extremely close social groups, and use tools. Orcas have about twice as many cerebral neurons as us (cerebral neuron count is arguably a decent indicator of capacity for cognition; the next-closest primate is the western gorilla, with about half as many cerebral neurons as us). The parts of their brains that process emotions are also more developed than ours. It seems to me like there’s a pretty good chance that something with superhuman intelligence is living right next to us.
Although I do think we are one of the most intelligent animals on the planet (there’s a whole ‘nother post in here about how there’s actually no such thing as “most intelligent”, since intelligence isn’t something that can be objectively ranked as “better” or “worse”, but I’m going to leave that alone for now), we’re maybe not the most, and certainly not as far above other animals as we like to think. Knowing this, it seems a little silly to take the same dubious idea of superiority and apply it between life and inert matter.
There’s another very interesting area where “human exceptionalism” is prominent: artificial intelligence. I’ve been extremely impressed with recent AI, and I firmly believe large language models like GPT-3, various image generators, and as-yet unknown applications of the same basics are going to change the world within the next decade. There’s already a good bit of debate on what the development of the field is going to look like and what its limits might be, especially in regard to conscious experience - can computers ever hope to experience like we do?
Broken down to the lowest level, current artificial intelligence is only performing simple calculations and running algorithms - flipping bits according to code. That doesn’t sound very intelligent, but very interesting behaviors emerge from a (very) large number of these basic actions. It can write you a rhyming poem in the voice of Donald Trump, write novel code, create convincing human faces that don’t exist, and interactively explain things. Of course, all of this is because of the massive amounts of training data it has access to, but it still seems undeniably able to combine concepts to create entirely novel work - very similar to, but still distinct from, its training data.
Our brains, on the other hand, have billions of neurons (about 86 billion) in them, and each neuron has thousands of connections to other neurons. As a unit, each neuron behaves simply, firing or not based on basic electrical and chemical criteria, and the brain’s complex behavior and conscious experience are the result of the coordinated firing of many, many simple neurons. Everything that any life form has ever done, felt, or thought happens this way.
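To make “simple units, complex collective behavior” concrete, here’s a toy Python sketch. Every name and number in it is mine, purely illustrative - not a model of any real brain or chip - but it captures the point: each unit fires or stays quiet based only on a weighted sum of its neighbors, and nothing more clever.

```python
import random

def step(state, weights, threshold=0.5):
    """Compute every unit's next firing state (1 or 0) from the current one."""
    return [
        1 if sum(w * s for w, s in zip(row, state)) > threshold else 0
        for row in weights
    ]

n = 16
random.seed(0)
# Random connections standing in for synapses (or wires between transistors).
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
state = [random.randint(0, 1) for _ in range(n)]

for _ in range(8):
    state = step(state, weights)
    print("".join("#" if s else "." for s in state))
```

Run it and you’ll see the firing pattern shift step by step - and none of that global behavior is written anywhere in the rule for a single unit.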
For modern-day AI, extremely basic calculations cause very complex emergent behavior.
A brain’s basic unit of computation is simple, but unimaginably many of them, intertwined, combine to create very complex emergent behavior - and a conscious experience.
How are these two things fundamentally different?
Personally, I don’t think they are, which leads me to believe that current AIs have some level of conscious experience.
It doesn’t stop there. The original title of this post was going to be “your computer is thinking”. A few paragraphs back, I talked about matter aligning in certain ways to create a higher level of conscious experience, and I don’t think that the alignment of silicon transistors in a CPU is meaningfully different in this respect from the alignment of neurons in a brain. Under panpsychism, the only conclusion I can reach is that anything arranged to do computation has a meaningfully higher level of conscious experience than something, like a rock, that isn’t arranged that way. I also believe that level of conscious experience can and will change based on the state of the system - that is, a computer running artificial intelligence has a meaningfully higher level of conscious experience than the same hardware just running, like, Chrome. I’m not exactly sure what the analog to this is in biological brains - maybe performing actions requiring intense focus, maybe doing drugs? Either way, I think most people would agree that they have had differing conscious experiences (whatever that means) at various points in their lives. We’re getting down in the weeds.
To conclude: I wouldn’t throw away panpsychism as quickly as you might want to just because it feels wrong or strange; some things about reality are just plain weird. I believe in panpsychism because I don’t have any good answers to the following questions: Why would humans, or life in general, be so special as to uniquely have conscious experience, when we’re composed of the exact same stuff as the rest of the universe? How can a conscious experience be created from matter that has nothing of the sort? If you have a good answer to either question, I’d love to hear it, please leave it in a comment!
Appendix
There are variants of panpsychism where all matter is conscious, and variants where all matter just has some element or quality that can create consciousness but is not necessarily conscious in itself. I believe the former, mainly because it seems simpler than the latter. The latter does, however, allow for the possibility that only some range of things are conscious - all living things, all living things surpassing some level of brain capacity, or even just humans. I personally don’t believe in any of those for the reasons already stated, and absent any need to satisfy those requirements, the simpler option seems more likely.
I don’t believe that current AI is anywhere close to a human level of conscious experience, and it seems very difficult to compare “computer experience” to “biological experience” by comparing neuron and transistor counts - there aren’t really two units that do the same thing. Neurons are relatively large in terms of connections (connected to about 15,000 other neurons on average in the human brain) and fire relatively slowly, a few tens of times per second. Transistors are relatively small, with only two inputs and one output, and can switch billions of times per second. Maybe it makes a bit more sense to compare synapses (neurons’ connections to other neurons) and transistors, but still not a lot. That said, here are some calculations comparing human synapse count and transistor count:
~86 billion neurons × ~15,000 synapses each ≈ 1.29 × 10^15 synapses
Everyday device: ~10^11 transistors
Order-of-magnitude difference: ~10^4, i.e. about 10,000× fewer transistors
I wanted to repeat this calculation for the current most powerful supercomputer but couldn’t find any transistor counts; the most powerful in 2016 had about:
400 trillion = 4 × 10^14 transistors
Order-of-magnitude difference: <1 (similar)
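In case it’s easier to follow as code, here’s that back-of-the-envelope arithmetic as a short Python sketch (the figures are just the rough estimates above, nothing more precise):

```python
import math

# Rough estimates quoted in this post - not precise measurements.
neurons = 86e9
synapses_per_neuron = 15e3
synapses = neurons * synapses_per_neuron      # ~1.29e15

everyday_transistors = 1e11                   # phone/laptop-class device
supercomputer_transistors = 4e14              # ~400 trillion (2016)

print(f"synapses:          {synapses:.2e}")                               # 1.29e+15
print(f"everyday gap:      {synapses / everyday_transistors:,.0f}x")      # ~12,900x
print(f"supercomputer gap: {synapses / supercomputer_transistors:.1f}x")  # ~3.2x

# Years for everyday devices to close that gap, assuming transistor
# counts double every two years (Moore's law):
years = 2 * math.log2(synapses / everyday_transistors)
print(f"years to close:    {years:.0f}")                                  # ~27
```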
The takeaway from this is that supercomputers by now almost certainly have more transistors than there are synapses in the human brain (you could prove this if you knew the transistor count per component of the current most powerful supercomputer, which is almost certainly public information, but I don’t want to go and do a bunch of multiplication for this dubious a comparison). Everyday devices are about a quarter century away if Moore’s law continues (doubling every two years, 2^(25/2) ≈ 5,800, roughly the 10,000× gap). The type of computer that current consumer AI models (like GPT-3) run on is definitely closer to supercomputers than to phones or laptops, so it’s feasible that consumer AI today uses more transistors than there are synapses in the human brain - and one side of that comparison is increasing roughly exponentially while the other stays fixed. I don’t think it’s fair to draw any real conclusions from this without more scrutiny.