The question of the mind of animals is one of the most interesting there is, I think. It is by now clear that the primary school distinction between humans having ‘intelligence’ and animals having ‘instincts’ is highly outdated. Animals possess all kinds of intelligence and consciousness – from the simplest insects to the most complex vertebrates. The most evolved ones, like the great apes, elephants and dolphins, even seem to possess self-consciousness, distinct individual personalities and the ability to independently solve complex problems and interact with other creatures. One might say they even have a ‘culture’.
Indeed, if you play with a dog or cat, it is beyond doubt to me that this being has a will and mind of its own, and is much more than just a robot following its pre-programmed instincts. My hunch is that research will eventually show that the difference between humans and ‘animals’ is more gradual than qualitative: we have evolved further on the line of language skills and self-consciousness, but are not essentially different.
So here’s an interesting article investigating one of the weirdest smart animals around: the octopus. In fact, it is the most prominent example of an invertebrate creature that has been proven to possess a complex form of intelligence and an ability to solve problems. Yet the peculiarity of the octopus makes it even more interesting: with its suckers, it can taste, feel and ‘see’ all at the same time; and each of its eight tentacles seems to possess a mind of its own. Still, octopuses seem to be able to recognize different people. It is thus an intelligent creature that is, at once, similar to and radically different from us in terms of consciousness.
So this raises the question: what is it like to be an octopus? How is the consciousness of an octopus constituted?
I have always loved octopuses. No sci-fi alien is so startlingly strange. Here is someone who, even if she grows to one hundred pounds and stretches more than eight feet long, could still squeeze her boneless body through an opening the size of an orange; an animal whose eight arms are covered with thousands of suckers that taste as well as feel; a mollusk with a beak like a parrot and venom like a snake and a tongue covered with teeth; a creature who can shape-shift, change color, and squirt ink. But most intriguing of all, recent research indicates that octopuses are remarkably intelligent.
Many times I have stood mesmerized by an aquarium tank, wondering, as I stared into the horizontal pupils of an octopus’s large, prominent eyes, if she was staring back at me—and if so, what was she thinking?
Not long ago, a question like this would have seemed foolish, if not crazy. How can an octopus know anything, much less form an opinion? Octopuses are, after all, “only” invertebrates—they don’t even belong with the insects, some of whom, like dragonflies and dung beetles, at least seem to show some smarts. Octopuses are classified within the invertebrates in the mollusk family, and many mollusks, like clams, have no brain.
Only recently have scientists accorded chimpanzees, so closely related to humans we can share blood transfusions, the dignity of having a mind. But now, increasingly, researchers who study octopuses are convinced that these boneless, alien animals—creatures whose ancestors diverged from the lineage that would lead to ours roughly 500 to 700 million years ago—have developed intelligence, emotions, and individual personalities. Their findings are challenging our understanding of consciousness itself.
As we gazed into each other’s eyes, Athena encircled my arms with hers, latching on with first dozens, then hundreds of her sensitive, dexterous suckers. Each arm has more than two hundred of them. The famous naturalist and explorer William Beebe found the touch of the octopus repulsive. “I have always a struggle before I can make my hands do their duty and seize a tentacle,” he confessed. But to me, Athena’s suckers felt like an alien’s kiss—at once a probe and a caress. Although an octopus can taste with all of its skin, in the suckers both taste and touch are exquisitely developed. Athena was tasting me and feeling me at once, knowing my skin, and possibly the blood and bone beneath, in a way I could never fathom.
Occasionally an octopus takes a dislike to someone. One of Athena’s predecessors at the aquarium, Truman, felt this way about a female volunteer. Using his funnel, the siphon near the side of the head used to jet through the sea, Truman would shoot a soaking stream of salt water at this young woman whenever he got a chance. Later, she quit her volunteer position for college. But when she returned to visit several months later, Truman, who hadn’t squirted anyone in the meanwhile, took one look at her and instantly soaked her again.
It seemed to Warburton that some of the octopuses were purposely uncooperative. To run the T-maze, the pre-veterinary student had to scoop an animal from its tank with a net and transfer it to a bucket. With bucket firmly covered, octopus and researcher would take the elevator down to the room with the maze. Some octopuses did not like being removed from their tanks. They would hide. They would squeeze into a corner where they couldn’t be pried out. They would hold on to some object with their arms and not let go.
Some would let themselves be captured, only to use the net as a trampoline. They’d leap off the mesh and onto the floor—and then run for it. Yes, run. “You’d chase them under the tank, back and forth, like you were chasing a cat,” Warburton said. “It’s so weird!”
Octopuses in captivity actually escape their watery enclosures with alarming frequency. While on the move, they have been discovered on carpets, along bookshelves, in a teapot, and inside the aquarium tanks of other fish—upon whom they have usually been dining.
Another measure of intelligence: you can count neurons. The common octopus has about 130 million of them in its brain. A human has 100 billion. But this is where things get weird. Three-fifths of an octopus’s neurons are not in the brain; they’re in its arms.
“It is as if each arm has a mind of its own,” says Peter Godfrey-Smith, a diver, professor of philosophy at the Graduate Center of the City University of New York, and an admirer of octopuses. For example, researchers who cut off an octopus’s arm (which the octopus can regrow) discovered that not only does the arm crawl away on its own, but if the arm meets a food item, it seizes it—and tries to pass it to where the mouth would be if the arm were still connected to its body.
“Meeting an octopus,” writes Godfrey-Smith, “is like meeting an intelligent alien.” Their intelligence sometimes even involves changing colors and shapes. One video online shows a mimic octopus alternately morphing into a flatfish, several sea snakes, and a lionfish by changing color, altering the texture of its skin, and shifting the position of its body. Another video shows an octopus materializing from a clump of algae. Its skin exactly matches the algae from which it seems to bloom—until it swims away.
For its color palette, the octopus uses three layers of three different types of cells near the skin’s surface. The deepest layer passively reflects background light. The topmost may contain the colors yellow, red, brown, and black. The middle layer shows an array of glittering blues, greens, and golds. But how does an octopus decide what animal to mimic, what colors to turn? Scientists have no idea, especially given that octopuses are likely colorblind.
But new evidence suggests a breathtaking possibility. Woods Hole Marine Biological Laboratory and University of Washington researchers found that the skin of the cuttlefish Sepia officinalis, a color-changing cousin of octopuses, contains gene sequences usually expressed only in the light-sensing retina of the eye. In other words, cephalopods—octopuses, cuttlefish, and squid—may be able to see with their skin.
One octopus Mather was watching had just returned home and was cleaning the front of the den with its arms. Then, suddenly, it left the den, crawled a meter away, picked up one particular rock and placed the rock in front of the den. Two minutes later, the octopus ventured forth to select a second rock. Then it chose a third. Attaching suckers to all the rocks, the octopus carried the load home, slid through the den opening, and carefully arranged the three objects in front. Then it went to sleep. What the octopus was thinking seemed obvious: “Three rocks are enough. Good night!”
A nice essay by philosopher and neuroscientist Sam Harris about the mystery of consciousness. Harris seems to believe, and I do too, that the fact that consciousness exists is proof that not everything in this world is material. That is, my consciousness may arise from chemical processes, or even be identical to them, but the fact that I experience something (which cannot be denied) shows that I am more than matter. Subjective experience is a non-material fact of life. Religious people would call this a soul (I wouldn’t, but be my guest).
The eternal question is, of course, how consciousness can possibly arise from non-conscious material (if at all). Harris compares this to the question of how the universe could have come into existence out of nothing. Both questions are, in the end, probably unanswerable, but at least engaging to think about. I particularly agree with the fourth paragraph below.
You are not aware of the electrochemical events occurring at each of the trillion synapses in your brain at this moment. But you are aware, however dimly, of sights, sounds, sensations, thoughts, and moods. At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents, passing through various stages of wakefulness and sleep, and from cradle to grave.
The term “consciousness” is notoriously difficult to define. Consequently, many a debate about its character has been waged without the participants’ finding even a common topic as common ground. By “consciousness,” I mean simply “sentience,” in the most unadorned sense. To use the philosopher Thomas Nagel’s construction: A creature is conscious if there is “something that it is like” to be this creature; an event is consciously perceived if there is “something that it is like” to perceive it. Whatever else consciousness may or may not be in physical terms, the difference between it and unconsciousness is first and foremost a matter of subjective experience. Either the lights are on, or they are not.
To say that a creature is conscious, therefore, is not to say anything about its behavior; no screams need be heard, or wincing seen, for a person to be in pain. Behavior and verbal report are fully separable from the fact of consciousness: We can find examples of both without consciousness (a primitive robot) and consciousness without either (a person suffering “locked-in syndrome”).
It is surely a sign of our intellectual progress that a discussion of consciousness no longer has to begin with a debate about its existence. To say that consciousness may only seem to exist is to admit its existence in full—for if things seem any way at all, that is consciousness. Even if I happen to be a brain in a vat at this moment—all my memories are false; all my perceptions are of a world that does not exist—the fact that I am having an experience is indisputable (to me, at least). This is all that is required for me (or any other conscious being) to fully establish the reality of consciousness. Consciousness is the one thing in this universe that cannot be an illusion.
The problem, however, is that no evidence for consciousness exists in the physical world. Physical events are simply mute as to whether it is “like something” to be what they are. The only thing in this universe that attests to the existence of consciousness is consciousness itself; the only clue to subjectivity, as such, is subjectivity. Absolutely nothing about a brain, when surveyed as a physical system, suggests that it is a locus of experience. Were we not already brimming with consciousness ourselves, we would find no evidence of it in the physical universe—nor would we have any notion of the many experiential states that it gives rise to. The painfulness of pain, for instance, puts in an appearance only in consciousness. And no description of C-fibers or pain-avoiding behavior will bring the subjective reality into view.
Most scientists are confident that consciousness emerges from unconscious complexity. We have compelling reasons for believing this, because the only signs of consciousness we see in the universe are found in evolved organisms like ourselves. Nevertheless, this notion of emergence strikes me as nothing more than a restatement of a miracle. To say that consciousness emerged at some point in the evolution of life doesn’t give us an inkling of how it could emerge from unconscious processes, even in principle.
I believe that this notion of emergence is incomprehensible—rather like a naive conception of the big bang. The idea that everything (matter, space-time, their antecedent causes, and the very laws that govern their emergence) simply sprang into being out of nothing seems worse than a paradox. “Nothing,” after all, is precisely that which cannot give rise to “anything,” let alone “everything.” Many physicists realize this, of course. Fred Hoyle, who coined “big bang” as a term of derogation, is famous for opposing this creation myth on philosophical grounds, because such an event seems to require a “preexisting space and time.” In a similar vein, Stephen Hawking has said that the notion that the universe had a beginning is incoherent, because something can begin only with reference to time, and here we are talking about the beginning of space-time itself. He pictures space-time as a four-dimensional closed manifold, without beginning or end—much like the surface of a sphere.
To say “Everything came out of nothing” is to assert a brute fact that defies our most basic intuitions of cause and effect—a miracle, in other words. Likewise, the idea that consciousness is identical to (or emerged from) unconscious physical events is, I would argue, impossible to properly conceive—which is to say that we can think we are thinking it, but we are mistaken. We can say the right words, of course—“consciousness emerges from unconscious information processing.” We can also say “Some squares are as round as circles” and “2 plus 2 equals 7.” But are we really thinking these things all the way through? I don’t think so.
Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don’t know what that sentence means—and I don’t think anyone else does either.
Wow. I wish we’d see more of these passionate defences of the notion of the public good. Private enterprise and economic individualism are all well and good; but they will not work without some sort of polity, some sort of public framework, that ensures collective goods.
A good, populist counterpoint against current-day Republicans taking the notion of individual responsibility to extremes.
Elizabeth Warren (born Elizabeth Herring; June 22, 1949) is an American attorney, law professor, and United States Senate candidate. She served as Assistant to the President and Special Advisor to the Secretary of the Treasury for the Consumer Financial Protection Bureau. She is also the Leo Gottlieb Professor of Law at Harvard Law School, where she has taught contract law, bankruptcy, and commercial law. In the wake of the 2008-2011 financial crisis, she became the chair of the Congressional Oversight Panel created to oversee the U.S. banking bailout (formally known as the Troubled Asset Relief Program). She long advocated for the creation of a new Consumer Financial Protection Bureau, which was established by the Dodd-Frank Wall Street Reform and Consumer Protection Act signed into law by President Barack Obama on July 21, 2010. As the special advisor she worked on implementation of the CFPB.
In other news: apparently, electropop star Ke$ha has befriended the American postmodernist literary critic Fredric Jameson. While Ke$ha is well known for such hit singles as ‘Tik Tok’, ‘Blah Blah Blah’ and ‘We R Who We R’, Jameson is more famous for his work in the analysis of structuralism and his critique of postmodernism.
There seems to be a fashion thing here, because the New York Post recently reported that Lady GaGa has also been spotted hanging around with the recently popular Slovenian neo-Marxist critical theorist Slavoj Žižek, who is well known for his work in the continental tradition of Hegelianism and Lacanian psychoanalysis.
“You wouldn’t think we had stuff to talk about,” Ke$ha, the 25-year-old diva of gritty dance-pop, said on a recent spring Saturday at the Bowery Hotel. “There’s like, so much stuff, it’s hard not to talk about stuff,” she said, laughing. “He loves to talk!”
If the “We R Who We R” singer seems uncharacteristically defensive—everyone from her yoga instructor to her meditation coach has described her as friendly and warm—it’s only because a new high-profile friendship (and maybe more!) is the subject of spurious gossip within tabloids and academic quarterlies. Three weeks ago, at Ashley Tisdale’s Hamptons bash celebrating the success of Ashley Tisdale’s West Hollywood party from the week prior, Ke$ha arrived with a YSL carryall on one arm and postmodernist Fredric Jameson on the other.
“F is amazing,” she gushed. “I’ll be, like, complaining about my music-video director, and he’ll just put everything in perspective by being like, ‘The end of the bourgeois ego, or monad, no doubt brings with it the end of the psychopathologies of that ego—what I have been calling the waning of affect. But it means the end of much more—the end, for example, of style, in the sense of the unique and the personal, the end of the distinctive individual brush stroke (as symbolized by the emergent primacy of mechanical reproduction),’ or something, and he’s right.”
Interesting stuff. This is probably more on a linguistic or conceptual level – of course these people will have the sense of a sequence of events occurring, of past, present, future and memory – but apparently they lack a separate word referring to “time” as, well, a concept. Neither do they portray time as something linear through the use of spatial terms, like we do (saying that something is “ahead” or “past” you).
According to the researchers, this has to do with the fact that these people haven’t developed “time technology” – that is, stuff like calendars to ‘measure’ time in. That kinda seems to me like a chicken-and-egg question, though: how could they have developed calendars if they didn’t have an abstract idea of time first? Or does an abstract idea of time indeed arise out of the use of such technology, which is in essence merely practical?
One caveat though: according to one critic of the study, the tribe may well use these concepts, even if the language they use doesn’t reflect it. If I understand his ideas correctly, this has to do with the confined locale in which these people live (which is interesting in itself). Because they don’t see many rivers or other multiple, similar things to abstract into one category, but rather “the river” that they know well, they don’t use generic words for stuff. This also rules out the use of general spatial terms, and therefore the application of those terms to time. Or something like that.
Anyway, very interesting, because “space” and “time” are the most fundamental categories to position oneself in relation to everything else, and these people apparently think, or talk, in another way.
An Amazonian tribe has no abstract concept of time, say researchers.
The Amondawa lack the linguistic structures that relate time and space – as in our idea of, for example, “working through the night”.
The study, in Language and Cognition, shows that while the Amondawa recognise events occurring in time, time does not exist for them as a separate concept.
The idea is a controversial one, and further study will be needed to see whether it also holds for other Amazonian languages.
The Amondawa were first contacted by the outside world in 1986, and now researchers from the University of Portsmouth and the Federal University of Rondonia in Brazil have begun to analyse the idea of time as it appears in Amondawa language.
“We’re really not saying these are a ‘people without time’ or ‘outside time’,” said Chris Sinha, a professor of psychology of language at the University of Portsmouth.
“Amondawa people, like any other people, can talk about events and sequences of events,” he told BBC News.
“What we don’t find is a notion of time as being independent of the events which are occurring; they don’t have a notion of time which is something the events occur in.”
The Amondawa language has no word for “time”, nor indeed words for time periods such as “month” or “year”.
The people do not refer to their ages, but rather assume different names in different stages of their lives or as they achieve different status within the community.
But perhaps most surprising is the team’s suggestion that there is no “mapping” between concepts of time passage and movement through space.
Ideas such as an event having “passed” or being “well ahead” of another are familiar from many languages, forming the basis of what is known as the “mapping hypothesis”.
But in Amondawa, no such constructs exist.
“None of this implies that such mappings are beyond the cognitive capacities of the people,” Professor Sinha explained. “It’s just that it doesn’t happen in everyday life.”
When the Amondawa learn Portuguese – which is happening more all the time – they have no problem acquiring and using these mappings from the language.
The team hypothesises that the lack of the time concept arises from the lack of “time technology” – a calendar system or clocks – and that this in turn may be related to the fact that, like many tribes, their number system is limited in detail.
(…) These arguments do not convince Pierre Pica, a theoretical linguist at France’s National Centre for Scientific Research (CNRS), who focuses on a related Amazonian language known as Mundurucu.
“To link number, time, tense, mood and space by a single causal relationship seems to me hopeless, based on the linguistic diversity that I know of,” he told BBC News.
Dr Pica said the study “shows very interesting data” but argues quite simply that failing to show the space/time mapping does not refute the “mapping hypothesis”.
Small societies like the Amondawa tend to use absolute terms for normal, spatial relations – for example, referring to a particular river location that everyone in the culture will know intimately rather than using generic words for river or riverbank.
These, Dr Pica argued, do not readily lend themselves to being co-opted in the description of time.
“When you have an absolute vocabulary – ‘at the water’, ‘upstream’, ‘downstream’ and so on, you just cannot use it for other domains, you cannot use the mapping hypothesis in this way,” he said.
In other words, while the Amondawa may perceive themselves moving through time and spatial arrangements of events in time, the language may not necessarily reflect it in an obvious way.
What may resolve the conflict is further study, Professor Sinha said.
“We’d like to go back and simply verify it again before the language disappears – before the majority of the population have been brought up knowing about calendar systems.”
In his new book Soul Dust: The Magic of Consciousness, Nicholas Humphrey, a distinguished evolutionary psychologist and philosopher, claims to have solved two fairly large intellectual conundrums. One is something of a technical matter, about which you may have thought little or not at all, unless you happen to be a philosopher. This is the so-called “hard problem” of consciousness. The problem is how an entity which is apparently immaterial like the human consciousness – it exists, but you can’t locate it, much less measure it – can have arisen from something purely physical, like the arrangement of cells that make up the human body. The second problem Humphrey claims he has solved is a rather more everyday one, about which you may well have puzzled yourself. This is the problem of the soul. Does it exist? What sort of a thing might it be? Does everyone have one, even atheists?
His solution to both these problems is the same, because for him the strange properties of consciousness – the fact that, for those of us who have it, the world of dull matter is suffused with meaning, beauty, relevance and awe – mean that it makes sense to think that we are permanent inhabitants of a “soul-niche” or “soul-world”. As the jacket blurb of his book has it, “consciousness paves the way for spirituality”, by creating a “self-made show” that “lights up the world for us, making us feel special and transcendent.” Consciousness and the soul are one and the same.
If this all sounds a little bit metaphysical or New Agey, too much like one of those tiresome attempts to bring religion and science into cosy alignment, hold fast. For what, on the face of it, looks like an attempt to validate spirituality using the language of science turns out to be a way to expand the domain of science by accounting for spirituality, and the soul, alongside consciousness in a fully materialist account. Soul Dust is nothing less than Humphrey’s attempt to sketch out a materialist theory of consciousness, and write a “natural history” of the soul.
I highly agree with this, though:
The second half – less technical, more poetic and, as Humphrey admits, pretty speculative – is devoted to the question of why? What is it about consciousness, this “magical” ability to perceive and exult in beauty, meaning and a sense of awe, that confers an evolutionary advantage? His answer is simply that this magical show in our own heads which enchants the world is what makes life worth living: “For a phenomenally conscious creature, simply being there is a cause for celebration.” Consciousness infuses us with the belief that we are more than mere flesh, that we matter, that we might have a life after death, that we have a “soul”. All of these are illusions – the magic of his title – but they have real effects, by making us want to live. As for religion? In his book he argues, “Long before religion could begin to get a foothold in human culture human beings must already have been living in soul land.” “Yes,” he tells me, “I suggest that organised religion is parasitic on spirituality, and in fact acts as a restraint on it.”
While the book received a lot of positive reviews, some negative ones have also appeared. Here’s one from The Guardian, for example.
I’m pretty convinced that in the end, political attitudes are not determined by rational choices or a weighing of evidence, but are derived from mentality, or ‘character’ (whatever that may be). You almost instinctively feel drawn to a certain strand of political thought, and have an inherent dislike of some others. I, for instance, am naturally freaked out by most versions of conservatism, particularly when they stress authority (and want to impose group beliefs). While I may have a lot of factual evidence or logical reasoning to ‘prove’ conservative or right wing prescriptions for society are wrong, ultimately it may come down to the fact that as a person, I don’t wish to be told what’s right by some group or authority, and value individual freedom and open-mindedness. That’s why I instinctively don’t like conservatism or ‘the right’.
But where does that come from? A while ago, we posted about cognitive neuroscientific research showing that conservatives or right-wingers have bigger amygdalas – the part of the brain that regulates fear and stress. Liberals or left-wingers, on the other hand, were shown to have bigger medial prefrontal cortexes, which suppress fear. Science Daily now reports on a new article in Current Biology, demonstrating much the same: differences in political orientation may be tied to differences in brain structure.
Individuals who call themselves liberal tend to have larger anterior cingulate cortexes, while those who call themselves conservative have larger amygdalas. Based on what is known about the functions of those two brain regions, the structural differences are consistent with reports showing a greater ability of liberals to cope with conflicting information and a greater ability of conservatives to recognize a threat, the researchers say.
“Previously, some psychological traits were known to be predictive of an individual’s political orientation,” said Ryota Kanai of University College London. “Our study now links such personality traits with specific brain structure.”
Kanai said his study was prompted by reports from others showing greater anterior cingulate cortex response to conflicting information among liberals. “That was the first neuroscientific evidence for biological differences between liberals and conservatives,” he explained.
There had also been many prior psychological reports showing that conservatives are more sensitive to threat or anxiety in the face of uncertainty, while liberals tend to be more open to new experiences. Kanai’s team suspected that such fundamental differences in personality might show up in the brain.
Pretty much ties in with what you already know about people from certain political persuasions, eh? In my experience it does, at least.
Some caveats though. First, the liberal-conservative divide is very much an Anglo-American construct. While I believe that – in terms of attitudes at least – it corresponds by and large to the ‘left’ and ‘right’-wing divide in continental Europe (which, despite nuances, pretty much exists, let’s be honest), it’s not exactly the same. Where does socialism fit in, for example? I may describe myself as left-wing, but definitely not as a socialist, while some other left-wingers would. The difference between us is probably how highly we value the role of the state in society and whether we believe in the necessity of material equality. But how could that be fitted into the fear/non-fear conservative-liberal divide described above?
Secondly, and most obviously, there’s the question to what extent upbringing and life experiences play a part in determining political attitude (and may perhaps even affect brain structure).
Interesting research though. It may explain why, even though you know and like someone very well, you still can’t get exactly to the bottom of why that person has different attitudes about something. Trying to explain that in terms of character traits can, I think, deliver interesting results.
In a neat little essay, Adam Frank writes something that resonates to some extent with some blatherings I wrote down almost a year ago. It is about the good old science versus religion debate, and about how both sides (in their simplistic form) get it wrong.
I think Frank gets it right. On both sides, to some extent, there is too great a stress on ‘knowing’ – that is, the idea that we can grasp something like ‘objective’ reality. For example, traditional, monotheistic, doctrinal religion revolves around ‘knowing’ – with certainty – that God exists, and that all the religious and moral doctrines flowing from that fact are always and everywhere correct. There is no place for any spiritual, direct experience of the divine; it is essentially about following the literal ‘truth’ of a book. This can be seen at its worst in Calvinism – which is why I think this is one of the most flawed versions of religion.
On the other hand, in positivist, materialist science a similar stress on objective ‘knowing’ can be discerned. Here, too, there is no place for something like experience, as it is reduced to whatever happens among atoms. Mankind is seen as essentially nothing more than a big machine. At the other end, there is a zeal for discovering what the universe is composed of: whether there are parallel universes, whether there is a Theory of Everything, and so on. At some hypothetical endpoint of science, we are supposed to ‘know’ everything and then be happy with it. This is a sort of ‘nihilism’ that, to me at least, is not only unsatisfying, but also a misrecognition of what it is like to ‘experience’ the world.
The fact that I can experience myself and my own consciousness is, for me at least, a sort of wonder for which science has no adequate explanation in terms of its meaning (that is, it can describe how the experience mechanically comes into being, but the experience in itself is idiosyncratic). Frank says something similar. Quoting Sartre, who said ”Even if God did exist, that would change nothing” (interpreted as meaning that even if we had ‘knowledge’ of a God, that would still leave the mystery of existence untouched), he proposes that we should focus on the act of being rather than the act of knowing.
This is where ‘spirituality’ (screw that word) comes in. But rather than having to do with ridiculous New Agey stuff, this is a call for abandoning the bastions of certainty found in monotheistic religion and science, which only lead to needless disputes, and for focusing on the immediate experience of the self – and then, maybe, on its connection to other parts of being. I think this is in a nutshell what Heidegger is about. But you can also find it in the eradication of the Cartesian mind-body divide in tenets of Eastern thinking. And in mysticism. Or do drugs.
What exactly are we looking for? What fuels so much of the passion and intensity behind the debates over religion, the debates between religions and the debates surrounding science and religion? At the heart of these debates you will often find the issue of “knowing.”
Knowing if God exists, or not. Knowing how the Universe began and if a creator was necessary, or not. Knowing how human beings “became” and what constitutes appropriate moral codes in light of that becoming. Always and again, the emphasis is on knowledge, on the certainty of understanding something, of knowing some fact and its meaning. What a tragic mistake.
The great comparative mythologist Joseph Campbell once said, “People don’t want the meaning of life, they want the experience of life.” He could not have hit the nail more firmly on the head.
One thing I have never understood in the vitriol that people manage to dredge up in these science vs. religion battles is their lack of clarity about goals. Is human spiritual endeavor really about “knowing” the existence of a superbeing? Is this academic “knowing”, as in “I can prove this to be true,” really what lies behind the spiritual genius of people like the ninth century Sufi poet Rumi, the 13th century Zen teacher Dogen, or more modern examples like Martin Luther King or Gandhi?
There are many reasons human beings institutionalized their spiritual longing into religions. Those reasons often devolved into considerations of power, control and real estate. Those institutions certainly have needed to enforce creed and doctrine, i.e. “knowledge.”
But the reasons individuals find their lives transformed by spiritual longing are intimate and deeply personal affairs having little to do with dusty “proofs for the existence of God.” As all those “spiritual but not religious” folks popping up in surveys on religion will tell you, the essence of the question is about experience, not facts.
Along a similar vein, in the pro-science/anti-religion camps one often hears the quest for understanding the universe put in equally ultimate, quasi-theological terms. Finding the final theory, the Theory of Everything, is held up as a kind of moment “when the truth shall be revealed once and for all.” While many practicing scientists might not see it this way, the scientific knowledge/enlightenment trope has been there in popular culture for a long time, reaching all the way back to Faust and up through movies like Pi.
As the philosopher Jean-Paul Sartre once said “Even if God did exist, that would change nothing.” One way to interpret his meaning was that a formulaic “knowledge” of a superbeing’s existence is beside the point when the real issue before us every day, all day is the verb “to be.”
It’s the act of being that gives rise to our suffering and our moments of enlightenment. Right there, right in the very experience of life, is the warm, embodied truth we long for so completely.
Spirituality, at its best, points us away from easy codifications when it shows us how to immerse ourselves in the simple, inescapable act of being. Science at its root is also an expression of reverence and awe for the endless varied, resonantly beautiful experience we can find ourselves immersed in. So knowing the meaning of life as encoded in a religious creed on a page or an equation on a blackboard is not the issue. A deeper, richer experience of this one life: that is the issue!
So, can we stop thinking that discussions about science and religion have to focus on who has the best set of facts?
When it comes to the natural world, it’s hard to see how science is not going to win the “facts” war hands down. But if we broaden our view to see being as the central issue, then connections between science and spiritual longing might be seen in an entirely different light.
I’m kinda hesitant to post this, as I don’t like the idea of these girls being gaped at as natural wonders. Still, the repercussions of these twins existing in this way are huge. Imagine this: not only can these girls see through each other’s eyes, they can also experience each other’s thoughts, as well as emotions. This is because they share one thalamus, the part of the brain that sends physical sensations and motor functions to the cerebral cortex. Thus, to some extent, even though they’re two persons, they share one consciousness. How does this reflect on the idea or concept of the individual?
Conjoined twins Krista and Tatiana have stunned the world’s medical experts by seeing through each other’s eyes.
The pretty four-year-old twins have two separate bodies but share the same brain.
The girls have a conjoined thalamus, the part of the brain that sends physical sensations and motor functions to the cerebral cortex, allowing them to hear each other’s thoughts and see through each other’s eyes.
But it wasn’t until their proud mum Felicia Simms saw them playing that she discovered that they could see through each other’s eyes.
She said: ‘When they are playing, one of the girls will reach over and grab something from her sister’s side and know exactly where it is without possibly being able to see it.
‘It’s absolutely awesome to watch them sometimes because there’s no way she can see the toy she is reaching for and it’s just incredible.’ The girls also seem to experience each other’s emotions.
‘If one of the girls is hurt, the other can feel it and if you discipline one the other will also cry.’
The girls, from Vernon, British Columbia, Canada, have been receiving constant medical care since they were born.
Paediatric neurosurgeon Doug Cochrane, who has looked after them from birth, confirmed they can see through each other’s eyes.
He said: ‘The twins are sharing signals from the other twin’s visual field.
‘One twin may see what the other twin does, as the brain of one of the girls receives electronic impulses from the retina of the opposite twin.’
Besides the changing colours of the leaves, one of the tell-tale signs of the onset of fall in Canada is the coming of the Massey Lectures. The Lectures are an annual event in which a noted Canadian or international scholar or cultural figure gives a week-long series of lectures across the vast country on a political, cultural or philosophical topic of their choosing. Past lecturers have included several Nobel laureates, including Martin Luther King, Jr., George Wald, Willy Brandt and Doris Lessing, as well as a host of other prominent folks like Noam Chomsky, J. K. Galbraith, Jane Jacobs, Claude Lévi-Strauss and Margaret Atwood.
This year it is writer Douglas Coupland’s turn. To prep folks for the lecture (and its subsequent publication as a book), Coupland recently published an article in the Globe and Mail entitled “A radical pessimist’s guide to the next 10 years”. The article offers “45 tips for survival and a matching glossary of the new words you’ll need to talk about your messed-up future“. Some highlights:
1) It’s going to get worse
No silver linings and no lemonade. The elevator only goes down. The bright note is that the elevator will, at some point, stop.
2) The future isn’t going to feel futuristic
It’s simply going to feel weird and out-of-control-ish, the way it does now, because too many things are changing too quickly. The reason the future feels odd is because of its unpredictability. If the future didn’t feel weirdly unexpected, then something would be wrong.
6) The middle class is over. It’s not coming back
Remember travel agents? Remember how they just kind of vanished one day?
That’s where all the other jobs that once made us middle-class are going – to that same, magical, class-killing, job-sucking wormhole into which travel-agency jobs vanished, never to return. However, this won’t stop people from self-identifying as middle-class, and as the years pass we’ll be entering a replay of the antebellum South, when people defined themselves by the social status of their ancestors three generations back. Enjoy the new monoclass!
15) Make sure you’ve got someone to change your diaper
Sponsor a Class of 2112 med student. Adopt up a storm around the age of 50.
16) “You” will be turning into a cloud of data that circles the planet like a thin gauze
While it’s already hard enough to tell how others perceive us physically, your global, phantom, information-self will prove equally vexing to you: your shopping trends, blog residues, CCTV appearances – it all works in tandem to create a virtual being that you may neither like nor recognize.
27) Hooking up will become ever more mechanical and binary
29) You will have more say in how long or short you wish your life to feel

Time perception is very much about how you sequence your activities, how many activities you layer overtop of others, and the types of gaps, if any, you leave in between activities.
34) You’re going to miss the 1990s more than you ever thought
35) Stupid people will be in charge, only to be replaced by ever-stupider people. You will live in a world without kings, only princes in whom our faith is shattered
The number of tribal categories one can belong to will become infinite. To use a high-school analogy, 40 years ago you had jocks and nerds. Nowadays, there are Goths, emos, punks, metal-heads, geeks and so forth.
41) The future of politics is the careful and effective implanting into the minds of voters images that can never be removed
42) You’ll spend a lot of time shopping online from your jail cell
Over-criminalization of the populace, paired with the triumph of shopping as a dominant cultural activity, will create a world where the two poles of society are shopping and jail.
The full list is here. I am not convinced about no. 45.
According to some, notably Aldous Huxley, the psychedelic experience (eating a shroom or taking LSD) is like experiencing the world as a baby. The alteration of the chemical balance in your brain reduces the functioning of those processes that induce rationality – the ability to filter experiences and separate important from non-important impulses; in short, everything you need to survive as a living being in the world – while opening you up to the “non-filtered” experience of the world. While this temporarily impairs your ability to function as an adult, it does enable one to experience and explore the world from angles never thought possible before.
This also has philosophical implications: if one’s experience of the world can differ so much, if one’s “normal” experience of the world is merely the one that we have been pushed and trained in, then what is reality? Or better: how can reality be known? What is “normal”?
Anyways, new research has come out that relates to these considerations. According to a paper by three researchers in Psychological Science, babies experience the world like a lantern. That is, instead of being able to focus their attention on something specific, they experience everything that happens around them, like a lantern that diffuses light in all directions around it. This thesis, by the way, was already put forward by developmental psychologist Alison Gopnik in her book The Philosophical Baby.
And it’s true, I think: have you ever seen the look on a baby’s face? The way it gazes into the world with open mouth and big eyes, staring at everything? Wonder what that’s like.
We all know what attention is. William James said it best:
Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.
James is describing the spotlight model of attention: If the world is a vast stage, then we only notice things that fall within the narrow circle of illumination. Everything outside the spotlight remains invisible. This is because, as James pointed out, the act of attention is intertwined with the act of withdrawal; to concentrate on one thing is to ignore everything else.
And this brings me to my question: How do babies pay attention? What is it like to look at the world like an infant? The question is particularly interesting because the ability to pay attention, focusing that spotlight on a thin slice of the stage, depends on the frontal cortex, the lobe of the brain behind the forehead. Alas, the frontal cortex isn’t fully formed until late adolescence – ontogeny recapitulates phylogeny – which means that it’s just beginning to solidify in babies. The end result is that little kids struggle to focus.
This has led the UC-Berkeley developmental psychologist Alison Gopnik – I’m a huge fan of her latest book, The Philosophical Baby – to suggest that babies don’t have a spotlight of attention: They have a lantern. If attention is like a focused beam in adults, then it’s more like a glowing bulb in babies, casting a diffuse radiance across the world. This crucial difference in attention has been demonstrated indirectly in a variety of experiments. For instance, when preschoolers are shown a photograph of someone – let’s call her Jane – looking at a picture, and asked questions about what Jane is paying attention to, the weirdness of their attention becomes clear. Not surprisingly, the kids agree that Jane is thinking about the picture she’s staring at. But they also insist that she’s thinking about the picture frame, and the wall behind the picture, and the chair lurking in her peripheral vision. In other words, they believe that Jane is attending to whatever she can see.
And now there’s a brand new paper in Psychological Science by Faraz Farzin, Susan Rivera and David Whitney that provides some of the best evidence yet for the lantern hypothesis. The experiment itself involved tracking the eye movements of infants between 6 and 15 months of age. The researchers used a special type of stimulus known as a Mooney face. What makes these images useful is that they can’t be perceived using bottom-up sensory processes. Instead, the only way to see the shadowed faces is to stare straight at them – unless we pay attention, the faces remain incomprehensible, just a mass of black and white splotches. In this experiment, however, the babies were able to perceive the faces even when they were located in the periphery of their visual field. (Trust me: You can’t do this.) Because their lantern was so diffuse, they were able to notice stimuli on a much vaster sensory stage. In subsequent experiments, the researchers found that this lantern of attention came with a tradeoff. While babies notice more, they see with less precision. In fact, the “effective spatial resolution” of infants’ visual perception was only half that of adults, although it steadily increased with age.
Note: Sometimes, of course, it’s helpful for adults to engage in lantern-like attention. See, for instance, this recent post on latent inhibition and creativity.
Researchers at Carnegie Mellon University, supported by grants from DARPA and Google, have developed a computer program capable of learning semantics. That is, it can learn the meaning of words and language, in all their nuances and complexities, and do so independently, by browsing the Internet. The relevance of this is that the program can learn the way humans learn, and that its knowledge can evolve.
So I wonder what happens if the program starts to learn the meaning of words like ‘self-consciousness’.
Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.
Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.
The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”
NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
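The relation-inference step described here can be sketched in miniature. The snippet below is only an illustrative toy, not NELL’s actual architecture (which couples many semi-supervised learners over hundreds of millions of pages); the seed patterns, the tiny category sets, and the naive match-count confidence score are all my own assumptions.

```python
import re

# Hypothetical seed text patterns that suggest the "plays for" relation.
PLAYS_FOR_PATTERNS = [
    "{player} plays for the {team}",
    "{player}, quarterback of the {team}",
    "the {team} signed {player}",
]

# Tiny stand-in for the knowledge base's category memberships.
PLAYERS = {"Peyton Manning"}
TEAMS = {"Indianapolis Colts"}

def extract_plays_for(sentences, min_support=2):
    """Count how many sentences match a seed pattern for each
    (player, team) pair; keep pairs with enough supporting matches."""
    counts = {}
    for s in sentences:
        for player in PLAYERS:
            for team in TEAMS:
                for pat in PLAYS_FOR_PATTERNS:
                    regex = pat.format(player=re.escape(player),
                                       team=re.escape(team))
                    if re.search(regex, s):
                        counts[(player, team)] = counts.get((player, team), 0) + 1
                        break  # count at most one hit per sentence per pair
    # Naive confidence: fraction of sentences supporting the fact.
    return {fact: n / len(sentences)
            for fact, n in counts.items() if n >= min_support}

sentences = [
    "Peyton Manning plays for the Indianapolis Colts this season.",
    "Peyton Manning, quarterback of the Indianapolis Colts, took the field.",
    "The weather in Pittsburgh was cold.",
]
print(extract_plays_for(sentences))
```

Run on these three sentences, two of them match a seed pattern, so the pair is accepted with confidence 2/3. In NELL proper, newly accepted facts feed back into the knowledge base and help propose new patterns, which is what makes the learning “never-ending”.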
The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.
NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”
Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.
According to a hitherto unknown letter in the philosopher’s own hand, discovered by the Utrecht researcher Dr. Ernst-Otto Onnasch, G.W.F. Hegel was not only a founder of German Idealism but also an alcoholic. He spent a quarter of his income as an extraordinary professor on drink, and knocked back several barrels of wine in a matter of months. Makes me feel validated.
In 1803, the German philosopher Georg Wilhelm Friedrich Hegel (Stuttgart, 27 August 1770 – Berlin, 14 November 1831) paid out more than a quarter of his allowance as an extraordinary professor for good Bordeaux wine. This emerges from a hitherto unknown letter from the philosopher to the wine house Ramann in Erfurt. The letter was discovered, shortly before Hegel’s 240th birthday, by Dr. Ernst-Otto Onnasch, a researcher in the history of philosophy at Utrecht University.
In this letter of 19 September 1803, Hegel orders about 70 litres of good-quality Bordeaux wine, for the sum of 26 thalers (Thaler). This single order amounted to more than a quarter of the first income of 100 Thaler that the University of Jena paid him as an extraordinary professor.
The letter not only testifies to Hegel’s rosy financial situation, but also offers a surprising insight into the pace of his wine consumption. In an already known letter from late November 1803 – only two months after the newly discovered order of 70 litres – he asked the Ramann firm to deliver some 35 litres of white wine.
The clientele of the Ramann brothers in Erfurt included several prominent German writers and thinkers, among them Goethe, Schiller and Schelling. Seven wine orders by Hegel with this firm were already known; the letter that Ernst-Otto Onnasch has now discovered forms a noteworthy addition to them.
Incidentally, according to Monty Python, David Hume could drink even harder than Hegel:
In honor of what would have been his 116th birthday, Dangerous Minds posts a television interview with one of my all-time heroes: writer, essayist, humanist, pacifist, intellectual, spiritual seeker and psychedelic pioneer Aldous Huxley (1894-1963).
Huxley is the archetypal open-minded figure: a hugely talented person, author of Brave New World (1932), who later in life rejected the mores of the establishment to which he belonged and began a sort of spiritual quest. And of course, Huxley openly took and advocated psychedelic drugs such as LSD and mescaline (of which he wrote in The Doors of Perception (1954), which everybody should read), and as such stands at the origin of the countercultural revolution of the 1960s. He did this not as a thrill-seeker, but as someone genuinely interested in the worthwhile possibilities of consciousness-altering substances for the human experience. This is a welcome counterexample to the present-day rigidity and bourgeois aversion to psychedelic substances.
In this interview, conducted by the famous news anchor Mike Wallace on The Mike Wallace Interview in 1958, Huxley speaks about:
[How] overpopulation relates to freedom; technological development in proportion to authoritarianism; future dictatorships; Brave New World in America; the power of advertising in politics; subliminals and brainwashing; education and group morality; societal decentralization; how productivity necessitates freedom; and of course drugs.
This is part 1. Find part 2 and part 3 here and here.
TIME reports on a recent article published in Science, authored by two researchers from Utrecht University, concerning the “unconscious will”. It turns out that our so-called conscious actions, superficially springing from our free will, are influenced or determined by a great host of circumstantial eventualities. While this doesn’t really come as a surprise, and was already well known from such phenomena as subliminal advertising, it is the extent to which this is the case that is news. For example, even stimuli you are fully conscious of can steer you in ways you are unaware of, and these can be very random things: the type of chair you’re sitting in, the air you breathe, the coffee you drink, the clothes you wear. All of these can influence the “decisions” - from the smallest to the biggest – that you make.
Apply these findings to events and situations in your life, and think them over. An interesting pastime. The question is: to what extent do we actually have free will?
[Recently] psychologists have compiled an impressive body of research that shows how deeply our decisions and behavior are influenced by unconscious thought, and how greatly those thoughts are swayed by stimuli beyond our immediate comprehension.
In an intriguing review in the July 2 edition of the journal Science, published online Thursday, Ruud Custers and Henk Aarts of Utrecht University in the Netherlands lay out the mounting evidence of the power of what they term the “unconscious will.” “People often act in order to realize desired outcomes, and they assume that consciousness drives that behavior. But the field now challenges the idea that there is only a conscious will. Our actions are very often initiated even though we are unaware of what we are seeking or why,” Custers says.
It is not only that people’s actions can be influenced by unconscious stimuli; our desires can be too. In one study cited by Custers and Aarts, students were presented with words on a screen related to puzzles — crosswords, jigsaw piece, etc. For some students, the screen also flashed an additional set of words so briefly that they could only be detected subliminally. The words were ones with positive associations, such as beach, friend or home. When the students were given a puzzle to complete, the students exposed unconsciously to positive words worked harder, for longer, and reported greater motivation to do puzzles than the control group.
The same priming technique has also been used to prompt people to drink more fluids after being subliminally exposed to drinking-related words, and to offer constructive feedback to other people after sitting in front of a screen that subliminally flashes the names of their loved ones or occupations associated with caring like nurse. In other words, we are often not even consciously aware of why we want what we want.
“These are stimuli that people are conscious of — you can feel the hard chair, the hot coffee — but were unaware that it influenced them. Our unconscious is active in many more ways than this review suggests,” he says.
Both Custers and Bargh acknowledge that their research undermines a fundamental principle used to promote human exceptionalism — indeed, Bargh has in the past argued that his work undermines the existence of free will. But Custers also points out that his conclusions are not new: people have long sensed that they are influenced by forces beyond their immediate recognition — be it Greek gods or Freud’s unruly id. What’s more, the unconscious will is vital for daily functioning and probably evolved before consciousness as a handy survival mechanism — Bargh calls it “the evolutionary foundation upon which the scaffolding of consciousness is built.” Life requires so many decisions, Bargh says, “that we would be swiftly overwhelmed if we did not have the automatic processes to deal with them.”
Describing himself as “terribly exhausted,” famed linguist and political dissident Noam Chomsky said Monday that he was taking a break from combating the hegemony of the American imperialist machine to try and take it easy for once.
“I just want to lie in a hammock and have a nice relaxing morning,” said the outspoken anarcho-syndicalist academic, who first came to public attention with his breakthrough 1957 book Syntactic Structures. “The systems of control designed to manufacture consent among a largely ignorant public will still be there for me to worry about tomorrow. Today, I’m just going to kick back and enjoy some much-needed Noam Time.”
“No fighting against institutional racism, no exposing the legacies of colonialist ideologies still persistent today, no standing up to the widespread dissemination of misinformation and state-sanctioned propaganda,” Chomsky added. “Just a nice, cool breeze through an open window on a warm spring day.”
Sources reported that the 81-year-old Chomsky, a vociferous, longtime critic of U.S. foreign policy and the political economy of the mass media, was planning to use Monday to tidy up around the house a bit, take a leisurely walk in the park, and possibly attend an afternoon showing of Date Night at the local megaplex.
Sitting down to a nice oatmeal breakfast, Chomsky picked up a copy of Time, a deceitful, pro-corporate publication that he said would normally infuriate him.
“Yes, this magazine may be nothing more than a subtle media tool intended to obfuscate the government’s violent agenda with comforting bromides, but I’m not going to let that get under my skin,” Chomsky said. “I mean, why should I? It’s absolutely beautiful outside. I should just go and enjoy myself and not think about any of this stuff.”
Added Chomsky, glancing back over at the periodical, “Even if it is just another way in which individuals are methodically fed untruths that slowly shape their perceptions of reality, dulling their ability to challenge and defy a government bent on carrying out its own selfish and destructive—no, no Noam, not today, none of that today.”