A Basic Model for Consciousness

February 4, 2012

The nature of consciousness is a fundamental mystery of the human condition, as much a philosophical and epistemological question as a scientific one. Many of the posts so far have touched on the nature of consciousness; this one will briefly introduce a theory of it that I find attractive: Higher Order Theory.

Higher Order Theory states that the brain has thoughts, such as “I am hungry” or “It’ll probably rain tomorrow” or “William Henry Harrison was our most prolific president.” Every now and again, though, the brain has thoughts about thoughts (also called metacognition), and it is these higher-order thoughts that produce consciousness. The theory is intuitively similar to cognitivism, which views consciousness as “the sensation of your most significant thoughts being highlighted.”

Higher Order Theory has several advantages. First, it makes sense from an evolutionary standpoint. Humans certainly aren’t the only animals which are self-aware – if an animal can recognize itself in a mirror, doesn’t that count? In The Selfish Gene, Richard Dawkins postulates that consciousness could have developed gradually, as an organism’s cognitive worldview became comprehensive enough to include itself. Second, it’s a model which lends itself well to formalization via computer science and logic, since substantial work already exists on self-reference – Kleene’s Recursion Theorem, for example – which could conceivably someday be used to define the theory more precisely.
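Since I just claimed the theory lends itself to computational definition, here is a deliberately minimal sketch in Python – entirely my own toy illustration, not anything from the HOT literature – of the core distinction: first-order thoughts are about the world, higher-order thoughts are about other thoughts, and only the targets of higher-order thoughts count as conscious.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model (my own illustration, not from the HOT literature):
# a thought is either about the world or about another thought.

@dataclass
class Thought:
    content: str
    about: Optional["Thought"] = None  # set only for higher-order thoughts

    def is_higher_order(self) -> bool:
        return self.about is not None

hunger = Thought("I am hungry")  # first-order: about the world
noticing = Thought("I notice that I am hungry", about=hunger)  # second-order

# On Higher Order Theory, a state is conscious when some other thought
# targets it - so 'hunger' is conscious here, while 'noticing' is not.
all_thoughts = [hunger, noticing]
conscious = [t.about.content for t in all_thoughts if t.is_higher_order()]
print(conscious)  # ['I am hungry']
```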


Poverty of the Stimulus and the Rationalist/Empiricist Debate

December 28, 2011

(Note: This is my second post in the series about the philosophy of cognitive science)

Imagine that you’re a traveler passing through a small village in a foreign land, and as you’re trying (with difficulty) to communicate with the locals, you see a rabbit running by. As it passes, a local says, “Gavagai!” You might infer that he was referring to the rabbit. So does “gavagai” mean rabbit? Could it mean furry? It could even mean “Nice day!” You don’t know.

This dilemma was devised by Willard Van Orman Quine (1960) to illustrate the indeterminacy of translation – a form of under-determination closely related to what Noam Chomsky called the “poverty of the stimulus” problem. If we consider the brain to be a sort of information-processing device, an instantiation of the abstract concept of a “mind,” then researchers are dogged by the issue of language-learning, and of inductive learning in general. Chomsky, the renowned linguist and cognitive scientist, framed the problem thus: children can learn a grammar which can produce an infinite number of sentences after hearing only a finite number. How do children possess the universal and consistent ability to learn languages, using a relatively small amount of data to build an internal representation of language capable of making sense of unfamiliar sentences?
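To make the “infinite sentences from finite rules” point concrete, here is a minimal sketch (a toy grammar of my own, not one of Chomsky’s): a handful of rewrite rules, one of which is recursive, suffices to generate an unbounded set of sentences.

```python
import random

# Toy context-free grammar (my own example). The VP rule can re-invoke
# S, so this finite rule set generates sentences of unbounded depth:
# "the traveler says that the rabbit says that ... the rabbit runs"
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the rabbit"], ["the traveler"]],
    "VP": [["runs"], ["says that", "S"]],  # <- the recursive rule
}

def generate(symbol: str = "S") -> str:
    if symbol not in GRAMMAR:          # a terminal: just literal words
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion)

print(generate())  # e.g. "the traveler says that the rabbit runs"
```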

Chomsky believes that certain aspects of language must be innate – that certain (universal) features of language are encoded into our genome. Important as this question is, it in turn raises a bigger one: to what extent are our knowledge and our minds predetermined?

As Dwight Schrute from The Office might say, there are basically two schools of thought. The first is rationalism, the philosophy based on the belief that our mind comes equipped with innate ideas. The second is empiricism, which holds that our mind is fully shaped by experience.

Empiricism, influenced greatly by the thinker John Locke, promotes the idea of a tabula rasa, or blank slate – at birth, we are all equal, and our minds are fully shaped by the experiences and actions of our lifetimes. It’s an attractive idea, and one of the tenets of libertarianism, the political philosophy of which Locke was in many ways an intellectual ancestor. Under empiricism, the entirety of knowledge is based in ideas. Some ideas are direct; others are abstract and indirect. Mathematical concepts, such as the triangle, would be considered “abstract.” Although some would argue that triangles, along with other abstract concepts, might be impossible to properly represent in an idea-scheme, some empiricists such as John Stuart Mill went so far as to proclaim that all of mathematics COULD be depicted in terms of definitional relations between ideas. The purpose of reason and logic, then, was to organize ideas. One important empiricist belief is that the mind is a domain-general device: it uniformly picks up and learns from experiences and stimuli.

Rationalism is quite the opposite – according to this philosophy, the mind comes equipped with innate ideas. Essentially, the mind takes an axiomatic approach to learning about the world, with the axioms built in; some knowledge is not derived from experience. The argument is that certain ideas, such as identity, must be innate, or otherwise we wouldn’t be able to form any ideas at all. This is where Descartes’ “cogito ergo sum” comes in. This is also where Descartes’ dubious proofs of the existence of God come in – as I mentioned in a previous post, he believed that since we have an idea of God, it must have been implanted in us innately. And what force could possibly provide an idea of the existence of God? Why, God him- or herself. Rationalists also believe that the mind is a domain-specific device: because of its innate characteristics, it is specially structured to learn differently from different kinds of stimuli.

So which is it, empiricism or rationalism? This is, at base, the philosophical equivalent of nature vs. nurture, and in many ways it isn’t very practically applicable, because it doesn’t tell us much of anything useful. The ideas of the innateness of language and the modularity of the mind (we use different parts of our brain to respond to different situations – e.g., we’re far better at abstract reasoning when we suspect someone might be cheating) are scientific theories, not philosophical ones. The rationalism/empiricism debate is an important episode in the history of the philosophy of mind, but it dates back to a time when thinkers didn’t make a connection between the actual physical structure of the brain and the structure of the “mind.” Even now, from a scientific perspective, we can say rather little, because nature vs. nurture is an ongoing debate with far-reaching societal, and even political, considerations (think libertarianism). Some arguments, such as the poverty of the stimulus problem, favor a certain representation of the mind, but whether or not there are significant preexisting structures in inexperienced minds that produce variation or bias in learning abilities and predispositions is still an open question.

Stay tuned for my next post, on the problem of other minds and first- and third-person perspectives!



What’s it Like to Be a Bat? Contemplating the Mind-Body Problem

October 1, 2011

(Note: This is my first post in a series about the philosophy of cognitive science(s))

Today’s story begins with a well-known 17th-century French mathematician, scientist, and philosopher by the name of Descartes. Many will know Descartes for his contributions to mathematics (fun fact: he may be the reason that ‘x’ is most commonly used to express an unknown, such as a variable in an algebraic equation. For a longer discussion, see this).

Descartes also had a novel and extremely important worldview – he was one of the founders of the philosophy called “rationalism” – one which seeks absolute truths from the inside via the “pure light of reason.” Unfortunately, Descartes used the axiom “I think, therefore I am” to derive the existence of God – a dubious proof at best, one that undoubtedly calls into question Descartes’ own willingness to depart from a preexisting worldview.

But all of that is secondary. What makes Descartes relevant to the current discussion is his theory of Cartesian dualism. Descartes believed that mind and body are dualistic – two fundamentally separate substances. This view, popularly adapted into “soulism,” is still held by the majority of the world’s population.

Interestingly enough, even though Cartesian dualism is considered naive by today’s cognitive scientists, Descartes’ work shows that he was surprisingly willing to push the limits of a mechanistic worldview, even believing that nonhuman animals, or “brutes” as he called them, were entirely robotic. Humans were separate because of their capacity for formidably complex mental feats such as natural language, and because of their creative responses to novel situations. Playing historian, I’d postulate that Descartes, skeptic and rationalist though he was, was unable to completely divorce himself from a Catholic scientific worldview – the evidence being his curious and convoluted arguments for the existence of God and his placement of humans in a fundamentally different category from other animals. Of course, evolution hadn’t been discovered yet, so he’s not too much to blame.

The concept of Cartesian dualism, now rejected by the mainstream of philosophers, evolved into the “mind-body problem.”  To save a lot of explanation, the mind-body problem has essentially two parts:

1) Are things we consider “mental states” entirely physical?
2) If so, how can we explain such phenomena as consciousness?

To elaborate on question 1, we have to consider the difference between dualism and physicalism. Dualism is Descartes’ worldview, which holds that mental states are immaterial, whereas physicalism holds that they are entirely explained by the particular architecture of human brains and “minds” in general. Dualism is largely undone by the problem of causation: burning one’s hand on a hot stove causes a mental reaction, the feeling of pain; the mental state of hunger causes us to go and eat chips. How do these things affect each other – how are they causally linked – if mental and physical states are distinct? To answer that, we turn to physicalism.

Physicalism is, simply put, the view that all “things” in the universe are physical, so it rejects the possibility of supernatural entities or souls. Physicalism thus takes a “monist” view of the mind-body problem: mental states are causally linked to physical states because they are physical states themselves. Simple enough. Within physicalism there are various degrees of strength, too. The strongest view, an actual empirical hypothesis waiting to be tested, is the “type-identity theory,” which states that mental states and properties are identical to physical states and properties. This is particularly bold because it seems to suggest that every mind which contains the idea that “grass is green” would have some identical structure. I think that this inference isn’t completely accurate – different minds, such as that of a cricket compared to that of a snake, could contain the representation of grass analogously or isomorphically (Douglas Hofstadter’s term) without having to be physically identical for the type-identity theory to hold.

There are also weaker forms of physicalism, notably property dualism or nonreductive physicalism, which says that even though physicalism is correct, it is not fully reductionist – i.e., low-level properties of physical objects such as magnitude, velocity, and chemical composition are not always sufficient to explain higher-level phenomena such as psychological states. This is closely related to emergentism, which holds that even though high levels of organization in a complex system such as the brain depend on low-level activity, they cannot be explained through that lower-level activity. Some thinkers, myself included, believe that the theory of emergence sweeps the problem under the rug, and isn’t fundamentally distinct from believing in dualism anyway, since it relies on “mystery” and “the unknown.”

So that largely sums up the answer to the first part of the mind-body problem, i.e., whether mental states are physical or not. The second part is a lot trickier. We arrive at something Thomas Nagel terms “the knowledge problem” – even if we knew all the physical “facts” about the phenomenon of pain, for example, we still wouldn’t know what pain feels like just from those facts. In his famous paper “What Is It Like to Be a Bat?” Nagel delivers a critique of reductionist physicalists who reduce the problem of understanding consciousness to “low-level neuronal activity” without being more specific. According to Nagel, “consciousness is what makes the mind-body problem really intractable.” Can we imagine different intelligent cognitive architectures? By some stretch of the imagination, we may be able to imagine what it’s like to be a bat. But then we only know what it’s like for a human to be a bat…what about what it’s like for a bat to be a bat? Does being a bat feel like anything? Is that an absurd question, like asking what it feels like to be a water bottle? Or is it somewhere in between?

I guess I’ve probably provided more questions than answers. Consciousness will be dealt with in more detail later. But in the meantime, we swing around full circle to Descartes, who, as you might remember, believed animals were essentially robots. We all observe apparently intelligent behavior in “lower creatures,” even insects, but is it really intelligent, or is it just robotic, as Descartes would say? Here’s an excerpt on the “sphex wasp” provided by Douglas Hofstadter in Gödel, Escher, Bach:

When the time comes for egg laying, the wasp Sphex builds a burrow for the purpose and seeks out a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into the burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deepfreeze. To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness — until more details are examined. For example, the wasp’s routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If the cricket is moved a few inches away while the wasp is inside making her preliminary inspection, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again she will move the cricket up to the threshold and reenter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion this procedure was repeated forty times, with the same result. [from Dean Wooldridge’s Mechanical Man: The Physical Basis of Intelligent Life]
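The excerpt practically begs to be written as code. Here is a toy rendering of the routine (mine, not Hofstadter’s or Wooldridge’s) – the “sphexish” part is that the inspection step is hard-wired into the loop, so the wasp re-runs it from the top every single time the cricket is out of place:

```python
def sphex_routine(times_cricket_is_moved: int) -> int:
    """Toy model (my own) of the wasp's fixed-action loop.

    Returns how many burrow inspections the wasp performs before the
    cricket is finally dragged inside."""
    inspections = 0
    while True:
        # Drag cricket to the threshold, then go inside to inspect.
        inspections += 1
        # The experimenter moves the cricket during the first N inspections.
        cricket_still_at_threshold = inspections > times_cricket_is_moved
        if cricket_still_at_threshold:
            return inspections  # at last, drag the cricket in
        # Otherwise: restart the whole routine from the top. The wasp
        # never thinks of just pulling the cricket straight in.

print(sphex_routine(40))  # 41 - matches the forty repetitions observed
```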

Well that’s all I have to say on the mind-body problem for now. Stay tuned for more!

Further Reading:

What Is It Like to Be a Bat? – Thomas Nagel

(Note: All information is from MITECS and personal reflection (this is philosophy, after all), unless otherwise stated)


The Philosophy of Cognitive Science: An Introduction

September 22, 2011

I realize that I haven’t posted in a really really long time; my last article about a month ago was just a couple half-hearted summaries on articles I had read months before that. I’ve thus far been settling in and getting used to life in university. But now I think I’m ready to start posting again.

I’m actually aiming for something a bit more ambitious than usual. I’m going to begin a sequence of posts about a 20-page introductory essay by Robert A. Wilson in The MIT Encyclopedia of the Cognitive Sciences. The essay surveys and summarizes numerous philosophical and epistemological considerations in studying the brain and minds in general. So it’s a fairly packed 20 pages. Which is essentially the problem. MITECS is expensive, and while it contains more interesting information about artificial intelligence and cognitive science than I could ever have dreamed of, it’s also inaccessible in some ways. The text is organized almost like a paper version of Wikipedia – certain phrases or words in all-caps and a different font refer to other articles by those names. Of course, since my copy is printed on dead trees, I have to flip to each one, a cumbersome process. And just to ballpark it, the 20-page essay contains at least 150 references to other articles.

So there you have it: Wilson’s “Philosophy” essay is dense, long, somewhat inaccessible to novices, tip-of-the-iceberg-style, and expensive to access. What to do, then, about this incredible wealth of knowledge that everyone should rightfully have access to but doesn’t, even when they own the text? That’s where I come in. I’m making an ambitious commitment to myself and to any interested readers to trudge through the essay and uncover, explain, and reflect on as much as possible.

So here begins my version of a post for the half-page introduction:

The areas of philosophy that contribute to and draw on the cognitive sciences are various; they include the philosophy of mind, science, and epistemology. The most direct connections hold between the philosophy of mind and the cognitive sciences, and it is with classical issues in the philosophy of mind that I begin this introduction. I then briefly chart the move from the rise of materialism as the dominant response to one of these classic issues, the mind-body problem, to the idea of a science of the mind. I do so by discussing the early attempts by introspectionists and behaviorists to study the mind. Here I focus on several problems with a philosophical flavor that arise for these views, problems that continue to lurk backstage in the theater of contemporary cognitive science.

Whew. Here is a list of the 9 sections of the essay, each of which will merit at least one (but probably several) posting(s).

1 – Three Classical Philosophical Issues about the Mind

2 – From Materialism to Mental Science

3 – A Detour Before the Materialistic Turn

4 – The Philosophy of Science

5 – The Mind in Cognitive Science

6 – A Focus on Folk Psychology

7 – Exploring Mental Content

8 – Logic and The Sciences of the Mind

9 – Two Ways to Get Biological

Stay tuned for more in the next couple of days!


Two Pieces of Food for Thought

August 30, 2011

Looks like I haven’t posted in a while, and this post will be on the shorter side. However, here are two interesting ideas I’ve come across recently that are definitely worth sharing:

1) Money: The Unit of Caring – Eliezer Yudkowsky. Yudkowsky says it quite bluntly –

In our society, this common currency of expected utilons is called “money”. It is the measure of how much society cares about something.

This is a brutal yet obvious point, which many are motivated to deny.

And that is true. Many people complain that those who donate money “don’t care.” It does seem callous, donating money. You just thrust your hand into your fat wallet, grab a greasy handful of crumpled green pieces of paper, and walk away and enjoy the rest of the day, having “done a good thing.” What’s there to love? A social worker who gave a talk at my school once said, “There’s a difference between involvement and commitment. In eggs and ham, the hen’s involved but the pig’s committed.” And so on.

Yet at the same time, those greasy crumpled notes aren’t magically appearing in your wallet to opaquely sustain your opulent first-world lifestyle. You’re working for those bills. You paid a lot of money for a college education (and might still be doing so), and you work in a job that you are specifically qualified for, maximizing not only your own wealth but that of your employer and of society as a whole. Yudkowsky notes that a lawyer volunteering at a soup kitchen is a waste, because they could instead spend that hour working and then donate the hour’s pay to fund someone else to work for ten hours. Perhaps in a hunter-gatherer society caring could only be expressed in direct actions; in a market economy it gets abstracted through currency, but its impact is just as real.
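Here’s a quick back-of-the-envelope version of the lawyer example, with wage numbers I’ve made up purely for illustration:

```python
# Hypothetical numbers, chosen only to reproduce Yudkowsky's 1-vs-10 ratio.
lawyer_hourly_rate = 200.0   # what one extra hour of legal work earns
kitchen_hourly_wage = 20.0   # cost of hiring one hour of kitchen labor

# Option A: the lawyer volunteers directly for an hour.
kitchen_hours_from_volunteering = 1.0

# Option B: the lawyer bills one extra hour and donates the pay.
kitchen_hours_from_donating = lawyer_hourly_rate / kitchen_hourly_wage

print(kitchen_hours_from_volunteering)  # 1.0
print(kitchen_hours_from_donating)      # 10.0 hours of labor funded
```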

I was glad to see this article. I got tired of my old school’s community service model, which was 90% fundraising. But looking back, wasn’t it better to get lots of money from some really rich kids (it’s their parents’ money; they don’t care) than to force them to do something they’re bad at and don’t want to do, but that still “helps”?

2) Machiavellian Intelligence Hypothesis – we may share 99% of our genes with chimps, but is anyone really denying that we are vastly more intelligent than they are? Yet the source of this explosion in intelligence and brain size still requires full explanation. An interesting theory is the Machiavellian Intelligence Hypothesis: social intelligence brought about intelligence in general. Frans de Waal’s book Chimpanzee Politics: Power and Sex among Apes argues that escalating social manipulation and shifting coalitions among advanced primates fueled human evolution, because intelligence became as essential to survival as physical strength. The MIH fits with the theory of the “modularity of the mind” that I blogged about earlier – humans are much better at solving abstract problems when they concern someone cheating in a social situation than when they concern something trivial like playing cards. So there you go. We’re all Machiavellians at heart.


The Chinese Room – introduction

August 16, 2011

What’s the Chinese Room Argument?

I’ve blogged about the Turing Test several times before; Alan Turing held the position that if a program were indistinguishable from a human mind in all manners of interaction, then it could be considered “conscious,” whatever that means. This position is formally known as “strong AI” – in other words, hardware is inessential to the working of a mind, so cognitive states can be implemented just as well on computers as in human brains.

For many, this is a troubling stance. It is difficult to argue that computers will never come to pass the Turing Test, but there is a position which holds that computers can only simulate thought, not demonstrate actual understanding – “weak AI.” Weak AI draws a line between simulated thought and actual thought: thought simulation is the manipulation of abstract symbols to produce output indistinguishable from that of a human, whereas actual thought consists of mental “states,” connects syntax with semantics (a word such as “tree” is associated with sensory experience and memory), and involves understanding.

John R. Searle formulated an impressive argument for weak AI known as the “Chinese Room argument.” Here, we are asked to assume that the Turing Test exchange is conducted in Chinese. Instead of a computer running the program, we have an English speaker who does not understand Chinese. Their job is to receive input in the form of Chinese characters. They then follow rules in a book (the program) for manipulating symbols in other books (the databases) in order to produce Chinese characters as output. The man doing the manipulations does not understand Chinese, and the program does not allow him to; hence the program does not “understand Chinese.” Strong AI is thus false, because programs cannot create understanding.
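To see just how mindless the symbol shuffling can be, here is a deliberately crude sketch (my own, far cruder than Searle’s imagined rule book, which handles arbitrary conversation): the “room” reduces to a lookup table from input shapes to output shapes.

```python
# Toy Chinese Room (my own sketch). The rule book pairs input symbols
# with output symbols purely by shape; nothing anywhere in the system
# associates the characters with meanings.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # hypothetical entries; Searle's
    "今天天气怎么样？": "今天天气很好。",  # imagined book covers any input
}

def chinese_room(symbols_in: str) -> str:
    # The operator just matches squiggles against the book: a dict lookup.
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```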

In the essay “Minds, Brains, and Programs,” in which Searle presents this thought experiment, he also responds to a number of counterarguments. A few of them are listed here –

  1. The Systems argument – understanding is an epiphenomenon of the whole system; perhaps, although the man himself does not understand Chinese, the entire system (man, rule book, and databases) does in fact possess understanding. Searle deftly strikes this down: if the man were to memorize the program and the databases, he would have internalized the entire system, yet he would still not understand Chinese.
  2. The Robot argument – If we created a robot with the program running as its brain, which was able to take in input via cameras and sensors and act accordingly, then surely it would show understanding of the world. Searle refutes this argument as well, emphasizing the lack of qualitative difference between the original program and this robot.
  3. “Many Mansions” argument – “Your whole argument presupposes that AI is only about analog and digital computers. But that happens to be the present state of technology.” Searle pokes fun at his opponents here, noting correctly that the “strong AI” stance is supposed to be hardware-independent, such that any computational device should support intelligence if programmed correctly.

And so by introducing straw men and striking them down easily, Searle apparently solidifies his position. Or does he? Daniel Dennett provides a far more convincing counter-argument, which I’ll introduce in my next post.


“Chess; The Psychology of” – ruminations…

July 29, 2011

Proposed Subtitles:

Why 10,000 hours are so important

Why grandmasters are actually about 99 percent normal

Why you don’t have to be a genius to play chess

Why do I sound so much like Malcolm Gladwell???

Yes, the cultural consensus is changing. No longer does anyone speak of “natural talent” or “innate gifts” when they see a 12-year-old master a difficult Tchaikovsky concerto on the violin, or a young grandmaster win a few dozen simultaneous blindfolded games of chess, or Tiger win yet another tournament. No, now it’s all thanks to some opaque process called “deliberate and sustained practice.” Who’d’a thought? Countless people once convinced of their mediocrity are now overjoyed at newfound opportunities, and countless more are finding out that you can’t be Bill Gates just because the recursive factorial routine you just programmed worked the very first time. There are no free rides.
