An incredibly simple proof of the halting problem (in Scheme)

December 20, 2011

Don’t know the programming language Scheme? You don’t have to! (And even if you *did* have to, it would take less than an hour to learn the basic syntax.)

When I began this blog I intended to write a post explaining a formal proof of Turing’s halting theorem (i.e. no algorithm can exist which determines, for an arbitrary computational procedure and input, whether that procedure halts on that input. For example, a program that took in a number and printed every number greater than it up to 100 would not halt for every input, for obvious reasons). Both Alan Turing, with Turing machines, and Alonzo Church, with the lambda calculus, independently proved this undecidability result in 1936, in the course of settling Hilbert’s Entscheidungsproblem. However, both of these proofs require a lot of background explanation, and they’re not the best way to do it anyway.

The best way, I found, was a simple exercise from the textbook, Structure and Interpretation of Computer Programs. Here is a construction of the proof:

Suppose we have “a procedure halts? that correctly determines whether p halts on a for any procedure p and object a.”

Then we write the following procedures:

(define (run-forever) (run-forever))

(This is an infinite loop), and then a specially constructed procedure:

(define (try p)
  (if (halts? p p)
      (run-forever)
      'halted))

Now, imagine that we evaluate the expression (try try) and let it run. If (halts? try try) evaluates to true, signaling that the procedure try halts when called on itself, then try runs forever, a contradiction. Similarly, if halts? decides that try does NOT halt when called on itself, then try immediately halts (returning 'halted), again a contradiction.
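
To make the case analysis concrete, here is a quick sketch of my own (it is not part of the exercise): plug in any particular candidate for halts?; the naive stub below simply answers true, and evaluating (try try) exposes that answer as wrong. Any other candidate fails symmetrically on the other branch.

(define (halts? p a) #t)   ; hypothetical stand-in: claims every call halts

(define (run-forever) (run-forever))

(define (try p)
  (if (halts? p p)
      (run-forever)   ; halts? said yes, so try runs forever: wrong answer
      'halted))       ; halts? said no, so try halts: wrong again

; (try try) never returns with this stub, contradicting the #t that
; halts? reported for it.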

And there you have it – the contradiction! By reductio ad absurdum, a function such as halts? cannot exist.

This proof by construction can be implemented in any language where functions are first-class objects, such as Python, JavaScript, and many other modern ones (not Java, of course. Java kind of sucks).

And in addition, if you’re ever taking a computer science course which uses SICP as a textbook, Exercise 4.15 is answered for you!


Learning LaTeX

August 19, 2011

i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>

Cool, huh?
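
In case it helps anyone else getting started, here is a minimal standalone document of my own (not part of the original post) that typesets the same equation:

\documentclass{article}
\usepackage{amsmath}   % standard for math, though not strictly needed here
\begin{document}
\[
  i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right> = H\left|\Psi(t)\right>
\]
\end{document}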


Mathematicians vs. Engineers

August 18, 2011

I learned about probabilistic algorithms yesterday in Structure and Interpretation of Computer Programs. Probabilistic algorithms do not yield the correct answer with 100% certainty, but are designed so that the chance of an incorrect answer is infinitesimal. For example, the Fermat test for primality gives a nice logarithmic-time (very quick) algorithm that returns an incorrect answer extremely rarely (a sketch of the test in Scheme follows the quote below). I came across a nice bit of wit from the authors:

Numbers that fool the Fermat test are called Carmichael numbers, and very little is known about them other than that they are extremely rare… in testing primality of very large numbers chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the chance that cosmic radiation will cause the computer to make an error in carrying out a “correct” algorithm. Considering an algorithm to be inadequate for the first reason but not for the second illustrates the difference between mathematics and engineering.

Structure and Interpretation of Computer Programs, 53 n. (emphasis mine)
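
For reference, here is a sketch of the Fermat test along the lines of SICP section 1.2.6 (the procedure names follow the book; I assume MIT Scheme’s random, which returns a nonnegative integer below its argument):

(define (square x) (* x x))

; Compute base^exp mod m by successive squaring: O(log exp) steps.
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))

; Fermat's little theorem: if n is prime, then a^n mod n = a for any a < n.
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))

; Run the test the given number of times; a composite passes only if it
; fools every random trial (or is one of the rare Carmichael numbers).
(define (fast-prime? n times)
  (cond ((= times 0) #t)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else #f)))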

So…if I’m a computer scientist, does that make me an engineer or a mathematician? The authors seem to have made their own decision.

Of course, Sheldon Cooper has his own, opposing opinion:


Computer Science and Philosophy: Procedural Epistemology

July 23, 2011

Today Amazon delivered my first-ever textbook for college (!) and the latest addition to the “computer science canon” that I’m currently collecting – Abelson, Sussman, and Sussman’s classic Structure and Interpretation of Computer Programs.

…I think it’s quite an exciting day. Naturally, I began to read the first few pages and came across a fascinating paragraph: Read the rest of this entry »


We’re really good at catching cheaters

July 15, 2011

I just finished reading Wall Street iconoclast Nassim Nicholas Taleb’s Fooled By Randomness, which was basically a toned-down precursor to the inflammatory (yet highly accurate) Black Swan. A particularly interesting thing Taleb mentions is that a lot of human irrationality comes from modularization, or using different parts of the brain for different situations. Some parts of the brain, especially those geared towards abstract reasoning, tend to be weak in most people. However, the part that we use to catch cheaters happens to be exceptionally strong.

I found this interesting enough that I made a mental note to research it some time. Fortunately, the next book I picked up, Malcolm Gladwell’s The Tipping Point (I’m going through books pretty quickly at this point), referenced a study on this exact thing! So here goes:

Read the rest of this entry »


Computer Science and Philosophy: The Most Human Human

July 8, 2011

I recently finished a fascinating book by Brian Christian, The Most Human Human. Christian took part in the 2009 Loebner Prize, an annual “official” Turing test in which prizes are given to the program that fools the most judges into thinking it is human (or that elicits, on average, the lowest confidence in its non-personhood across the judges’ assessments). Christian did in fact win the “Most Human Human” award, and to sum up some of his insights on how far artificial intelligence has come in terms of holding a conversation, I crafted a short Citizen’s Guide to Judging Whether It Is a Computer or Not:

Read the rest of this entry »


Black Swans, the “Ludic Fallacy”, and Bayesian inference

June 28, 2011

A few days ago, I finished reading The Black Swan by Nassim Nicholas Taleb, which goes in-depth on topics such as judgment under uncertainty and the problems with unrealistic models that deliberately ignore unlikely but potentially highly influential phenomena in order to stay simple. Taleb emphatically argues against the use of the Gaussian bell curve, or the GIF (“great intellectual fraud”) as he likes to call it, pointing out that it forecasts events several standard deviations from the mean as extremely unlikely. He notes that an event 4 SD away is twice as likely as one 4.15 SD away, and that “the precipitous decline in odds of encountering something is what allows you to ignore outliers. Only one curve can deliver this decline, and it is the bell curve (and its nonscalable siblings).”

Taleb instead expounds scalable “Mandelbrotian” curves, which, like all things Mandelbrotian, are fractal: the speed of the decrease in odds as one moves away from the mean is constant, not declining. So the odds of having a net worth of over 8 million pounds are 1 in 4,000, for over 16 million pounds they are 1 in 16,000, for over 32 million 1 in 64,000, and so on. Thus not only does the Mandelbrotian curve put more weight on outliers (“Black Swans”, or unpredictable but highly influential events, such as stock market crashes, that Gaussian models miss), but any small portion of the curve resembles the larger curve in a fractal sort of way.
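
To see what “scalable” means numerically, here is a tiny sketch of my own (the exponent and calibration are simply reverse-engineered from the net-worth figures above, not taken from the book): with a tail exponent of 2, each doubling of the threshold divides the odds by 4.

; Hypothetical scalable tail, calibrated so that the odds of exceeding
; 8 million are 1 in 4,000.
(define (tail-odds threshold)
  (/ 1.0 (* 4000 (expt (/ threshold 8e6) 2))))

; (tail-odds 8e6)   ; => .00025      (1 in 4,000)
; (tail-odds 16e6)  ; => .0000625    (1 in 16,000)
; (tail-odds 32e6)  ; => .000015625  (1 in 64,000)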

Read the rest of this entry »