Posted 3/22/20
I TA’d a course on programming languages in Fall 2019, and consistently got questions about lambda calculus. The most important questions weren’t about the mechanics and grammar, but the bigger picture: “Why are we expressing programs in this convoluted and tedious way? What’s the point? What’s all of this for?” This post will address those broader concerns, provide a crash course in lambda-calculus thinking, and hopefully put lambda calc in context.
Lambda calculus is the smallest, simplest version of a programming language we can come up with. We can prove that we can do all kinds of things in lambda calc, even if it’s very tedious and unpleasant. Then we can show that if a given language (like Python) can implement a lambda calc interpreter, that language must be able to do everything that lambda calc can do. In this way lambda calculus is similar to Turing machines: if a language can simulate any arbitrary Turing machine, then it is “Turing-complete” and can do everything Turing machines can do.
We’ll do a brief example of the idea, without using lambda calculus syntax, which has a bit of a learning curve.
In lambda calculus we have functions that take exactly one argument. There are no built-in constants or data types, no variables other than function parameters, no functions of multiple arguments, and no constructs like lists, tuples, or objects. Just functions of one argument, and some rules about how variable scope works. However, functions are able to create and return other functions.
If lambda calc had a concept of numbers and addition (it does not, but we can create such things) a Python-like syntax might look like:
def add(x):
    def add2(y):
        return x + y
    return add2

# Equivalent lambda-calc syntax
# \x.\y.x+y
We can now use the above like (add(2) 3) to add 2 and 3 and get back 5. This is obviously logically equivalent to a function of two arguments, like add(2, 3). Therefore, we’ve simulated functions of multiple arguments using only functions of one argument.
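If you want to run this yourself, here’s a plain-Python version of the snippet above. The only change is that the hybrid application (add(2) 3) becomes the chained call add(2)(3); the intermediate add(2) is itself a function you can hold onto.

def add(x):
    def add2(y):
        return x + y
    return add2

add_two = add(2)   # add(2) returns a function that remembers x = 2
print(add_two(3))  # 5
print(add(2)(3))   # 5, equivalent to a two-argument add(2, 3)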
Next let’s define true and false and build an if-statement, since we can’t express a lot of important logic without an if-statement.
def true(a):
    def true2(b):
        return a
    return true2

def false(a):
    def false2(b):
        return b
    return false2

# Equivalent lambda-calc syntax
# True:  \a.\b.a
# False: \a.\b.b
If we think of these as functions with two arguments, true(a,b) always returns the first argument, and false(a,b) always returns the second argument. This seems like an awkward definition, but it comes in handy when we define an if-statement:
def if_then_else(boolean):
    def if2(true_path):
        def if3(false_path):
            return (boolean(true_path) false_path)
        return if3
    return if2

# Equivalent lambda-calc syntax
# \p.\a.\b.p a b
We can think of this as a function that takes three arguments, where the first argument is either “true” or “false”, using our previous definition. The usage works like this: if_then_else(true, 1, 3) returns 1, and if_then_else(false, 1, 3) returns 3.
Okay, that looks a little more familiar! The internal representation of true/false might still feel strange, but now we have functions with multiple arguments, and we have if-statements. This language is already a lot more useful than what we started with.
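As a rough runnable sketch, the definitions above look like this in plain Python, with the three-argument call if_then_else(true, 1, 3) becoming the chained call if_then_else(true)(1)(3):

def true(a):
    def true2(b):
        return a
    return true2

def false(a):
    def false2(b):
        return b
    return false2

def if_then_else(boolean):
    def if2(true_path):
        def if3(false_path):
            return boolean(true_path)(false_path)
        return if3
    return if2

print(if_then_else(true)(1)(3))   # 1
print(if_then_else(false)(1)(3))  # 3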
Let’s tackle something more complicated: building a list. We’ll start a little smaller and build a pair, like (2,6):
def mkpair(a):
    def mkpair2(b):
        def access(first_or_second):
            return ((if_then_else(first_or_second) a) b)
        return access
    return mkpair2

# Equivalent lambda-calc syntax
# \a.\b.\c.c a b
In plain English, this says “take three arguments. If the third argument is true, return the first argument. If the third argument is false, return the second argument.” We’ll use “true” and “false” as the third parameter to accomplish (2,6).first or (2,6)[0], depending on the kind of language you’re used to writing in. We can even write some helper functions to make this interface friendlier:
def first(pair):
    return (pair true)

def second(pair):
    return (pair false)

# Equivalent lambda-calc syntax
# First:  \p.p (\a.\b.a)
# Second: \p.p (\a.\b.b)
Now we can use pairs like:
def addAPair(pair):
    return (add(first pair)) (second pair)

def createAndAddPair(x):
    def createAndAddPair2(y):
        return addAPair( ((mkpair x) y) )
    return createAndAddPair2
A lot of parentheses to sort out, but by passing only the first two arguments to mkpair (i.e. ((mkpair 2) 6)) we create a pair of elements, and we can call first or second on this pair to extract the two halves. Great!
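Here is the same thing as a runnable Python sketch, reusing the add, true, false, and if_then_else definitions from the earlier sketches and chaining the single-argument calls:

def mkpair(a):
    def mkpair2(b):
        def access(first_or_second):
            return if_then_else(first_or_second)(a)(b)
        return access
    return mkpair2

def first(pair):
    return pair(true)

def second(pair):
    return pair(false)

p = mkpair(2)(6)
print(first(p))                  # 2
print(second(p))                 # 6
print(add(first(p))(second(p)))  # 8, the addAPair example from above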
Now we can generalize to a list of arbitrary length by defining a list as pairs nested inside pairs. In other words, we can represent a concept like [1,2,3,4,5] as pairs like (1,(2,(3,(4,5)))). It’s tedious, but we can obviously encode data of any length this way.
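Using the mkpair, first, and second sketches from above, one way to build and poke at this encoding in Python is:

lst = mkpair(1)(mkpair(2)(mkpair(3)(mkpair(4)(5))))  # (1,(2,(3,(4,5))))

print(first(lst))                           # 1
print(first(second(lst)))                   # 2
print(first(second(second(lst))))           # 3
print(second(second(second(second(lst)))))  # 5, the last element has no wrapper pair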
What if we want to write a function that can process this list? How can we convey that a list has X elements in it? Well, there are a few design choices we can make. One option is including the length as the first element of the list:
(5,(1,(2,(3,(4,5)))))
This gets a little messy, since any function that’s processing the list will have to peel the first two elements off to get any data, and will have to prepend a new first element to update the length, like:
(4,(2,(3,(4,5))))
(3,(3,(4,5)))
(2,(4,5))
(1,5)
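As a small Python sketch of that bookkeeping (again leaning on the earlier mkpair/first/second definitions, plus Python’s own subtraction for the length; drop_front is just an illustrative helper name), removing the front element of a length-prefixed list might look like:

def drop_front(lst):
    # lst is (length, (head, rest)); build (length - 1, rest)
    length = first(lst)
    data = second(lst)
    return mkpair(length - 1)(second(data))

nums = mkpair(5)(mkpair(1)(mkpair(2)(mkpair(3)(mkpair(4)(5)))))  # (5,(1,(2,(3,(4,5)))))
shorter = drop_front(nums)  # (4,(2,(3,(4,5))))
print(first(shorter))       # 4, the updated length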
Alternatively, we could include booleans at each layer to indicate whether the list has more elements. This might look something like:
(true, (1, (true, (2, (false, 3)))))
This may be easier to work with since we don’t need to keep updating the list length every time we remove an element, but it also requires storing roughly twice as many items to express the data in the list.
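As a rough sketch of working with the flagged representation (reusing the earlier Python definitions, and leaning on Python’s ordinary named recursion rather than encoding recursion in lambda calculus, which is a separate trick this post doesn’t cover), summing the elements might look like:

flagged = mkpair(true)(mkpair(1)(mkpair(true)(mkpair(2)(mkpair(false)(3)))))

def sum_list(node):
    has_more = first(node)  # the true/false functions from earlier: has_more(x)(y) picks x when "true"
    rest = second(node)     # either (value, next_node) or the final bare value
    # Python evaluates arguments eagerly, so each branch is wrapped in a
    # zero-argument lambda and only the selected branch is actually called.
    return has_more(lambda: first(rest) + sum_list(second(rest)))(lambda: rest)()

print(sum_list(flagged))  # 6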
We started with a language that only contains functions of one variable, and using this concept alone we’ve created functions of multiple arguments, boolean logic, if-statements, pairs, and lists. So much from so little!
This is precisely the reason lambda calculus exists. From an academic perspective, it’s much easier to reason about programming and write formal proofs when the building blocks are small and simple, and we’ve shown you need very little to have a capable programming language.
From an engineering perspective, lambda calculus is a useful concept for thinking about how logic can be translated from one language to another. An imperative language like C or Python works very differently than a functional language like Haskell, or a logic language like Prolog. But if we can express our thinking in C or Python in terms of functions and lists, and we can express functions and lists in lambda calculus, then those same ideas can be expressed in Haskell or Prolog, even if the implementation looks radically different.
Lambda calculus provides a baseline for what programming languages can express, and how ideas can be translated between languages. This kind of thinking (along with lambda calculus’ variable-scoping rules, which we haven’t talked about in this post) forms some of the early foundations of compiler and interpreter design.