Back in school, you must remember studying differential and integral calculus. So what on earth is lambda calculus? Well, lambda calculus is basically a simple notation for functions and their application, used in mathematics and computer science. It has had a significant impact on the field of programming language theory, and it forms the basis of modern functional programming languages like Haskell, Scala, and Erlang. The main ideas here are applying a function to an argument and forming functions by abstraction. The good thing about lambda calculus is that its syntax is quite sparse, which makes it an elegant notation for representing functions. It gives us a well-formed theory of functions as rules of computation; we will discuss this further soon. Even though the syntax of lambda calculus is sparse, it is really flexible and expressive, which makes it particularly useful in the field of mathematical logic. So what exactly is lambda calculus? How do we understand it?

**Why do we need it?**

Before we proceed further, it is important to understand why we need lambda calculus in the first place. Lambda calculus arose from the study of functions as rules. It was introduced in the 1930s by Alonzo Church as a way of formalizing the concept of effective computability. In the study of functions, it is sufficient to focus on unary functions, i.e. functions that take exactly one argument. For example, abs() is a good example of a unary function because it takes a single argument and gives the absolute value of that argument. But what if we want to pass multiple arguments? Well, in the world of lambda calculus, we deal with that using a sequence of abstractions. This procedure is called currying.

What this means is that you can cascade any number of functions, where each function takes exactly one argument. In the end, you have the effect of a function that takes 'n' arguments, but internally it is 'n' functions that each take a single argument. Now you may ask, isn't this a roundabout way of doing things? Well, as it turns out, currying is a very useful concept, and it lends itself very nicely to composing complex functions, which matters a lot when you are designing large software systems. In your life as a programmer, you must have dealt with anonymous functions at some point. The concept of anonymous functions comes from lambda calculus, and all modern programming languages offer it in some form. For example, in Python, you can define a lambda function using the keyword "lambda".
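Since the post mentions Python's `lambda` keyword, here is a small sketch of currying in Python; `add3` is a made-up example name, not a standard function:

```python
# Currying: a three-argument addition rewritten as a chain of
# unary functions, each taking one argument and returning the next.
def add3(x):
    return lambda y: lambda z: x + y + z

# Apply the arguments one at a time:
print(add3(1)(2)(3))  # 6

# Partial application falls out for free:
add_one_two = add3(1)(2)
print(add_one_two(10))  # 13
```

Notice how stopping the chain partway gives you a reusable specialized function, which is exactly why currying composes so well.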

**How do we understand lambda calculus?**

Lambda calculus can be called the smallest universal programming language in the world. It's nice to see such a direct link between mathematics and programming, right? Lambda calculus consists of a single transformation rule and a single function definition scheme. The transformation rule in question here is variable substitution. Lambda calculus is universal in the sense that any computable function can be expressed and evaluated using this formalism; therefore, we can say that it is equivalent to Turing machines. The difference, however, is that lambda calculus emphasizes the transformation rules themselves and does not care about the actual machine implementing them. It is an approach more related to software than to hardware. The central concept here is the "expression". A "name", also called a "variable", is an identifier, which can be any letter like a, b, or c. For example:

variable: x
lambda abstraction (function): f = λx.x
function application: f y

Here, the variable name 'x' is arbitrary and can be replaced with anything. The lambda abstraction is basically a function that takes 'x' as the input and returns the expression on the right hand side of the dot. In this particular case, it just returns 'x', i.e. it is the identity function. If you want to apply a function 'f' to an input argument 'y', you can write it as shown above. Lambda calculus is an elegant notation for working with applications of functions to arguments.
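The three pieces above map directly onto Python's `lambda` syntax; here is a minimal sketch:

```python
# λx.x as a Python lambda: an anonymous unary (one-argument) function.
f = lambda x: x

# Function application "f y": just call f with y.
y = 42
print(f(y))  # 42
```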

Let’s take a simple mathematical example. We are given a simple polynomial such as:

f(x) = x² + 3x + 4

What is the value of f(x) when x = 3? We compute this by substituting the value of 3 for x in the expression. If we do that, we get:

f(3) = 3² + 3(3) + 4 = 22

To do the same thing in lambda calculus, we write: λx.[x² + 3x + 4]. The lambda operator allows us to abstract over x. One can intuitively read 'λx.[x² + 3x + 4]' as an expression that is waiting for a value for the variable x. When given such a value, say 'a', the value of the expression is a² + 3a + 4.
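This abstraction-then-application step translates directly into Python; the lambda below is the same polynomial from the example:

```python
# λx.[x² + 3x + 4] as a Python lambda; calling it with 3
# performs the substitution x := 3.
f = lambda x: x**2 + 3*x + 4
print(f(3))  # 22
```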

Interestingly enough, through this example, we have arrived at the central principle in lambda calculus. It’s called β-reduction and it’s defined something like this:

(λx.M)N → M[x := N]

Now that looks like an unfriendly mathematical expression, right? I mean, what does it even mean? Well, it's not that unfriendly actually! In the above expression, M is an expression in x (like x² + 3x + 4). This means that λx.M is a function of x. Going by this logic, (λx.M)N refers to applying that function to the input argument 'N'. We can reduce an application (λx.M)N of an abstraction term 'λx.M' by simply plugging in N for the occurrences of x inside M. This is what the notation 'M[x := N]' expresses: we have taken M and replaced all free occurrences of x with N. This is the principle of β-reduction, and it is the heart of the lambda calculus.
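β-reduction can even be implemented as a short program. The sketch below uses a toy tuple representation of my own choosing (not a standard library), and it skips capture-avoiding renaming of bound variables to stay small:

```python
# Toy term representation:
#   ('var', name)         - a variable
#   ('lam', name, body)   - an abstraction λname.body
#   ('app', fn, arg)      - an application

def substitute(term, name, value):
    """Return term with free occurrences of `name` replaced by `value`."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        # The bound variable shadows `name`; stop substituting under it.
        if term[1] == name:
            return term
        return ('lam', term[1], substitute(term[2], name, value))
    return ('app', substitute(term[1], name, value),
                   substitute(term[2], name, value))

def beta_reduce(term):
    """One β-step at the root: (λx.M) N  →  M[x := N]."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, x, body), arg = term
        return substitute(body, x, arg)
    return term

# (λx.x) y  →  y
redex = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(beta_reduce(redex))  # ('var', 'y')
```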

**Okay I understand the concept, but how do we use it in programming?**

As it turns out, lambda calculus is really useful in programming. One of the most popular applications that you might be familiar with is the anonymous function. In the world of programming, anonymous functions are functions that are not bound to an identifier. Now what does that mean? It means that the function definition is not associated with any particular name; you simply don't need a name for it. But how will that even work? To understand this, we need to talk about higher-order functions. Simply put, higher-order functions either take functions as inputs, return functions as outputs, or both. Interesting, right? I mean, you can pass a function to another function and ask it to return a new function based on it, then use the returned function to operate on your data. Anonymous functions are ubiquitous in functional programming languages. When you are dealing with complex systems where function composition becomes critical, anonymous functions are really useful precisely because of how naturally they plug into higher-order functions.
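Both directions show up all the time in Python; here is a small sketch (`make_multiplier` is an illustrative name, not a library function):

```python
# Passing an anonymous function INTO a higher-order function:
# sorted() accepts any function as its sort key.
words = ["banana", "fig", "cherry"]
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'banana', 'cherry']

# A higher-order function that RETURNS a function:
def make_multiplier(n):
    return lambda x: n * x

double = make_multiplier(2)
print(double(21))  # 42
```

Note that `sorted` is stable, so "banana" stays ahead of "cherry" even though both have length six.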

Another important concept that arises from this is "lazy evaluation". Wait, what? Lazy evaluation refers to a concept in programming language theory where we delay the evaluation of an expression until its value is needed. To understand this, consider the following example. Let's say there is a huge file with 100 million lines, and your job is to process each line and extract some information. To do this, you write a function to read the file and then process it line by line. But imagine reading 100 million lines and loading them all into memory. That's a huge overhead! In most cases, your machine is going to crash. You don't really need a line in memory until you are ready to process it, and once it's processed you don't need it any more. So why load everything at once? This is where lazy evaluation comes into the picture. The concept of lazy evaluation is too deep to be discussed fully here; you can google it and read more if you want. It's totally worth it! Apart from this, lambda calculus is very useful for parallelism and concurrency as well.
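The file-processing scenario above can be sketched with Python generators, which are lazy by construction; "huge.log" is a hypothetical file name for illustration:

```python
# Lazy line processing with generators: each line is produced,
# handled, and discarded one at a time, so the whole file never
# sits in memory at once.
def read_lines(path):
    with open(path) as f:
        for line in f:          # the file object is itself lazy
            yield line.rstrip("\n")

def process(lines):
    # Generator expressions stay lazy too: nothing is computed
    # until the result is actually consumed.
    return (line.upper() for line in lines)

# Usage sketch (commented out; "huge.log" is hypothetical):
# for line in process(read_lines("huge.log")):
#     ...  # one line at a time, constant memory
```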

This is just a very basic introduction to lambda calculus. The field is really deep and very interesting. If you want to understand it better, you should look into the mathematical formulation of lambda calculus, and see how it’s used for computation.
