The whole point of #replace is to give us a way to implement the semantics of function calls. In Ruby, when a proc is called with one or more arguments, the body of the proc gets evaluated in an environment where each argument has been assigned to a local variable, so each use of that variable behaves like the argument itself: in a metaphorical sense, calling the proc -> x, y { x + y } with the arguments 1 and 2 produces the intermediate expression 1 + 2, and that’s what gets evaluated to produce the final result.
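For instance, in plain Ruby (a quick illustration, not from the text):

```ruby
# Calling the proc binds x = 1 and y = 2, then evaluates the body:
add = -> x, y { x + y }
result = add.call(1, 2) # => 3, the same outcome as evaluating 1 + 2 directly
```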
We can apply the same idea more literally in the lambda calculus by actually replacing variables in a function’s body when we evaluate a call. To do this, we can define an LCFunction#call method that does the replacement and returns the result:
class LCFunction
  def call(argument)
    body.replace(parameter, argument)
  end
end
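This #call leans on the #replace method defined earlier in the chapter. As a rough, self-contained sketch (not the book’s exact code), minimal Struct-based versions of the three syntax classes might look like this; note that this naive substitution handles parameter shadowing but not variable capture:

```ruby
# Sketch of the three lambda calculus syntax classes, with just enough
# behavior (#replace, #call, #to_s) to simulate a function call.
LCVariable = Struct.new(:name) do
  def replace(name, replacement)
    # A variable is replaced only if its name matches
    self.name == name ? replacement : self
  end

  def to_s; name.to_s; end
end

LCFunction = Struct.new(:parameter, :body) do
  def replace(name, replacement)
    # Don't substitute inside the body if the parameter shadows the name
    parameter == name ? self : LCFunction.new(parameter, body.replace(name, replacement))
  end

  def call(argument)
    body.replace(parameter, argument)
  end

  def to_s; "-> #{parameter} { #{body} }"; end
end

LCCall = Struct.new(:left, :right) do
  def replace(name, replacement)
    # Substitute in both subexpressions of the call
    LCCall.new(left.replace(name, replacement), right.replace(name, replacement))
  end

  def to_s; "#{left}[#{right}]"; end
end
```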
This lets us simulate the moment when a function gets
called:
>> function =
     LCFunction.new(:x,
       LCFunction.new(:y,
         LCCall.new(LCVariable.new(:x), LCVariable.new(:y))
       )
     )
=> -> x { -> y { x[y] } }
>> argument = LCFunction.new(:z, LCVariable.new(:z))
=> -> z { z }
>> function.call(argument)
=> -> y { -> z { z }[y] }
Function calls are the only thing that actually happens when a lambda calculus program is evaluated, so now we’re ready to implement #reduce. It’ll find a place in the expression where a function call can occur, then use #call to make it happen. We just need to be able to identify which expressions are actually callable…
class LCVariable
  def callable?
    false
  end
end

class LCFunction
  def callable?
    true
  end
end

class LCCall
  def callable?
    false
  end
end
…and then we can write #reduce:
class LCVariable
  def reducible?
    false
  end
end

class LCFunction
  def reducible?
    false
  end
end

class LCCall
  def reducible?
    left.reducible? || right.reducible? || left.callable?
  end

  def reduce
    if left.reducible?
      LCCall.new(left.reduce, right)
    elsif right.reducible?
      LCCall.new(left, right.reduce)
    else
      left.call(right)
    end
  end
end
In this implementation, function calls are the only kind of syntax that can be reduced. Reducing LCCall works a bit like reducing Add or Multiply from Simple: if either of its subexpressions is reducible, we reduce that; if not, we actually perform the call by calling the left subexpression (which should be an LCFunction) with the right one as its argument. This strategy is known as call-by-value evaluation—first we reduce the argument to an irreducible value, then we perform the call.
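Ruby itself is call-by-value, which we can observe directly: an argument expression is reduced to a value before the called proc’s body ever runs. A small standalone illustration (the trace helper is hypothetical, not from the text):

```ruby
# The argument (2 + 3) is evaluated first, then passed through trace,
# and only then does double's body run with the resulting value.
log = []
trace = -> n { log << "body saw #{n}"; n }
double = -> n { n + n }
result = double.call(trace.call(2 + 3))
log    # => ["body saw 5"]
result # => 10
```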
Let’s test our implementation by using the lambda calculus to
calculate one plus one:
>> expression = LCCall.new(LCCall.new(add, one), one)
=> -> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[-> p { -> x { p[x] } }][-> p { -> x { p[x] } }]
>> while expression.reducible?
     puts expression
     expression = expression.reduce
   end; puts expression
-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[-> p { -> x { p[x] } }][-> p { -> x { p[x] } }]
-> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][-> p { -> x { p[x] } }] }[-> p { -> x { p[x] } }]
-> p { -> x { p[x] } }[-> n { -> p { -> x { p[n[p][x]] } } }][-> p { -> x { p[x] } }]
-> x { -> n { -> p { -> x { p[n[p][x]] } } }[x] }[-> p { -> x { p[x] } }]
-> n { -> p { -> x { p[n[p][x]] } } }[-> p { -> x { p[x] } }]
-> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }
=> nil
Well, something definitely happened, but we didn’t get quite the result we wanted: the final expression is -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }, but the lambda calculus representation of the number two is supposed to be -> p { -> x { p[p[x]] } }. What went wrong?
The mismatch is caused by the evaluation strategy we’re using. There are still reducible function calls buried within the result—the call -> p { -> x { p[x] } }[p] could be reduced to -> x { p[x] }, for instance—but #reduce doesn’t touch them, because they appear inside the body of a function, and our semantics doesn’t treat functions as reducible.[51]
However, as discussed in Equality, two expressions with different syntax can still be considered equal if they have the same behavior. We know how the lambda calculus representation of the number two is supposed to behave: if we give it two arguments, it calls the first argument twice on the second argument. So let’s try calling our expression with two made-up variables, inc and zero,[52] and see what it actually does:
>> inc, zero = LCVariable.new(:inc), LCVariable.new(:zero)
=> [inc, zero]
>> expression = LCCall.new(LCCall.new(expression, inc), zero)
=> -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }[inc][zero]
>> while expression.reducible?
     puts expression
     expression = expression.reduce
   end; puts expression
-> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }[inc][zero]
-> x { inc[-> p { -> x { p[x] } }[inc][x]] }[zero]
inc[-> p { -> x { p[x] } }[inc][zero]]
inc[-> x { inc[x] }[zero]]
inc[inc[zero]]
=> nil
That’s exactly how we expect the number two to behave, so -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } } is the right result after all, even though it looks slightly different than the expression we were expecting.
Now that we’ve got a working semantics, let’s finish things off by building a parser for lambda calculus expressions. As usual, we can use Treetop to write a grammar:
grammar LambdaCalculus
  rule expression
    calls / variable / function
  end

  rule calls
    first:(variable / function) rest:('[' expression ']')+ {
      def to_ast
        arguments.map(&:to_ast).inject(first.to_ast) { |l, r| LCCall.new(l, r) }
      end

      def arguments
        rest.elements.map(&:expression)
      end
    }
  end

  rule variable
    [a-z]+ {
      def to_ast
        LCVariable.new(text_value.to_sym)
      end
    }
  end

  rule function
    '-> ' parameter:[a-z]+ ' { ' body:expression ' }' {
      def to_ast
        LCFunction.new(parameter.text_value.to_sym, body.to_ast)
      end
    }
  end
end
As discussed in Implementing Parsers, Treetop grammars typically generate right-associative trees, so this grammar has to do extra work to accommodate the lambda calculus’s left-associative function call syntax. The calls rule matches one or more consecutive calls (like a[b][c][d]), and the #to_ast method on the resulting concrete syntax tree node uses Enumerable#inject to roll up the arguments of those calls into a left-associative abstract syntax tree.
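To see why #inject produces the left-associative shape, here is a tiny standalone illustration using strings in place of AST nodes; the block mirrors the grammar’s LCCall.new(l, r):

```ruby
# Each step wraps the accumulated expression in one more call,
# so the leftmost call binds tightest: ((a[b])[c])[d].
result = %w(b c d).inject('a') { |l, r| "#{l}[#{r}]" } # => "a[b][c][d]"
```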
The parser and operational semantics together give us a complete
implementation of the lambda calculus, allowing us to read
expressions and evaluate
them:
>> require 'treetop'
=> true
>> Treetop.load('lambda_calculus')
=> LambdaCalculusParser
>> parse_tree = LambdaCalculusParser.new.parse('-> x { x[x] }[-> y { y }]')
=> SyntaxNode+Calls2+Calls1 offset=0, "…}[-> y { y }]" (to_ast,arguments,first,rest):
SyntaxNode+Function1+Function0 offset=0, "… x { x[x] }" (to_ast,parameter,body):
SyntaxNode offset=0, "-> "
SyntaxNode offset=3, "x":
SyntaxNode offset=3, "x"
SyntaxNode offset=4, " { "
SyntaxNode+Calls2+Calls1 offset=7, "x[x]" (to_ast,arguments,first,rest):
SyntaxNode+Variable0 offset=7, "x" (to_ast):
SyntaxNode offset=7, "x"
SyntaxNode offset=8, "[x]":
SyntaxNode+Calls0 offset=8, "[x]" (expression):
SyntaxNode offset=8, "["
SyntaxNode+Variable0 offset=9, "x" (to_ast):
SyntaxNode offset=9, "x"
SyntaxNode offset=10, "]"
SyntaxNode offset=11, " }"
SyntaxNode offset=13, "[-> y { y }]":
SyntaxNode+Calls0 offset=13, "[-> y { y }]" (expression):
SyntaxNode offset=13, "["
SyntaxNode+Function1+Function0 offset=14, "… { y }" (to_ast,parameter,body):
SyntaxNode offset=14, "-> "
SyntaxNode offset=17, "y":
SyntaxNode offset=17, "y"
SyntaxNode offset=18, " { "
SyntaxNode+Variable0 offset=21, "y" (to_ast):
SyntaxNode offset=21, "y"
SyntaxNode offset=22, " }"
SyntaxNode offset=24, "]"
>> expression = parse_tree.to_ast
=> -> x { x[x] }[-> y { y }]
>> expression.reduce
=> -> y { y }[-> y { y }]