Tree traversal without recursion: the tree as a state machine
Friday, 16 Feb 2007 [Sunday, 23 Dec 2007]
I am currently working my way through Higher-Order Perl, a highly recommended tome on effective and practical use of functional programming techniques in Perl. As you’d expect from a book that spends a lot of time discussing such concepts as function composition and recursion, the subject of tree traversal makes a frequent appearance:
- First, as an introductory example to recursion;
- then, when discussing how to turn recursive functions into iterators using an explicit stack (which permits breadth-first searching);
- again recursively, in the section on tail call elimination, where the tail-recursive call is eliminated first, and the other recursive call is then replaced by an explicit stack.
There may be even more appearances later in the book that I’ve yet to discover; as I said, I’m not through with it yet. However, the book changes topic after that, at least momentarily, so I stopped to ponder. It occurred to me that this is the entire extent to which discussions of tree traversal typically go. Another obvious option that occurred to me many years ago is not discussed anywhere that I’ve seen, though it is occasionally mentioned as a possibility in passing:
You can get rid of any stacks whatsoever by keeping a parent pointer in the tree node data structure. Effectively, this turns the tree into a (sort of) state machine. While traversing, you need no memory other than the current and previous node/state. The traversal algorithm is very simple:
- If the previous node is this node’s parent node, descend to the left child node.
- If the previous node is this node’s left child node, descend to the right child node.
- If the previous node is this node’s right child node, ascend to the parent node.
Obviously, if there is no left child to descend to, you try the right one; and if there is no right child to descend to, you ascend to the parent. Traversal is complete when an attempt to ascend to the parent node fails because there is no parent. Pre-, post- and in-order traversal can be implemented simply by changing which of the conditions implies that the current node must be visited: if you visit the node when coming from…
- … the parent node, you get pre-order traversal.
- … the left child node, you get in-order traversal.
- … the right child node, you get post-order traversal.
Assuming all tree nodes are instances of a class which has parent, left and right methods and uses undef to signify the absence of a pointer, then the following is an implementation of the in-order version of the traversal algorithm in Perl:
sub traverse_tree {
    my ( $tree_root, $visitor_callback ) = @_;
    my ( $curr_node, $prev_node ) = $tree_root;   # $prev_node starts out undef
    while ( $curr_node ) {
        my $next_node;
        if ( $prev_node == $curr_node->parent ) {     # came from above
            $next_node = $curr_node->left;
            if ( not $next_node ) {                   # no left child:
                $visitor_callback->( $curr_node );    # visit now, then go right or up
                $next_node = $curr_node->right || $curr_node->parent;
            }
        }
        elsif ( $prev_node == $curr_node->left ) {    # came from the left child
            $visitor_callback->( $curr_node );
            $next_node = $curr_node->right || $curr_node->parent;
        }
        elsif ( $prev_node == $curr_node->right ) {   # came from the right child
            $next_node = $curr_node->parent;
        }
        ( $prev_node, $curr_node ) = ( $curr_node, $next_node );
    }
}
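To see the function in action, here is a quick exercise of it on a small tree. The Node class below is my own minimal stand-in (the post does not define one): a blessed hash with parent, left, right and value accessors, as assumed above. The traversal function is repeated verbatim except for a scoped “no warnings” line, since the undef-against-undef comparison at the root is intentional.

```perl
use strict;
use warnings;

# A minimal node class of my own devising, matching the interface assumed
# in the post: parent/left/right accessors, undef meaning "no such node".
package Node;
sub new    { my ( $class, %arg ) = @_; bless { %arg }, $class }
sub parent { $_[0]{parent} }
sub left   { $_[0]{left}   }
sub right  { $_[0]{right}  }
sub value  { $_[0]{value}  }

package main;

# traverse_tree exactly as above, plus a scoped "no warnings": the
# undef-vs-undef comparison when starting at the root is intentional.
sub traverse_tree {
    my ( $tree_root, $visitor_callback ) = @_;
    my ( $curr_node, $prev_node ) = $tree_root;
    while ( $curr_node ) {
        no warnings 'uninitialized';
        my $next_node;
        if ( $prev_node == $curr_node->parent ) {
            $next_node = $curr_node->left;
            if ( not $next_node ) {
                $visitor_callback->( $curr_node );
                $next_node = $curr_node->right || $curr_node->parent;
            }
        }
        elsif ( $prev_node == $curr_node->left ) {
            $visitor_callback->( $curr_node );
            $next_node = $curr_node->right || $curr_node->parent;
        }
        elsif ( $prev_node == $curr_node->right ) {
            $next_node = $curr_node->parent;
        }
        ( $prev_node, $curr_node ) = ( $curr_node, $next_node );
    }
}

# Build this tree:     4
#                     / \
#                    2   6
#                   / \
#                  1   3
my %n = map { $_ => Node->new( value => $_ ) } 1, 2, 3, 4, 6;
@{ $n{4} }{qw( left right )} = ( $n{2}, $n{6} );
@{ $n{2} }{qw( left right )} = ( $n{1}, $n{3} );
$n{$_}{parent} = $n{4} for 2, 6;
$n{$_}{parent} = $n{2} for 1, 3;

my @visited;
traverse_tree( $n{4}, sub { push @visited, $_[0]->value } );
print "@visited\n";   # in-order: 1 2 3 4 6
```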
This is the most straightforward implementation, but it has a fault: there is some code duplication between the coming-from-parent and coming-from-left-child states. The complication arises because the node must be visited even when it lacks the particular pointer you would come from. In the case of in-order traversal, you visit the current node when you come from its left child; but when a node has no left child, it must still be visited. The absence of the left child is discovered while the previous node is the parent, so that state must take care of visiting the current node before going on to try to descend to the right.
The fix is conceptually simple, but not easy to express in code. You need a way to fall through from the body of one branch into the next without checking that branch’s condition, much the way C’s switch statement works, where branches fall through by default and require an explicit break to exit. A switch statement in C is simply a structured expression of a jump table (but note that you couldn’t actually use a switch statement in C for this, because the case conditions in this algorithm wouldn’t be constant expressions); so the Perl version will need a couple of explicit gotos:
sub traverse_tree {
    my ( $tree_root, $visitor_callback ) = @_;
    my ( $curr_node, $prev_node ) = $tree_root;   # $prev_node starts out undef
    while ( $curr_node ) {
        my $next_node;
        {
            # dispatch on where we came from; each state falls through
            # into the next until a descent succeeds
            goto FROM_PARENT if $prev_node == $curr_node->parent;
            goto FROM_LEFT   if $prev_node == $curr_node->left;
            goto FROM_RIGHT  if $prev_node == $curr_node->right;
            FROM_PARENT:
            last if $next_node = $curr_node->left;
            FROM_LEFT:
            $visitor_callback->( $curr_node );
            last if $next_node = $curr_node->right;
            FROM_RIGHT:
            $next_node = $curr_node->parent;
        }
        ( $prev_node, $curr_node ) = ( $curr_node, $next_node );
    }
}
In this rendition of the algorithm, the reformulation required to implement pre- or post-order traversal is trivial: you just move the callback invocation to the appropriate label.
(It is in fact quite simple to implement all three variants in a single function: just put a call in every branch and make each conditional on an extra parameter, e.g. $visitor_callback->( $curr_node ) if $order == -1; where $order == 0 means in-order traversal, in which case the parameter is optional.)
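Concretely, the combined version might look like the following sketch. The assignment of -1 to pre-order and 1 to post-order is my own arbitrary convention; only 0 meaning in-order is fixed above. The hypothetical Node class from before is repeated to make the example self-contained.

```perl
use strict;
use warnings;

# All three orders in one function. $order: -1 = pre-order, 0 = in-order
# (the default), 1 = post-order. Which non-zero value means which order
# is an arbitrary choice of this sketch.
sub traverse_tree {
    my ( $tree_root, $visitor_callback, $order ) = @_;
    $order = 0 unless defined $order;
    my ( $curr_node, $prev_node ) = $tree_root;
    while ( $curr_node ) {
        no warnings 'uninitialized';   # undef-vs-undef at the root is fine
        my $next_node;
        {
            goto FROM_PARENT if $prev_node == $curr_node->parent;
            goto FROM_LEFT   if $prev_node == $curr_node->left;
            goto FROM_RIGHT  if $prev_node == $curr_node->right;
            FROM_PARENT:
            $visitor_callback->( $curr_node ) if $order == -1;
            last if $next_node = $curr_node->left;
            FROM_LEFT:
            $visitor_callback->( $curr_node ) if $order == 0;
            last if $next_node = $curr_node->right;
            FROM_RIGHT:
            $visitor_callback->( $curr_node ) if $order == 1;
            $next_node = $curr_node->parent;
        }
        ( $prev_node, $curr_node ) = ( $curr_node, $next_node );
    }
}

# The same hypothetical Node class and example tree as before:
package Node;
sub new    { my ( $class, %arg ) = @_; bless { %arg }, $class }
sub parent { $_[0]{parent} }
sub left   { $_[0]{left}   }
sub right  { $_[0]{right}  }
sub value  { $_[0]{value}  }
package main;

my %n = map { $_ => Node->new( value => $_ ) } 1, 2, 3, 4, 6;
@{ $n{4} }{qw( left right )} = ( $n{2}, $n{6} );
@{ $n{2} }{qw( left right )} = ( $n{1}, $n{3} );
$n{$_}{parent} = $n{4} for 2, 6;
$n{$_}{parent} = $n{2} for 1, 3;

for my $order ( -1, 0, 1 ) {
    my @visited;
    traverse_tree( $n{4}, sub { push @visited, $_[0]->value }, $order );
    print "@visited\n";
}
# prints:
# 4 2 1 3 6   (pre-order)
# 1 2 3 4 6   (in-order)
# 1 3 2 6 4   (post-order)
```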
This approach consumes more memory than storing the parent pointers on a stack would. On a stack, the parent pointers are transient and take only O(log n) space for a balanced tree, because you only need to keep around the pointers for a single path down the tree. Storing the parent pointers with the nodes, in contrast, takes O(n) space (or more precisely, exactly n pointers). Another benefit of using iterative traversal with an explicit stack is that you can swap the stack for a FIFO, which gives you breadth-first search for free.
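For comparison, here is a sketch of my own (not the book’s code) of an explicit-stack in-order traversal. The stack holds exactly the ancestors of the current node, which is where the path-length space bound comes from, and the nodes need no parent pointers at all, so plain hashes suffice.

```perl
use strict;
use warnings;

# In-order traversal with an explicit stack: ancestors live transiently
# on the stack instead of permanently in the nodes.
sub traverse_with_stack {
    my ( $tree_root, $visitor_callback ) = @_;
    my @stack;
    my $node = $tree_root;
    while ( $node or @stack ) {
        if ( $node ) {                  # dive left, remembering the path
            push @stack, $node;
            $node = $node->{left};
        }
        else {                          # backtrack: visit, then go right
            $node = pop @stack;
            $visitor_callback->( $node );
            $node = $node->{right};
        }
    }
}

# The same example tree, as plain hashes with no parent pointers:
my $tree = {
    value => 4,
    left  => { value => 2,
               left  => { value => 1 },
               right => { value => 3 } },
    right => { value => 6 },
};

my @visited;
traverse_with_stack( $tree, sub { push @visited, $_[0]{value} } );
print "@visited\n";   # 1 2 3 4 6
```

(Note that the stack-for-FIFO swap applies to the pre-order formulation, where a node’s children are pushed as they are discovered; this in-order variant does not convert quite so directly.)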
What I like about this traversal algorithm in spite of its shortcomings (and the reason I felt compelled to post about it) is how exceptionally simple and intuitive to grasp it is. When it first occurred to me, it seemed too simple to work – but no, the position and direction of the traverser encode enough information to guide its path along the entire tree in the right order without any ambiguity, as you can easily verify just by looking at a drawing of a tree. So it’s a pity that this doesn’t ever seem to be discussed at any length.
Update: in email, Stu Fleming points out:
So now your trade-off is between memory required to store the tree versus memory required at run-time to traverse. If you know the maximum possible number of nodes that you will allocate, then you win because your memory requirement is then static. If you don’t, it isn’t an advantage as your dynamic memory requirement is still unknown.
This is an angle I hadn’t quite considered.
Update: I’ve gotten several mails wondering what the purpose of my posting this was. Let me reiterate: I’m not advocating the use of this algorithm in any particular circumstance. In fact, under almost any constraints, there are better choices. I just thought the symmetry in it was beautiful.
Update: Todd Lehman pointed out that given node-level locks, this algorithm allows concurrent traversal and update of the tree. Any atomic operation other than detaching a non-leaf node is safe.