Totally bored and jobless over this surprisingly silent weekend, I decided to revisit some of the Haskell programs that I had written when I took the Functional programming course during the previous year. While I was at it, I remembered this particular topic which struck me as interesting back then (..and it still is). So I am going to write about it despite the fact that it is something that would be covered by any basic Haskell textbook or tutorial. Well, that just goes to show how jobless I am!

The first time we were asked to reverse a list in Haskell, I did it using the most straightforward method I could think of.

```haskell
revList :: [a] -> [a]
revList []     = []
revList (x:xs) = (revList xs) ++ [x]
```

What we are doing here is: given a list of the form `(x:xs)`, where `x` is the element at the head of the list and `xs` is the tail of the list (which is again a list!), we recursively apply `revList` to the tail and then concatenate the result with the singleton list containing the head `x`.
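To see the definition in action, here is a small self-contained sketch (the `main` wrapper is mine, not part of the original code):

```haskell
-- The straightforward (quadratic) reversal from above.
revList :: [a] -> [a]
revList []     = []
revList (x:xs) = revList xs ++ [x]

main :: IO ()
main = do
  print (revList [1, 2, 3, 4 :: Int])  -- [4,3,2,1]
  print (revList "hello")              -- "olleh"
```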

Turns out this is a really bad way of reversing a list.

Let us analyse why.

One of the ways of looking at a Haskell list, say `[x,y,z,w]`, is `x:(y:(z:(w:[])))`, reflecting the `head:tail` pattern mentioned earlier.

Observe that to place the element `x` at the end of the reversed list, we concatenate the reversed list with the singleton list `[x]`. Now, the concatenation of two lists, even in Haskell, is at best a linear time operation. One could think of a way of concatenating lists as follows:

```haskell
(++) :: [a] -> [a] -> [a]
(++) []     l = l
(++) (x:xs) l = x : ((++) xs l)
```

Thus the concatenation operation has to walk through the first list in its entirety.
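To check that this hand-rolled definition behaves like the built-in operator, here is a sketch where I rename it `append` to avoid clashing with the Prelude's `(++)` (the name `append` is my own choice):

```haskell
-- Hand-rolled concatenation, renamed to avoid shadowing the
-- Prelude's (++); note it recurses through every cell of the
-- first list before it ever reaches the second.
append :: [a] -> [a] -> [a]
append []     l = l
append (x:xs) l = x : append xs l

main :: IO ()
main = do
  print (append [1, 2] [3, 4 :: Int])                       -- [1,2,3,4]
  print (append [1, 2] [3, 4] == [1, 2] ++ [3, 4 :: Int])   -- True
```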

Let $T(n)$ represent the time taken to reverse a list of $n$ elements. Clearly $T(0) = O(1)$. And suppose $C(m, k)$ represents the time taken to concatenate two lists where the size of the first list is $m$ and the size of the second list is $k$. From the code, we can set up the following recurrence:

$$T(n) = T(n-1) + C(n-1, 1) + O(1)$$

Since concatenation is linear in the length of its first argument, $C(n-1, 1) = O(n)$, and the recurrence solves to

$$T(n) = O(n^2)$$

Thus $T(n)$ is quadratic in $n$. This is hardly the time complexity we would have liked!
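One way to make the quadratic bound concrete: the `(++)` calls walk lists of length $n-1, n-2, \ldots, 0$, so the total number of cons cells traversed is $\sum_{k=0}^{n-1} k = n(n-1)/2$. A quick sanity check of that arithmetic (the helper names here are mine):

```haskell
-- Cells traversed by the naive reversal of an n-element list:
-- (n-1) + (n-2) + ... + 0.
traversals :: Int -> Int
traversals n = sum [0 .. n - 1]

-- The closed form n(n-1)/2 for the same sum.
closedForm :: Int -> Int
closedForm n = n * (n - 1) `div` 2

main :: IO ()
main = do
  print (traversals 10)                                        -- 45
  print (all (\n -> traversals n == closedForm n) [0 .. 100])  -- True
```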

The reason for this is the fact that appending a single element to a list takes linear time proportional to the length of the list. This is due to the manner in which lists are internally represented in Haskell.

However, adding an element to the head of the list is a constant time operation. The code for doing this would look something like:

```haskell
add_to_head :: Int -> [Int] -> [Int]
add_to_head x l = (x:l)
```
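A tiny usage sketch of `add_to_head` (the `main` wrapper is mine); note that prepending never inspects the rest of the list, which is exactly why it is constant time:

```haskell
add_to_head :: Int -> [Int] -> [Int]
add_to_head x l = (x : l)

main :: IO ()
main = do
  print (add_to_head 0 [1, 2, 3])        -- [0,1,2,3]
  -- Prepending doesn't force the tail: it works even on an
  -- infinite list, which appending to the end never could.
  print (take 5 (add_to_head 0 [1 ..]))  -- [0,1,2,3,4]
```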

Thus, given a list `(x:(y:(z:(w:[]))))`, adding, say, an element `v` at the head would amount to storing the list as `v:(x:(y:(z:(w:[]))))`. Thus we don't have to worry about the length of the list. In this sense, the Haskell list is similar to a stack: adding an element to the top or removing an element from the top is a constant time operation, but adding to the bottom requires popping out all the elements, pushing the new element, and then pushing all the elements back (ok, lists do better than that).

So, we can make use of this fact to optimize `revList` so that the operation takes time linear in the length of the list. The optimization involves the same trick that is used in reversing a stack: pop the element at the head of the first stack, push it onto another stack, and repeat until the first stack is empty. This requires only linear time.

```haskell
listRevFast :: [a] -> [a]
listRevFast l = _listRevFast l []
  where
    _listRevFast :: [a] -> [a] -> [a]
    _listRevFast []     l = l
    _listRevFast (x:xs) l = _listRevFast xs (x:l)
```

And this one, you can verify, requires linear time since it visits each element of the original list exactly once.
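To convince yourself that the fast version agrees with the naive one (and with the Prelude's own `reverse`), here is a quick comparison sketch; the harness is mine, and I use `go` as the name of the accumulator helper:

```haskell
-- Naive quadratic reversal, as defined earlier in the post.
revList :: [a] -> [a]
revList []     = []
revList (x:xs) = revList xs ++ [x]

-- Accumulator-based linear reversal (the stack trick).
listRevFast :: [a] -> [a]
listRevFast l = go l []
  where
    go []     acc = acc
    go (x:xs) acc = go xs (x:acc)

main :: IO ()
main = do
  let xs = [1 .. 1000] :: [Int]
  print (listRevFast xs == revList xs)  -- True
  print (listRevFast xs == reverse xs)  -- True
```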