Monday, March 24, 2014

Strong Induction and run-time complexity for recursive functions

        Hi, it’s nice to be back. In my previous slog, I mentioned analyzing the run-time complexity of recursive functions and the master theorem, so I will spend most of today's post on that, using merge sort as an example.

        At the very beginning, I would like to introduce a digression: the effectiveness of strong induction in writing recursive functions. The idea of strong induction is that, given a working base case and the assumption that the function works for all smaller cases, the function will eventually solve the bigger problem. Like the inorder traversal we did in Lab 9, I simply wrote something roughly like the sketch below.
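(This is a reconstruction of the idea, not the exact lab code: it uses plain Python lists instead of the course's LinkedList class, so "prepend" becomes list concatenation.)

class BTNode:
    """A minimal binary tree node, just for this sketch."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(t):
    # base case: a node with no subtrees is its own inorder traversal
    if t.left is None and t.right is None:
        return [t.value]
    # no left subtree: the root comes first, then the right subtree
    if t.left is None:
        return [t.value] + inorder(t.right)
    # no right subtree: the left subtree comes first, then the root
    if t.right is None:
        return inorder(t.left) + [t.value]
    # both subtrees: by strong induction, assume the recursive calls already
    # work on the smaller trees, then stitch together left, root, right
    return inorder(t.left) + [t.value] + inorder(t.right)

For example, inorder(BTNode(2, BTNode(1), BTNode(3))) returns [1, 2, 3].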

So I have a working base case: if a binary tree has no left or right subtree, simply return the linked list containing just the root of the tree. Then there are three other cases: a tree without a left subtree, a tree without a right subtree, and a tree with both subtrees. For the first two cases, by strong induction I assume my function works for the smaller cases, so I just change where I prepend the root of the tree and call the function on the remaining subtree. For the last case, also by strong induction, I call the function on the right subtree, prepend the root, then prepend the linked list generated by calling ‘inorder’ on the left subtree. And then I’m done: by assuming the function works for smaller cases, I eventually solved the larger case. Note you need to change the prepend method a bit so it can prepend linked lists. So through strong induction, all I need to do is find the base case and use the smaller cases to construct the larger case, which makes writing recursive functions much easier.

    It’s a huge digression actually, ha-ha! Now I’m back on track. I’ll take the merge sort we did in class as an example.
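Here is a minimal sketch of the structure (an assumption of what the in-class version looked like, not the exact lecture code), with a short merge helper so it actually runs:

def merge(left, right):
    # combine two already-sorted lists into one sorted list;
    # each element is handled once, so this takes about c*n steps
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

def merge_sort(L):
    # base case: a list of length 0 or 1 is already sorted
    if len(L) < 2:
        return L[:]
    else:
        # by strong induction, assume merge_sort works on the two halves
        return merge(merge_sort(L[:len(L)//2]), merge_sort(L[len(L)//2:]))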
Notice this is probably not exactly the version from class, and the merge helper is kept as short as possible for space. So for merge sort, by strong induction, this should work: it has a valid base case and uses smaller cases to construct a larger case. But its run-time complexity is not as easy to find. Since the base case (length of L smaller than 2) takes only constant time, I’ll ignore it and analyze the else part. The first thing we need to do is let M(n) be the total number of steps merge sort takes on a list of length n. Then both merge_sort(L[:len(L)//2]) and merge_sort(L[len(L)//2:]) take approximately M(n/2) steps, ignoring constants because of the nature of big-Oh. For merge, since it’s a plain iterative function, we can easily see that merging two lists of length n/2 takes about c*n steps in total, where c is a constant. So
M(n) = M(n/2) + M(n/2) + cn = 2M(n/2) + cn = 2(2M(n/4) + cn/2) + cn = 4M(n/4) + 2cn = ....

By telescoping this recurrence (worked out a bit more below), we can see that M(n)/n eventually grows like log n, so M(n) is O(n log n). For more information on proving this, I suggest you check these lecture slides (http://www.cs.princeton.edu/courses/archive/spr07/cos226/lectures/04MergeQuick.pdf). So the idea is to first find the recurrence for the run-time complexity and then telescope it; eventually the recurrence can be solved and the big-Oh bound follows.
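To fill in the telescoping a bit more, here is a sketch of the standard argument, assuming for simplicity that n is a power of 2 and that M(1) is a constant:

\begin{align*}
M(n) &= 2M(n/2) + cn \\
     &= 4M(n/4) + 2cn \\
     &= 8M(n/8) + 3cn \\
     &\;\;\vdots \\
     &= 2^k M(n/2^k) + kcn.
\end{align*}

Stopping at k = log_2 n, the first term becomes n*M(1), so M(n) = n*M(1) + c*n*log_2 n, which is O(n log n); dividing both sides by n shows that M(n)/n grows like log n, matching the claim above.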


    That’s it, and good luck on term test 2.
