Hey, sorry that this slog is a bit late; I was quite sick last week. But now that I'm back, I can write down some of my own reflections on run-time complexity.
First of all, what is the run-time complexity of a program? Basically, it's the total number of steps a program takes given an input of size n, and we formally use O(f(n)) to describe the asymptotic behavior of the program. Here, a program's run time is in O(f(n)) if "there exist natural numbers c and b such that for all n bigger than b, the total steps of the program are less than or equal to c*f(n)". Quite obscure, and it doesn't seem to make sense at first glance, so here I am going to introduce my own interpretation: for the run-time complexity of a program to be in O(f(n)), the growth rate of the total steps of the program with respect to the input size is at most the growth rate of f(n). So if the total steps a program takes is n^2 given an input of size n, then we say the run-time complexity of the program belongs to O(n^2).
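For anyone who prefers symbols, here is the same definition written out in LaTeX (T(n) is my own shorthand for the total steps the program takes on an input of size n):

T(n) \in O(f(n)) \iff \exists c \in \mathbb{N},\ \exists b \in \mathbb{N},\ \forall n > b:\ T(n) \le c \cdot f(n)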
Next, why should we even care about the run-time complexity of a program? Some may claim that as long as the program works, it is a good program. This makes sense when the input is relatively small, but suppose the input size is one million, and one program has a run-time complexity of log(n) while the other has n^2. The log (base 2) of one million is about 20, while the square of one million is 10^12, and this is a huge difference: computers nowadays, though fast, still have limits, and 20 steps obviously take much less time than a trillion. So correctness differentiates bad programs from good programs, while efficiency differentiates good programs from better programs.
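A quick sanity check of those numbers in Python (just a sketch to make the gap concrete):

import math

n = 10 ** 6  # one million

# log2(n) steps vs. n^2 steps
print(math.log2(n))  # about 19.93, so roughly 20 steps
print(n ** 2)        # 1000000000000, i.e. 10^12 steps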
Now that I have talked about the interpretation of run-time complexity and the reason why we should care about it, I would like to go a bit further and tell you how to analyze the run-time complexity of a program. There's a small trick I found: the nesting depth of the for loops that depend on the input size usually determines the power of n in O(n^x). Take the following program as an example.
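Here's a minimal sketch of such a program in Python (x is the input size; count is just a placeholder to make the steps visible):

def nested_loops(x):
    count = 0
    # outer loop runs x times
    for i in range(x):
        # inner loop runs x times for each outer iteration
        for j in range(x):
            count += 1  # one step of work
    return count  # total steps: x * x = x^2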
This program has a nested for loop, so its depth is 2, and both for loops depend on the input size x: the inner loop runs x times, and the outer loop runs the inner loop x times. Thus the total is x * x steps, and the run-time complexity of this program is, obviously, O(n^2). Also, always remember that constants are normally ignored when calculating the run-time complexity, because of the definition of O(f(n)): the constant c in the definition absorbs them.
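For example (another small sketch of mine), a program that does three pieces of work per loop iteration still belongs to O(n), since 3n <= c*n once we pick c = 3:

def three_per_step(x):
    a = b = c = 0
    for i in range(x):
        a += 1  # step 1
        b += 2  # step 2
        c += 3  # step 3
    # total steps: 3x, but 3x <= 3 * x, so this is still in O(n)
    return a + b + c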
Though my trick works for many programs, it has a limitation: it only applies to iterative programs, while for recursion more work needs to be done. I was stuck on how to calculate the run-time complexity of recursive programs until this week's CSC240 lecture, where I learned the master theorem (check http://en.wikipedia.org/wiki/Master_theorem for more info), and now I have a weapon to tackle the run-time complexity of recursive programs. I'm still learning, and I'll come back later this week to write more on analyzing recursive programs.
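As a quick teaser (my own sketch, not from lecture): a recursive program like the one below splits its input in half and does a linear amount of work per call, so its recurrence is T(n) = 2T(n/2) + O(n), and the master theorem says that's O(n log n).

def total(lst):
    """Sum a list by splitting it in half and recursing on each half.

    Recurrence: T(n) = 2T(n/2) + O(n), so O(n log n) by the master theorem.
    (The O(n) term comes from the list slices.)
    """
    if len(lst) <= 1:
        return sum(lst)           # base case: constant work
    mid = len(lst) // 2
    left = total(lst[:mid])       # recurse on first half:  T(n/2)
    right = total(lst[mid:])      # recurse on second half: T(n/2)
    return left + right           # combine: constant work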