Exponentials are big.
Computers can process millions of things in what seems like an instant, so people are initially doubtful when you try to convey just how bad exponential-time algorithms are. “So there are just a lot of possibilities to check. How long could it really take for a computer to run through them all?”
How long would it take to compute all the factors of a 100-bit number? There are only so many possibilities, right? Divide by 2, divide by 3, by 4, … How long could it really take to check them all?
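The brute-force search described here can be sketched in a few lines. (This is purely illustrative of the “check every possibility” approach from the text; real factoring algorithms are far smarter, and even naive trial division can stop at √n.)

```python
def factors_naive(n):
    """Find every proper divisor of n by trying each candidate in turn:
    divide by 2, by 3, by 4, ... exactly the brute-force search above.
    For a 100-bit n this loop would run on the order of 2**100 times."""
    found = []
    for d in range(2, n):      # one divisibility check per candidate
        if n % d == 0:
            found.append(d)
    return found

print(factors_naive(60))       # every divisor of 60 between 2 and 59
```

Fine for tiny numbers; hopeless for 100-bit ones, which is the point.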
Well, say I went back in time to the beginning of the universe and programmed a computer that checks in one nanosecond whether one integer divides another. I have 2^100 numbers to check. 2^89 nanoseconds is already roughly the age of the universe, which means that after running for the nearly 14 billion years between the beginning of the universe and now, my program would only be about 2^89 / 2^100 = 2^-11 ≈ 0.05% done.
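A quick sanity check of that arithmetic. The only inputs are the two figures from the text: one check per nanosecond, and an age of the universe of roughly 13.8 billion years.

```python
# Back-of-the-envelope: 2^100 divisibility checks at 1 ns each,
# compared against the age of the universe (~13.8 billion years).
checks = 2**100                                        # candidate divisors to try
universe_age_ns = int(13.8e9 * 365.25 * 24 * 3600 * 1e9)  # age in nanoseconds

total_ns = checks                                      # 1 ns per check
lifetimes_needed = total_ns / universe_age_ns          # thousands of universe lifetimes
fraction_done = universe_age_ns / total_ns             # progress after one lifetime
print(lifetimes_needed, fraction_done)
```

The run comes out to roughly three thousand universe lifetimes, i.e. a few hundredths of a percent of progress since the Big Bang.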
December 15, 2012