
But the love doctor may continue.
There was nothing in that thread beyond puzzling over data structures. And the weight of marble balls.

Comments (3)

Merlin

A consolation gift.

As mentioned in the answer to the linked question, a common way for an algorithm to have time complexity O(log n) is for it to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after O(log n) divisions by a constant the problem size has shrunk down to 0 or 1. This is why, for example, binary search has complexity O(log n).
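As a rough illustration (my own sketch, not from the quoted answer), here is a minimal binary search in Python; each pass through the loop halves the remaining range, so it runs O(log n) times in the worst case:

def binary_search(sorted_list, target):
    # Classic binary search over a sorted list.
    # Each iteration halves the remaining search range,
    # so the loop runs O(log n) times in the worst case.
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return None

print(binary_search(list(range(65536)), 12345))   # prints 12345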

Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?

For example, let's take the number 65,536. How many times do we have to divide this by 2 until we get down to 1? If we do this, we get

65,536 / 2 = 32,768
32,768 / 2 = 16,384
16,384 / 2 = 8,192
8,192 / 2 = 4,096
4,096 / 2 = 2,048
2,048 / 2 = 1,024
1,024 / 2 = 512
512 / 2 = 256
256 / 2 = 128
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This process takes 16 steps, and it's also the case that 65,536 = 2^16.
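As a small sketch of my own (function names are just for illustration), the following Python code counts the steps for both shrinking rules: halving 65,536 takes 16 steps, which is log2 n, while repeatedly taking the square root gets down to 2 in only 4 steps, which is log2 log2 n.

import math

def halving_steps(n):
    # Count how many times n can be divided by 2 before reaching 1.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def sqrt_steps(n):
    # Count how many times we can take the (integer) square root of n
    # before it drops to 2.
    steps = 0
    while n > 2:
        n = math.isqrt(n)
        steps += 1
    return steps

print(halving_steps(65536))   # 16  == log2(65536)
print(sqrt_steps(65536))      # 4   == log2(log2(65536))

This is why an algorithm that shrinks its input by a square root on each iteration terminates after O(log log n) iterations.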

According to the latest research, the latest research is, on average, inconsequential.
