> It seems like this should select a number in the range with no bias. Is there something I missed?
Yes. There are many values of N that aren’t divisors of UInt32Max.
As the article says: “However, no algorithm can convert 2⁶³ equally likely values into n equally likely values unless 2⁶³ is a multiple of n: otherwise some outputs will necessarily happen more often than others. (As a simpler example, try converting 4 equally likely values into 3.)”
UInt32Max (i.e. 4294967295) is divisible by 3, so your code actually is perfectly random (or more accurately, as random as Go's rand package). It would be biased with N=4, for example.
Regardless, with small values of N the bias is very slight, so you would need many, many iterations to see the imperfection in a statistically significant way.
A quick search didn't reveal any good resources on how to test the quality of a random number generator over a number range. Is what I came up with the best strategy, where you just need to run it for much longer (and compare to a known-good implementation) to see the difference?
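For what it's worth, the standard tool here is a chi-squared frequency test: draw many samples, bucket them, and compare the bucket counts against the uniform expectation. A minimal sketch in Go (the constants and the modulo reduction are illustrative stand-ins, not the code under discussion):

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	const n = 3             // size of the target range being tested
	const trials = 10000000 // slight bias needs a lot of samples to surface
	var counts [n]int64
	for i := 0; i < trials; i++ {
		counts[rand.Uint32()%n]++ // the reduction whose uniformity we want to measure
	}
	// Chi-squared statistic against a perfectly uniform expectation.
	expected := float64(trials) / n
	chi2 := 0.0
	for _, c := range counts {
		d := float64(c) - expected
		chi2 += d * d / expected
	}
	// Compare chi2 against the critical value for n-1 degrees of freedom
	// (about 5.99 for n=3 at p=0.05). Running a known-good generator
	// through the same harness gives you a baseline to compare against.
	fmt.Println(counts, chi2)
}
```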
> (As a simpler example, try converting 4 equally likely values into 3.)
No, but you can convert an RNG that emits 4 equally likely values into an RNG that emits 3 equally likely values. Just: any time the RNG returns 4, try again.
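A minimal sketch of that retry loop in Go, with a hypothetical rand4 standing in for the 4-valued source:

```go
// rand3 turns any source of 4 equally likely values (rand4, returning
// 0-3) into 3 equally likely values by rejecting the fourth outcome.
func rand3(rand4 func() int) int {
	for {
		if v := rand4(); v < 3 {
			return v // 0, 1, and 2 remain equally likely
		}
		// v == 3: reject and draw again
	}
}
```

Each draw succeeds with probability 3/4, so the loop takes 4/3 draws on average and terminates with probability 1.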
Here's a fun puzzle / annoying interview question: You have a biased coin. You can flip it as often as you want, but heads and tails are not equally likely. Without figuring out the bias of the coin, how do you produce purely random bits?
Enlist my friends to flip more coins in parallel :)
At a high level I'd probably try to exploit the fact that every bit sequence with a given number of H and T has equal probability. E.g., HHHT, HHTH, HTHH, and THHH are equally probable and so can be mapped to four different values. That still only gets me 2 bits from 4 flips (50%), but other combinations (e.g., variations on HHTT) could get me log2(6) bits. I'm guessing that with a higher number of flips I could extract (on average) more and more as a proportion. No clue what the asymptote would be.
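The two-flip special case of that idea is the classic answer to the puzzle (the von Neumann trick): HT and TH each occur with probability p(1−p) regardless of the bias, so keep those and discard HH/TT. A minimal sketch in Go, where biasedFlip is a hypothetical stand-in for the coin:

```go
// unbiasedBit extracts a fair bit from a biased coin: flip twice,
// keep the result when the flips differ, and retry on HH or TT.
func unbiasedBit(biasedFlip func() bool) bool {
	for {
		a, b := biasedFlip(), biasedFlip()
		if a != b {
			return a // P(H,T) == P(T,H) == p*(1-p), so this bit is fair
		}
		// Equal flips carry no usable output under this scheme; retry.
	}
}
```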
Thinking further, for N flips you get 0 bits of entropy for all H or all T. For all other sequences, you get log2(N choose count(H)) bits of entropy, and you can average the sum of these over N.
According to Wolfram Alpha this works as N gets larger, but it's not great: for 16 flips you get 9.5 bits of entropy, but hey, at least I beat half! 32 flips gets you about 20 bits of entropy. By 64 flips you get 43 bits, and that's approaching 2/3 efficiency. Maybe not so bad after all!
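A quick sketch in Go to reproduce those numbers, under one reading of the formula above (sum log2(N choose k) over all k, divide by N for bits, and by N again for efficiency); log-gamma keeps the binomials in floating point:

```go
package main

import (
	"fmt"
	"math"
)

// log2Choose computes log2(n choose k) via the log-gamma function,
// avoiding overflow for large n.
func log2Choose(n, k int) float64 {
	a, _ := math.Lgamma(float64(n + 1))
	b, _ := math.Lgamma(float64(k + 1))
	c, _ := math.Lgamma(float64(n - k + 1))
	return (a - b - c) / math.Ln2
}

func main() {
	for _, n := range []int{16, 32, 64, 1024} {
		sum := 0.0
		for k := 0; k <= n; k++ {
			sum += log2Choose(n, k) // all-H and all-T contribute 0
		}
		bits := sum / float64(n)
		fmt.Printf("N=%4d: %6.1f bits, %.1f%% efficiency\n",
			n, bits, 100*bits/float64(n))
	}
}
```

Under that reading this prints roughly 9.5 bits (59.6%) for 16 flips, 43 bits (67.7%) for 64, and about 72% efficiency by 1024, consistent with the figures quoted here.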
Going higher is a little tough since I'm on mobile, but it starts crawling, reaching only 71% efficiency by 1024 flips. I'm curious whether it actually asymptotically reaches 100% efficiency (for a fair coin), even if quite slowly.
Edit: Playing with it more[1], it really seems to approach 72.1%. I wonder if I can figure out the asymptote analytically…
This kind of nerd-sniped me. I did find a closed-form solution that relies on an identity mapping the product of successive factorials ("superfactorials") to the Barnes G-function, which is related to the Riemann zeta function at a level deep past my comprehension. Still, here's[1] the closed-form solution, which returns the number of bits of entropy that can be generated from a given number of coin flips using this approach, assuming the coin is indeed fair. Divide by the number of flips again to get the efficiency.
Unfortunately, Wolfram Alpha isn't able to determine the limit of this function[2], and neither am I. :)
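For what it's worth, here's a heuristic sketch (my own, not the linked computation) suggesting the limit is 1/(2 ln 2) ≈ 72.13%, matching the observed 72.1. It assumes the closed form comes from the superfactorial identity ∏_{k=1}^{N} k! = G(N+2):

```latex
% Closed form for the efficiency via the Barnes G-function:
\frac{1}{N^2}\sum_{k=0}^{N}\log_2\binom{N}{k}
  = \frac{1}{N^2}\log_2\frac{(N!)^{N+1}}{G(N+2)^2}

% Heuristic limit: \log_2\binom{N}{xN} \approx N\,H_2(x), where
% H_2(x) = -x\log_2 x - (1-x)\log_2(1-x) is the binary entropy,
% so as N grows the sum becomes a Riemann sum:
\frac{1}{N^2}\sum_{k=0}^{N}\log_2\binom{N}{k}
  \;\longrightarrow\; \int_0^1 H_2(x)\,dx
  = \frac{1}{2\ln 2} \approx 0.72135
```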