Here’s a copy of my own comment from the reddit thread:
Randomness is a property of a source, not of a number - numbers themselves are not random. Randomness is a distribution of possibilities and a chance-based selection of one option from among them.
What we use in cryptography to describe numbers coming from an RNG is entropy, expressed in bits - roughly the base-2 logarithm of the number of equally likely possible values, a measure of how difficult the output is to predict.
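To put a number on that, here’s a quick back-of-the-envelope sketch in Python (the 8-character lowercase password is just an illustrative pick):

```python
import math

# Entropy of a uniform random choice is log2(number of possibilities).
# Example: an 8-character password drawn uniformly from lowercase a-z.
print(math.log2(26 ** 8))  # ~37.6 bits of entropy
```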
It’s also extremely important to keep in mind that RNG algorithms are deterministic: their behavior repeats exactly given the same seed value. Because of this, you cannot increase entropy with any kind of RNG algorithm. The entropy is defined entirely by the inputs to the algorithm.
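A minimal sketch of that determinism, using Python’s standard random module purely as an illustration (the same point holds for any deterministic RNG):

```python
import random

# Two PRNG instances seeded identically produce identical output streams:
# the algorithm adds no entropy beyond what the seed carries.
a = random.Random(42)
b = random.Random(42)
print([a.randrange(256) for _ in range(8)])
print([b.randrange(256) for _ in range(8)])  # exactly the same values
```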
Given this, the entropy of random numbers generated using a password as a seed value is equivalent to the entropy of the password itself, and the entropy of an encrypted message is the entropy of the key plus the entropy of the message. Encrypting a gigabyte of zeroes with a 128-bit key yields the total entropy of the key, plus “it’s all zeroes” (~1 bit), plus the length (~33 bits to express) - far less than the gigabyte’s worth of bits it produced. So instead of ~8 billion bits of entropy, it’s 128 + ~1 + ~33 bits of entropy.
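Here’s that accounting spelled out, assuming a 128-bit key and a decimal gigabyte (8 billion bits):

```python
import math

key_bits = 128                          # entropy of the key
message_bits = 1                        # "it's all zeroes" is ~1 bit to describe
length_bits = math.log2(8_000_000_000)  # ~33 bits to state the output length

print(key_bits + message_bits + length_bits)  # ~162 bits of entropy in total
print(8_000_000_000)                          # versus ~8 billion bits of ciphertext
```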
Then we get to Kolmogorov complexity and computational complexity - in other words, the length of the shortest way to describe a number. This is also related to compression. The vast majority of numbers have high complexity: they cannot be described in full by any shorter number, so they cannot be compressed. Because of this, a typical statistical test for randomness will say they pass with a certain probability (given that the tests themselves can be encoded as shorter numbers) - even the highest-complexity test has too low a complexity to have a high chance of describing the tested number.
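Compression gives a crude, hands-on demonstration: the compressed size of a string is an upper bound on its Kolmogorov complexity (up to the size of the decompressor). A sketch with zlib standing in as the “shortest description finder”:

```python
import os
import zlib

# Low complexity: a megabyte of zeroes has a very short description.
patterned = b"\x00" * 1_000_000
# High complexity: OS-provided unpredictable bytes have no shorter description.
unpredictable = os.urandom(1_000_000)

print(len(zlib.compress(patterned)))      # ~1 KB: highly compressible
print(len(zlib.compress(unpredictable)))  # ~1 MB: incompressible (slightly larger, even)
```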
(sidenote 1: The security of encryption depends on mixing the key into the message thoroughly enough that you can’t derive the message without knowing the key - the complexity is high - and on the key being too big to brute-force)
(sidenote 2: the Kolmogorov complexity of a securely encrypted message is roughly the entropy plus the algorithm’s complexity, but for a weak algorithm it’s less, because leaked patterns let you circumvent brute-forcing the key’s entropy - also, we generally discount the algorithm itself, as it’s expected to be known. Computational complexity is essentially defined by the expected runtime of attacks.)
And test suites are bounded. They all have an expected running time, and can fit maybe 20-30 bits of complexity, because that’s how much compute you can put into a standardized test suite. This means all numbers with a pattern that requires more bits to describe will pass with high probability.
… And this is why standard tests are easy to fool!
All you have to do is create an algorithm with one more bit of complexity than the limit of the test, and now your statistical tests will pass: while algorithms with 15 bits of complexity will generally fail, another bad algorithm with ~35 bits of complexity (above a hypothetical test threshold of 30) will frequently pass despite being insecure.
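As a toy demonstration, here’s one standard statistical test (a NIST SP 800-22 style monobit frequency test) happily passing the output of a generator that is trivially predictable - the weak LCG below is an illustrative pick, not something from a real suite:

```python
import math

def monobit_p_value(bits):
    # Frequency (monobit) test: measures the imbalance between ones and
    # zeroes; by convention, p >= 0.01 counts as a pass.
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

def weak_bits(seed, n):
    # Deliberately weak generator: a 32-bit linear congruential generator
    # (Numerical Recipes constants). Its entire future is fixed by ~32 bits
    # of state - trivially predictable - yet its output is well balanced.
    state = seed
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32
        yield state >> 31  # emit the top bit of each state

print(monobit_p_value(list(weak_bits(12345, 100_000))))  # typically passes
```

The point isn’t that this one test is weak in isolation - it’s that any bounded battery of such tests can only rule out patterns simple enough for the battery itself to encode.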
So if your encryption algorithm doesn’t reach beyond the minimum cryptographic thresholds (roughly 100 bits of computational complexity, roughly equivalent to the same number of bits of Kolmogorov complexity*), and maybe only hits 35 bits, then your encrypted messages aren’t complex enough to resist dedicated cryptanalysis - especially not if the adversary already knows the algorithm - even though they pass all the standard tests.
What’s worse, the attack might even be incredibly efficient once known (nothing says the 35-bit-complexity attack has to be slow - it might simply be a 35-bit derived constant that folds the rest of the algorithm down to nothing)!
* Kolmogorov complexity doesn’t account for the different costs of memory usage versus processing power, nor for memory latency - in practice, memory is often the more expensive resource.