And, similarly, say that it was just ASCII text. You knew it was ASCII, even if you didn't know what it said. And suppose you were decrypting a sector, a 4K block, 4,096 bytes. Well, even in 4K there are many things about ASCII text that are tip-offs. For example, ASCII typically compresses very well because the high bit, the eighth bit, is always off. The alphabet typically fits within the first character codes, 0 through 127. So your high bit is off. Well, in a block of 4,096 bytes you're going to have 4,096 high bits. So if any trial decryption of something that you thought was ASCII suddenly had the majority of its high bits off, there's suddenly a very good chance that you've decrypted it correctly, because encrypted text, encrypted anything, we know is going to be highly random. On average it's going to have 2,048 of those high bits on. That is to say, 50 percent of the high bits in the bytes are going to be on, the other 50 percent off, and it's going to be extremely regular. So if you get a decryption of something, even something you don't know at all, except that we've made an assumption about what it is, we're saying we think this is ASCII, and when we get it right, suddenly most or all of the high bits are off. The chance of a wrong decryption where all of that is true is just astronomically small.
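The high-bit test described above can be sketched in a few lines of Python. This is just an illustrative heuristic, not anyone's production code; the function name `looks_like_ascii` and the tolerance threshold are my own choices for the example:

```python
import os

def looks_like_ascii(block: bytes, tolerance: float = 0.01) -> bool:
    """Heuristic test for a plausible ASCII plaintext.

    A correctly decrypted ASCII block should have (nearly) all of its
    high bits off, since ASCII codes fit in 0..127. Random ciphertext,
    by contrast, averages 50 percent of its high bits on.
    """
    high_bits_on = sum(1 for b in block if b & 0x80)
    return high_bits_on / len(block) <= tolerance

# A 4,096-byte block of plausible ASCII: essentially zero high bits set.
plaintext_like = b"The quick brown fox jumps over the lazy dog.\n" * 92
print(looks_like_ascii(plaintext_like[:4096]))   # expect True

# Random bytes stand in for a wrong decryption: roughly 2,048 of the
# 4,096 high bits will be on, so the test rejects it.
ciphertext_like = os.urandom(4096)
print(looks_like_ascii(ciphertext_like))         # expect False
```

For a truly random 4,096-byte block, the count of set high bits is binomially distributed around 2,048 with a standard deviation of about 32, so landing anywhere near zero by chance really is astronomically unlikely.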