Sort of…
If we assume the given rate is correct, the cumulative probability of collecting a sample set like this one is worse than 179 million to 1. I've seen samples for other random events that described outliers or anomalies where nothing was actually wrong, and I'm willing to give the benefit of the doubt in a lot of situations for way longer than I should given past events (though these days anything that doesn't pass even a 99% confidence test makes me nervous, especially if I see a trend), but we are way, way beyond that. The samples taken give us >99.9999% confidence that the true rate falls outside what was stated, and ~300 samples is absolutely a significant amount given a stated rate of 10% and a sampled rate of 1.6%. Zeroing in on the actual rate through sampling is much harder, for sure; we'd have far less confidence saying "the actual drop rate is 1%, so yours is wrong", for example, but we can be pretty confident saying "based on this, I'm pretty sure it isn't actually 10%". Worth noting also that even the best "luck" in the data sets posted here, person to person, would still fall on the "bad luck" side of a true 10% rate.
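For anyone who wants to check the arithmetic: the "179 million to 1" figure is just a one-sided binomial tail, using the pooled tally of 5 Epics in 303 drops against the stated 10% rate. A quick sketch in Python (stdlib only; `math.comb` needs Python 3.8+):

```python
from math import comb

def binom_tail_low(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing k or
    fewer successes in n trials if the stated rate p were correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 5 or fewer Epic keys in 303 drops at a stated 10% rate
p = binom_tail_low(5, 303, 0.10)
print(p)        # ~5.6e-9
print(1 / p)    # ~1.79e8, i.e. worse than 179 million to 1
```

This is only the "this bad or worse" tail under the stated rate, not an estimate of the true rate; that distinction is exactly why it's much easier to say "it isn't 10%" than to say what it actually is.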
For a similar situation, imagine running a coin flip trial, getting heads 24 out of 25 times, and then declaring there is no way to tell whether or not this is a "fair" coin. Yes, you'd want more data to prove it with "statistical significance", but it is super easy to just flip a coin more times, whereas on our end collecting Epic Vault Key data is extremely tedious, and without an event weekend, borderline impossible until it rolls around again. By the way, hitting this hypothetical coin result with a "fair" coin is roughly a 1 in 1.3 million chance, which is still more than 100 times more likely than hitting 5 (or fewer) successes in a sample of 303 with an actual rate of 10%.
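The coin comparison checks out the same way; both numbers below are one-sided "this extreme or worse" tails, a sketch rather than anything rigorous:

```python
from math import comb

# 24 or more heads out of 25 flips of a fair coin (one-sided tail)
coin = sum(comb(25, k) for k in (24, 25)) / 2**25   # = 26 / 2**25

# 5 or fewer Epic keys in 303 drops at a stated 10% rate
vault = sum(comb(303, k) * 0.1**k * 0.9**(303 - k) for k in range(6))

print(1 / coin)      # ~1.29 million to 1
print(coin / vault)  # ~139: the "fair coin" freak result is >100x more likely
```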
At some point, we have to request that, even if it is potentially "wasting the devs' time", this is at least "enough" for some checks to be performed on their side: both to see whether the "correct" rate is actually in the game (during the vault event and outside it), and whether we were quoted the "correct" rate in the 5.1 update notes. I'd imagine (or hope) they have a tool that shows all vault key drops over a weekend from live data (rather than simulated), as well as all Epic Vault Keys, at which point it wouldn't take more than a glance to see how close it is to 10%. For example, if 10,000 total vault keys were dropped and fewer than 800 were Epic, that is a big enough sample to declare with similar confidence that the rate is "not 10%", even though percentage-wise it is much less "off" than our number.
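To put a number on that hypothetical dev-side check: the raw binomial terms underflow at n = 10,000 (0.9^10000 is far below float range), so a sketch has to work in log space. The counts here are the made-up ones from the example above, not real data, and the tail actually comes out even more extreme than our 1-in-179-million sample:

```python
from math import lgamma, log, exp

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summing pmf terms computed
    via log-gamma so large n doesn't underflow; terms too small to
    represent harmlessly round to 0.0."""
    def log_pmf(i):
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log(p) + (n - i) * log(1 - p))
    return sum(exp(log_pmf(i)) for i in range(k + 1))

# Fewer than 800 Epics (i.e. at most 799) out of 10,000 drops at a
# stated 10% rate: ~6.7 standard deviations below the expected 1,000
tail = binom_cdf(799, 10_000, 0.10)
print(tail)
```

This is the sense in which a one-weekend pull of live data would settle it "at a glance": at that sample size, even a rate that is only modestly off from 10% produces a vanishing tail probability.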
The point being: any scenario where we have "enough data" to "prove" the rates are a given number with any degree of certainty is a scenario where a potential bug has been in the game way too long, and way too much legwork has had to be done by the playerbase if it is ever revealed as an actual bug.
I'd be lying if I said past mistakes with very similar things haven't factored into how much benefit of the doubt I'm willing to extend here, but I can picture about a dozen ways this could have gone wrong, including several where, even if they pulled simulated data, it would line up with "expected" while still being discrepant from the data collected by the players.
At this point, it just seems far more likely that someone screwed up setting the rates, communicating them, or both, than that every person who happened to be recording results and was willing to post them (and/or happened to be spreadsheeting all their results but only decided to post after being "unlucky") had some degree of "bad luck" ranging from "minor" to "extreme outlier". It's not impossible that everything is working correctly, but…