Vault Weekend data collection - for w/e 13 Sept

I will concede that this is a possible scenario, but I do not believe it is the most probable one. Based on all the information we have now (including past data gathering endeavors by at least some of the people posting here, and past mistakes made by the dev team), it is far more likely that this is a mistake on their end rather than some extreme confluence of a bunch of people recording and then withholding data. This isn’t the first data gathering endeavor on this forum, and skewing to the degree needed to “corrupt” this data set to the point where it didn’t at least point to something being “off” would imply a bias far, far greater than any I have seen in any other data set I’ve collected here. For example, if we remove all the people that got 0 vault keys from the larger set (leaving 5 out of 148), we still have a situation where our sample’s confidence interval, even at a 99.999% confidence level, does not include the stated rate of 10%. We’d have to assume nearly every sample is skewed or biased, and while we can’t prove the rigor of this data, it is odd that there is no evidence to the contrary to be found.
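If anyone wants to sanity-check that arithmetic, here is a minimal sketch of the pooled test in Python with scipy. It assumes “5 out of 148” means 5 EVKs across 148 recorded vault key openings; if the 148 is a head count instead, swap in the pooled opening total from the sheet.

```python
# Exact binomial check of the pooled sample against the stated drop rate.
# The counts below are my reading of the post, not pulled from the sheet.
from scipy.stats import binomtest

successes = 5    # total EVKs among the five reporters who got any
trials = 148     # assumed pooled vault key openings

for stated in (0.10, 1 / 11):  # "1 in 10" vs "1 EVK per 10 vault keys"
    test = binomtest(successes, trials, p=stated)
    ci = test.proportion_ci(confidence_level=0.99, method="exact")
    print(f"stated rate {stated:.4f}: p-value = {test.pvalue:.2e}, "
          f"99% exact CI = ({ci.low:.4f}, {ci.high:.4f})")
```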

Unless I’m completely misinterpreting the data?

The sheet shows every one of the five who got a key having at best 1 out of 11 successes, and at worst 1 out of 56. That would make every single data set “bad luck” if the true rate were 1 in 10 (or neutral luck if it were 1 in 11; depending on how you read the original post, the stated 10% could be describing 1 EVK per 10 vault keys, i.e. 1 success in 11 draws, which works out to roughly 9.1%).
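To put a number on that “bad luck”, here is a quick sketch of how probable a report of at most one success would be if the true rate really were 10%, using the best and worst per-person counts quoted above (the other three counts are on the sheet, so I haven’t assumed them):

```python
# Chance of seeing at most 1 EVK at a true 10% rate, for the best and
# worst per-person sample sizes mentioned in the post.
from scipy.stats import binom

for n in (11, 56):  # "at best 1 out of 11 ... at worst 1 out of 56"
    print(f"P(at most 1 success in {n} openings at 10%) = "
          f"{binom.cdf(1, n, 0.10):.3f}")
```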

Now, absence of evidence is not evidence of absence, but even inconclusive data, no matter how you arrange it, trends heavily away from the conclusion that “the rate of 10% is correct as stated.” Weighing the history of multiple reliable reporting sources against the proven fact that very similar game issues have happened more than once in the past, if forced to draw a conclusion, I’d say it is far more likely that the mistake occurred on their end.

Yes, absolutely, the “correct” response for an objective statistical analysis is to gather more data and ensure it is more “pure.” But that underestimates what a monumental task this can be, even spread across multiple participants, and it ignores the fact that it should never be on us, the playerbase, to collect enough data on something being wrong to reach the level of “statistical proof” (both in terms of the real time that needs to elapse, months or sometimes even years, and the time spent actually gathering the data) before it is examined from the developer side, where they have much better tools to find out what may have gone wrong in a fraction of the time and effort. That was the larger point I was trying to make in my previous post.

But at this point, given the inability to just “gather more data” in any reasonable way, even with an organized effort, short of resigning ourselves to leaving a potential drop error unaddressed in the game for at least another month, which is more likely?

  • extreme reporting bias from unhappy players, including several known data collectors, with no evidence to the contrary, where we have to assume we got only bad data on top of some amount of “bad luck”
  • any one of a myriad of human errors that could explain the discrepancy: a logic/programming error, a data entry error, or a simple miscommunication, combined with a rushed development schedule and a lack of testing, compounded by sub-optimal working conditions; the kinds of errors that have already happened outside these conditions on more than one occasion

If you weren’t around for the Chaos Portal drop rate saga, you probably have a higher internal burden of proof before you are willing to even raise this as “probably an issue.” Having gone through that once already, I really, really hope it never has to get that far, because it just drags everything down, to the point where even if it turns out to be “a waste of time,” the smart move would be for the devs to at least give it a check. So yes, they should lower their burden of proof as well.

tl;dr: I can’t call the data “proof,” but I can definitely say there is enough of a trend that it needs to be double checked on the dev side. If something is wrong and it has to reach the level of “proof” before it is fixed, that is a slew of negative attention they don’t need piled on top of it being wrong in the first place. If it were me sitting on the other side of the fence, I’d be pushing to check this out ASAP, on my own time if I had to, because of the implications of continued, repeated problems for the game’s image and overall discourse.
