It’s a code: “We’re on MongoDB!”
“Roger, Sirrian, we confirm you are on MongoDB; proceed with objective Delta.”
I was gonna say, I’m not sure I’ve ever heard it pronounced any other way…
“The cows are not what they seem. The penguins march at dawn.”
LostInLust,
I kinda agree with you in a way, but you have to look at who pays your bills first. I've been a game developer in the past, and those who pay always had the loudest voice. It's very simple when it comes to economics.
You choose between:
a) Listen (mostly) to the people who don't pay you, and go out of business, or
b) listen (mostly) to the people who do pay you, and stay in business.
Now, I'm not advocating ignoring the people who never pay to play; however, those who do will always have the largest voice in any development.
And yes, every player has a right to speak; they just won't be heard as much as the one who SHELTERS & FEEDS YOU!
Neither group necessarily has good ideas, or the business’ interests in mind. They’re both equally worth listening to, if either of them is. Unquestioningly meeting the demands of either group is almost certain to lead to ruin.
I suppose a race condition could be a factor, but I would expect that case to be independent of time of day, and we saw some time-zone-specific spikes in timeout events. I can understand why they would choose a document-focused DBMS product if they are combining card art, text and rendering data into one database alongside high-volume numeric data describing game results and other player history, but… I have yet to see a single such document-focused product that knows how to query-optimize numeric fields properly (when compared to one of the big boys such as Oracle). I’m not trying to be a naysayer here, but I don’t think we will see the last of these timeout spikes for a while yet. I wish the devs all the best of luck in getting it sorted!!
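Not speaking for what the devs actually run, but the indexing point can be sketched in plain Python (all names and numbers here are invented for illustration): an unindexed numeric range query has to scan every document, while a sorted index, which is roughly what a B-tree index on a numeric field buys you, binary-searches the bounds and touches only the matches.

```python
# Hypothetical sketch: numeric range queries on a document collection,
# with and without a sorted index. Purely illustrative data.
import bisect
import random

random.seed(42)
docs = [{"player_id": i, "score": random.randint(0, 10_000)} for i in range(50_000)]

def scan_range(docs, lo, hi):
    """Unindexed query: touch every document (O(n))."""
    return [d for d in docs if lo <= d["score"] <= hi]

# Build a sorted (score, position) index once, analogous to a B-tree
# index on a numeric field.
index = sorted((d["score"], i) for i, d in enumerate(docs))
keys = [k for k, _ in index]

def indexed_range(lo, hi):
    """Indexed query: binary-search the bounds, then read only the matches
    (O(log n + k))."""
    start = bisect.bisect_left(keys, lo)
    end = bisect.bisect_right(keys, hi)
    return [docs[pos] for _, pos in index[start:end]]

a = scan_range(docs, 9_900, 10_000)
b = indexed_range(9_900, 10_000)
assert {d["player_id"] for d in a} == {d["player_id"] for d in b}
```

Whether a given document store actually plans numeric range queries this well is exactly the open question in the post above; the sketch only shows why the index matters.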
EDIT: If memory serves, that product started with a lead developer who was commonly known around the office as Mongo Jerry.
At my previous company, trust me when I say we got Oracle into race conditions with row chaining and large datasets when we were preparing data for reports.
We also managed to create a nasty deadlock in a search feature that only appeared during peak hours. It took us several months to finally isolate what was happening and why.
So anything is possible when you're running a real-time table structure with constant queries and writes.
I hear “j-Sahn” and “jason” about equally.
I say Jason personally.
In Ireland it is pronounced ROW-SHEEN.
Server issues are back, though they don't seem as severe just yet. Multiple connection errors for me and others in game chat.
Edit: The issue appears to have cleared up fairly quickly.
Yep and yep
it was like only 5-10 mins of issues
Anyone else running into trouble? I just lost two treasure hunts (one of them really good!)
Thinking the servers are taking a dump again. I've had to exit out of the game multiple times due to infinite loading on the guild tab.
At about this time every week we roll our database over from the primary to the secondary instance (basically we keep 2 copies of the database running, and switch over from one to the other, which helps clear out memory and keep everything running smoothly).
During that changeover it seems we get a brief queue of messages, and some players experience a timeout for a few minutes.
Obviously, for a game, a weekend changeover is a bad idea, since our number of users is generally higher than on a weekday.
I'm going to look into changing the rollover time to Wednesday during our off-peak period (about 9am GMT).
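A toy sketch of the rollover behaviour described above (all names mine, not the actual server code): during the changeover window, incoming requests are queued briefly rather than dropped, then replayed against the new primary once it takes over, which is why players see a short timeout rather than lost data.

```python
# Hypothetical sketch of a primary/secondary database rollover with a
# brief request queue during the switchover window.
from collections import deque

class Rollover:
    def __init__(self):
        self.primary = "A"
        self.secondary = "B"
        self.switching = False
        self.queue = deque()
        self.served = []  # (request, instance) pairs, for illustration

    def handle(self, request):
        if self.switching:
            # Changeover in progress: queue it; clients may see a timeout.
            self.queue.append(request)
        else:
            self.served.append((request, self.primary))

    def switch(self):
        """Roll over to the secondary, then drain whatever queued up."""
        self.switching = True
        self.primary, self.secondary = self.secondary, self.primary
        self.switching = False
        while self.queue:
            self.handle(self.queue.popleft())

db = Rollover()
db.handle("req1")    # served by A
db.switching = True  # changeover window begins
db.handle("req2")    # queued, not lost
db.switch()          # B takes over and drains the queue
db.handle("req3")    # served by B
```

The key design choice mirrored here is that the window only produces a delay, not dropped work, so moving the window to an off-peak hour shrinks how many players ever notice it.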