Server issues (October 10th 2016)?

Thank you for the update!

Hi Guys,

We’re currently investigating this issue further on our end. At the moment we don’t have an ETA for when the servers will be back.

We will be sending out compensation on all platforms once we get the servers back up and running.

14 Likes

Thanks Andrew.

Btw, I just got Error - Incorrect Login Details. Username: KrVC5cpnj42c - Android / Code Taisia

Not sure if it is related but this is what happened after rebooting from the big error message

I’ve been getting those weird errors too.

Hope they get it fixed soon because I have to get up early tomorrow (0315 MDT, 0915Z).

Guys, I know it’s frustrating, but the devs do their best to fix it. A little support won’t hurt.

And neither will error code reporting.

Quite frankly, your environment isn’t all that large. Two sets of servers is fairly easy to manage, even when geographically diverse.

I’ve done consulting work in BC/DR for companies and other entities that have dozens of data centers and hundreds of server pairs/clusters. It gets a lot more complex as data stores scale.

It’s also quite expensive. You have to remember that this game is free to play. A good portion of the user base actually does play for free, I would guess, and revenues aren’t easy to project so large operational or capital expenditures are hard to justify.

At the end of the day, I agree with you at least in part. I just don’t think it’s as binary as you’re making it sound. It’s pretty nuanced, especially for a company that has several thousand customers and just a small group of employees.

I just logged in and picked up tribute. Will test PvP.

Please go ahead and move to another service provider. Screw the extra few months of money. People will quit playing altogether if this keeps happening. Better to lose some money now than to lose a lot more long term by losing customers.

Nor can I

It’s working again

I agree with you that scaling does cost more and makes things more difficult.

That said, Amazon AWS is pretty cheap if you just store the data and run a single-core server until you need more, then upgrade on the fly and grow with demand. This would be my backup solution.

I’m pretty sure this game runs on one, maybe two servers, and it should be split across many servers doing different things; typically several cheaper servers produce more throughput than one large server, although the code becomes more important once you start scaling.

I also have experience working in news/media, with 20 front-end servers and 7 databases in a cluster at just one of our locations. They were smart about it and used a SAN that the 20 front-end servers replicated data from, essentially caching all the database results on each server so that only writes and data-driven pages had to hit the databases. We reduced database load as much as possible.
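The caching pattern described above can be sketched as a simple read-through cache. This is an illustrative sketch, not that company’s actual setup; `fake_db` is a hypothetical stand-in for the real database call.

```python
# Read-through cache sketch: each front-end server answers reads from a
# local cache and only falls through to the database on a miss.
# `fake_db` below is a hypothetical stand-in for a real database query.

class ReadThroughCache:
    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db
        self._cache = {}

    def get(self, key):
        # Serve reads locally; on a miss, query the database once and keep it.
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]

    def invalidate(self, key):
        # Writes still go to the database; here we just drop the stale entry.
        self._cache.pop(key, None)

db_hits = []

def fake_db(key):
    db_hits.append(key)
    return f"row-for-{key}"

cache = ReadThroughCache(fake_db)
first = cache.get("user:1")
second = cache.get("user:1")  # served from the cache, no second database hit
print(len(db_hits))  # 1
```

The point is the same as in the post: reads never touch the databases after the first fetch, so only writes and cache misses generate database load.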

I have a feeling this game uses memcache or another in-memory caching solution, which makes things even more cumbersome when you start scaling, and especially scaling across locations. It can still work, but you see diminishing returns once ping times between locations get high.
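A quick back-of-envelope shows why distant caches hurt: every cache lookup pays a network round trip, so request latency grows roughly linearly with ping time. The lookup count and ping values below are illustrative assumptions, not measurements of this game.

```python
# Rough model: request time is local work plus one round trip per cache
# lookup. All numbers here are illustrative assumptions.

def request_latency_ms(cache_lookups, ping_ms, work_ms=5.0):
    """Approximate request time in milliseconds."""
    return work_ms + cache_lookups * ping_ms

same_rack = request_latency_ms(cache_lookups=20, ping_ms=0.5)      # 15.0 ms
cross_region = request_latency_ms(cache_lookups=20, ping_ms=40.0)  # 805.0 ms
print(same_rack, cross_region)
```

Same workload, same code: moving the cache from the same rack to a distant region turns a 15 ms request into an 805 ms one, which is the diminishing-returns effect described above.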

I just hope they get this game stable again. When I’m working and stuck on a programming dilemma, I like to do a battle or two; it usually helps clear the block, and progress resumes.

1 Like

Back up and I was able to win PvP and kept the rewards!

Well, I will if everyone else joins in… ψ(`∇´)ψ

Just a heads up on where we are with the new server situation…

As you are aware, we have been looking to switch server providers for quite some time now. However, this is a delicate process and one that we want to get right. If we switched too early, then (as bad as you think our servers are now) it could potentially cause even more problems. Even though our current servers experience the odd outage and timeout, we could end up in a permanent outage if we got things drastically wrong during the migration.

So we are slowly but surely getting our new servers in place. We want to test that they scale well and are stable enough to handle the current userbase. Over the next month or so we will continue to perform tests, and hopefully after that everyone will be rolled over to the new servers. As with all things, we expect there may be some teething issues here and there once the full rollout occurs, but at that stage we will have greater control over the servers, so we can resolve any issues more quickly.

Thanks for everyone’s patience, and we apologise about the issues. We’ll send out some sort of compensation shortly.

9 Likes

This is what happens on No Pants Monday. You blew the damn servers out…I wonder what’s going to happen on No Shirt Tuesday.

1 Like

@Nimhain & @Andrew, I have to say that I’m much more distressed by the absolute lack of communication from the dev team in the 8+ hours since @Sirrian initially commented on this thread than I am by the actual downtime of the game. Most reasonable players know that issues happen and that some take longer to fix than others, but failing to give your players timely updates can be far more damaging, as it implies a lack of care on the developers’ part (whether true or not). Even posting something as short as “Still looking into this; sorry!” would do a lot to quell the frustration and give players faith that you care about them and the health of your game, and that you aren’t taking them and their patience for granted. Compensation after the fact is definitely appreciated, but it won’t make up for the damage your silence creates during the event.

I know the dev team’s focus is on fixing the issue right now, but I hope the utter lack of community management made apparent by this event will be addressed at some point in the near future, too.

I think one of those new Mythics would be perfect :wink: