Good thing to note, ty @PitMonk. It never came back up, even after validating files. Maybe I will try this. They are still in there after the upload of the repaired file.
These are the pages I read.
sqlite3.exe %%a "VACUUM;REINDEX;ANALYZE;PRAGMA integrity_check;"
What the files game.db-shm and game.db-wal are
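The batch one-liner above runs the maintenance statements in one shot. Here is a minimal Python sketch of the same pass using the standard-library `sqlite3` module, with the integrity check moved to the front so a corrupt file is caught before VACUUM rewrites it (the function name `maintain` is my own, not anything from the game):

```python
import sqlite3

def maintain(db_path: str) -> str:
    """Run the same pass as the batch one-liner: integrity check
    first, then REINDEX, ANALYZE, and VACUUM."""
    # isolation_level=None puts the connection in autocommit mode;
    # VACUUM refuses to run inside an open transaction.
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        # Abort early if the file is corrupt -- vacuuming a damaged
        # database can make recovery harder.
        (status,) = con.execute("PRAGMA integrity_check").fetchone()
        if status != "ok":
            return status
        con.execute("REINDEX")   # rebuild all indexes
        con.execute("ANALYZE")   # refresh query-planner statistics
        con.execute("VACUUM")    # rewrite the file, reclaiming free pages
        return "ok"
    finally:
        con.close()
```

As with the batch version, this needs the game service stopped first: VACUUM wants exclusive access to the file.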
Even in real MMORPGs, servers (farms) are taken down on a daily basis (e.g. EVE Online). You simply need a regular time window for maintenance. That a program runs slower as the database grows is nothing special, nor limited to Conan Exiles. It's the major task of every database software. In the case of Exiles, Funcom has to take care of keeping it as small as possible. Everything unnecessary has to be erased as soon as possible, and tables have to be kept small. I'm afraid this is currently not the case. Along with decay timers, it's probably the most limiting factor for large servers in Exiles. People always point to server performance when asking for higher caps, but that's nowhere near everything.
I found that after restoring a backup of the game.db file to do a rollback, if those two files, game.db-shm and game.db-wal, were still there, the rollback would be unsuccessful. I needed to delete them for the rollback to work correctly.
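That matches how SQLite's write-ahead log works: the -wal file holds committed pages that haven't been merged into game.db yet, so a restored game.db sitting next to a stale -wal pairs with the wrong log. A sketch of a backup routine that checkpoints the WAL first, so the copied file alone is a complete snapshot (the function name `backup_db` is my own, and this assumes the game service is stopped, or at least idle, while it runs):

```python
import shutil
import sqlite3

def backup_db(db_path: str, dest_path: str) -> None:
    """Fold pending WAL pages into the main file, then copy it."""
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        # TRUNCATE checkpoints everything in the -wal file back into
        # game.db and resets the -wal to zero length, so the copy of
        # game.db by itself is a consistent snapshot.
        con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
    finally:
        con.close()
    shutil.copyfile(db_path, dest_path)
```

When restoring, the deletion step described above still applies: remove game.db-wal and game.db-shm before dropping the backup in place.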
Same here Pit, I deleted them and it started. Here is what I initially noticed. The game loaded faster. No noticeable RAM usage difference. Thralls I placed in the world started with minimal health and are slowly regaining it. I don't know what else was affected, but I guess we will see. Worst case, I'll delete the 3 files and restore the original tonight once the majority let me know what's what.
@Ryu-Salazar Server farms never shut down, period. Literally ever. Hard drives get swapped, even power supplies. The only way a server farm gets shut down is because it's dying. I'm getting a little sick of responding to these no-knowledge, no-experience claims. The hosting companies who host your games don't shut down either, and neither does 90% of the MMO industry. Weekly maintenance schedules are performed by professional companies, and gamers are informed ahead of time, in game, professionally. This game and the other 10% are the few who haven't got there yet, but hopefully they will. We are trying to solve a problem for the gamers in our own community since Funcom isn't, on the official servers or for anyone else. The objective of this thread is for them to become aware of the issue and to direct a solution, and for other hosted-server admins to join the discussion. We have a large responsibility to keep things working for many people. We are being forced to do this because there is literally no one at Funcom addressing it, so we now resort to our own SQL database repairs. Think of that sh** for a moment: a gamer is literally editing the game service's main database file himself. Jeez, how far we have stooped. I'm literally almost done capitalizing their company's name.
Sounds like you have professional experience with game server clusters?
I have experience with servers. Shutting them down is not an option unless absolutely critical. If it were, annual IT budgets would triple. Besides, we're not even talking about starting or restarting physical servers to begin with, something you should have read. This discussion relates to the daily requirement of shutting down and starting up the game service and its game.db database file, not the physical server. Hence my point in my previous statement.
I'm sorry. I was referring to the game service, not to the machine itself. However, I have read about "rebooting" operations in other cases, but let's not go down that road any further.
So here is what I have to report regarding the batch script. It eliminated all loose campfires around the world. It starts all thralls at 0 health, which is just dumb. I'll add more as the night progresses and I get more information from my gamers. The game does seem smoother, though. Still, the looming issue of very long startup times continues to plague us, and I fear that as the server grows and the game.db file grows with it, this will only get worse.
I don't think this will get any faster. The more that's built on the map, the more it has to load up. Personally, 1 to 10 minutes to boot isn't a huge issue, but if it continues to get longer, I dread to think what it will be like in a year's time!
We are now starting to get a few clans all with Lv60 players. They are all active and building some huge bases all over the map. I’m sure we will start getting slow-downs over the next month or two.
I only used the maintenance script as detailed in my previous post, I have a small team of admins who clear items they find whilst they play the game, this works for us right now.
No imagination needed; take our 1941 US official server for example. It's been up since the wipe in early 2017 and now takes between 4 and 6 hours to come online once it goes down. This is the only reason 15 of us left.
They should fix this issue long term, or repair the file in such a way that it repairs itself while live and doesn't go down unless a patch needs to be applied. Then they should also create official servers that private hosters like us can transfer their game.db file to, so that players can continue to play on official servers and carry on without having to start all over.
Thank you. I noticed a similar issue on several official servers. Several have been plagued by foundation spammers, and there seems to be a correlation between when they joined and the server going down.
The first and primary issue is the DBMS itself. (Being SQLite).
It’s the wrong DBMS for this application.
PostgreSQL is the best free database.
FunCom could have averted this issue in the beginning if they had built the game with a proprietary DBMS.
@Ninja_Havok pretty much every single MMO out there has a daily restart/maintenance window for a reason. Some may over time move to weekly restarts/maintenance.
And I work as a cloud engineer with tier 4 datacenters. There is quite a lot of wrong in your statements unfortunately.
I get it, you would have preferred the enterprise solution for your Conan Exiles server, with per-core costs in excess of $20,000, to make sure it could run all the required maintenance without shutting down your game.
Maybe if you paid $100 a month in subscription fees, that could work.
This doesn't sound right, buddy. Surely it's more the case that the hosts don't act quickly to reboot the game, rather than it actually taking that long to boot up?
The sad thing is that this is exactly a topic for early access testing. But unfortunately the opportunity was missed.
Vahlok, my sentiments exactly.
7700Ks and fast SSDs do make a huge difference in performance, especially for single-threaded operations. I doubt your hosting provider is using a consumer-grade CPU in their servers, but maybe.
I tried to install on my slow SSD storage pool and was experiencing really bad rubber-banding. So another issue I'd like to point out is that SQL updates are done synchronously during a tick, and if the update/write doesn't complete before the next tick, the client will rubber-band.
So, if you're not rubber-banding, you can take comfort in the fact that your provider is using decently fast SSDs. I moved the install to my Samsung Pro M.2 storage pool and the issues went away.
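The tick-stall point above comes down to how long each commit spends waiting on the disk, which SQLite's `synchronous` pragma controls. A small benchmark sketch, assuming nothing about the game's own settings (the function name `time_commits` and the row counts are mine, purely for illustration):

```python
import os
import sqlite3
import tempfile
import time

def time_commits(synchronous: str, n: int = 200) -> float:
    """Time n single-row autocommitted inserts under a given
    PRAGMA synchronous setting (OFF, NORMAL, or FULL).  In FULL
    mode each commit waits for the disk to acknowledge the write,
    which is the kind of stall a game tick feels on slow storage."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path, isolation_level=None)  # autocommit
    con.execute("PRAGMA journal_mode=WAL")
    con.execute(f"PRAGMA synchronous={synchronous}")
    con.execute("CREATE TABLE t(x)")
    start = time.perf_counter()
    for i in range(n):
        # Each insert is its own commit, so each one pays the
        # durability cost of the chosen synchronous level.
        con.execute("INSERT INTO t VALUES (?)", (i,))
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed
```

On fast NVMe the gap between `time_commits("FULL")` and `time_commits("OFF")` is small; on slow or contended storage it widens sharply, consistent with the rubber-banding going away after the move to the M.2 pool.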
I'm also using 32 GB of RAM; a fresh install is taking 4 GB.