FBXL Social

[Admin mode] This is a log I was writing as I continued through.

We've finally, at long last, made it to the new server! (lol when I wrote that line I was so naive)

One thing I learned is that pg_repack will totally fill up your storage if it fails (as mine did, back when the server was crashing all the time) -- hundreds of gigabytes of orphaned tables that didn't do anything. It massively increased the time the database transfer took for no good reason. For anyone else running an instance, it's probably something to be aware of.

According to the documentation, the leftovers can be cleaned up with:

\c pleroma
DROP EXTENSION pg_repack CASCADE;
CREATE EXTENSION pg_repack;

In the case of my database, I got well over 100GB of drive space back immediately. Before I figured that out, restoring the backup I'd made was sucking up huge amounts of time on dead tables that would never be used again.
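If you want to see how much space the leftovers are taking before dropping anything, pg_repack keeps its work objects in a schema named repack, so a quick query will list them. This is a sketch against my setup -- the database name is mine; adjust to yours:

```shell
# List pg_repack's leftover objects and their sizes. The "repack" schema is
# where pg_repack stashes its temporary tables; database name assumed.
sudo -u postgres psql -d pleroma -c "
  SELECT c.relname, pg_size_pretty(pg_total_relation_size(c.oid)) AS size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'repack'
  ORDER BY pg_total_relation_size(c.oid) DESC;"
```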

So I wrote the above 7 hours ago. It turns out the restore isn't a linear process!!

It's a never-ending process... I understand now why the previous attempt failed: I never could have completed the steps I'm still waiting on now, 12 hours after I started.

It's a substantial upgrade in some ways. The SSD was SATA before; now it's NVMe. The original CPU was an Intel(R) Core(TM) i5-4570TE @ 2.70GHz with hyperthreading disabled. The social container only has 2 of the 4 cores now, but it's on an AMD Ryzen Embedded R1505G with Radeon Vega Gfx.

(The rest of the day passed) Holy moly, 20ish hours in?

(Several more hours in) I ended up calling it a failure 24 hours in and went with a new way of looking at things: instead, I upgraded PostgreSQL 15 to PostgreSQL 16 in place and planned to just move the folders over.

This seemed like a great idea for the first hour... But it turns out slow machines are slow, so it took quite a while to migrate. Still probably the right idea.

Eventually the upgrade did finish, and then I was able to just tar up the PostgreSQL 16 folder and FTP it over to the new server.
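For anyone following along, the in-place major-version upgrade looks roughly like this. This is a sketch assuming a Debian/Ubuntu-packaged PostgreSQL, where clusters are managed with the pg_*cluster wrappers (other distros use pg_upgrade directly); cluster name "main" is the packaging default:

```shell
# Sketch only -- assumes Debian/Ubuntu PostgreSQL packaging.
sudo apt install postgresql-16          # install the new major version
sudo pg_dropcluster --stop 16 main      # drop the empty cluster the package created
sudo pg_upgradecluster 15 main          # migrate 15/main to a new 16/main cluster

# Afterwards the whole version-16 directory can be tarred up and shipped over:
sudo tar -C /var/lib/postgresql -czf pg16.tar.gz 16
```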

Thankfully, this time it did in fact transfer successfully. I had one problem where it seemed the user didn't get created properly, so I reset the password and database permissions. Next, I had a quick issue where Pleroma was binding to the old IP address, but that was a one-line change in the config. Finally, after what felt like days without FBXL Social, things were back up.
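In case it helps anyone hitting the same two snags, the fixes looked roughly like this. Role and database names are assumptions from my setup, and the password is a placeholder; the listening address lives in Pleroma's Endpoint config:

```shell
# Recreate the role's password and permissions (names/password are placeholders):
sudo -u postgres psql -c "ALTER USER pleroma WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE pleroma TO pleroma;"

# The one-line config change: point the Endpoint at the right address in
# config/prod.secret.exs (loopback shown; use whatever your reverse proxy hits):
#   config :pleroma, Pleroma.Web.Endpoint,
#     http: [ip: {127, 0, 0, 1}, port: 4000]
```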

One thing not related to the technical side: there were a few times where I had a thought and went "Oh, that's clever, I should post that on -- oh nevermind, I sure hope postgresql hurries up!"

So a few points afterwards:

1. Proxmox is really nice. Other than constantly whining about not having a subscription, it's really nice.
2. We're now doing automated backups to network attached storage, which is also really nice.
3. It's all just containers, so if hardware fails, I can fire up the same container on another Proxmox machine, which is (you guessed it) really nice. (I was going to try for High Availability, but you need at least three nodes for quorum.)
4. Containers are really light, so I'm able to have individual containers for individual services which is (find another description bro) really nice.
5. Migrating large postgresql databases is friggin slow!
6. Using straight pg_dump to create a plain-text backup of your database is actually stupid, because my backup was 200GB. Once I used the custom format (-Fc), the size went down about 75%.
7. pg_repack helps improve the size and performance of postgres databases, but if it fails halfway through, you end up with potentially huge leftover tables that don't do anything! That was the final straw that killed the original migration. The server took a full day re-indexing one table (I think activity visibility), and I realized the repack tables would probably take just as long or longer.
8. I should have cleaned up my database before trying to migrate in the first place.
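To put point 6 in concrete terms: the difference between the bloated plain-text dump and the compressed one is a single flag, and the custom format also lets pg_restore run with parallel jobs. Database and file names here are assumptions:

```shell
# Plain-text dump -- uncompressed SQL, huge on a big database:
sudo -u postgres pg_dump pleroma > pleroma.sql

# Custom format (-Fc) -- compressed, and restorable with parallel workers:
sudo -u postgres pg_dump -Fc pleroma > pleroma.dump
sudo -u postgres pg_restore -j 4 -d pleroma pleroma.dump
```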

One thing that's really funny -- the server that ran my reverse proxy, my Nextcloud, my main website, the fbxl website, and fbxl social all at once now just runs a couple small things, and it's sitting at 0.04 load. That machine crashing (ostensibly because it couldn't turbo anymore) was the thing that began this whole ordeal, and now it's basically idle.

Next for me will be taking a lot of my now idle or removed boxes and making them into tiny proxmox nodes so I can do all kinds of neat things on the fringes from one centrally managed system. No downtime required since nothing active will go down.
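Joining those boxes into one centrally managed system is mostly just a couple of Proxmox commands. A sketch, with cluster name and IP as placeholders:

```shell
# On the existing Proxmox machine, create the cluster once:
pvecm create homelab

# On each newly installed node, join using an existing node's IP:
pvecm add 192.168.1.10

# Verify membership and quorum from any node:
pvecm status
```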

Still 0 fans in my entire empire of dirt.

Running multi-hundred-GB databases back when I learned SQL would have made me piss my pants.