I mean, seriously? This dude’s project has had hardly any downtime (as in, completely inaccessible) in the last few days during a massive migration. How impressive is that? He was able to find a solution with Cloudflare where, sure, things were a little slow to load and didn’t federate, but I never found myself unable to access kbin. On top of that, he’s communicating clearly and often. I hope this succeeds. @ernest has absolutely earned it.
There have been a few times I’ve been unable to log in, hit unresponsive pages, etc., but considering how recently kbin.social was created, the massive influx of users, and the fact that @ernest is managing all this himself, it’s an absolutely phenomenal job.
I haven’t looked into just what the backend architecture is like, but I’ve seen comments suggesting it may be a single physical server? If so, some short periods of downtime are unavoidable. I do high-availability backend dev, and getting near-perfect uptime is no easy task. Distributing servers across multiple locations is essential for that, but it generally requires careful design. Things also get more complicated on the database side once you go distributed (though I swear by distributed databases).
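Just to illustrate the idea (I have no insight into kbin’s actual setup, and the hostnames below are made up): the simplest form of “multiple locations” is running more than one replica and falling back to the next one when a request to the first fails. Real deployments usually do this in a load balancer with health checks rather than in the client, but a minimal sketch in Python looks like this:

```python
# Minimal sketch only -- not kbin's setup. Hypothetical replica hostnames;
# a real HA deployment would put a load balancer / health checker in front.
import urllib.request
import urllib.error

REPLICAS = [
    "https://app-eu.example.com",  # hypothetical replica in one region
    "https://app-us.example.com",  # hypothetical replica in another region
]

def fetch_with_failover(path: str, timeout: float = 2.0) -> bytes:
    """Try each replica in turn; only fail if every one is unreachable."""
    last_error = None
    for base in REPLICAS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc  # this replica is unhealthy; try the next one
    raise RuntimeError("all replicas unavailable") from last_error

if __name__ == "__main__":
    print(fetch_with_failover("/")[:100])
```

The careful-design part is everything this sketch glosses over: keeping the replicas’ databases in sync, routing users to a nearby healthy node, and making sure a failover doesn’t lose writes.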
@ernest doing the business. What a dude.
If you have the means, sling him some funds.
Bought him several coffees earlier today; he deserves it!
Thanks for the link, just donated!
Chucked another coffee. The black gold will power the servers through.
Do you have the link on hand? I’ll pour one out too.
Sausage linked it above.
Ah, thanks. Done! To save anyone else thinking the same from having to scroll up, it’s: https://www.buymeacoffee.com/kbin