Currently I am setting up a Pleroma instance on a small ARM board behind a 12Mbit/s asymmetric DSL line. We'll see how well this works out, but in theory a single user should only generate about as much traffic as your Mastodon client does, at most.
Pleroma is much easier to set up and basically runs on a potato
@sara @aral I worry a bit about this. I understand some of the reasoning, but one of the big things people love about instances is the sense of community. I think making accounts easily transferable between instances is a better answer than running your own instance. Not always, but for most people I think it's the better long-term solution, and one that doesn't lose that community feeling (which can admittedly sometimes go very wrong).
@sara I can see many smaller instances being a good thing; I'm still hesitant about the single-user instance concept. Transferability would greatly help people be willing to try smaller instances. It's primarily the desire to choose one I trust to be around long term that made me initially choose mastodon.social (which I've since had second thoughts about). Transferability would help lessen that fear of smaller instances going away. Especially if I could easily back up my account on a regular basis.
@sara Well, looking at the install procedure and all the components involved, I don't think setting up an instance initially is the biggest issue. Keeping it running, maintained, updated and reasonably secured possibly is. Especially if you have little to no experience with any of the pieces of software involved. From a security point of view, such an environment seems just about as desirable as a pile of WordPress or T3 installations that have never seen any real administration... 😐
In my estimation, during the set-up of any network service one *can* anticipate and configure things in such a way as to mitigate or forestall future problems. But only up to a point, and even then only optimally if one already has some combination of understanding & experience with the service.
Beyond this set-up phase lie the challenges of maintenance.
@deejoe @sara Yes. The latter is the actual point. Looking at my history of operating both FLOSS (ever since the late 1990s) and in-house software, building and setting things up was mostly trivial. Things usually got painful and nasty with the more fundamental changes: bundled web servers or other components (sidekiq, node, ...) require an update. The database schema needs an update on a filled production database. The file system layout changes between two releases. This is where ...
@deejoe @sara ... "fun" usually starts. And this is where, in some situations, even experienced people need to think twice before doing anything wrong. I've seen critical MXs go down for days just because what ought to be a "simple upgrade" of an upstream dependency had unforeseen consequences and crashed+burnt the whole installation. And these, in most cases, where software components that have been out there for a longer period of time, had a stronger backing in terms of skilled ...
@deejoe @sara Yes of course. There *are* solutions to that. In our environment, for example, we use (mostly...) #puppet and #docker for maintaining reproducible environments (package versions, configuration of dependencies such as reverse proxies and databases, database drivers, ...), tools such as #liquibase that keep track of database versioning (a toy sketch of that idea follows below), and a bunch of others. I'm not saying this is not *doable* - of course you can learn and work your way through all of this. But it's a steep learning curve for ...
@deejoe @sara ... admins who want to be ready to deal with the unexpected and to help out when it happens. On the other hand (on our own infrastructure), I've also had enough "fun" preventing damage from being DDoSed by unpatched and subsequently exploited servers somewhere out there. That's a direct consequence of people without any sensitivity for these problems running modestly complex software completely on their own. 😐
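To make the database-versioning point concrete, here is a minimal toy sketch in Python of the idea that tools like #liquibase implement properly: numbered SQL files plus a bookkeeping table that records what has already been applied, so re-running it against a filled production database only applies what's new. Everything here (the migrations/ directory, the schema_migrations table, the sqlite backend) is a made-up placeholder for illustration, not how liquibase itself works.

```python
"""Toy illustration of database schema versioning (the idea behind #liquibase).

Hypothetical setup: migrations live as numbered .sql files in migrations/,
and a schema_migrations table remembers which ones already ran, so
re-running this against a filled database only applies what is new.
"""
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 001_create_users.sql, 002_add_bio.sql


def apply_pending_migrations(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
    )
    conn.commit()

    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

    for sql_file in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if sql_file.stem in applied:
            continue  # this migration already ran on this database
        conn.executescript(sql_file.read_text())  # apply the schema change
        conn.execute(
            "INSERT INTO schema_migrations (version) VALUES (?)", (sql_file.stem,)
        )
        conn.commit()
        print(f"applied {sql_file.stem}")

    conn.close()


if __name__ == "__main__":
    apply_pending_migrations()
```

Real tools additionally track checksums, handle rollbacks and so on, which is exactly the part that gets hairy on a live instance.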
Still a *lot* of work to do and many things missing, but ideally we want to gather enough documentation about how we run our servers (mastodon, XMPP, rocketchat, mailman3, a bunch of other things).
It's clear that there is a difference between installing a piece of software by copy-pasting sudo commands and maintaining a whole machine. We should find ways to encourage people to get admin skills or to co-admin servers.
@sara I agree. I set one up and ran it for months. Great experience. There are two aspects of the Mastodon instance admin experience that need work IMO: 1) Upgrading to new versions (mine blew up when I did an upgrade; multiple core devs looked at it and were mystified; my instance went mute even though all the pieces still worked - something in the routing was busted) - and 2) Routing. You can run your own instance, but understanding how traffic flows and having links with other busy 'hub' instances are important.
@sara Another thing that would help is rock-solid instructions on how to back up an instance. Everything - the web stack, backend, all of it - so that if you attempt an upgrade and botch it, rolling back is an option. I had my postgres backed up but wasn't sure about the rest, and with things like migrations etc. it can get super tricky.
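For what it's worth, here is a rough Python sketch of what such a pre-upgrade backup could look like. It assumes a typical non-Docker source install with local media storage; the paths, database name and MASTODON_HOME location are placeholders and this is not official Mastodon guidance. The idea is simply: dump Postgres in a restorable format and keep the secrets file and uploaded media next to it, since the database alone isn't enough to roll back to.

```python
#!/usr/bin/env python3
"""Rough sketch of an instance backup before attempting an upgrade.

Assumptions (adjust for your setup): a source install under
/home/mastodon/live, local media in public/system (no S3), and the
default database name. None of this is official Mastodon tooling.
"""
import datetime
import pathlib
import shutil
import subprocess

MASTODON_HOME = pathlib.Path("/home/mastodon/live")   # assumed install location
BACKUP_ROOT = pathlib.Path("/var/backups/mastodon")   # wherever you keep backups
DB_NAME = "mastodon_production"                       # default DB name; verify yours


def run_backup() -> pathlib.Path:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / stamp
    dest.mkdir(parents=True, exist_ok=True)

    # Database dump in pg_dump's custom format so pg_restore can rebuild it.
    with open(dest / f"{DB_NAME}.dump", "wb") as out:
        subprocess.run(["pg_dump", "-Fc", DB_NAME], check=True, stdout=out)

    # Secrets: without .env.production the database alone won't bring the
    # same instance back (secret key base, OTP secret, VAPID keys, ...).
    shutil.copy2(MASTODON_HOME / ".env.production", dest / "env.production")

    # Uploaded media (local storage only; skip if you use S3 or similar).
    subprocess.run(
        ["tar", "-czf", str(dest / "media.tar.gz"),
         "-C", str(MASTODON_HOME), "public/system"],
        check=True,
    )
    return dest


if __name__ == "__main__":
    print(f"backup written to {run_backup()}")
```

Restoring is then pg_restore plus putting the media and .env.production back; anything else your setup relies on (redis, search indexes) is worth checking separately before you trust the rollback.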
@feoh @sara this is somewhat useful re: migrations
> Our local raccoon families need some hacking.
P.S. I do not mean anything harmful. I had a friend many years ago who was sorta a raccoon whisperer.
Everything is connected.