Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they aren't caching content and are only storing the data posted to communities local to the server, there will still be virtually limitless growth in storage requirements. Eventually it may no longer be economically feasible to host the infrastructure needed to keep expanding the server's storage. What happens at that point? Will servers begin to periodically purge old content? I'm concerned there will be a permanent horizon beyond which old, and still very useful, data ceases to exist (and as Lemmy becomes more popular, the rate of storage growth will also increase, pulling that horizon closer). Is there any plan to archive this old data?
One way to approach the geometric storage growth would be to not cache everything everywhere all at once. With 1000+ instances, storing an object on just a few of them would be fine as long as the others can pull it in on demand, using typical cache-eviction heuristics like access frequency and age.
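For illustration, here's a rough sketch of what a frequency/age-based eviction policy could look like. All names and the scoring formula are hypothetical, not anything Lemmy implements today:

```python
import time

# Hypothetical eviction scoring for federated objects: keep hot objects
# locally, drop cold ones, and refetch from the origin instance on demand.
def eviction_score(last_access: float, access_count: int, now: float) -> float:
    """Lower score = better candidate for eviction."""
    age = now - last_access  # seconds since last access
    return access_count / (1.0 + age)

# cache maps object_id -> (last_access_timestamp, access_count)
def pick_victims(cache: dict, n: int) -> list:
    now = time.time()
    ranked = sorted(cache, key=lambda k: eviction_score(*cache[k], now))
    return ranked[:n]  # the n coldest objects to evict

cache = {
    "post/123": (time.time() - 30 * 86400, 2),  # a month old, rarely read
    "post/456": (time.time() - 60, 500),        # read a minute ago, hot
}
print(pick_victims(cache, 1))  # -> ['post/123']
```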
This is a great idea. Instances will eventually need to agree on common storage areas, even if they don't all allow the same content on their instance. The savings would be huge in the long run.
Personally I think we should differentiate between the storage policies for content owned by your own instance and content federated from other instances.
The former should be kept for a long time (forever?), while the latter can be cleared more regularly.
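As a minimal sketch of that split, assuming a hypothetical retention window (none of these names come from Lemmy's actual config):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical split retention policy: local content is kept forever,
# federated copies become purgeable after a configurable window.
REMOTE_RETENTION = timedelta(days=90)  # made-up example value

def should_purge(is_local: bool, published: datetime) -> bool:
    if is_local:
        return False  # never purge content authored on this instance
    return datetime.now(timezone.utc) - published > REMOTE_RETENTION

old_post = datetime(2020, 1, 1, tzinfo=timezone.utc)
print(should_purge(True, old_post))   # False: local, kept forever
print(should_purge(False, old_post))  # True: remote and past the window
```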
pict-rs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM's filesystem). It helps a lot with image storage, which is a large part of the problem, but it's not the whole problem.
I know Lemmy uses Postgres for everything else, but they should really invest time in moving towards something more sustainable for long-term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that's not an option.
I'm armchair architecting here, so I'm not sure what that would look like for Lemmy (Cloudflare KV? Redis?).
Still, even my own private instance has been growing at a rate of about 700MB per day, and I don’t even subscribe to that many things. I can’t imagine what the major instances are dealing with. This isn’t sustainable unless we want to start purging old data, which will kill Lemmy long term.
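For scale, that growth rate annualizes to roughly a quarter of a terabyte, and that's for one small private instance:

```python
# Back-of-envelope: 700 MB/day of database growth, annualized.
mb_per_day = 700
gb_per_year = mb_per_day * 365 / 1000
print(f"~{gb_per_year:.0f} GB/year")  # ~256 GB/year
```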
EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:
https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416
The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… Every 6 months. There are active discussions on how best to handle this.
On my instance I've set up a cronjob that deletes everything but the most recent 100k rows of that table every hour.
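For anyone wanting to do the same, here's roughly what that job looks like. This is a sketch only: it assumes the oversized table is `activity` with a monotonically increasing `id` column, which you should verify against the linked issue for your Lemmy version, and back up before running it:

```python
import psycopg2

KEEP_ROWS = 100_000

# Trim the table to its newest KEEP_ROWS rows. If the table has fewer
# rows than that, the subquery returns nothing and the DELETE is a no-op.
conn = psycopg2.connect("dbname=lemmy user=lemmy")  # adjust DSN for your setup
with conn, conn.cursor() as cur:
    cur.execute(
        """
        DELETE FROM activity
        WHERE id < (
            SELECT id FROM activity
            ORDER BY id DESC
            LIMIT 1 OFFSET %s
        )
        """,
        (KEEP_ROWS - 1,),
    )
    print(f"deleted {cur.rowcount} rows")
```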
I saw that issue, and then I saw people having problems after clearing it, so I’m just going to wait until they figure that out in a stable version. Looking forward to it though!
@ubergeek77@lemmy.ubergeek77.chat @lodion@aussie.zone Can either of you link to that discussion please?
It looks like the issue I was referring to has since been edited, as it’s not actually relevant to clearing this database bloat:
Is the 700MB just the Postgres data, or everything including the images?
I'm under the impression that text should be very cheap to store inside Postgres.
Keep in mind that you are also storing metadata for each post (e.g. creation time), relations (e.g. which user posted it), and an index.
It might not be much now, but these things really add up over the years.
Yes, but those are generally only a few bytes each. The average comment will be less than 1KB, and the metadata that goes with it barely more.
On the other hand, most images will be around 1MB, roughly 1000x larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I've seen on GitHub, the database size is actually mostly debugging information, which might explain the weirdness.
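Putting made-up but plausible daily volumes against those per-item sizes shows how lopsided it is:

```python
# Illustration only: volumes are invented, sizes are the ~1KB/comment
# and ~1MB/image figures from above.
comments_per_day = 50_000
images_per_day = 2_000

text_mb = comments_per_day * 1 / 1000  # ~1 KB per comment
image_mb = images_per_day * 1          # ~1 MB per image
print(f"text: {text_mb:.0f} MB/day, images: {image_mb:.0f} MB/day")
# -> text: 50 MB/day, images: 2000 MB/day
```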
The long-term solution is something like IPFS object storage that's read-only for everyone but the author's instance: one copy of the data that all instances can read, stored forever on a redundant medium with bitrot protection.
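Whatever the transport, the core idea here is content addressing: an object's address is derived from its bytes, so any instance holding a copy can serve it and the reader can verify it. A toy illustration (real IPFS uses multihash-encoded CIDs, not bare SHA-256 hex):

```python
import hashlib

store: dict[str, bytes] = {}  # stand-in for a distributed store

def put(content: bytes) -> str:
    # The address IS the hash of the content.
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    content = store[address]
    # Any reader can verify the bytes match the address.
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = put(b"a federated post body")
print(addr[:12], get(addr))
```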
No thanks, don’t need crypto bullshit
Because ipfs isn’t free and gets paid in filecoin, which is exactly what it sounds. Just crypto bullshit.
@mojo Just keep telling people you don't know what IPFS is without coming right out and saying it. Lol.
“IpFs GeTs PaId In FiLe CoIn”
IPFS is a protocol, you nitwit. That's like saying "ActivityPub gets paid in Filecoin." Makes no fucking sense. Build a Fediverse layer on IPFS; no crypto needed. FFS, get educated before you start trying to talk to adults.
Jesus… just stop.