Well you can start by trying on purpose to make an SCP wiki level horror scene. Then the bugs are features!
Zelda + Scat = Relaxing The Legend of Zelda Jazz Covers
$PIKACHU = $ZELDA + $CAT
I think you can keep doing the SMB shares and use an overlay filesystem on top of those to basically stack them on top of each other, so that server1/dir1/file1.txt
and server2/dir1/file2.txt
and server3/dir1/file3.txt
all show up in the same folder. I’m not sure how gracefully that handles one of the servers just not being there, though.
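A rough sketch of that stacking, assuming the shares are already reachable (the server names, share names, and mount points here are all made up, and the merged view is read-only since there’s no upperdir):

```shell
# Mount each SMB share somewhere local (hypothetical servers and paths)
mount -t cifs //server1/share /mnt/server1 -o ro,guest
mount -t cifs //server2/share /mnt/server2 -o ro,guest
mount -t cifs //server3/share /mnt/server3 -o ro,guest

# Stack them with overlayfs; with only lowerdir set, the result is read-only
mount -t overlay overlay \
  -o lowerdir=/mnt/server1:/mnt/server2:/mnt/server3 \
  /mnt/merged

# /mnt/merged/dir1 should now show file1.txt, file2.txt, and file3.txt together
ls /mnt/merged/dir1
```

Note that overlayfs resolves name collisions by picking whichever lowerdir is listed first, which may or may not be what you want for duplicate files.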
Other than that you probably need some kind of fancy FUSE application to fake a filesystem that works the way you want. Maybe some kind of FUSE-over-Git-Annex system exists that could do it already?
I wouldn’t really recommend IPFS for this. It’s tough to get it to actually fetch the blocks promptly for files unless you manually convince it to connect to the machine that has them. It doesn’t really solve the shared-drive problem as far as I know (you’d have like several IPNS paths to juggle for the different libraries, and you’d have to have a way to update them when new files were added). Also it won’t do any encryption or privacy: anyone who has seen the same file that you have, and has the IPFS hash of it, will be able to convince you to distribute the file to them (whether you have a license to do so or not).
Well how do you think it should work then?
You might want to try OpenStack. It is set up for running a multi-tenant cloud.
Seems to not be paying off though; having whole communities and instances close is pretty inconvenient.
Why does Lemmy even ship its own image host? There are plenty of places to upload images you want to post that are already good at hosting images, arguably better than pictrs is for some applications. Running your own opens up whole categories of new problems like this that are inessential to running a federated link aggregator. People selfhost Lemmy and turn around and dump the images for “their” image host in S3 anyway.
We should all get out of the image hosting business unless we really want to be there.
If the Twitter bird and name are no longer used in commerce, someone should absolutely snap that right up.
That could work fine, probably? Or you could use it on the same machine as other stuff.
ZFS RAID-Z is pretty good for this, I think. You hook up the drives from one pool to a new machine, and ZFS can detect them, see that they constitute a pool, and import it.
I think it still stores some internal references to which drives are in the pool, but if you add the drives by their /dev/disk/by-id paths when making the pool, it ought to be using stable IDs, at least across Linux machines.
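Roughly what that looks like in practice (the pool name, RAID-Z level, and disk IDs below are placeholders; substitute whatever `ls /dev/disk/by-id` shows for your drives):

```shell
# Create the pool using stable by-id names instead of sda/sdb/sdc
zpool create tank raidz1 \
  /dev/disk/by-id/ata-EXAMPLE-DISK-AAAA \
  /dev/disk/by-id/ata-EXAMPLE-DISK-BBBB \
  /dev/disk/by-id/ata-EXAMPLE-DISK-CCCC

# Later, on a new machine with the drives attached:
zpool import          # scans attached disks and lists any pools it finds
zpool import tank     # imports the detected pool by name
zpool status tank     # confirms all member drives were found
```

The by-id names survive the drives being shuffled to different SATA ports or enclosures, which is the whole point of using them here.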
There’s also always Git Annex for managing redundancy at the file level instead of inside the filesystem.
You take a PeerTube channel name and treat it as if it were a community name on Lemmy (https://peertube.instance/c/channelname) and search it up in your instance and make it federate over.
Why should they pay royalties for letting a robot read something that they wouldn’t owe if a person read it?
People will say stuff like “fave before replying” though. And most platforms with a like will be able to make you a list of everything you have liked.
So I think like maps to the little Mastodon star pretty well, even though it might not be meant to be used that way.
I have found Mastodon still does that. And it turned out to be a problem, actually. I just kept going on there for no reason and reading like 100 nothings.
I’m definitely the other way, I want to see the stuff that’s there because I asked for it, and I want to ping pong around from people to the people they talk to to find new people. If I don’t already know of at least one interesting person or instance, why am I even joining the thing?
I appreciate having a list of people I could follow, but if there isn’t one I remember how to make my own fun.
I have indeed made a list of ridiculous and heretofore unobserved things somebody could be. I’m trying to gesture at a principle here.
If you can’t make your own hormones, store bought should be fine. If you are bad at writing, you should be allowed to use a computer to make you good at writing now. If you don’t have legs, you should get to roll, and people should stop expecting you to have legs. None of these differences between people, or in the ways that people choose to do things, should really be important.
Is there a word for that idea? Is it just what happens to your brain when you try to read the Office of Consensus Maintenance Analog Simulation System?
But for text to be a derivative work of other text, you need to be able to know by looking at the two texts and comparing them.
Training an AI on a copyrighted work might necessarily involve making copies of the work that would be illegal to make without a license. But the output of the AI model is only going to be a for-copyright-purposes derivative work of any of the training inputs when it actually looks like one.
Did the AI regurgitate your book? Derivative work.
Did the AI spit out text that isn’t particularly similar to any existing book? Which, if written by a human, would have qualified as original? Then it can’t be a derivative work. It might not itself be a copyrightable product of authorship, having no real author, but it can’t be secretly a derivative work in a way not detectable from the text itself.
Otherwise we open ourselves up to all sorts of claims along the lines of “That book looks original, but actually it is a derivative work of my book because I say the author actually used an AI model trained on my book to make it! Now I need to subpoena everything they ever did to try and find evidence of this having happened!”
In the future, some people might not be human. Or some people might be mostly human, but use computers to do things like fill in for pieces of their brain that got damaged.
Some people can’t recognize faces, for example, but computers are great at that now, and Apple has that thing that is Google Glass but better. But a law against doing facial recognition with a computer, allowing it only to be done with a brain, would prevent that solution from working.
And currently there are a lot of people running around trying to legislate exactly how people’s human bodies are allowed to work inside, over those people’s objections.
I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.
It sounds like nobody actually understood what you want.
You have a non-ZFS boot drive, and a big ZFS pool, and you want to save an image of the boot drive to the pool, as a backup for the boot drive.
I guess you don’t want to image the drive while booted off it, because that could produce an image that isn’t fully self-consistent. So then the problem is getting at the pool from something other than the system you have.
I think what you need to do is find something else you can boot that supports ZFS. I think the Ubuntu live images will do it. If not, you can try something like re-installing the setup you have, but onto a USB drive.
Then you have to boot into that and `zpool import` your pool. ZFS is pretty smart, so it should auto-detect the pool structure and where it wants to be mounted, and you can mount it. Don’t do a ZFS feature upgrade on the pool though, or the other system might not understand it. It’s also possible your live kernel might not have a new enough ZFS to understand the features your pool uses, and you might need to find a newer one.

Then, once the pool is mounted, you should be able to `dd` your boot drive’s block device to a file on the pool.

If you can’t get this to work, you can try using a non-ZFS-speaking live Linux and `dd`-ing your image to somewhere on the network big enough to hold it, which you may or may not have, and then booting the system and copying it back from there to the pool.
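The whole procedure from the live environment boils down to something like this (the pool name `tank`, the mountpoint, and `/dev/sda` as the boot drive are all assumptions; check yours with `lsblk` and the output of `zpool import`):

```shell
# From a ZFS-capable live environment, with the boot drive NOT mounted:
zpool import                  # list pools visible on attached disks
zpool import tank             # import the pool (add -f if it wasn't cleanly exported)

# Image the boot drive's block device into a file on the pool
dd if=/dev/sda of=/tank/backups/bootdrive.img bs=4M status=progress conv=fsync

# When done, export the pool cleanly before rebooting into the real system
zpool export tank
```

Exporting before rebooting matters: it releases the pool so the original system can import it again without `-f`.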