

  • Beginning artists also need good reference material to become good artists and create new, transformative work. Is that also copyright infringement? For training to be useful, you need material more refined than what you can currently produce; that's just how knowledge works. The goal of these AIs isn't to reproduce their reference material. If it were, you'd have a case. You can easily see from the output of these generators that the vast majority of what they produce is transformative, confirming their intended goal.

    Scraping data is also very well established as not infringing copyright when it's used for analysis. And if you've ever done any kind of analytical research yourself, for a PhD or any other higher degree, you know this to be a fundamental freedom required for a healthy society, not just for artists to learn.

    Treating it the way you propose would essentially turn ideas into property that can be owned and licensed, and I can tell you now: the same companies you probably dislike would own so many of these ideas that you could effectively do nothing without paying a license to one of them. Is that what you want?

    And, well, it shouldn't need to be said, but if a company gets sued when they think they're in the right, they're going to defend themselves lol. And as far as I know, none of these lawsuits have been settled in favor of the artists claiming copyright infringement.


  • Yes, exactly. The people who can conceivably use AI best are those with very little to begin with. And should you create something successful, you would most likely eventually hire actual artists to assist you. It's never that black and white. There's a lot of bad to say about the big companies and their fascination with putting AI into everything, but focusing only on that overlooks the much broader societal impact of AI, which is far more visibly positive for independent creators and smaller companies.

    The sudden change in how some people weigh copyright infringement feels mostly like a tactic to me too. Which is a shame, because you don't need such things to get sympathy from most people. Losing job security is not something people are stone cold about; most will support protections on that basis alone. Misrepresenting or lying about it will make allies shy away from you even if they have your best interests at heart. As someone else put it in one of these threads: "If ethics is on your side, slam ethics. If the law is on your side, slam the law. If neither is on your side, slam the table." This fascination with harshly applying copyright infringement to people doing things with AI that artists have done without AI since the dawn of time is stupid.



  • Funny - I distinctly remember not having any time to recreationally make, and most importantly actually finish, small art pieces, because society nowadays demands that I spend 80% of my waking hours working on things that aren't quite art. AI-assisted tools let me use the remaining 20% to actually make something satisfying again. For me, and for most people I talk to in a similar situation, they have made being creative enjoyable again.


  • Yeah, I can't see artists who have zero nuance about AI as anything but hypocritical. Most artists I know from the industry understand that, legally, they have no case against these companies, because those companies use the same fundamental freedoms and ideas, extracted from collective human creativity, that they themselves used to get where they are. Art and creative studies explicitly teach you this: you spend a lot of time analyzing great works to see what makes them so special, and replicating those ideas as practice.

    It's been this way forever, and many great artists in history are on record as having directly studied, imitated, or produced homages to other great artists. The Mona Lisa is the best example: it has countless derivative works, but nobody questions the ethics of that, because we accept that even a work directly based on another leaves room for creative input that can make it distinct. And nobody claims to have made the original, just their own version.

    Hiding or downplaying these facts about the creative industry so you can call AI theft without being a hypocrite is very questionable behaviour. It's often used on people who don't know much about the creative process and can't see that their ignorance is being exploited; they're being conditioned to believe these practices aren't just a normal part of becoming a better artist. And when pressed on that, the response is usually "but it's okay if a human does it", an admission that the point was intentionally misrepresented so people wouldn't notice the AI is doing the same thing the human does, not explicit copyright infringement akin to real theft.

    You can still dislike AI, or argue for better protections for people displaced by it; I honestly partially agree. The technology needs to remain in the hands of the working people who contribute to the collective, not gated behind proprietary services built to extort you. But arguing against AI on the level of theft or plagiarism (barring situations where the person using the AI intends exactly that) is incredibly disingenuous, and it makes allies not want to associate with you because you're spouting falsehoods for personal gain. Even if I think you deserve all the help in the world, you're asking me to accept and propagate a lie to support you, and I will not do that.

    And there's the flipside. Limiting those freedoms so that AI is outlawed or constrained would most likely cause unintended side effects that blow up in artists' faces, limiting not only their freedoms but also those of artists who embrace AI and use it as the tool it's meant to be. And you bet your ass that companies like Disney are salivating at the idea of amending copyright law once more.



  • PC is typically easier to develop for because it lacks the strict (and frequently silly) platform requirements of consoles. Those requirements typically make console development more expensive and slower than it needs to be compared to just targeting PC. If the consoles' barrier to entry were reduced to that of PC, you'd see a lot more games on them from smaller developers.

    With current-gen consoles, pretty much every game starts as a PC game already, because that's where the development and testing happen.

    Rockstar is the exception here in that they are intentionally skipping PC - a platform well within reach of a company their size, and one they are clearly capable of supporting.

    If another AAA game comes out with PC-only support, I'll be right there with you - but most game developers with the capability now release on all major platforms. Except, it seems, the small console indie studio called Rockstar Games.


  • First: they did actually end up removing this and making it configurable; check the bottom of the page. In a vacuum, the idea of stopping clear-cut racists and trolls from using Lemmy isn't too controversial. Sure, they were hard-asses about changing their mind and letting instance owners configure it themselves (and I'm glad they eventually did). But there's a big overlap between passionate and opinionated people, and maintainers sometimes have to be opinionated to keep a project from devolving into something they can't put their passion into anymore.

    Second: I mean… what do you expect? In the issue above they actively encourage people to make their own fork of Lemmy and run that if they don't like something about the base version, so I'd assume they practice what they preach. Instance owners also have the option to block communities without defederation. Lemmy.ml is basically their home instance. If anything, this is a reason not to make an account on lemmy.ml, but as long as it doesn't leak into the source code of Lemmy, who cares?





  • It's a bit of a flawed comparison (AI vs. a hammer), but let me try.

    If you put a single nail into wood with a hammer - something anyone with a hammer can do, and even a hammer-swinging machine could do without human input - you can't protect it.

    If you put nails into the wood with the hammer so that they show a face, you can protect it. But you would still not be protecting the process of driving a single nail (even though the nail face is made by repeating that process many times); you would specifically be protecting the identity of the face made of nails as your human artistic expression.

    To bring it back to AI: if the AI can produce a work without sufficient input from a human author (e.g. only a simple prompt, no post-processing, no compositing, etc.), it's likely not protectable, since anyone can take the same AI model, use the same prompt, and get the same or a very similar result (the equivalent of putting a single nail into the wood) - see the sketch at the end of this comment.

    Take the output, modify it, refine it, composite it, and you’re creating the hammer equivalent of a nail face. The end result was only possible because of your human input, and that means it can be protected.
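
    To make the "single nail" concrete, here's a minimal sketch (my own illustration, not something from this thread) assuming the Hugging Face diffusers library and the public runwayml/stable-diffusion-v1-5 checkpoint: fixing the model, prompt, and RNG seed makes the output reproducible by anyone running the same setup.

```python
# Minimal sketch: same model + same prompt + same seed -> same image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse at dusk, oil painting"

# Fixing the RNG seed makes the sampling deterministic on a given setup:
# anyone re-running this script reproduces the same image ("single nail").
generator = torch.Generator(device="cuda").manual_seed(42)
image_a = pipe(prompt, generator=generator).images[0]

generator = torch.Generator(device="cuda").manual_seed(42)
image_b = pipe(prompt, generator=generator).images[0]

# The two outputs match; no human authorship distinguishes them.
assert list(image_a.getdata()) == list(image_b.getdata())
```

    Modify, refine, or composite that output and the result diverges from what anyone else can trivially reproduce - that's where the human input comes in.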


  • That's an eventual goal, and it would be an artificial general intelligence (AGI). Different kinds of AI models for at least some of the things you named already exist; it's just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text (see the sketch at the end of this comment). It just so happened that with enough training data, their text prediction also started giving somewhat believable and sometimes factual answers, mixed in with plenty of believable bullshit. Other kinds of data require different training data, different models, and different finetuning, hence why it takes time.

    It's highly likely, for a company of OpenAI's size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime), that they already have multiple AI models for different kinds of data in research, training, or finetuning.

    But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn't exist yet, because they are different models that store and express their data in different ways. And it's not as if training data exists for that either. And unlike physical beings like humans, it has no way to "interact" and "experiment" with the data it knows in order to form concrete connections backed by factual evidence.
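
    A minimal sketch of what "extrapolating text" means, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (my own illustration, not OpenAI's models): the model only continues text with statistically likely tokens; nothing checks the continuation against facts.

```python
# Toy demonstration of text extrapolation with a small public model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

out = generator("The first person to walk on the Moon was", max_new_tokens=10)
print(out[0]["generated_text"])
# The continuation is just the statistically likely next text. It may be
# right, or it may be believable bullshit; there is no built-in fact-checker.
```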



  • I had YT Premium for a while, and then I just wanted to download some videos (you know, like they advertise you can) and they just didn't allow it. I had to either watch them in the YT app or on youtube.com on my PC. That's not downloading - that's just streaming with less computation for YouTube, which helps YouTube but not me. What a great 'premium benefit'!

    Cancelled my Premium right then and there. If they can't provide a feature as simple as downloading videos to mp4 or something, that's just misleading. It literally takes seconds to find a third-party site or app (NewPipe) that does it - or a few lines of script, as sketched below.
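
    For instance, a minimal sketch assuming the third-party yt-dlp Python library (a different tool from NewPipe, used here only as an illustration; the URL is a placeholder):

```python
# Minimal sketch: save a video as an mp4 file using the yt-dlp library.
import yt_dlp

ydl_opts = {
    "format": "mp4",                 # prefer an mp4 container
    "outtmpl": "%(title)s.%(ext)s",  # name the file after the video title
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    # Placeholder URL - substitute the video you actually want to save.
    ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])
```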




  • You're shifting the goalposts. You wanted an AI that can learn while it's being used, and now you're unhappy that one existed that did so in a primitive form. If you want an artificial general intelligence that actually understands the words it says, we are still decades off. For now it can only work off patterns, and the training data for those patterns needs to be curated. And as explained previously, it's not copyright infringement to train on publicized works. You are simply denying that fact because you don't want it to be true, but it is, and that's why your sentiment isn't shared outside of whatever anti-AI circle you're part of.

    > The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get stable diffusion to spit out the right blend of artists' labor is anywhere near equivalent to the literal collective millions of man-hours spent by artists honing their skill in order to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

    So because you don't know any creative people who use the technology ethically, they don't exist? Good to hear you're standing up for the little guy who isn't making headlines or being provocative. I don't necessarily see those as ethical uses either, but it would be incredibly disingenuous to insinuate they are the only or primary ways to use AI. They are not, and your ignorance is showing if you actually believe so.

    > Frankly, I'm absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies' personal profit. It's no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.

    I'm sorry, but you realize this doesn't make any sense, right? Large corporations are exactly the ones with enough information and/or money at their disposal to train their own AIs without relying on publicized works. If any kind of blockade were created to stop AI training on public works, you would effectively be taking AI away from the masses in the form of open-source models, not from those corporations. So if anything, it's you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.

    Don't think I actually like companies like OpenAI or Meta; it's why I've been arguing about AI models in general, not their specific usage of the technology (as that is a whole different can of worms).