BrooklynMan

designer of experiences, developer of apps, resident of nyc, citizen of earth

he/him

tip jar

  • 5 Posts
  • 48 Comments
Joined 1 year ago
Cake day: June 2nd, 2023

  • “Bermaga”-era Trek, as I like to call it, had a lot of warmth, too, but it was certainly more serious. They really tried to formalize Trek much more, especially the lore and the tech. It did come off as stiff a lot of the time, but it had its own goofy moments, too. It was certainly different in tone, though, especially DS9, which was pretty dark in its portrayal of Trek at the edges of, and sometimes outside, the Federation. The whole idea, though, was to portray a much more mature Federation and Starfleet, and I think they did a good job of that.

    It’s also what PIC S1 and S2 missed— the human connections, the warmth, that were present in the Bermaga-era Trek shows. That, and the good writing, directing, and acting. The characters weren’t believable, and Trek was presented as some action series set in a dystopian future that certainly seemed alien to Trek viewers. No wonder everyone hated it. It’s also why S3 was such a hit: it was a return to everything that made 90s-era Trek great: excellent, character-driven storylines with clever tech problems that everyone had to solve together using science and ingenuity.

    I love how SNW has hit its stride this season, has broken out of the DSC formula, and is hitting all the right notes (no pun intended).

  • You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.

    you’re accusing me of exactly what you’re clearly doing, after I’ve explained twice how you’re doing it. I’m not going to waste my time doing it again. except:

    Where copyright comes into play is in whether the new work produced is derivative or transformative.

    except that the contention isn’t necessarily over what work is being produced (although whether it’s a derivative work is still a matter for a court to decide anyway); it’s that the source material is used for training without compensation.

    The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it.

    and, likewise, so are these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.


  • Of course it is. It’s not a 1:1 comparison

    no, it really isn’t: it’s not even a 1000:1 comparison. AI generative models are advanced relational algorithms and databases. they don’t work at all the way the human mind does.

    but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter to you? I doubt that very much.

    no, the results are just designed to be familiar because they’re made by humans, for humans, to be that way, and none of this has anything to do with this discussion.

    Having to individually license each unit of work for a LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.

    nobody is saying it should be individually licensed. these companies can get bulk license access to entire libraries from publishers.

    That’s not materially different from how anyone learns to write.

    yes it is. you’re just framing it in those terms because you don’t understand the cognitive processes behind human learning. but if you want to make a meta comparison between the cognitive processes behind human learning and the training processes behind AI generative models, please start by citing your sources.

    The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. A LLM can do it almost infinitely and then have that ability reused by any number of other consumers.

    this is not the difference between humans and AI learning, this is the difference between human and computer lifespans.

    There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models

    no, it’s a case of your lack of imagination and understanding of the subject matter

    and maybe Congress should remedy that

    yes

    but on its face I don’t think it’s feasible to just shut it all down.

    nobody is suggesting that

    Something of a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.

    lmao