Office space meme:

“If y’all could stop calling an LLM “open source” just because they published the weights… that would be great.”

  • Zikeji@programming.dev
    6 days ago

Open source isn’t really applicable to LLMs, IMO.

There are open weights (the model), available training data, and other nuances.

They actually went a step further and provided a very thorough breakdown of the training process, which means others could train similar models from scratch with their own training data. HuggingFace seems to be doing just that as well: https://huggingface.co/blog/open-r1

Edit: see the comment below by BakedCatboy for a more in-depth explanation and a correction of a misconception I made.