I’m versed enough in SQL and RDBMSs that I can put things into third normal form with relative ease. But the meta seems to be NoSQL, and backends often don’t even provide a SQL interface.

So, as far as I know, NoSQL is essentially a collection of files, usually JSON, paired with some querying capacity.

  1. What problem is it trying to solve?
  2. What advantages over traditional RDBMS?
  3. Where are its weaknesses?
  4. Can I make queries with complex WHERE clauses?
  • theit8514@lemmy.world · 17 points · 8 days ago

    NoSQL is best used as key-value storage, where the value can be non-tabular or mixed data. As an example, imagine you have a session cookie value identifying a user. That user might have many different groups, roles, claims, etc. If you wanted to store that data in an RDBMS you would likely need a table for every 1-to-many data point (Session -> SessionRole, Session -> SessionGroup, etc.). In NoSQL this would be represented as a single key with a JSON object that could look quite different from other Session JSON objects. If you then need to delete that session it’s a single key delete, whereas in the RDBMS you would have to make sure the delete cascades to the downstream tables.
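
    For a concrete picture, here’s a rough sketch of that session stored as a single JSON document under one key, using Redis from Python (the key name and fields are made up for illustration):

    ```python
    import json
    import redis  # assumes the redis-py client

    r = redis.Redis()

    # Everything about the session lives under one key, instead of
    # separate Session, SessionRole and SessionGroup tables.
    session = {
        "user_id": 42,
        "roles": ["admin", "editor"],
        "groups": ["staff"],
        "claims": {"theme": "dark"},
    }
    r.set("session:abc123", json.dumps(session))

    # Reading it back is a single key lookup...
    data = json.loads(r.get("session:abc123"))

    # ...and deleting it is a single key delete, no cascading required.
    r.delete("session:abc123")
    ```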

    These kinds of key-value lookups are often very fast, and they’re also used as a caching layer for complex data calculations.

    The big downside to this is indexing and querying the data by anything other than the primary key. It would be hard to find all users in a specific group, as you would need to scan each key-value pair. It looks like NoSQL databases have some indexing capabilities now, but when I first used them they did not.
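
    For what it’s worth, document stores like MongoDB do support secondary indexes for exactly this kind of non-key lookup these days. A rough sketch with pymongo (the collection and field names are made up):

    ```python
    from pymongo import MongoClient

    db = MongoClient()["app"]

    # Without an index, "find all sessions in a group" is a full scan;
    # a secondary index on the field makes it cheap.
    db.sessions.create_index("groups")

    for doc in db.sessions.find({"groups": "staff"}):
        print(doc["_id"])
    ```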

    • Colloidal@programming.dev (OP) · 8 points · 8 days ago

      Let me see if I got it. It would be like a denormalized table with a flexible number of columns? So instead of multiple rows for a single primary key, you have one row (the file), whose structure is variable, so you don’t need to traverse other tables or rows to gather/change/delete the data.

      The downsides are the usual downsides of a denormalized DB.

      Am I close?

      • Azzu@lemm.ee · 7 points · 8 days ago

        Pretty much. The advantage is not really the unstructuredness per se, but simply the speed at which you can get a single record and the throughput at which you can write. It’s essentially sacrificing some of the guarantees of ACID in return for parallelization/speed.

        Like when you have a million devices that each send you their GPS position once a second. It’s possible with an RDBMS, but the larger your table gets, the harder it’ll be to get good insertion/retrieval speeds; you’d need to do a lot of tuning and would essentially end up at something like a NoSQL database anyway.
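
        For that kind of write-heavy telemetry something like Cassandra is a common choice, since rows are partitioned by key across many nodes. A very rough sketch with the cassandra-driver package (the keyspace and table are hypothetical):

        ```python
        from cassandra.cluster import Cluster

        session = Cluster(["127.0.0.1"]).connect("telemetry")

        # Partitioning by device_id spreads writes across the cluster, and
        # reading one device's recent positions stays a single-partition read.
        insert = session.prepare(
            "INSERT INTO positions (device_id, ts, lat, lon) VALUES (?, ?, ?, ?)"
        )
        session.execute(insert, ("device-123", 1700000000, 48.8566, 2.3522))
        ```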

      • ryedaft@sh.itjust.works · 5 points · 8 days ago

        Yes. You can also have fields that weren’t defined when you created the “table”.

        With something like Elasticsearch you also get tokenisation of text, which obviously compresses it. If it’s logs (or similar), then you also only have a limited number of unique tokens, which is nice. And you can do very fast text search, and everything is set up for other things like tf-idf.
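
        As a rough sketch of the full-text side, assuming the official Python client (8.x-style keyword arguments; the index and field names are made up):

        ```python
        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")

        # Log lines are tokenised at index time...
        es.index(index="logs", document={"message": "connection timeout on node-7"})

        # ...so text search is an inverted-index lookup rather than a scan.
        hits = es.search(index="logs", query={"match": {"message": "timeout"}})
        print(hits["hits"]["total"])
        ```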

      • bahbah23@lemmy.world · 4 points · 8 days ago

        Rather than try to relate it to an RDBMS, think of it as a distributed hash map/associative array.

        • Colloidal@programming.dev (OP) · 4 points · 8 days ago

          What I’m hearing is that they’re very different beasts for very different applications. A typical web app would likely need both.

          • ramble81@lemm.ee · 3 points · 8 days ago

            Yup. And this right here is where I dismiss people who say you only ever need one or the other. Each has a specific advantage and use case, and you’ll have the best performance when you choose the “right tool for the job” and don’t just attempt to shoehorn everything into a single solution.

            • Colloidal@programming.dev (OP) · 1 point · 8 days ago

              Hold a sec. Rolling your own RDBMS out of a NoSQL database is insane. But is the opposite feasible? Wouldn’t it be a simple table with two columns: a key and a JSON blob?
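
              Something like this is what I have in mind (a rough sketch with SQLite, assuming a build with the JSON1 functions; the table and paths are just for illustration):

              ```python
              import json
              import sqlite3

              con = sqlite3.connect(":memory:")
              con.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
              con.execute(
                  "INSERT INTO kv VALUES (?, ?)",
                  ("session:abc123", json.dumps({"user_id": 42, "groups": ["staff"]})),
              )

              # Works, but every json_extract() is re-parsing text at query time,
              # which is the part a dedicated document store optimises.
              row = con.execute(
                  "SELECT json_extract(value, '$.user_id') FROM kv WHERE key = ?",
                  ("session:abc123",),
              ).fetchone()
              print(row[0])  # 42
              ```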

              • ramble81@lemm.ee · 2 points · 8 days ago

                Could you do it? Yes, but it’s not something an RDBMS is optimized to do. NoSQL engines are designed to deal with key-value pairs much better than an RDBMS is. Again, best tool for the job.

  • eluvatar@programming.dev · 12 points · 8 days ago

    A place where this type of DB really shines is messaging. For example, Discord uses NoSQL. Each message someone sends is a row, but each message can have reactions made on it by other users. In a SQL database there would be two tables, one for messages and one for reactions with a foreign key to the message. But at the scale of Discord you can’t use a single SQL server, which means you can’t really have two tables and do a join to find the reactions on a message. Obviously you could shard the databases. But in NoSQL you just look up the message and the reactions are stored alongside it, not in another table, which makes the problem simpler.
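
    So the document shape ends up being something like this (a rough pymongo sketch; the collection and fields are invented, not Discord’s actual schema):

    ```python
    from pymongo import MongoClient

    db = MongoClient()["chat"]

    # The message and its reactions live in one document.
    db.messages.insert_one({
        "_id": "msg-1",
        "channel": "general",
        "text": "hello",
        "reactions": [],
    })

    # Adding a reaction is an in-place update, not an insert into a second table.
    db.messages.update_one(
        {"_id": "msg-1"},
        {"$push": {"reactions": {"emoji": "👍", "user": "alice"}}},
    )

    # Fetching the message brings its reactions along with it.
    print(db.messages.find_one({"_id": "msg-1"}))
    ```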

    • Colloidal@programming.dev (OP) · 5 points · 8 days ago

      Right, and you’d never do a search for messages with a particular reaction, so there’s no functionality loss in this use case.

    • FizzyOrange@programming.dev · 3 points · 8 days ago

      It’s not really messaging that’s the differentiator here - it’s scale (specifically write scale). If you can’t have a single master database then sure, you might need NoSQL. But you almost certainly aren’t anywhere near that scale. Giant sites like Stack Overflow and Shopify aren’t.

  • HamsterRage@lemmy.ca · 4 points · 8 days ago

    I spent 30 years working with derivatives of the Pick Operating System and its integrated DBMS, notably Universe and Ultimate. Back in the day, it was very, very difficult to even explain how they worked to others, because the idea of key/value wasn’t commonly understood, at least not the way it is today.

    I was surprised at how similar MongoDB is to Pick in many, many respects. Basically, key/value with variant record structures. MongoDB uses something very close to JSON, while Pick uses variable-length delimited records. In either case, access to a particular record is near instantaneous given the record key, regardless of how large the file is. Back in the 1980s and earlier, this was a huge advantage over most of the RDBMS systems available, as storage was much slower than today. We could implement a system that would otherwise take a huge IBM mainframe, on hardware that cost 1/10 the price.

    From a programming perspective, everything revolves around acquiring and managing keys. Even index files, if you had them (and in the early days we didn’t, so we maintained our own cross-reference files), were just files keyed on some value from inside the records of the main data file. Each record in an index file was just a list of record keys back into the main data file.

    Yes, you can (and we did) nest data that would be multiple tables in an SQL database into a single record. This was something called “Associated Multivalues”. Alternatively, you could store a list of keys to a second file in a single field in the first file. We did both.
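
    In modern terms the record shape is easy to picture. A loose Python sketch of both patterns (the field names are invented):

    ```python
    # An order record with "Associated Multivalues": the parallel lists play
    # the role of what would be a separate order_lines table in SQL.
    order = {
        "customer_key": "CUST0042",           # a key pointing into a second file
        "line_item_codes": ["A100", "B220"],  # associated multivalues...
        "line_item_qtys": [3, 1],             # ...kept in step by position
    }

    # A hand-maintained cross-reference file: each record is just a list of
    # record keys back into the main data file.
    orders_by_customer = {
        "CUST0042": ["ORD1001", "ORD1009"],
    }
    ```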

    One thing that became very time/disk/cpu expensive was traversing an entire file. 99% of the time we were able to architect our systems so that this never happened in day to day processing.

    A lot of stuff we did would horrify programmers used to SQL, but it was just a very different paradigm. Back in a time when storage and computing power were limited and expensive, the systems we built stored otherwise unthinkable amounts of data and accessed it with lightning speed on cheap hardware.

    To this day, the SQL concepts of joins and normalization just seem like a huge waste of space and power to me.

  • jacksilver@lemmy.world · 3 points · 8 days ago

    Part of the issue with comparing SQL vs NoSQL today is that SQL has continued to evolve and has actually taken steps to incorporate NoSQL-like paradigms.

    A good example is JSON support. Initially, if you wanted to store or manage JSON objects, it either lived as plain text in SQL or required a NoSQL database. Now the SQL standard has support for JSON.
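
    For example, Postgres can store JSON natively and even index inside it. A rough sketch via psycopg2 (the connection string, table, and index are placeholders):

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # placeholder connection string
    cur = conn.cursor()

    cur.execute("CREATE TABLE events (id serial PRIMARY KEY, payload jsonb)")
    cur.execute(
        "INSERT INTO events (payload) VALUES (%s::jsonb)",
        ('{"user": "alice", "action": "login"}',),
    )

    # A GIN index on the jsonb column makes containment queries cheap,
    # which used to be squarely NoSQL territory.
    cur.execute("CREATE INDEX ON events USING GIN (payload)")
    cur.execute("SELECT id FROM events WHERE payload @> %s::jsonb", ('{"action": "login"}',))
    print(cur.fetchall())
    ```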

    Similarly, “Big Data” is a space for NoSQL: things like columnar databases were designed for more efficient storage/processing (although columnar indexes can now exist in SQL databases too, I believe).

    Some spaces where NoSQL is still really important are things like graph databases and key-value stores (as others have mentioned). Graph databases require a different query language and backend.
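
    For instance, Neo4j is queried with Cypher rather than SQL. A rough sketch with the official Python driver (the URI, credentials, and data model are made up):

    ```python
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # "Friends of friends" is a join-heavy query in SQL,
    # but a short path pattern in Cypher.
    with driver.session() as session:
        result = session.run(
            "MATCH (p:Person {name: $name})-[:FRIEND]->()-[:FRIEND]->(fof) "
            "RETURN DISTINCT fof.name AS name",
            name="Alice",
        )
        for record in result:
            print(record["name"])
    ```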