In a similar case, the US National Eating Disorder Association laid off its entire helpline staff. Soon after, its replacement chatbot was disabled for giving out harmful information.

  • hesusingthespiritbomb@lemmy.world · +58/-2 · 1 year ago

    So I don’t mean to be racist but as an American, it seems like a ton of Indian startup culture centers around BS claims made for hype.

    Like so does American startup culture, but India takes it to the next level.

    So I wouldn’t use this as an industry barometer.

  • MeanEYE@lemmy.world · +38/-1 · 1 year ago

    That’s one idiot CEO if he thinks AI outperforms humans. It might when giving generic answers to well-known questions. Everything other than that is just vapor.

    • whereisk@lemmy.world · +12 · 1 year ago

      Also, the 3 remaining will be falling over themselves looking for a new job to get away from this toxic mf.

    • ooboontoo@lemmy.world · +6 · 1 year ago

      It might when giving generic answers to well-known questions.

      That’s kind of the point of using the LLM to replace the person reading from the script, right? Moreover, generic answers to well-known questions could make up the bulk of the calls, and you train the LLM to hand off to a real person if it gets stuck. The reality is that the people doing that job were not adding a lot of value over what an LLM can do. So if the people are more expensive and the output is exactly the same as the LLM’s (or the LLM is even better, as this CEO claims), the business will have to lay off the people to stay competitive.

      We should be looking to retool folks that are impacted by technological advancements like this, but for some reason there is little appetite for that.
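
      The hand-off idea above can be sketched as a simple confidence gate. Everything here is hypothetical: `llm_answer` is a made-up stand-in for whatever model call returns an answer plus a confidence score.

```python
# Hypothetical stand-in for a real model call returning (answer, confidence).
# Here it only "knows" one FAQ entry, for illustration.
def llm_answer(question):
    faq = {"reset password": ("Use the 'Forgot password' link on the login page.", 0.95)}
    for key, (answer, confidence) in faq.items():
        if key in question.lower():
            return answer, confidence
    return "", 0.1  # model is unsure

def handle_ticket(question, threshold=0.8):
    """Answer generic questions automatically; escalate the rest to a human."""
    answer, confidence = llm_answer(question)
    if confidence >= threshold:
        return answer
    return "ESCALATE: routing to a human agent"

print(handle_ticket("How do I reset password on my account?"))
print(handle_ticket("My 2019 invoice is wrong and legal is now involved."))
```

      The threshold and confidence scoring are the hard parts in practice; the sketch only shows the routing shape.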

  • atx_aquarian@lemmy.world · +22 · 1 year ago

    Favorite excerpt from the article:

    Twitter user @adityarao310, whose tweet had accumulated more likes than Shah’s own, posted: “Make no mistake. The support team was laid off here because business is failing and funding is dry. Not because of AI.”

  • Neato@kbin.social · +14 · 1 year ago

    “My support costs are $100 a month.”

    Yet you employ 3 support staff? So that’s like $0.80/hr. Even in India the minimum wage is around $1.07/hr, from my quick search.

    • a4ng3l@lemmy.world · +4 · 1 year ago

      Maybe they pay on some metric other than hours. If they only had one admin task to handle, it would make more sense to use shared capacity / a non-dedicated person.

  • Flying Squid@lemmy.world · +13 · 1 year ago

    We use a third party for our website reviews at my work. It’s my job to answer them. There’s an AI I can get to answer for me if I want. The answers are terrible. The most generic responses possible. My responses actually reflect what was said.

  • Jordan Lund@lemmy.one · +11 · 1 year ago

    “Bengaluru-based company”

    Says less about the quality of the chatbot than it does the quality of support. ;)

    90% of your calls are “cancel my service”? Super easy for a chatbot to assist there.

  • ooboontoo@lemmy.world · +12/-3 · 1 year ago

    Whether you believe the 90% claim or not, this is definitely the way things are going for tech support roles and probably many others. The real issue is even if a competitor didn’t want to lay off all that staff they will be forced to do so to remain price competitive. My recommendation for anyone, but especially IT, is to learn how to use LLMs as they will be pervasive before you know it.

    • orclev@lemmy.world · +20/-1 · 1 year ago

      No, they won’t; this is a dumb tech fad, just like blockchain was previously. Every one of these things sucks and is, at a minimum, a liability nightmare. I guarantee this same CEO, or more likely whoever replaces him after the board fires him, will be rehiring support staff after this “genius” move inevitably backfires.

      • deong@lemmy.world · +10/-2 · edited · 1 year ago

        The current AI/LLM hype may or may not be overpromising what it can really deliver, but it’s different from blockchain in that at least what they’re promising to deliver is valuable. Blockchain still has no actual use outside of organized crime and financial speculation. No one actually needs decentralized currencies or NFTs or whatever. It’s all speculators hyping why I should care so that maybe I’ll buy Bitcoin and continue to prop up their investment.

        LLMs actually solve real problems. Answering customer support requests is a thing nearly every company absolutely has to do today, and AI promises to make that faster and cheaper. They promise to make software development more efficient and cheaper. They promise to make communications better. Those are all incredibly valuable promises to be making. It’s a reasonable argument to say it’s all smoke and mirrors and they’ll fail to deliver on that promise, but that’s a different failure mode than NFTs or blockchain stuff where the technology works as advertised, but there’s no actual problem of any value being solved by it.

        • lemmyvore@feddit.nl · +2/-2 · 1 year ago

          Blockchain still has no actual use outside of organized crime and financial speculation.

          Blockchain is excellent for maintaining distributed, unfalsifiable and independently verifiable ledgers. It’s a really useful tool for any industry that needs to maintain standards, procedures and certifications.

          • deong@lemmy.world · +6 · 1 year ago

            But the fact that virtually none of those organizations use it tells us a lot. Companies that actually need to maintain standards and procedures do it by putting “Important Procedure v3.1-FINAL-FINAL.docx” in Sharepoint. Could they build something on a blockchain that would have nice properties? Sure. But they don’t actually care about those properties really.

            • lemmyvore@feddit.nl · +1 · 1 year ago

              They do care, but it’s difficult to switch. The smaller companies with simpler processes could switch more easily, but they don’t have the resources. The larger companies have the resources, but they have complex processes that take time to digitize and unify.

              Assuming you arrive at a blockchain solution, it still needs to pass complex certifications, usually done by state organizations, which move very slowly.

              It’s a very lucrative niche, and those who put together turn-key solutions and manage to obtain the relevant certifications make a ton of money off it.

              • Aceticon@lemmy.world · +2 · edited · 1 year ago

                I’ve worked in some pretty massive companies, as well as tiny ones and everything in between.

                They don’t need the “distributed” part at all, because you don’t want every ignorant Jane and Joe working on it; you want only the people who actually know that domain and who have been selected exactly for that knowledge and those responsibilities, which is a small group that can organise itself. The “independent” part comes from the people who make the things not being the same people as those who check the things (and companies that don’t segregate creation from validation have no will for truly “independent” review, hence won’t care about that feature). And the “unfalsifiable” part can be handled much more easily by cryptographic hashes with timestamps or, far more commonly, by automatic backups of documents whenever they’re changed, or people pulling old docs out of e-mails and pointing out the part that was different.

                To make standards, participants have to be chosen, because if any rando gets to participate the whole thing will be WAY more noise than genuine quality content. With that comes organisation and all sorts of mechanisms by which the actual contents are validated: not the “who did it and when” that blockchain provides (it’s a select group, so that’s easy to find out) but “is it any good” and “does it make sense”, which blockchain does NOT provide or help with at all.

                I’ve worked in tech for two decades, and what you wrote reeks of “solution looking for a problem”, since it doesn’t address the genuine bottlenecks in that process; it just covers minor or irrelevant features that may or may not be used in it.

                Also, I get the impression you have no idea how people actually create standards in companies large enough that the number of people involved is more than just one or two.

              • magic_lobster_party@kbin.social · +4 · edited · 1 year ago

                I don’t see how blockchain could help here. Blockchain might be useful for data that strictly needs one true chronological ordering, like monetary transactions. That was the final piece blockchain solved to allow the creation of Bitcoin; all the other parts of blockchain had been solved before.

                Let’s go through all ALCOA points one by one.

                • Attributable: can be solved with public key cryptography without blockchain
                • Legible: as long as the record follows some accepted standard, it’s readable. Blockchain is optional.
                • Contemporaneously Recorded: a time stamp authority can sign the document (blockchain was originally called a time stamp server by Satoshi).
                • Original or a True Copy: not sure what constitutes a true copy, but this can probably be ensured with cryptography along with a time stamp authority (just like blockchain).
                • Accurate: blockchain cannot assure the accuracy of a record more than any other digital system. This is the oracle problem.
                • Permanent: maybe this is a point for blockchain, but how permanent is a blockchain really? Many blockchains have died, so there’s no guarantee for it being permanent.
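
                For what it’s worth, the hash-plus-timestamp alternative mentioned above is tiny to implement. This is only a sketch with made-up field names, standing in for whatever audit system a company actually runs:

```python
import hashlib

# Sketch of an append-only audit log using plain cryptographic hashes:
# attributable (author), contemporaneous (timestamp), tamper-evident (sha256).
def record_version(log, doc_bytes, author, timestamp):
    entry = {
        "author": author,
        "timestamp": timestamp,
        "sha256": hashlib.sha256(doc_bytes).hexdigest(),
    }
    log.append(entry)
    return entry

def verify(entry, doc_bytes):
    """Check that a document still matches its recorded hash."""
    return entry["sha256"] == hashlib.sha256(doc_bytes).hexdigest()

log = []
entry = record_version(log, b"Important Procedure v3.1", "alice", "2023-06-01T12:00:00Z")
print(verify(entry, b"Important Procedure v3.1"))  # the original document
print(verify(entry, b"Important Procedure v3.2"))  # a tampered copy
```

                In real use you’d have the timestamp signed by a trusted authority rather than self-reported, but no blockchain is needed for any of the ALCOA points above.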
          • ooboontoo@lemmy.world · +3 · 1 year ago

            I hear you, but I can’t help but feel like blockchain is a solution looking for a problem most of the time. Is it super helpful in some very narrow niches… Sure. But go back a few years and people were saying it was going to be everywhere and clearly it’s not. I think LLMs will have many more uses than blockchain ever will.

      • magic_lobster_party@kbin.social · +3 · 1 year ago

        LLMs have the potential to do so much more. They’re actually capable of doing significant work.

        I see them as a kind of generic text processor.

        For example, say you got 1000s of news articles in different languages (like English, Chinese, Thai, etc), and you want to find articles mentioning Liverpool football club. You also want a brief summary describing the context LFC is mentioned, and whether it’s positive or negative. It should write the output in a CSV format so it’s easy to view it in Excel.

        This can easily be done with LLMs. It just requires a pipeline to feed the LLM lots of news articles. Sure, it will make some errors, but this is something that previously required a skilled natural-language-processing engineer.
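
        A sketch of that pipeline, with a stubbed `query_llm` standing in for any real completion API (the prompt format and field names are made up for illustration):

```python
import csv
import io

# Stub standing in for a real LLM API call; a real client would send the
# instruction and article text to a model and return its reply. The stub
# only matches the English word "Liverpool" -- multilingual matching is
# exactly what a real LLM would add.
def query_llm(instruction, article_text):
    if "Liverpool" in article_text:
        return "mentions=yes; summary=Match report on LFC; sentiment=positive"
    return "mentions=no"

def articles_to_csv(articles):
    """Ask the LLM about each article and collect hits as CSV rows."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["article_id", "summary", "sentiment"])
    instruction = ("Does this article mention Liverpool FC? If yes, reply "
                   "'mentions=yes; summary=...; sentiment=...'")
    for article_id, text in articles:
        reply = query_llm(instruction, text)
        if reply.startswith("mentions=yes"):
            fields = dict(part.split("=", 1) for part in reply.split("; "))
            writer.writerow([article_id, fields["summary"], fields["sentiment"]])
    return out.getvalue()

articles = [
    ("en-1", "Liverpool won 3-0 at Anfield last night."),
    ("th-1", "An unrelated article about local politics."),
]
print(articles_to_csv(articles))
```

        The CSV output opens directly in Excel, as described above; error handling for malformed model replies is left out of the sketch.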

    • MostlyBirds@lemmy.world · +14/-5 · 1 year ago

      My recommendation for anyone, but especially IT, is to learn how to use LLMs as they will be pervasive before you know it.

      Better idea: stop doing business with companies that do this.

        • MostlyBirds@lemmy.world · +3/-1 · 1 year ago

          You’re right, I guess we just shouldn’t try. We should probably just let corporations do whatever they want with no meaningful pushback and keep making our lives worse.

      • mikkL@lemmy.world · +2/-2 · edited · 1 year ago

        Why? I am not condoning the 90% decision, but why not start figuring out ways to implement and improve whatever you are working with? There is a lot to figure out about AI: what to use it for, and what not to use it for. But total abstinence is like choosing not to use the internet to search for knowledge that could be useful for your business or work.

        We should be learning - not ignoring.

        • MostlyBirds@lemmy.world · +8 · edited · 1 year ago

          Why should the customer have to hand-hold shitty corporations while we pretend, against all past evidence, that they’re ever going to learn to do better, when “doing better” isn’t their goal?

          It’s not our job to fix bad business practices by rewarding them with continued business while they lie about “figuring things out.” It’s our job to condemn them to failure and bankruptcy for their atrocious anti-human behavior, and we’re failing miserably at it due to our toxic collective preference for short-term convenience over quality and long-term societal and economic health.

          • li10@feddit.uk · +5 · 1 year ago

            It is going to happen.

            There’s no “accepting it”, it is simply inevitable.

            You can’t just decline to be fired and replaced by an LLM…

            • MostlyBirds@lemmy.world · +1/-1 · 1 year ago

              I’m not talking about as a worker, but as a consumer. You do not have to give your money to these places. If you value convenience and familiarity over customer support and workers’ rights, the problem is with your position, not mine.

    • Semi-Hemi-Demigod@kbin.social · +8 · 1 year ago

      The best analogy I’ve heard for LLMs is that they’re word calculators. You can give it a prompt or ask a question and it will spit out an answer. But, like a calculator, if you don’t know what you’re doing you won’t know the answer is wrong.

      I’ve found it really useful for learning new technologies like Terraform and Ansible, because it takes a lot less time than reading documentation or StackOverflow threads. But you need a way to independently verify the responses before it can be truly useful.
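
      One lightweight way to do that independent verification: treat the model’s output as untrusted and run your own spot-checks on it before relying on it. Both `ask_llm` and the `slugify` task here are made up for illustration.

```python
# Stub standing in for a real model call that returns candidate code.
def ask_llm(prompt):
    return "def slugify(s):\n    return s.strip().lower().replace(' ', '-')"

def passes_spot_checks(code, checks):
    """Execute candidate code and verify it against known answers."""
    namespace = {}
    exec(code, namespace)  # only do this with sandboxing in real use
    fn = namespace["slugify"]
    return all(fn(arg) == expected for arg, expected in checks)

candidate = ask_llm("Write a Python function slugify(s).")
print(passes_spot_checks(candidate, [("Hello World", "hello-world"), ("  A B ", "a-b")]))
```

      The same idea applies to Terraform or Ansible: run `terraform validate` or `ansible-lint` on whatever the model suggests before using it.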

    • lemmyvore@feddit.nl · +2 · 1 year ago

      My recommendation for anyone, but especially IT, is to learn how to use LLMs as they will be pervasive before you know it.

      This is overreaching a bit. It’s like saying “invest in green energy”. Sure, it’s an up and coming niche which is going to see lots of growth in the near future, but that doesn’t necessarily mean that every single company should get involved.

      LLMs will probably take over a certain type of menial digital task and improve a whole range of them. But like any technology, they will be incorporated into products and services where they make sense, so they will eventually reach the relevant users naturally.

      There’s also a big difference between investing in an LLM owned and operated by another company versus growing your own. The former is seductive and the latter is harder than it seems, so there’s a tendency to go with the former. But that means giving those companies your data and money in exchange for unverifiable results. Is that a value proposition for your company? As we’ve been learning over and over again, not owning your data and not controlling your platform ends up badly.

      • ooboontoo@lemmy.world · +2 · 1 year ago

        I really don’t think it’s an overreach. In the same way that there used to be “IT companies” and now basically every company uses IT to do something, I believe AI/ML will continue to grow. It won’t be “we are an AI company”; it will be “every company uses different kinds of AI everywhere to do things.”

        • lemmyvore@feddit.nl · +2 · 1 year ago

          I think that also means it will get absorbed by “IT” and become another technology we take for granted.

          The whole ML hype reminds me of the time when Google Search was amazing and everybody was hyped about it. Companies would flock to buy search black boxes from Google and put them on their own private networks so they could index everything and give them their own little “google search” for their intranet. And of course those boxes phoned home to Google but people were so smitten with the technology they didn’t care.

          Nowadays search technology is a dime a dozen. Google has ruined its Search with ads and by chasing perfect relevance to the brink of nonsense, and has also come out as one of the largest data predators in the world. Nobody would be dumb enough to trust a Google black box on their intranet today, right? But giving all your data to an LLM company? Sign me up!

  • snoozeflu@lemmy.world · +3/-6 · 1 year ago

    I know that for myself, I would appreciate having my issue resolved in 2 minutes, as opposed to being put on hold for 2 hours by some sluggo human putting in minimal effort working from home.

    • answersplease77@lemmy.world · +3/-1 · 1 year ago

      I disagree. Personally, not once in my whole life, when reaching support for buying, reserving, trading, returning, recovering, or finishing any deal, has an AI’s generic, irrelevant response helped me. I had to have a real human step in to do it. Second, I can’t blame an AI if it makes a mistake; the mistake would then be mine. Support employees, by contrast, are held accountable, and the communication records are there. This happened to me and cost me thousands buying, reserving, and trading online.