Dukaan founder and CEO Suumit Shah revealed that 90% of the company’s support staff has been laid off after the introduction of an AI chatbot to answer customer support queries.

  • june@lemmy.world · 1 year ago

    The big problem with wanting to use AI (which these days generally means an LLM) is that it lacks real creativity. If a problem isn’t documented, the AI won’t know what to do with a particularly difficult support request, or it will give wrong answers altogether. My time in CS for tech taught me that the number of novel resolutions is far, far greater than most people realize.
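
    To make this concrete, here’s a toy sketch - the problem list, threshold, and string matching are all invented, and a real bot would use embeddings and an LLM, but the failure mode is the same. The bot can only answer what its corpus covers:

        from difflib import SequenceMatcher

        # Hypothetical corpus of documented problems and fixes.
        DOCUMENTED = {
            "app crashes on startup": "Clear the app cache and reinstall.",
            "payment was declined": "Re-enter the card details and retry.",
        }
        THRESHOLD = 0.6  # below this, the problem is effectively undocumented

        def answer(query: str) -> str:
            best_score, best_fix = 0.0, None
            for problem, fix in DOCUMENTED.items():
                score = SequenceMatcher(None, query.lower(), problem).ratio()
                if score > best_score:
                    best_score, best_fix = score, fix
            if best_score >= THRESHOLD:
                return best_fix
            # A novel resolution is needed; answering anyway is how you
            # get confident nonsense.
            return "ESCALATE: no documented solution matches"

    The dangerous case isn’t the explicit escalation - it’s a near-miss that clears the threshold and returns the wrong documented fix with total confidence.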

    • James@lemmy.ca · 1 year ago

      That’s all a tier 1 help desk ever does anyway.

      In my experience, they know less about the product than I do when I try to get support on it.

      • Pika@lemmy.world · 1 year ago

        It’s so true… I feel bad about it, but half the time T1 is just rehashing the power-cycle-and-try-again checklist.

    • jsveiga@sh.itjust.works · 1 year ago

      Yes, but AI can handle all the common issues - the ones for which there’s a script that first level has to follow anyway. If it forwards to a human when it hits a dead end - and snoops on that conversation to learn from it - then fewer humans are needed (rough sketch at the end of this comment).

      Some years ago I opened an issue with Google Pay through the app feedback option.

      A CS rep messaged me in less than 5 minutes. She was so thorough, and her texts looked so scripted, that I had to ask “I apologize in advance, but… are you a bot or a human?”, because I could be much more concise and less patient in my answers if it was a bot. She solved my issue very efficiently, btw, and thought it was quite funny that I asked.
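
      What I mean, as a rough sketch - the script entries and names are made up, and “snooping” here is just logging the human’s resolution so the script can be extended later:

          # Script-following bot with a human handoff at dead ends.
          SCRIPT = {
              "reset password": "Use the 'Forgot password' link on the login page.",
              "order not delivered": "Check the tracking link in your confirmation email.",
          }
          training_log = []  # human-resolved transcripts, fuel for extending the script

          def handle(query: str, human_agent) -> str:
              canned = SCRIPT.get(query.lower())
              if canned is not None:
                  return canned  # common, documented issue: the script covers it
              # Dead end: forward to a human and keep the exchange so the
              # script can grow from real resolutions.
              resolution = human_agent(query)
              training_log.append({"query": query, "resolution": resolution})
              return resolution

          # e.g. handle("my order never arrived", human_agent=lambda q: "Refund issued.")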

    • Hypersapien@lemmy.world · 1 year ago

      First level support staff generally aren’t allowed to have creativity. They just follow the script and then pass the problem up when it’s something they can’t handle.

    • Phyrin@lemmy.world · 1 year ago

      I would agree; this was my first thought.

      Though if the product is sufficiently defined and bounded, it might make sense. Think of a support line for a fridge, an oven, or other less open-ended products. Unbounded spaces like general-purpose computer support will initially struggle while documentation is built up.
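
      As a sketch of why boundedness helps - a hypothetical fridge support line, with the issue list and wording invented - a closed set of known issues makes “out of scope” easy to detect, so the bot can refuse instead of guessing:

          # Closed intent set for a bounded product.
          FRIDGE_ISSUES = {
              "not cooling": "Check the thermostat setting and clear the rear vents.",
              "ice maker jammed": "Empty the ice bin and run the dispenser briefly.",
              "water leaking": "Inspect the drain pan and the door seal.",
          }

          def fridge_support(query: str) -> str:
              q = query.lower()
              for issue, fix in FRIDGE_ISSUES.items():
                  if issue in q:
                      return fix
              # Out of scope is cheap to detect when the scope is small;
              # general-purpose computer support has no such closed list.
              return "Unknown issue for this model; booking a technician."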

    • bill_1992@lemmy.world · 1 year ago

      Wouldn’t any automated system ideally escalate to the next tier of (human) support when it detects something complicated?

      Though I agree with you, I don’t think LLMs are “lay off 90% of the staff” good.

      • Balder@lemmy.world · 1 year ago

        Wouldn’t any automated system ideally escalate to the next tier of (human) support when it detects something complicated?

        In my experience, this never happens. Since they now have very few human staff, they make it VERY difficult to talk to a human, to the point that you often give up.

      • LordOfTheChia@lemmy.world · 1 year ago

        Wouldn’t any automated system ideally escalate to the next tier of (human) support when it detects something complicated?

        Why escalate when you can hallucinate!

    • swrdghcnqstdr@lemmy.ml · 1 year ago

      Oftentimes it will get even documented solutions wrong. This is an example of the same type of concept implemented on MDN.

  • Tatters@feddit.uk · 1 year ago

    I phoned a customer helpline yesterday and it was answered immediately, by a soothing, male voice, which talked me through various options. The speech recognition of my responses was accurate, its side of the conversation was natural if a bit stilted, and it was able to answer all my questions. Scarily impressive.

    • BorgDrone@lemmy.one · 1 year ago

      This is great news. Working for a helpdesk is hell. The fewer people who have to do it, the better.

      The problem is not the loss of jobs, we should strive to eliminate as many jobs as possible. The problem is the link between labor and income.

      • Tatters@feddit.uk · 1 year ago

        “we should strive to eliminate as many jobs as possible.”

        Which will make assimilation into the Collective all the easier. We know your kind.

  • code@lemmy.mayes.io
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    My first thought is that the latest AI bots will destroy first-level customer support jobs.