It’s almost like LLMs aren’t the solution to literally everything like companies keep trying to tell us they are. Weird.
I honestly can’t wait for this to blow up in a company’s face in a very catastrophic way.
Already has - Air Canada was held liable for their AI chatbot giving wrong information that a guy relied on when buying bereavement tickets. They tried to claim they weren’t responsible for what it said, but the tribunal found otherwise. They had to pay damages.
That’s not catastrophic yet. It only cost them the money that would otherwise have been margin on top of a low-priced ticket.
AI is basically like early access games, except the entirety of big tech is racing to roll it out to as many people as possible first.
Hah, remember when games and software used to be tested to ensure they would function correctly before release?
At least with Early Access games you know it’s in development.
What has it been, nearly a decade now of just expecting everything to be broken on launch?
It’s like computer game box art in the 80s. The game might be fun, but it really looks like PONG. It doesn’t look at all like the fantasy art they had painted for the box.
AI can be a great tool for business. It can help you think, work, and produce a higher-quality product. But people don’t understand its limitations, or that its success depends very much on the user and on how the model was trained.