  • I like that the writer considered climate change. It's been one of the biggest global issues for a long time. I hope the use of sustainable energy keeps growing in the coming years, not just for data centers but across the whole tech world.

    I think a digital waiter doesn't need a rendered human face. We already have food-ordering kiosks, which aren't AI, and they suffice. A self-checkout kiosk at a grocery store doesn't need a face either.

    I think “client help” is one area where AI can at least assist. Imagine a firm that has been operating for decades and has encountered every kind of client complaint. It can feed all that data to a large language model. With the model responding to most client complaints, the firm can reduce the number of its client support people. The model handles the easy and medium complaints and passes the ones that are too complex, or that it doesn't know how to address, to the support staff (see the sketch at the end of this comment).

    Idk whether the government or the public should stop AI from taking human jobs or let it happen. I'm torn. Optimistically, displaced workers can find new jobs. But imagine that at least one person gets fired and can't find a new job. They'll be jobless for months, with an epic headache over how to pay next month's bills.
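
    To make the triage idea above concrete, here's a minimal sketch in Python. Everything in it is an assumption for illustration: the `ask_llm()` wrapper, the `ESCALATE` sentinel, and the routing helper are hypothetical, not any real provider's API.

    ```python
    # Minimal sketch of LLM-based complaint triage, under the assumptions
    # stated above. ask_llm() is a hypothetical stand-in for a real LLM
    # API call (e.g., one grounded in the firm's historical complaint data).
    from dataclasses import dataclass

    ESCALATE = "ESCALATE"  # sentinel the model returns when it is unsure


    @dataclass
    class Complaint:
        customer_id: str
        text: str


    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in; always escalates here so the sketch
        # runs without a real model behind it.
        return ESCALATE


    def route_to_human(complaint: Complaint) -> str:
        # Placeholder for pushing the ticket onto the human support queue.
        return f"Ticket opened for {complaint.customer_id}; an agent will follow up."


    def triage(complaint: Complaint) -> str:
        prompt = (
            "You are a support assistant. Answer the complaint below. "
            f"If it is too complex or unfamiliar, reply only with {ESCALATE}.\n\n"
            f"Complaint: {complaint.text}"
        )
        reply = ask_llm(prompt)
        if reply.strip() == ESCALATE:
            # Hard or unfamiliar cases go to the human support staff.
            return route_to_human(complaint)
        # Easy and medium cases are handled by the model itself.
        return reply


    print(triage(Complaint("c-42", "My order arrived damaged twice.")))
    ```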

  • There were at least two times in Game 4 when Brown got past his defender and a help defender was waiting. I thought it was the right defensive play; without the help defense, it would have been an easy layup for Brown.

    On another note, I guess one of the things the Pacers tried to prevent was the Celtics' shooters getting hot. It's hard to beat the Celtics when they're raining threes. I googled it: they have the most 3-pointers made in the 2024 playoffs, and the Pacers rank 4th.

  • The article is too long for me. Two of its main ideas are “Everyone using large language models should be aware of AI hallucination and be careful when asking those models for facts” and “Firms that develop large language models shouldn't downplay the hallucination and shouldn't force AI into every corner of tech.”

    There was already so much misinformation on the Web before ChatGPT 3.5, and there still is; we don't need hallucination making the situation worse. We need a reliable source of facts. Optimistically, Google, OpenAI, or Anthropic will find a way to reduce or eradicate the hallucination. The Google CEO said they were making progress. Maybe that's true, or maybe it's a generic PR line so folks would stop asking about the hallucination.