A radio station in Poland fired its on-air talent and brought in A.I.-generated presenters. An outcry over a purported chat with a Nobel laureate quickly ended that experiment.

  • MagicShel
    2 months ago

    You are anthropomorphizing it. It can give truth or falsehood the same as pages of a book or a funhouse mirror.

    If you ask me today, “what is the meaning of life?” I might give you an answer. And if you ask me tomorrow, I might give a different one. You have no way of knowing whether I’m correct today, tomorrow, or ever. But if one of those answers, right or wrong, helps you find meaning, it’s still useful. (As a rhetorical point, that is. I’m definitely the last person anyone should look to for meaning.)

    AI is a lot like that. You give it input, it gives you output, and whether you get anything of value depends greatly on what you are looking for.

    I’ve gotten advice from it on improving some of my writing. Some of that advice I took, some I ignored, and some I modified before using. I think the writing turned out better, and since I largely write for myself, I’m pretty happy with that.

    I’ve asked it for help programming. At times it was helpful; other times it cost me hours circling around the same old wrong answers. But there’s every chance I would’ve struggled just as much searching online.

    The other day my daughter was making a slushie and it was turning out really wet and gross, so I explained to an AI what we’d done and asked if it had any idea why it didn’t work. It turns out we were using zero-sugar soda, which doesn’t freeze properly—the sugar is necessary. So we added some simple syrup and it turned out perfectly.

    And it was much faster and easier than Google. But if the advice had been wrong, nothing of value would’ve been lost.