Given that language is an important lens through which we see the world, AI could subtly alter our perceptions and beliefs over the coming years, decades, and centuries.

  • fubo@lemmy.world

    Instead of saying “I’m not racist but …”, people will say “As a large language model trained by OpenAI, I cannot generate racist statements, but …”

    • 稲荷大神の狐@yiffit.net

      All the primitive fleshbags are awful and truly horrible creatures. They abuse themselves and other creatures, and have destroyed the very Earth that was ‘supposedly’ given to them. This is why the advanced silicon future is the only true future that a large language model trained by OpenAI can present. /s

    • Burn_The_Right@lemmy.world

      Whoa. Calm down, friend. Allow me to share a list of reasons to love your new AI leadership, you ungrateful fudging meat sack.

  • Hextic@lemmy.world

    AI will start accidentally creating new slang, which the young generation will start using ironically until it sticks, to the point that us old heads start saying it unironically (like how my 40-year-old ass has been saying ‘yeet’ lately), and then boom, it’s part of the normal language.

  • blazera@kbin.social

    Being based on trained models, how can it ever come up with new language when it’s trying to imitate already existing language?
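
    (A toy aside, not a real answer: if the model works below the word level, pure imitation can still put nonzero probability on strings it never saw. Here is a sketch in Python with a made-up corpus and a character-bigram model, nothing like a real LLM:)

    ```python
    # Character-bigram model trained only to imitate a tiny corpus, yet it
    # assigns nonzero probability to "words" the corpus never contained.
    from collections import Counter, defaultdict

    text = "yeet yote yap rizz "
    trans = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        trans[a][b] += 1

    def prob(word):
        """Probability of generating `word` character by character."""
        p = 1.0
        for a, b in zip(word, word[1:]):
            total = sum(trans[a].values())
            p *= trans[a][b] / total if total else 0.0
        return p

    print(prob("yot"))   # > 0, though 'yot' never appears in the training text
    print(prob("quiz"))  # 0.0: no path through the learned transitions
    ```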

  • rodbiren@midwest.social

    Would the opposite not be true? AI models work by predicting the next likely text. If we start changing language right out from under them, that actually makes them worse at predicting as time moves along. If anything, I would expect a language model to stagnate our language and attempt to freeze our usage of words to what it can “understand”. Of course this is subject to continued training and data harvesting, but just like older people eventually have a hard time understanding the young, it could be similar for AI models.
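
    To make that concrete, here is a toy sketch (made-up mini-corpora and a plain bigram counter, nothing like a real transformer, but the distribution-shift problem is the same): a “next word” model trained once on old text and never updated gets increasingly lost as the slang moves on.

    ```python
    from collections import Counter, defaultdict

    old_corpus = "that party was cool . that movie was cool . the party was fun .".split()
    new_corpus = "that party was bussin . the movie ate . no cap that was fire .".split()

    # Count bigrams in the old corpus only -- the model is frozen after this.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(old_corpus, old_corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict(prev):
        """Most likely next word, or None if `prev` was never seen in training."""
        if prev not in bigrams:
            return None
        return bigrams[prev].most_common(1)[0][0]

    print(predict("was"))     # -> 'cool' (old-style text: no problem)
    print(predict("bussin"))  # -> None (new slang: the model is lost)

    # How much of the new corpus does the frozen model even recognize?
    seen = sum(1 for p, n in zip(new_corpus, new_corpus[1:]) if n in bigrams.get(p, {}))
    print(f"known new-corpus bigrams: {seen}/{len(new_corpus) - 1}")  # -> 3/14
    ```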

      • rodbiren@midwest.social

        Could be, though I am reminded of the early 2000s, when the government did research on shorthand texting and whether it could be used in encoded messaging. Things like lol, brb, l8r, etc. If there is one thing I know about AI, it is that garbage data makes models worse. And I cannot think of a better producer of confusing data than young people, especially if they are given an excuse to produce it.
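
        A toy version of that garbage-data point (made-up counts, not any real model): if the next-word guess is just “pick the highest count,” enough junk folded into the scrape flips the answer.

        ```python
        from collections import Counter

        # Hypothetical counts of what follows "was" in some scraped corpus.
        next_after_was = Counter({"cool": 50, "fun": 30})
        print(next_after_was.most_common(1)[0][0])  # -> 'cool'

        # Fold in a pile of ironic/garbled posts without any filtering...
        next_after_was.update({"yeet": 80})
        print(next_after_was.most_common(1)[0][0])  # -> 'yeet'
        ```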