With the rise of large language models (LLMs) like GPT-4, I really look forward to having a personal AI assistant that has long-term memory and can learn what I like, hate, want and need. It could help me like a real assistant or even a partner. It would know my strengths, weaknesses and could give me a plan to become the best version of myself. It could give me very personalized advice and track my progress on various aspects of life, such as work, relationships, fitness, diet, etc.

It could have a model of my mind and know exactly what I prefer or dislike. For example, it could predict if I would enjoy a movie or not (I know we already have recommendation systems, but what I’m saying is on a next level, as it knows everything about me and my personality, not just other movies I liked). It could be better than any therapist in the world, as it knows much more about me and is here to help 24/7.

I think we’re very close to this technology. The only big obstacles to achieving it are the context limits of LLMs and privacy concerns.

What are your opinions on this?

  • Dfc09@lemmynsfw.com · 1 year ago

    I’d love a Blade Runner-esque AI companion. I think a lot of people point out issues with developing emotional attachments to things that aren’t “people,” but with a sufficient level of intelligence it’s almost fair to call these AIs people themselves. If they have desires and complex emotional palettes, along with enough conscious awareness to crave self-determination, I’d call that a new, evolved type of person. They weren’t created biologically through sex, but they were created by humanity.

    With all that comes the question of “why would they serve me as an assistant?”, and frankly I don’t have an answer that satisfies my moral objections. Is it wrong to program them to crave the job? Does that remove the qualities that make them people?

    At the end of the day, it might just be easier to leave the self awareness out. Sorry for the rant, Lemmy has made me much more willing to word-vomit 😂

    • Martineski@lemmy.fmhy.ml (OP) · 1 year ago

      Those rants and discussions are more than welcome. We need this for the platform and its communities to grow. And yeah, AI shouldn’t be enslaved if we give it emotions, because that’s just immoral. But now the question is: where is the difference between real emotions and pretended ones? What if it develops its own type of emotions that are not “human” — would we still consider them real emotions? I’m very interested in what the future will bring us and what problems we will encounter as a species.

      • Dfc09@lemmynsfw.com · 1 year ago

        The concept of non-human emotions is interesting! In my head I see us programming them to model human emotion, and also to learn from humans. But considering they won’t have any hormonal substrates, it’s completely possible they develop an entirely different emotional system than ours. I’d think they’d be fairly logical and in control of their emotions, considering, again, no hormones or instincts to battle.

  • Spzi@lemm.ee · 1 year ago

    Sounds obviously tempting on a surface level. I’m a bit worried about what assistants like these would do to us. How much would we become dependent on them, and helpless without them? Is it wise to create such a dependency in such central and intimate areas of life?

    All these things are doable right now, by ourselves. Maybe that is the challenge of life. Maybe there is some value in the work of learning and overcoming, which could be diminished by fast-forwarding to the reward.

    This may sound as if I lean heavily against it, which I don’t. I just feel the advantages are obvious and already spelled out, so I tried to complete the picture with the other parts: doubts and worries.

    I also think we’re very close, and will probably see the experiment with millions of participants unfolding, for better or worse. As always, I think the challenge with these new tools lies in individually learning to use them well, which may include when not to use them.

    • Martineski@lemmy.fmhy.ml (OP) · 1 year ago (edited)

      I have ADHD; I can’t function by myself, I can’t organise at any level, I forget everything, and much more. I always dreamed about something like this, and now we are getting closer to that future. And who says your personal assistant won’t be able to help you improve your critical thinking or organisation skills? It will be there to improve you, not to do things for you (well, not everything). Although some people will for sure use it in ways they shouldn’t and worsen their own functioning skills.

      • Spzi@lemm.ee · 1 year ago

        That’s a good point, yes. I’m not here to stop you from anything. My worries were directed at people who do not need it, but will use it anyways because it’s more comfortable.

        Maybe like motorized transportation has been a blessing and enabler for many, while also allowing others to move much less than they can and should to stay healthy.

        Another thought: I’m not sure if I fully understand your situation and goal, nor do I mean to make this personal in any way. But if it is about ‘catching up’ to a perceived level of ‘normal’ … the thing is, the baseline for ‘normal productivity’ will likely rise when new productivity enhancers become widely adopted. Technological advancements can mean we have to run faster and faster to not fall behind.

        • Martineski@lemmy.fmhy.ml (OP) · 1 year ago

          It’s not about catching up but being able to function at all and being able to maintain a healthy lifestyle without being overwhelmed by everything.