• loobkoob@kbin.social · 1 year ago

      I don’t think AI will be a fad in the same way blockchain/crypto-currency was. I certainly think there’s somewhat of a hype bubble surrounding AI, though - it’s the hot new buzzword that a lot of companies are mentioning to bring investors on board. “We’re planning to use some kind of AI in some way in the future (but we don’t know how yet). Make cheques out to ________ please.”

      I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a “solution looking for a problem”. Like, I’m a fairly normal person and I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc. Whereas I’ve never touched anything to do with crypto.

      AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don’t see AI going away.

      • bionicjoey@lemmy.ca · 1 year ago

        The thing is, AI has been around for a really long time and has lots of established use-cases. Unfortunately, none of them are to do with generative language/image models. AI is mainly used for classifying data as part of data science. But data science is extremely unsexy to the average person, so for them AI has become synonymous with the ChatGPTs and DALLEs of the world.

        • Ephera@lemmy.ml · 1 year ago

          Yeah, so far we’ve had discriminative AI (takes complex input, gives simple output).
          Now we have generative AI (takes simple input, gives complex output).

          I imagine the discussion above is about generative AI…
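
          To make the “complex input, simple output” idea concrete, here’s a tiny sketch of the sort of discriminative model data scientists have been using for years (the dataset and model choice are just illustrative):

          ```python
          # Discriminative AI in miniature: many feature columns in, one label out.
          from sklearn.datasets import load_iris
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split

          X, y = load_iris(return_X_y=True)  # complex input: 4 measurements per flower
          X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

          clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
          print(clf.predict(X_test[:5]))     # simple output: one class label per sample
          print(f"accuracy: {clf.score(X_test, y_test):.2f}")
          ```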

      • rsuri@lemmy.world · 1 year ago

        I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc.

        I’d caution against using it for these things due to its tendency to make stuff up. I’ve tried using ChatGPT for both, but in my experience, if I can’t find something on Google myself, ChatGPT will claim to know the answer but give me something that just isn’t true. For coding it can do basic things, but if I want to use a library or do some other more granular task, it’ll do something like make up a function call that doesn’t exist. The worst part is that it looks right, so I used to waste time trying to figure out why it didn’t work for me, when it turned out it didn’t work for anybody.

        For factual information, I had to correct a friend who gave me fake stats on airline reliability to help me make a flight choice - he got them from GPT-4, and while the numbers looked right, they didn’t match the info I found elsewhere. In general, you never want to trust any specific numbers from LLMs, because they’re trained to look right rather than to actually be right.
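
        As a concrete (made-up) example of the kind of check I mean: before trusting a call the model hands you, it’s worth confirming the library actually exposes it. Something as simple as this catches a lot of hallucinated APIs - the “dump_pretty” name below is invented on purpose:

        ```python
        # Quick sanity check for an LLM-suggested call: does the attribute exist and is it callable?
        import importlib

        def llm_call_exists(module_name: str, attr_name: str) -> bool:
            """Return True only if module_name really exposes a callable attr_name."""
            try:
                module = importlib.import_module(module_name)
            except ImportError:
                return False
            return callable(getattr(module, attr_name, None))

        print(llm_call_exists("json", "dumps"))        # True: real function
        print(llm_call_exists("json", "dump_pretty"))  # False: plausible-looking, but made up
        ```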

        For me, LLMs have proven most useful for things like brainstorming or coming up with an image I can use for illustration purposes, because those things don’t need to be exactly right.

        • loobkoob@kbin.social · 1 year ago

          I agree completely. I think AI can be a valuable tool if you use it correctly, but it requires you to be able to prompt it properly, to use its output in the right way, and to know what it’s good at and what it’s not. Like you said, for things like brainstorming or looking for inspiration, it’s great. And while its artistic output is very derivative - both because it’s literally derived from all the art it’s been trained on and because there’s enough other AI art out there that it doesn’t really have a unique “voice” most of the time - you could easily use it as a foundation to create your own art.

          To expand on asking it questions: the kind of questions I find it useful for are ones like “what are some reasons why people may do x?” or “what are some of the differences between y and z?”. Or an actual question I asked ChatGPT a couple of months ago, based on a conversation I’d been having with a few people: “what is an example of a font I could use that looks somewhat professional but that would make readers feel slightly uncomfortable?” (After a little back and forth, it ended up suggesting a perfect font.)

          Basically, it’s good for divergent questions, evaluative questions, inferential questions, etc. - open-ended questions - where you can either use its response to simulate asking a variety of people (or to save yourself from looking through old AskReddit and Quora posts…) or just to get different ideas to consider, and it’s good for suggestions. And then, of course, you decide which answers are useful/appropriate. I definitely wouldn’t take anything “factual” it says as correct, although it can be good for giving you additional things to look into.

          As for writing code: I’ve only used it for simple-ish scripts so far. I can’t write code, but I’m just about knowledgeable enough to read code to see what it’s doing, and I can make my own basic edits. I’m perfectly okay at following the logic of most code; it’s just that I don’t know the syntax. So I’m able to explain to ChatGPT exactly what I want my code to do, how it should work, etc., and it can write it for me. I’ve had some issues, but I’ve (so far) always been able to troubleshoot and eventually find a solution to them. I’m aware that if I want to do anything more complex then I’ll need to expand my coding knowledge, though! But so far, I’ve been able to use it to write scripts that are already beyond my own personal coding capabilities, which I think is impressive.

          I generally see LLMs as similar to predictive text or Google searches, in that they’re a tool where the user needs to:

          1. have an idea of the output they want
          2. know what to input in order to reach that output (or something close to that output)
          3. know how to use or adapt the LLM’s output

          And just as access to predictive text or Google doesn’t make everyone’s spelling/grammar/punctuation/sentence structure perfect or make everyone really knowledgeable, AIs/LLMs aren’t going to magically make everyone good at everything either. But if people use them correctly, they can absolutely enhance that person’s own output (be it their productivity, their creativity, their presentation or something else).

      • qyron@sopuli.xyz · 1 year ago

        If it was a fad, then why doesn’t cryptocurrency simply die? Because I’ve been waiting on that for some time now and nothing really happens.

        • EfreetSK@lemmy.world · 1 year ago

          I’d see 2 reasons:

          1. A lot of people have put a lot of money into it, and they won’t give it up. They’ll keep buying and selling, keeping it sort of artificially afloat even if it has no real-world usage. Well, there actually is one usage, which leads me to the next point
          2. The illegal market (and gambling) has a use case for cryptocurrencies, so it actually uses them

          But to put it simply - they don’t die because they don’t have to. There is no single company that could pull the plug. By design, they can coexist in our world and no one can stop them, whether people use them or not.

          It’s like a torrent with millions of seeders. As long as there is at least one seeder, the torrent will exist, even if the files it contains aren’t really useful.

    • Meltrax@lemmy.world · 1 year ago

      It’s not. It’s massively expensive, though. There’s money pouring into it because it’s the next big thing. Eventually, the companies that can afford to consistently power massive LLM training server farms will be the ones to keep going; the rest will flounder, get acquired, or disappear. Mozilla isn’t a big enough fish and won’t get acquired. AI is not a fad, but it’s not a sustainable business model for a company like Mozilla, so I hope all their eggs aren’t going in that basket.

      • spaduf@slrpnk.net · 1 year ago (edited)

        Hell, I think there’s a solid argument to be made that it’s not even a sustainable model for the biggest players. As it stands, they’re offering remarkably little functionality for how much it costs them. On the other hand, Mozilla’s work in this space up until now has largely been about bringing previously unimaginable functionality to locally hosted open-source models and datasets. And that does look to be a sustainable business model.
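
        For illustration - this is a generic sketch, not Mozilla’s actual tooling, and it assumes you’ve installed llama-cpp-python and downloaded some GGUF-format model (the filename below is a placeholder) - running an open model entirely on your own machine can look roughly like this:

        ```python
        # Rough sketch of local inference with llama-cpp-python; nothing leaves your machine.
        # The model path is a placeholder - use whatever GGUF model you've downloaded.
        from llama_cpp import Llama

        llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

        response = llm(
            "Q: What are some reasons people might switch browsers? A:",
            max_tokens=128,
            stop=["Q:"],
        )
        print(response["choices"][0]["text"])
        ```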

    • EfreetSK@lemmy.world · 1 year ago

      I was very sceptical about the recent hypes (space exploration, cryptocurrencies, self-driving cars, …), which turned out to be fads, but this time … this time I’m going to guess it isn’t going to be a fad. Well, it depends on what we mean by “AI” - will you have a robot pal like in the movies I, Robot or A.I. Artificial Intelligence? Probably not. Will AI predictions and learning be put into the majority of programs, and will quite clever AI voice assistants appear, like in the movie Her? Yeah, I guess this could happen. My main reasons are:

      1. It actually isn’t that difficult - machine learning isn’t new, and theoretically speaking, as long as you have enough computing power, nothing is stopping you. At the moment I can’t think of any hard limit
      2. Laws to stop it would be very difficult. You cannot just say “No AI!” - people can run it at home, so how would you stop it? Which leads me to the next point
      3. The open-source community has also made progress in the area
      4. Major players are investing heavily in it
  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    Over the last few years, Mozilla also started making startup investments, including into Mastodon’s client Mammoth, for example, and acquired Fakespot, a website and browser extension that helps users identify fake reviews.

    Indeed, when Mozilla launched its annual report a few weeks ago, it also used that moment to add a number of new members to its board — the majority of which focus on AI.

    Surman told me that the leadership team had been planning these efforts for almost a year, but as public interest in AI grew, he “pushed it out of the door.” But then Draief pretty much moved it right back into stealth mode to focus on what to do next.

    Surman believes that no matter the details of that, though, the overall principles of transparency and freedom to study the code, modify it and redistribute it will remain key.

    The licenses aren’t perfect and we are going to do a bunch of work in the first half of next year with some of the other open source projects around clarifying some of those definitions and giving people some mental models.”

    Then, he noted, when the smartphone arrived, there were a few smaller projects that aimed to create alternatives, including Mozilla (and at its core, Android is obviously also open source, even as Google and others have built walled gardens around the actual user experience).


    The original article contains 1,252 words, the summary contains 229 words. Saved 82%. I’m a bot and I’m open source!

    • Ephera@lemmy.ml · 1 year ago

      I didn’t ask for it, but I’m lowkey happy to have them in this. I imagine that a few years from now, all the start-ups will have run out of money or been acquired, and, as per usual, only the big tech companies will remain.

      Traditional search engines will basically be dead, completely swamped with AI-generated spam. And even non-techies will generally depend on generative AIs for information and communication.
      If those are exclusively controlled by big tech, we’ll have tons of censorship (e.g. if you want to export an LLM to China, it has to pretend not to know about the Uyghurs) and just generally no control.

      I don’t expect Mozilla to save the world here - they’re too small for that. But they’re already providing useful tools, raising the starting point for independent devs.

        • TheGrandNagus@lemmy.world · 1 year ago

          You should actually read the plans about their AI. It runs entirely locally, using your own data that never leaves your PC.

            • TheGrandNagus@lemmy.world · 1 year ago (edited)

              I’ve not heard about what you’re saying, so I’d like to learn more.

              Their AI system will collect zero data, though, and run entirely locally. And that’s what this is about.

              Like it or not, this is a hyped feature that people want. The cat’s out of the bag. It’s not a feature that I want, but it is one the market wants.

              It’s good to have a privacy-respecting option when we all know in a few years the likes of Google, Microsoft, and Apple will dominate the market. And we know that they won’t respect our privacy.

        • TheGrandNagus@lemmy.world · 1 year ago (edited)

          Yeah, the CEO is overpaid, but that’s the norm for tech companies. Seems weird to simp for Google, which has a vastly higher-paid CEO, though. And that’s what the above user is doing when they cheer on the prospect of non-Chromium browser engines dying.

          I don’t really see how they’re abandoning their values, either. This is about them having an AI system where they collect zero data and it’s done 100% on your own machine.