And yet, China is using AI.

…I… don’t know what to think about that.

…I really don’t.

Because it seems that AI is just a scam.

It may “exist”, but what it can do is a scam.

Maybe China thinks we have to use it just to “keep up” with the Western powers, but I dunno.

Anyway, interesting discussion with Adam Conover and Ed Zitron. It’s long, but you can listen to it while doing other things. The comments are interesting too, though there are trolls in there as well (AI supporters here and there).

Frankly, though? I oppose AI. I’m anti-AI. I’m anti-AI in China, anti-AI in America, and anti-AI on the whole damn planet.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
    13 hours ago

    I think this is a perfect illustration of how technology ends up being applied under different social and economic systems. The reason AI is problematic in the west is that it’s applied toward finding ways to increase the wealth of the oligarchs. China, on the other hand, is using AI for things like industrial automation, optimizing government workflows, and so on. AI is just a tool for automating work; there’s nothing inherently bad about it. The question is how the tool is applied and to what ends.

    • footfaults@lemmygrad.ml
      11 hours ago

      I’m not so sure about that. Your analysis correctly identifies that it is being used in the West for nefarious purposes, but I honestly think that even on the technical merits it is a flawed technology and a waste. DeepSeek is more efficient, yes, but it is still flawed, and I do not believe they should be using it.

      • OrnluWolfjarl@lemmygrad.ml
        7 hours ago

        I wouldn’t say AI (or pattern-replicating models resembling AI) is flawed. It’s a great tool for saving time and automating certain processes.

        The problem is the myriad grifters who have appeared, mostly in the West, trying to sell it as cure-all snake oil.

        For instance, there’s a massive push in the EU to insert AI into education, but with little regard or planning for how to do it effectively. It would be a great tool if we were to feed AI our curricula, then ask it to update them to current knowledge (e.g. in science), come up with suggestions for better delivery of certain topics, eliminate time wasted on erroneous, repetitive, or useless topics, and improve our scheduling so topics line up across subjects (e.g. teaching Romeo and Juliet in Languages while going through the history of 1400s Verona in History). These things could be done by committees over a five-year period. Or they could be done by AI in a day. Instead, we get handsomely-paid private contractors organizing days-long training sessions on how to use AI to draw a picture, because it might make a presentation to students slightly more exciting.

        • footfaults@lemmygrad.ml
          6 hours ago

          Honestly, even your idea of having an LLM “update” a curriculum just makes me annoyed. Why does everyone automatically hand authority over perhaps one of the most important societal functions to an LLM, instead of trusting teachers, with their decades of experience, to do their job?

          Is this what we want? AI-generated slop teaching the next generation, because it’ll get it done in a day?

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
        7 hours ago

        I find it works well for many purposes, particularly the R1 variant. I’ve been using it for lots of things and it saves me time. I don’t think it’s flawed technology at all; you just have to understand where and how to use it effectively, just like any tool.

        • footfaults@lemmygrad.ml
          7 hours ago

          I would argue that if your goal is to get an output that is the statistical mean for a given input, then sure, an LLM will generate a set of outputs that statistically go together. If you throw enough data at it and waste a small country’s annual energy consumption, then of course you’ll get something statistically similar. Congrats. You did it.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
            6 hours ago

            The energy consumption has already been reduced drastically by reinforcement learning, mixture-of-experts architectures, quantization, and other techniques. We’re literally just starting to optimize this tech, and there’s already been huge progress in that regard. Second, it’s already quite good at doing real world tasks, and saves me a ton of time writing boilerplate when coding.
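
            A minimal sketch of one of those techniques, quantization (illustrative only, not DeepSeek’s actual pipeline): weights stored as 8-bit integers instead of 32-bit floats take a quarter of the memory, which directly cuts the energy spent moving them around:

            ```python
            # Hypothetical post-training weight quantization sketch:
            # map float32 weights onto an int8 grid and back.
            import numpy as np

            rng = np.random.default_rng(0)
            weights = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)

            # Symmetric linear quantization: one scale factor for the whole tensor.
            scale = np.abs(weights).max() / 127.0
            q = np.round(weights / scale).astype(np.int8)  # 1 byte per weight
            dequant = q.astype(np.float32) * scale         # used at inference

            print(f"memory: {weights.nbytes} -> {q.nbytes} bytes (4x smaller)")
            print(f"max abs error: {np.abs(weights - dequant).max():.6f}")
            ```

            The tradeoff is the small rounding error printed at the end; real systems use finer-grained scales to keep it tolerable.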

            • footfaults@lemmygrad.ml
              5 hours ago

              Second, it’s already quite good at doing real world tasks, and saves me a ton of time writing boilerplate when coding.

              So, that’s another thing I wonder about. All these LLMs are doing is automating boilerplate code, and frankly that’s not really innovative. “Programming via Stack Overflow” has been a joke for nearly two decades now (shudder), and all the LLM is doing is saving you the ALT+TAB between SO and your text editor, no?

              If you’re doing a TODO app in Angular or NextJS, I’m sure you get tons of boilerplate.

              But what about when it comes to novel, original work? How much does that help? I mean, really, how much savings do you get, and how useful is it?

              • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
                2 hours ago

                The reality is that most programming isn’t all that innovative. Most work in general isn’t innovative. Automating the boring stuff is literally the whole point. Meanwhile, it’s far more sophisticated than copy-pasting from StackOverflow. It can come up with solutions in the context of the specific problem you give it, and the chain of reasoning DeepSeek R1 produces is interesting in itself, since it reads like a chain of thought.

                This can itself be useful for doing novel and original work, because it stimulates your thinking. Sometimes you see something that triggers an idea you wouldn’t have had otherwise, and you can pull on that thread. I find it literally saves me hours of work.

                For example, just the other day I used it to come up with a SQL table schema based on some sample input JSON data. Figuring out the relationships would’ve taken me a little while, and typing it all up even longer. It did exactly what I needed and let me focus on the problem I wanted to solve. I also find it useful for analyzing code, which is great for getting introduced to an unfamiliar codebase or for finding the specific part of the code that might be of interest.
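
                As a concrete (and entirely made-up) illustration of that kind of task, here is a hand-written sketch of its naive version: infer a CREATE TABLE statement from a sample JSON record. The sample data, table name, and type mapping are all hypothetical:

                ```python
                # Hypothetical sketch of the JSON -> SQL schema task described above.
                import json

                sample = json.loads(
                    '{"id": 42, "name": "widget", "price": 9.99, "in_stock": true}'
                )

                # Naive mapping from JSON value types to SQL column types.
                SQL_TYPES = {int: "INTEGER", float: "REAL", str: "TEXT", bool: "BOOLEAN"}

                def infer_schema(table: str, record: dict) -> str:
                    cols = ",\n  ".join(
                        f"{k} {SQL_TYPES[type(v)]}" for k, v in record.items()
                    )
                    return f"CREATE TABLE {table} (\n  {cols}\n);"

                print(infer_schema("products", sample))
                ```

                The point of handing this to a model is that real inputs are nested and messy, where untangling the relationships takes actual time.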

                It’s also able to find places in code that can be optimized, and even to write the optimizations itself: https://github.com/ggml-org/llama.cpp/pull/11453

                Based on my experience, I can definitively say that this is a genuinely useful tool for software development.

                • footfaults@lemmygrad.ml
                  2 hours ago

                  For example, just the other day I used it to come up with a SQL table schema based on some sample input JSON data

                  How likely is it that this JSON structure and its corresponding database schema are somewhere in the (immense) training data? Was it novel? New?

                  Like I just have continual questions about an LLM doing 3NF a priori on novel data.

                  Like, if we just outright said that LLMs are a better Google, or a better IntelliSense that can fetch existing data they have seen (which, given that the training set is basically the entire Internet across its whole existence, as crawled by crawlers and the Internet Archive, is a boggling amount), instead of dressing them up as coming up with NEW AND ENTIRELY NOVEL code like the hype keeps saying, then I’d be less of a detractor.

      • Dengalicious@lemmygrad.ml
        8 hours ago

        I think a distinction needs to be made here between generative AI and AI as a whole. DeepSeek is generative AI, along with all the issues that go along with that, but there are very applicable AI-based systems for professional, industrial, and scientific uses. The level of data that machine learning systems can analyze provides utility far beyond the problems that are certainly inherent to generative AI.

        • footfaults@lemmygrad.ml
          7 hours ago

          I think a distinction needs to be made here between generative AI and AI as a whole.

          Does it? I would not consider LLMs to be AI, and it’s unfortunate that a marketing gimmick has become a colloquialism, to the point where they have stolen the term AI and renamed the original, pre-OpenAI concept to “AGI”.

          Regardless, the point I am making is that the current hype surrounding LLMs is unwarranted.

          The level of data that machine learning systems can analyze provides utility far beyond the problems that are certainly inherent to generative AI.

          But here’s where I disagree. I think machine learning at least has a relatively modest marketing pitch: you feed it data, and based on some training and inputs (yes, this is indeed similar to an LLM), you get a reasonable estimation of whatever you are seeking as an output, based on historical data. Nobody is running around claiming that my monitoring system, which has some ML in it for CPU, temperature, and load averages, is suddenly going to turn into God, the way Sam Altman and all these other wackos want to happen with LLMs.
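
          To make that modest pitch concrete, here is the simplest possible stand-in for that kind of monitoring logic: a rolling z-score over recent CPU readings. The window size, threshold, and sample values are all invented:

          ```python
          # Modest "ML-ish" monitoring sketch: flag CPU readings that deviate
          # sharply from recent history. All numbers here are invented.
          from collections import deque
          from statistics import mean, stdev

          WINDOW, THRESHOLD = 10, 3.0   # assumed window size and z-score cutoff
          history = deque(maxlen=WINDOW)

          def check(cpu_pct: float) -> bool:
              """True if the reading looks anomalous versus recent history."""
              anomalous = False
              if len(history) == WINDOW:
                  mu, sigma = mean(history), stdev(history)
                  anomalous = sigma > 0 and abs(cpu_pct - mu) / sigma > THRESHOLD
              history.append(cpu_pct)
              return anomalous

          for reading in [22, 25, 24, 23, 26, 25, 24, 22, 23, 25, 96]:
              if check(reading):
                  print(f"alert: cpu at {reading}% is unusual")
          ```

          The whole promise here is an estimate from historical data, nothing more.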

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml
          7 hours ago

          Of course, but I don’t think we should discard LLM-based AI either. It’s a piece of a bigger picture. One of my favorite examples of it being used effectively is neurosymbolic systems, where deep neural networks classify noisy input data and a symbolic logic system then reasons over the classified data.
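
          A toy sketch of that division of labor, with the neural network stubbed out by made-up confidence scores (the labels, thresholds, and rules are all invented for illustration):

          ```python
          # Toy neurosymbolic pipeline: a (stubbed) neural classifier turns noisy
          # input into symbols; crisp hand-written logic then reasons over them.

          def neural_classifier(frame: str) -> dict[str, float]:
              """Stand-in for a real network: returns label confidences."""
              fake_outputs = {
                  "frame_1": {"car": 0.94, "person": 0.03},
                  "frame_2": {"car": 0.10, "person": 0.88},
              }
              return fake_outputs[frame]

          def symbolic_reasoner(labels: dict[str, float]) -> str:
              # Auditable rules applied to the classifier's fuzzy output.
              if labels.get("person", 0.0) > 0.8:
                  return "stop: person detected"
              if labels.get("car", 0.0) > 0.8:
                  return "yield: vehicle detected"
              return "proceed"

          for frame in ("frame_1", "frame_2"):
              print(frame, "->", symbolic_reasoner(neural_classifier(frame)))
          ```

          The appeal is that the fuzzy perception part and the auditable reasoning part can each do what they are good at.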

  • Max@lemmygrad.ml
    1 day ago

    As you can see from people commenting in this thread and from discussions elsewhere, someone with a positive or negative opinion of AI will lump in or leave out different technologies to decide whether AI is good or bad. So being pro- or anti-AI is probably too general a position to be meaningful.

    That said, the current ‘pro-AI’ push in global markets is clearly a naked attempt to expand on some mostly useless technology that requires a huge amount of expensive computing power, now that crypto mining has fallen through a bit and stopped filling that role.

  • fubarx@lemmy.ml
    57 minutes ago

    Some may remember the dotcom days, when everyone was trying to latch the internet onto something. You ended up with online pet food and hosiery. When it crashed, it took out a whole swath of companies and a lot of the people who had come out to Silicon Valley for the gold rush.

    A few years later, it was mobile + X. Just add mobile to coupons, or wine, or car shopping. Then in 2008, we had the housing crash and it took out most of the mobile companies on shaky ground.

    The common thread was that a lot of questionable companies dropped out, and many bathrooms were lined with worthless stock options. But we also ended up with companies that had sustainable business models, survived, and became large.

    Now, we’re in the mid-stage of the same trajectory: AI + whatever. There will be the inevitable shakeout, and lots of companies will go out of business. But we’ll end up with a few strong ones that solve real problems.

    Not sure which ones, but if history is a guide, it will be those that solve sticky, real problems that people are willing to pay for long term.

  • Finiteacorn@lemmygrad.ml
    1 day ago

    To clarify: when you say “AI”, do you mean LLMs and diffusion models? Because if you mean AI as a whole, then you are simply wrong: AI is hugely useful, has been around for decades, and is widely used in basically everything you can imagine. If by AI you specifically mean LLMs and diffusion models, then let me clarify that yes, these models do in fact exist, and let me further clarify that “AI” is just a term: these models are in no way “intelligent”, just in case that’s what you mean by “exists”. And yeah, many of their purported uses are scams, though some are not. The REAL use of LLMs and diffusion models, from the perspective of capitalists, is to fire a bunch of people because “AI can do their jobs now” and then hire them right back at a much lower salary to do the same job as prompt engineers, just like they did with translators back when Google Translate became viable (which is also AI, btw).

  • ExotiqueMatter@lemmygrad.ml
    1 day ago

    It’s not true that all current AI is a scam.

    Western AI is a detestable scam that everyone hates, because it’s used for stupid things that no one asked for, like replacing artists or generating endless slop to pollute the internet with, and because the capitalists insist on using it to completely replace workers instead of treating it as a tool to make work easier, as it should be.

    If, instead of being stupid bazinga-hyped techbros about it, we treat it as the mere tool that it is, it can be genuinely useful in some very niche applications.

  • pcalau12i@lemmygrad.ml
    1 day ago

    The claim that AI is a scam is ridiculous and can only be made by someone who doesn’t understand the technology. Are we genuinely supposed to believe that capitalists hate profits and capital accumulation and are just wasting their money on something worthless? It’s absurd. AI is already making huge breakthroughs in many fields, such as medicine with protein folding. I would recommend watching this video on that subject in particular. China has also been rapidly improving the speed of construction projects by coordinating them with AI.

    To put it in layman’s terms, traditional computation is like Vulcans: extremely logical, and everything has to be computed logically, step by step. This is very good if you want precise calculations, but very bad for many other kinds of tasks. Here’s an example: you’re hungry, you decide to go eat a slice of pizza, you walk to the fridge and open it, take out the slice, put it in the microwave to heat it up, then eat it. Now, imagine I gave you just the sensory data, such as information about what a person is seeing and feeling (hunger), and asked you to write a foolproof sequence of logical statements that, when evaluated alongside the sensory data, would give you the exact muscle contractions needed for the person to carry out this task.

    You’ll never manage it. Indeed, even very simple tasks humans do every day, like turning spoken words into written words, are things nobody has ever replicated with a set of logical if/else statements. Even something seemingly simple like this is far too complicated, with far too many variables, for anyone to ever program by hand: everyone’s voice is a bit different, every audio recording has slightly different background noise, and so on, and accounting for all of it with a giant logical proof would be practically impossible.

    The precision of traditional computation is also its drawback: you simply cannot write a program for many of the very basic tasks humans do every day. You need a different form of computation, one more similar to how human brains process information: something that works in a massively parallel fashion, tweaking billions of parameters (the strengths of neural connections) to produce approximate rather than exact outputs, and that can effectively train itself (“learn”) without a human having to adjust those billions of parameters manually.
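
    A minimal toy version of that idea: in the sketch below, nobody writes the XOR rule; a tiny network just nudges its parameters until its approximate outputs match the data (the layer sizes, learning rate, and iteration count are arbitrary choices):

    ```python
    # Toy neural network learning XOR purely from data: the programmer never
    # writes the rule; gradient descent tweaks the parameters instead.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # Forward pass: approximate outputs from the current parameters.
        h = np.tanh(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: nudge every parameter to shrink the error.
        d_out = out - y
        d_h = (d_out @ W2.T) * (1 - h**2)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0], no rules written
    ```

    The rule ends up encoded in the weights, which is exactly the “self-writing programs” idea described further down.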

    If you have ever used any device with speech recognition, such as writing a text message by voice, you have used AI; that is one of the earliest examples of AI algorithms shipping in consumer devices. USPS heavily integrates AI for optical character recognition, automatically reading the addresses written on letters to get them to the right place; Tom Scott has a great video here on the marvel of engineering that is the United States Postal Service and how it processes the majority of mail entirely automatically thanks to AI. There have also been breakthroughs in nuclear fusion, where AI stabilizes the plasma, because the plasma is too chaotic, and therefore too complex, for anyone to write a stabilizing algorithm by hand. Many companies use it on the assembly line for object detection to automatically sort things, and many security systems use it to detect things like people or cars, so they know when to record footage and can save space.

    Being anti-AI is just being a Luddite; it is to oppose technological development. Of course, not all AI is particularly useful; some companies shove it into their products for marketing purposes, where it doesn’t help much and may even make the experience worse. But opposing the technology in general makes zero sense. It’s just a form of computation.

    If we were to oppose AI, then Ludwig von Mises wins and socialism is impossible. Mises believed that socialism is impossible because no human could compute the vastness of the economy by hand. Of course, we later invented computers, and this expanded the scale at which we can plan the economy, but traditional computational models still require you to manually write out the algorithm as a sequence of logical if/else statements, which has started to become too cumbersome as well. AI allows us to break free of this limitation with what are effectively self-writing programs: you just feed them massive amounts of data and they form the answer on their own, without the programmer even knowing how they solve the problem. The model acts as a kind of black box that produces the right output for a given input without anyone having to know how it works internally; in fact, models with billions of parameters are too complicated for anyone to fully understand.

    (Note: I am using the term “AI” interchangeably with technology based on artificial neural networks.)

  • Kultronx@lemmygrad.ml
    2 days ago

    AI is just another tool, like email. The biggest problem with AI is the people using it, whether that means burning tons of greenhouse gases for huge datacentres or relying on it for important tasks that should be left to humans. DeepSeek sort of combats the first issue, and one would hope China is smart enough to use it to serve the people, not profit. I really like Ed Zitron and his podcast and have been listening for a while; his criticism of OpenAI and Web3 nonsense has been ahead of the curve.

  • HakFoo@lemmy.sdf.org
    2 days ago

    It’s an intersection of immature tech and desperate capital.

    Look at some of the rubbish that was tossed around when personal computers were new. The classic one is “you’ll have one in the kitchen to store recipes”, yet nobody foresaw livestreaming or Microsoft Excel. Or look at some of the railway routes proposed in the mid-1800s. Fair enough: people will speculate and make dubious guesses about the future of new tech. I expect we’ll find useful cases for generative AI, but many of the “now with extra fingers” products will wither when they prove unacceptably costly once unsubsidized, or simply too much hassle to get correct output from. When a support LLM regularly gives technically incorrect answers, nobody will want it, no matter how clean its grammar and how florid its language.

    But for now, it’s fuel for late-stage capitalism, which has to jump from bubble to bubble to keep delivering infinite growth. They’ll wager on any and all of it, even the stupid and ultimately useless stuff, in case it delivers the moonshot return they’re after. It’s institutional FOMO: the fear of missing out on buying into day-1 Apple, or of strangling their big win by not cramming it down everyone’s throat hard enough.

  • knfrmity@lemmygrad.ml
    2 days ago

    An LLM, just like any technology, is in itself neutral. What’s important is who controls the application of said technology.

    When the people own a technology, both directly and via the means of production applying and producing it, it will be used to improve the lives of the people.

    When the capitalists own a technology, it will be used to improve the lives of capitalists.

    • footfaults@lemmygrad.ml
      7 hours ago

      An LLM, just like any technology, is in itself neutral.

      I don’t agree with this statement. All technology is built with a purpose in mind. Saying it’s neutral is a lie.