• jarfil@beehaw.org · 11 months ago

    that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough)

    Anything a human can be trained to do, a neural network can be trained to do.

    Yes, there will be a lack of trained humans for those positions… but spinning up enough “senior engineers” will be as easy as moving a slider on a cloud computing interface… or calling a remote API… done by whichever NN comes to replace the people in HR.

    ML is based on human learning, and replacing the “learning” stage of a human practitioner with machines is going to eventually create a gap in qualified human oversight

    Cue the humanoid robots.

    Better yet: outsource the creation of “qualified oversight”, and just download/subscribe to some when needed.

    • mozz@mbin.grits.dev (OP) · 11 months ago

      Anything a human can be trained to do, a neural network can be trained to do.

      Citation needed

      • jarfil@beehaw.org · 11 months ago

        Humans are neural networks… you can cite me on that.

        (Notice I didn’t say anything about the complexity, structure, or fundamental functioning of a human neural network. Everything points to modern artificial NNs being somewhat tangential to human ones… but also to some overlap existing already, and to that overlap being something that can be increased.)

        • mozz@mbin.grits.dev (OP) · 11 months ago

          Humans are a lot more than the mathematical abstraction that is a neural network.

          You could say you believe that any computational task a human brain can accomplish, a neural network can also accomplish (simply assuming that all of the higher-level structures, the different parts of the brain allocated to particular tasks, the way it encodes and interacts with memories and absorbs new skills, and the variety of chemical signals that communicate more than a simple number between 0 and 1 sent through each neuron-to-neuron connection, are abstractable within the mathematical construct of a neural network in some doable way). But that’s (a) not at all obvious to me, (b) not at all the same as simply asserting that we’ve got it all tackled now that we can do some great stuff with neural networks, and (c) not implying anything at all about how soon it’ll happen (i.e. it could take 5 years, or 500, although my feeling is on the shorter side as well).

          • jarfil@beehaw.org · 11 months ago

            Artificial NNs are simulations (not “abstractions”) of animal, and human, neural networks… so, by definition, humans are not more than a neural network.

            simple number 0 through 1

            Not how it works.

            Animal neurons respond like a clamping function: a constant 0 output up to some threshold, after which they start outputting neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
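
            As a rough sketch of that response curve (the function names here are purely illustrative), here is the thresholded clamp described above, next to the ReLU activation that most artificial NNs use as its crude stand-in:

            ```python
            import numpy as np

            def neuron_response(x, threshold=0.5):
                # Constant 0 below the firing threshold, then output grows
                # as a function of the input: the "clamping" described above.
                return np.maximum(0.0, x - threshold)

            def relu(x):
                # The threshold-at-zero special case used in artificial NNs.
                return np.maximum(0.0, x)
            ```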

            Still, for a long time it was thought that copying the human connectome and simulating it would be required to start showing human-like behaviors.

            Then, some big surprises came from a few realizations:

            1. You don’t need to simulate the neurons themselves, just the relationship between inputs and outputs (each value can be seen as the level of some neurotransmitter at some synapse).
            2. A grid of values can represent the connections of more neurons than you might think (most neurons are not connected to most others; neurotransmitters don’t travel far, they get reabsorbed, and so on).
            3. You don’t need to think “too much” about the structure of the network; add a few trillion extra connections to a relatively simple stack, and the network can start passing the Turing test.
            4. The values don’t need to be 16-bit floats; NNs quantized to as little as 4 bits (16 levels, 0 through 15) can still show pretty much the same behavior (see the sketch after this list).
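
            A toy sketch of point 4, assuming plain uniform quantization (production schemes calibrate scales per group of weights, but the principle is the same):

            ```python
            import numpy as np

            def quantize_4bit(w):
                # Map float weights onto the 16 integer levels 0..15.
                lo, hi = float(w.min()), float(w.max())
                scale = max(hi - lo, 1e-8) / 15.0
                return np.round((w - lo) / scale).astype(np.uint8), scale, lo

            def dequantize_4bit(q, scale, lo):
                return q.astype(np.float32) * scale + lo

            w = np.random.randn(4, 4).astype(np.float32)
            q, scale, lo = quantize_4bit(w)
            w_hat = dequantize_4bit(q, scale, lo)  # close enough that behavior barely changes
            ```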

            There are still a couple things to tackle:

            1. The lifetime of a neurotransmitter in a synapse.
            2. Neuroplasticity.

            The first one is kind of getting solved by attention heads and self-reflection, but I’d imagine that adding extra layers which “surface” deeper states into shallower ones might be a closer approach.
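
            To make the “surfacing” idea concrete, here is a purely speculative sketch (my reading of the idea, not an established architecture): the deep state from the previous step is fed back into the shallow layer, giving the network a short-lived internal echo, a bit like a neurotransmitter lingering in a synapse:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            W_shallow = rng.normal(size=(8, 8)) * 0.3
            W_deep    = rng.normal(size=(8, 8)) * 0.3
            W_surface = rng.normal(size=(8, 8)) * 0.3  # hypothetical deep-to-shallow path

            def step(x, deep_prev):
                # The previous deep state is "surfaced" into the shallow layer,
                # a feedback loop that plain feed-forward stacks lack.
                shallow = np.tanh(W_shallow @ x + W_surface @ deep_prev)
                return np.tanh(W_deep @ shallow)

            deep = np.zeros(8)
            for _ in range(10):                 # run a few steps with feedback
                deep = step(rng.normal(size=8), deep)
            ```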

            The second one… right now we have LoRAs, which are more like psychedelics or psychoactive drugs, working in a “bulk” kind of way… with surprisingly good results, but still.
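
            For reference, the LoRA trick itself is tiny: freeze the pretrained weight matrix and train only a low-rank correction, so a handful of parameters shifts the whole layer’s behavior in bulk. A minimal sketch (dimensions are illustrative):

            ```python
            import numpy as np

            d, r = 1024, 8                       # layer width, LoRA rank (r << d)
            rng = np.random.default_rng(0)

            W = rng.normal(size=(d, d))          # pretrained weight, frozen
            A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
            B = np.zeros((d, r))                 # trainable up-projection, starts at 0

            def forward(x):
                # Effective weight is W + B @ A: ~16k trainable numbers nudging
                # a ~1M-parameter matrix, hence the "bulk" feel of the effect.
                return W @ x + B @ (A @ x)
            ```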

            Where it will really start getting solved is with massive-scale neuromorphic hardware accelerators the size of a 1TB microSD card (a proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.

            Whether that’s going to take more or less than 5 years is hard to say, but surely everyone is trying as hard as possible to make it less.

            Then, imagine a “trainee” humanoid robot with maybe 1000 of those accelerators, which, once it trains a NN for whatever task, can copy it over to as many simple “worker” robots as needed. Imagine a company spending a few billion USD on training a wide range of those NNs, then offering a per-core subscription to other companies… at a fraction of the cost of similarly trained humans.

            TL;DR: we haven’t seen nothing yet.

            • mozz@mbin.grits.dev (OP) · 11 months ago

              by definition, humans are not more than a neural network.

              Imma stop you right there

              What’s the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?

              Remember, you said not more than a neural net – anything you add to the neural net to make that happen shouldn’t be needed, because humans can do it, and they’re not more than a neural net.

            • noxfriend@beehaw.org · 11 months ago

              We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).

              I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing, like Elephants Don’t Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

              • jarfil@beehaw.org · 11 months ago

                Where did I exaggerate anything?

                We don’t even know what consciousness or sentience is, or how the brain really works.

                We know more than you might realize. For instance, consciousness shows up as the ∆ (the difference in activity) between separate brain areas; when they all go into sync, consciousness is lost. We see a similar behavior with NNs.

                It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.
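
                For readers following along, “temperature” is just a scalar that rescales the model’s output scores before sampling; higher values flatten the distribution and inject more randomness. A minimal sketch:

                ```python
                import numpy as np

                def sample(logits, temperature=1.0, rng=np.random.default_rng()):
                    # Higher temperature flattens the distribution (more random picks);
                    # temperature near 0 approaches deterministic argmax.
                    z = logits / temperature
                    z -= z.max()                 # numerical stability
                    p = np.exp(z)
                    p /= p.sum()
                    return rng.choice(len(logits), p=p)

                token = sample(np.array([2.0, 1.0, 0.1]), temperature=0.7)
                ```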

                trying to accurately simulate a rat’s brain have not brought us much closer

                There lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulations by solving only for the relevant parts, then increasing the parameter counts to a point where they perform better than the original thing.

                It’s kind of a brute force approach, but the results speak for themselves.

                the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

                I’m afraid the “state of the art” in 2020 was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.

                • noxfriend@beehaw.org · 11 months ago

                  We know more than you might realize

                  The human brain is the most complex object in the known universe. We are only scratching the surface of it right now. Discussions of consciousness and sentience are more a domain of philosophy than anything else. The true innovations in AI will come from neurologists and biologists, not from computer scientists or mathematicians.

                  It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.

                  Quantum effects are not randomness. Emulating quantum effects is possible (they can be understood empirically), but it is very slow. If intelligence relies on quantum effects, then we will need to build whole new types of quantum computers to build AI.

                  the results speak for themselves.

                  Well, there we agree. In that the results are very limited, I suppose they do speak for themselves 😛

                  We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.

                  This is what I mean by exaggeration. I’m an AI proponent, I want to see the field succeed. But this is nothing like the leap forward some people seem to think it is. It’s a neat trick with some interesting if limited applications. It is not an AI. This is no different than when Minsky believed that by the end of the 70s we would have “a machine with the general intelligence of an average human being”, which is exactly the sort of over-promising that led to the AI field having a terrible reputation and all the funding drying up.

    • noxfriend@beehaw.org · 11 months ago

      Anything a human can be trained to do, a neural network can be trained to do.

      Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.

        • noxfriend@beehaw.org · 11 months ago

          I wouldn’t say 74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net though; there is probably a fair bit of actionism at work.

      • jarfil@beehaw.org · 11 months ago

        Try getting them to even open a door

        For now there is “AI vs. Stairs”; you may need to wait for a future video for “AI vs. Doors” 🤷

        BTW, that is a rudimentary neural network.

        • noxfriend@beehaw.org · 11 months ago

          I’ve seen a million such demos, but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time to come yet.

          • jarfil@beehaw.org · 11 months ago

            Well, that particular demo is more of a cockroach than a toddler; the neural network used seems not to have even a million weights.

            Moravec’s paradox holds true on two fronts:

            1. Computing resources required
            2. Lack of formal description of a behavior

            But keep in mind that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training a NN we’re brute-forcing away the lack of a formal description of the algorithm.
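
            As a concrete miniature of that brute-forcing: the toy network below learns XOR from four examples, with no rule for XOR ever written down (a classic exercise, shown here only to illustrate the point):

            ```python
            import numpy as np

            # Learn XOR without a formal description: examples plus fitting.
            rng = np.random.default_rng(0)
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
            y = np.array([[0], [1], [1], [0]], dtype=float)

            W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
            W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

            def sigmoid(z):
                return 1 / (1 + np.exp(-z))

            for _ in range(5000):
                h = np.tanh(X @ W1 + b1)          # hidden layer
                out = sigmoid(h @ W2 + b2)        # prediction in (0, 1)
                g_logit = out - y                 # cross-entropy gradient at logits
                g_h = (g_logit @ W2.T) * (1 - h**2)
                W2 -= 0.1 * (h.T @ g_logit); b2 -= 0.1 * g_logit.sum(0)
                W1 -= 0.1 * (X.T @ g_h);     b1 -= 0.1 * g_h.sum(0)

            print(out.round())  # approx [[0], [1], [1], [0]]
            ```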

            We’re now looking towards neuromorphic hardware on the trillion-“core” scale, so computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler… until you copy the first trained NN to an identical body and re-training costs drop to O(0)… which is much less than the cost of training even a million toddlers at once.
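
            The train-once-copy-many economics, in caricature (everything here is hypothetical; the point is just that the copy is the cheap part):

            ```python
            import copy

            def expensive_training_run():
                # Stand-in for the one costly training phase (hypothetical).
                return {"layer1": [0.1, 0.2], "layer2": [0.3]}

            trainee = {"weights": expensive_training_run()}  # paid for once

            # "Re-training" each identical worker body is just a weight copy,
            # effectively free next to the training run itself.
            workers = [copy.deepcopy(trainee) for _ in range(1000)]
            ```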

    • Overzeetop@beehaw.org · 11 months ago

      I’m assuming you’re being facetious. If not…well, you’re on the cutting edge of MBA learning.

      There are still some things that just don’t get into books, or drawings, or written content. It’s one of the drawbacks humans have - we keep some things in our brains that just never make it to paper. I say this as someone who has encountered conditions in the field that have no literature on the effect. In the niches and corners of any practical field there are just a few people who do certain types of work, and some of them never write down their experiences. It’s frustrating as a human doing the work, but it would not necessarily be so to an ML assistant unless there is a new ability to understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge. More importantly, it needs the operators holding the purse to approve that expenditure, trusting that the ML output is correct and not asking it to extrapolate in lieu of testing. Will AI/ML be there in 20 years to pick up the slack and put its digital foot down stubbornly and point out that lives are at risk? Even as a proponent of ML/AI, I’m not convinced that kind of output is likely - or even desired by the owners and users of the technology.

      I think AI/ML can reduce errors and save lives. I also think it is limited in the scope of risk assessment where there are no documented conditions on which to extrapolate failure mechanisms. Heck, humans are bad at that, too - but maybe more cautious/less confident and aware of such caution/confidence. At least for the foreseeable future.

      • jarfil@beehaw.org · 11 months ago

        we keep some things in our brains that just never make it to paper

        ISO 9001 would like to talk to all those people and have them either document their knowledge or see the door. Not really cutting edge; more of a baseline business certification needed to even dream about bidding for any government-related project (then again, people still lie and don’t keep everything documented… and shit happens, but such are people).

        some of them never write down their experiences

        Get a humanoid learning robot and you’ll have a log of everything it experienced at the end of the day, with exact timestamps, photos, and annotations.

        understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge

        Auto-GPT does it. The operator’s purse is why it doesn’t get used much more 😉