- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
I predict a huge demand for workers in five years, when they finally realize AI doesn’t drive innovation, but recycles old ideas over and over.
I predict execs will never see this despite you being correct. We replaced most of our HR department with enterprise GPT-4, and now almost all HR inquiries where I work are handled through a bot. It hallucinates HR policies and sometimes deletes your PTO days.
But can you convince it to report itself for its violations if you phrase it like it’s a person?
No, unfortunately. A lot of us fucked with it, but it keeps logs of every conversation and flags abusive ones to management. We all got a stern talking-to about it afterwards.
“Trust your tools”. Not my fault the hammer was replaced by a banana.
I give you permission to replace HR with ChatGPT. It just can’t be any worse.
“Workforce” doesn’t produce innovation, either. It does the labor. AI is great at doing the labor; it excels at mindless, repetitive tasks. AI won’t be replacing the innovators, it will be replacing the desk jockeys who do nothing but update spreadsheets or write code.

What I predict we’ll see is the floor dropping out of technical schools that teach the things AI will be replacing. We are looking at the last generation of code monkeys. People joke about how bad AI is at writing code, but give it the same length of time as a graduate program and see where it is. Hell, the GPT-3 beta only came out in June of 2020 (just 13 years after the first iPhone, and look how far smartphones have come).

There won’t be a huge demand for workers in 5 years; there will be a huge portion of the population that suddenly won’t have a job. It won’t be like the agricultural or industrial revolution, where it takes time to make its way around the world, or where there is some demand for artisanal goods. No one wants artisanal spreadsheets, and we are too global now not to outsource our work to the lowest bidder with the highest thread count. It will happen nearly overnight, and if the world’s governments aren’t prepared, we’ll see an unemployment crisis like never before. We’re still in “Fuck around.” “Find out” is just around the corner, though.
Even mindless and repetitive tasks require instances of problem solving far beyond what AI is capable of. In order to replace 41% of the workforce you’d need AGI, and we don’t know if that’s even possible.
Let’s also not forget that execs are horrible at estimating work.
“Oh this’ll just be a copy paste job right?” No you idiot this is a completely different system and because of xyz we can’t just copy everything we did on a different project.
Or salesmen. “Oh, you have another system to integrate with? No, no change in estimates, everything is OK.”
Then the deal is concluded and so on, and suddenly that information reaches the people who’ll actually be doing the work.
It was 41% of execs saying workforce will be replaced, not 41% of workforce will be replaced
It’s not replacing people outright; it means each person is capable of doing more work, so we only need 41% of the people to achieve the same task. It will crash the job market. Global productivity and production will improve, then AI will be updated, repeat. It’s just a matter of whether we can scale industry fast enough to match the total production capacity of people with AI assistance. Both of these things are currently exponential, but the lag may cause a huge unemployment crisis in the meantime.
In this potential scenario, instead of axing 41% of people from the workforce, we should all get 41% of our lives back. Productivity and pay stay the same while the benefits go to the people instead of the corporations for a change. I know that’s not how it ever works, but we can keep pushing the discussion in that direction.
You and I know damn well that a revolution is the only way that’s gonna happen, and there aren’t any on the horizon.
What do you replace it with after a revolution? Communism doesn’t work, capitalism is flawed, democracy is flawed but seems to at least promote our freedoms. I think we definitely need a fluid democracy before we can start thinking about how to solve the economic problems (well, other than raising the minimum wage, that’s a no-brainer) without undermining exponential growth.
Capitalism isn’t just flawed, it’s broken. For every prosperous nation like the UK or Germany, there are half a dozen Haitis and Panamas.
By “communism”, I presume you mean Marxist-Leninist state socialism, which indeed fails miserably. However, it isn’t the only alternative to capitalism. Historically, there have been several communes during the Spanish and Russian civil wars that worked fine and didn’t have a central leader, let alone a dictatorship. Although they died because of military blunders, this model is currently being followed more or less in Chiapas by the Zapatistas.
In these places, workers’ councils ruled. Direct face-to-face democracy among neighbours was how most things were done. I reckon that this is a fairly nice arrangement.
Democracy’s flaws come from subversion by the wealthy and the fact that republics don’t let people really participate, but rather choose people who participate in their place.
We are walking talking general intelligence so we know it’s possible for them to exist, the question is more if we can implement one using existing computational technology.
I’ve worked with humans who have computer science degrees and 20 years of experience, and some of them have trouble writing good code, debugging issues, communicating properly, and integrating with other teams / components.
I don’t see “AI” doing this. At least not these LLM models everyone is calling AI today.
Once we get to Data from Star Trek levels, then I can see it. But this is not that. This is not even close to that.
People are always enthusiastic about automating others’ jobs. Just like they are about having opinions on areas of knowledge utterly alien to them.
Say, how most see the work of medics.
And the fact that a few times in known history revolutions happened makes them confident that another one is just around the corner, and of course it’ll affect others and not them.
Hahahaha, good one
You know what I like about Pareto law and all the “divide and conquer” algorithms? You should still know where the division is and which 10% are more important than the other 90%.
Anyway, my job is in learning new stuff quickly and fixing that. The same is true of many, many people, even some non-technical types really.
People who can be replaced with machines have already been for the most part, and where they can’t, it’s also a matter of social pressure. Mercantilism and protectionism and guilds historically were defending the interests of certain parties, with force too.
No, I don’t think there’ll be a sudden “find out” different from any other period of history.
just 13 years after the first iPhone, and look how far smartphones have come
I disagree.
As someone who had the first iPhone: it was amazing and basically did everything that a new one does. It went on all websites, had banking apps and everything.
I would actually argue phones have become worse; they are very bloated and spy on you. At first they actually made your life better, and there were no social media apps supercharged for addiction.
Hype hype hype hype hype.
Hilarious L take
You know what I love about blocking people?
these are the same people who continue to use monetary incentives despite hard scientific evidence that they have the opposite effect from what is desired. they’re not gonna realise shit.
The ones refusing to give raises, then being shocked and complaining bitterly about loyalty when people quit for a higher wage somewhere else.
Seems to be working in Hollywood films for the last 20 years
Yeah the 59% in this survey are going to end up pretty successful and buy out the 41%
but recycles old ideas over and over.
I am so glad us humans don’t do that. It’s so nice going to a movie theater and seeing a truly original plot.
The Oncology pharma companies would love that! Every time I google symptoms I swear…
In my experience, 100% of executives don’t actually know what their workforce does day-to-day, so it doesn’t really surprise me that they think they can lay people off because they started using ChatGPT to write their emails.
This was my immediate thought too. Even people 2-3 levels of management above me struggle to understand our job let alone the person 5-6 levels up in the executive suite.
At my last job my direct manager had to explain to upper management multiple times that X role and Y role could not be combined because it would require someone to physically be in multiple places simultaneously. I think about that a lot when I hear about these corporate plans to automate the workforce.
However, people saying that the C-suite can be replaced with GPTs don’t understand that plenty of people outside the C-suite could be replaced (or not) just as well. Lots of office plankton around with such reasoning skills that I just don’t know how their work brings profit.
I can’t decide whether those people are really needed or they are employed so that they wouldn’t collectively lynch those of us who’d keep relevance, but wouldn’t be social enough to defend from that doom.
The problem with building hierarchies of humans is with humans politicking and lying and scheming with each other, not even talking about usual stuff like friendship and sympathy and their opposites. It’s just impossible to see what’s really happening behind all that.
Well it’s good to know 59% of execs are aware that AI isn’t gonna change shit
Some of that 59% might, but I guarantee at least some very strongly think it will change things; they just think the change it brings will require as many people as before (if not more), doing exponentially more with the people they have.
The problem with that headline is that it doesn’t feed the hype cycle.
Could be they just think there’s a productivity shortfall and the current workforce plus AI will help meet it. Or they’re just lying for PR.
Without more data, it’s just guessing though.
Can AI replace executives too?
Yes. And it will.
As soon as we’ve managed to make a computer that can simulate an entire brain in real time. Who knows how many decades or even centuries that will take.
No. Middle management is a lot of repeated tasks that an AI could do. The thing is that we’re not talking about replacing all middle management; we’re talking about giving 10% of the managers the tools to run 90% of the repetitive, tedious and boring tasks.
To replace a corporate executive? We wouldn’t need anything like that. We already have algorithms more than capable of replacing CEOs. There is nothing that challenging in what they do…
The challenge is to not do whatever the optimal algorithm says. If they simply did what an algorithm says, it would be very easy for competitors to predict.
The challenge comes in being a scapegoat for when things go wrong (albeit a goat with a golden parachute) and a hype man for when things go right.
But as others have said AI won’t replace executives because it’s executives making the decisions to use AI, and no one with power will ever choose an option that reduces their own money.
Oh, but the board directors might want to replace the CEO anyway.
Well, the one in power might decide that they’re spending too much on the managers below them.
You make it sound like corporations invent a new revolutionary wheel each quarter. They don’t.
What fantastic new beverage have Coca Cola launched the last couple of years? What astonishing new car technology has GM or Volkswagen released lately?
Most companies are doing what they’ve always done and guarding their market share. Now and then some small competitor with something revolutionary pops up and either starts eating market share or gets acquired by one of the bigger ones.
So between a competitor popping up or one of your engineers coming up with a lucky accident, all you do is manage the business as you always do.
It’s amazing how this delusion gets repeated so much in here. Absolute unhinged shit.
Yes.
The biggest factor in terms of job satisfaction is your boss.
There’s a lot of bad bosses.
AI will be an above average boss before the decade is out.
You do the math.
I really want to see if worker-owned cooperatives plus AI could help democratize running companies (where appropriate). Not just LLMs, but a mix of techniques for different purposes (e.g., hierarchical task networks to help with operations and pipelining, LLMs for assembling/disseminating information to workers).
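For anyone unfamiliar, HTN-style planning is basically recursive task decomposition: compound tasks expand into subtasks until only primitive actions remain. A toy sketch of the idea (all task names here are invented for illustration, not from any real system):

```python
# Hypothetical method table: compound task -> ordered subtasks.
# Anything not in the table is treated as a primitive action.
methods = {
    "fulfil_order": ["pick_parts", "assemble", "ship"],
    "assemble": ["fit_parts", "qa_check"],
}

def decompose(task):
    """Recursively expand a task into a flat list of primitive actions."""
    if task not in methods:  # primitive: execute as-is
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("fulfil_order"))
# ['pick_parts', 'fit_parts', 'qa_check', 'ship']
```

Real HTN planners also track world state and preconditions, but the core loop is this kind of expansion.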
Say execs. You know, the people who view labor as a cost center.
They say that because that’s what they want to happen, not because it’s a good idea.
And only 41%.
I’ve advised past clients to avoid reducing headcount and instead be looking at how they can scale up productivity.
It’s honestly pretty bizarre to me that so many people think this is going to result in the same amount of work with fewer people. Maybe in the short term a number of companies will go that way, but not long after, they’ll be out of business.
Long term, the companies that are going to survive the coming tides of change are going to be the ones that aggressively do more and try to grow and expand what they do as much as possible.
Effective monopolies are going out the window, and the diminishing returns of large corporations are going to be going head to head with a legion of new entrants with orders of magnitude more efficiency and ambition.
This is definitely one of those periods in time where the focus on a quarterly return is going to turn out to be a cyanide pill.
Yup, and there’s a lot you can do to increase productivity:
- less time wasted in useless meetings - I’ve been able to cut ours
- more time off - less burnout means more productivity
- flexible work schedules - life happens, and I’m a lot more willing to put in the extra effort today if I know I can go home early the next day
- automate the boring parts - there are some fantastic applications of AI, so introduce them as tools, not replacements
- profit sharing - if the company does well, don’t do layoffs, do bigger bonuses or stock options
- cut exec pay when times get hard - it may not materially help reduce layoffs, but it certainly helps morale to see your leaders suffering with you
And so on. Basically, treat your employees with respect and they’ll work hard for you.
Short term is all that matters. Business fails? Start another one, and now you have a bunch of people that you made unemployed creating downward pressure on labor prices.
No, you have a lot of people you made unemployed competing with you.
This is already what’s happening in the video game industry. A ton of people have lost their jobs, and VC money has recently come pouring in trying to flip the displaced talent into the next big success.
And they’ll probably do it. A number of the larger publishers are really struggling to succeed with titles that are bombing left and right as a result of poor executive oversight on attempted cash grabs to please the short term market.
Look at Ubisoft’s 5-year stock price.
Short term is definitely not all that matters, and it’s a rude awakening for those that think it’s the case.
Mostly the execs don’t care. They’ve extracted “value” in the form of money and got paid; that’s the extent of their ability to look forward. The faster they make that happen, the faster they can do it again, probably somewhere else. They don’t give a single shit what happens after.
It really depends on the exec.
Like most people, there’s a range.
Many are certainly unpleasant. But there’s also ones that buck the trend.
Yeah, and there are a few good lawyers and a few good cops and (probably) a few good politicians too, but we’re not talking about the few exceptions here.
Well, we kind of are, as the shitty ones tend to fail over time and the good ones continue to succeed. In a market made much more competitive by a force multiplier on labor unlike anything the world has seen, there’s not going to be much room for the crappy execs for very long.
Bad execs are like mosquitos. They thrive in stagnant waters, but as soon as things get moving they tend to reduce in number.
We’ve been in a fairly stagnant market since around 2008 for most things with no need for adaptation by large companies.
The large companies that went out of business recently have pretty much all failed from financial mismanagement, not product/market fit. The last time adaptation was needed, it was companies like Circuit City and Blockbuster that failed to adapt and went under.
The fatalism on Lemmy is fairly exhausting. The past decade shouldn’t be used as a reference point for predicting the next decade. The factors playing into each couldn’t be more different.
How do you arrive at “effective monopolies are going out the window”? How do you square that with what we see in the world today, which runs counter to it?
There’s diminishing returns on labor for large companies and an order of magnitude labor multiplier in the process of arriving.
For example, if you watched this past week’s Jon Stewart, you saw an opening segment about the threat of AI taking people’s jobs and then a great interview with the head of the FTC talking about how they try to go after monopolistic firms. One of the discussion points was that often when they go up against companies that can hire unlimited lawyers they’ll be outmatched by 10:1.
So the FTC with 1,200 employees can only do so much, and the companies they go up against can hire up to the point of diminishing returns on more legal resources.
What do you think happens when AI capable of a 10x multiplier in productivity at low cost is available for legal tasks? The large companies are already hiring to the point there’s not much more benefit to more labor. But the FTC is trying to do as much as they can with a tenth the resources.
Across pretty much every industry companies or regulators a fraction of the size of effective monopolies are going to be able to go toe to toe with the big guys for deskwork over the coming years.
Blue collar bottlenecks and physical infrastructure (like Amazon warehouses and trucks) will remain a moat, but for everything else competition against Goliaths is about to get a major power up.
Scaling up productivity is what tends to lead to layoffs. Having the exact same output with fewer employees is pretty much guaranteed to lower costs and increase profit, so that’s what most execs are likely to do. Short-sighted maybe, but businesses are explicitly short-sighted, only focusing on the next quarter.
Freeing humans from toil is a good idea, just like the industrial revolution was. We just need our system to adapt and change with this new reality, AGI and universal basic income means we could live in something like the society in star trek.
I’m sure that’s what execs are talking about.
Doesn’t matter what the execs say, it will happen and it will become easier and easier to start your own business. They are automating themselves out of a high paying job.
Can’t wait for AI to replace all those useless execs and CEOs. It’s not like they even do much anyways, except fondling their stocks. They could probably be automated by a Markov chain.
If they could replace project managers that would be nice. In theory it’s an important job, but in practice it’s just done by someone’s mate who’s most productive when they don’t actually turn up.
The Paranoia RPG has a very realistic way of determining who gets to be the leader of a group. First, you pick who’ll do what kind of job (electronics, brute force, etc). Whoever didn’t get picked becomes the leader, as that person is too dumb to do anything useful.
Yes that’s quite a funny and satirical way of doing it but it’s probably not actually the best way in real life.
I think Boeing have proven this quite nicely for everyone; the company was much better off when they had actual engineers in charge. When the corporate paper pushers took over, everything went downhill.
I have been on enough projects where engineers were in charge that went to hell to know that isn’t always a solution. And yes, I am an engineer.
On one of the projects I’m on now, the main lead is a full PE civil, and it’s a man-made clusterfuck: well behind schedule, over budget, and several corporate bridges burned. Haven’t even started digging yet.
By far the very biggest cluster fuck I was ever on was run by a Chemical Engineer. A 40 million dollar disaster that never should have been even considered.
Being good at technical problems (which frankly most of us aren’t) doesn’t mean you know how to do anything else.
I have had good ones and not so good ones.
I swear people don’t know the difference between a good project manager and a bad one, or no one.
Everyone on here is on about how the board has no idea what the bottom rungs of the ladder do, all “haha, they’re so stupid, they think we do nothing”. Then in the next sentence they admit they don’t know what the board does and claim the board just does nothing.
Project managers or board members? What the hell are you on about?
People slagging off jobs they don’t understand.
Both: project managers, whom they probably have experience dealing with but don’t understand, and board members, whom they probably have no experience with and also don’t understand.
Board members don’t do shit
I see.
What is this judgment based on?
First hand experience
Don’t get a job in government contracting. Pretty much I do the work and around 5 people have suggestions. None of whom I can tell to fuck off directly.
Submit the drawing. Get asked to make a change to align with a spec. Point out that we took exception to the spec during bid. Get asked to make the change anyway. Make the change. Get asked to make another change by someone higher up the chain of five. Point out change will add delays and cost. Told to do it anyway. Make the next change…
Meanwhile every social scientist “we don’t know what is causing cost disease”
AI will (be a great excuse to) reduce workforce, say 41% of people who get bonuses if they do.
Game’s changed. Now we fire people, try to rehire them for less money and if that doesn’t work we demand policy changes and less labour protection to counter the “labour shortage”.
Labor shortage is such a funny term. It’s like coming to a store looking for 1kg of meat for $1, not finding it, and saying there’s a meat shortage. Or coming to a vegetarian store looking for 1kg of any meat and saying the same.
When everybody is employed, but the economy needs more people - that’s labor shortage. When there are people looking for jobs, but not satisfied with particular offerings - that’s something else.
If Gartner comes out with a decent AI model, you could replace over half of your CIOs, CISOs, CTOs, etc. Most of them lack any real leadership qualities and simply parrot what they’re told/what they’ve read. They’re there through nepotism.
Also, most of them use AI as a crutch, so that’s all they know. Meanwhile, the rest of us use it as a tool (what it’s meant to be).
simply parrot what they’re told/what they’ve read.
That’s exactly what an LLM is
But the AI can do it cheaper
But their job is to be the fall guy.
Christ, if you think a CTO is hard to deal with, wait until you have to interface with the AI CTO.
As long as I can prompt-engineer my way into twice the salary for half the hours, that might still be worth it!
Yup. The owners can save a lot of money on those paychecks.
Won’t tho.
I think that they will. Much like tech workers who had no interest in unions because they thought that they were aligned with the owners, management is going to have a rude awakening and learn that if you don’t own the company then you are just labor.
Lol. That’s not how class solidarity works, but I do hope you’re right.
59% of execs are wrong.
I think that’s a little low.
They’ll be replaced with AI
41% of execs think that a huge amount of class power will go from workers in general to AI specialists (and probably the companies they make or that hire them).
I personally can’t wait for the people that a lot of these businesses wrongly bet on replacing to turn around and form new competition, but with this new tech filling in the gaps of middle management, HR, execs, etc.
I mean, it’s a fucking meme, but an AI-assisted workplace democracy seems alright to me on paper (the devil’s in the details).
Execs don’t give a shit. They simply double down on the false cause fallacy instead. They wouldn’t ever admit they fucked up.
Last year the company I work for went through a run of redundancies, claiming AI and system improvements were the cause. Before this point we were growing (slowly) year on year. Just not growing fast enough for the shareholders.
They cut too deep, shit is falling apart, and we’re losing bids to competitors. Now they’ve doubled down on AI, claiming blindness to the systems issues they created, and just made an employee’s “Can Do” attitude a performance goal.
Optimising for the oblivious or unscrupulous, nice.
You sound like you work from one of my part suppliers
Lets try it. I am willing to start a worker coop headed by votes and an AI. Fuck it.
Thankfully I don’t even wanna work. I just wanna live and if that’s not possible, exist.
Same. I welcome our AI overlords as long as that means I can just stay at home and fully embrace my autism by not giving a fuck about the workforce while studying all of the thousands of subjects I enjoy learning about.
Not a thing til the revolution, dear.
I say AI overlords might be an improvement over the human overlords that have persisted throughout human history.
The AI overlords will be trained on data based on human overlords decisions and justifications. We are fucked, my man.
They won’t be though because the managers don’t know anything about AI. People who actually train the AI will be some poor sap in IT who’s been lumbered with a job they don’t want, because AI is computers right.
So I’m going to train it on good stuff written by professionals, Star Trek episodes, and make it watch War Games.
The managers don’t even have any data sets the AI could absorb anyway because most of their BS is in person, and so not recorded for analysis.
Oh my. I see you don’t know much about the hell called key performance indicators…
Key performance indicators will be what will turn our AI overlords into AI tyrants. And there is so so much data available for training the AIs.
The autism is not required. No one cares about their jobs, especially people who work in jobs where “everyone is a family”. People care about those jobs the least.
I will never care if AI takes mandatory work from me, but I want income replacement lol. Seriously though I hate working so much every job I’ve ever had has made me suicidal at some point. I’m glad there’s a chance at least I won’t have nothing but work and death ahead of me. If that’s all that’s left it’s okay, a little disappointing but it is what it is.
Not allowed. Work or die, im afraid.
And that means lower prices for consumers. Right? Guys… r… right?
No, but it does mean 41% fewer people can afford to buy these companies’ products, you cheapass shortsighted corporate fucks.
41% is the number of executives that think AI will reduce their work force, not the number of jobs they expect to replace.
Your point stands though.
More businesses will be started to make the products since the profit margin is suddenly so high… driving down prices.
Execs? The same people who make short sighted decisions and don’t understand basic psychology? Let me go get a pen so I won’t…give two fucks what this bogus survey says. Let AI run your business so I can have some excitement in my life
They don’t care. Jack Welch’s ghost must be fed by destroying more companies for short term gain.
As someone scripting a lot for my department in the tech industry, yeah, AI and scripts have a lot of potential to reduce labor. However, given how chaotic this industry is, there will still need to be humans to account for the variables that scripts and AI haven’t been trained on (or that are otherwise hard to predict). I know the managers don’t wanna spend their time on these issues, as there’s plenty more for them to deal with. When there’s true AGI, that may be a different scenario, but time will tell.
Currently, we need to have some people in each department overseeing the automations of their area. This stuff mostly kills the super redundant data entry tasks that make me feel cross eyed by the end of my shift. I don’t wanna be the embodiment of vlookup between pdfs and type the same number 4+ times.
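To give an idea, the “vlookup between PDFs” pain is the kind of thing a dozen lines kill once the values are extracted. A toy sketch (all data and column names here are invented) of the keyed join that replaces the manual lookup:

```python
import csv
import io

# Two pretend exports that would normally be eyeballed side by side.
orders = io.StringIO("order_id,part\n1001,bracket\n1002,gasket\n")
prices = io.StringIO("part,unit_price\nbracket,2.50\ngasket,0.75\n")

# Build the lookup table once, then join each order row against it,
# instead of typing the same number 4+ times by hand.
price_by_part = {row["part"]: row["unit_price"] for row in csv.DictReader(prices)}

joined = [
    {**row, "unit_price": price_by_part.get(row["part"], "MISSING")}
    for row in csv.DictReader(orders)
]

for row in joined:
    print(row["order_id"], row["part"], row["unit_price"])
# 1001 bracket 2.50
# 1002 gasket 0.75
```

The “MISSING” fallback is the part a human still has to review; the script just surfaces the exceptions instead of hiding them.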
exactly, this will eliminate some jobs, but anyone who’s asked an LLM to fix code longer than 400 lines knows it often hurts more than it helps.
which is why it is best used as a tool to debug code, or write boilerplate functions.
Do you think AI for programmers will be like CAD was for drafters? It didn’t eliminate the position, but allows fewer people to do more work.
this is pretty much what i think, yeah.
a lot of programming/software design is already kinda that anyway. it’s a bunch of people who were educated on computer science principles, data structures, mathematics, and data analytics/stats, who write code to specs to solve very specific tool problems for very specific subsets of workers, and who maintain/update legacy code written decades ago.
now, yeah, a lot of things are coded from scratch, but even then, you’re referencing libraries of code written by someone a while ago to solve this problem or serve this purpose or do a thing, output a thing. that’s where LLMs shine, imo.
No. More high-level languages with less abstraction leakage are like CAD for drafters. Not “AI”.
I personally would want such tools to be more visual and more like systems, not algorithms.
Like interconnected nodes in a control system. Like PureData for music, or like LabView. Maybe more powerful and general-purpose.
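The node idea can be mocked up in a few lines; this is just a toy sketch of interconnected nodes pulling values from each other (all names invented), nothing like a real dataflow engine such as PureData or LabVIEW:

```python
class Node:
    """A node computes its output from the outputs of its wired inputs."""

    def __init__(self, fn):
        self.fn = fn
        self.inputs = []

    def wire(self, *sources):
        self.inputs = list(sources)
        return self

    def value(self):
        return self.fn(*(src.value() for src in self.inputs))

# A tiny control-system-ish patch: compare a reading to a setpoint.
const = lambda v: Node(lambda: v)
sensor = const(21.5)      # pretend temperature reading
setpoint = const(20.0)
error = Node(lambda a, b: a - b).wire(sensor, setpoint)
heater_on = Node(lambda e: e < 0).wire(error)

print(error.value())      # 1.5
print(heater_on.value())  # False
```

A real system would push updates through the graph rather than pull on demand, but the appeal is the same: the wiring diagram is the program.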
But the fact that this tech really kicked off just three years ago and is already threatening so many jobs is pretty telling. Not only will LLMs continue to get better, but they’re a big step towards AGI, and that’s always been an existential crisis we knew was coming. This is the time to start adapting, quick.
They didn’t just appear out of nowhere, they’re the result of decades of research and development. You’re also making the assumption that additional progress is guaranteed. AI has hit walls and dead ends in the past, there’s no reason to assume that we’re not hitting a local maximum again right now.
And there’s no reason to believe that it is. I know there’s been speculation about model collapse and limits of available training data. But there’s also been advancements like training data efficiency and autonomous agents. Your response seems to ignore the massive amounts of progress we’ve seen in the space.
Also the computer, internet, and smart phone were based on decades of research and development. Doesn’t mean they didn’t take off and change everything.
The fact that you’re saying AI hit walls in the past and now we’re here, is a pretty good indication that progress is guaranteed.
You said there’s no reason and then you list potential reasons right after. Yes, there has been progress and no one is arguing against that, but the two big issues are:
- What exists is being overhyped as far more capable than it really is.
- How much room there is to grow with current techniques is still unknown.
The computer, internet, and smart phone are all largely deterministic with actions resulting in direct known outcomes. AI as we know it is based on highly complex statistical models and relies heavily on the data it is trained on. It has far more things that can go wrong which makes it unsuitable for critical applications (just look at the disasters when it’s used as a customer service representative). That’s not even getting into the legal issues that have yet to actually be answered. Just look at the CTO of OpenAI squirming on the question of what Sora was trained on (timestamped).
Being able to overcome walls in the past doesn’t guarantee overcoming walls in the present. That’s like saying being able to jump over a hurdle is the same as leaping over a skyscraper. There’s also the question of timing, it took decades for those previous walls to be overcome. Impact to the workforce is largely overstated and is being used as an excuse for cost cutting. It’s just like the articles about automation after the great recession. I’m still waiting on robots that can flip burgers (article from 2012).
I listed reasons people usually cite and why I don’t think they’re a good reason to assume there won’t be progress. I agree it’s over-hyped today, because people are excited about the obvious potential tomorrow. I think it’s foolish to hide behind that as if it’s proof that it doesn’t have potential.
Let’s say you’re right and we hit a wall for 50 years on any progress on AI. There’s nothing magical about the human brain’s ability to make logical decisions based on observations and learning. It’s going to happen. And our current system of economy, which ties a person’s value to their labor, will be in deep shit when it happens. It could take a century to make an appropriate change here. We’re already way behind, even with a setback to AI.
I think it’s funny when people complain about AI learning from copyright. AI’s express goal is to be similar to a human consciousness. Have you ever talked to a human who’s never watched a TV show, or a movie, or read a book from this century? An AI that’s not aware of those things would be like a useless alien to us.
If people just want to use legal hangups to stop AI, fair play. But that plan is doomed, infinite brainpower is just too valuable. Copyright isn’t there to protect the little guy, that was the original 28 year law. Its current form was lobbied by corporations to stifle competition. And they’ll dismantle it (or ignore it) in a heartbeat once it suits them.
The topic at hand is this survey, which claims significant impacts to the workforce within five years, and that is what I’m speaking towards. As for copyright: these models are straight-up not possible without that data, and the link can be clearly demonstrated; they have their training data, which they may have to expose in a court case. Forget about the little guy, the large corporations who own the data will not be happy letting them build this lucrative AI without getting paid for it. There will be legal fights, and it is a potential complication in rolling this stuff out, so it should be considered.
What does it threaten really?
It works for contact centers for bots to answer short simple questions, so that agents’ time would be used more efficiently. I’m not sure it saves that much money TBF.
It works for image classification. And still needs checking.
It works for OCR. And still needs checking.
It works for voice recognition and transcription, which is actually cool. Still needs checking.
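The “still needs checking” part can itself be partly automated: route only low-confidence results to a human instead of reviewing everything. A minimal sketch, assuming the engine hands back per-result confidence scores (the scores and the 0.90 threshold here are invented for illustration, not from any specific OCR/ASR library):

```python
# Route machine transcriptions to a human reviewer when confidence is low.
# Threshold and scores are illustrative; real OCR/ASR engines expose their own.
REVIEW_THRESHOLD = 0.90

def triage(results):
    """Split (text, confidence) pairs into auto-accepted and needs-review."""
    accepted, needs_review = [], []
    for text, confidence in results:
        (accepted if confidence >= REVIEW_THRESHOLD else needs_review).append(text)
    return accepted, needs_review

accepted, flagged = triage([
    ("Invoice #1042, total $318.00", 0.97),
    ("Inv0ice #lO42, tota1 $3l8.OO", 0.61),  # garbled scan, send to a human
])
```

So the human checking doesn’t go away, it just shrinks to the cases the model itself is unsure about.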
but they’re a big step towards AGI
What makes you think that? Was the Mechanical Turk a big step towards thinking robots?
They are very good at pretending to be that big step for people who don’t know how they work.
You’re right that it doesn’t save too much money making people more efficient. That’s why they will replace employees instead. That’s the threat.
Yes, they make mistakes. So do people. They just have to make fewer than an employee does, and we’re on the right track for that. AI will always make mistakes, and that is actually a step in the right direction. Deterministic systems that rely on concrete input and perfectly crafted statistical models can’t work in the real world. Once the system being evaluated (most systems in the real world) is sufficiently complex, you encounter unknown situations where you either spend infinite time and energy gathering information and computing… or guess.
Our company is small, and our customer inquiries increased several fold because our product expanded. We were panicking, thinking we needed to train and hire a whole customer support department overnight, where we currently have one person. But instead we implemented AI representatives. Our feedback actually became more positive, because these agents can connect with you instantly, pull nebulous requests out of confusing messages, and alert the appropriate employee when action is needed. Does it make mistakes? Sure, but not enough to matter. It’s simple for our customer service person to reach out and correct the mistake.
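For what it’s worth, the core of a setup like that is mostly routing, not magic. A hedged sketch of the flow, where `classify` is a stub standing in for the LLM call and all the intents and canned answers are invented:

```python
# Sketch of an AI-representative pipeline: answer simple inquiries
# automatically, escalate everything else to the (single) support person.
# `classify` is a stub; a real system would call a model API here.

CANNED_ANSWERS = {
    "password_reset": "Use the 'Forgot password' link on the login page.",
    "billing_date": "Invoices go out on the 1st of each month.",
}

def classify(message: str) -> str:
    """Stub intent classifier; keyword matching stands in for the model."""
    text = message.lower()
    if "password" in text:
        return "password_reset"
    if "invoice" in text or "bill" in text:
        return "billing_date"
    return "unknown"

def handle(message: str) -> tuple[str, bool]:
    """Return (reply, escalated). Unknown intents go to a human."""
    intent = classify(message)
    if intent in CANNED_ANSWERS:
        return CANNED_ANSWERS[intent], False
    return "Thanks! A team member will follow up shortly.", True

reply, escalated = handle("I forgot my password, help")
```

The one support person only sees the escalated messages, which is exactly the “alert the appropriate employee” step described above.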
I think people that think this isn’t a big deal for AGI don’t understand how the human mind works. I find it funny when they try and articulate why they think LLMs are just a trick. “It’s not really creating anything, it’s just pulling a bunch of relevant material from its training data and using it as a basis for a similar output.” And… What is it you think you do?
And… What is it you think you do?
Unlike an LLM, I rebuild myself, for example.
It’s trivial to copy an LLM, but if you mean self improvement: https://arxiv.org/abs/2401.10020
You’ll get blindsided real quick. AIs are only getting better. OpenAI is already saying they’ve moved past GPT for their next models. It’s not 5 years before it can fix code longer than 400 lines, and not 20 before it can digest a specification and spit out working software. Said software might not be optimized or pretty, but those are things people can work on separately. Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.
You have more in common with the guy getting replaced today than you care to admit in your comment.
Edit: not sure why I’m getting downvoted instead of having a discussion, but good luck to you all in your careers.
i didn’t downvote you, regardless internet points don’t matter.
you’re not wrong, and i largely agree with what you’ve said, because i didn’t actually say a lot of the things your comment assumes.
the most efficient way i can describe what i mean is this:
LLMs (this is NOT AI) can, and will, replace more and more of us. however, there will never, ever be a time when there is no human overseeing it, because we design software for humans (generally), not for machines. this requires integral human knowledge, assumptions, intuition, etc.
LLMs (this is NOT AI)
I disagree. When I was studying AI at college 20+ years ago we were also talking about expert systems which are glorified if/else chains. Most experts in the field agree that those systems can also be considered AI (not ML though).
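The “glorified if/else chains” point is easy to show. A toy forward-chaining expert system, the kind those courses taught (the rules here are invented for illustration):

```python
# Toy forward-chaining expert system: keep firing rules until no rule
# adds a new fact. Rules are (premises, conclusion) pairs; content is
# illustrative only.
RULES = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
]

def infer(facts):
    """Derive every conclusion reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Starting from `{"has_fur", "says_meow"}`, the loop derives `is_cat` on the first pass and `is_mammal` on the second. No learning anywhere, yet this is squarely what the field called AI.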
You may be thinking of AGI or general AI, which is different. I am a believer in the singularity (that a machine will be as creative and conscious as a human), but that’s a matter of opinion.
I didn’t downvote you
I was using “you” more towards the people downvoting me, not you directly. You can see the accounts who downvoted/upvoted, btw.
Edit: and I assumed the implication of your comment was that “people who code are safe”, which is a stretch I was answering to. Your comment was ambiguous either way.
jesus christ you should be shoved into a locker
Wow. Thanks for the advice. I guess that’s just Lemmy showing me the door. Good luck with your community here.
Try not to let the bot hurt your feelings, it was trained on cunts ‘n’ assholes
Yikes
Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.
It’s an open secret that this is already the case. I have seen projects that went on for decades and only required the engineering staff they had because corporate bureaucracy and risk aversion makes everyone a fraction as effective as they could be, and, frankly, because a lot of ineffective morons got into software development because of the $$$ they could make.
Unless AI somehow eliminates corporate overhead I don’t understand how it’ll possibly make commercial development monumentally easier.
Yeah, people think AI is what sci-fi movies sold them: hyper-intelligent, hyper-aware sentient beings capable of love and blah blah blah. We’ll get there, but corps don’t need that. In fact, that’s the part they don’t want. They need a mindless drone to replace the 80% of their workers doing brainless jobs.
They need a mindless drone to replace the 80% of their workers doing brainless jobs.
Yeah, the problem there is that they don’t know their own staff well enough to know who the people doing the brainless jobs are.
I’ve worked office jobs at a few large corporations. I’ve noticed they like to lay off a department, see how long the other departments can get by splitting up the work, then when everything is on fire they open up hiring. But every now and then… they let go of a department and everything just keeps working. It’s a strategy that seems to work, unfortunately.
Sounds like my current job.
Scripting is one thing; an unpredictable plagiarism generator is another.
If you mean ML text recognition, ML classification etc - then yeah, why not.