The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they've got governments in their pockets. Governments are dependent on cloud / Microsoft software; literally every country on this planet is, except maybe China, North Korea, and Russia. They can raise prices 10x over the next 10 years and not give a fuck. Spend a trillion on AI, keep saying "we're nearly there" over and over, and literally nobody can stop them right now.
It doesn't matter if they reach any end result, as long as stocks go up and profits go up.
Consumers aren't really asking for AI, but it's being used to push new hardware and make previous hardware feel old. Eventually everyone has AI on their phone, most of it unused.
Good, let them go broke in the pursuit of a dead end.
Why won’t they pour billions into me? I’d actually put it to good use.
I’d be happy with a couple hundos.
I’d be happy with a big tiddy goth girl. Jealous of your username btw.
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact that nothing got optimized, and it still didn't collapse, after DeepSeek? Kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.
I have been called a killjoy luddite by reddit-brained morons almost every time.
What’re you talking about? What happened in 1952?
I have to disagree, I don’t think it’s meaningless. I think that’s unfair. But it certainly is overhyped. Maybe just a semantic difference?
Companies aren't investing to achieve AGI as far as I'm aware; that's not the end game, so I think this title is misinformation. Even if AGI were achieved, it'd be a happy accident, not the goal.
The goal of all these investments is to convince businesses to replace their employees with AI to the maximum extent possible. They want that payroll money.
The other goal is to cut out all third party websites from advertising revenue. If people only get information through Meta or Google or whatever, they get to control what’s presented. If people just take their AI results at face value and don’t actually click through to other websites, they stay in the ecosystem these corporations control. They get to sell access to the public, even more so than they do now.
The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
It’s ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings and GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
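(For context, a minimal sketch and not any lab's actual code: the core op of a "bog standard" transformer, scaled dot-product attention, fits in a few lines of NumPy. Most of these releases are variations around this same building block.)

```python
# Minimal single-head scaled dot-product attention, the core op of a
# "bog standard" transformer block. Illustrative sketch only.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns the attention-weighted values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # similarity of every query to every key
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted mix of values

# Tiny usage example with random data
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)   # (4, 8)
```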
I like my project manager: they find me work, ask how I'm doing, and talk straight.
It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals while my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
Find a better C-suite
COs are corporate politicians, media-trained to only say things that are completely unrevealing and lacking in any substance.
This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.
I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.
Right, that sweet spot between too little stimulation, where your brain just wants to sleep or run away, and just enough stimulation that you can't simply zone out (or sleep).
The number of times my CTO says we’re going to do THING, only to have to be told that this isn’t how things work…
I just turn off my camera and turn on Forza Motorsport or something like that.
The actual survey result:
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the 90s whether, if we scaled up the bandwidth and computing power of the average internet user, we'd see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.
AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.
Complete waste.
It’s becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
what's been built is a glorified homework cheater and an ~~better~~ unreliable search engine.
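To put a rough shape on the "exponentially more data" claim above: under the power-law scaling assumption commonly used in the scaling-law literature (a sketch only; the exponent below is illustrative, not a measured value), error falls off so slowly with data that every further reduction gets brutally expensive:

```latex
% Illustrative assumption: error falls as a power law in dataset size D.
\varepsilon(D) \approx c\,D^{-\alpha}
\quad\Longrightarrow\quad
\frac{D_2}{D_1} = \left(\frac{\varepsilon_1}{\varepsilon_2}\right)^{1/\alpha}
% e.g. with a small exponent alpha = 0.1, halving the error needs
% 2^{1/0.1} = 2^{10} \approx 1000x more data.
```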
I think most people agree, including the investors pouring billions into this.
The same investors that poured (and are still pouring) billions into crypto, invested in sub-prime loans, and valued pets.com at $300M? I don't see any way the companies will be able to recoup the costs of their investment in "AI" datacenters (e.g. the $500B Stargate or Microsoft's $80B; probably upwards of a trillion dollars invested globally in these datacenters).
Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI; the problem is that, as it stands, it's a really difficult technique to use, so it isn't used often. And LLMs have sucked all the research dollars away from any other ideas.
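(A toy sketch of what "hierarchical" means here, in an options-style setup: a high-level policy picks a sub-policy, which acts until it decides to hand control back. The environment and all names below are invented purely for illustration, not taken from any paper.)

```python
# A toy, illustrative sketch of the hierarchical RL ("options") idea:
# a high-level policy picks a sub-policy, which acts until it terminates.

class Option:
    def __init__(self, name, act, should_terminate):
        self.name = name
        self.act = act                            # low-level policy: state -> action
        self.should_terminate = should_terminate  # state -> bool

def run_episode(env_step, state, high_level_policy, options, max_steps=100):
    """Alternate between picking an option and running it until it hands control back."""
    total_reward, steps = 0.0, 0
    while steps < max_steps:
        option = high_level_policy(state, options)                    # high-level decision
        while steps < max_steps:
            state, reward, done = env_step(state, option.act(state))  # low-level step
            total_reward += reward
            steps += 1
            if done:
                return total_reward
            if option.should_terminate(state):
                break
    return total_reward

# Toy environment: walk a 1-D corridor from 0 to position 10.
def env_step(state, action):
    new_state = state + action
    return new_state, (1.0 if new_state == 10 else 0.0), new_state == 10

move_right = Option("move_right", act=lambda s: +1, should_terminate=lambda s: s % 5 == 0)
move_left  = Option("move_left",  act=lambda s: -1, should_terminate=lambda s: s % 5 == 0)

def greedy_high_level(state, options):
    return move_right if state < 10 else move_left

print(run_episode(env_step, 0, greedy_high_level, [move_right, move_left]))  # -> 1.0
```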
I agree that it’s editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It's often implied (e.g. "you'll be able to replace workers with this") or they're vaguer on the timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it’s fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won’t be able to deliver AGI on the timeline they are promising.
Part of it is that we keep realizing AGI is a lot broader and more complex than we think.
Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or any realistic extrapolation of it; it is for the sort of product OpenAI is promising, equivalent to a full-time research assistant for $20k a month. Which is way more expensive than an actual research assistant, but that's not stopping them from making the pitch.
AI isn’t going to figure out what a customer wants when the customer doesn’t know what they want.
Current big tech is going to keep pushing limits, have social media influencers/YouTubers do the marketing, and stick consumers with the R&D bill. Emotionally I want to say stop innovating, but really just cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or a base-model MacBook Air. Those are easy 7-10 year computers. No need to rush to get the latest and greatest. I'm talking about computing in general. Case in point, gaming: more people are waking up and realizing they don't need every new GPU, studios are burnt out, IPs are dying because there's no lingering core base to keep the franchise afloat, and consumers can't keep opening their wallets. Hence studios like Square Enix starting to support all platforms instead of doing the late-stage-capitalism thing of launching their own launcher with its own store. It's over.
Me and my 5.000 closest friends don’t like that the website and their 1.300 partners all need my data.
Why so many sig figs for 5 and 1.3 though?
Some parts of the world (mostly Europe, I think) use dots instead of commas for displaying thousands. For example, 5.000 is 5,000 and 1.300 is 1,300
Yes. It’s the normal Thousands-separator notation in Germany for example.
But usually you don't put three digits like 000 after it, because that reads as a thousands separator.
Like 2.50 is 2€50 but 2.500 is 2500€
Is there an ISO standard for this stuff?
No, 2,50€ is 2€ and 50ct; 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care about two digits after the comma), and 2.500€ is 2500€.
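(A quick way to see both conventions side by side, as a minimal sketch using Python's standard locale module; whether the de_DE / en_US locales are actually installed depends on your system.)

```python
# Minimal illustration of the two separator conventions discussed above.
# Which locales are installed depends on the OS; this is just a sketch.
import locale

for loc in ("de_DE.UTF-8", "en_US.UTF-8"):
    try:
        locale.setlocale(locale.LC_ALL, loc)
    except locale.Error:
        print(f"{loc} not installed on this system")
        continue
    # grouping=True inserts the locale's thousands separator
    print(loc, locale.format_string("%.2f", 2500, grouping=True))

# Expected output (if both locales are present):
#   de_DE.UTF-8 2.500,00   <- dot groups thousands, comma marks decimals
#   en_US.UTF-8 2,500.00   <- comma groups thousands, dot marks decimals
```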
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one penny/cent (whatever Europe's hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.
Yes, that's true, but it's more of an edge case. Something like gasoline is commonly priced in fractional cents, though.
Yeah, and they’re wrong.
Says the country where every science textbook is half science, half conversion tables.
Not even close.
Yes, one half is conversion tables. The other half is scripture disproving Darwinism.
We (in Europe) should probably be thankful that you are not using feet as the thousands separator over there in the USA… Or maybe separating after every 2nd digit, because why not… ;)
It makes sense from a typographical standpoint: the comma is the larger symbol and thus harder to overlook, especially in small fonts or messy handwriting.
But from a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.
I have no strong preference either way. I think both are valid and sensible systems, and it's only confusing because of competing standards. I think over a long enough time, due to the internet, the period as the decimal separator will prevail, but it's going to happen naturally; it's not something we can force. Many young people I know already use it that way here in Germany.
I knew the context, was just being cheesy. :-D
Too late… You started a war in the comments. I’ll proudly fight for my country’s way to separate numbers!!! :)
oh lol
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
It always wins in the end though. Look up the bitter lesson.
Thing is, same as with GHz, you have to push it as far as you can until the gains get too small. You do that, then you move on to the next optimization. Like AI has done, and it's now optimizing test-time compute, token quality, and other areas.
To be fair, GHz did go up. Granted, it’s not why modern processors are faster and more efficient.
TIL
I miss flash players.
Good, let them waste all their money.
Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they claim it is. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars. Things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.
I remember listening to a podcast that's about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff, and he does his research and talks to experts when the subject involves something he isn't himself an expert in.
There was this episode where he kinda got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works, before you make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct by other applications.
So his point in the episode is that real innovation just can't be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments to expand the knowledge we have. Sometimes those insights are completely random; often you need to have a whole career in that field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper that showed we could parallelize language models, leading to the creation of "larger language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand on them and have insights you didn't even think about, and so on.
There have been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.
There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.