pedalpete a day ago

I keep getting the feeling that the tech media is its own worst enemy, or it shows how little tech media like The Verge understands about technology.

Though I agree that language is not intelligence, suggesting that the AI boom is only about LLMs, or that LLMs do not create value, is incredibly misleading.

I disagree with the base concept that to justify the current investment, we must arrive at AGI.

Estimates put total AI investment at $1.5T.

Is it a lot? Sure.

But what is going to come out of it?

Faster and likely improved results from medical imaging. Lower-cost, widespread ability to create images and videos, denting the multi-trillion-dollar marketing industry. Self-driving and advanced driver assistance, lives saved, costs reduced. Improvements in education and low-cost tutoring available to more people.

Let's say these investments just break even with the spend. Isn't that better for society?

I know people are going to say, "but what about the radiologists?" We have a shortage of GPs and doctors to care for an aging population. These highly trained doctors can do more in the medical community.

What about the actors/directors/sound engineers, etc. in the media industry? This industry will likely shrink, but we can't ignore their expertise entirely, and it won't go away. A friend is a voice actor. He isn't being replaced; however, he is getting less work, not because of the final production, but because in the pre-production phases they don't need his expertise, since they can use AI as "good enough".

The lens I look at this through is my grandfather, who developed film for a living. That job disappeared. Nobody cried out for the film developers when we made the shift to digital. We create more imagery now than ever before, and more people are employed in the imaging business than ever before.

As a (former) software engineer myself, do I believe AI will replace engineers? I think this is true only as much as packages replaced engineers. When I started programming, there weren't a ton of open-source packages, and there weren't tools like NPM or Cargo for managing them. I saw the transition from "write most of the code yourself" to "lego-brick" style programming. It didn't reduce the number of programmers; it increased what we could do and allowed us to do more interesting work than boilerplate.

  • rsynnott 5 hours ago

    > suggesting that the AI boom is only about LLMs

    It's largely around LLMs, plus some generative image models. That is _where the money is_. Sure, some people are doing some CV stuff that might have some medical applications, but no-one's spending tens of billions of borrowed money on datacenters for _that_.

    > or that LLMs do not create value is incredibly misleading.

    They're not suggesting that.

    Fundamentally, the current economic bubble around 'AI' is based on 'AGI' being achievable, soon. Otherwise, the valuations and amounts of money being spent simply do not make sense. LLMs being somewhat useful will not cut it; the spending implies expectation of dramatically increased capabilities, soon.

    • toss1 3 hours ago

      Yes

      >>the spending implies expectation of dramatically increased capabilities, soon.

      Even more than that, the spending is a death-race in expectation of:

      1) a team reaching runaway superintelligence,

      2) that it will be winner-take-all for the first team to get there, and

      3) fear that somebody else will get there first.

      So, everyone is spending as much as they can beg, borrow, or steal because in their view, there is only first place or being part of the underclass ruled by the other team that gets there first.

      This is not about just making useful products/services that can increase productivity for their customers.

  • DrierCycle 17 hours ago

    The contraction of events and their semantic residue into tokens, symbols, metaphors, sentences, images, segments, parameterized in any way, shape, or form, is irrelevant next to the semantic load the event contains in a variety of analog states.

    No creativity emerges from this, just the mimicry of the tokens, symbols in their reduced, modeled parameters.

    That's the problem with LLMs, frontier models, and RL. It's all subject to the bottlenecks of math and symbols.

    The analog load of semantics (use tasks, actions, variance, scales), the immensity of the nested forms these take between brain, body, screen, and canvas, aren't replicable as creation.

    It's not information or stimuli we engage with; these are convenient false reductions that we're realizing are inconvenient. There's much more the brain and eye detect in reflection. The words and symbols are an aftereffect the computer detects on its way to disregarding the events.

    We see this aftereffect as creativity, thinking, etc. only because of a clever illusion the words and tokens hand us. Nothing more.

    • jorl17 16 hours ago

      I disagree with your view on creativity, and indeed find LLMs to be remarkably creative. Or perhaps, to sidestep the anthropomorphization issue, I find that LLMs can be used as tools to produce creative works, greatly amplifying one's creativity to the point that a "near 0 creativity" subject (whatever that is) can create works that others will view as profoundly creative.

      In truth, I don't think there's likely to be a correct definition of what "creativity" is. We're all just moving each other's goalposts and looking at different subjective aspects of our experience.

      What I do know is that I have talked to dozens of people, and have friends who have talked to hundreds. When shown what LLMs and generative models can do, a significant portion of them label this work as creative. Rather than deem these people ignorant, I would rather consider that if enough people see creativity somewhere, perhaps there is something to it: LLMs creating poems and stories, connecting concepts you wouldn't otherwise see connected, birthing ideas in images and video that had never existed before.

      Of course, I'm aware that by this logic many things fall apart (for example, one might be tempted to believe in god because of it). Nonetheless, on this issue, I am firmly in the creativity camp. And while I am there, the things I get LLMs to create, as do people I know, will continue to induce deep emotion and connection among us.

      If that's not creativity to some, well, that's life, full of creative ways to define creativity or eschew it from the world.

      • DrierCycle 15 hours ago

        LLMs can't innovate form, and form/format innovation is the core of creativity. Neither can RL or frontier models. That's creativity: the ability to sense wordless states beyond existing status-quo cause-and-effect exchanges. Tokens have no game here. It's that simple; it's how cutting edges are broken through. It's always trapped behind the aesthetics input, and that's just another fatal blow to its ability to create.

        I'm not concerned with "hundreds" of people's opinions. And stories, poems, these are dead/dying forms.

        You may be in the creativity camp; stay there. It's as generic as creativity gets. From my POV, as someone building a hybrid between games and movies, this tech is dead, dull, artifactual, like a fossilized version of art.

        Yes, of course we have definitions of creativity; see Anna Abraham: "creativity is doing something unique and new, using existing networks for things they weren't designed for."

        And we have the wordless processes that materialize as affinities and events in the imagination.

        https://www.nature.com/articles/s41583-019-0202-9

        I'm very struck by coders who are neither engineers, scientists, nor creatives in the arts, yet have the false intuition and self-delusion that this code is creative. Nonsense.

        I'm afraid LLMs have zilch in comparable abilities to both, as they exclude words, symbols, tokens, etc.

        • tim333 4 hours ago

          Perhaps in their basic form, but you could do a variation on them, like:

          >AlphaEvolve is an evolutionary coding agent for designing advanced algorithms based on large language models such as Gemini. https://en.wikipedia.org/wiki/AlphaEvolve

          which came up with a novel improvement in matrix multiplication

          >I don't think people realize just how insane the Matrix Multiplication breakthrough by AlphaEvolve is https://www.reddit.com/r/singularity/comments/1knem3r/i_dont...
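
          Very roughly, the loop is: an LLM proposes candidate programs, an automated evaluator scores them, and the best candidates are kept and mutated again. Below is a toy sketch of that evolve-and-select pattern in Python, with a random perturbation standing in for the LLM proposal step (purely illustrative, and not AlphaEvolve's actual code):

          # Toy sketch only: in AlphaEvolve the "mutate" step is an LLM (e.g. Gemini)
          # proposing code edits, and "evaluate" benchmarks the candidate program.
          import random

          def evaluate(c):
              # Lower is better: how badly a + b*x + d*x^2 fits f(x) = x^2 at sample points.
              return sum((c[0] + c[1] * x + c[2] * x * x - x * x) ** 2
                         for x in (0.0, 0.5, 1.0, 1.5, 2.0))

          def mutate(c):
              # Stand-in for the LLM proposal: nudge one coefficient at random.
              child = list(c)
              child[random.randrange(3)] += random.uniform(-0.1, 0.1)
              return child

          population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
          for _ in range(500):
              population.sort(key=evaluate)  # selection: best scorers survive
              population = population[:5] + [mutate(random.choice(population[:5]))
                                             for _ in range(15)]

          best = min(population, key=evaluate)
          print("best coefficients:", best, "error:", evaluate(best))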

          • DrierCycle 2 hours ago

            It's not evolutionary, that's functionality. There are no algorithms in nature.

            Evolution is a tinkering process that finds by selection, and functionality is what we post-hoc add to the success.

            Anything reduced to symbols is simply functionality within models, to claim it's creative is illusory. Creativity, imagination, etc exist in a realm beyond, and preceding symbols, metaphors, etc. Creativity is 'thinking around symbolic bottlenecks'.

            The matrix multiplication breakthrough is still under the glass ceiling symbols provide. A new way to do math isn't the same as visual scale-invariant paradoxes. It's not creativity; the word is a metaphor, and the closer word is optimization.

        • jorl17 15 hours ago

          As a (rather obsessive, perhaps compulsive) poet, I will indeed remain in my camp :)

          (I do get what you're saying. Yet, I am not convinced that processes such as tokenization, and the inherent discretization it entails, are incompatible with creativity. We barely understand ourselves, and even then we know that we do discretize several things in our own processes, so it's really hard for me to just believe that tokenization inherently means no creativity).

          • DrierCycle 15 hours ago

            Keep in mind there are events, they are real. Words are just symbols we use to make the thoughts or the events cohere as memories, but they're artifacts that have no direct connection to the thoughts. Nor do any tokens, symbols, codes, etc. These are bottlenecks. They have no analog, they lack specificity. Creativity is the ecological exchange between body and ecology to make records that engage paradox wordlessly.

            That's what the article's big reveal is about. The events (which can be externalized creatively) are not really creative as words. That's a big problem for the species in general. That's a glass ceiling no one is recognizing or taking notice of. And that sort of gives you an idea how poetry and code are trapped behind it.

        • sfgvvxsfccdd 15 hours ago

          “Story is a dying form” is a take all right.

          Story having been with us since time immemorial makes me question its correctness.

  • ikr678 16 hours ago

    The shortage of doctors and medical professionals is artificial, though, and unevenly distributed. Technology like AI isn't going to solve structural issues like health insurance and private-equity hospital takeovers distorting the healthcare market.

    • gdulli 16 hours ago

      Automation of medicine will improve our lives in the way that automated customer service has improved our lives.

      • soraminazuki 16 hours ago

        Google support, but for life critical situations!

      • ares623 13 hours ago

        Gonna save that for later.

  • ares623 20 hours ago

    Way I read that is you’re okay with it because (you think) it won’t affect you negatively.

  • techblueberry 21 hours ago

    “ The lens I look at this through is my grandfather, who developed film for a living. That job disappeared. Nobody cried out for the film developers when we made the shift to digital.”

    This is how progress has created so much human suffering, by impacting people in a way just sparse enough that you can believably imply “no one cried out for x”.

    Yes they did, it impacted thousands if not multiples of that; many in ways that they never recovered from. Some of them probably committed suicide, or ended up homeless, or in some other way had their life destroyed. And that impact reverberated onto their friends, families and communities. And maybe the New York Times even wrote an article about it, but we collectively as a society ignored that suffering.

    • jorl17 16 hours ago

      Utilitarianism is a bitch, huh?

      (I make this remark merely as a gag. I think you pinpoint an issue which has been unresolved for ages and is knee-deep into ethics. One could argue that many of our disagreements about AI and progress (in a broader sense) stem from different positions on ethics, including utilitarianism).

      • techblueberry 4 hours ago

        I’m a utilitarian, but a transparent one. I think most people are uncomfortable saying “I acknowledge the pain and suffering it took to make my iPhone, and the tradeoff is worth it.”

        • jorl17 4 hours ago

          I very unironically think everyone should watch The Good Place to get an initial feel for ethics.

  • turtlesdown11 21 hours ago

    None of these "what's going to come out of it" are likely to occur.

    • noitpmeder 20 hours ago

      Agreed. I'd love to have some of whatever copium the GP is smoking... Or be half as sure about many things as they are about this insane speculation.

miladyincontrol a day ago

Wordceling over the semantics of LLMs seems such a fool's errand, rife with an innate smugness that the output of our brains is somehow special and that, no matter how advanced these AIs get, they will never suffice for their definition of knowledge.

It feels like a sort of smug escapism that ignores that, for many tasks, LLM output, however you want to define it, is enough. It may not replace humans or our thought entirely, but many otherwise human tasks simply do not require that. Instead of facing that reality, authors like this article's rejoice in the idea that our thought is special, our output unique and unmatched, and that those AI marketing fools are for naught, as we cannot be replaced, because of how they tried sticking LLM output into a very human-shaped box... if only it were that simple.

hshdhdhj4444 15 hours ago

I don’t necessarily disagree with the first couple of claims. But I think the author pulls a sleight of hand to make their case (to be fair, I suspect they themselves didn’t realize they were doing this).

- The first claim is that language is not intelligence.

I think this is true, but am not sure. However, for the sake of argument, let’s accept this as true.

However, this leads to the obvious objection stated by the author. General intelligence doesn’t need to work the same way as human intelligence does. The author counters this with their second claim.

- We have no evidence that a language based model can lead to intelligence.

For the sake of argument, let’s even accept this as true. But here’s where the sleight of hand comes in: lack of evidence that X is true is not the same as evidence that X is false.

Just because we don’t have evidence that language can evolve into something like intelligence doesn’t mean language cannot evolve into something like intelligence.

And so far, our experience with language evolving into something like intelligence has been surprisingly positive.

  • DrierCycle 14 hours ago

    Intelligence is wordless. Basic fact. And words are used in the aftermath of thought/intelligence, not prior. They are aftereffects, not effects. Words have no prior. That's a big problem.

  • vrighter 9 hours ago

    TFA didn't imply lack of evidence is evidence of impossibility.

    What he is referring to is that this whole bubble is predicated on promises that it actually is possible. And very soon, at that.

    When they said there is no evidence, they meant that the bubble is expanding because these promises are being made without evidence, meaning everyone should be warier of a much bigger risk than they are led to believe. If it were true, the actual insane amounts of money moving around wouldn't be so insane. So "graph go down".

austhrow743 21 hours ago

Article makes the claim that LLMs are not the path to AGI and provides some information on the topic.

I can’t find where they try to determine how much investment, purchasing, or use is fuelled by claims that AGI is coming though. Absent that, the title seems unrelated to the article content.

  • add-sub-mul-div 19 hours ago

    Right, the investment is a bet on replacing a vast amount of labor, period. They don't care whether that replacement is AGI or not. They'll be happy if it's just lesser/enshittified but passable output.

manofmanysmiles 18 hours ago

I would love to ask the author: are you sure that large language models are only modeling language?

  • DrierCycle 14 hours ago

    Whatever gets predicted by tokens gets summarized by symbols, which are artifacts of language. This gets to the illusory aspects of binary as well; the rabbit hole goes deep.