As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • R0cket_M00se@lemmy.world

    Call it whatever you want; if you worked in a field where it’s useful, you’d see the value.

    “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

    Holy shit! So you mean… Like humans? Lol

    • whats_a_refoogee@sh.itjust.works

      “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

      Holy shit! So you mean… Like humans? Lol

      No, not like humans. The current chatbots are large language models. Take programming, for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax, and he could write a piece of code having never seen code before. The chatbot AIs are not capable of that.

      I am fairly certain that if you took a chatbot that had never seen any code and fed it a programming book containing no code examples, it would not be able to produce code. A human could, because humans can reason and create something new. A language model needs to have seen something before it can rearrange it.

      We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.

    • Orphie Baby@lemmy.world

      I wasn’t knocking its usefulness. It’s certainly not AI, though, and its usefulness is pretty limited.

      Edit: When the fuck did I say “limited usefulness = not useful for anything”? God the fucking goalpost-moving. I’m fucking out.

        • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social

          I’m not the person you asked, but current deep learning models just generate output based on statistical probability from prior inputs. There’s no evidence that this is how humans think.

          AI should be able to demonstrate some understanding of what it is saying; so far, it fails this test, often spectacularly. AI should be able to demonstrate inductive, deductive, and abductive reasoning.

          There were some older AI models, attempting to simulate neural networks, that could extrapolate and come up with novel, often childlike, ideas. That approach is not currently in favor, and it was progressing quite slowly, if at all. ML produces spectacular results, but it isn’t thought, and it only superficially (if often convincingly) resembles it.

      • R0cket_M00se@lemmy.world

        If you think its usefulness is limited, you don’t work in a professional environment that utilizes it. I find new uses every day as a network engineer.

        Hell, I had it write me backup scripts for my switches the other day using a Python automation framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as creating the appropriate Path. Then it wrote the damn script for me.
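
        For illustration, here’s a minimal sketch of what such a Nornir backup script could look like. The config.yaml inventory, the nornir-netmiko plugin, and the Cisco-style “show running-config” command are assumptions for the example, not details from the comment:

        ```python
        # Sketch: back up the running config of every switch in a Nornir inventory.
        # Assumes a standard config.yaml inventory and the nornir-netmiko plugin
        # (pip install nornir nornir-netmiko); adjust the command for your platform.
        from pathlib import Path

        from nornir import InitNornir
        from nornir_netmiko.tasks import netmiko_send_command

        def backup_config(task):
            # Pull the running config from the device over SSH.
            result = task.run(
                task=netmiko_send_command,
                command_string="show running-config",
            )
            # Write it out as backups/<hostname>.cfg.
            Path("backups").mkdir(exist_ok=True)
            Path("backups", f"{task.host.name}.cfg").write_text(result[0].result)

        if __name__ == "__main__":
            nr = InitNornir(config_file="config.yaml")
            nr.run(task=backup_config)
        ```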

        Sure, I had to tweak it to match my specific deployment, and there were a couple of things it was out of date on, but that’s the point, isn’t it? Humans using AI to get more work done, not AI replacing us wholesale. I’ve never gotten accurate information faster than with AI; by comparison, search engines are like going to the library and skimming the shelves.

        Is it perfect? No. Is it still massively useful, and will it overhaul data work and IT in the next decade the same way computers did in the ’90s and ’00s? Absolutely. If you disagree, it’s because you’ve either been using it exclusively to dick around or you don’t work behind a computer screen at all.

          • R0cket_M00se@lemmy.world

            Plus, it’s only just been invented. Saying it’s limited is like trying to claim what the internet can and can’t do in the year 1993.

        • whats_a_refoogee@sh.itjust.works

          Hell, I had it write me backup scripts for my switches the other day using a Python automation framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as creating the appropriate Path. Then it wrote the damn script for me.

          And you would have no idea what bugs or unintended behavior it contains, especially since you’re not a programmer. The current models are good for producing results that are hard to create but easy to verify. Any non-trivial code is not in that category. And trivial code is, well… trivial to write.

        • Orphie Baby@lemmy.world

          “Limited” is relative to the context you’re talking about. God, I’m sick of this thread.

          • R0cket_M00se@lemmy.world

            Talk to me in 50 years when Boston Dynamics robots are running OpenAI models and can do your gardening/laundry for you.

            • Orphie Baby@lemmy.world

              Haha, keep dreaming. If a system made by OpenAI is ever used for robots, it’s not going to work anything like current “AI” on a fundamental level. That’s not a matter of opinion or speculation; it’s a matter of knowing how the fuck current “AI” even works. It just. Can’t. Understand things. And you simply can’t cram in an instruction for every scenario to make up for that. I don’t know how else to put it!

              You want “AI” to exist the way people think about it? One that can direct robots autonomously? You have to program a way for it to know things and to react appropriately to new scenarios based on context clues. There is simply no substitute for this ability to “learn” in some capacity. It’s not an advanced, optional feature; it’s a necessary one for it to function at all.

              “But AI will get better!” is not the fucking answer. What we currently have? Is not made to understand things, to recognize fingers, to say “five fingers only”, to say “that’s true, that’s false”, to have knowledge. You need a completely new, different system.

              People are so fucking dense about all of this, simply because idiots named what we currently have “AI”. Just like people are dense about “black holes” just because of their stupid name.

              • R0cket_M00se@lemmy.world

                We’re like four responses into this comment chain and you’re still going off about how it’s not “real” AI because it can’t think and isn’t sapient. No shit, literally no one was arguing that point. Current AI is like the virtual intelligences of Mass Effect, or the “dumb” AI from the Halo franchise.

                Do I need my laundry robot to be able to think for itself and respond to any possible scenario? Fuck no. Just like I didn’t need ChatGPT to understand what I’m using the Python script for. I ask it to accomplish a task using the data set it’s trained on, and it accesses that pretrained data to build me a script for what I’m describing. I can ask DALL-E 2 to generate an image, and it will access its dataset to emulate whatever object or scene I’ve described, based on its training data.

                You’re so hung up on the fact that it can’t think for itself in the sapience sense that you’re claiming it cannot do things it’s already capable of. The models can absolutely replicate “thinking” within the information they have available. That’s not a subjective opinion; if they couldn’t do that, they wouldn’t be functional for the use cases we already have for them.

                Additionally, robotics has already reached the point we need for this to occur. BD has bipedal robots that can do parkour and assist with carrying loads for human operators. All of the constituent parts of what I’m describing already exist. There’s no reason we couldn’t build an AI model for any given task once we define all of the dependencies such a task would require and assemble the training data. People have already done similar (albeit more simplistic) things.

                Hell, Roombas have been automating vacuuming for years, and without the benefit of machine learning. How is that any different from what I’m talking about here? You could build a model that takes in the pathfinding and camera data of all those vacuuming robots and use it to train an AI for vacuuming, for fuck’s sake. It’s just combining ML with things besides a chatbot.

                And you call me dense.

                • garyyo@lemmy.world

                  Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks still seemed at least 10-20 years out. Now my home computer can run a model that can talk like a human.

                  Being able to talk like a human used to be what the layperson would consider AI; now it’s not even AI, it’s just crunching numbers. And this has been happening throughout the entire history of the field. You aren’t going to change this person’s mind. This bullshit of discounting the advancements in AI has been here from the start; it’s so ubiquitous that it has a name.

                  https://en.wikipedia.org/wiki/AI_effect

                  • R0cket_M00se@lemmy.world

                    I know. I just figured that if we could get past the “it’s not a REAL AI” bullshit and actually identify what about it is so “impossible” to them, maybe we could actually have a conversation about it.

                    Regardless, you’re correct. There’s no changing minds here. They’ll get dragged into the future whether they like it or not.