• FaceDeer@fedia.io

    OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.

    • pizza_the_hutt@sh.itjust.works

      There is no AI Revolution. There never was. Generative AI was sold as an automation solution to companies looking to decrease labor costs, but it’s not actually good at that. Moreover, there’s not enough good, accurate training material to make generative AI that much smarter or more useful than it already is.

      Generative AI is a dead end, and big companies are just now starting to realize that, especially after the Goldman Sachs report on AI. Sam Altman is just a snake oil salesman, another failing-upwards executive who told a bunch of other executives what they wanted to hear. It’s only now becoming clear that the emperor has no clothes.

      • SkyNTP@lemmy.ml

        Generative AI is not smart to begin with. LLMs are basically just compressed versions of the internet that statistically predict what a sentence needs to be to look “right”. There’s a big difference between appearing right and being right. Without a critical approach to information, independent reasoning, and individual sensing, these AIs are incapable of any meaningful intelligence.
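
        A minimal sketch of what that prediction looks like, assuming nothing but bigram word counts (a real LLM uses a neural network over much longer contexts, but the principle is the same): the model emits whatever continuation is statistically most common, with no notion of whether it’s true.

        ```python
        from collections import Counter, defaultdict

        # Toy "training data": the model only ever sees word statistics.
        corpus = "the sky is blue . the sky is falling . the grass is green .".split()

        # Count how often each word follows each other word.
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def predict_next(word: str) -> str:
            # Return the statistically most common continuation -- the one
            # that "looks right" -- with no check on whether it is right.
            return following[word].most_common(1)[0][0]

        print(predict_next("is"))  # "blue": plausible-looking, not verified
        ```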

        In my experience, the emperor and most of the people around them still have not figured this out.

      • anachronist@midwest.social

        Generative AI is just classification engines run in reverse. Classification engines are useful, but they’ve been around and making incremental improvements for at least a decade. Also, just like self-driving cars, they’ve been writing checks they can’t honor. For instance, legal coding and radiology were supposed to have been automated by classification engines long ago.
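
        One way to picture “run in reverse” (a rough sketch; the classifier here is a random stand-in for a trained one, and the names are made up for illustration): generation is just classification over the vocabulary, applied one token at a time.

        ```python
        import random

        # The "classes" are vocabulary words; one generative step is one
        # classification decision.
        VOCAB = ["the", "court", "adjourned", "ruled", "yesterday", "."]

        def classify_next(context: list[str]) -> dict[str, float]:
            # Stand-in for a trained classifier: score every vocabulary word
            # as the next token. A real model would condition on the context.
            return {w: random.random() for w in VOCAB}

        def generate(prompt: list[str], steps: int = 5) -> list[str]:
            out = list(prompt)
            for _ in range(steps):
                scores = classify_next(out)
                out.append(max(scores, key=scores.get))  # take the top "class"
            return out

        print(" ".join(generate(["the"])))
        ```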

        • bizarroland@fedia.io

          It’s sort of like how you can create a pretty good text message on your phone using voice to text but no courtroom is allowing AI transcription.

          There’s still too much risk to trust it with our legal process: it might capitalize the wrong word, replace a word with one that merely sounds close to what was said, or do something else wholly unconceived of.

          If they could guarantee 100% accurate transcription of the spoken word to text, it would put the entire field of court stenographers out of business and generate tens of millions of dollars’ worth of digital contracts for the company that figures it out.

          It’s not going to happen, though, because even today a phone can’t tell the difference between the word holy and the word holy. (Wholly)
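
          To make the holy/wholly problem concrete, here’s a toy sketch (the scores are invented for illustration): the two words are acoustically identical, so a transcriber can only choose by scoring each spelling in context, and when that guess is wrong the transcript is still confidently fluent.

          ```python
          # Both words sound the same, so acoustics alone can't decide.
          # Toy context scores, invented purely for illustration:
          CONTEXT_SCORES = {
              ("holy", "water"): 0.9,
              ("wholly", "water"): 0.1,
              ("holy", "unconvinced"): 0.2,
              ("wholly", "unconvinced"): 0.8,
          }

          def pick_homophone(candidates: list[str], next_word: str) -> str:
              # Choose whichever spelling the (toy) language model prefers
              # in context; a bad guess still produces fluent-looking output.
              return max(candidates,
                         key=lambda w: CONTEXT_SCORES.get((w, next_word), 0.0))

          print(pick_homophone(["holy", "wholly"], "water"))        # -> holy
          print(pick_homophone(["holy", "wholly"], "unconvinced"))  # -> wholly
          ```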

    • mozz@mbin.grits.dev

      If they closed down, and the people still aligned with safety had to take up the mantle, that would be fine.

      If they got desperate for money and started looking for people they could sell their soul to (more than they have already) in exchange for keeping the doors open, that could potentially be pretty fuckin bad.

      • FaceDeer@fedia.io

        Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.

        I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.

        • mozz@mbin.grits.dev

          Many of their talented engineers have moved on to other companies, some new startups and some already-established ones.

          When did this happen? I know some of the leadership departed, but I hadn’t heard of it among the rank and file.

          I’m not necessarily saying you’re wrong; something definitely seems to have changed between the days of GPT-3 and GPT-4 and the present. I just hadn’t heard about it.

          There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.

          I’m not sure this is true for AI. Some of the people who are most worried about AI safety are the AI engineers. I have some impression that OpenAI’s safety focus was why so many people liked working for them, back when they were doing groundbreaking work.

          • FaceDeer@fedia.io

            AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.

            I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.