• Rentlar@lemmy.ca · ↑26 · 22 hours ago

    An actually interesting use of artificial intelligence: put in the hands of expert mathematicians, it can actually accomplish something. There was definitely a lot of coaxing it back to doing the task correctly, but it is pretty cool that it can solve problems (even if they are math-nerd ones) in ways that are independently verifiable.

    • technocrit@lemmy.dbzer0.com · ↑3 ↓1 · 5 hours ago

      There’s no actual “AI” involved here. Mathematicians have been using computational methods since they invented the computer. These results are just a natural continuation of decades of work.

      But no. There’s no Johnny 5 inside this program.

        • Rentlar@lemmy.ca · ↑3 · 4 hours ago

        Well, just like a MATLAB plotting program “draws” lines and curves and stuff, Claude is a program that puts together various lines of reasoning based on the mathematician’s input.

          • skuzz@discuss.tchncs.de · ↑1 ↓1 · 2 hours ago

          Various weighted statistical guesses based on token input and n-dimensional weighted-matrix output. Not reasoning per se. It’s as if you were handed the ingredients to make bread with no instructions: statistically, you’ll eventually make bread.
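The “weighted statistical guess” described above can be sketched in a few lines: convert raw scores (logits) over a vocabulary into a probability distribution with a softmax, then sample. This is a toy illustration, not any real model; the vocabulary and the scores are invented for the bread analogy.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax the raw scores into probabilities and sample one token index."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: the "statistical guess"
    idx = random.choices(range(len(probs)), weights=probs, k=1)[0]
    return idx, probs

# Toy vocabulary and made-up scores for a prompt like "flour, water, yeast ->"
vocab = ["bread", "cake", "soup"]
logits = [4.0, 1.0, 0.1]

idx, probs = sample_next_token(logits)
print(vocab[idx], [round(p, 3) for p in probs])
```

Most of the time this picks “bread”, but not always; lowering the temperature sharpens the distribution toward the top choice, which is exactly the “statistically, you’ll eventually make bread” point.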

    • panda_abyss@lemmy.ca · ↑24 ↓1 · 21 hours ago

      In the hands of experts these are definitely useful. I’ve always felt that.

      AI should be used to augment humans, not replace them.

      Unfortunately, we have idiots making decisions based on the sycophantic BS machine without knowing what the job actually does.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · ↑11 ↓1 · 17 hours ago

          Exactly, when you dig into all the complaints people have about this tech, they’re ultimately just symptoms of the underlying capitalist relations.

        • panda_abyss@lemmy.ca · ↑9 · edited · 19 hours ago

          Yes.

          I’d feel a lot less annoyed at my code being used to train the AI (without my consent) if the AI’s benefits weren’t funnelled into private pockets.

          I’d feel a lot less annoyed at AI if it wasn’t constantly used to replace jobs and then failing at them. Actually, AI isn’t replacing jobs; it’s being used as an excuse to do layoffs while pretending your company is being innovative, so as not to scare off investors.

          Without a profit motive there wouldn’t be ChatGPT Health, which is just faking medical skills while being wrong as often as a coin toss, in exchange for money. If I did that I’d be sued for negligence and/or fraud.

        • Juice@midwest.social · ↑13 · 20 hours ago

          You can read Marx’s chapters on technology in Capital, Volume 1. What he describes from his own time, about how tech is developed, for whose benefit, and specifically how it has to exploit workers in order to be useful to capital, matches so closely with the development of AI that we are seeing.

      • All Ice In Chains@lemmy.ml · ↑10 ↓2 · 21 hours ago

        The problem is always techbros. Large Language Models, Deep Learning, these kinds of things are potentially valuable when put to work in the right arena.

        A techbro will never put them in the right arena. It’s always a false promise built on flimsy reputational credit.

      • Scrubbles@poptalk.scrubbles.tech · ↑7 ↓3 · 21 hours ago

        Exactly. It should all be treated as another tool in the toolbelt. To me, it reminds me of when GUI editors came along in IDEs like Visual Studio. It honestly feels the same: tech CEOs immediately clamor to say that tech jobs are dead, and the market for engineers dips. Some engineers freak out and refuse to learn the technology while others learn what it is. Those who learn it and use it as a tool elevate themselves and move faster. There is a non-trivial group of people who refuse to use the GUI tools on principle. Eventually the CEOs realize they made a mistake, and then more work comes in faster than ever before. Over the years and decades, everyone starts using the tech as a tool.

        It’s the same with AI; it’s following the exact same pattern to a T. CEOs are starting to realize that it’s just a tool that can be used, but it needs people at the helm who know how to use it. Devs are split: for some it’s accelerating their work because they know what it’s doing; others see a useless boondoggle and refuse to use it, but they’re probably only hurting themselves because every interview is asking “are you using AI?” I’d say we’re finally starting to normalize its usage as a tool.

        • sloppy_diffuser@sh.itjust.works · ↑5 ↓1 · 19 hours ago

          Totally agree with your overall point.

          That said, I have to come to the defense of my terminal UI (TUI) comrades with some anecdotal experience.

          I’ve got all the same tools in Neovim as my VSCode/Cursor colleagues, with a deeper understanding of how it all works under the hood.

          They have no idea what an LSP (language server) is. They just know the marketing buzzword “IntelliSense.” As we build out our AI toolchains, it doesn’t even occur to them that an agent can talk to an LSP to improve code generation, because all they know are VSCode extensions. I had to pick and evaluate my MCP (Model Context Protocol) servers from day one rather than just accepting the defaults, and the quality of my results shows it. The same can be done in GUI editors, but since you’re never forced to configure these things yourself, the exposure is just lower. I’ve had to run numerous trainings explaining that MCP servers are traditionally meant to be run locally, because folks haven’t built the mental model that comes with wiring it all up yourself.

          Again, totally agree with your overall point. This is more of a PSA for any aspiring engineers: TUIs are still alive and well.
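The “agent talks to an LSP” idea above is less magic than it sounds: a language server speaks JSON-RPC over stdio, with a Content-Length header framing each message. A minimal sketch of that framing follows; `textDocument/hover` is a real LSP method, but the file URI and cursor position here are made up for illustration.

```python
import json

def lsp_frame(method, params, msg_id=1):
    """Frame a JSON-RPC request the way an LSP client sends it over stdio:
    a Content-Length header, a blank line, then the JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    })
    return f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}"

# Ask the server what it knows about the symbol under the cursor.
msg = lsp_frame("textDocument/hover", {
    "textDocument": {"uri": "file:///tmp/example.py"},
    "position": {"line": 10, "character": 4},
})
print(msg.splitlines()[0])  # the Content-Length header line
```

Pipe frames like this into any language server’s stdin and read the framed responses back, and an agent gets the same type and documentation information the “IntelliSense” buzzword is selling.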

        • Juice@midwest.social · ↑6 ↓1 · 20 hours ago

          Remember when they built a $50 billion server farm in my back yard to run a GUI IDE?

          I think your attitude toward AI in the abstract is pretty good, and it matches my experience in tech, but there’s also something much larger going on here.

          • Scrubbles@poptalk.scrubbles.tech · ↑5 · 20 hours ago

            That’s fair. A huge difference is how much money is behind the crazy hype machine, and how desperate they are to keep the hype going. Most actual tech people I know, work with, and am connected with in the field have normalized its usage: knowing when to use it and when not to. It’s only the tech bros at the top who are still like “Yeah bro it’s totally going to get rid of labor bro we’re all gonna have androids who do all the work bro just trust me just 200 billion more dollars bro I promise”

    • kryptonianCodeMonkey@lemmy.world · ↑5 ↓1 · 18 hours ago

      They are useful tools. I use Copilot quite often in my work routine, mostly to generate boilerplate code for me, add explanatory comments, review code for syntax and logic mistakes, etc. These tools can handle analysis and debugging quite well. They can usually write code from plain-language input if you can describe specifically what you need. And they can write documentation fairly well based on their own analysis of the code (though sometimes it’s missing context).

      They’re still not a silver bullet by any means. If their training on a particular language is limited and/or documentation is not accessible, they often make things up out of whole cloth that look like they might work but aren’t correct syntax (Copilot was basically useless with Dynatrace Query Language when I was learning the syntax last year). Sometimes they don’t follow instructions exactly. Sometimes, even when just refactoring code to reduce complexity, they end up making unintended changes to the logic. Sometimes I end up spending as much time or more debugging AI-generated code as it would have taken to write it correctly the first time.

      It’s handy, but not magic. The fact that these guys got something so novel and complicated out of it is quite impressive, and it probably required a lot of data input, precise mathematical instructions, and, frankly, luck and a lot of iterations.

      • partofthevoice@lemmy.zip · ↑1 · 15 hours ago

        Yeah, to be fair, I’ve had it do some pretty incredible stuff. I often need to spend time finding its mistakes, making it fix them, refining my own verbiage, and coaching it on how it should respond (so it doesn’t overwhelm itself). But it’s definitely helped me finish a month of work in a week.