• Rooskie91@discuss.online

    If I hear one more person say something along the lines of, “AI is the future” I’m going to strangle them. Of all the people that say that shit, none of them can explain how it works.

  • melsaskca@lemmy.ca

    I’ve been hesitant to play around with AI just because of how sneakily business is done lately, and I don’t trust “business”. I can’t in good conscience reconcile my use of AI with the horrendous resources required to keep it up and running. I’d rather go “green” and figure shit out on my own, using old-school research methodologies. My only caveat is if I really, really wanted a funny image. Maybe a Spongebob and Magilla Gorilla mashup. That, I’d sell out for. /s

  • SaveTheTuaHawk@lemmy.ca

    Uh… recombinant DNA experiments were never paused, and while human cloning is illegal in non-shitholes, Sam Altman has a company in San Francisco, called Preventive, to genetically modify embryos.

  • hardcoreufo@lemmy.world

    I find AI very frustrating. I had a script I wanted to turn into a systemd service, which I’d never done. I searched the web and didn’t find quite what I wanted, so I asked AI. It gave a great answer to exactly my question and explained what every field was doing. It got me there faster than searching and browsing forums would have.

    So great, I also wanted to set up a watchdog on the Pi to reboot it if it hangs. The AI told me to get the watchdog package from apt and then edit a systemd conf file. An hour later, with nothing working right, I gave up and found a tutorial in about 30 seconds of web browsing that made it clear the AI was mixing up instructions from two different methods.

    So it saved me 5 minutes on one thing and cost me an hour on another. I feel like the internet and search engines of 10 years ago were much better than what we have now.
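    For reference, a minimal unit file for turning a script into a systemd service looks something like this (the service name and script path are made-up placeholders, not from the thread):

```ini
# /etc/systemd/system/myscript.service  (hypothetical name and path)
[Unit]
Description=Run my script at boot
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    Enable it with `sudo systemctl daemon-reload` and `sudo systemctl enable --now myscript.service`. The watchdog mix-up is also plausible: on a Pi there really are two separate approaches, the standalone `watchdog` daemon from apt (configured in `/etc/watchdog.conf`) and systemd’s built-in hardware watchdog (`RuntimeWatchdogSec=` in `/etc/systemd/system.conf`), and instructions for one don’t work for the other.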

    • wulrus@lemmy.world

      That is my exact experience. I was basically just incoherently whining about an issue I had that involved accessing the DB for old legacy windows photo albums and preserving them, and it spit out a fully working program that did all that.

      Then again, it often latches onto a way to do something that messes things up and leads nowhere, and I have to be the one to say: “STOP. The goal is to install a scanner on a very common OS, one that is praised for being particularly compatible to this. Now you want me to add 50 lines of custom configuration to a background service and switch it to an unsupported version. We are clearly on the wrong path here.”

      Hence I experiment with it at home to see its limits, but my customers get 100% human-generated solutions.

    • marxismtomorrow@lemmy.today

      That touches on the heart of it; search engines have been so enshittified that AI is by default better, because it occasionally gets information from its training data that isn’t easily found through normal searching.

      (Some) AI has its place; GAN AI, for instance, is amazing at finding subtle indicators of patterns that can be extrapolated to new data. But it’s just so bad at 99% of the applications it has ever been used for, including the entire concept of LLMs, which are such an inherently flawed technology that they’ll never be passable as useful for anyone who isn’t a greedy, shortsighted CEO wanting to replace workers as soon as possible.

    • Skullgrid@lemmy.world

      here’s how I do it:

      Word it as best I can. If the AI gives a specific and likely answer, double-check the documentation, Stack Overflow, or its listed sources.

      It sucks that a lot of the stuff I’m searching for comes from the same three fucking AI-generated things from 2024 onwards.

  • deadymouse@lemmy.world

    Progress cannot be stopped, it will continue until the apocalypse comes because of it and no one can stop it. It’s a pattern.

  • Duamerthrax@lemmy.world

    Recombinant DNA promised better organ transplants, but it made Christians uncomfortable, so Bush II banned it.

    • glimse@lemmy.world

      And yet it was made, posted, saved, and shared. Because posting MORE content is better than posting GOOD content

      I’m not worried about AI ruining the internet…we’ve already done it ourselves.

  • HugeNerd@lemmy.ca

    How is “human cloning” a) a real technology b) a bigger danger than the 8 billion fucking morons already here c) different from twins and triplets?

    • SaveTheTuaHawk@lemmy.ca
      1. Genetically modified embryos were made by a lab in China for a wealthy client.

      2. The technology is not accurate; unintended modifications could lead to genetic diseases.

      3. Twins and triplets are not modified.

    • vaultdweller013@sh.itjust.works

      We can clone a sheep, and can even nearly bring species back from extinction via cloning, which is vastly more advanced than just cloning a person. As for the other factors, it’s mostly a matter of ethics, what with the potential for cloning celebrities for stupid reasons or making a sapient clone just to harvest their organs. Which, as an aside, wasn’t that a Sliders episode?

      • T156@lemmy.world

        making a sapient clone just to harvest their organs

        A clone just makes a genetically identical baby, though, and they are shorter-lived. Dolly only lived half as long as the sheep she was a clone of, before she died of old age.

        Unless you wanted to wait 15-20 years for organs that might, on average, last 15, cloning isn’t practical.

        • vaultdweller013@sh.itjust.works

          I’m assuming we can solve the telomere issue for this. Frankly, though, it seems to be a stalled-out field, at least until we can figure out how to better use stem cells.

          But yeah, if you’re in your 20s or even 50s, making a clone baby of yourself and waiting 20 years would be a technically viable way to get a new set of organs. That’s more what I was referring to, especially since creating a healthy body you can rip apart would basically require letting it live a relatively healthy life.

    • TheKingBee@lemmy.world

      Some of us are greedy fucks, we let them make the decisions for some reason.

      I’ve always advocated for a system where people who are qualified but don’t want to should lead…

      • tristynalxander@mander.xyz

        Who gets to say who’s qualified? While I appreciate experts, any filter you add to democracy is dangerous. I think experts should serve a large council of randomly selected citizens and people who were ranked higher than a lottery option in a ranked voting system. That allows us to have career politicians, but also prevents them from entrenching themselves as the “lesser evil”.

        • tristynalxander@mander.xyz

          Sortition does best as an anti-corruption mechanism, rather than a full system that removes all politicians. I like to merge it with ranked voting by adding a lottery option to the ballot that politicians have to beat. This, for lack of a better term, Ranked Sortition system is also an easier transition from the current system, so even if you want a full sortition this is easier to implement at various local levels where people still need to get used to the idea.

          Edit: Also is there a com where we can talk about these sort of voting theory things?
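          To make the mechanism concrete, here’s a rough sketch of that “ranked sortition” count as I understand it: run a normal instant-runoff over ranked ballots that include a lottery line, and only if the lottery line itself wins does a randomly drawn citizen get the seat. (The function name, the `LOTTERY` marker, and the elimination rule here are my own guesses at the details, not anything settled.)

```python
import random
from collections import Counter

def ranked_sortition(ballots, citizens, seed=None):
    """Instant-runoff over ranked ballots that include a 'LOTTERY' option.

    ballots: list of rankings, e.g. [["Alice", "LOTTERY"], ["LOTTERY"]]
    citizens: pool to draw from if the lottery option wins.
    """
    rng = random.Random(seed)
    ballots = [list(b) for b in ballots]
    while True:
        # Tally the current first choice on each non-exhausted ballot.
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) == 1:
            break  # majority reached, or only one option remains
        # Eliminate the weakest option and transfer its ballots.
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]
    if leader == "LOTTERY":
        return rng.choice(citizens)  # sortition: a random citizen wins
    return leader
```

          The point of the design is that a career politician only holds the seat by out-ranking the lottery line; otherwise the seat falls back to sortition.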

        • TheKingBee@lemmy.world

          Yes it is!

          I didn’t use that word because no one ever knows wtf it means lol

          I kinda like the term randomocracy

  • Tavi@lemmy.blahaj.zone

    ooooohh it’s so dangerous and capable ooohhhhh please we need to be regulated ooooooo we’re not releasing it to the public it’s so dangerous ooooooo

    • SirIglooi@sh.itjust.works

      No idea what you’re on about. Mythos is a GAME CHANGER. Completely DESTROYS software security. That’s why we’re going to SAVE THE WORLD by letting our corporate sponsors use it.

    • angband@lemmy.world

      If they regulate something they don’t have (AGI), they (corps) can steal it from the small shop that creates it 30 years from now. insert head tapping meme here

  • lastlybutfirstly@lemmy.world

    The only thing dangerous about AI is people believing the hype and thinking it can actually think and do things it can’t do at all. LLMs, Flock cameras, etc. are just MENACE matchbox computers at their core. And it’s dangerous that governments and CEOs blindly rely on whatever crap they pump out without human supervision.

    • AppleTea@lemmy.zip

      But! What if a computer could reproduce all the same phenomena as a brain?

      Do we have any reason to think this might be the case? Not really. But. We also (maybe) have no reason to think this isn’t the case. What else are we gonna spend trillions gambling on? An ecosystem capable of supporting mammals? Don’t make me laugh!

      • lastlybutfirstly@lemmy.world

        I agree that the amount of water and electricity these AI centers gobble up is a concern. But I don’t know what you mean by our way of life. Personally I think it’s very useful when judiciously used. It’s dangerous if NASA haphazardly tosses AI generated code into the OS for a rocket going on a moon mission. But to quickly generate a meme or YT thumbnail is harmless.

  • Baggie@lemmy.zip

    And then there’s antichiral bacteria, where the entire scientific community will shoot you if you even breathe wrong adjacent to the idea

    • dejected_warp_core@lemmy.world

      As someone who has family that died from mad cow (a prion disease), fuck everything about that. The fact that there are prion-tainted spaces out in the wild is terrifying enough.

    • cornshark@lemmy.world

      What’s that and what do you mean by breathing wrong at the idea? Is someone trying to breed some sort of supervillain bacteria?

      • Baggie@lemmy.zip

        Others have already answered, but yeah, it’s a bit of a Pandora’s box. We almost certainly wouldn’t be able to contain it, and there’s no way of knowing what it would do to the world, or even the universe. It’s some supremely scary shit.

      • pelya@lemmy.world

        Almost every organic molecule has a mirrored counterpart, like a normal screw and a left-handed screw.

        Almost none of the mirrored versions occur in nature.

        We now have the technology to synthesize them, and to synthesize a bacterium out of them.

        But if you do that and the bacterium escapes, all your existing medicine will be useless, so you would need to re-synthesize all your antibiotics in the left-handed configuration.

        That typically does not happen with regular bacteria experiments, because most of what you can synthesize in the lab will be a descendant of some other well-known bacterium that already has an appropriate medicine to treat it, and in most cases that medicine will be effective against your new strain.

        • Buddahriffic@lemmy.world

          Though wouldn’t that incompatibility go both ways? Current drugs and antibodies wouldn’t work on them, but wouldn’t they also need the mirrored proteins for energy and functioning, so that our bodies would be of no use to them?

          I’ve been wondering if bio-compatibility would mean one doesn’t have a chance against the other, or if it’s more like separate worlds that can only interact at a high level (like via the senses) but not at a lower level (sharing infections, food, and other biological processes).

          • Fluke@feddit.uk

            Maybe?

            Worth risking life as we know it just to find out, for shiggles?

            The truth is, there will be somewhere that they outcompete native fauna for resources but can’t be stopped by what controls the natives, and whoops, there goes the ecosystem.

            • Buddahriffic@lemmy.world

              I think it would be important to know in the context of space exploration. Assuming we can solve the other very hard problems standing in the way of a Star Trek future (though I’m not holding my breath lol), we’d need to know whether we should stay the fuck away from any planets we find with life, or whether we can make contact without potentially dooming both our planet and theirs to returning to the single-celled stage.

              But yeah, it is likely a real world pandora’s box.

  • Sunflier@lemmy.world

    The difference between AI and the other 3: AI has the potential to save all the rich people trillions through the firing of the proletariat whereas the 3 numbered items were merely a small group of people trying to make money for themselves.

    • KairuByte@lemmy.dbzer0.com

      1 and 3 could easily make a boatload of money, and could allow rich people to “live forever” and edit themselves in the process.

    • quick_snail@feddit.nl

      Wut. Rich people will shoot themselves in the foot by firing the proletariat. AI is trash.

      The only thing that would save them is a bail out when everything crashes.

      • jj4211@lemmy.world

        So much white-collar work is frankly a bit performative, and whether it’s being done well, done badly, or not done at all is sometimes impossible to tell.

        Thanks to mismanagement, people are brought in “in case they might be useful”, and a bunch of material is produced that is beyond the ken of the management, who just smile and nod because they have no idea.

        Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” as executive after executive sagely nodded. New executive came into the fold and got the same pitch and said “ok, fine, but what analytics, with what data sources, what do you expect to get out of it?” In a rare moment of competence an executive actually dared to figure out something instead of just smiling over the buzzwords. That same executive was gone within 3 months, because broadly speaking this was a problem for his peers that mostly operated by buzzword alignment.

        There’s a mountain of internal project document material that must be created, but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code”, or that they can fire their coders and replace with new coders if they can reference some ‘non-code’ document to help.

        GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.

        • cornshark@lemmy.world

          Makes sense! So your theory is leadership will fire themselves and replace themselves with genai, keeping the rank and file workers?

          • jj4211@lemmy.world

            Nah, that rank and file workers will go and the leadership will happily let genai keep doing performative bullshit that doesn’t matter and claim it’s like super important

      • Canaconda@lemmy.ca

        “An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu

        “AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.

      • Buddahriffic@lemmy.world

        In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with just iterative changes to the LLM-based model, things are improving fast enough that it might soon be “safe” to shrink the workforce for technical tasks.

        But I’m sure I’m not the only one who thinks the LLM-focused approach is just a local minimum the industry is stuck optimizing. A better approach wouldn’t be big-data “throw everything we can at it and hope it spits out useful results” but something more methodological: encode knowledge from human experts to give it a head start, plus robust reasoning strategies and logic that let it improve on that starting point as it seeks out and adds relevant data, similar to how we do science and engineering.

        I believe that it’s a race between an AI that truly can outcompete us and societal collapse, because the real reason AI is more difficult to stop than those other three is how easy it is to hide development. The massive data centers are required for the current approach being scaled up for the world to use it. AI research and development can be done on home PCs, especially if you’re more interested in results than speed (in which case you aren’t limited by cores or memory but just by storage and time).

        • Junkasaurus@lemmy.world

          Eh, it’s the illusion of speed. Scaling brought enormous returns going from GPT-3 to GPT-4, but it’s been far less significant with every major release since. To compensate, every research lab is coming up with new ways to extract value out of models: CoT, RL, agent harnesses, etc.

          However, these are all hacks to make LLMs more efficient or to (try to) make them more reliable. They still have significant drawbacks, and it will take years (probably decades) to get them to the point where they can reliably replace knowledge workers. China knows this and is taking a far different approach to LLM development (not a tankie, fyi). Scaling is a horrible idea that will burn billions of dollars with an astronomically low chance of return.

          • Buddahriffic@lemmy.world

            Yeah, while I have some doubts, I believe that LLMs have fundamental issues that will always hold them back. The doubts come because Claude Code seems like they’ve built a system where they are effective at giving it a good context, and it has relatively quickly solved some annoying obscure issues with my environment that I was unable to make any progress on my own with and other LLMs were also useless for.

            I still think it’s a series of patches/bandaids to cover up those flaws, but my doubt comes in the form of “what if those patches can get it to average human level or even skilled”. I don’t think LLMs can get to the true innovator level like Einstein and Tesla, but doing competent work is well below that level and at this point I think LLMs might be able to get there.

            And I think other approaches could do even better. Not that I know what they are, but just based on the assumption that we haven’t found the ideal approach in the still infancy of what AI could be.

            Edit: Funnily enough, the current/recent advancements seem to be aimed at eliminating the job of “prompt expert” first.

    • Kommeavsted@lemmy.dbzer0.com

      Firing and rehiring at a lower wage. That is, if they’re motivated to continue producing functional products. It’s clear that at this point many aren’t, so maybe this comment is moot.