• Wilco@lemmy.zip · +5 · 20 hours ago

    We need laws requiring AI to be clearly labeled, with severe fines for anyone who doesn’t comply. Robocalls and AI IVR phone systems should clearly tell you “this is AI”.

  • krakenx@lemmy.world · +6 · 23 hours ago

    Use of AI should be disclosed the same way 3rd-party DRM and EULAs are. And similarly, the disclosure should include some details. People are free to boycott Denuvo if they want, but they’re also free to buy it anyway. Disclosure is never a bad thing.

  • QuantumTickle@lemmy.zip · +207 −1 · 2 days ago

    If “everyone will be using AI” and it’s not a bad thing, then these big companies should wear it as a badge of honor. The rest of us will buy accordingly.

    • Devial@discuss.online · +64 −2 · 2 days ago

      If “everyone will be using AI”, AI will turn to shit.

      They can’t create originality, they’re only recycling and recontextualising existing information. But if you recycle and recontextualise the same information over and over again, it keeps degrading more and more.

      It’s ironic that the very people who advocate for AI everywhere fail to realise just how dependent the quality of AI content is on having real, human-generated content to train the models on.

      • 4am@lemmy.zip · +38 −3 · 2 days ago

        “The people who advocate for AI” are literally running around claiming that AI is Jesus and it is sacrilege to stand against it.

        And by literally, I mean Peter Thiel is giving talks actually claiming this. This is not an exaggeration, this is not hyperbole.

        They are trying to recruit techno-cultists.

        • EldritchFemininity@lemmy.blahaj.zone · +4 · 2 days ago

          Ironically, one of the defining features of the techno-cultists in Warhammer 40k is that they changed the acronym to mean “Abominable Intelligence” and not a single machine runs on anything more advanced than a calculator.

          • 4am@lemmy.zip · +2 · edited · 1 day ago

            Sci Fi keeps trying to teach us lessons, and instead we keep using it as an instruction manual.

            (Except, apparently, whenever it’s on the nose we interpret it as dramatic irony…)

      • Sl00k@programming.dev · +6 −1 · 2 days ago

        I think the grey area is: what if you’re an indie dev who did the entire storyline and artwork yourself, but had the AI handle the more complex coding?

        It is, to our eyes, entirely original, but it used AI. Where do you draw the line?

        • irmoz@reddthat.com · +2 · 1 day ago

          That’s somewhat acceptable. The ideal use of AI is as a crutch - and I mean that literally. A tool that multiplies and supports your effort, but does not replace your effort or remove the need for it.

        • Default_Defect@anarchist.nexus · +12 · 2 days ago

          Disclose the AI usage and how it was used. Let people decide. There will always be “no AI at all, ever” types that won’t touch the game, but others will see that it was used as a tool rather than a replacement for creativity and will give it a chance.

        • Devial@discuss.online · +8 · 2 days ago

          The line, imo, is: are you creating it yourself and just using AI to help you make it faster/more convenient, or is AI the primary thing creating your content in the first place?

          Using AI for convenience is absolutely valid imo. I routinely use chatGPT to do things like debugging code I wrote, rewriting data sets in different formats instead of doing it by hand, or handling more complex search-and-replace jobs when I can’t be fucked to figure out a regex to cover it.

          For these kinds of jobs, I think AI is a great tool.
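
          As a concrete (made-up) example of the kind of search-and-replace job I mean, rewriting US-style dates as ISO 8601 is a single regex:

          import re

          text = "Released 03/14/2021, patched 11/02/2023."
          # Capture month/day/year and reorder into year-month-day.
          iso = re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\1-\2", text)
          print(iso)  # Released 2021-03-14, patched 2023-11-02.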

      • CatsPajamas@lemmy.dbzer0.com · +4 −2 · 2 days ago

        How does this model collapse thing still get spread around? It’s not true. Synthetic data has actually helped bots get smarter, not dumber. And if you think all Gemini 3 does is recycle, idk what to tell you.

        • Devial@discuss.online · +1 · edited · 42 minutes ago

          If the model collapse theory weren’t true, then why do LLMs need to scrape so much data from the internet for training?

          According to you, they should be able to just generate synthetic training data purely with the previous model, and then use that to train the next generation.

          So why is there any need for human input at all? Why are all the LLM companies fighting tooth and nail against restrictions on their data scraping, if real human data is so unnecessary for model training and they could just generate their own synthetic training data instead?

          You can stop models from deteriorating without new data, and you can even train them on synthetic data, but that still requires the synthetic data to either be modelled or filtered by humans to ensure its quality. If you just take a million random chatGPT outputs, with no human filtering whatsoever, use those to retrain the model, and then repeat that over and over again, eventually the model will turn to shit. Each iteration, some of the random tweaks chatGPT makes to its output will produce low-quality results, which are then presented to the next model as a target to achieve; the new model learns that this type of bad output is desirable, making it more likely to reappear in the next set of synthetic data.

          And if you turn off the random tweaks, the model may not deteriorate, but it also won’t improve, because effectively no new data is being generated.
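
          You can see the gist of that degradation in a toy simulation (a deliberately crude sketch, nothing like real LLM training): fit a model to data, sample from the fit, refit on the samples, and repeat with no filtering.

          import numpy as np

          rng = np.random.default_rng(0)
          # "Real, human-generated" data: a rich distribution (std = 1.0).
          data = rng.normal(loc=0.0, scale=1.0, size=10_000)

          for generation in range(10):
              # "Train" this generation: estimate the distribution from the
              # current data.
              mu, sigma = data.mean(), data.std()
              print(f"gen {generation}: estimated std = {sigma:.3f}")
              # Replace the training set with purely synthetic, unfiltered
              # samples from the fitted model.
              data = rng.normal(loc=mu, scale=sigma, size=1_000)
          # The estimate drifts further from the original distribution each
          # generation; variance lost to sampling error never comes back.

          Real collapse is subtler than this, but the mechanism is the same: each generation inherits the previous one’s sampling errors as ground truth.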

  • AnarchistArtificer@slrpnk.net · +57 −1 · 2 days ago

    Corporations are not our friends, even when they seem friendly, like Steam. However, they can be useful allies, so I’m glad to see this response from Steam.

  • twinnie@feddit.uk · +151 · 2 days ago

    They don’t need to court developers, they need to court consumers. The games will be sold wherever people are buying.

    • CosmoNova@lemmy.world · +92 −3 · 2 days ago

      Consumers have already decided mobile gambling slop is the most successful investment in the gaming industry. I don’t trust consumers to know what’s best for them.

      • Katana314@lemmy.world · +68 · edited · 2 days ago

        I think the studies showing how certain minds can be targeted and manipulated by dark gambling patterns made me think differently about gambling. I’m less likely to blame the victims now - in many ways it can be difficult or near-impossible for them to control those impulses. I’d at least like lootbox gambling slop to be regulated the same as casinos.

        Look how popular fantasy sports is now. It’s basically just the casino industry seeking out new avenues to cheat the definition of “Playing odds to win cash”.

        • Carighan Maconar@piefed.world · +9 · 2 days ago

          Yeah, that shit is like selling heroin specifically to vulnerable people in depressing phases of their life. But with gambling ads and dark patterns in video games, we somehow accept it. 😕

      • Oxysis/Oxy@lemmy.blahaj.zone · +25 · 2 days ago

        Well yeah, gambling is addictive; the mobile slop companies know that, so they try to get people hooked. It’s really sad what’s happened to the mobile gaming space, it’s so heavily dominated by gambling. Hell, the entire world is being overrun by gambling companies now. It’s a major problem that will have to be addressed at some point soon.

    • rtxn@lemmy.world · +44 −5 · 2 days ago

      consumers

      This is very much a pet peeve, but be careful about how you use “consumer” versus “customer”. They each imply completely different power dynamics.

      • warm@kbin.earth · +18 −4 · 2 days ago

        It’s very much “consumer” these days; people buy literally anything marketed to them.

          • warm@kbin.earth · +5 · 2 days ago

            I like to think I hold myself to a higher standard, or at least just a standard. For general consumption I’m not sure, but for video games, people’s standards have dropped significantly; the masses accept a lot of bullshit and even defend it.

        • rtxn@lemmy.world · +27 · edited · 2 days ago

          Maybe some people, who are an ocean away from me, have been gaslit into thinking they can’t be anything other than consumers. I know it can be difficult to grasp the concept, but you can refuse a service if the terms are unacceptable. It is possible to go into a transaction with open eyes and full knowledge of the rights granted to you by law and responsibilities demanded of you by the contract.

          That’s why I say “customer”. It’s a reminder to myself that I should demand equitable treatment, even if the chances are slim unless the courts get involved. You don’t have to jump into the meat grinder willingly.

  • minorkeys@lemmy.world · +33 · 2 days ago

    Consumers have a right to be informed of information relevant to them making purchasing decisions. AI is obviously relevant to the consumer and should be disclosed.

  • who@feddit.org · +28 · edited · 2 days ago

    “Calls to scrap” the disclosures makes it sound like a societal movement, when in fact it’s just two people with obvious bias: Tim Sweeney and some guy who promotes Tim Sweeney’s products on YouTube.

    I don’t give a flying frog what they think. When I allow someone to sell me something, I like to know what’s in it.

    • ameancow@lemmy.world · +17 −1 · edited · 2 days ago

      Yah, the more I use AI, the more I can detect the absolute bullshit people on both sides spew.

      It’s the most amazingly complicated averaging machine we’ve ever invented. It will take the most interesting source materials, other people’s most unique ideas, the most creative works, and find a way to extract the safest, most average common qualities between them. This isn’t a model problem or an input problem; it’s fundamental to how generative AI works.

      It helps with searching for things online, and it helps create guide plans for taking on new tasks, like learning a new skill. It’s far better at teaching you how to do something like coding than it is when left to code on its own while you copy and paste. It can certainly do that, but you spend so much time correcting and fixing it that you’re far better off learning the code yourself and how it works.

      Same with art: the people using it to best effect are themselves already artists, and they use AI to thumbnail compositions or rough layouts, color tests and such, and then just do the work themselves, but faster, because they already know roughly what direction they’re going.

      But using it to write your scripts, to copy/paste code, to generate works of art… it’s literally just giving you other people’s ideas mashed together and unseasoned.

    • mirshafie · +7 · edited · 2 days ago

      I’m not even opposed to AI in games. I’d love to see more granular disclosures, but Steam-style disclosure should be the bare minimum.

  • daniskarma@lemmy.dbzer0.com · +5 · 1 day ago

    The thing is that it’s kind of voluntary. Game developers could have used AI to develop the game, and if they didn’t want to disclose it, no one would know.

    Unless the use of AI is the very crappy “AI art” that’s easy to notice, most uses would be very hard or outright impossible to detect, so there’s no real way to audit the legitimacy of the tag.

    And this will end up like r/art, where the mods deleted a post after accusing the artist of using AI when it was not AI, and the final mod answer was “change your art style so it doesn’t look like AI”. A brutal witch-hunt, in the end.

  • megopie@lemmy.blahaj.zone · +64 · 2 days ago

    The reality is that it’s often stated that generative AI is an inevitability: that regardless of how people feel about it, it’s going to happen and become ubiquitous in every facet of our lives.

    That’s only true if it turns out to be worth it: if the cost of using it is lower than the alternative, and the market is just as willing to buy. If the current cloud-hosted tools cease to be massively subsidized, and consumers choose to avoid it, then it’s inevitably a historical footnote, like turbine-powered cars, Web 3.0, and LaserDisc.

    Those heavily invested in it, either literally through shares of Nvidia, or figuratively through the potential to deskill and shift power away from skilled workers at their companies, don’t want that to be a possibility; they need to prevent consumers from having a choice.

    If it were an inevitability in its own right, if it were just as good and easily substitutable, why would they care about consumers knowing before they paid for it?

    • U7826391786239@lemmy.zip · +48 · 2 days ago

      relevant article https://www.theringer.com/2025/11/04/tech/ai-bubble-burst-popping-explained-collapse-or-not-chatgpt

      AI storytelling is an amalgam of several different narratives, including:

      Inevitability: AI is the future; its eventual supremacy is both imminent and certain, and therefore anyone who doesn’t want to be left behind had better embrace the technology. See Jensen Huang, the CEO of Nvidia, insisting earlier this year that every job in the world will be impacted by AI “immediately.”

      Functionality: AI performs miracles, and the AI products that have been released to the public wildly outperform the products they aim to replace. To believe this requires us to ignore the evidence obtained with our own eyes and ears, which tells us in many cases that the products barely work at all, but it’s the premise of every TV ad you watch out of the corner of your eye during a sports telecast.

      Grandiosity: The world will never be the same; AI will change everything. This is the biggest and most important story AI companies tell, and as with the other two narratives, big tech seems determined to repeat it so insistently that we come to believe it without looking for any evidence that it’s true.

      As far as I can make out, the scheme is essentially: Keep the ship floating for as long as possible, keep inhaling as much capital as possible, and maybe the tech will get somewhere that justifies the absurd valuations, or maybe we’ll worm our way so far into the government that it’ll have to bail us out, or maybe some other paradigm-altering development will fall from the sky. And the way to keep the ship floating is to keep peddling the vision and to seem more confident that the dream is inevitable the less it appears to be coming true.

      Speaking for myself, MS can thank AI for being the thing that finally made me completely ditch Windows after using it for 30+ years.

    • Katana314@lemmy.world · +31 · 2 days ago

      Don’t forget, “Turns out it was a losing bet to back DEI and Trans people”.

      This is something scared, pathetic, loser, feral, spineless, sociopathic, moronic fascists come up with to try to win a crowd larger than an elevator: assume the outcome is a foregone conclusion and try to talk around it, or claim it’s already happened.

      Respond directly. “What? That’s ridiculous. I’ve never even seen ANY AI that I liked. Who told you it was going to pervade everything?”

    • WanderingThoughts · +16 · 2 days ago

      That reminds me of how McDonald’s and other fast food chains are struggling. People figure it’s too expensive for what you get, after prices going up and quality going down for years. They forgot that people buy when the price and quality are good. Same with AI. It’s all fun if it’s free or dirt cheap, but people don’t buy expensive slop.

    • Riskable@programming.dev · +4 −1 · 2 days ago

      If the cost of using it is lower than the alternative, and the market is just as willing to buy. If the current cloud-hosted tools cease to be massively subsidized, and consumers choose to avoid it, then it’s inevitably a historical footnote, like turbine-powered cars, Web 3.0, and LaserDisc.

      There’s another scenario: if Big AI stops buying up all the available stock of DRAM and GPUs, running local AI models on your own PC becomes a lot more realistic.

      I run local AI stuff all the time, from image generation to code assistance. My GPU fans spin up for a bit as my PC’s power draw increases, but other than that it doesn’t have much of an impact on anything.

      I believe this is the future: local AI models will eventually take over, just like PCs took over from mainframes. There are a few thresholds that need to be met for that to happen, but it seems inevitable. It’s already happening for image generation, where the local AI tools are so vastly superior to the cloud stuff that there’s no contest.
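
      For a sense of how low the barrier already is, here’s roughly what local image generation looks like with the open tooling (a sketch using Hugging Face’s diffusers library; assumes a CUDA GPU and that the Stable Diffusion 1.5 weights are still published under this model id):

      import torch
      from diffusers import StableDiffusionPipeline

      # Download (once) and run Stable Diffusion 1.5 entirely on the local
      # GPU; float16 keeps it to roughly 4 GB of VRAM.
      pipe = StableDiffusionPipeline.from_pretrained(
          "stable-diffusion-v1-5/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      ).to("cuda")

      image = pipe("a turbine-powered car in a retro-futuristic city").images[0]
      image.save("local_gen.png")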

    • CatsPajamas@lemmy.dbzer0.com · +1 −1 · 2 days ago

      MIT, just two years out from a study saying there was no tangible business benefit to implementing AI, has now released a study saying it’s capable of taking over more than 10% of jobs. Maybe that’s hyperbolic, but you can see it would take a massssssive amount of cost for that not to be worth it. And we’re still pretty much just starting out.

      • Jayjader@jlai.lu · +1 · 1 day ago

        I would love to read that study, as going off of your comment I could easily see it being a case of “more than 10% of jobs are bullshit jobs à la David Graeber so having an « AI » do them wouldn’t meaningfully change things” rather than “more than 10% of what can’t be done by previous automation now can be”.

        • CatsPajamas@lemmy.dbzer0.com · +1 · 23 hours ago

          Summarized by Gemini

          The study you are referring to was released in late November 2025. It is titled “The Iceberg Index: Measuring Workforce Exposure in the AI Economy.” It was conducted by researchers from MIT and Oak Ridge National Laboratory (ORNL). Here are the key details from the study regarding that “more than ten percent” figure:

          • The Statistic: The study found that existing AI systems (as of late 2025) already have the technical capability to perform the tasks of approximately 11.7% of the U.S. workforce.
          • Economic Impact: This 11.7% equates to roughly $1.2 trillion in annual wages and affects about 17.7 million jobs.
          • The “Iceberg” Metaphor: The study is named “The Iceberg Index” because the researchers argue that visible AI adoption in tech roles (like coding) is just the “tip of the iceberg” (about 2.2%). The larger, hidden mass of the iceberg (the other ~9.5%) consists of routine cognitive and administrative work in other sectors that is already technically automatable but not yet fully visible in layoff stats.
          • Sectors Affected: Unlike previous waves of automation that hit blue-collar work, this study highlights that the jobs most exposed are in finance, healthcare, and professional services. It specifically notes that entry-level pathways in these fields are collapsing as AI takes over the “junior” tasks (like drafting documents or basic data analysis) that used to train new employees.

          Why it is different from previous studies: Earlier MIT studies (like one from early 2024) focused on economic feasibility (i.e., it might be possible to use AI, but it’s too expensive). This new 2025 study focuses on technical capacity, meaning the AI can do the work right now, and for many of these roles it is already cost-competitive.

          https://www.csail.mit.edu/news/rethinking-ais-impact-mit-csail-study-reveals-economic-limits-job-automation
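
          (Quick sanity check, my arithmetic rather than Gemini’s: the headline figures are at least internally consistent with the ~151 million workers the report itself models.)

          us_workforce = 151_000_000    # workers modeled in the Iceberg report
          exposed_share = 0.117         # "11.7% of the U.S. workforce"
          exposed_jobs = us_workforce * exposed_share
          print(f"{exposed_jobs / 1e6:.1f}M jobs")           # ~17.7M, as claimed
          print(f"${1.2e12 / exposed_jobs:,.0f} avg wage")   # ~$68k, plausible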

          • Jayjader@jlai.lu · +1 · 20 hours ago

            I’ll be honest, that “Iceberg Index” study doesn’t convince me just yet. It’s entirely built on using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of such an approach are in paid journals that I can’t access. I also can’t figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available “tools” (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor’s 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide which skills it covers, but they claim they manually reviewed this LLM’s output, so I guess that counts.

            Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools

            from https://iceberg.mit.edu/report.pdf

            Large Population Models is https://arxiv.org/abs/2507.09901, which mostly references https://github.com/AgentTorch/AgentTorch, which gives the following usage example:

            user_prompt_template = "Your age is {age} {gender},{unemployment_rate} the number of COVID cases is {covid_cases}."
            # Using Langchain to build LLM Agents
            agent_profile = "You are a person living in NYC. Given some info about you and your surroundings, decide your willingness to work. Give answer as a single number between 0 and 1, only."
            

            The whole thing perfectly straddles the line between bleeding-edge research and junk science for someone like me who hasn’t been near academia in 7 years. Most of the procedure looks like they know what they’re doing, but if the entire thing is built on a faulty premise, then there’s no guaranteeing any of their results.

            In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn’t necessarily a case of MIT as a whole changing its tune.

            (The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)

  • Aurenkin@sh.itjust.works · +47 · edited · 2 days ago

    The ethics and utility (or lack thereof) of AI is an important discussion in its own right. In terms of Steam, though, I really don’t think it’s relevant. Players want the disclosures; that’s it, that’s all that should really matter. Am I missing some nuance here?

    • borth@sh.itjust.works · +30 · 2 days ago

      The nuance is that Tim doesn’t give a shit what players want; he and his cronies don’t want it because it’s harder to convince someone to play AI slop when they know it’s AI slop before they even try it 😂

    • WanderingThoughts · +22 · 2 days ago

      It might make players demand lower prices if some cheap AI slop is used in the game. That’s the thing publishers want to avoid. They want to sell cheap slop for full price and pocket the difference. That’s what it’s about in the end.

      • Red_October@lemmy.world · +9 · 2 days ago

        I haven’t really seen demands for lower prices on AI slop, but I’ve seen a lot of outright refusal to buy at any price, and returns when the disclosure came later.

        • CatsPajamas@lemmy.dbzer0.com · +1 · 2 days ago

          I don’t think the Epic guy is making an argument for slop; he’s just saying that gen AI is at the point where avoiding it is as much of a choice as deciding to use it. Generating the basis for digital art with something like Flux, then converting that into a 3D asset, with or without help from other AIs, would count, but could be made to look just as nice as something that didn’t use those tools and took significantly longer. I understand that argument.

          What it fails to understand is that, for the foreseeable future, that is not how this tech is going to be used. It will be used by relative amateurs who push out garbage as quickly as possible. Maybe in five years there’s an argument to be made here, but even then I doubt it. People just won’t care about good utilization of AI because they’ll never even notice it. They’ll still hate the slop, even as it inevitably becomes less sloppy; they’ll be able to tell the difference just based on the quality of the other aspects of the game.

    • Darkcoffee@sh.itjust.works · +24 −2 · 2 days ago

      They want it? I don’t know, the review score of Black Ops 7 begs to differ.

      Personally, I’ll give money to a hard-working indie dev who may sporadically use AI to help with their work, over a big company shoving AI into everything to replace workers.

    • Sl00k@programming.dev · +1 −3 · 2 days ago

      I posted this in another comment, but I think the nuance is really in what they used the AI for. Are they using Claude Code for the programming but doing the entire artwork by hand? How many people really care about that?

      Compare that to someone who tried to one-shot a slop game with full AI assets and is just trying to make a quick buck.

  • kazerniel@lemmy.world · +26 · 2 days ago

    I’m glad for those disclosures (because I’m not touching AI games), but tons of devs don’t disclose their AI usage, even in obvious cases, leaving us guessing :/

    • Bassman1805@lemmy.world · +6 −1 · 2 days ago

      There’s also the massive gray area of “what do YOU define AI to mean?”

      There are legitimate use cases for machine learning and neural networks besides LLMs and “art” vomit. Like what AI used to mean to gamers: how the computer plays the game against you. That probably isn’t going to upset many people. (A toy example of that older sense follows below.)

      (IIRC, Steam’s AI disclosure is specifically about AI-generated graphics and music so that ambiguity might be settled here)
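
      For the curious, that older sense of game “AI” is just search and heuristics. A toy minimax opponent for tic-tac-toe (my own illustration, nothing Steam-specific) is the whole idea in miniature:

      # Classic game "AI": exhaustive minimax search. No training data,
      # no neural networks, just the computer playing against you.
      def winner(b):
          lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
          for i, j, k in lines:
              if b[i] and b[i] == b[j] == b[k]:
                  return b[i]
          return None

      def minimax(b, player):
          w = winner(b)
          if w:
              return (1 if w == "O" else -1), None  # score from O's perspective
          if all(b):
              return 0, None                        # draw
          results = []
          for m in (i for i, c in enumerate(b) if not c):
              b[m] = player
              score, _ = minimax(b, "O" if player == "X" else "X")
              b[m] = None
              results.append((score, m))
          return (max if player == "O" else min)(results)

      # X just took a corner; the computer (O) finds the only non-losing reply.
      board = ["X", None, None, None, None, None, None, None, None]
      score, move = minimax(board, "O")
      print(f"computer plays square {move}")  # square 4 (center), forcing a draw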

        • AgentRocket@feddit.org · +2 · 1 day ago

          I’d say it depends on whether or not the voice actor whose voice the AI is imitating has agreed and is fairly compensated.

          I’m imagining a game, where instead of predefined dialog choices, you talk into your microphone and the game’s AI generates the NPCs answer.

  • RampantParanoia2365@lemmy.world · +11 · 2 days ago

    …what calls? No one is calling for this. One dude said it was unnecessary. That’s not a call, it’s an opinion. He’s not out picketing for the end of fucking AI labels.

    • _cryptagion [he/him]@anarchist.nexus · +6 −1 · 2 days ago

      Whether he is or isn’t, they saw a chance to create a huge amount of good PR for Valve while doing and spending absolutely nothing. I mean, look at the number of upvotes this post has. All they had to do was take what appears to be a principled stand.