Davriellelouna@lemmy.world to science@lemmy.world · English · 23 days ago
Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews (www.theguardian.com)
cross-posted to: technology@lemmy.world, technology@lemmit.online
Squizzy@lemmy.world · 23 days ago
Yeah absolutely, but researchers who are attempting to skirt the review process to receive only positive feedback are not respecting the process.
CrypticCoffee@lemmy.ml · 20 days ago
What's to respect in an AI review where they didn't even review the output? It's a lazy LLM review. It deserves to be gamed.
Squizzy@lemmy.world · 20 days ago
Yes, the reviewers should not be using it, and the researchers shouldn't be submitting papers with the intention of gaming it. AI is not all LLM chatbots; there are legitimate AI implementations used in research.