

thereās also overpush, which is meant as a self-hostable drop-in replacement for pushover and does not use ai afaict.
Profile picture drawn by Paws and Claws and licensed under the Creative Commons Attribution Sharealike 4.0 International license(cc-by-sa 4.0)
currently migrating from jackr@lemmy.dbzer0.com


thereās also overpush, which is meant as a self-hostable drop-in replacement for pushover and does not use ai afaict.


animals are like sentient beings yāknow, a clanker is a⦠matrix or a bunch of matrices or something


how⦠what⦠how⦠why⦠why would you thinkā¦


It has, but I honestly thought it was fake and/or satire


iirc there are also some that detect humidity and infer when to stop from that somehow


is this real news or āprojectedā from polymarketās āpredictionsā?
changed their name to fugging iirc
I use libredirect for this. You can select multiple nitter instances and it supports a lot of other sites as well. It also gives you the option to redirect to the original if something doesnāt load. I also use a custom userscript to redirect lesswrong to greaterwrong because lesswrong has a tendency to crash my browser if I am doing anything else at the same time on mobile </ad>


you can also explain the mistake they made, instead of doing⦠this?


isnāt one person controlling a server also an ideological decision?


right mate, I am sure you can draw any equivalences with bestiality and such yourself, so I wonāt explicate on them. I just want to say, you donāt have to defend the man-made horrors within our comprehension of animal product industries if you donāt want to be a vegan. I am not a vegan, because I canāt afford to. You can just say āthat shitās fucked upā.


If I were to artificially inseminate a woman with sperm from a spermbank without her consent, would that be sexual assault?


isnāt this making fun of narratives people use to justify not improving their personal use?


Iām sure the MIC is going to be very happy with all of their new tech being made public


surely a newer statistics machine will correctly generate a statistically unlikely password. These Models Will Get Better. This Will Get Fixed.


wonāt somebody think of the poor friends of childrapists who are unable to go to this one conference now? oh the humanity!


of course, this witch is a known and convicted child rapist and trafficker, dead on I say, dead on
Iām sure you mustāve heard about the disastrous effects on the environment and the electrical grid by now, as well as the crisis in computer parts (GPUs, RAM, SSDs and now also HDDs) caused by AI data centres. Besides this, AI output is polluting the internet. This can be used to very quickly spin up a lot of sites to support a narrative, or to fill a site with ācontentā that increases SEO to get advertiser money. This makes looking anything up these days almost impossible. This is especially a problem because AI is unreliable. AI works purely off of statistics and doesnāt have any conception of truth or falsehood, so it generates what is in philosophical terms called ābullshitā (real term!). Output can be true accidentally, but you are never getting āthe truthā. This property has already been exploited by companies by generating data optimised for LLM consumption in order to advertise their products. Many chatbots are also built in a way that is very dangerous. They are optimised to keep you using them, which is often done by making them agree with pretty much everything you say. This has been shown multiple times to cause psychotic breakdowns in all kinds of people, even if they started out using it to, for example, write code. However, the group most at risk of this are the people using the bots as an alternative to a therapist. unfortunately, AI companies encourage this usage through initiatives like gpt health. It also turns out that AI dependence can harm your ability to learn certain things. This makes sense intuitively, a coder who relies on a chatbot to write parts of the code or to debug is less likely to develop those skills themselves. This is especially a problem with increased usage in schools. Yet more ethical problems arise with the image generation modes of AI, which, unfortunately(but unsurprisingly) turn out to be trained on like⦠A LOT of child porn. This has been one of the controversies with grok recently. Unfortunately, there is no real way to stop someone from asking for anything in the training data. Best you can do is either try to give negative incentive to the model or to hardecode in a bunch of phrases to automatically reject. This is a fundamental problem with the architecture. Generation of revenge porn, child porn and misinformation has run rampant. AI is also a privacy and security nightmare. Due to the fundamental architecture of AI models there is no way to distinguish between code and instructions. This means that āagentsā can be injected with instructions to, for example, leak confidential data. This is a big problem with parties like hospitals attempting to integrate AI into their workflow. This is in addition to pretty much all models being run āin the cloudā due to the high costs associated with running a model. Speaking of costs, all of these models currently operate at a gigantic loss. They are currently essentially circulating an IOU between themselves and a few hardware companies(nvidia), but that cannot last forever. If any of these companies survive, they will be way more expensive to use than now. Many of the current companies are also pretty evil, being explicitly associated by figures like Peter Thiel, whose stated goal in life has been to end democracy. There are also some arguments surrounding copyright. While I do not want to strengthen copyright law and so will be careful around my comments on this topic, it is certainly true that ai often outputs essentially exactly someone elses work without crediting them.
This is all I could think of off the top of my head, but there surely is more. hope this helps!


hey, thanks for reaching out ^ ^. keeping an eye on new accounts makes sense, especially with open registrations. I was wondering about what exactly happened. I figured one mustve blocked/defeded the other but couldnāt figure it out from a cursory glance at the modlog. hope no issues occur with them.
ps: I think fediseer lists the instance as registrations closed, I donāt know if that is intentional or not.
That didnāt really surprise me tbh, I follow a blog that is ostensibly meant to be a furry blog but gets repeatedly sidetracked into cryptographic stuff, and the conclusion for anything e2ee is essentially that only signal is worth using if you are looking for actual e2ee. But yeah, encryption is generally pretty bad on this kinda thing.