

a tiny bit of dead person, sounds like
The contributing guide is here, but for adding simple places I’ve had good luck just doing it from the app. The hamburger menu on the bottom right has an “add place” option, and I found it very intuitive from there personally. Good luck and thanks for being willing to contribute!
It’s really pretty, I might try this on my pc. I mostly use matrix from my phone tho, and I didn’t spot any native support for that so I can’t actually switch.
Their website and roadmap didn’t mention it, do you know if they support video calls?
What client would you consider good, then? I’ve liked element so far but I’m always happy to try something new
Counterpoint: medical researchers don’t make line go up
I also really like Small Form Factor form factors
(sorry couldn’t resist)
Can you please elaborate on that second one, or drop a name so I can look into it? Sounds very counterintuitive and like something I wanna know
Wow you’re more patient than I am, if you type on your phone here a lot lmao. Thanks for the answers!
You make a good point about systemd being monolithic, and I hate to join the pile of replies that fully ignore it to only talk about the thorn… but I gotta admit I’m really curious how you type it.
I’m guessing you’re not using text replacement and that you’re typing it instead, but do you have it bound to a key combo, replacing a little-used character, etc? Do you use the same method on mobile, if you also use the thorn there? If you type like this everywhere, are you concerned about your distinct typing patterns making you easy to dox?
Sorry to hit you with a bunch of questions unrelated to your actual comment, I don’t have strong opinions on systemd so don’t have much to contribute there lol
I’ve given up on running ASA using a 5900x and a 10gb 3080, cuz I can barely get a stable 40fps at 4K with clouds turned off and the settings as low as I can tolerate them (including upscaling turned on). If you’re playing at a lower resolution you’ll obviously have a much better time with it than I do, but fair warning this game is just unoptimized as fuck.
My performance is entirely gpu-bound, but that’s also what you’d expect at 4K so if you run 1080p for example a 5000-series cpu would probably give you a big jump in performance. A 5950x, 5900xt, 5900x, 5800xt, etc would be a really nice upgrade from your current cpu and with a bios update your board should support them all. If you’re able to find one, a 5800x3d or 5700x3d would be ideal if this is a purely gaming pc for you, though they’re not as powerful for productivity stuff.
The only other thing I’d check is whether you have vsync enabled or a framerate cap. Your gpu usage being low is probably just from the cpu bottleneck, but those two things can also cause it.
This is the one part I’m not clear on, are the filters different from just keyword filters? Will they work with content produced on lemmy/mastodon/etc? If they’re different from keyword filters, how will they be enforced cross-instance?
The other changes look really great and I really appreciate the devs putting in so much work and making so much progress - especially since some of these features come from feedback they only got pretty recently.
I never thought I’d say this, but in this one specific scenario I’m actually perfectly happy with him being alive too
To be clear, I wasn’t saying you’re wrong. I just like homomorphic encryption a lot and love a chance to tell people about it lmao
There are no LLMs that process encrypted tokens.
Check out homomorphic encryption! AFAIK it’s not used in any LLMs just yet but the plans are in place and it’s tantalizingly close
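If you want to see the homomorphic property in action with nothing but the Python standard library: textbook RSA (no padding, tiny primes, absolutely not secure - this is purely a toy illustration, not how real FHE schemes work) happens to be multiplicatively homomorphic, meaning multiplying two ciphertexts gives you a ciphertext of the product:

```python
# Toy demo of a homomorphic property: textbook RSA (no padding) is
# multiplicatively homomorphic. NOT secure -- tiny primes, no padding,
# purely to illustrate computing on encrypted values.

# Classic textbook parameters: p=61, q=53
n = 61 * 53          # modulus, 3233
e = 17               # public exponent
d = 2753             # private exponent: (17 * 2753) % 3120 == 1

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# Multiply the *ciphertexts* -- the plaintexts are never exposed
c_product = (ca * cb) % n

# Decrypting yields the product of the plaintexts (valid while a*b < n)
print(decrypt(c_product))  # 42
```

Fully homomorphic schemes (CKKS, TFHE, etc.) extend this idea so you can do both additions and multiplications on ciphertexts, which is what you’d need to run a neural net on encrypted inputs.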
Hold up, tech noir… 2??? Thank you so much for introducing this to my life
Was this Vancouver lmao
So if I’m understanding you right, your crush is primarily on the person herself and/or who she wants to be and the character aspect of it is secondary? That’s kinda the opposite of what my ignorant guess would be so it’s really interesting to read. Thanks for answering!
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and chopping their weights from 16-bit floats down to 2-bit values or something reduces their capabilities a lot more.
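For a feel of why very low bit-widths hurt so much, here’s a minimal sketch of symmetric linear quantization (the simplest possible scheme - real quantizers use per-group scales, calibration data, etc., but the tradeoff is the same):

```python
# Minimal symmetric linear quantization sketch: map floats to signed
# `bits`-bit integers via a single scale factor, then map back and
# measure the rounding error.

def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, just 1 for 2-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.1, -0.42, 0.07, 0.9]         # made-up example weights

for bits in (8, 2):
    q, s = quantize(weights, bits)
    restored = dequantize(q, s)
    err = max(abs(w - r) for w, r in zip(weights, restored))
    print(f"{bits}-bit max error: {err:.4f}")
```

At 2 bits the grid only has the values -1, 0, and 1 times the scale, so most weights collapse to zero and the error explodes; at 8 bits the error stays tiny. Methods like GPTQ pick scales much more cleverly, but fewer bits always means a coarser grid.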
Yep, the OpenAI api and/or the ollama one work for this no problem in most projects. You just give it the address and port you want to connect to, and that address can be localhost, your lan, another server on another network, whatever.
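To make that concrete, here’s a sketch of building an OpenAI-style chat request against a local server using only the standard library. The port 11434 and the model name `llama3` are Ollama’s defaults and my assumptions - swap in whatever your server actually exposes:

```python
import json
from urllib import request

# Assumed local endpoint: Ollama serves an OpenAI-compatible API under
# /v1 on port 11434 by default; any compatible server works the same way.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completion request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # local servers usually ignore the key, but the header keeps
            # OpenAI-client-shaped tooling happy
            "Authorization": "Bearer unused",
        },
    )

req = build_chat_request("llama3", "Say hi in one word.")
print(req.full_url)

# Actually sending it requires a running server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Pointing it at another machine is just a matter of changing `BASE_URL`; the request shape stays identical, which is why so many projects interoperate with it.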
If you haven’t seen the umbrella academy you should watch it just to see Nathan’s actor absolutely killing it in that role, he does such an incredible job. He brings a lot of the same humour but a lot of depth and actual character development that misfits had… less of 😂