• 4 Posts
  • 23 Comments
Joined 2 years ago
Cake day: June 26th, 2023

  • And then you have a trained model that requires vast amounts of energy per request, right? It doesn’t stop at training.

    You need obscene amounts of GPU power to run the ‘better’ models with reasonable response times.

    In comparison, I could game on my modest rig just fine, but I can’t run a 22B model locally in any useful capacity while programming.
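
    As a rough sketch of why (back-of-the-envelope numbers; the 22B size is just the example above, the bytes-per-parameter figures are the usual fp16/8-bit/4-bit approximations, and KV cache and activations are ignored):

```python
# Rough estimate of the VRAM needed just to hold a model's weights.
params = 22e9  # a 22B-parameter model, as in the example above

bytes_per_param = {
    "fp16": 2.0,   # half-precision weights
    "int8": 1.0,   # 8-bit quantization
    "q4":   0.5,   # 4-bit quantization
}

for fmt, b in bytes_per_param.items():
    gib = params * b / 1024**3
    print(f"{fmt}: ~{gib:.0f} GiB for weights alone")

# fp16: ~41 GiB, int8: ~20 GiB, q4: ~10 GiB -- before KV cache and activations,
# which is why a typical gaming GPU with 8-16 GB struggles with such models.
```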

    Sure, you could argue gaming is a waste of energy too, but that doesn’t mean we can’t argue that asking an AI how long to boil a single egg shouldn’t cost the energy of boiling a shitload of them. Or each time I start typing a line of code, for that matter.


  • Well, from what I understand, admins have a couple of config keys: PF_OPTIMIZE_IMAGES to toggle the entire optimization pipeline (or accept supported formats as-is), and IMAGE_QUALITY, an integer percentage that tweaks the lossy compression for formats that support it.
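
    For reference, that would look something like this in the instance’s .env (the two keys as mentioned above; the values here are just examples, not defaults I’ve verified):

```
# .env -- example values, not verified defaults
PF_OPTIMIZE_IMAGES=true   # toggle the whole optimization pipeline
IMAGE_QUALITY=80          # lossy quality as an integer percentage
```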

    The resize to 1080 px is even hardcoded in the optimization pipeline. I think I saw a toggle for it on the PHP side, but admins only seem to get the toggle for storage optimization as a whole; the 1080 value itself is sadly not exposed as a settable parameter.

    As a creator, I was interested in retaining the maximum possible quality. Since PNG is widely supported and by design only uses lossless compression, while staying well under 15MB for files with common image aspect ratios, it was the winner in that regard. My uncropped 24MP images then come out at 3MB-ish.

    Other formats tend to be way smaller because lossy compression is so effective; most images I checked on Pixelfed are resized and optimized JPEGs well under 1MB (around 600-800KB). That is probably the file format and size you’ll encounter most.

    My own file size comparisons were done on RAW exports from Darktable across different file formats, qualities and resolutions. The PHP image pipeline Pixelfed uses will probably yield comparable results for the same image.
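
    If you want to reproduce that kind of comparison yourself, here is a quick sketch with Pillow (my tests were done in Darktable, so this is only an approximation of the same idea; the file names are placeholders):

```python
# Re-encode one source image in a few formats/qualities and compare file sizes.
import os
from PIL import Image

src = Image.open("export.png").convert("RGB")  # placeholder input file

variants = {
    "out.png":  dict(format="PNG", optimize=True),               # lossless
    "out.jpg":  dict(format="JPEG", quality=80, optimize=True),  # lossy
    "out.webp": dict(format="WEBP", quality=80),                 # lossy
}

for path, kwargs in variants.items():
    src.save(path, **kwargs)
    print(f"{path}: {os.path.getsize(path) / 1024:.0f} KiB")
```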

    If I were to advocate for new settings, it would be cranking up the resolution to more modern standards (say, fitting a 4K monitor) and converting to WebP at around 85% quality (or sticking with 80%).

    It’s tricky, though, as that may introduce double-lossy pipelines when converting from other lossy formats. That’s why I looked into the resolution settings first. If you upload an image that is too large, the pipeline currently decodes your (possibly lossy) image, resizes it (probably losing detail) and re-encodes it at the configured lossy quality, if applicable.
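
    In Pillow terms (not Pixelfed’s actual PHP code, just the same idea), that step amounts to something like the following, which is where the double-lossy worry comes from:

```python
# Sketch of the decode -> resize -> re-encode step described above.
from PIL import Image

MAX_SIZE = 1080  # the hardcoded resize mentioned earlier
QUALITY = 80     # the configured IMAGE_QUALITY

img = Image.open("upload.jpg")              # decode (the upload may already be lossy)
img.thumbnail((MAX_SIZE, MAX_SIZE))         # downscale in place, keeping the aspect ratio
img.save("optimized.jpg", quality=QUALITY)  # re-encode: a second lossy generation
```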

    Thus, first order of business: at least publish ideal image sizes.

    Second: better quality control. That might involve settings per file format, or a unified output file format.


  • Before I forget: many thanks for your response! It’s nice to discuss this.

    That distinction is important indeed. I could always add a notice to the README to underline that for potential users.

    I’m going to make a dependency map of our own libs and license the language tools and their dependencies under the LGPL, so that they can be embedded relatively freely in other products. The post-processing and analysis libs/applications will then be licensed under the AGPL (with dual licensing). We had other libraries under the GPL before, but in the current landscape it seems wise to cover the hosted and embedded variations as well.