circle 12mo ago • 100%
I used to believe there was good demand too, but sadly it's a very small minority.
circle 12mo ago • 100%
Oh yes, and to top it off I have small hands - I can barely reach the opposite edge without using two hands. Sigh.
circle 1y ago • 100%
Thanks, I'll check that out.
circle 1y ago • 100%
Agreed. YouTube ReVanced works well too. But are there alternatives for iOS?
circle 1y ago • 100%
This is such a good idea!
Intuition: two texts are similar if concatenating one onto the other barely increases the gzip size. No training, no tuning, no params; that's the entire algorithm. https://aclanthology.org/2023.findings-acl.426/
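A minimal sketch of the idea, using the normalized compression distance (NCD) formulation from the linked paper; the function and variable names here are my own, and `gzip` overhead on very short strings can add noise:

```python
import gzip

def gzip_size(s: str) -> int:
    # Compressed length in bytes
    return len(gzip.compress(s.encode("utf-8")))

def ncd(x: str, y: str) -> float:
    # Normalized Compression Distance: lower = more similar.
    # If y shares structure with x, compressing the concatenation
    # x+y costs barely more than compressing x alone.
    cx, cy = gzip_size(x), gzip_size(y)
    cxy = gzip_size(x + " " + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = "the cat sat on the mat and the cat slept on the mat"
b = "the cat sat on the hat and the cat slept on the hat"
c = "quantum chromodynamics describes the strong interaction"

# a and b should be closer to each other than a is to c
print(ncd(a, b), ncd(a, c))
```

The paper pairs this distance with a k-nearest-neighbour classifier; the snippet above is just the distance itself.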
circle 1y ago • 50%
circle 1y ago • 100%
Oh nice, thanks!
As the title suggests, I have a few LLM models and wanted to see how they perform on different hardware (CPU-only instances, and GPUs: T4, V100, A100). Ideally it's to get an idea of the performance and overall price (VM hourly rate / efficiency). Currently I've written a script to calculate ms per token, RAM usage (memory_profiler), and total time taken. Wanted to check if there are better methods or tools. Thanks!
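For reference, a minimal sketch of the kind of harness described (ms per token, peak memory, total time). The `benchmark` and `fake_generate` names are hypothetical stand-ins for whatever inference call you actually use (a llama.cpp binding, `transformers` `.generate()`, etc.), and `tracemalloc` only sees Python-heap allocations, not GPU memory:

```python
import time
import tracemalloc

def benchmark(generate, prompt: str, n_tokens: int = 128) -> dict:
    # Time one generation call and report ms/token plus peak Python heap.
    tracemalloc.start()
    start = time.perf_counter()
    tokens = generate(prompt, n_tokens)   # stand-in for the real model call
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "total_s": elapsed,
        "ms_per_token": 1000 * elapsed / max(len(tokens), 1),
        "peak_mem_mb": peak / 1e6,
    }

# Dummy generator so the sketch runs standalone
def fake_generate(prompt, n):
    return ["tok"] * n

stats = benchmark(fake_generate, "hello")
print(stats)
```

From there, cost per token is just the VM hourly rate divided by tokens generated per hour at the measured rate.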
circle 1y ago • 0%
Haha. That's true!
I've been having some random issues with the apps, so now I mostly use wefwef in the browser.
Can't wait to see Sync for Lemmy.
circle 1y ago • 100%
I already miss my muscle-memory operations from Sync :/