exu

joined 1 year ago
[–] [email protected] 3 points 10 months ago (1 children)

If you're using llama.cpp, have a look at the GGUF models by TheBloke on Hugging Face. He lists the approximate RAM required in each readme, broken down by quantisation level.

From personal experience, I'd estimate around 12 GB for 7B models, based on how full RAM was on a 16 GB machine. For Mixtral, at least 32 GB.
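As a rough sanity check, the GGUF file on disk is close to a lower bound for RAM use, since the weights get mapped into memory, plus some overhead for the KV cache that grows with context size. A minimal sketch, assuming a llama.cpp checkout and a hypothetical model path:

```bash
# The file size is roughly the memory footprint of the weights
ls -lh models/mistral-7b-instruct.Q4_K_M.gguf

# A larger context (-c) means a larger KV cache, so more RAM on top of that
./main -m models/mistral-7b-instruct.Q4_K_M.gguf -c 2048 -p "Hello" -n 64
```

(Older checkouts call the binary `./main`; newer builds name it `llama-cli`.)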

[–] [email protected] 1 points 10 months ago

I've found it's pretty good at translating between languages and formats, so to speak.

Converted some bash to Python relatively quickly by giving it snippets and fixing errors as it made them.

I also had success generating an ansible playbook based on my own previously written install instructions for SillyTavern and llama.cpp.

I could do both of those tasks myself, but that would be more difficult than fixing up a mostly correct translation.

[–] [email protected] 0 points 10 months ago (2 children)

Imo this would be impossible to implement. The user can just remove whatever mark was inserted.

I'll also leave this here: https://github.com/ggerganov/llama.cpp

[–] [email protected] 1 points 10 months ago

I've also gone down that rabbit hole and found Vivictpp pretty good. It plays two videos side by side and lets you swipe between them, like the imgsli comparisons you mentioned.
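If you want to try it, the invocation is just the two files (a sketch from memory; check `--help` for the exact options in your version):

```bash
# Plays both videos on top of each other; drag the split line to compare
vivictpp source.mp4 encode.mp4
```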

There's a whole range of metrics that try to approximate the quality difference between a video source and its encode: PSNR, SSIM, MS-SSIM, VMAF. All of them have areas where they're strong, and all have tricks you can use to cheat them.
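For reference, ffmpeg can compute most of these directly; VMAF needs a build with libvmaf, while PSNR and SSIM are built in. File names below are placeholders, and the distorted input goes first:

```bash
# VMAF (requires ffmpeg compiled with --enable-libvmaf)
ffmpeg -i encode.mp4 -i source.mp4 -lavfi libvmaf -f null -

# PSNR and SSIM work with any ffmpeg build
ffmpeg -i encode.mp4 -i source.mp4 -lavfi psnr -f null -
ffmpeg -i encode.mp4 -i source.mp4 -lavfi ssim -f null -
```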

[–] [email protected] 0 points 10 months ago (1 children)

The EU doesn't include a bunch of countries on the continent of Europe.

[–] [email protected] 0 points 10 months ago (3 children)

Then Europe is a bunch of countries wearing a bikini and lots of accessories. There's no one piece that covers all of it, some accessories clash badly with each other, and there are random bracelets everywhere.

[–] [email protected] 12 points 11 months ago (1 children)

I've been playing with llama.cpp for the last week, and it's surprisingly workable on a recent laptop using just the CPU. It's not hard to imagine Apple and others adding (more) AI accelerators to their mobile chips.
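Getting a feel for it only takes a few minutes; a minimal CPU-only sketch (the model path is a placeholder, grab any GGUF from Hugging Face):

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make                      # plain CPU build, no GPU toolkit needed

# -t sets the thread count; match it to your physical cores
./main -m path/to/model.gguf -t 8 -p "Why is the sky blue?" -n 128
```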

[–] [email protected] 84 points 11 months ago (9 children)

Maintaining a vacuum over long distances is really fucking hard.
You'd be better served taking existing rail infrastructure and improving it to make high-speed trains possible.

[–] [email protected] 2 points 1 year ago (1 children)

You can use Podman pods and generate systemd unit files for the whole pod.
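Roughly like this (pod and container names are placeholders):

```bash
# Create a pod and run a container inside it
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web docker.io/library/nginx

# Writes one unit file for the pod plus one per container
podman generate systemd --new --files --name mypod
# Move the *.service files to ~/.config/systemd/user/ and enable them
```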

[–] [email protected] 0 points 1 year ago

Nope, but it integrates very well with Podman.

[–] [email protected] 2 points 1 year ago

Iirc that's an ffmpeg limitation; Opus itself can do it. I stumbled over that once as well.

[–] [email protected] 1 points 1 year ago

With enough time and motivation it's probably possible, but that holds for many things.
