theterrasque

joined 1 year ago
[–] [email protected] 14 points 5 months ago (2 children)

πŸ«°πŸ€™πŸ«΅πŸ‘ŒβœŠπŸ«³πŸ«ΈπŸ€²πŸ€Œ

[–] [email protected] 1 points 5 months ago

I worked on one where the columns were databasename_tablename_column

They said it makes things "less confusing"

[–] [email protected] 4 points 5 months ago (2 children)

I mean, I totally agree with you. But that also kinda ignores all the useful things a dog can be trained to do.

[–] [email protected] 1 points 5 months ago

It's less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model weights, and that's usually many, many gigabytes. So the time it takes to stream that through memory is usually longer than the compute time. GPUs have GBs of VRAM that's many times faster than the CPU's RAM, which is the main reason they're faster for LLMs.

Most TPUs don't have much RAM, especially the cheap ones.
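
As a rough sketch of that bandwidth math (the model size and bandwidth figures below are assumptions for illustration, not measurements):

```python
# Back-of-envelope: token generation is roughly memory-bandwidth-bound,
# since every token requires streaming all the model weights through memory.

MODEL_SIZE_GB = 8          # e.g. an ~8 GB set of weights (assumption)
CPU_BANDWIDTH_GBPS = 50    # typical dual-channel desktop RAM (assumption)
GPU_BANDWIDTH_GBPS = 900   # typical high-end consumer GPU VRAM (assumption)

def tokens_per_second(bandwidth_gbps: float, model_size_gb: float) -> float:
    """Upper bound on tokens/s if the weights are read once per token."""
    return bandwidth_gbps / model_size_gb

print(f"CPU: ~{tokens_per_second(CPU_BANDWIDTH_GBPS, MODEL_SIZE_GB):.1f} tokens/s")
print(f"GPU: ~{tokens_per_second(GPU_BANDWIDTH_GBPS, MODEL_SIZE_GB):.1f} tokens/s")
```

With numbers like these the speedup is basically the ratio of the two memory bandwidths.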

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

Reasonably smart.. that would preferably be a 70b model, but maybe phi3-14b or llama3 8b could work. They're rather impressive for their size.

For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For 70b you need roughly 40 GB.

And then there's the context. Most models are optimized for around 4k to 8k tokens. A token is roughly 3-4 characters, so a word is usually one or two tokens. The VRAM needed for the context varies a bit, but it's not trivial. For 4k I'd say roughly half a gig to a gig of VRAM.

As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model's VRAM cost, and you'll need models specialized for long context to handle it without going off the rails.
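
To make those numbers concrete, here's a toy calculator for weights plus KV cache VRAM. The architecture shape (32 layers, 8 KV heads, head_dim 128, fp16 cache) is an assumption in the ballpark of an 8b model, not a spec:

```python
# Rough VRAM estimate: weights + KV cache. All figures are assumptions.

def weights_gb(n_params_b: float, bytes_per_param: float) -> float:
    """Weight memory in GB (2.0 bytes/param for fp16, ~0.6 when quantized)."""
    return n_params_b * bytes_per_param

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_elem: float = 2.0) -> float:
    """KV cache: keys + values for every layer, KV head and token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * context_tokens / 1e9

print(f"weights (fp16):      ~{weights_gb(8, 2.0):.1f} GB")
print(f"weights (quantized): ~{weights_gb(8, 0.6):.1f} GB")
print(f"KV cache @ 4k:       ~{kv_cache_gb(32, 8, 128, 4096):.2f} GB")
print(f"KV cache @ 32k:      ~{kv_cache_gb(32, 8, 128, 32768):.2f} GB")
```

With a quantized ~5 GB model, the cache at 32k context is already about the same size as the weights.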

So no, you're not loading all the notes directly, and you won't have a smart model.

For your hardware and use case.. try phi3-mini with a RAG system as a start.
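
For what a RAG setup means in practice, here's a minimal toy sketch: retrieve the most relevant note chunks and put only those in the prompt, so a small model with a 4k context can still draw on a large pile of notes. Real setups use embedding search; the word-overlap scoring, the notes, and the model name in the comment are made-up stand-ins to keep it self-contained:

```python
import re

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, chunk: str) -> int:
    """Naive relevance: how many query words appear in the chunk."""
    return len(words(query) & words(chunk))

def build_prompt(query: str, chunks: list[str], top_k: int = 2) -> str:
    """Keep only the top_k most relevant chunks inside the context budget."""
    best = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Use these notes to answer.\n\n{context}\n\nQuestion: {query}"

notes = [
    "Meeting 2024-03-01: decided to migrate the backups to restic.",
    "Grocery list: eggs, rye bread, coffee.",
    "Server notes: the NAS runs on 192.168.1.40, backups run nightly.",
]

prompt = build_prompt("When do the backups run?", notes)
print(prompt)  # this string would then be sent to the local model, e.g. phi3-mini
```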

[–] [email protected] 0 points 6 months ago (1 children)

I'm not saying it's broken, but it has some design choices and features that make even Whatsapp a better choice for privacy-minded people. Like rolling their own crypto and not having E2EE on by default.

[–] [email protected] 8 points 6 months ago

So you're saying it's already feature complete with most json libraries out there?

[–] [email protected] -4 points 6 months ago (2 children)

> You realise there is no algorithm behind Lemmy, right?

Of course there is. Even "sort by newest" is an algorithm, and the default view is more complicated than that.

> You aren't being shoved controversial polarizing content subliminally here.

Neither are you on TikTok, unless you actively go looking for it

[–] [email protected] 2 points 9 months ago (1 children)

Hah, as if. In the early 00s the mods were only around maybe once or twice a day, and there was tons of CP being posted.

Worst I saw was a little girl chopped into pieces, and a many-page discussion / argument about whether it should be sorted as CP or necro porn. That was the old 4chan.

[–] [email protected] 3 points 9 months ago

Even for 4chan that's fucked up.

Oh, sweet summer child..
