Zos_Kia

joined 6 months ago
[–] [email protected] 1 points 4 days ago

I've only had issues with FitGirl repacks. I think there's an optimisation they use for low-RAM machines that doesn't play well with Proton.

[–] [email protected] 12 points 5 days ago (4 children)

That's a room temp take at best

[–] [email protected] 6 points 6 days ago

But then how am I supposed to use your "research" to make imaginary claims about generational attention spans?

[–] [email protected] 2 points 1 week ago

“I have collected some soil samples from the Mesolithic age near the Amazon basin which have high sulfur and phosphorus content compared to my other samples. What factors could contribute to this distribution?”

Haha yeah, the top execs were tripping balls if they thought some off-the-shelf product would be able to answer this kind of expert question. That's like trying to replace an expert craftsman with a 3D printer.

[–] [email protected] 1 points 1 week ago (2 children)

What kind of use cases were they, where you didn't find suitable local models to work with? I've found that general "chatbot" tasks are hit and miss, but more domain-constrained tasks (such as extracting structured entities from unstructured text) are pretty reliable even on smaller models. I'm not counting my chickens yet as my dataset is still somewhat small, but preliminary testing has been very promising in that regard.
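To give an idea of what I mean by domain-constrained extraction, here's a rough sketch assuming a local Ollama server with some small model pulled (the model name and the example text are placeholders, not a recommendation):

```python
import json
import requests

# Ask a small local model (served by Ollama on its default port) to pull
# structured entities out of free text. The "json" format flag constrains
# the output to valid JSON.
prompt = (
    "Extract every person and organisation mentioned in the text below. "
    'Reply with JSON of the form {"people": [...], "organisations": [...]}.\n\n'
    "Text: Ada Lovelace corresponded with Charles Babbage about the "
    "Analytical Engine while working in London."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": prompt, "format": "json", "stream": False},
    timeout=120,
)
entities = json.loads(resp.json()["response"])
print(entities)
```

Because the schema is fixed and the task is narrow, even a 7B model tends to stay on the rails, and it's easy to eyeball the output against a labelled test set.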

[–] [email protected] 6 points 1 week ago (5 children)

Most projects I've been in contact with are very aware of that fact. That's why telemetry is so big right now: everybody is building datasets in the hope of fine-tuning smaller, cheaper models once they have enough good-quality data.

[–] [email protected] 0 points 1 week ago

I doubt these tools will ever reach a level of quality that could fool a court. They'll get better, sure, but they'll never really get there.

[–] [email protected] 1 points 3 weeks ago (1 children)

It's especially frustrating because the whole point of the Google search page was that it was designed to get you on your way as fast as possible. The concept was so mind-blowing at the time, and now they're just like, never mind, let's default to shitty.

[–] [email protected] 1 points 3 weeks ago

If I understand these things correctly, the context window only affects how much text the model can "keep in mind" at any one time. Beyond that, it shouldn't affect task performance.
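You can actually check whether a given text even fits in the window by counting tokens. Here's a rough sketch using the Hugging Face transformers tokenizer; the model name and the 8K window are just example values, swap in whatever you actually run:

```python
from transformers import AutoTokenizer  # pip install transformers sentencepiece

# Example values only: use the tokenizer and window of your actual model.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
CONTEXT_WINDOW = 8192

with open("article.txt") as f:
    text = f.read()

n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens against a window of {CONTEXT_WINDOW}")
if n_tokens > CONTEXT_WINDOW:
    print("The overflow never reaches the model at all.")
```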

[–] [email protected] 2 points 3 weeks ago (2 children)

Yeah, I did some looking up in the meantime, and indeed you're going to have a context-size issue. That's why it's only summarizing the last few thousand characters of the text: that's the size of its attention window.

There are some models fine-tuned to an 8K-token context window, and some even to 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization quality should still be reasonably good.
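If you want to try it, something like this should do it with llama-cpp-python (rough sketch; the GGUF file name is a placeholder for whichever quantized build you download):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a Q4-quantized GGUF build of a 16K-context Mistral fine-tune.
llm = Llama(
    model_path="./mistral-16k.Q4_K_M.gguf",  # placeholder file name
    n_ctx=16384,      # match the model's extended context window
    n_gpu_layers=-1,  # offload everything to the GPU; a 7B at Q4 fits in ~8 GB
)

with open("article.txt") as f:
    text = f.read()

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```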

If 16K isn't enough for you, then that's probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know, this approach should still be compatible with Open WebUI.

[–] [email protected] 1 points 3 weeks ago (4 children)

There are not that many use cases where fine-tuning a local model will yield significantly better task performance.

My advice would be to choose a model with a large context window and just throw the whole text you want summarized into the prompt (which is basically what a RAG would do anyway).
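In practice the "stuff it all in the prompt" approach is about this simple. Here's a sketch against a local Ollama instance, where the model name and the window size are placeholders for whatever long-context model you pick:

```python
import requests

with open("report.txt") as f:
    text = f.read()

# No retrieval step: hand the whole document to a large-context model in one go.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",             # placeholder: pick a long-context model
        "prompt": f"Summarize the following document:\n\n{text}",
        "options": {"num_ctx": 32768},  # raise the context window for this request
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["response"])
```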
