projectmoon

joined 1 year ago
[–] [email protected] 15 points 6 days ago

They can build a keyboard into it, sure. It's just UI elements and a bunch of buttons. Won't be a good keyboard, but it can be done.

[–] [email protected] 3 points 1 week ago (2 children)

Where can I get a sub-400 AMD card with 26 GB of VRAM?

[–] [email protected] 24 points 2 weeks ago (1 child)

https://agnos.is/posts/tech-recruitment-is-out-of-control.html

This was my experience at the beginning of 2024. It was bad enough that I had to write a blog post about it.

[–] [email protected] 2 points 3 weeks ago

Have you tried Matrix?

[–] [email protected] 5 points 3 weeks ago (1 child)

LLMs are statistical word association machines, or token association machines, more accurately. So if you tell one not to make mistakes, it'll likely weight the output towards having validation, checks, etc. It might still produce silly output claiming no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they're good at and can actually do, not what they themselves claim they can do lol.

[–] [email protected] 1 point 1 month ago

OpenWebUI connected to TabbyAPI's OpenAI endpoint. I will try reducing the temperature and see if that makes it more accurate.
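For anyone following along, this is roughly what the request looks like against an OpenAI-compatible endpoint with the temperature turned down (model name and values here are placeholders, not my actual setup):

```python
import json

# Build a chat-completion payload for an OpenAI-compatible endpoint
# (e.g. TabbyAPI sitting behind OpenWebUI). A lower temperature makes
# sampling more deterministic, which can cut down on off-the-rails output.
def build_request(prompt, model="my-exl2-model", temperature=0.3):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # lower = less random token sampling
        "max_tokens": 512,
    }

payload = build_request("Summarize this article.")
body = json.dumps(payload)  # POST this to /v1/chat/completions
```

OpenWebUI exposes temperature per model in its settings, so you usually don't have to build the request by hand; this is just what ends up on the wire.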

[–] [email protected] 1 point 1 month ago (2 children)

Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to three-quarters of the way through a response, it would start outputting tokens in either a foreign language (Russian or Chinese, in the case of Qwen 2.5) or things that don't make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well, but I thought that might have been a template problem, because it seemed to be answering the question twice.

Otherwise, all settings are the defaults.
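A toy sketch of why sampling settings matter here: temperature rescales the logits before softmax, so a high temperature flattens the distribution and gives junk tokens (wrong language, stray code) far more probability mass. Numbers below are made up for illustration:

```python
import math

# Temperature-scaled softmax over a toy set of token logits.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.0]  # one "good" token, two unlikely ones
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)
# At temperature 0.5 the top token dominates; at 2.0 the
# unlikely tokens grab a much larger share of the probability.
```

This doesn't explain a mid-response language flip by itself (that smells more like a template or quant issue), but it's why cranking temperature up makes the garbage worse.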

[–] [email protected] 1 point 1 month ago (4 children)

I tried it with both Qwen 14B and Llama 3.1. Both were exl2 quants produced by bartowski.

[–] [email protected] 3 points 1 month ago

Perplexica works. It supports Ollama and custom OpenAI-compatible providers.

[–] [email protected] 1 point 1 month ago (6 children)

Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I'm using exl2 models off of HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
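For reference, the per-model knobs I was poking at seem to live in TabbyAPI's YAML config rather than a config.json. Field names below are from memory, so verify them against the sample config shipped with the repo:

```yaml
# TabbyAPI config.yml (field names from memory -- check the
# sample config in the repo before relying on these)
model:
  model_name: Qwen2.5-14B-exl2   # hypothetical model directory name
  max_seq_len: 16384             # context window loaded for this model
  cache_mode: Q6                 # quantized KV cache (Q4/Q6/FP16 etc.)
```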

[–] [email protected] 4 points 6 months ago

Even the smell of olives causes me to gag. I absolutely cannot eat them. Olive oil is fine, but actual olives, no. Doesn't matter if they're old, new, canned, or fresh. They're absolutely disgusting. One of the few foods I outright cannot and will not eat.
