this post was submitted on 23 May 2024
45 points (94.1% liked)

Selfhosted


I am a teacher and I have a LOT of different literature material that I wish to study and play around with.

I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see if this model can answer some of the subjective course questions I have set in my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not very technically experienced. I run Linux and can do very basic stuff. I've never self-hosted anything other than LibreTranslate and a Pi-hole!

[–] [email protected] 8 points 5 months ago (2 children)

While you can run an LLM on an "old" laptop with an Nvidia graphics card, it will likely be really slow: think several minutes per response, or much longer. Huggingface.co is a good place to start; it has a ton of different LLMs to choose from, ranging from ones small enough to run on your hardware to ones that won't fit at all.
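
If you want a quick feel for what that looks like, here's a minimal sketch using the Hugging Face transformers library (assuming Python with transformers, torch and accelerate installed; the TinyLlama model name is just an example of something small enough for a laptop GPU, not a specific recommendation):

```python
# Minimal sketch: run a small chat model locally with Hugging Face transformers.
# Assumes transformers, torch and accelerate are installed. TinyLlama is only an
# example of a small model; swap in whatever actually fits your VRAM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device_map="auto",  # uses the GPU if the model fits, otherwise falls back to CPU
)

prompt = "Write a short paragraph on the main themes of the novel we covered in class."
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

How long that single prompt takes on your laptop will tell you quickly whether your hardware is workable or whether you need a smaller or more heavily quantized model.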

As a teacher, you know that research is going to be vital to understanding and implementing this project. There is a plethora of information out there, and no single person's answer will work perfectly for your wants and your hardware.

When you have figured out your plan and then run into issues, that's a good point to ask questions with more information about your situation.

I say this because I just went through this myself, not to be an ass.

[–] [email protected] 2 points 5 months ago (1 children)

Can they not get a TPU on USB, like the Coral Accelerator or something?

[–] [email protected] 1 points 5 months ago

It's less about the calculations and more about memory bandwidth. To generate a token you need to read through all of the model's weights, and that's usually many, many gigabytes, so the time spent reading from memory is usually longer than the compute time. GPUs have gigabytes of VRAM that is many times faster than the CPU's RAM, which is the main reason they're faster for LLMs.
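
A rough back-of-the-envelope illustration of that point (all the numbers below are assumptions for the sake of arithmetic, not measurements of any particular machine):

```python
# Rough upper bound on generation speed: each new token requires streaming the
# full set of model weights through memory once, so roughly
#   tokens/second <= memory bandwidth / model size.
model_size_gb = 4.0         # e.g. a ~7B-parameter model quantized to ~4 bits (assumption)
cpu_bandwidth_gbps = 50.0   # ballpark dual-channel laptop DDR4 (assumption)
gpu_bandwidth_gbps = 300.0  # ballpark mid-range discrete GPU VRAM (assumption)

print(f"CPU ceiling: ~{cpu_bandwidth_gbps / model_size_gb:.0f} tokens/s")
print(f"GPU ceiling: ~{gpu_bandwidth_gbps / model_size_gb:.0f} tokens/s")
```

Real throughput will be lower than either ceiling, but the ratio is why the same model feels dramatically snappier when it fits entirely in fast VRAM.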

Most TPUs don't have much RAM, especially the cheap ones.