this post was submitted on 13 Jun 2025
59 points (89.3% liked)

Selfhosted

46680 readers
524 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago

I was looking back at some old Lemmy posts and came across GPT4All. Didn't get much sleep last night as it's awesome, even on my old (10-year-old) laptop with a Compute Capability 5.0 NVIDIA card.

Still, I'm after more. I'd like to be able to generate images and view them in the conversation, and if the model generates Python code, to be able to run it (I'm using Debian and have a default Python env set up). Local file analysis would also be useful. I need CUDA Compute Capability 5.0 / Vulkan compatibility, with the option to use some of the smaller models (1-3B, for example). A local API would also be nice for my own Python experiments.

Is there anything that can tick the boxes? Even if I have to scoot across models for some of the features? I'd prefer more of a desktop client application than a docker container running in the background.

[–] [email protected] 17 points 2 days ago (3 children)

Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation via ComfyUI, I believe.

It's less of a hassle to use Docker for Open WebUI, but Ollama itself works as a regular CLI tool.
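On the "local API for Python experiments" point: Ollama exposes a plain HTTP API, by default on port 11434. A minimal stdlib-only sketch of calling it (the model name `llama3.2:1b` is just an example of a small model you might have pulled; swap in your own):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False returns a single JSON object instead of chunked lines,
    # which keeps simple client code simple.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With the Ollama daemon running, `generate("llama3.2:1b", "Hello")` returns the model's text reply; no extra packages or per-project environments needed.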

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

But won't this be a mish-mash of different Docker containers and projects, creating an installation, dependency, and upgrade nightmare?

[–] [email protected] 2 points 19 hours ago

All the ones I mentioned can be installed with pip or uv if I am not mistaken. It would probably be more finicky than containers that you can put behind a reverse proxy, but it is possible if you wish to go that route. Ollama will also run system-wide, so any project will be able to use its API without you having to create a separate environment and download the same model twice in order to use it.
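For reference, the non-Docker route looks roughly like this. This is a sketch, not a definitive install guide: the Open WebUI pip package name, its default port, and the Ollama install script URL are taken from each project's docs, but verify them (and the Python version requirements) before relying on this.

```shell
# Open WebUI can be installed as a regular Python package
# (uv works too, e.g. `uv tool install open-webui`):
pip install open-webui
open-webui serve        # web UI, by default on http://localhost:8080

# Ollama itself is a system install (not pip) and runs as a daemon:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:1b # example of a small 1B model for older GPUs
```

Because Ollama runs system-wide, both Open WebUI and your own scripts talk to the same daemon and share one copy of each downloaded model.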

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago)

Chainlit is a super easy UI too. Ollama works well with Semantic Kernel (for integration with existing code) and LangChain (for agent orchestration). I'm working on building MCP interaction with ComfyUI's API; it's a pain in the ass.

[–] [email protected] 0 points 1 day ago

This is what I do, it's excellent.