CallMeButtLove
I love animals and am against animal cruelty but I'd yeet the hell out of a raccoon if it was attacking my dog. I miss you Maddie girl.
I watched Diggnation a lot too because it felt like TechTV was living on in some little way. I remember thinking Alex Albrecht was pretty cool at the time. I haven't thought about that in well over a decade. I'd rather not look him up now because I don't want to find out he's actually a piece of shit like Tommy Tallarico haha.
Me reading the op comment:
"Oh that's awesome I should do that!"
Me reading your comment:
"Oh yeah..."
I have a similar setup except I use pfSense as my router and pihole for DNS, but I'm sure you can get the same results with your setup. I'm running HAProxy as my reverse proxy, with a config for each of my docker containers, so any traffic on 443 or 80 gets sent to the container's IP on whatever unique port it uses. I then have DNS entries for each URL I want to reach a container by, with all of those entries just pointing to HAProxy. Works like a charm.
I have HAProxy running on the pihole itself, but there's no reason you couldn't run it in its own container. pfSense also lets you install an HAProxy package to handle it on the router itself. I don't know if OPNsense supports packages like that though.
You can even get fancy and do SSL offloading to access everything over HTTPS.
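For anyone curious what that looks like, here's a rough sketch of an HAProxy config doing the hostname-based routing and SSL offloading described above. All the hostnames, IPs, ports, and the cert path are made-up placeholders for illustration; swap in your own:

```
# Hypothetical example only -- pihole.home.lan, jellyfin.home.lan,
# the backend IPs/ports, and the cert path are all placeholders.

frontend web_in
    bind *:80
    # SSL offloading: HAProxy terminates TLS, backends see plain HTTP
    bind *:443 ssl crt /etc/haproxy/certs/home.lan.pem
    http-request redirect scheme https unless { ssl_fc }

    # Route by the Host header (the DNS names all point at HAProxy)
    acl host_pihole   hdr(host) -i pihole.home.lan
    acl host_jellyfin hdr(host) -i jellyfin.home.lan

    use_backend pihole_be   if host_pihole
    use_backend jellyfin_be if host_jellyfin

backend pihole_be
    server pihole 192.168.1.10:8080

backend jellyfin_be
    server jellyfin 192.168.1.20:8096
```

Each container just listens on its own unique port, and HAProxy is the only thing exposed on 80/443.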
That name seems to imply they're aware of how boring an update it is.
"This app is not available for your device because it was built for an older version of Android."
Pixel 7 here, am I missing something?
Can someone explain this to me? Based on the comments I'm sure it's horrifying, but I can't help it, I have a morbid curiosity.
I really hate when companies do that kind of crap. I just imagine a little toddler stomping around going "No! No! Nooo!"
Is there a way to host an LLM in a docker container on my home server but still leverage the GPU on my main PC?
Same. I need more of these.
That's really interesting but I think that suffers from a similar issue because I'd assume the processing power needed to run the matrix alone would be much greater than 1:1 per human.