MoogleMaestro

joined 1 month ago
[–] [email protected] 5 points 4 days ago

I noticed this as well. It's a shame, as I still use it as my daily-driver search engine.

[–] [email protected] 4 points 6 days ago (2 children)

Regarding VPNs, I wish there were an easier way of doing it. Unfortunately, it requires all of your friends to be tech-savvy enough to understand why a VPN is necessary.
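For what it's worth, the per-friend setup can be boiled down to one config file. A minimal sketch, assuming a WireGuard server already running at a hypothetical vpn.example.com on the 10.0.0.0/24 range; the keys and addresses are placeholders, not a real deployment:

```ini
# /etc/wireguard/wg0.conf -- hypothetical client config a friend would install
[Interface]
# The friend's own private key (generated with `wg genkey`)
PrivateKey = <friend-private-key>
# A unique address for this peer inside the VPN subnet
Address = 10.0.0.2/32

[Peer]
# The server's public key
PublicKey = <server-public-key>
# Placeholder endpoint; replace with your server's real address and port
Endpoint = vpn.example.com:51820
# Only route traffic destined for the VPN subnet through the tunnel
AllowedIPs = 10.0.0.0/24
# Keep NAT mappings alive
PersistentKeepalive = 25
```

After that it's just `wg-quick up wg0` on their end, but as you say, explaining why any of this is necessary is the hard part.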

[–] [email protected] 6 points 1 week ago

> I hate writing a serialized format

I mean, that's why it's serialized. It's not supposed to be written by hand; that's why you have a deserializer. 🤦

 

Hi there, self-hosted community.

I hope it's not out of line to cross-post this type of question, but I thought people here might have some unique advice on this topic. I'm not sure whether cross-posting immediately after the first post is against Lemmy etiquette or not.

cross-posted from: https://lemmy.zip/post/22291879

I was curious if anyone has any advice on the following:

I have a home server that my main computer accesses constantly for various reasons. I would love to set things up so that my locally hosted Gitea could run actions to build local forks of certain applications and, on success, trigger Flatpak builds of those forks once a month, then host the resulting packages (for local use only) on the home server for other computers on my network to install. I'm thinking mostly of development branches of certain applications, experimental applications, and miscellaneous GUI applications that I've written but infrequently update, where I want a runnable instance available in case I pick them up again.
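Not a confirmed setup, just a sketch of the build half: the CI job essentially boils down to flatpak-builder exporting into a shared OSTree repo. Assuming a hypothetical manifest com.example.MyApp.yml checked into the fork and a repo directory at /srv/flatpak/repo, the step a Gitea Action runs could look like this:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical paths: the Flatpak manifest from the forked project
# and the shared OSTree repo the server will expose to the LAN.
MANIFEST=com.example.MyApp.yml
REPO=/srv/flatpak/repo

# Build the app and export the result straight into the repo.
# --force-clean wipes the previous build dir so monthly runs start fresh.
flatpak-builder --force-clean --repo="$REPO" build-dir "$MANIFEST"

# Regenerate the repo's summary/metadata so clients see the new build.
flatpak build-update-repo "$REPO"
```

Since Gitea Actions understands the same `on: schedule` cron triggers as GitHub Actions, that should cover the once-a-month part.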

Does anybody have advice or ideas on how to achieve this? Is there a way to run a Flatpak repository via a Docker image that builds certain Flatpaks on request over the local network? If that isn't a known thing, does anyone have experience hosting Flatpak repositories on a local-network server? Or is there a good reason not to do this?
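For what it's worth, an OSTree repo is just static files, so any web server in a container can do the hosting half. A rough sketch (not a battle-tested setup; server.lan, the port, and the paths are placeholders), reusing the repo directory from above:

```bash
# Serve the repo read-only over HTTP with a stock nginx container.
docker run -d --name flatpak-repo \
  -v /srv/flatpak/repo:/usr/share/nginx/html/repo:ro \
  -p 8080:80 nginx

# On each client machine on the LAN, add it as a remote and install.
# --no-gpg-verify is the lazy option; signing the repo with a GPG key
# and shipping a .flatpakrepo file is the cleaner route.
flatpak remote-add --if-not-exists --no-gpg-verify \
  homelab http://server.lan:8080/repo
flatpak install homelab com.example.MyApp
```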

[–] [email protected] 2 points 1 week ago

This is the craziest fucking timeline.

It goes to show that, with the introduction of AI, streaming services are not long for this world.

[–] [email protected] 9 points 1 week ago

Link to the video. I agree; it was a really good video on this topic and on how philosophically wrong it all is.

[–] [email protected] 9 points 1 week ago

The internet as we knew it is doomed to be full of AI garbage. It's a signal-to-noise-ratio issue. It's also part of the reason the fediverse and smaller, moderated, interconnected communities are so important: they keep users more honest by making moderators more common, and, if you want to, you can strictly moderate against AI-generated content.

[–] [email protected] 27 points 1 week ago* (last edited 1 week ago)

This is a false equivalence.

Google used to act as a directory for the internet, along with other web-search services. In court, they argued that the content they scraped wasn't easily accessible through searches alone, and they had statistical proof that the search engine was bringing people to more websites, not keeping them away. At the time, they were right. This was the "good" era of Google, a different time period and a different company entirely.

Since then, Google has scraped even more data and made it available directly in the Google search results pages (avoiding link click-throughs). They have increased the number of services they provide to the point that they have a conflict of interest in the data they collect and a vested interest in keeping people "on Google" and off the rest of the web, and, with their Gemini project, they have participated in the same bullshit policies that OpenAI started. Whatever win they had in the 2000s against book publishers, it could be argued that the rights they were "afforded" back then were contingent on their being good-faith participants rather than competitors. OpenAI and the "summary" models that fail to reference sources with direct links, make hugely inaccurate statements, and generate "infinite content" by mashing letters together in the world's most complicated Markov chain fall squarely into that category.

It turns out that if you're afforded the rights to something on a technicality, it's actually pretty dumb to get brazen and assume you can push those rights to the breaking point.

[–] [email protected] 7 points 1 week ago

If he wins this, I guess everyone should just make their Jellyfin servers public.

Because if rich tech bros get to opt out of our copyright system, I don't see why the hell normal people have to abide by it.

[–] [email protected] 18 points 2 weeks ago

In reality, Mastodon doesn't deliver the same dopamine hit, by design. This is both a good thing (less addictive, more conversational) and a bad thing (less retention, more opacity in statistics), depending on why you do or don't want to use social networks.

[–] [email protected] 8 points 1 month ago

Awesome, but I wonder if we'll ever get better read and write endurance on SD cards. It feels like capacity is growing faster than the number of possible writes to the device, which makes the extra size kind of moot.

[–] [email protected] 1 points 1 month ago

I mean, sure, but this counteracts all that money they spend, given that most artists make their money on Patreon or similar (if they make any money at all, frankly).

[–] [email protected] 66 points 1 month ago (10 children)

Yeah, I actually think this policy is 100% correct and, if more services did this instead of eating the costs, we could have a real discussion about the harm caused by arbitrary fees.

It will likely result in Apple seeking a special deal with Patreon to avoid this mess, though. It's really not a good look for Apple, especially as they market themselves to creatives.
