webghost0101

joined 1 year ago
[–] [email protected] 30 points 4 months ago (1 children)

GIMP for a full image manipulation suite.

Krita for digital painting/art and a decent GUI; still better for light image edits than Paint.

Between those two, Photoshop is essentially overpriced hypeware. It's convenient to have both FOSS apps packed under a single well-designed interface, but nowhere near worth what they demand. After Adobe leaked the details from my student account back in 2013, they have continuously caused me so much damage they should be paying me.

[–] [email protected] 10 points 4 months ago* (last edited 4 months ago) (2 children)

Are you familiar with Jellyfin?

https://jellyfin.org/

If you can set up a server with this, then it's as easy as making the library folder the output folder for yt-dlp.

https://github.com/yt-dlp/yt-dlp

I didn't even have to write my own script. I gave ChatGPT a notepad with channel URLs and just told it to write me code to load these URLs one by one and download the last 2 videos. (Trust me, you don't want to accidentally download a whole channel.) yt-dlp can maintain a download archive of sorts so videos aren't downloaded more than once.

I run this script on a schedule and delete the video when I am done with it. Nice and clean. I can also recommend trying to run an Invidious instance for general video browsing, but mine took some twiddling to set up right.

https://invidious.io/
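The setup above can be sketched as a small wrapper around the yt-dlp CLI (a rough sketch, not my exact script; the paths, file names, and the `channels.txt` format of one channel URL per line are all assumptions):

```python
import subprocess

# Hypothetical paths: point LIBRARY_DIR at the folder your Jellyfin library scans.
LIBRARY_DIR = "/srv/jellyfin/youtube"
ARCHIVE = "downloaded.txt"  # yt-dlp's log of already-fetched video IDs


def build_cmd(channel_url):
    """Build a yt-dlp invocation that grabs only the 2 newest uploads."""
    return [
        "yt-dlp",
        "--playlist-end", "2",          # stop after the 2 newest videos
        "--download-archive", ARCHIVE,  # never download the same video twice
        "-o", f"{LIBRARY_DIR}/%(channel)s/%(title)s.%(ext)s",
        channel_url,
    ]


def run_all(channels_file="channels.txt"):
    """Read channel URLs (one per line) and fetch each channel's newest videos."""
    with open(channels_file) as f:
        for url in (line.strip() for line in f if line.strip()):
            subprocess.run(build_cmd(url), check=False)
```

`--playlist-end 2` is what keeps you from accidentally mirroring a whole channel, and cron (or a systemd timer) calling `run_all()` covers the scheduling part.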

[–] [email protected] 28 points 4 months ago* (last edited 4 months ago)

At that point I will either have to use an AI tool to scrub the filth out

Or

Consider if I really need whatever content is within it and touch some grass instead.

[–] [email protected] 33 points 4 months ago (14 children)

What's this? I can't hear you over my high-definition yt-dlp content.

Where I'm going, I haven't needed a Google account in over a year.

[–] [email protected] 24 points 4 months ago

This is why I am for mandatory open-sourcing of abandonware. So much stuff just lying wasted that could be hacked on.

But allowing you to DIY your own toys might make you consume less, and that's bad or something.

[–] [email protected] 6 points 4 months ago* (last edited 4 months ago)

If after 12 months they actually comply, then that's still a positive.

However, I fear they may “fix” it with malicious compliance at 11 months, and then the cycle repeats.

Instead, what I think should happen is that they should need to obtain “verified compliance” within a year (minus the time Europe takes to check), and if the term expires, the penalty escalates, eventually up to a forced breakup.

[–] [email protected] 18 points 4 months ago* (last edited 4 months ago) (2 children)

Why? Does 95% of digital advertisement even serve a single valuable purpose?

I get that websites need funding and that legitimate businesses require some way to communicate that their services exist. We need to solve the funding problem for the former and create specialized, accessible safe spaces for the latter.

When is the last time anyone here saw an ad for a local business? When is the last time anyone recalls willfully clicking one? Was there actually anything useful there?

From what I recall, ads are almost always one of the following:

  • sex, barely-legal drugs, and predatory video games (lumped together to make a bad pun)

  • real product / fake price: oh, this item isn't in stock, plz look at the catalog

  • politics, “buy our guide to get rich,” actual illegal scam operations

None of them are honest or respectful to the customer. People aren't prey; stop baiting.

Admittedly, for me this is personal. Autism means I experience the extra noise as painful. Plastering it on useful websites feels like a hostile attack to keep me out and unwelcome. I downright refuse to look at or watch ads, nor will I support them through ad-free subscriptions, to the point of it having become a digital disability.

But come on, can we smart online people really not figure out something else that isn't based on literal brainwashing?

[–] [email protected] 2 points 4 months ago

No need to clarify what you meant with the oligarchs; there's barely any exaggeration there. “Ghouls” is quite accurate.

Considering the context of a worst-case possible scenario (hostile takeover by an artificial superior), which honestly is indistinguishable from general end-of-the-world doomerism prophecies but very alive in the circles of Sutskever, I believe “safe AI” consists of the very low bar of

“humanity survives while AGI improves the standard of living worldwide.” Of course, for this I am reading between the lines based on previously acquired information.

One could argue that if ASI is created, the possibilities become very black and white:

  • ASI is indifferent about human beings and pursues its own goals, regardless of consequences for the human race. It could even find a way off the planet and just abandon us.

  • ASI is misaligned with humanity and we become but a resource, treated no differently than we have historically treated animals and plants.

  • ASI is aligned with humanity and it has the best intentions for our future.

In any of these scenarios, it would be impossible to calculate its intentions, because by definition it's more intelligent than all of us. It's possible that some things we understand as moral may be immoral from a better-informed perspective, and vice versa.

The scary thing is we won't be able to tell whether it's malicious and pretending to be good, or benevolent and trying to fix us. Would it respect consent if, say, a racist refuses therapy?

Of course, we could just as likely hit a roadblock next week and the whole hype dies out for another 10 years.

[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (2 children)

No, I applaud a healthy dose of skepticism.

I am everything but in favor of idolizing Silicon Valley gurus and tech leaders, but from Sutskever I have seen enough to know he is one of the few actually worth paying attention to.

Artificial superintelligence, or ASI, is the step beyond AGI (artificial general intelligence).

The latter is equal or better in capacity to a real human being in almost all fields.

Artificial superintelligence was defined (long before OpenAI was a thing) as transcending human intelligence in every conceivable way, at which point it's a fully independent entity that can no longer be controlled or shut down.

[–] [email protected] -1 points 4 months ago* (last edited 4 months ago) (1 children)

You're entitled to that opinion, and so are others. Sutskever may be an actual loony... or an absolute genius. Or both; that isn't up for debate here.

I am just explaining what this is about, because if you think this is “just another money raiser,” you obviously haven't paid enough attention to who this guy is exactly.

“Superintelligence” in artificial intelligence is a well-defined term, by the way, in case you're still confused. You may have seen these plastered around like buzzwords, but all of these definitions precede the AI hype of the last few years.

ML = machine learning: algorithms that improve over time.

AI = artificial intelligence: machine learning with complex logic, mimicking real intelligence. <- we are here

AGI = artificial general intelligence: an AI agent that functions intelligently at a level indistinguishable from a real human. <- experts estimate this will be achieved before 2030

ASI = artificial superintelligence: AGI that transcends human intelligence and capacities in every way.

It may not sound real to you, but if you ever visit the singularity sub on Reddit you will see how a great number of people believe it is.

Also, everything is science fiction till it's not. Horseless carriages were science fiction; so were cordless phones. The first airplane went up in 1903; 66 years later we landed on the moon.

[–] [email protected] 8 points 4 months ago* (last edited 4 months ago) (7 children)

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

This is the guy who turned against Sam for being too much about releasing product. I don't think he plans on delivering much product at all. The reason to invest isn't to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.

A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what's basically a benevolent AI god-mommy, and deeply afraid of an uncontrollable, malicious Skynet.

[–] [email protected] 0 points 4 months ago

I left that problem behind me a long time ago.

yt-dlp + Jellyfin + Invidious
