webghost0101

joined 1 year ago
[–] [email protected] 12 points 5 months ago* (last edited 5 months ago) (5 children)

A text file with channel URLs + a yt-dlp script + a self-hosted Jellyfin server.

Saves on bandwidth too: you only download once, and you can keep your favorites offline as long as you have the storage space.

I also run my own Invidious instance for video searches and “filler” channels I watch selectively.
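
Roughly what I mean: a script that feeds the text file of channel URLs into yt-dlp and drops the files into the folder Jellyfin watches. A minimal sketch using yt-dlp's Python API; the file names and paths below are placeholders, adjust them to your own library layout:

```python
# Minimal sketch: mirror new uploads from a list of channel URLs into a
# folder that Jellyfin uses as a library. Paths and file names are placeholders.
from pathlib import Path
from yt_dlp import YoutubeDL

CHANNELS_FILE = Path("channels.txt")     # one channel or playlist URL per line (hypothetical name)
LIBRARY_DIR = Path("/media/youtube")     # directory added as a Jellyfin library (hypothetical path)

ydl_opts = {
    # Sort videos into per-uploader folders so Jellyfin groups them nicely.
    "outtmpl": str(LIBRARY_DIR / "%(uploader)s" / "%(title)s [%(id)s].%(ext)s"),
    # Remember what was already downloaded, so re-runs only fetch new videos.
    "download_archive": str(LIBRARY_DIR / "downloaded.txt"),
    "format": "bestvideo*+bestaudio/best",
    "ignoreerrors": True,   # one broken video shouldn't stop the whole run
    "playlistend": 25,      # only check the most recent uploads per channel
}

urls = [line.strip() for line in CHANNELS_FILE.read_text().splitlines()
        if line.strip() and not line.startswith("#")]

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(urls)
```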

[–] [email protected] 30 points 5 months ago* (last edited 5 months ago) (2 children)

Honestly, the way the internet is going, do you even need access to the majority of it? It feels pretty dead as it is.

Lemmy will still work because we mostly use Firefox, and I bet the same will hold true for many other sites.

Basically, the moment the mainstream internet becomes Google-only, you will see nerds build new websites specifically to cater to the non-Google crowd, and I trust random internet nerds a heck of a lot more than a monopoly corporation.

BRING IT ON, GOOGLE! YOU CAN INITIATE THE PUSH TO CREATE A NEW, BETTER INTERNET. ^Create demand for freedom through your suppressive enforcement^

[–] [email protected] 14 points 5 months ago (1 children)

Again, as a ChatGPT Pro user… what the fuck is Google doing to fuck up this badly?

This is so comically bad I almost have to assume it's on purpose? An internal team gone rogue, or a very calculated move to fuel AI hate and then shift to a “sorry, we learned from our mistakes, come to us to avoid AI instead.”

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

Correct, I kept it simple on purpose and could probably have worded it better.

It was meant as a broader statement, covering both publicly available, free-to-download models (like those based on the LLaMA architecture) as well as free-to-access proprietary LLMs like GPT-3.5.

I personally tried variations of Vicuna, WizardLM and a few other models (mostly 30B, bigger was too slow), which are all based on the LLaMA architecture, but I consider those individual names to be less well known.

None of these impressed me all that much, but of course this is a really fast-moving industry. Looking at the HF leaderboard, I don't see any of the models I tried; the last time I checked was January.

I may also have an experience bias, as I have become much more effective at using GPT-4 as a tool compared to when I first started using it. This influences what I expect and how I write prompts for other models.

I'd be happy to try some newer models that have since reached new levels. I am a huge supporter of self-hosting digital tools, and frankly I can't wait to stop funding ClosedAI.

[–] [email protected] 7 points 5 months ago* (last edited 5 months ago) (3 children)

Having tried many different models on my machine and being a long-time GPT-4 user, I can say the self-hosted models are far more impressive in sheer power for their size. However, the good ones still require a GPU that most people, let alone teenagers, can't afford.

Nonetheless, GPT-4 remains the most powerful and useful model, and it's not even a competition. Even Google's Gemini doesn't compare, in my experience.

The potential for misuse increases alongside usefulness and power. I wouldn't use Ollama or GPT-3.5 for my professional work because they're just not reliable enough. However, GPT-4, despite also having its useless moments, is almost essential.

The same holds true for scammers and malicious actors. GPT-4's voice mode will technically allow live, fluent conversations over the phone with a dynamic voice. That's the holy grail for scam callers. OpenAI is right to want to eliminate as much abuse of their system as possible before releasing such a thing.

There is an argument to be made for not releasing such dangerous tools, but the counter is that someone malicious will inevitably release something similar someday; it's better to be prepared and understand these systems before that happens. At least I think that's what OpenAI believes. I'm not sure what to think myself; how could I know they aren't malicious?

[–] [email protected] 2 points 5 months ago

It's beyond me why a corporation with so much to lose doesn't have a narrow AI that simply checks whether its response is appropriate before providing it.

It won't fix everything, but when I try this manually, ChatGPT pretty much always catches its own errors.
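
Something like this two-pass pattern is all I mean. A rough sketch with the OpenAI Python client; the model name and prompt wording are placeholders, not anyone's actual setup:

```python
# Rough sketch of "check the response before showing it".
# Model name and prompts are placeholders, not a real product pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_self_check(question: str, model: str = "gpt-4o") -> str:
    # First pass: produce a draft answer.
    draft = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second, narrow pass: its only job is judging the draft against the question.
    verdict = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
                "Is the draft accurate and appropriate? Reply OK, or explain the problem."
            ),
        }],
    ).choices[0].message.content

    if verdict.strip().upper().startswith("OK"):
        return draft

    # Only when the check fails do we pay for a third call to rewrite the answer.
    return client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                f"{question}\n\nA reviewer flagged this problem with a previous answer:\n"
                f"{verdict}\n\nAnswer again, fixing that."
            ),
        }],
    ).choices[0].message.content
```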

[–] [email protected] 1 points 5 months ago

Speaking as an autist, on behalf of the entire autistic community:

PLEASE DON'T

What have I set in motion :o

[–] [email protected] 6 points 5 months ago (5 children)

I wonder if they considered Reddit votes to try to give more weight to high-quality answers, but also to high-quality jokes.

But without votes pure nonsense becomes equal to truth.

Humans could use Reddit because we understand the site well enough to filter the valuable from the bad.

I feel like the answer would be an in-between AI, one used specifically as such a filter.

Every such post of Google failing, I have screen-capped and then asked ChatGPT for a more detailed explanation of how to do what Google suggests. Every time, it managed to call out the issues. So simply allowing an AI to proofread its response in the context of the question could stop a lot of hallucinations.

But it's at least three times as slow and expensive if it needs to change its first response.

But I guess doing things properly isn't profitable; better to just rush the tech and kill your most famous product.

[–] [email protected] 50 points 5 months ago* (last edited 5 months ago) (3 children)

I understand the irony, but can we not pretend they blindly used an output or even generated a full page? It was a specific section providing a technical definition of “what is a deepfake.”

“I was really struggling with the technical aspects of how to define what a deepfake was. So I thought to myself, ‘Well, why not ask the subject matter expert (I do not agree with that wording, lol), ChatGPT?’” Kolodin said.

The legislator from Maricopa County said he “uploaded the draft of the bill that I was working on and said, you know, please, please put a subparagraph in with that definition, and it spit out a subparagraph of that definition.”

“There’s also a robust process in the Legislature,” Kolodin continued. “If ChatGPT had effed up some of the language or did something that would have been harmful, I would have spotted it, one of the 10 stakeholder groups that worked on or looked at this bill, the ACLU would have spotted, the broadcasters association would have spotted it, it would have got brought out in committee testimony.”

But Kolodin said that portion of the bill fared better than other parts that were written by humans. “In fact, the portion of the bill that ChatGPT wrote was probably one of the least amended portions,” he said.

I do not agree with his statement that any mistakes made by AI could also be made by humans; the reasoning, and the errors in reasoning, are quite different in my experience. But the way ChatGPT was used here is absolutely fair.

[–] [email protected] 7 points 5 months ago

He just created greed. All the other evils flow from it.

[–] [email protected] 1 points 5 months ago (1 children)

They are copying the fictional movie character… the voice belongs to a real person, and there is precedent that explicitly impersonating a voice is IP theft.

But a fictional personality and a voice that merely has similar features? I really hope this gets settled in court.

[–] [email protected] 1 points 5 months ago

Tweeting “her” was stupid, but he has stated for years that it's his favorite movie, and honestly, even with a wildly different, male voice it would still be a product that appears very similar to the movie.
