wolfshadowheart

joined 1 year ago
[–] [email protected] 10 points 1 year ago

Bethesda is a publisher as well, which explains that.

[–] [email protected] 6 points 1 year ago (4 children)

You are severely overestimating both in your assumptions.

I think I have like 3,000 audiobooks as .mp3s and they take hardly 5GB.

[–] [email protected] 6 points 1 year ago

B-b-but my exclusive content!

[–] [email protected] 3 points 1 year ago

I think they only do Overwatch and VAC bans at first, no?

[–] [email protected] 3 points 1 year ago (1 children)

They're super transparent about whatever they have going on. They had one probe within the last couple of years, but they don't keep logs, so I'm not sure anything bad for the users is even possible - and what VPN hasn't been asked for its information lol

[–] [email protected] 6 points 1 year ago

Ah I see. It begins I guess :(

[–] [email protected] 1 points 1 year ago (4 children)

Isn't it just rolling out for Chrome users? Won't Firefox be unaffected?

[–] [email protected] 2 points 1 year ago (1 children)

Ah yeah, my mistake, I'm always mixing up language- and image-based AI models. Training text-based models is much less feasible locally lol.

There's no model for my art, so I'm creating a checkpoint model using xformers to get around the VRAM requirement. From there I'll be able to speed up variants of my process using LoRAs, but that won't be for some time - I want a good model first.
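For anyone curious, the xformers part is roughly this in a diffusers-style training script - a minimal sketch, assuming a diffusers setup; the base model name and flags are placeholders, not my actual pipeline:

```python
# Rough sketch of the xformers part of a diffusers-style training script.
# The base model name and flags here are placeholders, not my actual setup.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
unet.enable_xformers_memory_efficient_attention()  # the big VRAM saver during training
unet.enable_gradient_checkpointing()               # trades compute for more VRAM headroom
unet.to("cuda")
```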

[–] [email protected] 3 points 1 year ago (3 children)

You definitely can train models locally - I'm doing it myself on a 3080, and we wouldn't see as many public models online if that weren't possible! But in terms of speed you're definitely right, it's a slow process for us.

[–] [email protected] 5 points 1 year ago

I'm not sure about expanded models, but pooling GPUs is effectively what the Stable Diffusion servers have set up for the AI bots. A bunch of volunteers/mods run a public SD server and their machines are used as needed - for a 400,000+ member Discord server I helped moderate, this was necessary to keep the bots serving requests at a reasonable pace.
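The idea is basically one shared job queue that every volunteer GPU box pulls from. A toy sketch (the names and the fake render call are placeholders, not the actual bot code):

```python
# Toy sketch of the idea: one shared job queue, every volunteer GPU box pulls from it.
# render() stands in for whatever Stable Diffusion call the real bot workers make.
import queue
import threading

jobs = queue.Queue()

def render(prompt: str) -> str:
    return f"image for: {prompt}"  # placeholder for actual SD inference on the worker's GPU

def worker(name: str) -> None:
    while True:
        prompt = jobs.get()
        if prompt is None:  # sentinel: time to shut down
            jobs.task_done()
            break
        print(f"[{name}] {render(prompt)}")
        jobs.task_done()

# two "volunteer" workers pulling from the same queue
for n in ("gpu-worker-1", "gpu-worker-2"):
    threading.Thread(target=worker, args=(n,), daemon=True).start()

for p in ("a corgi astronaut", "a neon city at dusk"):
    jobs.put(p)
jobs.join()  # wait until all queued prompts are handled
```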

I think the best we'll be able to hope for is whatever hardware MythicAI was working on with their analog chip.

Analog computing went out of fashion due to its ~97% accuracy rate and the need to be built for specific purposes. For example, building a computer to calculate the trajectory of a hurricane or tornado - the results when repeated are all chaos, but that's effectively what a tornado is anyway.

MythicAI went out on a limb, and the shortcomings of analog computing are actually strengths for running models. If you're 97% sure something is a dog, it's probably a dog, and the computer's 3% error rate is far lower than a human's. They developed these chips to be used in cameras for tracking, but the premise is promising for any LLM, it just has to be adapted for them. Because of the nature of how they were used and the nature of analog computers in general, they use way less energy and are way more efficient at the task.
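As a toy illustration of why that error rate is tolerable for inference (made-up numbers, nothing to do with Mythic's actual chips): add a few percent of noise to the weights in a matrix multiply and the top prediction usually doesn't change.

```python
# Toy illustration (not Mythic's hardware): inference tolerates a few percent of "analog" noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)        # pretend feature vector
W = rng.normal(size=(10, 512))  # pretend classifier weights

exact = W @ x                                            # the "digital" answer
noisy = (W * (1 + 0.03 * rng.normal(size=W.shape))) @ x  # ~3% noise on every weight

print("digital top class:", int(exact.argmax()), "| noisy top class:", int(noisy.argmax()))
print("mean perturbation vs. largest score:",
      float(np.mean(np.abs(noisy - exact)) / np.abs(exact).max()))
```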

Which means that theoretically one day we could see hardware-accelerated AI via analog computers. No need for VRAM and 400+ watts: MythicAI's chips can take the model request, sift through it, send the analog result through a digital converter, and our computer has the data.

Veritasium has a decent video on the subject, and while I think it's a pipe dream that these analog chips will one day be integrated as PC parts, it's a pretty cool one and the best thing we can hope for as consumers. Pretty much regardless of cost, it would be a better alternative to what we're currently doing, as AI takes a boatload of energy that it doesn't need to. Rather than thinking about how we can all pool thousands of watts and hundreds of gigs of VRAM, we should be investigating alternative routes to utilizing this technology.

[–] [email protected] 12 points 1 year ago (11 children)

Okay, I'm with you but...

how are we using these closed-source models?

As of right now I can go to civitai and get hundreds of models created by users to be used with Stable Diffusion. Are we assuming that these closed-source models are even able to be run on local hardware? In my experience, once you reach a certain size there's nothing lay users can do on our hardware, and the corpos aren't running AI on a 3080, or even a set of 4090s or whatever. They're using stacks of A100s with more VRAM than everyone's GPUs in this thread.

If we're talking the whole of LLMs, to include visual and text-based AI... Frankly, while I entirely support and agree with your premise, I can't quite see how anyone can feasibly utilize these (models). For the moment, anything that's too heavy to run locally is pushed off to something like Colab or Jupyter, and it'd need to be built with the model in mind (from my limited Colab understanding - I only run locally, so I am likely wrong here).

Whether we'll even want these models is a whole different story too. We know that more data = more results, but we also know that too much data fuzzes specifics. If the model is, say, the entirety of the Internet, it may sound good in theory, but in practice getting usable results will be hell. You want a model with specifics - all dogs and everything dogs, all cats, all kitchen and cookware, etc.

It's easier to split the data this way for the end user, since we can then direct the AI to put together an image of a German Shepherd wearing a chef's hat cooking in the kitchen, with the subject using the dog model and the background using the kitchen model.
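In practice that split usually looks like a two-pass workflow: generate the scene with one checkpoint, then inpaint the subject with another. A minimal sketch, assuming a diffusers setup - the checkpoint paths, prompts, and mask box are all placeholders:

```python
# Rough two-pass sketch: background from one checkpoint, subject inpainted with another.
# Checkpoint paths, prompts, and the mask box are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

kitchen = StableDiffusionPipeline.from_single_file(
    "./models/kitchen_model.safetensors", torch_dtype=torch.float16
).to("cuda")
background = kitchen("a warm restaurant kitchen, stainless steel counters").images[0]

# White = region to repaint with the subject model, black = keep the background.
mask = Image.new("L", background.size, 0)
mask.paste(255, (96, 96, 416, 480))

dog = StableDiffusionInpaintPipeline.from_single_file(
    "./models/dog_model_inpainting.safetensors", torch_dtype=torch.float16
).to("cuda")
final = dog(
    prompt="a german shepherd wearing a chef's hat, cooking",
    image=background,
    mask_image=mask,
).images[0]
final.save("dog_chef.png")
```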

So while we may even be able to grab these models from corpos, without the hardware and without any parsing, it's entirely possible that this data will be useless to us.

[–] [email protected] 5 points 1 year ago

Are you saying it's specifically an issue after restarting one's phone? Just a few weeks ago I was walking my dog and my phone fell out of my pocket. I hadn't used it, so it was locked, and I was able to ring it just fine with Find My Device online. Took me a little while to find the sound, but it located it no problem.
