beigeoat

joined 1 year ago
[–] [email protected] 26 points 1 year ago (1 children)

In Hindi we call it "old lady hair"

[–] [email protected] 2 points 1 year ago

To satisfy you:

[–] [email protected] 5 points 1 year ago (2 children)

Well, you see, my parents and grandparents don't fully understand the concept of ads, especially in the case of YouTube Shorts. After a few instances of them sharing the ads, thinking they were regular content, I just got the family plan.

[–] [email protected] 2 points 1 year ago (1 children)

A course in college had an assignment that required Ada; this was three years ago.

[–] [email protected] 4 points 1 year ago (1 children)

Some models also prefer children for some reason, and then you have to put mature/adult in the positive prompt and child in the negative.

[–] [email protected] 22 points 1 year ago (4 children)

AMD is getting better for ML/scientific computing very fast, even on regular consumer GPUs. I have seen PyTorch performance more than double on my 6700 XT in 6 months, to the point that it now beats a 3060 (not Ti).

[–] [email protected] 6 points 1 year ago

Please no, this is incredibly dangerous. It wasn't enough to give developers AI that produces untrusted and deceptive code; now they want to run that code without oversight.

People are going to get `rm -rf /*`'d by the AI and will only then understand how stupid an idea this is.

[–] [email protected] 1 points 1 year ago

If going for an inverter, try a sine wave one if it's in your budget.

[–] [email protected] 7 points 1 year ago (1 children)

This, plus any LLM is incapable of critical thinking. It can imitate it to the point where people might think it's able to, but that's just because it has seen the answers to the problems people are asking during the training process.

[–] [email protected] 7 points 1 year ago (1 children)

This usually depends on the country/region. For example, in India IKEA is obscenely expensive for what it sells, when you can get a far better product at a similar price.

At least in Delhi you can get really really good furniture at a fair price.

[–] [email protected] 3 points 1 year ago

I have used it mainly for DreamBooth, textual inversion, and hypernetworks, just using it for Stable Diffusion. For models, I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything V3, and a few others.

The $0.79/hr is charged only for the time you use it; if you turn off the container, you are charged for storage only. So it is not running 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won't be in continuous use for a period of years.
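To put numbers on it, here's a quick sketch of how that $568/month figure falls out of the hourly rate (the $0.79/hr runpod rate is the one quoted in this thread; the 2-hours-a-day figure is just an illustrative usage pattern):

```python
# Cost sketch: renting one A100 on runpod at the rate quoted above.
# Prices vary; these numbers are assumptions from this thread.
HOURLY_RATE = 0.79         # USD per hour per A100
HOURS_PER_MONTH = 24 * 30  # container left running nonstop

monthly_nonstop = HOURLY_RATE * HOURS_PER_MONTH
print(f"24/7 rental: ${monthly_nonstop:.2f}/month")  # ≈ $568.80

# Pay-per-use: say, 2 hours of training a day
casual_use = HOURLY_RATE * 2 * 30
print(f"2 hrs/day:   ${casual_use:.2f}/month")  # $47.40
```

So the headline monthly price only applies if you never shut the container down; actual pay-per-use cost is a small fraction of that.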

Another important distinction is that LLMs are a whole different beast: running them, even when renting, isn't justifiable unless you have a large number of paying users. For the really good LLMs with large parameter counts, you need more than just a good GPU: you need at least 10 NVIDIA A100 80GB cards (Meta's needs 16: https://blog.apnic.net/2023/08/10/large-language-models-the-hardware-connection/) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.

[–] [email protected] 2 points 1 year ago (2 children)

The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now: on runpod it's $0.79 per hour per A100.

On the other hand, the freely available models are really great, and there hasn't been a need for the closed-source ones for me personally.

 

