barsoap

joined 1 year ago
[–] [email protected] 9 points 1 day ago (1 children)

The tracks are A.I. generated from lyrics and musical compositions that I have created. The A.I. samples are then mixed and edited by me.

Generated from human compositions, human-mixed, human-edited: there are plenty of songs with less human input. Even I can steal beats from a frying steak.

This isn't the "automated AI slop" that you're looking to complain about.

As to "intention to mislead": that has nothing to do with AI. Passing off a new composition as a 1974 track at first sight is peak retro.

[–] [email protected] 12 points 1 day ago

Would have to buy a new board and RAM; not really worth it performance-wise, at least not for me. Some day, yes, but that day hasn't come and will definitely be after a GPU upgrade.

[–] [email protected] 1 points 2 days ago

Memory chips have been an utterly fickle market ever since there have been memory chips; companies in that business are still in that business because they learned how to deal with the swings. If Micron can survive (and they will), then so will Samsung, whose memory chip business has the whole conglomerate to fall back on.

[–] [email protected] 1 points 2 days ago (1 children)

And it would be so easy to make a big splash in the market by having a phone where the camera doesn't protrude out of the back.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago)

The limit on Moore’s Law has been more to the economic side than actually packing transistors in.

Those economic limits exist because we're reaching the limit of what's physically possible. Fabs are still squeezing more transistors into less space, for now, but the cost per transistor hasn't fallen for some time; IIRC somewhere around 10nm is still the most economical node. Things just get difficult and exponentially fickle the smaller you get, and at some point there's going to be a wall. Of note, currently we're talking more about things like backside power delivery than actually shrinking anything. Die-on-die packaging and stuff.

Long story short: node shrinks aren't the low-hanging fruit any more. They haven't been since the end of planar transistors (if it had been possible to just shrink back then, they wouldn't have engineered FinFETs), but it's really been picking up speed since the start of the EUV era. Finer and finer pitches don't really matter if you need more and more lithography/etching/coating steps because the structures you're building are getting more and more involved in the z axis; every additional step costs additional machine time. On the upside, newer production lines can spit out older nodes at pretty much printing-press speed.

[–] [email protected] 5 points 2 days ago

Yep, Lua and Lisp/Scheme are also unityped and not even close to as broken. All are remarkably similar languages, theory-wise.

...also something something Guido not getting tail call elimination and people sending him copies of the wizard book. It's been a while.

(And, yes, Lua does proper tail calls.)
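
To illustrate the tail-call point (a hedged sketch, not anything from the thread): CPython keeps a stack frame per call even when the call is in tail position, so the recursive version below dies with RecursionError, while a language with proper tail calls (Lua, Scheme) would run it in constant stack space. The function names are made up for the example.

```python
import sys

def count_down(n):
    # Tail call: the recursive call is the last thing the function does.
    # CPython still allocates a fresh stack frame per call, so deep
    # recursion raises RecursionError instead of reusing the frame.
    if n == 0:
        return "done"
    return count_down(n - 1)

def count_down_loop(n):
    # What proper tail calls would effectively give you: a loop that
    # runs in constant stack space.
    while n > 0:
        n -= 1
    return "done"

print(count_down(500))              # fine, well under the limit
print(count_down_loop(10_000_000))  # fine, no stack growth
try:
    count_down(sys.getrecursionlimit() + 100)
except RecursionError as err:
    print("no tail call elimination:", err)
```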

[–] [email protected] 9 points 3 days ago (1 children)

About the only AI company currently alive that I'm sure will survive is CivitAI. Huggingface probably, too. Both are, in the end, in the datacenter business. Huggingface has exposure to VC BS in its client base; it might be in trouble if a significant number suddenly go belly-up, but if they have any sense they simply won't overextend. And, well, they, too, can switch to cat pictures.

[–] [email protected] 2 points 1 week ago

Yep, that's what Nvidia marketing seems to be calling their denoiser nowadays. Gods spare us marketing departments.

[–] [email protected] 2 points 1 week ago (2 children)

Tensor cores have nothing to do with raytracing. They're cut-down GPU cores specialising in tensor operations (hence the name) and nothing else. Raytracing is accelerated by RT cores, which do BVH traversal and ray intersections; the tensor cores are in there to run a denoiser that turns the noisy mess real-time RT produces into something that's, well, not messy. Upscaling, essentially; the only difference between denoising and upscaling is that with upscaling the noise is all square.

And judging by how AMD has done this stuff before: nope, they won't do separate cores, but will make sure that the ordinary cores can do all that stuff well.

[–] [email protected] 2 points 1 week ago

The trick to NixOS, in this instance, is to use a Python venv. Python dependencies are fickle and nasty in the first place, triply so when talking about fast-churning AI code. I tried specifying everything with Nix, I succeeded, and then you have random ComfyUI plugins assuming they can get a writeable location by constructing a path from ComfyUI's main.py. It's not worth it: let Python be the only dependency you feed in, and let pip and general Python jank do the rest.
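
A minimal sketch of that setup, written as a Python script purely for illustration (you'd normally type the equivalent shell commands); the directory paths are assumptions, not anything ComfyUI mandates:

```python
import subprocess
import venv
from pathlib import Path

# Hypothetical locations; point these at your actual ComfyUI checkout.
comfy_dir = Path.home() / "comfyui"
venv_dir = comfy_dir / ".venv"

# Python itself comes from Nix; everything past this point is plain
# pip-managed state living in a writeable directory next to main.py.
venv.create(venv_dir, with_pip=True)

# Install ComfyUI's requirements into the venv (Linux path layout).
venv_python = venv_dir / "bin" / "python"
subprocess.run(
    [str(venv_python), "-m", "pip", "install", "-r",
     str(comfy_dir / "requirements.txt")],
    check=True,
)
```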

[–] [email protected] 1 points 1 week ago (1 children)

5500 here. I can't use any recent ROCm version because the GFX override I use is for a card that apparently has a couple more instructions, and the newer kernels instantly crash with an illegal-operation exception.

I found a build someone made buried in a docker image, and it indeed does work, without the override, for the 5500, but it uses all-generic code for the kernels and is like 4x slower than the ancient version.

What's ultimately the worst thing about this isn't that AMD isn't supporting all cards for ROCm -- it's that the support is all or nothing. There's no "we won't be spending time on this but it passes automated tests, so ship it" kind of thing. It's "oh, the new kernels broke that old card, tough luck, you don't get new kernels".

So in the meantime I'm living with the occasional (every couple of days?) freeze when using ROCm, because I can't reasonably upgrade. And it's not just that the driver crashes and the kernel tries to restart it; the whole card needs a reset before it will do anything but display a VGA console.

[–] [email protected] 1 points 1 week ago

but they say that you should use the right pinky to reach every key towards the upper right end of the keyboard, which gets old fast given how frequently you need to access them.

I don't do that either. I hit the rightmost stuff with the ring finger; some keys are on the middle finger. The return-to-home-position thing is still important, though: the one place to measure all distances from. Also, I learned touch-typing with Dvorak, which may or may not have had an influence.

 

A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No "Zero-Shot" Without Exponential Data): https://arxiv.org/abs/2404.04125

 

Link to the talks schedule; times are CET (deal with it).

Streams will show up here and final recordings here. There are generally also rough-cut recordings posted automatically after a talk is over; I don't have a link for those yet.

Oh, and for completeness' sake, the congress' web page.

 

Today we're looking at an ion milling machine. This instrument accelerates argon particles to high velocities and then slams them into your sample, acting as an atomic sandblaster. The sample is slowly etched by the transfer of kinetic energy from the argon atoms. It can etch literally any material, even diamond!

10
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

RyanF9 uses science to explain how Gore-Tex works and why you’re being ripped off.

 

In the '80s, one British firm was working on the future of high-performance computing, where not one processor would work on a task but many. That company was Inmos, and the processor was the Transputer.
