gerryflap

joined 1 year ago
[–] [email protected] 5 points 1 year ago (1 children)

You're allowed to post later, though this will be visible with your post. Tbh I like it so far. It's a good way to engage with my friends now that I only see them a few times a month. The friends I have on BeReal generally respect the rule that you either post at the moment BeReal asks you to, or at the nearest moment where it's okay to do so. I won't post at work, on the toilet, or in the shower or something, but I will post right after those moments if I'm doing something mundane like washing the dishes or cycling home. You generally just get a sense of what your friends' lives are normally like, as well as the fun things they do like traveling or visiting some event.

[–] [email protected] 7 points 1 year ago (1 children)

But it's true. These AI models are not some big database where every piece of information is stored and can just be removed whenever you desire.

Imagine you almost got hit by a car while crossing the road as a child. That memory influenced your decisions from there on out: you learnt to always look before crossing, and over time your brain literally got wired differently because of that incident. Now imagine that 20 years later the law requires you to remove that memory from your brain because apparently it was private data. How do you do that? It's not a single data point that just hangs around in your brain. Even if you could remove the memory itself, it has had compound effects on who you are and what you do; there is no removing it in a way that also undoes all its effects on your brain. It's exactly the same for these AI models. The way one private data point affected the model parameters cannot be reverted unless you retrain the entire thing.
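To make it concrete, here's a toy sketch (a made-up linear model with made-up data, nothing like a real LLM) of why you can't just subtract one example out of a trained model: every gradient step builds on the previous ones, so the "private" example's influence is baked into everything that follows.

```python
# Toy illustration: one training example's influence can't simply be
# subtracted out after training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 99 "ordinary" examples + 1 "private" one (index 0)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def sgd(X, y, steps=1000, lr=0.01):
    w = np.zeros(X.shape[1])
    for t in range(steps):
        i = t % len(X)               # cycle through the data
        grad = (X[i] @ w - y[i]) * X[i]
        w -= lr * grad               # every step builds on all previous ones
    return w

w_full = sgd(X, y)                   # trained with the private example
w_retrain = sgd(X[1:], y[1:])        # retrained from scratch without it

# There is no cheap "subtract example 0" operation that turns w_full into
# w_retrain: its gradients shaped every later update, just like the
# near-miss memory shaped every later decision.
print(np.abs(w_full - w_retrain).max())
```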

[–] [email protected] 5 points 1 year ago

Well, the counterpoint is that NVIDIA's Linux drivers are famously garbage, which also pisses off professionals. From what I see from AMD now with ROCm, it seems like they're going in the right direction. Maybe they can convince me next time I'm on the lookout for a GPU.

But overall you're right, yeah. My feeling is that AMD is competitive with NVIDIA on price/performance, but NVIDIA has broader feature support, both in games and in professional use cases. I do feel like AMD has been steadily improving over the past few years though. In the gaming world FSR seems almost as ubiquitous as DLSS (or maybe even more so), and ROCm support seems to have grown rapidly as well. Hopefully they keep going, so I'll have a choice for my next GPU.

[–] [email protected] 1 points 1 year ago

They were left-handed and used the same hands as I did for the fork and the knife. It might simply be caused by the "normal" placement of the fork and knife next to the plate.

[–] [email protected] 10 points 1 year ago (3 children)

My problem when buying my last GPU was that AMD's answer to CUDA, ROCm, was just miles behind and not really supported on their consumer GPUs. From what I see now, that has changed for the better, but it's still hard to trust when CUDA is so dominant and mature. I don't want to reward NVIDIA, but I also want to use my GPU for some deep learning projects and don't really have a choice at the moment.
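For what it's worth, writing device-agnostic code helps hedge the bet. Here's a minimal PyTorch sketch; as far as I know, the ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API, so something like this should run on either vendor's card (the tiny model is just a placeholder):

```python
# Device-agnostic PyTorch sketch: on ROCm builds, AMD GPUs are (to my
# knowledge) reported through the same torch.cuda interface.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(16, 1).to(device)     # placeholder model
x = torch.randn(8, 16, device=device)
print(model(x).shape, "on", device)
```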

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (3 children)

I think that's normal. I'm right-handed, but I keep my phone in my left pocket and tend to use it in my left hand when using it one-handed. Last week I also had someone ask me whether I was left-handed because I used my fork in my left hand to hold the food still and my knife in my right hand to cut it. I honestly never think about these things; they just kinda happened over time because it worked for me. In my experience my dominant hand is obviously better at most things, but in the end it's also a matter of training. If, for whatever reason, I repeatedly use my non-dominant hand for something instead of my dominant one, then my non-dominant hand will get better. It's just that the dominant hand improves faster and has a higher "skill ceiling" in my experience.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

But the fact that it can do so much is an awesome (and maybe scary) result in and of itself. These LLMs can write working code examples, write convincing stories, give advice, solve simple problems quite reliably, etc., all from just learning to predict the next word. I feel like people are moving the goalposts way too quickly, focussing so much on the mistakes it makes instead of the impressive feats that have been achieved. Having AI do all this was simply unthinkable a few years ago. And yes, OpenAI is currently using a lot of hardware, and ChatGPT might indeed have gotten worse. But none of that changes what has been achieved and how impressive it is.
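If it helps, the core generation loop really is that simple. Here's a toy sketch; next_token_probs is a made-up stand-in for the actual neural network, which is where all the magic sits:

```python
# Toy sketch of "just predicting the next word", repeated in a loop.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    # Stand-in for a real model: in reality a trained network assigns a
    # probability to every token in the vocabulary given the context.
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    p = rng.random(len(vocab))
    return p / p.sum()

def generate(prompt, max_tokens=5):
    context = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(context)
        context.append(vocab[int(np.argmax(probs))])  # greedy: take the most likely token
    return " ".join(context)

print(generate(["the", "cat"]))
```

Real models sample from those probabilities rather than always taking the argmax, but the loop itself is the same: predict, append, repeat.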

Maybe it's because of all these overhyping clickbait articles that make reality seem disappointing. As someone in the field who's always been cynical about what would be possible, I can't be anything other than impressed with the rate of progress. I was wrong with my predictions 5 years ago, and who knows where we'll go next.

[–] [email protected] 2 points 1 year ago (1 children)

Yeah, indeed. Back then I was actually working with image generation and GANs, and it was just starting to take off. A year or so later StyleGAN would absolutely blow my mind: generating realistic 1024x1024 images while I was still bumbling about with a measly 64x64 pixels. But even then I didn't foresee where this was going.

[–] [email protected] 22 points 1 year ago (6 children)

Who says it ends here? We've made tremendous progress in a short time. 10 years ago it was absolutely unthinkable that we'd be at a stage where we can generate these amazing images from text on consumer hardware, and that AI could write text in a way that could totally fool humans. Even as someone working in the field, I was fairly sceptical 5-6 years ago that we'd get here this fast.

[–] [email protected] 8 points 1 year ago

You can code in binary, but the only thing you'd be doing is frustrating yourself. We did it in the first week of computer science at university. Assembly is basically just a human-readable form of those instructions: instead of some opcode in binary you can at least write "add", which makes it easier to see what's going on. The binary machine code is not some totally different language from what is written in the assembly code, so writing in binary doesn't really provide any more control or benefit as far as I'm aware.
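To illustrate, here's a toy assembler for a made-up 8-bit instruction set (not any real ISA): each mnemonic maps one-to-one to an opcode, so the assembly and the binary are literally two spellings of the same program.

```python
# Toy assembler for a hypothetical 8-bit ISA: mnemonics are just readable
# names for opcodes, nothing more.
OPCODES = {"load": 0b0001, "add": 0b0010, "store": 0b0011}

def assemble(line):
    mnemonic, operand = line.split()
    # 4-bit opcode in the high nibble, 4-bit register/address in the low one
    return (OPCODES[mnemonic] << 4) | int(operand)

program = ["load 1", "add 2", "store 3"]
for line in program:
    print(f"{line:10s} -> {assemble(line):08b}")   # same instruction, two spellings
```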

[–] [email protected] 1 points 1 year ago

I don't really see the problem. People like to listen to the stuff and Spotify provides it and pays the creator. Seems like everything is working as intended. Looks like it's just greedy people getting annoyed that they can't get even richer.

[–] [email protected] 1 points 1 year ago

I'm often quite dominant in discussions and tend to strongly defend my position when I consider the other options to be wrong. My second issue is that I sometimes struggle to accept new ideas. This combination can leave me in a place where I fight heavily against a new idea, purely because I'm not fully convinced it'll improve things. As I learn more about it, sometimes I'll see why it's actually a good idea and cringe about my staunch resistance to it. I'm getting better at suppressing that initial reaction, but sometimes it's hard since I really feel like we're doing the wrong thing. I also think it's related to being autistic, as black and white thinking, resistance to change, and valuing facts over the feelings of others are all things that often come with autism. Nevertheless, I really want to tone it down a bit.
