markon

joined 1 year ago
[–] [email protected] 5 points 3 months ago

Yep, they now get paid for the data we gave them. I have no sympathy lol. At least these models can't actually store it all losslessly by any stretch of the imagination. The compression factor would have to be like 100-200x beyond anything we've ever achieved before. The numbers don't work out. The models do encode a lot though, and some of it is going to include actual full-text data etc., but it'll still be kinda fuzzy.
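Back-of-envelope, the "numbers don't work out" point looks something like this (all the sizes here are illustrative round numbers I picked, not published figures for any real model):

```python
# Rough sanity check: could a model's weights losslessly hold its
# whole training set? All numbers below are illustrative assumptions.

train_tokens = 10e12     # assume ~10T training tokens
bytes_per_token = 4      # assume ~4 bytes of raw text per token
params = 140e9           # assume a 140B-parameter model
bytes_per_param = 2      # fp16 weights

data_bytes = train_tokens * bytes_per_token   # ~40 TB of text
model_bytes = params * bytes_per_param        # ~280 GB of weights

ratio = data_bytes / model_bytes
print(f"required lossless compression: {ratio:.0f}x")
# prints "required lossless compression: 143x"
```

Way past anything lossless compressors actually achieve on text, which is the point.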

I think we do need ALL OPEN SOURCE. Not just for AI, but I know on that point I'm preaching to the choir here lol

[–] [email protected] 3 points 3 months ago (2 children)

They should. Maybe all the angry people here would go bliss out. Lol

[–] [email protected] 0 points 3 months ago

What's AI? Fuck what AI? Which one? What kind?

[–] [email protected] 1 points 3 months ago

Lol this is a good one. I love my LLMs but this is it. The problem is most people don't even think at all anyway. Most of the time I don't either. If we're honest with ourselves, we're still just barely advanced apes.

I don't get marketing. The more that gets shoved at me, the farther I retreat and ignore it. I'll let 'em run on YouTube sometimes just so the advertiser has to pay out the fraction of a penny on a wasted ad. Actually, how about we do this!

Let's build software that goes around watching ads constantly so it makes their numbers go all to hell.

[–] [email protected] 1 points 3 months ago

Cool, they should set up their own hidden services! 😂

[–] [email protected] -1 points 3 months ago

The funny thing is we hallucinate all our answers too. I don't know where these words are coming from, and I am not reasoning about them other than constructing a grammatically correct sentence. Why did I type this? I don't have a fucking clue. 😂

We map our meanings onto whatever words we see fit. The number of times I've heard a Republican call Obama a Marxist still blows my mind.

Thank you for saying something too. Better than I could do. I've been thinking about AI since I was a little kid. I've watched it go from at best some heuristic pathfinding in video games all the way to what we have now. Most people just weren't ever paying attention. It's been incredible to see that any of this was even possible.

I watched Two Minute Papers from back when he was mostly doing light transport simulation (raytracing). It's incredible where we are, but baffling that people can't see the tech as separate from good old capitalism and the owner class. It just so happens it takes a fuckton of money to build stuff like this, especially at first. This is super early.

[–] [email protected] 0 points 3 months ago

Just like us. Sometimes it's better to have bullshit predictions than none.

[–] [email protected] -3 points 3 months ago (1 children)

We should understand that 99.9% of what we say and think and believe is whatever feels good to us, which we then rationalize using very faulty reasoning, and that's only when we're really challenged! You know how I came up with these words? I hallucinated them. It's just a guided hallucination. People with certain mental illnesses are less guided by their senses. We aren't magic, and I don't get why it is so hard for humans to accept that any individual is nearly useless for figuring anything out. We have to work as agents too, so why do we expect an early-days LLM to be perfect? It's so odd to me. A computer is trying to understand our made-up bullshit. A logic machine trying to comprehend bullshit. It's amazing it even appears to understand anything at all.

[–] [email protected] 0 points 3 months ago (1 children)

Uhm. Have you ever talked to a human being?

[–] [email protected] 0 points 3 months ago

Asking the chat models to have self-discussion and use/simulate metacognition really seems to help. Play around with it. Oftentimes I am deep in a chat and I learn from its mistakes, and it kinda learns from my mistakes and feedback. It is all about working with, not against. Because at this time LLMs are just feed-forward neural networks trained on supercomputer clusters. We really don't even know what they are fully capable of, because it is so hard to quantify, especially when you don't really know what exactly has been learned.
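A minimal sketch of what the self-discussion trick can look like as a prompt template (the wording and function name are my own invention, not any vendor's API; it just builds the text, you paste it into whatever chat model you use):

```python
# Hypothetical prompt builder for the "self-discussion /
# simulated metacognition" trick: ask the model to argue
# with itself before committing to an answer.

def self_discussion_prompt(question: str, voices: int = 2) -> str:
    """Wrap a question in a self-discussion scaffold."""
    return (
        f"Before answering, hold a discussion between {voices} "
        "internal voices: one proposes an answer, the other "
        "critiques it. Revise until they agree, then state the "
        "final answer.\n\n"
        f"Question: {question}"
    )

print(self_discussion_prompt("Is the regex (a+)+$ safe to use?"))
```

Tweak the voice count and wording per model; different models respond very differently to the same scaffold.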

Q-learning in language is also an interesting methodology I've been playing with. With an image generator, for example, you can just add "(Q-learning quality)" to the prompt and you may get more interesting, higher-quality results. Which itself is very interesting to me.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

I've used LLMs a lot over the past couple years. Pro tip: use them a lot and learn the models. Then they look much more intelligent as you, the user, become better. Obviously if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9AM" it will be a grave disappointment.

If you first design a ball fondling/scratching robot, use multiple instances of LLMs to help you plan it out, etc. then you may be impressed.

I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers, and that they are digital and not like us. You can't make assumptions like you can with humans. Usually even when you do that with us, you just get stuff you didn't want because you weren't clear enough. We are horrible at giving instructions, and this is something I hope AI will help us learn to do better. Because ultimately, bad instructions or incomplete information doesn't lead to being able to determine anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it'll go out and do all the work to embody itself in a robot, buy a bike, and ride it. Wait, you don't even know it did it though, because you never specified for it to record the ride...

Very few of us are pretty good at giving computers clear instructions some of the time. Also, I have found that just forcing models to reason in context is powerful. You have to know to tell it to "use a drill-down tree style approach to problem solving. Use reflection and discussion to explore and find the optimal solution to reasoning through the problem." It might still give you bad results. That is why you have to experiment. It is a lot of fun if you really just let your thoughts run wild. It takes a lot of creative thinking right now to really get the most out of these models. They should all be 110% open source and free for all. BTW Gemini 1.5 and Claude and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who all is involved over there I wouldn't say I trust them. Especially since they want regulatory capture.
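The "force it to reason in context" bit can be packaged up so you stop retyping it. A sketch, using the exact scaffold wording quoted above (the constant and helper names are mine, not from any model's docs):

```python
# Reusable wrapper for the in-context reasoning scaffold quoted
# above. It only assembles the prompt string; sending it to
# Gemini / Claude / Llama is left to whatever client you use.

SCAFFOLD = (
    "Use a drill-down tree style approach to problem solving. "
    "Use reflection and discussion to explore and find the optimal "
    "solution to reasoning through the problem.\n\n"
    "Problem: {question}"
)

def scaffolded_prompt(question: str) -> str:
    """Prepend the reasoning scaffold to a problem statement."""
    return SCAFFOLD.format(question=question)

print(scaffolded_prompt("Why does my shell script exit with code 127?"))
```

Keep a few variants of the scaffold around and A/B them per model; what helps Llama 3.1 may do nothing for Claude.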

 

