ekky

joined 11 months ago
[–] [email protected] 8 points 7 months ago

Lol, it is indeed one of the cleaner versions that I remember having seen, nice work! ^, ^

Image of a box labelled "Memories", with RAM memory sticks inside.

[–] [email protected] 10 points 7 months ago (2 children)

But Admiral Patrick, how dare your ancient memes from times long forgotten not meet our modern expectations? Do you at least have a proper shitposting license?

I'll post mine as reference, may you gaze upon it and ponder the shortcomings of your horrible artifact-ridden memes!

An artifact-ridden and overcompressed image of a man labelled "me" holding the mythical "Shitposting License", with the caption "What gives u the right to flood my newsfeed with ur crap memes?"

[–] [email protected] 66 points 8 months ago (1 children)

AI: "Can I copy your work?"

Phil: "Just don't make it obvious."

AI:

[–] [email protected] 25 points 8 months ago (4 children)

You have a common border with Denmark, right? There might be a possibility there....

[–] [email protected] 47 points 8 months ago

Everything nowadays that attempts to give back a little autonomy or freedom to the user is called piracy.

As long as an app could theoretically be used for piracy, even if it was made to circumvent toxic behavior in products users have bought and paid for, it must be properly labelled as piracy and taken down.

I'd better stop before this becomes a rant.

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (1 children)

Agreed for induction, but I'd much rather spend one or two extra minutes cleaning the knobs than have to almost cook my finger on this 60-90 degree Celsius hot conventional stove's touch surface to change the plate from step 7 to 4 for 10 FUKKEN SECONDS! OUCH!

Having to restart it 2-3 times during cooking because it got confused (pan moved slightly to the side) is also rather annoying.

Edit & tl;dr: Touch works decently on induction, just please keep it far away from any conventional stoves.

[–] [email protected] 4 points 8 months ago

Oh right, I do actually have track, volume, and "take call" on the wheel. I think I did use them once, but it just never stuck since they felt awkward to use.

[–] [email protected] 11 points 8 months ago

I'm more concerned about fog lights, emergency lights, and window heating, as the law usually requires you to be able to use them if conditions require it.

[–] [email protected] 7 points 8 months ago (2 children)

Same, I've got an Opel Corsa from 2016, so it's pretty much brand new.

The only things on the wheel are the speed control, wipers, and default lights.

For everything else required for driving, such as fog lights, emergency lights, front and back window heating, AC, radio, and of course the shift stick, I need to take a hand off the wheel.

Luckily for me, the touchscreen in the middle only handles less important things like navigation and external music sources.

[–] [email protected] 18 points 8 months ago (2 children)

“Fixed issue with ssl python libs,” or “Minor bugfixes.”

Red bird going "Hahaha, No!"

In other news, never have more than one person working on a branch (that's why we have them). Make a new related issue with its own branch and rebase whenever necessary, and don't even think about touching main or dev with anything but a properly reviewed and approved PR (in case they aren't already protected), or I'll find and report you to the same authority that handles all the failed sudo requests!

Also, companies that disable rebasing are my bane. While you can absolutely do without, I much prefer having fewer conflicts, cleaner branches and commits, an easier way to pull in new changes from dev, an overall better time for the reviewer, and the list goes on. Though, the intern rewriting the history of multiple branches they have no business pushing to is also rather annoying.
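The branch-per-issue plus rebase flow above can be sketched in a throwaway local repo (the branch and commit names here are made up for illustration):

```shell
set -e
# throwaway repo standing in for a real project with a dev branch
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b dev
git config user.email you@example.com && git config user.name you
echo hello > app.txt && git add app.txt && git commit -qm "initial"

# one issue, one branch
git switch -qc issue-123-fix-ssl-libs
echo fix >> app.txt && git commit -qam "Fix issue with ssl python libs (for real this time)"

# meanwhile dev moved on; pull its changes in by rebasing, not merging
git switch -q dev
echo other > other.txt && git add other.txt && git commit -qm "unrelated work"
git switch -q issue-123-fix-ssl-libs
git rebase -q dev

git log --oneline   # the issue commit now sits cleanly on top of dev
```

After review approval the branch lands in dev through a PR; nobody pushes to dev or main directly.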

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

Neural nets are a technology that falls under the umbrella term "machine learning". Deep learning is also a term under machine learning, just more specialized towards large NN models.

You can absolutely train NNs on your own machine; after all, that's what I did for my master's before ChatGPT and all that, defining the layers myself, and it's also what I do right now with CNNs. That said, LLMs do tend to become so large that anyone without a supercomputer can at most fine-tune them.

"Decision tree stuff" would be regular AI, which can be turned into ML by adding a "learning method" like a KNN or neural net, genetic algorithm, etc., which isn't much more than a more complex decision tree whose decision thresholds (weights) were automatically estimated by analysis of a dataset. More complex learning methods are even capable of fine-tuning themselves during operation (LLMs, KNN, etc.), as you stated.

One big difference between other learning methods and NN-based methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.

EDIT: Some corrections, now that I'm fully awake.

While very similar in structure and function, the NN is indeed no decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique nodes, all of an NN's nodes are connected to all nodes of the following layer. This is also one of the strong points of an NN: something that seemed outrageous to it a moment ago might become much more plausible when looked at from a different point of view, such as after a transformative layer.

Also, other learning methods usually don't have layers, or, if one were to define "layer" as "one-shot decision process", they pretty much only have one or two layers. In contrast, an NN can theoretically have an infinite number of layers, allowing for pretty much infinite complexity as long as the input data is not abstracted beyond reason.
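A minimal sketch of that "every node connects to every node of the following layer" idea, in plain Python (the layer sizes and weights are arbitrary toy values, whereas training would estimate them from a dataset):

```python
def dense_layer(x, weights, biases):
    """One fully connected layer: every input value feeds every output node."""
    out = []
    for w_row, b in zip(weights, biases):
        s = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        out.append(max(s, 0.0))  # ReLU non-linearity
    return out

# 4 inputs -> 3 hidden nodes -> 2 outputs
w1 = [[0.2, -0.5, 0.1, 0.4],
      [0.7, 0.3, -0.2, 0.0],
      [-0.1, 0.6, 0.5, -0.3]]
b1 = [0.0, 0.1, -0.2]
w2 = [[0.5, -0.4, 0.2],
      [0.1, 0.8, -0.6]]
b2 = [0.0, 0.0]

x = [1.0, 0.5, -0.2, 0.8]
hidden = dense_layer(x, w1, b1)    # the "transformative" middle layer
output = dense_layer(hidden, w2, b2)
print(len(hidden), len(output))    # 3 2
```

Stacking more `dense_layer` calls is all it takes to go deeper, which is the "infinite layers" point above; a decision tree has no equivalent move.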

Lastly, NNs don't back-propagate by default, though they make it easy to enable such features given enough processing power and optionally enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I'm decently sure they implement back-propagation as part of the technology's definition, just like KNN.

This became a little longer than I had hoped; it's just a fascinating topic. I hope you don't mind that I went into more detail than necessary, it was mostly for the random passersby.

[–] [email protected] 24 points 8 months ago

AI is a very broad term, ranging from physical AI (material and properties of a robotic grabbing tool) to classical AI (as seen in many games, or in a robotic arm calculating the path from its current position to a target position) and to ML-based AI (LLMs, neural nets in general, KNN, etc.).

I guess it's much the same as asking "are vehicles bad?". I don't know, are we talking horse carriages? Cars? Planes? Electric scooters? Skateboards?

Going back to your question, AI in general is not bad, though LLMs have become too popular too quickly and have thus ended up being misunderstood and misused. So you can indeed say that LLMs are bad, at least when not used for their intended purposes.
