DrJenkem

joined 1 year ago
[–] [email protected] 3 points 7 months ago

Yeah, obviously the issue can be discovered. My point is that it's not going to be immediately discovered by the cashier or a customer. It'll probably not get discovered until the accountant comes by and notices the discrepancy.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (2 children)

No. The bill given to the customer would still show the correct amount.

And if anyone looked at previous bills from the backend, they would see normally priced chicken tenders. The total for the bill would still be wrong, though.

[–] [email protected] 6 points 7 months ago

Bad developers are common though. And good documentation won't stop a bad developer from doing a bad thing.

I agree that SQLi isn't as common as it once was, but it still very much exists.
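For anyone unfamiliar, here's a minimal sketch of the difference between a vulnerable query and a parameterized one (the table and item names are made up for illustration):

```python
import sqlite3

# Hypothetical in-memory menu database for demonstration purposes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL)")
conn.execute("INSERT INTO items VALUES ('chicken tenders', 8.99)")
conn.execute("INSERT INTO items VALUES ('fries', 2.99)")

def lookup_vulnerable(name):
    # Bad: user input is spliced directly into the SQL string,
    # so the input can rewrite the query itself.
    query = f"SELECT price FROM items WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Good: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT price FROM items WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # every row leaks: the injection succeeded
print(lookup_safe(payload))        # []: no item literally has that name
```

Good documentation can tell a developer to use the second form, but nothing stops a bad developer from writing the first.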

[–] [email protected] 4 points 7 months ago

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sql+injection

And without giving away specifics, I've personally found SQLi vulns in the wild within the last 5ish years.

[–] [email protected] 7 points 7 months ago (4 children)

No, it would change the value of all past bills; future bills would still be correct.

[–] [email protected] 10 points 9 months ago

Crows are based.

[–] [email protected] 12 points 9 months ago (5 children)

People who aren't programmers, haven't studied computer science, and don't understand LLMs are much more impressed by LLMs.

[–] [email protected] 7 points 9 months ago (5 children)

Depends on what you mean by general intelligence. I've seen a lot of people confuse Artificial General Intelligence with AI more broadly. Even something as simple as the k-nearest neighbor algorithm is artificial intelligence, since AI is a much broader field than AGI.
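To make the point concrete, here's a toy sketch of k-nearest neighbor in plain Python (the points and labels are invented for illustration). It "classifies" by majority vote among the k closest labeled points, and it counts as AI despite having nothing resembling general intelligence:

```python
from collections import Counter
import math

def knn_classify(points, labels, query, k=3):
    # Sort labeled points by Euclidean distance to the query,
    # then take a majority vote among the k nearest.
    dists = sorted((math.dist(p, query), lbl) for p, lbl in zip(points, labels))
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_classify(points, labels, (0.5, 0.5)))  # a
print(knn_classify(points, labels, (5.5, 5.5)))  # b
```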

Well, I mean the ability to solve problems we don't already have the solution to. Can it cure cancer? Can it solve the P vs. NP problem?

And by the way, Wikipedia tags that second definition as dubious, as it's the definition put forth by OpenAI, which, again, has a financial incentive to make us believe LLMs will lead to AGI.

Not only has it not been proven whether LLMs will lead to AGI, it hasn't even been proven that AGIs are possible.

If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning.

No, it can't. If the task requires the LLM to solve a problem that hasn't been solved before, it will fail.

I can't pass the bar exam like GPT-4 did

Exams often are bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

Ask an LLM to solve a problem without a known solution and it will fail.

We can interact with physical objects in ways that GPT-4 can't, but it is catching up. Plus Stephen Hawking couldn't move the same way that most people can either and we certainly wouldn't say that he didn't have general intelligence.

The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.

[–] [email protected] 9 points 9 months ago (5 children)

I mean if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot - it doesn't matter, given they produce the same output.

You're making a huge assumption; that an advanced parrot produces the same output as something with general intelligence. And I reject that assumption. Something with general intelligence can produce something novel. An advanced parrot can only repeat things it's already heard.

[–] [email protected] 172 points 9 months ago (66 children)

They're kind of right. LLMs are not general intelligence and there's not much evidence to suggest that LLMs will lead to general intelligence. A lot of the hype around AI is manufactured by VCs and companies that stand to make a lot of money off of the AI branding/hype.

[–] [email protected] 28 points 9 months ago (5 children)

What if I want no taxes for the lower class, lower taxes for the middle class and small business, but much higher taxes on the upper class and large corporations

Left.

a very strong military

Typically right, but there are plenty of examples of Marxist-Leninist states with strong militaries, such as the USSR or China. And on the less authoritarian side you have the YPG in Rojava, which was very effective at fighting the Islamic State.

but stronger corporate regulation with more teeth

This one's a little confusing, would probably need more clarification.

to fund public works and social services with the taxes we bring in

Left.

a free and equal society with no hierarchical systems or bigotry, freedom of speech and strong privacy laws with certain restrictions on speech (calls to violence, etc...), very strong unions, a near complete elimination of wall street, and a fair justice system that doesn't target minorities as prey?

Left-libertarian/anarchist.

Also, guns are fine for self defense in my opinion.

At least in America, the gun issue is typically viewed as a left vs. right issue, but there are plenty of folks on the far left who are in favor of guns (the Socialist Rifle Association, Redneck Revolt, the John Brown Gun Club, etc.).

Karl Marx even has an often-cited quote on guns:

Under no pretext should arms and ammunition be surrendered; any attempt to disarm the workers must be frustrated, by force if necessary.

Which side do I fall on?

Pretty much left. You're certainly left of the American Democrats. Pretty much the only thing stopping you from being a full-on leftist is that you don't seem to be opposed to capitalism itself. Therefore, I'd say most of your positions sound like they fall under social democracy.

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago)

It's been a while since I've worked with AOSP, but I had always understood it to be some weird shit with Google's internal processes. The "do not merge" commits are all over the AOSP, or at least they used to be.
