Hamartiogonic

joined 1 year ago
[–] [email protected] 1 points 1 year ago

That’s true for anything that has a query cost. What about those AI applications that don’t have any financial cost to the user? For instance, The Spiffing Brit keeps finding interesting ways to exploit the YouTube Algorithm. I’m sure you can apply that same “hacker mentality” to anything with AI in it.

At the moment, many of those applications are on the web, and that’s exactly where query costs can be a feasible way to limit the number of experiments you can reasonably run in search of an exploit. If experimenting is too expensive, you probably won’t find anything worth exploiting, and that should keep the system relatively safe. However, more and more AI is finding its way into the real world, which means that those exploits are going to have some very spicy rewards.

Just imagine that traffic lights were controlled by an AI, and you found an exploit that got you a green light on demand. Applications like this don’t have any API query costs. You just need to be patient and try all sorts of weird stuff to see how the lights react. Sure, you can’t run a gazillion experiments in an hour, which means you might not find anything worth exploiting on your own. But since there would be millions of people experimenting with the system simultaneously, surely someone would find an exploit.

[–] [email protected] 10 points 1 year ago (2 children)

Based on this data, 48.6% of oil was used on the road. If we assume that about every other car becomes electric, that could cut the total oil demand by about 24%. That’s actually quite significant, but obviously it will happen so gradually that the oil industry should have enough time to adjust.
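As a back-of-the-envelope check (the 48.6% figure is from the data quoted above; the 50% electrification share is this comment’s own assumption):

```python
road_share = 0.486   # share of oil demand used on the road (figure quoted above)
ev_fraction = 0.5    # assumption: roughly every other car goes electric
demand_cut = road_share * ev_fraction
print(f"{demand_cut:.1%}")   # 24.3%
```

This treats road oil use as proportional to the number of cars, which ignores trucks and shipping, so it’s only a rough upper-bound sketch.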

Eventually most cars will be electric, but even that won’t destroy the entire oil industry, because there are still many other uses for oil. It takes a while for various other industries to shift away from burning oil and gas, but when that happens the oil industry will be totally screwed.

[–] [email protected] 10 points 1 year ago (2 children)

This is part of a bigger topic people need to be aware of. As more and more AI is deployed in public spaces and on the internet, people will find creative ways to exploit it.

There will always be ways to make an AI do things its owners don’t want it to. You could think of it like the exploits used in speedrunning, but in this case there’s a lot more variety. Just like you can make an AI generate morally questionable material, you could potentially find a way to exploit the AI of a self-driving car to do whatever you can think of.

[–] [email protected] 2 points 1 year ago (1 children)

That’s a very important distinction. “Hard” wasn’t the clearest word choice. I guess I should have called it something else, such as deceptive or misleading. The idea is that some pictures got a ratio below 50%, which means that people were really bad at categorizing them correctly.

There were surprisingly few pictures that were close to 50%. Maybe it’s difficult to find pictures that make everyone guess randomly. There are always a few people who know what they’re doing because they generate pictures like this on a weekly basis, and their answers will push that ratio higher.

[–] [email protected] 1 points 1 year ago

Yes but the question is why would anyone pay for ads like that? How is that investment going to make any sense?

[–] [email protected] 21 points 1 year ago (3 children)

If you look at the ratios of each picture, you’ll notice that there are roughly two categories: hard and easy pictures. Based on information like this, OP could fine-tune a more comprehensive questionnaire to include some photos that are clearly in between. I think it would be interesting to use this data to figure out what could make a picture easy or hard to identify correctly.

My guess is that a picture is easy if it has fingers or logical structures such as text, railways, buildings etc. while illustrations and drawings could be harder to identify correctly. Also, some natural structures such as coral, leaves and rocks could be difficult to identify correctly. When an AI makes mistakes in those areas, humans won’t notice them very easily.

The number of easy and hard pictures was roughly equal, which brings the mean and median values close to 10/20. If you want to bring that value up or down, just change the number of hard-to-identify pictures.
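To see why an even mix lands near 10/20, here’s a toy expected-score calculation (the per-category accuracies are my own assumed numbers, not the actual quiz data):

```python
n_easy, n_hard = 10, 10   # roughly equal split across 20 pictures
p_easy = 0.85             # assumed chance of identifying an easy picture correctly
p_hard = 0.15             # assumed chance for a hard/deceptive picture
expected_score = n_easy * p_easy + n_hard * p_hard
print(expected_score)     # ~10 out of 20
```

Shifting the split toward hard pictures (say 5 easy, 15 hard) drags the expected score down the same way, which is the tuning knob described above.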

[–] [email protected] 1 points 1 year ago

I haven’t been in contact with him in years, but I’ll try to remember if I bump into him in the future.

[–] [email protected] 1 points 1 year ago

Counting these resources is very tricky. You could just ask mining companies how large their resources are, but that number tends to increase as they drill more. See the JORC Code for more info.

Drilling is expensive, so a company won’t drill any more than it absolutely has to in order to convince investors. This means that outside the measured mineral resources there’s usually a lot that is only indicated or inferred. Since the confidence in those parts tends to be very low, companies can’t really report those numbers as proper mineral resources. However, they can confidently report the measured and proved resources, so those are the numbers you’ll usually see in news articles.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Oh, but there are lots of other mechanisms. Conspiratorial Thinking (CT for short) is a complicated subject, and people who are into CT tend to have a bunch of things in common. For example, many of them suffer from anxiety, uncertainty or loneliness. Some even show signs of sub-clinical narcissism, psychosis and paranoia.

All of that means they tend to find CT very appealing, even though it won’t really alleviate their symptoms or address any root causes. Admittedly, some people find a sense of community in conspiracy circles, which would help with loneliness, and the sense of uncertainty can be eased by simplified (but incorrect) explanations of how the world works. Believers also get the feeling of belonging to an exclusive group, since they are in possession of hidden truths. Nevertheless, CT drives these people deeper into CT and further away from the rest of society, which causes further alienation and anxiety.

[–] [email protected] 4 points 1 year ago

Oh no. Are you saying that even the backup explanation of the conspiracy theorists was BS? Who would have thought.

First, the vaccine was supposed to kill you on the spot; then they shifted to saying it would kill you some time later; and the final version was that it would make everyone sterile.

[–] [email protected] 1 points 1 year ago

If people wanted to speed it up, a runaway greenhouse effect, or a runaway snowball Earth triggered by a nuclear winter, should do it. The first might even destroy all life on Earth, provided the temperature stays above 100 °C for long enough. The latter would not eradicate all the microbes, but it would be very effective against humanity.

[–] [email protected] 2 points 1 year ago

That’s just natural selection doing its thing. I don’t think the anti-vaxxer philosophy will completely disappear, but the number of people believing in it will be cut down by various diseases such as COVID. Those who survive will probably be damaged by said diseases, so who knows how well they’ll be able to articulate their thoughts after that.
