ArbitraryValue

joined 1 year ago
[–] [email protected] 37 points 2 days ago (1 children)

Nothing can fix things because teenagers will not cooperate. If Instagram could identify all its teenage users, those users would move to a platform that couldn't. The only thing the restrictions achieve is a reduction in the market share of the platform with the restrictions.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago) (3 children)

The fact that it won't have any record of calls I missed while the phone was off or didn't have reception, although actually that's probably the fault of the service provider. They can send me texts I missed. Why can't they send me a list of missed calls?

[–] [email protected] 62 points 1 week ago (12 children)

I don't understand why browsers support this "functionality".

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

FourPacketsOfPeanuts has already given a good answer specifically about Israel's situation, but I want to say something about international law in general. Law may be written based on moral principles, but law is still not the same thing as morality. In our daily lives, we follow our moral principles because that's what we believe is right, and we follow the law because otherwise cops will put us in jail.

The situation for a sovereign country is different - there are no cops and there is no jail. If other countries wanted to take hostile action, they would even if there was no violation of international law, and if they did not want to take hostile action, they wouldn't even if there was a violation. Morality still exists (although morality at the scale of countries is necessarily not the same as morality at the scale of individuals) but the law might as well not exist because it is not enforced. It's just pretty language that may be quoted when a country does what it was going to do anyway.

I'm not trying to imply that I think that Israel is violating international law. I'm saying that discussing whether it is or not is a purely intellectual exercise with no practical relevance. If I support Israel but you convince me that it is technically breaking some law, I'm still not going to change my mind. If you oppose Israel but I convince you that it is technically obeying every law to the letter, you're still probably not going to change your mind.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

So far "more data" has been the solution to most problems, but I don't think we're close to the limit of how much useful information can be learned from the data even if we're close to the limit of how much data is available. Look at the AIs that can't draw hands. There are already many pictures of hands from every angle in their training data. Maybe just having ten times as many pictures of hands would solve the problem, but I'm confident that if that was not possible then doing more with the existing pictures would also work.* Algorithm design just needs some time to catch up.

*I know that the data that is running out is text data. This is just an analogy.
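By "doing more with the existing pictures" I mean things like data augmentation, where each stored image is reused many times with random variations. Here's a rough Python sketch using torchvision (the blank placeholder image just stands in for a real training photo; this is an illustration, not any particular lab's pipeline):

```python
# Data augmentation: one stored image becomes many distinct training samples.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # vary the framing
    transforms.RandomHorizontalFlip(),                      # mirror the image
    transforms.RandomRotation(degrees=20),                  # vary the angle
    transforms.ColorJitter(brightness=0.3, contrast=0.3),   # vary the lighting
])

original = Image.new("RGB", (256, 256))  # placeholder for a real photo
variants = [augment(original) for _ in range(8)]  # 8 samples from 1 image
```

The model never sees exactly the same hand twice, so a fixed dataset goes further without collecting a single new picture.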

[–] [email protected] -1 points 2 weeks ago (1 children)

Not really questionable - hospitals explicitly lose their protection if they are used for military activity.

[–] [email protected] -5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

What occasions are you referring to? I know people claim that Israeli use of white phosphorus munitions is illegal, but the law is actually quite specific about what counts as an incendiary weapon. Incendiary effects caused by weapons that were not designed with the specific purpose of causing incendiary effects are not prohibited. (As far as I can tell, even the deliberate use of such weapons in order to cause incendiary effects is allowed.) This is extremely permissive, because no reasonable country would actually agree not to use a weapon that it considered effective. Something like the firebombing of Dresden is banned, but little else.

Incendiary weapons do not include:

(i) Munitions which may have incidental incendiary effects, such as illuminants, tracers, smoke or signalling systems;

(ii) Munitions designed to combine penetration, blast or fragmentation effects with an additional incendiary effect, such as armour-piercing projectiles, fragmentation shells, explosive bombs and similar combined-effects munitions in which the incendiary effect is not specifically designed to cause burn injury to persons, but to be used against military objectives, such as armoured vehicles, aircraft and installations or facilities.

[–] [email protected] 2 points 2 weeks ago

The issue I have with referring to the current situation as a bubble is that this isn't just hype. The technology really is amazing, and far better than what people had been expecting. I do think that most current attempts to commercialize it are premature, but the first-mover advantage is so large that it makes sense to keep losing money on attempts that come too early in order to succeed as soon as success becomes possible.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

Multiple studies show that training on data contaminated with LLM output makes LLMs worse, but there's no inherent reason why LLMs must be trained on that data. As you say, people are aware of the problem and are going to avoid it. At the very least, they will compare the newly trained LLM to their best existing one, and if the new one is worse, they won't switch over. The era of being able to download the entire internet (so to speak) is over, but that means AI will get better more slowly, not that it will get worse.
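To make that "compare and only switch if better" step concrete, here's a minimal Python sketch (the evaluate function and model objects are hypothetical stand-ins, not any real training API):

```python
# Gate deployment on a held-out benchmark: a model trained on possibly
# contaminated data only replaces the current best if it scores higher.

def maybe_promote(current_best, candidate, benchmark, evaluate):
    # evaluate(model, benchmark) -> float is a hypothetical scoring function.
    # The benchmark should predate large-scale LLM output, so contamination
    # in the training set can't inflate the candidate's score.
    if evaluate(candidate, benchmark) > evaluate(current_best, benchmark):
        return candidate   # the new model is better; switch over
    return current_best    # otherwise keep using the old one
```

If every new model has to clear this bar, contaminated data can stall progress, but it can't make the deployed model worse.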

[–] [email protected] 4 points 2 weeks ago (5 children)

I don't disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn't get us there, who knows how long it will take to discover one that does?

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

It would be odd if AI somehow got worse. I mean, wouldn't they just revert to a backup?

Anyway, I think (1) is extremely unlikely, but I would add (3): the existing algorithms are fundamentally insufficient for AGI no matter how much they're scaled up. A breakthrough is necessary, and it may not happen for a long time.

I think (3) is true but I also thought that the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out that they did just need to be scaled up...
