Natanael

joined 1 year ago
[–] [email protected] 1 points 10 months ago (13 children)

You're conflating things. We have no reason to argue those are true with any certainty, but we still can't exclude the possibility. It's the difference between a "justified belief" and a coherent theory. Physics has had a ton of theories postulated without evidence where, decades later, one option was proven true and many others proven false. Under your assumption you shouldn't have formed those theories before they could be tested.

[–] [email protected] 1 points 10 months ago (15 children)

I'm not arguing for any specific purpose behind controlling a simulation in these ways, just that the arguments saying it wouldn't happen are too weak. A multipurpose simulation (imagine one shared by many different teams of simulation researchers) could plausibly be used like this, where they mess with just about anything and then reset. That doesn't mean it's likely, just that it's unreasonable to exclude the possibility.

[–] [email protected] 1 points 10 months ago

If you don't know what they're testing, that could certainly seem excessive. But a failure of imagination doesn't prove it's impossible, although you can argue it's unlikely.

[–] [email protected] 2 points 10 months ago (17 children)

I'm not saying it happens; I'm just saying some of the arguments here aren't logically justified.

[–] [email protected] 0 points 10 months ago (2 children)

Simulations of boats in water don't care about what's happening to the water much of the time, yet it still needs to be there. You seem to be way too confident in your conclusions.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (4 children)

You don't rerun everything from scratch. Weather simulations especially can be checkpointed at points where you have high certainty, and then you keep running forks past that point with different parameters. This is extremely common when, for example, trying to predict wind patterns during forest fires: you simulate multiple branches of possible developments in wind direction, humidity, temperature, etc. And if the parameters you test don't cover every plausible scenario, you might sometimes engineer one into the simulation just to see the worst case.

And in medicine, especially computational biochemistry, you modify damn near everything.
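The checkpoint-and-fork pattern described above can be sketched in a few lines. This is a minimal toy, not any real weather code: the "state" and the `wind_bias` parameter are made-up stand-ins for a numerical model's state and tunable inputs.

```python
import copy
import random

def step(state, wind_bias):
    # One toy simulation step: drift the wind by a bias plus some noise.
    state["t"] += 1
    state["wind"] += wind_bias + random.uniform(-0.1, 0.1)
    return state

def run(state, steps, wind_bias):
    for _ in range(steps):
        step(state, wind_bias)
    return state

random.seed(0)

# Run up to a point of high certainty, then checkpoint that state.
base = run({"t": 0, "wind": 5.0}, steps=50, wind_bias=0.0)
checkpoint = copy.deepcopy(base)

# Fork from the checkpoint with different parameters instead of
# rerunning everything from scratch.
forks = {
    bias: run(copy.deepcopy(checkpoint), steps=50, wind_bias=bias)
    for bias in (-0.5, 0.0, 0.5)
}
for bias in sorted(forks):
    print(bias, round(forks[bias]["wind"], 2))
```

Each fork shares the first 50 steps' worth of work and only the branch after the checkpoint is recomputed, which is the whole point of the technique.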

[–] [email protected] 1 points 10 months ago (19 children)

To the simulated object there's no difference between a fork of a simulation with different parameters vs directly changing parameters in a running simulation.

> For one, we’d notice things changing without cause.

Maybe those reactions are part of the test? Or they don't affect it. Or they abandon instances where the change was noticed and the test derailed.

[–] [email protected] 1 points 10 months ago

But it doesn't necessarily show whether they have common sense. If you have many low-complexity problems then maybe, but it can't predict the best performers.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (6 children)

Checkpointing interesting points in simulations and rerunning them with modified parameters happens literally all the time.

Especially in weather / climate / geology and in medicine.

[–] [email protected] 1 points 10 months ago (1 children)

In this instance it doesn't. But in this universe, almost every industry that uses simulations runs many different ones with different parameters. It doesn't make sense to assume a simulation theory with only a single simulation and no interventions, because that assumes the simulator already knew the simulation would produce what they wanted, and that's not a guarantee (for information theory reasons alone!).

[–] [email protected] 2 points 10 months ago (21 children)

Why does testing numerous different circumstances and consequences violate the idea of a simulation? A sufficiently capable simulation engine could literally be used for social experiments.

[–] [email protected] 0 points 11 months ago

Until Amazon Sidewalk makes every smart TV connected against your will.
