froop

joined 11 months ago
[–] [email protected] 0 points 11 months ago

The interaction is between nodes in the model. Those are the components that individually have no real characteristics but, when combined into a billion-dimension model, produce emergent properties. Correctly writing novel code is an emergent property. Correctly solving an ASCII art maze is an emergent property. There is a point where a text predictor, by being sufficiently accurate, demonstrates emergent understanding.

Your definition of emergent property is outright wrong.

[–] [email protected] -2 points 11 months ago (2 children)

Your description is how pre-LLM chatbots worked. They were really bad, obviously. It's oversimplified to the point of dishonesty for LLMs, though.

Emergent properties don't require feedback. They just need components of a system to interact in ways that produce properties the individual components don't have. An LLM is billions of components interacting in unexpected ways. Emergent properties are literally the only reason LLMs work at all. So I don't think it's absurd to suggest that the system might have other emergent properties that could be interpreted as actual understanding.

[–] [email protected] 1 points 11 months ago

There is no angular change between the axle shaft and the engine.

[–] [email protected] 5 points 11 months ago (2 children)

Looks like it eliminates the engine-side CV joint but not the wheel-side one.