Nope, wifi adds value. They are asking what the added value is
*Simulate, not stimulate lol
Thanks, I'll check those out. The entire point of your comment was that LLMs are a dead end. The branching, as you call it, is just more parameters, which in lower-token models approach a collapse; which is why more tokens and larger context do improve accuracy, and why it does make sense to increase them. LLMs have also proven, in some cases, to have what you call reason and what many call reason, though that is not a good word for it.

Larger models provide a way to simulate the world, which in turn gives us access to the sensing mechanism of our brain: simulate, then attend to discrepancies between the simulation and the actual. This in turn gives access to action, which unfortunately is not very well understood. Simulation, or prediction, is what our brains constantly do to be able to react and adapt to the world without massive timing failures and massive energy cost. Consider driving, where you focus on unusual sensing and let action be an extension of purpose by just allowing constant prediction to happen: your muscles have already prepared to commit even precise movements, thanks to enough practice with your "model" of how wheel and foot apply to the vehicle.
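To make that concrete, here's a toy sketch of the predict-then-attend-to-error loop I mean. Everything in it (the forward model, the update rule, the numbers) is made up for illustration; it's not a claim about how brains or LLMs are actually implemented.

```python
# Toy sketch of the predict -> compare -> attend-to-error loop described above.
# The model, update rule and data are illustrative only.

def predict(state: float) -> float:
    """Cheap forward model: guess the next observation from the current state."""
    return state  # naive "things stay the same" prediction

def run_loop(observations, learning_rate=0.5):
    state = observations[0]
    for actual in observations[1:]:
        expected = predict(state)
        error = actual - expected          # discrepancy between simulation and reality
        if abs(error) > 0.1:               # only "attend" when the surprise is large
            print(f"surprise: expected {expected:.2f}, got {actual:.2f}")
        state += learning_rate * error     # adapt the internal model toward reality
    return state

# Only the jump to 2.0 produces surprises big enough to grab attention.
run_loop([1.0, 1.0, 1.05, 2.0, 2.0])
```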
Yes, it was. As with most scientific discoveries, several corporations then started building proprietary products on top of it. You are wrong that it was built with that purpose.
I really don't think there are more examples of optimistic predictions than there are pessimistic ones.
The discoveries made in recent years definitely point to an emergent, incredibly useful set of tools, and it would be remiss to pretend they won't eventually replace junior developers in different disciplines. It's just that without juniors there will never be any seniors, and someone needs to babysit those juniors. So what we get is not something that can replace an entire workforce for a long, long while, even if top brass would love that.
Absolutely, it's one of the first curious things you discover when using them, such as Stable Diffusion's "masterpiece" tag or the famous system prompt leaks from proprietary LLMs.
It makes sense given how it works, but in proprietary products it is mostly handled for you.
Finding the right words and the right amount of them is a hilarious exercise that provides pretty good insight into the attention mechanics.
Consider the "let's work step by step" prompt.
This proved to be a revolutionary way to steer the models, as they then structure the output better. There has since been more research into why this is so amazingly effective at getting the model to proof-check itself.
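Something like this minimal sketch, assuming an OpenAI-style chat API (the SDK, model id and exact prompt wording are just placeholders, not recommendations):

```python
# Minimal sketch of "let's work step by step" prompting against an
# OpenAI-style chat API. Model id and wording are placeholders.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 14:10 and arrives at 16:45. How long is the trip?"

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

direct = ask(question)
# Appending the step-by-step cue nudges the model to lay out intermediate
# reasoning before committing to an answer.
stepwise = ask(question + "\n\nLet's work through this step by step.")
print(direct)
print(stepwise)
```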
Predictions are obviously closely related to the action part of our brains as well, so it makes sense that this would help when you think about it.
How does this amazing prediction-engine discovery, which basically works like our brain, not fit into a larger solution?
The way emergent world simulation can be found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.
Never mind that tools like MemGPT don't satisfy long-term memory and context windows don't satisfy attention functions properly; I need a much harder sell on LLM technology not proving an important piece of AGI.
It was more like a scientific discovery
It's not easy for you, me
For anyone.
It's easy for the anime engineers in your head
No, it's not just arts-and-crafts foil put in a box and now you have chaff.
Again, it's just that you romanticised the idea and don't understand how complicated such a system would be; it's beyond our capabilities to make.
Military hardware is not made to be cool, it's made to be cost-effective and reliable.
That was a good guess but unfortunately it is just difficult even in the scenario you proposed
Huh?
I'm so confused by "it's already here", "what even defines AGI", "what do we mean", etc.
Like, is it really that hard to understand the difference between a function you call that returns something
and an autonomous entity?
Is that really so hard to distinguish?
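A toy sketch of the distinction I mean; the names and structure are purely illustrative, not any real framework:

```python
# Contrast: a function you call that returns something, versus an autonomous
# entity that runs its own loop. All names here are illustrative.

def summarize(text: str) -> str:
    """A function: you call it, it returns a value, and then it is done."""
    return text[:40] + "..."

class World:
    """Stand-in environment the agent observes and acts on."""
    def observe(self) -> str:
        return "some sensor reading"
    def apply(self, action: str) -> None:
        print(action)

class Agent:
    """An autonomous entity: it holds state and keeps deciding what to do next
    instead of being called once per answer."""
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []

    def run(self, world: World, max_steps: int = 3) -> None:
        for _ in range(max_steps):
            self.memory.append(world.observe())
            world.apply(f"acting toward '{self.goal}' with {len(self.memory)} observations")

print(summarize("This returns exactly once and then stops, which is the whole point."))
Agent(goal="keep the logs tidy").run(World())
```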