The problem is that notably "powerful" AIs need pretty significant hardware to run well.
As an example, I think the Snapdragon NPUs can barely handle 7B models.
This is because all LLMs function primarily based on the token context you feed them.
The best way to use any LLM is to completely fill up its context window with relevant material, then ask your question.
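A minimal sketch of that "fill the context, then ask" approach. Everything here is illustrative: the function name, the word-count token approximation, and the budget number are my assumptions, not any particular model's API.

```python
# Hypothetical sketch of context stuffing: pack as many relevant documents
# as fit into the context window, then append the question at the end.
# Token counting is approximated with a crude word count (an assumption;
# real tokenizers differ).

def build_prompt(docs, question, context_limit=4096):
    """Fill the prompt with documents until the budget is spent, then ask."""
    budget = context_limit - len(question.split())
    parts = []
    for doc in docs:
        cost = len(doc.split())
        if cost > budget:
            break  # stop once the next document would overflow the window
        parts.append(doc)
        budget -= cost
    parts.append(question)
    return "\n\n".join(parts)

prompt = build_prompt(
    ["Doc A: the API returns JSON.", "Doc B: the rate limit is 60/min."],
    "Question: what format does the API return?",
)
```

In practice the hard part is choosing *which* documents are relevant (retrieval), not the string assembly itself.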
Doesn't this just do what convolution does anyway?
What's the point of this?
The project used a workaround that proxied requests without requiring a backing account, but the API update broke it.
The instances that chose (and choose) to go the extra mile by creating and maintaining proxy accounts are the ones still working.
If an instance gets too popular, though, the Twitter goons quickly figure out what the proxy account is and ban it. So it's a constant game of cat and mouse.
This is a good move for international open source projects. With multiple lawsuits ongoing in countries around the globe, the intellectual-property status of AI-generated code isn't settled enough to open yourself up to the liability.
I've done the same internally at our company. You're free to use whatever tool you want, but if the tool you use spits out copyrighted code, and the law eventually decides that model users rather than model trainers are liable for model output, then that's on you, buddy.
Doing god's work here.
Starship is still Elon's brainchild, and it's years behind schedule and threatens the viability of the entire Artemis program. SpaceX's finances are also heavily tied to the success of Starlink, which is shaky at best.
I would not say SpaceX is "on track."
I feel like this is going to be where I disconnect in a major way from our children's generation.
They're likely going to find it completely normal to have an LLM as a friend and I don't think I'll ever be able to bring myself around to that.
The irony here is palpable
I thought it was the atomic age and the information age...
Or was that just Empire Earth...
It doesn't have to be your searches, it could have just been the fact that your phone recognized you were on a road trip and that people in your ad cohort tend to want to buy shoes while on road trips.
I've worked in algorithmic ad space before and I can say that I've never seen evidence of phones listening on conversations but I have seen plenty of evidence from years ago where all your other data is used to form a terrifyingly accurate profile.
We used to do dead reckoning and GPS speed gait profiling, and we would only need about a week's worth of GPS data to know your height, weight, sex, where you live, where you work, where your kids go to school, etc.
We would take that data and cross reference that with data broker info to form a profile, put you in an ad cohort bin, and serve you up as a platform for ad matching services to match to ad campaigns, which get even further targeted.
Millions of dollars are spent hyper-targeting you, but 99 times out of 100 the inaccurate campaign is paying more, so it gets the ad space; the one time the low-paying, hyper-focused campaign gets through, it's always scary how accurate it is.
tl;dr: Ad companies don't need to listen to your conversation to know what you want to buy, ads are usually inaccurate because the inaccurate campaign paid more
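The auction dynamic in that tl;dr can be shown with a toy sketch. This is purely illustrative (my own made-up campaigns and fields, not any real ad exchange): the slot is awarded on bid price alone, so the broadly-targeted big spender usually beats the hyper-targeted low bidder regardless of relevance.

```python
# Toy model of an ad slot auction (hypothetical): relevance describes how
# well a campaign matches the user's profile, but the winner is decided
# purely by bid, so the better-matched campaign usually loses.

campaigns = [
    {"name": "generic-shoes", "bid": 2.50, "relevance": 0.20},
    {"name": "hyper-targeted-trail-runners", "bid": 0.80, "relevance": 0.95},
]

def winning_campaign(campaigns):
    # Price decides the auction; relevance never enters the comparison.
    return max(campaigns, key=lambda c: c["bid"])

winner = winning_campaign(campaigns)
# The generic big spender wins despite being the worse match.
```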
It's usually not the water itself but the energy used to "systemize" water from out-of-system sources.
Pumping, pressurization, filtering, purifying all take additional energy.
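Just to put a rough number on the pumping part: a back-of-the-envelope sketch, using standard physics (E = ρ·V·g·h divided by pump efficiency) with figures I've chosen for illustration, not anything from the comment.

```python
# Back-of-the-envelope energy cost of lifting water (illustrative numbers).
RHO = 1000.0  # kg/m^3, density of water
G = 9.81      # m/s^2, gravitational acceleration

def pumping_energy_kwh(volume_m3, lift_m, pump_efficiency=0.7):
    """Electrical energy in kWh to pump `volume_m3` of water up `lift_m` meters."""
    joules = RHO * volume_m3 * G * lift_m / pump_efficiency
    return joules / 3.6e6  # convert J to kWh

# Lifting 1 m^3 of water 100 m at 70% pump efficiency:
energy = pumping_energy_kwh(1.0, 100.0)
```

And that's before pressurization, filtering, and purification are added on top.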