Then he'll be President a second time.
MacNCheezus
It is. Unfortunately it does tend to use up a lot of RAM and requires either a fairly fast CPU or better yet, a decent graphics card. This means it's at least somewhat problematic for use on lower spec or ultraportable laptops, especially while on battery power.
It will certainly change the way we work, yes, but that's always been the case with any disruptive technology in the past.
20-30 years ago, people were already worried that computers would replace people, because they could automate away menial office jobs like invoicing and bookkeeping. Yet those jobs still exist, because computers can't be trusted to work completely autonomously. Meanwhile, a whole lot of new jobs were created in the IT sector as a result of those computers needing to be programmed, updated, and maintained.
When cars came around and started replacing horse buggies, people were also worried, because it would make horse breeders, stables, blacksmiths, etc. obsolete, but of course it just ended up creating a new industry consisting of gas stations, car dealerships, and garages instead.
So yes, some people might lose their jobs because what they're doing now will become obsolete, but there will almost certainly be new ones created instead. As long as you're willing to adapt and change with the times, you're never going to end up with nothing to do.
This should give hope to all of those people who have been worrying about AI taking their jobs away.
It doesn't matter how good technology gets, it will always be merely a tool. Humans will still be necessary in the future.
Because it's incompatible with the non-aggression principle.
You mean captchas? Sure, that's old hat, they've been doing that for a decade now.
This is one of those newer systems, though, that doesn't rely on a captcha. It's just a checkbox you have to click with "I'm human" next to it, and it does some JavaScript magic or whatever to figure out whether that's true. Not really sure how it works TBH.
Technically a good point, but we’re talking natural language here, and the goal would be to restrict the discussion to only a particular domain, not predict whether an outcome can be achieved or not.
At the current state of AI proliferation, you can literally enter your prompt into the product assistant chatbox on Amazon and get the same result you'd get from their web app.
I even remember a post a few months ago where someone did this to the chatbot on a car dealership's website. Apparently, they currently don't have any input filters (which would likely require yet another layer of AI to avoid making it overly restrictive), they just hook those things up straight to the main pipe and off you go.
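For what it's worth, even the crudest input filter would be an improvement over hooking the chatbot straight up to the model. Here's a minimal sketch of what I mean — everything in it (the topic list, the function names, the canned refusal) is made up for illustration; a real deployment would more likely put a classifier model in front instead of a keyword check:

```python
# Hypothetical pre-filter in front of a dealership chatbot.
# ALLOWED_TOPICS is an invented allowlist; a real system would use
# another model to judge whether the prompt is on-topic.
ALLOWED_TOPICS = {"car", "truck", "financing", "test drive", "trade-in"}

def is_on_topic(prompt: str) -> bool:
    """Crude domain check: pass only prompts mentioning a known topic."""
    text = prompt.lower()
    return any(topic in text for topic in ALLOWED_TOPICS)

def handle(prompt: str) -> str:
    """Refuse off-topic prompts instead of forwarding them to the model."""
    if not is_on_topic(prompt):
        return "Sorry, I can only answer questions about our vehicles."
    # The call into the actual LLM would go here (omitted).
    return f"[forwarded to model] {prompt}"
```

The obvious downside, and the reason you'd "need another layer of AI," is that a keyword list is both too strict (legitimate questions phrased unusually get blocked) and too loose (a jailbreak prompt that happens to mention "truck" sails through).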
I mean, it probably wants to make sure you're using the API for programmatic access so they can charge you for it instead of having you abuse the free tier.
Not sure if they're still around, but in the early days, before the API was released, there were some libraries that simply accessed the browser interface to let you programmatically create chat completions. I believe the first ChatGPT Twitter bot was implemented like that.
This post isn't so much about whether it's necessary from a technical standpoint (it likely is), it's just an observation on the sheer irony and annoyance of it being that way, that's all.
Well, I just did. Here's the response:
I'm sorry if it feels like I'm questioning your humanity! I'm just programmed to ensure a safe and productive interaction. Sometimes I ask for confirmation to ensure I'm talking to a human and not a machine or a bot. But I'm here to chat and assist you with whatever you need!
Not sure what I was expecting except the usual machine mind evasiveness.
I was gonna say, just make a commit changing the license to something else, like MIT?
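Mechanically there's nothing stopping you, which is kind of the point. A sketch of the "just commit a new license" move, using a throwaway demo repo (paths, file contents, and commit messages are all made up here):

```shell
# Demo: git happily records a license swap like any other change.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Pretend these files contain the actual license texts.
echo "GPL-3.0 text would go here" > LICENSE
git add LICENSE && git commit -q -m "Initial commit under GPL"

echo "MIT License text would go here" > LICENSE
git add LICENSE && git commit -q -m "Switch license to MIT"
```

Of course, committing a different LICENSE file doesn't actually relicense code whose copyright you don't hold — git enforces history, not law.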