FaceDeer

joined 1 year ago
[–] [email protected] 10 points 8 months ago

This article is from June 12, 2023. That's practically stone-age given how fast AI technology has been progressing.

The paper it's based on used a very simplistic approach, training AIs purely on the outputs of their previous "generation." Turns out that's not a realistic real-world scenario, though. In reality AIs can be trained on a mixture of human-generated and AI-generated content, and the result can actually turn out better than training on human-generated content alone. AI-generated content can be curated and custom-made to be better suited to training, and the human-generated stuff adds back in the edge cases that might disappear over repeated training generations.

[–] [email protected] 55 points 8 months ago (11 children)

I'd be very interested in those results too, though I'd want everyone to bear in mind the possibility that the brain could have many different "masculine" and "feminine" attributes that could be present in all sorts of mixtures when you range afield from whatever statistical clusterings there might be. I wouldn't want to see a situation where a transgender person is denied care because an AI "read" them as cisgender.

In another comment in this thread I mentioned how men and women have different average heights; that would be a good analogy. There are short men and tall women, so you shouldn't rely on height alone.

[–] [email protected] 14 points 8 months ago (1 children)

People's heights change over time too. Men and women can nevertheless have different average heights.

[–] [email protected] 5 points 8 months ago

Article mentioned 400-word chunks, so much less than paper-sized.

[–] [email protected] 33 points 8 months ago (2 children)

Not to mention that a response "containing" plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don't even know how it works.

I went and looked at how low theater and such were and it's dramatic:

The lowest similarity scores appeared in theater (0.9%), humanities (2.8%) and English language (5.4%).

[–] [email protected] 4 points 8 months ago

Tinned tuna is also nice to add for some extra flavor and variety.

[–] [email protected] 6 points 8 months ago

I've got a bunch of frozen mashed potatoes pre-divided into meal-sized tupperware. Microwave one of those and it's quite hearty.

I've also got a rice cooker and it's super easy to make something both substantial and tasty with one of those, dump in the rice and water and then add a tin of condensed soup as well. Push the button and come back later to dump it onto a plate. I've found most kinds of condensed soup work well, though avoid anything with "cream of" in the title as those can end up unpleasantly goopy.

[–] [email protected] 3 points 8 months ago (1 children)

Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched and bits inserted into the context of the LLM when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine sorts of computer activities that we've already had a handle on for decades.

IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
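The retrieval step described above can be sketched in a few lines. This is a toy illustration, not any particular library's API: real RAG systems rank documents with vector embeddings, but simple keyword overlap keeps the example self-contained, and the final LLM call is left out entirely since only the "searched memory gets inserted into the context" part is being shown.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Insert the retrieved 'memory' into the context ahead of the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The rice cooker was invented in Japan.",
    "LLMs are bad at arithmetic.",
    "Retrieval adds external memory to a language model.",
]
prompt = build_prompt("What does retrieval add to a language model?", docs)
# The prompt now leads with the most relevant document, and the LLM
# answers from that context instead of relying purely on its weights.
```

The same pattern generalizes to the API-calling helpers mentioned above: the LLM stays the central "core" while ordinary code handles search and insertion.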

[–] [email protected] 13 points 8 months ago (2 children)

It was the British spelling.

[–] [email protected] 2 points 8 months ago (2 children)

Call it whatever makes you feel happy; it's allowing me to accomplish things much more quickly and easily than working without it does.

[–] [email protected] 1 point 8 months ago

There was an interesting paper published just recently titled Generative Models: What do they know? Do they know things? Let's find out! (a lot of fun names and titles in the AI field these days :) ) that does a lot of work in actually analyzing what an AI image generator "knows" about what it's depicting. These models seem to have an awareness of three-dimensional space, of light and shadow and reflectivity; lots of things you wouldn't necessarily expect from something trained just on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this: it showed that when you ask a generative AI to create a picture of a physical object, the first thing the AI does is come up with the three-dimensional shape of the scene before it starts figuring out what it looks like. Quite interesting stuff.
