webghost0101

joined 1 year ago
[–] [email protected] 5 points 7 months ago

Instructions unclear; stocked up on crayons and got raided by the FBI on suspicion of wrongdoing.

[–] [email protected] 2 points 7 months ago

First of all thank you for the detailed reply.

+10 for “randomly” linking Latent Vision. That’s the dude who made IPAdapter for Stable Diffusion, which is hands down revolutionary for my ComfyUI workflows.

I actually fully agree on all the 3D stuff, I remember that GTA video.

My comment was responding to the following idea, ahem:

“putting the whole image through AI. Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.” -bjoern_tantau

But on the topic of modern 3D I expect we can go very far. Generate high quality models of objects, then generate from those a low poly version plus a consistent prompt, to be used by the game engine’s AI during development and live gaming. That includes raytracing-style matrices for detection, not RTX but something similar (which, admittedly, I coded exactly once to demonstrate for an exam and barely understand). What I’m trying to say is that some clever people will figure out how to calculate collisions and interactions using low poly + AI.

I am very impressed by the retrogameXMaster but I think it may also depend on the game.

In these older games the consistency of the gameplay is core to their identity; it comes before the graphics. Hitbox detection is pixel based, which is core gameplay and influences difficulty. Hardware limitations in a way also become part of the gameplay design.
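To make the pixel-based part concrete, here’s a toy Python sketch (all names and sprite data are invented for illustration, not from any real engine) of pixel-perfect collision as those games did it: two sprites only collide where their opaque pixels actually overlap, so the texture literally is the hitbox.

```python
# Toy pixel-perfect collision check, in the spirit of 8/16-bit era games.
# A "sprite" here is just a set of opaque (x, y) pixels plus a screen position.

def opaque_pixels(mask, pos):
    """Translate a sprite's opaque-pixel mask to screen coordinates."""
    ox, oy = pos
    return {(x + ox, y + oy) for (x, y) in mask}

def collides(mask_a, pos_a, mask_b, pos_b):
    """Two sprites collide only if their opaque pixels overlap."""
    return bool(opaque_pixels(mask_a, pos_a) & opaque_pixels(mask_b, pos_b))

# A 2x2 solid square vs. a single pixel.
square = {(0, 0), (1, 0), (0, 1), (1, 1)}
dot = {(0, 0)}

print(collides(square, (0, 0), dot, (1, 1)))  # True: the dot sits inside the square
print(collides(square, (0, 0), dot, (5, 5)))  # False: no overlap
```

Swap in a bigger or differently shaped texture without updating the mask and the rendered outline diverges from the hitbox: that’s exactly the clipping problem.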

You can upscale them and give many of them fancy textures, maybe even layers of textures, modded items and accessibility cheats.

But the premise: “Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.”

An AI can cook something up like that, but it will be a new, distinct Mario game if you change that much of what’s happening on screen.

Anyway, I am tired and probably sound like a lunatic the longer I go on, so again, thanks for the good read.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (4 children)

What I meant is that the final image is dynamic, so players may have a unique configuration, which makes it harder for the AI to understand what’s going on.

Using the final render of each frame would cause a lot of texture bleeding, for example when a red character stands in front of a red background or jumps on top of an animal. You may get wild frames where the body shape drastically changes, or the character is suddenly realistically riding the animal, then petting it the next frame, only to have it die on frame 3, all because every frame is processed as its own work.
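As a toy illustration of why that happens (the `stylize_frame` function below is a made-up stand-in for a diffusion model, not a real API): when every frame is seeded independently, identical inputs produce different outputs, so there is no temporal consistency between frames.

```python
import random

def stylize_frame(frame_pixels, seed):
    """Stand-in for a diffusion model: the output depends heavily on the seed,
    so near-identical inputs can produce very different outputs."""
    rng = random.Random(seed)
    return [p + rng.randint(-50, 50) for p in frame_pixels]

frame = [100, 100, 100]  # three gray pixels, identical across frames

# Each frame processed as its own independent work: a new seed every time.
outputs = [stylize_frame(frame, seed) for seed in (1, 2, 3)]
print(outputs)  # almost certainly three different results for the same input frame

# Reusing the same seed (and settings) restores frame-to-frame consistency.
print(stylize_frame(frame, 1) == stylize_frame(frame, 1))  # True
```

Real video pipelines fight this with tricks like shared noise, optical-flow warping, or temporal attention, but the underlying problem is exactly this per-frame independence.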

Upscaling final renders is indeed possible, but mostly because it doesn’t change the general shapes all that much. Small artifacts are also very common here, but they’re often not noticeable by the human eye and don’t affect a modern game.

In older games, especially Mario, where hitboxes are pixel dependent, you’d either have a very confusing game with tons of clipping because the game doesn’t consider the new textures, or the game abides by the new textures, which affects the gameplay.

Source: I have studied game development and have recreated Mario-era games as part of assignments; currently I am self-studying the technical specifics of how machine learning and generative algorithms operate.

[–] [email protected] 2 points 7 months ago

Who says the output is an average?

I agree that narrow models and LoRAs trained on a specific style can never be as good as the original, but I also think that is the lamest, most uncreative way to generate.

It’s much more fun to use general-purpose models and to crack the settings to generate exactly what you want, the way you want.

[–] [email protected] 2 points 7 months ago (6 children)

There is no single “whole” image when talking about a video game. It’s a combination of dynamic layers carefully interacting with each other.

You can take any individual texture and make it look different/more realistic, and it may work with some interactions but might end up breaking the game, especially if hitboxes depend on the texture.

We may see AI remakes of video games at some point, but it will require the AI to reprogram the game from scratch.

Now when we talk about movies and other linear media, I expect to see this technology relatively soon.

[–] [email protected] 1 points 7 months ago

I dunno, seems like the goal is to get you to buy a subscription and hold your data hostage in their cloud.

And somehow, for enough gullible customers, it’s actually working.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

Well that’s very interesting for me personally to think about. Thanks for bringing this up.

I always really enjoyed programming, but I hated being a developer.

I’ve always loved making art, but I objectively suck at painting and am not great at drawing, while I am pretty good with computers. I’ve long realized I can use that to scratch my creative itch instead of traditional tools. I have dabbled in 3D modeling, scripting, creating custom theming and general indie game development, but my real long-time dream is opening a workshop where I reconfigure old hardware into cool-looking contraptions operating silly programs that serve no practical use besides inspiring joy.

When I worked as a developer I was assigned a task and told to program x or y within z limits and standards. I had no creative freedom and really hated that job for that reason.

I guess when it comes to how I work with AI, it’s fair to compare it to being a programmer much more than a conventional painter. It definitely taps into my technical insight on a similar level, but it does much more than scripting to scratch my very real itch to create things.

On principle I’ve always been very open-minded about what art can be. A literal toilet can be art, so I also consider that the thoughts of a philosopher are art. Writing is art, cooking can be art, video games are art.

It’s absolutely OK to make distinctions yourself, if art is anything at all it is subjective, but I hope you can see that, following my logic, I don’t see why my creative projects wouldn’t count towards the definition.

[–] [email protected] 4 points 7 months ago (8 children)

This technology is evolving so fast it won’t be long before a non-invasive wearable will be able to do exactly the same.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (2 children)

That’s an unfair comparison. We’re not talking about “painters” or “illustrators” but using the very general term “artist”.

I literally started by saying I agree that just asking something premade like Bing to generate x with y isn’t making art.

But there can be deep creative processes involved in getting an AI to generate just right, and the actual professionals I know who use AI will more often than not use Photoshop edits as part of their process. The AI is a tool.

If you are intentionally using a creative process to create an imagined output, then you are by dictionary definition an artist.

Stable Diffusion is also much more a technology than a product; anyone with a decent GPU can train their own models, and many people have. Using someone else’s models is no different than using someone else’s brushes in a painting program, because what counts is what you do with them, which often involves a lot more than just typing in a prompt.

If you want some examples of the creative freedom and complexity one can get, just search for “comfyui workflow”.

In your sports example: if you managed to guide and train a basic robot (so not a toy preconfigured to play sports) step by step into properly playing a sport, you wouldn’t per se fit the dictionary definition of an athlete, but having the knowledge to do this could create a reasonable assumption that you are. Otherwise I would say amateur engineer could also apply, because you probably need to know a lot about how the robot’s joints function. At the very least I would call you an artist, because it would take a lot of creative trial and error to pull off.

[–] [email protected] 4 points 7 months ago

To be fair, this tweet doesn’t say anything about training data, but simply that it can theoretically use present-day data if it looks it up online.

For GPT-4, I think it was initially trained on data up to 2021, but it has gotten updates where data up to December 2023 was used in training. It “knows” this data and does not need to look it up.

Whether they managed to further train the initial GPT-4 model to do so, or added something they trained separately, is probably a trade secret.

[–] [email protected] 15 points 7 months ago (5 children)

There’s a lot of nuance here.

There are many consumer apps based on Stable Diffusion where people just type what they want: “astronaut sitting on a horse”. Most of the work happens under the hood, and there I agree with your sentiment: asking for something isn’t a creative process. The result is usually decent but rarely amazing, and anyone can recreate it with the right prompt and seed.

But things change quickly when you use proper tools like ComfyUI, where you get full control of what the tech can do. Not all models play well with plain descriptions, and prompts start to resemble a lengthy magical spell of keywords that becomes unreadable to a human being. Some keywords perform consistently but are highly counter-intuitive, and they only work with some models and settings.

Then there are all the modifiers that change the weights and interpretation of the prompt or the latent information, customized noise generation, mixing and matching multiple models iterating on the same picture, using a custom or native VAE, CLIP skip 0, 1 or 2…

During the process of changing things the results are usually utter crap, but the more you understand what you’re doing, the closer you get to a workflow that can consistently output good images.

A last step is taking the parameters/seed that generated the best pictures from a batch and editing the prompt/settings further to fix the last details.

The process is a creative one, and the result is impossible to recreate without knowing exactly all the steps involved, so here I would say artistic ownership can apply.
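To sketch that last point (everything here is a toy: `generate` just hashes its inputs and stands in for a real pipeline, and the settings keys are illustrative, not any tool’s actual parameters): the output is a pure function of the full settings plus seed, so reproducing it requires knowing every parameter, and changing even one changes everything.

```python
import hashlib
import json

def generate(settings):
    """Stand-in for a diffusion pipeline: the output is fully determined
    by the complete settings (prompt, seed, sampler, steps, CFG, ...)."""
    blob = json.dumps(settings, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()  # pretend this hex string is the image

settings = {
    "prompt": "astronaut sitting on a horse",
    "seed": 1234,
    "sampler": "euler",
    "steps": 30,
    "cfg": 7.0,
}

image_a = generate(settings)
image_b = generate(settings)                  # same settings -> identical output
image_c = generate({**settings, "cfg": 7.5})  # one tweak -> different output

print(image_a == image_b)  # True
print(image_a == image_c)  # False
```

That asymmetry is the point: anyone holding the full recipe can reproduce the result exactly, while nobody else realistically can.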
