Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data, and it does inference. These are all common aspects of modern "AI" techniques.
I thought "Wavefunction Collapse" was just misnamed Monte Carlo. Where does it use training data?
WFC is a full method of map generation. Monte Carlo is not afaik.
Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn't call it AI though, because it doesn't train or self-improve like ML does.
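The kernel-scanning step described above can be sketched in a few lines. This is an illustrative toy, not the original implementation: slide an n x n window over a sample grid and count how often each pattern occurs, producing the weights that a WFC-style generator would later sample from. All names here (`pattern_counts`, the sample grid) are made up for the example.

```python
# Hypothetical sketch of the WFC "training" step: count every n x n
# pattern in a sample grid. The counts act as the probability model
# consulted when collapsing a cell. Illustrative only.
from collections import Counter

def pattern_counts(grid, n=2):
    """Count every n x n pattern in the sample grid (the 'training data')."""
    h, w = len(grid), len(grid[0])
    counts = Counter()
    for y in range(h - n + 1):
        for x in range(w - n + 1):
            pattern = tuple(tuple(grid[y + dy][x + dx] for dx in range(n))
                            for dy in range(n))
            counts[pattern] += 1
    return counts

sample = [
    "AAB",
    "ABB",
    "BBB",
]
counts = pattern_counts(sample, n=2)
# 4 windows fit in a 3x3 grid; ('A','B')/('B','B') occurs twice
```

In this framing, `n` plays the role of a hyperparameter and the extracted counts play the role of fitted model parameters, which is the parallel to ML being drawn above.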
I think the training (or fitting) process is comparable to how a support vector machine is trained. It's not iterative like SGD in deep learning; it's closer to the traditional machine learning techniques.
But I agree that this is a pretty academic discussion, it doesn't matter much in practice.
MC is a statistical method, it doesn't have anything to do with map generation. If you apply it to map generation, you get a "full method of map generation", and as far as I know that is what WFC is.
Could you share the paper? Everything I read about WFC is "you have tiles that are stitched together according to rules with a bit of randomness", which is literally MC.
Ok, so you are just talking about MC the statistical method. That doesn't really make sense to me. Every random method will need to "roll the dice" and choose a random outcome like a MC simulation. The statement "this method of map generation is the same as Monte Carlo" (or anything similar, I know you didn't say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).
The original GitHub repo, with examples of how the rules are generated from a "training set": https://github.com/mxgmn/WaveFunctionCollapse
A paper referencing this repo as "the original WFC algorithm" (ref. 22): long google link to a PDF
Note that I don't think the comparison to AI is particularly useful; it's only technically correct that they share some similarities.
I don't think WFC can be described as an example of a Monte Carlo method.
In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don't have a closed form. The more you sample, the more accurate the result.
In WFC, the number of random experiments depends on your map size and is not variable.
Sorry, I should have been more specific - it's an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you're done - is there anything beyond this in WFC?
I'm not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn't fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can't be changed.
I'm not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.
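The point about the step count being fixed by the map size can be made concrete with a toy collapse loop (constraint propagation and entropy-based cell selection omitted, so this is a deliberately stripped-down sketch, not real WFC): each iteration fixes exactly one undecided cell, so a width x height map always takes exactly width x height random draws.

```python
# Toy "collapse" loop: pick an undecided cell at random and assign it a
# random tile until every cell is decided. The number of steps is
# determined entirely by the map size, not by a sampling budget.
import random

def collapse(width, height, tiles, seed=0):
    rng = random.Random(seed)
    undecided = [(x, y) for y in range(height) for x in range(width)]
    result, steps = {}, 0
    while undecided:
        cell = undecided.pop(rng.randrange(len(undecided)))  # choose a cell
        result[cell] = rng.choice(tiles)                     # "collapse" it
        steps += 1
    return result, steps

grid, steps = collapse(4, 3, ["grass", "water"])
# steps == 12: one collapse per cell on a 4x3 map
```

Whether that fixed-length pass counts as "a single round of MCMC" or as something categorically different seems to be exactly the disagreement here.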