Cuteness and Monsters or the Endless Options of Euloran Graphics



October 18th, 2024 by Diana Coman

Given the huge size of the pile of pictures, I might as well start with some of them directly, so scroll down for just a few lucky models out of thousands looked at and millions in the wings waiting for their chance to get out there in the euloran worlds and make their mark or at least be seen around.

A very forward-marching version of my character in game:
gfx2_4_640.jpg

The monster with multi-eyed tentacles and vomit-like colour:
gfx2_1_640.jpg

The psychedelic duck:
gfx2_3_640.jpg

Props and shape-showing character:
gfx2_5_640.jpg

A very curious flying worm - or maybe wyrm:
gfx2_12_640.jpg

The above are just a very small selection of the output from a few weeks of focused work on the graphics generator. This work finally opened up the sort of more "organic" growth of models that I was after - in addition to fitting meshes to bones, I can now finally grow such meshes directly along the bones of any given skeleton. The results vary, of course, depending on a whole other set of parameters, but they build up quite well on the strong foundation of everything done so far. To keep it simple, I'm highlighting here just the major parts of it as it currently stands:

  1. There's a new, working approach to "grow" a mesh organically around the bones of any given skeleton, thus getting rid in one go of a whole set of issues that appear when one tries to build such a mesh from pieces. As a bonus, it actually results in more interesting *and* less space-consuming meshes, too.
  2. There's a whole set of new types of surfaces available to mix as mere building blocks of any model or indeed model type.
  3. The skeleton construction stands the test of expanded use of all sorts, from simply more bones1 to more fine-tuned picking of joints, thus feeding back into the process what is learnt through all the exploration. With the added parameter groups and types, the same skeleton building now serves equally well for making animate and inanimate models - the only difference is really in the post-processing.
  4. The texture generator got an upgrade and can now both generate and apply an image directly to a whole model, just as it did to a single mesh2.
  5. There are automated explorers3 to help literally see what's in any direction picked.
  6. There are settled generators fully connected to everything needed to pick up any chosen results of various explorations and add them to the gfx pipeline for direct use in Eulora2 from that point on.
  7. There's a growing list of insights regarding not only what works and what doesn't but more to the point *why* that is so - and thus what sort of further explorations are worth doing, what sort of new additions are worth considering and so on.
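To make the first point above a bit more concrete, here is a minimal sketch of the simplest form of "growing" a mesh around a bone: sweep a ring of vertices along the bone's axis and stitch consecutive rings into triangles. This is purely illustrative - the function name, its parameters and the plain tube shape are my own stand-ins for explanation, not the actual generator's code, which grows far richer surfaces than a plain tube:

```python
import math

def grow_tube_along_bone(head, tail, radius, rings=4, sides=6):
    """Grow a simple tubular mesh along one bone segment.

    head, tail: (x, y, z) endpoints of the bone.
    Returns (vertices, faces), with faces indexing into vertices.
    Illustrative sketch only, not the real generator.
    """
    hx, hy, hz = head
    dx, dy, dz = tail[0] - hx, tail[1] - hy, tail[2] - hz
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / length, dy / length, dz / length
    # Pick any vector not parallel to the bone to build a local frame.
    ax, ay, az = (1.0, 0.0, 0.0) if abs(ux) < 0.9 else (0.0, 1.0, 0.0)
    # First perpendicular: cross(bone axis, arbitrary), normalised.
    px, py, pz = uy * az - uz * ay, uz * ax - ux * az, ux * ay - uy * ax
    pl = math.sqrt(px * px + py * py + pz * pz)
    px, py, pz = px / pl, py / pl, pz / pl
    # Second perpendicular: cross(bone axis, first perpendicular).
    qx, qy, qz = uy * pz - uz * py, uz * px - ux * pz, ux * py - uy * px

    verts = []
    for r in range(rings + 1):
        t = r / rings  # position along the bone, 0..1
        cx, cy, cz = hx + dx * t, hy + dy * t, hz + dz * t
        for s in range(sides):
            a = 2.0 * math.pi * s / sides
            c, sn = math.cos(a), math.sin(a)
            verts.append((cx + radius * (c * px + sn * qx),
                          cy + radius * (c * py + sn * qy),
                          cz + radius * (c * pz + sn * qz)))
    # Stitch consecutive rings into two triangles per quad.
    faces = []
    for r in range(rings):
        for s in range(sides):
            a = r * sides + s
            b = r * sides + (s + 1) % sides
            faces.append((a, b, b + sides))
            faces.append((a, b + sides, a + sides))
    return verts, faces
```

Repeating this per bone and varying the radius and the ring profile along each bone is what starts turning a plain tube into something more organic.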

For some examples coming from all the above, there's such a large pile of all sorts that I can just as well simply do a lucky dip to pick from it - so here are some more pictures:

Run for your euloran lives - it's the Hungry Spikey come to feed:
gfx2_15_640.jpg

The big-headed dinosaur:
gfx2_6_640.jpg

A nightmarish winged creature:
gfx2_7_640.jpg

Snail coming out of its shell:
gfx2_8_640.jpg

The curved tails possibly attached to someone:
gfx2_9_640.jpg
gfx2_10_640.jpg

A very boldly coloured fish:
gfx2_2_640.jpg

The many legs that want to move:
gfx2_13_640.jpg

This one got named, too, say "Hi" to Catface:
gfx2_11_640.jpg

Beyond the new production pipelines and the new directions and the new everything in the above, there is something else coming out of it more clearly, and it occurs to me that it might even be the most valuable of the lot: a better-formed statement of what makes my approach so different from what exists otherwise, or why exactly I don't just go with the latest framework to model yet another nice-looking set of assets, made and perfected one by one. To highlight it for easy finding:

While I appreciate beauty and harmony, whether in their positive or negative forms, it's the underlying process, the very genetics at work if you prefer, that I'm interested in. My focus here4 is indeed on modeling, understood in its original meaning of becoming able to generate what comes next. Hence my goal at each step is to advance that sort of deep, generative understanding, not so much to achieve this or that specific effect in itself. Whenever one takes a break and looks at the result, its beauty or lack of it is more a measure of how far one got along the modeling path - simply a natural result rather than something to go chasing after.

I think that it's really only the above sort of explicit gain in modeling understanding that makes it worth all the work and even all the "ugly" results on the way. A beautiful artefact may be wonderful in itself, indeed, but note that the sort of brute-force data crunching freely and quite widely, even increasingly widely available nowadays5 can and does produce any number of those from even just bits and pieces of anything it has ever seen. And thus, by extension, anything one makes "by hand" is going to be fully drowned in short order, lost in an endless pile of cheaply and quickly made AI-replicas, at best. Though most likely it will not get even that much attention as such, let alone anything else - it will instead merely be used directly as just another data point, that's all, the product of all your effort gobbled up as mechanically and aseptically as possible, since it's bots only, after all.

As for the technical details and what more or else is coming out, how it's done and where one can further go with it, I'll happily discuss it with others too, as always. The best and easiest way if you are interested is directly in game, of course, but failing access to that, simply ask in that comment box below and we'll take it from there to see where it goes.


  1. On the topic of more bones: is 99 bones a lot for the mesh-to-bone approach? What's 999 to the organic one, or 9999 for that matter? There's a cost to more of anything, of course, but if and when that cost is willingly paid, in resources and in the corresponding adaptation of the rest of the parameters to fit, there's no further limit at all. 

  2. Even better, actually, since I finally figured out what was causing that visible seam in it and how to get rid of it, too! 

  3. Run it, pick some intervals for the desired parameters and then just let it get on with it. It will generate the files, load them in a viewer, take pictures from different angles, save it all and repeat until done. Then all I have to do is to go through the pile of resulting pictures and take it all in, with any needed information at hand, since they are all neatly named so as to know what went in and what process was at work. 
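  As a rough sketch of that explorer loop - with the parameter names, the sampling and the file naming being purely illustrative stand-ins of my own, and the actual generate/load/screenshot steps reduced to comments:

```python
import random

def explore(param_ranges, samples, angles=(0, 90, 180, 270), seed=0):
    """Plan an exploration run: sample each parameter from its given
    interval and produce one picture name per sampled model per camera
    angle. A sketch only - the real explorer also generates the model
    files, loads them in a viewer and takes the actual screenshots."""
    rng = random.Random(seed)
    names = []
    for i in range(samples):
        # One parameter sample per model, drawn from the given intervals.
        params = {k: rng.uniform(lo, hi)
                  for k, (lo, hi) in sorted(param_ranges.items())}
        # Encode the inputs in the name, so each picture tells its story.
        tag = "_".join(f"{k}{v:.2f}" for k, v in sorted(params.items()))
        for angle in angles:
            # Real pipeline: generate the model file with these params,
            # load it in the viewer, rotate to `angle`, take the picture.
            names.append(f"model{i}_{tag}_a{angle}.png")
    return names
```

  For instance, explore({"bones": (3, 9), "twist": (0.0, 1.0)}, samples=2) plans 8 neatly named pictures, one per sampled model per angle.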

  4. And not just here but quite everywhere really, see for instance the discussion of that unnamed other.

  5. "AI" is the more usual term and if you wonder why I didn't use it directly, I don't mind the question at all - consider perhaps what it means and what it does exactly to you to fully and fundamentally equate at all times intelligence with data crunching, without even further reflecting on how well the two may or may not be truly matched. 
