No Bones in Thy Skeleton and No Theory in Thy Research

This is an intermediate report on my past two weeks' explorations of animated character graphics for Eulora, as part of working towards a parametrised generator of trillions (infinity!) of models. While I usually do any write-ups at more clearly defined stopping points - basically when I got *somewhere* rather than in the middle of going-somewhere - in this case I had to do this write-up simply to unload all the stuff that I explored1. The main reason for taking the time now, in the middle of it all, is simply that I found myself already going back through my own notes just to *remember* fully one detail or another. And rather than wasting the time to keep going back and forth through the notes, it's way, way better to write it all up and publish it - at the very least because then I'll actually remember it (that's my meaning for mentally archiving something) and otherwise because it also allows others to provide any feedback they might have.

Keeping in mind therefore that this is simply a stop *on the way* to somewhere rather than any planned somewhere as such, here's what I looked at so far, what I did and what I found out as a result:

  1. Sockets for attaching one mesh to another on the fly with CS+Cal3d.

    In principle this is absolutely needed so that one can show visually the result of actions such as equipping tools/weapons/clothes and the like. Theoretically, and from reading the relatively sparse documentation & examples of both CS and Cal3d, I had naively set this down as an "easy and quick" sort of thing - in practice it turned out, of course, not to be all that easy and quick at all. I spent a bit less than 2 days trying to make it work in client 2.0 and while I managed to create from code any sockets I chose on a Cal3d mesh and moreover to write the code that uses them (aka attaches another mesh to that socket) according to what all the docs say, the results so far are annoying as hell: there's no error and no complaint and according to everything available all is perfect, except the "attached" mesh does not bloody show at all!

    While I am quite sure that I *will* have to sort it out and make it actually work sooner or later, at this stage I decided against spending more time on it: on one hand it's not yet burning and on the other hand, giving it some time will most likely help, since in the meanwhile I'll anyway get to know everything relevant way better than I do right now. So much for the "easy and quick" task though - and I can't tell you how annoying this "result" here actually is. For what it's worth, I even suspect already where the issue might be, namely in the exact positioning of the socket, because the positioning has to explicitly choose a *triangle* of the whole bloody mesh to which you want to attach something. So either you know that mesh in full detail or, I guess, it's very easy to attach the thing in a way that doesn't show. Anyway, that's pretty much why I think I'll have a way easier time sorting this out at a later stage - this part will become clearer as part and parcel of writing my own mesh generator.

  2. Detailed study of Cal3d formats, to fully get what my generator would need to produce exactly. As a practical example for this I used the files for the Cally example character (see next item), but I took initially some time to go through the formats with pen and paper at hand, as there's a marked difference between knowing the formats well enough to plug them into the client (as I knew them already) and knowing them to the level of detail needed to generate new files that match exactly what I intended to represent in the first place.

    The full set of Cal3d formats consists of 5 types of files: skeleton, mesh, animation, morphanimation and material. There's a binary and an xml version of each, but Cal3d's handy converter converts both ways, so no issue on this front that I can currently see. As CS can't use materials in Cal3d format anyway and Eulora is currently not interested in morphanimations, I focused exclusively on the remaining 3 types: skeleton, mesh and animation.

    The skeleton file held the first surprise for me, in that the name is really misleading: it's not at all a skeleton (aka bones, despite everything being called "bones") that is stored in there but at most a set of... articulation points, I'd call them. Essentially a "bone" in cal3d's format is simply ONE POINT, and that makes a huge practical difference because one point does not have any length or thickness - it has merely a parent and 2 sets of transformations (where by one transformation I mean one translation plus one rotation, with the rotation given as a quaternion in cal3d's format): one that is relative to the parent bone (so effectively defining the point's position with respect to the parent bone rather than with respect to the whole model's origin) and one that is basically the reverse transformation for the position in the model's space (aka with respect to the model's origin).

    The transformation relative to the parent bone is confusingly called just "translation" and "rotation". The root bone is the one that has no parent and - implicitly, because explicitly would be way too clear - its own origin is in fact the model's origin. The other set of transformations is then called - for added confusion points - "local", though it is essentially relative to the model's origin rather than the bone's origin. The thinking behind that "local" is that these are the transformations applied to a mesh's vertices to bring them from the model's space into the bone's space. I can see the reasoning but I find it very unhelpful, and the reason is of course the fact that I'm looking at it all with the aim to generate the darned thing, while all of it is built with the aim to render it, so everything keeps falling the wrong way around, every time. Not that it's unexpected or any news, no.
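To make the two sets of transformations concrete, here's a small sketch (my own illustrative code, not cal3d's API - the names and the quaternion composition convention are my assumptions, to be checked against cal3d itself) of how a bone's model-space position and rotation fall out of the chain of parent-relative translations and rotations:

```cpp
#include <cassert>
#include <cmath>

// Sketch of cal3d-style "bone" maths; names are mine, not cal3d's API.
struct Vec  { double x, y, z; };
struct Quat { double x, y, z, w; };          // rotation as a quaternion

Quat qmul(const Quat& a, const Quat& b) {    // Hamilton product
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}
Vec qrot(const Quat& q, const Vec& v) {      // rotate v by q: q (v,0) q*
    Quat p{v.x, v.y, v.z, 0}, c{-q.x, -q.y, -q.z, q.w};
    Quat r = qmul(qmul(q, p), c);
    return {r.x, r.y, r.z};
}

// A "bone" as stored in the skeleton file: a parent index plus ONE
// parent-relative transform; no length, no thickness.
struct Bone { int parent; Vec t; Quat r; };

// Model-space position+rotation of bone i, composing up the hierarchy.
// (The composition order is a convention choice; verify against cal3d.)
void absolute(const Bone b[], int i, Vec& pos, Quat& rot) {
    if (b[i].parent < 0) { pos = b[i].t; rot = b[i].r; return; }
    Vec pp; Quat pr;
    absolute(b, b[i].parent, pp, pr);
    Vec t = qrot(pr, b[i].t);                // parent's rotation applies first
    pos = { pp.x + t.x, pp.y + t.y, pp.z + t.z };
    rot = qmul(pr, b[i].r);
}
```

The "local" set stored in the file is then just the inverse of the (pos, rot) pair this computes, pre-chewed so the renderer doesn't have to walk the hierarchy for every vertex.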

    Moving on from the skeleton, the mystery of how come the bones are in fact only points has its solution in the mesh file - as long as you remember that it's again made for quite a different purpose than mine. A bone's length and thickness matter because it's those that determine which parts of the attached mesh move and in what way. But since the cal3d formats are not concerned with figuring out which parts need to move, there is indeed no need to store that information (length and thickness) explicitly in the skeleton file, right? So what you get instead are the already-calculated (pre-chewed, rather) exact values - and in the mesh file to boot, since yeah, it's the mesh's vertices that are affected. Moreover, you get to further realise that the whole pretense of "meshes being attached to bones" is misleading, as apparently CG2 ~always aims to be: there is no attachment as such, because when you render a mesh, the skeleton and any "attachments" are totally ignored - it's only during animations that the skeleton matters at all and even then, only for those "bones" that are explicitly present in the animation's keyframes.

    To recap the above mess, my current understanding is this: in the skeleton file there are only the origins of each "bone", given as a hierarchical set of transformations starting with a root bone (that has no parent and that is, as a result, at the origin of what's considered the "model space"), as well as the transformations from model space to each bone's space; in the mesh file there is a full list of vertices given through position & rotation *in the model space* (so no, not hierarchically at all - you need the damned foot itself to know where the foot for that particular character needs to be), each vertex having further attached to it "influences"3 from any number of "bones" - note that those "influences" have any... influence only at animation time, as they don't do anything at all when just rendering the mesh itself; finally, the animation file consists of sequences of transformations (translation and rotation, as everywhere) given for each "bone" with respect to its parent (so supposedly in the "bone's space").

    The above better understanding of the formats came in a small part from re-reading the Cal3d docs (that didn't take that long since there's really not much to those docs) and otherwise in a lot bigger part from the practical messing about with generating some stuff (see below, after the Cally mapping item).

  3. Mapping the Cally example as a means to get a better understanding of what goes into a fully working animated character and figure out in all the needed gory detail just how all the different parts work together.

    As this was purely a study task, everything went fine: I converted the binary cal3d files (skeleton, meshes and animations) for Cally to xml format with cal3d's own converter (in cal3d/bin) and then went through them in detail with pen and paper at hand. As a result, I got the "skeleton" mapped out in full, all its 37 "bones", I admired the horror of the meshes being defined as lists of vertices and triangles making up the surfaces ("faces") and I finally understood more precisely what's with the "tracks" for animation - it's just how Cal3d decided to name the set of keyframes defined for each bone as part of one single animation.

    The benefit of the above is not so much something I can show as a concrete result, but rather a lot of practical experience and more concrete understanding that helps me go further and that I simply needed. It took only a few hours, as the model is not huge and the main part was mapping the skeleton plus just a bit of a mesh and one animation to get an idea.

  4. Prototype toolchain as a means to experiment and explore as well as allow this part to work outside the client itself since it really makes little sense to have to plug any attempt into the full client just to see the mess.

    The current prototype toolchain relies on CS and Cal3d, of course, but otherwise it's very light. I adapted the previous viewer and bash script for Cal3d models so that they work as a quick way to test whatever my own generator spits out in cal3d format. Then I wrote a quick prototype (less than 1k lines of cpp code itself) that can currently correctly write csf (skeleton) and cmf (mesh) files in cal3d format. It also plays around a bit with code-generation of "mesh parts": it relies for the shapes themselves on CS's regular shapes (the ellipse, specifically) and allows tweaking of the generation through a set of parameters - so far the desired height of the model and the ratios x:y, z:y; in principle I had in mind also the number of various limbs aka legs, arms, fingers and tits, but so far I fixed those, just to get something working and showing, since at this point it's anyway not helping much to spend more time on additional parameters. The current code successfully generated as a result an "ellipse-figure" aka a biped that has 1 or 2 elongated/flattened spheres (based on an ellipse rather than a circle) making up each body part, manages to keep them together in a recognisable biped shape and otherwise got its head bobbing about - if rather to the side for now - by stealing the relevant head-bobbing part of Cally's "walk" animation. It's of course not only ugly but not at all the approach I intend for the generation of chars, but it served well for this initial exploration anyway.
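For a taste of what such writers have to produce, here's an illustrative sketch (NOT my prototype's code) of emitting a one-bone skeleton in the xml flavour. The element names and the version number are reproduced from memory of what cal3d's converter dumps, so double-check them against actual converter output before relying on any of this:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative writer for a one-bone cal3d xml skeleton (.xsf).
// Element names/version as remembered from converter output - verify them.
std::string writeOneBoneSkeleton(const std::string& name) {
    std::ostringstream out;
    out << "<HEADER MAGIC=\"XSF\" VERSION=\"919\"/>\n"
        << "<SKELETON NUMBONES=\"1\">\n"
        << "  <BONE ID=\"0\" NAME=\"" << name << "\" NUMCHILDS=\"0\">\n"
        << "    <TRANSLATION>0 0 0</TRANSLATION>\n"
        << "    <ROTATION>0 0 0 1</ROTATION>\n"           // identity quaternion
        << "    <LOCALTRANSLATION>0 0 0</LOCALTRANSLATION>\n"
        << "    <LOCALROTATION>0 0 0 1</LOCALROTATION>\n"
        << "    <PARENTID>-1</PARENTID>\n"                // -1: this is the root
        << "  </BONE>\n"
        << "</SKELETON>\n";
    return out.str();
}
```

The useful property of the xml flavour is precisely that something this dumb suffices to get a file the converter (and hence the client) will take; the binary flavour can then be obtained by converting.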

    The above prototype is *not* intended in any way to be the generator (or even part of it) as such - it is just the initial practical exploration, mainly to start figuring out what the output even needs to consist of & how it works. For the actual generation approach there are quite a few directions I have in mind to look into and filter, to see what is worth giving a proper spin and/or starting from (see the next item in the list - the state of the theory review).

    NB: the animation format is NOT yet spit out by the current prototype, but since it's still xml, I don't expect it to be any bigger trouble than the rest, really. The more painful part with all those formats is at most this bridging of the gap between the logical place of some parts when generating the model vs where the format wants them for rendering convenience (e.g. the influences given per vertex and located in the mesh rather than in the skeleton). As it is, I lost quite some time to make that head bob and it's still not exactly bobbing in the right place - there's some further detail that I still don't fully grok about how the animation is exactly specified. So this is still on the list as such, but my current plan is to work on it in parallel with and as part of advancing the more concrete explorations of actual generation of stuff.

    The above prototype was quite invaluable really and, on the very bright side, I now have the assurance that it IS indeed possible to spit out - even from my own mesh generated from scratch - the exact formats that the current client can work with, even if the geometry and possibly the meshing too can perhaps be described sanely, aka through the chosen parameters rather than through the full resulting sets of everything. On the side, I also got, as part and parcel of this, to explore the regular 3d shape generation that CS currently provides: while there are supposedly cones, cylinders, spheres (built based on ellipses), boxes and capsules, practice shows that many of those are not all that great as generators, since the result fails CS's own checks - the engine itself complains either that the object is "not closed" or that there are "degenerate faces" and in principle both of those complaints can create visible performance problems if the number of such objects is high enough. This came as a surprise really, but I don't think it actually matters much since I don't plan to use CS's capsules and the like anyway. (The sphere works fine and it's actually used for the current default sky, but I don't plan on using it for the generator anyway.)
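The two complaints are easy enough to state as code. A minimal sketch of my reading of them (not CS's actual checks): a triangle list counts as closed when every undirected edge is shared by exactly two triangles, and a face is degenerate when it repeats a vertex index:

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Sketch of the "not closed" / "degenerate faces" checks; my own reading
// of the engine's complaints, not CS's actual code.
struct Tri { int a, b, c; };

bool hasDegenerateFace(const std::vector<Tri>& tris) {
    for (const Tri& t : tris)
        if (t.a == t.b || t.b == t.c || t.a == t.c) return true;
    return false;
}

bool isClosed(const std::vector<Tri>& tris) {
    std::map<std::pair<int,int>, int> edges;     // undirected edge -> count
    auto add = [&edges](int u, int v) {
        if (u > v) std::swap(u, v);
        ++edges[{u, v}];
    };
    for (const Tri& t : tris) { add(t.a, t.b); add(t.b, t.c); add(t.c, t.a); }
    for (const auto& e : edges)
        if (e.second != 2) return false;         // boundary or overused edge
    return true;
}
```

A tetrahedron's four faces pass both checks; a lone triangle fails the closed check, since all three of its edges belong to only one face.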

  5. Theory exploration as a means to improve my own knowledge in this field (since I hadn't otherwise much exposure to it so far) and to get some more ideas as to possible options and promising directions.

    For the theory exploration I started from character modeling, thinking - rather naively, in hindsight - that it's precisely where one gets to figure out what the whole process is and what its key parameters, constraints, requirements and issues are. Possibly these are indeed hidden somewhere in plain sight - and if you know where, please leave a comment pointing me to that where - but at least within the limited time I had, what I got instead of process and all that jazz was the realisation that the focus is at almost all times on "assisting artists" rather than on exploring the space and possibilities that automation offers (such as it is, maybe for good or for worse, but it... is).

    Basically the automation is considered at most a sort of crutch to do what the artists don't know (and would rather not learn either, from what I can tell), a data cruncher (for instance to extract features out of large databases of human faces) and otherwise a reliable grinder where the task requires one, as is the case for instance with mirroring the two halves of a character so that the result is perfectly symmetrical. This reliance on the machine for perfect symmetry ignores of course - while specifically and vocally aiming for "realism" and "real-like" results - the fact that no human being is actually perfectly symmetrical or even regularly "asymmetrical" anyway. Such mismatches aside, the overall point is simply that there is precious little exploration of fully automated generation as such4; the little there is tends to be found anyway in the CAD5 domain or otherwise unearthed in some incipient forms from quite a few years ago, which by now is not at all surprising if you remember Chomsky6.

    Besides the unexpected pleasure of Blum's paper on the medial axis7, the few useful bits I nevertheless got out of this part amount to a better understanding of some terms dear to artists, most notably rigging and skinning. While there is some loose usage of those two terms, what I've pinned down as more or less reliably explaining what is going on is this: rigging is concerned with how the movement of internal parts - be it "bones" if you must give them that name but, more generically, whatever internal parts are considered for modeling purposes - affects the vertices at the surface of the visible shape, while skinning would be the opposite of this, focusing instead on how movements or changes to the outer vertices correspond to movements - and more specifically rotations, mainly - of inner joints. In the usual simplified manner of a type of practice, rigging often turns into building a skeleton and positioning it just-so to fit at exactly the desired places inside an individual model (because god forbid it would fit more than one!!!), while skinning turns into deciding which vertices get linked to which bones and how much/in what exact way they move/change with that bone.

    Besides the terms themselves, I further got at least a quick scan of a few algorithms that are apparently most used (in CG it seems to go exactly like this - for all the huge pile of papers, the algorithms actually used are very few in any given domain) for calculating at rendering time the results of a given model that is already fully rigged and skinned ("smooth skinning" seems to be a very popular choice of algorithm, for being good enough, pretty much). And from there on, I had a quick look at skin binding and bone linking approaches too, though not in a lot of detail. The point and gain of those was mainly to get some idea of what the known problems and issues even tend to be and what sort of solutions have been found. Apparently when it comes to bone positioning and linking, the core (other than hand-picking/adjusting and/or machine learning based on image datasets) is still Blum's medial axis, and when it comes to meshing (describing a surface through those vertices and triangles dear to CG because of fast rendering), the core is Delaunay's triangulation.
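For reference, the "smooth skinning" calculation itself is just a weighted blend: each influencing bone carries the vertex to wherever that bone alone would put it, and the results are averaged by the influence weights. A minimal sketch with translation-only bone transforms (real skinning composes the full bone-space "local" transform with the bone's animated transform; the names here are mine, purely illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal "smooth skinning" (linear blend) sketch: a vertex ends up at the
// weighted sum of where each influencing bone would carry it.
struct Vec { double x, y, z; };
struct Influence { int bone; double weight; };   // weights should sum to 1

Vec skinVertex(const Vec& v,
               const std::vector<Influence>& inf,
               const std::vector<Vec>& boneOffset) {
    Vec out{0, 0, 0};
    for (const Influence& i : inf) {
        const Vec& o = boneOffset[i.bone];       // bone's current displacement
        out.x += i.weight * (v.x + o.x);
        out.y += i.weight * (v.y + o.y);
        out.z += i.weight * (v.z + o.z);
    }
    return out;
}
```

This is also why the "influences" only matter at animation time: with every bone at rest the blend reproduces the stored model-space vertex exactly.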

    The above narrowed the further space for exploration to looking at least at what has been done using Blum's and Delaunay's work (and Voronoi's, I suppose I should say, since it's linked anyway). While I still want to have a further look in there, as it's not otherwise a domain I was very familiar with, the exploration so far has turned up at least one rather interesting approach to mesh generation8, by Persson and Strang9, relying on a rather neat, I'd say, physical analogy: finding the equilibrium of an underlying truss structure. Basically the generator starts with a number of randomly placed vertices in the pre-defined bounding box in which the end shape will fit. In the physical analogy of a truss, those vertices are the nodes, and force-displacement relationships are considered to act on each "bar" (aka edge between vertices, as calculated) in the truss. Those force-displacement relationships depend on the length of the bar and, moreover, further "reaction" forces are introduced at the boundaries so that vertices are kept within the domain. The algorithm iteratively adjusts the positions of the nodes until the whole structure reaches an equilibrium. At each iteration, Delaunay triangulation is applied to recompute the edges (hence the truss' bars) between the new positions of the vertices. The geometry itself is simply and neatly described by a distance function that gives for any point its distance to the boundary of the domain, returning negative values for points inside and positive values for those outside.
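The distance-function part is easy to make concrete. A sketch following the paper's description (my own code, with hypothetical names): a signed distance for a disk, the union/difference combinations Persson and Strang use to build composite shapes, and the projection step that pushes an escaped point back onto the boundary along the numerical gradient of d:

```cpp
#include <cassert>
#include <cmath>

// Signed distance functions in the Persson-Strang style: negative inside
// the domain, positive outside, zero on the boundary.
struct P2 { double x, y; };

double dCircle(P2 p, P2 c, double r) {           // disk of radius r at c
    return std::hypot(p.x - c.x, p.y - c.y) - r;
}
double dUnion(double d1, double d2)      { return std::fmin(d1, d2); }
double dDifference(double d1, double d2) { return std::fmax(d1, -d2); }

// Push a point that drifted outside back towards the boundary by moving it
// -d along the numerical gradient of d; a few repetitions handle curved
// boundaries where one step isn't exact.
template <typename F>
P2 projectToBoundary(P2 p, F d, int steps = 5, double h = 1e-6) {
    for (int i = 0; i < steps; ++i) {
        double dp = d(p);
        double gx = (d(P2{p.x + h, p.y}) - dp) / h;
        double gy = (d(P2{p.x, p.y + h}) - dp) / h;
        p.x -= dp * gx;
        p.y -= dp * gy;
    }
    return p;
}
```

E.g. an annulus is just dDifference of two concentric circles; the relaxation loop applies projectToBoundary to any node with d > 0 after each force step, while the Delaunay re-triangulation (not sketched here) redefines which nodes are connected by bars.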

Conclusion and Next Steps

As it turns out, this stop in the middle of the road to put stuff down was more than needed - given that it took almost 2 days just to select, structure and do the proper write-up of the main points of interest. Nevertheless, it's still just somewhere in the middle so the next steps will just push each part further - only it should be a rather easier push, now that I archived the last bulging set of notes and can therefore start on a new one.

On the implementation side, the next step is to write the code to spit out the animation file too (with that, I'd at least have working writers for all three formats that are an absolute must from the cal3d set: skeleton, mesh, animation). Then I'll need to decide on some concrete generation approach (beyond the prototype/testing "let's play with spheres") that I want to try in practice, give that a go, see what comes out of it and then iterate. I must say that I really like the idea of a generator that effectively looks for an equilibrium solution for a set of displacement functions let loose in the 3D space that can maximally contain a piece of (or the whole) model. Nevertheless, given the discovery of the dubious actual meaning of "skeleton", I'm keeping for now a very open mind here and I don't really want to fix much upfront, since I can't quite tell how one or another approach would really turn out.

On the supporting implementation side (aka "theory"), I still need to do quite some reading, both on the Maths side and on the applications, such as they are, to CG. In particular, I think I have a bit more to explore to put my mind at rest regarding automated generators of geometry & mesh. I also need a better grasp of Delaunay's work and of Blum's medial axis at the very least.

Considering the above, it might very well be that the next write-up finds me still somewhere in the middle of the road rather than moving on to something-other-than-generator but at the very least it should be *further ahead* on this road, I would think.


  1. And as a result of the write-up fully structure and archive it all in my head, as it always happens with write-ups. 

  2. Computer Graphics 

  3. Technically, Cal3D also allows some "weights" to be attached to each vertex with the purpose of specifying the "rigidity" of each vertex. Those are called in the documentation "springs" and the underlying idea is to provide cloth-like or hair-like effects (when not fully rigid). 

  4. Even approaches that state "generation" and seem bent on using automated approaches turn out, at a closer look, to still see the whole exercise as ultimately, at most, supporting some "artist" and are therefore absolutely focused on a... GUI. For an example among the better attempts at least (as it provides some concrete skeletons and some variations obtained at least programmatically on those): "Creature Generation using Genetic Algorithms and Auto-Rigging", by Jon Hudson, 2013. 

  5. Computer-aided design 

  6. And a good example specifically here for automated generation is to my eyes a 1967 paper by Harry Blum, on "A Transformation for Extracting New Descriptors of Shape." To quote directly from his introduction, the very first paragraph: "I have approached the problem of shape by assuming that the current mathematical tools were somehow missing essential elements of the problem. For despite more than two millennia of geometry, no formulation which appears natural for the biological problem has emerged. This is not surprising perhaps when one recognizes that geometry has been born of surveying and has grown in close collaboration with physical science and its mensuration problems. A corollary to this position is that there is some central difference between the biological problem that we are trying to deal with and the physical problem that we have been dealing with. Consequently, such an approach requires a restudy of visual function to assess what such a geometry should indeed try to accomplish. Unfortunately, the problem of exploring function is not easy to do in isolation, since the visual world is extremely rich, and hypotheses about visual shapes and their functional value to an organism may reflect the cultural bias of the experimenter to a large degree. I have chosen to enter the problem from the middle by hypothesizing simple shape processing mechanisms, and then exploring together the geometry and visual function that result. One such mechanism is presented in this paper. Since it leads to a drastic reformulation of a number of notions of visual shape, it may be useful to review briefly some of the notions implicit in our views and experiments." - It was simply a pleasure to read this, especially after sifting through quite a lot of what I can only summarise as "and now we tweaked this so it looks more like that and/or it's easier for the artist to do with just a few clicks." 
And yeah, in case you are wondering - Blum's medial axis is still used, very useful and very much studied, as it represents the topological skeleton of a shape, way better than hand-placed "bones" and whatnots, at that. As an aside, apparently even possibly too well from at least one computational perspective, in that even small changes to the boundary of the whole shape can trigger large changes to the medial axis. 

  7. And for a *very* unfair comparison, I'll quote here some modern production, on page 12 of Desmond Eustin van Wyk's 2008 vintage "thesis submitted in fulfilment of the requirements for the degree of MAGISTER SCIENTIAE in the Department of Computer Science, University of West Cape": "Also, programming in general is difficult and operators or functions used in procedural modelling are of a low level and requires understanding of programming concepts during script development." Full marks for the candid admission but this is it - the newly minted magister scientiae in recognition for the work done to avoid the difficulties of programming and the horrible requirements of understanding concepts. What, do you have any problem with that? 

  8. Note that CS's and more generally CG's use of "mesh" is annoyingly loose from a stricter, more mathematical I guess, perspective: while "mesh" is used to stand for what is rendered on the screen as a model's appearance, strictly speaking it's simply the description of a surface through simpler shapes - most usually triangles for 2D; when looking into generating models though, one needs to consider before that the generation of "geometry" aka of the actual 3D shape that exposes those surfaces to be meshed. So the generation of a 3D model has to include at the very least 4 parts (generated through whatever means and possibly shared across multiple models but still generated at *some point* in time and through *some method*, nevertheless): a skeleton aka a hierarchical set of joints and a way to describe how they act upon a given geometry+mesh when such is attached; the geometry of the model's parts and the concrete meshing of its outer surfaces (hence, the vertices and triangles that the graphics engine ultimately works with); the animations that describe essentially movement of vertices anyway, whether it's done indirectly via the "bones" of a skeleton or directly as is the case for the so-called morph-animations. 

  9. Per-Olof Persson and Gilbert Strang, "A Simple Mesh Generator in Matlab", SIAM Review, Vol. 46 (2), pp. 329-345, June 2004 

2 Responses to “No Bones in Thy Skeleton and No Theory in Thy Research”

  1. Diana Coman says:

    For future reference, the discussion of the above, from #eulora logs:

    <diana_coman> mp_en_viaje: ^ the promised [http://logs.ossasepia.com/log/eulora/2020-03-03#1002363][write-up]
    <ossabot> Logged on 2020-03-03 19:13:47 diana_coman: I have ~3k words trying to set down the work of those past 2 weeks and I'm still not done; and it's anyway a set down out of necessity (aka it needed the structuring) rather than out of having arrived at a can-stop-here point, sigh; anyways, I should be able to finish and publish the write-up tomorrow.
    <mp_en_viaje> i just finished mine, so thus i can now proceed... to yours
    <diana_coman> ah, new trilema read? /me goes to read then
    <mp_en_viaje> diana_coman, noy so fast lol. so basically, "bones" should really more straightforwardly be called pinch points ?
    <diana_coman> mp_en_viaje: it's a mess - basically it depends on *which ones*/where you mean; in cal3d's format they are pretty much pinch points, yes; in theory and in modelling tools (such as Blender) you get proper bones with length and thickness and all that, precisely because those things DO matter for figuring out which vertices are affected
    <diana_coman> not even pinch points really; just...points, lol
    <mp_en_viaje> so basically there's a half-compiled sitution because in geometrical truth the only thing they can be is pinch points ; but in an intuitive sense they'd better be somewhat more like the layman's notion of a bone in the ghuman skeleton
    <diana_coman> it's not that the attached mesh gets pinched there
    <mp_en_viaje> diana_coman, but that if it DOES get pinched, it will have to be there
    <diana_coman> hmmm, in practice, annoyingly and infuriatingly and insanely - not even that; you can have those points ANYWHERE and they can "influence" ANY vertices ANYWHERE
    <diana_coman> for all the logic that makes
    <mp_en_viaje> i confess that all is not so much
    <diana_coman> welcome to my love of CG!
    <mp_en_viaje> let's work at this through the immediate example.
    * diana_coman is listening
    <mp_en_viaje> unwind with me the history of mathematics, and let's be again in the days of lagrange. the problem of modelling a rope under load informs all this, yes ? the "bones" aka "pinch points" are the discrete portions of the rope (or length of chain), and in ~this~ sense any one can influence any one ?
    <mp_en_viaje> ie, mediatedly through the nearest one ?
    <mp_en_viaje> this might explain at least why they tend to count so ungodly uselessly many, anyways.
    <diana_coman> hm, I would *hope* that indeed that is what informs all this (though evidence that there is something exactly informing is rather ..sparse); and yes, the theory as far as I can see it seems to be that indeed.
    <mp_en_viaje> alright, this is pretty fucking stupid, not to mention century+ out of date
    <mp_en_viaje> what you get when you teach people the alphabet only, they'll "read" their "own" vulgate into the bible.
    <diana_coman> well, honestly, they probably don't mean that specifically; there's nothing all that clear at any point; it's more a matter of "this seems to fit/work well enough and it's intuitive!!"
    <mp_en_viaje> rediscover half of everything, make a tower of it...
    <diana_coman> sure, there are some approaches that go further and consider also a layer of muscle between those bones and the mesh etc
    <mp_en_viaje> how are the points given ? like if there's a bone (1, 1, 1) and a bone (3, 0, 0) and a bone (0, 2, 1) then the model's implicitly a rectangle 3x2x1 units ?
    <mp_en_viaje> well, a parallelipiped, whatever.
    <diana_coman> mp_en_viaje: heh, why u so logical; the first insanity is that the skeleton has nothing to do with the model at any other time than strictly animation-time
    <mp_en_viaje> but i mean, how are these "bones" defined ? not by absolute position ? what are they then ?
    <diana_coman> the bones are defined by relative position to their parent bone; ie it's a tree with a root bone
    <diana_coman> it's that root bone (the one without a parent) whose position is considered the origin of the model
    <diana_coman> so if the root is at 10,10,10 then the model's space is supposedly with origin at 10,10,10
    <mp_en_viaje> so polar rather than cartesian, 1st bone is always 0, 0, 0 and the 2nd being 1, 1, 1 and the third being 1, 1, 1 makes the third absolutely 2, 2, 2 away from wherever the first is ?
    <mp_en_viaje> or are they actually given in steradians and distances ?
    <diana_coman> they are actually given as relative transformations so translation (3d vector, x,y,z) from parent bone and relative rotation (quaternion from parent bone)
    <mp_en_viaje> aha
    <mp_en_viaje> so it then can be said that indeed each bone is the length (of unclear thickness) from its parent to "itself". ie, bones are only defined as the point of their ending
    <mp_en_viaje> on the implication that all bones off this bone START where it ends.
    <diana_coman> on top of that, each bone further stores the translation+rotation to bring a vertex to the bone's own local space; so basically the inverse transformation for the cumulative transformations from the root to that bone
    <mp_en_viaje> this actually makes perfect sense to me.
    <diana_coman> I guess you can see it that way, yes; but there's no thickness then anyway
    <mp_en_viaje> it's not clear why thickness is wanted.
    <mp_en_viaje> but yes, abstract 0-thickness bone seems to be the case
    <diana_coman> because it normally matters if you really go for a biological model, doesn't it? position of bone inside the volume of flesh and the bone's thickness also matter with respect to what the movement looks like
    <mp_en_viaje> yes, but you see, i'm sure there's a parametric distance somewhere, saying how close to the abstract bone the texture should pack
    <mp_en_viaje> so yes, you can't really have eccentric bones, but then again... something you gotta give
    <diana_coman> mp_en_viaje: the format itself makes sense as above; it's not as much as it doesn't make *any* sense, it's simply that it's geared towards using the results of something already generated and fully "produced" aka it's not the generation that is coded but the most detailed (as in as much detail as needed) result
    <mp_en_viaje> why do you say so ?
    <diana_coman> heh, that sort of thing "a parametric distance" is at most in ...blender
    <diana_coman> the most obvious example I'd say is the actual "which vertices does this bone influence"
    <mp_en_viaje> what does prevent you from wrapping a mandelbrot over an arbitrary bone tree, for shits and giggles ?
    <diana_coman> the "which vertices" is in the mesh file where each vertex has attached the list of bones that affect it and the "influence" as a %
    <diana_coman> so there's nowhere the function that calculates "what vertices are affected if this mesh is attached to that bone"
    <diana_coman> it's already the result of such a calculation (whatever that may be)
    <mp_en_viaje> hm
    <diana_coman> re wrapping mandelbrot or anything else - I need to do the actual wrapping and then spit out the coordinates, vertices, faces, influences, whatever comes out of it.
    <mp_en_viaje> so use something simple, like a law of halves. closest does 1, the next 1/2 and so on
    <diana_coman> sure, I'm not saying it can't be done or anything; I'm just saying it's something I'll need to figure out in some way, wherever I'm starting from; it's an open field, can do whatevers.
    <mp_en_viaje> aha
    <mp_en_viaje> well, i think this'd be worth trying out.
    <diana_coman> mp_en_viaje: what specifically? at this stage I can think of ~anything worth trying precisely because there's not much in the way; nevertheless, all tries take time and some effort, ofc.
    <mp_en_viaje> take a fractal to be your texture ; take an arbitrary graph, to be your skeleton ; take some parametric manner (such as, say, fibonacci might inspire) to do the dampening. produce the object and display it.
    <diana_coman> mp_en_viaje: and the geometry?
    <mp_en_viaje> isn't the geometry implicit in the bone structure and the bone-texture relation ?
    <diana_coman> you do realise I have to produce the list of vertices AND triangles, nothing less and no implicit will do; not sure I get fully your meaning of texture because the CS "texture" is just an image really.
    <mp_en_viaje> and this takes to what i'm missing.
    <mp_en_viaje> yes, texture is just an image. what triangles ?
    <diana_coman> the "mesh" aka specifically the description of the outer surface (of an underlying 3D object supposedly) through triangles in CS case; basically approximating the 3D surface through a long list of 2D triangles
    <diana_coman> by "geometry", CG means the 3D shape
    <diana_coman> I get it that you'd expect to have a skeleton + influences on texture and so derive the shape from there; except that's not what CS expects/does and so if it is to be that, I'll have to explicitly do that derivation and spit out the shape in "neutral pose" or whatever.
    <mp_en_viaje> is this hard ?
    <diana_coman> recall, the skeleton is used only during animations, while there IS a "model" even at rest and then the skeleton is fully ignored to start with.
    <mp_en_viaje> well yes, by the cs. but i mean, this is what "wrap" or "produce" means above
    <mp_en_viaje> again, take a simple rule, "around main bone, distance x ; around child bones, distance x/2 ; etc".
    <diana_coman> I can't really say atm; it's not like I have any experience at ALL with this sort of thing or with meshing or with ANY of it; it took me two weeks after all to at least get wtf is exactly the full input cs/cal3d want anyway; I can look into it and see, that's about all I can say.
    <mp_en_viaje> sure.
    <diana_coman> anyway, re the above, why is the texture even needed, I don't get it?
    <diana_coman> you can use any image as texture anyway
    <mp_en_viaje> what ~I~ am saying on the other hand is that caution can't even be thrown to the wind, [http://trilema.com/2020/the-buller-podington-compacts/?b=watch&e=anyway#select][for lack of anything but wind to set it on in the first place]
    <diana_coman> it's on top, painted on whatever shape I define through those vertices and triangles
    <mp_en_viaje> so you know, just do a thing and let's laugh at the product. we'll be way ahead of more responsible adults who do not venture themselves so unconscionably.
    <mp_en_viaje> diana_coman, i expect you need it to ~see~ the thing.
    <diana_coman> neah, can even paint it green, what
    <mp_en_viaje> nah. you're looking for subtle detail and meaning.
    <mp_en_viaje> this is why i said mandelbrot.
    <diana_coman> hm
    <diana_coman> from what I gather, your suggestion is in fact simply to "grow" the structure around a set of bones and rely on the mandelbrot paint to trick the eye?
    <mp_en_viaje> i just want to see what comes out. this isn't a definitive or any other kind of solution
    <mp_en_viaje> the paint is there to, eg, illustrate something about "this is JUST how the bones should work" or "this is TOTALLY NOT how the bones should work" that we aren't atm in a position to divine for lack of experience
    <mp_en_viaje> the whole exercise is self-didactic.
    <diana_coman> as I said earlier - at this stage and the way I see it, pretty much ~any try is worth the same; but I guess after this write-up I got rather cold re "bones"
    <mp_en_viaje> ie, very cheap experience of the very best kind is available, let's have some. even if the only product of it is that we can later say "well, at least this model isn't as bad as the mandelbuddy", we are still ahead of more responsible adults who do not venture themselves so unconscionably.
    <diana_coman> ie my inclination was more towards generating whatever shapes of some sizes and *then* attaching those to even random bones for all one cares
    <diana_coman> I doubt there's anything really very cheap in there, heh
    <mp_en_viaje> i think this is better than that.
    <mp_en_viaje> well, yeah, cheapness is always relative.
    <diana_coman> why?
    <diana_coman> why this better than that , I mean
    <diana_coman> (with cheapness it's clear enough, lol)
    <mp_en_viaje> i got that. because, i believe, that forces you to make more assumptions you're not aware of, that you then have to fight with.
    <diana_coman> hm; funnily enough what I had against the skeleton-based is pretty much the assumptions, hehe
    <diana_coman> perhaps it's simply that I can see them better there than in the other case, might be.
    <mp_en_viaje> consider the matter from the traditional republican pov of standing and opposability. you will be asked "why do you even think this is a shape ?" what do you answer, "Because it's in my geometry shapebook ?" then what of "what makes you think your book has any power here, mage ?"
    <diana_coman> anyways, fine with me, I'll re-read, try to see what it takes, shout when stuck and otherwise see.
    <mp_en_viaje> k
    <diana_coman> mp_en_viaje: because it's an equilibrium :D
    <diana_coman> I don't care of no geometry shapebook; it could not be any other way, given those here parameters; if you don't like it, change the parameters
    <diana_coman> and/or the process, sure.
    <mp_en_viaje> you said "make some shapes", did you not ?
    <diana_coman> yes, but "shapes" does not mean regular shapes or even "usual shapes"
    <mp_en_viaje> but you will have to by your word make some actual shapes, specific and given.
    <diana_coman> hm? so you make with the wrapped mandelbrot, what
    <diana_coman> or what, because you don't call them shapes they are not that?
    <diana_coman> I make some "actual shapes" aka the result of one process+one set of params, that's all
    <mp_en_viaje> and how do you know they are shapes ?
    <mp_en_viaje> i can say "dude i dunno, i just fucked with the bones, WHICH ARE PART OF THIS". what can you say ? "I got them from geometry, which should be a part of all things" ?
    <diana_coman> uhm, I suspect it's just some superficial misunderstanding really
    <mp_en_viaje> well, in any case it's why i suspect this is better than that :P
    <diana_coman> on one hand I'm not that sure that any bones are part of it; on the other hand you seem to invest "shape" with more meaning than "a finite piece out of the infinite 3d space " (since it's 3d shapes we are talking about, to be more precise)
    <mp_en_viaje> their space is in principle great, and the actual space of interest in context an unknown carving from it. hence the problem, and why "it forces you to make assumptions you then have to fight".
    <mp_en_viaje> anyways, re to have to plug any attempt into the full client just to see the mess. << is this so ?
    <diana_coman> the way I see it you go for growing it out of a root (the bones) while I go for carving it out of the 3d volume in which it supposedly fits, that's about it all
    <diana_coman> while initially I was thinking indeed of growing it from the root, the bones debacle dissuaded me from the assumption that indeed that is the root (if there's any clear root)
    <diana_coman> mp_en_viaje: what being so? now there's the adapted viewer so no, I don't have to plug into the client anymore, can see it in the viewer, that was the point of it.
    <mp_en_viaje> yes, is the client so bulky it's worth having a separate viewer ?
    <mp_en_viaje> rather the problem is that the client fucks the ooda loop spuriously, innit.
    <diana_coman> mp_en_viaje: ofc
    <mp_en_viaje> idiots.
    <mp_en_viaje> how they manage to always so unerringly cut the branch under foot at the tightest point... it's like god hates people.
    <mp_en_viaje> anyways, i suppose that code is useful because... well... it might even end up backported lol
    <diana_coman> and you know, it's insane to go through the whole "load the world" just to see a test char ffs; not to mention that no matter how you put it, the viewer is ...tiny.
    <diana_coman> anyways, this doesn't mean that I don't intend to see the models in eulora too; it's just that it's really wasting time to fire up the client for each and every viewing of whatever attempt, pretty much.
    <mp_en_viaje> yeah
    <mp_en_viaje> anyways, as you say, pretty good news.
    <diana_coman> it dawns on me that my complaint re "wasting time to fire up the client" is terribly funny now given how it's anyway the client that starts 10x faster than the deployed client since no preloads and all that, lolz
    <mp_en_viaje> yes well :D
    <diana_coman> but for futzing with gen-chars, it's still too much.
    <mp_en_viaje> this is how it goes, nothing more intolerant of the slowness of fast performance than the same spirit that turned mere performance into fast performance in the first place.
    <diana_coman> lol, I can see it, yes.
    <mp_en_viaje> relying on a rather neat I'd say physical analogy of finding the equilibrium of an underlying truss structure << right.
    <mp_en_viaje> I must say that I really like the idea of a generator that effectively looks for an equilibrium solution for a set of displacement functions that are let loose << i think for the record that this intuition is sound. what exactly would be the meaning of such a thing in the context is a little vague yet, but otherwise i don't expect there's alternative approaches.
    <mp_en_viaje> that blum quote in footnote 6 is fucking beautiful.
    <diana_coman> it is, isn't it? and coming after a boatload of crap, it literally made my day, that Blum paper.
    <mp_en_viaje> i see it.
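The bone representation hashed out above — each bone storing a translation plus a rotation quaternion *relative to its parent*, with the absolute (model-space) placement obtained by composing the chain from the root down — can be sketched as follows. This is an illustration of the format's logic only, not Cal3d's actual API; all names are made up for the example:

```python
from dataclasses import dataclass

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (x, y, z, w)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def quat_rotate(q, v):
    # rotate vector v by unit quaternion q: q * (v, 0) * conj(q)
    r = quat_mul(quat_mul(q, (v[0], v[1], v[2], 0.0)),
                 (-q[0], -q[1], -q[2], q[3]))
    return (r[0], r[1], r[2])

@dataclass
class Bone:
    translation: tuple                      # relative to the parent bone
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)  # relative quaternion (x, y, z, w)
    parent: "Bone" = None                   # None marks the root bone

def absolute_rotation(bone):
    if bone.parent is None:
        return bone.rotation
    return quat_mul(absolute_rotation(bone.parent), bone.rotation)

def absolute_position(bone):
    # compose the chain of relative transforms from the root down
    if bone.parent is None:
        return bone.translation
    p = absolute_position(bone.parent)
    t = quat_rotate(absolute_rotation(bone.parent), bone.translation)
    return (p[0] + t[0], p[1] + t[1], p[2] + t[2])

# the example from the log: root at (10,10,10), identity rotations,
# so each child simply adds its relative translation to its parent's
root = Bone((10.0, 10.0, 10.0))
second = Bone((1.0, 1.0, 1.0), parent=root)
third = Bone((1.0, 1.0, 1.0), parent=second)
print(absolute_position(third))   # (12.0, 12.0, 12.0)
```

The other piece mentioned in the log — the stored translation+rotation that brings a vertex into a bone's local space — is then just the inverse of this cumulative root-to-bone transform, which the format keeps precomputed rather than deriving at load time.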

  2. Diana Coman says:

    And the discussion of the next step here, recovered from the #eulora logs, for future reference:

    <mp_en_viaje> http://logs.ossasepia.com/log/eulora/2020-03-09#1002514 << word.
    <ossabot> Logged on 2020-03-09 11:21:13 diana_coman: mp_en_viaje: re [http://logs.ossasepia.com/log/eulora/2020-03-04#1002438][the easy question], meanwhile I got at least some idea as to what might be involved/the general steps; and as a result, I have this niggling thought that it's possible that either I don't quite get what you have in mind or you don't quite get what cs/cal3d can actually do (and esp what they can't /don't do); so I'd rather discuss this to make
    <mp_en_viaje> http://logs.ossasepia.com/log/eulora/2020-03-09#1002522 << so basically there's a missing part, there's no straightforward manner of going from "now take your clothes off" to "nude.jpg", and certainly not one that gimp can use.
    <ossabot> Logged on 2020-03-09 11:36:10 diana_coman: ... approximate the desired surface); for both steps, there are various constraints wrt to what makes for a "good mesh" - and good there means that rendering systems ala cs/cal3d will not choke on it, not that "it looks great" or anything of the sort.
    <mp_en_viaje> gimp can display nude.jpg, if it exists already ; eulora can display a ~pre-rasterized~ model, if it was pre-rasterized already.
    <diana_coman> mp_en_viaje: on the bright side, *meanwhile*, I might have found something that might just about...work!
    <mp_en_viaje> aha!
    <mp_en_viaje> i was about to say, this space is long but narrow, there's just not that many possible approaches
    <diana_coman> having looked at a mountain of ~everything (and most of it not at all appealing), in the end it seems I have unearthed what I'd call just about the simplest thing possible (some ~1990 vintage if I got that straight, possibly earlier) and I have just tested this morning a polygonization of an implicit surface
    <mp_en_viaje> http://logs.ossasepia.com/log/eulora/2020-03-09#1002523 <, right! here's on sheer display my general idea of useful contribution : cutting through all that pile of alf-like crap to make a working summary of a field.
    <ossabot> Logged on 2020-03-09 11:37:40 diana_coman: on the bright side, the real options there seem relatively limited in that despite a huge pile of papers and algorithms, there are only a few core things that seem to work in practice; on the less bright side, it's precisely this "meshing" that is considered/acknowledged as "the hardest part".
    <mp_en_viaje> diana_coman, and did it yield ?
    <diana_coman> with a bit of help from, ahem, awk to spit the cal3d mesh format, I can at least say that I got the resulting thing at least visible in cs+cal3d so I am hopeful that ...yes, it can be made to work, hopefully!
    <diana_coman> so far I don't have the texture for it and this might be a bit of further trouble in that I have to produce the full texture possibly (or otherwise CS will tile it stupidly)
    <mp_en_viaje> lol
    <diana_coman> the texture part is not yet fully clear, I'd still need to dig into it
    <mp_en_viaje> diana_coman, alrighty, am i looking forward to an illustrated article ?
    <mp_en_viaje> i find those easiest to read
    <diana_coman> but I was quite relieved to have actually something working that goes from an implicit surface (so we can bloody well send an equation as all "here's graphics" ffs) to the rasterized list of vertices + triangles and moreover cs+cal3d eats it up without complaint
    <diana_coman> basically so far it's better than cs's own "generators" huh
    <mp_en_viaje> this indeed is a first.
    <mp_en_viaje> it's rather like moving ancient dragon bones in their articulations, the sort of thing computers rarely see.
    <diana_coman> part and parcel of this latest round of exploration of the domain, I think I have figured out as well WHY do they have that many bones and whatnots and no, it's nothing to do with rope models or whatever
    <mp_en_viaje> oh ?
    <diana_coman> yeah, it's most likely because all the "modeling tools" ala blender/studio max/whatevers use some algorithms to extract those and yeah, there aren't that many
    <diana_coman> marching cubes (+variations) generally and that has exactly such known effect
    <diana_coman> now, even my tiny ancient dragon bones (because yes, it feels exactly like that) produce quite a list of vertices + triangles but still ...fewer!
    <diana_coman> re article with illustrations, I was hoping to get some sort of texture on it too at first and/or to put some meshes on a "skeleton" - so far since it's literally this morning I got this going, it was just one torus and then (to make sure I'm not dreaming here), a sort of blob
    <mp_en_viaje> this is the sort of thing that makes one's effort in lifting the lid justifiable.
    <diana_coman> but yeah, I took already some screenshots :D
    <mp_en_viaje> that's fine, put a texture on. not everything's gotta be nude now
    <diana_coman> btw, the code is < 1k loc including some alternative that I'm not even using (because it has some trouble and I haven't sorted it out + it's unclear we even need to sort it out really)
    <mp_en_viaje> http://logs.ossasepia.com/log/eulora/2020-03-09#1002530 << this is absolutely so.
    <ossabot> Logged on 2020-03-09 16:20:11 diana_coman: on further reflection, it seems to me that the meshing+tesselation is pretty much the only real option; because building it out of existing primitives doesn't really deliver much benefit - the little that might be won on no-need-to-mesh-or-tesselate is lost anyway on trouble at joints/connecting points for the most obvious bit (likely not even the only one).
    <mp_en_viaje> diana_coman, ha! perfect.
    <diana_coman> I guess if I keep reading papers at this rate, I'll be able to tell the year of a paper by its content - there are some differences that start popping out already.
    <mp_en_viaje> it's usually how this goes. in archeology we call it "cultures", but basically when idiots come up with religious nonsense it goes like empress' hairdos, by fashion.
    <diana_coman> apparently in science-fashion like in clothes-fashion I stick to my fashion until the world comes back to it (or not, for all the difference it makes otherwise)
    <mp_en_viaje> lol
    <mp_en_viaje> vintage ftw. anyways, good.
    <diana_coman> so I gather we are in sync here and it's ok to proceed with this aka work to put it for test/example on a skeleton + a texture and then we see further
    <diana_coman> in principle I should probably write up the whole pile of papers too but I admit I'm so meh on them that I can't even say I'll do it in more detail than "here's the one working thing out of ~everything", hm.
    <mp_en_viaje> yep.
    <mp_en_viaje> i don't expect you have to by-name review the coelenterates.
    <mp_en_viaje> if they were worthy of a name we'd know it like we know each other's.
    <diana_coman> all right; I shall therefore play with a few more implicit surfaces, heh.
    <mp_en_viaje> cool
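The implicit-surface idea discussed above can be illustrated minimally: define the shape as F(x,y,z) = 0 (here a torus, negative inside and positive outside) and sample a grid to find the cells the surface passes through. A real polygonizer (e.g. the marching-cubes-style methods mentioned in the log) would then emit vertices and triangles for exactly those cells; that step is left out of this sketch, and all constants and names here are illustrative:

```python
import itertools
import math

def torus(x, y, z, R=1.0, r=0.35):
    # implicit torus: negative inside, positive outside, zero on the surface
    return (math.sqrt(x*x + y*y) - R)**2 + z*z - r*r

def crossed_cells(f, lo=-1.5, hi=1.5, n=20):
    # count grid cells whose 8 corner values straddle F = 0, i.e. cells
    # the surface passes through; a polygonizer would triangulate these
    h = (hi - lo) / n
    count = 0
    for i, j, k in itertools.product(range(n), repeat=3):
        corners = [f(lo + (i + di) * h, lo + (j + dj) * h, lo + (k + dk) * h)
                   for di, dj, dk in itertools.product((0, 1), repeat=3)]
        if min(corners) < 0.0 < max(corners):
            count += 1
    return count

print(torus(1.0, 0.0, 0.0) < 0)   # True: a point on the ring's centreline is inside
print(crossed_cells(torus) > 0)   # True: the sampled grid does cross the surface
```

Note that the output of such a pass is precisely the kind of "already produced" data the bone discussion complained about: a flat list of geometry with no generating function left in it, which is why the equation itself is the compact thing worth sending around.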

Leave a Reply