The Shady Business with Shaders in CS (Notes on Graphics in Eulora, IX)



December 29th, 2019 by Diana Coman

Over this past week I've been going as planned through CS code and docs and everything available to figure out in more concrete detail just how exactly those fabled "shaders" are implemented and used by the graphics engine - this 3rd big tangle of clusterfuck code that fell in my lap for disentangling too, it would seem. And boy did I find everything and anything in there, to the extent that here I find myself stopping one week in rather than two, since I already need to set it all out1 in clear so a decision can then be made as to the way forwards.

The initial starting point was the Terrain2 plugin that handles terrain generation using a heightmap and a palette of materials. This plugin expects a shader with a specific name to exist, since it relies on it for what it calls "splatting" - basically mixing seamlessly the different materials of the terrain where they change from one to another close enough so that you care about it (at a distance it just uses whatever is defined as "base material"). The trouble with this "expectation" is that it's one of those blind dependencies that are the mark of CS: the Terrain plugin (being a plugin, ffs!) is supposedly independent of the rest, right? Except, as you can see, it requires this darned shader and that in turn requires all sorts of snippets and other shaders and those require their own loaders and xml and all that jazz that turns out in practice to be thousands and thousands of lines of code. So sobered by this display of true independence and revolutionary zeal2, I went deeper into CS and looked in turns at how a shader is created in the first place (spoiler: ONLY through a spiderweb of "loading" from XML) and then how it's used: in bits and pieces, depending on which plugin is chosen and what hardware+software is used. Read on for the gruesome details!

The look at how a shader is created was triggered by this idea that it's perhaps simply a matter of extracting from the loaders/shader creators whatever they do and packaging that so it can be done whenever needed. This idea died quickly upon a look at the "loaders" because "what they do" is a lot of obfuscation really: since there are conditions and loops and inclusion of "snippets" from other files and fallbacks and all sorts possible and indeed present in the "xml definition" of a shader, the loading part is an equally complex affair of 7.5k+ LOC where various bits and pieces (e.g. techniques) are collected and then evaluated at a later stage, since it can't even be known upfront which one is useful for what or where or whether at all. I can't see how it's any use to aim to disentangle this mess directly really.

For added lulz of the raise-hopes-then-bash-them-on-the-rocks type, I even managed to find in one of the "shader wrappers" a method that was supposedly meant for "loading a shader from source code", except it contains nothing other than a comment: "Loading from source code is not supported". And I can easily tell you exactly why it's "not supported" - because it can't fuckingly be sanely supported given all that mess, no. No wonder it's "not supported".

Next I turned to investigating the way shaders are stored and used throughout the engine so that I can perhaps figure out if it's possible to either rip them out entirely and replace with a saner approach or at least create my own "shaders" directly on the fly in the code and register those with the engine, effectively bypassing the whole xml-loaders insanity.

In summary, shaders as currently implemented in CS are considered "compiled" parts, as they are indeed obtained solely through a sort of ad-hoc XML-language interpretation. They (and especially the shader variables with all the added complexity of contexts and stack and whatnot) are used throughout the engine: from them being part and parcel of a material definition to the rendering loop explicitly calling the shaders registered for each mesh. As such, ripping the current shader-setup entirely out doesn't seem very doable without a full overhaul of the engine itself.

There is NO current way to modify an existing shader on the fly and the "shader compiler" class itself (the one producing shaders) pulls in the whole SCF mess with it, so that it's not even all that easy or straightforward to try to make/adapt one directly in code. The *only* currently working way to create a shader and load it into the engine is by using a loader that will trigger god-knows-how-many-levels-deep of xml tokenizing, parsing, evaluating and so on. There are nominally two types of shader loaders, namely for "xmlshader" and "weavershader", but in practice the xmlshader is the basis, as the weavershader simply adds a few more bells and whistles on top, otherwise relying on the xmlshader for the heavy work. The shaders themselves are not really concerned solely with "defining a class of surface" but pack inside additional concerns such as providing alternatives for different types of hardware or software (this is done via providing several ways of doing the same thing), supporting various LOD (levels of detail), reusing other supposedly-existing shaders and snippets, supporting (or not) different rendering plugins (currently OpenGL and software rendering).

Given the above rather troublesome finds, I think it's worth going into a bit more detail on the theory of shaders in CS, since it might help to figure out what, if anything, of it is really useful as such (and for what exactly). From a theoretical point of view, shaders in CS sound reasonable enough: each shader is meant to provide simply all the rendering steps and tumbles (or other workarounds) needed to obtain on screen a class of surfaces: e.g. you can write a shader for water-like surfaces, another one for cloth-like surfaces and yet another one for terrain surfaces. There is even a reasonable enough pipeline for this since each shader is meant to contain a clear list of parts3:

  1. Metadata
  2. Shader Variables
  3. Technique(s)
  4. Pass(es) per technique
  5. Mapping(s) per pass
  6. Vertex processor per pass (if any)
  7. Vertex program per pass (if any)
  8. Fragment program per pass (if any)

The code investigation revealed that metadata is more of an overplanning in there - currently it consists of "maximum number of lights this shader can handle". And I can't even quite say why there is a maximum number of lights or why exactly a shader adds lights to start with - since when is a surface to be defined by ...added lights? But nevertheless, it is: apparently quite a few "effects" are achieved by adding whatever number of lights in various passes, for all the logic that has.

Shader variables are one of the big things in there really: they are meant to work as parameters for each shader so that conceivably you can reuse a shader to create similar but not identical surfaces. This is also part of where the clusterfuck starts: while each shader can in principle define its own custom variables, in practice there is a set of "predefined" variables (such as "diffuse tex" or "specular tex") that are assigned different values in different "contexts". So at rendering time, there is a whole *stack* of shader variables collected from various places and good luck knowing exactly which value any given variable will have, where and when. A lick of sanity there is in the form of a hardcoded priority of different contexts so that - according to the docs - the value of any given shader variable will be taken from the highest priority context where it's found, with contexts provided in order from lowest to highest priority by: shader manager (basically the "global" scope for shader variables), current light, render step, render mesh, mesh wrapper, shader, material. In practice, the result of all this is that everything ends up relying on something else and as a result entirely not separated, despite the whole elaborate pretense to the contrary. Unsurprising by now, I know.

The shader manager, for all its pompous name, turns out at a closer look to provide only a time variable that is supposedly useful for shaders that aim to animate the surface - why an animated surface would be needed, I can't quite tell. The render step and mesh are ..."special". Just in case the rest seemed too sane for your taste, here we have some specialness added, since those are not really parameters for shaders - more like some information sources that got stuffed into those "shader variables" since why not. Apparently one can get in here geometry from some meshes (vertices), splatting masks from terrains or even transforms and textures or anything under the sun really. The last three "contexts" (mesh wrapper, shader and material) are the ones most often in use - theoretically they would allow one to parametrize the surface on a per mesh basis (using the mesh wrapper), on a per material basis (material context) or otherwise to provide default values in the shader itself. So much support for endless customization that there's not much room left for *easily* making a darned thing to customize in the first place.

The techniques of a shader are meant to be the actual meat of the shader aka the work done to make that surface look as desired. Customization hits here in the form of several techniques meant as *alternatives*: different ways to do the same thing so that no hardware&software configuration is left behind. To get the full extent of what those good intentions amount to in here, multiply the number of techniques by the several *passes* (aka multiple renderings of the same darned surface) per technique and further add to it the vertex+fragment processors/programs that come in different flavours depending on the underlying plugin in use (OpenGL or software rendering atm).

The working of a shader in practice, as found out from digging through CS sources, is meant to be a loop of "passes" where each pass consists of 5 steps:

  1. Activation
  2. Setup
  3. Drawing the mesh
  4. Teardown
  5. Deactivation

If you frown at the 5 steps of which only one does the actual drawing, it's ok: there are even more similar sets of substeps in almost each of those 5 steps. In any case, the activation+setup mainly collect and set in place all the variables and values that are available from the various contexts at that time. This can easily be a whole mess because "variables" can include any number of anything really, from textures and lights to geometry to additional rendering buffers. The drawing of the mesh is handled, from what I could tell, mainly by the OpenGL plugin (or the software renderer if you run without OpenGL) and as such the code is dependent on that. The teardown+deactivation do the whole dance in reverse, as expected. Efficiency at its best, wouldn't you say?

Looking at the classes used for the various components of a shader (shader variables, passes, techniques, programs), the shader variables are the most reasonable in that at least they don't bring in a whole lot besides what is anyway in there due to materials for instance. The techniques themselves could perhaps be ripped out and repackaged as their internals seem to be mainly direct manipulations of the g3d class (graphics 3d). The main issue though is with respect to the iShader interface itself because it mandates a "QueryObject" that all of a sudden pulls in the whole SCF (basically it forces iShader to be a plugin, just like that), holy shit! And otherwise there remain the shader passes, the programs and the question of what exactly, if anything at all, should even be done by a shader: which parts are best handled by the material/texture itself, which parts should be done by a "shader" and how does one even make this choice?

As far as I can tell currently, there are a few possible next steps perhaps:

  1. Aim to modify one of the existing "shader compilers" to allow creation from code. This would seem at first sight to require the fewest changes to CS itself (though it does inevitably mean that I'd be patching CS itself, with all that that brings with it, yes) but it comes with some big unknowns, not least of which is: how exactly should a shader be specified in a sane way already (since importing the whole conditions and whatnots is not sane)?
  2. Aim to directly create my own version of "iShader" - this means I'd end up writing now a new "CS plugin", supposedly less invasive than option 1 above but carrying the SCF burden at the very least. Ripping SCF out of iShader as a first step is likely to turn into all sorts of additional trouble and I can't say that I see the justification for going there even if aiming for this direction. The advantage I can see of this option compared to 1 above - once the SCF is somehow handled, ugh - would be that it's both more direct (ie I should then be able to create and register the shader as/when desired) and perhaps more amenable to a step by step discovery of just what should go into a shader to start with.
  3. Investigate what exactly and how much can be done via texture+materials alone. This is quite iffy because it would probably require quite the dive into Blender now of all things (and the exporter that does not yet exist/work) but I mention it here for completeness.

A big handicap currently here is the lack of clear knowledge as to which parts currently in "shaders" are really needed and to what extent. Basically a lack of practical experience with graphics really, to know how much can be achieved through textures+materials only and how much/which parts require the shader on top to end up as anything reasonable. The current set of shaders is such a spaghetti that I don't even think it's worth attempting some direct translation - although I did look through it to start getting some sort of familiarity with how things may be done and to what extent. It all seems so far to be more of a trial-and-error anyway and the code provides further encouragement in this direction, e.g. "// If you understand this initially (or at all) I salute you" and "// No idea why we have to invert the Z at all, but reflection is wrong without it"4


  1. And for my own future reference, so I'm including even "known" parts in here; let them be, as I never ever regretted having things written out in clear in a single place. 

  2. Can't resist humming it too since it's still better laughing than going nuts: aquí se queda la clara, la entrañable transparencia de tu querida presencia... ("here remains the clear, the endearing transparency of your dear presence...") 

  3. Section 4.12.1 in the CS manual gives a reasonable although purely theoretical description for those. 

  4. Both quotes are from cs/plugins/video/render3d/shader/shaderplugins/glshader_fixed/glshader_fvp.cpp aka the OpenGL plugin that supposedly does the vertex programs parts that may be specified in a shader. 


4 Responses to “The Shady Business with Shaders in CS (Notes on Graphics in Eulora, IX)”

  1. > There is NO current way to modify an existing shader on the fly

    I do not think this is really desirable or something we actually wish to do. In the indeed rare cases where some sort of modification of behaviour from the player pov is desirable, keeping two and switching from using one to using the other is definitely the way to go ; there's never going to be some kind of open ended, vastly plurious shader modification contemplated.

    > SCF mess

    What's scf stand for here ?

    Anyways : the most concerning part would appear to be the hardware support implemented at such a high level as the fucking shaders ; self-evidently the engine should have long ago decided how it's going to deal with whatever engine it's living atop. What exact hardware dependency is even contemplated here, I don't get it ?

    More generally : xml shaders were rather a sort of period fashion, like say "object-oriented programming" or whatever web2.0 nonsense ; I'm not about to either send you on a spelunking expedition to find "how shaders should really be" nor am I going to order the fixing of the forms of dead fashions. In fact, we have arrived at a point here where we actually need input from the active and lively community of gfx developers for eulora -- and the fact that the current crop of morons floating about idly in the soup have so far failed to bring to life that active and lively community doesn't change anything.

    So how about this bit waits a while, and you go do some server stuff ? This will only progress once we have some actual shader producers to poll ; and yes at that time it'll need some summarizing/cleaning/fixing etc -- but at that time.

    Which, of course, begs the question of why are we even bothering to include the morons in the first place -- god knows keeping #trilema open for all comers for years hasn't resulted in some greatness of contribution or anything like that. Mayhap the solution is to simply define graphics in general from first principles and eschew the idiocy of "creativity" by "people themselves" fresh offa dat informational superhighway truck. Why do I have to make a gui for complicated parametrized definitions of appearance ? So that Joe Schmuck can sorta-kinda fiddle one bit in ten billion and then importantly asciilifeform all over himself as to his grandiose contributions ? I can fiddle bits better than he ever can through the usual process, wut the fuck is he needed for.

  2. Diana Coman says:

    SCF stands for "shared class facility" and is this horrible "abstraction" that CrystalSpace came with, supposedly for flexibility, practically for a lot of harmful complexity.

    The hardware dependency goes along the lines of "if your GPU does not support more than x texture units, then we'll use more passes with less than x texture units each". Essentially workarounds for various "what if" scenarios of a rather dubious nature (how exactly did they choose which "what if" are "relevant" anyway).

    Some spec for the graphics part was deemed needed to flesh out more of that data hierarchy model for the communication protocol to be able to send everything to the client. It does sound way saner to define it from first principles indeed. I'm not sure though what you see exactly as next steps that I should take here.

  3. Diana Coman says:

    The discussion from #trilema logs:

    mircea_popescu: diana_coman, let's restate this, so currently work on nailing down the comms protocol is stalled on a definitive universal data model, which is stalled on graphical use in the client, that about it ?
    diana_coman: mircea_popescu: yes.
    mircea_popescu: were there more chunks you could be working on ?
    diana_coman: mircea_popescu: hm, thinking now about it, I think there might be, namely the more directly game-relevant parts that are also not yet fully spec (eg character, structure/item etc); I need to go over it in this light, put the graphics to the side for now and see from there.
    mircea_popescu: there's two portions to the problem.
    mircea_popescu: one is that a coupla years of diligent work have gotten eulora to this position where it actually outgrew the intellectual basis of the community such as it is in pretty much every respect. many things nominally upstream would be inputs for this decision spot but are sadly absent, from "well... what about scheme then ?" coincidentally brought up avbove to the self-obvious "how did the western world produce 0 graphic art
    mircea_popescu: ists worth the mention" discussed in the article and so on.
    mircea_popescu: so i don't even know on what basis i'm supposed to decide what, here.
    mircea_popescu: the other part is that, well... why exactly is a texture / skin / shader / model / anything else in compgfx even valuable in the first place ?
    mircea_popescu: explain this to me, can you ? why are these valuable ?
    diana_coman: oh boy, I'm possibly the last person to ask for an argument pro "valuable" on those.
    mircea_popescu: perhaps it's why you're asked then.
    diana_coman: hm; I keep thinking that "perhaps I don't know enough about them to find the value" ; as I see them now, they are more accumulations of trial and error/overfitting/tinkering though so they seem of very little - or indeed negative - value tbh.
    diana_coman: the sort of accumulating accidental complexity, to link it in with trinque's thread re OS.
    mircea_popescu: bear with me here, because this is going to be lengthy.
    mircea_popescu: so, the sukhoi 34 is a pretty cool plane ; in any case the basis of the russian defeat in the field of usg's pretense to air participation. now suppose for the sake of argument someone comes offering to sell one ; and suppose further that you're to advise on the purchase by providing a single scalar value : what the item is worth iyo.
    mircea_popescu: what value do you spit out ?
    diana_coman: ugh, this seems out of my advising capabilities really.
    mircea_popescu: why ?
    diana_coman: because "what this item is worth" depends on more than just the item in itself and I have no idea even on the item really, let alone the context.
    mircea_popescu: well, there's one overarching bit of context here : the offer is to sell ~ONE~
    diana_coman: if you are asking "what is this item worth to *you*", well, not much; the fuck do I do with this sukhoi.
    mircea_popescu: i suspect the item as discussed is actually worth 0, yes, making complete idiots of the management of all the us' "allies".
    mircea_popescu: but let's continue :
    diana_coman: oh huh, only one? dunno, perhaps the idea is to reverse engineer, that might be about the only possible value I can see but it's dubious for all sorts of reasons.
    mircea_popescu: "artificial intelligence", like "flight" or whatever else, is as such and in the abstract a long cherished dream of humanity.
    mircea_popescu: artificial intelligence as actually extant today is, leaving aside all sorta obscure lulz alf used to like referencing about expert systems and nato wargames, and the assorted lulzpile of neural networks and etc from the very dead 70s, and also leaving aside the whole life OF MINSKY, and then sussman, and such others as lost themselves in that whale,
    diana_coman: myeah; I fell in love with it at 17 (when read the promise) and then promptly barfed by 19 (when encountered the full extent of the "practice")
    mircea_popescu: a very simple thing : systems of really really many linear equations with really really numerous variables hidden behind very small parameters.
    mircea_popescu: now, suppose we change out the sukhoi, and chande in instead "an artificial intelligence", specifically, a few TB of w/e worth of mostly 0s, binary values.
    mircea_popescu: if run through the proprietary bundle, it beats any human at go.
    mircea_popescu: otherwise, it... i dunno, it makes a skymap if you print the 1's as stars i guess.
    diana_coman: lolz; and pretty light patterns possibly.
    mircea_popescu: now, what's your scalar in this new context ?
    diana_coman: what, for buying an AI system that is "very powerful" when ran through the proprietary bundle & otherwise unclear what it does?
    diana_coman: rather: unclear what *useful* thing it does
    diana_coman: as above, it's negative value, below that "not much" for the sukhoi.
    mircea_popescu: do you see the argument that "an sukhoi" is worth exactly the same as "an ai", in the sense that they're the ash of a cigar someone else smoked, the marblecake of an anal cavity someone else fucked, and, to quote the quite prophetic mr mel again, what's the point of a program that can't rewrite its own code ?
    diana_coman: that yes, I do; though I am partial to an sukhoi more than to an AI because at least it's a concrete beast I guess.
    mircea_popescu: alright
    mircea_popescu: now let's see here : leaving aside for the sake of discoursive coherence the correct vectorial representation, and making do with a purely catesian approach (aka bitmaps) for the time being, on the expectation that while this rather than that allows much easier quantification, that rather than this doesn't magically pack much more complexity, greeks be damned :
    mircea_popescu: a cube a thousand pixels wide will take, to be naively described point by point, about 3*256 Gb. this means there's about a trillion such cubes. yes ?
    diana_coman: ah, I see; yes.
    mircea_popescu: alright. out of this space of 3bn units, once applying the rule of "looks like a game character" (or "mob" or whatever forumation of "is useful for eulora") there's going to be a few equivalency classes (most visible on say nintendo, bcause most annoying on nintendo, really, the wolf+1 gets blond hair ?)
    mircea_popescu: but mostly, there's going to be empty space. or rather : "crap" is the largest equivalency class.
    diana_coman: ahaha, sure.
    mircea_popescu: now the proposition here is, as i understand it, the following :
    mircea_popescu: so hard and difficult and unapproachable and scary and etcetera is this question of splittign the space, that it is worthwhile to go to all the trouble of farming a bunch of morons, because their crap/noncrap decision is tantamount to fucking holy, and no deployment of anything but honest to god THE dude from big lebowsky can possibly cut it.
    mircea_popescu: so it's worth making a thing like blender, and then making a thing like the inexistent exporter, and then making a thing like cs, and all the xml wrapping and etcetera,
    mircea_popescu: just so these idiots can alter a pixel here or there.
    diana_coman saw this coming
    mircea_popescu: except, of course, they don't. and when hanbot TRIED to do it, she died in a flaming mess of deeply inadequate tools. she almost made a slime. ALMOST.
    mircea_popescu: hey hanbot how did that go ?
    hanbot: i had a lot of fun modeling a slime guy in blender, then discovered getting it to a usable form via "baking textures" etc 5 or so layers deep was ... i guess i'd call it beyond my attention span.
    mircea_popescu: so you're saying the unusable tools part was not the making but the exporting, as it were ?
    hanbot: yep
    mircea_popescu: where was the article btw, got a link ?
    hanbot: nah, i committed a major sin there, possibly i'll have to pay by starting over and properly documenting.
    mircea_popescu: diana_coman, the problem with the foregoing is that i can kinda ballpark the COST of maintaining all the infrastructure for monkeys to typewrite at their leisure. so can you, i expect.
    mircea_popescu: is it fucking worth it ?
    mircea_popescu: hanbot, why the fuck didn't you write it out ? i remember reading SOMETHING.
    diana_coman: mircea_popescu: I had the article on the toolchain to help her as I knew she was having a go at it based on some discussion in #eulora I think; maybe that?
    mircea_popescu: i recal lpictures of the slime guy in it!
    diana_coman: re maintaining the infrastructure, I don't see how it can be really justified as such anyway given that uhm, those doing something with it are anyway essentially non-existent currently so can't maintain so they "do" anything; and if it is to *also* build them up, then well, they'll fit whatever infrastructure is provided, not the other way around.
    diana_coman: and no, I wouldn't start making another blender, ugh.
    hanbot: yeah there's some #eulora refs like http://logs.ossasepia.com/log/eulora/2017-02-05#940257, i sincerely dun recall properly documenting tho', probably part and parcel of the overwhelmed at step 10 of 10 thing. mea culpa, i oughta know better.
    ossabot: (eulora) 2017-02-05 hanbot: fwiw diana_coman i have a blender-made animated character guy finished and am gonna try out the crystal space exporter thingy as per http://www.crystalspace3d.org/docs/online/manual/Blender.html , prolly tomorrow. if you have any tips/pointers/etc in the meantime pls to schpiel at me
    mircea_popescu: hanbot, yeah, don't do that again.
    mircea_popescu: diana_coman, the way this coming to a head is working out in my head is as follows : you have practically speaking the option to either a) go trawl the entire internet, drag out ~everything~ that's conceivably useful (submit expenses report for stuff that's behind reasonable paywalls i guess) and then we systematize the pile ; or else write a possible-lifeform-generation machine and see what you want it to save.
    mircea_popescu: or whatever, how the data should be kept.
    mircea_popescu: we actually did something somewhat like a test run for a if memory serves, hence http://trilema.com/2016/eulora-012/ trials yes
    diana_coman: if you mean "art products" by that everything that's conceivably useful then yes, indeed, I was thinking precisely of that rather sad attempt, hm; the idea -naive!- behind that to my mind was more to find perhaps as a result *people* doing something useful in that direction, hm.
    mircea_popescu: neither of these is shaders specifically, because obviously neither can be shaders specifically, because how
    diana_coman: the thing with the AI/automated generation was initially that "can't beat human at this sort of task" iirc.
    mircea_popescu: diana_coman, sadly i do not currently believe anyone whose name is going to be worth knowing in ten years is older than about fifteen today. with maybe a dozen exceptions.
    mircea_popescu: you meet more 30yos, you'll meet more alfs in the best case. i am so fucking uncurious to be meeting anymore alfs...
    diana_coman: myeah; that basically says "you'll have to build up the people too so..."
    mircea_popescu: it does. if there's a meteor land tomorrow wipe out "humanity" there'll be exactly nothing lost. except for the stench it wouldn't even be noticeable anything occured.
    diana_coman: http://logs.ossasepia.com/log/trilema/2019-12-30#1956451 - pretty much b then, for so much "choice" , huh.
    ossabot: Logged on 2019-12-30 12:41:13 mircea_popescu: diana_coman, the way this coming to a head is working out in my head is as follows : you have practically speaking the option to either a) go trawl the entire internet, drag out ~everything~ that's conceivably useful (submit expenses report for stuff that's behind reasonable paywalls i guess) and then we systematize the pile ; or else write a possible-lifeform-generation machine and see what you want it to save.
    mircea_popescu: but this aside : our position is remarkably vulnerable, because of the countless ways in which people are morons, from the pure bobeckistan of blender (and let's not forget how python got indexed in the first place) to a very thick stripperweb-style tarabostes idi
    mircea_popescu: ocy.
    mircea_popescu: do you even recall how many "artists" i commissioned to date whose idea of "commission" and "artist" was... take some money and RUN!!!!!
    mircea_popescu: gotta be at least a dozen, including both the expert who made one splash screen and the contest winning kid on tardstalk who made the other.
    diana_coman: I didn't keep track but I do recall at least a few off the top of my head, yes.
    mircea_popescu: anyway, i'm not about to go paying idiots a whole lotta money so they're a little bit less idiotic, what the fuck.
    mircea_popescu: now, the risk with choosing b is that it can readily turn into the grave ; in the hands of any being an engineer that'd be exactly the necessary outcome. it can take forever, yes ?
    mircea_popescu: usure you wouldn't rather choose a ?
    diana_coman: certainly; and part of why I'm not all-that-enthusiastic about it; otoh my current experience with ~equivalent of a make me also-not-that-enthusiastic; so I'm looking at "choice".
    diana_coman: certainly I meant for "it can take forever"
    mircea_popescu: the advantage of a truly republican new year's celebration
    mircea_popescu: you get to sit with your drink and mull the depths.
    mircea_popescu: anwyay -- i kinda do want a done anyway, so if any of the unemployed in the audience wants to hop to, do talk to me about it.
    diana_coman: mircea_popescu: what about cs/ps anyway? ie graphics as "how & what to store" is one thing; there is still at all times some sort of "how to get from data to pretty pics/animations/etc"
    mircea_popescu: what are you asking specifically ?
    diana_coman: what are the steps going to be re "graphics on client side"?
    mircea_popescu: both a and b above produce a pile ; it's piped into extant eulora client ; done.
    mircea_popescu: (watch that at some later point, all the assholes who were "too busy" to help out when it mattered will whine about why bitcoin is so expensive now, and their work not worth jack once we didn't need it anymore so they finally deigned to doing some)
    diana_coman: ok; worth noting that the "piped into" part might itself be quite involved / will depend on what's exactly in that pile from a/b.
    mircea_popescu: but that's the idea : what i mean by " we systematize the pile" is that first we make a large pile, then we select from it to make a smaller but correctly structured pile, and then we use the pile to fix cs where needed (and also we convert some portion of the larger pile left out, that's worth doing)
    diana_coman: at any rate, I'll take some days to (mull & read)*
    mircea_popescu: (mull & read)* tingles my malloc sense. is that a dangling dereference ?
    diana_coman: ahaha, no, it's just several passes.
    mircea_popescu: anyway. you happy ?
    diana_coman: happiness is such a relative thing, lol.
    mircea_popescu: lol. i mean it as the term of art, seen in "vincent ? are we happy ?"
    diana_coman: lolz; I was about to say that compared to further trying to get sense out of cs entrails, just about ~anything else qualifies for happy.
    mircea_popescu: well good for you then ; ima go back to trying to dig myself out from under all the publishings omfg.

  4. [...] since it's defaults and pragmatism all around this time, I've set the prototype client to just load the main shaders anyway since the rest of the code can't really do anything remotely reasonable without them. Let them be [...]
