Over this past week I've been going, as planned, through CS code and docs and everything else available, to figure out in more concrete detail just how exactly those fabled "shaders" are implemented and used by the graphics engine - this 3rd big tangle of clusterfuck code that fell in my lap for disentangling too, it would seem. And boy did I find everything and anything in there, to the extent that here I am, stopping one week in rather than two, since I already need to set it all out[1] in clear so that a decision can then be made as to the way forward.
The initial starting point was the Terrain2 plugin that handles terrain generation using a heightmap and a palette of materials. This plugin expects a shader with a specific name to exist, since it relies on it for what it calls "splatting" - basically mixing seamlessly the different materials of the terrain where they change from one to another, close enough that you care about it (at a distance it just uses whatever is defined as the "base material"). The trouble with this "expectation" is that it's one of those blind dependencies that are the mark of CS: the Terrain plugin (being a plugin, ffs!) is supposedly independent of the rest, right? Except, as you can see, it requires this darned shader and that in turn requires all sorts of snippets and other shaders and those require their own loaders and xml and all that jazz that turns out in practice to be thousands and thousands of lines of code. So, sobered by this display of true independence and revolutionary zeal[2], I went deeper into CS and looked in turn at how a shader is created in the first place (spoiler: ONLY through a spiderweb of "loading" from XML) and then at how it's used: in bits and pieces, depending on which plugin is chosen and what hardware+software is used. Read on for the gruesome details!
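For a concrete idea of what this "splatting" amounts to, here's a minimal sketch of the blending rule as I understand it so far - nothing taken from the actual CS code, all names made up purely for illustration:

```cpp
// Splatting sketch only - NOT CrystalSpace code; names and types are made up.
// Close by, materials are blended per point according to per-material weights
// (the "splat mask"); beyond a cutoff distance only the base material is used.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

Color SplatBlend(const std::vector<Color>& materialSamples,
                 const std::vector<float>& splatWeights,
                 const Color& baseSample,
                 float distance, float splatCutoff)
{
    if (distance > splatCutoff)
        return baseSample;                    // too far to care: base material only
    Color out{0.0f, 0.0f, 0.0f};
    float total = 0.0f;
    for (std::size_t i = 0; i < materialSamples.size(); ++i) {
        out.r += materialSamples[i].r * splatWeights[i];
        out.g += materialSamples[i].g * splatWeights[i];
        out.b += materialSamples[i].b * splatWeights[i];
        total += splatWeights[i];
    }
    if (total <= 0.0f)
        return baseSample;                    // no splat data here either
    out.r /= total; out.g /= total; out.b /= total;
    return out;
}
```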
The look at how a shader is created was triggered by the idea that it's perhaps simply a matter of extracting from the loaders/shader creators whatever they do and packaging that so it can be done whenever needed. This idea died quickly upon a look at the "loaders", because "what they do" is really a lot of obfuscation: since conditions and loops and inclusion of "snippets" from other files and fallbacks and all sorts are possible and indeed present in the "xml definition" of a shader, the loading part is an equally complex affair of 7.5k+ LOC where various bits and pieces (e.g. techniques) are collected and then evaluated at a later stage, since it can't even be known upfront which one is useful for what or where or whether at all. I can't see how it's any use to try to disentangle this mess directly, really.
For added lulz of the raise-hopes-then-bash-them-on-the-rocks type, I even managed to find in one of the "shader wrappers" a method that was supposedly meant for "loading a shader from source code", except it contains nothing other than a comment: "Loading from source code is not supported". And I can easily tell you exactly why it's "not supported" - because it can't fucking be sanely supported given all that mess, no. No wonder it's "not supported".
Next I turned to investigating the way shaders are stored and used throughout the engine, so that I could perhaps figure out whether it's possible to either rip them out entirely and replace them with a saner approach, or at least create my own "shaders" directly on the fly in the code and register those with the engine, effectively bypassing the whole xml-loaders insanity.
In summary, shaders as currently implemented in CS are considered "compiled" parts, as they are indeed obtained solely through a sort of ad-hoc XML-language interpretation. They (and especially the shader variables, with all the added complexity of contexts and stack and whatnot) are used throughout the engine: from being part and parcel of a material definition to the rendering loop explicitly calling the shaders registered for each mesh. As such, ripping the current shader setup out entirely doesn't seem very doable without a full overhaul of the engine itself.
There is NO current way to modify an existing shader on the fly, and the "shader compiler" class itself (the one producing shaders) pulls in the whole SCF mess with it, so it's not even all that easy or straightforward to try to make/adapt one directly in code. The *only* currently working way to create a shader and load it into the engine is by using a loader that will trigger god-knows-how-many levels deep of xml tokenizing, parsing, evaluating and so on. There are nominally two types of shader loaders, namely for "xmlshader" and "weavershader", but in practice the xmlshader is the basis, as the weavershader simply adds a few more bells and whistles on top, otherwise relying on the xmlshader for the heavy work. The shaders themselves are not really concerned solely with "defining a class of surface" but pack in additional concerns such as providing alternatives for different types of hardware or software (done by providing several ways of doing the same thing), supporting various LOD (levels of detail), reusing other supposedly-existing shaders and snippets, and supporting (or not) different rendering plugins (currently OpenGL and software rendering).
Given the above rather troublesome finds, I think it's worth going into a bit more detail on the theory of shaders in CS, since it might help to figure out what of it, if anything, is really useful as such (and for what exactly). From a theoretical point of view, shaders in CS sound reasonable enough: each shader is meant to provide simply all the rendering steps and tumbles (or other workarounds) needed to obtain on screen a class of surfaces: e.g. you can write a shader for water-like surfaces, another one for cloth-like surfaces and yet another one for terrain surfaces. There is even a reasonable enough pipeline for this, since each shader is meant to contain a clear list of parts[3]:
- Metadata
- Shader Variables
- Technique(s)
- Pass(es) per technique
- Mapping(s) per pass
- Vertex processor per pass (if any)
- Vertex program per pass (if any)
- Fragment program per pass (if any)
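Purely to keep this hierarchy straight for myself, a sketch of the above as plain data - this is emphatically NOT how CS stores it (in there it's all SCF interfaces); the names are mine, for illustration only:

```cpp
// Illustrative sketch of the parts listed above, not actual CS types.
#include <string>
#include <vector>

struct ShaderVariable { std::string name; float value; };   // real SVs also hold textures, buffers etc.
struct Mapping        { std::string buffer; std::string attribute; };

struct Pass {
    std::vector<Mapping> mappings;      // per-pass mappings
    std::string vertexProcessor;        // optional, per pass
    std::string vertexProgram;          // optional, per pass
    std::string fragmentProgram;        // optional, per pass
};

struct Technique {
    int priority = 0;                   // which alternative to prefer, if supported
    std::vector<Pass> passes;
};

struct ShaderSketch {
    int maxLights = 0;                  // the "metadata": currently just the max number of lights
    std::vector<ShaderVariable> defaults;   // the shader's own variable context
    std::vector<Technique> techniques;  // alternatives for different hardware/software
};
```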
The code investigation revealed that the metadata is more a case of overplanning in there - it currently consists only of the "maximum number of lights this shader can handle". And I can't even quite say why there is a maximum number of lights or why exactly a shader adds lights to start with - since when is a surface to be defined by ...added lights? But nevertheless, it is: apparently quite a few "effects" are achieved by adding whatever number of lights in various passes, for all the logic that has.
Shader variables are one of the big things in there, really: they are meant to work as parameters for each shader so that conceivably you can reuse a shader to create similar but not identical surfaces. This is also part of where the clusterfuck starts: while each shader can in principle define its own custom variables, in practice there is a set of "predefined" variables (such as "diffuse tex" or "specular tex") that are assigned different values in different "contexts". So at rendering time there is a whole *stack* of shader variables collected from various places, and good luck knowing exactly which value any given variable will have, where and when. There is a lick of sanity there in the form of a hardcoded priority of the different contexts, so that - according to the docs - the value of any given shader variable will be taken from the highest-priority context where it's found, with contexts listed in order from lowest to highest priority: shader manager (basically the "global" scope for shader variables), current light, render step, render mesh, mesh wrapper, shader, material. In practice, the result of all this is that everything ends up relying on something else and is thus not separated at all, despite the whole elaborate pretense to the contrary. Unsurprising by now, I know.
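To make that priority order concrete, here's a sketch of the lookup rule as the docs describe it (not the actual CS code; in reality the "values" are anything from floats to textures to whole buffers):

```cpp
// Sketch of the documented lookup rule only, not actual CS code.
#include <map>
#include <optional>
#include <string>
#include <vector>

using VarContext = std::map<std::string, float>;   // simplified: real variables hold more than floats

// Contexts ordered from LOWEST to HIGHEST priority, per the docs:
// shader manager, current light, render step, render mesh, mesh wrapper, shader, material.
std::optional<float> LookupShaderVar(const std::vector<VarContext>& stack,
                                     const std::string& name)
{
    // Walk the stack from highest priority down; the first context that defines
    // the variable wins, everything below it is shadowed.
    for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
        auto found = it->find(name);
        if (found != it->end())
            return found->second;
    }
    return std::nullopt;    // not defined in any context
}
```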
The shader manager, for all its pompous name, turns out on a closer look to provide only a time variable that is supposedly useful for shaders that aim to animate the surface - why would an animated surface be needed? I can't quite tell. The render step and render mesh are ..."special". Just in case the rest seemed too sane for your taste, here we have some specialness added, since those are not really parameters for shaders - more like information sources that got stuffed into those "shader variables", since why not. Apparently one can get in here geometry from some meshes (vertices), splatting masks from terrains or even transforms and textures or anything under the sun, really. The last three "contexts" (mesh wrapper, shader and material) are the ones most often in use - theoretically they would allow one to parametrize the surface on a per-mesh basis (mesh wrapper context), on a per-material basis (material context), or otherwise to provide default values in the shader itself. So much support for endless customization that there's not much room left for *easily* making a darned thing to customize in the first place.
The techniques of a shader are meant to be the actual meat of the shader, aka the work done to make that surface look as desired. Customization hits here in the form of several techniques meant as *alternatives*: different ways to do the same thing so that no hardware&software configuration is left behind. To get the full extent of what those good intentions amount to in here, multiply the number of techniques by the several *passes* (aka multiple renderings of the same darned surface) per technique, and further add to it the vertex+fragment processors/programs that come in different flavours depending on the underlying plugin in use (OpenGL or software rendering atm).
The working of a shader in practice, as found out from digging through the CS sources, is meant to be a loop of "passes" where each pass consists of 5 steps:
- Activation
- Setup
- Drawing the mesh
- Teardown
- Deactivation
If you frown at the 5 steps of which only one does the actual drawing, it's ok, there are even more similar sets of substeps inside almost each of those 5 steps. In any case, the activation+setup mainly collect and set in place all the variables and values that are available from the various contexts at that time. This can easily be a whole mess because "variables" can include any number of anything, really, from textures and lights to geometry to additional rendering buffers. The drawing of the mesh is handled, from what I could tell, mainly by the OpenGL plugin (or the software renderer if you run it without OpenGL) and as such the code is dependent on that. The teardown+deactivation do the whole dance in reverse, as expected. Efficiency at its best, wouldn't you say?
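Put as code, the flow per technique boils down to something like the sketch below - the interface and the names are invented here just to show the shape of it, the real thing is spread across SCF interfaces and plugins:

```cpp
// Flow sketch only: hypothetical interface, NOT the actual CS iShader API.
struct Mesh;                                   // stand-in for the render mesh
void DrawMesh(const Mesh& mesh);               // stand-in for the g3d/OpenGL draw call

struct ShaderPassesSketch {
    virtual int  GetPassCount() const = 0;
    virtual void ActivatePass(int pass) = 0;   // bind programs, enable state
    virtual void SetupPass(int pass) = 0;      // push variables gathered from the context stack
    virtual void TeardownPass(int pass) = 0;   // undo the setup
    virtual void DeactivatePass(int pass) = 0; // unbind programs, restore state
    virtual ~ShaderPassesSketch() = default;
};

// One mesh, one chosen technique: loop over its passes, five steps per pass,
// of which exactly one actually draws anything.
void RenderWithShader(ShaderPassesSketch& shader, const Mesh& mesh)
{
    for (int pass = 0; pass < shader.GetPassCount(); ++pass) {
        shader.ActivatePass(pass);
        shader.SetupPass(pass);
        DrawMesh(mesh);
        shader.TeardownPass(pass);
        shader.DeactivatePass(pass);
    }
}
```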
Looking at the classes used for the various components of a shader (shader variables, passes, techniques, programs), the shader variables are the most reasonable, in that they at least don't bring in a whole lot besides what is in there anyway due to materials, for instance. The techniques themselves could perhaps be ripped out and repackaged, as their internals seem to be mainly direct manipulations of the g3d class (graphics 3d). The main issue though is with the iShader interface itself, because it mandates a "QueryObject" that all of a sudden pulls in the whole SCF (basically it forces iShader to be a plugin, just like that), holy shit! And otherwise there remain the shader passes, the programs, and the question of what exactly, if anything, should even be done by a shader at all: which parts are best handled by the material/texture itself, which parts should be done by a "shader", and how does one even make this choice?
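For reference, the shape of the QueryObject problem, illustratively only (these are not the actual CS declarations): the moment an interface demands a QueryObject it has to be a full SCF citizen, reference counting, registry and all - exactly the baggage I'd rather not drag into a hand-made shader.

```cpp
// Illustrative shape of the problem only, not the actual CS headers.
struct iObject;                           // stand-in for the generic SCF object interface

struct BaseSketch {                       // stand-in for SCF's reference-counted base
    virtual void IncRef() = 0;
    virtual void DecRef() = 0;
    virtual ~BaseSketch() = default;
};

struct ShaderInterfaceSketch : BaseSketch {
    // The catch: returning an iObject* ties the shader into the SCF object
    // registry and plugin machinery, so a plain class made on the fly in
    // game code doesn't qualify without carrying all of that along.
    virtual iObject* QueryObject() = 0;
    // ...pass activation, variable handling and so on would follow here.
};
```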
As far as I can tell currently, there are a few possible next steps:
- Aim to modify one of the existing "shader compilers" to allow creation from code. At first sight this would seem to require the fewest changes to CS itself (though it does inevitably mean I'd be patching CS itself, with all that brings with it, yes) but it comes with some big unknowns, not least of which is: how exactly should a shader be specified in a sane way in the first place (since importing the whole conditions-and-whatnot machinery is not sane)?
- Aim to create directly my own version of "iShader" - this means I'd end up writing a new "CS plugin", supposedly less invasive than option 1 above but carrying the SCF burden at the very least. Ripping SCF out of iShader as a first step is likely to turn into all sorts of additional trouble and I can't say I see the justification for going there even if aiming for this direction. The advantage I can see in this option compared to 1 above - once the SCF is somehow handled, ugh - is that it's both more direct (i.e. I should then be able to create and register a shader as/when desired) and perhaps more amenable to a step-by-step discovery of just what should go into a shader to start with.
- Investigate what exactly and how much can be done via texture+materials alone. This is quite iffy because it would probably require quite the dive into Blender now of all things (and the exporter that does not yet exist/work) but I mention it here for completeness.
A big handicap here currently is the lack of clear knowledge as to which parts of the current "shaders" are really needed, and to what extent. Basically a lack of practical experience with graphics, really: knowing how much can be achieved through textures+materials only and how much (or which parts) requires a shader on top to end up as anything reasonable. The current set of shaders is such spaghetti that I don't even think it's worth attempting some direct translation - although I did look through it to start getting some sort of familiarity with how things may be done and to what extent. It all seems so far to be more of a trial-and-error affair anyway, and the code provides further encouragement in this direction, e.g. "// If you understand this initially (or at all) I salute you", "// No idea why we have to invert the Z at all, but reflection is wrong without it"[4]
[1] And for my own future reference, which is why I'm including even "known" parts in here - let them be, as I've never regretted having things written in clear in a single place.
[2] Can't resist humming it too, since laughing is still better than going nuts: aquí se queda la clara, la entrañable transparencia de tu querida presencia... ("here remains the clear, the endearing transparency of your beloved presence")
[3] Section 4.12.1 in the CS manual gives a reasonable although purely theoretical description of those.
[4] Both quotes are from cs/plugins/video/render3d/shader/shaderplugins/glshader_fixed/glshader_fvp.cpp, aka the OpenGL plugin that supposedly handles the vertex program parts that may be specified in a shader.
Comments:
> There is NO current way to modify an existing shader on the fly
I do not think this is really desirable or something we actually wish to do. In the indeed rare cases where some sort of modification of behaviour from the player pov is desirable, keeping two and switching from using one to using the other is definitely the way to go ; there's never going to be some kind of open ended, vastly plurious shader modification contemplated.
> SCF mess
What's scf stand for here ?
Anyways : the most concerning part would appear to be the hardware support implemented at such a high level as the fucking shaders ; self-evidently the engine should have long ago decided how it's going to deal with whatever engine it's living atop. What exact hardware dependency is even contemplated here, I don't get it ?
More generally : xml shaders were rather a sort of period fashion, like say "object-oriented programming" or whatever web2.0 nonsense ; I'm not about to either send you on a spelunking expedition to find "how shaders should really be" nor am I going to order the fixing of the forms of dead fashions. In fact, we have arrived at a point here where we actually need input from the active and lively community of gfx developers for eulora -- and the fact that the current crop of morons floating about idly in the soup have so far failed to bring to life that active and lively community doesn't change anything.
So how about this bit waits a while, and you go do some server stuff ? This will only progress once we have some actual shader producers to poll ; and yes at that time it'll need some summarizing/cleaning/fixing etc -- but at that time.
Which, of course, begs the question of why are we even bothering to include the morons in the first place -- god knows keeping #trilema open for all comers for years hasn't resulted in some greatness of contribution or anything like that. Mayhap the solution is to simply define graphics in general from first principles and eschew the idiocy of "creativity" by "people themselves" fresh offa dat informational superhighway truck. Why do I have to make a gui for complicated parametrized definitions of appearance ? So that Joe Schmuck can sorta-kinda fiddle one bit in ten billion and then importantly asciilifeform all over himself as to his grandiose contributions ? I can fiddle bits better than he ever can through the usual process, wut the fuck is he needed for.
SCF stands for "shared class facility" and is this horrible "abstraction" that CrystalSpace came with, supposedly for flexibility, practically for a lot of harmful complexity.
The hardware dependency goes along the lines of "if your GPU does not support more than x texture units, then we'll use more passes with fewer than x texture units each". Essentially workarounds for various "what if" scenarios of a rather dubious nature (how exactly did they choose which "what ifs" are "relevant", anyway?).
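Roughly along these lines (a sketch with made-up names, just to show the kind of arithmetic involved):

```cpp
// Sketch of the idea only: when the hardware can't bind all texture layers at
// once, split them into several passes of at most maxTexUnits layers each.
int PassesNeeded(int textureLayers, int maxTexUnits)
{
    if (maxTexUnits <= 0 || textureLayers <= 0)
        return 0;                                            // nothing to draw, or nothing to draw with
    return (textureLayers + maxTexUnits - 1) / maxTexUnits;  // ceiling division
}
// e.g. 6 layers on hardware with 4 texture units -> 2 passes.
```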
Some spec for the graphics part was deemed needed to flesh out more of that data hierarchy model, so that the communication protocol can send everything to the client. It does sound way saner to define it from first principles, indeed. I'm not sure though what exactly you see as the next steps I should take here.
The discussion from #trilema logs:
[...] since it's defaults and pragmatism all around this time, I've set the prototype client to just load the main shaders anyway since the rest of the code can't really do anything remotely reasonable without them. Let them be [...]