Ossa Sepia

July 29, 2019

Overview of FFA Ch1 - Ch19

Filed under: Coding — Diana Coman @ 9:51 a.m.

Over the past few days I've taken the time to finally go through all the currently published chapters of FFA (1-19 with some bis and a,b,c on the way too) and got as a result a clearer idea of the whole thing as it builds up from basic definitions of large integers to the Peh interpreter machine with its own registers, two stacks (data and control, respectively) and the Cutouts mechanism for controlling access to sensitive data. While initially I was planning to add to my previous notes on FFA Ch1-Ch4, my experience of getting back to FFA after a significant break pointed to the need for a higher-level overview of what-where-why even before looking for such detailed illustrations of the code involved. So here's my current overview of FFA - open to any criticism you are able to throw at it, as always - consisting first of a brief list of main approaches in FFA and then the 4 "layers" that I see to the whole thing so far.

Main FFA Approaches and Algorithms

  1. Strictly non-branching on input values and therefore practically the death of "if". No ifs, no buts, so what's left is MUXes1: essentially the "if" goes down one level, from a logical switch (which operation to perform) to a mechanical one (which buffer to copy the result from); the same operation is always performed but its result is not always used.
  2. Strictly constant-time execution for any given input size. In practice this means strictly worst-case time, of course, as it's always the case when you require all runs to take the same time: the only way for all to be equal is to make them all equal to the worst, no way around it. Then again, some people's worst turns out better than other people's best, so take "worst" with a pinch of context, as always.
  3. Comba multiplication algorithm: calculating sequentially the *columns* of a product; additional optimisations for the squaring case (mainly avoiding calculating the same products twice).
  4. Karatsuba multiplication algorithm: calculating the product XY as a combination of cheaper operations (additions and shifts preferred by far to multiplications) on the half-words of X and Y; additional optimisations for the squaring case.
  5. Barrett's modular reduction: calculating X mod M by means of an approximation using the "Barrettoid" B, which can be precomputed from any given modulus; X mod M is approximated first as X - M*B and then adjusted by at most 2 subtractions of M (i.e. the error of X - M*B as an approximation of X mod M has only three possible values: 0, M or 2M).
  6. Stein's binary GCD: rather a quasi-Stein, as it's adapted to FFA's requirements, most notably the constant-time execution requirement (by iterating over the full number of bits rather than stopping when the smaller number reaches 0).
  7. Miller-Rabin compositeness test: an adapted version of MR, forcing any given witness into the acceptable range and providing a "one-shot" test for any given value N and witness W.
  8. Peh interpreter machine: basically the "cryptographic machine" that the user gets to push the buttons of; Peh has a data stack and a control stack, a set of registers and an instruction set including the ability to define and call subroutines and loops as well as to define Cutouts aka a protected area for sensitive data.

FFA Layers (from bottom up)

  1. Representation of arbitrarily large integers as fixed-size (i.e. specified for every run) arrays of machine-words (with iron-specific size of machine-words changeable via knobs in the code). In FFA terminology, such integers are called "FZ".
  2. Basic arithmetical, modular arithmetical and logical operations on FZ entities in constant time and handling any overflows/underflows. This part builds up from the most "naive" approach (aka most straightforward if rather costly) dubbed "egyptological" through successive refinements (optimisations of the original algorithm followed by radical changes of approach) until the final result uses Barrett's reduction for modular operations, Karatsuba with Comba core-case (i.e. below empirically-established thresholds) for multiplications and a variation of those (Karatsuba + Comba + specific threshold) specifically optimised for squarings.
  3. Primality-related algorithms working in constant time on FZ entities: Stein's binary GCD2 and an adapted MR3 that can adequately handle any value of a given witness (by forcing it effectively within the valid range).
  4. Peh, the finite-tape, dual-stack and cutouts-enabled interpreter. While I still need to turn this inside out a few times more, I can tell already that I want a Peh-iron!

And while there are still a lot of experiments and timings and playings to go through with all of FFA, I'm happy at this point to sign the remaining chapters: I went through them with pen in hand and, as far as I can tell, they do what it says on the tin (not to mention that I certainly would use FFA and in fact I'm itching to drop already the shitty gpg, ffs!):

  1. Multiplexer aka a way to select only one of several inputs. 

  2. Greatest Common Divisor 

  3. Miller-Rabin 

July 24, 2019

Naked Models (Notes on Graphics in Eulora, IV)

Filed under: Coding, Eulora — Diana Coman @ 11:30 a.m.

What's the first change you'd do to an empty-headed character such as Testy? Why, chop some bits off, change his sex and get him naked at least, even transparent at that, since he's pretty much just as tedious and boring as all children's dolls 1. Of course the first attempts failed to produce anything visible, because it turns out that using Cal3D sprites in Eulora's client goes through several layers of indirection and misdirection: Crystal Space (CS) wraps the Cal3D code in its own SprCal3D plugins and factories and whatnots; on top of that, there is also the rather elusive CEL (Crystal Entity Layer) that is meant to be a game-specific part of CS keeping track of game entities; and messing within and across all of it is Planeshift's (PS) own GEM concoction, interlaced with delayed loaders and Dead-Reckonings2 and cloning of materials (why? no idea yet) and surprising, undocumented transformations. Still, after a while of hacking through all the above, I could at least get some parts of Testy showing... if lying down rather than upright:

Spot the bottom in the grass!


The prostrate position at least makes sense in that "movement" also looks a bit like some swimming look-alike: parting the legs and advancing, sort of thing. Then again, it turns out that using a different model (e.g. Cally) makes it even more interesting - the model is upright when waiting but it dives straight back to the horizontal as part of moving, and it furthermore cares not one bit for whatever movement animation one loads. So there's a clue for another day - current movement needs a good bashing over the head and a deep dive to figure out what all this nonsense is. But until then, here's naked Cally who borrowed Testy's skin for a spell:

What bits is she missing?


Anyway, since position of the model is always a matter of applying one transformation or another, I went ahead and applied the obvious transformation, 90 degrees and upsy daisy on her feet:

Upsy-Daisy with muddy skin!


And funnily enough, once the 90 degrees rotation is applied to the factory (i.e. presumably to ALL entities built out of that factory), the Testy model remains upright even during movement, without any perceived change. On the other hand, the Cally model is upright without the 90 degrees rotation but nevertheless dives down for movement and pops back up as soon as movement is done. So it would seem that movement has its own assumptions that it doesn't bother to make public, how lovely. Seeing how it most probably has to do with the model's initial position or similar, I might even need to end up again with the full environment, Blender and all, installed, to get to the bottom of it and probably see exactly where and how the exporter is also broken and specific. Such joys to come, but for the moment they are not yet here, since the more pressing need is first of all to disentangle some more of the PS+CEL+CS+CAL3D mess regarding animated entities, so as to actually be able to *use* the walk/fight/move animations rather than just "add" them, and moreover to be able to add and remove on the fly as many and as diverse entities as wanted, beyond the "main character" guy. And all this is getting deeper into PS swamps so it's likely to take more than a week indeed - ideally by then I'll be able to cut off at least some parts of the existing code (precaching and preloading especially) and lighten the client start a bit if nothing more.

  1. Changing graphics is this exercise in tediousness and pointlessness; it eerily reminds me of playing with dolls, just one step lower than that since those are just images of dolls, fancy that. And playing with dolls was utterly boring even when I was 5! It's like an endurance test along the lines of watching paint dry. Tinker with this and tinker with that, all for "the looks" of it, and you can't even satisfyingly throw it at anything to hear it break. 

  2. Not kidding, although it's way less exciting than the name they had the chance to land on: it's mainly looking to see if your character falls down and breaks their neck. 

July 20, 2019

The Young Hands Club

Filed under: Outreach — Diana Coman @ 9:08 p.m.

Since I already have an eager young hand and it turns out that even online, young hands still need some space in which to mess around as much as their growth requires, I went ahead and set up The Young Hands Club precisely for this purpose. A sort of very public learning club if you want - and even if you don't!

Now let's see what comes out of it in one year's time. The opportunity is there, the options are all yours supposedly so... what will you *do* about it?

July 17, 2019

The Mirror Land (Notes on Graphics for Eulora, III)

Filed under: Coding, Eulora — Diana Coman @ 10:26 a.m.

Getting some actual landscape as in getting the height right turns out to be only half (the reasonably well-working half) of the terrain task: once I had in place at least the generic mechanism for creating factories and meshes, the terrain factory required only a few iterations, false starts and mandatory dives into CS's entrails to get working and save Testy from his usual fall into the sky-sphere:
Terrain generated from heightmaps only (no mix of materials, only base material).

After setting up the lighting a bit better too, it looks even more reasonable (next to the crater that is currently empty, yes):
Base terrain from heightmap.

The truly frustrating bit turns out to be the painting of the landscape in the desired materials. For the relatively easy part, the Terrain2 construct of CrystalSpace provides a "terraformer" that can be fed a "material palette" (i.e. a set of materials to use) and an indexed image - as a result, the terrain will have a reasonable mixture of the materials depending on heights and an ok fading of the materials into one another where they meet so it doesn't look that terrible, especially once all normals are in place too so that lights stand a chance to work:
Terrain from height map *and* material palette with 3 materials: grass, sand and rock.

The trouble however is that so far I haven't quite managed to get to the bottom of turning down the reflective mirror-like surface of this landscape! In theory, materials have those 2 light-reflecting properties, namely the extent to which they are "diffuse" or "specular" (i.e. "shiny"). Still, for all my futzing around with those for all three materials (sand, grass, rock), the observed effects are pretty much non-existent. So much for the theory and welcome to this graphics thing of tinkering. To see clearly what I mean, here's a rather surreal rendering with the ambient light turned on to higher levels so it's more obvious (note that the effect is precisely the same even with low ambient light - it's just harder to realise what it is exactly; similarly and as expected, one can mask the effect to the point that "it's fine" by turning on the fog button - it makes sense since it dulls the ambient light, I'd say):

On the positive side, the futzing with lights and whatnots is really of less concern for me at this moment since I'm not that interested in obtaining an image *just so* - my goal is figuring out *what* sort of things one can set, what they do and how to set them while the game is running. In this sense, at least the basics of the terrain seem to be in place for now: heights are read from a heightmap file, the materials are indeed used according to heights, the rest is for another day. I suspect though that in usual tradition, the only real solution here is to dig, in the end, into that shaders pit as well, since it's most probably a shader thing: the terrain object uses a different, terrain-specific shader and for some reason the defaults for that end up producing more of a mirror than a soil, at least given *everything else* (such is graphics, everything and anything can be the culprit). Note that I specifically used and set precisely the same parameters for materials as they are in the client currently in use - so it's most probably something else/somewhere else that I'm not setting just right for now.

The next step now is to figure out animated meshes too, hence Cal3d objects. So in the next episode, it's Testy himself that will change and get created directly from code. Once I have at least one way to do this too, I can go back a bit and look again into integrating them into (or extracting them out of...) the rest of the PS infrastructure. Then again, before doing that, I might still need to dive in and figure out the shaders in more detail to extract some way of specifying them without xml and directly from code too, perhaps. That promises though to be a lot of figuring out and additional mess so depending on whether it can or not wait a bit longer, it might not be exactly next in line - basically I have lots of work competing for my attention, a sort of inverted looking-for-work, it's the... (lots and lots of) work looking for me!

July 14, 2019

A Working Cuntoo Install on AMD FX-8350 (with script)

Filed under: Coding — Diana Coman @ 10:20 p.m.

Since the relatively recent discovery that Cuntoo work is apparently not really going anywhere for lack of more public testing and reporting, I'm doing my bit and publishing what I've got so far on installing Cuntoo on a local box and prodding it for a while. First, to introduce the box in question, so you can easily replicate it:

  • AMD FX-8350, 4GHz, Octa Core
  • Gigabyte Ultra-Durable GA-78LMT-USB3
  • 2*8GB DDR3 RAM
  • 1TB Seagate Barracuda

Since I wanted to do this from scratch to make sure it's easy to replicate, I ignored my previous Cuntoo-compilations and I made myself first a bootable USB-stick with Gentoo's livecd. Initially I took the minimal install_amd64_minimal_20190703T214502Z.iso but that proved iffy to use because it was apparently a bit too spartan for my requirements (esp. no make on it). So I traded space for time as it were and used instead the much larger livedvd-amd64-multilib-20160704.iso (2.2G vs 292M for the minimal iso). Perhaps this was quite overkill and there is something in between that does the job just fine so if you find/have that, just use it - the bootable USB-stick really is just a way to "get in," nothing more. Note that on some .iso images you might need to run isohybrid first if you want to use them to boot from USB (rather than burning them on a CD/DVD). As for making the bootable USB stick, that's as easy as using dd to transfer the .iso to your USB drive - do be patient if your .iso is big as dd is silent.

Armed with the bootable USB stick, I popped it into one of my USB 2.0 slots and turned the machine on. If you have the same setup as mine, the F12 key will give you the boot menu (or Delete for the full BIOS menu) and you'll need to select the USB stick *from the hard-drives* (you'll need the PgUp/PgDown keys and not the +/- as some docs misled me). Once the live-dvd Gentoo image boots, you can proceed with the Cuntoo-related tasks, in order (note that you'll be doing all this inside the live-dvd image so if you reset, *the environment will reset too and your changes will be lost*, so you'll need to start from the beginning again). With many thanks to Hannah for her mini-series of posts detailing her ordeal with Cuntoo and therefore inspiring my script-making right from the start, here are the steps and the relevant script parts:

  1. Add to your /etc/hosts file the IPs of any websites you'll need - the exact list depends on what you choose to do at each of the following steps (e.g. where do you take your V from) but at the very least you'll need the IP1 for trinque.org to get the Cuntoo tarball!
  2. Obtain and import any gpg public keys you need - similar to the step above, the exact list here depends on whose stuff you use but as a minimum you'll need Trinque's key, of course. As I've used only Trinque's and my own stuff, my gpg-importing bit of the script looks like this:
    # gpg keys
    curl -v "http://ossasepia.com/vpatches/diana_coman.asc" -O
    gpg --import diana_coman.asc
    curl -v "http://wot.deedbot.org/FC66C0C5D98C42A1D4A98B6B42F9985AFAB953C4.asc" > trinque.asc
    gpg --import trinque.asc
  3. Get and install a working GNAT. My choice here for expedience is to use the frozen Adacore version that I'm hosting on this site. Correspondingly, the relevant part of the script grabs it from my website, checks the provided sha512sum on it and installs it (if the GNAT dir doesn't already exist from a previous run perhaps):
    # GNAT
    if [ ! -e $GNATZIP ]; then
      curl -v "http://ossasepia.com/available_resources/gnat-gpl-2016-x86_64-linux-bin.tar.gz" > $GNATZIP
    fi
    if [ ! -e $GNATZIP.sha512 ]; then
      curl -v "http://ossasepia.com/available_resources/sha512sum_gnat-gpl-2016-x86_64-linux-bin.tar.gz.txt" > $GNATZIP.sha512
    fi
    if [ "$(sha512sum $GNATZIP)" = "$(cat $GNATZIP.sha512)" ]; then
      echo "Matching sha512sum for Adacore-GNAT."
    else
      echo "FAILED sha512sum test for Adacore-GNAT, stopping."; exit
    fi
    if [ ! -d $GNAT ]; then
      tar -xvf $GNATZIP
      cd $GNAT
      sudo ./doinstall
      cd ../
    else
      echo "GNAT dir already exists so GNAT is assumed installed! Skipping GNAT install."
    fi
    PATH="/usr/gnat/bin:$PATH"; export PATH
    echo "Updating PATH variable to $PATH in order to use GNAT."
  4. Get and install your favourite V. Make *sure* it can be found (e.g. update your PATH variable with V's location). Once again, since I can rely on my previous work on packing V (namely mod6's implementation of V), my life is that bit easier now and my script relies on my website to get and check everything:
    # V
    if [ ! -e $VZIP ]; then
      curl -v "http://ossasepia.com/vpatches/starter_v.zip" > $VZIP
    fi
    if [ ! -e $VZIP.diana_coman.sig ]; then
      curl -v "http://ossasepia.com/vpatches/starter_v.zip.diana_coman.sig" > $VZIP.diana_coman.sig
    fi
    echo "gpg --verify $VZIP.diana_coman.sig $VZIP"
    gpg --verify $VZIP.diana_coman.sig $VZIP
    if [ ! $? -eq 0 ]; then
      echo "FAILED sig check on $VZIP, stopping."; exit
    else
      echo "Sig check fine for $VZIP"
    fi
    if [ ! -d $VDIR ]; then
      unzip $VZIP
    fi
    cd $VDIR
    chmod +x build.sh
    cd ../
    PATH="$PWD/$VDIR:$PATH"; export PATH
    echo "The PATH var for using gprbuild, vk.pl, ksum, vdiff and vpatch is: $PATH"
  5. Get Cuntoo itself (note that it's ~800M so depending on your connection, this step may take a while):
    # Cuntoo itself
    if [ ! -e $CUNTOOZIP ]; then
      curl -v "http://trinque.org/$CUNTOOZIP" > $CUNTOOZIP
      curl -v "http://trinque.org/$CUNTOOZIP.sig" > $CUNTOOZIP.sig
    fi
    echo "gpg --verify $CUNTOOZIP.sig $CUNTOOZIP"
    gpg --verify $CUNTOOZIP.sig $CUNTOOZIP
    if [ ! $? -eq 0 ]; then
      echo "FAILED sig check on $CUNTOOZIP, stopping."; exit
    else
      echo "Sig check fine for $CUNTOOZIP"
    fi
    tar -xvf $CUNTOOZIP
    cd $CUNTOO

Once all the above finished without any error or problem, all you need to do is to run the bootstrap.sh script according to Trinque's instructions. If you are at a loss as to the kernel config - by far the ugliest part in this - and your hardware is similar to mine, I was able to use Stan's "Dulap-II" config with minimal (possibly it works even without any) changes so go grab that one and feed it to the script - note that you'll get a quite spartan but absolutely functional environment and you can customize it from there as you want it. Here's the full bash script too if you want to use it: cuntooprep.sh.

NB: at the moment I'm writing this, the known path-issue within Cuntoo's scripts is still present in the Cuntoo.tar file so you'll need to do the change manually for the signature to verify on the genesis.vpatch that you obtain at the end. The change you need to make is explained in the logs and you can do it after running the bootstrap.sh script if you already started that.

Once you get the genesis.vpatch, verify it against Trinque's signature - if it doesn't verify, chances are that you still need to fix the paths issue so see the paragraph above and remember to actually re-run the scripts/make_portage_tree.sh script after you made the change! Then simply reboot and choose to boot from the disk on which you installed Cuntoo. You can then recompile the kernel to tweak any configs you want, for instance with those steps:

  1. cd /usr/src/linux
  2. make oldconfig
  3. make menuconfig (and/or change whatever you want directly in the .config file before running this - I've ended up finding the *text* file BETTER than the bloody gui thing simply because nobody can tell where something is in there, ugh.)
  4. make && make modules_install
  5. cp arch/x86/boot/bzImage /boot/bzImageNew
  6. change /etc/lilo.conf to add the bzImageNew as another boot option (DO keep the old one too, at least until you are sure the new one is fine!)
  7. lilo
  8. shutdown -r now

Once you've had enough of recompiling the kernel, the next step would be to have a look at what you'd like to add to your system - chances are there are quite a lot of things that you need before you can actually do much in there. Cuntoo has its own preferred repository in /cuntoo/portage and a distant second in /usr/portage. So when you run emerge, you'll see either ::cuntoo or ::gentoo appended to the packages required, depending on which of the repositories is used. To enrich the cuntoo repository and effectively get rid of the madness where gentoo just doesn't "have" anymore the packages you want, you'll need to copy the tarball of your package into /cuntoo/distfiles and the ebuild+manifest+patches+shit into /cuntoo/portage/package-path. My initial attempt with MySQL was a step too far at this stage as it served only to point out more clearly what a mess porting the dependencies to Cuntoo will be - MySQL itself requires all sorts! So I looked among those "all sorts" and picked out one with better chances of being standalone, namely curl. With this adjusted aim, I was actually able to "port" the curl ebuild to Cuntoo as it were, simply like this:

  1. Download the tarball without installing it: ebuild /usr/portage/net-misc/curl/curl-7.60.0.ebuild fetch
  2. Copy then the tarball to /cuntoo/distfiles
  3. Copy /usr/portage/net-misc/curl to /cuntoo/portage/net-misc/curl
  4. Go into /cuntoo/portage/net-misc/curl and snip away the ebuilds that are other versions than 7.60.0 as well as the references to them in the Manifest file (NB: you'll need to keep all the patches though regardless of what their names say as they are still applied to the 7.60.0 version anyway).
  5. Now run emerge --ask -v net-misc/curl and look closely: it should say ::cuntoo and it should report that it has to download precisely 0Kb. If all that is true, let it do it and you'll have gotten Curl in your new and sparkly Cuntoo!

Now I must say that I'm a bit at a loss as to whether the above curl-experience should get the grand name of "making an ebuild" or not - to me it looks more like hacking an ebuild at best, and moreover a quick look at gentoo's docs on making ebuilds reveals enough *other* stuff to eat that I don't see it happening all that soon. So I don't even know - is *this* the sort of ebuild-hacking that Trinque was asking for? Or writing them from scratch, or something else entirely, or what?

  1. Obviously, unless you use DNS but why would you still use that anyway? 

July 12, 2019

My First Hours on Dev.to

Filed under: Dark Modern Ages, Outreach — Diana Coman @ 2:46 p.m.

As I really have too little time to waste any of it on fretting about how to start on things, I just picked Dev.to as the first online place to explore in search of young hands that grow from their right place and not from arse1. The choice was quite arbitrary really - it popped out on my unordered list as a relatively lively place and it lets one log in with an old GitHub account, so at least I did not have to make yet another "account". As a result, I now have in there too that same old end-of-uni photo of mine that GitHub also has, but so what. Anyway, to start it off, I gave them some content for free, sure, what's a few paragraphs to me now; here it is:

Come work on what matters, so you matter too.

I'm part of TMSR - the place where well-thought Bitcoin innovation happens steadily, publicly, unfashionably and with inescapable, far-reaching consequences. From a new model of software development to a MMORPG and building up a working market for computer artists plus everything software and hardware in between, the focus is at all points on owning what you do and growing your knowledge and ability at a sustainable pace.

The programming language of choice is Ada (with a fully-documented rationale as to why Ada) but work with legacy code includes C, C++, Python, Lisp and potentially anything else really.

Come work with me on things that matter, if you want to matter too. I write (and have been writing for a while) at http://ossasepia.com

The post above quickly got a few "hearts" and so far (1 day later) precisely no comments at all. Apparently love is easier to get than conversation, did you ever notice that? And what does it actually tell you, hmm?

The easy love aside, I didn't really wait (or expect) for any conversation to actually start from my first post, no matter what I'd have written there2, and so I just started looking around at the whole thing instead, trying to figure out its structure and therefore some way to *systematically* explore it. That quickly bumped into the obvious fact that the whole thing seems rather on purpose built *against* systematic anything. There are, for instance, tags meant to group the content by interests, and you can "follow" tags even with "weights" attached, but you can't see ALL tags (I know because I asked them, right there, in the "hello" thread, yes). You can see "the top 100" tags and supposedly that should be enough for everyone, screw the unpopulars; apparently I'm not supposed to be able to find them even if I am willing to go through as many tags as it takes to precisely reach them too! There are also profiles of course and you can "follow" those too but again, no way to actually see a list of them and be able to just go and talk to each one, no. Gotta try and map the space through the conversations that they feed you (literally, it's a "feed", right?) or otherwise go pretty much by chance, here and there, looking under each stone in search of people nowadays. Oh, and the platform is some Open Source pile of code, of course. Anyways, doesn't it strike anyone as really weird, this thing where precisely the online stuff that is by its nature exactly fit for systematic organisation and access is instead by design anti-systematic? Note that it's not just a matter of "offering also a fuzzy path to follow" but rather of limiting the option to that and nothing else.

Anyways, 100 tags at least are better than none, so I started reading from there, for lack of any better strategy really. Reading3 and commenting of course, since the whole point is to engage with people, what else. So after a few hours yesterday reading and commenting around there, here's how my dashboard on dev.to looked:

Looking at the numbers this morning (after another 15 minutes spent on dev.to), I have: 1 post published (with "under 100 views"), 29 comments written, 10 "followers" and 4 "total post reactions." Hopefully I find a way to get faster at this as I do it more since I can't say I really have as much time as the current rate promises to eat up.

As for people actually following up on what they said and making their way to irc and/or to this blog (or Stan's, since there was one DrBearhand whom I pointed towards FFA), I'll believe it when I see it happening and not earlier. But in any case, if those I reached already fail to actually act on their own words now, it's their failure and nobody else's anymore, certainly not mine - from here on, it's on them entirely. What do you think the actual rate will end up like, 1 in 100? 1 in 1000? 1 in 1mn?

  1. It's Stan's term-of-art, see the logs

  2. I suspect I'm getting old really, there's no other explanation for this sort of lack of silly expectations here. 

  3. Really, having read lots and lots of legacy code prepared me for everything and anything, there is that. 

July 8, 2019

The Rocky Moon (Notes on Graphics for Eulora, II)

Filed under: Coding, Eulora — Diana Coman @ 9:35 p.m.

My client-side Eulora task set last week is completed and as a result, the Testy character now gets to see something more than just fog in his Sphere-of-all-beginnings, look:

Testy under a Wavy-Blue Moon, in the Sky-Sphere of all beginnings.


It turns out that CS can indeed eat - though not very willingly1 - "geometry" for fixed/static entities (aka "generic meshes") even without any xml in sight2. For static aka non-moving entities, there are 5 sets of numbers required:

  1. Vertex coordinates
  2. Normals at vertices
  3. Colours at vertices
  4. Texture mappings at vertices
  5. Triangles

The main trouble with the above was the exact "how" and "where" to actually set them so CS understands what they are. It turns out that the most obvious "those are the vertices and those are the normals etc." approach is "obsolete", replaced apparently by having to specify a render buffer for each set of numbers. Of course it's still the same arrays of numbers either way, but with render buffers you need to also specify which *type* of render buffer you have in mind (i.e. what those values are for) and in pure CS style this is specified by giving the... name. Name that is afterwards internally translated into a numerical id of course, but nevermind, since it's supposedly as easy as it can get to have to remember that the name is exactly "position" and not "positions" nor "vertices" nor anything else3. So easy in fact that I ended up keeping the list of "buffer names" at hand all the time, great help indeed.

Before moving on to some basic descriptions of those 5 sets of numbers, it's worth noting that they are just the most usual 5 sets really: there are plenty of others that one can specify, though it's unclear exactly when and precisely why one would. The easiest example concerns texture coordinates: one can specify as many as 4 different texture mappings. And why stop at 4 if one went all the way there, why not 5 or 6 or 7 or 1000? No idea. Anyway, for the basic descriptions:

Vertex coordinates are exactly what the name says: the 3D coordinates of vertices that define the entity's shape in space. Those are all relative to the position at which the entity is drawn at any given time, of course, since they are internal to the entity and unaware of anything else in the larger world.

Normals at vertices are perpendiculars on the surface of the entity at each of the vertices defined earlier. Those are quite important for lighting (together with the orientation of each triangle surface defined further on).
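For concreteness, here's a minimal sketch (plain structs, no CS types) of how a triangle's normal falls out of the cross product of its edges - vertex normals are then typically obtained by averaging the face normals of the triangles sharing each vertex:

```cpp
#include <cmath>

// The normal of triangle (A,B,C) is the normalised cross product
// (B-A) x (C-A); flipping the winding of the vertices flips the normal.
struct Vec3 { double x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) {
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

Vec3 FaceNormal(Vec3 a, Vec3 b, Vec3 c) {
    Vec3 u = Sub(b, a), v = Sub(c, a);
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

Note that the direction of the result depends entirely on the order of the three vertices, which is why the winding convention matters so much for lighting.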

Colours at vertices specify the colour of the object at each vertex and therefore allow colouring of the full object through interpolation, for instance. While in principle this can be skipped when a texture is given, the "skip" simply means default colour (black), so it's a "skip" only in the sense that it's less obvious it exists.

Texture mappings at vertices - those are tuples (u,v) that specify a texture point (2D, because textures are 2D images) that is to be mapped precisely to each vertex. Just like vertex coordinates provide fixed points for the shape of the entity, which is otherwise interpolated at non-specified points, the texture mappings provide fixed points for the "painting" of the shape with a given texture.
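As an illustration of the mechanics (my own sketch of the usual interpolation scheme, not CS code): the (u,v) value at any point inside a triangle is a weighted - barycentric - mix of the values fixed at the three vertices, and the very same scheme covers vertex colours:

```cpp
// Sketch of the usual interpolation mechanics (not CS code): the value
// at a point inside a triangle is a barycentric mix of the three vertex
// values, with weights w0 + w1 + w2 == 1.
struct UV { double u, v; };

UV InterpUV(UV a, UV b, UV c, double w0, double w1, double w2) {
    UV r = { w0 * a.u + w1 * b.u + w2 * c.u,
             w0 * a.v + w1 * b.v + w2 * c.v };
    return r;
}
```

Equal weights of 1/3 each give the value at the triangle's centroid; a weight of 1 on one vertex and 0 on the others recovers exactly the value fixed at that vertex.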

Triangles define the surface of the entity as approximated through triangular shapes. All surfaces of static shapes in CS are approximated through triangles and the only difference between curved and flat surfaces is essentially in the number of triangles required so that your eye is fooled enough that you like rather than dislike the lie. This is after all the whole of computer graphics in a nutshell: a lot of effort spent to lie to you the way you like it.

Triangles in CS are given as triplets (A,B,C) where the values are simply indices of the vertices previously defined. So the simplest cube will have for instance 8 vertices for its 8 corners as well as 2*6=12 triangles for its 6 faces since each face, being a square, needs 2 triangles to approximate. And those 12 triangles mean 36 values in total since each triangle needs the indices of all its 3 vertices. Moreover, the order in which the vertices are specified matters as it gives the orientation of the approximated surface4 and that in turn is crucial for calculating lighting (and ultimately for figuring out whether you get to even see what is on that surface at all). Since it's not intuitive perhaps, it's worth noting the basic fact that a surface has 2 sides and the "painting" of a texture is strictly on one side only. So if you mess up the surface's orientation, you can easily do the equivalent of dyeing a garment on the inside without any colour whatsoever showing when you wear it unless you turn it first inside-out.
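To make the cube arithmetic above concrete, here's a minimal sketch in plain C++ that produces the 8 corners and the 12 triangles aka 36 index values; the specific winding chosen below is purely illustrative, not lifted from any actual CS data:

```cpp
#include <vector>

// The cube arithmetic from above: 8 corner vertices (24 coordinates),
// 6 faces * 2 triangles * 3 indices = 36 index values. The winding
// below is illustrative only.
struct Mesh {
    std::vector<double> vertices;   // x,y,z per vertex
    std::vector<int>    triangles;  // 3 vertex indices per triangle
};

Mesh MakeUnitCube() {
    Mesh m;
    // Corner index = x + 2*y + 4*z for x,y,z in {0,1}.
    for (int z = 0; z <= 1; ++z)
        for (int y = 0; y <= 1; ++y)
            for (int x = 0; x <= 1; ++x) {
                m.vertices.push_back(x);
                m.vertices.push_back(y);
                m.vertices.push_back(z);
            }
    static const int faces[12][3] = {
        {0,2,3},{0,3,1},   // z = 0 face
        {4,5,7},{4,7,6},   // z = 1 face
        {0,1,5},{0,5,4},   // y = 0 face
        {2,6,7},{2,7,3},   // y = 1 face
        {0,4,6},{0,6,2},   // x = 0 face
        {1,3,7},{1,7,5}    // x = 1 face
    };
    for (int i = 0; i < 12; ++i)
        for (int j = 0; j < 3; ++j)
            m.triangles.push_back(faces[i][j]);
    return m;
}
```

Getting all 12 windings consistent (so that every face shows its painted side outwards) is precisely the tedious, error-prone part that footnote 4's convention is meant to tame.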

As you might have guessed from the above example with the most basic cube if from nothing else, even very simple entities require a lot of values tediously and boringly specified - hence quite the bread and butter of automation5, yes. For now and for testing purposes, I simply extracted the numbers of interest from the xml description of the moon object in game and then I used them as default values for an object created directly from code. And lo' and behold, Testy got a moon to stare at and the moon even got to paint itself stony when I'm bored of its usual wavy blue:

Stony Moon and Testy in the Sky-Sphere.

The next biggish step further in this line is to figure out how to generate terrain from just a heightmap and then how to create an animated (hence, Cal3D) entity from code rather than through the sterile xml-packaging. Looking even further from there, monstrous xml-shaders await in the same direction but in other directions there is still a lot of work to do on teaching the PS code to use the EuCore library and to discover as a result that the world is way larger than what fits in its inherited, well-worn and time-proven, traditional xml-bible.

  1. A bit like those finding out that they can actually eat food that was never packaged in plastic or washed in chlorine or even re-heated. 

  2. At least not in geometry-sight. In a bit wider perspective, all shaders so far are still balls of xml that will probably have to be untangled sooner rather than later. 

  3. If you give a non-existent name, the buffer is simply ignored, since supposedly you can define your own buffers too, so it's not an error. Combined with the wonderful way in which everything influences everything else in graphics, you get - if you are lucky - to puzzle over some unexpected visual effect that might even seem at first to have nothing at all to do with the names of the render buffers (maybe you messed up a few numbers in that long list, you know?). 

  4. By convention, vertices are given clockwise as read when facing the surface. 

  5. Automation here can mean exporters from supported tools such as Blender, as well as generators of known shapes/patterns. 

July 3, 2019

Growing Young Hands

Filed under: Outreach — Diana Coman @ 5:11 p.m.

A recent conversation in the forum highlighted the need for me to re-adjust some of my priorities and push more to the forefront and into the light of purposeful activity the building of helpers too, not only of code. While this part is certainly not new to me nor otherwise entirely neglected up to now, it's clearly also not where I'd want it to be.

Over the past few years, in addition to the technical work that is indeed over-represented on my blog1, I have also spent some hours every week with young people (other and older than my son) mainly on Maths, occasionally on bits and pieces of programming. As such, I effectively planted some sanity seeds in fertile soil but this hasn't yet yielded any new direct contributors to Eulora or even to the wider TMSR. In part this is perhaps because they are yet too young2 but it might also be the natural outcome of my very leisurely (as opposed to tightly focused) approach to this side of things, doing it occasionally and selecting only out of the relatively small pool of those who are physically close enough to come to me with any regularity and have moreover searched enough to find me in the first place. While this has the significant advantage of a strong initial auto-filtering, it also has the crippling disadvantage that it's so slow as to be in the end not practical. I can improve on this a bit perhaps by selecting a few and simply immersing them more in Eulora/TMSR work directly, see where that leads. However, even at best, that'll be currently 1 or at most 2 people that I'd even consider at all so what's that going to do exactly anyway?

The low numbers above are not "too low" for what it is - if anything, they are actually quite good results I suppose. Given the natural scarcity of quality combined with modern artificial abundance of everything else, the above leisurely approach is really nothing more than playing the lottery and with very few tickets too. So no wonder as to results, but let's screw this semi-passive wait for repeated lottery wins and look instead for some more active and pointed ways to find the ones I care to find in the first place and to specifically aim the whole time at the most pressing need - help grow those who can actively take part in Eulora and TMSR.

The first obvious and most straightforward avenue is to go again to the local Uni but this time specifically aiming to choose some helpers, adjust, repeat and keep at it. I'm still mulling this part over, mainly in terms of whether to spread the word and then simply set up some workshop right on their campus, since a laptop and a few drinks is all it takes anyway. Alternatively or in addition I suppose I can even do the same at the local pub3 and it might even have better seats and better drinks than their cafeteria. The only hurdle here is really to reserve some time on a regular basis specifically for this (or similar public meetings basically). This would be prong 1: regular public workshops as selection grounds.

While the above should be anyway an improvement in terms of number of candidates seen at least, it still doesn't strike me as enough at all. And moreover, there are the hard limits of location and of how many can physically fit around me at any given time. Looking to the wide online space, there is of course plenty of room there and supposedly plenty of eager young hands looking for a place where they can matter too. A quick look around shows at least in theory more gaming/programming/learning forums than you can shake a stick at, so in parallel with the above, I'll have to set up additional time to trawl those systematically and see what really is there, if anything. I suppose a "come work with me on what really matters so that you matter too" is just as good a starting message as any and it's not written in stone anyway. This would be prong 2: systematic trawling of gaming/programming/learning forums and talking to all their members.

Speaking of the multitude of forums and learning platforms and courses and summer schools and hackathons and whatnots, it's worth perhaps mentioning that I'm not particularly worried about their huge number and supposed competition. They are many indeed but they are as far as I can see quite alike, a lot of the same thing under slightly different wrappers, pretty much like all the latest and bestest "innovations" - at this stage they really seem to me more of a way to drown the few young ones that look for meaning than to actually help them in any way. For all the hype that seems to be around pushing and shoving youngsters into computing, from where I sit it really looks increasingly more like pushing and shoving them towards becoming the slaves and guinea pigs of tools and not at all their masters. So those who want the former are plenty served while those who want the latter (and might not even know yet how to say it) are more starved than ever. The same situation shows in schools too, in fact, to the extent that those looking for meaning will do just about anything to stick with it once they finally find the slightest shred of it.

All the above mulling out of the way, the next main step is to move my schedule around again and as soon as possible to make space for all of the above even if it's only a couple of hours per week, as long as it's every week. The mountain of work seems to only grow so that's not easy and it'll take a while to readjust so that I don't otherwise drop any major ball. But while this readjustment is ongoing, I'll be able at least to mull it all a bit more in case I can add to the list above and even cut out more clearly some tasks that I can drop newcomers into since that's the fuzziest part for now - there's quite a steep climb at the moment to everything I'm in and that's not really ideal for someone new and green. Still, the ideal is never available, so whatever it is, they'll follow along as best they can and that's what it will always be.

  1. While I can always go "why didn't I write the rest too?" there is this fact that writing takes time too and at some point I can either spend my limited time on doing or on writing about doing/having done it. No matter how I go about it, there will always be some amount of selection as to what gets written and what doesn't. This being said, there is as always plenty of space for doing better at this, certainly, especially since I know very well how much worse it was before I specifically set about to make it better, step by step. 

  2. One just went off last year to study Medicine for instance, am I going to consider that as a waste now or what. 

  3. Cafes haven't yet sprouted right on my doorstep and I'm not going to go to the town centre for this sort of thing just to be somewhere more crowded and noisier. 

June 30, 2019

Notes on Graphics for Eulora

Filed under: Coding, Eulora — Diana Coman @ 9:31 p.m.

Eulora's client uses the CrystalSpace (CS) engine for all its graphics needs and CS in turn uses the Cal3D library for animated objects in the world1. Some two years ago2 I wrote this stop-gap glue-and-string script to supposedly help artists and all those eager to contribute but held back by a lack of direct way to experiment and see their "art" coming alive in CS. Well, I wrote it and until the beginning of this month, those scripts plus some basic notions from introductory computer graphics courses taken way back during my uni years were about as much as I could say I actually knew regarding computer graphics.

As it seems quite conceivable by now that I'll end up having to get all the way to the bottom of the graphics pit too, I spent most of this month reading the graphics reference book3 and in parallel trying to make sense in a very practical way of the whole way in which Eulora's client handles all the graphics part from the Planeshift (PS) top layer down to CS and CAL3D. And while I don't yet have a fully working new pipeline for actually loading on the fly all and any sorts of graphics into Eulora's client, I have at least a hard-won better understanding of what goes on in there (despite the layers upon layers of obfuscation) as well as some bits and pieces that might conceivably even make it into such a pipeline in the end. So I'll jot down here my findings so far and those yet-moving ideas about it all, as it helps for sure to structure and unload a bit in preparation for the next stage of work. And it might even help - says ever optimistic me - to start perhaps a discussion with anyone who knows this better than I do so if that's you, kindly comment below and let me know your thoughts.

One difficulty with CS (and it seems to me with Computer Graphics in general really) is this persistent idea of accommodating all sorts of "alternative" ways of doing things that turn out in fact to be precisely the same thing, only packaged differently so that nobody is left behind or something. So instead of having a clear structure of knowledge to acquire, one is supposedly left to be creative and find what "feels natural" or "comfortable" to them. Well, so much for actually learning anything there; welcome to lots of code that does the same thing, only it's harder to tell that upfront without sinking hours into following it all from one end to the other. Nevertheless, after all the reading and poking it and changing it and having my way with it, my current basic structure of CS4 notions of interest for client graphics is this (ordered from top down, as much as I could):

  1. Sectors - those are the highest-level containers of "graphics" of any sort. A sector simply stands for a "location" that is effectively defined by nothing more than a name5 and two mixed characteristics: on one hand the visibility culler in use (so a computational matter) and on the other hand the ambient light (i.e. the light that affects everything in that sector in the same way at any position and as such a matter of aspect pure and simple).

    CS supports two visibility cullers, namely Frustum and Dynamic (dynavis), with the default for the game's needs being the dynavis culler. So a first building block for the desired client pipeline is simply a "GetSector" function. Like all the rest of the "GetX" functions that I have in mind, this will retrieve any information it needs about the sector from EuCore / EuCache (via a CPP-to-Ada layer, as needed) and then either retrieve the required sector from the engine itself if it exists already or otherwise proceed to creating it and loading it into the engine, so that at the end of it, one way or another, the sector is known to exist. When information is missing/incomplete/unavailable, the GetSector method fails and lets the caller decide what to do (try again at a later time, most likely). At this moment I have a prototype of this method as well as some similar prototypes for GetLight, GetTexture and GetMaterial - in other words, I can create from code, without any xml and parametrised, four main things: sectors, lights, textures and materials.

  2. Lights - those are sources of light in a sector, beyond and in addition to any ambient light. They can be customised in all sorts of ways (e.g. intensity, colour, position, direction). Whether they are elements inside a sector or an environmental characteristic of a sector is unclear to me at this stage since they could easily be seen either way. From an implementation point of view however they are currently clearly elements inside a sector, NOT characteristics. This means among other things that one could in principle add/extinguish/modify lights as they please at any given time and I'd say this is how it should be, given that it's not inconceivable to be able to light a candle or turn on any other light in one's own house for instance.
  3. Weather - the more general name for this would be environmental factors I suppose, but so far it's really at most "weather" and in practice mainly... fog. Supposedly one could add here all sorts of *effects* really, so from an implementation point of view I'd even call this part "environmental effects" i.e. whatever effects manifest themselves in the sector as a whole and are specific to it. The fog itself so far appears to be a rather simple matter as it's essentially a limited modifier of ambient light. I don't really know if this is just a limitation of CS or a more general approach and for that matter I can't for the life of me quite grasp yet exactly how graphics-people decide whether some effect they want is rolled into "fog" or into "ambient light" or into anything in there. This is an issue at all levels and so far it really looks more like an individual preference than a choice, quite everywhere. In other words a matter of "am I more skilled in making it look just so by tinkering with this or by tinkering with that"?
  4. Meshes - those are any and all elements in a sector that have what CS calls "geometry". This geometry concept is a bit fuzzy at times but as far as I managed to pin it down, it would seem to mean essentially that the object has a shape defined mainly as a set of "vertices" aka points in the 3D space that the object occupies. There are 3 main types of Meshes:

    1. Movement-capable entities
    2. Fixed entities
    3. Terrain

    Each of the above types of meshes has its own specific requirements and even ways of defining itself, including its geometry (hence part of the fuzziness...). At least in the current setup and as a very brief overview, movement-capable entities are effectively CAL3D models defined as hierarchical structures of meshes with attached action representations (e.g. run/move/jump/fight animations) and skeleton-based geometry; fixed entities are CS's "generic mesh" with vertex and triangle-based geometry; terrain entities are CS's "terrain2" mesh type with attached terraformer for generating the geometry based on a heightmap. The details of each are much gnarlier than this list might suggest and I'll be teasing them out one by one as there is no way around that but at any rate, CS's approach seems to have been to make different xmls for each of them and all sorts of "loaders" on top of loaders and on top of plugins, services, parsers and whatnots. Then PS added its own loader on top of CS's pile of loaders. As you might imagine, this tends to rather obscure the precise way in which one can create those different meshes *without* xml.

  5. Factories - those are abstract concepts containing effectively the blueprint for creating identical meshes. The idea behind them - as far as I can tell - is that it's worth loading in memory what is needed for a mesh only once and then reusing it every time a new specific instance of that precise mesh is required. In practice I'm not sure that this is worth much for anything other than snowflakes or raindrops, since otherwise pretty much each instance of a mesh seems to have its own factory anyway. There is however the argument that the factory should contain only truly common structure (i.e. all players of one race have the same structure) and then leave the customisations to each instance. So keeping this in mind, here they are in the list, factories for meshes (of all sorts, though indeed, each of the three different mesh types has its own specific factory type on top of a generic type).
  6. Materials - as far as I can tell so far, a "material" is essentially any specific combination of one shader + a whole set of values for the shader's parameters (aka "shader variables" in CS terms). Note that the texture itself (or themselves, as apparently one can use several, although it's not clear to me WHY that is any better), if there is one, is effectively an argument passed on to the shader.
  7. Textures - those are the pretty pictures lovingly painted on some/any surface that an object in the world may show to the player at some time or another. Essentially each object is but a set of surfaces linked together and each of those surfaces may be covered with one or another texture, fixed with some spit so that it goes exactly over that bump or starts from that point to end up just-so. In principle, one could provide instead the way to calculate the texture at any point but that seems to be less frequently used in practice (CS claims to support it, although I can't tell to what extent and with what results really).
  8. Shaders - this is where the fuzziness increases to madness levels. On one hand, "shader" seems to really mean all sorts of things in different places, from yet another description of a surface to any code running on the GPU. There is obviously more reading that I need to do on this aspect but so far what I've got is that CS's "shaders" are really just another layer of encapsulation, meant to stand for types of surfaces. So if one has types of meshes encapsulated in factories, one also has types of surfaces encapsulated in shaders and is able therefore to apply some "surface type" to any given texture so that the same picture will look "cloth-like" or "stone-like" depending on the nature of the surface on which it is applied. This is about as clear as I can get it now and I can't say I like it much. On top of this, shaders in CS are yet another pile of layers and layers of xml with intricate dependencies on one another and on who-knows-what-else in the name of reusability6.
  9. Render Loops - those are (in usual graphics style as far as I can tell) simultaneously in parallel with and on top of the "shaders" above. They are yet another encapsulation, this time of types of rendering + lighting objects. To make the difference between shaders and render loops abundantly clear, default render loops are of course provided by CS, as files in the /data/shaders directory. Admittedly I totally lack the expertise to say at this stage just why BOTH shaders and render loops are needed as separate entities and/or why not either do with only one of them or otherwise merge them together in a single thing at least, but hopefully one of my more knowledgeable readers clarifies this for me. Failing such clarification, I'll need to keep on reading and experimenting with it, of course, for as long as it takes, what else.
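As for the "GetX" pattern from point 1 above, here's a hypothetical skeleton of it - a plain map stands in for the engine and none of the names below are real CS or EuCore identifiers: look the item up in the engine first, create and register it only if the data source has what's needed, and fail cleanly otherwise:

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical skeleton of the "GetX" pattern (invented names, plain
// maps standing in for both the CS engine and EuCache): return the
// sector if the engine already has it, otherwise create and register it
// from cached data, and fail cleanly when that data is missing.
struct Sector { std::string name; std::string culler; };

typedef std::map<std::string, Sector>      Engine;   // stand-in for the engine
typedef std::map<std::string, std::string> EuCache;  // name -> culler, say

// Returns NULL when the needed data is unavailable, leaving the caller
// to decide what to do (most likely retry at a later time).
Sector* GetSector(Engine& engine, const EuCache& cache, const std::string& name) {
    Engine::iterator found = engine.find(name);
    if (found != engine.end()) return &found->second;     // already loaded
    EuCache::const_iterator data = cache.find(name);
    if (data == cache.end()) return 0;                    // missing info: fail
    Sector s; s.name = name; s.culler = data->second;     // e.g. "dynavis"
    return &engine.insert(std::make_pair(name, s)).first->second;
}
```

The same get-or-create-or-fail shape would presumably repeat for GetLight, GetTexture and GetMaterial, each with its own bundle of parameters instead of the single culler string used here.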

In addition to the list above, there would be also the easier part of the interface "skin" aka the various icons and decorations for windows, buttons, labels and whatnots. This part is also on the list to be made to rely fully on data received from EuCache but so far I haven't touched it yet (other than keeping an eye out for what I could do about it whenever other bits and pieces take me close to that part of the code, of course).

For a fresh image, the TestChar at least moved out of its cubic box and into a rather faceted sphere, in its quest for the promised infinite landscape. Lights got a bit better, shader variables were sorted so as to neither block nor directly crash CS anymore. Next on the list is fully figuring out just how to properly feed "geometry" to CS as what it is, namely several sets of numbers (vertices, texels, normals, triangles) rather than all those + a ton of xml on top. This is a required step in order to have the GetFactory prototype too, for static meshes at least. Once that is done, the next step is to move on to factories for terrain and then for animated meshes. Until then though, here's Test-in-his-sphere:

  1. For completeness, I should mention that CS also has its own native support for animated objects, via 2 different sorts of objects even, but neither of those is as flexible and overall useful as CAL3D is. Therefore, I simply consider that animated object = CAL3D from now on. 

  2. Already two years passed! And to think that I'm picking it up precisely from where I left it all this time ago, kind of boggles the mind. 

  3. Foley, J.D., van Dam, A., Feiner, S.K. and Hughes, J.F., "Computer Graphics: Principles and Practice", 2nd ed., Addison-Wesley. Yes, I'm aware that there is a newer edition of this book, supposedly improved, better, shinier, more fashionable whatever have-you. I'm also aware that it's full of what can only be called advertising for Microsoft while being way lighter on what I'm interested in, namely precisely principles of computer graphics. So I'll stick with this edition, thank you. 

  4. Note that there is no desire to make all Eulora somehow CS-dependent as such but at this stage, I have to at the very least support CS as it were so I'm aiming for using CS as a practical example of handling client graphics, not as a driver of standards or content. 

  5. CS uses names aka text/strings as identifiers to the extent that it *requires* those for ~all the Find functions at all levels. It *does* have its own internal IDs and it even exposes them if one tries really, really hard to get to them but they are basically not very useful otherwise and moreover they can't even be set to match an external set of IDs (i.e. one can't create an item with a specified ID.)  

  6. "Reusability" seems to mean that the original author can skip writing a few lines of code at the cost of nobody else being able to understand afterwards at all how the thing works unless they spend at least double the time that it would take them to write it from scratch anyway. 

June 18, 2019

Euloran Blu's and Boos

Filed under: Coding, Open Sores — Diana Coman @ 9:24 a.m.

The legacy code base on client side can best be described as a pile of tangled pointerisms and assorted CPP mess under the name of Planeshift (PS), on top of a smaller ball of ifdefism1 and XMLade called CrystalSpace (CS), on top of a comparatively saner and at any rate smaller ball of CPP called Cal3d, the whole construction bursting with a lot of good intentions and precious little useful structuring or maintenance consideration. While sanity seems to quickly decrease from bottom (cal3d) up (to CS and then various parts of PS), what strikes one even more is that the whole thing is afflicted with a grave case of XML addiction that starts from CS and then simply comes to its ugly fruition in PS. It has been obviously written at the very top of the XML hype - everything and I do mean *everything* and everywhere is best described in XML! It could be argued that the authors simply wanted to have a lot of XML of their own but since they couldn't do just that2, they had to also do a graphics engine and a sort of game on the side, almost as a second consideration really. Come to think of it, why not call it then at least XMLSpace and XMLShift?

While CS itself has at least some clear initial design and therefore some structure (if a very clunky one at best of times mainly because the author really had too many good intentions), its main underlying assumptions are precisely the opposites of mine: besides the total focus on xml, there is also this obsession with event-driven (which makes it very hard to actually follow anything from one end to the other), the generic plugins (which again means that doing the simplest thing involves a huge scaffolding upfront) and the fact that the whole thing tries rather desperately to make itself "easy to use" by those who don't understand (nor even look into) the code at the expense of course of those who do since there is no possible way to have one without the other. Leaving aside for a bit how one can never really do the work of others for themselves, the sad result is that tracking down anything in full becomes extremely difficult. The documentation helps here but only up to a point since it is not exactly up to date and more importantly, it doesn't really focus on helping the reader understand the code - it focuses instead with all its good intentions on helping the reader use the library without really understanding it, as a sort of black or at best graying box. And moreover, pervasive everywhere, there is the wrong sort of "simplification": while the weight of it all becomes obvious to the author too (it's quite usual for various constructors to have at least 10 parameters and they are already several layers deep so they anyway call parent constructors with yet another set of parameters), he doesn't see or choose a better abstraction that would help rather than hinder the effort. Instead, he arbitrarily fixes some bits and pieces: for instance, the terrain2 plugin simply expects the heightmap to be in a file called sectorName_heightmap in exactly the current path of the VFS3. 
In other, fewer words, weaning the legacy code base off its unsavoury assumptions and xml-addiction so that it simply loads art as it becomes available is extremely frustrating and time-consuming, making the overall situation quite foggy blue with cornered cases and hardly anything useful in sight:

The art4 that you just admired above might not look like much but it's a big achievement on Euloran soil: for the first time in Euloran history, a Testy character (still sadly xml-based himself) made it outside the pre-defined sphere of the XML-Island and into the great unknown of a non-predefined, non-xml, fully code-loaded and discovered-as-you-go world! It's true that the world is for now only a cube with rather sharp and unyielding corners and no landscape at all, it's true that the very three suns that gave reasonable light on the Island look foggishly blue in here and it's sadly true that the character itself is still trapped in the XML bubble. But it's a working start and moreover it's a seed that has just taken root within the hostile soil of the legacy code so that it can grow out of it, hopefully discarding as it grows more and more of the unhelpful mess!

  1. This is a term of art, the logs will help you. 

  2. Why? Why can't you do *just that*, whatever it is you actually want, did you ever ask yourself? 

  3. Virtual File System inside CS.... 

  4. In computer graphics EVERYTHING is art, ok? There is no such thing as mere fitted drawings and clumsy, clunky approximations, no, no. All the games have nothing but art in them! Hell, they *are* nothing but art. Well, of snorts, yes. 


Theme and content by Diana Coman