Ossa Sepia

March 15, 2019

EuCrypt Chapter 16: Bytestream Input/Output Keccak

Filed under: EuCrypt — Diana Coman @ 4:30 p.m.

~ This is part of the EuCrypt series. Start with Introducing EuCrypt. ~

Keccak1 suffers from a bit/byte issue: while internally the actual transformations work at byte level, the input is taken bit by bit and expected in LSB order2, while the output is offered again bit by bit but comes out in MSB3 order if taken byte by byte. Moreover, the padding applied to any input to bring it to a convenient length is again defined - and even applied - at bit level rather than byte level. While originally I discussed this issue in more detail and systematised the options available for Keccak to get some clarity and make a decision on either bit-level or byte-level, the actual implementation followed the original specification as closely as possible, retaining bitstream (i.e. bit by bit) input and output while doing the transformations internally at byte level, as confusing as that was. As soon as this implementation was put to actual use though, it became clear that the bitstream part really has to go because it causes huge waste (8x stack-allocated space for any input, quite correctly described as exploding) and trouble in the form of overflowing the stack even for relatively small inputs. So this is the promised .vpatch that updates Keccak to work on bytestream input and produce bytestream output, getting rid of the x8 waste and effectively choosing options 1.1, 2.2 and 3.2.

The obvious change is to convert all the bit* to byte*. This includes constants such as the Keccak rate that is now expressed in number of octets, types such as bitstream that becomes bytestream and functions such as BitsToWord that becomes BytesToWordLE. Note that the internals of the Keccak sponge (i.e. the transformations) are unchanged since they weren't working at bit level anyway. The less obvious change is the addition of bit-reversing (via lookup table since that's fastest) - this is needed to ensure that Keccak receives and produces at octet level the same values as it did at bit level. Specifically, this means that input values on Big Endian iron will be bit-reversed for input (since input is expected LSB) and obtained output values on Little Endian iron will be bit-reversed (since output is extracted MSB). It's ugly but so far it's the only option that doesn't essentially change the Keccak hash.
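To make the LSB-in / MSB-out mismatch concrete, here is a small Python illustration (the EuCrypt code itself is Ada; these helper names are mine, for exposition only):

```python
# Illustration only: the same octet read bit by bit yields different bit
# sequences depending on the convention, which is exactly the mismatch
# between Keccak's LSB input expectation and its MSB byte-level output.

def bits_lsb_first(octet):
    """Bits of an octet in the order Keccak expects input (LSB first)."""
    return [(octet >> i) & 1 for i in range(8)]

def bits_msb_first(octet):
    """Bits of an octet in the order they come out when Keccak's output
    is read byte by byte (MSB first)."""
    return [(octet >> i) & 1 for i in range(7, -1, -1)]

# 0x01 is the sequence 1,0,0,... in one convention and ...,0,0,1 in the other.
```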

The .vpatch contains those main changes:

  • Bit reversing: there is a lookup table with the corresponding bit-reversed values for any byte (i.e. 0-255 values mapped to corresponding values with the other bit-order convention). So the bit-reversing of a byte is simply a matter of reading the value from the table at that index.
  • Input: bytestream instead of bitstream, so effectively an array of octets. Because of Keccak's LSB expectation re input bits, the BitsToWord function became BytesToWordLE, meaning that it will reverse the bits on Big Endian iron. The padding is also expressed at byte level in LSB format (so the last byte is 0x80 rather than 0x01).
  • Output: bytestream instead of bitstream. Because of Keccak spitting out MSB when output is extracted at byte level, the WordToBits function became WordToBytesBE, meaning that it will flip the bits on Little Endian iron so that such iron sees the same value as a Big Endian iron would.
  • Tests: there is an additional test that checks the values in the bit-reversing table for correctness, effectively calculating each of them and comparing to the constant; other than this, the same tests are simply updated to use bytestream/bytes as input and output; as in the previous version, there are also tests for calculating a hash or using Keccak on a String, quite unchanged.
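For illustration, the first two bullet points can be sketched in a few lines of Python (the .vpatch itself is in Ada and the function names below are mine; the padding shown assumes the standard Keccak pad10*1 rule expressed at byte level):

```python
# Python sketch, for exposition only, of the byte-level mechanisms above:
# a bit-reversing lookup table and pad10*1 padding applied octet by octet.

def reverse_bits(b):
    """Reverse the bit order of one octet, e.g. 0x01 <-> 0x80."""
    r = 0
    for _ in range(8):
        r = (r << 1) | (b & 1)
        b >>= 1
    return r

# Lookup table: index = byte value, entry = same value with reversed bit
# order; reversing a byte is then a single table read, as in the .vpatch.
REV_TABLE = [reverse_bits(b) for b in range(256)]

def pad10star1_bytes(msg, rate_octets):
    """pad10*1 at byte level in LSB convention: first pad byte 0x01, last
    pad byte 0x80; if a single pad byte is needed the two merge into 0x81."""
    pad_len = rate_octets - (len(msg) % rate_octets)
    if pad_len == 1:
        return msg + bytes([0x81])
    return msg + bytes([0x01]) + bytes(pad_len - 2) + bytes([0x80])
```

Note how the last byte of the padded message is indeed 0x80 in this convention rather than 0x01.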

The .vpatch for the above can be found on my Reference Code Shelf and is linked here too, for your convenience.

As I don't have any Big Endian iron around, I couldn't test the above on anything other than Little Endian so if you are looking for something easy to help with, you've found it: kindly test and report here in the comments or on your blog (a pingback will be enough or otherwise comment here with a link).

  1. As originally specified by Bertoni et al. 

  2. Least significant bit. 

  3. Most significant bit. 

March 11, 2019

Compiling Crystal Space with SJLJ

Filed under: Coding — Diana Coman @ 12:42 p.m.

Miserable failures and unwanted problems make great blog fodder1 and so here I am, writing yet another set of compilation notes. This time it's all about how to ditch the non-standard ZCX run-time for its well-behaved and otherwise similarly fast ancestor, SJLJ.

My project to compile is fairly complex as it links dynamically with a bunch of libraries and it includes code in C, C++ and Ada all together. The first naive attempt at simply setting the PATH environment variable to point to the SJLJ GNAT (with dynamic linking) and then recompiling the server code itself (i.e. without recompiling the libraries) failed at the linking stage:

Error relocating ./euserver: _Unwind_SjLj_RaiseException: symbol not found
Error relocating ./euserver: _Unwind_SjLj_Unregister: symbol not found
Error relocating ./euserver: _Unwind_SjLj_Register: symbol not found
Error relocating ./euserver: __gxx_personality_sj0: symbol not found
Error relocating ./euserver: _Unwind_SjLj_Resume: symbol not found
Error relocating ./euserver: _Unwind_SjLj_ForcedUnwind: symbol not found

The good thing about those errors is that they confirm at least that the SjLj run-time is indeed to be used, rather than ZCX. So the correct intentions are there but the result is still miserable failure. A quick report in the forum yielded immediate help as Stan kindly provided the very useful observation that such errors at the final linking stage can only mean that it is still trying to link with a non-SJLJ gcc standard lib. Outrageous and unpermitted and how does it dare2!! A poke of the executable euserver by means of ldd revealed that indeed, it was linking the ZCX libstdc++.so.6 and libgcc_s.so.1, and all of that because of the pesky Crystal Space (CS) library! After a bit of initial doom when I thought I might need to recompile the whole bloody toolchain manually with --enable-sjlj-exceptions or some such, some rummaging around in the SJLJ-istic GNAT's directory revealed that it already has everything that's needed - if only it were linked instead of the ZCX-istic versions! So it came as a relief really that it's "only" CS3 that needs to be recompiled too4. Onwards!

A quick look through CS's makefile made it plenty clear that the naive approach of a simple .gpr file is not going to work: basically CS suffers deeply from ifdefism since it caters to all-and-everyone it can think of. So at the moment there is no option other than going at it with simply the PATH set properly to the SJLJ GNAT and then running: make clean; ./configure --without-java --without-perl --without-python --without-3ds --with-cal3d=$HOME/eulora/cal3d . And then all of a sudden, configure reports that it couldn't find zlib although obviously, zlib did not run away anywhere. Moreover, a closer inspection of the full output of configure reveals that it *does* actually find zlib (so it's not a matter of "don't know where it is") but it reports it as unusable since it apparently can't find the... headers. After various attempts at feeding it the path to the headers in various ways5, I got a bigger hammer and simply ran (making sure that it is PRECISELY the same version I had otherwise in the ZCX-istic toolchain):

curl -sL "http://www.zlib.net/zlib-1.2.11.tar.gz" -O
tar -xzvf zlib-1.2.11.tar.gz
cd zlib-1.2.11
./configure --prefix=$HOME/eulora/zlib
make
make install

Back in the CS dir, I ran again:

jam distclean
./configure --with-z=$HOME/eulora/zlib --without-java --without-perl --without-python --without-3ds --with-cal3d=$HOME/eulora/cal3d
jam -aq libs plugins cs-config

The configure above worked fine and it reported that it did find zlib this time and it's usable and all is good. And then the jam went ahead and compiled all sorts, some of them even *with* #include <zlib.h> so obviously all was well, right? Except:

In file included from ./include/csutil/archive.h:36:0,
                 from cs/plugins/filesys/vfs/vfs.cpp:39:
./include/csutil/zip.h:32:18: fatal error: zlib.h: No such file or directory
compilation terminated.

So VFS6 is brighter than all the rest and it can't find zlib. Moreover, a manual run of the very same compilation line with added -I$HOME/eulora/zlib/include completed absolutely fine and then the jam -q libs plugins cs-config went through without any trouble at all. The rest was simply waiting for the rather slow and bloated build of CS. So at this point, one would need to chase about in the config and jam files of CS and vfs to find the place to add the missing include flag. Some searches through the whole autoconf+configure+makefile+jamfile revealed that the ZLIB.CFLAGS and ZLIB.LFLAGS are set correctly for Jam to use so the problem seems to be simply that the vfs plugin is not aware it needs to use them. Nevertheless, I am not very keen on spending even more time essentially learning more on all those tools that are set to be discarded as at some point everything will have to move to GNAT and gprbuild anyway. So for now there it stands, ugly as it is: manually re-run that compilation line for vfs and be done with it7.

Once CS was compiled with SJLJ, the rest went perfectly fine, server and tests and everything else. Do remember though to fully clean your previous compilation before switching from ZCX to SJLJ as otherwise all sorts of weird errors will appear. With gprbuild that's as simple and fast as running in your project's directory where the .gpr file is:

gprclean -r

And that's all! I happily have the server using SJLJ and so I can finally go up rather than down a few levels, up this pit of unexpected troubles and back to what I was actually doing - migrating both server and client to Eulora's new communication protocol so that there can finally be a text client too and everything one wants!

  1. Not that I am really lacking either problems or blog fodder but at this rate I'll end up with a compilation notes category all by itself. Other problems, failures and assorted troubles want a place in the sun too! 

  2. Especially since all the .gpr files of the server code specifically include the option --nostdlib to make sure the standard lib is not included and then a direct specification of the musl, ZCX rpath and dynamic-linker to use. 

  3. I recompiled also Cal3d since CS depends on Cal3d but that was simply a matter of cleaning properly the previous compilation, configure and recompile. Cal3d compiles even with a basic .gpr file so by comparison to CS it's absolutely wonderful. 

  4. In hindsight of course it has to be recompiled since CS contains all but the kitchen sink and specifically for this part of interest all sorts of threads and thread management "utilities" that pull in all sorts.  

  5. A run of ./configure --help said that one can use --with-z=path-to-zlib to directly tell configure where zlib is; very nice, except it still wasn't enough. 

  6. Virtual File System; inside a supposedly graphics engine; you have no idea. 

  7. CS is really to be compiled only once per system and that's it. If it becomes something I need to compile more often then it will have to be trimmed and ported to gprbuild anyway. 

March 4, 2019

GNAT Compilation Notes

Filed under: Coding — Diana Coman @ 4:55 p.m.

Ada is *the* republican programming language1 and GNAT is its compiler but the compiler of GNAT itself is... another GNAT! While this would mean at first glance a self-contained environment and the end of headaches once GNAT + Ada are fully digested, the recent surprises with GNAT's default disdain for the Ada standard show that some parts are likely to be simply indigestible. Such parts - ZCX run-time especially - will probably end up at least sidelined if not directly and explicitly discarded by an emerging republican standard. Still, as part of investigating those unwanted surprises and especially the actual differences between ZCX and SJLJ, I gained a lot more experience with compiling GNAT itself in all sorts of configurations and on various environments. And since I promised to publish my GNAT compilation notes, it makes sense to publish at least the complete summary to have it in one place for my own future reference2.

GNAT Versions

  • The initial GNAT that made it into TMSR and served so far rather well is Adacore's 2016 GNAT GPL, mirrored previously as part of my work on EuCrypt. The mirrored version is compiled with both ZCX and SJLJ run-times, allowing one to easily switch between them with a simple flag, namely: --RTS=sjlj given to gprbuild3. ZCX is the default so compiling without the --RTS flag simply means that ZCX is used.
  • Ave1's GNAT version 2018-01-17: this uses Adacore's 2016 GNAT only to bootstrap the compilation of a republican GNAT linked against the MUSL C library. Note that the shitgnomes have already messed up some of the links in the scripts there - according to my notes the current scripts fail with a "not found" error at fetchextract http://ftp.gnu.org/gnu/gcc/gcc-4.9.adacore2016/ gcc-4.9.adacore2016.tar.bz2 . Obviously, the pill to this is to have everything already downloaded and only then run the script (i.e. no fetch, only extract).
  • Ave1's GNAT version 2018-04-30 adding support for ARM architectures.
  • Ave1's GNAT version 2018-05-15 adding parallel build support that significantly cuts the time required for the full build4.
  • Ave1's GNAT version 2018-05-29 moving towards supporting only static linking (but not quite there yet, see next version).
  • Ave1's GNAT version 2018-06-01 ditching dynamic linking and therefore building only statically linked programs.
  • Ave1's GNAT version 2018-09-24 fixing an issue with a hard-coded path.

As can be easily seen above, there are essentially only three main versions: the original Adacore GNAT (glibc-based), Ave1's musl-based GNAT supporting dynamic+static builds (i.e. version 2018-05-15) and Ave1's musl-based GNAT supporting only static builds (i.e. latest version, 2018-09-24). Ideally one would be able to ditch dynamic linking entirely and simply work only with Ave1's latest version but so far I still have on my hands all sorts of code that is far from ideal.

Compilation Options

  • To make use of your computer's cores and speed up the build time: export MAKEOPTS="-j8" (obviously, adjust the value depending on your number of cores).
  • To build effectively offline5: download all the needed tarballs and unpack into the tarballs directory. Since I did this, I went one step further and also simply deleted the scripts download-adacore*.sh as well as their calls from the build-*.sh scripts but this is entirely up to you of course. NOTE: currently the 2018-09-24 seems to have a problem with all the tarballs already in place - my latest run failed complaining that the bin dir doesn't exist inside the created bootstrap, specifically: ../../extra/../build.sh: line 148: cd: /home/eu-test/adabuilds/build510/bootstrap/bin: No such file or directory. This requires deeper digging and Ave1 will perhaps help me out through the maze of scripts in there.
  • To build with SJLJ run-time (NOTE: you'll obtain ONLY SJLJ so no switching with --RTS):
    1. Add to extraconfig.sh the following 2 lines:


    2. Change the setting of ZCX_BY_DEFAULT from "True" to "False". Ideally this would be changed via a Makefile but so far I wasn't able to find the Makefile that actually works for this. So I hacked it for now: unpack the gcc-4.9.adacore2016.tar.bz2 archive from tarballs/ ; the files of interest are in gcc-4.9.adacore2016/gcc/ada namely system-linux-* (one for each architecture - I modified all of them): change in those ZCX_BY_DEFAULT to False; pack the tar.bz2 archive back (e.g. tar -cjvSf gcc-4.9.adacore2016.tar.bz2 gcc-4.9.adacore2016); run the ./build-ada.sh script and that's it.
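Since the hack above is all manual anyway, here is a hedged shell sketch of the ZCX_BY_DEFAULT flip, demonstrated on a stub file standing in for the real system-linux-* sources (I am assuming the setting is spelled in them as a "ZCX_By_Default ... := True" constant; on the real tree, unpack the tarball from tarballs/ first and repack it afterwards as described above):

```shell
# Hedged sketch of the manual ZCX_BY_DEFAULT flip from step 2 above, run
# here on a stub file so the commands work standalone; on the real tree:
#   tar -xjf gcc-4.9.adacore2016.tar.bz2   (before)
#   tar -cjvSf gcc-4.9.adacore2016.tar.bz2 gcc-4.9.adacore2016   (after)
mkdir -p gcc-4.9.adacore2016/gcc/ada
# Stub standing in for a real system-linux-* file (line format assumed):
printf '   ZCX_By_Default            : constant Boolean := True;\n' \
  > gcc-4.9.adacore2016/gcc/ada/system-linux-x86.ads
# The actual flip, one file per architecture:
for f in gcc-4.9.adacore2016/gcc/ada/system-linux-*; do
  sed -i 's/\(ZCX_By_Default[[:space:]]*: constant Boolean :=\) True/\1 False/' "$f"
done
grep 'ZCX_By_Default' gcc-4.9.adacore2016/gcc/ada/system-linux-x86.ads
```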

So far I was able to compile the following:

  • 2018-09-24 (aka latest, static ONLY): ZCX version using existing Adacore's GNAT; ZCX version using previously compiled 2018-09-24 ZCX version (i.e. itself!)
  • 2018-05-15 (aka older, supporting both dynamic and static linking): SJLJ and ZCX versions using existing Adacore's GNAT; SJLJ and ZCX versions using previously compiled 2018-05-15 on same machine (i.e. without running into the paths issue).

  1. For a summary of "why Ada", read Mocky's Log Reference on it. 

  2. Or so I hope - that in the future the reference will save me from gaining even *more* experience of the same sort that I gained here. Ever the optimist that I am, I know. 

  3. gprbuild --RTS=sjlj 

  4. I could build GNAT in a bit over 1 hour on 8 cores. 

  5. I recommend this for at least 3 reasons: you know what you are building with, since you put those tarballs there - hopefully you checked them; you avoid any trouble with broken links and similar shitgnommeries; it's faster. 

March 3, 2019

Inconsiderate Considerations or a Tiny Euloran Dataset on Blueprints

Filed under: Eulora — Diana Coman @ 8:57 p.m.

Blueprints in Eulora are currently the product of expensive clicks: millions go in to pay only for the most inconsequential of tokens, thousands come out and few of the blueprints obtained are what one wanted to get anyway. In typical euloran fashion however, this exercise in the swamps of frustration is anyway on one hand unavoidable and on the other hand potentially, eventually, greatly rewarding1 - with a bit or a bit more of luck and other assorted arcana. That being so, I let my luck do whatever it does and otherwise I set about to collect in one place a tiny bit of data on what comes out of where. About 200mn ECu later, here's what I've got when clicking them with a noob2:

Consideration Type3                     | Consideration q4 | Bundle q | Output q | Blueprints obtained5
10 x Apprentice Tinker Considerations   | 179              | 37006    | 34       | LTF, Slag, CT, CC, PPB, IO, DUH, SCS, BCT
10 x Apprentice McGuyver Considerations | 47               | 32877    | 17       | QF, POC, Rockpile, CSW, GT, PC, CB, CDS, ETD, RH
7 x Neophyte Gumbo Considerations       | 152              | 7528     | 14       | NP, TT, FT, TF, WOA, BBB, ACG, BNG, CP, CF
10 x Neophyte McGuyver Considerations   | 2670             | 18389    | 91       | SFH, IT, PS, MK, TM, BH, RH, CB, POC, HG

Conspicuously missing from the above is the very blueprint that I really wanted to see, namely BMS10. The BMS would be in the McGuyver line but 10 Neophyte clicks and 10 Apprentice clicks failed to produce even 1 single bp. Rather sadly, the Apprentice McGuyver clicks still produced the useless Caveman Bedroll blueprints and other such shit (Reed Hauberk!) that one has too much of, even just from Neophyte clicks anyway. It's totally unclear why exactly some blueprints would be SO common even though they are not necessarily the cheapest ones: take for instance the ton of LTF (4070 bv) or CT (1404 bv) blueprints obtained (0.1mn of each) and compare that with the 13 total BCT (540 bv) blueprints! If anything, it would be the cheaper blueprints that are harder to get but then again, I got tons of "Somewhat Fashionable Hat" and "Hoof Gloves" useless blueprints from the McGuyver line and those are the cheapest in that line (656 bv and 622 bv respectively).

Assuming that those rare bps aren't simply unobtainable at this moment for whatever reason, the obvious conclusion is that those considerations are rather inconsiderate of the ~200mn base value sunk into them and won't reveal in return even the very basic of *what* blueprints should one expect from where. Then again, it's not *consideration* you want from a good game, is it?

  1. Different players might find different things rewarding but a basic reward would be the rare serious "popping" i.e. obtaining for once millions out of thousands rather than the other way around.  

  2. the ~only way I have to actually get a wider spectrum of bps since clicks with Foxy end up as high quality output and 1 type of bp in the usual cases; in the rare case, Foxy can in principle get more bps too but here I wanted to see precisely WHAT bps one gets from where so I suppose it's a gamble on the types more than the overall value itself. 

  3. Each crafting line in Eulora has its own Considerations line. Each line has several levels of which so far there are three seen: neophyte, apprentice and journeyman. 

  4. Quality of the Consideration blueprint used for this click. 

  5. They are ordered based on quantity obtained, from high to low. 

  6. ~=6.8mn base value. 

  7. ~=6.05mn base value 

  8. ~=1mn 

  9. ~=2.83mn base value 

  10. In the missing and becoming ever so rare range there would also be the CFT bp from Tinkering but at least I did not do a Neophyte Tinkering click this time... 

February 28, 2019

ZCX vs SJLJ - Data Set

Filed under: Coding — Diana Coman @ 5:37 p.m.

This all started with a misbehaving Ada program: the one that keeps "killing" its tasks with blind bullets that do precisely and exactly nothing. The horror of having code that refuses to actually abort its own spawned tasks is amply discussed in the logs linked above but I think it's worth pointing out the thick layers of horrid at play: it's not only (as if that wasn't enough by itself) that it won't abort its tasks but also that it does this on the quiet as it were - the abort code is effectively hijacked to do nothing for as long as one uses the default ZCX1. Now why exactly is ZCX the default given that it's not providing what the Ada standard says it should? And what is anyway its rationale2 for chopping away bits of the standard? No idea really, the docs are silent on this: it's apparently enough bother that they admit at least that indeed "the ZCX run-time does not support asynchronous abort of tasks (abort and select-then-abort constructs)" and otherwise throw in there an unsubstantiated statement that "most programs should experience a substantial improvement by being compiled with a ZCX run-time." Why would they bother with rationale or with justifying decisions - nobody ever asks for or indeed reads such things anyway as they are way too complex, aren't they?

Once the culprit was identified as ZCX, a quick test3 showed that the older SJLJ4 is indeed perfectly capable of killing spawned tasks. And so the investigation moved on to the possible cost of using SJLJ vs ZCX. Supposedly ZCX is faster and SJLJ is slower, but suppositions are one thing and hard data quite another. To figure out if that's indeed the case, one needs simply to design an experiment for the exact sort of thing investigated and then set to work implementing and running it. Given the docs' claim that SJLJ comes with a significant penalty for any exception handlers present, the experiment included also runs with and without exception handlers to try and gauge if those add indeed significantly to the running time. Specifically, there were two main programs used (linked below if you want to reproduce this experiment):
1. Nested procedure calls (with and without exception handlers): there are three procedures A, B, C that increment each a global counter and pseudo-randomly5 call one or both of the other two procedures for as long as the counter is below a certain value. Note that you'll likely need to increase the size of the stack for this experiment since otherwise it will quickly overflow.
2. Nested for loops with Serpent encryption (NO exception handlers): there are between 1 and 22 for loops counting either from 1 to 10 or from 1 to 100 and going to the next level on a subset of values (those divisible by 2, by 3 or by 4 - those are essentially knobs to adjust for different runs); the innermost loop performs a Serpent encryption.
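For the curious, the shape of the second experiment can be re-sketched in a few lines of Python (the programs actually used are the Ada ones linked in this post; sha256 merely stands in here for a Serpent encryption, so only the structure and the call counts carry over, not the timings):

```python
# Rough Python re-sketch of experiment 2 (nested loops); hashlib.sha256 is a
# stand-in for one Serpent encryption, so only the number of innermost calls
# matches the Ada original, not the running times.
import hashlib

def innermost_work(block=b'\x00' * 16):
    return hashlib.sha256(block).digest()  # stand-in for one Serpent call

def nested_loops(depth, hi=10, divisor=4):
    """Each level loops 1..hi and descends (or, at the deepest level, calls
    the stand-in cipher) only on multiples of divisor; returns the total
    number of innermost calls made."""
    calls = 0
    for i in range(1, hi + 1):
        if i % divisor == 0:
            if depth == 1:
                innermost_work()
                calls += 1
            else:
                calls += nested_loops(depth - 1, hi, divisor)
    return calls
```

With hi=100 and divisor=2 this gives 50^depth innermost calls, which is where the 50^n divisors in the "time per Serpent" columns come from.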

I ran this experiment on two different computers:

  • S.MG Test server: this is an x86_64 AMD Opteron(tm) Processor 6376 (8 cores, 2.3GHz).
  • A torture-room desktop code-named D1: this is an Intel i5, 4GB RAM (2 cores, 3.2GHz).

The S.MG Test server runs 3 different incarnations of Ave1's GNAT: the ZCX variant which is obtained as a straight compilation using Ave1's scripts from the 15th of July 2018; the SJLJ-compilation of the same (i.e. supporting both dynamic and static linking); the SJLJ compilation of the latest, static-only version from September 2018.

The D1 machine runs Adacore's GNAT 2016 and simply switches between ZCX and SJLJ via the --RTS parameter given to gprbuild.

The full set of data obtained is this:

S.MG Test server
Ave1's GNAT, build 301 (dynamic linking) and build 202 (static only) for SJLJ, older Ave1's GNAT for ZCX

Procedure Calls testing  with MT as PRNG;
Max = 16777216; 3 procs, mod 3 calls both, one or the other of the 2;
NO serpent calls; X final gives the total number of calls;
Seeding of MT: 16#111#, 16#222#, 16#333#, 16#444#  -> X = 22368144.
NB: as MT is used, X final is repeatable between runs (same seed).

Handlers | ZCX (s)  | SJLJ (s) (build 301) | SJLJ (s) (build 202 - static only)
0        | 1.119    | 1.211                | 1.165
0        | 1.174    | 1.210                | 1.161
1        | 1.299    | 4.106                | 3.850
1        | 1.302    | 3.777                | 3.755
2        | 1.599    | 3.929                | 3.301
3        | 1.862    | 4.125                | 4.193

 Nested Loops testing 
loops 1 to 100; if mod 2
Loops (runs)    | ZCX (s)      | Time per Serpent (ZCX)            | SJLJ (s) (b301) | SJLJ (s) (b202 - static ONLY)
1 (a)  (1k runs)| 0.000168893  | 0.000168893 / (50^1) = 3.377e-6   |   0.000168199   | 0.000168727
2 (b)  (1k runs)| 0.007213758  | 0.007213758 / (50^2) = 2.8855e-6  |   0.007055130   | 0.007084592
3 (c)  (1k runs)| 0.351611073  | 0.351611073 / (50^3) = 2.81e-6    |   0.352684722   | 0.351471743
4 (d)  (1 run)  | 17.740324000 | 17.740324000 / (50^4) = 2.83e-6   |  17.580107000   | 17.73699000
5 (e)  (1 run)  | 879.95117100 | 879.951171000 / (50^5) = 2.81e-6  | 881.139986000   | 879.7342540

loops 1 to 10; if mod 4
Loops (runs=1)  | ZCX (s)      | Time per Serpent (ZCX)            | SJLJ (s) (b301) | SJLJ (s) (b202 - static ONLY)
1               | 0.000008     | 0.000008 / (2^1) = 4e-6           |  0.000010       |  0.000009
2               | 0.000017     | 0.000017 / (2^2) = 4.25e-6        |  0.000016       |  0.000017
3               | 0.000031     | 0.000031 / (2^3) = 3.875e-6       |  0.000030       |  0.000024
4               | 0.000046     | 0.000046 / (2^4) = 2.875e-6       |  0.000057       |  0.000056
5               | 0.000150     | 0.000150 / (2^5) = 4.6875e-6      |  0.000110       |  0.000089
10              | 0.027650     | 0.027650 / (2^10)= 2.7e-5         |  0.002937       |  0.002915
22              | 11.98272     | 11.98272 / (2^22)= 2.8569e-6      | 12.045471       | 12.011244

loops 1 to 10; if mod 3
Loops (runs=1)  | ZCX (s)      | Time per Serpent (ZCX)            | SJLJ (s) (b301) | SJLJ (s) (b202 - static ONLY)
1               | 0.000012000  | 0.000012000 / (3^1) = 4e-6        |   0.000013000   |   0.000010000
2               | 0.000034000  | 0.000034000 / (3^2) = 3.778e-6    |   0.000027000   |   0.000033000
3               | 0.000076000  | 0.000076000 / (3^3) = 2.815e-6    |   0.000111000   |   0.000075000
4               | 0.000219000  | 0.000219000 / (3^4) = 2.704e-6    |   0.000221000   |   0.000220000
5               | 0.000654000  | 0.000654000 / (3^5) = 2.691e-6    |   0.000823000   |   0.000673000
10              | 0.167347000  | 0.167347000 / (3^10)= 2.834e-6    |   0.167906000   |   0.167250000
15              | 40.69105200  | 40.69105200 / (3^15)= 2.836e-6    |  40.743956000   |  40.716526000
16              | 121.9171600  | 121.9171600 / (3^16)= 2.832e-6    | 123.760958000   | 121.816518000

D1 computer 
Adacore's GNAT 2016, switching with --RTS between ZCX and SJLJ

Procedure Calls testing  with MT as PRNG;
Max = 16777216; 3 procs, mod 3 calls both, one or the other of the 2;
NO serpent calls; X final gives the total number of calls;
Seeding of MT: 16#111#, 16#222#, 16#333#, 16#444#  -> X = 22368144.
NB: as MT is used, X final is repeatable between runs (same seed).

Handlers | ZCX (s)  | SJLJ (s)
0        | 0.896    | 0.882
0        | 0.905    | 0.906
1        | 1.058    | 184.516
1        | 1.064    | 265.339
2        | 1.215    | 372.574
3        | 1.329    | 446.821

 Nested Loops testing 
loops 1 to 100; if mod 2
Loops           | ZCX (s)      | Time per Serpent                  | SJLJ (s)
1 (a)  (1k runs)|   0.000095269|  0.000095269 / (50^1) = 1.905e-6  |   0.000098855
2 (b)  (1k runs)|   0.004609351|  0.004609351 / (50^2) = 1.844e-6  |   0.004598492
3 (c)  (1k runs)|   0.231582664|  0.231582664 / (50^3) = 1.853e-6  |   0.230719036
4 (d)  (1 run)  |  11.548194   | 11.548194 / (50^4)    = 1.848e-6  |  11.579138
5 (e)  (1 run)  | 580.261706   |580.261706 / (50^5)    = 1.857e-6  | 586.996228

loops 1 to 10; if mod 4
Loops (runs=1)  | ZCX (s)      | Time per Serpent                  | SJLJ (s)
1               | 0.000009     | 0.000009 / (2^1) = 4.5e-6         | 0.000013
2               | 0.000023     | 0.000023 / (2^2) = 5.75e-6        | 0.000023
3               | 0.000036     | 0.000036 / (2^3) = 4.5e-6         | 0.000043
4               | 0.000083     | 0.000083 / (2^4) = 5.18e-6        | 0.000100
5               | 0.000167     | 0.000167 / (2^5) = 5.218e-6       | 0.000167
10              | 0.005537     | 0.005537 / (2^10)= 5.4e-6         | 0.007399
22              | 8.007915     | 8.007915 / (2^22)= 1.909e-6       | 7.944913
22              | 7.971711     | 7.971711 / (2^22)= 1.9006e-6      | 7.966864         

loops 1 to 10; if mod 3
Loops (runs=1)  | ZCX (s)      | Time per Serpent                  | SJLJ (s)
1               |  0.000008    |  0.000008 / (3^1)  = 2.6e-6       |  0.000009
2               |  0.000017    |  0.000017 / (3^2)  = 1.8e-6       |  0.000019
3               |  0.000142    |  0.000142 / (3^3)  = 1.75e-6      |  0.000143
4               |  0.000427    |  0.000427 / (3^4)  = 5.27e-6      |  0.000150
5               |  0.001267    |  0.001267 / (3^5)  = 5.21e-6      |  0.001259
10              |  0.107943    |  0.107943 / (3^10) = 1.82e-6      |  0.113296
15              | 27.163182    | 27.163182 / (3^15) = 1.89e-6      | 27.167316
16              | 81.724382    | 81.724382 / (3^16) = 1.89e-6      | 81.951616        

Based on the data above, my conclusion so far is that there is rather precious little to justify the use of ZCX: when no exception handlers are present in the code, the running times are similar under ZCX and SJLJ; when exception handlers are present in the code, there is indeed a penalty for using SJLJ but this penalty is not all that big on some irons. This being said, there is little reason that I can see for having lots of exception handlers in any sane code anyway so I'm really not all that concerned about this honest cost of SJLJ especially when compared with the "you can't kill your own tasks" cost of the so-called ZCX.

Separate from the above but still relevant, there are still quite a few issues remaining with SJLJ including the fact that it's apparently broken on ARM. But every pain in its own time and so for now simply look at the data above and let me know: do you see any real reason why one would *not* simply go with SJLJ?

  1. "Zero-Cost Exceptions" being the full name of this great idea of reducing costs by not doing the work - meaning here by simply choosing to not implement some parts of the Ada standard, namely the asynchronous abort.  

  2. Or its excuse at the very least since rationale might be too much to ask for under those circumstances. 

  3. You can try this yourself: the code is in test_tasks_ada.zip, download, read it, run it and see if the tasks actually abort or not. Try it also with SJLJ (gprbuild --RTS=sjlj) and spot the difference. 

  4. "Setjmp / longjmp" by its name, the older exception handling model that actually implements the Ada standard so it allows asynchronous abort. 

  5. Using the Mersenne-Twister pseudo-random generator

February 7, 2019

Seppuku Job Market: Minimal Dynamic Tasking in Ada

Filed under: Coding — Diana Coman @ 10:44 p.m.

Eulora's server needs a reliable and robust way of performing - preferably in parallel whenever possible - various jobs for all the players that might connect to the game at any given time. Given the parallel requirement, there isn't really any way around the fact that multi-threading is needed. Nevertheless, since multi-threading is by its nature complex enough to give subtle errors and heavy headaches at any time, I'd really much rather make sure any implementation that deals with multiple threads of execution is as small, clear, plain and easy to follow as possible. In other words, if it has to be multi-threaded then it should better be minimal, self-healing, self-adjusting and ruthlessly functional with all and any bells and whistles chucked as far away from it as possible. To drive this point home and keep it in mind at all times1, I'll call this self-reliant unit of the server the Seppuku Job Market or SJM for short.

The list of requirements for the SJM is this:
1. Accept Jobs from all and sundry in a thread-safe manner and execute them in order of their priorities.
2. Generate and kill Worker tasks2 *dynamically* and on an *as-needed basis* to perform jobs as soon as possible but remaining at all times within a pre-set maximum number of Workers3.
3. Creation and destruction of Workers should be reliable and robust: in particular, SJM should run for ever unless explicitly stopped and it should re-spawn Workers as needed, even if they get killed from outside the code (cosmic-ray event or not).
4. Aim to perform jobs in order of their specified priority but taking into account that at most ONE job per player is actually executed at any given time. In other words: do NOT allow a player to hog the whole thing and run as many jobs as they want; this is parallelism aimed to increase the number of players served not merely the number of jobs performed!

Point 1 of requirements hints at the nature of SJM: as it needs to accept jobs from many, unknown sources, it is effectively a "server" of sorts and moreover it is essentially a resource from the point of view of job producers. The best Ada construct that readily fits this description is a protected unit (aka a passive entity that guarantees thread-safe access to the data it encapsulates - in this case to the queue of jobs waiting to be performed). One significant benefit of an Ada protected entity is the fact that it is specifically not a task itself nor is there a task associated with it. Instead, the mutually exclusive access to services provided by a protected unit is ensured by the run-time system and therefore the whole thing has at least one less headache to think of: while Worker tasks may get killed, the SJM itself at least cannot get killed unless the whole program (i.e. the main thread of execution of the server itself) gets killed.

Point 2 of requirements (dynamic, self-adjusting number of tasks) means that I'll need to actually create and dispose of tasks programmatically - there is no way to have only statically allocated tasks. In turn, this means that a few restrictions have to go away: No_Allocators, No_Finalization, No_Task_Allocators, No_Tasking, No_Unchecked_Deallocation. The need to drop the No_Finalization and No_Unchecked_Deallocation restrictions comes from the way in which Ada handles memory allocated dynamically even when on the stack. Essentially, dynamically allocated tasks receive memory from a "pool". Once allocated, memory from a pool is reclaimed ONLY when the whole pool goes out of scope or in other words when it can be guaranteed that there is no piece of code left that can actually attempt to access that bit of memory. This is very robust and quite useful of course but in the case of dynamically allocated tasks it means that tasks that finish will STILL effectively occupy memory unless specifically deallocated (with unchecked_deallocation as that's the only way to do it as far as I can tell). In turn, this creates the undesirable but very real and quite horrible possibility that the code will run just fine *until* the pool in which tasks are created runs out of memory because of all previous tasks that finished a long time ago but whose space was never reclaimed. To avoid this, the code has to keep track of terminated tasks and explicitly deallocate the memory they occupy before chucking away their pointer and/or re-spawning a replacement Worker (as there is no way to "restart" a task).
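As a quick illustration of this allocate/check/deallocate pattern (a sketch only, with made-up names - the task type T, T_Ptr and Pool_Sketch are mine, not EuJobs'):

```ada
-- Sketch ONLY (hypothetical names): allocate a task dynamically, then
-- explicitly reclaim its pool slot once it has terminated; otherwise the
-- memory stays occupied until the whole pool goes out of scope.
with Ada.Unchecked_Deallocation;

procedure Pool_Sketch is
   task type T;
   type T_Ptr is access T;
   procedure Free is new Ada.Unchecked_Deallocation(T, T_Ptr);
   P : T_Ptr;

   task body T is
   begin
      delay 0.0; -- stand-in for actual work
   end T;
begin
   P := new T; -- the task starts running as soon as it is allocated
   -- reclaim the slot ONLY once the task has actually terminated;
   -- polling here is for brevity - the SJM gets a Done_Job signal instead
   while not P.all'Terminated loop
      delay 0.01;
   end loop;
   Free(P); -- also sets P to null
end Pool_Sketch;
```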

Point 3 means that Workers need to be effectively managed based on the evolution of the number of available jobs and the number of Workers themselves. One approach would be of course to have a Supervisor task but the problem then is twofold: first, the Supervisor needs to be aware of changes to the jobs queue as they happen; second, having a Supervisor task creates the potential problem of who supervises the supervisor (esp. with respect to recovery from killed thread since in this case the Supervisor itself might die unexpectedly). Given however that the SJM protected unit effectively guards precisely the jobs queue, it's also in the best position to react promptly to an increase or decrease in jobs and so it follows that it should in fact manage the Workers too. After all, it can do a bit more on receiving a job than merely chucking it into the queue: ideally it would in fact pass it on to a Worker immediately.

While at first sight "take job, spawn Worker, pass it on and let him do it" sounds precisely fine, in practice it's really not fine at all and not least because of the requirement at Point 4: passing a job on to a Worker requires some ordering of jobs (by priority) and even a sort of guarded access to a player since a new job cannot be accepted (and especially cannot be passed on to a Worker for execution) while an existing Worker may still be toiling away on a previous job for the same player. So the SJM needs to find out when a job is finished in order to accept again jobs for that specific player. As always, there are only a few ways to know when something finished: either look for it4 as one rather has to do when Workers are just passive executors of jobs or otherwise expect a signal to be sent back by a more active type of Worker task when it finished the job it had.

This distinction between active and passive Workers (or tasks in general) is quite significant. As passive entities, Workers can at most simply wait to be handed a job or any other signal. Typically, a Worker would be created and handed a job, they would do it and then they would quietly die keeping out of the way of everyone else. This can be a great fit in various cases but I can see several problems with this for Eulora's server: first, Workers cannot be reused even when jobs are available so there is a rather inefficient kill/create overhead5 precisely at busy time when one wants it even less than at any other time; second, the only way for the SJM to find out when a job finished is by a sort of polling i.e. going through the whole set of workers and checking which one is in a terminated state - note that it is not at all clear just *when* should this be done or how would it be triggered (sure, one can use a sort of scheduled event e.g. check it every 3 seconds or some such but it's more of a workaround than addressing the problem); third, the SJM needs to do both Worker creation and Job allocation (i.e. priority ordering + only one job per player at any given time) at the same time and while keeping a job creator waiting.
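For contrast, a passive Worker of the sort described (and discarded) above would look roughly like this - a sketch only, reusing the Job_Data and Complete names from the code further down:

```ada
-- Hypothetical passive worker, NOT part of EuJobs: it is handed exactly
-- one job via rendezvous, performs it and then quietly terminates.
task type Passive_Worker is
   entry Hand_Job( J : in Job_Data );
end Passive_Worker;

task body Passive_Worker is
   My_Job : Job_Data;
begin
   accept Hand_Job( J : in Job_Data ) do
      My_Job := J; -- keep the rendezvous short: just copy the job
   end Hand_Job;
   Complete( My_Job );
   -- the task simply ends here: no reuse and no signal back, so a
   -- manager is left polling 'Terminated to find out the job is done
end Passive_Worker;
```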

The first of the above issues (no reuse of Workers) is easily addressed by making Workers active rather than passive: they get created and then they actively ask for a job; once they got a job, they do it and then they go back and report it done, after which they queue again to get another job or perhaps the boot if there are no jobs to be had. And since such active Workers do not finish by default when a task is finished, they need to have rather suicidal tendencies and ask not merely for a job but rather for either a job or permission to seppuku (hopefully in a clean manner though!).

Making Workers active (if suicidal) neatly separates Worker creation from Job allocation: when jobs start pouring in, the SJM can simply create a bunch of Workers and release the job creators before it makes time to actually hand the jobs out to queuing workers. When the jobs keep pouring in, Workers keep working and there's no need to kill them now to only create them a few milliseconds later. Moreover, finished jobs are simply reported and marked as such without any need to poll. In the (hopefully rare) case when a Worker dies unexpectedly before sending the signal that it finished its job, they will be anyway observed sooner or later when the state of Workers is assessed to decide if more or fewer Workers are needed. Essentially the only trouble this approach brings is the added responsibility on the SJM: it controls access to the Job queue for job creators AND for Workers while ALSO effectively managing and keeping track of all Worker-related aspects. But then it's not a Seppuku Job Market for no reason: if it needs to do it, it will have to do it and do it well.

As a proof of concept of the above, I have implemented the SJM precisely as described: as a protected unit that encapsulates a Job queue and manages active Worker tasks, creating and destroying them as needed while also de-allocating memory of any terminated Workers, ensuring that only one Job per player is accepted at any given time and allowing a graceful stop that does not block any job producers that may come at a later time and does not leave dangling Worker tasks either. Jobs are simply record types with a discriminant that specifies their type and therefore the exact form a variable part of the record takes (since each Job type is likely to have specific data structures it requires). Note that I specifically avoided the Object-Oriented option (i.e. tagged type in Ada) with a hierarchy of Job types and reliance on polymorphism for "Complete" to do the right thing depending on the exact type of Job. The reason for this avoidance is mainly that there really isn't much to gain from it as far as I can see at the moment. Similarly, I prefer to not rely on generic containers (for the Job Queue for instance) unless they become clearly and absolutely needed. Finally, I am quite aware of Ada's relevant annexes such as Real-Time Systems and I know that it provides a whole infrastructure of worker pools and jobs with futures even (i.e. a way to provide results at a later time) but they are quite at odds with the significant aim of keeping it all as short6 and clear and easy to follow as possible (not to mention potential issues with the way in which some parts might be implemented using a secondary stack for instance which I specifically do not want to have).

The public part of the EuJobs package is this:

with Interfaces; use Interfaces;
with Data_Structs;
with Ada.Finalization;
with Ada.Unchecked_Deallocation; -- to clean up worker tasks if needed.

package EuJobs is

  pragma Elaborate_Body;

  -- knobs and constants
  Max_Workers : constant Natural := 64;
  Max_Idle_W  : constant Natural := Max_Workers;
  -- max jobs
  Max_Jobs    : constant Natural := Max_Workers * Max_Workers;

  -- Generic Eulora Workers type: simply perform given Jobs
  subtype Worker_Index is Natural range 1..Max_Workers;
  -- Those are to be FULLY managed (including created/ended) by the Job Market
  -- ACTIVE but suicidal elements:
  --   a worker will keep requesting jobs/permission to seppuku
  --     until allowed to terminate
  -- Pos is a token identifying Worker with the Job Market
  -- NB: ALL workers WILL use this Job Market
  -- NB: do NOT create workers from outside the Job Market!
  task type Worker( Pos: Worker_Index );

  -- needed to dynamically generate Workers
  type Worker_Address is access Worker;
  procedure Free is new Ada.Unchecked_Deallocation(Worker, Worker_Address);

  -- ALL the info that the Job Market holds on workers to manage them
  type Worker_Rec is record
      Assigned  : Boolean := False;
      Player_Id : Interfaces.Unsigned_64;
      WA        : Worker_Address;  -- actual pointer to worker
    end record;  

  -- for storing pointers to generated workers including if assigned and id
  type Worker_Array is array( Worker_Index'First ..
                              Worker_Index'Last) of Worker_Rec;

  -- limited controlled type that ensures no dangling workers at Finalize time
  type Controlled_Workers is new Ada.Finalization.Limited_Controlled
    with record
      Workers: Worker_Array;
    end record;

  procedure Finalize( S: in out Controlled_Workers );
  procedure Initialize( S: in out Controlled_Workers );

  -- Job types; NB: do NOT map (nor have to) directly on message types!
  type Job_Types is ( Do_Nothing,
                      Create_Acct,
                      Set_SKeys,
                      Mgm_SKeys,
                      Print_Job );

  -- Data structure with relevant information for each type of job
  type Job_Data ( T: Job_Types := Do_Nothing ) is record
      -- common information relating to the one requesting this job
      Player_ID: Interfaces.Unsigned_64 := 0;
      Source_IP: Interfaces.Unsigned_32 := 0;
      --NB: this is SOURCE port - reply WILL be sent here, whether RSA or S!
      Source_P : Interfaces.Unsigned_16 := 0;
      -- Message counter, as received
      Counter  : Interfaces.Unsigned_16 := 0;
      Priority : Natural := 0; --lowest possible priority
      case T is
        when Create_Acct =>
          Acct_Info: Data_Structs.Player_RSA;
        when others =>
          null;
      end case;
    end record;

  procedure Complete( JD   : in Job_Data );

  subtype Job_Count is Natural range 0..Max_Jobs;
  type Job_Array is array( 1..Max_Jobs ) of Job_Data;
  type Jobs_List is record
      Len  : Job_Count := 0;
      JA   : Job_Array;
    end record;


  -- FULLY self-managed Job Market for euloran jobs:
  --   -- accepts jobs to do
  --   -- spawns, kills and manages workers that complete the jobs
  -- NB: Job_Market will DISCARD a new job when:
  --    -- it is FULL (i.e. can't handle anymore)
  --    -- it is stopping
  --    -- it already has a job for the same player
  -- Jobs are performed according to specific criteria (not strictly fifo):
  --   - FIFO but ensuring no more than 1 job per player served at any time
  --   - ALSO: there might be other priorities (e.g. type of job)
  protected Job_Market is
    -- adding a new job that needs to be done
    -- this can be ANY derived type of Job_Data
    -- NB: Added will be true if J was indeed accepted and False otherwise
    entry Add_Job( J     : in Job_Data;
                   Added : out Boolean );

    -- workers request jobs when they are out of work
    -- workers need to provide their token (Pos)
    -- they can get to do: either a job OR seppuku signal.
    procedure Get_Job( Pos    : in Worker_Index;
                       J      : out Job_Data;
                       Seppuku: out Boolean );

    -- workers have to report back when a job is done
    -- (or they get swept up eventually if/when they abort).
    procedure Done_Job( Pos: in Worker_Index );

    -- sets in motion the process to stop gracefully:
    --   -- no more jobs received, existing discarded
    --   -- all workers will be given Seppuku signal
    -- NB: NO reverse for this.
    procedure Stop;

    -- for any external supervisors
    -- returns TRUE if it is NOT stopping
    -- returns False if it is stopping
    function Operating(  Waiting_Jobs: out Natural;
                         Idle_Workers: out Natural;
                         Active_Workers: out Natural;
                         Terminated_Workers: out Natural;
                         Is_Full: out Boolean)
      return Boolean;


  private
    -- internal storage of jobs and mgm of workers
    Board  : Jobs_List;

    -- NB: Workers are in the BODY of the package
    --   because they HAVE to be after the body of Finalize

    -- when stopping:
    -- discard new jobs; give out stop on get/done; empty jobs map
    Stopping : Boolean := False;
    Fullboard: Boolean := False;

    -- Retrieves next available job from the Board and returns it in JD
    -- Sets Found to True if an available job was found (i.e. JD is valid)
    -- Sets Found to False (and JD is undefined) if NO available job was found.
    -- NB: this DOES remove the element from the board!
    procedure Get_Available( JD    : out Job_Data;
                             Found : out Boolean );

    -- checks if the given player_id IS currently served by any worker
    function Is_Assigned( Player_ID: in Interfaces.Unsigned_64 )
             return Boolean;

    -- Checks in Board list ONLY if there is a job for this player
    -- Returns True if a job was found (i.e. a job waiting for a worker)
    -- Returns False otherwise.
    -- NB: Player might STILL have a job in progress (assigned to a worker)
    function Has_Waiting_Job( Player_ID: in Interfaces.Unsigned_64 )
             return Boolean;

    -- releases any player_id that might be stuck with aborted workers
    -- *creates* new workers if needed (specific conditions met)
    procedure Manage_Workers;

  end Job_Market;


  -- create new Worker with identification token (position) P
  function Create_Worker(P: in Worker_Index)
             return Worker_Address;

end EuJobs;

Workers are very simple tasks with an ID received at creation time to identify them within the Job_Market (very simply by position in the array of Worker addresses). They run a loop in which they request tasks or permission to Seppuku and when they receive either of them they proceed to do as instructed. Perhaps you noticed above that the array of Worker pointers is wrapped inside Controlled_Workers, which is a controlled type. A controlled type in Ada guarantees that the provided Initialize and Finalize routines are run precisely at the stages that their names suggest to enable the type to start off cleanly and to end up cleaning after itself. In the case of Controlled_Workers, the Initialize simply makes sure that the array has all pointers marked as null and moreover as not assigned any tasks while the Finalize goes one more time through the array and finishes off (with abort) any workers that are not null already. Note that the scope of Worker tasks is in fact the package level since the Worker_Address type is declared at this level (and that's how the scope is defined for such types in Ada). You might have noticed also that there is no concrete array of Workers defined anywhere so far: indeed, the array of workers is defined inside the package body for two main reasons: first, it should NOT be accessed by anyone from outside (not even potential children packages at a later time); second, it has to be defined after the bodies of Initialize and Finalize since otherwise it can't be created.

Jobs are barely sketched for now as Job_Data structures with a discriminant to distinguish different types and a variable part for specific data that each type of job needs. The Complete procedure then simply does different things for each type of job in a straightforward manner (at the moment it does something for the print job only for basic testing purposes).

The Job_Market itself is a protected object that offers a handful of services (aka public entries, procedures or functions): entry Add_Job for job producers to provide their new jobs; procedure Get_Job for Workers who are looking for something to do; procedure Done_Job for Workers who report they finished their previously allocated job; procedure Stop for any higher-level caller who is in a position to turn off the whole Job_Market; function Operating that simply provides information on the current state (i.e. operating or stopping) and status (e.g. number of jobs and workers) of the Job_Market. Note that there are important differences between functions, procedures and entries: functions can only *read* protected data so they are effectively banned from modifying anything, hence Operating being exactly a function as it provides a snapshot of current state and metrics for the Job_Market; procedures can modify data but a call to them is unconditional meaning it gets accepted as soon as the protected object is available and the caller is first in queue for it, without any further restrictions - hence Stop, Done_Job and Get_Job are procedures since there is no constraint on them being called at any time; finally, entries can also modify data but they have entry barriers meaning they accept a call only when certain conditions are met - in this case Add_Job has the simple but necessary condition that either the Job_Market is stopping (in which case callers should not be blocked since it's pointless to wait anyway) or the Job queue is not full since it makes little sense to allow a job producer in just to discard their job for lack of space anyway. Note however that this is merely for completeness here since in practice there will be several other levels of measures taken so that the job queue does NOT become full since that is clearly not a sane way to have the server running.
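To make the function/procedure/entry distinction concrete, here is a toy protected object (illustrative only, not part of EuJobs; it would live inside some enclosing package or declarative part):

```ada
protected Counter is
   function  Value return Natural; -- read-only: cannot modify Count
   procedure Increment;            -- read/write, accepted whenever free
   entry Wait_Nonzero;             -- queued until the barrier is True
private
   Count : Natural := 0;
end Counter;

protected body Counter is
   function Value return Natural is
   begin
      return Count;
   end Value;

   procedure Increment is
   begin
      Count := Count + 1;
   end Increment;

   -- the barrier plays the same role as the one on Add_Job: callers
   -- queue here until Count > 0 (re-checked after each state change)
   entry Wait_Nonzero when Count > 0 is
   begin
      null;
   end Wait_Nonzero;
end Counter;
```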

In addition to the above public services, the Job_Market also has a private part where it keeps the job queue (as a basic array for now - this can easily change at a later time if there is a good reason for the change), a flag to know if it's stopping and one to register if/when the board is full as well as a few helper procedures and functions for its own use. The Get_Available procedure effectively implements the strategy of picking next Job to execute: it's here that priorities are considered really and it's here that there is another check to make sure that no two jobs of the same player are ever executed at the same time. The Is_Assigned function checks the set of Workers to see if any of them is performing a job for the specified player. The Has_Waiting_Job function on the other hand checks the job queue to see if there is any job from the specified player waiting in the queue. Arguably the most important of those is "Manage_Workers" that does precisely what the name says: it does a headcount of Workers in various states, cleans up any aborted/unexpectedly dead ones, reclaims memory for terminated ones and then, if required, creates new Workers to match the current Job provision. Note that there really are only 64 workers in total (and at any rate this is unlikely to become a huge number) so this headcount of workers is not really terribly costly.

The overall package further has a private function that dynamically creates a new Worker task with the given ID, returning its address. This is more for convenience than anything else since one could easily call new directly so perhaps it will even go away at the next round of trimming the code.

The implementation in eujobs.adb starts with the Initialize and Finalize procedures, declares the Controlled_Workers object and then proceeds with the internals of the Job_Market itself:

with Ada.Text_IO; use Ada.Text_IO;

package body EuJobs is

  procedure Finalize( S: in out Controlled_Workers ) is
  begin
    -- ALL this needs to do is to make SURE no worker is still running!
    for I in S.Workers'First .. S.Workers'Last loop
      if S.Workers(I).WA /= null then
        abort S.Workers(I).WA.all;
        S.Workers(I).WA := null;
        S.Workers(I).Assigned := False;
      end if;
    end loop;
  end Finalize;

  procedure Initialize( S: in out Controlled_Workers ) is
  begin
    for I in S.Workers'First .. S.Workers'Last loop
      S.Workers(I).WA := null;
      S.Workers(I).Assigned := False;
    end loop;
  end Initialize;

  -- actual workers slots; workers are managed internally here
  -- this type is needed though, to Finalize properly
  CW: Controlled_Workers;

  protected body Job_Market is
    -- adding a new job that needs to be done
    -- this can be ANY derived type of Job_Data
    entry Add_Job( J     : in Job_Data;
                   Added : out Boolean )
      when Stopping or    --to unblock producers
           (not Fullboard) is
    begin
      -- if stopping, discard job -- allows callers to finish too...
      -- check Player_ID and add job ONLY if none exist for this player
      if (not Stopping) and
         (not Is_Assigned(J.Player_ID)) and
         (not Has_Waiting_Job(J.Player_ID)) then
        -- board is known to have space, so add to it
        Board.JA(Board.JA'First + Board.Len) := J;
        Board.Len := Board.Len + 1;

        -- job added may mean full board
        FullBoard := Board.Len >= Board.JA'Last;

        -- Quick worker management to adjust if needed
        Manage_Workers;
        -- Let caller know that job was indeed added
        Added := True;
      else
        Added := False; --not added, aka discarded
      end if;
    end Add_Job;

    -- workers request jobs or seppuku when they are out of work
    procedure Get_Job( Pos    : in Worker_Index;
                       J      : out Job_Data;
                       Seppuku: out Boolean ) is
      Found : Boolean;
    begin
      if Stopping then
        -- when stopping: all seppuku
        Seppuku := True;
      else
        -- try first to get some job that should be done
        Get_Available(J, Found);
        if (not Found) then
          Seppuku := True; --since no job is available..
        else
          -- have a job so no seppuku for now
          Seppuku := False;
          -- update Worker record to mark player as being served etc.
          CW.Workers(Pos).Assigned := True;
          CW.Workers(Pos).Player_Id := J.Player_ID;
          -- this SURELY means board is NOT full!
          Fullboard := False;
        end if;
      end if;
      -- LAST: manage workers in ANY CASE!
      Manage_Workers;
    end Get_Job;

    -- workers have to report back when a job is done
    procedure Done_Job( Pos: in Worker_Index ) is
    begin
      -- update record for this worker and let him go
      CW.Workers(Pos).Assigned := False;
    end Done_Job;

    -- aim to stop gracefully:
    --   -- no new jobs stored, existing discarded, workers killed.
    -- NB: NO reverse for this.
    procedure Stop is
    begin
      Stopping := True; -- NO need for anything else, really
    end Stop;

    function Operating(  Waiting_Jobs: out Natural;
                         Idle_Workers: out Natural;
                         Active_Workers: out Natural;
                         Terminated_Workers: out Natural;
                         Is_Full: out Boolean)
      return Boolean is
    begin
      Waiting_Jobs := Natural( Board.Len );
      Is_Full := Fullboard;
      Idle_Workers := 0;
      Active_Workers := 0;
      Terminated_Workers := 0;

      for I in CW.Workers'Range loop
        if CW.Workers(I).WA /= null then
          if CW.Workers(I).WA.all'Terminated then
            Terminated_Workers := Terminated_Workers+1;
          elsif CW.Workers(I).Assigned then
            Active_Workers := Active_Workers + 1;
          else
            Idle_Workers := Idle_Workers + 1;
          end if;
        end if;
      end loop;
      return (not Stopping);
    end Operating;

    -- anything needed for external load checking (?)

--private stuff

    procedure Get_Available( JD    : out Job_Data;
                             Found : out Boolean ) is
      Pos   : Job_Count;
      P     : Natural := 0; --priority of job found so far
    begin
      Found := False;
      -- ALWAYS walk the FULL set: higher priority might have come in later
      for I in 1 .. Board.Len loop
        if ( (not Found) or (Board.JA(I).Priority > P) ) and
           (not Is_Assigned(Board.JA(I).Player_ID) ) then
          Found := True;
          Pos   := I;
          P     := Board.JA(I).Priority;
          -- but don't copy just yet, as there might be higher priority further
        end if;
      end loop;
      -- retrieve the found job data but ONLY if found!
      if Found then
        JD := Board.JA(Pos);
        -- if not last job, shift to avoid gaps in the array
        if Pos < Board.Len then
          Board.JA(Pos..Board.Len-1) :=
              Board.JA(Pos + 1 .. Board.Len);
        end if;
        -- update count of jobs in the array
        Board.Len := Board.Len -1;
      end if;
    end Get_Available;

    function Is_Assigned( Player_ID: in Interfaces.Unsigned_64 )
             return Boolean is
      Found: Boolean := False;
    begin
      -- walk the array of workers and check
      for I in CW.Workers'Range loop
        if CW.Workers(I).WA /= null and
           CW.Workers(I).Assigned and
-- Will have to rely on .assigned being SET properly by the manager!
--  (not CW.Workers(I).WA'Terminated) and
           CW.Workers(I).Player_ID = Player_ID then
          -- found it!
          Found := True;
        end if;
      end loop;
      return Found;
    end Is_Assigned;

    function Has_Waiting_Job( Player_ID: in Interfaces.Unsigned_64 )
             return Boolean is
      Found: Boolean := False;
    begin
      -- NB: only the first Board.Len slots hold valid jobs
      for I in Board.JA'First .. Board.JA'First + Board.Len - 1 loop
        if Board.JA(I).Player_ID = Player_ID then
          Found := True;
        end if;
      end loop;
      return Found;
    end Has_Waiting_Job;

    procedure Manage_Workers is
      Active_W: Natural := 0;
      Idle_W  : Natural := 0;
      Total_W : Natural := 0;
      To_Create: Natural:= 0;
    begin
      -- release player ids if workers terminated
      -- count also precisely how many are active
      for I in CW.Workers'Range loop
        if CW.Workers(I).WA /= null then
          if CW.Workers(I).WA.all'Terminated then
            -- this terminated abnormally -> LOG?
            CW.Workers(I).Assigned := False;
            -- claim this space to restart a worker here if needed
            --CW.Workers(I).WA := null;
            -- deallocate it too as otherwise memory space slowly gets lost
            -- NB: Free proc sets it to null anyway
            Free(CW.Workers(I).WA);
          --if NOT null and NOT terminated-> idle or active
          elsif CW.Workers(I).Assigned then
              -- this is an active worker, count it
              Active_W := Active_W + 1;
          else
            -- this is an idle worker, count it
            Idle_W := Idle_W + 1;
          end if;
          -- null workers are simply empty spaces, no need to count them
        end if;
      end loop;
      -- calculate total workers
      Total_W := Active_W + Idle_W;

      if (not Stopping) and
         (Board.Len > Total_W) and
         (Total_W < Max_Workers ) and
         (Idle_W = 0) then
        -- need (perhaps) to create workers: how many?
        To_Create := Board.Len - Total_W;

        -- create them for as long as there is ANY space..
        -- NB: MORE workers MIGHT have terminated meanwhile,
        -- but they won't be null!
        for I in CW.Workers'Range loop
          if CW.Workers(I).WA = null then
            -- found a place, so create a worker
            CW.Workers(I).Assigned := False;
            CW.Workers(I).WA := Create_Worker(I);
            To_Create := To_Create - 1;
            Total_W := Total_W + 1;

            if To_Create <= 0 or Total_W >= Max_Workers then
              exit; -- enough workers created
            end if;
          end if;
        end loop;
      end if;
     end Manage_Workers;

  end Job_Market;

  -- Worker body
  task body Worker is
    JD      : Job_Data;
    Seppuku : Boolean := False;
  begin
    -- main Loop: get a job or die, work and repeat.
    Work_Loop:
    loop
      -- ask the Job Market for a job or permission to seppuku
      Job_Market.Get_Job( Pos, JD, Seppuku );

      if Seppuku then
        exit Work_Loop;
      else
        -- do the job
        EuJobs.Complete( JD );
        -- report job done
        Job_Market.Done_Job( Pos );
      end if;
    end loop Work_Loop;
    -- worker is done and will die gracefully!
  end Worker;

  -- Jobs themselves
  procedure Complete( JD   : in Job_Data ) is
    Stop: Boolean;
  begin
    -- do different things for different types of jobs...
    case JD.T is
        when Create_Acct =>
          --Acct_Info: Data_Structs.Player_RSA;
          Stop := False;
        when Set_SKeys =>
          -- SKes: Data_Structs.Serpent_Keyset;
          Stop := False;
        when Mgm_SKeys =>
          --SMgm: Data_Structs.Keys_Mgm;
          Stop := False;
        when Print_Job =>
          Put_Line("Completing: job counter " &
                   Interfaces.Unsigned_16'Image(JD.Counter) &
                   " priority " & Natural'Image(JD.Priority) &
                   " for player " &
                   Interfaces.Unsigned_64'Image(JD.Player_ID) &
                   " from IP:P " & Interfaces.Unsigned_32'Image(JD.Source_IP) &
                   ":" & Interfaces.Unsigned_16'Image(JD.Source_P));
        when others =>
          -- no job or dubious at best, better stop.
          Stop := True;
    end case;
  end Complete;

  function Create_Worker(P: in Worker_Index)
             return Worker_Address is
  begin
    return new Worker(P);
  end Create_Worker;

end EuJobs;

Your thoughts, observations and critiques on the above are welcome below in the comments section. If there is a problem with the approach or with the code itself, I really want to hear of it sooner rather than later, since it's of course easier to do something about it now - this is, after all, the whole reason why I'm publishing this proof of concept, so go ahead and point out any faults you see.

  1. Also to reflect some suicidal tendencies of my Workers but that becomes clearer later. 

  2. "Threads" if you prefer non-Ada terminology. 

  3. There isn't much point in having more Workers than your underlying iron can actually support. 

  4. Blocking until it's done or checking at some intervals. 

  5. Ada's documentation claims that dynamic creation of a task has a big overhead anyway so it's best avoided whenever possible but I can't say I have any idea just what "big overhead" means here. 

  6. The full .ads + .adb code+comments shown below is 500 lines, it uses no secondary stack, no heap and no containers or other similar external packages. Even the "use Ada.Text_IO" will go away as it's in there now strictly to allow the Print job to be seen as it completes for testing purposes. 

January 12, 2019

Compiling Ada Library for Use with Non-Ada Main

Filed under: Coding — Diana Coman @ 5:40 p.m.

Following a rather bumpy road of compilation troubles trying to link an Ada lib into a CPP main program, I found first a working solution to the task at hand and then a headache trying to disentangle the confusion of what exactly a "standalone encapsulated dynamic" library is, why it is needed and how exactly it differs on the initialization front from a boring static library. Fortunately it turns out that my headache was due mainly to all that bumping into walls combined with the rather confusing terms used in .gpr files - there was at least nothing that a good dose of hands-on experimentation and several re-readings of the GNAT docs couldn't cure! Still, as I'd rather not repeat the whole process next time I need to mix Ada with other languages, I'll summarise here my notes on the options I found for compiling an Ada library1 so that it can be safely used from a non-Ada main program.

To use Ada code from a non-Ada main program, one needs to find a way to actually start the Ada run-time environment *before* any calls to Ada code. The Ada run-time does the crucial task of elaboration of Ada code (i.e. getting everything ready for executing code, so broadly speaking it takes care of initializing variables and constants as well as running any code found in the main body of packages that are used). Since elaboration is a concept entirely specific to Ada, there is no way to rely on the non-Ada main code (C, C++ or whatever it might be) to take care of this. Instead, the solution is to make sure that the Ada library itself contains and exposes an initialization procedure that does exactly this: starts the Ada run-time and performs the required elaboration for the library code. Once this exists, the non-Ada code simply has to make sure it calls this initialization procedure *before* calling *any* Ada code from that library and that's all2. This much was clear from the beginning - it's from here on that the headache and confusion started since not ALL Ada libraries actually contain/expose such an initialization routine. Essentially, in addition to the usual classification of libraries into static or dynamic, Ada has another parallel classification: standalone or not! And asking gprbuild to produce a standalone library is NOT done by using the "Library_Standalone" option but by defining an... interface for the library via the "Library_Interface" option in the .gpr file. Specifically, from the beginning:

  • To use an Ada library from a non-Ada main program, one needs to compile the library as "standalone". The standalone type of Ada library is the only one that contains and exposes for outside use an initialization routine that will start the Ada run-time and perform all elaboration tasks required for the library itself. NB: the initialization routine will be called libnameinit so if the library is called "eunet" then the routine will be "eunetinit".
  • To create a standalone Ada library with gprbuild, the corresponding .gpr file has to include the option "Library_Interface" that lists the packages that are actually exposed for use from outside the library. This option is enough by itself to obtain a standalone library and therefore to have the initialization routine! NB: you can build a standalone library as static or dynamic, as you want, simply specifying the kind via "Library_Kind" - in both cases, the resulting .a or .so file will contain the initialization routine. For example:
      for Object_Dir use "obj";
      for Library_Dir use "lib";
      for Library_Name use "eunet";
      for Library_Kind use "static";
      for Library_Interface use ("Eunet", "Raw_Types");
  • Standalone libraries have subtypes too and it is actually the subtype that is specified via the option "Library_Standalone" in a .gpr file! According to GNAT's user guide, the Library_Standalone can take 3 values: standard (default), no, encapsulated.
    • The "standard" is the option used if your .gpr file does not even mention "Library_Standalone" (but DOES mention "Library_Interface"!) and it means that the initialization routine is contained and exposed.
    • The "encapsulated" option means in addition that the library will depend only on static libraries except for system libraries - so this option will effectively pull in everything the library needs, including the GNAT run-time. This makes for a significantly *easier* use and linkage further downstream BUT it forces the Library_Kind to... "dynamic". I could NOT find out any clear explanation as to WHY this is so but if I'm to guess I'd say it's probably a way of "protecting" users so that they don't encapsulate the Ada run-time in 10 separate libraries and then use all of them in the same program or something.
    • Finally, the "no" option means - surprisingly! - what you'd expect: the library is NOT to be a standalone library after all (and "Library_Interface" be damned)!
  • Summarizing the messy interplay between Library_Interface and Library_Standalone above: you can have a static or dynamic standalone library as long as you leave the "Library_Standalone" option alone; you can have only a dynamic standalone library if you actually want to include the GNAT run-time. This item is called "encapsulated standalone library" and means that "Library_Standalone" is set to "encapsulated". You can - unclear why/when it is useful - specify explicitly that you do NOT want a standalone library by setting "Library_Standalone" to "no". Essentially the Library_Standalone option chooses between "types" of standalone that include standard, encapsulated or... not standalone at all. I still get slightly nauseous.
  • Assuming you did go for one sort or another of standalone library, there is a further option to ask the library to "automatically" run its initialization. This is done via the "Library_Auto_Init" option being set to "true" (the default value). However, this is the sort of gun that can easily explode in your face since the actual behaviour is platform dependent so you can't rely on it for anything. As a result, I'd say this is best set to "false" clearly and explicitly so that one is not lulled into the idea that someone else will do the initialization auto-magically.
  • If you build a static standalone library, note that its linking into the main program requires also the linking of GNAT runtime as a minimum. The exact things you need depend on what your library really uses but things can get quite gnarly. For instance3 the line for a basic main.cpp test that does ~nothing but it does it with the whole smg_comms + some glue for handling net stuff of Eulora's client:

    gcc main.cpp -o main.o lib/libeunet.a
    -ldl -lpthread -lrt -L/home/eu-test/eulora/eunet/c_wrappers/bin/ -lC_Wrappers
    -L/home/eu-test/eulora/eunet/rsa/bin/ -lRSA -L/home/eu-test/eulora/eunet/mpi/bin/

On the bright side, the investigation that resulted in the above notes means that I'm now satisfied that I can in fact link an eunet Ada library both with Eulora's client (that links mainly with dynamic libs so possibly easier as encapsulated standalone) and with code that runs on GNAT with static libs only. In addition, I certainly got also a much better understanding of Ada's elaboration and elaboration order and how it should be handled for safe use of tasks from within a library. But a set of notes for that might be the topic for another time!

  1. even a rather complex one with tasks that start at elaboration time among other things 

  2. In principle there is also a symmetric finalization procedure that is to be called at the very end by the non-Ada code: this one shuts down the Ada run-time but in practice it doesn't seem to be required all that often. 

  3. Isn't "libgnarl.a" such a great name? 

January 4, 2019

The Tenderness of the Year's Beginning

Filed under: Lyf, Young, old and oldest — Diana Coman @ 11:10 a.m.

If the beginning came with argumentations and the follow-up with wonderings the size of one's being and behaviours of all sorts, 2019 starts almost cat-like, tender and smiling, and very nearly leaving me no room for him in the armchair:


He grows, I grow some more too, and I wish you all to keep growing as well, since standing still is never possible in this world. Happy New Year!

December 24, 2018

A Week in TMSR: 10 - 16 December 2018

Filed under: A Week, TMSR — Diana Coman @ 3:55 p.m.

On Monday, 10th of December 2018:

Mircea Popescu notices that the MP-WP1 installation on Pizarro's shared server seems broken as it fails to correctly process footnotes. In response, Asciilifeform2 asks BingoBoingo3 to look into it, noting also that other accounts on the server (Hanbot's) don't exhibit the same problem and therefore the issue has to be linked to Nicoleci's account on the shared server.

Mircea Popescu expresses his surprise at Nicoleci's apparent inability to express herself in writing anywhere near as well as she is able to express herself orally. The difference is significant enough to be rather hard to believe if not directly witnessed. Further discussion with Trinque and Asciilifeform of Nicoleci's public writings on her blog - mainly summaries of TMSR logs - and of the sad state of what passes as "writing" in the US nowadays leads to the conclusion that the core issue with her writing is that it lacks any narrative structure: instead of telling a story of any kind, she seems to attempt to just give the gist of her thoughts at one moment or another.

Asciilifeform expresses his pleasant surprise at having recently tried a 3D device. He suggests it for Eulora but Mircea Popescu notes that Eulora is significantly more intellectual than visceral or graphical at the moment and the current struggle in this direction is anyway simply getting even basic art done for the game rather than improving public's access to it.

Danielpbarron publishes on his blog another snippet of his conversations with some dudes on religious matters. Trinque struggles to make any sense of the published snippet and points to Danielpbarron the solipsistic nature of his current activities as they can be perceived based on his publications. Danielpbarron fails to see Trinque's point and enquires whether there is anyway any significant difference between talking publicly of religion as he does and talking publicly of sex as Mircea Popescu does. This enquiry is promptly answered by Mircea Popescu who points out some significant differences: while every human being is interested in sex seeing how sex is fundamental to humans, not every human being is actually interested in religion seeing how religion is fundamentally gossip; moreover, while other types of gossip are at least interesting as they touch on interesting people, religion fails to captivate as it concerns nobodies. Danielpbarron disagrees with this view of religion and affirms that the "truth of the Bible is universally known", offering as unique support to this assertion a few citations from his Bible.

Nicoleci publishes her 101st post on her blog, detailing some interactions with people from her past who failed to impress her as much as they told themselves they did even when she was younger, while currently positively making her laugh with their unsolicited emails.

Ben Vulpes publishes an accounting statement for Pizarro for November, relying on a semi-automated process (numbers are produced automatically but the final format requires manual work to put everything together).

Diana Coman realises that her previously mentioned problem of an empty genesis .vpatch as a result of Cuntoo's bootstrap script is caused by an issue with the vdiff tool on the machine running the script (so nothing to do with Cuntoo's bootstrap script after all). After fixing the vdiff tool she reports that the Cuntoo script runs successfully and produces a .vpatch but the signature for it fails to verify. Bvt chimes in to report that he has a similar problem on his computer as the .vpatch he obtained from Cuntoo's bootstrap script fails to verify against Trinque's provided signature. Later during the day, Diana Coman publishes the .vpatch she obtained and Trinque is able to compare it with his own noting that there are several differences that he will need to fix, including his use of sha-based vdiff rather than the keccak-based vdiff. Diana Coman also notes that the Cuntoo bootstrap script fails on a different machine configuration (different operating system mainly), stopping with an error. She provides a paste of the error and Trinque is able at a later time to point her to the potential issue - an un-met requirement (having /dev/shm mounted) for compiling Python.

Diana Coman gives a talk on Bitcoin to students at Reading University in the UK. Later during the day she publishes a write-up of it including a detailed account of her Bitcoin talk and the supporting slides that she used.

Diana Coman offers to Asciilifeform the results of a tcpdump that ran on SMG's test server with Pizarro for several months during the year. The dump provides the content of some unexpected UDP packets that were observed during a previous test of UDP communications in October 2018, including an apparent VoIP scam that seems to originate from Iceland. As Asciilifeform is interested in investigating this further, Diana Coman points out to him that it's all on Pizarro-owned infrastructure and so Asciilifeform asks BingoBoingo to reroute to one of his own computers with Pizarro all packets with unassigned IP destination.

On Tuesday, 11th of December 2018:

Commenting on Diana Coman's write-up of her talk at Reading Uni on the previous day, Mircea Popescu notes that the lack of a recording of the talk is rather unfortunate especially given how simple it is to obtain normally. Diana Coman and Mircea Popescu then discuss a bit the practical aspects of recording a talk and the rather shockingly basic conditions offered by Reading University on this occasion. Mircea Popescu notes in conclusion that the write-up of the talk looks good and the missing recording is more a matter of "missing out on a possible fanbase!" than anything else.

BingoBoingo reports that his Peruvian girlfriend finds Argentina very beautiful especially compared to what she knows of Uruguay. This prompts Mircea Popescu's "eh" and Asciilifeform's observation that Argentina hasn't quite managed yet to fully bury/destroy/run down the beautiful buildings it inherited from back when it mattered. The conversation then moves onto the significant differences in quality of buildings in different parts of the world and at different times, with Asciilifeform revealing that he can actually distinguish what he considers well-built structures by their smell that might be - or might not be - due to a combination of aging plaster, actual wood and perhaps old books in significant quantities.

Nicoleci publishes on her blog her summary of TMSR logs of 19th November 2018.

BingoBoingo publishes on Qntra an update on Macron's adventures in France and another update on Ebola's adventures in Congo.

Mircea Popescu draws on his extensive knowledge of world history and his extremely numerous interactions with a wide range of people to discuss his emergent view that multiculturalism fails first and foremost for lack of multiple actual cultures rather than for lack of potential merit in the idea of culture-mixing itself. Asciilifeform points out to the merits of China (at the time of Confucius) as an example of different actual culture that existed but Mircea Popescu notes that merits are irrelevant for the issue at hand: in practice, there is only a very narrow and unique way to culture and so everything that counts as such inevitably finds itself on this same path without much diversity possible. China is given again as an example since its current relevancy in the world is, in Mircea Popescu's view, fully due to and limited by the extent to which it copied white man culture. Addressing Asciilifeform's point, Mircea Popescu also notes that previously to this copying, China was simply a large bureaucratic state in a similar way in which the Inca state had also been one but still failing to actually develop as a culture since working organisation by itself is not enough. To support his point, Mircea Popescu remarks also that an actual alternative culture in China would be directly identifiable simply by its results. Given the obvious lack of such results - as there is no equivalent Chinese #trilema at all, let alone one bigger in size as it should logically be given China's size and more efficient organisation - it means therefore that there can't possibly be a culture there in any sense either. Both Mircea Popescu and Asciilifeform acknowledge that this might still be proven incorrect at a later time although the chances for such proof seem to them rather low. 
The more likely explanation for the current situation is, in Mircea Popescu's opinion, the simple fact that China can't seem to advance past its remarkable efficiency at copying - copy successes that so far stop short of developing anything new, as China for instance mines Bitcoin and owns the full fab stack yet still fails to produce its own CPU architecture.

Asciilifeform rages at html's failure to provide a reliable way to format even basic equations so that they look the same across different displays and browsers (in particular without using javascript and/or images). Trinque suggests using SVG might be a good approach for the task but Asciilifeform rejects it because it won't be of any use for text-based browsers. Mircea Popescu provides a solution based on the use of html tables and top/bottom floating alignments, publishing it on Trilema as well, for future reference. At first, Asciilifeform balks at the proposed approach as he says it doesn't work with the Lynx text-based browser but Mircea Popescu points out that there is no way that works exactly the same in both text-based and graphical mode.

BingoBoingo announces that Pizarro's price for BTC is set at $4000 per 1 BTC for the month of December 2018. This price is based on an auction of $2000 that concluded on the 7th of December with the sale of the $2000 to Mats for 499.99ECu. Using this exchange rate, BingoBoingo produces Pizarro's invoices for provided services to bvt, jurov and trinque. Further invoices are likely due for Mocky's and Nicoleci's shared hosting with Pizarro and for SMG's test server.

Mircea Popescu states that he considered for some time Diana Coman's innovation/subversion distinction and he finds it to be well founded. He further notes that this distinction makes it clear that there is very little difference between subversion and "inclusion." Diana Coman agrees with this observation and notes that those finding change (hence, innovation by another name) difficult will simply push for subversion instead for as long and in as many ways as they can. Mircea Popescu adds to this the funny fact that Spanish uses the same word for expressing that something is expensive ("cuesta mucho") and that one finds something difficult ("me cuesta"), driving home the inescapable conclusion that indeed, the sort of person who finds it difficult to think (and therefore to change) has indeed no business in #trilema or with Bitcoin for that matter. Diana Coman further links this "cost" of personal difficulty to the oft-heard complaint of "it's not fair" but Mircea Popescu considers the matter to be a much more intricate ball of nonsense than that. Nevertheless, he notes that a preoccupation with "fairness" (as opposed to correctness) is indeed a good heuristic for lack of useful intellect since it betrays significant inner voids that make it altogether doubtful the subject is really a person at all.

A side note by Mircea Popescu on the provenance of the English word "arena" from the Spanish word for sand turns into a short discussion with Diana Coman on the Arenal volcano in Costa Rica and subsequently with Asciilifeform on the properties of volcanic sand and the importance of semiconductors.

Asciilifeform announces that he will bid on a Symbolics MacIvory model and he will have it xray tomographied if he obtains it.

Mircea Popescu rages at Mozilla Firefox's idea of "releases" of the browser that include executables of all sorts and assorted signatures without any clear apparent meaning. Asciilifeform is rather amused at the idea that there is anything other than ceremonial in latest Mozilla offerings but notes also that he is not aware of any version of Firefox that did not suck to start with. Trinque chimes in to say that he has a version of Firefox that he built on Musl so that there is at least that as a potential de-facto graphical browser for Cuntoo. Mircea Popescu notes that at some point the republic will likely have to write its own sane browser anyway, getting rid in the process of all sorts of useless junk that currently come stuck with any graphical browser.

On Wednesday, 12th of December 2018:

Nicoleci publishes on her blog her summary of TMSR logs of 20 November 2018. She also notes that fetlife has deleted a post of Mircea Popescu from 2 days before since it was apparently more liberal than their liberalism can take.

Phf brings back the discussion on fairness/correctness from the previous day noting that he naturally considered fairness to mean exactly that: a recognition of correctness even when it's not to one's own advantage. In response, Mircea Popescu points out that this meaning of fairness as unpleasant-but-correct has always been a purely eastern one while the western definition always focused on a sort of weighing and comparing of outcomes. He links this to Hajnal's line in the sense that fewer and later marriages give more idle time to be spent on the contemplated sort of "fairness" considerations.

Mircea Popescu redirects Nicoleci away from attempting to summarize TMSR logs and on to transcribing old proceedings of the Royal Society of London that are rather interesting to read but are hardly readable in their existing format since they've been mangled by the automated OCR process.

One of Trilema's readers suggests using a Wordpress Latex plugin to properly format equations. Mircea Popescu passes on the suggestion but Asciilifeform says he already investigated the plugin and it fails to solve his problem as it still relies on images and therefore it produces output that is not entirely suited for text-only browsers. Mircea Popescu points out that mathematical notations are simply not fully alphabetic and as such they can't ever be pure text and therefore it's up to terminals to work correctly by being able to handle text + annotations rather than text only. The discussion further advances on to what sort of text preprocessing should actually be done by a browser, with Mircea Popescu noting that this question doesn't yet have a clear answer and Asciilifeform noting that at any rate, existing answers such as tags totally fail to actually answer anything. The mention of tags touches a nerve with Mircea Popescu and he notes that they are a very good example of the fundamentally broken approach that created the significant current technological debt: "simplification" implemented without regard to actual secondary costs incurred and by removing the barriers to entry that kept out precisely the sort of people that had no place to enter in the first place. While Asciilifeform heartily agrees with this view, he considers it old news and summarises it as "mechanization + idiocy == mechanized idiocy." He adds however that this sort of simplification "works" anyway simply by subversion of the very object called computer since actual computers are even more difficult to obtain than they were before while the objects that are now indeed very easily obtained are computers in name only.

BingoBoingo publishes on Qntra four articles: on Britain's no confidence vote in its Prime Minister, on the death of a Physics professor at Stanford University, on the secret conviction of George Pell in Australia and on one of the FBI's terrorism charges.

Mircea Popescu discusses with Asciilifeform the dilemma of Free Software raised by Naggum: while useful code has indeed value as Naggum clearly argues, its valuation cannot be approached in the way that Naggum seems to suggest, namely by attaching some value to the lines of code itself and/or closing code so that its source is not freely available anymore. In Mircea Popescu's view, free access to useful code source does not take away value of the code but instead adds a very useful entry point that works also as a passive but effective filter so that valuable contributors can be discerned from time wasters. Asciilifeform does not disagree with this view but expresses some reserve: he says he did not learn from reading ugly code despite reading loads of it but rather from reading non-code text; he also notes that Naggum seems to have been aware of the fact that lines of code added do not translate into value added but rather the opposite (the best code is no code); he adds also that Naggum's statements regarding the loss of value of software through free publishing are likely the result of his own personal history of trying to make a living by solving complex problems and seeing the tools he needed gradually vanishing as their producers failed to be valued enough to be able to continue their work. Mircea Popescu acknowledges that this is very possibly true and even proposes the neat packing of this pattern into a foundational myth under the title of "avik killed naggum"4 but notes that nevertheless the view that publication destroys value is not only misplaced but dubious in that it actively serves only those authors that attempt to extract more than their work is worth on closer examination. 
And since the abstract work of computer programming is much more similar to other abstract work such as that performed by doctors, the correct valuation should also follow similar patterns rather than attempting to follow patterns (such as copyright) that are derived from valuation of non-abstract work. As a result, Mircea Popescu notes that on one hand the requirement to publish code does not have to apply without discrimination and on the other hand the only correct way to pay for abstract computer work is through the crown allocation process: authors of abstract work may receive their payment as a recognition by a higher authority (the crown) of their valuable contribution but not as some quantifiable, formula-calculated amount that most users can decide on since most users are utterly unqualified to evaluate this type of abstract work in the first place.

As a continuation of the previous discussion on evaluating abstract work in general and code in particular, Mircea Popescu further stresses the important fact that valuable abstract work is by its very nature and fundamentally a surplus phenomenon - meaning that there has to be first some surplus in order for one to be capable of performing abstract work of any value. In practical terms, this means that the authors do it without strictly needing the payment for it and as their own personal choice of doing it in preference to doing other things - some of them with clear payment even - that they are perfectly able to do. Asciilifeform also links this to operating from causes rather than for purposes (i.e. for obtaining some specific payment in this case).

A further continuation of the same discussion explores also to some extent the further difficulty in assigning rewards for valuable abstract work even through the crown allocation process. The process does not make the evaluation of abstract work any easier and it also doesn't provide a clear way to ensure optimal labour allocation at times of need. Essentially, Mircea Popescu notes that existing tools (money as a signal of value and market forces as regulators) although a good fit for concrete work and objects are nevertheless a disastrous fit where abstract work is involved and their failure is so significant that it likely drives intelligent people towards some form of socialism (as the only sort of alternative perceived) in their attempt to find a solution to the problems caused. The conclusion overall is, in Mircea Popescu's own words: "labour allocation is broken and nobody has any better".

BingoBoingo issues Pizarro invoices to Mircea Popescu for Nicoleci's shared hosting and for SMG's test server. He also updates Pizarro's public page to reflect the 10% discount offered on shared hosting for annual subscriptions over monthly subscriptions. Later, following Mocky's request, he also invoices Mocky for an annual shared hosting subscription.

Mocky asks BingoBoingo to bill him for his shared hosting with Pizarro on an annual basis. He reports that his search for a job is still ongoing although slowed to some extent by holidays of his interviewers. Mircea Popescu suggests perhaps pooling resources through running a TMSR version of bourbaki: specialist appliers to remote jobs dumping tasks in a file that gets passed around for TMSR people to choose from as and when they want to do some non-TMSR work. Mocky chimes in to say that he previously considered outsourcing some of his own work, while Trinque notes that Oracle for instance is known to actually do precisely this. Asciilifeform says he'd be delighted to work in this way but expresses his doubts at the scheme, mainly due to the difficulty he perceives with task level/definition/discussion and the potentially problematic case of tasks that nobody wants to pick up within the allocated timeframe. Mocky says that his concern with this model is the fact that it can take him a year to become capable of actually solving specific problems within a reasonable time.

Asciilifeform and Mircea Popescu discuss the actual relationship between employer and employees with specific focus on Asciilifeform's apparent inability to escape employee status. Mircea Popescu notes that the core issue seems to be the mismatch between the favourite "select first and then talk to selected" approach of most republicans and the opposite "filter the ocean" approach5 that is actually required for any search outside of TMSR (a search for employer included). Relatedly, he asks whether Pizarro has managed to do anything of the sort in order to find the clients it needs for survival. At a later time, BingoBoingo replies, revealing that the short answer is no, Pizarro has not yet managed to do anything of the sort but it might perhaps still manage to do it if only an "awk ninja" materializes to write the needed scripts.

On Thursday, 13th of December 2018:

Nicoleci publishes on her blog her summary of TMSR logs for 21 November 2018.

Diana Coman negrates douchebag for obstinately wasting her time. Probably in retaliation, douchebag carves into his own freenode nick the message that nobody is interested in listening to, doing the irc equivalent of a sandwich man. This prompts some laughter and merriment all around and Phf notes that douchebag's vulnerability finding is all about form rather than substance. Diana Coman says that the proposed view fits indeed the observed behaviour and moreover makes the whole activity very similar to a form of political correctness applied to code. Mircea Popescu takes this further and says that in this case the whole thing is also the precise equivalent of period politruks that attempt to police the code as the currently relevant form of speech.

Diana Coman publishes Chapter 12 of SMG Comms containing a thread-safe Ada implementation of simple queues that are specific to the needs of Eulora (as opposed to the generic thread-safe Ada queues of the standard).

Diana Coman informs Trinque that she experienced some problems obtaining an answer from deedbot to the !!ledger command. Trinque notes that the command currently works for him and he suspects the issue was most likely due to a lost connection between the irc bot that receives the command and the back service that actually handles all wallet functionality.

Diana Coman rates juliankunkel, the lecturer at Reading University who invited her to give a Bitcoin talk to students. Asciilifeform and BingoBoingo welcome him but he doesn't have much to say.

Asciilifeform reports he acquired the bolix on which he previously bid and he says he will therefore xray it. A bit later, Mircea Popescu contributes to this with a warning for Asciilifeform to check the rated power of the equipment he intends to use since not all equipment is powerful enough for such a task. Asciilifeform however reveals that the task is not likely to require high power anyway as there is no middle metal layer.

BingoBoingo publishes on Qntra: on the use of facial recognition at a pop concert.

Mircea Popescu, Asciilifeform and Diana Coman discuss the best approach to take for implementing (or not!) a sender/receiver layer as part of SMG Comms. The conclusion is that there will be such a layer as part of SMG Comms but a very thin one that simply moves UDP messages from/to outbound/inbound queue and the UDP socket. The reason for this layer to exist is the need to move UDP messages quickly from the relatively small queue on the IP stack to the larger in-memory queue. The reason for it to be part of SMG Comms is that it's not specific in any way to any given application since it's so thin as to focus exclusively on moving messages from/to socket and queues.
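
The thin layer agreed upon can be sketched readily enough; the Python sketch below (names, sizes and structure are my own assumptions for illustration only, the real item being part of SMG Comms in Ada) shows the whole of the job: two loops that do nothing but shuttle datagrams between the UDP socket and the in-memory queues.

```python
import queue
import socket
import threading

# Assumed maximum datagram size for this sketch; not SMG Comms' actual value.
MAX_UDP_PAYLOAD = 1472

def receive_loop(sock: socket.socket, inbound: queue.Queue) -> None:
    """Drain the small kernel-side UDP buffer into the larger in-memory queue."""
    while True:
        datagram, source = sock.recvfrom(MAX_UDP_PAYLOAD)
        inbound.put((datagram, source))

def send_loop(sock: socket.socket, outbound: queue.Queue) -> None:
    """Push queued outbound messages out through the UDP socket."""
    while True:
        datagram, destination = outbound.get()
        sock.sendto(datagram, destination)

def start_layer(sock: socket.socket,
                inbound: queue.Queue,
                outbound: queue.Queue) -> None:
    """Run both loops on their own threads; the layer does nothing else."""
    threading.Thread(target=receive_loop, args=(sock, inbound), daemon=True).start()
    threading.Thread(target=send_loop, args=(sock, outbound), daemon=True).start()
```

Note that the layer neither parses nor otherwise interprets messages - that part remains with the application - which is exactly what keeps it thin enough to live in SMG Comms without being specific to any given application.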

On Friday, 14th of December 2018:

Asciilifeform provides a paste of his talk to adlai in #asciilifeform as proof that quitting drinking has at least *some* effects. Mircea Popescu suggests a recuperative scholarly series on the SNS server and later notes that this is simply for documentation value rather than some silver bullet (or indeed any sort of working bullet at all). Asciilifeform indulges his love of old/interesting hardware mentioning items rarer than the bolix: xerox lispm and tandem. Phf reveals he has worked on a tandem (known as HP NonStop) at some point and he appreciated the architecture but noted that the software was entirely written in Cobol. This is interesting to Asciilifeform but it makes him poke Phf about some promised Bolix documentation that he previously said he might have. Following on from this, Asciilifeform reveals that he will likely perform the xray of his Bolix machine with his own hands but that he'd like to have Phf's papers (if there are any) to check against. The next day, Mircea Popescu adds to this discussion noting that there is proper xray-hygiene to follow when performing such a task.

Danielpbarron publishes on his blog another talk with unknown people on religious matters. A reference to reddit in there prompts Trinque to enquire if Danielpbarron is meanwhile militantly anti-republican. Danielpbarron flatly answers "no".

Nicoleci publishes on her blog her summary of TMSR logs of 22 November 2018.

BingoBoingo publishes on Qntra: on Germany's three choices of sex on paper.

On Saturday, 15th of December 2018:

Danielpbarron publishes on his blog another talk with random people on religious matters. Asciilifeform gets from it the impression that Danielpbarron's approach is essentially calvinistic but Danielpbarron rejects this assessment on the grounds that "calvinism leads to hell". Some further talk reveals that Asciilifeform hasn't followed the religious life of Danielpbarron all that closely.

Asciilifeform further discusses with Phf his current plan for xray-ing his Bolix machine and then using the gained knowledge to build probes for further knowledge gain.

Asciilifeform notes that Ben Vulpes' logging bot is not working and Mircea Popescu notes that anyone can start another logging bot and simply aim it at the chans of interest as the bot code is published already.

BingoBoingo publishes on Qntra: on the latest adventures of Macron in France.

Mircea Popescu provides another sample from the responses he gets to one of his ocean-filtering actions. Asciilifeform is curious on the percentage of responses that manage to at least read the full initial message that usually gets cut off on various mobile phones and the like. The response is that there are some that pass this basic test but the percentage is very small.

Nicoleci publishes on her blog her summary of TMSR logs of 23 November 2018.

On Sunday, 16th of December 2018:

Asciilifeform announces he received his Bolix in perfect packing, with all accessories and able to run. He notes that he was in the end the only one to bid on this machine but he still did not want to miss the opportunity to buy it since the price apparently increases by $1000 every year. Phf suspects that there aren't that many bidders anyway and it's all more of a show with other owners of older hardware simply hoarding it as they notice they can't replace it anymore at any cost. The emerging picture seems to be that 2009 is the cutoff point for hardware that one can trust. A bit later, Phf provides Asciilifeform with DKS patches for Bolix and a port of 'zork' to Bolix.

Mircea Popescu laughs heartily at the Unity Web Player being now reportedly no longer working on Chrome, Firefox and Edge browsers. His faint interest in the matter focuses on the fact that it's totally unclear why and how exactly Unity achieved its previous popularity. His hypothesis on this is that Unity got "chosen" simply for lack of any alternative. Asciilifeform offers as a similar puzzle the success of qt but Mircea Popescu notes that they are in fact not comparable since Unity never actually worked nor did it ever have serious resources to speak of while qt both works and is not in fact going anywhere. Asciilifeform then links this to Bolix noting that the 3d engine for it still exists and is called Mirai. Its forum however is not working as it was overrun by spam. Amberglint joins in the discussion to correct Asciilifeform's assertion that Mirai was ported to CPP - he says it was in fact ported to Allegro Common Lisp. He also mentions that the most well-known work done in Mirai is the Gollum character for the Lord of the Rings film.

Asciilifeform publishes on his blog Chapter 14A of the FFA series covering the first half of Barrett's Modular Reduction.

Mircea Popescu publishes on his blog the result of his wtf is a "Post Malone".

Diana Coman publishes on her blog a summary of TMSR logs from 3 to 9 December 2018. Mircea Popescu provides feedback and some corrections to it. The summary also prompts Mircea Popescu to add to one of the main topics previously touched, namely the existing conflict between different versions of database management systems (mysql and postgres) being needed for different sorts of tasks. This spills into the next day and the next week.

  1. A customized version of the Wordpress blogging platform produced by Mircea Popescu, packaged in V format by Hanbot and currently used by most republican blogs and by the Pizarro ISP for its clients that share space on a server. 

  2. Main tech for Pizarro ISP. 

  3. Founder of Pizarro ISP and the only current local pair of hands. 

  4. Avik is the self-styled "master" that reportedly feeds his slaves a cocktail of pills, works at precisely the sort of software that crowded out of the market the tools needed by Naggum for his work and otherwise keeps pestering Nicoleci with unsolicited emails. 

  5. This is a term of art meaning literally talking to EVERYONE and then filtering out those contacts - 1 in 1mn or similar ratios - that are in fact of any interest. 

December 19, 2018

A Week in TMSR: 26 November - 2 December 2018

Filed under: A Week, TMSR — Diana Coman @ 4:45 p.m.

On Monday:

Asciilifeform publishes on his blog Chapter 13 of the FFA series.

Trinque announces that he has a working Cuntoo bootstrapper that runs entirely offline and reliably produces the same genesis .vpatch that can then be verified against his signature. His write-up on the topic is due for next day. As part of his work on this, Trinque wonders whether vdiff could or should perhaps be able to produce a genesis .vpatch without requiring an empty directory. Asciilifeform points out in response that the current approach is both standard and without fault to his eyes. Moreover, later during the day, Mircea Popescu also notes that the empty directory is perfectly fine from a philosophical point of view since it represents the perfect code. He also states that an alternative, specific solution (such as diff against /dev/null) is perfectly valid as well for as long as it doesn't turn into a different mode of operation: essentially as long as vdiff simply diffs whether for genesis or not, it's fine; as soon as one wants to define some diff as diff and then something-else as genesis, insanity creeps in. This turns quickly quite vicious as Asciilifeform points out that such insanity is usually the "being smart" of code and as such the bane of any programmer who wants to do something as opposed to just write more code; in turn, Mircea Popescu strikes back with the even pointier point that the "being smart" of people is the even greater bane of any man who wants to do something as opposed to just breathe in and out for yet another day.
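
The equivalence between the two forms can be illustrated with any unified-diff implementation; in the Python sketch below (difflib standing in for vdiff, file names invented, hashing and all vdiff specifics omitted), a genesis is simply a diff whose "before" side is empty, with /dev/null as the conventional label for that emptiness:

```python
import difflib

def genesis_diff(new_lines, path):
    """Diff the new file against emptiness: every line comes out as an addition."""
    return "".join(difflib.unified_diff(
        [],                   # the empty "before" side: the perfect code
        new_lines,            # the actual file content being brought into existence
        fromfile="/dev/null", # the conventional name for the empty side
        tofile=path,
    ))

patch = genesis_diff(["hello\n", "world\n"], "myproject/README")
print(patch)
```

Whether the empty side is spelled as an empty directory or as /dev/null, the output is the same pile of pure additions - which is the point made above: as long as the tool simply diffs, genesis is not a different mode of operation.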

As further result of his work on the Cuntoo script, Trinque notes that he doesn't find sane the default behaviour of vdiff to exit with what is normally an error code (i.e. returning something other than 0) just because the given parameters are not as expected. Nevertheless, he makes do with it for now and uses it as it is in his Cuntoo script. Mircea Popescu chimes in to note that Trinque's point on this is valid - there shouldn't be a need to keep adding the check for this case. Phf takes note of the issue raised and says that he is adding it to his backlog of issues to fix and curiosities to look at (with naked eye, powerful microscope or a bigger hammer, as required).

Nicoleci publishes her summaries of #trilema logs of 9 and 10 November 2018. Asciilifeform finds at least one sentence in the latter hilarious.

Spyked comes back from a longish absence due to unexpected health issues of his father. He rates and introduces his new bot, feedbot, points republicans to its help page, promises to publish its V tree as well in the following days and tells Trinque that he can disable the rss part of his deedbot since feedbot is taking over that job. Mircea Popescu would rather have a smoother approach to this take over and points out to Spyked that a planned and gradual take over is likely to be less bumpy (for everyone involved) than the original para-dropping of feedbot into TMSR territory.

Asciilifeform enquires whether there is some automated or semi-automated way to submit new .vpatch and/or .sig files for inclusion into the main repository at http://btcbase.org/patches. His enquiry fails to get an answer and so far he doesn't seem bothered enough to press the issue further.

The current bot count in channel further increases by one as Asciilifeform resurrects his own FFA-bot - its name is PehBot. Asciilifeform gives PehBot a spin, illustrating the newly added capabilities of the bot that now matches the FFA content up to and including Chapter 13. Initially, Asciilifeform plans to move PehBot to #asciilifeform but Mircea Popescu points out that bot or not, one can stay in #trilema for as long as one doesn't become a nuisance.

BingoBoingo publishes on Qntra an article on the seizing of some Ukrainian ships in the Kerch strait. Both deedbot and feedbot jump on the new feed and announce it, prompting Mircea Popescu to poke spyked who hasn't yet noticed the call for a smoother take over. Smoothness might need to be enforced with a bigger gun.

Asciilifeform fixes the commenting form on his blog after more than a year since they last worked and more like a few days since the latest round of pointed complaints about it from people who wanted to provide feedback on his FFA series. As part of the fix, he replaces one set of tripping wires for commenters with a single tripping thread hoping that the decreased complexity means that at least *some* commenters survive it now.

As part of his www winter-cleaning, Asciilifeform also installs the selection script that enables one to link to specific parts of text in a post. He goes on to have as a result an oft-repeated argument with Mircea Popescu regarding the proper way to keep - since you have to! - and especially keep in check your www stable that includes dirty beasts such as php and wordpress.

The ups and downs of reported fiat-value of Bitcoin continue to entertain as the "valuation" goes down to 3000 green papers out of infinity per one coin out of 21 million total. Asciilifeform saves his laughter for when the valuation goes to 3 green papers for the same coin but as part of the discussion, Mircea Popescu digs out and links several older posts on trilema.com that are a good read at any time.

From valuation of Bitcoin, the discussion moves seamlessly to valuation of people. Mircea Popescu notes that the bar to being a "wise man" in TMSR keeps increasing but there is hope that this increase is actually capped since the steep increases recently witnessed are likely due to catching up with neglected work rather than anything else. Illustrations range from dirty socks to recent (douchebag) and a bit less recent (kakobrekla) failures in #trilema and from Arthur Blair to C.S. Lewis or cardinal Newman in the world at large. As a side point, there is also a definition of "fractional girlfriend" and the observation that not everybody seemingly asking a question is actually looking for its or indeed for any actual answer.

On Tuesday:

Mircea Popescu provides a concrete example of the need to filter an ocean to find a crumb of usefulness. Empirical results seem to suggest that only about 1 person in 1000 interacts at all with what they read.

Mircea Popescu asks for feedback on his recently published (last week) first draft towards defining a republican replacement for DNS aka the GNS. Trinque says he hasn't yet got around to reading the published piece but he promises he will read, digest and come back with a response.

As promised on the previous day, Trinque publishes on his blog his new script for bootstrapping Cuntoo. He asks people to let him know if they try it and with what results. Asciilifeform quickly looks and promises to try it at a later time but for the time being he questions the way in which the script steps through 2 GCC versions to get to the desired 4.9.4 version and the fact that it lacks ave1's gcc. In response, Trinque says that having ave1's GCC is the plan in the longer term but the point of the current item (the script) is to provide a stable starting point made of what-is, so including the mainstream 4.9.4 gcc. Mircea Popescu chimes in at a later point to say that nevertheless the ebuild with ave1's gcc should be made and preferably quite quickly. He also sketches out the roadmap for the longer term, including the full removal of SSL (all flavours and from all places) that is to be replaced with straight RSA. This sort of replacement is meant indeed for all republican items including for instance TRB although Mircea Popescu notes that the replacement might be FFA or a different republican RSA implementation depending on the practical requirements and constraints of each application.

Joining in the conversation sparked by Trinque's Cuntoo script, phf reveals that he has various POC1 bits and pieces that explore some potential ways of installing packages on a system in a V-reliable way. Trinque would like to see those and Phf promises to pull them out from their respective hiding places but only after he finishes his current move from the US to Russia. At a later point, Diana Coman reads the script and asks for clarification from Trinque on whether the published script can also be used to upgrade an existing gentoo installation to Cuntoo. Trinque replies that such a feat is in principle possible but it's currently undocumented and as such unexplored territory to be yet tried at explorer's own risk.

Nicoleci publishes her summaries of #trilema logs of 11 and 12 November 2018.

Spyked notices the request for smoother transition to his feedbot and acts accordingly to synchronize with Trinque.

Phf "snarfs" the latest FFA vpatches from Asciilifeform. The "snarf" is a term of art and it means that the new .vpatches and their signatures are now mirrored in the main TMSR repository at http://btcbase.org/patches. Asciilifeform professes his fondness of Phf's repository infrastructure that is "unspeakably helpful".

Asciilifeform, Diana Coman, Mircea Popescu and Phf discuss briefly the way in which Ada transitioned from initial ugly and gnarly find to republican standard language for programming. As part of the discussion, Asciilifeform links to Mocky's useful summary of the arguments for using Ada.

Davout was last seen in TMSR in April 2018, more than 5 months ago. Meaningful work from Davout was last seen several years ago. Mircea Popescu gives a quick summary of Davout's known involvement so far: being a tech in the early days and doing the receivership for BitBet at a later date.

A "grubles" from 2014 joins #trilema in 2017 and doesn't last long. By contrast, the negative rating he acquired in 2014 continues to last.

Asciilifeform corrects his own oversight and negrates "Hasimir".

BingoBoingo publishes on Qntra a short announcement of the release of Cuntoo bootstrapper and a brief note on the adventures of some people who run any code they happen to find.

On Wednesday:

Danielpbarron's blog sends delayed pingbacks from 2016 as he is finally uploading old articles on the new hosting with Pizarro. Following the delayed pingbacks, Mircea Popescu reads the content in which they are embedded and expresses puzzlement over the mismatch between the interests and worldview they reflect and Danielpbarron's desire to continue being a lord of TMSR. When asked directly, Danielpbarron states that Bitcoin is still an interest of his and it's simply a matter of enjoying his position and being materially invested in Bitcoin via running a node and having items and ECu in Eulora. He also says he may be lazy but not "morally opposed" to doing meaningful work in TMSR.

Amberglint pops by to offer Asciilifeform a pointer to someone who wants to decap an Ivory processor board. Asciilifeform points out that the operation can't be trusted to anybody walking in through the door as the Ivory is a very scarce resource so the wannabe de-capper is cordially invited to get in the WoT and convince people that he can be trusted with such a task. Phf also chimes in to ask Asciilifeform to postpone any attempt at decapping an Ivory until he gets the chance to provide the docs that he has somewhere buried among other stuff and might shed some light without the dangers of decapping anyone. Asciilifeform publishes a high resolution photo of the Mac Ivory Model 2 processor board together with a shouty "Do Not Touch!" warning neatly guarding it.

BingoBoingo publishes on Qntra 2 articles, on the jailing of a 64 year old woman over holding cotton candy and on sanctions issued against Bitcoin addresses, respectively. He also publishes on his own blog the Peso exchange rate of the day and his practical cooking lesson involving birds and stuffings.

Nicoleci publishes her summary of #trilema logs of 13 November 2018.

Asciilifeform wants to store his valuable Ivories with Mircea Popescu or even just pass them on to him but the latter is neither interested in adding to his pile of rarities nor offering vault space.

Mircea Popescu briefly toys with the idea of sending some bitcoin to the 2 addresses that were "sanctioned" but he notes that they haven't really been used anyway. In the process, he reveals that he can't quite tell apart the orlov of today from the orwell of yesteryear - they are all an orlol to him. Nevertheless, with Asciilifeform's help, he unearths the desired reference that turns out to be from the more recent writer - the topic being that a strongly held "no" is deeply disturbing to those who never encountered such a thing before.

Mocky briefly stops by to let BingoBoingo know about the fate of most recent wires that he sent. He also notes that most employers in his area now require "secret government clearance" for would-be employees.

On Thursday:

Mocky wonders at the latest job descriptions that include "blockchain engineer". He also states that searching for employment sucks but his previous solution for it - to stay with the same company for 14 years - also ended up sucking. Mocky then points out that he got so far precisely what he wanted, namely a lot of kids and a lot of code but he feels like an idjit for not having saved anything during all this time. Mircea Popescu validates Mocky's feelings on this matter.

Nicoleci publishes her summary of #trilema logs of 14 November 2018.

Spyked coordinates with Trinque to take over feeds in the switch from deedbot to feedbot. He also announces he bought himself a c101pa thinking it is a good place to test Trinque's Cuntoo bootstrapper on. Asciilifeform quickly points out to him that no, the c101pa is no such thing.

Mircea Popescu reveals he is burnt to peeling as a result of slut wrestling in the Costa Rican sun. Based on this information, Asciilifeform estimates that he'd last about 10 minutes under similar circumstances and reveals that he can also become crisplifeform under the sun on the 39th parallel.

Asciilifeform compares the LOC2 count for his FFA series that is in the low thousands with that of TRB which is in the tens of thousands not including an additional unknown ball of dependencies pulled in. Diana Coman points out that the quality of any lines of code also matters - so it gets even better than the naked numbers show since she'd gladly read Asciilifeform's 1000 LOC at any time in preference to reading even 100 lines of Koch's.

Diana Coman states that she read and will sign Chapter 3 of the FFA series. She has however a question for Asciilifeform on it, regarding the exact meaning of overflow for a shift operation. As she points Asciilifeform to the exact code in question, he is able to confirm that she is right in observing that the code can produce garbage if called with arbitrary arguments but the procedure in question is strictly for internal use of the lib and as such strictly called with correct arguments that don't result in garbage.

Asciilifeform points out a node that is stuck behind the tip of the chain and he suggests that the aggression patch of trb should be deployed to help it catch up. Lobbes acknowledges that the node is his but notes that it is already running with the aggression patch so that further investigation is needed to find out the reason for its sad situation.

On Friday:

Mircea Popescu notes the remarkable similarities between apparently different things such as the fate of competent versus incompetent engineers in the current environment or the ability to remain synchronised with the network of Bitcoin nodes hosted with a reliable service as opposed to those hosted with a less reliable one. The similarity comes from the overwhelmingly socialist streak of the environment that is essentially described as "hindering the worthy to prop up the unworthy."

Nicoleci publishes her summaries of #trilema logs of 15 and 16 November 2018.

A certain "zx2c4" revisits #trilema providing Asciilifeform with ample opportunities for restating various basic points including the fact that the technical can never be separated from the political, the fact that a "proof" that requires faith (be it in unread, supporting code) is at most a proof of the proponent's idiocy and otherwise no proof at all and the competent opinion that "Rust" is a "leprous pile of shit" no matter how one looks at it. Upon coming online to the whole zx2c4 display, Mircea Popescu swiftly negrates zx2c4. The action prompts Asciilifeform to cite from "Левый марш" ("Left March"): ваше слово, товарищ маузер! ("your word, comrade Mauser!") (tm). Upon coming online to this last line in Russian, Ave1 embarks upon making sense of it and as a result swiftly publishes his attempt at translating the "Левый марш" to English. Asciilifeform contributes in the comments to Ave1's post with his own quick translation of the whole thing.

Diana Coman publishes Chapter 10 of her SMG Comms series including an implementation of Action and RSA keys types of messages for Eulora's needs. She also signs chapter 3 of Asciilifeform's FFA series.

BingoBoingo publishes on Qntra 2 brief notes on some US new fines and Argentina's new deal to buy soy from the US. He also updates his previous Qntra post on the 2 Bitcoin addresses that were sanctioned: the update includes the clear evidence that the sanctions are worthless since there are newly confirmed transactions to those addresses.

The potential decapper of Ivories turns up under the nickname of SeanRiddle. Mircea Popescu mistakenly thinks he's the author of pbfcomics.com and rates him accordingly. Further discussion regarding SeanRiddle's procedure for decapping reveals that he does decapping only as a weekend hobby, uses rust remover to remove top layers, leaves stuff with multiple layers for someone with better equipment, doesn't do any comic stuff and works as a programmer for most of the time. He also provides on his blog only low resolution photos because of limitations of his initial blog setup that involved blogspot and wikimedia. Mircea Popescu points out that he is better served with an actual blog of his own where he can also simply upload files as big as they might be to have the high resolution that is needed for the task. Asciilifeform notes that the number of layers to remove from the Ivory is not known and Mircea Popescu re-rates SeanRiddle with the more apt reference to his existing decapping hobby as opposed to his nonexistent comic work. Upon further discussion, Asciilifeform passes on the offer of using SeanRiddle as a decapper for the precious and rare Ivory and SeanRiddle himself agrees that he'd rather leave this job for someone else. Amberglint joins in and provides a few .pdf files of potential interest, prompting Mircea Popescu to suggest he starts his own blog already, possibly hosted with Pizarro.

Mircea Popescu complains to Mocky about an issue with Mocky's bot in Eulora but Mocky says he won't be able to look at it until he is done finding his new "daily bread overlord". Mircea Popescu acknowledges the answer and points out to anyone able to read that there is this opportunity waiting for them to contribute by fixing the bot issue.

On Saturday:

Danielpbarron publishes on his blog his conversation on religious issues with some dudes in some chat room called #LRH - it turns out he took out parts of it anyway, so it's only some of the conversation.

BingoBoingo publishes on Qntra the November 2018 Report on Qntra activity noting 3117 words published during the month, all of them by BingoBoingo himself. He also publishes on Qntra an article on an FBI raid.

BingoBoingo and Asciilifeform discuss the unreal state of the real estate in Uruguay and Argentina noting mostly the inflated prices of flats and the similarly inflated expectations of owners.

Amberglint pops by to point to Asciilifeform the Soviet Refal machine and other hints of Soviet Lisp machines but Asciilifeform is already familiar with those and points to Amberglint the place they all went to: /dev/null.

Danielpbarron will consider visiting Uruguay but doesn't actually plan a visit. He also considers running a poker bot but his consideration is stopped dead by the realisation that he can't legally operate a gambling anything from the country he is in. Mircea Popescu points out that this aspect is a problem of the country itself to the point that one cannot legally operate almost anything there but Danielpbarron doesn't consider that to be the worst of things anyway.

Mircea Popescu illustrates the issue of naive extension of notions to domains or situations where they don't apply. As concrete example, he notes that amortization does not make sense as a concept to be considered by a country when deciding what to do with its current generation since each generation is a set of resources that will be spent anyway and always without any possibility of saving any of it.

Mircea Popescu and Asciilifeform discuss the nature of what distinguishes individuals out of a mass. Initially Asciilifeform seems to consider it is a matter of having more or less of some characteristic such as courage but Mircea Popescu points out that the only practical way to distinguish is the answer or lack of answer to some specific situation - essentially whether one gets the "calling" or not.

Asciilifeform and Mircea Popescu disagree in their view and interpretation of Lavrentii Beria. The discussion spills onto the next day.

On Sunday:

Asciilifeform and Mircea Popescu continue at length their discussion of Beria, Stalin and the whole entourage. Asciilifeform provides a curated fragment by Bukovsky in support of his own point. Mircea Popescu attempts to read it but quickly runs into abundant examples of stupidity and therefore stops before getting to full blown rage.

Ben Vulpes asks BingoBoingo to send him over the Pizarro transactions so that he can produce the full statements and move on to making and filling with data a customer equity tracker for Pizarro. BingoBoingo provides the required data.

BingoBoingo publishes on Qntra a short post on the protests in France. He also publishes on Pizarro's blog a summary of his activities for the business.

  1. Pics and pocs might be anything but here I read it as proof of concept! 

  2. Lines of Code 


Theme and content by Diana Coman