Once upon a time I used Git - local repository only - for keeping track of any work I did on large, unyielding and messed up code bases of all sorts as I explored them, trimmed them down and hosed them up on occasion. And before Git I used Subversion and before that I used CVS and so on the list keeps going, from one big system to the next, ever more polished in some ways and ever the same at core, ever "new" of the sort that brought its own learning curve and its own set of dependencies but ever old in its disregard for the user's machine (on which it would require all sorts of shit installed) and ever old in the underlying assumption that the user doesn't - and even shouldn't - care to own their tools but care instead to adapt to the tool, to effectively serve as guinea pig for whatever the tool - the master - may require.
With all the above, I gained at least a lot of experience using existing versioning systems I guess, but not much more. And as the gap between my computers' environment and those tools' expectations only widened in time, there came the point where I ditched all of them, unwilling to put up with their requirements even if it meant losing the usefulness I extracted otherwise from them, mainly with respect to maintaining a clear history record of my work and being able to move back and forth through it with relative ease.
While I never regretted not having Git or similar, it's the lack of an integrated record while working with a large code base that bothered me quite a lot, because it does hurt in all sorts of ways, from piling on additional work just to maintain otherwise a separate record, to making it harder -as in more of a problem waiting to happen- to explore more radical trims to the code. The solution that currently works quite well for me is a basic set of scripts that allows me to use V in a more streamlined manner at development time: I've made a keypair just for code development [1], an "init" script that creates the genesis vpatch containing whatever code you point it at and a "commit" script that creates a fresh vpatch on top of whatever you set out for it. So for a grand total of 108 lines of .sh scripts (comments included!), combined on occasion with a few more commands directly from the console (mainly when wanting to roll back, but making a script out of a single command doesn't quite make sense to me), I get to simply use what I already have installed anyway, namely a V presser, vpatch (with Keccak) and gpg [2].
Using the scripts to genesis ...the scripts themselves was a pleasure, at that. Here's from the README file, to have it plainly and otherwise head over to my Reference Code Shelf for the genesis itself:
- Prerequisites: vdiff, gpg (with secret key for the user set in v_init.sh).
- NB: DO change in BOTH scripts the name to match the gpg key you want used by the script!
- NB: the scripts are blind beasts and don't check anything but plow on, so unless you want to obtain garbage: a. know what you're doing and run them in the correct order; b. don't mess up their expectations (esp. the contents of the a and b directories).
-
There are 2 basic scripts:
- v_init.sh - to be run only once, at the very beginning; this creates the directories a, b, patches, .seals and .wot and populates them correctly (with the newly made genesis vpatch + corresponding sig that it creates).
- v_commit.sh - this assumes that all directories are in place already (esp a, b, patches, .seals, .wot) and that b contains the result of a *previous* press on which the new .vpatch is to be created.
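For concreteness, the flow the README describes for v_init.sh might look roughly like the sketch below. This is a hypothetical reconstruction from the description above, not the actual script: the key name, the dry-run wrapper and the exact vdiff/gpg invocations are my assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of v_init.sh's flow, reconstructed from the
# README description only; the real script differs. Shown as a dry
# run: run() echoes each step instead of executing it, since the
# key name and exact tool invocations here are assumptions.
PROJECT="vcodectrl"   # project name, as in the usage example
SRCDIR="vcode"        # directory with the code to put under control
KEYNAME="mykey"       # must match your gpg secret key (see the NB)
run() { echo "$@"; }  # change the body to "$@" to really execute

run mkdir -p a b patches .seals .wot
run cp -r "$SRCDIR" b/
run vdiff a b ">" "patches/${PROJECT}_genesis.vpatch"
run gpg -u "$KEYNAME" --armor --detach-sign \
    -o ".seals/${PROJECT}_genesis.vpatch.sig" \
    "patches/${PROJECT}_genesis.vpatch"
```

The genesis falls out naturally here: pressing a (empty) against b (the full code) makes the first vpatch contain everything.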
-
Usage:
-
To genesis some code:
from the directory of your choice, run the v_init.sh script giving it as parameters: the name of the project (it will be used to name the genesis as projectName_genesis.vpatch), the name of the directory containing the code you want under control (if it's not local, you'll have to give the full path) and, optionally, any number of files that are not in the directory previously given but that you want included as well (usually this is manifest and possibly a .gpr file for my use).
Example:
sh v_init.sh vcodectrl vcode manifest
-
To pack current changes to the code into a new .vpatch on top of whatever is currently found in the b directory: from the directory containing the a, b, patches, .seals, .wot, run the v_commit.sh script giving it as parameters: the name of the project, the name of the directory containing the code, the name for the new vpatch (without the extension, so without .vpatch at the end) and, optionally, any number of files that are not in the directory previously given but that you want included as well (usually this is manifest and possibly a .gpr file for my use).
Example:
sh v_commit.sh vcodectrl vcode added_readme manifest README
-
To check the result:
simply use your favourite V implementation to press to whatever patch you want from the patches dir.
Example:
vk.pl f
vk.pl p v testpress added_readme.vpatch
-
To work on a previous version of the code and possibly branch from there:
delete the b dir
use your favourite V implementation to press to the vpatch of your choice from the patches dir
make a copy of your newly pressed code base into the b directory
simply work on your newly pressed code base and then use v_commit.sh whenever you want to press a new vpatch on top.
Example:
rm -rf b
vk.pl p v newbranch vcodectrl_genesis.vpatch
cp -r newbranch b
-
To clean up *everything*:
rm -rf a b patches .seals .wot
-
[1] I don't even intend to ever make this public; it has no business out there nor any "identity" as such, it's just a part of a tool, nothing more.
[2] For lack of a republican option for it, so far.
I like the approach of 'a' and 'b' as throwaway directories managed by scripts, if I've got that right, with longer names for working trees.
How do you find yourself choosing that "directory of your choice", the one with the {a,b,patches,.seals,.wot}, which I've taken to calling the "V workspace"? I mostly use ~/src/$projectName/, then ~/v/ is a permanent store of patches & seals i.e. the same layout as the code shelf on my blog. I'm not altogether happy with this duplication, though the patches and .seals subdirs can be symlinked.
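The symlinked arrangement mentioned at the end can be set up in a couple of lines; a minimal sketch with illustrative paths (mktemp stands in for ~/v and ~/src/$projectName, and the file names are invented for the example):

```shell
#!/bin/sh
# Sketch of the layout discussed above: a per-project V workspace
# whose patches and .seals dirs are symlinks into a single permanent
# store, so each patch lands in exactly one place. Paths are
# illustrative: STORE stands in for ~/v, WORK for ~/src/$projectName.
STORE="$(mktemp -d)"
WORK="$(mktemp -d)/myproject"
mkdir -p "$STORE/patches" "$STORE/.seals" "$WORK"
ln -s "$STORE/patches" "$WORK/patches"
ln -s "$STORE/.seals"  "$WORK/.seals"

# a patch written via the workspace is visible in the central store:
touch "$WORK/patches/myproject_genesis.vpatch"
ls "$STORE/patches"
```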
What is the "echo -e" at the start of the scripts for, was it perhaps meant to be "set -e"?
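If "set -e" was indeed what was meant, the difference matters for "blind beasts" of scripts: "echo -e" merely prints a line (blank or a literal "-e", depending on the shell) and does nothing for error handling, while "set -e" aborts at the first failing command instead of plowing on. A minimal demonstration:

```shell
#!/bin/sh
# With set -e active, the subshell below stops at the first failing
# command (false) and never reaches the echo after it; without set -e
# it would plow on regardless of errors.
(
  set -e
  false               # fails; set -e aborts the subshell right here
  echo "not reached"
)
rc=$?
echo "subshell exit code: $rc"
```

Running it prints "subshell exit code: 1" and nothing else, the echo inside the subshell never executing.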
What was that gap between your computing environment and the tool's expectations in the case of git? Dunno if it used to be worse, but I found "porting" it to Gales Linux was a breeze; the only dependency outside the base system was zlib, use of which is widespread enough that it's a good candidate for importing to the base anyway (the only bother being that some things ship their own personal zlib or depend on newer features or quirks - looking at you, mysql). V by comparison, in all implementations yet seen, makes much steeper demands: this but not that gpg, this but not that diff & patch, some large language system on top of the Unix fundamentals; such that I still don't have it fully working, not for lack of desire, and Gales development itself still proceeds through the trusty old local git repo.
The diff and patch inadequacies come from my having gone with Busybox: funnily enough, one of the few Gales design decisions that *didn't* attract pointy questions from the forum yet turns out to be one of the biggest sources of headaches in practice.
Exactly right, to my mind the 'a' and 'b' are for scripts to manage and otherwise entirely throwaway. I guess it does mean also "longer names for working trees" but scripts don't care about that and so... the length of the path never registered for me as some sort of consideration at all.
Keeping in mind that this here is one of the earliest iterations on it, what I want most of all is to a. have clear and useful version control b. keep any instrumentation/mechanics required for version control clearly separated from the source code as much as possible (ie no .svn dirs just added into each source code dir, ugh) c. automate repeated processes as much as possible (because it makes a huge difference in practice and not just one of convenience but literally a significant productivity difference overall).
Hm, what is the difference that you make between ~/src/$projectName/patches and the set of patches related to $projectName that are in ~/v/ ? If anything, I suppose I use perhaps my code shelf on the blog as the "permanent store of patches & seals" (more like published would be the difference there). To answer directly your question though: I prefer to keep projects as self-contained as possible and not maintain any sort of additional "central" anything (possibly my allergy to bureaucracy shows even here, huh). So each project has its own $projectName dir where it does exactly what it needs to do and that might include various "branches" dirs and release set of patches and even version-stamped .tar.gz files and whatnot, sure.
One reason why I didn't add yet to this genesis here is exactly that in practice I found I have in fact different exact v-versioning scripts for Eulora's client vs server for instance. In other words, so far this tiny genesis is the only part that is at least not project-specific.
Right you are & thanks! Tbh I looked now at the vpatch published here and I can't tell what confusion I made there (+ apparently just copied it over from one to the other and it stuck). It's not anymore in any of the working scripts that I use otherwise but yeah, this V-Tree root (why on earth is it "genesis" and not plain root?) needs to grow.
Well, I tried installing it on a fresh system and it pulled in such a lengthy list of things to install first that I didn't even bother to make a note of all the stuff in it, just stopped there. In fairness though, I was anyway already not quite seeing the point of git when there is v, to my mind the takeover there -as principle- is quite complete. Even though there is indeed a lot lacking from an implementation point of view, this doesn't somehow detract from the fact that not using V is like going by horse cart when one knows of engines - there might even be some nostalgia/habit induced pleasantness on occasion in using git but it's just not cutting it anymore for work.
I can see your problem there though, in that your whole environment is essentially closer to the C root of it all and otherwise importing/rather more willingly committing to Scheme, if anything, rather than Ada. Hence Ada itself (via the obese GNAT) is just not yet in for you nor easily brought in (or so it seems) and as a result, git still simply fits your environment better. So possibly you'll end up either having to make your own full version of V-tools, fully C/Scheme based, or otherwise bite the bullet and import GNAT. Worth perhaps noting also that, despite what it might seem at a first glance, the choice between these (and any other, including giving it more time) alternatives here might be better made first and foremost on business and strategic grounds rather than technical ones.
Re gpg at least, I certainly hope to ditch it sooner rather than later. Eulora's client v2.0 has full encryption anyway, so I don't see why I would then not fully use it.
In fairness, I think Trinque actually did quite question exactly the choice of Busybox, didn't he?
Maybe there's another instance you're thinking of, but in the article featuring Busybox and the comments following, he was skeptical of why Busybox wasn't used *more*. Paraphrasing, why Gales uses daemontools, custom init and MAKEDEV, rather than the similar, native Busybox applets ?
What had stuck in my mind on this topic was more that it was under discussion and overall contested/still to be resolved one way or another, with the most recent reference that comes to mind being trinque's let's resolve the busybox-uber-alles question that I don't think even got solved otherwise. Memory also insists that he said somewhere that he wasn't necessarily married to it but he certainly wanted to hear solid reasons for whatever gets chosen in the end, whether busybox or something else.
Perhaps I misinterpret the notion of "pointy questions" - to me the pointy in there stands for deep questioning and touching the core of the matter, not necessarily for taking an opposing stance to something.
http://trinque.org/2020/01/20/a-republican-os-part-3/#comment-156 was probably my last substantial input on that thread, which looks rather weaker than it could have been. (Eg I'm still not inclined to using random disk IDs, why / on what system doesn't linux enumerate them stably?? while my initramfs based usbsticks don't require automagically finding themselves a second time. And it turned out I *had* noted the specific script that broke busybox ash though apparently wasn't able to dig it up when needed). Nonetheless the meager fight it did offer - or rather, initial question as an invitation to further exploring the topic - went unanswered; not sure if I could have just pounded the table louder or what.
Thanks for pointing out the point of the pointies. What I meant was that while I was asked to justify why I went with non-busybox code for certain things supposedly provided by busybox, I was not asked to justify the use of busybox for all its other stuff, which I've been finding would have been an important question. Perhaps the discussion simply didn't get far enough to make anything of that one way or the other.
I'm not entirely sure; it seems to be as you suspect a sort of centralizing "important stuff here, scratch space there". I'll try getting rid of it, don't think I've grown any usage patterns that actually require it as yet.
Ah, it may be an attempt at moving toward scanning the world for relevant software signed by chosen identities, where ~/v amounts to the machine's local cache of the pieces thereby found. So actually aiming more for decentralized and gossip style though certainly not there yet.
Concrete use case: we're building a fleet of boxes to provide true memory-segregated network services: web, database, routing and so on. To the extent possible I'd like the code to be managed with V, and I want all the boxes to have all the code available on-disk, such that configuring each into its own specialized role starting from a stock configuration is a simple affair with minimal outside service dependencies. What do I load onto that stock config (eg tarball from which each disk is initially populated)? If it's a replica of ~/src/$projectName from my dev workstation, it's unnecessarily importing all sorts (pressed branches, uncommitted work in progress, compiled objects, non-version-controlled notes). Whereas ~/v/ is just like the blog's /v/ : just what's necessary to reproduce, all in one place and ready to go (but differs from the blog by including possibly unpublished internal work).
These two seem to be different things though i.e. the first sounds like a mirror of "V patches in my WoT" while the second sounds like "deployed product" - perhaps they can share a structure but I'm not so sure that they *should* share the same structure because to me at this first sight at least, they seem to have entirely different requirements of pretty much everything. This is not to say that the structure you have is necessarily unfit for purpose - only you know exactly what you need there and how it's all intended to work so only you can evaluate that. Note though that each project can maintain just as well its own ~/src/$projectName/deployed dir or similar, not like that has to be done in a different/central place (and again, this is just addressing directly what you mentioned as a potential issue that just doesn't seem to be much of a real issue actually, that's all). At the same time and as you mentioned at first, you can always simply symlink if you really need to have a central point of access for whatever reason. There's nothing "evil" or "wrong" about a single point of access by itself, I just didn't/don't yet need it beyond the shelf on the blog, that's *all*!
Overall, what I'd say (backed up by experience too) is as simple as this: build up the *minimum* structure that your *current* use requires on correct fundamental principles (e.g. don't have one project touching another project's file or whatever else total idiocy baked in), look out for any limitations (those negative spaces again!) you might bake in and especially for fixing too soon things that are not yet clear (this is the most usual way in which you can set yourself up for grief further down the road) but otherwise don't worry that much about the unknown (but desired and even potential! everything is potential, yes) future uses - if your structure is fundamentally sane, then it can also change without huge pains, to accommodate new/additional uses as they become clear (and not before!).
In short: do plan it for what you need *now* and keep it as simple as possible but don't over-optimise, nor pre-agonize about what you "will need" (in a fuzzy, not yet clear way) sometime in the future.
You had already made the decision to use busybox and as such, the time to question *that decision* had passed really. What was important instead the moment you came with a thing already made and already based on busybox was precisely the part you got asked about, namely ~"since you already and as a matter of fact made the decision to go with busybox, then why are you not *fully* going with it?"
Sure, as a learning experience and/or helping you and/or taking a closer interest in your work, one might ask about past decisions too but from a grown up point of view, anyone would expect you had already asked and answered yourself that question quite fully, to your own satisfaction, so for as long as you are responsible about the whole thing, why would anyone start questioning your already made decisions anyway?
Note also that if you go this way, how and where do you stop ie at which past decision exactly? Should the forum also have questioned your decision to go into computing perhaps? To go about it retrospectively like this ("I've been finding would have been an important question") is at most for your own process i.e. the correct conclusion there is to note that *you* didn't ask apparently this important question at its time and/or settled for less than an answer for it but there's nothing to turn on to some external body that you did *not* engage with at the time of *that* decision (unless what you mean is more along the lines of "the forum took me for much more of an adult than I was!!!"). Had you come to the forum at the time when you were weighing "what should I go with for Gales," then and only then you could say "they questioned all sorts but didn't question this thing that I find was most important actually."
You can argue perhaps from a slightly different angle that your point is more about them failing to notice the problems with busybox itself, i.e. not a matter of not questioning you on the most important thing but a matter of incorrectly evaluating exactly the most important thing. So something like "they also missed the crucial part about busybox / most important consequences of this past decision of mine" but that would be supported at most by people's support for busybox, if that was indeed the case and I'm not quite sure it was that much of a case. If you search the logs - from what I recall and otherwise see at a quick glance now, there had been questions raised regarding busybox's utility in real-world scenarios.
@Diana Coman, makes sense to me, thanks for setting it straight.
[...] at least December 2017, I've been using for all my code publishing and versioning needs the original implementation of V by mod6 that I even packed and published as a starter kit to [...]
[...] I am by far the most experienced user of V, I have therefore also first-hand experience with all the different [...]