Bernhard R. Link's blog

Welcome to my little private blog. Most of what you find here is targeted towards planet.debian.org, so it might make sense to follow that to understand what I am talking about.

The contents of this blog are of course also available as an rss feed, in fact as two feeds:

index.rss
one with all posts
changelog.rss
only the posts that are in (an attempt at) English and targeted at planet.debian.org.

For more information about who I am and how to contact me, take a look at my website (German only).

Firefox 69 dropped support for <keygen>

With version 69, firefox removed support for the <keygen> feature for easily deploying TLS client certificates.
It's kind of sad how used I've become to firefox giving me fewer and fewer reasons to use it...
Fri, 20 Sep 2019 21:46:29 +0200 | permanent link | Category: rants

The Colon in the Shell.

I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell:

To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:

: [arguments]
	No effect; the command does nothing beyond expanding arguments and performing any specified redirections.  A zero exit code is returned.

If you wonder what the difference from true is: I don't know of any difference (except that there is no /bin/:).

So what is the colon useful for? You can use it if you need a command that does nothing, but still is a command.
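A minimal sketch of the kind of place where that comes up:

# a then-branch that intentionally does nothing
# (a comment alone would be a syntax error here)
if grep -q '^root:' /etc/passwd ; then
	:
else
	echo "no root user?"
fi

# an endless loop; as ':' is a builtin, no extra process is started
while : ; do
	date
	sleep 60
done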

Then there are more things you can do with the colon, most of which I'd put under "abuse":
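A few sketches of such (ab)uses, for illustration:

# set a default value purely via the side effect of the expansion;
# the colon just swallows the resulting value
: "${EDITOR:=vi}"

# truncate (or create) a file without starting any program
: > logfile

# a poor man's comment; note that expansions still happen,
# so here $(date) is still executed, unlike in a real comment
: some note $(date)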

This is of course not a complete list. But unless I missed something else, those are the most common cases I run into.

[1] <rant>If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, hiding all information inside other information in a quite absurd order. Unless you are looking for documentation on how to write a shell script parser, in which case the bash manpage is really what you want to read.</rant>

Mon, 08 Dec 2014 20:35:03 +0100 | permanent link | Category: babble

Enabling Change

Big changes are always complicated to get done, and the bigger or more diverse the organization they take place in, the harder they can be.

Transparency

Ideally every change is well communicated early and openly. Leaving people in the dark about what will change and when means they have much less time to get comfortable with it or to come to terms with it mentally. Extending the change later or shortening transition periods is especially bad. Letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them. Making a new way optional is a great way to create security (see below), but making it obligatory before the change has even arrived as an option with them will not make them very willing to embrace change.

Take responsibility

Every transformation has costs. Even if some change only improved things and did not make anything worse once implemented (the ideal change you will never meet in reality), deploying the change still costs: processes have to adapt to it, people have to relearn how to do things, how to detect if something goes wrong, how to fix it, documentation has to be adapted, and so on. Even if the change brings more good than costs to the whole organization (let's hope it does; I hope you wouldn't try to do something if the total benefit were negative), the benefits, and thus the benefit-to-cost ratio, will differ between the different parts of your organization and the different people within it. It is hardly avoidable that for some people there will not be much benefit, much less perceived benefit, compared to the costs they have to bear for it. Those are the people whose good will you want to fight for, not the people you want to fight against.

They have to pay with their labor/resources, and thus with their good will, for your benefit, the overall benefit.

This is much easier if you acknowledge that fact. If you blame them for having the costs, claim their situation does not even exist, or even ridicule them for not embracing change, you only prepare yourself for frustration. You might be able to persuade yourself that everyone who is not willing to invest in the change is just acting out of malevolent self-interest. But you will hardly be able to persuade people that it is evil not to help your cause if you treat them as enemies.

And once you have ignored or played down costs that later actually materialize, your credibility at being able to see the big picture will simply cease to exist for the next change.

Allow different metrics

People have different opinions about priorities, about what is important, about how much something costs and even about what constitutes a problem. If you want to persuade them, try to take that into account. If you do not understand why something is given as a reason, it might be because the point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are and what the problems are. If you want to persuade people, it is worth trying to understand those valuations.

If all you want to do is persuade some leader or some majority, then ridiculing their concerns might get you somewhere. But how do you want to win people over if you do not even appear to understand their problems? Why should people trust you that their costs will be worth the overall benefits if you tell them the costs they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?

Don't get trolled and don't troll

There will always be people who are unreasonable or even try to provoke you. Don't allow yourself to be provoked. Remember that for successful changes you need to win broad support. Feeling personally attacked, or feeling confronted with a large number of pointless arguments, easily results in not giving proper responses or not actually looking at the arguments. If someone is only trolling and purely malevolent, they will bait you best by bringing up actual concerns of real people in a way that makes you likely to degrade yourself and your point when answering. Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage.

When you are unable to persuade people, it is also far too easy to assume they are acting in bad faith and/or to attack them personally. This can only escalate things even more. In the worst case you frustrate someone acting in good faith. In most cases you poison the discussion so much that people actually acting in good faith will no longer contribute to the discussion. It might be rewarding short term, because after some escalation only obviously unreasonable people will talk against you, but it makes it much harder to find solutions together that could benefit everyone and almost impossible to persuade those who simply left the discussion.

Give security

Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but they go a long way to avoid risks. Thus an important part of enabling change is to reduce risks, both real and perceived, and to give people a feeling of security.

In the end, almost every measure boils down to that: You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them early into the process, by making sure their concerns are understood and part of the global profit/cost calculation and making sure their experiences with the change are part of the evaluation). You give people security by allowing them to predict and control things (by transparency about plans, how far the change will go and guaranteed transition periods, by giving them enough time so they can actually plan and do the transition). You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios, ...). You give people security by not leaving them alone (by having good documentation, availability of training, ...).

Especially side-by-side availability of old and new is an extremely powerful tool, as it fits all of the above: It allows people to actually test it (and not some little prototype mostly but not quite totally unrelated to reality) so their feedback can be heard. It makes things more predictable, as all the new ways can be tried before the old ones no longer work. It is the ultimate roll-back scenario (just switch off the new). And it allows learning the new before losing the old.

Of course giving people a feeling of security requires resources. But it is a very powerful way to get people to embrace the change.

Also, in my experience people who only fear for themselves will usually stay mostly passive, not pushing forward and trying to avoid or escape the changes. (After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.) Most people actively pushing back do it because they fear for something larger than just themselves. And any measure that makes them fear less that you are ruining the overall organization not only avoids unnecessary hurdles in rolling out the change, but also gives you some small chance of actually avoiding running into disaster with closed eyes.

Sun, 16 Nov 2014 16:51:38 +0100 | permanent link | Category: philosophical

Where key expiry dates are useful and where they are not.

Some recent blog posts (here and here) suggest short key expiry times.

They also highlight something many people forget: the expiry time of a key can be changed at any time with just a new self-signature. In particular, that can be done retroactively (you cannot avoid that if you allow changing it at all: nothing would stop an attacker from just changing the clock on one of their computers).

(By the way: did you know you can also reduce the validity time of a key? If you look at the resulting packets in your key, this is simply a revocation packet of the previous self-signature followed by a new self-signature with a shorter expiration date.)

In my eyes that fact has a very simple consequence: An expiry date on your gpg main key is almost totally worthless.

If you for example lose your private key and have no revocation certificate for it, then an expiry time will not help at all: once someone else gets the private key (for example by brute-forcing it once computers have become fast enough over the years, or because they could brute-force the pass-phrase of a backup of it they got hold of somehow), they can just extend the expiry date and make it look like the key is still valid. (And if they do not have the private key, there is nothing they can do anyway.)

There is one place where expiration dates make much more sense, though: subkeys.

As the expiration date of a subkey is part of the signature of that subkey with the main key, someone having access to only the subkey cannot change the date.

This also makes it feasible to use new subkeys over time, as you can let the previous subkey expire and use a new one. And only someone having the private main key (hopefully you) can extend its validity (or sign a new one).

(I generally suggest always having a signing subkey and never ever using the main key except off-line to sign subkeys or other keys. The fact that it can sign other keys makes the main key just too precious to operate on-line (even if it is on some smartcard, the reader cannot show you what you are actually signing).)
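With a reasonably recent GnuPG (2.1 or later) the subkey handling can be done with the quick commands; a sketch (the fingerprints are of course placeholders, and older versions only offer the interactive --edit-key route):

# add a signing subkey that is valid for one year
gpg --quick-add-key MAINKEY-FINGERPRINT rsa4096 sign 1y
# later, with the (off-line) main key available, extend the subkey's validity
gpg --quick-set-expire MAINKEY-FINGERPRINT 1y SUBKEY-FINGERPRINT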

Thu, 28 Aug 2014 20:57:39 +0200 | permanent link | Category: rants

beware of changed python Popen defaults

From the python subprocess documentation:

Changed in version 3.3.1: bufsize now defaults to -1 to enable buffering by default to match the behavior that most code expects. In versions prior to Python 3.2.4 and 3.3.1 it incorrectly defaulted to 0 which was unbuffered and allowed short reads. This was unintentional and did not match the behavior of Python 2 as most code expected.

So it seems it was unintentional that the previous documentation clearly documented the default to be 0 and that the implementation matched the documentation. And it was unintentional that this was the only sane value for any non-trivial handling of pipes (without running into deadlocks).

Yay for breaking programs that follow the documentation! Yay for changing such an important setting between 3.2.3 and 3.2.4 and introducing deadlocks into programs.

Mon, 02 Jun 2014 19:37:23 +0200 | permanent link | Category: rants

unstable busybox and builtins

In case you are using busybox-static like me to create custom initramfses, here is a little warning:

The current busybox-static in unstable lost its ability to find builtins without /proc/self/exe, so if you use it, make sure you either have explicit symlinks for all builtins you need before /proc is mounted (including mount itself) and after you have unmounted all file systems, or simply create a /proc/self/exe -> /bin/busybox symlink...
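The last variant is a one-liner in the directory the initramfs is built from (the directory name here is made up):

mkdir -p initramfs-root/proc/self
ln -s /bin/busybox initramfs-root/proc/self/exe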

Sun, 16 Feb 2014 15:43:31 +0100 | permanent link | Category: warning

slides for git-dpm talk at debconf13

Since I got the pacing a bit wrong at my git-dpm talk at debconf13, and since the slides I uploaded to penta do not seem to work from the html export, I've also uploaded the slides to http://git-dpm.alioth.debian.org/git-dpm-debconf13.pdf.

Thu, 15 August 2013 13:00:43 +0200 | permanent link | Category: talks

listing your git repositories on git.debian.org

With the new gitweb version available on alioth after the upgrade to wheezy (thanks to the alioth admins for their work on alioth), there is a new feature available I want to advertise a bit here: listing only a subtree of all repositories. Until now one could only either look at a specific repository or get the list of all repositories, and that full list is quite large and slow.

With the new feature you can link to all the repositories in your alioth project. For example in reprepro's case that is http://anonscm.debian.org/gitweb/?pf=mirrorer. What I missed even more is what is now possible with the link http://anonscm.debian.org/gitweb/?pf=debian-science: getting a list of all debian-science repositories (still slow enough, but much better than the full list).

Wed, 12 June 2013 13:00:00 +0200 | permanent link | Category: advertising

gnutls and valgrind

Memo to myself (as I tend to forget it): If you develop applications using gnutls, recompile gnutls with --disable-hardware-acceleration to be able to test them under valgrind without getting flooded with false positives.

Thu, 09 May 2013 13:14:43 +0200 | permanent link | Category: mumbling

Git package workflows

Given the recent discussions on planet.debian.org I use the opportunity to describe how you can handle upstream history in a git-dpm workflow.

One of the primary points of git-dpm is that you should be able to just check out the Debian branch, get the .orig.tar file(s) (for example using pristine-tar, by git-dpm prepare or by just downloading them) and then calling dpkg-buildpackage.

Thus the contents of the Debian branch need to be clean from dpkg-source's point of view, that is, they must not contain any files the .orig.tar file(s) do not contain, nor any modified files.

The easy way

The easiest way to get there is by importing the upstream tarball(s) as a git commit, which one will usually do with git-dpm import-new-upstream as that also does some of the bookkeeping.

This new git commit will have (by default) the previous upstream commit, plus any parent you give with -p, as its parents (i.e. with -p it will be a merge commit), and its content will be the contents of the tarball (with multiple orig files it gets more complicated).

The idea is of course that you give the upstream tag/commit belonging to this release tarball with -p so that it becomes part of your history and so git blame can find those commits.

Thus you get a commit with the exact orig contents (so pristine-tar can more easily create small deltas) and the history combined.
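On the command line that might look roughly like this (the remote name, tag and package name are made up; check the git-dpm manpage for the exact option spelling):

# fetch upstream's history and release tags, then import the matching tarball,
# recording the upstream release tag as an additional parent of the new upstream commit
git fetch upstream --tags
git-dpm import-new-upstream -p v1.2.3 ../foo_1.2.3.orig.tar.gz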


Sometimes there are files in the upstream tarball that you do not want to have in your Debian branch (because you remove them in debian/rules clean); when using this method you will have those files in the upstream branch but delete them in the Debian branch. (This is why git-dpm merge-patched (the operation that merges a new upstream + patches branch with your previous debian/ directory) will by default look at which files were deleted relative to the previous upstream branch and also delete them in the newly merged branch.)

The complicated way

There is also a way without importing the .orig.tar file(s), though it is a bit more complicated: the idea is that if your upstream's git repository contains all the files needed for building your Debian package (for example if you call autoreconf in your Debian package and clean all the generated files in the clean target, or if upstream has a less sophisticated release process and their .tar contains only stuff from the git repository), you can just use the upstream git commit as the base for your Debian branch.

Thus you can make upstream's commit/tag your upstream branch by recording it with git-dpm new-upstream together with the .orig.tar it belongs to. (Be careful: git-dpm does not check whether that branch contains any files differing from your .orig.tar, and it could not decide whether it is missing any files you need to build even if it tried.)
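Again only as a rough sketch (names made up, exact argument order per the git-dpm manpage):

# record upstream's release tag as the upstream branch belonging to this .orig.tar
git-dpm new-upstream ../foo_1.2.3.orig.tar.gz v1.2.3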

Once that is merged with the debian/ directory to create the Debian branch, you run dpkg-buildpackage, which will call dpkg-source, which compares your working directory with the contents of the .orig.tar with the patches applied. As it will then at most see files that are missing, but none that are modified or added (if everything was done correctly), one can work directly in the git checkout without needing to import the .orig.tar files at all (although the pristine-tar deltas might get a bit bigger).

Thu, 04 Apr 2013 20:39:37 +0200 | permanent link | Category: advertisements

Debian version strings

As I did not find a nice explanation of Debian version numbers to point people at, here is a somewhat random collection of information about them:

All our packages have a version. For the package managers to know which to replace with which, those versions need an ordering. As version orderings are like opinions (everyone has their own), no single one chosen for our tools to implement would match all of them. So maintainers of Debian packages sometimes have to translate those versions into something the Debian tools understand.

But first let's start with some basics:

A Debian version string is of the form: [Epoch:]Upstream-Version[-Debian-Revision]

To make this form unique, the Upstream-Version may not contain a colon if there is no epoch, and may not contain a minus sign if there is no Debian-Revision. The Epoch must be an integer (so no colons allowed). And the Debian-Revision may not contain a minus sign (so the Debian-Revision is everything right of the right-most minus sign, or empty if there is no such sign).

Two versions are compared by comparing all three parts. If the epochs differ, the biggest epoch wins. With same epochs, the biggest upstream version wins. With same epochs and same upstream versions, the biggest revision wins.

Comparing first the upstream version and then the revision is the only sensible thing to do, but it can have counter-intuitive effects if you try to compare versions with minus signs as Debian versions:

$ dpkg --compare-versions '1-2' '<<' '1-1-1' && echo true || echo false
true
$ dpkg --compare-versions '1-2-1' '<<' '1-1-1-1' && echo true || echo false
false

To compare two version parts (Upstream-Version or Debian-Revision), the string is split into alternating runs of digits and non-digits. Consecutive digits are treated as a number and compared numerically. Non-digit parts are compared just like ASCII strings, with the exception that letters are sorted before non-letters and the tilde is treated specially (see below).

So 3pl12 and 3pl3s are split into (3, 'pl', 12, '') and (3, 'pl', 3, 's'), and the first is the larger version.

Comparing digits as characters makes no sense, at least at the beginning of the string (otherwise version 10.1 would be smaller than 9.3). For digits later in the string there are two different version schemes competing: there is GNU style 0.9.0 followed by 0.10.0, and there are decimal fractions, where 0.11 < 0.9. A version comparison algorithm has to choose one, and the one chosen by dpkg is both the one supporting the GNU numbering and the one that makes it easier to support the other scheme:

Imagine one piece of software going 0.8 0.9 0.10 and one going 1.1 1.15 1.2. With our versioning scheme the first just works, while the second has to be translated into 1.1 1.15 1.20 to still be monotonic. The other way around, we would have to translate the first form to 0.08 0.09 0.10, or better 0.008 0.009 0.010, as we do not know how big those numbers will get, i.e. one would have to know beforehand where the numbers will end up, while adding zeros as needed for our scheme can be done knowing only the previous numbers.
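To illustrate with dpkg --compare-versions:

$ dpkg --compare-versions '1.15' '>>' '1.2' && echo true || echo false
true
$ dpkg --compare-versions '0.10.0' '>>' '0.9.0' && echo true || echo false
true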

Another decision to be taken is how to treat non-numbers. The way dpkg did this was to assume that adding stuff to the end increases the version. This has the advantage of not needing to special-case dots, as say 0.9.6.9 will naturally be bigger than 0.9.6. I think back then this decision was also easier, as usually anything attached made the version bigger, and one often saw versions like 3.3.bl.3 to denote some patches done atop of 3.3 in the 3rd revision.

But this scheme has the disadvantage that version schemes like 1.0rc1 1.0rc2 1.0 do not map naturally. The classic way to work around this is to translate that into 1.0rc1 1.0rc2 1.0.0, which works because the dot is a non-letter (it also works with 1.0-rc1 and 1.0+rc1, as the dot has a bigger ASCII value than minus or plus).

The new way is the specially treated tilde character. This character was added some years ago to sort before anything else, including an empty string. This means that 1.0~rc1 is less than 1.0:

dpkg --compare-versions '1.0~rc1-1' '<<' '1.0-1' && echo true || echo false

This scheme is especially useful if you want to create a package sorting before a package already there, as you for example want with backports (a user having a backport installed and upgrading to the next distribution should get the backport replaced with the actual package). That's why backports usually have versions like 1.0-2~bpo60+1. Here 1.0-2 is the version of the un-backported package; bpo60 is a note that this is a backport to Debian 6 (AKA squeeze); and the +1 is the number of the backport in case multiple tries are necessary. (Note the use of the plus sign, as the minus sign is not allowed in revisions and would make everything before it part of the upstream version.)

Now, when to use which technique?

Some common examples:
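For instance, a release candidate using the tilde, a backport version, and an upstream version that just appends another component, all checked with dpkg --compare-versions:

$ dpkg --compare-versions '1.0~rc1-1' '<<' '1.0-1' && echo true || echo false
true
$ dpkg --compare-versions '1.0-2~bpo60+1' '<<' '1.0-2' && echo true || echo false
true
$ dpkg --compare-versions '0.9.6' '<<' '0.9.6.9' && echo true || echo false
true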

Sun, 10 Feb 2013 15:07:26 +0100 | permanent link | Category: babble

some signature basics

While almost everyone has already worked with cryptographic signatures, they are usually only used as black boxes, without taking a closer look. This article intends to shed some light on what happens behind the scenes.

Let's take a look at a signature. In ascii-armoured form, or behind a clearsigned message, one often only sees something like this:

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQIcBAABAgAGBQJQ8qxQAAoJEH8RcgMj+wLc1QwP+gLQFEvNSwVonSwSCq/Dn2Zy
fHofviINC1z2d/voYea3YFENNqFE+Vw/KMEBw+l4kIdJ7rii1DqRegsWQ2ftpno4
BFhXo74vzkFkTVjo1s05Hmj+kGy+v9aofnX7CA9D/x4RRImzkYzqWKQPLrAEUxpa
xWIije/XlD/INuhmx71xdj954MHjDSCI+9yqfl64xK00+8NFUqEh5oYmOC24NjO1
qqyMXvUO1Thkt6pLKYUtDrnA2GurttK2maodWpNBUHfx9MIMGwOa66U7CbMHReY8
nkLa/1SMp0fHCjpzjvOs95LJv2nlS3xhgw+40LtxJBW6xI3JvMbrNYlVrMhC/p6U
AL+ZcJprcUlVi/LCVWuSYLvUdNQOhv/Z+ZYLDGNROmuciKnvqHb7n/Jai9D89HM7
NUXu4CLdpEEwpzclMG1qwHuywLpDLAgfAGp6+0OJS5hUYCAZiE0Gst0sEvg2OyL5
dq/ggUS6GDxI0qUJisBpR2Wct64r7fyvEoT2Asb8zQ+0gQvOvikBxPej2WhwWxqC
FBYLuz+ToVxdVBgCvIfMi/2JEE3x8MaGzqnBicxNPycTZqIXjiPAGkODkiQ6lMbK
bXnR+mPGInAAbelQKmfsNQQN5DZ5fLu+kQRd1HJ7zNyUmzutpjqJ7nynHr7OAeqa
ybdIb5QeGDP+CTyNbsPa
=kHtn
-----END PGP SIGNATURE-----

This is actually just a base64-encoded byte stream. It can be translated to and from the actual byte stream using gpg's --enarmor and --dearmor commands. (That can be quite useful if some tool only expects one BEGIN SIGNATURE/END SIGNATURE block but you want to include multiple signatures and cannot generate them with a single gpg invocation because the keys are stored too securely in different places.)
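For example (a sketch; the file names are made up):

# strip the ascii armour to get at the raw byte stream
gpg --dearmor < signature.asc > signature.sig
# and the other direction (note: --enarmor labels the result as a generic
# armored file, so the BEGIN/END lines may need adjusting for picky consumers)
gpg --enarmor < signature.sig > signature.asc.txt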

Reading byte streams manually is not much fun, so I wrote gpg2txt some years ago, which can give you some more information. The above signature looks like the following:

89 02 1C -- packet type 2 (signature) length 540
        04 00 -- version 4 sigclass 0
        01 -- pubkey 1 (RSA)
        02 -- digest 2 (SHA1)
        00 06 -- hashed data of 6 bytes
                05 02 -- subpacket type 2 (signature creation time) length 4
                        50 F2 AC 50 -- created 1358081104 (2013-01-13 12:45:04)
        00 0A -- unhashed data of 10 bytes
                09 10 -- subpacket type 16 (issuer key ID) length 8
                        7F 11 72 03 23 FB 02 DC -- issuer 7F11720323FB02DC
        D5 0C -- digeststart 213,12
        0F FA -- integer with 4090 bits
                02 D0 [....]

Now, what does this mean? First, all gpg data (signatures, keyrings, ...) is stored as a series of blocks (which makes it trivial to concatenate public keys, keyrings or signatures). Each block has a type and a length. A single signature is a single block. If you create multiple signatures at once (by giving multiple -u options to gpg), there are simply multiple blocks one after the other.

Then there is a version and a signature class. Version 4 is the current format; some really old stuff (or things wanting to be compatible with very old stuff) sometimes still has version 3. The signature class says what kind of signature it is. There are roughly two signature classes: a verbatim signature (like this one), or a signature of a clearsigned message. With a clearsigned message it is not the file itself that is hashed, but instead a normalized form that is supposed to be invariant under the usual modifications by mailers. (This is done so people can still read the text of a mail and the recipient can still verify it even if there were some slight distortions on the way.)

Then come the type of the key used and the digest algorithm used for creating this signature.

The digest algorithm (together with the signature class, see above) describes which hashing algorithm is used. (You never sign a message itself, you only sign a hashsum. Otherwise your signature would be as big as your message and it would take ages to create, as asymmetric keys are necessarily very slow.)

This example uses SHA1, which is no longer recommended: as SHA1 has shown some weaknesses, it may get broken in the not too distant future. And then it might be possible to take this signature and claim it is the signature of something else. (If your signatures are still using SHA1, you might want to edit your key preferences and/or set a digest algorithm to use in your ~/.gnupg/gpg.conf.)
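A sketch of such settings in ~/.gnupg/gpg.conf:

# prefer stronger digests when creating data signatures
personal-digest-preferences SHA512 SHA384 SHA256
# and when certifying/signing keys
cert-digest-algo SHA512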

Then there is some more information about this signature: the time it was generated and the key it was generated with.

Then, after the first 2 bytes of the message digest (I suppose those were added in cleartext to allow checking whether the message is OK before starting with the expensive cryptographic stuff, but they might not be checked anywhere at all), there is the actual signature.

Format-wise the signature itself is the most boring stuff. It's simply one big number for RSA or two smaller numbers for DSA.

One little detail is still missing: what is this "hashed data" and "unhashed data" about? If the signed digest were only a digest of the message text, then having a timestamp in the signature would not make much sense, as anyone could edit it without making the signature invalid. That's why the digest covers not only the signed message, but also parts of the information about the signature (those are the hashed parts), though not everything (not the unhashed parts).

Sun, 13 Jan 2013 15:23:52 +0100 | permanent link | Category: blablabla

Gulliver's Travels

After seeing some book descriptions recently on planet debian, let me add a short recommendation, too.

Almost everyone has heard about Gulliver's Travels already, though usually only very cursorily. For example: did you know the book describes 4 journeys and not only the travel to Lilliput?

Given how influential the book has been, that is even more surprising. Words like "endian" or "yahoo" originate from it.

My favorite is the third travel, though, especially the academy of Lagado, from which I want to share two gems:

" His lordship added, 'That he would not, by any further particulars, prevent the pleasure I should certainly take in viewing the grand academy, whither he was resolved I should go.'  He only desired me to observe a ruined building, upon the side of a mountain about three miles distant, of which he gave me this account: 'That he had a very convenient mill within half a mile of his house, turned by a current from a large river, and sufficient for his own family, as well as a great number of his tenants; that about seven years ago, a club of those projectors came to him with proposals to destroy this mill, and build another on the side of that mountain, on the long ridge whereof a long canal must be cut, for a repository of water, to be conveyed up by pipes and engines to supply the mill, because the wind and air upon a height agitated the water, and thereby made it fitter for motion, and because the water, descending down a declivity, would turn the mill with half the current of a river whose course is more upon a level.'  He said, 'that being then not very well with the court, and pressed by many of his friends, he complied with the proposal; and after employing a hundred men for two years, the work miscarried, the projectors went off, laying the blame entirely upon him, railing at him ever since, and putting others upon the same experiment, with equal assurance of success, as well as equal disappointment.' "

"I went into another room, where the walls and ceiling were all hung round with cobwebs, except a narrow passage for the artist to go in and out.  At my entrance, he called aloud to me, 'not to disturb his webs.'  He lamented 'the fatal mistake the world had been so long in, of using silkworms, while we had such plenty of domestic insects who infinitely excelled the former, because they understood how to weave, as well as spin.' And he proposed further, 'that by employing spiders, the charge of dyeing silks should be wholly saved;' whereof I was fully convinced, when he showed me a vast number of flies most beautifully coloured, wherewith he fed his spiders, assuring us 'that the webs would take a tincture from them; and as he had them of all hues, he hoped to fit everybody’s fancy, as soon as he could find proper food for the flies, of certain gums, oils, and other glutinous matter, to give a strength and consistence to the threads.'"

Thu, 29 Nov 2012 23:05:14 +0100 | permanent link | Category: books

Fun with physics: Quantum Leaps

A quantum leap is a leap between two states where there is no state in between. That makes it usually quite small, but also quite sudden (think of Lasers).

So a quantum leap is a jump not allowing any intermediate states, i.e. an "abrupt change, sudden increase" as Merriam-Webster defines it. This then gets extended to a "dramatic advance", and suddenly the meaning has shifted from something so small it could not be divided to something quite big.

But before you complain people use the new common meaning instead of the classic physicalistic meaning, ask yourself: Would you prefer if people kept talking about "disruptive" changes to announce they did something big?

Update: I'm using quantum jump in the sense as for example used in http://en.wikipedia.org/wiki/Atomic_electron_transition. If quantum jump is something different to you, my post might not make much sense.

Sat, 20 Oct 2012 12:00:53 +0200 | permanent link | Category: rants

Time flies like an arrow

It has now been 10 years that I have been a Debian Developer. In retrospect it feels like a very short time, I guess because not so much in Debian's big picture has changed. Except that I sometimes have the feeling that fewer people care about users and more people instead prefer solutions incapacitating users.

But perhaps I'm only getting old and grumpy, and striving for systems enabling the user to do what they want was only a stop-gap until there were also open source solutions for second-guessing what the user should have wanted.

Anyway, thanks to all of you in and around Debian that made the last ten years such a nice and rewarding experience and I'm looking forward to the next ten years.

Fri, 19 Oct 2012 23:59:59 +0200 | permanent link | Category: anniversary

ACPI power button for the rest of us

The acpi-support maintainer unfortunately decided on 2012-06-21 that having some script installed by a package to cleanly shut down the computer should not be possible without having consolekit and thus dbus installed.

So (assuming this package will migrate to wheezy which it most likely will tomorrow) with wheezy you will either have to write your own event script or install consolekit and dbus everywhere.

You need two files: one in /etc/acpi/events/, for example /etc/acpi/events/powerbtn:

event=button[ /]power
action=/etc/acpi/powerbtn.sh

This causes a power-button event to call a script /etc/acpi/powerbtn.sh, which you of course also need:

#!/bin/sh

/sbin/shutdown -h -P now "Power button pressed"

You can also name it differently, but /etc/acpi/powerbtn.sh has the advantage that the script from acpi-support-base (in case it was only removed and not purged) does not call shutdown itself if it is there.

(And do not forget to restart acpid, otherwise it does not know about your event script yet).
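On a typical wheezy system that is something like:

service acpid restart    # or: /etc/init.d/acpid restart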

For those too lazy, I've also prepared a package acpi-support-minimal, which only contains those scripts (and a postinst to restart acpid to bring them into effect on installation), which can be fetched via apt-get using

deb http://people.debian.org/~brlink/acpi-minimal wheezy-acpi-minimal main
deb-src http://people.debian.org/~brlink/acpi-minimal wheezy-acpi-minimal main

or directly from http://people.debian.org/~brlink/acpi-minimal/pool/main/a/acpi-support-minimal/.

Sadly the acpi-support maintainer sees no issue at all, and ftp-master doesn't like such tiny packages (which is understandable, but means the solution is more than an apt-get away).

Sat, 30 Jun 2012 12:04:39 +0200 | permanent link | Category: rants

The wonders of debian/rules build-arch

It has taken a decade to get there, but finally the buildds are able to call debian/rules build-arch.

Compare the unfinished old build

 Finished at 20120228-0753
 Build needed 22:25:00, 35528k disc space

with the new one on the same architecture finally only building what is needed

 Finished at 20120404-0615
 Build needed 00:11:28, 27604k disc space
Wed, 04 Apr 2012 11:07:55 +0200 | permanent link | Category: happiness

symbol files: With great power comes great responsibility

Symbol files are a nice little feature to reduce dependencies of packages.

Before there were symbol files, libraries in Debian just had shlibs files (both to be found in /var/lib/dpkg/info/). A shlibs file says for each library which packages to depend on when using this library. When a package is created, the build scripts will usually call dpkg-shlibdeps, which then looks at which libraries the programs in the package use and calculates the needed dependencies. This means the maintainers of most packages do not have to care what libraries to depend on, as it is automatically calculated. And as compiling and linking against a newer version of a library can cause the program to no longer work with an older library, it also means those dependencies are correct regardless of which version of a library is compiled against.

As shlibs files only have one dependency entry per soname, that also means they are quite strict: if there is any possible program that would not work with an older version of a library, then the shlibs file must pull in a dependency on the newer version, so everything needing that library ends up depending on the newer version.

As most libraries added new stuff most of the time, most library packages (except some notable extremely API stable packages like for example some X libs) just chose to automatically put the latest package version in the shlibs file.

This of course caused the generated dependencies to be quite strict: almost every package depended on the latest version of all libraries, including libc, so practically no package from unstable or testing could be used in stable.

To fix this problem, symbols files were introduced. A symbols file is a file (also finally installed in /var/lib/dpkg/info/ alongside the shlibs file) that gives a minimum version for each symbol found in the library.

The idea is that different programs use different parts of a library. Thus if new functionality is introduced, it would be nice to differentiate which functionality is used and give dependencies depending on that. As the only thing programmatically extractable from a binary file is the list of dynamic symbols used, this is the information used for that.

But this only means the maintainer of the library package has now not only one question to answer ("What is the minimal version of this library a program compiled against the current version will need?"), but many questions: "What is the minimal version of this library a program compiled against the current version and referencing this symbol name will need?".

Given the symbols file of the last version of a library package and the libraries in the new version of the package, there is one way to catch obvious mistakes: if a symbol is in the current library but was not in the old list, anything using it needs at least the current version of the library.

So if dpkg-gensymbols finds a missing symbol, it will add it with the current version.

While this will never create dependencies too strict, it sadly can have the opposite effect of producing dependencies that are not strict enough:

Consider for example some library exporting the following header file:

enum foo_action { foo_START, foo_STOP};
void do_foo_action(enum foo_action);

Which in the next version looks like that:

enum foo_action { foo_START, foo_STOP, foo_RESTART};
void do_foo_action(enum foo_action);

As the new enum value was added at the end, the numbers of the old constants did not change, so the API and ABI did not change incompatibly, and a program compiled against the old version still works with the new one. (That means upstream did their job properly.)

But the maintainer of the Debian package faces a challenge: no new symbol was added, so dpkg-gensymbols will not see that anything changed (as the symbols are the same). So if the maintainer forgets to manually increase the version required by the do_foo_action symbol, it will still be recorded in the symbols file as needing only the old version.

Thus dpkg will not complain if one tries to install the package containing the program together with the old version of the library. But if that program is called and calls do_foo_action with argument 2 (foo_RESTART), it will not behave properly.
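For illustration, the entry the maintainer has to bump by hand might look roughly like this in the symbols file (library and package names are made up; the second version is the one that introduced foo_RESTART):

libfoo.so.1 libfoo1 #MINVER#
 foo_unrelated_symbol@Base 1.0
 do_foo_action@Base 1.1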

To recap:

Wed, 28 Dec 2011 11:16:02 +0100 | permanent link | Category: warnings

checking buildd logs for common issues

Being tired of feeling embarrassed when noticing some warning in a build log only after having uploaded the package and looking at the buildd logs of the other architectures, I've decided to write some little infrastructure to scan buildd logs for common issues.

The result can be visited at https://buildd.debian.org/~brlink/.

Note that it currently only has one real check (looking for E: Package builds NAME_all.deb when binary-indep target is not called.) plus two little warnings (dh_clean -k and dh_installmanpages deprecation output), which lintian could catch just as well.

The large size of the many logs to scan is a surprisingly small problem. (As some tests indicated it would only take a couple of minutes for a full scan, I couldn't help running one, only to learn afterwards that the wb-team was doing the import of the new architectures at that time. Oops!)

More surprising to me, using small files to keep track of logs already scanned does not scale at all with the large number of source packages. File system overhead is gigantic and it makes the whole process needlessly IO bound. That problem was easily solved by using sqlite to track things done, though buildd.debian.org doesn't have that installed yet, so no automatic updates yet. [Update: already installed; there will be some semi-automatic days first, though, anyway.]

The next thing to do is writing more checks, where I hope for some help from you: What kind of diagnostics do you know from buildd logs that you would like to be more prominently visible (hopefully soon on packages.qa.debian.org, wishlist item already filed)?

A trivial target is everything that can be identified by a regular expression applied to every line of the buildd log. For such cases the most complicated part is writing a short description of what the message means. (So if you send me some suggestions, I'd be very happy to also get a short text suitable for that, together with the message to look for and ideally some example package having that message in its buildd log.)
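In other words, the kind of thing you could prototype with a plain grep over a log (the exact wording of those deprecation messages is from memory, so treat this as a sketch):

grep -n -E 'dh_clean -k is deprecated|dh_installmanpages is deprecated' build.log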

I'm also considering some more complicated tests. I'd really like to have something to test for packages being built multiple times due to Makefile errors and stuff like that.

Fri, 25 Nov 2011 22:19:56 +0100 | permanent link | Category: announcement

File ownership and permissions in Debian packages

As you will know, every file on a unixoid system has some meta-data like owner, group and permission bits. This is of course also true for files part of some Debian package. And it is not very surprising that different files should have different permissions. Or even different owners or groups.

Which file has which settings is of course for the package maintainer to decide and Debian would not be Debian if there were not ways for the user to give their own preferences and have them preserved. This post is thus about how those settings end up in the package and what is to be observed when doing it.

As you will also have heard, a .deb file is simply a tar archive stored as part of an ar archive, as you can verify by unpacking a package manually:

$ ar t reprepro_4.8.1-1_amd64.deb
debian-binary
control.tar.gz
data.tar.gz
$ ar p reprepro_4.8.1-1_amd64.deb data.tar.gz | gunzip | tar -tvvf -
drwxr-xr-x root/root         0 2011-10-10 12:05 ./
drwxr-xr-x root/root         0 2011-10-10 12:05 ./etc/
drwxr-xr-x root/root         0 2011-10-10 12:05 ./etc/bash_completion.d/
-rw-r--r-- root/root     19823 2011-10-10 12:05 ./etc/bash_completion.d/reprepro
drwxr-xr-x root/root         0 2011-10-10 12:05 ./usr/
drwxr-xr-x root/root         0 2011-10-10 12:05 ./usr/share/
--More--

(For unpacking stuff from scripts, you should of course use dpkg-deb --fsys-tarfile instead of ar | gunzip. The above example is about the format, not a recipe for unpacking files.)

This already explains how the information is usually encoded in the package: A tar file contains that information for each contained file and dpkg is simply using that information.

(As tar stores numeric owner and group information, that limits group and owner information to users and groups with fixed numbers, i.e. 0-99. Other cases will be covered later.)

The question for the maintainer is now: Where is the information which file has which owner/group/permissions in the .tar inside the .deb, and the answer is simple: It's taken from the files to be put into the .deb.

This means that package tools could at first be implemented by simply calling tar, and there is no imminent need to write your own tar generator. It also means that the maintainer has full control and does not have to learn new descriptive languages or tools to change permissions, but can simply put the usual shell commands into debian/rules.

There are some disadvantages, though: A normal user cannot change ownership of files and one has to make sure all files have proper permissions and owners.

This means that dpkg-deb -b (or the usual wrapper dh_builddeb) must be run in some context where you could change the file ownership to root first. This means you either need to be root, or at least need to fake being root by using fakeroot. (While this could be considered an ugly workaround, it also means upstream's make install is run believing it is root, which also avoids some -- for a packager -- quite annoying automatisms in upstream build scripts that assume a package is not being installed system-wide if not installed as root.)

Another problem are random build host characteristics changing how files are created in the directory later given to dpkg-deb -b. For example an umask which might make all files non-world-readable by default.

The usual workaround is to first fix up all those permissions. Most packages use dh_fixperms for this, which also sets executable bits according to some simple rules and has some more special cases so that the overall majority of packages does not need to look at permissions at all.

So using some debhelper setup, all special permissions and all owner/group information for owners and groups with fixed numbers only need to be set using the normal command line tools between dh_fixperms and dh_builddeb. Everything else happens automatically. Note that games is a group with a fixed gid, so it is not necessary (and usually a bug) to change group-ownership of files within the package to group games in maintainer scripts (postinst, ...).
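For example, assuming a game package shipping a group-writable high score file, this boils down to two ordinary shell commands run (under fakeroot) after dh_fixperms in debian/rules; package name and path are made up:

chown root:games debian/foo-game/usr/share/games/foo/highscores
chmod 0664 debian/foo-game/usr/share/games/foo/highscores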

If a user wants to change permissions or ownership of a file, dpkg allows this using the dpkg-statoverride command. This command essentially manages a list of files to get special treatment and ownership and permission information they should get.

This way a user can specify that files should have different permissions and this setting is applied if a new version of this file is installed by dpkg.

Being a user setting especially means, that packages (that means their maintainer scripts) should not usually use dpkg-statoverride.

There are two exceptions, though: Different permissions based on interaction with the user (e.g. asking question with debconf) and dynamically allocated users/groups with dynamic id.

In both cases one should note that settings in dpkg-statoverride are settings of the user, so the same care should be given to them as to files in /etc; in particular, one should never override something the user has set there. (I can think of no example where calling dpkg-statoverride --add without first checking dpkg-statoverride --list in some maintainer script is not a serious bug: either you override user settings or you are using debconf as a registry.)
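So one of those exceptional cases should guard the call in its postinst roughly like this (user, group, mode and path are made up):

# only add an override if the local admin has not already set one
if ! dpkg-statoverride --list /usr/lib/foo/foo-helper >/dev/null 2>&1 ; then
	dpkg-statoverride --update --add root foo-group 2755 /usr/lib/foo/foo-helper
fi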

Moral

To recap, your package is doing something wrong if:

Tue, 1 Nov 2011 16:12:33 +0100 | permanent link | Category: explanations

New website, multilingual blog

My new homepage is slowly taking shape. The look of the html version of this blog has therefore also been modernized a bit. (And unfortunately all the permalinks change.)

The rss generation has also become a bit more complicated: there are now several rss feeds, so that I can occasionally write something, or write in a language, that is not so well suited for Planet Debian.

Thu, 01 Sep 2011 11:42:10 +0200 | permanent link | Category: meta

About feature branches for patch handling and reverting to old states.

Now that the debconf videos are available (big THANKS to the video team), I was able to watch the talk about packages in git at Debconf11 and wanted to share some insights:
Mon, 01 Aug 2011 12:12:44 +0200 | permanent link | Category: answers

Paternalism and Freedom

As seen in some mailinglist discussion:

"It seems to be a common belief between some developers that users should have to read dozens of pages of documentation before attempting to do anything.

"I’m happy that not all of us share this elitist view of software. I thought we were building the Universal Operating System, not the Operating System for bearded gurus."

I think this is an interesting quote, as it shows an important misunderstanding of what Debian is for many people.

Debian (and Linux in general) was in its beginnings quite complicated and often not very easy to use. People still felt a deep relief at having it, and a strong love. Why?

Because it's not so much about how you can use it, but how you can fix it.

A system that only has a nice surface and hides everything below it, and that in the majority of cases does just what you most likely want, is nice. But if the only options are "On", "Off" and perhaps some "something is not working as it should, try to fix it" (aka "Repair"), it is essentially a form of paternalism: there is your superior that decided what is good for you, you would not understand it anyway, just swallow what you get.

Not very surprisingly, many people do not like to be in the position of the inferior of a computer (the less so the more stupid the computer is, but even modern computers are still stupid enough for most people).

So what those people want is not necessarily a system that can only be used after reading a dozen pages of documentation, but a system they know they can force to do what they want even if that might then mean reading some pages of documentation.

So good software in that sense might have some nice interface and some defaults that work most of the time. But more importantly it has good documentation, internals simple enough that one can grasp them, and it is transparent enough that one can understand why it is currently not working and what to do about it, and it allows enough user interference to fix it.

If all I am offered is some "interface for users too stupid to understand it anyway", and all options to fix it are checking and unchecking all the boxes and restarting a lot, or perhaps some gracious offer of "there is the source code, just read that spaghetti code, you can see there everything it does, though you might need to build a debug version just to see why it does not work", then I would not call any strong feelings against this situation "elitist".

Tue, 05 Apr 2011 08:40:53 +0200 | permanent link | Category: rants

C Code to avoid

One of the bad aspects of the C programming language is that it silently allows many bad C programs. Together with the widespread use of an architecture that is very bad at catching errors (i386), this sometimes leads to common idioms that only work accidentally. This is bad as they often break on other architectures and can break with every new optimization or new feature the compiler adds.

Take for example a look at the code there (I tried to leave a comment there but did not succeed):

If you see something like this:

     char buffer[1000];
     struct thingy *header;
     header = (struct thingy *)buffer;

then it is time to run. I hope you do not depend on this software, because it is a pure accident if this is doing anything at all.

While you can cast a char * to a struct pointer, that is only allowed if that memory actually was this struct (or a compatible one, like a struct with the same initial part where you are only accessing that part).

In this case it is obviously not (it's just an array of char), so you might see bus errors or random values if the compiler does no optimizations and you are on an architecture where alignment matters. Or the compiler might optimize it to whatever it wants, because the C compiler is allowed to do anything with code like that.

The next problem is the one that post was about: You are not allowed to access an array after its end. Something like

 struct thingy {
     int magic;
     char data[4];
 };

means you may only access the first 4 bytes of data. If you access more than that it may work now on your machine, but it can stop tomorrow with the next revision of the compiler or on another machine.

If you have a struct with a variable length data block, then you can use the new C99 feature of char data[] or the old gcc extension of char data[0]. Or you can use unions. (Or in some case use the behavior of structs with the same initial parts).

If you use C code with undefined semantics then every new compiler might break it with some optimization. There is often the tempting option of just using a slightly different code that currently works. But in the not too distant future the compiler (or even some processor) might again get some new optimizations and the code will break again. Fixing it properly might be harder but it's less likely it will fail to compile again and it also reduces the chances that it will not fail to compile but simply do something you did not expect.

Fri, 17 Dec 2010 16:33:11 +0100 | permanent link | Category: rants

git-dpm 0.3.0

I've just uploaded git-dpm 0.3.0-1 packages to experimental.

Apart from many bugfixes (I will also take a look at whether I can make a 0.2.1 version targeting squeeze, though the freeze requirements tend to get tighter and tighter, so I may already be too late), the biggest improvement is the newly added git-dpm dch command, which spawns dch and then extracts something to be used for the git commit message (I prefer to have more control over debian/changelog, so I prefer this direction over the other one).

Wed, 06 Oct 2010 17:39:05 +0200 | permanent link | Category: announcement

common inefficient shell code

There is hardly any use in:

cat filename | while ...
do
...
done

Just do:

while ...
do
...
done < filename

If you want the while run in a subshell, use some parentheses, but you do not need the cat at all.

Another unnecessarily inefficient idiom often seen is

foo="$(echo "$bar" | sed -e 's/|.*//')"

which can be replaced with the less forky

foo="${bar%%|*}"

Similarly there is

foo="${bar%|*}"

as short and fast variant of

foo="$(echo "$bar" | sed -e 's/|[^|]*$//')"

and the same with # instead of % for removing stuff from the beginning. (Note that both are POSIX, only the ${name/re/new} not discussed here is bash-specific).
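For completeness, the prefix-removing counterparts look like this:

bar="first|second|third"
foo="${bar#*|}"     # shortest match removed from the front: second|third
foo="${bar##*|}"    # longest match removed from the front: third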

Sun, 15 Aug 2010 11:28:14 +0200 | permanent link | Category: rants

git-dpm 0.2.0 with import-dsc

I've uploaded git-dpm version 0.2.0.

The most notable change, and the one which could need some testing, is the new git-dpm import-dsc, which will import a .dsc file and try to import the patches found into git.

Mon, 02 Aug 2010 17:31:39 +0200 | permanent link | Category: callforhelp

Ghostscript brain-dead...

Some little warning to everyone using ghostscript:

Ghostscript always looks for files in the current directory first, including a file that is always executed first (before any safe mode is activated).

So by running ghostscript in a directory you do not control, you might execute arbitrary stuff.

Two things make this worse:

You have been warned.

Sat, 29 May 2010 20:05:57 +0200 | permanent link | Category: warning

reprepro 4.1.0 and new Packages.diff generation

I've just released reprepro 4.1.0 and uploaded it to unstable.

The most noteworthy change and the one where I need your help is that the included rredtool program can now generate Packages.diff/Index files when used as export-hook by reprepro. (Until now you had to use the included tiffany.py script, which is a fork of the script the official Debian archives use. That script is still included in case you prefer the old method).

So instead of

DscIndices: Sources Release . .gz tiffany.py
DebIndices: Packages Release . .gz tiffany.py

you can now use

DscIndices: Sources Release . .gz /usr/bin/rredtool
DebIndices: Packages Release . .gz /usr/bin/rredtool

to get the new diff generator.

The new diff generator has an important difference to the old one: it merges patches so every client should only need to download and apply a single patch, and not multiple ones after each other, thus reducing the disadvantages of Packages.diff files a bit (and sometimes even reducing the amount of data to download considerably).

While reprepro and apt-get (due to carefully working around bugs/shortcomings of older versions of apt) seem to work, I don't know if there are other users of those files that could be surprised by that. If you know any I'd be glad if you could test them or tell me about them.

Fri, 19 Feb 2010 13:43:47 +0100 | permanent link | Category: callfortesters

git-dpm now as alioth project

Git-dpm can now be found at http://git-dpm.alioth.debian.org/ and the source at git://git.debian.org/git/git-dpm/git-dpm.git

Functionality should now be mostly complete, so testers are really needed now.

Sat, 09 Jan 2010 16:13:56 +0100permanent linkCategory: announcement

Alpha testers wanted

If you ever tried to determine what patches other distributions apply to some package you are interested in, you might have come to the same conclusion as I have: It is quite an impudence how those are presented.

If you don't give up, you end up with programs or scripts to extract many proprietary source package formats and more VCS systems installed than you thought should exist.

That's when you start to love the concept that every Debian mirror has next to each binary package the source in a format from which you can extract the changes easily with only tools you find on every unixoid system. And that's why I love the new (though in my eyes quite misnamed) "3.0 (quilt)" format, because that makes it even clearer and easier.

Sadly one problem remained: How to generate and store those patches?

While you can just use patches manually or use quilt to handle those patches and store the result in a vcs of your choice, the newfangled VCSes (especially git) became quite good at managing, moving and merging changes around, so it seems quite a waste to not be able to use this also to handle those patches easily.

While one can either use git to handle a patchset, by storing it as a chain of commits and using the interactive rebase, or use git to store the history of your package, doing both at the same time is tricky and not reasonably doable with the porcelain git provides.

Thus I wrote my own tool to facilitate git for both tasks at the same time. The idea is to have three branches: a branch storing the history of your package, a branch storing your patches in a way suitable to submit them upstream or to create a debian/patches/ directory from, and a branch with the upstream contents.
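
To illustrate the idea with plain git (branch and tag names are made up; this is only a sketch of the concept, not git-dpm's actual interface):

git branch upstream v1.2.3
git checkout -b patches upstream
git checkout -b packaging patches

The patches branch carries one commit per patch and can be reworked with git rebase -i upstream, while the packaging branch records the package's history and merges in the patched tree.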

I've an implementation which seems to already work, though I am sure there is still much to improve and many errors and pitfalls still to find.

Thus if you also like to experiment with handling patches of a debian package in git, take a look at the manpage or the program at git://git.debian.org/~brlink/git-dpm.git
(WARNING: as stated above: alpha quality; also places are temporary and are likely to change in the future).

Sun, 03 Jan 2010 11:00:00 +0100permanent linkCategory: callforhelp

I'll never understand why some people consider it acceptable to depend on udev

This is just a reminder for all of you that have packages that depend on the udev package: I hate you.

A Debian package depending on the udev package (with very few exceptions like for example the initramfs-tools package that actually uses udev) is so wrong.

Fri, 23 Oct 2009 20:43:02 +0200permanent linkCategory: rants

An argument for symbol versioning

A little example for why it is nice to have symbol versioning in libraries. Save the following as test.sh. Call without arguments: segfault; call with argument "half": segfault; call with argument "both": works.

#!/bin/sh
cat >s1.h <<EOF
extern void test(int *);
#define DO(x) test(x)
EOF
cat >libs1.c <<EOF
#include <stdio.h>
#include "s1.h"

void test(int *a) {
	printf("%d\n", *a);
}
EOF
cat >libs1.map <<EOF
S_1 {
 global:
  test;
};
EOF
cat >s2.h <<EOF
extern void test(int);
#define DO(x) test(*x)
EOF
cat >libs2.c <<EOF
#include <stdio.h>
#include "s2.h"

void test(int a) {
	printf("%d\n", a);
}
EOF
cat >libs2.map <<EOF
S_2 {
 global:
  test;
};
EOF
cat >a.h <<EOF
void a(void);
EOF
cat >liba.c <<EOF
#include "s.h"
#include "a.h"

void a(void) {
	int b = 4;
	DO(&b);
}
EOF
cat > test.c <<EOF
#include "a.h"
#include "s.h"

int main() {
	int b = 3;
	DO(&b);
	a();
	return 0;
}
EOF
rm -f liba.so libs.so* test s.h
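# libs.so.1 (the pointer-taking interface) only gets a version script when called with "both":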
if test $# -le 0 || test "x$1" != "xboth" ; then
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 libs1.c
else
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 -Wl,-version-script,libs1.map libs1.c
fi
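# libs.so.2 (the int-taking interface) gets a version script when called with "half" or "both":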
if test $# -le 0 ; then
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 libs2.c
else
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 -Wl,-version-script,libs2.map libs2.c
fi
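# build liba.so and test against the old library and header, then rebuild only liba.so against the new ones: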
ln -s libs.so.1 libs.so
ln -s s1.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
gcc -Wall -O2 test.c -L. -ls -la -o test
rm libs.so s.h
ln -s libs.so.2 libs.so
ln -s s2.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
LD_LIBRARY_PATH=. ./test
Sat, 03 Oct 2009 15:19:02 +0200permanent linkCategory: stuff-to-remember

Call for xtrace testers

I've just released xtrace pre-release 1.0.0~alpha1, to be found at http://alioth.debian.org/frs/?group_id=30990 and soon in experimental.
The biggest change is no longer having protocol specifications compiled in but read at run-time.
So it would be nice if you could test the new version if you have used one of the old ones. (Or if you have not used them but are interested in what some X11 program sends over the socket).
Tue, 28 Jul 2009 17:43:20 +0200permanent linkCategory: callforhelp

Multiple filesystems for the paranoid

Given the current discussion on planet.debian.org about having only one or multiple file-systems, I just wanted to add a plea for having multiple filesystems.

In my (perhaps a bit overly paranoid) eyes, having multiple filesystems is mainly a security measure. I prefer having enough partitions so that the following properties hold:

Admittedly, those arguments may not be as convincing for a laptop as for a server. But I personally like to have paranoia enacted everywhere. Uniformity makes life much easier sometimes.

Update: If having paranoid in the title was not enough of a hint that I do not claim a system loses a significant amount of security this way compared to more important measures, let it be told to you now. It's all about thinking about even the little things and taking measures where they do not otherwise harm. To get the warm fuzzy feeling I got when e.g. CVE-2006-3626 was found and my computers had nosuid for /proc already set. ;->
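
For illustration, the kind of per-mountpoint restrictions separate filesystems make possible (mount points and option choices are only examples, adapt them to your own level of paranoia):

mount -o remount,nosuid,nodev /tmp
mount -o remount,nosuid,nodev /home
mount -o remount,nosuid /proc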

Fri, 16 Jan 2009 11:04:19 +0100permanent linkCategory: paranoia

If it is chaotic and late, we all are at fault

I think we all in Debian agree that the current discussion and the votes are a cruel mess. But if anyone wants to blame anyone else for this, please consider some facts:

The outcome of the vote depends on what is to be voted on. If the vote is "Eat shit or die" a majority of people might choose the shit. That's why our constitution allows everyone to amend the vote, to offer more options, so that people can vote for what they actually want. This might get messy, especially if the process is chaotic. It can only work if people take the time and consideration to discuss the suggestions long enough to get to sane wordings. But if you rush it, it will get messy.

That there is so much haste currently is the fault of all of us, too. Of course if more people had worked on the firmware issues, we would not have this problem. But this is not the fault of one side. The other side could have worked on that too.

Some "But I have nothing against firmware in the kernel." is as little an excuse for not working on it as "I do not need kernels for hardware without free firmwares".

Because the outcomes of the last votes for sarge and etch made clear that just having the stuff in the kernel is no solution. Everyone that did not propose a GR to allow non-free firmware more than half a year ago and did not work on easing things for users needing non-free firmware has either to admit that it is also his fault by omission, as much as it is the fault of those not wanting the firmware in there and not having done more to get rid of it. Or you have to admit you willfully did nothing so you could now take the release hostage for your goals.

That said, I also want to speak against the "lenny without firmware will be totally unusable". I didn't look at the details. But when in the last half year I had some servers that needed some firmware that was not even in the kernel and on the installation media, I was extremely surprised how easy it was to put it there and how everything went correctly without much thinking; the installer just copied the needed files directly onto the installed system. The initrd generator must have included that somehow (for it was a firmware for the sata card, and it actually boots). And I think it might not even be needed on the installation media, but might also be inserted by some other means. (But putting it in the initrd of the netboot installer was just so easy that I tried nothing else).

Some post-scriptum: I personally would have deemed it cleaner if Peter Palfrader's proposal had not been made an amendment to the other vote. But if it had been handled otherwise, I definitely would have suggested an amendment to it (and perhaps some others, too). So do not think it would have made things much faster or easier to grasp.

Another post-scriptum: It's the job of our secretary to protect and interpret the constitution. The only thing I wonder about, looking at the current discussions, is why political partisans in some western countries have not yet got the idea of recalling judges whose job it is to protect the constitution. Perhaps because in that setting it would sound just too absurd...

Thu, 18 Dec 2008 21:26:32 +0100permanent linkCategory: plea-for-sanitity

Ever wondered about java windows staying empty in some WMs?

It's a longstanding bug that java programs show empty gray windows when being used in many window managers.

As there is OpenJDK now, I thought: It's free software now, so look at it and perhaps there is a way to fix it. As always, looking at java related stuff is a big mistake, but the code in question speaks volumes. The window configure code has:

        if (!isReparented() && isVisible() && runningWM != XWM.NO_WM
                &&  !XWM.isNonReparentingWM()
                && getDecorations() != winAttr.AWT_DECOR_NONE) {
            insLog.fine("- visible but not reparented, skipping");
            return;
        }

and if you wonder how it detects if there is a non-reparenting window manager, it does it by:

     static boolean isNonReparentingWM() {
        return (XWM.getWMID() == XWM.COMPIZ_WM || XWM.getWMID() == XWM.LG3D_WM);
     }

Yes, it really has a big list of 12 window managers built in for which it tests. And this is not the only place where it has special cases for some of them; it does so all the time in different places.

But what Sun did not think about: There are more than 12 window managers out there. And with this buggy code it would need a list of every single one not doing reparenting (like ratpoison and, if I read the bug reports correctly, also awesome, wmii and a whole list of quite popular ones, too).

Or it means that you are not supposed to run graphical java applications unless you use openlook, mwm (motif), dtwm (cde), enlightenment, kwm (kde), sawfish, icewm, metacity, compiz or lookinglass or no window manager at all.

As I had not realized that the old workaround of AWT_TOOLKIT=MToolkit no longer works in lenny until reading some debian-release mail, which means I haven't used any graphical java program for a long time, it seems I have decided for the latter.

P.S.: I've sent a patch so that one can at least manually tell java that one would like to see windows' contents, as b.d.o/508650
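
Unrelated to that patch, a workaround that was widely circulated for non-reparenting window managers (not mentioned in this post; it needs the wmname tool and simply makes java believe it runs under LookingGlass, which is on the hard-coded list above):

wmname LG3D

after which the java application has to be restarted.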

Sat, 13 Dec 2008 17:46:54 +0100permanent linkCategory: rants

Iceweasel 3

Trying to get prepared for lenny, the new iceweasel annoys me more and more.

Sat, 11 Oct 2008 18:49:32 +0200permanent linkCategory: call for help

Phony stamp files in debian/rules

As I wrote in blog item 29 there are many ways to break your debian/rules file. As I grew tired of seeing those and many more, I decided to write a lintian test for this.

Getting that finished will still need several days, as the general Makefile syntax is quite interesting in detail, and lintian is written in perl, thus so have to be the tests. It's quite interesting that the different cases of when variables are resolved and when not seem to quite firmly force a specific way to parse it. (And relearning perl when I so successfully unlearned all parts of that language in the past does not make it much faster).

Anyway, the reason I'm blogging is to give you already the results of one particular test a preliminary version gave when running over the lintian lab: It's checking for targets with -stamp in them that are phony, as that makes no sense. It will either cause configure or make to run multiple times via build, wasting buildd cycles or even making the build more unstable, or it just indicates needlessly complex Makefiles (having an install target that invokes an install-stamp target that does not actually produce a stamp file just makes the Makefile longer without doing anything at all but confusing readers).

You can find the preliminary results for that test at http://people.debian.org/~brlink/debian-rules-phony-stamp-file.log. I looked at some randomly chosen results and did not find a false positive. As that list was produced by the last runnable version which did not yet look at variables, I guess the list will only increase.

Tue, 23 Sep 2008 19:58:24 +0200permanent linkCategory: bugs

Some thoughts about recording differences

When recording changes in some software there are basically three approaches, with their different advantages and disadvantages.

So the format most suitable to Debian packages (stacked patches) is the total opposite of a format most suitable when you are upstream yourself and nothing is suitable for everything. There are many different thinkable ways to combine the different things to get more of the advantages, though many are a bit lacking (like storing quilt series in a VCS as text files), not yet possible or non-trivial with the current tools. Hopefully the future will improve that.

Wed, 21 May 2008 17:13:01 +0200permanent linkCategory: philosophy

patches

Looking at the current discussions, I'm wishing some people would calm down a bit. It's always impressive how some things switch sides like pendulums.

First of all, Debian already is centered on packaging software and not on developing it. We already have the rules and policies and methods in place. Our policy states:

If changes to the source code are made that are not specific to the needs of the Debian system, they should be sent to the upstream authors in whatever form they prefer so as to be included in the upstream version of the package.

And our source format shows how important marking the difference is to us: We have explicit .diff.gz files to contain them. The differences are not hidden in some Version Control System (like BSD) or in proprietary formats (ever tried to unpack a .srpm without rpm or without downloading some magic perl script?), but in a simple universally understandable format.

That said, please remember we are a distribution. Our priorities are our users and not the whims of software authors. We have to find the middle ground between harmful and necessary changes. Patching software to abide by the FHS, to allow the user to choose their editor or browser in a common way, or any other thing to form a coherent set of packages is no bug in Debian, it is a bug in upstream to not allow this at least via some configure option. We have neither the manpower nor the job to rewrite and fork stuff to a usable state, though. Thus we have to keep to upstream and hope they will include our modifications or forward-port them to every new release.

Thus, we need both: We need to patch (and in general, the worse the software is from a distribution's point of view or from a general point of view, the more of this we need. This does not necessarily hold in the other direction, though.). And we need and want to show what we change. So our users can find out how exactly the software we ship is different. And so other people dealing with the same software can profit from our changes. (After all, free software is about "giving back" a lot.). And of course maintainers change over time and we want the new one to understand what the previous one did.

That said, there is of course things that can be improved. But as with all improvements there are things that improve and there are things where nothing is worse than good intentions.

Adding more things to keep in sync is almost always a bad thing in the long run. The easiest way to keep things accessible is to use stuff actually used. I think any additional place to track patches will be futile. A good interface to view the .diff.gz files in our archive in some browser, on the other hand, could hardly fail to be useful.

The format of a single .diff.gz is of course also improvable. Things like a quilt-like standardized patch series look like a very good idea to me. But the format can of course also be made worse. Storing VCS formatted information, for example, tempting as it is, spoils two very important points:

And of course, a lot can be improved by just applying some rules more strictly. Not using a pristine tarball, or a proper subset of one where the first is not possible, should not be some merely ignorable non-conformist behavior, but should be seen as the serious problem it is.

Sun, 18 May 2008 15:40:26 +0200permanent linkCategory: rants

gpg2txt

Do you want to know what is really stored in your gpg keyring? Or do you want to store your keyring in a VCS? Or do you want to be able to delete signatures or other data from a keyring without having to use gpg's absurd interface?

If you do, then you might want to look into a little program I started for this purpose. It's still quite alpha, but for some uses it should already work. And if you test now and give some feedback, it might develop more in a direction you need. To give it a test:

cvs -d :pserver:anonymous@cvs.alioth.debian.org:/cvsroot/gpg2txt checkout gpg2txt
cd gpg2txt
./autogen.sh --configure
make
less README
man -l gpg2txt.1
./gpg2txt -o test ~/.gnupg/pubring.gpg
less test
./gpg2txt --recreate -o pubring.gpg test
Wed, 30 Apr 2008 09:06:29 +0200permanent linkCategory: announcement

Some basics about make

Till today I thought make was a very simple concept, but looking at other people's debian/rules files I start to lose that faith. So let's begin with some basics (as I guess many reading this are maintainers for Debian packages, and you might need some of this knowledge):

As you will already know, the most important part of a Makefile is a rule. Each rule is there to produce something and has prerequisites, i.e. things that have to be done before. So far so simple.

When you think about it this way, the first pitfall is already no pitfall anymore:

   build: patch build-indep
   build-indep: build-indep-stamp
   build-indep-stamp:
   	$(MAKE) doc
   	touch build-indep-stamp
   

There are two mistakes in this. First of all, patching is only done on ./debian/rules build, but not on ./debian/rules build-indep. And then patch is only called by the build target. But what you really want is that the source is patched before you do anything, so it is something to be done before build-indep-stamp. The pitfall with this error is that you will not see it most of the time. As make usually processes targets in the order it finds them, it usually runs patch first. Except when you have multiple processors and tell make to make use of them (and even then there is a chance it might work as the command run first is fast enough), or if someone trusting what he learned about make calls ./debian/rules build-indep-stamp build.

I guess a reason one sees this so often is the next pitfall. Most likely the following was tried previously:

   build-indep: build-indep-stamp
   build-indep-stamp: patch
   	$(MAKE) doc
   	touch build-indep-stamp
   patch: patch-stamp
   patch-stamp:
   	whatever
   	touch patch-stamp
   

The pitfall with this is some new concept. The problem is that the patch rule is phony. A rule is called phony when it does not produce the target it claims to produce. The classical example is a clean target: you do not want a clean target to create a file called clean, so that the next time it is called make says "already everything done".

Targets get phony by telling make so (via .PHONY:), by just not producing the file they claim to produce, or (surprisingly to many, it seems) by having a phony target as prerequisite.

In the above example the patch target gets implicitly phony, as it does not produce a file called patch. Thus after having built the source and calling binary to create packages, this will most likely in some way depend on build-indep-stamp. But when make then looks at build-indep-stamp to decide whether it is already done, even though it sees the file produced there by the touch command, it cannot determine whether that is up to date. It depends on patch and there is no file called that, thus make must assume it is not up to date, thus build-indep-stamp has effectively become phony too, in the sense of having to be remade every time it is depended on. (In case you have not noticed, the fix would have been to make build-indep-stamp depend on patch-stamp instead, or to go without patch-stamp and make patch the file to be generated.)
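
A minimal sketch of the first fix mentioned (same target names as in the fragment above; the recipe lines still have to start with a tab):

   build-indep: build-indep-stamp
   build-indep-stamp: patch-stamp
   	$(MAKE) doc
   	touch build-indep-stamp
   patch-stamp:
   	whatever
   	touch patch-stamp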

Thus, you can put as many -stamp files as you want in a row: as long as you have a single phony prerequisite among them, it is all void.

Fri, 25 Apr 2008 21:34:05 +0200permanent linkCategory: tricks

Why I am putting dpkg on hold on my unstable boxes right now...

A hijack of an important package always makes me uneasy. But reading the next two messages to the debian-dpkg list, which I can only read as "oh, I broke make dist. But why should I follow sane practises, all hail my workflow" and "oh, I break Changelog, that's on purpose because I consider it a bad idea, if you want it write some scripts to generate it", does nothing to make me believe such a person can be a good maintainer.

Sun, 09 Mar 2008 13:09:21 +0100permanent linkCategory: horror

IPv6 strikes again

If you ever wondered why exim4 takes so long to start when you have no net access, though you were sure that, configured as satellite for a smarthost, it should have nothing to look up as the smarthost is in /etc/hosts, you might just have forgotten to put a

     disable_ipv6 = true
   

in your exim4.conf. (I'm not sure, but on servers doing outgoing SMTP that might also help to actually deliver mail to hosts which also have ipv6 addresses when you forgot to blacklist the ipv6 module.)
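
For the module part, a hedged sketch of the classic blacklist approach (the file name is made up, and whether ipv6 is a module at all depends on your kernel configuration):

     echo 'blacklist ipv6' >> /etc/modprobe.d/local-blacklist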

Wed, 13 Feb 2008 11:57:42 +0100permanent linkCategory: tricks

censorship and related things

I don't know why some people always shout censorship when it's about what is acceptable behavior and what not, and about what things people want to be associated with and what not. (Must be some relative of Godwin's law (the original, not the "you lost" Usenet-variant)).

I personally care only little about what happens in irc-channels I'm not in. I don't know what happens in this special channel starting the discussion, and I don't believe any anecdotal examples can make a big difference. (Humans err, sometimes in tone, and even some hundred examples of wrong tone in some backyard alone are nothing I care much about, as long as it is nothing that would be a criminal offense if done in more public places).

What I absolutely dislike is any form of communication forum - especially those that could be associated with me - being declared as a place where foul behavior (to which I count sexism) is acceptable and, even worse, accepted as norm. (Why didn't anyone shout "censorship" at the "Love it or get the fuck out of here"? Perhaps there is some correlation between views of the world I don't understand).

By the way, there are places where I feel personally offended as a victim of sexism by a statement like "men are pigs". For example in a discussion about sexism. (I know, I know, it might not be sexism according to some definitions, but I see no reason to not use the word or not dislike it just because the forcing into gender roles is done by members of the same sex as opposed to members of the opposite sex.)

Thu, 17 Jan 2008 13:42:41 +0100permanent linkCategory: blogwars

inoffical vs misusing the name

I hope I am not alone, but a community stating "Of course we are sexists" (if this is an expression of more than an individual) is in my eyes nothing that should be allowed to have debian in its name, even if it is marked as unofficial (and especially if it says "Love it or get the fuck out of there.").

Can't people find a way that is neither pseudo-moral pressure of some "political correctness" nor childish increasing of self-esteem by showing everyone how "politically incorrect" you dare to be?

Tue, 15 Jan 2008 20:37:39 +0100permanent linkCategory: wtf

Pretty print library hierarchies

Playing around with awk and graphviz can lead to nice but usually totally useless graphs:

   #!/bin/sh
   if test $# != 1 ; then echo "Missing argument!" >&2 ;  exit 1 ; fi
   FILENAME="$(tempfile -s ".ps")"
   ldd "$1" | mawk 'BEGIN{print "graph deps {"}END{print "}"} function dump(name,binary) { system("objdump -x " binary " | grep NEEDED | sed -e \"s#.* # \\\"" name "\\\" -- \\\"#\" -e\"s/$/\\\"/\"")} BEGIN{dump("'"$1"'","'"$1"'")} /=> \// { dump($1,$3)}' | dot -Tps -o "$FILENAME"
   gv "$FILENAME"
   rm "$FILENAME"
   
Tue, 04 Dec 2007 17:55:16 +0100permanent linkCategory: toys

why is your apt pubring not a file or apt as user updated

I had written a little script to create a local config, so one can run everything (short of actually installing packages) as a normal user. (Which is quite useful to download all packages needed to update an offline system or to install something on it. Of course one needs that system's status file for that).

When updating that script for apt now checking signatures, I had to realize that the file with the keys to look for in Release.gpg files seems to be no file. At least its location is not stored in apt's Dir section, where it would be nicely adapted to changes of the other directory, but is stored as a simple value elsewhere, so it needs an additional overwriting.

Anyway. The updated script can be downloaded here, just in case it might be of interest to anyone else.

Fri, 07 Sep 2007 11:31:12 +0200permanent linkCategory: rant and tricks

Using slapd as thunderbird/icedove addressbook

It's been some time since I got this working, but I decided to also blog about it here now, as I was just asked about it.

The main magic to get thunderbird/icedove to use your ldap server as addressbook is to include the proper schema. Search the web for mozillaAbPersonObsolete and you should find it. You do not have to use any of its new fields, not even the object class in it is needed. Your slapd only has to know about the field names, then thunderbird will be able to show the normal inetorgperson's mail attribute.

Some caveats, though:

You might think you could test your settings in thunderbird by using that button to download everything and store it locally. In my experience that never works but strangely asks for a password, while the addressbook is already working nicely and needs no password at all.

Also don't be confused if no records are shown in the new addressbook. I guess that is some measure against always loading a possibly large remote addressbook. To test, just enter anything in the search field, and the matching records should show up nicely. (I'm not sure if all versions allow searching for substrings. If they do, try searching for the at sign to get a full list.)
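
To rule out problems on the server side, a quick check from the command line can also help (host name and base DN as in the example configuration below, assuming anonymous read access is allowed):

   ldapsearch -x -H ldap://HOSTNAME -b 'ou=People,dc=FOOBAR,dc=TLD' '(mail=*)' mail displayName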

The shown fields also seem a bit strange, and differ between the different mozilla messenger/thunderbird/icedove versions. In some versions the field the primary name is extracted from can be changed, but the directive to set that seems to change even more often.

Finally, some snippet for your /etc/icedove/global-config.js, which causes all newly created users to have such an addressbook as default. I forgot whether all of these are needed or why I added them, but those that are unnecessary at least do not seem to harm. (The last tested version is the one in etch, though. Newer versions might again have changed something).

   /* ldap-Server for FOOBAR */
   pref("ldap_2.autoComplete.useDirectory", true);
   pref("ldap_2.prefs_migrated", true);
   pref("ldap_2.servers.mathematik.attrmap.DisplayName", "displayName");
   pref("ldap_2.servers.default.attrmap.DisplayName", "displayName");
   pref("ldap_2.servers.mathematik.auth.savePassword", true);
   pref("ldap_2.servers.mathematik.description", "FOOBAR");
   pref("ldap_2.servers.mathematik.filename", "foobar.mab");
   pref("ldap_2.servers.mathematik.maxHits", 500);
   pref("ldap_2.servers.mathematik.uri", "ldap://HOSTNAME:389/ou=People,dc=FOOBAR,dc=TLD??sub?(mail=*)");
   
Sun, 19 Aug 2007 11:05:59 +0200permanent linkCategory: tricks

Using Xephyr

When debugging window managers or testing your X applications in other window managers, running them in a dedicated fake X server can be quite nice. While every reasonably complete window manager (even the old twm and vtwm can, and of course all of fvwm, qvwm, wmaker, ratpoison, ...) can replace itself with any other, running a window manager in a window of its own makes many things easier: single-stepping a window manager within a debugger is no fun when that debugger runs in an xterm on the same server. And if some testing needs a more complicated setting, switching may destroy that. And it is just more comfortable to have the editor handled by your favorite WM, while you need another WM to test some aspects of a program. (It's hard to see if initial sizes and layouts work well when your WM does not allow windows to choose their size. And if your WM does not have a bug another has, it's easier to test a workaround in the other than trying to port the bug ;-> )

So, here is some example invocation I use:

     #!/bin/sh
     Xephyr :2 -reset -terminate -screen 580x724 -nolisten tcp -xkbmap ../../../../../home/brl/.mystuff/dekeymap -auth ~/.Xauthority &
     export DISPLAY=:2
     icewm
     

Which options are useful depends on what you use it for:

-reset -terminate means to terminate when the last client exited. This is useful if you want it to go away fast. Not useful if you want to switch window managers without other clients running.

-screen 580x724 tells how big the window should be. This is just the size of one of my working frames, so it integrates well into my workspace. (It would be nice if Xephyr could change its resolution upon resize of the window, though I fear programs would either be confused when the size of their X server changes unadvertised, or by too many advertisements of it changing).

-nolisten tcp as there is no need to let the world speak to your X server

-xkbmap ../../../../../home/brl/.mystuff/dekeymap I gave up figuring out how to select a German keyboard, so it just gets 8 lines of fake description only specifying a German keyboard.

-auth ~/.Xauthority tells Xephyr to require authentication. Without this everyone is allowed to control your sub X-server and all programs within it. Don't forget to create a token before with xauth add, though.
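
For example (display number as in the script above, cookie generated with mcookie):

     xauth add :2 . "$(mcookie)"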

Sun, 12 Aug 2007 14:42:56 +0200permanent linkCategory: tricks

about the suggested "Debian maintainers"

As one of the most often made arguments for the current Debian General Resolution about the introduction of DMs seems to be that "finding a sponsor is hard", I want to shift discussion a bit in the other direction: How about more review instead of less?

Currently only sponsored people have the privilege of having a human look at their packages before upload. We normal DDs only have some automated tools other people wrote for us (lintian, linda, piuparts) and some self written ones (checking diffs, comparing to previous revisions and so on) and have to hope we spot all problems not yet detectable by machines ourselves. What do teams and people with comaintainers do? Any chance one of the others can look over the package you generated? Is there any chance to get something like that for the rest of us? (Ideally without drawing too much manpower from the sponsorees, though in my experience slowing that down might also help; there was more than one package I could not sponsor because someone already uploaded it before I was even able to write down half the list of obvious problems). Perhaps some ring to review each other's packages (of course best with some classification; there is often not much sense in having someone who does not like cdbs look into cdbs packages or vice versa).

Mon, 30 Jul 2007 10:44:19 +0200permanent linkCategory: votes

please add mime-types

Enrico blogged about translating the mime-types of file into debtags, stating "I'm not sure it's a good idea to encode mime types in debtags".

I just want to throw my two cents in here and state that I only once looked into debtags and gave up because it does not list mime-types but some obscure other specification.

Getting suggested programs that support formats which have the same type of content as the one I want to show, to convert or to create does not help me. I most of the time do not want to edit "a video" or "a spreadsheet", but I first of all have a very specific file I want to do something with, or a specific set of formats I want to create something in. If I have an AbiWord file, Openoffice.org will not help me, and with video or audio formats it is even worse.

So after turning away disappointedly from debtags, I had to do a full mirror scan for /usr/lib/mime/packages/ files and their contents. Having that data cached in debtags would be something that really makes debtags useful in my eyes. (And the more direct and verbatim the mime-type is encoded, the more useful it would be for me).

Sat, 20 Jan 2007 13:10:44 +0100permanent linkCategory: debtags

clean vs. crowded bug pages

Marc Brockschmidt wrote the BTS is too crowded and Joey Hess objected that a too clean BTS can also be a bad sign.

I think both are true, or to put it better: neither of the views makes sense without the other:

Bug reports are in my eyes one of the most valuable resources we have. No one can test everything even in almost trivial packages. To achieve quality we need the users' input, and a badly worded bug report is still better than no bug report at all. Our BTS is a very successful tool in that it lowers the barrier to report issues. No hassles to create (and wait for completion of) an account, no regrets about getting funny unparseable mails about some developer changing their e-mail addresses (did I already say I hate bugzilla?).

As those reports are valuable information, one should keep them as long as they can be useful. Looking at the description of the wontfix tag shows that even a request that cannot be or should not be fixed in the current context is considered valuable. Most programs and designs change, and it is good to have a place to document workarounds and to keep in memory what open problems exist.

On the other hand a crowded bug list is like a fridge you only put food into. Over time it will start to degrade into the most displeasing form of a compost heap. The same holds for bug reports:

Most bugs are easier when they are young: You most probably have the same version as the submitter somewhere, know what changed recently, and when you can reproduce it you get some hints on what is happening and can attack it. If you cannot reproduce it, the submitter might still be reachable for more information.

When the report is old, things get harder. Is the bug still present? Was it fixed in between by some upstream release? Is the submitter still reachable and does still remember what happened?

When I care enough about a problem to write a bug report and try to supply a patch for it, I try to always take a look at the bug list and look for some other low hanging fruits to pick and submit some other patch, too. (After all, most of the time is spent trying to understand the package and the strange build system upstream chose instead of plain old autotools, and not fixing the actual problem). But when it is hard to see the trees because of all the dead wood around, and there is nothing to find with some way to reproduce it, and one knows far too well that the most efficient step would be a tedious search for old versions to see if that was a bug solved upstream many years ago, good intentions tend to melt like ice thrown in lava.

So, when I wrote both are true I meant that keeping real-world issues documented and visible is a good thing. But having bugs rot (and often they do) will pervert all the advantages. In the worst case, people will even stop submitting new reports as it takes too long to look through all the old ones for a duplicate.

Sat, 06 Jan 2007 12:31:59 +0100permanent linkCategory: bts

again compiler arguments

I know I repeat myself, but given the current discussion, I simply feel the need to do so:

Please do not hide the arguments given to the compiler from me.

I cannot fix what I do not know is wrong. Maybe you can.

Keep the argument list tidy.

Many argument lists are longer than necessary. If there is some -I/<whatever> in the argument list on a Debian system, there is something fishy. (It's not the universal collection of different stuff all going wherever it wants, after all). Common cases are:

- buggy scripts to add -I/usr/include
- packages working around upstreams breaking compatibility
- plainly broken upstreams
- oversight

In short: if the line is too long, that is normally a bug causing more pain than only a long line. Do something against those bugs, please. There is no need at all for a properly made library to require -I for stuff that is installed. It's installed in the system, the default search path /usr/include should suffice. It often does not, but that is simply bad design of that library's interface. Do something against that, please! Also for stuff not installed, why do you need more than one -I? Are you embedding other libraries into your code? Why are they libraries if no one else uses them? If someone else uses them, why are they not made into proper library packages? If it is all internal stuff, why does it need so many include parts instead of just a single include/ dir? And if it only needs one include dir, why is it added a dozen times? Why do you need -D for anything but paths? Ever heard of AM_CONFIG_HEADER?

And yes, I know many modern libraries are written by people who never looked at anything but Windows when designing their headers. (Even some who seem to have never looked at unixoid systems even after using them for decades.) That is a problem, not something to be worked around with even more kludges. Kludges working around kludges are there to stay. So do not add them.

Fri, 03 Nov 2006 10:09:08 +0100permanent linkCategory: rants

"a speech for policy"

If you have to name a single thing that singles out Debian over all the other distributions in practical quality, then you cannot come up with anything but Debian having a policy that packages have to follow.

The little things make something feel raw or polished. Those things that look too unimportant by themselves have real importance in their magnitude.

As with all rules, rulesets can become too large and become an obstacle. This can be avoided by being conservative and minimal in those rules, which Debian has always practised to the extreme.

Limiting this further down to things people deem as "important" will only further reduce the overall quality. Instead of removing those few things that are in the policy, we should rather extend it so that everything in current policy not met is a bug (which can still be tagged wontfix or help), instead of reducing the rules found in policy or making more things non-binding.

Thu, 26 Oct 2006 10:54:12 +0200permanent linkCategory: rants

"current GR and release of etch"

I doubt the current vote can delay etch when accepted. There are many different GR suggestions out there to get additional exceptions for etch. And there is no doubt at all such exceptions will get accepted with a gigantic majority.

I see more danger of delaying etch when the GR is not accepted but voted down. Then people will have much less ground for which things to get exceptions for, and far less common ground on which to base all the GRs that are to come. And given the large amount of proposals on debian-vote, having many more GRs will not help to get etch out.

Also note that if the GR is not accepted, there are many people believing that the current rules still apply, and these rules are: source is needed for all bits in the Debian distribution, and many more things have to be ripped out than those mysterious 6 months if no additional exceptions are voted on.

Mon, 02 Oct 2006 20:35:49 +0200permanent linkCategory: rants

Trademarks

If everyone thought that accepting bogus obligations just to be allowed to call something by its name was no problem, take a look at Eric Dorland's blog or directly at the new problems.

My vote for this: Call it firesomething or mffbrowser or some other free name once and for all. With some luck somebody will then also write a nice patch to have a common Debian ca-certificate handling. (I'm sick of having to do anything twice, especially if it includes writing mozilla extensions adding a ca-certificate every time a user loads their config, as I'm too ignorant in all this stuff to know any better way). Having things as similar as possible in different environments is a nice goal, but having working solutions and the right to implement working solutions is much more important...

Wed, 20 Sep 2006 13:35:39 +0200permanent linkCategory: rants

Graphic Libraries

Wouter Verhelst asked why simple games are so slow nowadays.

I think the problems are in the libs. All this modern stuff tries to become more and more modern, and to get more and more out of all those new render extensions, direct graphics and hardware accelerations. There simply is no way to decide which way is faster, so libraries have to guess. So it is no surprise things go wrong. And the place they go wrong is of course not the fast computers, but the older stuff that does not have those nifty accelerations and no fast CPU to cancel it out.

Another disadvantage are all those "portable" libraries. SDL for example needs three connections to the X server before it does anything. Three times establishing a connection, checking of security cookies, and so on. Its API looks like living with windows, or like it was never intended to be used for anything other than full-screen mode. (You want to find out how large the window is? Why should you be able to when you said the window cannot be resized?)

QT likes to use extensions, too. I don't know if it is its fault or newer X servers, but the newer your installation gets, the slower 2D games using QT can become. (Note the can, if you have the right graphics card, lucky you, if you have the wrong one, bad luck). To be fair QT is not supposed or designed for 2D games. On the other hand I don't know what it is supposed to do other than being a C++ compiler benchmark measured in hours.

GTK was such a promising design. Object orientated (widget classes are one of those very few things where object orientation can be used with more advantages than disadvantages) but still plain C, small and looking like it was designed for X. To be fair, I do not even know how well it performs, as the ever increasing library cancer drives me away. From the "users should not be able to change their homedir, that would be far too much the Unix way" glib, over all this myriad of different little libraries, all moving all the time, spewing their headers in so many different directories that a compiler invocation folds three times around your terminal.

Well, enough ranting. My next graphical program will use Athena Widgets. I only have to hope all this reanimated X development lately will not pull xlib away from under our feet in the future...

Thu, 10 Aug 2006 13:47:43 +0200permanent linkCategory: rants

When things suddenly go very fast

or in other words:

     grep -q 'dn\.regexp' /etc/ldap/slapd.conf && cat <<EOF
     Ha ha, sucker! Ever asked yourself why your ldap database is so fsck'ing
     slow despite all the caches and indices you added?
     EOF
     
Wed, 17 May 2006 22:01:47 +0200permanent linkCategory: suprises

only DDs should be allowed to upload packages

Anthony Towns writes:

"Interestingly, the obvious way to solve the second and third problems is also to do away with sponsorship, but in a different sense - namely by letting the packager upload directly. Of course, that's unacceptable per se, since we rely on our uploaders to ensure the quality of their packages, so we need some way of differentiating people we can trust to know what they're doing, from people we can't trust or who don't yet know what they're doing."

I think the whole point of NM is to make sure we can trust people. This will be extremely different from sponsorship, as I hope no sponsor takes a package and just uploads it, but makes sure it is as correct as any of his packages, using all his/her experience.

Even some little game or package for special use can cause severe headache, as the maintainer scripts can delete stuff outside that package or open security holes. Things having that much power should only be in the hands of people we actually know and trust. Thus some DD should be responsible. And I doubt that there are enough DDs wanting to be responsible for something another person does when they give a blanket upload privilege for some package without any chance to look at what gets uploaded.

That said, I like the idea to make sure the Maintainer in the .changes file and the owner of the key that signed it are the same. (It's nicer to change it to get the mails yourself and bounce them to the person you are sponsoring, but I sometimes forget it). Does the field have any meaning yet other than who gets the mails from the queue daemons and dak?

Tue, 11 Apr 2006 18:32:58 +0200permanent linkCategory: rants

compiler arguments

Please do not hide the arguments given to the compiler from me.

It's hard to realize something is going wrong if you do not see what is happening. If the argument list is too long, do something against that instead of hiding it.

Make sure you follow policy when packaging software

Debian packages should be compiled with -Wall -g, but more and more are not. Please check that yours are, but check at the correct place. Do not look into the debian/rules file, but into the build log. If the Makefile sets a default with a single equals sign ("="), running 'CFLAGS="-Wall -g -O2" make' will not suffice. Try 'make CFLAGS="-Wall -g -O2"' instead. (Actually, there is no good reason to put them before the command. Always try to put things as arguments first, both with make and with ./configure.)
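
A quick way to see the difference (throwaway Makefile written just for this demonstration):

     printf 'CFLAGS = -O2\nall:\n\t@echo "CFLAGS are: $(CFLAGS)"\n' > Makefile
     CFLAGS="-Wall -g -O2" make
     make CFLAGS="-Wall -g -O2"

The first invocation prints only -O2, as the Makefile's own assignment wins against the environment; the second prints -Wall -g -O2, as a command line argument overrides the Makefile.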

It really makes everyone's life easier if those options are set.

Keep the argument list tidy.

Many argument lists are longer than necessary. If there is some -I/<whatever> in the argument list on a Debian system, there is something fishy. (It's not the universal collection of different stuff all going wherever it wants, after all). Common cases are:

- buggy scripts to add -I/usr/include

Better fix those scripts. Also make sure they do not cause other problems, like linking your program against libraries your program does not use directly. (Possibly causing funny segfaults when those libs link against other versions of those libraries)

- -I/usr/X11R6/include

For upstream packages this might perhaps be useful to support older operating systems and people unable to give it to CFLAGS themselves. But for FHS systems, this is not needed at all, as it mandates this handy /usr/include/X11 -> /usr/X11R6/include/X11 symlink. And newer X directly puts the headers in the correct place.

- packages working around upstreams breaking compatibility

Life would be too easy if upstream would not break APIs. But if they make a new incompatible version, and even change the library name for that, would it have been that difficult to also change what programs written/ported for that new incompatible API have to place in their #include line?

- plainly broken upstreams

putting stuff in ${PREFIX}/include/subdir/ and #include'ing other files from that subdirectory without the subdir deserves application of some large LART.

- oversight

often it is just not necessary, and everything gets much more readable and easier if it is left out.

Other things making things unreadable are large amounts of -Ds generated by ./configure. AM_CONFIG_HEADER can help here a lot with non-path stuff. Stuff containing paths is surprisingly often not used at all.

Sun, 26 Mar 2006 15:42:22 +0200permanent linkCategory: rants

Gnu FDL

My suggestion for the GFDL vote is 1342

( 1 ) Choice 1: "GFDL-licensed works are unsuitable for main in all cases"

of course that only means documents available only under the FDL, or only under the FDL and other non-free licenses. Documents also available under BSD, GPL or whatever are still free. That "in all cases" means without looking at the loudness of the proponents of some document.

( 3 ) Choice 2: "GFDL-licensed works without unmodifiable sections are free"

This does not mean "without unmodifiable sections", it means "without additional unmodifiable sections". The FDL always requires including the license within the work. (I still do not know how to include the license within a binary easily. But as the FDL is GPL-incompatible anyway, it perhaps makes such work-flows impossible anyway.)

( 4 ) Choice 3: "GFDL-licensed works are compatible with the DFSG [needs 3:1]"

That's even worse. We have non-free for non-free stuff some of our users might not be able to live without. (Or think so.) Foisting non-free stuff on them will severely hurt them in the long run.

( 2 ) Choice 4: Further discussion

Don't forget this option. If you do not like choice 2 (perhaps because you think like me that it is almost choice 3), rank 4 above it. Otherwise with equally many [3214] and [1234] votes, choice 2 would most likely win.

So only rank 2 above 4, if you want to see 2 in action. Otherwise vote 4 over 2. (same with 3 and 4, but 3 does not look so innocent as 2)

Mon, 27 Feb 2006 15:13:34 +0100permanent linkCategory: votes

Silver Plate

I just feel like quoting some passage from the Debian Developer's Reference:

A big part of your job as Debian maintainer will be to stay in contact with the upstream developers. Debian users will sometimes report bugs that are not specific to Debian to our bug tracking system. You have to forward these bug reports to the upstream developers so that they can be fixed in a future upstream release.

While it's not your job to fix non-Debian specific bugs, you may freely do so if you're able. When you make such fixes, be sure to pass them on to the upstream maintainers as well. Debian users and developers will sometimes submit patches to fix upstream bugs -- you should evaluate and forward these patches upstream.

(that's from 3.5 in case anyone wants to look up it there)

Mon, 16 Jan 2006 10:03:52 +0100permanent linkCategory: cooperating

Why not CVS?

To Wouter: No, I never used anything else than CVS for anything serious. Whenever I tried any of them for something (mostly because someone else used it for something I wanted to work on) they simply broke. I don't want to debug my tools or use funny workarounds but get some work done on what I use the tools for. Using anything not in a Debian stable release is hardly acceptable for me (remember, they are tools), but when even the testing or unstable versions are not enough for simple tasks, it's just too bleeding edge for me.

"only suggests you haven't seen many large projects in the heat of code change"

That's simply a matter of style. If a checkin means a full compile, manually reading the diff, a minimal check for correctness, writing Changelog entries and possibly adapting the documentation, there is simply no need to handle checkins with a sub-minute resolution.

"Far too often have I seen people afraid to reorganize their code because that would lose history on the files."

That's a major problem, but the problem is the fear. No rcs will ever be able to track history for even the most common possible reorganizations of code. Limiting yourself to what your rcs can cope with is the main problem, the abilities of your rcs are a minor one.

"How about the fact that upstream CVS development is rather extremely dead, [...]"

I prefer tools being able to do what I need over tools that will eventually be able to adapt to my needs. Active development means when I encounter a bug I either have to wait a year until it no longer bothers me, or wait a week and update the software on every computer I want to use it on, possibly locally in my user account if I do not administrate the computer or if the behavior changes so much that other usages are broken. Leading to problems to live within my disk quota and so on.

Don't understand me wrong, I'm not against SVN. I guess now (several years after everyone was already told to not use that old fashioned CVS, but not SVN version N but version N+1, because N was too broken; for several versions of N) it is quite useable. And things like atomic commits might even make it favorable over CVS for larger projects. But not every project can be within the top ten list of size; coding and commit styles differ. And I believe for many people, the ratio of advantages to disadvantages still points in another direction.

Fri, 16 Dec 2005 19:08:44 +0100permanent linkCategory: rcs

Why not CVS?

With this rcs debate currently on planet.debian.org I felt the need to add some thoughts.

My point is mainly: Why not simply stick to plain old CVS?

The pros are easily collected: it's installed everywhere, almost everyone knows at least the basic commands, and it is rock solid technology without all those little nasty bugs the newer ones have all the time.

Most of the contra arguments are not applicable to me, so how can they be to anybody else? ;-)

Like changelog messages: I write a Changelog after a patch, because I look at the patch for doing so. After all, that is what the Changelog is supposed to document, not what I thought I did. (And looking at this is always a good way to catch some obvious mistakes one did).

Making multiple patches of other people's projects: Two versions of the directories you are working on are all that is needed. Change the one, make diffs compared to the other. Revert the diff (patch -R or just answer often enough), change the diff to what it should be, reapply it to make sure it still works, test it, revert it again. To make another patch for the same original software, continue from the beginning, otherwise apply the patch to both copies. Just works. Easier than any darcs or co, even if those would not core dump, go into endless loops or play dead dog.

Even the non-exotic new systems still have plenty of features I never needed:

Something has to be really big before moving files around is needed at all. And if it is needed, just delete it here and add it there. That loses a bit of history, but that is still found in the older place's history. Moving whole files is only a special case of moving routines between files while refactoring; one sometimes just has to look somewhere else.

Even for svn's global revision numbers I have not yet found a use. Being used to cvsish tagging removes the need if one thinks ahead, and between commits there is normally at least a quarter of an hour, so date based indexing always works.

So, what are we talking about?

Tue, 13 Dec 2005 16:21:44 +0100permanent linkCategory: blogWars

Would you have seen the bug

... if not told it is in there:

     ssize_t wasread = read(fs,buffer,toread);
     if( read > 0 ) {
     
Fri, 11 Nov 2005 11:29:32 +0100permanent linkCategory: funnyThings

fontconfig considered harmful

I'm sometimes a bit behind on the "Make Linux as Unusable as Windows" front. So I only learned today about this 'fontconfig' thing which is a major victory in that respect.

The .fonts-cache1 files alone are very effective in that:

633k in /home for every single user on a quite normal sarge install, thus half a gig for all users.

font-data in /home? Yes, really. I did not believe it when I first saw it, either. Guess sharing your home-dir over an inhomogeneous network is nothing Windows can do, so it should no longer be supported....

Running fc-cache as root on any computer will make it stop doing so, but it is disturbing to see again some of the unixoid strengths thrown in the wastebasket.

Thu, 15 Sep 2005 17:24:29 +0200permanent linkCategory: rants

When will people learn?

... that the OS exception of the GPL does not help if you want things included in an operating system? (Here is the latest example that people still did not get it.)

... that library functions should not terminate the program when they run out of memory but return some sensible error?

... that the home directory of the current user is in getenv("HOME") and not (and never has been and never will be) in getpwuid(getuid())->pw_dir? Usage of the latter is a bug almost everywhere, and then some. (For example, do not use g_get_home_dir from libglib, as it will return the home directory only in some (though very common) cases.)

... that there are ways to design libraries and especially their headers in a way that one can compile applications without all those include paths and library paths.

Mon, 05 Sep 2005 11:57:39 +0200permanent linkCategory: rants

Downloading a package and all dependencies

To download a package and all packages it depends on (though only one possible combination, not necessarily the one installed on your system) use:

     mkdir partial
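     # apt expects a partial/ subdirectory below the directory given as Dir::Cache::archives, hence the mkdir above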
     apt-get -o"Dir::Cache::archives=`pwd`" -o"Debug::NoLocking=true" -o"Dir::State::status=/dev/null" -d install packagename
     
Wed, 24 Aug 2005 13:37:10 +0200permanent linkCategory: tricks

New Blog

After these new changelog-to-blog scripts were so heavily advertised, I thought that would be a good point to start a blog, too.

Though I felt like patching it a bit, so that the links in the generated html are a bit better readable and no eval or unquoted filenames are used in the script.

And while I am at it: a link to the rss file, making the link to the xhtml checker absolute, and hiding the e-mail address (dch adds some random address anyway, and the line was getting too long otherwise).

Mon, 22 Aug 2005 16:27:50 +0200permanent linkCategory: meta