Arch Linux is rock solid, although the last update broke VirtualBox

I installed Arch Linux UEFI on my main workstation on 04 November 2015 at ~03:10 hrs. I got my trusted lieutenant, Xfce, installed immediately. I used this recipe as a guide.

Now, Linux does not tell us the installation time the way Windows does with the systeminfo command. 

Windows systeminfo command output

How, then, do I know when I installed Arch Linux?

sudo tune2fs -l /dev/sda2 | grep 'Filesystem created:'

The result

Filesystem created:       Wed Nov  4 03:10:39 2015

tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems.
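If you only want the timestamp itself, without the label, the output can be trimmed with awk. A minimal sketch, assuming an ext4 root filesystem on /dev/sda2 (adjust the device to your own layout):

```shell
# Print just the creation timestamp of the filesystem on /dev/sda2.
# The -F': +' tells awk to split on a colon followed by spaces, so $2
# is everything after the "Filesystem created:" label.
sudo tune2fs -l /dev/sda2 | awk -F': +' '/Filesystem created:/ {print $2}'
```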

Prior to that, I had allowed openSUSE Tumbleweed to rule my workstation. I was very happy with it for quite a long time. However, stability tumbled away and I was left with an OS that froze every now and then. I believe it was all due to Plasma 5; I had the same experience with it on Arch Linux too, after I installed it because I was bored with Xfce.

I had not faced a single update issue in almost a year. Yesterday’s update, however, broke VirtualBox. A very minor thing. This breakage notwithstanding, I think this level of stability for a rolling release is just unbelievable. This is something corporations with an unending supply of money and resources aim to achieve.


I started VirtualBox after the update and tried to launch my Windows 7 virtual machine. The update had nuked VirtualBox, and it cried:

Kernel driver not installed (rc=-1908)

The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing

'/sbin/vboxconfig'

as root.

where: suplibOsInit what: 3 VERR_VM_DRIVER_NOT_INSTALLED (-1908) – The support driver is not installed. On linux, open returned ENOENT.

I did what the message asked me to do:

sudo /sbin/vboxconfig

This is what I got in return

sudo: /sbin/vboxconfig: command not found

I did the following in the /sbin directory:

ls | grep vbox

The result was empty: no vbox-related files at all.

Now, where is /sbin/vboxconfig?

I purged VirtualBox and installed it again. No change in results.

Since the message said that it was the loading of vboxdrv that had failed, I tried loading the driver myself, and it worked 🙂

sudo modprobe vboxdrv


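A manual modprobe does not survive a reboot. If the module ever refuses to load automatically, one workaround on a systemd distro like Arch is to ask systemd to load it at boot. A sketch, assuming the modules-load.d convention (the file name virtualbox.conf is my own choice; any *.conf name under that directory works):

```shell
# Load vboxdrv automatically at every boot via systemd-modules-load.
echo vboxdrv | sudo tee /etc/modules-load.d/virtualbox.conf
```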
dkms is supposed to do the dirty work of updating kernel modules for me, and it has been extremely reliable all these months. I never had any trouble. On Arch Linux, you need to install the VirtualBox dkms package (virtualbox-host-dkms). Let me query my packages and show you:

sudo pacman -Q | grep virtualbox

The result of the query is

virtualbox 5.1.6-1
virtualbox-guest-iso 5.1.6-1
virtualbox-host-dkms 5.1.6-1
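To see whether dkms has actually built the modules for the running kernel, dkms status lists every registered module and its state. A sketch (the exact module name dkms registers for VirtualBox may differ on your system):

```shell
# List the modules dkms manages and filter for the VirtualBox one.
dkms status | grep -i vbox
# If the module is missing for the running kernel, rebuild it:
sudo dkms autoinstall
```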


My Arch Linux experience has been extraordinary. I trust it so much that I have it installed on my main workstation. It may break some day down the line, but I will not hesitate to spend a couple of hours to reinstall it and get things back on track.

Software backward compatibility, undocumented APIs, and the importance of history

Software morphs. Market realities, changes in technology, new features, removal of things no longer needed, refactoring and so on are all valid reasons for initiating a change. However, this is a very costly affair where large software is concerned.


The software I am employed to work on is humongous: I am talking millions of SLOC. How huge is that? Two top Google sources [1, 2] reveal the word count (yes, the word count, not the line count) of some of the most acclaimed large novels:

311,596 – The Fountainhead – Ayn Rand
316,059 – Middlemarch – George Eliot
349,736 – Anna Karenina – Leo Tolstoy
364,153 – The Brothers Karamazov – Fyodor Dostoyevsky
365,712 – Lonesome Dove – Larry McMurtry
418,053 – Gone with the Wind – Margaret Mitchell
455,125 – The Lord of the Rings – J. R. R. Tolkien
561,996 – Atlas Shrugged – Ayn Rand
587,287 – War and Peace – Leo Tolstoy
591,554 – A Suitable Boy – Vikram Seth

Each line may contain, say, 10 words. That puts the line count of the above novels between approximately 30,000 and 60,000. I know comparing a novel with software is not exactly logical, but it is a funny comparison nevertheless.
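The arithmetic behind that estimate, sketched with awk (10 words per line is, of course, just an assumption):

```shell
# Divide the smallest and largest word counts in the list by an
# assumed 10 words per line to get rough line counts.
awk 'BEGIN { printf "%d\n%d\n", 311596 / 10, 591554 / 10 }'
# prints 31159 and 59155
```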

I have all these beasts (except the crossed-out one) at home. Reading them is an undertaking in itself. The software I talk about does not have the emotional or philosophical underpinnings that these novels contain. No meandering yet arresting plot. The software I talk about is pure logic. Yes, certain logical blocks may meander, leading to performance hits, but they are located in the source code and culled.

In large software, you cannot ignore history. No. Every line that makes no sense to you may have a historical reason for its existence. The reason may be valid or otherwise. It is the job of the person changing the line to make sure that he is not awakening an ancient beast. I have seen code written to work around a .NET bug, where the developer forgot to add a comment near the code block, perhaps because he added one to the change-set while checking in to RTC Jazz or whatever source control was in use. Or his comment may not make much sense because he had poor writing skills or was plain lazy. Or he may have forgotten to add a comment anywhere. There is not much one can do in such a situation other than:

  • Scan the history of the file(s) in the version control system.
  • Do your basic research and then ask the developer. If he has left the company, ask other experts. Doing research reduces the burden on others, and they become more willing to help.
  • If nobody seems to have any idea, perform the change in your private branch and make sure that nothing unintended occurs – a proper regression testing.
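For the first step, a hedged illustration with git (the post mentions RTC Jazz, but the idea is the same in any version control system; the file name and line range here are made up):

```shell
# Full change history of one file, following it across renames.
git log --follow --oneline -- src/legacy_module.c

# Who last touched lines 120-140 of the file, and in which commit.
git blame -L 120,140 src/legacy_module.c
```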


Microsoft Office, too, is an extremely large and complex piece of software. LibreOffice has a hard time keeping up with it. All moving targets are difficult to shoot at. Imagine shooting at a moving target blindfolded and you will start to appreciate how hard the folks at LibreOffice work. Joel Spolsky tries to explain:

A normal programmer would conclude that Office’s binary file formats:

  • are deliberately obfuscated
  • are the product of a demented Borg mind
  • were created by insanely bad programmers
  • and are impossible to read or create correctly.

The reasons he mentions for the complexity are:

  • They were designed to use libraries
  • They were not designed with interoperability in mind

    The assumption, and a fairly reasonable one at the time, was that the Word file format only had to be read and written by Word

  • They have to reflect all the complexity of the applications
  • They have to reflect the history of the applications

    A lot of the complexities in these file formats reflect features that are old, complicated, unloved, and rarely used. They’re still in the file format for backwards compatibility, and because it doesn’t cost anything for Microsoft to leave the code around. But if you really want to do a thorough and complete job of parsing and writing these file formats, you have to redo all that work that some intern did at Microsoft 15 years ago.

So, LibreOffice suffers because interoperability was not a concern for Microsoft. This happens in all fields, because you cannot design anything that covers all bases; I think we humans are not capable enough. Also, maintaining backward compatibility makes it difficult to shed historical baggage.

All this adds to the complexity. Then there are dark forces that cannot be ignored:

“Embrace, extend, and extinguish”,[1] also known as “Embrace, extend, and exterminate”,[2] is a phrase that the U.S. Department of Justice found[3] was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.


An application developer should not get acquainted with what are popularly known as undocumented APIs:

First of all: undocumented API is a wrong term. API means Application Programming Interface. But the function calls that usually get the title undocumented API are not intended to be used to program against. A better name would be undocumented program internal interface or short undocumented interface. That’s the term I will use here in this article.

These are APIs that are supposed to be used only by Microsoft and can be removed at any time. However, some of them are very powerful, and power corrupts.

Raymond Chen also describes how difficult it is to get rid of an undocumented API once it gets used in some popular software:

Suppose you’re the IT manager of some company. Your company uses Program X for its word processor and you find that Program X is incompatible with Windows XP for whatever reason. Would you upgrade?

Charles Petzold also explains why switching standards is so hard:

The computer industry has always been subject to upheavals in standards, with new standards replacing old standards and everyone hoping for backward compatibility.


Eric Lippert explains the prohibitive cost of fixing a bug or adding a new feature:

That initial five minutes of dev time translates into many person-weeks of work and enormous costs, all to save one person a few minutes of whipping up a one-off VB6 control that does what they want. Sorry, but that makes no business sense whatsoever.

IBM’s research shows us the following:
Figure 3: IBM System Science Institute Relative Cost of Fixing Defects 

When fixing a bug or adding a feature can be so costly on complex software systems, carrying historical baggage makes more sense than switching standards.


When trying to fix a bug, enhancing software or adding a new feature:

  • Do not forget history.
  • Do not forget the cost involved.
  • Do not forget that we all are humans and mistakes are unavoidable. Be humble and move on.