openSUSE 13.2 and Broadcom BCM43228 wireless: Driver woes in the Linux world

openSUSE 13.2 arrived at the beginning of November 2014, but I was too busy to try it then. Yesterday I finally found some free time and used it to install openSUSE on my workstation.

My workstation is a Dell Optiplex 9020 with an i7-4770 CPU, 32 GB of RAM and a Broadcom BCM43228-based Dell 1540 wireless module. I managed to avoid the Microsoft Windows tax because Dell was savvy enough to ship me a machine with Linux (Ubuntu) preinstalled.

A driver for the said wireless module was already available in Ubuntu, so everything worked like clockwork when the machine arrived. I wanted to remove Ubuntu, but I also wanted to avoid hunting down drivers on the World Wide Web. Disappointingly, openSUSE did not have the driver binary in its repositories. I tried Manjaro, as it was the only other distro that boasted precompiled binaries for the Broadcom BCM43228, and Manjaro did not disappoint me.

For some reason, I like openSUSE more than all the other distros. I really do not know why. So, I decided to do whatever it would take to get openSUSE working on my workstation.

I searched and found that Broadcom provides the source code for the following modules:

BCM4311-, BCM4312-, BCM4313-, BCM4321-, BCM4322-, BCM43224-, BCM43225-, BCM43227- and BCM43228-based hardware

Make sure that your device is in the above list. To identify your device, do the following:

sudo lspci | grep "Broad"
 04:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
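The -nn switch of lspci also prints the numeric PCI vendor:device ID, which is what driver support lists are keyed on. A small sketch of extracting it (the pci_id helper is mine, and the 14e4:4359 ID in the sample line is what a BCM43228 typically reports; verify against your own output):

```shell
# Pull the [vendor:device] PCI ID out of an `lspci -nn` line read from stdin.
pci_id() { sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'; }

# Example line in the shape `lspci -nn` prints; the ID is an assumption to
# check against your own machine:
echo '04:00.0 Network controller [0280]: Broadcom Corporation BCM43228 [14e4:4359]' | pci_id
```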

To build any kernel module, the following packages are necessary. I had installed them earlier to compile the vbox driver for Virtualbox:

sudo zypper in kernel-devel gcc make

I downloaded the 64-bit version of the source. The README.txt (available for download on the Broadcom website) is essential reading. Read it carefully.

I started the compilation and ran into an error:

<path to the extracted source>/src/wl/sys/wl_cfg80211_hybrid.c:2074:4: error: too few arguments to function ‘cfg80211_ibss_joined’ cfg80211_ibss_joined(ndev, (u8 *)&wl->bssid, GFP_KERNEL);

There is a solution to this problem. Look for the following file in the extracted Broadcom source code:

<path to the extracted source>/src/wl/sys/wl_cfg80211_hybrid.c

Now, look for the following line:

cfg80211_ibss_joined(ndev, (u8 *)&wl->bssid, GFP_KERNEL);

Replace the above line with the following one, which passes the channel argument that the newer kernel API expects:

cfg80211_ibss_joined(ndev, (u8 *)&wl->bssid, &wl->conf->channel, GFP_KERNEL);

Evidently, the kernel API has changed and the Broadcom code has not been updated to match it. Compile again.
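For the record, the one-line fix can also be applied without opening an editor. A sketch (the patch_wl helper name is mine; it assumes the standard src/wl/sys/ layout of the extracted tree):

```shell
# patch_wl <source-root>: replace the outdated cfg80211_ibss_joined() call
# in wl_cfg80211_hybrid.c with the variant the newer kernel API expects.
patch_wl() {
  sed -i \
    's|cfg80211_ibss_joined(ndev, (u8 \*)&wl->bssid, GFP_KERNEL);|cfg80211_ibss_joined(ndev, (u8 *)\&wl->bssid, \&wl->conf->channel, GFP_KERNEL);|' \
    "$1/src/wl/sys/wl_cfg80211_hybrid.c"
}
```

Run it as `patch_wl <path to the extracted source>` before invoking make.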

The build then completed without trouble, and I had a driver ready to deploy in no time.

I just hope Broadcom fixes this issue and releases the updated code.

Further Reading:

How to load the wl.ko driver

Text boot: Getting rid of bootsplash/Plymouth in Manjaro or Arch Linux

I like seeing my kernel spewing messages on my terminal when I boot my PC. I like the transparency, the openness. Having kernel messages visible can also help in understanding certain boot issues.

This is how to get a completely GUI free/bootsplash free/plymouth free, fully terminal or text based boot process on Manjaro/Arch Linux:

Open /etc/default/grub:

sudo nano /etc/default/grub

nano is the editor I chose; any other editor can be used, provided it is opened with root privileges.

The /etc/default/grub file has many lines, but only the GRUB_CMDLINE_LINUX_DEFAULT line is to be modified to attain our objective. On a stock install it looks something like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

The above line needs to be transformed to the following, with the quiet and splash parameters dropped:

GRUB_CMDLINE_LINUX_DEFAULT=""
We are not done yet. Do the following:

sudo grub-mkconfig -o /boot/grub/grub.cfg

Now reboot and rejoice!
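Both steps can be scripted if you do this often. A sketch (the strip_quiet_splash helper name is mine; try it on a backup copy of /etc/default/grub first, and run it as root on the real file):

```shell
# strip_quiet_splash <grub-defaults-file>: drop the "quiet" and "splash"
# kernel parameters from GRUB_CMDLINE_LINUX_DEFAULT so the boot is text-only.
strip_quiet_splash() {
  sed -i '/^GRUB_CMDLINE_LINUX_DEFAULT=/ { s/\bquiet\b *//g; s/\bsplash\b *//g; }' "$1"
}

# Afterwards, regenerate the config as usual:
# sudo grub-mkconfig -o /boot/grub/grub.cfg
```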

The terminal resolution might be a bit of a concern for those with an eye for details. If that is the case, find the following lines in /etc/default/grub:

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'

Change the value of GRUB_GFXMODE to a resolution of your liking, for example:

GRUB_GFXMODE=1024x768x32

Further Research:

Kernel parameters

The using keyword in C#: How to clean unnecessary using directives in Visual Studio


There are two ways of using the using keyword in C#:

  1. As a statement
  2. As a directive


MSDN says the using statement

Provides a convenient syntax that ensures the correct use of IDisposable objects.

This is also a way of saying that it is the standard way of releasing unmanaged resources. The following is an example of its usage:

using (StreamWriter grepLog = new StreamWriter(fileName, true))
{
    // ...log code here
}


This is the most common usage of the using keyword. The following example shows how the using directive is used to access types in a namespace without fully qualifying them, which saves a lot of time and typing:

using System;
using System.Text;

Another use of using is to create aliases. An excellent discussion of this topic can be found in this Stack Overflow thread.


There are many reasons why a .cs file may end up with more using directives than necessary. Visual Studio offers a very quick way to clean and organize them: right-click in the text editor and the context menu that pops up has Organize Usings as its second option.

This menu item has three sub-items:

  1. Remove Unused Usings
  2. Sort Usings
  3. Remove and Sort


Remove Unused Usings:

This option simply removes unused using directives.


Sort Usings:

This option only sorts the using directives alphabetically; it does nothing else. The sort cannot be done in reverse alphabetical order.


Remove and Sort Usings:

This option removes all unused using directives and sorts the remainder. The sorting is again alphabetical; no reverse alphabetical option is available.


This keeps the class file clean and trims a few unnecessary lines of code (if that sort of thing appeals to you).

Accessing USB devices from Virtualbox Windows guest on Linux host

This is one issue that had troubled me for long, but it was never important enough for me to put any effort into solving it. All this changed when I had to upgrade the BIOS on my Dell Optiplex 9020 machine. The problems are:

  • One cannot upgrade the BIOS from a Linux machine; a Windows installation, or at least a DOS-bootable CD or USB flash drive, is needed
  • I run Manjaro on this machine, and installing Windows just for the BIOS upgrade is just too much

So, I had to find a way to access USB devices from my Windows VM guest. I had never been able to do that, primarily because I had never tried. This time I had to. So here is what is needed:

  1. Go to the Virtualbox Downloads page
  2. If the Virtualbox installed on the system is not the latest, move to the Virtualbox Old Builds section
  3. Download the Extension Pack
  4. Hit the link and the Firefox Save File dialog pops up with Virtualbox selected as the app to open the file with. Just say OK
  5. Virtualbox will try to install the Extension Pack and pop up its EULA. Scroll to the bottom and the I Agree and I Disagree buttons will get enabled. You know what to choose
  6. If all goes well, Virtualbox reports that the Extension Pack was installed successfully
  7. It is time for some radical changes in the system. Bring to life a terminal emulator and feed it the following command to add the desired user to the vboxusers group:
     sudo usermod -a -G vboxusers <User_Name>
  8. Now move to the settings of the Virtualbox Windows guest and click the USB settings. Make sure that all the checkboxes are ticked
  9. Make sure that the device that you want to access in the Windows guest is NOT mounted on the Linux host. If it is, unmount it. Then, click on the button with the USB icon and the green plus (+) sign. All available USB devices on the machine will be listed; choose the one you want to add a filter for. Adding a filter makes the device available in the Windows guest and completely inaccessible on the host. Every time you bootstrap the Windows guest, the filter will make sure that the device is available in the guest
  10. Some driver installations will happen in the guest once the filter is added. A restart of the guest and host would be good for the society.
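A quick way to confirm that the group change from step 7 actually took effect (the check only reports; remember that group membership applies after logging out and back in):

```shell
# check_vboxusers: report whether the current user is already in the
# vboxusers group, and print the fix-up command if not.
check_vboxusers() {
  if id -nG | tr ' ' '\n' | grep -qx vboxusers; then
    echo "vboxusers: OK"
  else
    echo "vboxusers: missing -- run: sudo usermod -a -G vboxusers $(id -un)"
  fi
}
check_vboxusers
```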





Unified Extensible Firmware Interface (UEFI) and Logical Volume Manager (LVM) adventures: How upgrading to UEFI resulted in this epic post

This is a work-in-progress…

It all started when I found my backup plan lacking in every way imaginable. To be really honest, I never had one. This is not because I am digitally inept or lazy (this is debatable) or lack resources. I simply did not have much to back up. Yes, in this age where digital assets accumulate naturally and people hold terabytes of data without even realizing it, I had nothing worth a planned backup:

  • The important documents, that are very few in number, have been sitting cosily in my mails
  • I, somehow, relied only on text based learning materials which were available online

In the past few months, I have seen myself learning a lot using videos – Mathematics, Computer Science, Arts etcetera. This inevitably means a growing digital wealth. One fine day last week, I found that my 1 terabyte (TB) disk was more than 50% full. That is when I realized the folly of my ways.

I purchased a couple of Transcend 2 TB hard drives.

I immediately backed up all my digital belongings. With all my data safe, my brain started its creative endeavours. What was unthinkable became a real possibility. Suddenly it occurred to me:

  • That I was using Legacy Boot Options on my Dell Optiplex 9020 machine when UEFI was very much waiting for me
  • That the BIOS my machine was shipped with was version A01 while A07 was available on Dell’s website
  • That I was using the old partition scheme on my machine and had never experimented with Logical Volume Manager (LVM)
  • That my Manjaro installation had some troubles: GUI windows weren't refreshing properly and it was driving me mad (a minor inconvenience, but humans are just insatiable)

Some History

When I installed Manjaro in the first week of October 2013, there was no UEFI or LVM support in the distro; these were added only in December 2013. I had very little idea how UEFI and LVM actually work and could not be bothered to learn, because I was already burdened with a lot of stuff. With no knowledge to bank on, using advanced features would have been foolish, so I decided not to play with them.

Unified Extensible Firmware Interface Upgrade

This is quite easy. Just hit F2 after powering on the machine and the BIOS menu appears; change the boot option to UEFI, and make sure that the Secure Boot option is not touched at all. But I knew that my BIOS version was very old and had to be upgraded first. This is a problem when you are on a Linux distro. It has nothing to do with Linux itself; the issue exists because of the attitude of large corporations towards FOSS. This is what Dell's website instructs:

Installation instructions
BIOS Update Executable for Windows/DOS
1. Click Download File to download the file.
2. When the File Download window appears, click Save to save the file to your hard drive.

Run the BIOS update utility from Windows environment
1. Browse to the location where you downloaded the file and double-click the new file.
2. Windows System will auto restart and update BIOS while system startup screen.
3. After BIOS update finished, system will auto reboot to take effect.

Run the BIOS update utility from DOS environment if Legacy Boot Mode(Non-Windows users).
1. Copy the downloaded file to a bootable DOS USB key.
2. Power on the system, then Press F12 key and Select "USB Storage Device" and Boot to DOS prompt.
3. Run the file by typing copied file name where the executable is located. 
4. DOS System will auto restart and update BIOS while system startup screen. 
5. After BIOS update finished, system will auto reboot to take effect.

Run the BIOS update utility from DOS environment if UEFI boot mode with Load Legacy Option disabled (Non-Windows users)
1. Copy the downloaded file to a bootable DOS USB key.
2. Power on the system, then go to BIOS Setup by pressing F2 and go to "General-Boot Sequence - Boot List Option".
3. Change "UEFI" to "Legacy" of Boot List Option.
4. Click "Apply","Exit" to save changes and reboot system.
5. Press F12, then Select "USB Storage Device" and Boot to DOS prompt.
6. Run the file by typing copied file name where the executable is located. 
7. DOS system will auto restart and update BIOS while system startup screen.
8. After BIOS update finished, system will auto reboot.
9. Go to BIOS Setup by pressing F2 and go to "General > Boot Sequence > Boot List Option".
10. Change "Legacy" to "UEFI" Boot Option.
11. Go to "Exit > Exit Save Changes" and reboot system.
Note 1: You will need to provide a bootable DOS USB key. This executable file does not create the DOS system files.
Note 2: Please make sure you suspend BitLocker encryption before updating BIOS on a BitLocker enabled system. 
If you don't enable BitLocker on your system you can ignore it.

You need Windows/DOS to flash the new version of the BIOS. Linux will NOT work. I had some choices, though:

  • Install Windows temporarily on the machine and flash the BIOS. Then, reinstall Manjaro. What happens if another update arrives in the near future? Will I be willing to install Windows temporarily and suffer the consequential pain? NO.
  • Use Virtualbox + Windows guest + Rufus or UNetBootin utility in Windows guest to create a bootable FreeDOS. This seems painless and an ideal way to handle the scenario.

The second option might not be clear to many. What is FreeDOS, some may ask. For that let us first see what DOS is:

DOS /dɒs/, short for Disk Operating System, is an acronym for several closely related operating systems that dominated the IBM PC compatible market between 1981 and 1995, or until about 2000 including the partially DOS-based Microsoft Windows versions 95, 98, and Millennium Edition.

Related systems include MS-DOS, PC DOS, DR-DOS, FreeDOS, ROM-DOS, and PTS-DOS.

So, when Microsoft decided to kill DOS, many users were left in the lurch. FreeDOS was an attempt to help those users:

FreeDOS is a free DOS-compatible operating system that can be used to play games, run legacy software, or support embedded systems. FreeDOS is basically like the old MS-DOS, but better! For example, FreeDOS lets you access FAT32 file systems and use large disk support (LBA) — a feature not available in MS-DOS, and only included in Windows 95 and newer.

I knew I had to go the FreeDOS way and use Virtualbox to get a bootable FreeDOS USB. I decided to install the latest Manjaro with UEFI enabled and then upgrade the BIOS.

So, I downloaded the latest Manjaro XFCE edition with full support for UEFI and LVM. Then I did the following:

  • Used the dd command and created a bootable USB flash drive for Manjaro installation
  • Changed the Boot option from Legacy to UEFI in my BIOS settings (hit F2 after powering on the machine)
  • Started the Manjaro installation procedure
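For completeness, the dd invocation I use follows the pattern below. The sketch only prints the command instead of running it, because /dev/sdX is a placeholder that must be replaced with the real device (check with lsblk) before anything is written:

```shell
# Print (not run) the dd command for writing an ISO to a USB stick.
# The ISO name and /dev/sdX are placeholders; dd overwrites the target
# device completely, so double-check it before executing the output.
make_dd_cmd() {
  printf 'sudo dd if=%s of=%s bs=4M && sync\n' "$1" "$2"
}
make_dd_cmd manjaro-xfce.iso /dev/sdX
```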

Manjaro Graphical Installation With UEFI and LVM Features

Manjaro’s graphical installer is a very beautifully crafted piece of software. It resembles Ubuntu’s installer and is very practical and intuitive. The Installation type dialog offers LVM partitioning in the automatic mode:


while the LVM option is seemingly not available in the advanced mode:


So, I chose automatic mode and let the Manjaro installer choose the best partition scheme for me. There seems to be no way to modify the LVM partition scheme in the automatic mode; this is limiting and will hopefully be addressed.

The lsblk command spits out this:

sda                    8:0    0 931.5G  0 disk  
├─sda1                 8:1    0     2M  0 part  
├─sda2                 8:2    0   100M  0 part  /boot/efi
├─sda3                 8:3    0   256M  0 part  /boot
└─sda4                 8:4    0 931.2G  0 part  
  └─cryptManjaro     254:0    0 931.2G  0 crypt 
                     254:1    0  29.3G  0 lvm   /
                     254:2    0 886.2G  0 lvm   /home
                     254:3    0  15.7G  0 lvm   [SWAP]

Evidently, my 1 TB hard drive (sda) was automatically partitioned into four partitions (sda1, sda2, sda3, sda4). sda4 hosts an LVM containing the logical /, /home and swap volumes. I had chosen a separate partition for /home in the Installation type dialog’s automatic option. And yes, the LVM sits inside a LUKS-encrypted container:


Now, what exactly is LVM? This needs to be answered to make sense of what has been described in this section. A full explanation of the concept would be quite a task, so I resort to copying the important facts from the Ubuntu Wiki. It would be really beneficial for the reader to also read the following pages to understand LVM properly:

What is LVM?
LVM stands for Logical Volume Management. It is a system of managing logical volumes, or filesystems, that is much more advanced and flexible than the traditional method of partitioning a disk into one or more segments and formatting that partition with a filesystem.
Why use LVM?
For a long time I wondered why anyone would want to use LVM when you can use gparted to resize and move partitions just fine. The answer is that lvm can do these things better, and some nifty new things that you just can’t do otherwise. I will explain several tasks that lvm can do and why it does so better than other tools, then how to do them. First you should understand the basics of lvm.
The Basics
There are 3 concepts that LVM manages:
Volume Groups
Physical Volumes
Logical Volumes
A Volume Group is a named collection of physical and logical volumes. Typical systems only need one Volume Group to contain all of the physical and logical volumes on the system, and I like to name mine after the name of the machine. Physical Volumes correspond to disks; they are block devices that provide the space to store logical volumes. Logical volumes correspond to partitions: they hold a filesystem. Unlike partitions though, logical volumes get names rather than numbers, they can span across multiple disks, and do not have to be physically contiguous.


Copied from Markus Gattol’s blog

[The five points below have been copied from the Ubuntu Wiki without modifications]

The Specifics
One of the biggest advantages LVM has is that most operations can be done on the fly, while the system is running. Most operations that you can do with gparted require that the partitions you are trying to manipulate are not in use at the time, so you have to boot from the livecd to perform them. You also often run into the limits of the msdos partition table format with gparted, including only 4 primary partitions, and all logical partitions must be contained within one contiguous extended partition.
Resizing Partitions
With gparted you can expand and shrink partitions, but only if they are not in use. LVM can expand a partition while it is mounted, if the filesystem used on it also supports that ( like the usual ext3/4 ). When expanding a partition, gparted can only expand it into adjacent free space, but LVM can use free space anywhere in the Volume Group, even on another disk. When using gparted, this restriction often means that you must move other partitions around to make space to expand one, which is a very time consuming process that can result in massive data loss if it fails or is interrupted ( power loss ).
Moving Partitions
Moving partitions with gparted is usually only necessary in the first place because of the requirement that partitions be physically contiguous, so you probably won’t ever need to do this with LVM. If you do, unlike gparted, LVM can move a partition while it is in use, and will not corrupt your data if it is interrupted. In the event that your system crashes or loses power during the move, you can simply restart it after rebooting and it will finish normally. When I got my SSD drive, I simply plugged it in, booted it up, and asked lvm to move my running root filesystem to the new drive in the background while I continued working. Another reason you might want to move is to replace an old disk with a new, larger one. You can migrate the system to the new disk while using it, and then remove the old one later.
Many Partitions
If you like to test various Linux distributions, or just different version of Ubuntu, or both, you can quickly end up with quite a few partitions. With conventional msdos partitions, this becomes problematic due to its limitations. With LVM you can create as many Logical Volumes as you wish, and it is usually quite easy since you usually have plenty of free space left. Usually people allocate the entire drive to one partition when they first install, but since extending a partition is so easy with LVM, there is no reason to do this. It is better to allocate only what you think you will need, and leave the rest of the space free for future use. If you end up running out of the initial allocation, adding more space to that volume is just one command that completes immediately while the system is running normally.
Snapshots
This is something you simply can not do without LVM. It allows you freeze an existing Logical Volume in time, at any moment, even while the system is running. You can continue to use the original volume normally, but the snapshot volume appears to be an image of the original, frozen in time at the moment you created it. You can use this to get a consistent filesystem image to back up, without shutting down the system. You can also use it to save the state of the system, so that you can later return to that state if you mess things up. You can even mount the snapshot volume and make changes to it, without affecting the original.
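The snapshot workflow described above boils down to a couple of LVM commands. The sketch below only prints them, with hypothetical volume group and volume names:

```shell
# Print (not run) the commands to snapshot a logical volume and to drop
# the snapshot afterwards. Volume group and LV names are placeholders.
lv_snapshot_cmds() {
  vg="$1" lv="$2"
  printf 'sudo lvcreate --size 5G --snapshot --name %s_snap /dev/%s/%s\n' "$lv" "$vg" "$lv"
  printf 'sudo lvremove /dev/%s/%s_snap\n' "$vg" "$lv"
}
lv_snapshot_cmds MyVolGroup home
```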

Manjaro Post-Installation

The installation procedure of Manjaro is, like that of most other established distros, as Zohan would put it, “Silky Smooth”. Once I was done with it, I focused on getting my machine updated: I fired up pacman and, well, updated. Now I had Manjaro installed and updated, and it was time to prepare for a BIOS update. How was I going to do it? The answer took me on a Virtualbox adventure.

Once I came back from the Virtualbox adventure, I had a USB flash drive readily accessible from the Windows VM guest. I used Rufus in the Windows guest to create a bootable FreeDOS USB flash drive and copied onto it the BIOS executable that I had downloaded from the Dell website. I restarted my machine and hit F12 to get to the one-time boot menu, chose the USB flash drive for booting, and got into FreeDOS. This is where things got a bit confusing.

I had downloaded the following ISO from the official website. It lacks some features, which makes it inappropriate for a BIOS upgrade. Download this one if you need to test or learn, but not for a BIOS upgrade; for that, you need the full CD:

FreeDOS 1.1 Base




  • FreeDOS 1.1 Base CD
  • Contains packages from BASE, and several useful utilities
  • Includes source code for everything
  • Install only, does not include LiveCD
  • 40 MB

The menu options available after booting this ISO are:


So, I had to search for a full ISO image that could help me. Further net surfing helped me get my hands on one. There may be many other sources too! The only difference is that this ISO image is around 150 MB in size.

I created another FreeDOS bootable USB flash drive with this ISO and the menu options changed favourably. Choose the option highlighted in the image below:


Once the highlighted option is selected, the following appears:


With the command prompt at A:\>, the BIOS executable that I had copied was nowhere to be seen; the dir command confirmed it. I had to change the working directory to the C: drive. With that done, my file was available. Now I could type the name of the file and hit Enter to execute it. The executable takes control of the situation and flashes the new BIOS, version A07. Done!

Conic Sections and the Double-Napped Cone: Apollonius of Perga, René Descartes, History of Mathematics etc.

I always thought Co-Ordinate Geometry had something to do with the Conic Sections being defined on the double-napped cone. I believed René Descartes was the one who made that decision. I thought he did this to keep the definition symmetric about the origin of the [x, y, z] space. That is, I thought a double-napped cone was conceived so as to give one cone the positive z-coordinates and the other the negative z-coordinates, for reasons of symmetry. Still not clear? Keep reading…


Figure 1: The Double Cone or the Double-Napped Cone By DemonDeLuxe (Dominique Toussaint) (Own work) [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

I was going through some historical documents related to the Conic Sections and their development when I  came across a name that cleared everything for me:

Mr. Apollonius of Perga (c. 262 BC – c. 190 BC)

He walked on this earth many centuries before René Descartes did. He did his Mathematics long before the invention of Co-Ordinate Geometry, which enabled the algebraic treatment of Geometry; that was never a concern or a possibility in the times of Mr. Apollonius. He did not use Algebra. He used words and figures. Still, he defined and described the properties of Conic Sections in such a methodical and precise way that not much needed to be added ever after. He covered all scenarios. He clearly explained how Conic Sections could be defined with complete accuracy using an oblique cone too:


Figure 2: Regular Cone on the left and Oblique Cone on the right By DemonDeLuxe (Dominique Toussaint) (Own work) [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

I was surprised beyond limits when I came to know that Mr. Apollonius was the one who defined Conic Sections based on the double-napped cone. I thought hard and understood the need for such a definition. A deep look at the following figure will answer all questions:


The Conic Sections By Duk at English language Wikipedia

The co-ordinate system is not explicitly shown in the above diagram, so let me borrow it from the first diagram to keep things simple. Thus, the point where the apexes of the two cones meet will be treated as the origin.


When a plane cuts the cone the way it does in the second image of the above diagram, it forms an ellipse. Simple.


A special case of the ellipse, the circle, forms when the plane that cuts the cone is parallel to the X-Y plane according to the scheme chosen by me.


The third image in the diagram is a hyperbola. As can be seen, the plane cutting the cone can be at any angle except one equal to the slant angle of the cone. This has a very important implication: the plane cuts both cones of the double-napped cone. The third double-napped cone of Figure 3 shows two hyperbolas: in one, the cutting plane is at an angle to the axis of symmetry of the cone, while the other plane runs parallel to it. In all cases, the sections formed on the two cones will be symmetrical. Image 3 of Figure 3 may be misleading, but we need to keep in mind that we are dealing with infinite geometrical structures; we need to extend the image in our minds to see the symmetry.


I see this as a very special case of the hyperbola when dealing with single cones, but with a double-napped cone this relation is ignored. Here the plane cutting the cone is at an angle equal to the slant angle of the cone; thus, it runs parallel to the slant at a distance (if it ran on the slant, we would get a line). This also means that the plane can cut only one cone; the other cone sees the plane running parallel to it.

Why is a double-napped cone needed?

I detailed a lot of stuff but never explained why we need a double-napped cone, if it is not for the satisfaction of co-ordinate system symmetry. Well, a double-napped cone is not needed for the definition of any conic section apart from the hyperbola. The hyperbola is the only section that forms on both cones when we have a double-napped cone, but it can also be defined very well on a single cone. Also, co-ordinate geometry was not available to Apollonius and there was no pressure on him from René Descartes. It seems there is no pressing need for a double-napped cone and a single cone would have sufficed:


Conic Sections Defined on Single Cone by Magister_Mathematicae

This is what A History of Mathematics by Carl Boyer and Uta Merzbach has to say about the double-napped cone in Chapter 7, on Apollonius of Perga:

Thus, symmetry was obviously a consideration. Also, the concept of infinity was taken care of. The Greeks would never have wanted to face the wrath of the Gods by neglecting symmetry and infinity :)

Why do we need a cone for defining ellipse, circle, parabola and hyperbola?

Because we are dealing with conic sections and conic sections can only be defined on a cone. That is a dumb answer, I know.

Try Cylindrical Sections, instead. There are only two curves possible when a plane intersects a cylinder – a Circle and an Ellipse (ignoring the degenerate case of a plane parallel to the axis, which yields straight lines). Nothing else.

What about Spherical Sections? Oh no! Let us not go to that domain; let us reserve it for some later, advanced research. One thing that needs to be understood is that the cone and the cylinder have single curvature. The sphere is a different beast: it has double curvature. What this means is that a cone or a cylinder can easily be developed (unrolled onto a flat plane):

Development of various Solids from IIT Guwahati Learning Material

Development of Cone By Cdang (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

A sphere cannot be developed satisfactorily due to its double curvature. If a plane cuts a sphere, the only curve it can generate is a circle. The largest circle is formed when the plane cuts the sphere into two equal halves:

Spherical cap By Pbroks13 (Image:Spherical cap.gif) [Public domain], via Wikimedia Commons

I will go into the details of space curves (as opposed to plane curves that we have been dealing with in the whole article) cutting spheres and the shapes thus formed later. For now, I have to go deep into the conic sections and their practical applications, analytical geometry etcetera.

WordPress local installation on Arch Linux

Matthew Charles “Matt” Mullenweg (born January 11, 1984 in Houston, Texas) is an American online social media entrepreneur, web developer and musician living in San Francisco, California. He is best known for developing the free and open source web software WordPress, now managed by The WordPress Foundation.

I thank you and all those great souls who contribute to this wonderful system!

A Content Management System (CMS) is very necessary for anyone serious about knowledge and its management. There are many such systems available.

Choices are many and all have pros and cons. I stumbled upon WordPress and have been mesmerised by the power and beauty of the system. I have not tried the other systems due to the limited time available to me, but WordPress seems to be the easiest CMS. The epiphany that WordPress can be used locally for managing all my knowledge was just unbelievable. I created an intranet WordPress blogging system at my workplace and it blew everyone’s mind. Even mine! That was on a Windows machine, where the WAMP package makes installing the complete WAMP stack (Windows, Apache, MySQL, PHP) a breeze. I use Manjaro Linux as my primary OS at home. In the Linux world we have the LAMP stack (Linux, Apache, MySQL, PHP) but no single package equivalent to WAMP.

Why exactly do we need a WordPress blog on local system?

  • The full power of WordPress in offline mode
  • Can be used as a diary, notebook, journal, knowledge repository etc. and is strictly personal
  • One can easily create a blog and share it with the world. For data not intended for sharing, a local WordPress is great
  • Database driven and also maintains history – This is one of the most important features for students and researchers
  • Unlike Word documents or PDFs, links can be used to avoid a static linear mode of learning (which is boring)

It is certainly a big task, at least for someone who has no experience with the LAMP stack. Read the manual: there is no way around the Arch documentation. It is one of the best and needs to be respected; reading it will save a lot of trouble. Let me jot down what I did to get the system up and running. This is just a summary. It worked for me and should for most readers, but remember that there are too many variables and the great Arch documentation is the only place to get the complete information:

WordPress installation is pretty straightforward and takes only 5 minutes or so. It is the pre-requisites that will make buttocks sore.

The task can be broken down into the following smaller tasks:

  1. Install Apache server
  2. Install MySQL (On Arch and Arch based systems MariaDB has replaced MySQL. Why? Well, read this Wikipedia article.)
  3. Install PHP
  4. Install phpMyAdmin
  5. Install WordPress
  6. Configure Apache
  7. Configure MariaDB using the phpMyAdmin front-end (GUI)

Install pre-requisites

Install Apache

sudo pacman -S apache

Install MariaDB

sudo pacman -S mariadb

Install PHP

PHP (PHP: Hypertext Preprocessor) can be installed using the following command:

sudo pacman -S php php-apache

Install phpMyAdmin

phpMyAdmin is a web-based tool for managing MariaDB/MySQL databases through an Apache/PHP front-end. The command line can be used for management, but it does not hurt to have a GUI.

sudo pacman -S phpmyadmin

Done installing all the pre-requisites. The LAMP stack is ready to rumble.

Configure pre-requisites

Configuring Apache

Once installed, Apache is ready to serve. The installation creates a user ID http and a group ID http. These defaults should be good enough. Now use systemd to start the httpd service:

sudo systemctl start httpd

This should start the Apache server. To test, visit http://localhost/

The Listen option in /etc/httpd/conf/httpd.conf needs some attention. The current setup is for a local WordPress with no internet access intended at all. That means the server should listen to the local machine ONLY and serve no request from any other machine.

So, change

Listen 80

to

Listen 127.0.0.1:80

The home directory permissions need to be properly set so that Apache can access the extracted WordPress directory. The following should do the trick:

chmod o+x ~
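What `o+x` does can be rehearsed safely in a throwaway directory before touching the real home directory (the name wpdemo-home below is purely illustrative):

```shell
# The effect of "chmod o+x" rehearsed on a stand-in for the home directory:
mkdir -p wpdemo-home
chmod 750 wpdemo-home        # "other" (the http user) starts with no rights
chmod o+x wpdemo-home        # grant traversal only: enter the directory, but not list it
stat -c '%A' wpdemo-home     # drwxr-x--x
```

The execute bit alone lets Apache descend into the directory tree without being able to read the directory listing itself.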

Configuring MariaDB

Start the MariaDB service

sudo systemctl start mysqld.service

There is a setup script which, when run, will configure and secure MariaDB. Just type the following on the CLI:

sudo mysql_secure_installation

This setup script will ask a few questions. Answer them honestly :) At the end, MariaDB will be ready with a proper root user and password.

Configuring PHP

Apache needs to be configured to run PHP.

Enable PHP by adding the following lines to /etc/httpd/conf/httpd.conf:

  • Add the following in the LoadModule list anywhere after LoadModule dir_module modules/mod_dir.so
LoadModule php5_module modules/libphp5.so
  • Add the following at the end of the Include list
Include conf/extra/php5_module.conf

Restart the httpd service

sudo systemctl restart httpd

Create a file called test.php in the Apache DocumentRoot directory (in my case /srv/http):

<?php phpinfo(); ?>

To see if it works go to: http://localhost/test.php

Enable the mysqli and mcrypt extensions in PHP. Uncomment the following lines in /etc/php/php.ini:

extension=mysqli.so
extension=mcrypt.so
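Uncommenting just means removing the leading semicolon from the two extension lines. A sed sketch, rehearsed here on a stand-in file (sample-php.ini is hypothetical; on a real system, point sed at /etc/php/php.ini):

```shell
# Stand-in for the relevant lines of /etc/php/php.ini:
printf ';extension=mysqli.so\n;extension=mcrypt.so\n' > sample-php.ini
# Strip the leading semicolon from each extension line:
sed -i -e 's/^;extension=mysqli\.so/extension=mysqli.so/' \
       -e 's/^;extension=mcrypt\.so/extension=mcrypt.so/' sample-php.ini
grep '^extension=' sample-php.ini
```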
There is a possibility that the following error may show up

Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.
AH00013: Pre-configuration failed
httpd.service: control process exited, code=exited status=1

When this happens, open /etc/httpd/conf/httpd.conf and replace

LoadModule mpm_event_module modules/mod_mpm_event.so

with

LoadModule mpm_prefork_module modules/mod_mpm_prefork.so

Configure MariaDB for WordPress using phpMyAdmin

First, configure phpMyAdmin. Create /etc/httpd/conf/extra/httpd-phpmyadmin.conf and add the following content to it

Alias /phpmyadmin "/usr/share/webapps/phpMyAdmin"
<Directory "/usr/share/webapps/phpMyAdmin">
     DirectoryIndex index.html index.php
     AllowOverride All
     Options FollowSymlinks
     Require all granted
</Directory>

Include it in /etc/httpd/conf/httpd.conf

# phpMyAdmin configuration
Include conf/extra/httpd-phpmyadmin.conf

Restart the Apache service

sudo systemctl restart httpd
  1. Go to http://localhost/phpmyadmin/
  2. Login as root with the password created during the MariaDB setup
  3. Create a user
  4. Create a DB for WordPress
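For those who prefer the command line, steps 3 and 4 can also be done with plain SQL. A sketch, using the placeholder names that appear later in wp-config.php (wordpressDB, arandomuser, thePassword):

```shell
# Write the SQL to a file; on a real system feed it to MariaDB as shown below.
cat > create-wp-db.sql <<'EOF'
CREATE DATABASE wordpressDB;
CREATE USER 'arandomuser'@'localhost' IDENTIFIED BY 'thePassword';
GRANT ALL PRIVILEGES ON wordpressDB.* TO 'arandomuser'@'localhost';
FLUSH PRIVILEGES;
EOF
# Run it as the MariaDB root user (uncomment on a real system; asks for the
# root password set during mysql_secure_installation):
# mysql -u root -p < create-wp-db.sql
```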

Now that all the pre-requisites are installed and configured, it is time for the WordPress install.

Install WordPress

Do not install from the official Arch repositories. Why? Because,

Warning: While it is easier to let pacman manage updating your WordPress install, this is not necessary. WordPress has functionality built-in for managing updates, themes, and plugins. If you decide to install the official community package, you will not be able to install plugins and themes using the WordPress admin panel without a needlessly complex permissions setup, or logging into FTP as root. pacman does not delete the WordPress install directory when uninstalling it from your system regardless of whether or not you have added data to the directory manually or otherwise.

So, just get the zip from the website and unzip it.

Configure WordPress

Create /etc/httpd/conf/extra/httpd-wordpress.conf so that Apache can find WordPress and add the following content:

Alias /wordpress "/usr/share/webapps/wordpress"
<Directory "/usr/share/webapps/wordpress">
     AllowOverride All
     Options FollowSymlinks
     Require all granted
     php_admin_value open_basedir "/srv/:/tmp/:/usr/share/webapps/:/etc/webapps:$"
</Directory>

The alias /wordpress in the first line can be changed. For example, /myblog would require navigating to http://hostname/myblog to see the local WordPress website.
/usr/share/webapps/wordpress is the install location when WordPress is installed from the official repos, so change the paths to your WordPress install folder. Append the parent directory to the php_admin_value open_basedir list without fail:

Alias /myblog "/home/arandomuser/wordpress"
<Directory "/home/arandomuser/wordpress">
     AllowOverride All
     Options FollowSymlinks
     Require all granted
     php_admin_value open_basedir "/home/arandomuser/:/srv/:/tmp/:/usr/share/webapps/:/etc/webapps:$"
</Directory>

Now, open /etc/httpd/conf/httpd.conf and add the following line:

Include conf/extra/httpd-wordpress.conf

The extracted WordPress directory contains a file wp-config-sample.php, which needs to be renamed to wp-config.php. Then modify the following lines correctly:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpressDB');

/** MySQL database username */
define('DB_USER', 'arandomuser');

/** MySQL database password */
define('DB_PASSWORD', 'thePassword');

/** MySQL hostname */
define('DB_HOST', 'localhost');

Once all this is done, navigate to http://localhost/wordpress

WordPress should be available for personal knowledge management.

WordPress Issues

The default permalinks are not pleasing at all. They need to be changed. For me, a permalink should have the following structure:


So, I went to the Dashboard -> Settings -> Permalinks and looked at the options. The desired option was available. I chose the one I wanted and asked WordPress to save my permalink choice. It refused saying:

If your .htaccess file were writable, we could do this automatically, but it isn’t so these are the mod_rewrite rules you should have in your .htaccess file. Click in the field and press CTRL + a to select all.

It generated a text box that contained the following text which needs to be added to the .htaccess file:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /wordpress/
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /wordpress/index.php [L]
</IfModule>

This is where the beauty of the .htaccess file became quite apparent. The directory where WordPress was extracted, say /home/arandomuser/wordpress, should have a .htaccess file with the aforementioned text to make the new permalinks work. This is the same directory where the index.php file resides.
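Creating that file amounts to something like the following sketch (wpdemo stands in for the real WordPress directory, e.g. /home/arandomuser/wordpress):

```shell
# Rehearsal in a throwaway directory; on a real system, cd into the
# WordPress install directory instead of wpdemo.
mkdir -p wpdemo
cat > wpdemo/.htaccess <<'EOF'
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /wordpress/
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /wordpress/index.php [L]
</IfModule>
EOF
```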

I went to the directory and found no .htaccess there. I created one and pasted the required text. I tried using the new permalink to navigate to my posts. No success!

Further research pointed me to the /etc/httpd/conf/httpd.conf file where a module needs to be enabled. So, I uncommented the following line:

LoadModule rewrite_module modules/mod_rewrite.so

Everything works now!

I will soon be installing WordPress at the workplace on a dedicated server. It will be a Windows machine. It will surely be simpler than this, but I will jot down my experience.

Further research

How mapping custom domain to WordPress broke Zoho Mail

I started blogging on WordPress. A free blog gives the user a <blogName> address. Nothing wrong in that but to look more professional I decided to purchase a domain and map it to my WordPress blog. Everything worked beautifully or so I thought.

I am working on making my life Google-free:

  • I have stopped using Google Search. I use DuckDuckGo instead.
  • I have stopped using Gmail. I have a free account with Zoho Mail. I am trying Fastmail.
  • I have some documents in my Google Drive and am removing them
  • I have installed CyanogenMod on my Samsung GT-N7000, the Note 1. I have not installed Google Play or other Google apps (gapps).
  • I got rid of my Blogger account too

The last point is the reason for this post. I got rid of my Blogger account. This was a tough decision because I had started seeing a huge increase in the traffic.

I had to find another blogging platform. I found Ghost. I love it. It is a great platform and I had almost made up my mind to stick to it. But on deeper thought I found that if I were to grow, Ghost might not scale up as easily as WordPress would. The flexibility is not there: I cannot host my blog somewhere else if I wish to. Yes, there are ways to extract static pages from Ghost and host them for free on GitHub Pages, but that is just too much work for a guy who wants hassle-free, flexible publishing. The decision to stop using Ghost was a difficult one as I had started getting attached to it, but a full CMS like WordPress would do me good in the long run.

I had purchased a domain from GoDaddy and mapped the MX records to Zoho Mail. When I moved to WordPress, I decided to move my domain too. The move was straightforward. WordPress wanted to have full control over my DNS and I was fine with that.

I stopped receiving mails. Somehow, I could send mails but receiving was impossible. I was not aware of this at first because I do not normally receive many mails, but 10-12 days of mail drought was just too much to believe. I raised a question on Zoho’s forum a couple of days ago. A few minutes after I posted the question, I realized that there was a weird connection between my move to WordPress and the start of the mail drought. I went to my WordPress Dashboard and looked at my DNS records.

There were no MX records!

I added the appropriate MX records and mails started raining and inundating my inbox 🙂

I answered my own question on the Zoho forum. Today I received a mail from a Zoho support person stating facts that are quite obvious to me now :). Zoho is a good service with really meaningful support:

Generally when you transfer the domain from one provider to another, or when your domain gets expired and you restore it, the DNS records of your domain get reset to the default settings. Hence your website, email etc may not work.
You need to update the A, CNAME, MX records etc in the DNS for the websites, redirections, email to get the settings back to working.

A good short adventure.

SUSE Studio: Create a custom Linux distribution with an openSUSE core


openSUSE is one of the most robust and thus respected Linux distributions. It is a very polished and professional distro. There are many options available when it comes to downloading openSUSE, but the most exciting feature that is available to the user is the ability to create a custom distribution with an openSUSE core. Yes, it is true. The user can create a distribution with all the packages he needs and nothing more. I am talking about the venerable SUSE Studio. This is not a new feature but not many know about this. SUSE Studio provides the infrastructure for the user to create, test and distribute the custom distro.

I have been a diehard fan of openSUSE since the 11.x days. openSUSE has never let me down. openSUSE’s KDE implementation is one of the most seamless and beautiful works of art.

I like KDE but it is just too huge. I have many physical and virtual machines and I update all of them twice a month. With Desktop Environments (DEs) like KDE and GNOME, I would end up with huge downloads during updates. This is not a trivial issue in countries like India. I had to look for a better DE or a very capable Window Manager (WM). I tested Openbox, Xfce, LXDE and Enlightenment. I liked Xfce the most. Xfce is an excellent DE: it has a small disk and RAM footprint while providing a rich set of features, it is based on GTK+ 2, and it is very customizable too.

The only problem is that openSUSE has no official spin for Xfce. What are my options?

  • Get Xfce from the openSUSE repositories and have it installed alongside KDE
  • Get Xfce from the openSUSE repositories and remove KDE. This is pure pain in the rear. Never do this.
  • Use the Net install and choose the DE to be installed
  • Use SUSE Studio and create own distro

The last option is by far the best one for any student of Linux. Thus, I decided to create one. There are many other reasons, apart from the one I mentioned above, for creating a custom distro based on openSUSE:

  • to learn
  • for fun
  • to create an ownCloud server or any other type of server for that matter
  • to scratch any other itch 🙂

What follows is a detailed explanation of how to create a custom distro, using the great SUSE Studio and based on openSUSE core. I am logging whatever I did for getting an Xfce system for my personal use. The packages to be selected may be different for creating other types of distros but the general principles will remain unchanged. I am detailing as much as possible so that even enthusiastic newbies can go ahead and play with SUSE Studio. This was a great learning experience for me.



Sign in or create a new account. One can log in in many ways.


Once the login is taken care of one enters the beautiful greenish world of SUSE Studio. The adventure is about to begin. So, be ready for it 🙂


On the Home page, under the heading Actions, is the link to the real adventure. Click on the Create new appliance link and the following page opens up. The options available are very clearly explained.


As I had decided to have the latest openSUSE as the base, openSUSE 13.1 Just enough OS (JeOS) was the best choice. A light, minimalistic DE is what I was looking for, so there was no point in choosing the GNOME desktop or the KDE 4 desktop. One should avoid the Server option because it is more suited for building a server, but it can be used as a base for creating a desktop too! Anything is possible in the FOSS world.


The architecture should be chosen based on what hardware we intend to target. The really old PCs may not have 64-bit CPUs. This restricts one’s choice to 32-bit. I have a Pentium D dual core 64-bit CPU with a chipset that supports 4 GB DDR2 RAM only. Thus, there is not much that I gain from having a 64-bit OS on that machine. I have many other modern machines too. They are all 64-bit. Thus, I benefit from having just one 64-bit distro ISO that would be capable of getting installed on all my PCs. Such things need to be considered while choosing the architecture.


This is the most difficult thing to do. Wasting too much time here is not good for society.


The green enticing button needs to be pushed and a brand new base appliance is ready for further modifications.


Now that a base template appliance is ready for further tweaking, one should just move ahead without wasting any time. Let me get my custom distro ready before earth stops spinning. SUSE Studio took me to a page where one can edit the base template appliance. There are six tabs available on this page. I will move through all of them but let me first jot down a few words about each:

  • Start

The welcome tab where one can name/rename the appliance

  • Software

The tab where one can select packages from the openSUSE repositories. There is so much to choose from that it can be a bit overwhelming.

  • Configuration

As the name suggests, here is where one can configure the various aspects of the system

  • Files

Files added here will be copied into the appliance after packages are installed. Adding files is optional. Single files will be copied to the specified directory. Archives (.tar, .tar.gz, .tar.bz2, .tgz, or .zip) will be extracted into the directory specified. Permissions and hierarchy will be preserved. Using archives is a great way to add many files at one time.

  • Build

Well, this tab helps build the appliance.

  • Share

Let the world know.


Nothing much to do here apart from naming/renaming the appliance and viewing/reviewing the choices made till now. Only choices made on SUSE Studio can be reviewed here. Life’s choices can only be reviewed after death if the notion of God has any truth in it and if that God is really hell bent on doing a post-mortem of my life 😉



This is where most of my time was spent. There are just too many packages to choose from. SUSE Studio has made it easy for the user by categorizing the packages and arranging everything in an intuitive way. Still, a newcomer may be distracted. There is a logical way of breaking down the whole task of choosing the packages. The desktop distro needs the following components to work:

  • The X Server. The X Window System, or X11, is a client-server system: the X Server provides a mechanism for X Clients to create and manage GUI elements. A Desktop Environment like Xfce, KDE or GNOME has an X Client component known as the Window Manager that works in tandem with the X Server.

  • A Desktop Environment. I chose Xfce.
  • Package management/system administration tools. The command-line package management tool, zypper, is available by default. This may not be enough for all. Yast – Yet Another Setup Tool, the openSUSE system administration infrastructure can be added. Yast has many other system administration tools apart from package management.
  • Drivers
  • Network tools
  • Firefox Browser
  • VLC audio/video player
  • Office suite. Libre Office has very little competition here.
  • A Display Manager for graphical login. I chose LightDM. Without this component one would have to type in commands to login to the machine and launch the Desktop Environment.

This break-up made the task easier for me.

There are many ways of getting the packages. There is a search box that allows user to input search pattern strings. Also, there are various package groups available. When a package is selected, SUSE Studio does a good job of including all the required dependencies for the user. This makes the whole job very easy. Imagine if the user had to manage dependencies too. That would have been hell.


The X Server packages

Type X11 in the search box and hit Enter. SUSE Studio will list all packages matching the X11 pattern. The X11 X Window System package is the most important one. A lot of dependencies will be auto-selected. The good thing about the table generated during the search is that it can be sorted on various parameters: just click on a column name to sort that column. The Popularity column is the most interesting, because it helps in understanding which package is the most popular for a particular search pattern. This is a good indicator but not an absolute one:


Desktop Environment, Xfce packages

There are a lot of packages to be selected. The good thing is that the whole Xfce package set is very light; 70-100 MB is enough for a fully functioning DE. So, search for the xfce4 pattern and add all the packages, OR go through each package and choose what is really needed. The latter option may consume some time but helps in avoiding unnecessary packages. Once the Xfce packages are selected, make sure that the following packages are also selected:

  • gtk2-engine-murrine
  • gtk2-engines

How do we know what package to choose from the list?

It needs some research and some experimentation. The package names are really self-explanatory and that helps a lot. The SUSE Studio Testdrive option also comes to the user’s assistance. To know about Testdrive keep reading…

The Xfce DE is extremely configurable. Xfce is based on GTK2 and needs the above-mentioned engines for external GTK2 themes to work properly. These themes are easy to get from deviantArt, and installing them is easier still. That is the beauty of Xfce.

Yast, Yet Another Setup Tool

A one-stop shop for configuring everything on an openSUSE system. The following image shows yast2 in action. The complete openSUSE system can be managed from this application:


Now search for yast2. Again, one can add all the packages OR go carefully through the list and add what is needed.

I had a lot of trouble getting the GUI for yast2 working. Whatever I did, yast2 would only run using ncurses after throwing the following error:

qt gui wanted, but not found, falling back to ncurses

I struggled a lot and found that the libyui-qt5 package is necessary for yast2 to run with a GUI.


Drivers

This needs further breaking down:

Input drivers

The keyboard and mouse drivers:

  • xf86-input-keyboard
  • xf86-input-synaptics (This is needed for the laptop touchpad)
  • xf86-input-mouse

Video drivers

The display/monitor driver and graphics (ATI/AMD Radeon and NVIDIA etc.) driver:

  • xf86-video-intel
  • xf86-video-vesa
  • xf86-video-ati (opensource ATI/AMD Radeon)
  • xf86-video-nv (opensource NVIDIA)
  • xf86-video-fbdev

Network tools

The following packages should be enough:

  • NetworkManager
  • NetworkManager-gnome
  • wireless-tools
  • wpa_supplicant
  • wpa_supplicant-gui
  • yast2-network (this should already be in the selected list)

Firefox Browser

There are many choices when it comes to browsers. Firefox is the one I use the most. I have Midori too. Chromium is good too, but Google cannot be kept out of it. Search for firefox.

VLC audio/video player

There is not much competition for VLC on the desktop irrespective of the platform. Search for vlc.

Office suite

Libre Office is really good. Search for libreoffice.

Display Manager

There are just too many choices here. LightDM is light and beautiful. Search for lightdm.


This tab obviously helps the user to configure the distro being created 🙂


There are many options available here:

  • General

Choose the Default locale, Default time zone, Network, Firewall, and Users and groups

  • Personalize

Choose Appliance logo and Appliance background:


  • Startup

Choose the Default runlevel. A desktop distro is being created, so Graphical Login is the best option. The EULA option is mostly for corporate environments, but an ordinary mortal can include an EULA too:


  • Server

Skip this.

  • Desktop

Nothing really to write in detail here.

  • Appliance

The appliance can be created in various formats; one can have a VMware image, for example. Here one can configure how much RAM and disk space should be allocated to the VMware image:


  • Scripts

One can create scripts and:

Run script at the end of the build OR Run script whenever the appliance boots

For a normal user, nothing needs to be done here but this is really a cool feature:



This is where one can add custom files and archives to the appliance. Single files will go to the destination the user specifies, while archives get extracted to the desired location after the installation of the packages. In the case of VMware images, the appliance will have these files already in place as part of the build process:



Well, what else would one want to do after all that he has been through till now :)?

There are options here too:

  • Live CD/DVD (.iso)
  • USB stick / hard disk image
  • Preload ISO (.iso)
  • VMware / VirtualBox (.vmdk)
  • OVF virtual machine
  • Xen guest
  • Hyper-V (.vhd)
  • SUSE Cloud / OpenStack / KVM

These options are fully explained in the Selecting Appliance Formats article. Reading that is really essential for an easy life in the SUSE Studio world.

One can select a default format to build while also selecting multiple additional formats. What did I choose?

I chose Live CD/DVD (.iso) and USB stick / hard disk image. Why?

Live CD/DVD is basically what one gets from the official download locations of almost all distros. The Live CD/DVD helps the user get a feel of the complete system without really installing the distro.

USB stick / hard disk image is really helpful during SUSEStudio Testdrive. It has other purposes but its utility stands out during the Testdrive.

So, choose the correct formats and hit the big green Build button.

A progress bar appears and indicates the build progress. The build process can build only one format at a time.

Once the build is done, the image is ready for Testdrive and Download. The Build additional button appears if other formats were selected in addition to the default format. Hit that button and the other formats get built.



Testdrive lets the user test the appliance in the browser using the Flash plugin. No downloads at all. Any format can be test driven, but the USB stick / hard disk image is the most important one: modifications to system files made during a test drive can be pulled into the Files tab only when using the USB stick / hard disk image. So, test drive the USB stick / hard disk image, get the system properly running, and then build the other formats.


I had a problem with the LightDM Display Manager. I could not get it to work. I found that the /etc/sysconfig/displaymanager file needs to be modified. Just make sure that the beginning of the file looks like below:

## Path:    Desktop/Display manager
## Description:    settings to generate a proper displaymanager config

## Type:    string(kdm,kdm3,kdm4,xdm,gdm,wdm,entrance,console)
## Default:    ""
# Here you can set the default Display manager (kdm/xdm/gdm/wdm/entrance/console).
# all changes in this file require a restart of the displaymanager
DISPLAYMANAGER="lightdm"

I modified the file while test driving the USB stick / hard disk image and got it included in the Files Tab. Now LightDM greets me when I try to login to my machine.
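The modification itself boils down to one variable. A sed rehearsal on a stand-in file (sample-displaymanager is hypothetical; on a real openSUSE system the file is /etc/sysconfig/displaymanager):

```shell
# Stand-in for the relevant line of /etc/sysconfig/displaymanager:
printf 'DISPLAYMANAGER=""\n' > sample-displaymanager
# Point the variable at LightDM:
sed -i 's/^DISPLAYMANAGER=.*/DISPLAYMANAGER="lightdm"/' sample-displaymanager
grep '^DISPLAYMANAGER=' sample-displaymanager   # DISPLAYMANAGER="lightdm"
```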



After all the hard work, let the world know about it:


My appliances:

The package list can be found here.

The package list can be found here.

Install FreeBSD 10.0 using memstick IMG file on Virtualbox

When FreeBSD 10.0 arrived, I headed straight to the FreeBSD website and downloaded the FreeBSD-10.0-RELEASE-amd64-memstick.img file. I usually do not burn DVDs; it is just not flexible enough. Writing the installation media onto a USB stick is the easiest thing to do when you are addicted to installing OSes in your free time.

Most Linux distros publish their installation media as ISO images, while the FreeBSD team creates an IMG image in addition to the ISO. While the Linux ISO images can be written to a USB (ISO -> USB), the FreeBSD ISO cannot be written to USB. That is where the FreeBSD IMG file comes into the picture. While the IMG file is good for writing onto a USB drive, one will surely have a horrid time installing FreeBSD as a Virtualbox guest using the IMG file, because Virtualbox does not recognize the IMG format.

What is the solution?

Just run the following command:

VBoxManage convertfromraw -format VDI [filename].img [filename].vdi

Once this is done, a VDI file is generated. This is the virtual hard disk file format used by Virtualbox. Now this file can simply be attached as another storage device in Virtualbox. During the boot process, hit the F12 key to enter the virtual BIOS menu; one can select the attached drive and boot from it, in case Virtualbox does not boot from it automatically.

To write FreeBSD IMG file to USB, I use the following command:

dd if=FreeBSD-10.0-RELEASE-amd64-memstick.img of=/dev/<device> bs=64k

The general command is:

dd if=<Any compatible ISO or IMG file> of=/dev/<device> bs=64k

Yes, the dd command works on both Linux and FreeBSD.

Be very careful while using the above command. The changes are irreversible; if you select the wrong device, you will definitely lose all its data.
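The invocation can be rehearsed without any risk by pointing dd at an ordinary file instead of a device (fake.img and target.img below are throwaway names standing in for the memstick image and /dev/&lt;device&gt;):

```shell
# Create a small stand-in for the memstick image:
dd if=/dev/urandom of=fake.img bs=64k count=4 2>/dev/null
# The same dd invocation as above, writing to a plain file instead of a device:
dd if=fake.img of=target.img bs=64k 2>/dev/null
# Verify that the copy is byte-for-byte identical:
cmp fake.img target.img && echo "copy verified"
```

Only after double-checking the device name (for example with lsblk) should the real /dev/&lt;device&gt; be substituted.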

Further reading: