Introducing kernel.nighton.net

Rationale:

This bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1169984.

What I did:

After fishing around for different solutions, I decided to just use the Ubuntu builds of the mainline kernel. But hand-installing debs just made me feel… icky. So, I needed…

A secure apt repository.

OK. I’ve done this before (back when I was packaging Mugshot; see my Express Guide to Secure Apt Repositories).

But, we’ve got to automate this somehow. This led to some poking, prodding, frustration, and joy (you know, system administration 😛). Behold, wget-mainline-kernel!

#!/bin/sh

APT_REPOSITORY_PATH="/usr/share/nginx/kernel.nighton.net"
USER="dlove"

ORIGIN="http://kernel.nighton.net/"
LABEL="(=) ( nighton . net )"

DISTRIBUTION="raring"
VERSION="13.04"
COMPONENTS="main";
ARCHITECTURES="amd64 i386 armhf";

URL="http://kernel.ubuntu.com/~kernel-ppa/mainline"
APT_REPOSITORY="apt-repository"

DESCRIPTION="Packaged by David Love.";

cd "$APT_REPOSITORY_PATH"

# Fetch the top-level mainline index (saved locally as a file named "mainline").
wget -q --timestamping "$URL"

# Pull out the per-release directory names for our distribution, skipping -rc releases.
DIRECTORIES=$(grep ".*href=\".*-${DISTRIBUTION}\/\".*" mainline | \
	sed -e "s/^\(.*\)href=\"\(.*-${DISTRIBUTION}\)\/\"\(.*\)$/\2/" | \
	awk "\$1 !~ /.*-rc.*-${DISTRIBUTION}/ { print \$1 }")

for dir in $DIRECTORIES; do
	# Fetch this release's index page, mirroring the host's directory layout.
	wget -q --timestamping -x ${URL}/${dir}/

	# Pull the package file names out of that index page.
	FILES=$(grep ".*href=\"linux-.*\">.*" kernel.ubuntu.com/~kernel-ppa/mainline/${dir}/index.html | \
	sed -e "s/^.*href=\"\(linux-[^\"]*\)\".*$/\1/")

	# Download each package into the repository pool.
	cd "${APT_REPOSITORY}/pool/l"

	for file in $FILES; do
		wget -q --timestamping ${URL}/${dir}/${file}
	done

	cd ../../../
done

# Write the apt-ftparchive configuration that drives the repository build.
cat > ftparchive.conf <<_EOF_
APT::FTPArchive {
        Release {
                Origin                  "${ORIGIN}";
                Label                   "${LABEL}";
                Suite                   "${DISTRIBUTION}";
                Version                 "${VERSION}";
                Codename                "${DISTRIBUTION}";
                Architectures           "${ARCHITECTURES}";
                Components              "${COMPONENTS}";
                Description             "${DESCRIPTION}";
        };
};

Tree "dists/${DISTRIBUTION}" {
        Sections                "${COMPONENTS}";
        Architectures           "${ARCHITECTURES} source";
};

Dir {
        ArchiveDir ".";
        CacheDir  "..";
};

TreeDefault {
        Directory               "pool/";
        SrcDirectory            "pool/";
};

// Create Packages, Packages.gz and Packages.bz2, remove what you don't need
Default {
        Packages::Compress ". gzip bzip2";
        Sources::Compress ". gzip bzip2";
        Contents::Compress ". gzip bzip2";
};

// By default all Packages should have the extension ".deb"
Default {
        Packages {
                Extensions ".deb";
        };
};
_EOF_

cd "${APT_REPOSITORY}"

# Build the package indexes.
apt-ftparchive generate ../ftparchive.conf

# Build the Release file for this distribution.
apt-ftparchive -c ../ftparchive.conf release dists/${DISTRIBUTION} > dists/${DISTRIBUTION}/Release

# Sign Release; wget-mainline-kernel.conf holds only the key's passphrase.
cat "${APT_REPOSITORY_PATH}/wget-mainline-kernel.conf" | gpg --sign --local-user ${USER} --passphrase-fd 0 --yes -ba -o dists/${DISTRIBUTION}/Release.gpg dists/${DISTRIBUTION}/Release

cd ../

All of this assumes you have a pre-existing directory structure. I won’t go over the tie-ins between directory structure and setting up an apt repository here; instead, I’ll direct you to my Express Guide to Secure Apt Repositories.
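
To make that concrete, here’s roughly the layout this script assumes (the names come straight from the variables and paths in the script above):

/usr/share/nginx/kernel.nighton.net/    (APT_REPOSITORY_PATH; not served to the world)
|-- ftparchive.conf                     (written by the script)
|-- wget-mainline-kernel.conf           (holds the signing passphrase)
|-- mainline                            (cached copy of the mainline index page)
|-- kernel.ubuntu.com/...               (cached per-release index pages)
`-- apt-repository/                     (APT_REPOSITORY; the DocumentRoot)
    |-- dists/raring/                   (Packages, Release, and Release.gpg land here)
    `-- pool/l/                         (the downloaded kernel debs)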

Now that the pre-reqs have been met, we begin.

We prepare for our journey by setting a few variables.

APT_REPOSITORY_PATH="/usr/share/nginx/kernel.nighton.net"
USER="dlove"

ORIGIN="http://kernel.nighton.net/"
LABEL="(=) ( nighton . net )"

DISTRIBUTION="raring"
VERSION="13.04"
COMPONENTS="main";
ARCHITECTURES="amd64 i386 armhf";

URL="http://kernel.ubuntu.com/~kernel-ppa/mainline"
APT_REPOSITORY="apt-repository"

DESCRIPTION="Packaged by David Love.";

Most of these are self-explanatory, but a few that might trip you up are:

  1. APT_REPOSITORY_PATH. This is not the DocumentRoot of your apt repository. It’s one level above that. It is where we’ll be storing cache and configuration files. We do not want this visible to the world.
  2. USER. This is the user we’ll sign the repository with. It’s a bit of a kludge. Suggestions for improvement are welcome!
  3. APT_REPOSITORY. This is the DocumentRoot of your apt repository.

Now, we start off slow.

cd "$APT_REPOSITORY_PATH"

wget -q --timestamping "$URL"

It’s important to note that we use the --timestamping option to make sure we don’t hammer the server each time we make a request. If nothing’s changed, we don’t download more than the headers.

DIRECTORIES=$(grep ".*href=\".*-${DISTRIBUTION}\/\".*" mainline | \
	sed -e "s/^\(.*\)href=\"\(.*-${DISTRIBUTION}\)\/\"\(.*\)$/\2/" | \
	awk "\$1 !~ /.*-rc.*-${DISTRIBUTION}/ { print \$1 }")

Gah! Relevant.

But, we can break this bad boy down. First, we use grep to grab only the lines that have an HTML anchor reference (commonly known as a link) embedded within them.

Now, sed comes into play. If you’re not familiar with sed, it’s a stream editor. It sucks up input and filters the results. What we’re doing here is matching each line as in the grep above, but replacing it with only the URL of the link. How?

That’s where those extra parentheses come in. We want to capture part of the string we’re matching. First, we match on the beginning of the line with ^, followed by a capture of everything up to the href (currently unused). We then match and capture the link (we’ll need this to find our files later on). We then capture everything following that URL (currently unused) until the end of the line (delimited by $). Finally, we replace the entire line with our captured URL. This results in output that is only the directory names of the available kernels.
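
If that’s still opaque, here’s a toy run on a made-up line in the style of the real directory listing:

echo '<tr><td><a href="v3.9.2-raring/">v3.9.2-raring/</a></td></tr>' | \
	sed -e "s/^\(.*\)href=\"\(.*-raring\)\/\"\(.*\)$/\2/"
# prints: v3.9.2-raring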

Now, we send that output to awk, which then decides to only print the lines that do not have rc as part of the version numbers. At this point, I’m only looking to install released mainline kernels.
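
For example (version strings invented for illustration):

printf 'v3.9-rc8-raring\nv3.9.2-raring\n' | \
	awk '$1 !~ /.*-rc.*-raring/ { print $1 }'
# prints only: v3.9.2-raring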

The next step in our journey is to loop over those directories, and download the files that make up the release.

for dir in $DIRECTORIES; do
	wget -q --timestamping -x ${URL}/${dir}/

	FILES=$(grep ".*href=\"linux-.*\">.*" kernel.ubuntu.com/~kernel-ppa/mainline/${dir}/index.html | \
	sed -e "s/^.*href=\"\(linux-[^\"]*\)\".*$/\1/")

	cd "${APT_REPOSITORY}/pool/l"

	for file in $FILES; do
		wget -q --timestamping ${URL}/${dir}/${file}
	done

	cd ../../../
done

The pattern is the same. We retrieve the directory listing, match on links whose names start with linux-, and then replace those lines with just the link URL. Once we have that, we dive down into our pool/l directory, which is where our kernel packages will live, and download each of those files locally. We again use the --timestamping option to ensure we’re not re-downloading every individual kernel package each time we run this program. At the end, we return to APT_REPOSITORY_PATH.

For our next trick, we set up the ftparchive.conf file that controls the build of our apt repository (you did read the Express Guide to Secure Apt Repositories, didn’t you?).

cat > ftparchive.conf <<_EOF_
APT::FTPArchive {
        Release {
                Origin                  "${ORIGIN}";
                Label                   "${LABEL}";
                Suite                   "${DISTRIBUTION}";
                Version                 "${VERSION}";
                Codename                "${DISTRIBUTION}";
                Architectures           "${ARCHITECTURES}";
                Components              "${COMPONENTS}";
                Description             "${DESCRIPTION}";
        };
};

Tree "dists/${DISTRIBUTION}" {
        Sections                "${COMPONENTS}";
        Architectures           "${ARCHITECTURES} source";
};

Dir {
        ArchiveDir ".";
        CacheDir  "..";
};

TreeDefault {
        Directory               "pool/";
        SrcDirectory            "pool/";
};

// Create Packages, Packages.gz and Packages.bz2, remove what you don't need
Default {
        Packages::Compress ". gzip bzip2";
        Sources::Compress ". gzip bzip2";
        Contents::Compress ". gzip bzip2";
};

// By default all Packages should have the extension ".deb"
Default {
        Packages {
                Extensions ".deb";
        };
};
_EOF_

I did add some additional sections since that guide was written. The relevant passages are:

// Create Packages, Packages.gz and Packages.bz2, remove what you don't need
Default {
        Packages::Compress ". gzip bzip2";
        Sources::Compress ". gzip bzip2";
        Contents::Compress ". gzip bzip2";
};

// By default all Packages should have the extension ".deb"
Default {
        Packages {
                Extensions ".deb";
        };
};

Here, we indicate we want compressed versions of the Packages, Sources, and Contents files. Additionally, we only want to index packages that end in .deb.
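
Concretely, after a run you should see something like this in each architecture’s directory (amd64 shown; i386 and armhf get the same treatment):

ls apt-repository/dists/raring/main/binary-amd64/
# Packages  Packages.gz  Packages.bz2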

Now, we actually generate both the apt repository and the Release file.

cd "${APT_REPOSITORY}"

apt-ftparchive generate ../ftparchive.conf

apt-ftparchive -c ../ftparchive.conf release dists/${DISTRIBUTION} > dists/${DISTRIBUTION}/Release

Our final step is to sign the whole thing, putting the secure in secure apt.

cat "${APT_REPOSITORY_PATH}/wget-mainline-kernel.conf" | gpg --sign --local-user ${USER} --passphrase-fd 0 --yes -ba -o dists/${DISTRIBUTION}/Release.gpg dists/${DISTRIBUTION}/Release

cd ../

This takes a file located in APT_REPOSITORY_PATH named wget-mainline-kernel.conf and pipes it into gpg. That file should contain only the passphrase for USER’s signing key. Again, this is a bit of a kludge. Suggestions for improvement are welcome!
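
One possible cleanup: gpg can read the passphrase from a file itself, which drops the cat. A sketch (--passphrase-file with --batch should work with the GnuPG 1.4 in raring; newer gpg2 releases want extra loopback-pinentry options):

gpg --batch --yes --local-user ${USER} \
	--passphrase-file "${APT_REPOSITORY_PATH}/wget-mainline-kernel.conf" \
	-sab -o dists/${DISTRIBUTION}/Release.gpg dists/${DISTRIBUTION}/Release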

And, that brings us to the end of our journey. All you have to do now is add:

deb http://kernel.nighton.net/ raring main

to your apt sources, and you’ll be good to go! Under Ubuntu 13.04 (raring), click the System icon (it’s the one that looks like a gear), then proceed to “Software & Updates,” followed by the “Other Software” tab. Click “Add…”, copy and paste the preceding line into the box, click “Add Source,” and, finally, click “Close.” The repository will now be available for you to install packages from.
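
If you’d rather skip the GUI, the command-line version is something like this (the key.asc location is hypothetical; serve your public key wherever you like):

# key.asc is a hypothetical location for the repository's public key
wget -qO- http://kernel.nighton.net/key.asc | sudo apt-key add -
echo "deb http://kernel.nighton.net/ raring main" | \
	sudo tee /etc/apt/sources.list.d/kernel-nighton.list
sudo apt-get update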

Getting 16 GB (2 × 8 GB Sticks) of G.Skill RAM (F3-1600C7D-16GTX) Working on an Asus M4A79T Deluxe

I was so worried that I had just wasted money on RAM that wouldn’t work with my motherboard. But thanks to user bkeuba011 over on Tom’s Hardware, I’ve got it up, running, and stable. The key is to leave everything on Auto in the BIOS except for the following settings:

DRAM Frequency:  1600 MHz
DRAM Voltage:  1.74 V
DRAM Timings:  9-9-9-24

http://www.tomshardware.com/forum/255670-29-asus-m4a79t-deluxe-skill-ripjaw-128000cl7d-optimizing

www.nighton.net lives!

You may have noticed that there’s a new link on the top.  Biography.  I decided to make the main domain an overview of, well, me.  After all, this domain is my… uhm… my domain I guess.  King of the castle, no?  It’s just a starting point for now.  But it does give me a chance to branch out and start indexing my online life.  Comments and suggestions are always welcome.

And now, for something different…

Well, if you’ve found your way here, you’ve probably noticed that this blog now lives under the blog.nighton.net sub-domain.  This is part of a larger migration on my part.  This part of the old site is now hosted on WordPress.com.  My email (and God only knows how much else) is now powered by Google Apps for Business.  And http://www.nighton.net has… well… nothing.  I’ve still got to install a web server on my new VM, now running at Linode.  Sorry, Rackspace.  But I liked Slicehost the way it *was*.  There will be more to come (I hope).

P.S.  I just realized the last post was from the end of 2010.  Wow, how time flies.  I’m hoping to re-engage with the Free Software community (among other things).  Yet again, more to come.

Upgrading the iPhone under VMWare Workstation 7

Well, 7.1.3 build-324285 to be exact. This drove me nuts. Luckily, the internets came to the rescue and I was finally able to upgrade my iPhone’s firmware under VMWare running atop maverick. What internets? These internets:

http://blog.yibi.org/2010/06/22/updating-iphone-to-ios-4-in-vmware

Yeah. A simple rmmod took care of everything once I knew to do it. However, after I did that I wasn’t able to mount the phone to restore my ringtones. To the cloud!

http://ubuntuforums.org/showthread.php?t=1628529&page=5

That PPA, plus the whole idevicepair dance, let me get the iPhone mounting under Ubuntu again.
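
The rough sequence, as best I can reconstruct it (package names are from memory, so double-check them):

sudo apt-get install libimobiledevice-utils ifuse
idevicepair pair                     # pair the phone with this machine
idevicepair validate                 # confirm the pairing took
mkdir -p ~/iphone && ifuse ~/iphone  # mount the phone's storage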

I figured I’d collect all this in one place considering I’ll probably forget all about it once the next iFirmware is released. God, do I need a new phone. Thinking about doing a Nexus S with T-Mobile after my AT&T contract is up. Lock-in sucks.