Owners of the MacBookPro11,2 (Late 2013/Mid 2014 15" MacBook Pro) running Linux (I use Ubuntu 15.10) may have experienced trouble suspending and resuming: on suspend, the screen blanks, the system attempts to suspend, then hangs. This is due to a chvt call failing.

PM_DEBUG=true pm-suspend

will allow some limited tracing of the call flow. (More on PM_DEBUG)

Skipping the chvt call using:

echo "--quirk-no-chvt" > /etc/pm/config.d/suspend_hacks

didn’t seem to be sufficient for me, as suspend would work, but resume would fail. I needed to disable the chvt call at line 100 of /usr/lib/pm-utils/sleep.d/99video as well. I did so using the following:

maybe_deallocvt()
{
  # Bail out before the chvt call when the no-chvt quirk is set.
  is_set "$QUIRK_NO_CHVT" && return
  state_exists console || return 0
  chvt $(restorestate console)
  deallocvt 63
}

I was, however, unable to figure out what the actual root cause was.

More references:

https://bugs.launchpad.net/ubuntu/+source/kbd/+bug/1351564
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=604094

So recently I’ve been experiencing some pretty terrible static during audio playback, and a little research suggests it has something to do with the default buffer sizes used by Chrome and PulseAudio. Recent builds of Chrome use a much smaller buffer size of 512, while older builds used 2048; you can restore the old behavior by passing the --audio-buffer-size=2048 flag to the executable. (Source: https://code.google.com/p/chromium/issues/detail?id=178626)

However, my issue was system-wide (mplayer and mpd experienced the same thing), which pointed to PulseAudio. It turns out PulseAudio also ships with some poor default buffer and fragment sizes. Those can be remedied by setting the correct values in /etc/pulse/daemon.conf: the values for default-fragments and default-fragment-size-msec should match the buffer sizes used by your sound card, which can be viewed by starting PulseAudio with very verbose logging enabled. (Source: http://forums.linuxmint.com/viewtopic.php?f=42&t=44862)
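
Here is a minimal sketch of the daemon.conf tweak. The fragment values below are purely illustrative assumptions; the right numbers are whatever your card reports in the verbose log.

# Append assumed example values to the PulseAudio daemon config;
# replace 4 / 25 with the fragment count and size your card actually uses.
sudo tee -a /etc/pulse/daemon.conf >/dev/null <<'EOF'
default-fragments = 4
default-fragment-size-msec = 25
EOF

# Restart PulseAudio and watch the verbose log for the buffer/fragment
# sizes the sound card reports.
pulseaudio -k
pulseaudio -vvvv 2>&1 | grep -iE 'fragment|buffer'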

If you play games on emulators, it’s highly recommended you grab something like a SIXAXIS controller to do so with. However, if you’re lazy like me and play on the keyboard, analog button presses typically aren’t emulated all that well by most pad plugins. I hacked up this quick and dirty solution to PCSX2’s Onepad plugin to get past the dragon cave section of Star Ocean 3.

I’d like to revisit it and make it properly configurable via the GTK gamepad configuration interface, but alas I haven’t found a reason to do so yet.

While I don’t play too many games anymore, Napoleon: Total War is one of the few that I do from time to time. Probably the main reason I don’t play much anymore is that it typically takes quite a bit of effort to make Windows games run properly under Wine; sometimes they require custom patches to Wine itself. Napoleon: Total War is one of those, as it needs a patch to the SetPixelFormat function to work.

Unfortunately the patch hasn’t been updated for Wine 1.5, so I’ve updated it here:

https://gist.github.com/4553809
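
Applying it is the usual patch-and-rebuild routine. A rough sketch, assuming the gist is saved as setpixelformat.patch and was generated as a -p1 diff against a Wine 1.5.x source tree (names and strip level are assumptions, adjust to your checkout):

# Hypothetical filename and source directory.
cd wine-1.5.x
patch -p1 < /path/to/setpixelformat.patch   # use -p0 if the diff was generated that way
./configure && make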

Those of you who have been following my GitHub repos in the past few months will have noted a large influx of Android related commits. While I was supposed to be working on my M.S. project, I took a lot of my free time to work on a port of CyanogenMod to my Android phone at the time, the HTC Sensation, device codename “Pyramid”. Since this was pretty much my first real go at working on the Android system (app development isn’t quite the same), I managed to learn quite a bit.

The starting point for any device supported by CyanogenMod is the device tree, which typically lives in /device/<manufacturer>/<device codename> (where / is the top of the Android source tree). This repo is where device-specific configuration lives, and it can inherit from a device-class repo as well (e.g. htc/msm8660-common, shared by all HTC MSM8660 devices).

Most of the configuration actually lives in two main files: device_<codename>.mk (sometimes simply <codename>.mk) and BoardConfig.mk. BoardConfig.mk is where the description of device and hardware specific features goes: the device name, CPU flags, display flags, sound flags, and so on. The complete file ends up as a description of the device to the Android build system. Where BoardConfig.mk is responsible for defining the device, device_<codename>.mk is responsible for defining what packages and files go into the final build. The Android build system largely defines targets as modules, and listing a module in device_<codename>.mk as a product package will cause it to be built and included in the final output. This covers all the HAL modules (audio.primary, audio.policy, hwcomposer, camera, etc.) as well as apps such as live wallpapers. This is also where additional files, such as those needed for the ramdisk or configuration, are copied.

Proprietary libraries extracted from the original device software are not copied here, however. Due to various issues, they go into a directory hierarchy under /vendor/<manufacturer>/<device>. Typically a helper script will create the directories, generate the makefiles, extract the libraries, and copy them to the correct place; all that is necessary is to tell the script which libraries to extract, most commonly by listing them in proprietary-files.txt.
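
To make the layout concrete, here is a rough sketch of where things end up, using pyramid as the example codename. The file names follow the usual CyanogenMod conventions, but treat the specifics as illustrative rather than exact.

# Device configuration (checked into the device repo):
#   device/htc/pyramid/BoardConfig.mk        - hardware description for the build system
#   device/htc/pyramid/device_pyramid.mk     - product packages and files to copy
#   device/htc/pyramid/proprietary-files.txt - list of blobs to pull from the stock software
#
# Running the extraction helper from the device directory populates
# vendor/htc/pyramid and generates the vendor makefiles:
cd device/htc/pyramid
./extract-files.sh    # usually pulls the listed blobs off a connected device via adb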

Once a suitable device tree has been defined and the appropriate files have been extracted, we can begin building. On a new device or class of devices, this typically means bringing in the corresponding HAL and other associated libraries. These usually come from upstream, being provided by the chipset manufacturers themselves (Qualcomm for the MSM series, TI for the OMAP series, Samsung for the Exynos/SMDK series, etc.), and the code tends to live in the /hardware/<qcom/samsung/ti> directories. Once this platform code is integrated into the build framework, work can proceed on the build process itself, and eventually on adding device-specific support. This is typically necessary because each chipset manufacturer maintains its own fork of the Android framework, hacked up to better support its hardware. Since the released patches generally target development boards, real shipping devices (phones, tablets) usually differ slightly, whether in additional DSPs for image or audio processing or just different hardware revisions. All of this must be accounted for in the CyanogenMod hardware repositories, and this is typically where the problems lie in getting new devices supported. It is also, again, why support for a single device usually brings support for many others in short order: as they all share the MSM8660/Snapdragon S3 chipset, work on pyramid helped enable doubleshot (MyTouch 4G Slide), shooter (Evo 3D), and even the US variants of the Samsung Galaxy S2.
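
For reference, once the device tree, vendor blobs, and platform code are in place, actually producing a build is fairly mechanical. A rough sketch, assuming the usual CyanogenMod envsetup/brunch workflow and an assumed checkout location:

# Assumed source location; the codename is whatever your device tree defines.
cd ~/android/system
. build/envsetup.sh     # pulls in lunch, brunch, breakfast, etc.
brunch pyramid          # configures the device and builds a flashable zip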

The major shortcoming is the lack of code and documentation for many of these devices, which makes it difficult to get hardware working when no code has been provided. This typically means the camera, and sometimes the RIL (radio interface layer). Camera support in particular continues to be the bane of CM developers, as nearly everything necessary remains proprietary and hidden.

As with any software project, once you become familiar with the infrastructure, work proceeds much faster: the ville (HTC One S) port I’m currently working on is closing in on full functionality, while the pyramid port took quite a bit more time. I continue to work on both pyramid and ville, and am also working to improve MSM8960/Snapdragon S4 support in CyanogenMod. You can keep up to date with my work on my GitHub, as well as the CyanogenMod Project code review tool. Additionally, you can learn more about this from the man himself, as Cyanogen gave a talk recently on this very subject.

Finally found the need to have a real laptop available, so I went out and picked up a MacBook Air over the Thanksgiving break. Since I’ve gotten so used to XMonad for day to day use, it would have to be running Ubuntu. Turns out that there is an Ubuntu community wiki page available detailing what is necessary to get things going.

A script available on almostsure creates a bootable USB drive to install off of, and the almostsure post-install script takes care of installing most of the drivers and configuration you’d want. For reference, the important things (as of Nov 30, 2011) are the following:

  1. i915 module fix, without which kernel modesetting doesn’t work.
  2. bcm5974 module fix for the trackpad.
  3. btusb module for bluetooth, which for some reason doesn’t seem to be working with 11.10 out of the box.
  4. hid-apple module for keyboard, for what I assume is backlighting and function keys.
  5. xf86-input-mtrack Xorg input module, without which trackpad gestures (two finger scroll, etc) don’t seem to work correctly.
  6. dispad package for touchpad pausing, although I haven’t noticed it coming into use.
  7. macfanctld package for the fan control daemon, since the default 11.10 package doesn’t seem to do much out of the box.
  8. custom xmodmap to remap the Apple key to Alt, Alt/Option to Mod4, since my fingers expect that sort of layout.
  9. i915 power saving mode on boot, by appending i915.i915_enable_rc6=1 to GRUB_CMDLINE_LINUX_DEFAULT (see the sketch after this list).
  10. Avoid the long bootloader timeout by executing sudo bless --device /dev/disk0s4 --setBoot --legacy from the OS X recovery USB drive.
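
Item 9, spelled out as a minimal sketch (the existing options such as "quiet splash" will vary per install):

# Append the flag inside GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# so the line ends up looking something like:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.i915_enable_rc6=1"
sudoedit /etc/default/grub
sudo update-grub   # regenerate grub.cfg so the new option takes effect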

The one thing NOT covered by the Ubuntu community wiki is the wireless driver. The MacBook Air 4,1 has a Broadcom BCM43224 chip, which is actually covered by several different drivers: b43 (if you hack around a bit), brcmsmac, and the closed-source wl Broadcom STA driver. Poking around suggests that the best option is the brcmsmac module, as it is based on the open-source driver Broadcom released in late 2010; however, owing to its immaturity, it still lives in the staging section of the Linux kernel. It also seems to lack some features supported by the Broadcom STA driver, namely power management, which is the main reason I chose to use the closed-source (the horror) Broadcom STA driver available from the Ubuntu “restricted” repository as the bcmwl-kernel-source package.

The one item to note with this module is that the bcmwl-kernel-source package doesn’t actually ship an updated blacklist file in /etc/modprobe.d, and will fail to blacklist the brcmsmac module loaded by default, potentially leading to problems. I had to add brcmsmac and bcma to the blacklist with the following:

sudo sh -c "echo 'blacklist brcmsmac' >> /etc/modprobe.d/blacklist-b43.conf"
sudo sh -c "echo 'blacklist bcma' >> /etc/modprobe.d/blacklist-b43.conf"
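
After a reboot (or after unloading the modules by hand), a quick sanity check that only the STA driver is in use might look like this, assuming the STA package exposes itself as the wl module:

# wl should be listed; brcmsmac and bcma should not appear.
lsmod | grep -E '^(wl|brcmsmac|bcma)'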

These blacklist entries keep the kernel from loading either in-tree module. Most of this information was found on the Arch Linux Broadcom wireless wiki page. And that concludes all the laptop configuration I had to do; all that remains is reconfiguring my XMonad settings for laptop use!

I took the plunge and upgraded my main machine from 11.04 to 11.10 today. Altogether not too bad, since underneath the much-maligned Unity lies Compiz, and CCSM still configures everything I actually care about. The one thing that did break on upgrade, however, was my sound. I happen to have a Sound Blaster X-Fi Titanium HD (etc etc), which according to Google searches seems to be the source of quite a few people’s headaches. Specifically, the X-Fi Titanium HD is handled by the snd_ctxfi driver, but prior to approximately 6/14/2011 it was recognized as an older model, which caused improper behavior and, if you were quick on the kernel logging, yielded messages similar to these:

[ 4881.961765] SB-XFi 0000:02:00.0: setting latency timer to 64
[ 4918.787949] SB-XFi 0000:02:00.0: PCI INT A disabled
[ 4918.787955] ctxfi: Something wrong!!!
[ 4918.787969] SB-XFi: probe of 0000:02:00.0 failed with error -1

A quick check of gmane.linux.alsa.devel shows, in these messages, that a fellow named Harry Butterworth has put a lot of effort into patching up the driver to support the new card. The ALSA project’s git repositories show that these three patches are required to make things work:

Patch1

Patch2

Patch3

Now that I have the requisite background info, it’s time to get to solving the problem. At the time of this writing (10/14/2011), the version of the Linux kernel package Oneiric uses is 3.0.0-12.20 (full package: linux-image-3.0.0-12-generic). Let’s start off by fetching the source for the kernel with:

sudo apt-get install linux-source

This places the Ubuntu kernel source tarball in /usr/src. We need to copy it somewhere useful and unpack it.

mkdir ctxfi-module
cd ctxfi-module
cp /usr/src/linux-source-3.0.0.tar.bz2 .
tar xfj linux-source-3.0.0.tar.bz2

We now need to patch the unpacked kernel source with the proper patches from above. In this version of the source (linux-source-3.0.0-12.20), patch 1 has already been included.

patch -p1 < /path/to/patch2     
patch -p1 < /path/to/patch3
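
One step the post glosses over is the working directory: patch -p1 expects to be run from the top of the unpacked tree, and the make invocation further down (with M=`pwd` and KBUILD_SRC=`pwd`/../../..) implies it was run from the ctxfi driver directory three levels down. A plausible sequence, assumed rather than shown in the original steps:

# Assumed directory changes.
cd linux-source-3.0.0      # apply patch2 and patch3 from here with patch -p1
cd sound/pci/ctxfi         # then run the make commands below from the driver directory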

We now need to build the new kernel module. I found useful directions at the Ubuntu wiki page on custom kernel builds. Before doing that, however, I noticed my current linux-headers package didn’t have the PCI ID for the X-Fi Ti HD, so I added this line:

#define PCI_SUBDEVICE_ID_CREATIVE_SB1270 0x0062

to

/usr/src/linux-headers-`uname -r`/include/linux/pci_ids.h

at line 1308, as in patch 1. Then I followed the make directions from the Ubuntu wiki.

make -C /usr/src/linux-headers-`uname -r` M=`pwd` KBUILD_SRC=`pwd`/../../.. modules
sudo make -C /usr/src/linux-headers-`uname -r` M=`pwd` KBUILD_SRC=`pwd`/../../.. modules_install
sudo depmod -a
sudo update-initramfs -u

This installed the snd_ctxfi.ko module to:

/lib/modules/`uname -r`/extra/snd_ctxfi.ko

From there it’s a simple matter to rmmod snd_ctxfi and insmod your newly patched one. This got sound working for me again.
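
Concretely, with the module name as given above, the swap looks like:

# Unload the broken module and load the freshly built one by path.
sudo rmmod snd_ctxfi
sudo insmod /lib/modules/`uname -r`/extra/snd_ctxfi.ko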

I used the following to get set up in Ubuntu. Connecting to the IPSEC VPN requires installing network-manager-vpnc-gnome, which will pull in the required packages:

sudo apt-get install network-manager-vpnc-gnome

Next we need the settings, which can be pulled from the PCF file or the PDF on the BOL website (requires login). NetworkManager requires the following:

Gateway: vpn.ucla.edu
Group name: <group>
User password: <your BOL user password>
Group password: <decrypted group password>
User name: <your BOL user name>
Domain: blank
Encryption: Secure
NAT Traversal: Cisco UDP
IKE DH Group: DH Group 2

I used the pcf2vpnc program to obtain the relevant group info from the PCF. It should have been installed by the vpnc package, and can be found on Ubuntu at:

/usr/share/vpnc/pcf2vpnc
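
Usage is straightforward; the PCF filename here is hypothetical (use whatever you downloaded from BOL), and the output should contain the gateway, group name, and group password that NetworkManager asks for above:

# Hypothetical input filename; prints a vpnc-style config with the group info.
/usr/share/vpnc/pcf2vpnc ucla-bol.pcf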

Lastly it’s important to allow IPSEC passthrough on your router should that not already be enabled. 

In short, I’m back at school, where the objective is to read lots, but not to write as much.

Someone mentioned to me while I was working at Google that it’s more interesting to build things than measure things, which was one of the most notable things I took away from the internship.

As for the MS thesis/project, the hardest part is still starting, and I don’t yet have a good idea in the partitioned global address space area to build upon.

I’ve been working at Google this summer as an intern, which is part of the reason why there haven’t been any updates to any of my things. (This blog, MicDroid, etc) They say Google is a place that engineers disappear into, and are not heard from again, and that seems somewhat true for me this summer. The other part of it is simply that my life has become busy, and I’ve had to make some sacrifices in what I spend my time on.

If there’s one thing that’s happened fairly frequently, it’s that people are interested in what goes on at Google. Today I’ll attempt to talk about some of the things I found interesting. 

Please note: these are my personal views, they are not meant to be official announcements of any sort, nor are they in any way endorsed by Google.

Let’s begin with what I’ve learned this summer at Google.

  • Python: My projects this summer were primarily Python based, with a small amount of C++ on the side. I estimate I wrote around 3KLOC worth of Python, and around 500LOC worth of C++. Needless to say it’s great to learn another language, and for the Python lessons alone, this summer was worth it. One of the nicer things about learning Python at Google is that “correct” behaviors are enforced by the company style guide as well as code reviews. The best part about Python for my project was that it reduced the logic of the program to data structure manipulation. 
  • Scripting ability: Since my project involves managing a fairly large sized fleet of machines, being able to script fluently using the usual bash tools (awk, grep, sed, etc) is important to successfully doing my job. A related lesson to this is to realize when your scripting has gotten out of control. After a certain level of complexity, perhaps it is better to fold the script functionality into a full-blown program.
  • Dependencies matter in building large distributed systems: I accidentally triggered a DDoS using the entire fleet of machines because of a set of changes that introduced a dependency on an external service. As it turns out, that external service couldn’t handle the load of the whole fleet; that was beyond its designed usage. Since then I’ve been a lot more careful about invoking library functionality that may depend on external services.

Next let’s talk about how things are done at Google.

As expected of the engineering-driven culture at Google, the toolchain for working in the main Google source tree is pretty heavily developed. 

  • Google’s build system is fast. This blog post series provides a more in-depth look at it, but the short description is that it caches build output across the entirety of the main source tree to avoid unnecessary compilation. Because of this, the large majority of build outputs are actually retrieved from cache, leaving the builder very little actual work to do. Rebuilding my summer project typically takes less than 10 seconds.
  • Code search is very powerful. This blog post introducing public Google Code Search mentions that public Code Search is based on an internal tool that is very widely used. Internal Code Search is several orders of magnitude more powerful than the public variant, so much so that I typically browse through code in Chrome rather than the terminal.
  • Code reviews are required for any code that goes into the main tree (especially for interns!). Google has actually opened up a variant of their code review tool on Google Code. There is also an article and tech talk by Guido van Rossum available as well. I actually liked Google’s tool quite a bit, since I basically tracked my tasks and TODOs through it.
  • Code style is enforced through code reviews, style guides, and the presubmit process, as mentioned earlier. This leads to code of relatively uniform quality–quite a contrast to many other places I’ve worked before.
  • Git at Google. Google is known for being a Perforce company. It turns out there is quite a bit more to the Perforce system at Google than just that though. Also, the engineers have written a Git interface around Perforce, and it works relatively well. This lets me feature branch as much as I want while allowing me to make bugfix branches when necessary. Very useful.

Infrastructure at Google is actually quite interesting too. Having to deal with a lot of machines also equates to having a good amount of tools to deal with them as well.

  • Google’s VM management system is called Ganeti. It’s been open-sourced, and the project page details that it’s built around Xen/KVM. Ganeti is a layer above Google’s vast datacenter resources. There’s a fairly well developed set of tools around making Ganeti programmable as well.
  • Building and deploying to our machines is done through a combination of a continuous integration system that produces debs, APT, Puppet, and Slack.
  • A good sysadmin needs to not only know the system inside and out, but needs to be able to script and program their tools as well. This is something I really need to work on.

In addition to being known for engineering prowess, Google is also known for being an interesting place to work.

  • It’s typically said the company is opaque to those on the outside, but transparent to those on the inside. Engineers are given access to almost everything, including source, documentation, information (including non-engineering related info), etc about nearly all Google projects. Additionally, there’s no particular embargo on talking to engineers working on other projects, so it’s not uncommon at all to hear a lot of details about other projects over lunch. There’s just a general openness regarding information inside the company.
  • There’s just as much complaining about Google products inside the company as there is outside. Especially with the recent launch of Google+ and the inconsistent real name policy that accompanied it, there has been just as much debate inside the company as there has been outside. So to those of you who have fallen victim to the real name policy, we feel you.
  • Along with the openness of information, internal public discourse about what the company is doing has many avenues, from mailing lists to groups in the company, to all company meetings. It’s great to be a part of the larger discussion concerning what the company is doing.
  • Google is a very strong proponent of dogfooding. Google employees can (and do) test early builds of almost any Google product before general release. Along with this comes the discussion of future features and products, of which there is quite a bit going on.
  • At least for the teams I was working with, it felt like Google was very bottom-up: we worked on a small area of the team mission each day rather than taking direction on a specific product or feature. I’m not sure if this is because my team wasn’t a strict product team, but it sure does feel like each day you get to choose to devote your time to something that furthers the team mission. Personal ownership of what you’re working on is also pretty high, at least in the team I worked with. It’s definitely not on the same level as a startup, but it’s quite a bit nicer than some of the other places I’ve worked. Word across the internet says things are changing at Google in this regard, but it didn’t affect the teams I was in touch with.
  • There is a HUGE amount of information inside Google. This information is about nearly everything Google related, from product information, code documentation, tutorials, internal tools, what’s for lunch, parking, traveling, personal projects, etc. Not all of it may be up to date, but the information is definitely there. One could waste days just learning about Google history, how Google works, etc by reading documentation and sites inside. 
  • There is a slide in one of the Google Mountain View buildings. A slide. I couldn’t find it. Maybe next time.

I’m definitely going to miss working at Google (and not just for the free food!).

  • I am going to miss the (relatively) consistent code style. Having worked at several places where there was no real style guide, being able to actually READ other people’s code was quite pleasant.
  • The tools (those mentioned above, and some that weren’t) make life quite a bit easier, especially the code search. I am seriously going to miss how useful the code search system was. In fact I’m already thinking about a possible system that hooks into Git repositories…
  • Working on a large distributed system is something I haven’t done before (the systems at the Supercomputer Center were quite a bit smaller). It’s got its own unique issues, but overall it is quite fun to work on.
  • It’s easy to find information at Google (who would have guessed?). Even if what you find is not current, at least it gives you a starting point to get more information. Additionally, there is a LOT of really fascinating information lying around internally. I just wish I had more time to ingest more of it.
  • Free food. (naturally)

There are a few things that I did find irritating while working at Google though. Here are some.

  • Some common usages of infrastructure require approval. This takes time away from useful implementation time and slows down the overall pace of work. I can understand why they require approval, but sometimes it seems rather arbitrary. 
  • Not all the information is up to date. It’s very easy to find old documentation or deprecated systems, and sometimes there isn’t even new documentation yet. This can also make it difficult to find the RIGHT information. Usually the solution is to ask, but nothing breaks up your workflow like finding out you were using a deprecated or undesired feature.
  • Not Invented Here syndrome is supposedly one of the big problems Google faces; however, I didn’t actually run into it much (or maybe I don’t know enough outside systems to realize when I am). I do think the existing code/build/repository model makes it difficult to integrate third-party code, though.

Overall, Google has been great to me this summer, and I really think I would enjoy working there in the future (provided this post doesn’t disqualify me, oops).