One SSD instead of two: simpler or not?

My Core i7 machine, named Drake, had two 120 GB SSDs. I purchased the first one with the machine and put Windows 7 and Ubuntu on it. Then I needed more space to get Mac OS X, so I added a second 120 GB SSD. Mac OS X became a pain, almost unusable, because everything was too small. When I reached the point where I had to lower the screen resolution to get Thunderbird running comfortably, I got rid of Mac OS X. Then Windows 7, upgraded to Windows 8, started to eat up more space, so I needed to move Ubuntu to the second SSD.

I ended up with a brittle configuration composed of the ESP (EFI system partition) on the second SSD, Windows 8.1 on the first drive and Ubuntu on the second. I was waiting for a special deal on a 240 GB SSD and finally got one from TigerDirect at the beginning of September 2014. However, purchasing the SSD is only the easy part. Migrating data from two SSDs to a single one, with Windows 8.1, Ubuntu 14.04 and UEFI in the way, is an incredible source of headaches. This page shows how I got it done.

The easy way: reinstall everything

That would have worked ten, maybe even five, years ago. Yes, just reinstall Windows, a few drivers, a few programs, put back Ubuntu, perform some settings, fine-tune a bit, and enjoy the rebirth of the system, coming back to life and full functionality. Things have changed over the years, and not for the better. Now that Microsoft and hardware manufacturers assume people won't install anything themselves and will rather purchase hardware with everything preinstalled and preconfigured, things have become more and more time consuming to set up. Just installing Windows 8 takes more than 45 minutes, and although I could obtain a DVD with Windows 8.1, my Windows 8 serial number won't work with it. I would have had to install Windows 8, then upgrade to Windows 8.1 again!

Then come the drivers. Since I purchased my motherboard before Windows 8 was released, all my motherboard CD has to offer is Windows 7 drivers. So I cannot use the easy auto-install tool performing an unattended setup. Instead, I have to download every driver separately from Asus, run them, wait, reboot, run the next one, etc. Then there is the NVIDIA driver, requiring a 100 MB download and yet another installation taking more than five minutes, and yet another reboot. Maybe I chose the wrong motherboard. By sacrificing a few USB ports, S/PDIF audio and maybe some PCI Express slots, I could perhaps get something simpler requiring fewer drivers, something able to make do with what is prepackaged within Windows. That's still to be investigated.

Then come the programs. Yes, Ninite can install many programs for me automatically, but not GNU Emacs or GNU GPG, and it won't configure my Dropbox, resync my Firefox bookmarks or reinitialize my Thunderbird email settings. It won't link my Documents, Images, Music and Videos default folders back to my data hard drive.

And then come the licenses. How will Windows 8.1 activation behave? Will it happen smoothly, or will Windows decide that this change of SSD is too much and require me to call Microsoft to perform activation by phone, forcing me to exchange, by voice, over a poor channel, dozens of nonsensical digits? After Windows 8.1 activation, my DAW, Live from Ableton, also requires authorization. I'm not sure it will reauthorize, since I activated it on my main PC as well as on my ultrabook. That means additional hassle.

Bottom line, reinstalling is a pain, and that is just the Windows side. Ubuntu installation is usually smooth, but when a single thing goes wrong, it requires hours of Google searches.

This is why I wanted a better way. I was so tired of this tedious process that I was considering giving up on this machine and using my ultrabook instead, if the data transfer failed. But my ultrabook, with its 128 GB SSD, wouldn't have enough storage for editing music made of samples or recording and editing Minecraft videos.

Preliminary connection of the new SSD

Before installing the new 240 GB SSD into my system permanently, I wanted to be sure I would be able to transfer my two operating systems (Windows 8.1 and Ubuntu 14.04) and make them boot. I thus only plugged the disk in, rather than attaching it right away inside my case. I fortunately had some free SATA power cables as well as an extra SATA cable and port. That allowed me to connect the new drive without disconnecting the others. This way, it would have been easy to roll back in case of difficulties forcing me to reinstall everything, and then think about another strategy or gather my courage and patience for the full reinstall.

I then booted from a USB stick with a live installation of Ubuntu 14.04. This was necessary to perform the data transfer on totally offline, clean file systems.

Before transferring anything onto the drive, I ran a SMART self-test. For this, I installed smartmontools with apt-get and ran sudo smartctl -t long /dev/sdb. At that time, /dev/sdb was the device node of the new drive. The test took almost an hour, but I could leave it running and do something else.
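
In case it helps, here is roughly what that looks like; the device name is whatever the new drive shows up as (check with lsblk first), /dev/sdb in my case:

# Install the SMART tools and start a long (full surface) self-test on the new drive.
sudo apt-get install smartmontools
sudo smartctl -t long /dev/sdb       # returns immediately; the test runs inside the drive
# About an hour later, check the self-test log and the overall health verdict.
sudo smartctl -l selftest /dev/sdb
sudo smartctl -H /dev/sdb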

The self-test found no defects. I learned to do this preliminary step the hard way when I assembled a machine for my parents: the hard drive failed shortly after, while I was configuring Windows, and I had to RMA it. Performing a self-test first would have saved me some wasted time and frustration.

With the drive clean of any defect, at least from the point of view of the self-test, I moved on to the next step: data transfer.

GParted is the king!

A long time ago, my only friend for partitioning and drive transfers was Partition Magic, from PowerQuest, later purchased by Symantec. That time is over, thanks to GParted, a free open source tool that comes with Ubuntu. But this time, my job was pushing GParted to its limits. Here are the operations I needed to perform with it (a rough command-line sketch of the same kind of operations follows the list):

  1. Create a GUID Partition Table (GPT) on the new SSD. This is because I want a pure UEFI-based system; it is not strictly necessary otherwise, since the drive is far below the 2 TB limit!
  2. Copy the first partition of my second SSD to the beginning of the new drive: this is the ESP.
  3. Copy the first partition of the first SSD: this is the 128 MB system reserved partition of Windows. That copy wasn't possible, because GParted didn't know the partition type. I thus left a 128 MB hole declared as unformatted, to figure out a way out later on. I was hoping Windows could recreate the data on this partition.
  4. Copy the second partition of the first SSD: this was the Windows main partition.
  5. Copy the 40-ish GB partition of my second SSD to the end of the new drive: this was my Ubuntu home partition.
  6. Copy the 20-ish GB partition of my second SSD to the bottom of the free space on the new drive: this was my main Ubuntu installation.
  7. Create an extra 20 GB partition on the new drive in case I would like to give a new Linux distribution a shot.
  8. Create a 16 GB swap space on the new drive for Ubuntu's use.
  9. Resize my Windows main partition to take up the rest of the space.
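
For the curious, here is a rough command-line sketch of the kind of operations GParted performs for steps 1 and 2, assuming /dev/sdb holds the old ESP and /dev/sdc is the new SSD (device names and sizes here are illustrative, not the ones from my actual setup):

sudo parted -s /dev/sdc mklabel gpt                    # step 1: fresh GUID partition table
sudo parted -s /dev/sdc mkpart ESP fat32 1MiB 101MiB   # room for a copy of the old ESP
sudo parted -s /dev/sdc set 1 boot on                  # on GPT, the boot flag marks the EFI System Partition
sudo dd if=/dev/sdb1 of=/dev/sdc1 bs=1M                # step 2: raw copy of the old ESP

GParted does the equivalent of this, plus the file system copies and resizes, from its GUI.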

Phew! This long sequence, gathering pieces from different sources, reminds me of infusion crafting in the Thaumcraft mod for Minecraft, where essentia and items are combined on an altar to craft powerful magical objects.

I hoped that sequence would work, but it failed at step 5. For no obvious reason, GParted wasn't able to copy my Ubuntu home partition to the end of the new SSD! I had to leave an 8 MB gap and then resize the partition to fill it. I then performed, one by one, the other operations. That was quite a tedious job, because the mouse pointer was too small and impossible to enlarge without a system hack (an Ubuntu bug since 11.10! They chose to remove the option to resize the mouse pointer rather than fix the issue.), and sometimes clicking was opening a menu and closing it right away rather than leaving it open.

The following image gives the final layout. Isn't that great? I'm not sure at all this is simpler with one drive than with two, after all…

gparted

After this transfer process, I tried to recreate the entries in my UEFI's NVRAM, using efibootmgr, for Windows and Ubuntu. I then unplugged the SATA cables of my two 120 GB SSDs from my motherboard and rebooted the PC. I won't state the exact commands I used here, because that just failed: the system wasn't booting at all.

Fixing Ubuntu

Back to my Ubuntu live USB, after at least five attempts, because my motherboard is apparently defective and misses the F8 key from time to time, forcing me to jump into Setup and change the boot order from there to boot the UEFI USB stick. Boot time with that Asus board is desperately long. Waiting 15 to 20 seconds from power up to boot loader is a shame when you know it takes less than a second on a $300 laptop! But the laptop lacks the storage expandability I need, so I am always stuck on one end or the other.

Then comes the fun part. I am pretty surprised there is no easier way to restore GRUB than the following. I read about Boot-Repair, but it is just missing from the live image, probably yet another PPA to copy/paste and install. Anyway, I ended up getting it to work.

First I found the partition where Ubuntu was installed, /dev/sda5, and mounted it: sudo mkdir /media/ubuntu && sudo mount -t ext4 /dev/sda5 /media/ubuntu. I did the same with my ESP: sudo mkdir /media/efi && sudo mount -t vfat /dev/sda1 /media/efi.

The second step was to establish bind mounts:

sudo mount --rbind /dev /media/ubuntu/dev
sudo mount --rbind /proc /media/ubuntu/proc
sudo mount --rbind /sys /media/ubuntu/sys
sudo mount --rbind /media/efi /media/ubuntu/boot/efi

That makes these directories inside my Ubuntu mount exactly mirror the corresponding top-level directories of the live system.

Then I had to chroot into my Ubuntu, using

sudo chroot /media/ubuntu

After all this, the system was behaving much as if I had started a shell on my installed Ubuntu setup. From there, I tried

sudo update-grub

That just updated GRUB's menu entries, not the EFI side, so it didn't fix the boot.

Then I tried

sudo grub-install

If I remember correctly, no arguments were necessary, and that fixed my EFI GRUB and added the Ubuntu entry back to the NVRAM. This worked only once /boot/efi was correctly referring to my ESP. Note however that for this to work fully, the live Ubuntu USB had to be booted in UEFI mode, not in the default MBR mode.
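
To summarize, the whole repair sequence looks like this, assuming the same device names as above (/dev/sda5 for the Ubuntu root, /dev/sda1 for the ESP); your partition numbers will likely differ:

# From a live Ubuntu session booted in UEFI mode
sudo mkdir /media/ubuntu && sudo mount -t ext4 /dev/sda5 /media/ubuntu
sudo mkdir /media/efi && sudo mount -t vfat /dev/sda1 /media/efi
sudo mount --rbind /dev /media/ubuntu/dev
sudo mount --rbind /proc /media/ubuntu/proc
sudo mount --rbind /sys /media/ubuntu/sys
sudo mount --rbind /media/efi /media/ubuntu/boot/efi
sudo chroot /media/ubuntu
# Inside the chroot, you are already root:
grub-install       # reinstalls the EFI GRUB and recreates the NVRAM entry
update-grub        # regenerates the menu entries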

A reboot later, I was starting my Ubuntu setup, fully intact and working! Half of the transfer done! Not quite…

Windows was failing to boot and Ubuntu’s update-grub wasn’t detecting Windows anymore. Quite bad.

Windows: desperately dead

Windows, on the other hand, wasn't booting at all. It was showing a blue screen suggesting I use the repair tools from the Windows DVD. Last time I did this, the tools ran for at least one minute and bailed out, so I had to do a complete refresh which ended up wiping everything and leaving only applications from the Windows Store. If I have to choose between such a messed-up repair and a clean install, I would go for the second option.

Before entering this reinstall nightmare once again, I tried to recover the reserved partition. For this, I plugged my Windows 120 GB SSD back in and booted from my live USB stick, to make sure Windows would not kick in and see two copies of itself (one on the old SSD, one on the new). If Windows sees two copies of itself, it changes the disk ID of one copy. If the new drive is the one changed, everything is messed up and Windows cannot boot anymore until a refresh is done (and then everything is messed up again!). Back in my live USB session, I used dd to transfer the bytes of the old reserved partition to the new one. I also made sure the new /dev/sda2 reserved partition was marked as such in GParted, by modifying the flags. That changed nothing.
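
The raw copy itself is a one-liner. In my case, the target was the 128 MB hole I had left on the new drive, /dev/sda2; the source device name below is only an example, so double-check both with lsblk before running anything like this:

# Copy the old 128 MB reserved partition byte for byte (destructive for the target!)
sudo dd if=/dev/sdb1 of=/dev/sda2 bs=1M
sync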

The post How to repair the EFI Bootloader in Windows 8 literally saved me hours of work! This gives a procedure that allows to fix the boot loader. The main idea is to log into console from Windows DVD and run bootrec /fixboot command from directory EFI\Microsoft\Boot\ of the ESP, followed by bcdboot  with a couple of arguments, again from the ESP. Luckily, I had my ultrabook, which was quite handy to check the page while I was running the commands on my primary PC.
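
From memory, and heavily paraphrasing that guide, the commands at the recovery console look roughly like the following. Inside diskpart, list the volumes, select the FAT32 EFI system partition (replace <n> by its number on your system), give it a letter (B: is an arbitrary choice), then exit and run bootrec and bcdboot from the ESP. The exact bcdboot arguments may differ on your setup, so treat this as a sketch, not gospel:

diskpart
list vol
sel vol <n>
assign letter=B:
exit
cd /d B:\EFI\Microsoft\Boot\
bootrec /fixboot
bcdboot C:\Windows /l en-us /s B: /f ALL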

That solved the issue and allowed me to boot into Windows 8.1! PHEW! Quite a nice step forward.

GRUB not detecting Windows

Now that my machine was able to boot into both Windows and Linux, one could wonder what was missing. Well, I had no easy way to choose which operating system to boot at startup. Originally, GRUB was offering me an option to boot into Windows or Ubuntu. After the transfer, it was only seeing Ubuntu.

I found procedures to manually add an entry for Windows, but that involved finding and copy/pasting drive UUIDs and probably redoing the change on each kernel update. I didn't want that. Another possibility was to install an alternative EFI boot loader like rEFInd, but these have a tendency to display many unwanted icons that do nothing. I got enough trouble with this while fiddling with triple boot (Windows, Linux, Mac OS X).

There seemed to be absolutely no way out: people were either adding Windows manually or it was working out of the box for them. I had to spend more than 45 minutes inspecting the os-prober script and walking through it! By looking at the script and its logs in /var/log/syslog, I managed to find out it was skipping my ESP because the partition was not flagged as boot! I fixed that from GParted, reran sudo update-grub and tada! GRUB was seeing Windows!
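
For reference, the same fix can be done from the command line; this sketch assumes the ESP is the first partition of /dev/sda, as it is on my system:

sudo parted /dev/sda set 1 boot on   # flag the ESP as bootable so os-prober stops skipping it
sudo update-grub                     # re-run the probe; the Windows entry should now show up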

This is NOT the end!

Then I had to proceed with the hardware installation of the new drive. Since I was too impatient to get an SSD, I ended up with an ill-designed system. If I had waited another year before purchasing my Core i7 PC, I would have gotten a superb case with support for SSD drives. Instead I have a CoolerMaster kind of case with only standard 3.5″ drive bays and need to fiddle with SSD brackets. Screwing the SSD into such a bracket is a painful process of trial and error, and the assembly doesn't fit well with the screwless mechanism of the case. It somewhat holds in place, but it's not a smooth installation like a regular 3.5″ drive.

Some more fiddling later, my new SSD was plugged back into my PSU and motherboard, and I got rid of the extra two SATA cables. I stored them away; they will be useful sooner rather than later, because my two 120 GB SSDs won't remain unused.

I plan to put one of them into my HTPC, which will be another adventure of its own. My HTPC has only four SATA ports, all used up, so I will have to get rid of one hard drive.

Cascade of problems

It is unbelievable how things can go bad starting from a small number of problems. This afternoon, I was overwhelmed by several hurdles, but there were only two main root causes: keyboard instability and a network bandwidth issue.

Everything started with the delivery of my new AZIO keyboard and Razer Taipan mouse. Well, these are low-risk plug-in devices that won't disturb my work too much, so let's plug them in and see how that goes. The mouse worked fine. It seems sturdy and the scroll wheel works well, far better than the Microsoft Comfort mouse I used a while ago. The pointer moves smoothly and there are several extra buttons that could be useful. Sensitivity can be adjusted with the touch of a button, so it can be decreased for office work, for the pointer to move at a reasonable speed, and increased in games like Fract where I had to move the mouse five meters away to manipulate some controls!

The keyboard, on the other hand, didn't work too well. The keys are large and the LED backlight is very nice, but the keyboard has a tendency to skip keys when I type. It skips randomly, especially the e and the s. This is a real pain when trying to do anything other than looking at emails or analyzing code or data. I tried to give it some time, but the problem persisted, up to the point where I got fed up and put back my old keyboard.

Some time after I put back the old keyboard, my NX connection to my company's remote server dropped. I had to reconnect and then got the DPI bug again: the remote desktop was running in a low resolution, as if DPI scaling wasn't disabled for the NX client anymore. I restarted the client and checked the DPI setting; everything seemed fine. I had to reboot the whole system to get my resolution back to 1680×1050 in the remote desktop.

That worked for some time, then things became laggy. Ok, we need a plan B: a VirtualBox guest running Linux and accessing the files remotely using sshfs. I already had a VM on my home PC, so I wanted to copy it to my company's ultrabook as a first step. That was intended to run in the background and not disturb anything, but the file transfer became unstable, started to slow down and then stopped completely. I had to initiate the file transfer from my home PC: it wouldn't work the other way round, again because of Windows. File transfer from Ubuntu failed, because the VirtualBox image was on my Windows partition and Ubuntu refused to mount it: Windows 8 no longer unmounts partitions cleanly (its fast startup leaves NTFS volumes in a hibernated state), so NTFS-3G will eventually have to adapt and implement a very patchy and ugly workaround for this!

That forced me to switch back and forth between the two PCs and I was having trouble finding the buttons to switch the KVM and the HDMI switch. I don’t have a large enough desk to put two displays, two keyboards and mice so I am stuck with that stupid KVM and HDMI switch.

Things came to a total dead end when the network connection of the ultrabook stopped completely. Windows was unable to interact with my router and thus connect to the Internet. All I could do was turn wi-fi back on. I had turned it off this morning, because Windows was stubbornly trying to use wi-fi instead of wired Ethernet! It took a while for wi-fi to come up, it didn't connect automatically to my router, it took a while to connect, and the connection was limited.

If I remember correctly, I could connect back to my NX server, but everything hung and I had to terminate the NX client. Nothing would work: no ALT-F4, no right-click + Close program; I had to use the task manager. Then any attempt to connect back to the server returned me to the hung NX session. I would have had to dig up a long and complicated command on the company's wiki to reset the X server. No way! Tired of all this, I rebooted the whole server instead.

I ended up copying my files locally so as not to use the remote server at all, and transferring the VirtualBox image using an external hard drive. That counter-productive end of afternoon totally drained me and I was quite exasperated and overwhelmed after that. I have been fighting for weeks against the NX client and the grid infrastructure I was connecting to, with nothing other than patchy workarounds that sometimes apply, sometimes fail. I felt I had reached a dead end at this point. I needed a solution.

But that was working fine during the morning. Why, all of a sudden, did things go south? It all started from network issues: the ultrabook preferred wi-fi over wired Ethernet, the network connection to my NX server dropped all of a sudden, the file transfer was unstable, Windows couldn't access the network anymore, etc. So let's act on the network first, before fixing the NX client again! Maybe that USB network interface is flaky and I will have to try a new one.

But first, let's remove it from that USB hub and plug it directly into the ultrabook. That hub worked for a while, when I was using the network just for sparse file transfers, but higher bandwidth is needed for a full remote desktop connection. It is still good for the keyboard and mouse, and necessary since that ultrabook has just two USB ports!

So I tried this, and it seemed to help! Connection to wired Ethernet happened almost instantly, Windows didn't fall back to wi-fi as it did this morning, and I worked for half an hour, remotely connected, without any issue. As a final test, I transferred an Ubuntu ISO over the network and that worked without a flaw.

That hub can only transfer a theoretical 480 Mbit/s. It is meant to carry traffic from small devices like keyboards, mice and occasional data transfers from a USB stick or external hard drive, but how about something requesting 100 Mbit/s constantly? That may well overload the poor little hub.

If that still acts up, I will give a shot to the Gigabit Ethernet adapter I have at the office. If that one fails as well, I will probably have to give up on this ultrabook and start carrying the heavier laptop from office to home.

Why do I suddenly need to use source to call ANY Bash script?

This week, I ran into a somewhat weird and annoying Bash issue that took a couple of minutes to solve. It was a very simple problem, but it caused quite a few headaches. A colleague wrote a script that was setting up some configuration variables. The script, named config.sh, was intended to be called using source config.sh. The source command tells Bash to run the script in the current interpreter rather than spawning a new process and running it there. Any variable set in such a sourced script ends up in the current Bash process, so it can be used after the script finishes.

The script contained variable assignments of the form

TOOLS_DIR=<some path>
TSVTOOLS_DIR=<some path>
...

I tried to refer to these variables in one of my scripts, by using $TOOLS_DIR, for example. However, each time I called my script, Bash acted the same way as if the variable wasn't defined! Yet the variable was accessible from the Bash process that had sourced config.sh. Why?

There were two workarounds:

  1. Call my script using source.
  2. Modify my script to call source config.sh itself. I didn't like this, because it added an extra step to all my scripts, and running config.sh took several seconds.

My colleague and I looked at this to no avail. Then I found the culprit.

The config.sh script was declaring variables local to the Bash process. For the variables to be transferred to forked processes like invoked scripts, they needed to be exported as environment variables! So the solution was as simple as modifying config.sh to prefix every variable declaration with export! For example,

export TOOLS_DIR=<some path>
export TSVTOOLS_DIR=<some path>
...

After this very simple change, I was able to use the variables in my script, without sourcing my own scripts or invoking config.sh from them.
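
A minimal demonstration of the difference, with hypothetical names:

# config.sh
PLAIN_VAR=hello            # shell variable: stays in the sourcing Bash process only
export EXPORTED_VAR=world  # environment variable: inherited by child processes

# child.sh
#!/bin/bash
echo "PLAIN_VAR=${PLAIN_VAR:-unset} EXPORTED_VAR=${EXPORTED_VAR:-unset}"

# In an interactive shell:
#   source config.sh
#   ./child.sh
# prints: PLAIN_VAR=unset EXPORTED_VAR=world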

Bumpy Android upgrade

I recently joined the club of unfortunate Galaxy Nexus owners whose devices went down the path of death. Many people have told me bad things about these Nexuses and about other Android smartphones in general. My brother's device is slow and, for some obscure reason, mixed up its sounds: for example, the device emits a camera shutter sound when locked and unlocked! My sister's phone is slow as hell, putting her through torture each time she opens an application. One of my friends' phones has no working mic anymore; he has to leave headphones plugged in all the time to answer calls. Another colleague at my workplace had issues with the USB port: the device was not charging anymore.

My problem is sporadic reboots, several times a day, and sometimes boot loops. I thought my phone was dying, but I found something that may give it a second life. I will have to see in the long run, but it was nevertheless an interesting adventure.

The symptoms of my Galaxy Nexus

This started a few months ago, on Thursday, March 27, 2014. The phone entered a boot loop and could not do anything other than reboot like crazy. A colleague and friend of mine managed to remove some applications in a hurry, before the next reboot, and that seemed to stabilize the monkey for a few minutes, but it just increased the length of the boot cycles. The device was rebooting like an old dying 486 computer overloaded with Windows 98! As a last resort, I tried a factory reset, which helped… until last week. Yes, the device started to reboot again!

I woke up on Thursday, July 24, 2014, and noticed that my phone was stuck on the Google logo. Nothing would get it unstuck, except removing the battery and putting it back. I did it, rebooted the device and it got stuck again. Argghhhh!!! I removed the battery once more, left the device and battery on my desk and searched for a solution, to no avail, except that in some cases a bug in Android 4.2 was causing phones to boot loop, and they would get unstuck after a few attempts. I put the battery back and tried again: this worked. Maybe removing the battery for a few minutes discharged some capacitors and reset the hardware to a cleaner state, maybe I was lucky, maybe both. But the device remained unstable and was prone to reboot, sometimes twice in an hour. The Sunday after, I got fed up and did a factory reset, then didn't install any applications until I could find something longer term to fix the issue. The device then worked without a single reboot, so a hardware defect is less likely, although still possible. I need to keep in mind I dropped the phone a couple of times, including once on my outdoor concrete balcony.

That means at least one installed application is interfering with the OS and causing it to reboot! This is unacceptable in a Linux environment, where each process should be well isolated from the others and from the critical system components. A process should not have the ability to reboot the device unless it runs as root, but my device was not rooted, so no installed application could run a root process! That led me to the conclusion that something in the OS itself was flawed, opening a hole that applications can exploit, intentionally or not, to harm the device!

An average user cannot do much about that, other than refraining from installing any applications, factory resetting the phone every now and then, or contacting his phone service provider and getting whatever cheap replacement the provider will be kind enough to grant him until the end of his agreement. I didn't want to hit the same wall as my brother and get something with a smaller display and bloated with branded applications. If I really have to get a new phone, it will be a Nexus free of crapware or, if I cannot get a Nexus, I am more and more ready to take a deep breath, give up on whatever I will need to give up and go for an iPhone.

First upgrade attempt: not so good

However, I had the power and will to do something more about this! This was a bit unfortunate for my spare time, my stress level and maybe my device and its warranty, but I felt I had to try it. If the OS has a flaw, why can't I upgrade it to get rid of the flaw and get past this issue? Well, not all Galaxy Nexus phones are equal. US models have the Yakju firmware from Google, but Canadian models have a special firmware from Samsung instead! The Google firmware is the one that gets updated more often, up to Android 4.3. Samsung's philosophy differs from Google's: if you want an upgraded Android version, replace your phone.

That led me to the next logical step: can I flash the Yakju firmware onto my Canadian Galaxy Nexus phone? Any phone provider, any reseller, any technical support guy will tell you no, but searches on Google will tell you YES! For example, How to: Flash your Galaxy Nexus Takju or Yakju To Android 4.3 is the guide I started from.

The first thing I had to do was install Google's Android SDK on my Windows 8.1 PC. Yep, you need the full-blown SDK! The simplest solution is to get the Eclipse+SDK bundle, so at least you don't have to mess around with the SDK Manager to get the full thing. Then I had to set up my PATH environment variable to get the tools and platform-tools subdirectories onto my path, so adb and fastboot would be accessible from the command line. I also had to download the Yakju firmware from Factory images for Nexus devices.

The second step is easy to forget when recalling the exact sequence I performed to reach my goal. It is as simple as plugging the phone into a USB port of a computer. That requires a USB cable and, of course, a free USB port. Any port will do, provided it works. If in doubt, test it with a simple USB key.

The next step was to put my device in USB debugging mode. I searched and searched for the developer options, to no avail! Googling around, I found Android 4.2 Developer Mode. Bottom line, I had to go into the phone's settings, tap on About Phone, then tap seven times on the Build Number! This is just shockingly crazy: how was I supposed to find this out? Fortunately, after I unlocked the developer options, I was able to turn on USB debugging. Without USB debugging, ADB cannot communicate with the device.

This was necessary for a simple and nevertheless crucial step: running adb reboot bootloader. This reboots the device into the boot loader, a kind of minimal OS from which it is possible to flash stuff onto the device's memory. I read about procedures involving pressing the power and volume up/down buttons, but that never worked for me. This is probably like booting an iPhone into DFU mode, required to jailbreak or to recover from very nasty failures: you have to watch tens of videos, try it fifty times and get it by luck once in a while. These kinds of patience games get on my nerves and make me mad enough to throw the phone away. Fortunately, adb reboot bootloader while the device was plugged into my computer and in USB debugging mode did the trick.

Once in the boot loader, you can use Fastboot to interact with the minimal OS. Like ADB, Fastboot comes with the Android SDK. However, Fastboot wasn't working for me: I was stuck at the “Waiting for device” prompt. I started Googling again and found awful things about a driver to download from obscure places and install, about the driver differing for Samsung devices with respect to other Nexus phones, upsetting stuff about the driver not working on Windows 8 without a complicated tweak to disable driver signature validation, about rooting toolkits that could simplify my life if I installed yet another hundred megabytes of applications onto my PC, etc. Flooded with all of this, I gave up and just let my phone run as is. Getting out of the boot loader is easy: just hit the power button and the phone will reboot as normal.

The Penguin saved the day!

However, one week later, an idea popped into my mind, and it was begging to be tested! Linux may have the needed driver built in, so it would be worth trying from my Ubuntu box. That's what I did on Friday evening, August 1, 2014, and it was a success after a couple of hurdles.

First, I had to install the Android SDK there as well. Once adb and fastboot were accessible, I switched my phone into the boot loader once again, using adb reboot bootloader. Then I tried fastboot devices only to get, again, this stupid “Waiting for device” message. I don't know exactly how I got to that point, but that command finally gave me a message about permission being denied. Ok, now I know what to do! sudo fastboot devices. Well no, fastboot cannot be found! I had to use the absolute path of fastboot for it to work, but I finally got a device ID. Yeah, the data path between my Ubuntu box and my phone was established!
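
The two hurdles are related, by the way: fastboot needs raw USB access (hence sudo, at least without custom udev rules), but sudo does not use my user's PATH, hence the absolute path. The SDK path below is the one on my machine; adjust to yours:

# adb reboot bootloader worked for me without sudo, with USB debugging enabled
~/android-sdk-linux/platform-tools/adb reboot bootloader
# fastboot needed root, and sudo resets PATH, so spell out the full path
sudo ~/android-sdk-linux/platform-tools/fastboot devices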

Next incantation: sudo fastboot flash bootloader bootloader-maguro-primemd04.img. That gave me a failure, AGAIN! Ok, that's great, my phone will definitely not accept commands from Fastboot! Maybe it is factory-locked to deny them? But before thinking too much, I should have read the error message more carefully and completely. It was saying the following:

FAILED (remote: Bootloader Locked - Use "fastboot oem unlock" to Unlock)

It even gave the incantation needed to go one step further. I thus ran the command, prefixed with sudo. That popped up a message on the phone's screen asking me for confirmation. I moved the cursor to Yes with the volume up/down buttons, pressed the power button and voilà, boot loader unlocked!

Why did I have to unlock the boot loader? Probably because I was switching to a different kind of firmware. If I had a US phone, I would probably have been able to install Yakju without unlocking the boot loader. The unlock operation is not without consequences: it wipes out all data on the device! This was a minor issue at this stage, since I had refrained from installing anything or doing extensive configuration until I found a way to improve the stability of my device. I thus wiped without asking myself any questions about important data to back up.

Then, with the feeling of a wizard gathering all the components to cast a spell, I entered the following command and looked at the output.

eric@Drake:/media/data/yakju$ sudo ~/android-sdk-linux/platform-tools/fastboot flash bootloader bootloader-maguro-primemd04.img 
sending 'bootloader' (2308 KB)...
OKAY [  0.258s]
writing 'bootloader'...
OKAY [  0.277s]
finished. total time: 0.535s

Victory! Not really… That was just the first step! The next step was to reboot the device, using sudo fastboot reboot-bootloader. My phone screen went black for a couple of seconds, enough to fear a heart attack, then the boot loader came back again! Phew!

Ok, now the radio: sudo fastboot flash radio radio-maguro-i9250xxlj1.img. That went well, similarly to the boot loader. Then I had to reboot again: sudo fastboot reboot-bootloader.

Now the main thing: sudo fastboot -w update image-yakju-jwr66y.zip. That took almost two minutes, then my device rebooted automatically, this time in the new firmware. Done!
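
For the record, here is the whole flashing sequence gathered in one place. The image file names are the ones from the Yakju 4.3 factory archive I downloaded and will differ for other devices or Android versions (and as above, I actually had to spell out the absolute path to fastboot each time):

sudo fastboot oem unlock                                         # confirm on the phone; wipes all data!
sudo fastboot flash bootloader bootloader-maguro-primemd04.img
sudo fastboot reboot-bootloader
sudo fastboot flash radio radio-maguro-i9250xxlj1.img
sudo fastboot reboot-bootloader
sudo fastboot -w update image-yakju-jwr66y.zip                   # -w also wipes user data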

After these manipulations, I was able to set up my phone normally. Once in the Android main screen, I accessed the phone settings and confirmed I was now on Android 4.3! At least I reached my goal.

What can I do next?

There are a couple of things I will try if the device starts rebooting again. Here they are.

  1. Install a custom ROM providing Android 4.4. Besides the upgrade to the latest Android, this should give me extended battery life, since 4.4 greatly improved in this area, as I experienced with my tablet, which recently benefited from a custom 4.4 ROM. I will also be able to return to the baseline Yakju 4.3 if needed. Unfortunately, I had no way to back up my 4.2 firmware, so I cannot go back to it.
  2. Shop for a new phone. I will try to get a Nexus 5 and, if I cannot without switching providers, I will shop for an iPhone. Maybe I will find a store in Montreal providing unlocked phones including Nexuses, maybe I will have to wait patiently for my next trip to the United States to buy an unlocked Nexus 5 there, maybe I will be able to convince someone from a US office of my company to buy the phone for me and ship it to me (if I ship him a check for the amount of the device, obviously!), maybe I will find something to make me happy on a web site I don't know about yet. We'll see.
  3. If all else fails, I will give up on installing any application and will use the Galaxy Nexus just as a phone and for casual Internet access with the stock browser. After my agreement with Fido ends next November, I will consider other, hopefully better, options.


Groovy + Maven + Eclipse = headache

Java is a general-purpose programming language that has matured over more than ten years. It provides a solid platform on which many third-party libraries (of various quality and complexity, of course) were developed. Maven is one of the several ways large Java projects can be described formally and built automatically. Maven manages dependencies a lot better than the traditional way of bundling everything together in a large archive, and it aims at simplifying and unifying the build process, although advanced configuration quickly drifts into XML nightmares. Then comes Eclipse, an integrated development environment. Although not perfect, very far from it, Eclipse has been a time saver for me, especially when it comes time to search large code bases and refactor code. Eclipse is designed for Java, and it has a plugin, called M2Eclipse, to integrate well with Maven. We can safely state that Java, Maven and Eclipse play well together.

Then comes Groovy, a language built on top of Java. Source code in the Groovy language is compiled into byte-code like Java is, and the byte-code can run on the same virtual machine as regular Java programs, with the exceptions that Groovy programs need a set of runtime classes and that the generated byte-code has more indirections compared to the byte-code produced by a traditional Java compiler. As a Java extension, we would expect Groovy to play well with Maven and Eclipse. Well, in practice, I found that not to be exactly the case.

I experienced what follows with Eclipse Kepler, Groovy 2.2 and Maven 3. Things may have been better with older versions, or may be better with newer ones; that remains to be seen.

Groovy and Eclipse, almost well but…

The first time you try to write a Groovy program in Eclipse, you will notice that there is absolutely no IDE support for that language. You won't be able to use any code assist, and Eclipse will not compile or run Groovy code for you. You will need to install an extension to get Groovy support: the Groovy Eclipse plugin. The plugin works relatively well, but it has a couple of annoying drawbacks.

First, code completion works in random, erratic ways; I sometimes get tired of it and turn it off. For example, I had a variable of type String. I knew it was a String, and the IDE had a way to know it too, because I declared the type of the variable in my code (in Groovy, you can use variables without declaring their type). However, when I was trying to get proposed completions for to, I was getting toUpperCase() but not toLowerCase(). This was completely arbitrary.

When running a Groovy script, the arguments in the launch configuration get prepopulated with a list of standard stuff that you must not delete. If you want to pass your own arguments to your script, you have to append them after what the Groovy extension inserted in the Arguments box, and you need to be careful not to delete the predefined stuff when you replace your custom arguments.

Debugging Groovy code in Eclipse is like playing Russian roulette. Sometimes you can print the contents of a variable, sometimes you cannot; you don't know when it will fail or why. Sometimes you can expand an object and see its fields, sometimes the + icon is not there and you cannot expand it, again for no obvious reason. Execution may or may not step into closures; you don't know, at least I didn't. You can work around this by putting breakpoints in the closures, but when you step out of a closure, you end up in strange places deep inside Groovy's internals. Conditional breakpoints never worked at all, so I had to constantly pollute my code with insane if (some condition) println(“Bla”) statements and be careful to remove all the junk after I was done debugging.

Error messages are sometimes cryptic. If you are unlucky enough, you can even manage to get an Internal error from the Groovy Eclipse compiler! I was getting that in one of my classes and had to disable static type checking for that class to get rid of it.

On Monday, August 4th 2014, things went completely south after I upgraded my build to Groovy 2.3. Everything was working fine with Maven on the command line. Eclipse was compiling the code fine. I set up the project to use Groovy 2.3 and there was no issue. However, when running the project, I was getting the following runtime error.

Conflicting module versions. Module [groovy-all is loaded in version 2.2.1 and you are trying to load version 2.3.6

I looked at my POM file, analyzed the Maven dependencies with both mvn dependency:tree and Eclipse, found no Groovy artifact except the 2.3.6 one, verified my PATH to make sure only Groovy 2.3 was on it, checked the Eclipse preferences many, many times, and restarted Eclipse several times, to no avail. There seems to be something in the Groovy Eclipse plugin hard-coded for Groovy 2.2, even if the compiler is set to 2.3!
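
For anyone wanting to double-check the same thing, the Maven side can be inspected like this; the -Dincludes filter simply trims the output to Groovy artifacts:

# List every org.codehaus.groovy artifact Maven resolves for the project
mvn dependency:tree -Dincludes=org.codehaus.groovy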

Any search on Google yields results about Grails and Spring, as if nobody uses Groovy alone anymore, only with other frameworks. Nobody else seems to be having the issue.

Maven + Groovy = fire hazard!

Maven relies on plugins to perform its tasks, so the ability to build something with Maven depends on the quality of the plugins. There is unfortunately no official, well-known, well-tested and stable plugin to build Groovy stuff using Maven. The page Choosing your build tool gives a good idea of what is currently available.

First I read about GMaven, but I quickly learned it was not maintained anymore, so I didn't try to use it. Then I read that the Groovy Eclipse Compiler was the recommended one. I was a bit reluctant, thinking this was a piece of hackery that would pull in a bunch of dependencies from Eclipse, resulting in a heavyweight solution. But it was in fact well isolated and just the compiler; no need to pull in the whole Eclipse core!

The Groovy Eclipse Compiler worked well for me for a couple of months. However, yesterday, things went south all of a sudden. First, there were compilation errors in my project that would not show up in Eclipse but appeared when compiling with Maven. These were error messages related to static type checking. After fixing these, compilation went well, but all of a sudden, at runtime, I was getting a ClassNotFound error about ShortTypeHandling. I read that this class was introduced by Groovy 2.3, while my project was using Groovy 2.2. Digging further, it seemed that the Groovy Eclipse Compiler was pulling in Groovy 2.3 and compiling the code against it, while the code was executed with Groovy 2.2. This should in principle not cause any problem, but it seems that in Groovy, byte-code is not fully compatible between versions!

I tried updating my dependency to the Groovy Eclipse Compiler in the hope that it would fix the issue. However, that traded my ShortTypeHandling exception for stack overflows. It happened that the clone() method of one of my classes was calling super.clone(), which is perfectly normal. But Groovy was doing something nasty that caused super.clone() to recursively call the clone() of my subclass! This resulted in an infinite loop causing the stack overflow.

I found this issue to be even more intricate after I tried to compile my code on JDK8 and found out that it worked correctly. In other words, the JDK was affecting how the Groovy Eclipse Compiler was building the byte-code! On JDK7, something would botch the byte-code, causing the stack overflow errors, while on JDK8, everything would go fine!

I then tried updating the compiler once more, to the latest and greatest. Things compiled, but I was back at square one with the ShortTypeHandling exception! So no matter what I was trying, Maven was unable to build the project anymore.

I was about to give up on Maven and use a batch file to call Groovy directly, but that would have been a lot of fiddling with the class path. I was not happy at all about this solution.

Then I found out about the GMavenPlus plugin. I tried it and it worked like a charm! The plugin makes use of the Groovy artifact defined in the project’s dependencies rather than hard-coding a compiler for its own version of Groovy. It uses the official Groovy compiler API rather than its own, so things get compiled the same way as when using the Groovy Ant task or the standalone groovyc compiler. GMavenPlus saved my day yesterday, freeing me from a lot of hassle.

Is it worth it?

I'm not sure at all. I ran into several problems with Groovy that would deserve a separate post. The integration difficulties with Maven and Eclipse make me believe it is better just to use Java directly. JDK8 introduced lambda expressions that fulfill part of what Groovy is trying to implement in its own special way. For projects that really need a true scripting language, there are already several of them, like Python, which is built from the ground up for scripting.

Issues with NoMachine’s NX Client

I recently tried using NoMachine's NX Client to connect to a virtual machine at work running NX Server, and ran into an incredible number of problems. In the end, I gave up on NX and fell back to VNC, but this exploration is nevertheless interesting.

The virtual machine at my workplace is running NX Server 3.4.0-12, probably the free edition. I have no control over this. However, I can control which NX client I run.

I got issues with the screen resolution on Windows 8, erratic keyboard responses and ended up switching to VNC after I couldn’t find any solution under Windows 8.

Awful screen resolution on Windows 8

When I was connecting to the NX server using NX Client 3 or 4 from my main corporate laptop running Windows 7, I had no display problem. The desktop showed up in a nice 1674×954 resolution, which is near the native 1680×1050 I have with the 22″ LCD out there. I can bump the resolution closer to native by switching my NX client to full screen.

However, I have a secondary ultrabook running Windows 8.1. Because it is lightweight, I like to use it when working from home. However, running the NX client on this machine causes a major issue: the display resolution goes down to 1114×634! I searched for a solution, or at least a workaround, to no avail. Nobody seems to be having this issue, and there is a very good reason why.

Because I am visually impaired, I need larger fonts and a larger mouse pointer. There is a very neat way to get this under Windows since version 7: DPI scaling. It can be adjusted by right-clicking on the desktop, accessing Personalize and clicking on Display. I usually bump the scaling up to 150%, which makes fonts large enough for most cases and also enlarges the mouse pointer. This doesn't completely remove the need for application-specific tweaking, but it at least helps greatly. Magnification is done without lowering the screen resolution, by applying a scaling factor before rendering the graphical elements, as opposed to scaling bitmaps after the fact as the Windows built-in and third-party zooming applications do.

Under Windows 7, this functionality doesn’t affect NX client at all. It gets the display resolution and can make use of all pixels. Under Windows 8, it seems to get some form of virtual resolution sensitive to the scaling! Let’s do the math real quick. If we divide 1680 by 1.5, we get 1120. Dividing 1050 by 1.5 yields 700. Isn’t that near 1114×634?

This is the third application misbehaving with DPI scaling for me. The first one was Ableton's Live virtual studio. The second one was Corel's VideoStudio, a video editor. In both cases, I was able to turn off DPI scaling locally for the offending application. This is easy for 32-bit applications: just right-click on the executable, access the properties, then the Compatibility tab, and there is a check box for this. For 64-bit applications, this is trickier and would deserve its own post, since it involves registry hacking. But it is doable, because this is how I worked it out for Ableton's Live. I have a post in French about this, called Le problème des caractères trop petits sous Windows.
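
I won't redo that post here, but for the curious, the 64-bit hack boils down to adding a compatibility layer entry in the registry. If I recall correctly, it looks something like the command below; the executable path is of course a placeholder, and as with any registry change, handle with care:

reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\Program Files\SomeApp\SomeApp.exe" /t REG_SZ /d "HIGHDPIAWARE" /f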

However, this failed miserably for the NX client. I tried applying the compatibility setting to each and every executable found in the installation directory, including NXWin.exe, which is responsible for showing the desktop, NXSSH.exe, which establishes the SSH connection, NXClient.exe, which is the frontend offering the configuration panel, etc. I tried upgrading from NX Client 3 (recommended by my company since NX Server 3 is in use) to NX Client 4, with absolutely no change.

There is only ONE workaround I could find: completely disable, session-wide, the DPI scaling, decreasing it from 150% down to the regular 100%. However, I just cannot work efficiently this way. Although I can bump up font sizes individually for some elements, the mouse pointer remains desperately small, even when I use the magnified pointer. I couldn't find any convincing solution, although I got a couple of proposed ideas that I will list here for the sake of completeness.

  1. Start a Remote Desktop Connection from my ultrabook to my corporate laptop, or even from my home PC (that would free me completely from transporting a laptop between office and home). This is an attractive solution used by many colleagues. I tried it and it unfortunately failed because Virtuawin didn't work at all on the remote desktop. Well, it worked, but with no keyboard shortcuts to switch between desktops! Some colleagues got past this by changing Virtuawin's keys. However, if I start the remote desktop connection while no session is active on the laptop, font sizes and the mouse pointer are small, as if there were no DPI scaling. I couldn't find any solution for this second issue.
  2. Run the NX client from my home computer instead of my company's ultrabook. This machine is a dual-boot system with Ubuntu 14.04 and Windows 8.1. I could run the NX client from Ubuntu (version 4, because NX Client 3 for Linux is not available for free anymore on NoMachine's web site and my company offers the binaries of NX Client 3 only for Windows and Mac), and I managed to set up the VPN connection, so that is doable. However, I heard that the VPN connection is unstable under Linux. Yes, I could work around that by establishing an SSH tunnel, letting my Windows ultrabook manage the VPN and Linux manage NX. But connecting to Microsoft's Lync will cause difficulty, especially for voice chat. I will also have no option for Outlook other than using the web mail, which lacks keyboard shortcuts, or switching back and forth like crazy between the ultrabook and the Linux box! Even with an ideal dual monitor setup, with one screen for the ultrabook and a second screen for the home PC, how would I copy/paste contents between the two? I don't know yet.
  3. How about running the NX client on the Windows side of my personal PC? Yes, it would be possible. I could even go as far and crazy as purchasing a license of Microsoft Office to get Outlook set up, and Lync would work like a charm, since some colleagues got it set up, but my PC runs Windows 8.1, so I would be back at square one!
  4. Downgrade to Windows 7 or purchase a separate cheap PC with Windows 7, and install a setup on this. Well, that’s possible, but I am still left with no good solution for Outlook, unless I purchase a license that will be locked to that cheap PC. If I have to purchase Office, I would like to make use of it on my main computer, to maximize my investment!
  5. Wipe the ultrabook and install Linux on it. Well, the machine is intended to be used for testing a Windows application developed by my company, so I cannot just trash Windows and toss Linux in blindly. I would also be in a similar situation to my personal PC running Linux: partial Lync with no or brittle voice chat, and no convenient access to my Outlook mail.
  6. Install VirtualBox on the ultrabook and set up Linux on a virtual machine. I tried it, it almost worked on my personal computer, but it failed when transferring the VirtualBox setup to my ultrabook. Symantec Endpoint Protection, used by my company, screws up VirtualBox’s executable, making it complain it cannot run. I tried to add exceptions covering every executable in VirtualBox’s directory to no avail. It seems that this requires changes to the profile policy using a management application I don’t have. Since VirtualBox is also causing random erratic behaviors, like Alt Gr sometimes not working, Alt-tab sometimes switching out of the VM, etc., I gave up on this. If I had to continue exploring this path, I would try exporting the XML profile of SEP, altering it and reimporting, similar to what I did to get rid of the password protection that was preventing me from uninstalling and reinstalling after upgrade to Windows 8.1 broke it.
  7. Use something other than NX. I tried to find an open source NX client that would have fewer problems: no go. I really had to give up on NX completely. There are two main alternatives to NX: VNC, or plain X using a server such as Cygwin/X or Xming. Because of other problems I will cover below, I finally fell back to VNC, which is working better, although a bit sluggish.

Update, August 5, 2014: yesterday, I tried again and my company's Windows 8 ultrabook got connected using NX Client 3 with almost native screen resolution. Putting the client in full screen bumped the resolution to the native 1680×1050! Probably some NX processes were kept running in the background and a reboot was necessary for the disabling of DPI scaling to take effect. On August 5, during the evening, I tested the NX client on my personal Windows 8 computer. I got the previous low resolution. I then disabled DPI scaling for the following executables in the C:\Program Files (x86)\NX Client for Windows directory and its bin subdirectory: nxclient.exe, nxauth.exe, nxesd.exe, nxfind.exe, nxkill.exe, nxservice.exe, nxssh.exe, and NXWin.exe. I'm not sure all of these need to be changed, but I did them all. Then, a reboot later, I was getting the native resolution on my personal computer as well. So the fix is reproducible!

Two versions, two protocols

My company is using NX 3 while the newest version is 4. In theory, this shouldn't be a problem, as it should be possible to download previous versions or, even better, use the newest client with the previous server. In practice, this is not exactly the case. First, only NX 4 can be obtained for free from NoMachine; getting previous versions requires being a registered customer. My company provides binaries for NX 3, but only for Windows and Mac OS X. This caused me additional difficulties when testing things in Ubuntu.

By default, NX client 4 won't work with NX server 3, but it is perfectly possible to configure it to work, as follows.

When starting the client for the first time, a screen similar to the one displayed below shows up and needs to be dismissed by clicking on the Continue button.

2014-07-19-170744_832x555_scrot

That leads to a second window similar to what follows. That one needs to be dismissed as well using the Continue button.

2014-07-19-170744_832x555_scrot

Then you end up at the main screen where connections can be added, removed or edited. This looks like the image below.

2014-07-19-171049_832x555_scrot

Click on the New button to create a new connection. This pops up a window similar to below.

2014-07-19-171100_832x555_scrot

The first important setting is the protocol; it needs to be SSH, not NX. This is probably because NX 3 works on top of SSH while NX 4 has its own TCP/IP protocol. Anyway, selecting SSH is necessary for the connection to work. After that, click Continue to go to the next step: a screen allowing you to enter the host name of the NX server.

2014-07-19-171127_832x555_scrot

On the next screen (after clicking Continue), you need to perform an additional setting: set the connection type to “Use the NoMachine login”. The system login seems to work only with NX 4.

2014-07-19-171140_832x555_scrot

Leave the next two screens as they are.

2014-07-19-171155_832x555_scrot

2014-07-19-171201_832x555_scrot

Then you have the opportunity to name the connection. This is just for convenience; this doesn’t affect the ability to connect at all.

2014-07-19-171211_832x555_scrot

After all these steps, you end up at the central menu and can double-click on the newly created connection. You might get a screen similar to the following one. Click Yes to accept the authority of the NX server.

2014-07-19-172312_910x557_scrot

After this you have to enter your regular user name and password.

2014-07-19-172335_910x557_scrot

Then you have to double-click on the virtual desktop you want to create.

2014-07-19-172341_910x557_scrot

Three more screens to dismiss…

2014-07-19-172402_910x557_scrot

2014-07-19-172413_910x557_scrot

2014-07-19-172416_910x557_scrot

At this point, I ran into some trouble connecting when I tried to update from NX Client 3 to 4 under Windows. The Ubuntu setup worked very well, on the other hand. Looking at the logs, I found errors about the cache and had to remove files from a hidden directory I couldn't access from Explorer without copy/pasting the name from the log file! The directory wouldn't show up, even though Explorer is configured to show hidden files for me, and it would not Tab-complete under GNU Emacs.

Almost there, you would say, a bit annoyed. Well, no, that's not the end of the story, as you can see in the image below!

2014-07-19-172459_1680x1050_scrot

How am I supposed to work with such a small screen? Maybe some sighted people are able to, by making fonts tiny, but that's not acceptable for me. Moreover, ALT-Tab doesn't work: it switches out of the NX window rather than through the windows inside the NX desktop.

Fortunately, there are ways to configure things better. First, hit CTRL-ALT-0 (zero, not o). That leads to a new menu with options not available through the connection preferences.

2014-07-19-172505_1680x1050_scrot

First click on Input. That leads to a window from which you can check Grab the keyboard input. This makes ALT-Tab work inside NX, unfortunately with significant drawbacks covered in the next section. Dismiss it by clicking Done.

[Screenshot: 2014-07-19-172514_1680x1050_scrot]

Then click the Display button. Select Resize remote screen, dismiss with Done.

[Screenshot: 2014-07-19-172528_1680x1050_scrot]

Dismiss the main window with Done and tada!

[Screenshot: 2014-07-19-172548_1680x1050_scrot]

Yes, a fully functional GNOME desktop running inside the NX client. Phew! What a ride!

And that’s not the end…

What’s the point of having a keyboard if it doesn’t work?

Well, I asked myself this question countless times while working with NX. It is so erratic, so frustrating, that it was getting on my nerves at times. I strongly rely on keyboard shortcuts for my daily work. Without them, I am completely inefficient, spending a significant amount of time and energy hunting for my mouse pointer. Until touch screens spread and become available in 22″ and bigger formats, I will be stuck dealing with the mouse and working around it with keyboard shortcuts as much as I can.

Here are the issues I ran into related to the keyboard, under NX client, both 3 and 4 versions.

  1. CTRL-ALT-<arrow keys> doesn’t switch desktops when NX runs under Windows with Virtuawin, a tool that adds the multiple desktops Windows has lacked for years. When I pressed the keys, Virtuawin took over and switched me out of NX, so I could not use multiple desktops inside NX. To get them back, I had to remap the keys in GNOME inside NX; I could just as well have remapped Virtuawin’s keys instead.
  2. After approximately one week of usage, the NX client started to go crazy with the Alt Gr (right Alt) key. I rely on this key to produce many special characters like @, [, ], {, }, etc., because I am using a Canadian French keyboard. Sometimes, the Alt Gr combination simply did nothing, so I had to type it many times (sometimes more than ten) until the character popped up. Sometimes, the session got stuck with no keyboard functionality for at least thirty seconds (the mouse kept working). With NX 4, things got worse: no more Alt Gr at all! Running xev, I found that the right Alt key was generating Left Control events instead. Workaround? Well, disable Grab keyboard input! But that makes Alt-Tab non-functional, and Alt-Tab is one of my most critical keyboard shortcuts: without it, I have no efficient way to switch between windows. So I remapped the window-switch key in GNOME to Ctrl-Tab (see the sketch after this list). This was a bit annoying but somewhat worked. The problem got worse on August 5, 2014, maybe because the virtual machine was running a script performing I/O that competed with the bandwidth available to NX. It is thus possible that networking issues drop some events while the client and server communicate with each other.
  3. With NX 4, sometimes the keyboard stops working altogether and the session locks itself. It is then impossible to type any password and very hard to kill the session; I managed to do so by forcibly terminating the NXWin process. Another way is to temporarily turn on Grab keyboard input, type a few characters, then turn it off! This happens at random times, when switching from a Windows application like Outlook or Lync back to the NX client. This issue didn’t happen with NX client 3.
  4. With both client versions 3 and 4, the Shift key sometimes sticks: the system behaves as if I were pressing and holding Shift when I am not. There is no way out except pressing Shift repeatedly until it unblocks; sometimes that requires disconnecting and reconnecting.
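
For reference, here is roughly what such a check and remap look like from a terminal inside the remote session. This is a minimal sketch assuming a GNOME 3 desktop where gsettings is available (on a GNOME 2 desktop such as CentOS 6, the equivalent binding lives in GConf under /apps/metacity/global_keybindings), not the exact commands I typed at the time.

    # See which keysym the remote X server actually receives: press the key
    # inside the small xev window and read the reported keysym.
    xev | grep --line-buffered keysym

    # Remap the window-switch binding to Ctrl-Tab so it no longer depends on
    # Alt being delivered correctly (GNOME 3 / gsettings assumed).
    gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Control>Tab']"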

So it seems NX is designed for the simplest scenario: US English keyboard with mouse used to switch tasks.

The only workaround I found is pretty heavyweight: install VirtualBox, set up an Ubuntu virtual machine and run the NX client from there. It seems that Windows is intercepting some keys and not sending them to NX. The culprit could literally be any of a number of applications: Outlook, Lync, Symantec Endpoint Protection, etc. It would be tedious to find it and probably impossible to disable it.

Copy/paste: a random game

Even the simple and common operation of copy/pasting information causes trouble when NX is involved. There are two main types of problems. The first, which I found to happen with both NX 3 and NX 4, is the instability of the clipboard transfer. Sometimes, I copy a block of text to the clipboard and, when I try to paste it into another application, nothing happens. It mainly happened when copying from NX and pasting into a Windows application, say Outlook or Lync. I have reached the point of systematically performing the copy operation twice in a row just to give it a greater chance of succeeding!

Sometimes, one or two seconds after I select an area to copy, it automatically deselects. If I try to select the area a second time, it deselects again. When that happens, I have to select the area and very quickly right-click on it to reach the Copy command in the contextual menu. This second issue is intermittent and quite annoying. It seems to happen only on CentOS 6.3 virtual machines running GNOME Terminal. I tried to connect to an Ubuntu machine using NX and didn’t get the issue. The problem also didn’t happen when I ran the NX client inside VirtualBox, so this might be caused by Windows or some other application.

VNC

After the second time NX client 4 went south with the keyboard, leaving me locked out of my session, I got tired and tried something else: VNC. It happened to be simpler than I thought. I just had to SSH into my virtual machine and type vncserver from there. I had to set a password for my VNC connection, and vncserver told me the host name and display to use in the VNC viewer.
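
In concrete terms, the whole setup boils down to a couple of commands. Here is a sketch; the user name, host name and geometry are placeholders, not my actual values.

    # From the workstation, log into the virtual machine:
    ssh user@myvm

    # Then, in the remote shell, start a VNC server. The first run asks for
    # a VNC password and prints the display to use, e.g.
    # "New 'myvm:1 (user)' desktop is myvm:1".
    vncserver -geometry 1680x1050

    # Back on the workstation, point the VNC viewer at host:display
    # (e.g. myvm:1) and enter the VNC password chosen above.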

I tried UltraVNC first, because that viewer supports encrypted connections between client and server. The connection worked like a charm, but keyboard support is quite poor. First, Right Alt is, again, failing. It seemed more and more that I would have to switch to a US English keyboard layout to work with that virtual machine. Then I noticed the VNC viewer was skipping keys randomly. For example, I typed exit and got just the e! So on each key press I would have to stop, check the screen to see whether the character is there, and retype it as many times as needed. I am not used to working like this: I rely on key presses just working, and that is what keeps me from getting completely drained.

After a very frustrating and almost catastrophic failure installing VirtualBox on my company’s ultrabook (the beast didn’t like Symantec Endpoint Protection and messed up my Internet connection), I tried a second VNC client: TightVNC. It does not encrypt the traffic, but that’s not a big issue since the virtual machine is inside the company’s network, accessed through an encrypted VPN. That one worked relatively well, with a couple of tweaks and two drawbacks.

Here are the tweaks:

  1. TightVNC misbehaves under Windows 8 in a way similar to the NX client. Fortunately for me, this one can be worked around by disabling DPI scaling just for TightVNC.
  2. Under GNOME, I had to use XRandR to switch resolution, by typing xrandr -s 1680x1050 in a terminal (see the sketch after this list).
  3. I had to switch VNC to full screen using CTRL-ALT-SHIFT-F, otherwise some portions of the screen were cut off.
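
Putting the terminal-side tweaks together, this is roughly what they look like; the resolution is of course whatever matches the local monitor.

    # Inside the VNC session: list the available modes, then switch the
    # virtual desktop to the monitor's resolution.
    xrandr
    xrandr -s 1680x1050

    # Then, in the TightVNC viewer on the Windows side, toggle full screen
    # with CTRL-ALT-SHIFT-F so nothing is cut off.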

But in full screen, TightVNC completely captures ALT-TAB and CTRL-ALT-<arrow keys>! I have to leave full screen mode to give the keys back to Windows. The right Alt key also works great. This is very nice, as if I were working under Linux! However,

  1. Performance is not as great as with NX. The display is a bit sluggish, although manageable, especially given the incredible benefit of a working Alt-Tab. However, sometimes, especially when I am working from home, the display refresh becomes very slow and a typed character appears a second later. In one case, it was so slow that I had to figure out a way to work locally for the rest of the day.
  2. Clipboard support is a bit clunky. I managed to transfer data from VNC to Windows by starting vncconfig -nowin inside my VNC session (see the sketch after this list), but that doesn’t solve the other direction: I cannot transfer data from Windows applications to the VNC-managed GNOME session. I couldn’t find any solution for this.
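
For the record, the half-fix for the clipboard looks like this; it is a sketch of what worked for me, and only in one direction.

    # Inside the VNC-hosted GNOME session: run the clipboard helper in the
    # background, without its settings window, so that text copied in the
    # session becomes available to the Windows clipboard.
    vncconfig -nowin &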

If everything else fails

If VNC eventually fails too, there is little left other than establishing a traditional SSH connection and working from the terminal. I will need to open a new window and SSH again each time I want a new terminal. I tried Screen to get multiple terminals in the same window. That works relatively well, except under Emacs, because Screen’s Ctrl-a prefix conflicts with the same key in the editor.
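
One possible way around that conflict, which I mention as an idea rather than something I have deployed, is to move Screen’s prefix away from Ctrl-a in ~/.screenrc:

    # ~/.screenrc: use Ctrl-t as the Screen prefix so Emacs keeps Ctrl-a
    # (beginning-of-line). With "escape ^Tt", the prefix becomes Ctrl-t and
    # pressing the prefix followed by t sends a literal Ctrl-t.
    escape ^Tt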

Moreover, Emacs somewhat misbehaves in an SSH terminal, at least with Cygwin. Selecting text and typing a key doesn’t overwrite the text; it just inserts characters next to it as if nothing were selected. In addition, Ctrl-Backspace erases a full line rather than the last word.

If I need a graphical display, for any reason, I could start an X server such as Cygwin/X or Xming, run export DISPLAY=:0 and use the -X option to start SSH. With this trick, any X client shows up on the Windows screen. However, this is pretty slow over a VPN connection. At least it is a known-good solution that will almost always work!
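
Roughly, the recipe looks as follows; the user and host names are placeholders, and Xming is assumed to listen on display 0, which is its default.

    # On the Windows side, with Xming or Cygwin/X already running:
    export DISPLAY=:0     # tell the local ssh client where the X server is
    ssh -X user@myvm      # -X enables X11 forwarding

    # Then, in the remote shell, any X client pops up on the Windows screen:
    emacs &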

Can we make things better?

If the server side evolves, yes. There are at least two ways I can think of that would improve things.

  1. The X protocol is quite old and clunky. There are new protocols in development that could replace it and be more efficient and compact over a VPN. The main one is Wayland, a completely documented and open protocol. Canonical, the maintainer of the Ubuntu distribution, is also developing its own alternative called Mir. From what I have read, Mir is simpler than Wayland, but it is more closed; only the API is open while the protocol is under Canonical’s exclusive control. Using either Wayland or Mir may result in less traffic, and thus more efficient graphical sessions over the same network bandwidth.
  2. Logging on to a centralized server and working from there is the 70s way! Nowadays, each and every laptop has tremendous CPU power that sits shockingly unused when logging in to a remote desktop. What we need instead is a way to share file systems, and there are numerous protocols for this: SSHFS, WebDAV, CIFS, etc. (see the sketch after this list). One argument against this is probably data security; that would need to be addressed by encrypting the hard drives of laptops that mount file systems containing data. Moreover, some work may require Linux, either on bare metal (dual-boot laptop) or as virtual machines. The company could provide a prefabricated OS image that employees would install and would be free to alter as they see fit, installing new applications or changing settings.
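
As an illustration of the file-sharing alternative, mounting a remote project tree over SSH is a one-liner; the user, host and paths below are hypothetical.

    # Mount a remote directory locally over SSH, work on the files with local
    # CPU power, then unmount. Requires the sshfs (FUSE) package.
    sshfs user@buildserver:/home/user/project ~/project -o reconnect
    # ... work locally on ~/project ...
    fusermount -u ~/project    # unmount when done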

A second blog because I chose the wrong CMS

After some time posting on a French WordPress blog, I started to feel the need to post some articles in English, mainly technical content about computer science, although it could extend to other topics. I explored the possibility of making my WordPress blog bilingual. I found xLanguage, which seemed to do the trick, but it just installs and does nothing: it doesn’t allow changing the language and doesn’t provide the language toolbar when editing posts. I searched, added languages, added the widget. The language selector was on my blog page but did nothing, and the language toolbar was still missing. I then read that the xLanguage extension hadn’t been updated in two years, so it is probably not maintained anymore.

Further searches led me to WPML as the most recommended solution. I checked it and gave up on it, because not only is it not free, it is also offered in three different versions, with no obvious way to pick the right one other than going through a list of cryptic features that I don’t understand yet but may need later! I’m pretty sure there is a way to set up a multilingual site with a CMS, if only I had picked the right one. I would probably be better off coding my own CMS, so I could enhance it as I need, but HostPapa only offers PHP and Perl support and I don’t want to invest in PHP, preferring Python/Django or Java/JSP. So for now, I’ll stick with a second blog and see where that goes.