Is Linux overusing hard drives?

Monday, May 18 2015, when I tried to synchronize files between my main computer and my HTPC, I got error messages from Unison telling me it was failing for some files. Tired of repeated unexpected failures, I tried copying the files manually. The manual copy was failing too. I quickly noticed that the whole partition I was copying the files to had turned read-only. I tried to re-enable read/write by remounting the partition; that worked, but it reverted to read-only after a few minutes. Some Google searching later, I suspected my hard drive was failing. I ran a self-test using sudo smartctl -t short /dev/sdb and, after a few minutes, sudo smartctl -a /dev/sdb. The drive was now failing the short self-test. Damn, again? Yep, another dead hard drive!
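For reference, the check sequence looks roughly like this; the device name sdb is of course specific to my setup.

```shell
# Quick SMART checks on a suspect drive (device name is a placeholder).
sudo smartctl -H /dev/sdb          # overall health verdict
sudo smartctl -t short /dev/sdb    # launch a ~2 minute self-test
sleep 120                          # wait for the test to complete
sudo smartctl -a /dev/sdb          # full report, including the self-test log
```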

Short term fix

The first step was to stop using that beast, moving away everything I could before things got even worse. A failing drive can become totally unusable at any time, corrupting ALL the data on it.

Transferring my music files from the failing drive (the 1.5TB one) to another one (the 3TB one) started to output multiple I/O errors. No way, it would fail like this for half of the files and take forever! I aborted that and instead copied my music files from my main PC. At least I would get all the files, not half of them. Fortunately, I was able to transfer most of my ripped DVDs and Blu-ray discs from the 1.5TB drive to the 3TB one. I finished the transfer the next day; it ran in the background while I was working from home. I could then unmount the 1.5TB drive, putting it out of Ubuntu's way.

I got a new 3TB drive Friday, May 22 2015 and installed it the morning after. This time, I was able to pick the right drive to remove, because the 1.5TB was a green WD model while the 3TB was a red one. The new 3TB is also a red WD model. Unfortunately, things are never smooth when modifying that machine: I had to remove my SSD drive+bracket assembly in order to unscrew the 1.5TB hard drive. After that, however, I was able to install the new 3TB drive, put back the SSD assembly and reconnect everything. This went surprisingly well as opposed to last time, when I had trouble finding a way to connect power to all four drives.

The new 3TB drive seemed to work correctly. I was able to partition it as GPT, create a 3TB Ext4 partition and mount it. I then started a long self-test to make sure the drive would not fail on me just a couple of days later. The self-test completed during the evening and showed no error.
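Roughly, the setup of the new drive amounts to the following; I actually used graphical tools for part of it, and /dev/sdb plus the partition label are placeholders.

```shell
# Partition the new drive as GPT, create one big Ext4 partition, mount it,
# then launch an extended self-test (takes hours on a 3TB drive).
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L media /dev/sdb1
sudo mount /dev/sdb1 /mnt/media
sudo smartctl -t long /dev/sdb
```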

Tentative prevention

Problem solved? Partly. The thing is, that machine runs 24/7 to power a Minecraft server. This keeps both hard drives spinning non-stop. I would like Ubuntu to spin down the drives when they are unused. I moved the Minecraft files to my SSD and will use the hard drives only for media and backup.

Well, no: Ubuntu never ever spins down any hard drive! I tried to set this up with sudo hdparm -S 241 /dev/sdb, with no result. The only thing that worked was manually spinning down the drive with sudo hdparm -Y /dev/sdb (or /dev/sdg for the other drive). I recently found that gnome-disks has an option to set the drive spindown timeout. The spindown setting from gnome-disks was honored once on my main computer, but I still need to check whether it is reliable.
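For reference, here is what those hdparm invocations mean. With -S, values 1 to 240 count in units of 5 seconds and 241 to 251 in units of 30 minutes, so -S 241 should request a 30-minute idle timeout; again, the device names are specific to my setup.

```shell
sudo hdparm -S 241 /dev/sdb   # request a 30 minute idle spindown (ignored here)
sudo hdparm -Y /dev/sdb       # force an immediate spindown (this one works)
sudo hdparm -C /dev/sdb       # check the drive's current power state
```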

What if it happens again and again?

It seems I would need Windows just to get my hard drives to automatically spin down when unused, which is quite a shame! I don't want to format this HTPC as a Windows machine, because the Minecraft server won't run smoothly on Windows. I would be stuck with an always-open command prompt window running the server, unless I searched forever to figure out a way to run it as a system service, assuming that is even possible.
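For what it's worth, a wrapper like NSSM (the Non-Sucking Service Manager) can reportedly turn a console program into a Windows service. The paths and options below are hypothetical placeholders, not a tested setup.

```
rem Hypothetical sketch: wrap the Minecraft server as a Windows service.
rem All paths and memory settings are made-up placeholders.
nssm install Minecraft "C:\Java\bin\java.exe" -Xmx1024M -jar "C:\minecraft\minecraft_server.jar" nogui
nssm start Minecraft
```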

A colleague at my workplace suggested using an Ubuntu virtual machine, but my HTPC doesn't have enough memory to reliably run a VM and I cannot bump it up beyond 4GB because of a motherboard limitation. Well, I could try sticking in four 4GB DDR2 modules and see, but I'm not sure the board would accept this at all, even though they would fit physically! If that failed, I would be stuck with useless DDR2 while newer systems use DDR3. What a pain!

I also investigated the possibility of using RAID to improve storage reliability. If I added a third 3TB hard drive, I could configure a RAID5 array totaling 6TB, and even grow it to 9TB with a fourth 3TB drive! RAID5 stripes the data, along with parity information, across the drives in such a way that multiple drives are involved when accessing files, increasing performance. It also ensures that if one drive fails, ALL the data can be recovered and the array rebuilt by simply removing the failed drive and adding a replacement.
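Under Linux, the software-RAID side of this would look roughly like the sketch below. The device names are placeholders, and the create command would destroy existing data, so this is illustrative only.

```shell
# Build a 3-drive RAID5 array with mdadm, then put a filesystem on it.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0

# Replacing a failed drive later: mark it failed, remove it, add the new one.
sudo mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
sudo mdadm /dev/md0 --add /dev/sde
```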

I was tempted by this, but it would have forced me to purchase two hard drives and another PSU to get more than four SATA power connectors. I wasn't sure I wanted to spend more than $300 just to get this up and running. Moreover, creating the RAID array would have forced me to move all the files away from my current 3TB drive before combining it with the two new drives, unless I jumped directly to the 4-drive array.

I will instead wait for that machine to die; the next system could be a smaller SSD-only HTPC combined with a NAS offering easier drive installation and replacement. I could purchase a dedicated NAS, or build a generic computer configured as one. Fortunately, Ubuntu has facilities to configure software RAID; I checked that recently. I'm less sure about the fake RAID offered by the motherboard: it may or may not work, and may or may not be better than software RAID.

The downsides of SSDs

What's the point of having an SSD if both Windows 8 and Ubuntu 15.04 introduce artificial timeouts that increase the boot time, making it equivalent to having a standard hard drive? Well, I'm there; I reached that point.

Windows 8 often boots fast from EFI to the login screen, but after I enter my password, it sometimes reaches the desktop in five seconds, sometimes hangs for 30 to 45 seconds. There is no obvious reason why, no way to track this down and no obvious solution other than deleting my user account and creating a new one. I cannot spend my weekends doing, redoing and redoing that. This is just pointless and inefficient! I could try to reinstall, but then I would have trouble reactivating Windows, reauthorizing Ableton Live, and would have to spend hours waiting for the manual installation of countless drivers and software tools. Ninite can help with programs, not with drivers.

Some time later, I found out that uninstalling and reinstalling the driver for my M-Audio interface fixed the slow boot. There seems to be a conflict between M-Audio's Fast Track Pro and Novation's UltraNova drivers. Windows 10 also seemed to stabilize things a bit.

Ubuntu, most of the time, boots quickly. However, starting from 15.04, it was taking almost a minute from splash screen to login screen. I had to spend more than half an hour looking at syslog to figure out that the swap partition's UUID had changed but the update script didn't reflect that in /etc/fstab. Several people repeat that we shouldn't do dist-upgrades and should rather reinstall, but then, why is there a dist-upgrade option in the first place? Fortunately, fixing the partition UUID in /etc/fstab restored my boot time.
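The diagnosis and fix boil down to something like this; the UUID shown in the comment is made up for illustration.

```shell
# Compare the swap partition's actual UUID with what /etc/fstab references.
sudo blkid | grep swap     # current UUID, as the kernel sees it
grep swap /etc/fstab       # UUID the boot process waits for

# If they differ, edit /etc/fstab so the swap line matches, e.g.:
#   UUID=1234abcd-0000-0000-0000-000000000000  none  swap  sw  0  0
sudo swapon -a             # confirm swap now activates without a timeout
```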

These are not SSD-specific issues, but they make the SSD less useful. Another factor reducing the usefulness of an SSD is the never-ending size increase of OSes and applications, especially when dealing with virtual machines. This ultimately fills any SSD, requiring time-consuming reorganization of the layout (partition resizing, copying to a larger drive, etc.).

I don't want to go backward, switching from an SSD to a hard drive, but practice seems to tell me I should. This is disappointing and quite frustrating.

Minor problems stacking up

It seems to happen too often to me. I end up with many different small, minor problems, each sometimes not a big deal taken alone. But when they add up, this becomes unbearable, resulting in a bad and exhausting day. Sometimes the solutions are simple, sometimes not. Here is the most recent stack of such issues.

Spending my time importing modules in Python

For the moment, the only way I have to edit my Python code running on a remote virtual machine is to use Emacs running from that machine. I am investigating local solutions, but this is just a nonsensical chain of complications, or requires software that is free only for non-commercial use, which I cannot adopt at work.

One operation that ends up being frequent is importing a module. You are writing a piece of code and then need to call a function defined in another module or in the standard Python library. When this happens, I need to add an instruction to import the module if my current module doesn't import it yet. Import statements can appear anywhere in the code, but convention puts them at the beginning of the file. This seems better for code organization and ensures that all imports happen at the start of the program rather than at an arbitrary time during execution. If a module imported at the top of the file is missing, the code fails fast, as opposed to failing only when a function importing the module is called.
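The fail-fast difference can be shown with a small sketch: a missing module imported at the top of a file aborts the program immediately, while one imported inside a function only fails when that function is called. The module name below is a deliberately nonexistent placeholder.

```python
def lazy_call():
    # The import is only attempted when this function actually runs,
    # so a missing dependency surfaces late, mid-execution.
    import hopefully_nonexistent_module  # hypothetical missing module
    return hopefully_nonexistent_module.something

# Defining the function succeeds without error...
failed_at_call_time = False
try:
    lazy_call()  # ...but calling it is what triggers the ImportError.
except ImportError:
    failed_at_call_time = True

print(failed_at_call_time)  # True: the failure surfaced only at call time
```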

As a result, I am sometimes editing code and need to jump to the start of the file to add an import, then find my place again and continue. This small interruption in task flow is not a big deal in itself, but it becomes more and more painful when it repeats itself tens of times a day, sometimes at each and every successive line of code!

Maybe I went too granular and split my code into too many modules. Am I in a situation where it would be better to have one single huge file with a lot of stuff, rather than splitting my code across multiple files? Well, last Monday, I was at the point of wondering exactly that.

How did I work this out before? When I was programming in Java, I was using Eclipse as my IDE. Eclipse is able to guess import statements from the referenced class names and automatically add them at the beginning of the file, without making me go there, lose my position and come back. A Python IDE such as PyDev or PyCharm would probably do it, but I cannot use them for the moment.

Fortunately, I came up with a very simple trick. In GNU Emacs, you can split the window in two parts using C-x 2. Both windows first show the same part of the current buffer, but it is perfectly possible to move the cursor up in one of them. This changes the position of the current window while leaving the cursor unchanged in the second window. I can then add my import statements, switch to the window with the original cursor using C-x o, then make it the only window with C-x 1. Something similar can be done using bookmarks, but that requires other, harder-to-remember keyboard shortcuts, and each bookmark needs to be named, while these are sometimes one-off save/restore scenarios.

Keyboard shortcut confusion

Since I gave up on Virtuawin because its behavior was too inconsistent on Windows 8, I ended up using Desktops from Sysinternals, which uses different keyboard shortcuts than Ubuntu for switching desktops. I thought CTRL-ALT-F1 to F4 would be nice keys, but this turned out to be a nightmare. As soon as I came back to Ubuntu after a week of working with Desktops, I was screwed, always making the same mistake of pressing CTRL-ALT-F1, which switched to the console. I then had to press CTRL-ALT-F7 to go back to X. This also happens in VirtualBox virtual machines.

There is no perfect solution for this. My current workaround is to use CTRL-F1 through F4 instead. At least, if I press those in Ubuntu, nothing happens, as opposed to having my screen wiped away and being forced to press CTRL-ALT-F7.

Windows 8 becoming increasingly disturbing

Even Desktops is starting to be a nuisance because of Windows 8. As soon as I am on desktop 2, 3 or 4, I need to be careful not to press the Windows key. If I do, out of habit, this switches to Metro, which of course replaces my screen with the home screen. Then I am back to desktop 1. Metro is my most common way of starting applications on Windows 8: press the Windows key, type a name, press Enter. This also works great on Windows 7 and Ubuntu's Unity. But it fails with Desktops on Windows 8. There is no way out, other than putting EVERY application on the desktop and spending more than 30 seconds each time I want to start something! I just cannot do that; it is too inefficient, especially knowing that a sighted person would need half the time I do to find the icons, given my visual impairment. I could work around this by grouping the icons in some clever way, creating folders, but this remains clunky.

Windows 8 is also causing issues with Lync, the instant messaging software my company uses. Lync works correctly in text mode, but it intermittently fails for voice chat and screen sharing. When this happens, it consistently displays a network error, no matter how hard we try to establish the communication, and we have to fall back on alternative means, like a traditional phone call. The only solution has been to reinstall Lync.

IT people at my company were of no help. They would have liked me to try out Lync on Windows 8 in the office rather than finding out what is going on. However, the Windows 8 ultrabook my company lent me is almost unusable for me unless I hook it to an external monitor and mouse. At the office, I don't have an HDMI input to plug in the mini DisplayPort to HDMI adapter I use at home, and the mini DisplayPort to VGA adapter that could have worked went away a couple of months ago: somebody borrowed it and never brought it back. Even if I had the adapter, performing this test would be tedious. I would have to wait for a voice chat and, when it happens, quickly switch from my regular laptop to the ultrabook and try. This is just pointless stress that may well give no result: it would probably work correctly, suggesting that the problem comes from the VPN, my router, etc.

There is no way out, other than running away from Windows 8. I thus stopped using the ultrabook, at least for the moment, and used the official laptop instead. That machine runs the good old Windows 7. If Lync starts failing with that as well, then I will have to switch routers, try on somebody else's Internet connection, etc.

SSH connection timing out for no reason

We recently switched from Cisco's outdated IPsec VPN client to the newer SSL-based AnyConnect. This seemed to go smoothly, after I uninstalled and reinstalled VirtualBox (it seems VirtualBox interferes with Cisco's VPN). However, I noticed that SSH started to time out on me when I wasn't interacting with the shell for a moment. This forced me to reconnect and restart what I was working on: sometimes switching back to a directory with a long path, sometimes finding that Bash history worked, sometimes not.

My first attempt at working around unreliable connections was to use emacsclient. For this, I started Emacs with the -daemon option, then used emacsclient instead of emacs. That works surprisingly well. With that, Emacs keeps running on the server and doesn't lose track of unsaved files if the connection drops. Of course, I keep the good habit of saving often, which is an additional elementary safeguard against unreliable connections.

Yesterday, I may have found a way to get rid of these timeouts. My current hypothesis is that the new VPN client monitors the TCP connections going through the tunnel it establishes and shuts them off if they are inactive. Fortunately, SSH provides a way to keep connections alive: the ServerAliveInterval configuration option. By putting ServerAliveInterval 60 in my .ssh/config entry for my virtual machine, I force SSH to send a keep-alive message every 60 seconds, so neither the SSH server nor the VPN client has a reason to kill the connection. This seems to help, but it is not fully tested yet.
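For reference, the relevant fragment of ~/.ssh/config looks like this; the host alias is a placeholder, and ServerAliveCountMax (how many unanswered keep-alives are tolerated before disconnecting) is shown with its usual default.

```
Host workvm
    ServerAliveInterval 60
    ServerAliveCountMax 3
```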

The mouse making me mad!

Why was my Razer mouse so jumpy? Why did I feel I had fewer problems with the mouse at the office than at home? Would I really need to get myself a Dell mouse like the one I have at the office? Was I just going crazy? Maybe not! I found out this week that my mouse pad is pretty suspect: it has multiple scratches on it that can well screw up optical mouse tracking, and the surface has some patterns. I remember having a hard time with the mouse when I worked at my parents' home. That ended up being because of the desk! Putting the mouse on top of a piece of white paper, yes, just a plain old piece of paper, nothing more sophisticated, cleared the issue! So I removed that mouse pad and that seemed to help! It is unbelievable how sometimes, simple solutions, almost nonsensically stupid ones, can lead us far!

Functionality versus stability

Switching between multiple windows has always been an issue for me. The simplest solution is to use the Alt-Tab key combination. That works pretty well when I need to switch between two or three windows, but as soon as there are more windows, it becomes more difficult. Holding Alt and pressing Tab multiple times can help, but the order of the windows differs from time to time, so I have to check each proposed window one at a time until I get the right one.

I saw several people using the mouse for that, clicking on icons in the taskbar. This has always been a problem for me, because I was constantly losing track of my mouse pointer when reaching the bottom of the screen. A simple fix helped: moving the taskbar to the top of the screen! However, switching windows with the mouse like this remained tedious.

What helped a lot is the ability to group windows on workspaces, also called virtual desktops. I first discovered this functionality on Linux. I quickly wanted it under Windows, and found and tried several software tools attempting to emulate it.

Mac OS X also offers this possibility, but it is less useful because the windows proposed by Command-Tab, the Mac equivalent of Alt-Tab, come from all the desktops, so there is no grouping in practice, just some kind of visual illusion to unclutter the desktop. One workaround is to keep as few windows as possible on each desktop, then switch tasks using Ctrl-arrow keys, forgetting about Command-Tab. The switcher reachable with F9 could also be handy, but it would serve me better on a touch screen. Of course, Mac OS X has trouble supporting my touch screen because it is not manufactured by Apple!

I have been using Virtuawin for several years. That tool served me well, allowing me to group windows on virtual desktops, move windows from one desktop to another and configure the number of desktops. Internally, it seems to work by showing and hiding windows, which allows Alt-Tab to work properly, showing only the windows on the current desktop rather than all of them. This reminds me of another tool, a PowerToy from Microsoft, that worked by minimizing windows, so Alt-Tab was showing windows from all the desktops. I also remember deprecated tools I tried years ago that provided no keyboard shortcut at all to switch desktops. This is so shocking that I forgot the tools' names and don't want to remember them!

However, things started to go bad with Virtuawin on Windows 8; in fact, it may have started on Windows 7 with Aero. The behavior of the tool became more and more erratic, causing me increasing frustration. It is possible that the problem comes from some specific software programs, not Virtuawin itself.

The first issue I ran into is a loss of focus when restoring a desktop. Many times, after I switched from a desktop with a command prompt or an Emacs window to one running Firefox or Outlook, keyboard shortcuts stopped responding. It took me some time to figure out that Virtuawin didn't restore focus to any window on the desktop, keeping the focus on itself. Pressing Alt-Tab to switch to the next window works around it. Of course, clicking in the window with the mouse solves it too. But this needs to be redone on each desktop switch. Sometimes the focus is set right, so if I press Alt-Tab, the window disappears, because Windows 7 with Aero, as well as Windows 8, “smartly” proposes the desktop itself as a candidate when pressing Alt-Tab.

The second issue I ran into, which happens intermittently, is Virtuawin popping up and asking me if I want to close it. This happens when I use Alt-F4 to close two or more windows one after the other. It used to happen only when no more windows were left on a given desktop, but I got the issue on a desktop with a couple of remaining windows. Again, it seems that the focus gets messed up and Virtuawin gains focus, proposing to shut itself down rather than intelligently handing focus to another window. There may be no good way to work around this, in fact.

A few weeks ago, I got so fed up with this erratic behavior that I got rid of Virtuawin. I tried several alternatives with no luck. Switcher, which would at least provide an alternative to Alt-Tab, just doesn't retain its settings and doesn't run at startup. The default key was backtick, which is quite inconvenient on a Canadian French keyboard. Maybe this is because of UAC, but I just cannot accept turning UAC off, because that would allow any third-party application to start as administrator in the background, without any indication. I then tried Windows Pager, which was completely incapable of retaining window focus and emitted an annoying beep each time I switched to the desktop with Firefox. I worked a week with VistaSwitcher instead of Alt-Tab. This somewhat worked, but it was getting painful because of too many windows, including multiple Cygwin windows looking almost the same, with similar titles! I thought about Dexpot, but this one is free for non-commercial use only and I wanted something that I could use both at work and at home.

I then found Sysinternals Desktops. That tool, from Sysinternals (now part of Microsoft), seems to make use of hidden APIs offering multiple workspaces natively under Windows. It works by creating multiple copies of the Explorer.exe process, each running as a desktop manager, and provides shortcut keys to switch between the copies. This seems to work very well, except that it provides no way to move an application to another desktop. The situation is aggravated under Windows 8 by the fact that Metro is involved in starting applications by name. More specifically, pressing the Windows key before typing the name of an application switches to the first desktop, where Metro is running, and pressing Enter starts the application from there. The incorrectly started application cannot be moved after the fact. There is no way to work around this, except finding alternative ways to start applications from other desktops: putting everything on the desktop (unacceptable, I would not be able to find ANYTHING), pinning applications to the taskbar (somewhat works for a small number of applications), launching applications from Explorer (time-consuming to find the programs) or downgrading to Windows 7 (again!). I may try something like Launchy, which could allow me to efficiently start applications by name without involving Metro.

Shutting down Windows 8 after multiple desktops were created also makes the spare Explorer.exe processes crash one at a time with a cryptic error message. Fortunately, this doesn't prevent me from turning off the computer cleanly. This didn't happen on Windows 7.

I also experienced issues with clicking on URLs from applications like Outlook and Lync. When Firefox runs on a different desktop, the system doesn't correctly handle the situation by instructing Firefox to open the requested URL. Firefox instead tries to start again on the desktop of Outlook or Lync, fails and displays an error message indicating it is already running. I then have to manually copy/paste the link. At least Outlook has a Copy hyperlink option, so there is no need to manually select the link, but Lync offers no such option. I have no good solution for now, other than running Lync on the same desktop as Firefox.

Desktops is limited to exactly four desktops, no more, no less. Sometimes it would be useful to have more. However, I found out that I have been wasting desktops for years! For example, I was keeping a desktop just for Lync. Instead, I just had to configure it to minimize to the tray instead of the taskbar, so Lync is not in my way when using Alt-Tab but still usable for the rarer cases where I need to initiate a chat. Moreover, I used to start a Cygwin window, SSH to my home computer and start Audacious from there to listen to music while keeping the playback controls handy. Why keep this useless terminal window when ssh -f can start Audacious and put SSH into the background, allowing me to close the extra terminal window? I tried to use Screen to further reduce the number of Cygwin windows needed, but that was causing issues, especially with Emacs.
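The ssh -f trick amounts to something like this; "htpc" is a placeholder host alias, and setting DISPLAY assumes Audacious should appear on the remote machine's own X session.

```shell
# Start Audacious on the remote machine, then let SSH go to the background
# so the local terminal window can be closed.
ssh -f htpc 'DISPLAY=:0 audacious'
```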

Despite its limitations, Desktops happened to be a better solution for me than anything else. The reason is that its behavior is deterministic: it works the same way every time! It has fewer features, but it is more stable. If later on it still doesn't meet my needs, I will have to either downgrade to Windows 7, which has fewer issues with Virtuawin, work more in VirtualBox virtual machines with Ubuntu guests (which support virtual desktops), or try to get Dexpot on my work laptop.

Networking issues on an ultrabook

For a few weeks, I had been experiencing sluggish performance while connecting to some SSH servers. The problem happened when using SSH through the VPN of the company I work for. This resulted in lag when typing commands in the SSH terminal, which was more and more problematic given the ever-longer command-line arguments and file paths I had to type.

Too many layers = too many problems

My current working setup is far from simple, because I need to work on a remote virtual machine to have access to some data and run processing on it. The official connection method using NX doesn't work well for me, because NX suffers from intermittent keyboard issues, e.g., the right Alt key ceasing to work, the server acting as if Shift were pressed when it is not, etc. This works correctly for somebody using the mouse rather than the keyboard, or able, without significant loss of efficiency, to check every typed character for a potential error. This is not my case.

I ended up building myself a multi-layer setup as follows:

  1. I am not hooked directly to the cable modem from Videotron. I am instead using a Linksys WRT310N router, and to make things more interesting, I am running DD-WRT rather than the official firmware. This has rarely caused any issue, though.
  2. Windows runs on a computer provided by the company. I am currently working from home on an ultrabook they lent me, with Windows 8.1 on it.
  3. The ultrabook having no Ethernet connectivity, I was using a USB to 100Mbps Ethernet adapter to get a faster and more stable connection than Wi-Fi.
  4. A Cisco VPN client is needed to access the internal resources of the company.
  5. The machine runs Ubuntu 14.10 in a virtual machine hosted by VirtualBox.
  6. Inside the guest Ubuntu, SSHFS is configured to access my workspace on the remote virtual machine, so I can use local editors like Emacs.
  7. Inside the guest Ubuntu, I open a terminal and SSH to the remote virtual machine to run commands there.

Phew! What a list of layers!
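The SSHFS layer of that stack boils down to something like the following sketch; the host alias, user name and paths are placeholders.

```shell
# Mount my remote workspace locally over SSH so local editors see the files.
mkdir -p ~/workvm
sshfs devuser@workvm:/home/devuser/workspace ~/workvm

# Local Emacs can now edit files under ~/workvm as if they were local.
# To unmount when done:
fusermount -u ~/workvm
```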

Different networking methods

VirtualBox offers several ways to manage networking. I am currently using the first of the three I know about. Here they are.

  1. Bridged. VirtualBox uses a trick I don't know much about to clone the host's network interface and act as if there were a second interface. The guest OS receives its own IP address from my router and thus acts pretty much like an independent machine on the network. This implies that Ubuntu has to establish the VPN connection, but fortunately, there is a Cisco client available for Linux. However, with this method, the Windows host doesn't have access to internal resources unless I establish a second VPN connection on the host side. I am somewhat tempted by the idea of having the router establish the VPN connection. This might be possible with DD-WRT, but it would introduce a security risk: what if I leave the VPN open after my working day, or somebody hacks my network?
  2. NAT. VirtualBox acts a bit like an internal router, allocating a private IP address to the guest. Requests are translated by VirtualBox to look as if they came from the host. This works correctly and allows the VPN connection to be established by the official Windows-based Cisco client, but on the other hand, it introduces a level of indirection: the NAT applied by VirtualBox. Any indirection can hinder performance.
  3. USB. VirtualBox supports exposing USB devices to guests, so I could, at least in theory, expose my USB to Ethernet interface to Ubuntu. However, without Oracle's closed-source extension pack, I would get only USB 1.1 support, which would result in slow or non-functional networking. The extensions are available only for personal or evaluation use, so I cannot use them at work. Even if I solved the licensing issue and got the extensions, with the USB solution the Windows side would be unable to access networking, so I would lose access to Lync and Outlook. I could work around that by turning Wi-Fi back on, but this starts to be clunky. If I have to go this way, I would be better off using my personal Ubuntu PC rather than a virtual machine.

It seems that method 1 is a bit faster than method 2, but I am not totally sure; I have no way to measure it scientifically. Even with method 1, I was still experiencing sluggish SSH. After switching from 2 to 1, things seemed a bit better, but performance degraded after a few minutes.
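Switching between the two modes is done with VBoxManage; the VM name "UbuntuGuest" and the host interface "eth0" below are placeholders for my actual setup.

```shell
# Attach the guest's first NIC to the host interface in bridged mode...
VBoxManage modifyvm "UbuntuGuest" --nic1 bridged --bridgeadapter1 eth0

# ...or switch it back to NAT mode.
VBoxManage modifyvm "UbuntuGuest" --nic1 nat
```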

Could a better network interface help?

This morning, I tried a TruLink USB to Gigabit Ethernet interface rather than the 100Mbps one. However, this didn't go well at all. I got the following issues, one after the other.

  1. VirtualBox partially blocking the network. At first, everything seemed to work well. I was able to browse the web and Outlook was working fine. But I soon discovered that Lync wasn't connecting at all and that although the Cisco VPN was starting (from Windows; no virtual machine yet), it couldn't access any internal resource. Windows network diagnostics reported a potential driver issue. I tried to install the driver I found for this adapter, but it just failed; Windows already had the most recent driver, built in or installed by IT. I then found out that Windows was connecting to an unknown network through the VirtualBox host-only adapter. So VirtualBox was in the way, partially blocking networking. I had to remove VirtualBox and reinstall it to fix this.
  2. Cisco VPN not working. After reinstalling VirtualBox, I was able to connect to Lync, but the VPN was still not working. It was connecting without issues, but it would give access to absolutely no internal resource. I tried removing VirtualBox once again, to no avail. I had to remove the Cisco VPN client, reboot to be sure everything was clean, reinstall the client, test to see if things were back to normal (they were!), reboot once more to be sure, reinstall VirtualBox and test again! Why reboot? Well, if VirtualBox is installed while Cisco VPN is running, network connectivity stops working completely until VirtualBox is removed.
  3. Distracting side issues. During this frustrating troubleshooting, I ran into several other issues. Firefox once again took several seconds to start; to really fix that, I would ideally have to switch to Chrome, transfer my bookmarks from Firefox once again, and live with a browser having very weak touch-screen support, at least at the time I am writing. Emacs, which I tried to open as a scratch buffer to compose ping commands since I was always making typos, took at least one minute to launch, and started multiple instances when it finally came unstuck. Then the main window of the Cisco VPN client remained open after connection and couldn't be closed by the X button, Alt-F4 or any other normal way; I would have had to live with it on an unused Virtuawin desktop or restart the client. I ended up shutting everything down and rebooting once again.

At least, after all these efforts, I got functional networking and experienced far fewer lags than during the last 2-3 weeks of working from home!

If problems come back, I will probably give up on this ultrabook and revert to the official laptop provided by the company. That machine is heavier and has no reasonable way to output to digital displays (I would either need to purchase a docking station specific to that machine, or try my luck with the mini HDMI port, which sometimes works and sometimes doesn’t), but it has VGA output, it has Gigabit Ethernet, and it runs the good old Windows 7, which definitely seems to play nicer with my company’s software tools.

QuickTime components: one key step further!

Final Cut Express could almost open my Bandicam AVI captures: sound but no image. I recalled that I had chosen Xvid as the encoding format in Bandicam, so tonight I did a quick Google search for QuickTime components that could add Xvid support. I was not really hopeful of finding anything useful and expected to be blocked by yet another trial version, but instead I found Perian, a set of QuickTime components supporting several formats. Although the project has reached its end of life, it can still be of some help, and this is definitely better than nothing! It’s free and likely to be released as open source when it is final.

I couldn’t resist the temptation to power my Mac up again, download the tool and install it. That worked very well, no issues. After that, I was able to play my Bandicam AVI files in QuickTime smoothly, with no choppy video! I was also able to import such an AVI file into Final Cut Express. Very nice! That means I can use Final Cut Express to work with ANY Bandicam capture, including my next set of Minecraft videos if I feel like trying my luck with this.

Digging a bit deeper, I found out that the MPEG4 files causing issues with Final Cut Express are the ones from my Android phone. They are encoded as H.264 video with AAC audio, which is quite expensive to decode. This explains why I was getting choppy video yesterday! Moreover, I found out that my new Nexus 5 creates 1080p videos, which really doesn’t help my poor old MacBook Pro! I still have to check the encoding coming out of my GoPro, which also produced choppy videos on my Mac, but again, 1080p alone definitely doesn’t help.

I tested with a video from my Sony Cybershot digital camera: MPEG4 encoding in 720p. It played smoothly in QuickTime and imported into Final Cut Express!

That means I can, at least in theory, do something with Final Cut Express now! I expect the machine will soon prove too slow for any non-basic editing, especially with the HD material I intend to throw at it (even 720p may hammer it to death), but at least I will have a way to try out Final Cut Express. If the tool turns out to be compelling enough, I will have to choose between putting my Hackintosh back up or purchasing a newer Mac.


Desperately slow

This morning, I tried to install Razer Synapse on my Mac, now equipped with Mac OS X 10.7 (Lion). The installation worked very well but brought absolutely NO improvement in mouse smoothness. In fact, it does absolutely nothing: no compensation for or removal of the unwanted acceleration, and no handling at all of the side buttons. The program seems to manage only keyboard macros, completely ignoring the mouse. I thus had to reinstall SmoothMouse.

Then I started having issues with the mouse wheel. Sometimes it wasn’t working at all, sometimes it scrolled in the wrong direction, sometimes it seemed correct after a couple of attempts. I found an option in Mouse preferences to invert the scrolling direction and unchecked it. That may partly explain the erratic behavior.

The biggest disappointment came from performance. Boot-up takes more than a minute and a half, and no video playback of any kind is possible. I tried YouTube with Flash, then with HTML5: choppy video with correct sound. I tried playing MPEG4 videos with QuickTime: choppy video with correct sound. It seems I would have to convert all my videos to QuickTime, probably at reduced resolution. This is kind of pointless and very time consuming. In fact, QuickTime won’t play AVI files either: again, transcoding is required. That can of course be worked around with any good video player such as VLC, MPlayer, maybe even Kodi/XBMC, but I slowly but surely came to the conclusion that ALL Apple applications need to be supplemented or replaced with a cross-platform alternative that works just as well, or even better, in a non-Mac environment!

Final Cut Express, while claimed in the documentation to support many input video formats such as MPEG4 and AVI, happens to support only QuickTime .mov files. As far as I have read, QuickTime is a container: a .mov file can contain different kinds of video and audio streams. In practice, though, I found no tool capable of simply repackaging an MPEG4 or AVI file into a QuickTime movie: tools will slowly, and wrongly, re-encode the video and recompress the audio into some probably proprietary, unknown, undocumented, Apple-specific format. Maybe only trial versions of commercial tools can do the repackaging, if it is possible at all.

Final Cut Express miserably failed at importing MPEG4 files from my Android phone, bailing out with a useless error message. I tried AVI files captured with Bandicam with a bit more success: Final Cut Express imported the clip and could preview the sound, but displayed no video.

More and more often, system response became unacceptable. The mouse became choppy, literally jumping from one place to another. Sometimes the movement was smooth, sometimes the mouse would jump ahead, sometimes it would stop moving. When that happened, I had to shut down applications, which took up to THIRTY seconds!

After that, the keyboard stopped working properly. It started with CTRL-F2 not opening up the menu bar; I had to change the binding to CTRL-F10, for no obvious reason. That worked for an hour or two, then ALL F1-F12 keys on my Razer external keyboard stopped working, for no reason, with a reboot required to fix it! The USB stack probably has quirks that cripple non-Apple devices while making people believe they CAN work. So I would have to pay 50$ for an Apple keyboard, 70$ for an Apple USB trackpad or mouse, and find some place to install all this, or move all my stuff around when switching from my PC to my Mac.

Conclusion: this machine is almost unusable. I REALLY would have to downgrade to Snow Leopard, probably even better to Leopard, to do anything with it. And even then, what’s the point of using Final Cut Express if I need to preprocess all my input files into QuickTime, losing quality in the lossy conversion, and then let Final Cut Express scramble the video information once more during its rendering phase?

From Snow Leopard to Lion: a leap into limbo

Again, I was a bit stuck with my Mac. SmoothMouse was helping with mouse handling but not fully solving the acceleration issues. Sometimes, the pointer just goes away. I cannot believe I lose it so often: it just disappears! I often have to bring the pointer to the top left corner two or three times per mouse action! The mouse wheel also scrolls super slowly, making it almost useless; I have to give it a hard swing for scrolling to begin. Besides this very frustrating mouse response, the side buttons are just not working. Short of trying all the proprietary solutions, probably purchasing them one after the other only to figure out that none of them works correctly for me, there was one more thing to try: Razer Synapse. But it required Mac OS X 10.7, codenamed Lion.

Moreover, on November 28th 2014 I tried to synchronize my Google contacts with the built-in Address Book application, again without success. It took me almost fifteen minutes to figure out that the instructions I found were inaccurate: they referred to a sync menu item, while the command to sync was an ICON in the top right area! The icons are so tiny that I don’t keep watching that area for something new to appear. The sync icon was there: I had to click on it and select the command to synchronize now. After all these efforts, I got no result. Again, no sync.

I was also hoping that an updated Mac OS X version would give me better chances of importing video files into Final Cut Express without transcoding everything to .mov files. I read that Final Cut Express uses the QuickTime stack built into Mac OS X, so a newer Mac OS X implies a newer QuickTime stack.

Last weekend, I decided to try my luck with this upgrade, resolved to stop using that dumb machine if it failed or made the system unusably slow. Easy, I thought: just go to the Mac App Store, find Lion, buy and download. No. No more Lion app, just Mountain Lion! Grrrr, AGAIN! I searched on Google once more and found that Lion can be purchased from the Apple Store. Again… Will I really have to wait for a new shipping?

Not this time, as I found out. Purchasing Lion gives you a content code to download the product. However, the code came only two days after the purchase. What? Does that mean there is a manual process to issue the content code? Quite bad design, once again. I also found a forum where people had waited ten days for the code without success. They had to phone Apple (not even email support) to get the email resent, the resend didn’t happen, they had to phone again, etc. I found some downloads of Lion on Kickass that I could have tried my luck with, but they were images for VMWare and VirtualBox, so I would have ended up downloading gigabytes of data for nothing before finding a correct Lion installation!

Fortunately, I got two emails about an Apple License Agreement from Apple Volume Licensing. The first message carried a PDF attachment and indicated that the password would come in a second email. The second email provided the password. Had I not looked at these emails in depth, I would have thought they only contained a somewhat useless license agreement, not the content code I really needed. I think some people made that mistake and just ignored or deleted the emails. This may explain why some waited ten days for the content code, contacted Apple, got the email resent, waited, got nothing, etc.

The content code WAS in the PDF file! I needed the first email for the PDF and the second for the password; once the PDF was open, the code was there along with the agreement!

Back on my Mac, I plopped the code into the Redeem code section of the App Store. That didn’t work at first, because I had to accept the new App Store conditions. I reentered the code, then it worked and Lion finally started downloading! Before letting the beast install, I made a backup copy of the installation application.

The installation took almost an hour, but at least it worked. It gave me Launchpad as well as iCloud integration. Synchronization of the address book now works, and I presume I will be able to install Synapse now. However, the machine has been significantly slower since I upgraded it.

I’m slowly losing interest in exploring the Mac platform. It seems that Mac OS X is now behind Windows in terms of maturity and responsiveness. Apple made the exact same mistake as Microsoft a few years ago: stacking more and more useless functionality on top of a core, assuming that faster CPUs and more memory alone will compensate for the lack of judgment of software designers and programmers. I’m quite annoyed to pay for that mistake once again. I know Microsoft has fixed this since: my Pentium D bought in 2006 started with Windows XP and evolved to Windows 7, while my Core i7 went from Windows 7 to Windows 8 without significant performance degradation (stability and compatibility are other stories…). Maybe this is fixed in Mac OS X 10.9 or 10.10, but my machine just cannot run those versions, so I would have to revert to my brittle Hackintosh to continue my exploration of the Mac world.

Groovy and Eclipse incompatible in practice

This week, I tried to get my Groovy+Eclipse setup back up and running, because I needed to debug a complex script that was outputting 15-20 line stack traces full of useless information about Groovy internals. Finding the exact error message took me several seconds each time, and no matter how hard I tried to fix things, I kept getting one error after another. A step-by-step debugger was more than welcome.

I looked once more at the Groovy web site for the Eclipse 4.4 plugin. Up to last Monday, the release was supposedly imminent. Google searches suggested that the special STS bundle of Eclipse 4.4 provided Groovy integration. Wow, will I really have to install a special version of Eclipse to get Groovy now? I didn’t want to get STS, which would give me Grails, Spring, etc., which I don’t need.

But last Monday, the release was there. I tried to install it, but found it couldn’t install because of a compatibility problem with some of the features in my environment. It is possible that the plugin works only with the original version of 4.4, not SR1. I tried a couple of times without success, then fell back on the development build. That one installed. I’m not sure it will be stable, but it was better than nothing. However, just as in August 2014, I was consistently getting the following error message:

Conflicting module versions. Module [groovy-all is loaded in version 2.2.1 and you are trying to load version 2.3.6

No matter how hard I searched for a solution, there was NO result. I opened my Maven POM file in Eclipse, pulled up the dependency hierarchy, searched for Groovy, and found only one groovy-all dependency. Nothing other than Groovy 2.3.6 was pulled transitively from other dependencies. Every Google search yielded results about Grails. Some people had fixed the issue by removing the GROOVY_SUPPORT container from the build path; well, there was no GROOVY_SUPPORT in my build path.

I then thought about running a simple Java program in the same project as my Groovy script and printing the class-path obtained from System.getProperties(). Looking at the JARs in the class-path, I found no Groovy dependency. From my dummy program, I then tried to call my Groovy script: the script is compiled just like a regular class with a main method, which I called… with success!
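The dummy program amounts to very little code. Here is a plain Java sketch of the idea (the class name and the Groovy-flagging line are my own additions for illustration):

```java
// Dump the runtime class-path, one entry per line, flagging any Groovy JAR.
// This is the kind of check that shows which groovy-all version, if any,
// the launch configuration actually puts on the class-path.
public class ClasspathDump {
    public static void main(String[] args) {
        String classPath = System.getProperty("java.class.path");
        String separator = System.getProperty("path.separator"); // ":" or ";"
        for (String entry : classPath.split(java.util.regex.Pattern.quote(separator))) {
            String marker = entry.toLowerCase().contains("groovy") ? "GROOVY -> " : "";
            System.out.println(marker + entry);
        }
    }
}
```

Running this from the same Eclipse launch configuration as the script makes any unexpected groovy-all JAR stand out immediately.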

I then found out that you can execute a Groovy script as a regular Java Application in Eclipse: no need to run it as a Groovy script. The script runner most likely involves a wrapper with a Groovy 2.2.1 dependency, built into Eclipse’s Groovy plugin. Running the script directly as a Java class avoided introducing the Groovy 2.2.1 dependency into the class-path alongside Groovy 2.3.7, so I got past the hurdle and could debug my code more easily! Along the way, I updated to Groovy 2.3.7 to match what Groovy Eclipse was using.

The newest version of the Groovy plugin behaved a bit better, allowing me to use conditional breakpoints a couple of times, as opposed to the previous version, where that consistently failed with compilation errors.

As for the complex stack traces, there is a simple solution: the StackTraceUtils class has methods to sanitize stack traces, removing frames related to Groovy internals. I just wrote a small uncaught exception handler and registered it as the default handler, and tada, more compact exception traces! Of course, I have to do this in each of my scripts, but that’s just a one-liner since I put the registration code in a utility method.
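In Groovy, the sanitizing itself is a call to StackTraceUtils.deepSanitize; the handler can be sketched in plain Java like this (the package prefixes are examples of typical Groovy-internal frames, not an exhaustive list, and the class name is my own):

```java
import java.util.Arrays;

// Default uncaught-exception handler that drops internal frames before
// printing, mimicking what Groovy's StackTraceUtils does for stack traces.
public class CompactTraces {
    // Example prefixes of frames to hide; adjust to taste.
    static final String[] NOISE = {
        "org.codehaus.groovy.", "groovy.lang.", "sun.", "java.lang.reflect."
    };

    static StackTraceElement[] sanitize(StackTraceElement[] frames) {
        return Arrays.stream(frames)
                .filter(f -> Arrays.stream(NOISE)
                        .noneMatch(p -> f.getClassName().startsWith(p)))
                .toArray(StackTraceElement[]::new);
    }

    // The "one-liner" each script calls: register the default handler once.
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            error.setStackTrace(sanitize(error.getStackTrace()));
            error.printStackTrace();
        });
    }
}
```

With CompactTraces.install() called at the top of a script, any exception that escapes to the top level prints only the frames from my own code.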

Audio driver conflict causing slow startup

For at least two months, Windows 8.1 startup has been awfully slow. From UEFI boot to login screen takes a few seconds, which is perfectly reasonable and what I expect from a Core i7 computer equipped with an SSD. However, after typing my password to log in, it took between 30 and 45 seconds to reach the desktop! This almost wiped out the benefit of the SSD, making the total boot time as long as with a regular hard drive and an older CPU.

I was getting annoyed at every startup, but at least the machine responded correctly afterwards. But yesterday, this combined with another issue: each and every Metro application now closes instantly, and NOTHING suggested in forum posts fixes it; the only solution seems to be completely resetting my user account.

I thought the slow login time was also linked to my user account and was starting to feel ready to create a clean account and start from there. Most applications should survive this process; only some settings would be lost. My documents live on a second hard drive, which wouldn’t be affected. However, I found out that the slow startup also happened with a new user account! This was thus a system issue, probably yet another program that had installed malware without me noticing.

Searching on forums, it seemed I would now have to hunt for and install many different anti-virus and anti-spyware programs and regularly create hard drive images. I never had to go through this while using Windows 7. What changed in Windows 8.1? Is the system now so unstable that I would need to regularly restore from a hard disk image? This is real nonsense!

Deeper analysis seemed to link the slow startup to the CPU! There was little hard disk activity during the long login, and the CPU fan seemed to speed up. I was quite shocked at this. I have a Core i7 CPU that works very well under Ubuntu. Why would Windows 8.1 suddenly require something more? It was working six months ago, starting quickly. My Nuance ultrabook, which also has a Core i7, starts at normal speed. My personal ultrabook, equipped with a Core i5, starts in a reasonable time as well. Both machines run Windows 8.1 and are newer than this 2012 Core i7, but they are NOT significantly more powerful! Do I really have to accept that there is a time bomb in Windows that triggers after some time and starts marking some CPU/motherboard combinations as arbitrarily obsolete? This makes little sense, but I was slowly but surely drifting to that conclusion. I was awfully disappointed, because it takes me weeks to shop for new computer parts, check compatibility with Linux, make sure I won’t hit a dead end with the motherboard, assemble the thing, test, solve issues, etc. I didn’t want to go through all that again just because Microsoft decided I had to!

I didn’t want to perform a clean install, because it takes too much time. It would break my GRUB configuration, which Ubuntu provides no simple way to restore (each time, I need to search for more than fifteen minutes and apply a manual procedure), would require installing multiple drivers, each with a reboot, and I wasn’t sure Ableton Live would reauthorize correctly. But I was starting to feel ready to try this reformat during the Christmas holidays, because the situation was becoming too annoying.

I thought about purchasing another system and moving the Windows 8.1 part of my current configuration onto it, leaving only Ubuntu on the original machine. That would remove the issue of Windows breaking the GRUB setup, at the cost of more money and space.

I also had an issue with my audio system. Yesterday, I started Ableton Live and opened a Live set I got from my friend. I got an error message, because my Ultranova synthesizer was set as the ASIO audio interface but wasn’t powered up. Instead of turning it on, I switched to the M-Audio ASIO driver, so sound played through my main Fast Track Pro interface.

Things went well… until I closed Live. After that, for the third time, sound stopped working. The M-Audio driver gets corrupted and stops working; restarting the machine does nothing, I really have to reinstall the driver.

But after I removed this damned M-Audio driver and restarted the computer, I found that startup speed was back to normal! I reinstalled the M-Audio driver, because I need it for ASIO integration, and startup speed remained normal!

I thought about finding an audio interface more suitable for Windows 8, but it makes no sense to replace a working product just because Windows misbehaves with something that USED to work! I thought about downgrading to Windows 7, but I HATE its low contrast between selected and unselected items and the lack of any way to fix this without disabling Aero. I considered installing Live on a dedicated Windows 7 laptop (I’m tired of assembling computers, and when purchasing preassembled PCs, you usually get to choose between cheap desktops, reasonable laptops or high-end desktops with flashy gamer-centric cases). But the new machine would have little storage, so I would need some kind of NAS for my systems to access common storage. This was becoming an endless stack of problems! I thought about using my Ultranova as the sound card under Windows and the M-Audio under Linux (because the Focusrite-based audio chip in the Ultranova does not work under Ubuntu), but that would mean endless reconfiguration when switching between the two OSes.

But today, things still work. It seems I simply must avoid using the Ultranova ASIO integration in Live and things will keep working. As a workaround for this uncommon issue, I connected the S/PDIF output of my synth to the S/PDIF input of my audio interface using a plain basic RCA cable (nothing fancy), turned on S/PDIF output on my Ultranova, and got the digital audio through the S/PDIF input of my audio interface. This has the added benefit of letting Live record BOTH from S/PDIF and from the main input jacks of the audio interface. Using this new configuration, I was able to record four tracks at a time: two from my synth through S/PDIF, and the two outputs of my Korg EMX. The S/PDIF input’s volume is low for reasons I don’t know, but at least that can be worked around in Live.

I hoped this would also solve my Metro issue, but no, the problem persists. I tried a lot of things without success. I will thus need to reset my user account if I really need these kinds of useless Metro apps. At least startup is now normal, so I won’t have to reformat to fix it!