Technical analysis

Many ways to create the same thing

One important issue in software development is that problems can be solved in many ways, and not all solutions are equal. Some solutions cannot be maintained, because some developers cannot grasp how they work and insist on switching to something else. Some solutions cannot be deployed, because the chosen providers don’t support them. Some solutions don’t adapt to the customers’ requirements, or to new requirements coming after the fact.

The following table summarizes problems that happen again and again in software development.

Developers: limited knowledge of available solutions; insist on using the solutions they know and have others adapt and rewrite; narrow understanding of requirements.

Customers: implicit requirements taken for granted; limited tolerance for errors in interpreting requirements; limited tolerance for delays.

Hosting providers: limited set of platforms; too high a cost for some customers; vendor lock-in.

Problems faced by software development

On one hand, developers know about some solutions and not others. They do their best to understand requirements and design something that works, but they cannot know everything. Usually they are constrained by time, so they prefer to go on with what they know rather than spend countless hours learning a new technology just for a new project. This is especially problematic when new developers, with different backgrounds, join a team. Each developer knows about different solutions and cannot afford to adapt to something new, so each will insist on moving everything to what he knows! This can slow down a project and even stop it altogether.

On the other hand, customers present requirements in a shallow way, taking many things for granted. When something taken for granted is not implemented, the usual result is a negative reaction that makes developers upset, guilty and urged to fill the gap. Any additional delay is seen as a catastrophe that threatens the project, the business relationship with the customer and sometimes even the position of the affected developers. This results in more and more hacking and patching, reducing the quality of the solution and causing further requirement mismatches, bugs and disappointment, for both customers and developers.

Then the provider deploying the solution imposes constraints as well. Sometimes the platform is limited to a few technologies the developers may even be unaware of. Other times the deployment cost is prohibitive for the customer. Some providers, especially cloud services, have specific terminology and system architectures that make it hard to switch from one provider to another without major refactoring, unless these problems are known in advance.

This article illustrates these problems using a very simple use case: a web application showing a clock to the user. There are many ways of implementing a clock, and each solution has benefits and drawbacks. Choosing the wrong clock implementation is likely to require a rewrite of the whole system. While reading about this use case, think about how the complexity of real-world projects such as GMail, Facebook, Netflix or Amazon is orders of magnitude greater than that of a clock.

Initial requirement

Suppose we want a webpage showing an analog clock to the user. The clock needs to reflect the current time.

Static web page

The simplest way of implementing the clock is a static Web page, an HTML file that loads an image showing the clock. This works in any browser, even the oldest. A naïve developer may think the problem is solved. But how about showing the current time?

For this, one possible solution is to create one image of the clock for each position and enhance the HTML page with a piece of Javascript code that loads the correct image based on the current time. There will be many images, one for every minute of every hour, namely 720 images.
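The image-picking logic itself is tiny. Here is a sketch in Javascript; the naming scheme (clock-HH-MM.png) and the img element id are assumptions made up for the example:

```javascript
// Pick the clock image matching a given time, assuming images are
// named clock-HH-MM.png (this naming scheme is made up for the example).
function clockImageName(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return `clock-${pad(date.getHours() % 12)}-${pad(date.getMinutes())}.png`;
}

// In the page, something like:
//   document.getElementById("clock").src = clockImageName(new Date());
```

The hard part, of course, is not this function but producing and maintaining the 720 images it refers to.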

Some developer will go over the top, creating all 720 clock images in a graphics program, managing to come up with a naming scheme and implementing the piece of Javascript code that picks the right image based on the current time. Then guess what? The customer will look at this and complain: he wanted three needles, not just two: one for the hours, one for the minutes, one for the seconds. So we end up with 43200 clock images to create! Even worse, the customer dislikes the layout, so the initial 720 images need to be revised anyway.

Then, after some thought and discussion, developers will agree on the need for a program that generates all these images. This could be done ahead of time, but why not have the HTML page call that program itself? This is perfectly doable, by replacing the static image with a web service that returns a dynamic image.

That looks simple, but there are many ways of implementing that backend service: a CGI script (the old way), PHP (another old way); others will propose rebuilding everything with Django, and cloud developers will propose serverless technology such as AWS Lambda. After the programming language is determined, there are still several libraries to choose from for creating images.

While developers argue about which backend service to use, the customer discovers with great surprise that the clock image appears tiny on his 4K monitor. Who thought somebody would use that page on a high-resolution display? Bitmap formats such as JPEG, GIF or PNG, with their fixed resolution, won’t do the trick. The system needs to serve a vector representation of the clock, most likely SVG. Ah no, that doesn’t work with certain old browsers! Are we sure the customer doesn’t use such an old browser? We can ask; he will probably say no, without even being sure he understands the question, and then it turns out he has an old PC stuck with Windows XP and an old Internet Explorer that doesn’t support SVG. But that will surface after the fact, once the SVG backend is in place!

We can discuss the backend service forever, only to figure out that the customer wants to deploy on something supporting only an old version of PHP or, even worse, that we forgot something: the clock needs to tick with the current time, not just render the time at which the page was loaded! While this seems intuitive, developers overwhelmed by technical details may lose track of this obvious fact. A clock ticks, needles move!

Some hacker will come in, proposing to just refresh the whole page every second. This will result in something ugly that flickers. The customer may agree to this on paper, but will of course reject the solution when he sees the flickering result!

At this point, it becomes obvious we are in a dead end. A static page won’t do it; we need something more dynamic.

Dynamic web page

There are nowadays many ways to render a clock dynamically on a web page using Javascript code. One developer will write code generating SVG to draw the clock and needles, another will prefer the HTML5 canvas (and will have to argue with the one who prefers SVG because of his background or former job), another will propose a library such as Raphael instead of built-in SVG or HTML5, and another may even propose Flash, even though it is deprecated, because supposedly it is easier and more intuitive and works with more browsers.
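Whichever rendering technology wins, the underlying geometry is the same: converting a needle angle into coordinates. A sketch of the canvas variant, where needleTip is pure math and drawNeedle is browser-only code (the canvas element and its size are assumptions for the example):

```javascript
// Convert a needle angle (degrees, 0 = 12 o'clock) and length into the
// end-point coordinates of the needle around a center (cx, cy).
function needleTip(cx, cy, angleDeg, length) {
  const rad = (angleDeg - 90) * Math.PI / 180;
  return { x: cx + length * Math.cos(rad), y: cy + length * Math.sin(rad) };
}

// Browser-only sketch: draw one needle on a 200x200 canvas context.
function drawNeedle(ctx, angleDeg, length) {
  const tip = needleTip(100, 100, angleDeg, length);
  ctx.beginPath();
  ctx.moveTo(100, 100);
  ctx.lineTo(tip.x, tip.y);
  ctx.stroke();
}
```

The SVG camp would keep needleTip as-is and emit line elements instead of calling a drawing context; the math does not care about the argument.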

Then how about having the clock move? Some people will advocate directly manipulating the document model using Javascript, but that won’t work portably across browsers. Most developers thinking about this solution will immediately shift to JQuery, a widespread library for manipulating a web page dynamically in a way that is portable across browsers. Then a developer with a background in data processing will propose the D3 library to make the needles move. JQuery and D3 work in very different ways; they implement different paradigms. Switching from one to the other is not so obvious, but the D3 developer expects the seasoned JQuery programmer to use D3, and the JQuery developer would like to get the D3 guy onto JQuery.

Then seasoned frontend developers, or people who want to become one, will go for the heavy lifting, proposing frameworks such as Angular JS, React, VueJS, etc. Yeah, just to render a clock! Such frameworks will do the job, at the expense of a lot of complexity, but they can do much, much more. A React-based page showing a clock could allow customizing its style on the fly, and the modified style would apply immediately, without flickering. If one decides to add multiple clocks at different time zones on the same page, that will be possible without breaking everything.

However, no developer knows all frameworks. A newcomer will likely know something different and start advocating a switch to what he knows. “I don’t know much about React, but that seems easier to do with VueJS.” Another will prefer Ember, yet another AngularJS, etc. All of these frameworks will bring development to a stop if each developer insists on using his own.

“Modern” web application

Maybe I should not write “modern”, because by the time this article is read, this may not be modern anymore. In the previous section about dynamic web pages, what happened to the backend? Well, it is not needed anymore. Everything can happen on the client side. With Javascript and HTML5, web pages can now do much more work than before.

One reason a backend may be necessary is persistence. Suppose we want to allow customizing the clock and saving the settings to a user profile that would be reused on different devices. This simple requirement makes the project a lot more complex, requiring persistent storage (thus a database) and some kind of identification system to know which profile to load. For clock settings, authentication is probably overkill, but it is a necessity for most real-world applications, and that is a huge can of worms.
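Stripped of the database and authentication debates, the persistence layer reduces to very little. A sketch with an in-memory Map standing in for a real database; the userId parameter, the settings fields and the defaults are all made up for the example, and in a real system the userId would come from the authentication layer, which is precisely the can of worms:

```javascript
// In-memory stand-in for a real database of per-user clock settings.
const profiles = new Map();

// Merge new settings into a user's stored profile.
function saveSettings(userId, settings) {
  profiles.set(userId, { ...profiles.get(userId), ...settings });
}

// Load a user's profile, falling back to defaults for unknown users.
function loadSettings(userId) {
  return profiles.get(userId) || { handCount: 2, style: "plain" };
}
```

Swapping the Map for MySQL, PostgreSQL, MongoDB or Redis changes nothing in this interface; the arguments described below are about everything around it.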

Things become complex quickly, making it hard for developers to get it right. There are (too) many different solutions. One may prefer to write the backend service in Java, and Java offers tons of frameworks for this, such as Spring MVC, Jersey, etc. Another will use Python with Django, Tornado, Flask or something else. Yet another will lean towards NodeJS with Express or some other library. Another may prefer to keep using PHP, or even write something in C++! There are also many possibilities for the database system, such as MySQL, PostgreSQL, MongoDB, Redis, etc.

That doesn’t even take into account bold developers who don’t trust anything and absolutely want to implement everything from scratch, using the most basic stuff they can find. Let’s make this backend in C, or even in assembler, and have control over everything!

No matter what is chosen, it will work somehow and fail somehow. Developers will be stuck with the issues, and if they cannot find a solution, others will advise using some other technology.

While dealing with all these options and technical details, many will forget something essential: writing automated tests! While that seems unnecessary at first, it becomes fundamental as the complexity of the project increases. Automated tests help moving forward with reasonable confidence that what previously worked still works.

The shocking tendency to give up

By the time developers try to implement this stupid clock, discussing and arguing about the best frameworks, the customer gets nothing more than explanations, nothing concrete. At some point, he will just go to a store and buy a watch. Here is my clock. It works, it does what it has to do, it ticks with the current time. The idea was great, but it took too long and was too complex to implement, so giving up and going elsewhere is the way to go.

Another example of this could be a customer requesting a web application to collect data and, after some delays or failures, deciding to fall back on a simple Excel (or perhaps LibreOffice Calc) spreadsheet.


While having multiple solutions available is fine, too many can be worse than not enough. When there are too many options to choose among, not enough guidance, and errors require costly refactoring or even a rewrite from scratch, progressing is hard. We end up redoing the same thing over and over again. That many people implement the same thing over and over in silos, independently of each other, does not seem to help, but gathering everyone into one super-large team would cause endless argumentation bringing all development to a full stop.

Maybe developers should spend more time using what is there, rather than bending it to work the way they think it should or reinventing the wheel. Maybe customers should spend more time reflecting on their requirements and discuss more with developers to figure out the consequences of these requirements for the project. Maybe providers, especially web hosts, should stop sticking to obsolete technologies such as PHP and offer facilities to host modern web applications in addition to the old stuff that may still be needed for existing projects; or cloud providers should offer options allowing deployment of web applications at the same cost as simpler web hosting platforms (no virtual machine to run a simple NodeJS server, for example). Or maybe I am wrong, losing my mind because of all that is going on in the world.

Technical analysis

Not all USB C ports are equal

I kind of already knew this, but had not realized it until I got my new Flex 15 from Lenovo. This laptop comes with one USB C port, and Lenovo proposes a travel hub as an accessory. That hub fits into the USB C port, offering one USB 3.1 port, one HDMI output and one Ethernet port. Unfortunately, the hub won’t work, not because it is defective, but because it is not compatible with the Flex 15! This article tries to explain why and outlines some possible uses of the limited USB C port provided by that laptop.

USB C is the connector introduced alongside USB 3.1, completely different from its predecessors. USB 3 type A connectors fit in USB 2 and even USB 1 ports, but the USB C connector is not compatible with older USB ports.

The USB C connector looks as follows.

Example of a USB C connector

The port is also different, smaller than a USB 3 port, and the connector can be plugged in either way up. The female connector looks like this.

USB C causes quite a few problems, because it is too new and because not all ports are equal, with no clear way of determining a port’s capabilities. The main issues, for the time being, are compatibility with existing devices and the different variants of host controllers.

New connector = new devices?

The first problem is compatibility. As the connector is different, a USB C port accepts only USB C connectors, but most USB devices, at the time I’m writing this, are USB 3, not USB C. This won’t be a problem on most systems. At worst, the USB C port will just be a cool artifact that will never be used, a bit like the FireWire connector on some older laptops. However, there are some ultrabooks with just USB C ports, like Dell’s newer XPS models. I was quite shocked to see this and thought a user of such a machine would need to buy all new devices. Fortunately, that’s not the case, as I found out later.

Then, what can be done with that USB C port? First, a couple of devices have a USB C port and come with a USB A to C cable. You can use that cable, or purchase a USB C to C cable, to hook the device up through the new USB C port. Examples of such devices are newer Android phones like Google’s Pixel 2 and 3, V3 WASD CODE keyboards and ROLI’s BLOCK and Seaboard.

I also found adapters turning a USB C port into a USB A; they look as follows.

A USB C to A adapter

Such adapters allow using USB C ports like regular USB 3.1 ones, pretty much adding USB ports to the laptop.

Hubs also exist, but that’s where things become tricky and quite annoying. Some USB C ports are not compatible with all hubs!

Variants of host controller

The capabilities of a USB C port depend not on the physical connector but rather on what links it to the system’s motherboard: the host controller. As far as I know, there are three variants of such controllers.

  • Thunderbolt 3. This is the most powerful and versatile port. A Thunderbolt 3 port can be used as a regular USB 3.1 port using an adapter, can carry display information and can transfer enough power to charge most laptops. You’ll need a hub or docking station to expose all this; the connector itself won’t allow it. Thunderbolt 3 also allows the transmission of PCI Express lanes through a cable, pretty much offering extensibility to a system: you can, for example, hook up an external graphics card or a high-performance hard drive. The problem with Thunderbolt is that the previous standard relied on a different mini-DisplayPort connector, and many devices are merely claimed to be Thunderbolt-compatible; you won’t know for sure whether a device is Thunderbolt 2 or 3 unless you look very carefully at its pictures, search forums or email the vendor, and even then you may get a Thunderbolt 2 device while expecting a Thunderbolt 3 and have to return or exchange it. Quite annoying. A device explicitly claimed to be Thunderbolt 3 compatible should be fine though.
  • USB C with display and power. The port can be used for regular USB 3.1 devices (with an adapter or hub) or can carry display and power, enough power to charge a laptop through just that small USB C connector. Hubs providing HDMI output can be used with such ports. Docking stations also exist and can be used to charge a laptop and extend its display capabilities with one or two extra ports. Docking, which used to rely on proprietary connectors, is now achievable with a generic, smaller, reversible connector. That’s quite amazing, and a USB C docking station should work with any laptop supporting USB C with display and charging, PC or Mac!
  • Regular USB 3.1 only. USB C doesn’t require support for display and power (Thunderbolt 3 does), so some vendors ship limited USB C ports that can only be used with adapters or hubs providing plain USB C or 3.1 ports! Trying to hook up a hub or docking station with display or charging capabilities on such a port will likely fail. Windows will report that the device malfunctioned, making you think it is defective. But it is not; it is the port that is limited. Not broken, though: it will still work with USB 3.1 devices (with the proper adapter).

Lenovo’s Flex 15 has such a limited USB C port. Hooking up the travel hub, offered as an optional accessory while purchasing the laptop, will always fail. The hub wasn’t defective: I tested it on my work laptop, which has a USB C port with display support, and it worked. But not on the Flex 15. All that can be done with the hub is an RMA. Sad but true.

Technical analysis

Why do we still need passwords when we have biometrics?

It happens more and more often that I try to log in somewhere and my password doesn’t work. Many sites require an account, with different rules for user names and passwords. Using the same password everywhere only partly alleviates the issue, and can increase security risks. Password reset procedures differ from one site to the other: most of the time, an email with a link is sent. Sometimes a new temporary password is sent by email. More rarely, it is required to make a phone call to complete the reset.

Another trend that is more and more annoying (for me at least) is asking for security questions picked from a finite list. These questions are often too subjective, and the right answer for me may change over time. I don’t really have a favorite color, my nicknames can be spelled different ways, I’m not totally sure of the spelling of my childhood best friend’s name, etc. Some questions are just plain stupid and don’t apply to me. Being visually impaired, I cannot and will never be able to drive (unless autonomous vehicles become available in the future), so asking for the make of my first car is just silly. Instead of spending five or more minutes looking at the list, I got fed up yesterday, picked a random question, noted it down somewhere, and stuck in a silly response, noted down as well. This is a pointless loss of time and energy.

Password managers such as KeePass or LastPass can help alleviate this. However, they either introduce extra steps to log in (start KeePass, enter the master password, find the site entry, copy, paste), or are not available for all platforms. The commonly excluded platform is Linux, which is quite a shame. Mobile devices are also left with no solution, at least nothing free. Moreover, these solutions don’t free the user from entering, creating or regenerating passwords at registration time.

Instead, we should just get rid of passwords altogether. The initial idea for doing so comes from how SSH private key authentication works. SSH is used to establish connections to servers, opening a terminal in which commands can be entered. Instead of setting a password to log in somewhere, SSH can generate a pair of keys: a private part that remains secret, and a public part that is sent to any number of servers one would like to connect to. SSH manages to authenticate the user without ever transferring the private key. This is done using a cryptographic algorithm called RSA: any message encrypted with the public key is easy to decrypt using the corresponding private key, but extremely computationally expensive to decrypt without it. The SSH server leverages RSA by using the user’s public key to encrypt a challenge message that only the corresponding private key can decrypt; the client then sends back the response, possibly encrypted using the server’s public key.

The problem is that this private key can be lost or compromised. When this happens, it needs to be regenerated. It is possible to protect the key with a password, which decreases the risk of it being compromised if stolen. If the same key pair is reused in several places, at least there is just one master password. But that password can be forgotten, of course.

But what if that private key were part of the user’s biology: fingerprints, voice print, eye scan or, even more far-fetched, a subsequence of DNA? The key cannot be lost, unless the user is harmed or killed. But there is a small problem: the key cannot be replaced if it is compromised. Well, I thought about the possibility of encoding the key in an unused portion of DNA, but the mere idea of having some piece of hardware screwing up the DNA is kind of freaky. It is better to observe some biometric aspects without altering them.

Well, how about using only a part of the biometric element? The exact part of the fingerprint, the exact aspects of the voice print, the exact subsequence of DNA, could be encoded in the public key and used to calibrate the scanner in a way that reconstructs the private key. Replacing the key is then just a matter of picking a new part.

This is just a case of multi-factor authentication. Models for this exist and are implemented by some providers. The problem is that nothing is standard or widespread.

Unfortunately, this thrilling idea won’t apply easily, and won’t standardize overnight. First, there needs to be a standard piece of hardware each and every device would be equipped with in order to acquire the biometric signature. That hardware must not be available only on new computers; it would have to be possible to install it on an existing machine, without complications, without forcing the user to upgrade or, even worse, change OS. A USB-driven scanner is probably the key here, but it would have to work on any OS: Windows, Linux, Mac. Yet the device maker will make it work just on Windows, maybe Mac, because there is no such thing as a universal device driver except for very simple devices like keyboards and mice. But well, if it is possible for those devices, why not a fingerprint scanner? Phones would need to be covered as well, since it is more and more common to authenticate from such a device.

The next issue is integration. Each and every service provider would have to agree on a standard and comply with it. That would probably start with the biggest players like Google, Microsoft, etc., but if small businesses do not join in, we will be stuck with yet another authentication method.

Ideally, a solution based on a service should be put in place. Anybody would be able to subscribe to this authentication service and use it in their own application, whether they are an individual developer or a small or large business. This is similar to popular platforms such as AWS, Heroku, etc.

It thus seems perfectly possible. Will this happen? I don’t know, but if we don’t think about it, it won’t.

Technical analysis

The downsides of SSDs

What’s the point of having an SSD if both Windows 8 and Ubuntu 15.04 introduce artificial timeouts that increase the boot time, making it equivalent to having a standard hard drive? Well, I’m there; I reached that point.

Windows 8 often boots fast from EFI to the login screen, but after I enter my password, it sometimes reaches the desktop in five seconds, sometimes hangs for 30 to 45 seconds. There is no obvious reason why, no way to track this down and no obvious solution other than deleting my user account and creating a new one. I cannot spend my weekends doing, redoing, redoing and redoing that. This is just pointless and inefficient! I could try to reinstall, but then I would have trouble reactivating Windows and reauthorizing Ableton Live, and would have to spend hours waiting for the manual installation of countless drivers and software tools. Ninite can help with programs, not with drivers.

Some time later, I found out that uninstalling and reinstalling the driver for my M-Audio interface fixed the slow boot. There seems to be a conflict between the M-Audio Fast Track Pro and Novation UltraNova drivers. Windows 10 also seemed to stabilize things a bit.

Ubuntu, most of the time, boots quickly. However, starting from 15.04, it was taking almost a minute from splash screen to login screen. I had to spend more than half an hour looking at syslog to figure out that the swap partition’s UUID had changed but the upgrade script didn’t reflect that in /etc/fstab. Several people repeat that we shouldn’t do dist-upgrades and should rather reinstall, but then why is there a dist-upgrade option in the first place? Fortunately, fixing the partition UUID in /etc/fstab restored my boot time.

These are not SSD-specific issues, but they make the SSD less useful. Another factor reducing the usefulness of an SSD is the never-ending size increase of OSes and applications, especially when dealing with virtual machines. This ultimately fills any SSD, requiring time-consuming reorganization of the layout (partition resizing, copying to a larger drive, etc.).

I don’t want to go backward, switching from an SSD to a hard drive, but practice seems to tell me I should. This is disappointing and quite frustrating.

Technical analysis

Functionality versus stability

Switching between multiple windows has always been an issue for me. The simplest solution is the Alt-tab key combination. That works pretty well when I need to switch between two or three windows, but as soon as there are more, it becomes difficult. Holding Alt and pressing Tab multiple times can help, but the order of the windows differs from time to time, so I have to check each proposed window one at a time until I get the right one.

I saw several people using the mouse for that, clicking on icons in the taskbar. This has always been a problem for me, because I was constantly losing track of my mouse pointer when reaching the bottom of the screen. A simple fix helped: moving the taskbar to the top of the screen! Still, switching windows with the mouse like this remained tedious.

What helped a lot is the possibility of grouping windows on workspaces, also called virtual desktops. I first discovered this functionality on Linux. I quickly wanted it under Windows, and found and tried several software tools attempting to emulate it.

Mac OS X also offers this possibility, but it is less useful, because the windows proposed by Command-tab, the Mac equivalent of Alt-tab, come from all desktops, so there is no grouping in practice, just some kind of visual illusion to unclutter the desktop. One workaround is to keep as few windows as possible on each desktop, then switch tasks using the Ctrl-arrow keys, forgetting about Command-tab. The switcher reachable with F9 could also be handy, but it would serve me better on a touch screen. Of course, Mac OS X has trouble supporting my touch screen because it is not manufactured by Apple!

I had been using Virtuawin for several years. That tool served me well, allowing me to group windows on virtual desktops, move windows from one desktop to another and configure the number of desktops. Internally, it seems to work by showing and hiding windows, which allows Alt-tab to work properly, showing only the windows on the current desktop rather than all the windows. This reminds me of another tool, a PowerToy from Microsoft, that worked by minimizing windows, so Alt-tab was showing windows from all desktops. I also remember deprecated tools I tried years ago that provided no keyboard shortcut to switch desktops. This is so shocking that I forgot those tools’ names and don’t want to remember them!

However, things started to go bad with Virtuawin on Windows 8; it may in fact have started on Windows 7 with Aero. The behavior of the tool became more and more erratic, causing me increasing frustration. It is possible that the problem comes from some particular programs rather than Virtuawin itself.

The first issue I ran into is loss of focus when restoring a desktop. Many times, after I switched from a desktop with a command prompt or an Emacs window to one running Firefox or Outlook, keyboard shortcuts stopped responding. It took me some time to figure out that Virtuawin didn’t restore the focus to any window on the desktop, keeping the focus on itself. Pressing Alt-tab to switch to the next window works around it. Of course, clicking with the mouse in the window also solves it. But this needs to be redone on each desktop switch. Sometimes the focus is set right, so if I press Alt-tab, the window disappears, because Windows 7 with Aero, as well as Windows 8, “smartly” proposes the desktop as a candidate when pressing Alt-tab.

The second issue, which happens intermittently, is Virtuawin popping up and asking me if I want to close it. This happens when I use Alt-F4 to close two or more windows one after the other. It used to happen only when no windows were left on a given desktop, but I have had the issue on a desktop with a couple of remaining windows. Again, it seems that the focus gets messed up and Virtuawin gains focus, proposing to shut itself down rather than intelligently handing focus to another window. There may be no good way to work around this, in fact.

A few weeks ago, I got so fed up with this erratic behavior that I got rid of Virtuawin. I tried several alternatives with no luck. Switcher, which would at least provide an alternative to Alt-tab, just doesn’t retain its settings and doesn’t run at startup. The default key was backtick, which is quite inconvenient on a Canadian French keyboard. Maybe this is because of UAC, but I just cannot accept turning UAC off, because that would allow any third-party application to start as administrator in the background, without any indication. I then tried Windows Pager, which was completely incapable of retaining window focus and emitted an annoying beep each time I switched to the desktop with Firefox. I worked for a week with VistaSwitcher instead of Alt-tab. This somewhat worked, but it was getting painful because of too many windows, including multiple Cygwin windows looking almost the same, with similar titles! I thought about Dexpot, but that one is free for non-commercial use only, and I wanted something I could use both at work and at home.

I then found Sysinternals Desktops. That tool, from Sysinternals, which was bought by Microsoft, seems to make use of some hidden APIs offering multiple workspaces natively under Windows. The tool works by creating multiple copies of the Explorer.exe process, each running as a desktop manager, and provides shortcut keys to switch between the copies. This seems to work very well, except that it provides no way to move an application to another desktop. The situation is aggravated under Windows 8 by the fact that Metro is involved in starting applications by name. More specifically, pressing the Windows key before typing the name of an application switches to the first desktop, where Metro is running, and pressing Enter then starts the application from there. The incorrectly started application cannot be moved after the fact. There is no way to work around this, except finding alternative ways to start applications from other desktops: putting everything on the desktop (unacceptable, I would not be able to find ANYTHING), pinning applications to the task bar (somewhat works for use cases with a small number of applications), launching applications from Explorer (time consuming to find the programs) or downgrading to Windows 7 (again!). I may try something like Launchy, which could allow me to efficiently start applications by name without involving Metro.

Shutting down Windows 8 after multiple desktops were created also makes the spare Explorer.exe copies crash one at a time with a cryptic error message. Fortunately, this doesn’t prevent me from turning off the computer cleanly. This didn’t happen on Windows 7.

I also experienced issues with clicking on URLs from applications like Outlook and Lync. When Firefox runs on a different desktop, the system doesn’t correctly handle the situation by instructing Firefox to open the requested URL. Firefox instead tries to start again on the desktop of Outlook or Lync, fails, and displays an error message indicating it is already running. I then have to manually copy/paste the link. At least in Outlook there is a Copy hyperlink option, so no need to manually select the link, but Lync offers no such option. I have no good solution for now, other than running Lync on the same desktop as Firefox.

Desktops is limited to four desktops, no more, no less. Sometimes it may be useful to have more. However, I found out that I have been wasting desktops for years! For example, I was keeping a desktop just for Lync. Instead, I just had to configure it to minimize to the tray instead of the taskbar, so Lync is not in my way when using Alt-tab but is still usable for the rarer cases where I need to initiate a chat. Moreover, I used to start a Cygwin window, SSH to my home computer and start Audacious from there to listen to music while keeping the playback controls handy. Why keep this useless terminal window when ssh -f can start Audacious and put SSH into the background, allowing me to close that extra terminal window? I tried to use Screen to further reduce the number of Cygwin windows needed, but that was causing issues, especially with Emacs.
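As a sketch, the ssh -f trick boils down to a one-liner; the host alias and the remote DISPLAY value are assumptions about my setup, so adjust them for yours:

```shell
# Start Audacious on the home machine, then let SSH fork into the
# background (-f) so the local terminal window can be closed.
# "home" is a hypothetical host alias; DISPLAY=:0 assumes Audacious
# should attach to the remote machine's running X session.
ssh -f home 'DISPLAY=:0 audacious'
```

With -f, ssh only goes to the background after authentication, so you still get prompted for a passphrase if needed.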

Despite its limitations, Desktops happened to be a better solution for me than anything else. The reason is that its behavior is deterministic: it works the same way every time! It has fewer features, but it is more stable. If later on that still doesn’t meet my needs, I will have to either downgrade to Windows 7, which has fewer issues with Virtuawin, work more in VirtualBox virtual machines with Ubuntu guests (which support virtual desktops), or try to get Dexpot on my work laptop.


Groovy + Maven + Eclipse = headache

Java is a general-purpose programming language that has matured over more than ten years. It provides a solid platform on which many third-party libraries (of varying quality and complexity, of course) were developed. Maven is one of several ways large Java projects can be described formally and built automatically. Maven manages dependencies a lot better than the traditional way of bundling everything together in a large archive, and it aims at simplifying and unifying the build process, although advanced configuration quickly drifts into XML nightmares. Then comes Eclipse, an integrated development environment. Although not perfect, very far from it, Eclipse has been a time saver for me, especially when it comes time to search large code bases and refactor code. Eclipse is designed for Java, and it has a plugin, called M2Eclipse, to integrate well with Maven. We can safely state that Java, Maven and Eclipse play well together.

Then comes Groovy, a language built on top of Java. Source code in the Groovy language is compiled into byte-code like Java is, and the byte-code can run under the same virtual machine as regular Java programs, except that Groovy programs need a set of runtime classes and the generated byte-code has more indirections compared to that generated by a traditional Java compiler. As a Java extension, we would expect Groovy to play well with Maven and Eclipse. Well, in practice, I found that not to be exactly the case.

I experienced what follows with Eclipse Kepler, Groovy 2.2 and Maven 3. Things may have been better with older versions, or may be better with newer ones; that remains to be seen.

Groovy and Eclipse, almost well but…

The first time you try to write a Groovy program in Eclipse, you will notice that there is absolutely no IDE support for the language. You won’t get any code assist, and Eclipse will not compile or run Groovy code for you. You will need to install an extension to get Groovy support: the Groovy Eclipse plugin. The plugin works relatively well, but it has a couple of annoying drawbacks.

First, code completion works in random, erratic ways. I sometimes get tired of it and turn it off. For example, I had a variable of type String. I knew it was a String, and the IDE had ways to know, because I declared the type of the variable in my code (in Groovy, you can use variables without declaring their type). However, when I was trying to get proposed completions for to, I was getting toUpperCase() but not toLowerCase(). This was completely arbitrary.

When running a Groovy script, the arguments in the launch configuration get prepopulated with a list of standard entries that you must not delete. If you want to pass your own arguments to your script, you have to append them at the end of what the Groovy extension inserted in the Arguments box, and you need to be careful not to delete the predefined entries when you replace your custom arguments.

Debugging Groovy code in Eclipse is like playing Russian roulette. Sometimes you can print the contents of a variable, sometimes you cannot; you don’t know when it will fail or why. Sometimes you can expand an object and see its fields; sometimes the + icon is not there and you cannot expand, again for no obvious reason. Execution may step into closures or may not; you don’t know, at least I didn’t. You can work around this by putting breakpoints in the closures, but when you step out of a closure, you end up in strange places within the internals of Groovy. Conditional breakpoints never worked at all, so I had to constantly pollute my code with insane if (some condition) println("Bla") statements and be careful to remove all the junk after I was done debugging.

Error messages are sometimes cryptic. If you are unlucky enough, you can even manage to get an Internal error from the Groovy Eclipse compiler! I was getting one in one of my classes and had to disable static type checking for that class to get rid of it.

On Monday, August 4th 2014, things went completely south after I upgraded my build to Groovy 2.3. Everything was working fine with Maven on the command line. Eclipse was compiling the code fine. I set up the project to use Groovy 2.3 and there was no issue. However, when running the project, I was getting the following runtime error.

Conflicting module versions. Module [groovy-all is loaded in version 2.2.1 and you are trying to load version 2.3.6

I looked at my POM file, analyzed Maven dependencies with both mvn dependency:tree and Eclipse, found no Groovy artifact except the 2.3.6 one, verified my PATH to make sure only Groovy 2.3 was on it, checked Eclipse preferences many, many times and restarted Eclipse several times, to no avail. There seems to be something in the Groovy Eclipse plugin hard-coded to Groovy 2.2, even when the compiler is set to 2.3!
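For the record, the dependency check can be narrowed down so only Groovy artifacts are listed; the -Dincludes parameter of the dependency plugin takes a groupId[:artifactId] pattern:

```shell
# Show only the Groovy artifacts in the dependency tree, to confirm
# that Maven resolves a single version of groovy-all.
mvn dependency:tree -Dincludes=org.codehaus.groovy
```

In my case this listed only the 2.3.6 artifact, which is why I suspect the plugin rather than my POM.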

Any search on Google yields results about Grails and Spring, as if nobody uses Groovy alone anymore, only with other frameworks. Nobody else seems to be having this issue.

Maven + Groovy = fire hazard!

Maven relies on plugins to perform its tasks, so the ability to build something with Maven depends on the quality of the available plugins. There is unfortunately no official, well-known, well-tested and stable plugin to build Groovy code with Maven. The page Choosing your build tool gives a good idea of what is currently available.

First I read about GMaven, but I quickly learned it was not maintained anymore, so I didn’t try it. Then I read that the Groovy Eclipse Compiler was the recommended one. I was a bit reluctant, thinking this was a piece of hackery that would pull in a bunch of dependencies from Eclipse, resulting in a heavyweight solution. But it was in fact well isolated and just the compiler; no need to pull in the whole Eclipse core!
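The setup boils down to a small POM fragment; the version numbers below are only illustrative, since the compiler adapter and the batch compiler are versioned separately from Groovy itself:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerId>groovy-eclipse-compiler</compilerId>
  </configuration>
  <dependencies>
    <!-- Adapter that plugs the Eclipse batch compiler into Maven -->
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-eclipse-compiler</artifactId>
      <version>2.8.0-01</version>
    </dependency>
    <!-- The batch compiler itself; its version determines which
         Groovy it compiles against, independently of the project's
         groovy-all dependency -->
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-eclipse-batch</artifactId>
      <version>2.1.8-01</version>
    </dependency>
  </dependencies>
</plugin>
```

That last point is important: the batch compiler carries its own Groovy, which is exactly what bit me below.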

The Groovy Eclipse Compiler worked well for a couple of months. However, yesterday, things went south all of a sudden. First, there were compilation errors in my project that would not show up in Eclipse but appeared when compiling with Maven. These were error messages related to static type checking. After fixing them, compilation went well, but all of a sudden, at runtime, I was getting a ClassNotFound error about ShortTypeHandling. I read that this class was introduced by Groovy 2.3, while my project was using Groovy 2.2. Digging further, it seemed that the Groovy Eclipse Compiler was pulling in Groovy 2.3 and compiling code against it, but the code was executed with Groovy 2.2. In principle this should not cause any problem, but it seems that in Groovy, byte-code is not fully compatible between versions!

I tried updating my dependency on the Groovy Eclipse Compiler in the hope that it would fix the issue. However, that traded my ShortTypeHandling exception for stack overflows. It turned out that the clone() method of one of my classes was calling super.clone(), which is perfectly normal. But Groovy was doing something nasty that caused super.clone() to recursively call clone() on my subclass! This resulted in an infinite loop causing the stack overflow.

I found this issue to be even more intricate after I tried to compile my code on JDK8 and found it to work correctly. In other words, the JDK was affecting how the Groovy Eclipse Compiler was building the byte-code! Under JDK7, something would corrupt the byte-code, causing the stack overflow errors, while under JDK8, everything went fine!

I then tried updating the compiler once more, to the latest and greatest. Things compiled, but I was back at square one with the ShortTypeHandling exception! So no matter what I tried, Maven was unable to build the project anymore.

I was about to give up on Maven and use a batch file to call Groovy directly, but that would have meant a lot of fiddling with the class path. I was not happy at all with this solution.

Then I found out about the GMavenPlus plugin. I tried it, and it worked like a charm! The plugin uses the Groovy artifact defined in the project’s dependencies rather than hard-coding its own version of Groovy. It uses the official Groovy compiler API rather than its own, so things get compiled the same way as with the Groovy Ant task or the standalone groovyc compiler. GMavenPlus saved my day yesterday, freeing me from a lot of hassle.
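A minimal configuration looks like this (the version number is just an example of what was current around that time); the plugin then compiles with whatever groovy-all the project itself declares as a dependency:

```xml
<plugin>
  <groupId>org.codehaus.gmavenplus</groupId>
  <artifactId>gmavenplus-plugin</artifactId>
  <version>1.2</version>
  <executions>
    <execution>
      <goals>
        <!-- Bind Groovy compilation of main and test sources
             to the usual Maven phases -->
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

No compiler-specific Groovy dependency, no duplicated version number: one less thing that can drift out of sync.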

Is it worth it?

I’m not sure at all. I ran into several problems with Groovy itself that would deserve a separate post. The integration difficulties with Maven and Eclipse make me believe it is better to just use Java directly. JDK8 introduced lambda expressions that fulfill part of what Groovy is trying to implement in its own special way. For projects that really need a true scripting language, there are already several of them, like Python, which is built from the ground up for scripting.
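For instance, a Groovy one-liner like names.collect { it.toUpperCase() } maps directly to a Java 8 stream; this small sketch shows the same transformation with no extra runtime dependency:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {

    // Equivalent of Groovy's names.collect { it.toUpperCase() },
    // written as a Java 8 lambda mapped over a stream.
    static List<String> upperCase(List<String> names) {
        return names.stream()
                    .map(name -> name.toUpperCase())
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(upperCase(Arrays.asList("alice", "Bob", "carol")));
        // prints [ALICE, BOB, CAROL]
    }
}
```

It is more verbose than the Groovy version, but it compiles with the plain Java compiler, which is the whole point.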