Technical analysis

Many ways to create the same thing

One important issue in software development is that problems can be solved in many ways, and not all solutions are equal. Some solutions cannot be maintained because some developers cannot grasp how they work and insist on switching to something else. Some solutions cannot be deployed because the chosen providers don't support them. Some solutions don't adapt to the requirements of the customers, or to new requirements coming after the fact.

The following table summarizes problems that happen again and again in software development.

Developers | Customers | Hosting providers
Limited knowledge of available solutions | Implicit requirements taken for granted | Limited set of platforms
Insist on using the solutions they know and have others adapt and rewrite | Limited tolerance to errors in interpreting requirements | Too high a cost for some customers
Narrow understanding of requirements | Limited tolerance to delays | Vendor lock-in

Problems faced by software development

On one hand, developers know about some solutions and not others. They do their best to understand requirements and design something that works, but they cannot know everything. Usually, they are constrained by time, so they prefer to go with what they know rather than spend countless hours learning a new technology just for a new project. This is especially problematic when new developers, with different backgrounds, join a team. Each developer knows about different solutions and cannot afford to adapt to something new, so each will insist on moving everything to what they know! This can slow down a project and even stop it altogether.

On the other hand, customers present requirements in a shallow way, taking many things for granted. When something taken for granted is not implemented, the resulting negative reactions make developers upset, guilty and urged to fill the gap. Any additional delay is seen as a catastrophe that threatens the project, the business relationship with the customer and sometimes even the position of the affected developers. This results in more and more hacking and patching, reducing the quality of the solution and causing further requirement mismatches, bugs and disappointment, for both customers and developers.

Then the provider deploying the solution imposes constraints as well. Sometimes, the platform is limited to a few technologies the developers may even be unaware of. Other times, the deployment cost is prohibitive for the customer. Some providers, especially cloud services, have specific terminology and system architectures that make it hard to switch from one provider to another without major refactoring, unless these problems are known in advance.

This article illustrates these problems using a very simple use case: a web application showing a clock to the user. There are many ways of implementing a clock and each solution has benefits and drawbacks. Choosing the wrong clock implementation is likely to require a rewrite of the whole system. While reading about this use case, think about how the complexity of real-world projects such as GMail, Facebook, Netflix or Amazon is orders of magnitude greater than a clock's.

Initial requirement

Suppose we want a webpage showing an analog clock to the user. The clock needs to reflect the current time.

Static web page

The simplest way of implementing the clock is a plain Web page, a static HTML file that loads an image showing the clock. This works in any browser, even the oldest ones. A naïve developer may think the problem is solved. But how about showing the current time?

For this, one possible solution is to create one image of the clock for each position and enhance the HTML page with a piece of JavaScript code that loads the correct image based on the current time. There will be many images, one for every minute of every hour, namely 720 images.
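As a sketch of that approach, the JavaScript side could look like this (the `clock-HH-MM.png` naming scheme and the `clock` element id are my own assumptions, not something prescribed by the text):

```javascript
// Pick the clock image matching the current time, assuming 720 images
// named "clock-HH-MM.png" (hypothetical naming scheme).
function clockImageName(date) {
  const hours = String(date.getHours() % 12).padStart(2, "0");
  const minutes = String(date.getMinutes()).padStart(2, "0");
  return `clock-${hours}-${minutes}.png`;
}

// Swap the displayed image; assumes an <img id="clock"> element in the page.
function refreshClock() {
  document.getElementById("clock").src = clockImageName(new Date());
}
```

The page would call `refreshClock()` on load, and the naming scheme is the only contract between the image generator and the page.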

Some developer will go over the top, creating all 720 clock images in a graphics program, coming up with a naming scheme and implementing the piece of JavaScript code that picks the right image based on the current time. Then guess what? The customer will look at this and complain: he wanted three needles, not just two: one for the hours, one for the minutes, one for the seconds. So we end up with 43200 clock images to create! Even worse, the customer dislikes the layout, so all the initial 720 images need to be revised anyway.

Then, after some thought and discussion, developers will agree on the need for a program that generates all these images. This could be done ahead of time, but why not have the HTML page call that program itself? This is perfectly doable, by replacing the static image with a web service that returns a dynamic image.

That looks simple, but there are many different ways of implementing that backend service: a CGI script (the old way), PHP (another old way); others will propose building everything with Django, and cloud developers will propose serverless technology such as AWS Lambda. Even after the programming language is determined, there are still several libraries to choose from to create the images.

While developers argue about which backend service to use, the customer discovers with great surprise that the clock image appears very tiny on his 4K monitor. Who thought somebody would use that page on a high-resolution display? Bitmap formats such as JPEG, GIF or PNG, with their fixed resolution, won't do the trick. The system needs to serve a vector representation of the clock, most likely using SVG. Ah no, this doesn't work with certain old browsers! Are we sure the customer doesn't use such an old browser? We can ask him; he will probably say no, without even being sure he understands the question, and then it turns out he has an old PC stuck with Windows XP and an old Internet Explorer not supporting SVG. But that will surface after the fact, once the SVG backend is in place!

We can discuss the backend service forever, only to figure out that the customer wants to deploy on something supporting only an old version of PHP or, even worse, that we forgot something: the clock needs to tick with the current time, not just render the time at which the page was loaded! While this seems intuitive, developers, overwhelmed by all the technical details, may lose track of this obvious fact. A clock ticks, needles move!

Some hacker will come in, proposing to just refresh the whole page every second. This will result in something ugly that flickers. The customer may agree to this on paper, but will of course reject the solution when he sees the flickering result!

At this point, it becomes obvious we are at a dead end. A static page won't do it; we need something more dynamic.

Dynamic web page

There are many ways to render a clock dynamically on a web page nowadays, using JavaScript code. One developer will write code generating SVG to draw the clock face and needles, another will prefer the HTML5 canvas (and will have to argue with the one who prefers SVG because of his background or former job), another will propose a library such as Raphaël instead of built-in SVG or HTML5, and yet another may even propose Flash, even though it is deprecated, because supposedly it is easier and more intuitive and works in more browsers.
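To make the plain-SVG route concrete, here is a minimal sketch (the element sizes and styling are my own assumptions; a real page would update the needle transforms on a timer rather than rebuild the string):

```javascript
// Angle of each needle in degrees, measured clockwise from 12 o'clock.
function needleAngles(date) {
  const s = date.getSeconds();
  const m = date.getMinutes() + s / 60;
  const h = (date.getHours() % 12) + m / 60;
  return { hours: h * 30, minutes: m * 6, seconds: s * 6 };
}

// Render the clock as an SVG string in a 100x100 viewBox.
function clockSvg(date) {
  const a = needleAngles(date);
  const needle = (angle, length, width) =>
    `<line x1="50" y1="50" x2="50" y2="${50 - length}" stroke="black" ` +
    `stroke-width="${width}" transform="rotate(${angle} 50 50)"/>`;
  return `<svg viewBox="0 0 100 100">` +
    `<circle cx="50" cy="50" r="48" fill="white" stroke="black"/>` +
    needle(a.hours, 25, 3) + needle(a.minutes, 38, 2) + needle(a.seconds, 42, 1) +
    `</svg>`;
}
```

The angle math is the only part every approach (SVG, canvas, Raphaël) shares; the rendering layer is where all the arguments happen.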

Then how about having the clock move? Some people will advocate directly manipulating the document model using JavaScript, but that won't work portably across browsers. Most developers thinking about this solution will immediately shift their minds to jQuery, a widespread library for manipulating a web page dynamically in a way that is portable across browsers. Then a developer with a background in data processing will propose the D3 library to make the needles move. jQuery and D3 work in very different ways; they implement different paradigms. Switching from one to the other is not so obvious, but the D3 developer expects the seasoned jQuery programmer to use D3, and the jQuery developer would like to get the D3 guy onto jQuery.

Then seasoned frontend developers, or people who want to become one, will bring in the heavy lifting, proposing frameworks such as AngularJS, React, Vue.js, etc. Yeah, just to render a clock! Such frameworks will do the job, at the expense of a lot of complexity, but they can do much, much more. A React-based page showing a clock could allow customizing its style on the fly, and the modified style would apply immediately, without flickering. If one decides to add multiple clocks at different time zones on the same page, that will be possible without breaking everything.

However, no developer knows all frameworks. A newcomer will likely know something different and then start advocating for switching to what he knows. “I don’t know much about React, but that seems easier to do with Vue.js.” Another will prefer Ember, yet another AngularJS, etc. These frameworks will bring development to a stop if each developer insists on using his own.

“Modern” web application

Maybe I should not write “modern”, because by the time this article is read, this may not be modern anymore. In the previous section about dynamic web pages, what happened to the backend? Well, it is not needed anymore. Everything can happen on the client side. With JavaScript and HTML5, web pages can now do much more work than before.

One reason a backend may be necessary is persistence. Suppose we want to allow customizing the clock and saving the settings to a user profile that would be reused on different devices. This simple requirement would make the project a lot more complex, requiring persistent storage (thus a database) and some kind of identification system to know which profile to load. For clock settings, authentication is probably overkill, but it is a necessity for most real-world applications, and that is a huge can of worms.

Things become complex quickly, making it hard for developers to do it right. There are (too) many different solutions. One may prefer to write the backend service in Java, and Java offers tons of frameworks for this, such as Spring MVC, Jersey, etc. Another will use Python with Django, Tornado, Flask or something else. Yet another will lean towards NodeJS with Express or some other library. Another may prefer to keep using PHP, or even write something in C++! There are also many possibilities for the database system, such as MySQL, PostgreSQL, MongoDB, Redis, etc.

That doesn’t even account for bold developers who trust nothing and absolutely want to implement everything from scratch, using the most basic stuff they can find. Let’s make this backend in C, or even in assembler, and have control over everything!

No matter what is chosen, it will work somehow and fail somehow. Developers will be stuck with the issues, and if they cannot find a solution, others will advise using some other technology.

While dealing with all these options and technical details, many will forget something essential: writing automated tests! While that seems unnecessary at first, it becomes fundamental as the complexity of the project increases. Automated tests let you move forward with reasonable confidence that what previously worked still works.

The shocking tendency to give up

By the time developers try to implement this stupid clock, discussing and arguing about the best frameworks, the customer will have gotten nothing more than explanations, nothing concrete. At some point, he will just go to a store and buy a watch. Here is my clock. It works, it does what it has to do, it ticks with the current time. While the idea was great, it took too long and was too complex to implement, so giving up and going elsewhere is the way to go.

Another example of this could be a customer requesting a web application to collect data and, after some delays or failures, deciding to fall back on a simple Excel (or maybe, for better or worse, LibreOffice Calc) spreadsheet.


While having multiple solutions available is fine, too many can be worse than not enough. When there are too many options to choose from, not enough guidance, and any error requires costly refactoring, or even rewriting everything from scratch, progressing is hard. We end up redoing the same thing, over and over again. The fact that many people implement the same thing over and over again in silos, independently of others, does not seem to help, but gathering everyone into a super large team would cause endless argumentation bringing all development to a full stop.

Maybe developers should spend more time using what is there rather than making it work the way they think it should, or reinventing the wheel. Maybe customers should spend more time reflecting on their requirements and discussing with developers to figure out the consequences of these requirements for the project. Maybe providers, especially web hosts, should stop sticking to obsolete technologies such as PHP and offer facilities to host modern web applications in addition to the old stuff that may still be needed for existing projects; or cloud providers should offer options allowing deployment of web applications at the same cost as simpler web hosting platforms (no virtual machine just to run a simple NodeJS server, for example). Or maybe I am wrong, losing my mind because of all that is going on in the world.


The down side of a NAS

Recently, I almost lost a bunch of ripped Blu-rays, DVDs and downloaded movies and TV series. I thought a RAID5 would preserve them reasonably well, but I didn’t consider carefully enough the recovery scenarios in case the whole NAS dies. I learned how a NAS can be expensive and how I painted myself into a corner, but I also learned more about logical volumes, which could greatly improve my Linux installation in the future.

A NAS: a great idea at start

I used to store movies and TV series on separate drives. After I almost filled up two 3 TB drives with ripped Blu-rays, I wanted to do better than just adding another drive. Otherwise, I would end up checking all three drives to find a given movie or TV series. I thus needed a solution to combine the storage space into a single pool.

One way of doing that is a Redundant Array of Inexpensive Disks (RAID). A RAID uses several drives to improve reliability (mirroring), performance (striping) or both. It also combines the space of multiple drives, resulting in a virtual device with more space. I wanted to go with a RAID5, because that results in more space, with the possibility of one drive dying without losing access to the data. Of course, if a drive of a RAID5 dies, you need to replace it as soon as possible so the array can rebuild and become reliable again; otherwise, if another drive fails, data loss occurs. The larger the storage pool, the more catastrophic the data loss!
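On Linux, setting up such an array in software looks roughly like this (device names are examples of mine, and these commands destroy whatever is on the drives, so treat this as a sketch only):

```shell
# Combine three drives into a software RAID5 array (device names assumed).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# Watch the initial build progress.
cat /proc/mdstat

# The array is a regular block device: format it and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/pool
```

With three 3 TB drives, the resulting device offers roughly 6 TB: one drive's worth of space goes to parity.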

Unfortunately, at the time I investigated this, there was no user interface in Ubuntu that I knew of for setting up a RAID (that may have changed). One needed to copy and paste obscure commands from web pages. I was tired of such processes and wanted an easier way, and trying with Windows was not an option because I don’t have a Windows license for my HTPC, which holds the storage.

I thus wanted to experiment with a Network-Attached Storage (NAS) device. I picked the TS-453Be from QNAP, because I wanted a 4-bay device that was not too expensive. I found a lot of 2-bay devices, which would have prevented me from building a RAID5, and forum posts suggesting that RAID5 is not great because all data is lost if more than one drive dies, that a mirrored RAID is better, etc. Because I didn’t have four drives yet, I started with two 3 TB drives, with the plan of expanding to four.

One good thing about these devices is their ability to be configured remotely. Instead of hooking the device to an HDMI port and interacting with it using a screen (you can do so if you want to), you can point a web browser at its interface and configure everything from there.

The downside is the quantity of settings that made little sense to me, and some remain obscure, such as JBOD and iSCSI. It was relatively easy to get a RAID set up, but I quickly noticed that only a portion of the space available on the drives was usable by the volume containing the data. This is because the physical space can be used in several ways by the NAS, not just for storing data. One can create snapshots or split the space into multiple logical volumes. Volumes can also host virtual machines. I had a hard time finding how to expand the logical volume with my contents, because the option was hidden away in the UI and labeled “Étendre” in French, which I confused with “Éteindre”, meaning “power off”. Everything was there to expand the logical volume, but nothing to shrink it afterwards.

A couple of months later, I found out it was possible to convert the mirrored RAID into a RAID5 after adding two new drives. That gave me 9 TB of storage space, enough for the moment to store my stuff.

Getting more would however be painful, time-consuming and expensive, requiring me to swap out each 3 TB drive for a larger one, one drive at a time. Each time a drive swap occurred, the RAID would have to rebuild from the three remaining drives, and only after the rebuild completed would I be able to swap another drive. After all drives are swapped, the storage pool expands based on the size of the smallest drive.
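With Linux software RAID, that swap-and-rebuild cycle maps to commands like these (device names are assumptions; each step must complete before the next drive is touched):

```shell
# Mark the old drive as failed and remove it from the array.
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb

# After physically installing the larger drive, add it; the rebuild starts.
mdadm /dev/md0 --add /dev/sdb
cat /proc/mdstat          # wait until the rebuild completes

# Once every member has been replaced, grow the array to the new capacity.
mdadm --grow /dev/md0 --size=max
```

The array stays online and degraded-but-usable during each rebuild, which is exactly the high-availability behavior described below.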

Then what’s the point? Why not copy all the data elsewhere, rebuild a new array and copy the data back? The longer process has the benefit of leaving the storage pool online throughout the whole migration. Data can still be accessed and even, at least as far as I know, modified! A dying drive also doesn’t bring the pool offline. The degraded RAID can still work, until a new drive is added. The rebuild occurs online, while data can still be accessed. I realized that this high availability was overkill for me, but for a business, that would be critical.

A hybrid backup strategy

I didn’t have enough storage space to back up the whole RAID5 array. Instead of solving that, I went with a hybrid strategy.

  • My personal data is already backed up on Dropbox and present on two machines at home. My idea was to make my NAS the second Dropbox client, but I quickly noticed that Dropbox doesn’t run on QTS, the Linux variant QNAP installs on the NAS. I thus needed to keep my HTPC for that. Dropbox gives me an offsite backup, which would be handy if a disaster such as a fire destroyed my home. Then again, after such a disaster, getting back my photos, videos, Live sets, etc., would be a minor concern.
  • I have hundreds of gigabytes of Minecraft videos I authored and uploaded to YouTube, but I wanted to keep an archive of them. I ended up uploading them to an Amazon S3 bucket, whose contents have since moved to Glacier. This saves the data for a low price but requires more time and some fees to get it back. I thought the NAS could itself synchronize the MinecraftVideos folder with S3. No, it cannot sync with S3, not anymore! QNAP switched gears, giving up on S3 in favor of inter-NAS synchronization. That means that if I want an automated backup, I would need to buy a second NAS with the same (or higher) storage space and set it up to synchronize with the first! For a small business, I can imagine this being viable, but for a home user, it looks like overkill.
  • I thought that not backing up the ripped DVDs and blu-rays would not be a big problem since I have the originals. This was a big mistake.
  • More and more recorded videos added up, with no backup. Using my Hauppauge HD PVR, I recorded several movies from my Videotron set-top box.

My backup plan was thus outdated and needed some revisions and improvements.

Noisy box

The NAS, in addition to its lack of integration with Dropbox and S3, was quite noisy. Sometimes it was quiet, but regularly it started making a humming noise that wouldn’t stop unless I powered it off or tapped on it a few times. I searched a long time for a solution. I tried screwing the drives in instead of just attaching them with the brackets, but the real problem was that one of the drives (a new one) I had put in the NAS was bad and noisy! Swapping it for another 3 TB drive I had fixed the issue.

I moved the noisy drive into my main computer, where it was relatively quiet at first. But it ended up making an annoying humming sound that my mic was picking up, reducing the quality of many of my Minecraft videos. At some point, I got fed up and decommissioned that drive in favor of the last 3 TB drive remaining in my HTPC. My NAS combined with an NVIDIA Shield was pretty much replacing my HTPC, which I was more and more thinking about decommissioning.

The disastrous scenario

During the summer of 2020, my QNAP NAS suddenly turned off and never powered on again. When I turned the unit on, I got a blinking status light and nothing else. At first, I was pissed off, because I wasn’t sure I would be able to fix this and was anticipating delays in getting it repaired because of the COVID-19 pandemic. My best hope was to recover the data using a Linux box; then I would decide whether to get the NAS repaired or replace it with a standard Linux PC. Recovery ended up being harder than I thought, which made me angry.

When I lost access to all my files, I realized how time-consuming re-ripping the Blu-rays and DVDs would be. I would need to insert each disc into the Blu-ray drive, start MakeMKV, click the button to analyze the disc, wait, wait, wait, then enter the path to store the ripped files. Even though there is a single text field to fill in, MakeMKV doesn’t put it in focus, forcing me to locate the super small mouse pointer (at least on my HTPC, where I had the Blu-ray drive to rip from), click, enter the path, check it (in a super small font), click again and then wait, wait, wait. For one disc, that’s OK. For 30…

I also lost a bunch of movies recorded using my HD PVR. The quantity of recordings increased over time and I didn’t realize none of this was backed up! Backup plans need to evolve and be revised over time.

Recovery attempts

The first problem was moving the four drives into a computer able to host them. I didn’t have enough free bays in my main computer, not without removing another drive, and I was worried Windows would get screwed up by this and wouldn’t re-establish the links to my Documents, Music, Videos and Pictures folders. I thus chose to use my old HTPC as a host, but fitting and powering the four drives was a painful process. I had to use pretty much every SATA cable I had, and one broke along the way. I had to unplug the hard drive of a really old PC to get its SATA cable, unplug the DVD drive of my main PC to get another, dig one more out of a drawer, etc. I also needed a Molex-to-SATA converter cable, because the PSU only had four SATA power connectors and I needed to power five drives: the four NAS drives to rebuild the RAID array, plus the SSD containing Ubuntu! Because of the pandemic, I wasn’t sure which computer stores in my area were open, so my best bet for new hardware was online, with delays of several days, even for something as simple and stupid as a SATA cable. Using what I had was my best option.

All these efforts were pretty much worthless, because I wasn’t able to access the data; we’ll see why later on. All I could do was trigger a long SMART self-test, to at least verify the drives were good. All four drives passed the test. There would have been no point getting the NAS fixed or continuing recovery attempts if more than one drive had failed it.

I couldn’t go further without ordering hardware. I started with a four-bay USB-to-SATA docking station. Finding one was tricky at first (single-bay models, Mac-only models, etc.), but I got one and it worked like a charm. It did cause me some issues at the beginning: I plugged the power cable in the wrong direction and tested with a defective drive (yes, the noisy 3 TB drive I removed from my system just doesn’t power up anymore!), but it ended up working.

I was hoping to put the four NAS drives in there and have Windows see them, then try ReclaiMe to get the files back. I also needed a drive large enough to hold all the data. I got a 10 TB one; that would do, I thought.

Since I was able to reassemble the block device using mdadm, I explored the idea of dumping that block device to a partition on the new 10 TB drive. For this, I had to plug the 10 TB drive into my HTPC, which already had no connectors left after hosting five drives! I used the USB docking station for that. Bad idea: the HTPC is too old, having just USB 2.0, and copying 10 TB over USB 2.0 is a good way to strengthen your patience, and to get annoyed. It would have taken more than two days just for that step to finish! There was no way around it, unless I got a PCI Express card with a USB 3.0 port, or probably better an eSATA port, and then an eSATA enclosure to put the 10 TB drive in! I didn’t like this at all, because that was a lot of waiting for only an intermediate result.

After failing to dump the block device because of USB 2.0 slowness, I got super fed up and decided to proceed with the repair of my NAS. I contacted QNAP’s technical support and we figured out that a repair was needed; my warranty had expired a year earlier. I thus needed to pay $300, plus shipping of the NAS. I felt at that time it was my last hope of recovering the data.

But I tried anyway to get the drives out of my HTPC and plug them into my docking station, so ReclaiMe could analyze them. Well, ReclaiMe completely failed. It detected the logical volumes on the drives, which looked promising, but instead of making use of the Ext4 file system, it just scanned the drives for data looking like files. It was thus only able to extract files with no names but some contents, without even being sure the files were complete! That would be unusable garbage; better to just re-rip all the discs. R-Studio, another tool I tossed at this, also failed miserably, not even able to reassemble the RAID5. I got so fed up that at some point I considered contacting a data recovery company, but I was concerned it would be too expensive and that I would just get chunks of data named with hash codes, like I was about to get with ReclaiMe.

Lasagna file system

QTS uses multiple layers when configuring a file system. Each layer provides some features, but all of them add complexity at recovery time.

The diagram below summarizes the structure.

Four 3 TB hard drives
  → 9 TB RAID5 storage pool (mdadm)
  → DRBD device (?)
  → Volume group (LVM pvcreate/vgcreate)
  → Logical volume (LVM lvcreate)
  → Ext4 filesystem (Movies, TV series, Music)

First, the physical drives are combined into a RAID5 using the MD (multiple devices) subsystem of the Linux kernel. This allows creating a RAID array in software, without any specialized hardware. The mdadm tool can be used to configure or activate a RAID. I was able to activate the RAID and get a block device out of it.
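Reassembling an existing array on a rescue machine is usually a one-liner, since mdadm finds the RAID metadata in each member's superblock (device names are assumptions):

```shell
# Scan all drives and reactivate any arrays described in their superblocks.
mdadm --assemble --scan

# Or name the members explicitly.
mdadm --assemble /dev/md0 /dev/sd[abcd]

# Check that the array came up and inspect its state.
cat /proc/mdstat
mdadm --detail /dev/md0
```

This is the step that worked for me; the layers above it are where the recovery broke down.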

The block device could host a filesystem directly. Instead, QNAP formats it as a DRBD storage pool, at least according to the forums I searched. Some people attempted to mount the DRBD device without success, because QTS uses a forked version of DRBD that prevents anything other than QTS from reading it! Because of that, only a QNAP NAS can reassemble the RAID5 and get the data back!

The DRBD volume is then formatted as a physical volume for LVM. The Logical Volume Manager (LVM) splits a storage device into multiple logical volumes. These can be resized at runtime, don’t have to use contiguous space on the physical volumes, and can span multiple physical volumes. This is something any ordinary Linux distribution supports, as it is part of the mainline Linux kernel! The only caveat is the absence (at the time I am writing) of user interfaces exposing it. One needs to use commands such as pvcreate, vgcreate and lvcreate to manipulate logical volumes, but these commands are not as complex as I thought.
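For reference, the basic LVM workflow with those three commands looks like this (the group name, volume names and sizes are my own examples):

```shell
# Declare a block device as an LVM physical volume.
pvcreate /dev/md0

# Group physical volumes into a volume group named "pool".
vgcreate pool /dev/md0

# Carve logical volumes out of the group; extents need not be contiguous.
lvcreate --name movies --size 4T pool
lvcreate --name music  --size 1T pool

# Each logical volume is a block device that can hold a filesystem.
mkfs.ext4 /dev/pool/movies
```

Each layer of the stack is thus created by one command, and each one must be crossed again in reverse at recovery time.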

I read that QNAP also forked LVM, so I was worried that even if I got past the DRBD layer, I would not get past the LVM one.

Note that when I partitioned my 10 TB drive, I found an LVM partition on it! The assembled RAID apparently was an LVM physical volume, so maybe I would have been able to vgimport it and get access to the logical volumes! However, any attempt to do so could have failed or changed the machine id in the volume group, reducing the chances of my fixed NAS mounting the array and filesystems. My new 10 TB drive was already formatted at the time, with some data on it, so I couldn’t use it to back up the full RAID and test, unless I got yet another drive. I thus stopped my attempts there; my NAS had already been shipped to QNAP by the time I made that discovery.

On top of the LVM layer sit the logical volumes, at least one large one holding the files I wanted to recover. It is formatted with Ext4, the native Linux file system, which organizes the space into files and folders. Recovering all the data requires handling the Ext4 filesystem to get the full file contents back, not just portions of files with no names.

Full recovery

I got the fixed NAS back and inserted the drives. The NAS powered up and recognized the drives as if it had never broken. I was thus able to get my files back; everything was there. Because the recovery process was so painful and expensive, I didn’t feel any victory, just a bit of relief that it was over.

While waiting for the fixed NAS, I formatted my 10 TB drive. I experimented with logical volumes, creating several partitions on the drive: one for Minecraft videos, one for movies, one for TV series, one for a full copy of my Dropbox folder, one for my music files, etc. Using the Ubuntu wiki, it was simple to create the logical volumes, and then I started copying files onto them. I was ready to transfer the contents of my NAS to the new disk when I got the NAS back.

Even though I recovered my data, I will probably have to re-rip some DVDs that are unreadable by Kodi. The VIDEO_TS structure causes a lot of headaches for pretty much all Linux-based players. VLC seems the most versatile, able to read most DVDs, but sometimes I needed to use MPlayer, Kaffeine, etc. That almost destroyed my dream of having all my DVDs and Blu-rays on hard drives. Of course, Windows with PowerDVD or a similar DVD player would work better, but I don’t want Windows on my HTPC; better to return to the standalone DVD/Blu-ray player and spend countless minutes searching for discs. MakeMKV should help solve that, because Kodi reads MKVs without issues. I may be able to convert previously ripped VIDEO_TS folders into MKV, saving me the trouble of re-ripping the discs.

After that bad experience, I came up with the plan of keeping the NAS as long as it works, but backing up all its data on another drive. If the NAS dies a second time, I will not need to recover any data and will just repurpose the drives, probably in a standard Linux PC.

Lesson learned: a NAS is for cases where you have more drives than a usual PC can accommodate, more than four, if not more than eight. It is relatively straightforward to get a standard ATX case that hosts six drives, including SSDs, and getting a power supply unit with six SATA connectors is perfectly fine. If I had to do it again, I would probably explore modular power supplies to reduce cable clutter, but even that is optional.

By trying to save myself some copying and pasting, I ended up with more pain and problems. Had I spent at least half a day exploring logical volumes, at worst experimenting in a virtual machine, I would have figured out that my existing HTPC could combine my existing drives into a storage pool that can expand over time. If the HTPC dies, another Linux PC can import the volume group and things go on. Unfortunately, nothing prevents disaster caused by failed drives other than backups.

Other benefits of logical volumes

Now that I have explored logical volumes, I am pretty sure I need them for my next Linux installation, because they will solve a bunch of fundamental issues I keep running into again and again.

  • Each time I perform an upgrade, I run the risk of a catastrophic issue making the whole system unusable. Ubuntu offers no downgrade path: if an upgrade fails, just reinstall from scratch. This is why several people suggest not dist-upgrading at all, but reinstalling clean every time. Logical volumes alleviate that through snapshots. Before a dist-upgrade, I could just create a snapshot of the volume holding Linux, upgrade, and in case of an issue, restore the snapshot in a few minutes: no reinstall, no reconfiguration.
  • Supporting upgrades, downgrades, multiple versions and multiple Linux distributions all requires the home directory to be separate from the root file system. But each time I over-partition my drive, one partition gets full and I have to either move data around, or restart the computer and repartition, which is time consuming and a risk for data loss (e.g., a power outage while GParted moves data!). Logical volumes solve that by allowing resizing at runtime. If I need more than 50 GB for my home volume, no problem: just claim some extents from the physical volumes, which don’t even need to be contiguous, and the resize occurs at runtime, without any unmount or reboot. I can keep working while the resize occurs. That’s really neat and powerful.
  • Even the classical problem of expanding a drive is easier with logical volumes. LVM can move all data from one physical volume to another, transparently, at runtime, while I keep working on the machine. Replacing a too-small or end-of-life SSD is thus easier. Of course, sharing an SSD between Windows and Linux is always a painful and problematic process, although perfectly possible; dual booting Windows and Linux is itself painful anyhow.
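As a sketch of the three operations above (the volume group and volume names vg0, root and home are assumptions, not my actual setup; all of this requires root and real LVM volumes):

```shell
# Snapshot the Linux volume before a dist-upgrade; 10G is reserved
# for changes written during the upgrade:
lvcreate --size 10G --snapshot --name root_preupgrade /dev/vg0/root
# If the upgrade goes wrong, schedule a rollback; the merge
# completes on the next activation (e.g., a reboot):
lvconvert --merge /dev/vg0/root_preupgrade

# Grow the home volume by 20G and resize its file system online,
# no unmount or reboot needed:
lvextend --size +20G --resizefs /dev/vg0/home

# Migrate all extents off an old SSD onto its replacement,
# transparently, while the system keeps running:
pvmove /dev/sda2 /dev/sdb1
```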

An intricate-sounding puzzle

This whole story started on Wednesday, October 23rd 2019. When I turned on my subwoofer, I found out it was producing a constant noise, even with no audio source playing. I verified the connector at both ends; everything was fine. My first reaction was to think the subwoofer was broken. It was too big for me to carry to a repair shop, and I wasn’t sure that was worth doing, so I was planning to buy a new one. In the meantime, my sound system would still work, although without the subwoofer.

Following is a picture of the back of the subwoofer, with the connector plugged in.

The back of the subwoofer with connector plugged in
The humming sound my subwoofer was constantly making

The best place to get a new subwoofer, I thought, would be the store where I purchased my home theater system, but it was in Chambly, near my parents’ place, and it would take weeks before I went back there. I thought about waiting but was tempted to check if I could order a new subwoofer online, on Amazon for example, or find a store in Montreal. However, many stores only sell full kits with everything included (A/V receiver, speakers, subwoofer), or all-in-one sound bars.

Finding the culprit: a ball game

After thinking more about it, before buying a new subwoofer, I wanted to make sure mine was really faulty. A year ago, I had issues with my A/V receiver, which started to power cycle repeatedly with no possible solution. I replaced it with a new one but kept the old one in case I could find somebody able to fix it. Although flaky, the old receiver could be used to test my subwoofer: if it produced the noise with the old receiver too, I would know for sure it was broken.

To my surprise, the subwoofer worked flawlessly with the old receiver. Of course, it was just a basic test; I didn’t plug in any audio source to verify it would produce sound, but the absence of the constant noise was already progress. Unfortunately, chances were that the problem came from my A/V receiver again, which would force me to disconnect everything from it, carry it to a repair shop, wait days, and then spend hours reconnecting everything. The speaker cables were quite hard to plug in on this one, too close to each other. I didn’t want to redo all this and was thinking about giving up and purchasing a dumb sound bar instead. But I was quite reluctant to give up on my speakers.

At least, I was happy I didn’t order a new subwoofer or, even worse, go to the trouble of carrying one on the subway, from a local store back to my place!

Before going to the trouble of disconnecting the speakers, I tried unplugging devices, to no avail. Even unplugging the A/V receiver didn’t stop the subwoofer from making its constant humming sound! Of course, turning off or unplugging the subwoofer stopped the sound!

After more than one hour of plugging and unplugging, trying to replace the subwoofer cable to no avail, and having trouble with stuff falling behind my TV, especially the AM/FM antenna of my A/V receiver and a small router configured as a switch to dispatch Ethernet to my HTPC and NAS, I found out that unplugging an HDMI cable from the A/V receiver stopped the humming sound! What? Which cable was that? A couple of crawls later, I found out: the cable linking my A/V receiver to my cable TV terminal. Replacing the cable didn’t change anything. I tried disconnecting the cable TV terminal, to no avail. I don’t know why I thought about this, but I found out that disconnecting the coaxial input cable from the terminal stopped the humming. Of course, that would prevent the cable TV from working!

Next step: replace the coaxial cable. Unfortunately, I didn’t have a spare cable of a suitable length. The only cable that fit was a 40-foot one (far too long), and the wires sticking out of its connector were too long, preventing me from tightening the connector into my terminal’s female port. Even worse, when the 40-foot cable touched the terminal’s connector the first time, the subwoofer briefly hummed. I was pretty sure purchasing and plugging in a new cable of the right length would produce the same effect as the original one.

I was stuck, because the problem was coming from the combination of my subwoofer, A/V receiver and cable TV terminal. I could, by trial and error, replace the subwoofer, then the A/V receiver, to no avail! Contacting the manufacturer of any of them would just get me bounced to the others! The only simple “solution” was to sacrifice one of the devices. Short term, it would be the subwoofer. Longer term, I was mentally preparing myself to give up on cable TV. I prefer a sound system over cable TV because I can get content from elsewhere, like Netflix, if of course I can have it played under Linux. I won’t purchase a Windows license for a 10-year-old HTPC! Doing so would be nonsense, for me at least.

What changed?

But that setup was working before! What suddenly changed and caused it to fail? Well, it was a workaround for a cable signal issue!

I had been getting issues with my cable TV terminal for a couple of days, with sporadic complaints about low signal, and decided it was enough. Wednesday morning, October 23rd 2019, I examined the connectors at both ends of the coaxial cable; they seemed tight, so that should have worked.

The coaxial cable from the wall outlet connects to my power bar, which offers protection for coaxial input. Another cable goes out of the power bar and connects to my cable TV terminal. It worked like this for years, but it seems that every time somebody in my building gets hooked up or adds a device, the cable signal weakens for everybody else. I thus thought that maybe I could work around the issue by bypassing the power bar, hooking the cable directly to the terminal. I tested it and that seemed to work, but I just turned on the TV, not the subwoofer.

Then I went to work, and only later did I turn on the whole system to watch a video on YouTube. That’s when I got the issue with the subwoofer, and I didn’t think at all that it could be linked to the cable terminal.

I ended up, Wednesday night October 30th, reconnecting the cable to my power bar. This looks as follows.

Coaxial cable going in and out of the power bar

But wait a second, wouldn’t that cause the signal issue to come back? Yes, it could. For now, I haven’t observed it again, but I know it can come back. This just bought me some time to think about something better, or to prepare myself for sacrificing the cable TV. For that, I need to test Netflix on my Linux HTPC, which will be a frustrating experience of trial and error.

Why not contact my cable TV provider?

I thought about it and may end up doing it. The problem is that there is little they can do to improve the signal without accessing a technical room in the basement of my building, and getting the key to that room is problematic these days, because the company managing the building is overwhelmed and not responding to every inquiry.

Moreover, my cable company, Videotron, is moving towards Helix, a new technology combining everything into one device. Helix not only replaces the cable TV box but also the Internet modem, and it would constrain me to use a specific router I know little about. If I’m lucky, the router will just work as a regular device, offering Ethernet ports and 802.11n, hopefully 802.11ac wi-fi, with standard WPA2 and a customizable network name and password. If I’m less lucky, the network name will be hard coded and I will have to search forever among similar but slightly different names, and I may be limited to 802.11n 2.4GHz wi-fi (no ac, no 5GHz). If I’m really, really unlucky, establishing a wi-fi connection will require proprietary software which may work on only some devices! The problem is that if I switch to Helix to get rid of my problems with my current Illico terminal, I won’t be able to know in advance whether all these quirks exist, and I will be stuck afterwards; it will be hard to go back. Competing telecommunication providers either don’t offer cable TV at all, or adopt the same all-in-one modem/router strategy.

Technical analysis

Not all USB C ports are equal

I kind of already knew this, but hadn’t realized it until I got my new Flex 15 from Lenovo. This laptop comes with one USB C port. Lenovo proposes a travel hub as an accessory. That hub fits into the USB C port, offering one USB 3.1 port, one HDMI output and one Ethernet port. Unfortunately, that hub won’t work, not because it is defective, but because it is not compatible with the Flex 15! This article tries to explain why and outlines some possible uses of the limited USB C port provided by that laptop.

USB C is the successor of the connector used up to USB 3.1, with a completely different shape. USB 3 connectors fit in USB 2 and even USB 1 ports, but the USB C connector is not compatible with USB 3 ports.

The USB C connector looks as follows.

Example of a USB C connector

The port is also different, smaller than USB 3, and the connector can be plugged in either way up. The female connector looks like this.

USB C causes quite a few problems, because it is too new and because not all ports are equal, with no clear way of determining a port’s capabilities. The main issues, for the time being, are compatibility with existing devices and the different variants of host controllers.

New connector = new devices?

The first problem is compatibility. As the connector is different, a USB C port accepts only USB C connectors, but most USB devices at the time I’m writing this are USB 3, not USB C. This won’t be a problem on most systems. At worst, the USB C port will just be a cool artifact that never gets used, a bit like the FireWire connector on some older laptops. However, there are some ultrabooks with only USB C ports, like Dell’s newer XPS models. I was quite shocked to see this and thought a user of such a machine would need to buy all new devices. That’s fortunately not the case, as I found out later.

Then, what can be done with that USB C port? First, a couple of devices have a USB C port and come with a USB A to C cable. You can use that cable, or purchase a USB C to C cable, to hook the device to the new USB C port. Examples of such devices are newer Android phones like the Google Pixel 2 and 3, V3 WASD CODE keyboards, and ROLI’s BLOCKS and Seaboard.

I also found adapters turning a USB C port into a USB A; they look as follows.

A USB C to A adapter

Such adapters allow using USB C ports like regular USB 3.1 ones, pretty much adding USB ports to the laptop.

Hubs also exist, but that’s where things become tricky and quite annoying. Some USB C ports are not compatible with all hubs!

Variants of host controller

The capabilities of a USB C port depend not on the physical connector but rather on what links it to the system’s motherboard: the host controller. As far as I know, there are three variants of such controllers.

  • Thunderbolt 3. This is the most powerful and versatile port. A Thunderbolt 3 port can be used as a regular USB 3.1 port using an adapter, it can carry display information, and it can transfer enough power to charge most laptops. You’ll need a hub or docking station to expose all of this; the connector alone won’t. Thunderbolt 3 also allows the transmission of PCI Express lanes through a cable, pretty much offering extensibility to a system. You can, for example, hook up an external graphics card or a high-performance hard drive. The problem with Thunderbolt is that the previous standard relied on a different mini-DisplayPort connector, and many devices are just claimed Thunderbolt-compatible; you won’t know for sure whether a device is Thunderbolt 2 or 3 unless you look very carefully at its pictures, search forums, email the vendor, and maybe even then, you may get a Thunderbolt 2 device while expecting Thunderbolt 3 and have to return or exchange it. Quite annoying. A device explicitly claimed to be Thunderbolt 3 compatible should be fine though.
  • USB C with display and power. The port can be used for regular USB 3.1 devices (with an adapter or hub) or transfer display and power, enough power to charge a laptop through just that small USB C connector. Hubs providing HDMI output can be used with such ports. Some docking stations exist and can be used to charge a laptop and extend its display capabilities with one or two extra ports. Docking relying on proprietary connectors is now achievable with a generic, smaller, reversible connector. That’s quite amazing, and a USB C docking station should work with all laptops supporting USB C with display and charging, PC or Mac!
  • Regular USB 3.1 only. USB C doesn’t require support for display and power (Thunderbolt 3 does), so some vendors ship limited USB C ports that can only be used with adapters or hubs providing USB C or 3.1 ports! Trying to hook up a hub or docking station with display or charging capabilities to such a port will likely fail. Windows will report that the device malfunctioned, making you think it is defective. But it is not; it is the port that is limited. And the port is not defective either: it will work with USB 3.1 devices (with a proper adapter).
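On Linux, a few commands can hint at which variant a given port is (a sketch; the boltctl tool and the sysfs path are assumptions that depend on the distribution and kernel version):

```shell
# List host controllers; an xHCI entry confirms plain USB 3.x support:
lspci | grep -i usb
# Thunderbolt 3 controllers and attached devices are reported by the
# bolt daemon, if present:
boltctl list
# If the kernel exposes the USB C port class, its entries describe
# alternate-mode (display) capabilities:
ls /sys/class/typec/
```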

Lenovo’s Flex 15 has such a limited USB C port. Hooking up the travel hub, offered as an optional accessory while purchasing the laptop, will always fail. The hub wasn’t defective: I tested it on my work laptop, which has a USB C port with display support, and it worked. But not on the Flex 15. All that can be done with the hub is an RMA. Sad but true.


Misleading broken connector

Wednesday, October 23rd 2019, while troubleshooting an issue with my subwoofer, I discovered that one of my S/PDIF optical cables was disconnected; I found the loose cable near my A/V receiver. It was previously used to send audio from my HTPC to my A/V receiver, before I got HDMI I/O on my new Yamaha receiver. I left the cable around in case the HDMI audio started to fail, but it is kind of pointless: if I need to fall back on this, I would have to reconnect the HDMI cable directly to my TV as well. I wanted to either reconnect the cable or remove it altogether. However, I was unable to find the optical port on the receiver’s back. All I could find were RCA connectors.

Having a closer look at the back of the receiver, I discovered that one of the inputs had a broken connector stuck in it. The connector looked like a female RCA, but a bit different. The image below shows that connector.

Broken male connector in optical female port

After I discovered that and confirmed this was supposed to be the optical connector, I tried to remove the broken male connector from the female port using pliers. After a couple of attempts, I got the connector part out. It looks as follows.

Broken connector extracted from the port

The broken end of the cable looks as follows.

Optical cable with broken connector

I was tempted to put the broken connector back on the cable.

The “repaired” connector

After screwing the metal part back on, the connector doesn’t seem broken anymore. I didn’t test whether it works. Maybe it does, or maybe it would be unstable.

The complete connector

That’s it. There’s nothing more to this!


Will it remain possible to upgrade laptops with SSD?

Last year, I upgraded my sister’s Thinkpad G500 with a SSD, which greatly increased its performance. The laptop also suffered from hardware issues because of its faulty DVD drive; removing the drive surprisingly cured it.

After this success, I was thinking about improving her boyfriend’s laptop, an HP machine that happened to be slower than the older Thinkpad. The presence of a McAfee virus scanner, possibly running on top of Windows Defender, wasn’t helping much, but the 5400 RPM hard drive was definitely making startup slow.

Figure out if we can install the SSD

Apple started making it harder and harder to upgrade their laptops, to the point that it is now nearly impossible on their most recent MacBooks, and other laptop makers may follow suit. It is thus important to verify if and how the hard drive can be upgraded before purchasing a drive!

The best way to figure this out is to search on Google for a hard drive replacement guide for the laptop model, here HP 15-bw028ca. However, this time, all I could find was the specifications of the laptop, and some YouTube videos showing how to disassemble similar laptops, but not that one! I searched for more than an hour before finding the maintenance manual for that laptop, and then I was able to get the information I needed.

The hard drive was installed behind a removable bottom cover. However, according to the manual, only the battery and optical drive should be removed by the user; everything under the bottom cover should be serviced only by an HP-authorized technician. Quite bad! But since that laptop wasn’t under warranty anymore, it was less of an issue. It did make it more important to carefully evaluate whether I could reliably remove that cover and put it back without breaking it. Without the cover, the laptop would at best look ugly, at worst not hold together anymore and not work!

Besides assessing the risk of disassembling the laptop to reach the hard drive without making it ugly or non-working, I needed to figure out the type of drive to install. The machine supports SATA 2.5″ drives, but it also accepts M.2 ones. HP used SATA hard drives but M.2 SSDs. However, M.2 requires the replacement of the connector, which is specific to HP. Getting the M.2 connector was likely to be problematic, so I decided to try with a SATA SSD, since both hard drives and SSDs can be SATA. I picked a 500 GB SSD and ordered it.

Replacing the hard drive

When I got the laptop and the SSD, I first examined the laptop a bit and figured out it looked like the one referred to by the manual I had found. There was a small difference, though: no optical drive, so the laptop wasn’t exactly the same as the one in the manual.

First I put the laptop upside down and removed the battery, using the latches. That operation was easy, as expected. Only MacBooks and some ultrabooks have sealed batteries that cannot be removed.

The HP laptop upside down
Latches holding the battery

After that, I had to find and remove all the screws holding the cover in place. There were screws pretty much everywhere, even under the four rubber pads and under the battery. Trying to pry the cover off, starting from the back near the battery slot, without removing all the screws risked breaking the cover or the chassis, making reassembly impossible. I was thus quite worried, and more and more concerned that the cover couldn’t be removed without a special tool.

Screws can be anywhere, including center and sides

Some attempts at removing that cover caused concerning cracking sounds. I was seriously worried about the possibility of breaking the laptop altogether. But at some point, the cover unclipped completely, revealing the inside of the machine.

Removed cover
Inside the machine

The hard drive is at the upper left corner of the above picture. It is held in place by a bracket screwed to the chassis, similar to the Thinkpad’s. I removed the screw and was able to disconnect the drive. Then I transferred the bracket to the SSD and put the SSD in there. Nice, it seemed to work. I was so sure it worked that I put the bottom cover and the screws back.

Unfortunately, when I turned on the laptop, I found out it didn’t detect the SSD at all. I first started Ubuntu from a USB key: that worked, but it couldn’t find the SSD. I then booted without the USB key and was told to install an OS or press F2 for diagnostics. I pressed F2, tried to start a test of the hard drive, and got a message saying there was no hard drive installed. Could it be that only HP-approved drives can be installed?
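From the live Ubuntu session, a couple of commands can confirm that a drive is simply not seen at the SATA level (a generic sketch, nothing HP-specific):

```shell
# List physical disks; if the SSD is absent here, the OS never saw it:
lsblk -d -o NAME,SIZE,MODEL
# The kernel log shows SATA link negotiation; no link at all suggests
# a connection problem rather than a faulty drive:
sudo dmesg | grep -i ata
```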

I thus had to remove the cover a second time. Before putting the old hard drive back and calling it a day (my sister’s boyfriend would have had to contact HP, which would check his warranty, then advise him to buy a new laptop), I removed the SSD and checked the connection. The first time, it had been a bit too easy to connect the drive: the drive wasn’t aligned with the SATA connector! On the second attempt, I felt a slight resistance showing that the drive engaged into the connector. I screwed it back in and tested, with the cover on but not the screws yet. After I checked that it worked and passed the self-test from the HP diagnostics, I put the screws back; the drive was installed and detected!

If I had just put the old drive back, without investigating the connection further, maybe it would have failed again, and I would have been stuck, unable to get the SSD working but also unable to put the laptop back into its original state. Every hardware modification carries such a risk; this must be carefully considered before attempting one. This is why I decided not to try my luck on my Lenovo Ideapad Yoga 13, which contains a too-small SSD; that one is trickier to access.

Another activation concern

Then came the time to install Windows 10. My plan was to use my USB-based installation medium as I did on other laptops. However, would activation work? It was possible that it would not, asking me for a product key instead. I could not get the product key unless I put the old hard drive back and logged in to the old Windows installation, which would require either the password of my sister’s boyfriend or a way to hack the installation in order to turn on the local administrator account. Even with that product key, activation could fail anyway, requiring me to call Microsoft and try by phone. This could have gone as far as requiring HP’s custom recovery partition, or the purchase of a new product key.

Fortunately, the simple installation worked like a charm. The installer didn’t ask me for any product key and after that, Windows was activated!

Some testing and post-installation steps

After that successful installation, I installed the drivers and tested the machine a bit. It was working and not crashing. I added Firefox and LibreOffice and I created the user account for my sister’s boyfriend, making sure it was set to Administrator and not Standard account.

Restoring Enigmail

My sister’s boyfriend uses Thunderbird and Enigmail to send encrypted messages to some of his friends. If we didn’t fully restore his configuration, he would have to generate a new private key, notify all his friends about that new key, and get back all their public keys. That is kind of annoying and inefficient, both for him and his friends. I thus wanted to restore this configuration, but we didn’t know where it was stored.

I had to install Enigmail on my machine to test, and figured out it uses GnuPG’s keyring. That keyring is located inside the Application Data folder, under the gnupg directory. I wasn’t sure he had found and backed up that directory, so I plugged his hard drive into a SATA to USB adapter and got the directory back. He would thus be able to copy it to the right location after installing Thunderbird and Enigmail.
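The backup itself is a plain directory copy. As a small sketch (the function name and paths are mine, not from any Enigmail tool; on Windows the Application Data folder is typically reachable as %APPDATA%):

```python
import os
import shutil

def backup_gnupg_keyring(appdata: str, dest: str) -> str:
    """Copy the GnuPG keyring directory that Enigmail relies on
    from the Application Data folder to a backup location."""
    src = os.path.join(appdata, "gnupg")
    target = os.path.join(dest, "gnupg")
    # copytree preserves the directory structure, including the
    # public keyring, private keys and trust database files
    shutil.copytree(src, target)
    return target
```

Restoring is the same copy in the other direction, done before starting Thunderbird for the first time.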

Why couldn’t I have set this back up completely? Because that would have required logging in to his account, which would have required his password. Then, to set up his GMail account, I would have needed his GMail password. It is important that passwords remain secret, even if both of us knew I would not misuse them afterwards.


Bumpy Windows 10 activation

For months, I had been thinking about a way to upgrade my parents’ computer to Windows 10. The machine was running a copy of Windows 7 that had failed to activate, so it required a crack. I wanted to both upgrade to Windows 10 and get a fully activated, genuine copy. However, the machine was four years old, so it seemed a bit pointless to pay more than 150$ for an OEM Windows that cannot be transferred to another PC. I thus put this off, since the computer was working correctly.

However, during summer 2019, I was forced to figure something out, because in January 2020, Windows 7 reaches end of life (no more support and security updates), and the program my mother was using for her taxes, ImpôtRapide, was discontinuing support for Windows 7 as well. So we needed a solution. Without my help, my parents would have been stuck going to a computer store that would likely tell them it’s sorry, but it’s pointless to install Windows 10 and it would be better (for the store’s cash flow…) to purchase a new computer. They have some at 400$. Yeah, and likely get a uselessly slow laptop with a small hard drive. My sister got caught this way: her old laptop, on which I replaced the hard drive with an SSD, is faster than her new one!

I thought about two possibilities to solve the Windows 10 issue:

  1. Pay 150$ for the OEM copy from Microsoft. It would have worked, but if the computer broke a couple of months later, we would need a new OEM copy in addition to the new computer!
  2. Convince my parents to buy a new computer with Windows 10 preinstalled. This would have been far simpler for me but more costly for them. My mother told me that some day she would like a laptop to transfer photos to my grandmother’s digital frame. But finding a good laptop requires time. It is easy to end up with something slow, bloated with crapware and difficult to upgrade.

But in June 2019, I found a third possibility: Kinguin, a website offering product keys for Windows, Office and several Steam games. There was a risk of getting a key that would not activate or, even worse, a completely non-working key, but if it worked, it would cost only 40$ per key. I took the plunge on Sunday, July 14 2019, and purchased both keys. It would have been smarter to purchase just the Windows key and try it first, but I wanted to have both keys while I was at my parents’ place.


Before attempting the installation, I used Clonezilla to make a full backup copy of the SSD that would be formatted. The disk contained Windows 7 and a few programs. In case of a total and catastrophic failure, such as a non-working activation causing repeated errors or an unexpected but super shocking hardware incompatibility, it would be possible to restore Windows 7 in less than half an hour, as opposed to fully reinstalling it. Of course, that would have been temporary; a new attempt at installing Windows 10 would have been needed later.

Even that operation sucked and caused issues, because for some reason, the USB stick Clonezilla is installed on got corrupted (this has happened at least three times since I started using Clonezilla, most likely because of flaky USB keys, not the program itself) and some programs failed to load. The image checking program failed, reporting that the images were broken. I had to reformat the stick and reinstall Clonezilla on it with Tuxboot, then try again. The second time, Clonezilla reported the images as fully restorable.

I was expecting the new installation to be fully UEFI compliant, as opposed to the Windows 7 setup, for which I had needed to fall back on BIOS/MBR because of the crack used for activation. I wanted no trace of the MBR at all and wasn’t sure Windows would remove it, so I booted the machine using a USB key containing Ubuntu MATE, and used GParted to create a new GUID Partition Table (GPT). That destroyed pretty much all the data on the SSD.
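The same preparation can be done from the command line instead of GParted (a sketch; /dev/sda is an assumption for the SSD’s device name, and this destroys everything on the disk):

```shell
# Replace any existing MBR partition table with a fresh GPT:
sudo parted /dev/sda mklabel gpt
# Verify the new, empty partition table:
sudo parted /dev/sda print
```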

Installation but no activation

After the preparation, I successfully booted another USB key, this one containing the Windows 10 installation medium I had recreated a couple of days earlier; there is now a free tool from Microsoft allowing that. It contained the latest updates, so no never-ending installation of updates like in Windows 7. The boot happened correctly, in UEFI mode. To make sure the USB key booted in UEFI, I pressed F8 at computer startup to get the boot menu and picked the UEFI entry for the USB key. I was then able to proceed with the installation, and the product key from Kinguin worked without any issue. It was shipped as a scanned or photographed sticker with the key written on it. Using my laptop, I displayed the image, zoomed in until I could see the 25 characters, and typed them in.

After the installation completed without issues, I was asked to log in with a Microsoft account. My mother wanted to use the email address provided by her ISP, so I tried that. The system said she already had an account. I didn’t know she had a Microsoft account. Fortunately, she managed to remember the password and we could connect it to the new Windows 10 installation, so there was no need to create a new account with a Microsoft email address or attempt a password recovery.

Then I tried to install the drivers. The Intel graphics driver from ASUS failed to install. I then figured out that all devices were working and decided to stop trying to install drivers. Graphics were OK, audio was working, network as well, except maybe for a rather slow Internet connection. At the time of writing this post, I was starting to second-guess myself: maybe I should have installed the Ethernet driver.

Then came the dreadful part: was that new installation activated? In order to determine that, I pressed the Windows and Pause keys simultaneously to access system properties, searched a bit, and found, at the bottom, a message saying that Windows was not activated. Oops! I found a button to activate, clicked it, was offered activation by Internet or by phone, naively tried the Internet activation, and it failed. A concerning error message stated that the key might be in use on another PC. Ouch! The system then proposed to purchase a key on the Windows store.

I was kind of stuck, not knowing what to try next, and the forum posts I found on Google didn’t help at all. One stated that Kinguin keys come from volume licenses: they may work, they may not, they may work for some time and then stop. Ah! No!
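For what it’s worth, Windows ships a licensing script, slmgr, that reports exactly this kind of information from an elevated command prompt; a sketch of the relevant calls:

```shell
rem Is the installation permanently activated?
slmgr /xpr
rem Show the partial product key and, importantly, the license
rem channel (Retail, OEM, Volume) those forum posts speculate about:
slmgr /dli
rem Retry online activation:
slmgr /ato
```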

In order to evaluate the extent of our losses, I tried to install Microsoft’s Office 2016. For this, I used my mother’s Microsoft account to log in to the Office website and found an option to enter a product key. This time, it was possible to just copy/paste the key. That got me a download link for Office 2016. I downloaded the program and installed it. I don’t remember whether I had to paste the key again in the installer, but what I do remember is that the installation was awfully long. Something seemed to be throttling my parents’ Internet connection, maybe Videotron because my parents chose a lower-end plan; I’m not sure. Anyhow, the installation succeeded, but activation, again, failed.

The pain of the phone-based activation

The next step was to try to activate by phone before contacting Kinguin. I thus restarted the Windows activation wizard and selected the option to activate by phone. I was asked for my country and given a phone number. I first reached an automated system asking whether I had already activated Windows, whether I had replaced some hardware, etc., but no matter what I picked, I ended up with an operator. I had to identify myself to her: name, phone number, email address. Then I needed to provide the product key. This was a long and painful process, as the phone line or my parents’ cordless handset was causing sound issues. I had to repeat several parts of the 25-character sequence. But that ended, and she got the whole thing checked. It was an OEM key, the kind used by sellers like Lenovo, HP, etc., but the key was valid and usable! Phew! But it would need a phone-based activation.

That procedure consists of stating the installation ID, which is split into eight groups of six digits. The operator had me utter and confirm the digits; then, despite my doubts about having communicated that awfully long sequence correctly, we generated the confirmation code, which I entered on the second page. That one is also a long sequence of digits split into groups. I used the keypad to type the digits directly into the fields, no pen and paper for that! Then, expecting a long verification process, I clicked on Activate and got the thing activated. YEAH!

After a small break, a couple of glasses of water, and a short walk, I came back to the computer for the second part: Office. That one followed pretty much the same principle, with different challenges. I again needed to pick a country, then got a phone number. This time, the system was fully automated. My past experience with automatic speech recognition told me that errors are perfectly possible, so I took care to speak as clearly as possible while dictating the installation identifier, again a sequence of numbers. The confirmation code given in response was read back relatively fast, so I had to be careful not to miss any number. The keypad was essential to get this done flawlessly. After I entered the whole confirmation code, I clicked the button to activate, and that worked!

Both Windows 10 and Office 2016 were now activated!

After this installation and bumpy activation succeeded, I created a second Clonezilla image. If something bad causes the installation to be corrupted in the future, it will be easy to restore it from the image without having to reinstall and reactivate.

Various small problems

When my mother tried to access her Facebook account, she got a completely different UI with no access to games. This was because Chrome opened instead of her usual browser. After that, the games took a long time to load and one of them failed, but this was caused by connection issues, not Windows 10.

There was also a strange issue with the Volume icon in the notification area. The icon just disappeared the day after the installation of Windows 10. I searched for a while to figure out how to solve this. There is an option to disable system icons: all system icons were enabled, including Volume. There is a group policy to disable the icon: it was turned off. The solution was to unlock the taskbar and expand it to two rows; then the icon showed up. Going back to one row, the icon stayed visible.

Fonts are smaller than on Windows 7. I thought I was going crazy or losing my sight because of too much time spent in front of my computer, but no, my mother also found the characters smaller than before. We searched and searched; there is no way to enlarge them besides changing the DPI scaling. But bumping up the DPI causes her Scrabble game not to fully fit on the screen. Part of the problem is the too-small screen, running at 1440×900 as opposed to a full HD 1920×1080 display.

I suspect the installation, besides activation, was too smooth. Like with my own systems, problems will happen after the fact. Hopefully, things will not be too bad, but we don’t know.


A dying laptop

My Lenovo Ideapad Yoga 15 is dying.

It happened for the second time today, and that’s quite concerning. Suddenly, the machine freezes: the keyboard stops responding, the mouse moves but clicking does nothing. There is no way out other than rebooting, and when I did, the screen went black, then showed an error message displayed in white on a blue background.

The machine was telling me that there was no boot device or that boot failed. What? This Lenovo laptop has a built-in mSATA SSD. I never disassembled the laptop, so why would the SSD suddenly become faulty, or the connection between the SSD and the motherboard become unstable? Fixing either would require disassembling the laptop, with the risk of not being able to put it back together afterwards. The keyboard that needs to be removed to reach the components is clipped, and the clips can break when trying to remove it.

As last time, during the holidays, powering the laptop off and booting it up again “fixed” it. But who knows how long that will last. Maybe it will fail tonight, maybe next week, I don’t know. And next time it fails, maybe it won’t boot up again.

Besides that, the battery barely lasts more than two hours, and the wi-fi is now so clunky that most of the time I need to plug in a USB-to-Ethernet or 802.11 adapter.

I wonder whether it is Windows 10 that is killing this machine. As soon as I upgraded, the laptop became slower. A few months later, the battery life was reduced. If I could make Ableton Live work on Linux, I would attempt a switch to Ubuntu, but nope, no Linux version. All major DAWs have only Windows or Mac builds, no Linux. Things may change in 10-20 years, but that doesn’t matter to me; my passion may not be there at that time.

I’m kind of hopeless now about getting any type of working laptop in the next 10-20 years. They all suck. For any candidate, any, I can find bad reports on some forums. People are slowly but surely stopping using computers and/or migrating to Mac, which I cannot do because of font size issues. Maybe home computing is coming to an end, replaced by “smart” but limited phones and tablets not offering any efficient way of typing text. People work around this by posting meaningless pictures without any description; that’s what I see every day on Facebook: XYZ added to his story, no text, just a picture. I came to the point of disabling Facebook notifications on my smartphone and debating the possibility of closing my Facebook account.

But I want a solution. As a computer scientist, I just cannot give up having a computer; that makes just no sense to me.


Ubuntu 18.10: a silent release

Usually, upgrading Ubuntu goes well. I run the Upgrade tool which downloads new packages, installs them and then asks me to reboot. Some new versions had minor issues, for example Wayland not fully compatible with my NVIDIA card or an old version of MATE preventing the dist-upgrade, but nothing major, nothing that couldn’t be worked around. This time was different: no sound, and no way to easily get around this.

Cannot dist-upgrade

Sunday, October 21, 2018: Ubuntu 18.10 had been released a couple of days earlier. However, after I installed the updates through the Software Update tool, I wasn’t offered the upgrade. I had to dig into the software options and reconfigure the delivery of releases to “every release”, not just the LTS. Then I reran the Software Update tool and got the option to upgrade. However, clicking on the upgrade button did just… nothing. I tried two or three times: same result.
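For reference, the “every release” setting I toggled in the GUI corresponds to the Prompt line in /etc/update-manager/release-upgrades, and the availability check can also be done from a terminal. A minimal sketch, assuming a standard Ubuntu setup:

```shell
# Offer every new release, not only LTS-to-LTS upgrades.
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

# Ask the upgrade tool whether a new release is available.
do-release-upgrade -c
```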

Before accepting that this time I would have to go the clean-install route, I searched a bit on Google and found out that sometimes the upgrade doesn’t start while updates are still available. But updates were all installed. Nevertheless, running the following in a terminal found and installed a couple of additional updates.

sudo apt update
sudo apt dist-upgrade

I then retried the Upgrade button, and that worked! Yeah!

The upgrade went well, and then I was offered to reboot, which I did.

Slow and silent

At first the new release felt a bit sluggish, then pretty flaky. The first time I tried to reboot, because I needed to go to Windows to play Minecraft loaded by Twitch and recorded through OBS, the system hung waiting for a stop job. I get this issue from time to time, and when it happens, I have to wait for the 1min30s timeout to elapse before the stop job is forcibly killed. I don’t know what a stop job is, which job is frozen, or how I could get rid of this issue for good.

I was in my living room turning off my TV and AV receiver while my new Ubuntu setup finally rebooted, and I didn’t have time to select Windows in GRUB, so it restarted Ubuntu, and trying to restart triggered the “stop job” issue again! Argh, don’t tell me I’ll get this each time I reboot or shut down now? I’ll have to wait 1min30s, maybe even 3min, just to power off or reboot. What’s the point of having an SSD if timeouts like this cancel out its performance benefits? I got fed up and powered off my PC with the power button. I had to press and hold the button almost ten seconds for the machine to finally turn off, then press it 2-3 times for it to turn back on. Maybe I’m heading towards the necessity of replacing my computer case, which annoys me to the point of thinking about getting rid of that damned thing and using just a laptop. But that case issue has nothing to do with Ubuntu; I should just have picked a better case, there’s nothing more to it, at least for this post.

After my Minecraft session, which was kind of fruitful, I wanted to check that my Ubuntu installation would boot again and be able to shut down at least once without the “stop job” issue. I thus rebooted to Ubuntu, but the system froze before showing the welcome screen. I had to press Ctrl-Alt-F2 to reach a console, log in, and check the syslog: nothing of interest. Then the system finally booted into X. I wanted to switch back to the console to log off, but when coming back to X, it froze again on a black screen; this time no key was working. Then another hard reboot!

The second attempt worked: I reached the desktop. I launched the backup script for my Minecraft world, then the MKV to MP4 batch conversion script. OBS records in MKV, which is more robust against crashes, but VideoStudio doesn’t accept MKV, so I have to turn the recordings into MP4. Fortunately, FFmpeg does it without loss, by just repackaging the streams into another container. Then I wanted to organize my videos into directories, so I launched one to check it, and found THE thing: no sound!
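The lossless MKV to MP4 repackaging my batch script performs boils down to an FFmpeg stream copy; here is a minimal sketch (the file names are placeholders, not my actual script):

```shell
# Copy the audio/video streams into an MP4 container without re-encoding.
ffmpeg -i recording.mkv -c copy recording.mp4

# Batch version: convert every MKV in the current directory.
for f in *.mkv; do
  ffmpeg -i "$f" -c copy "${f%.mkv}.mp4"
done
```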

Checking pavucontrol, I found that my sound card had disappeared. The only detected devices were the HDMI port hooked up to my monitor and a crappy USB webcam that could serve as a very basic and rudimentary mic. I don’t use that for my Minecraft recordings; I have a real AKG microphone for that! I tried to reboot, to no avail. Trying to run speaker-test just hung; again, I needed to press Alt-F4 to close the terminal.

Searching on Google for solutions led to nothing except old stuff that didn’t work. Some people restored sound by reinstalling ALSA and PulseAudio. Others had to downgrade to the previous kernel. Others edited configuration files, commenting out a line that is not there in my case and adding another line. I tried to reinstall PulseAudio with sudo apt install --reinstall pulseaudio, to no avail. As a last resort, I tried to reboot with the 4.15 kernel of Ubuntu 18.04, again without success.

After almost an hour of searching, it was more and more obvious that I needed to give up on Ubuntu 18.10 and either downgrade (which essentially means reinstall) to Ubuntu 18.04, or switch to a new distribution. I was quite annoyed: the preceding upgrades had gone right, and now the hardware problems were starting again like in the past. Moreover, the system was freezing for a second each time I hit Tab in a terminal, before displaying completions.

Can this work at all?

I had to go to sleep (it was past 11 PM) and go to work the day after, but I thought about it. First I needed to test whether this could work at all! For this, the simplest solution is to test using a live USB. Monday morning, I had to resist the temptation of testing that before going to work. Doing so would have made me start late and thus finish work late in the evening. So I did my workday first. On Monday evening, I downloaded the Ubuntu 18.10 ISO; I picked the MATE one, since that’s what I’m using now, instead of that GNOME 3 thing which works so-so. I got the ISO and used the disk creator tool built into Ubuntu to write it to a USB drive. However, the tool refused to format my drive: too small for the new 2 GB ISO! That old 2 GB key was just a bit too small, which is kind of annoying. As a smart man, I should have extra empty, unused USB keys hanging around, but it seems I’m not smart enough for that! So I had to use an existing key.

So I took the Clonezilla key, a 16 GB device, really too large for that small backup tool. I stored Ubuntu on it and installed Clonezilla on the old 2 GB stick. Then I booted off my new Ubuntu medium; the system booted successfully, and I got sound! OK, so at worst, if I clean install, I should get sound… except maybe if there is an incompatibility with the NVIDIA driver.

Unlocking ALSA

First I ruled out the NVIDIA hypothesis by uninstalling the NVIDIA proprietary graphics driver. After a reboot, I tested and still had no sound, and was back to a default VESA resolution. I thus reinstalled the driver, a bit annoyed that Nouveau cannot even basically handle my graphics card. I don’t expect full 2D/3D acceleration out of Nouveau, but at least mode switching should work with a card bought in 2013, reaching a 1080p resolution. No, nothing. But at least there was no incompatibility between the graphics driver and the sound driver.

I then dug further into ALSA, which was still detecting my sound chip. Here is the list of devices it found:

eric@Drake:~$ aplay -L
    Playback/recording through the PulseAudio sound server
    Discard all samples (playback) or generate zero samples (capture)
    PulseAudio Sound Server
    HDA Intel PCH, ALC887-VD Analog
    Default Audio Device
    HDA Intel PCH, ALC887-VD Analog
    Front speakers
    HDA Intel PCH, ALC887-VD Analog
    2.1 Surround output to Front and Subwoofer speakers
    HDA Intel PCH, ALC887-VD Analog
    4.0 Surround output to Front and Rear speakers
    HDA Intel PCH, ALC887-VD Analog
    4.1 Surround output to Front, Rear and Subwoofer speakers
    HDA Intel PCH, ALC887-VD Analog
    5.0 Surround output to Front, Center and Rear speakers
    HDA Intel PCH, ALC887-VD Analog
    5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    HDA Intel PCH, ALC887-VD Analog
    7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    HDA Intel PCH, ALC887-VD Digital
    IEC958 (S/PDIF) Digital Audio Output
    HDA Intel PCH, ALC887-VD Analog
    Direct sample mixing device
    HDA Intel PCH, ALC887-VD Digital
    Direct sample mixing device
    HDA Intel PCH, ALC887-VD Analog
    Direct sample snooping device
    HDA Intel PCH, ALC887-VD Digital
    Direct sample snooping device
    HDA Intel PCH, ALC887-VD Analog
    Direct hardware device without any conversions
    HDA Intel PCH, ALC887-VD Digital
    Direct hardware device without any conversions
    HDA Intel PCH, ALC887-VD Analog
    Hardware device with all software conversions
    HDA Intel PCH, ALC887-VD Digital
    Hardware device with all software conversions
    HDA NVidia, HDMI 0
    HDMI Audio Output
    HDA NVidia, HDMI 1
    HDMI Audio Output
    HDA NVidia, HDMI 2
    HDMI Audio Output
    HDA NVidia, HDMI 3
    HDMI Audio Output
    HDA NVidia, HDMI 0
    Direct sample mixing device
    HDA NVidia, HDMI 1
    Direct sample mixing device
    HDA NVidia, HDMI 2
    Direct sample mixing device
    HDA NVidia, HDMI 3
    Direct sample mixing device
    HDA NVidia, HDMI 0
    Direct sample snooping device
    HDA NVidia, HDMI 1
    Direct sample snooping device
    HDA NVidia, HDMI 2
    Direct sample snooping device
    HDA NVidia, HDMI 3
    Direct sample snooping device
    HDA NVidia, HDMI 0
    Direct hardware device without any conversions
    HDA NVidia, HDMI 1
    Direct hardware device without any conversions
    HDA NVidia, HDMI 2
    Direct hardware device without any conversions
    HDA NVidia, HDMI 3
    Direct hardware device without any conversions
    HDA NVidia, HDMI 0
    Hardware device with all software conversions
    HDA NVidia, HDMI 1
    Hardware device with all software conversions
    HDA NVidia, HDMI 2
    Hardware device with all software conversions
    HDA NVidia, HDMI 3
    Hardware device with all software conversions

The speaker-test command was hanging, but if I waited long enough after pressing Ctrl-C, it returned. OK, cool! Running pavucontrol while speaker-test was playing its noise, I could see it among the PulseAudio applications. OK, so speaker-test was going through PulseAudio.

What if I switched devices? I tried speaker-test -D sysdefault:CARD=PCH and got an error because ALSA couldn’t open the device. Searching on Google about that, not specifically about Ubuntu, led to clues: it could be a permission issue. Trying to check /dev/dsp failed: no such file. But I could find the following.

eric@Drake:~$ ls -ld /dev/snd/*
drwxr-xr-x  2 root root       60 oct 23 08:11 /dev/snd/by-id
drwxr-xr-x  2 root root      100 oct 23 08:11 /dev/snd/by-path
crw-rw----+ 1 root audio 116,  9 oct 23 08:11 /dev/snd/controlC0
crw-rw----+ 1 root audio 116, 15 oct 23 08:11 /dev/snd/controlC1
crw-rw----+ 1 root audio 116,  8 oct 23 08:11 /dev/snd/controlC2
crw-rw----+ 1 root audio 116,  6 oct 23 08:11 /dev/snd/hwC0D0
crw-rw----+ 1 root audio 116, 14 oct 23 08:11 /dev/snd/hwC1D0
crw-rw----+ 1 root audio 116,  3 oct 23 08:11 /dev/snd/pcmC0D0c
crw-rw----+ 1 root audio 116,  2 oct 23 19:23 /dev/snd/pcmC0D0p
crw-rw----+ 1 root audio 116,  4 oct 23 08:11 /dev/snd/pcmC0D1p
crw-rw----+ 1 root audio 116,  5 oct 23 08:11 /dev/snd/pcmC0D2c
crw-rw----+ 1 root audio 116, 10 oct 23 08:11 /dev/snd/pcmC1D3p
crw-rw----+ 1 root audio 116, 11 oct 23 08:11 /dev/snd/pcmC1D7p
crw-rw----+ 1 root audio 116, 12 oct 23 08:11 /dev/snd/pcmC1D8p
crw-rw----+ 1 root audio 116, 13 oct 23 08:11 /dev/snd/pcmC1D9p
crw-rw----+ 1 root audio 116,  7 oct 23 08:11 /dev/snd/pcmC2D0c
crw-rw----+ 1 root audio 116,  1 oct 23 08:11 /dev/snd/seq
crw-rw----+ 1 root audio 116, 33 oct 23 08:11 /dev/snd/timer

OK, interesting: only root and users of the audio group can access these devices, and thus play sounds. Was I part of the audio group? I ran groups eric and found out I wasn’t. But on my HTPC, still running Ubuntu 18.04, I was! Luckily I had that HTPC; otherwise I would have been forced to reboot to the live USB to check, or boot another machine with it, like my ultrabook.

Ok, so what if I add myself to the group?

sudo usermod -a -G audio eric

I had to log out and back in, and then got sound through ALSA using speaker-test! Yeah! But still no PulseAudio! I tried to reboot, not just relog, to no avail. OK, at least I could reconfigure Audacious to play through ALSA, but I suspected I’d get trouble with YouTube and Spotify, for which I cannot configure the audio output.
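For the record, this is roughly how I verified the fix; the card name PCH comes from my machine’s aplay listing and may well differ elsewhere:

```shell
# Confirm the new group membership is effective (requires a fresh login).
groups | grep -w audio

# Play one round of test noise directly through ALSA, bypassing PulseAudio.
speaker-test -D sysdefault:CARD=PCH -c 2 -l 1 -t wav
```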

Unlocking PulseAudio

I tried all sorts of things to debug this. I was able to get some logs out of PulseAudio several ways, but the most useful was through SystemD. In Ubuntu, PulseAudio is started by a user-space instance of SystemD. There is a command to retrieve the logs: journalctl. I couldn’t figure out a way to filter, so I ended up calling journalctl -a -f on one console, and then pulseaudio -k on another to force PulseAudio to restart. Then I checked the produced log. I ended up finding errors: the system was not able to communicate through DBus using a certain socket. I started to think that each time the system froze, it was because PulseAudio tried to emit a sound and had to wait and time out.
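The two-console setup looked like this; on setups where PulseAudio runs as a SystemD user service, journalctl’s --user flag narrows the output to the user-level instance, which is the filtering I was missing at the time:

```shell
# Console 1: follow the user-level journal where PulseAudio logs land.
journalctl --user -f

# Console 2: kill the PulseAudio daemon so it restarts and logs its
# startup errors (DBus socket failures included) to console 1.
pulseaudio -k
```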

I couldn’t find out how to reconfigure DBus, or understand how PulseAudio and DBus interact well enough to troubleshoot this. I was quite stuck, and all I could find was deprecated information. There was a bug report that seemed to affect Ubuntu 18.10, but no solution. It was past 10 PM and I had been on this for almost two hours, trying and searching to no avail, getting more and more pissed off and risking ending up so angry that I would have trouble sleeping. I was about to give up or reinstall from scratch.

Fed up, I decided to completely get rid of PulseAudio using sudo apt purge pulseaudio. After a reboot, sound was WORKING! What??? How come MATE plays sound without PulseAudio? Audacious was working, and VLC as well. But not Firefox for YouTube, and not Spotify.

Before accepting the sacrifice of YouTube and Spotify to avoid a clean install, at least for now, I tried to reinstall PulseAudio. For a reason I really don’t know, the sound kept working, and PulseAudio now displayed my internal audio device. YouTube and Spotify resumed working. I had read posts from people who got the issue fixed only to have it come back on the next reboot. OK, let’s reboot and hammer that bug for good, then! I rebooted, and sound was still working. I still don’t fully understand why.
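Reconstructed, the sequence that ended up fixing the sound was simply the following; my guess at why it works is in the comments, not a guaranteed remedy:

```shell
# Purging removes PulseAudio together with its configuration files...
sudo apt purge pulseaudio

# ...and reinstalling recreates them with pristine defaults.
sudo apt install pulseaudio

# Check whether the daemon is running (exit code 0 means yes).
pulseaudio --check
```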

Either something messed with the groups I was a member of and PulseAudio got screwed up because I lost permission to use ALSA, or some configuration files needed to be updated but APT decided to keep my current versions. Purging PulseAudio removed the configuration files, and reinstalling reverted to sane defaults. At least sound is working now.

Even better, the system seems more responsive now and didn’t freeze on startup. It really seems that PulseAudio and ALSA had trouble communicating, causing these hangups.

Why not a clean install?

Because my home directory lives on the same partition as my Ubuntu install. Any attempt to put my /home on a separate partition leads to insufficient disk space after a while. I have a 250 GB SSD shared with Windows, so I cannot allocate 50 GB for my /home and 30 GB for just Ubuntu. One simple solution would be to move my /home to a hard drive. As long as Ubuntu is on an SSD, I’ll have a fine boot time. Or I would need a way to use the SSD only as a cache and put everything on the hard drive. I could also come back to the dual-SSD strategy: one SSD for Windows, one for Ubuntu. I’ll think about it, but at least I don’t have to do anything short term. Maybe I can wait and replace everything, and get a better case to fix the power button hiccup at the same time…

If everything had failed, before the clean install, I could have tried to restore my Clonezilla image of the SSD, which would have got me Ubuntu 18.04 back to normal. In case of failure, I would have had to clean install 18.04, 18.10 or something else, or just give up on Linux for the moment. At least this is not necessary anymore.


Trying to salvage a lemon

My sister got a Thinkpad G500 from Lenovo pretty much at the same time I purchased my Ideapad Yoga 13 from the same brand. My Ideapad, while suffering from battery life and wi-fi issues and being stuck with a too-small SSD, still performs reasonably. It still boots fast and doesn’t exhibit hardware issues. The upgrade to Windows 10 seems to be the change that made it worse, but it still works… compared to my sister’s G500. That machine was painfully slow and, I discovered, suffered from various hardware issues I didn’t know about.

Assuming the only issue was slowness, I proposed to my sister to replace the laptop’s hard drive with an SSD. Before doing so, however, I checked in the laptop’s user manual how to replace the drive. It was possible by removing the back cover of the machine and unscrewing the drive: no need to disassemble the keyboard or remove the memory, the display, etc., like on my Ideapad or, even worse, on a MacBook. If there is too high a risk of breaking something while disassembling the laptop to reach the hard drive, it is better to leave that machine alone and invest in a new one later on. This is what I chose to do for my Ideapad, because of brittle keyboard clips that could break completely, preventing the keyboard from holding in place afterwards. I also needed to make sure a 2.5″ SATA SSD would fit, because maybe I would need an M.2 instead. If something intermediate is needed, like mSATA, the upgrade is less worthwhile. A SATA or M.2 SSD can at least be reused in another system if the target laptop fails later on.

The SSD doesn’t fit! WHY?

I started to work on the laptop January 5, 2018. I first removed the battery, unscrewed the back cover and located the hard drive. It was attached to the system using a bracket screwed into the chassis. I removed the screw and was able to disconnect and pull off the drive. I then needed to unscrew the bracket from the hard drive and screw it onto the SSD instead. That part didn’t go well. No matter how hard I tried, I wasn’t able to fit the screws. I thought the laptop had a 1.8″ hard drive instead of a 2.5″, but no, it ended up working: the SSD had something written on both sides, and I was trying to install the bracket on the wrong side. Once the bracket was screwed on, I connected the SSD to the laptop.

After I installed the SSD, I put back the cover and the battery. Then I turned on the laptop and booted from a USB key containing an Ubuntu live session. I used that to run a SMART check on the SSD, making sure it wasn’t flawed right from the start. This is more likely to happen with a hard drive, but it CAN happen with an SSD as well. Doing a self-test before installing anything can thus save a lot of time.
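The SMART check itself can be done with smartctl from the live session; it comes from the smartmontools package, and /dev/sda is an assumption, so check with lsblk which device is the SSD first:

```shell
# Launch a short SMART self-test on the SSD (adjust /dev/sda as needed).
sudo smartctl -t short /dev/sda

# The short test typically takes about two minutes; then read the verdict.
sleep 120
sudo smartctl -H /dev/sda            # overall health assessment
sudo smartctl -l selftest /dev/sda   # self-test log with results
```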

Stuck at Lenovo logo, requiring RMA

After that check, I inserted my USB-based Windows 10 installation medium. I got this medium from Microsoft; it can now be downloaded freely, as opposed to previously, when it required purchasing a CD or DVD. Getting the medium is now easy; the challenge is getting the activation to work.

But before tackling the activation issues, I had to make this laptop boot the USB medium. Instead of booting, the laptop just froze at the Lenovo logo. The only thing I could do was hit Ctrl-Alt-Delete, get a blank screen, and see the logo come back again. I tried to power the laptop off and back on, to no avail.

I searched on Google and found other occurrences of this issue. Several people were experiencing that problem, with no other solution than contacting Lenovo’s technical support and getting the laptop replaced… when it is under warranty. It seems more and more that once the warranty is over, a laptop is a piece of crap that is just good to be thrown away, which annoyed me quite a lot. Fortunately, after a couple of attempts of removing the battery, putting it back in, and rebooting again and again, I got past the frozen Lenovo logo. After that hurdle, I was able to install Windows 10.

Windows 10 asked me to connect to a Microsoft account. I used mine to perform the installation, but I knew I would need to do something to hook up my sister’s account, so she could log in without me having to give her my main password.

Activation working without effort? Strange…

After I finished installing Windows 10, I noticed from the system properties that it was activated. Cool. However, I found that, without even asking me, the system had used the same product key as my own Lenovo Ideapad Yoga 13 ultrabook. I was worried that Microsoft’s tool for creating the USB medium had customized the installation key with my product key, so that I was attempting to activate Windows on multiple computers with the same key. Maybe at some point, Microsoft would detect that and deactivate one of the two copies, either mine or my sister’s, after she got her laptop back.

After a significant amount of time wasted searching on Google, I figured out that this situation had happened to others. It seems that the product key is shared by several laptops of a given brand. Either the key is hard-wired somewhere in a read-only store of the machine, maybe the trusted platform module, or there is a registry at Microsoft of OEM machine IDs mapped to product keys. Anyhow, my sister got a fully activated Windows 10 without any effort from me.

I was worried I would need to have her purchase a new license key or reactivate her current license by phone. Or, maybe even worse, the current license would work for Windows 8, not Windows 10, and my sister would prefer to get Windows 8 back. All my concerns went away with this effortless activation, but keep in mind things will not always be as smooth.

Looping Windows Update

That one caused me a lot of wasted time. There was a bug with one of the updates, which failed to install. However, the update was partially installed. The failure caused Windows to retry installing the update each and every time the computer was shut down or rebooted. This pretty much wiped out the benefits of having an SSD because of increased shutdown, reboot and even boot times, since at boot, Windows was finalizing the installation of the faulty update and failing!

I searched a long time for that one and tried to manually install the update, to no avail. Some forum posts referred to that problem without a known solution. Sometimes it worked after a lot of attempts. Other times, it required reinstallation. I even read a post from a user who called Microsoft, got a new ISO image of Windows 10, installed that, and the update worked! But then, why is there a media creation tool if we need to get an ISO from elsewhere? Will everybody reinstalling Windows 10 from scratch need to call Microsoft to get that alternate ISO? Really? That is a serious bummer in my opinion. I thought about asking for a link to that ISO, but that would have given me the English version while my sister wanted the French one.

At some point, I got so pissed off that I started searching for a way to disable the automatic updates. This is possible by changing group policies… on the Pro version of Windows 10. My sister had the Home version. But the problem didn’t happen on my systems, probably because I started from Windows 8, then got 8.1, then 10. Maybe I’d need to go that same route on that Thinkpad?

Fortunately, there is a tool from Microsoft called Show or hide updates. I installed that tool, and told it to hide the offending update. That fixed the issue without having to tentatively reinstall, call Microsoft or try to install Windows 8, then upgrade to 8.1, then upgrade to 10!

Short-term laptop to be replaced after warranty?

What happened after the installation of Windows 10 pretty much led me to believe what this section’s title states. I was really shocked and annoyed, questioning my trust in Lenovo. A PC is not a simple appliance you just plug in and start. It has settings, it has applications installed; there is sometimes even a physical configuration to get used to. I cannot afford having to fully replace that configuration every year or two! That’s serious nonsense, not counting the very bad ecological impact of such a short-sighted offering. I know, it was not my laptop, but my next laptop could very well suffer from these more and more common flaws.

The freezing at the Lenovo logo was just the tip of the iceberg! I then discovered that only Windows 8 drivers were offered for that Thinkpad G500 on the Lenovo website, no Windows 10 drivers at all. However, the preinstalled drivers coming with Windows 10 allowed the machine to pretty much work. Later on, I found out that the DVD drive was broken, not detected at all by Windows. Moreover, the webcam was broken, showing a black screen.

The wi-fi started to go bad, the mouse pointer started to move erratically; the machine just became totally unusable. I had to plug that laptop into an external keyboard, a mouse and even an Ethernet cable.

Again, for the DVD drive and the webcam, the only solution was to contact Lenovo and get replacements, IF the laptop was still under warranty. At this point, I was stuck. Without another idea, I would have had to give up and tell my sister that the best solution was to throw that laptop away.

Pulling some things off

I noticed a piece of black tape where the webcam sits. Maybe that tape wasn’t supposed to be there and could simply be removed. I peeled it off and started the Camera application again: I got an image, yeah!

I then found out that some people were having issues with both the DVD drive and the freezing at boot. Could the two be related? If the DVD drive is flaky, it can slow down or even prevent boot, because the BIOS/UEFI queries it to figure out whether a disc is inserted.

The solution was quick and simple: just remove the optical drive from the machine. Yes, really, pull it out. After I did that, the laptop booted flawlessly. I tested it several times, including reboots, without any freezing. It also seemed, although I didn’t benchmark it formally, that boot was faster. There would be a hole in the laptop casing instead of the DVD drive, because I didn’t have another drive or a dummy filler to put in the bay. But at least it would work.

Transferring ownership without my password

A small but significant step remained: how could my sister log in to her “new” laptop without me giving her my personal password so she could set up her Microsoft account, and without her giving me her password so I could connect to her account? There are two clean ways to solve this, and I implemented both to be sure.

  1. Create a new account based on a known email address linked to a Microsoft account. I knew my sister’s email address and was sure she had used it to create her Microsoft account, so I just had to create the account with that address; no need to enter the password until the first login. There are two pitfalls though. Firstly, the person needs to be connected to the Internet for that first login, and I wasn’t sure Wi-Fi would work; maybe only wired Ethernet would, since you need to be logged in to set up Wi-Fi! Secondly, the created account is not an Administrator by default; I had to fix that so my sister would be able to install new programs on her machine.
  2. It is still possible to create a local account, so I did that and set a dummy password my sister could change later, or she could simply remove the local account altogether. Again, I needed to make sure the local account was in the Administrators group; by default it is not!
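The second option can also be done from an elevated command prompt instead of the Settings app. A minimal sketch, assuming a hypothetical user name (sister) and a placeholder password to be changed at first login:

```cmd
:: Create a local account with a dummy password.
net user sister ChangeMe123 /add

:: Local accounts are not administrators by default,
:: so add the new account to the Administrators group.
net localgroup Administrators sister /add
```

Note that on a non-English Windows installation, the Administrators group may carry a localized name (for example "Administrateurs" on a French system).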

My sister was amazed at the speed increase we achieved by replacing the hard drive with an SSD. She thought that laptop was good for the trash can before I fixed it. Even funnier, her boyfriend later got an HP 15-bw028ca that, although more recent, happened to be slower than that old, fixed Thinkpad!

Eventually that HP piece of junk will benefit from a similar treatment. Maybe that will be worth another post.