Adventures with Windows 11

Like many of you, I have been seeing the press and blog posts about Windows 11, and thinking meh, it's not for me, Windows 10 is fine. Of course, to be fair, I'm not a typical Windows user – my platform of choice is OpenSUSE Leap with the fvwm window manager, which is basically insanely simple, and gets out of the way to let me use the command line, which is where I am most of the time anyway.

Windows for me is just a way of supporting Windows builds of my software for my users who prefer the Microsoft way. I have a copy of Windows 10 running inside a VirtualBox virtual machine, which actually works insanely well – Windows installs very easily from the downloaded .iso file, apart from the idiotic way you need to avoid connecting to the internet during the install process to retain the ability to create a local login account. More on that later.

However, Windows Update reports that the hardware does not support Windows 11 – that is, the virtual hardware, since the physical hardware was bought in the last 12 months, and is clearly modern enough. But that support is coming. So I thought I could just wait, and things would happen in due course.

Enter Minsky ticket 1352. Minsky, for some reason, triggers an assertion failure on Windows 11, even though it works perfectly well on Windows 10. So now it has become imperative that I gain access to a working Windows 11 system. Since VirtualBox wasn't considered acceptable hardware, because of the TPM requirement, I initially tried the low-hanging fruit: an old 2017 laptop that used to belong to my son (hardware not acceptable for Windows 11), and asking my son if any of his laptops were at Windows 11 (they weren't). Then I tried VMware, who have a free (as in beer) VMware Player option, but even following the instructions to add TPM and secure boot to the VM was to no avail. Frustratingly, Windows 11 does not tell you what is blocking it from installing, making it impossible to diagnose what else is needed.

So then I checked online which of my NUCs were supposedly capable of running Windows 11. My Gen 5, 2015-era NUCs were out for the count, but a replacement NUC I bought last year (Gen 11) was listed as compatible. Out of all the computers I have here at home, only one was compatible – the newest, which is also my main development machine, and which I was reluctant to "put in the line of fire".

Having exhausted the simple options, it was time to get my hands dirty – apart from buying a cheap laptop with Windows 11 already installed on it. For me, that was a last resort, as once the bug was solved, it would go up on a shelf to gather dust until it became yet another item of our growing e-waste problem. I had already downloaded the Windows 11 install ISO, so let's bung it on a USB stick and see if we can install it on a USB hard drive, of which I have a few. I took the precaution of removing the SSD with my precious Linux development environment – Windows no longer treats Linux-formatted drives as unformatted drives like it used to, but I wasn't taking any chances.

I was half expecting it not to work on the USB drive, but instead I got "A media driver your computer needs is missing", and a file browser to go looking for driver files. So off to Intel's website to download all the drivers for the NUC I could find: drivers for graphics, drivers for wired ethernet, drivers for wifi – about five or six of them. Most were zip files, some were exe files, and one had a .cap extension. Pop them on the USB flash drive, and reboot again into the installer. Again the missing media driver message, and even though I could see the flash drive, none of the driver files were recognised – not the zip files, not the exes, nor the .cap file. I unzipped the zip files, to discover they just contained more exes. So I still have no idea what media files Windows is looking for. And in any case, the Windows 11 installation instructions mention nothing about having to install any drivers until after Windows is installed.

Yet more Googling and browsing StackOverflow. There were many, many complaints about the "media driver" error, which has been blighting Windows installs since at least Windows 7. Some people suggested that you cannot use the raw .iso file downloaded from Microsoft (as I'd done), but need to use the Windows Media Creator instead. So you need to have Windows installed in order to install Windows! OK – even though the irony of that was not lost on me – I thought, I can do that. I rebooted my Linux box, fired up my Windows 10 virtual machine, downloaded the Windows Media Creator, and clicked "generate iso image", as I wasn't sure that USB passthrough to the virtual machine would work. Half an hour later, after the Windows Media Creator had downloaded Windows 11 again, I had an ISO file, which I again copied onto the USB flash drive using dd.

Again, the same result: "A media driver your computer needs is missing". Frustrating as. Back to StackOverflow. Eventually, I came across this page. Buried in it was the comment "TL;DR: Failed with Ubuntu and dd. Smooth ride with Windows and rufus." So it seems that in the Windows world "ISOs ain't ISOs" – special magic is required to write them to USB flash drives, unlike the Linux world, where you just copy the ISO to the flash drive bit-for-bit and use it. Rather than Rufus, I used the Windows Media Creator on an old clunky laptop I had spare with Windows 10 on it, and chose the option to write directly to the flash drive. No more "media driver" error!

Now I got the message that Windows 11 will not install onto USB drives. The only option I had was to install it onto an M2 SSD, as my NUC doesn't have a SATA connector. I could have bought one (a 120GB M2 SSD is not expensive), but I'd have needed to wait 3 days for delivery. Instead, I had another dead laptop with a SATA drive in it. I swapped that drive into the Windows 10 laptop, went through the same process as above (Windows Media Creator to create an installable flash drive, this time with Windows 10), took out the M2 SSD and then installed Windows 10 on the SATA drive. Well, I had to do it twice, because I'd forgotten that you need to refuse to give wifi connection details in order to have the option of creating a local account, but at the end of that process, I'd freed up a spare M2 SSD.

Installation of Windows 11 on the SSD in the new NUC now proceeded smoothly until the step where it asks for a network connection. This time, you cannot proceed without one. What the? More Googling, and I found a way around it. It turns out that Shift+F10 is a magic escape sequence giving you a command prompt (rather like switching to a virtual console on Linux), which allows you to run taskmgr. Use it to kill the process called "Network connection flow". Magic! It drops you right into the same part of the install that Windows 10 has, which allows you to create a local account.

With that – a process that took about two days – I was able to make short work of the Windows 11 bug, which coincidentally turned out to be reproducible on Windows 10 too. But seriously, this is way harder than it has ever been to get Linux installed, even in the bad old days of Linux 0.9.x in the early '90s.

Posted in Uncategorized | 1 Comment

Gates to Hell – Yup!

Somebody else who finds Apple’s ecosystem a disaster


“Error: you did not specify -i=mi on GDB’s command line!” (SOLVED)

In a recent update to gdb, emacs gdb mode stopped working. Trawling through the digital dust, the above message appeared to be the most relevant clue. Googling didn't really turn up much – not even a StackOverflow question – which is why I'm writing this blog post.

The problem turns out to be some additional output gdb is spewing that emacs gdb mode is not expecting. In this case, typing "gdb -i=mi" on the command line gave more clues. It fingered a Python syntax warning:

zen>gdb -i=mi gui-tk/minsky
/usr/share/gdb/python/gdb/command/ SyntaxWarning: "is not" with a literal. Did you mean "!="?
if self.value is not '':

In this case, Python is complaining about comparing a variable with a literal using "is not", which tests object identity rather than value equality – two equal strings are not guaranteed to be the same object, so the comparison is unreliable.
The cure was to replace the "is not" with a "!=" in the offending file, as suggested by the warning message. Now emacs gdb mode works.

Until the next time gdb is updated. Or maybe not – hopefully this problem gets fixed upstream.


Excessive Encapsulation Syndrome

I have seen this occasionally on other code bases, where an object's attribute is declared private, and then both a getter and a setter are provided, effectively exposing the attribute as public. For example:

class Foo
{
   int bar;
public:
   int getBar() const {return bar;}
   void setBar(int x) {bar=x;}
};

I take this as a code smell that bar should have been declared public in the first place. Requiring the use of accessors means that the code is littered with foo.getBar() or foo.setBar(2), which is arguably less readable than the equivalent foo.bar, and it also means useless code has been added – and Code That Doesn't Exist Is The Code You Don't Need To Debug.

Don’t get me wrong, I’m not arguing that encapsulation is evil. If you have ever written C code, which doesn’t have encapsulation, you will realise it is an enormously valuable technique. To understand when encapsulation should be used, you need to understand the concept of an invariant. An invariant is a condition about the class that must hold at all times. Consider the example of a simple vector class, which we implement in terms of a pointer and a size attribute:

struct Vec
{
   double *data=nullptr;
   size_t size;
   Vec(size_t size=0): size(size) {data=new double[size];}
   ~Vec() {delete [] data;}
};

In this code, we have taken care of memory management issues by using RAII. However, what happens if a user wants to resize the vector by assigning a newly allocated array to the data pointer? Maybe they might use the malloc() function for the purpose, or they might simply take the address of some array on the stack:

double a[]={1.0,2.0};
Vec b;
b.data=a; b.size=2; // ~Vec will later call delete[] on a stack address

These usages will lead to insidious memory leaks, or outright crashes if you're lucky. There are a couple of implicit invariants here: one is that the data member is a heap array allocated by the new[] operator, and the second is that size is less than or equal to the size of the allocated array.
In this case, encapsulation is entirely appropriate:

class Vec
{
   double *data;
   size_t m_size;
public:
   Vec(size_t size=0): m_size(size) {data=new double[size];}
   ~Vec() {delete [] data;}
   size_t size() const {return m_size;}
   size_t size(size_t newSize) {
      if (newSize>m_size) {
        delete [] data;
        data = new double[newSize]; // note: previous contents are discarded
      }
      return m_size=newSize;
   }
};
Of course, there is another invariant here, and that is that one and only one Vec object owns each allocated heap array. That invariant is violated by the implicitly generated copy constructor and assignment operator. At a minimum, these methods should be deleted, but alternatives such as explicitly allocating a new data array and copying the contents are also possible.

What, then, if there are no invariants that need to be preserved by a particular member? Is there any other reason to use encapsulation? The only other reason I can think of is if you're writing a library (an API), and you wish to insulate users of that library against changes to the implementation. For example, in the original Foo example above, you may decide one day that bar should be a value computed from some other members, eg by multiplying two values. If bar were a public member, that change would break any code using your library. But there are two answers to this: if you are creating a library for others, you should be versioning your library, and such breaking API changes should be reason for a major version bump. On the other hand, if the class is for internal use only, then you just need to refactor your code to handle the new interface. Even for quite large code bases (eg 1 million lines), that rarely takes more than a few hours. So I would argue that unless you foresee a likelihood of an implementation change, encapsulating members for that reason is premature pessimisation. YAGNI!

Anyway, I did wonder why some code bases have private member variables with useless public setters and getters. Recently I joined a group, which shall remain nameless, that has as its coding style "all data members should be private" (with an exception for POD structures, where everything is public). Obviously, under such a coding style, useless accessor methods will proliferate, as they are the only allowed way to implement a public attribute. I don't know how widespread this belief is – that encapsulation is so good it should be used everywhere, even when it has no functional benefit – but it encouraged me to write this hopefully incendiary post. Please leave comments!


C++ Reflection for Python Binding

Recently, I have been developing a Classdesc descriptor that automatically exposes C++ classes and objects to Python. I wrote this work up as an Overload journal article, if you want to know more.


[SOLVED] osc build complains about can’t open control.tar.xz without python-lzma

This message occurs whenever one tries to do a Debian-style osc build.

This one stumped me for a while. There is no python-lzma package available in either zypper or pip.

I ended up downloading the source code for osc (which is all Python based), and grepping for the error message.

The problem occurs when attempting to import the lzma package. lzma is available in python3, but not python2. For various reasons, python2 is still required to run osc.

I eventually found what I wanted by installing backports.lzma via pip2, but this created a module “backports.lzma”, not “lzma”. In order to get import lzma to work under python2, I had to create a symbolic link:

ln -sf /usr/lib64/python2.7/site-packages/backports/lzma/ /usr/lib64/python2.7/site-packages/

Eventually, python2 will go away, and this note will be of pure historical interest only.


Tramp mode not working on emacs 26

On a recent upgrade to emacs, I discovered that remote editing of files via tramp stopped working. This bugged me greatly, as I use it a lot. After much Googling (much!), I came across a workaround using sshfs, which allows mounting a remote ssh connection as a FUSE mount point.

But the real reason why tramp stopped working is that now an extra “method” parameter is required in the tramp string, eg

^X^F /ssh:remote:file

You can abbreviate ssh to ‘-‘ if you have not changed the defaults too much.

A nice additional feature is the su and sudo methods, eg

^X^F /sudo::/etc/passwd

See What’s new in emacs 26.1 (search for “tramp”).


Making the web great again

After a number of years getting increasingly frustrated at how slow websites were on my phone, I finally bit the bullet and disabled Javascript.

I mostly use my phone to read text-heavy news websites and aggregators such as slashdot. On those, Javascript really serves only the useless function of delivering ads, which I don't read. I had only kept Javascript on because many sites do not work properly without it.

When I open up slashdot with Javascript disabled, I get the message:

It looks like your browser doesn't support Javascript or it is disabled. Please use the desktop site instead.

The link to the desktop site just takes you back to the same address, but the web browser (just the standard Android browser, as Chrome is rather heavyweight) has a "Desktop view" option. Setting this now takes one to the desktop version of the Slashdot website. And apart from having to zoom in a bit to be able to read the text, load times are, well, blisteringly fast. Pages load in seconds, rather than minutes. And no more browser crashes. Bliss!!!!

So I'm going all retro, and reading the web like it's 1996. And it is so much better!


Cross-compiling openSUSE build service projects

I recently needed to debug one of my OBS projects that was failing on the ARM architecture. Unfortunately, my only ARM computer (a Raspberry Pi) was of too old an architecture to run the OBS toolset directly – the Raspberry Pi is armv6l, and I needed armv7l.

After poking around a lot to figure out how to run OBS as a cross compiler, I eventually had success. These are my notes, which refer to doing this on an OpenSUSE Tumbleweed system on a 64 bit Intel CPU.

1. Install the emulator

zypper install qemu
zypper install build-initvm-x86-64

2. Enable binfmt support for ARM (allows running ARM executables on x86_64 CPUs)

cat >/etc/binfmt.d/arm.conf <<EOF

3. restart the binfmt service

systemctl restart systemd-binfmt

4. run osc build using:

osc build --alternative-project=openSUSE:Factory:ARM standard armv7l ecolab.spec

This will fail, because it can’t run the ARM executables.

5. copy the qemu-arm-binfmt executable into the build root

cp /usr/bin/qemu-arm-binfmt /var/tmp/build-root/standard-armv7l/usr/bin

6. rerun the osc build command above. It will complain that the build root is corrupt. Do not clean the build root, but just type continue.


How to build a Macintosh executable that will run on older versions of MacOSX.

This took me a good day of drilling into MacOSX, with a paucity of appropriate advice on the internet, which is why I’m writing this post.

-mmacosx-version-min compiler flag

The first hint is to use the -mmacosx-version-min compiler flag, which takes values like 10.9 (for Mavericks) or 10.12 (for Sierra). If you are compiling everything from self-contained source code, it suffices to add this flag to your CFLAGS variable and build away. I discovered by experimentation that Mavericks was about the earliest OSX that supports the C++11 standard library.

Checking the minimum version of an executable or library

Use otool -l on the executable or library, and then look for the LC_VERSION_MIN_MACOSX tag.

MACOSX_DEPLOYMENT_TARGET environment variable

If you don't specify the above compiler flag, then the clang compiler will examine the value of the MACOSX_DEPLOYMENT_TARGET environment variable, and use that as the target. This is useful as a way of setting the deployment target without editing a bunch of files (say, when you're compiling a bunch of 3rd party libraries).

If the environment variable is not set, then the current OSX release version is used.


The problem with MacPorts is that it overrides MACOSX_DEPLOYMENT_TARGET and sets it to your current machine's value.
After a lot of browsing of the TCL scripts that MacPorts uses, I found that you can add it as a configuration option to /opt/local/etc/macports/macports.conf:

macosx_deployment_target 10.9
buildfromsource always

The second option is required to prevent macports downloading prebuilt binary packages.

A final tip: if you have already built some packages before setting the above options, you can rebuild the ports via

port upgrade --force installed