The problem is accidentally hitting the power key when reaching for backspace, causing the machine to shut down. The answer is to add the following two lines to /etc/systemd/logind.conf:
[Login]
HandlePowerKey=ignore
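The change should be picked up without a reboot by restarting the logind service (a reboot also works, of course):
systemctl restart systemd-logind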
The following is an answer I posted on the Apple support forums, which was removed with the message:
We removed your post “No mouse pointer visible when connecting
via VNC” because it was nontechnical or off-topic. We
understand wanting to share experiences, but these forums are
meant for technical questions that can be answered by the
community.
Quite bizarre, as it was technical, on-topic and an answer to a question posted on the same forum which had not been answered.
I’m connecting to a remote Mac from a Linux system, using
TigerVNC. This has worked fine on an old Mac Mini running
Monterey. I’ve recently bought and installed a new Mac Mini
running Sonoma. No mouse! The machine is practically useless
without any form of mouse control. Fortunately, I have discovered a simple solution – adding the
option “-dotwhennocursor” to the vncviewer command line seems
to enable the mouse cursor. My theory is that the Mac assumes
there is no mouse, but VNC forces a mouse connection with that
option.
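For reference, an invocation along these lines should do the trick (the host name is a placeholder):
vncviewer -dotwhennocursor remote-mac.local:0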
Recently, my Windows code signing certificate expired, after about 2.5 years of unproblematic operation that followed a traumatic changeover from the previous code signing certificate.
Previously, the drama had to do with the fact that reputation was lost on transferring to the new certificate. Reputation, in this case, means that Windows has seen your compiled executables downloaded by many people without any complaints or detections of malware.
This time I thought – let’s use the exact same company I bought the previous certificate through; surely they can just renew the certificate and transfer the reputation with it. Not so, as it turned out. Let’s also purchase the renewal a good month out from the old certificate expiring. The whole certificate expiry thing happened at the worst possible time – right when we were about to launch a new product.
So I’m writing this blog – partly to vent my frustrations at the whole process – and partly so I have some breadcrumbs to be able to go through the process in three years’ time in a hopefully less traumatic fashion than the last two times.
Firstly – let me just say, the whole code signing thing is a protection racket. “Nice application you have there, it would be a shame for anything to happen to it!”. Only things do happen to it, even if you do pay up!
Sure, in this day and age of malicious actors, a consumer of software will want to know who created a piece of software, and a creator of said software will want to ensure that the bits delivered to the consumer are the bits created on eir development machine. It is the reason why commits to Github should be signed, and why Linux packages are signed by public key infrastructure (PKI). In the case of Github, you can use your ssh key, which makes it easy and free, and consumers (ie other developers) can check who ultimately made each commit. Linux packages generated by the OpenSUSE build service are signed by the repository key, but this is linked to the user account on OpenSUSE, which is also credentialled by a form of PKI.
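As an aside, ssh-based commit signing is a one-off bit of git configuration – something like the following sketch, where the key path is whatever your actual key is:
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true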
I first came across code signing for “mainstream” operating systems in the context of Apple’s Macintosh. It appeared in the Mountain Lion release, which introduced a feature called Gatekeeper that flagged your software as suspicious if it were not signed by a certificate issued by Apple under its developer programme. I grumbled a bit at the time about the annual fee for the developer programme, but given their compilers were given away for free, it didn’t seem too bad. I also had to upgrade my development environment from Snow Leopard, which I did to Yosemite, and set the MAC_OSX_MIN_VERSION to Lion. Gradually, though, the requirements became more strict. Apple introduced “notarization”, which meant I had to bump the minimum MacOSX version up to High Sierra. The last bump occurred late last year, when I had to rebuild my developer environment on Big Sur. With notarization, Apple scans the binary blob for known malware signatures, and so this forms part of one’s developer reputation.
In the Windows world, the whole code signing system is completely insane. As an individual, your code is treated as suspicious until enough people download the software, thus generating your reputation. And by suspicious, I mean Windows deploys a number of dark patterns to cajole the user into abandoning your software.
xxx isn't commonly downloaded. Make sure you trust
xxx before you open it.
When you move the mouse cursor over the download, you are presented with a garbage bin icon and three little dots. The correct answer is to select the three little dots, whereupon you get the longer version of the warning:
Microsoft Defender SmartScreen couldn't verify if this file is safe because it isn't commonly downloaded. Make sure you trust the file you downloaded or its source before you open it.
Alright – so I can understand the idea of reputation – so that consumers can be reminded to be careful if installing software that hasn’t been tried by many others. But Windows is really over the top – you have to click through four different warnings (outlined above). Are Windows users really that click-happy that they have to be prompted four times about whether they know what they’re doing?
OK, so code signing is a source of revenue for Apple, albeit probably a rather minuscule part compared with the 30% tax they slug App Store developers with. And you could argue that providing infrastructure at scale to scan executables (notarization) is also a cost, for which some sort of cost recovery is reasonable.
You could argue that Microsoft could also justify a tax to cover the cost of running their Windows Defender SmartScreen infrastructure. Only they don’t. Instead it is handled by a handful of certificate authority companies, such as Comodo, DigiCert and Sectigo. As an individual developer, in the past, you could purchase a certificate from one of them, or a reseller; it was delivered as a download, you might need to convert the file, install it on your developer machine and set up the development pipeline. Over time, as you manage to cajole your users to download the software, your reputation builds to the point where SmartScreen allows the software download without comment. As a company, there is also a much more expensive alternative called an EV certificate. This involves extensive checks that the company is properly registered, that a publicly listed phone number and email is responded to, and that a physical device (dongle) is posted to the publicly listed mailing address. The dongle contains the certificate, so that only a single “release manager” can sign the software. The upside to this is instant reputation – SmartScreen waves through software signed by EV certificates without a murmur. I can understand that company validation requires a substantial fee, but for individual certificates, where the only purpose is to identify who you are and reputation has to be earned, it should really be free, or close to it. After all, Let’s Encrypt manages to issue free website certificates.
So when offered a 10% discount on renewing my certificate with the same company I bought the previous certificate through, and having already suffered from loss of reputation the last time my code signing certificate was renewed, and thinking that getting the renewed certificate through the same company would prevent loss of reputation, I jumped.
To my surprise, the cost was suddenly a lot more expensive than three years previously – by about a factor of four. I paid up, but then wondered what exactly it was that I had bought. In the receipt was an amount for shipping a token – what token, I thought – that is supposed to only be for the EV “extended validation” version. Perhaps that is what I bought. I double checked the price list, and what I paid was a little less than the rack price for an EV certificate, but a little more than that of the base individual developer certificate. So, obviously, that was the 10% discount applied, and I had mistakenly clicked on the more expensive option. Oh, well, at least I won’t need to worry about loss of reputation. Nup! I had bought the individual certificate, but since late 2023, all code signing certificates require a hardware token. I also had to go through validation checks – presumably they checked my company’s entry in ASIC, our national registry for corporations – and I received a phone call and email to check contact details, as well as a zoom call so that I could show my driver’s license. So the company was verified, but I don’t get the advantage of instant reputation. Reputation is not even transferred, as I found out later once the token finally arrived (another story – the token was lost in transit in the wilds of Kentucky for a whole month) and I got it to work (outlined in the next section).
So I started reading. There are, of course, many complaints about this whole system, and how to deal with the sudden loss of reputation every three years. The best idea I came across was to order the new certificate a bit in advance of the old one expiring (OK – so maybe a month before is cutting it a bit fine, as it turns out), and then to sign executables with both the old and the new certificate to allow reputation to gradually transfer to the new certificate. I wish I’d known that sooner. But I thought, let’s try signing my last release with the new certificate as well. The old signature should be valid forever, so long as the signing was done prior to certificate expiry. Alas, that was no good, as it turned out I had been signing the executable incorrectly ever since version 3.0 of my software came out. I had omitted specifying a timestamp server, and jsign quite happily signs the software anyway without any notification that anything is wrong. Then when the code signing certificate expires, so does the signature – boom! Seriously, why couldn’t jsign at least emit an error message to say no timestamp server had been supplied, or better still supply a default server?
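For the record, the fix is a single extra option. A sketch of what the signing step should have looked like with the old file-based certificate (keystore name, alias and password are placeholders; the timestamp URL is just one of the publicly available services):
jsign --keystore mycert.p12 --storepass <password> --alias <alias> \
      --tsaurl http://timestamp.digicert.com MyApp.exe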
So having escalated the case of the lost token, it finally turned up in a Kentucky warehouse, and was on its way, a fortnight or so after my previous certificate had expired. But that was not the end of the story. I now had to figure out how to integrate the token into my development pipeline. The package arrives, there’s the physical token in the package with the name “SafeNet” on it, but nothing else – no instructions or anything. In the email I got from my reseller, after I’d requested more information, was a password, and a recommendation that I download the “SafeNet Authentication Client”, although potentially other software might work. That was it. I did a quick internet search to see if jsign would support SafeNet eTokens, and being assured that it was supported, set about trying to get the thing to work. Suffice it to say, it was a long story, taking nearly a month of investigation and trial and error. In the interest of having a neat list of things I found out, so that next time (in three years) I can consult it to hopefully narrow down what needs to be done, here are the key pieces. The first was a PKCS#11 configuration file pointing Java’s SunPKCS11 provider (which jsign uses to talk to the token) at the SafeNet library:
name = SafeNet eToken 5100
library = /usr/lib64/libeToken.so
slotIndex = 0
The path to libeToken.so can be found by listing the contents of the SafeNet Authentication Client RPM:
rpm -qlp SafenetAuthenticationClient-10.8.28-1.el8.x86_64.rpm
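For what it’s worth, here is roughly how the pieces tie together – jsign’s PKCS11 store type takes the SunPKCS11 configuration file above as its keystore and the token PIN as the store password (file names, alias and PIN are placeholders):
jsign --keystore eToken.cfg --storetype PKCS11 --storepass <token PIN> \
      --alias <certificate alias> --tsaurl http://timestamp.digicert.com MyApp.exe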
Like many of you, I have been seeing the press and blog posts about Windows 11, and thinking meh, it’s not for me, Windows 10 is fine. Of course, to be fair, I’m not a typical Windows user – my platform of choice is OpenSUSE Leap, with the fvwm window manager, which is basically insanely simple, and gets out of the way to let me use the command line, which is where I am most of the time anyway.
Windows for me is just a way of supporting Windows builds of my software for those users who prefer the Microsoft way. I have a copy of Windows 10 running inside a VirtualBox virtual machine, which actually works insanely well – Windows installs very easily from the downloaded .iso file, apart from the idiotic way you need to avoid connecting to the internet during the install process to get the ability to create a local login account. More on that later.
However, Windows Update reports that the hardware does not support Windows 11 – that is, the virtual hardware, since the physical hardware was bought in the last 12 months and is clearly modern enough. But that is presumably coming. So I just thought I could wait, and things would happen in due course.
Enter Minsky ticket 1352. Minsky, for some reason, triggers an assertion failure on Windows 11, even though it works perfectly well on Windows 10. So now it has become imperative that I gain access to a working Windows 11 system. Since VirtualBox wasn’t considered acceptable hardware, because of the TPM requirement, I initially tried the low hanging fruit: an old 2017 laptop that used to belong to my son (hardware not acceptable for Windows 11); asking my son if any of his laptops were at Windows 11 (they weren’t). Then I tried VMware, who have a free (as in beer) VMPlayer option, but even following the instructions to add TPM and secure boot to VMPlayer was to no avail. Frustratingly, Windows 11 does not tell you what is blocking it from installing, making it impossible to diagnose what else is needed.
So then I examined online which of my NUCs were supposedly capable of running Windows 11. My Gen 5, 2015 era NUCs were out for the count, but a replacement NUC I bought last year (Gen 11) was listed as compatible. Out of all the computers I have here at home, only one was compatible – the newest, which also was my main development machine, which I was reluctant to “put in the line of fire”.
It was time to get my hands dirty, having exhausted the simple options, apart from buying a cheap laptop with Windows 11 already installed on it. For me, that was a last resort, as once the bug was solved, it would go up on a shelf to gather dust until it became yet another item of our growing e-waste problem. I had already downloaded the Windows 11 install ISO, so let’s bung it on a USB stick and see if we can install it on a USB hard drive, of which I have a few. I took the precaution of removing the SSD with my precious Linux development environment. Windows doesn’t treat Linux formatted drives as unformatted drives like it used to, but I wasn’t taking any chances. I was half expecting it not to work on the USB drive, but instead I got “A media driver your computer needs is missing”, and a file browser to go looking for driver files. So off to Intel’s website to download all the drivers for the NUC I could find. Drivers for graphics, drivers for wired ethernet, drivers for wifi – about five or six of them. Most were zip files, some were exe files, and one had a .cap extension. Pop them on the USB flash drive – and reboot again into the installer. Again the missing media driver message, and even though I could see the flash drive, none of the driver files were recognised. Not the zip files, not the exes, nor the .cap file. I unzipped the zip files, to discover they just contained more exes. So I still have no idea what media files Windows is looking for. And in any case, the Windows 11 installation instructions mention nothing about having to install any drivers until Windows is installed.
Yet more Googling and browsing StackOverflow. There were many, many complaints about the “media driver” error, which has been blighting Windows installs since at least Windows 7. Some people suggested that you cannot use the raw .iso file downloaded from Microsoft (like I’d done), but you need to use Windows Media creator. So you need to have Windows installed in order to install Windows! OK – even though the irony of that was not lost on me – I thought, I can do that. I rebooted my Linux box, fired up my Windows 10 virtual machine, downloaded Windows Media Creator, and clicked “generate iso image”, as I’m not sure that virtual passthrough USB drives work. Half an hour later, after Windows Media Creator had downloaded Windows 11 again, I had an ISO file, which I again copied onto the USB flash drive using dd.
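For the record, writing the ISO from Linux was nothing more exotic than dd (the device name is a placeholder – double check it before running this):
sudo dd if=Win11.iso of=/dev/sdX bs=4M status=progress conv=fsync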
Again – same result: “A media driver your computer needs is missing”. Frustrating as. Back to StackOverflow. Eventually, I came across this page. Buried in that page was the comment “TL;DR: Failed with Ubuntu and dd. Smooth ride with Windows and rufus.” So it seems in the Windows world “ISOs ain’t ISOs” – special magic is required to write them to USB flash drives, unlike the Linux world, where you just write the ISO to the flash drive by copying the bits and use it. Rather than Rufus, I used Windows Media Creator on an old clunky laptop I had spare with Windows 10 on it, and chose the option to write directly to the flash drive. No more “media driver” error!
Now I got the message that Windows 11 will not install onto USB drives. The only option I had would be to install it onto an M2 SSD, as my NUC doesn’t have a SATA connector. I could have bought one (a 120GB M2 SSD is not expensive), but I’d need to wait three days for delivery. Instead, I had another dead laptop with a SATA drive in it. I swapped that drive into the Windows 10 laptop, went through the same process as above (Windows Media Creator to create an installable flash drive with Windows 10), took out the M2 SSD and then installed Windows 10 on the SATA drive. Well, I had to do it twice, because I’d forgotten that you need to refuse to give wifi connection details in order to have the option of creating a local account, but at the end of that process, I’d freed up a spare M2 SSD.
Installation of Windows 11 on the SSD in the new NUC now proceeded smoothly until the step where it asks for a network connection. This time, you cannot proceed without a network connection. What the? More googling, and I found a way of doing it. It turns out that Shift-F10 is a magic escape sequence giving you a command prompt (rather like Linux’s Alt-Shift-F1), which allows you to run taskmgr. Use it to kill the process called “Network connection flow”. Magic! It drops you right into the same part of the install as on Windows 10, which allows you to create a local account.
With that, which took about two days, I was able to make short work of the Windows 11 bug, which coincidentally turned out to be reproducible on Windows 10 too. But seriously, this is way harder than it has ever been to get Linux installed, even in the bad old days of Linux 0.9.x in the early ’90s.
In a recent update to gdb, emacs gdb mode stopped working. Trawling through the digital dust, the above message appeared to be the most relevant. Googling didn’t really turn up much, not even a StackOverflow question, which is why I’m writing this blog post.
The problem turns out to be some additional stuff gdb is spewing out that the emacs gdb mode is not expecting. In this case, typing “gdb -i=mi” on the command line gave more clues. It was fingering a Python syntax warning:
zen>gdb -i=mi gui-tk/minsky
/usr/share/gdb/python/gdb/command/prompt.py:48: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if self.value is not '':
In this case, Python is complaining about using the identity operator “is not” to compare a variable with a literal. Whether two equal string values are the same object is an implementation detail, so the comparison should be done by value instead.
The cure was to replace the “is not” with a “!=” as suggested by the warning message. Now emacs gdb mode works.
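ie the offending line in prompt.py simply becomes
if self.value != '':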
Until the next time gdb is updated. Or maybe not – hopefully this problem gets fixed upstream.
I have seen this occasionally in other code bases, where an object’s attribute is declared private, and then both a getter and a setter are provided, effectively exposing the attribute as public. For example:
class Foo
{
  int bar;
public:
  int getBar() const {return bar;}
  void setBar(int x) {bar=x;}
};
I take this as a code smell that bar should have been declared public in the first place. Requiring the use of accessors means that the code is littered with foo.getBar() or foo.setBar(2), which is arguably less readable than the foo.bar equivalent, and it also means there is now useless code that has been added, and Code That Doesn’t Exist Is The Code You Don’t Need To Debug.
Don’t get me wrong, I’m not arguing that encapsulation is evil. If you have ever written C code, which doesn’t have encapsulation, you will realise it is an enormously valuable technique. To understand when encapsulation should be used, you need to understand the concept of an invariant. An invariant is a condition about the class that must hold at all times. Consider the example of a simple vector class, which we implement in terms of a pointer and a size attribute:
struct Vec
{
  double *data=nullptr;
  size_t size;
  Vec(size_t size=0): size(size) {data=new double[size];}
  ~Vec() {delete [] data;}
};
In this code, we have taken care of memory management issues by using RAII. However, what happens if a user wants to resize the vector by assigning a newly allocated array to the data pointer? Maybe they might use the malloc() function for the purpose, or they might simply take the address of some object on the stack:
double a[]={1.0,2.0};
Vec b;
b.data=a;
These usages will lead to insidious memory leaks, or outright crashes if you’re lucky. There are a couple of implicit invariants here: one is that the data member is a heap array allocated by the new[] operator, and the second is that size is less than or equal to the size of the allocated array.
In this case, encapsulation is entirely appropriate:
class Vec
{
  double *data;
  size_t m_size;
public:
  Vec(size_t size=0): m_size(size) {data=new double[size];}
  ~Vec() {delete [] data;}
  size_t size() const {return m_size;}
  size_t size(size_t newSize) {
    if (newSize>m_size)
      {
        delete [] data;
        data=new double[newSize];
      }
    m_size=newSize;
    return m_size;
  }
};
Of course, there is another invariant here, and that is that one and only one Vec object exists for each allocated heap array. That invariant is violated by the implicit copy constructor and assignment operator. At a minimum, these methods should be deleted, but alternatives such as explicitly allocating a new data array and copying the contents are also possible.
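In current C++, deleting them is a two-line addition to the class:
Vec(const Vec&)=delete;
Vec& operator=(const Vec&)=delete;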
What, then, if there were no invariants that need to be preserved by a particular member? Is there any other reason to use encapsulation? The only other reason I can think of is if you’re writing a library (an API), and you wish to insulate users of that library against changes to the implementation. For example, in the original Foo example above, you may decide one day that bar should be a computed value from some other items, eg multiplying two values. Then your change will break any code that links against your library. But there are two answers to this: if you are creating a library for others, you should be versioning your library, and such breaking API changes should be reason for a major version bump. On the other hand, if the class is for internal use only, then you just need to refactor your code to handle the new interface. For even quite large code bases (eg 1 million lines), that rarely takes more than a few hours. So I would argue that unless you foresee a likelihood for an implementation change, encapsulating members for that reason is premature pessimisation. YAGNI!
Anyway, I did wonder why in some code bases there are private member variables with useless public setters and getters. Recently I joined a group, which shall remain nameless, that has as its coding style “all data members should be private” (with an exception for POD structures, where everything is public). Obviously, under such a coding style, useless accessor methods will proliferate, as they are the only allowed way to implement a public attribute. I don’t know how widespread this belief is – that encapsulation is so good it should be used everywhere, even when it has no functional benefit – but it encouraged me to write this hopefully incendiary post. Please leave comments!
Recently, I have been developing a Classdesc descriptor that automatically exposes C++ classes and objects into Python. I wrote this work up as an Overload journal article if you want to know more.
This message occurs whenever trying to do a Debian-style osc build.
This one stumped me for a while. There is no python-lzma package available either in zypper or pip.
I ended up downloading the source code for osc (which is all python based), and grepping for the error code.
The problem occurs when attempting to import the lzma package. lzma is available in python3, but not python2. For various reasons, python2 is still required to run osc.
I eventually found what I wanted by installing backports.lzma via pip2, but this created a module “backports.lzma”, not “lzma”. In order to get import lzma to work under python2, I had to create a symbolic link:
ln -sf /usr/lib64/python2.7/site-packages/backports/lzma/ /usr/lib64/python2.7/site-packages/
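A quick sanity check that the import now resolves under python2:
python2 -c "import lzma"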
Eventually, python2 will go away, and this note will be of pure historical interest only.
On a recent upgrade to emacs, I discovered that remote editing of files via tramp stopped working. This bugged me a lot, as I use it constantly. After much Googling (much), I came across a workaround using sshfs, which allows mounting a remote ssh connection as a FUSE mount point.
But the real reason why tramp stopped working is that now an extra “method” parameter is required in the tramp string, eg
^X^F /ssh:remote:file
You can abbreviate ssh to ‘-‘ if you have not changed the defaults too much.
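For example, the above becomes:
^X^F /-:remote:file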
A nice additional feature is the su and sudo methods, eg
^X^F /sudo::/etc/passwd
See What’s new in emacs 26.1 (search for “tramp”).