“Error: you did not specify -i=mi on GDB’s command line!” (SOLVED)

In a recent update to gdb, emacs gdb mode stopped working. Trawling through the digital dust, the above message appeared to be the most relevant. Googling didn’t really turn up much, not even a StackOverflow question, which is why I’m writing this blog post.

The problem turns out to be some additional output gdb is spewing that the emacs gdb mode is not expecting. In this case, typing “gdb -i=mi” on the command line gave more clues. It pointed the finger at a Python syntax warning:

zen>gdb -i=mi gui-tk/minsky
/usr/share/gdb/python/gdb/command/prompt.py:48: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if self.value is not '':

In this case, Python is complaining about using the identity operator “is not” to compare a variable with a string literal. Identity comparisons against literals are unreliable, because there is no guarantee that two equal strings are the same object, which is why Python now warns about it.
The cure was to replace the “is not” with a “!=”, as suggested by the warning message. Now emacs gdb mode works.

At least until the next time gdb is updated. Or maybe not: hopefully this problem gets fixed upstream.


Excessive Encapsulation Syndrome

I have seen this occasionally on other code bases, where an object’s attribute is declared private, and then both a getter and a setter are provided, effectively exposing the attribute as public. For example:

class Foo
{
   int bar;
public:
   int getBar() const {return bar;}
   void setBar(int x) {bar=x;}
};

I take this as a code smell that bar should have been declared public in the first place. Requiring the use of accessors means that the code is littered with foo.getBar() or foo.setBar(2), which is arguably less readable than the foo.bar equivalent, and it also means useless code has been added, and Code That Doesn’t Exist Is The Code You Don’t Need To Debug. The accessor-free version is sketched below.
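
A minimal sketch of that accessor-free equivalent: with no invariant to protect, the member can simply be public.

struct Foo
{
   int bar=0;   // no invariant to maintain, so expose the member directly
};

int main()
{
   Foo foo;
   foo.bar=2;        // cf foo.setBar(2)
   return foo.bar;   // cf foo.getBar()
}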

Don’t get me wrong, I’m not arguing that encapsulation is evil. If you have ever written C code, which doesn’t have encapsulation, you will realise encapsulation is an enormously valuable technique. To understand when encapsulation should be used, you need to understand the concept of an invariant. An invariant is a condition about the class that must hold at all times. Consider the example of a simple vector class, which we implement in terms of a pointer and a size attribute:

struct Vec
{
   double *data=nullptr;
   size_t size;
   Vec(size_t size=0): size(size) {data=new double[size];}
   ~Vec() {delete [] data;}
};

In this code, we have taken care of memory management issues by using RAII. However, what happens if a user wants to resize the vector by assigning a newly allocated array to the data pointer? They might use the malloc() function for the purpose, or they might simply point it at some object on the stack:

double a[]={1.0,2.0};
Vec b;
b.data=a;

These usages will lead to insidious memory leaks, or to outright crashes if you’re lucky. There are a couple of implicit invariants here: one is that the data member is a heap array allocated by the new[] operator, and the second is that size is less than or equal to the size of that allocation.
In this case, encapsulation is entirely appropriate:

class Vec
{
   double *data;
   size_t m_size;
public:
   Vec(size_t size=0): m_size(size) {data=new double[size];}
   ~Vec() {delete [] data;}
   size_t size() const {return m_size;}
   void size(size_t newSize) {
      if (newSize>m_size) {
        delete [] data;
        data = new double[newSize];
      }
      m_size=newSize;
   }
};

Of course, there is another invariant here, and that is that one and only one Vec object exists for each allocated heap array. That invariant is violated by the implicit copy constructor and assignment operator. At a minimum, these methods should be deleted, but an alternative such as explicitly allocating a new data array and copying the contents is also possible. Both options are sketched below.
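
A minimal sketch of those two options, reusing the Vec class above (the resize method is omitted for brevity; std::copy comes from <algorithm>):

#include <algorithm>
#include <cstddef>

class Vec
{
   double *data;
   size_t m_size;
public:
   Vec(size_t size=0): data(new double[size]), m_size(size) {}
   ~Vec() {delete [] data;}
   // Option 1: delete the copy operations, so the invariant cannot be broken.
   //Vec(const Vec&)=delete;
   //Vec& operator=(const Vec&)=delete;
   // Option 2: deep copy, so each Vec owns exactly one heap allocation.
   Vec(const Vec& x): data(new double[x.m_size]), m_size(x.m_size)
   {std::copy(x.data,x.data+m_size,data);}
   Vec& operator=(const Vec& x) {
      if (this!=&x)
      {
         double* newData=new double[x.m_size];
         std::copy(x.data,x.data+x.m_size,newData);
         delete [] data;
         data=newData;
         m_size=x.m_size;
      }
      return *this;
   }
   size_t size() const {return m_size;}
};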

What, then, if there are no invariants that need to be preserved for a particular member? Is there any other reason to use encapsulation? The only other reason I can think of is if you’re writing a library (an API), and you wish to insulate users of that library against changes to the implementation. For example, in the original Foo example above, you may decide one day that bar should be a value computed from some other items, eg by multiplying two of them. If bar were a public data member, that change would break any code written against your library, whereas a getter hides it (see the sketch below). But there are two answers to this: if you are creating a library for others, you should be versioning your library, and such breaking API changes should be reason for a major version bump. On the other hand, if the class is for internal use only, then you just need to refactor your code to handle the new interface. Even for quite large code bases (eg 1 million lines), that rarely takes more than a few hours. So I would argue that unless you foresee a likelihood of an implementation change, encapsulating members for that reason is premature pessimisation. YAGNI!
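
A hypothetical sketch of that scenario (width and height are invented names standing in for the “some other items”): because clients only ever call getBar(), it can quietly become a computed value without any caller changing.

class Foo
{
   int width=0;     // hypothetical members from which bar is now derived
   int height=0;
public:
   int getBar() const {return width*height;}   // callers of getBar() are unaffected
   void setWidth(int x) {width=x;}
   void setHeight(int x) {height=x;}
};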

Anyway, I did wonder why in some code bases there are private member variables with useless public setters and getters. Recently I joined a group, which shall remain nameless, that has as its coding style “all data members should be private” (with an exception for POD structures, where everything is public). Obviously, under such a coding style, useless accessor methods proliferate, as they are the only allowed way to implement a public attribute. I don’t know how widespread the belief is that encapsulation is so good it should be used everywhere, even when it has no functional benefit, but it encouraged me to write this hopefully incendiary post. Please leave comments!


C++ Reflection for Python Binding

Recently, I have been developing a Classdesc descriptor that automatically exposes C++ classes and objects to Python. I wrote this work up as an Overload journal article, if you want to know more.


[SOLVED] osc build complains about can’t open control.tar.xz without python-lzma

This message occurs whenever one tries to do a Debian-style osc build.

This one stumped me for a while. There is no python-lzma package available either in zypper or pip.

I ended up downloading the source code for osc (which is all Python based), and grepping for the error message.

The problem occurs when attempting to import the lzma package. lzma is available in python3, but not python2. For various reasons, python2 is still required to run osc.

I eventually found what I wanted by installing backports.lzma via pip2, but this created a module “backports.lzma”, not “lzma”. In order to get import lzma to work under python2, I had to create a symbolic link:

ln -sf /usr/lib64/python2.7/site-packages/backports/lzma/ /usr/lib64/python2.7/site-packages/

Eventually, python2 will go away, and this note will be of pure historical interest only.


Tramp mode not working on emacs 26

On a recent upgrade to emacs, I discovered that remote editing of files via tramp stopped working. This bugged me no end, as I use it a lot. After much Googling (much), I came across a workaround using sshfs, which allows mounting a remote ssh connection as a fuse mount point.

But the real reason why tramp stopped working is that now an extra “method” parameter is required in the tramp string, eg

^X^F /ssh:remote:file

You can abbreviate ssh to ‘-‘ if you have not changed the defaults too much.

A nice additional feature is the su and sudo methods, eg

^X^F /sudo::/etc/passwd

See What’s new in emacs 26.1 (search for “tramp”).


Making the web great again

After a number of years getting increasingly frustrated at how slow websites were on my phone, I finally bit the bullet and disabled Javascript.

I mostly use my phone to read text-heavy news websites and aggregators such as Slashdot. Javascript really only serves the useless function of delivering ads, which I don’t read. I had really only kept Javascript on because many sites do not work properly without it.

When I open up slashdot with Javascript disabled, I get the message:

It looks like your browser doesn't support Javascript or it is disabled. Please use the desktop site instead.

The link to the desktop site just takes you back to the same address, but in the web browser (I use just the standard Android browser, as Chrome is rather heavyweight) there is the option “Desktop view”. Setting this now takes one to the desktop version of the Slashdot website. And apart from having to zoom in a bit to be able to read the text, load times are, well, blisteringly fast. Pages load in seconds, rather than minutes. And no more browser crashes. Bliss!!!!

So I’m going all retro, and reading the web like it’s 1996. And it is so much better!


Cross-compiling openSUSE build service projects

I recently needed to debug one of my OBS projects that was failing on the ARM architecture. Unfortunately, my only ARM computer (a Raspberry Pi) was of too old an architecture to run the OBS toolset directly – the Raspberry Pi is armv6l, and I needed armv7l.

After poking around a lot to figure out how to run OBS as a cross compiler, I eventually had success. These are my notes, which refer to doing this on an OpenSUSE Tumbleweed system on a 64 bit Intel CPU.

1. Install the emulator

zypper install qemu
zypper install build-initvm-x86-64

2. Enable binfmt support for ARM (this allows running ARM executables on x86_64 CPUs)


cat >/etc/binfmt.d/arm.conf <<EOF
:arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-binfmt:P
:armeb:M::\x7fELF\x01\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/bin/qemu-armeb-binfmt:P
EOF

3. Restart the binfmt service

systemctl restart systemd-binfmt

4. Run osc build using:


osc build --alternative-project=openSUSE:Factory:ARM standard armv7l ecolab.spec

This will fail, because it can’t run the ARM executables.

5. Copy the qemu-arm-binfmt executable into the build root


cp /usr/bin/qemu-arm-binfmt /var/tmp/build-root/standard-armv7l/usr/bin

6. Rerun the osc build command above. It will complain that the build root is corrupt. Do not clean the build root, just type “continue”.


How to build a Macintosh executable that will run on older versions of MacOSX.

This took me a good day of drilling into MacOSX, with a paucity of appropriate advice on the internet, which is why I’m writing this post.

-mmacosx-version-min compiler flag

The first hint is to use the -mmacosx-version-min compiler flag, which takes values like 10.9 (for Mavericks) or 10.12 (for Sierra). If you are compiling everything from self-contained source code, it suffices to add this compiler flag to your CFLAGS variable, and build away. I discovered by experimentation that Mavericks was about the earliest OSX that supported the C++11 standard library.

Checking the minimum version of an executable or library

Run otool -l on the executable or library, and look for the LC_VERSION_MIN_MACOSX load command.

MACOSX_DEPLOYMENT_TARGET environment variable

If you don’t specify the above compiler flag, then the clang compiler will examine the value of the MACOSX_DEPLOYMENT_TARGET environment variable, and use that as the deployment target. This is useful as a way of setting the deployment target without editing a bunch of files (say when you’re compiling a bunch of 3rd party libraries).

If the environment variable is not set, then the current OSX release version is used.

MacPorts

The problem with MacPorts is that it overrides the MACOSX_DEPLOYMENT_TARGET and sets it to your current machine’s value.
After a lot of browsing of the TCL scripts that MacPorts uses, I found that you can add it as a configuration option in /opt/local/etc/macports/macports.conf:

macosx_deployment_target 10.9
buildfromsource always

The second option is required to prevent MacPorts from downloading prebuilt binary packages.

A final tip: if you have already built some packages before setting the above options, you can rebuild those ports via

port upgrade --force installed

Saying goodbye to Aegis

After nearly 16 years, I am now saying a long farewell to the Aegis source code management system (http://aegis.sf.net). Aegis was, in its day, years ahead of its time. But now, with Aegis’s author dead, and only a handful of stalwarts promoting and maintaining it, it is time to look for a replacement. After more than 18 months of using git and github in anger, I think I finally have an SCM that is up to the job. On the plus side, Github’s enormous developer community and fork/pull request model mean that people are more likely to contribute. Whilst Aegis has something similar, the reality is that very few people will bother to download and install Aegis, so you’re left implementing clunky workflows combining multiple SCMs. More than once, the heterogeneous repositories led to code regressions.

The biggest hurdle was how to handle continuous integration, a feature Aegis had from its inception. After a considerable learning curve, I found a solution in terms of TravisCI, which integrates quite nicely with Github. Then I needed something to replace the versioning workflow I had with Aegis. After studying Gitflow, I realised it was pretty close to what I was doing with Aegis, so I have implemented a versioning workflow using a script “makeRelease.sh” that uses the git tag feature to add version numbers, and added a dist target to the Makefile to create clean tarballs of a particular version.

I’m changing things slightly, though. Whereas Aegis branch numbers bear no relation to delta numbers (branch ecolab.5.32 is actually incremental work on top of ecolab release 5.D29), with my new workflow branch and delta numbers will be identical: release 5.32.1 will be an incremental beta release on ecolab.5.32. Also, to indicate that the new system is in place, Aegis’s delta numbering (the D in the final place) is gone, and versions will be purely numeric.

You can check out the new stuff in the github repositories, https://github.com/highperformancecoder/minsky and https://github.com/highperformancecoder/ecolab.


Living la vida Hackintosh

Like many, I make a living from open source software development; I develop on Linux, but then build on Windows and Macintosh. I do have a Mac, a rather cute Mac mini, which is a cost-effective way of owning the platform; however, it does have a couple of disadvantages:

  1. I need to test my software on a minimally installed user machine, not my developer machine, to ensure I have bundled all the necessary dynamic libraries required for my software to run.
  2. I need to build a 32-bit version of the software for maximum compatibility, whereas my Mac mini is 64-bit
  3. I’d like to have my Macintosh environment with me when travelling, without having to throw the Mac mini in my suitcase, along with monitor, keyboard, etc.

Yes, I know, I could buy a Mac laptop, but I don’t particularly like MacOS for my development environment, so it would still be an extra piece of hardware to throw into the suitcase.

The answer to all of these questions is to load MacOSX onto a virtual machine, such as Oracle’s Virtual Box, available as freeware. Initially, I loaded the MacOSX Snow Leopard distribution provided with my Mac Mini into Virtual Box. This worked on some versions of Virtual Box, but not others, so I was constantly having to ignore the pleading to upgrade Virtual Box. Then I discovered I could run the Vbox image on my main Linux computer, provided I didn’t need to boot it, as MacOSX checks that it is running on genuine hardware at boot time only. This was a great liberation – I could now do the Macintosh portion of my work from the comfort of my Linux workstation.

Then, unfortunately, upgrades happened – both the Mac Mini to Yosemite, and my Linux machine to OpenSUSE 13. With the upgrades, Virtual Box also needed to be upgraded, with the result that the VMs would only run on the Mac Mini. Unhappy day.

But now I have discovered the iBoot tool from Tony Mac (http://www.tonymacx86.com). This great tool allows one to install a “Hackintosh”: the Macintosh operating system running on a virtual machine, or on other non-Apple hardware – exactly what I need. Whilst Apple seems to take a dim view of people running its software on virtual machines, that is precisely what I need to do, and the alternatives don’t cut the mustard.

To get iBoot to work took a little bit of getting used to. The most important points were:

  1. Ensure EFI boot is disabled. Virtual Box will enable it by default if you tell it you’re loading MacOSX.
  2. Other settings to be selected are PAE/NX, VT-x/AMD-V and Nested Paging
  3. Under display, select 3D acceleration, and about 20MB of video memory
  4. Make sure the SATA type is AHCI
  5. The other item that really tripped me up was getting the correct version of iBoot. Initially, I downloaded iBoot-3.3.0, which did not work. What I had to do was consult my processor information in /proc/cpuinfo, which told me:
    Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
    

    Then I looked up Intel chips on Wikipedia, and found that chip’s model number on the “Haswell” page. So I needed to download iBoot-Haswell-1.0.1, which did the trick.

Put the iBoot.iso file into your virtual DVD drive, and boot up your virtual machine. If you already have MacOSX installed in your VM, you can use the right arrow key to select it and boot it. Since that is the situation I found myself in, that is what I did. However, if you don’t, you just replace the iBoot.iso image with a MacOSX install disk, and boot that instead.

That’s it. I’m now in the process of cloning one of my VMs and upgrading it to Yosemite! Wish me luck.
