[SOLVED] osc build complains about can’t open control.tar.xz without python-lzma

This message occurs whenever one tries to do a Debian-style osc build.

This one stumped me for a while. There is no python-lzma package available either in zypper or pip.

I ended up downloading the source code for osc (which is all Python based), and grepping for the error message.

The problem occurs when attempting to import the lzma module. lzma is available in python3, but not python2. For various reasons, python2 is still required to run osc.

I eventually found what I wanted by installing backports.lzma via pip2, but this provides the module "backports.lzma", not "lzma". In order to get "import lzma" to work under python2, I had to create a symbolic link:

ln -sf /usr/lib64/python2.7/site-packages/backports/lzma/ /usr/lib64/python2.7/site-packages/
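
Alternatively, if the importing code is under your own control, a guarded import avoids the symlink altogether. A minimal sketch, assuming backports.lzma has been installed via pip2 for the Python 2 case:

```python
try:
    import lzma  # Python 3: part of the standard library
except ImportError:
    # Python 2: fall back to the backport (pip2 install backports.lzma)
    from backports import lzma

# round-trip a payload to confirm the module actually works
payload = b"hello world"
assert lzma.decompress(lzma.compress(payload)) == payload
```

Of course, patching another project's sources this way is more invasive than the symlink, which is why the link is the easier fix for osc.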

Eventually, python2 will go away, and this note will be of pure historical interest only.

Posted in Uncategorized | Leave a comment

Tramp mode not working on emacs 26

On a recent upgrade of emacs, I discovered that remote editing of files via tramp had stopped working. This bugged me a lot, as I use the feature constantly. After much Googling (much), I came across a workaround using sshfs, which allows mounting a remote ssh connection as a FUSE mount point.

But the real reason why tramp stopped working is that an extra "method" parameter is now required in the tramp string, e.g.

^X^F /ssh:remote:file

You can abbreviate ssh to '-' if you have not changed the defaults too much, e.g. ^X^F /-:remote:file

A nice additional feature is the su and sudo methods, e.g.

^X^F /sudo::/etc/passwd

See What’s new in emacs 26.1 (search for “tramp”).


Making the web great again

After a number of years getting increasingly frustrated at how slow websites were on my phone, I finally bit the bullet and disabled Javascript.

I mostly use my phone to read news websites/aggregators such as Slashdot, which are text heavy. On those sites, Javascript really only serves the useless function of delivering ads, which I don't read. I only kept Javascript on because many sites do not work properly without it.

When I open up slashdot with Javascript disabled, I get the message:

It looks like your browser doesn't support Javascript or it is disabled. Please use the desktop site instead.

The link to the desktop site just takes you back to the same address, but in the web browser (the standard Android browser, as Chrome is rather heavyweight), there is the option "Desktop view". Setting this now takes one to the desktop version of the Slashdot website. And apart from having to zoom in a bit to be able to read the text, load times are, well, blisteringly fast. Pages load in seconds, rather than minutes. And no more browser crashes. Bliss!!!!

So I'm going all retro, and reading the web like it's 1996. And it is so much better!


Cross-compiling openSUSE build service projects

I recently needed to debug one of my OBS projects that was failing on the ARM architecture. Unfortunately, my only ARM computer (a Raspberry Pi) was of a too-old architecture to run the OBS toolset directly – the Raspberry was armv6l, and I needed armv7l.

After poking around a lot to figure out how to run OBS as a cross compiler, I eventually had success. These are my notes, which refer to doing this on an openSUSE Tumbleweed system on a 64-bit Intel CPU.

1. Install the emulator

zypper install qemu
zypper install build-initvm-x86-64

2. Enable binfmt support for ARM (allows the running of ARM executables on x86_64 CPUs)

cat >/etc/binfmt.d/arm.conf <<EOF
:arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-binfmt:
EOF

3. Restart the binfmt service

systemctl restart systemd-binfmt

4. Run osc build using:

osc build --alternative-project=openSUSE:Factory:ARM standard armv7l ecolab.spec

This will fail, because it can’t run the ARM executables.

5. Copy the qemu-arm-binfmt executable into the build root

cp /usr/bin/qemu-arm-binfmt /var/tmp/build-root/standard-armv7l/usr/bin

6. Rerun the osc build command above. It will complain that the build root is corrupt. Do not clean the build root; just type continue.


How to build a Macintosh executable that will run on older versions of MacOSX.

This took me a good day of drilling into MacOSX, with a paucity of appropriate advice on the internet, which is why I’m writing this post.

-mmacosx-version-min compiler flag

The first hint is to use the -mmacosx-version-min compiler flag, which takes values like 10.9 (for Mavericks) or 10.12 (for Sierra). If you are compiling everything from self-contained source code, it suffices to add this compiler flag to your CFLAGS variable, and build away. I discovered by experimentation that Mavericks was about the earliest OSX that supported the C++11 standard library.

Checking the minimum version of an executable or library

Use otool -l on the executable or library, and look for the load command tagged LC_VERSION_MIN_MACOSX.

MACOSX_DEPLOYMENT_TARGET environment variable

If you don't specify the above compiler flag, then the clang compiler will examine the value of the MACOSX_DEPLOYMENT_TARGET environment variable, and use that as the deployment target. This is useful as a way of setting the target without editing a bunch of files (say you're compiling a bunch of 3rd party libraries).

If the environment variable is not set, then the current OSX release version is used.


The problem with MacPorts is that it overrides MACOSX_DEPLOYMENT_TARGET and sets it to your current machine's value. After a lot of browsing of the TCL scripts that MacPorts uses, I found that you can set it as a configuration option in /opt/local/etc/macports/macports.conf:

macosx_deployment_target 10.9
buildfromsource always

The second option is required to prevent MacPorts from downloading prebuilt binary packages.

A final tip: if you have already built some packages before setting the above options, you can rebuild the ports via

port upgrade --force installed

Saying goodbye to Aegis

After nearly 16 years, I am now saying a long farewell to the Aegis source code management system (http://aegis.sf.net). Aegis was, in its day, years ahead of its time. But now, with Aegis's author dead, and only a handful of stalwarts promoting and maintaining Aegis, it is time to look for a replacement. After now more than 18 months of using git and github in anger, I think I finally have an SCM that is up to the job. On the plus side, Github's enormous developer community, and fork/pull request model, means that people are more likely to contribute. Whilst Aegis has something similar, the reality is very few people will bother to download and install Aegis, so you're left implementing clunky workflows combining multiple SCMs. More than once, the heterogeneous repositories led to code regressions.

The biggest hurdle was how to handle continuous integration, a feature Aegis had from its inception. After a considerable learning curve, I found a solution in TravisCI, which integrates quite nicely with Github. Then I needed something to replace the versioning workflow I had with Aegis. After studying Gitflow, I realised it was pretty close to what I was doing with Aegis, so I have implemented a versioning workflow using a script "makeRelease.sh" that uses the git tag feature to add version numbers, and added a dist target to the Makefile to create clean tarballs of a particular version.

I'm changing things slightly, though. Under Aegis, branch numbers bore no relation to delta numbers, so branch ecolab.5.32 is actually incremental work on top of ecolab release 5.D29. With my new workflow, branches and deltas will be identical: release 5.32.1 will be an incremental beta release on ecolab.5.32. Also, to indicate that the new system is in place, Aegis's delta numbering (D in the final place) is gone, and versions will be purely numeric.

You can check out the new stuff in the github repositories, https://github.com/highperformancecoder/minsky and https://github.com/highperformancecoder/ecolab.


Living la vida Hackintosh

Like many, I make a living from open source software development, which I do on Linux, but then build on Windows and Macintosh. I do have a Mac, a rather cute Mac mini, which is a cost-effective way of owning the platform; however it does have a few disadvantages:

  1. I need to test my software on a minimally installed user machine, not my developer machine, to ensure I have bundled all the necessary dynamic libraries required for my software to run.
  2. I need to build a 32-bit version of the software for maximum compatibility, whereas my Mac mini is 64-bit.
  3. I'd like to have my Macintosh environment with me when travelling, without having to throw the Mac mini in my suitcase, along with monitor, keyboard etc.

Yes, I know, I could buy a Mac laptop, but I don’t particularly like MacOS for my development environment, so it would still be an extra piece of hardware to throw into the suitcase.

The answer to all of these questions is to load MacOSX onto a virtual machine, such as Oracle's Virtual Box, available as freeware. Initially, I loaded the MacOSX Snow Leopard distribution provided with my Mac mini into Virtual Box. This worked on some versions of Virtual Box, but not others, so I was constantly having to ignore the pleading to upgrade Virtual Box. Then I discovered I could run the VBox image on my main Linux computer, provided I didn't need to boot it, as MacOSX checks that it is running on genuine hardware at boot time only. This was a great liberation – I could now do the Macintosh portion of my work from the comfort of my Linux workstation.

Then, unfortunately, upgrades happened – both the Mac mini to Yosemite, and my Linux machine to openSUSE 13. With the upgrades, Virtual Box also needed to be upgraded, with the result that the VMs would only run on the Mac mini. Unhappy day.

But now I have discovered the iBoot tool from Tony Mac http://www.tonymacx86.com. This great tool allows one to install a "Hackintosh": the Macintosh operating system running on a virtual machine anywhere – exactly what I need. Whilst Apple seems to take a dim view of people running their software on virtual machines, that is exactly what I need to do, and all the other alternatives don't cut the mustard.

To get iBoot to work took a little bit of getting used to. The most important points were:

  1. Ensure EFI boot is disabled. Virtual Box will enable it by default if you tell it you’re loading MacOSX.
  2. Other settings to be selected are PAE/NX, VT-x/AMD-V and Nested Paging
  3. Under display, select 3D acceleration, and about 20MB of video memory
  4. Make sure the SATA type is AHCI
  5. The other item that really tripped me up was getting the correct version of iBoot. Initially, I downloaded iBoot-3.3.0, which did not work. What I had to do was consult my processor information in /proc/cpuinfo, which told me:
    Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

    Then I looked up Intel chips on Wikipedia, and found that chip’s model number on the “Haswell” page. So I needed to download iBoot-Haswell-1.0.1, which did the trick.

Put the iBoot.iso file into your virtual DVD drive, and boot up your virtual machine. If you already have MacOSX installed in your VM, you can use the right arrow key to select it and boot it. Since that is the situation I found myself in, that is what I did. However, if you don't, you just replace the iBoot.iso image with a MacOSX install disk, and boot that instead.

That's it. I'm now in the process of cloning one of my VMs and upgrading it to Yosemite! Wish me luck.


Regression test coverage analysis in TCL

If you're like me, you like having lots of regression tests to keep you covered from making stupid mistakes when hacking up some complicated piece of code. Whilst code coverage tools exist for the major development environments, one major blind spot is how to do coverage analysis of TCL, which becomes a problem when your application (eg Minsky) starts to sport a significant amount of application logic in TCL.

A quick Google indicated that you could buy into the Active TCL way of doing things (not so useful for me), or use an application called Nagelfar. Unfortunately, Nagelfar really assumes you are coding in a standard environment, such as wish or tclsh, not in an application scripting environment such as Minsky or EcoLab. Then came the realisation that I could do it fairly simply in TCL itself. Well, I did have a few false turns, which took their time, but I found I could attach a command to fire on every step executed in TCL. Then I peek into the immediately enclosing stack frame to look at details such as which line I'm executing, and save these to a database. Since I'm doing this in the EcoLab environment, I make use of the cachedDBM class to accumulate execution counts as they're found. Finally, I wrote a C++ program that reads in a TCL file, identifies which proc I'm in, checks whether an entry for the proc, or for the file line number, is in the database, and produces output not unlike gcov, with ### indicating a line that wasn't executed.

The C++ code is called tcl-cov, and is currently located in Minsky’s test directory, although I’m considering moving it to the ecolab utilities directory.

The TCL code to be added to the main application? Here it is:

proc attachTraceProc {namesp} {
    foreach p [info commands $namesp*] {
        if {$p ne "::traceProc"} {
            trace add execution $p enterstep traceProc
        }
    }
    # recursively process child namespaces
    foreach n [namespace children $namesp] {
        attachTraceProc ${n}::
    }
}

# check whether coverage analysis is required
if {[info exists env(MINSKY_COV)]} {
    # trace add execution proc leave enableTraceProc
    proc traceProc {args} {
        array set frameInfo [info frame -2]
        if {$frameInfo(type)=="proc"} {
            minsky.cov.add $frameInfo(proc) $frameInfo(line)
        }
        if {$frameInfo(type)=="source"} {
            minsky.cov.add $frameInfo(file) $frameInfo(line)
        }
    }
    # open coverage database, and set cache size
    minsky.cov.init $env(MINSKY_COV) w
    minsky.cov.max_elem 10000
    # attach trace execution to all created procs
    attachTraceProc ::
}

The name of the coverage database is passed in via the MINSKY_COV environment variable. minsky.cov.add is a command for adding 1 to the counter for file/line, or proc/line as appropriate. The traceProc command is attached to all defined procs, which requires walking through all namespaces, hence the recursive call into attachTraceProc (which starts in global namespace ::).
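
For comparison, the same trick is easy to sketch in Python, where sys.settrace plays the role of TCL's enterstep trace. This is purely an illustrative analogue (a Counter stands in for the cachedDBM coverage database), not part of Minsky:

```python
import sys
from collections import Counter

# (filename, line) -> execution count; stands in for the coverage database
counts = Counter()

def tracer(frame, event, arg):
    # fires for every line executed, analogous to an enterstep trace in TCL
    if event == "line":
        counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return tracer  # keep tracing inside nested calls

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
result = demo(3)
sys.settrace(None)

assert result == 3
assert sum(counts.values()) > 0  # lines inside demo were recorded
```

Uncovered lines are then just the source lines with no entry in counts, which is essentially what tcl-cov reports with its gcov-like ### markers.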

That’s it! Enjoy.


Movie Graph Argument Revisited

In this paper, we reexamine Bruno Marchal's Movie Graph Argument, which demonstrates a basic incompatibility between computationalism and materialism. We discover that the incompatibility is only manifest in singular, classical-like universes. If we accept that we live in a Multiverse, then the incompatibility goes away, but in that case another line of argument shows that with computationalism, fundamental or primitive materiality has no causal influence on what is observed, which must be derivable from basic arithmetic properties.

Draft paper here

Posted in Uncategorized | 2 Comments

Sucked in by a fake journal?

Recently, the phenomenon of fake open-access peer reviewed journals has been put under the spotlight, with blogs, a New York Times article and even a special issue in Nature covering the problem.

I just wanted to cover anecdotally my own experience with these journals here.

It all started with a post to the Everything List in June 2002, where I wondered why we weren't ants, given that ants outnumber humans many times over. For those in the know, this type of argument is known as Anthropic Reasoning, which has developed a notoriety as being the sort of argument that seems too good to be true, yet not obviously wrong. The gist of the argument in this case is that we reason from the fact that we're conscious beings, and that there are many, many more ants than us humans, to wonder why it is we're human beings rather than ants. The conclusion is that perhaps ants are not conscious, although the cynic might point out that ants are just too busy getting on with their lives to bother wasting time with anthropic thoughts.

I initially published this idea in my wildly speculative book Theory of Nothing, where I made an effort to quantify the extent of the problem, and head off a few retorts, such as “Why are we not Chinese”. Flushed with the success of getting the book out, I thought of extracting a couple of sections of original research, and writing them up as peer-reviewed articles. Maybe the reviewers might spot some obvious flaw that eluded me, and that would be the end of it. Alternatively, if they couldn’t find a flaw, then hopefully the argument could be taken seriously enough by the scholarly community to debate its strengths and weaknesses. We might even learn something about the tricky nature of consciousness.

The first section I tackled was the anthropic ants argument. I recall starting to write this on holidays, which would have been in January 2007, although the earliest evidence of submitting to a journal was to “Mind” on 27th February 2008. I can’t quite recall why the delay – perhaps I was allowing the paper to “brew”, but possibly I submitted it to a journal without any email trace. At the same time, I uploaded the article to arXiv.org, and it generated the delightful response Slandering Ants Anthropomorphically.

The article was rejected on editorial grounds – the editors thought it wasn't interesting enough for their readers. Fair enough – editors are ultimately responsible for the boundaries of what their journal covers, even though, according to their mission statement, my paper should have been on topic. Then a gap of nearly a year follows without any trace in my email record. I suspect I submitted it to another journal, from which I received not one skerrick of email in response. Then I submitted it to the Australian Journal of Philosophy. The paper was reviewed, and the referee had some excellent constructive criticism, which I duly incorporated into the paper. However, the paper was ultimately rejected because it did not deal with the historical controversies of anthropic reasoning. I did not want to add a review of historical controversies because a) mostly I don't understand the contra points, and b) it would significantly lengthen the paper, and only serve to muddy the argument. Instead, I took pains to clarify what my assumptions were and the approach I was taking, and only made a passing nod at the literature critical of anthropic reasoning.

Next I tried the journal Erkenntnis. I did not hear anything from the editors for nearly 12 months, in spite of several email pings I made to them over that time. So I then submitted the paper to Philosophical Quarterly, who made an editorial decision that the paper was off topic.

In the meantime, the editor of Erkenntnis actually contacted me, stating that he'd had difficulties in getting referees to return reviews, although he had had one review returned. Finally, in June 2011, Erkenntnis notified me that they were rejecting the paper based on the reviewer's comments – which to me seemed mostly along the lines of not dealing with generic philosophical problems in anthropic reasoning. It had become increasingly clear that anthropic reasoning is one of those topics that's "too hot a potato to handle".

Having pretty much covered the gamut of appropriate traditional journals, it was time to try some of the newer open-access journals. Having had a long-time association with an open-access peer reviewed journal, Complexity International, which is now, unfortunately, no longer accepting submissions, I had a favourable impression of the open access model. I submitted the manuscript to the Open Philosophy Journal, produced by the Bentham group. If I had known then what I know now, perhaps I shouldn't have bothered. After a year, I hadn't heard anything from them, so I then submitted to the Open Journal of Philosophy. Quite quickly, a review came in. Clearly, the reviewer didn't have a handle on the paper, yet after my response to the editor, the journal accepted the paper, nearly five years to the day from when I first submitted the article to a peer reviewed journal. I should have been suspicious. Only later did I discover that this publisher (Scientific Research Publishing) is listed on Jeffrey Beall's excellent list of predatory publishers, and now realise that I have been SCAMMED!

There is little benefit in paying to have my paper put online by someone who may very well not be around next year. My article is available (in unrefereed form) through arXiv.org. Even though this paper has been through peer review, and has even been improved as a result, in the end, it may as well not have been. My idea may well be truly profound, or it may be utter horseshit. But it doesn’t look like I will find out through peer review. I don’t think I’ll bother with the other section (The “How Soon until Doom” appendix of my book). Some topics are just not suitable for the peer review process.
