Saying goodbye to Aegis

After nearly 16 years, I am now saying a long farewell to the Aegis source code management system (http://aegis.sf.net). Aegis was, in its day, years ahead of its time. But now, with Aegis’s author dead, and only a handful of stalwarts promoting and maintaining it, it is time to look for a replacement. After more than 18 months of using git and github in anger, I think I finally have an SCM that is up to the job. On the plus side, Github’s enormous developer community and fork/pull request model mean that people are more likely to contribute. Whilst Aegis has something similar, the reality is that very few people will bother to download and install Aegis, so you’re left implementing clunky workflows combining multiple SCMs. More than once, the heterogeneous repositories led to code regressions.

The biggest hurdle was how to handle continuous integration, a feature Aegis had from its inception. After a considerable learning curve, I found a solution in TravisCI, which integrates quite nicely with Github. Then I needed something to replace the versioning workflow I had with Aegis. After studying Gitflow, I realised it was pretty close to what I was doing with Aegis, so I have implemented a versioning workflow using a script “makeRelease.sh” that uses the git tag feature to add version numbers, and added a dist target to the Makefile to create clean tarballs of a particular version.

I’m changing things slightly, though. Whereas Aegis branch numbers bear no relation to delta numbers (branch ecolab.5.32 is actually incremental work on top of ecolab release 5.D29), with my new workflow branches and releases will be numbered identically: release 5.32.1 will be an incremental beta release on the ecolab.5.32 branch. Also, to indicate that the new system is in place, Aegis’s delta numbering (a D in the final place) is gone, and versions will be purely numeric.
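
For the record, here is a minimal sketch of the sort of thing makeRelease.sh does. The script body is an illustration of the workflow, not the actual makeRelease.sh, and the VERSION variable passed to make is a hypothetical detail:

#!/bin/bash
# Sketch only, not the real makeRelease.sh: tag the current commit
# with a purely numeric version, then roll a clean tarball.
# Usage: ./makeRelease.sh 5.32.1
version=$1
if [ -z "$version" ]; then
    echo "usage: $0 <version>" >&2
    exit 1
fi
# git tag records the version against the current commit
git tag -a "ecolab.$version" -m "ecolab release $version"
git push origin "ecolab.$version"
# the Makefile's dist target creates the clean tarball
# (VERSION is a hypothetical variable name)
make dist VERSION="$version"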

You can check out the new stuff in the github repositories, https://github.com/highperformancecoder/minsky and https://github.com/highperformancecoder/ecolab.

Living la vida Hackintosh

Like many, I make a living from open source software development; I develop on Linux, but then build on Windows and Macintosh. I do have a Mac, a rather cute Mac mini, which is a cost-effective way of owning the platform; however, it does have a few disadvantages:

  1. I need to test my software on a minimally installed user machine, not my developer machine, to ensure I have bundled all the necessary dynamic libraries required for my software to run.
  2. I need to build a 32-bit version of the software for maximum compatibility, whereas my Mac mini is 64-bit.
  3. I’d like to have my Macintosh environment with me when travelling, without having to throw the Mac mini in my suitcase, along with monitor, keyboard, etc.

Yes, I know, I could buy a Mac laptop, but I don’t particularly like MacOS for my development environment, so it would still be an extra piece of hardware to throw into the suitcase.

The answer to all of these problems is to load MacOSX onto a virtual machine, such as Oracle’s VirtualBox, available as freeware. Initially, I loaded the MacOSX Snow Leopard distribution provided with my Mac mini into VirtualBox. This worked on some versions of VirtualBox, but not others, so I was constantly having to ignore the pleading to upgrade VirtualBox. Then I discovered I could run the VBox image on my main Linux computer, provided I didn’t need to boot it, as MacOSX checks that it is running on genuine hardware at boot time only. This was a great liberation – I could now do the Macintosh portion of my work from the comfort of my Linux workstation.

Then, unfortunately, upgrades happened – both the Mac mini to Yosemite, and my Linux machine to OpenSUSE 13. With the upgrades, VirtualBox also needed to be upgraded, with the result that the VMs would only run on the Mac mini. Unhappy day.

But now I have discovered the iBoot tool from tonymacx86 (http://www.tonymacx86.com). This great tool allows one to install a “Hackintosh” – the Macintosh operating system running on a virtual machine anywhere – exactly what I need. Whilst Apple seems to take a dim view of people running its software on virtual machines, that is exactly what I need to do, and all the alternatives don’t cut the mustard.

Getting iBoot to work took a little getting used to. The most important points (scripted as a VBoxManage sketch after this list) were:

  1. Ensure EFI boot is disabled. VirtualBox will enable it by default if you tell it you’re loading MacOSX.
  2. Other settings to be selected are PAE/NX, VT-x/AMD-V and Nested Paging.
  3. Under display, select 3D acceleration, and about 20MB of video memory
  4. Make sure the SATA type is AHCI
  5. The other item that really tripped me up was getting the correct version of iBoot. Initially, I downloaded iBoot-3.3.0, which did not work. What I had to do was consult my processor information in /proc/cpuinfo, which told me:
    Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
    

    Then I looked up Intel chips on Wikipedia, and found that chip’s model number on the “Haswell” page. So I needed to download iBoot-Haswell-1.0.1, which did the trick.
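
For reference, the settings above can also be applied through VirtualBox’s command line interface. This is only a sketch – the VM name “Hackintosh” is hypothetical, and the flags are as I understand VBoxManage:

# apply the settings above via VBoxManage ("Hackintosh" is a hypothetical VM name)
VBoxManage modifyvm Hackintosh --firmware bios    # ensure EFI boot is disabled
VBoxManage modifyvm Hackintosh --pae on           # PAE/NX
VBoxManage modifyvm Hackintosh --hwvirtex on --nestedpaging on   # VT-x/AMD-V, Nested Paging
VBoxManage modifyvm Hackintosh --accelerate3d on --vram 20       # 3D acceleration, ~20MB video memory
VBoxManage storagectl Hackintosh --name SATA --add sata --controller IntelAhci   # SATA type AHCI
# put the iBoot image into the virtual DVD drive
VBoxManage storageattach Hackintosh --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium iBoot-Haswell-1.0.1.iso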

Put the iBoot.iso file into your virtual DVD drive, and boot up your virtual machine. If you already have MacOSX installed in your VM, you can use the right arrow key to select it and boot it. Since that is the situation I found myself in, that is what I did. However, if you don’t, you just replace the iBoot.iso image with a MacOSX install disk, and boot that instead.

That’s it. I’m now in the process of cloning one of my VMs and upgrading it to Yosemite! Wish me luck.

Regression test coverage analysis in TCL

If you’re like me, you like having lots of regression tests to cover you against stupid mistakes when hacking up some complicated piece of code. Whilst code coverage tools exist for the major development environments, one major blind spot is how to do coverage analysis of TCL, which becomes a problem when your application (eg Minsky) starts to sport a significant amount of application logic in TCL.

A quick Google indicated that you could buy into the Active TCL way of doing things (not so useful for me), or use an application called Nagelfar. Unfortunately, Nagelfar really assumes you are coding in a standard environment, such as wish or tclsh, not in an application scripting environment such as Minsky or EcoLab. Then came the realisation that I could do it fairly simply in TCL itself. I did take a few false turns, which cost some time, but I found I could attach a command to fire on every step executed in TCL. Then I peek into the immediately enclosing stack frame to look at details such as which line I’m executing, and save these to a database. Since I’m doing this in the EcoLab environment, I make use of the cachedDBM class to accumulate execution counts as they’re found. Finally, I wrote a C++ program that reads in a TCL file, identifies which proc I’m in, checks whether an entry for the proc, or for the file line number, is in the database, and produces output not unlike gcov, with ### indicating a line that wasn’t executed.

The C++ code is called tcl-cov, and is currently located in Minsky’s test directory, although I’m considering moving it to the ecolab utilities directory.

The TCL code to be added to the main application? Here it is:


proc attachTraceProc {namesp} {
    # attach the traceProc tracer to every command in this namespace
    foreach p [info commands $namesp*] {
        if {$p ne "::traceProc"} {
            trace add execution $p enterstep traceProc
        }
    }
    # recursively process child namespaces
    foreach n [namespace children $namesp] {
        attachTraceProc ${n}::
    }
}

# check whether coverage analysis is required
if {[info exists env(MINSKY_COV)]} {
    # traceProc fires on every executed step; it peeks at the enclosing
    # stack frame to record the proc (or source file) and line number
    proc traceProc {args} {
        array set frameInfo [info frame -2]
        if {$frameInfo(type) eq "proc"} {
            minsky.cov.add $frameInfo(proc) $frameInfo(line)
        }
        if {$frameInfo(type) eq "source"} {
            minsky.cov.add $frameInfo(file) $frameInfo(line)
        }
    }
    # open coverage database, and set cache size
    minsky.cov.init $env(MINSKY_COV) w
    minsky.cov.max_elem 10000
    # attach trace execution to all created procs
    attachTraceProc ::
}

The name of the coverage database is passed in via the MINSKY_COV environment variable. minsky.cov.add is a command for adding 1 to the counter for file/line, or proc/line as appropriate. The traceProc command is attached to all defined procs, which requires walking through all namespaces, hence the recursive call into attachTraceProc (which starts in global namespace ::).
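
A typical coverage run then looks something like the sketch below. The file names and the tcl-cov invocation are assumptions for illustration, not a documented interface:

# run the regression suite with coverage collection switched on
# (coverage.db is a hypothetical database file name)
MINSKY_COV=coverage.db ./minsky test/regression.tcl

# annotate a TCL source file against the accumulated counts, gcov-style;
# tcl-cov's exact arguments are an assumption
./tcl-cov coverage.db library/init.tcl > init.tcl.cov
grep '###' init.tcl.cov    # lines that were never executed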

That’s it! Enjoy.

Movie Graph Argument Revisited

In this paper, we reexamine Bruno Marchal’s Movie Graph Argument, which demonstrates a basic incompatibility between computationalism and materialism. We discover that the incompatibility is only manifest in singular classical-like universes. If we accept that we live in a Multiverse, then the incompatibility goes away, but in that case another line of argument shows that with computationalism, fundamental, or primitive, materiality has no causal influence on what is observed, which must be derivable from basic arithmetic properties.

Draft paper here

Sucked in by a fake journal?

Recently, the phenomenon of fake open-access peer-reviewed journals has been put under the spotlight, with blogs, a New York Times article and even a special issue of Nature covering the problem.

I just wanted to recount my own anecdotal experience with these journals here.

It all started with a post to the Everything List in June 2002, where I wondered why we weren’t ants, given that ants outnumber humans many times over. For those in the know, this type of argument is known as Anthropic Reasoning, which has developed a notoriety as the sort of argument that seems too good to be true, yet is not obviously wrong. The gist of the argument in this case is that we reason from the fact that we’re conscious beings, and that there are many, many more ants than us humans, to wonder why it is we’re human beings rather than ants. The conclusion is that perhaps ants are not conscious, although the cynic might point out that ants are just too busy getting on with their lives to bother wasting time with anthropic thoughts.

I initially published this idea in my wildly speculative book Theory of Nothing, where I made an effort to quantify the extent of the problem, and head off a few retorts, such as “Why are we not Chinese”. Flushed with the success of getting the book out, I thought of extracting a couple of sections of original research, and writing them up as peer-reviewed articles. Maybe the reviewers might spot some obvious flaw that eluded me, and that would be the end of it. Alternatively, if they couldn’t find a flaw, then hopefully the argument could be taken seriously enough by the scholarly community to debate its strengths and weaknesses. We might even learn something about the tricky nature of consciousness.

The first section I tackled was the anthropic ants argument. I recall starting to write this on holiday, which would have been in January 2007, although the earliest evidence of submitting to a journal is a submission to “Mind” on 27th February 2008. I can’t quite recall the reason for the delay – perhaps I was allowing the paper to “brew”, but possibly I submitted it to a journal without leaving any email trace. At the same time, I uploaded the article to arXiv.org, where it generated the delightful response Slandering Ants Anthropomorphically.

The article was rejected on editorial grounds – the editors thought it wasn’t interesting enough for their readers. Fair enough – editors are ultimately responsible for the boundaries of what their journal covers, even though, according to their mission statement, my paper should have been on topic. Then a gap of nearly a year follows without any trace in my email record. I suspect I submitted it to another journal, from which I received not one skerrick of email in response. Then I submitted it to the Australasian Journal of Philosophy. The paper was reviewed, and the referee had some excellent constructive criticism, which I duly incorporated into the paper. However, the paper was ultimately rejected because it did not deal with the historical controversies of anthropic reasoning. I did not want to add a review of historical controversies because a) mostly I don’t understand the contra points, and b) it would significantly lengthen the paper and only serve to muddy the argument. Instead, I took pains to clarify what my assumptions were and the approach I was taking, making only a passing nod to the literature critical of anthropic reasoning.

Next I tried the journal Erkenntnis. I did not hear anything from the editors for nearly 12 months, in spite of several email pings I sent them over that time. So I then submitted the paper to Philosophical Quarterly, who made an editorial decision that the paper was off topic.

In the meantime, the editor of Erkenntnis actually contacted me, stating that he’d had difficulty getting referees to return reviews, although he had had one review returned. Finally, in June 2011, Erkenntnis notified me that they were rejecting the paper based on the reviewer’s comments – which to me seemed mostly along the lines of not dealing with generic philosophical problems in anthropic reasoning. It has become increasingly clear that anthropic reasoning is one of those topics that’s “too hot a potato to handle”.

Having pretty much covered the gamut of appropriate traditional journals, it was time to try some of the newer open-access journals. Having had a long association with an open-access peer-reviewed journal, Complexity International, which is now, unfortunately, no longer accepting submissions, I had a favourable impression of the open-access model. I submitted the manuscript to the Open Philosophy Journal, produced by the Bentham group. If I had known about its reputation, perhaps I shouldn’t have bothered. After a year, I hadn’t heard anything from them, so I then submitted to the Open Journal of Philosophy. Quite quickly, a review came in. Clearly, the reviewer didn’t have a handle on the paper, yet after my response to the editor, the journal accepted the paper, nearly five years to the day from when I first submitted the article to a peer-reviewed journal. I should have been suspicious. Only later did I discover that this publisher (Scientific Research Publishing) is listed on Jeffrey Beall’s excellent list of predatory publishers, and I now realise that I have been SCAMMED!

There is little benefit in paying to have my paper put online by someone who may very well not be around next year. My article is available (in unrefereed form) through arXiv.org. Even though this paper has been through peer review, and has even been improved as a result, in the end, it may as well not have been. My idea may well be truly profound, or it may be utter horseshit. But it doesn’t look like I will find out through peer review. I don’t think I’ll bother with the other section (The “How Soon until Doom” appendix of my book). Some topics are just not suitable for the peer review process.

Elliot banned!

It seems like both Elliot and Alan have now been banned from FoR. See Elliot’s posting on BoI

Why Windows is not ready for the desktop!

OK – yes, I know this is a provocative title. But it is often claimed that Linux is not ready for the desktop. I have been using Linux as my primary desktop environment since 1996, so the statement always surprises me when applied to Linux. Prior to that, Solaris 1 was my desktop from 1990, and before that it was Mac. Each step was an improvement on what came before. Compared with the Windows of the day (Windows 95), the Linux desktop was so many light years ahead that Windows machines were the butt of jokes (eg “PC-contemptibles”).

However, inevitably, the Windows juggernaut caught up with me. For a variety of reasons, my paid work now involves working with Windows machines a substantial fraction of the time. In 2006, I started to use Windows for the first time for anything more significant than driving a scanner or running a web browser. This was Windows XP, which I would consider to be the first release of Windows that might be ready for the desktop. It has proper multitasking, remote administration (though not in the “Home” version), schedulable tasks, and services. It is lacking in the multiuser department, though, although for my day to day work that is not too important.

However, the more I used Windows, the more its deficiencies became apparent. Bearing in mind the long history of posix systems on my desktop, it is crucial that my desktop support posix functionality, so that I don’t take an immediate productivity hit from trying to figure out an alien way of doing things. Bear in mind that all modern desktop operating systems (eg Linux, MacOS, Solaris, etc), with the single exception of Windows, support posix natively out of the box. I have invested heavily in the platform, and have a raft of open source tools that I can use on a posix platform, at the cost of a bit of compilation if the application is not available out of the box.

Fortunately, however, through the efforts of RedHat and other volunteers, an almost complete posix environment called Cygwin is available as a free download. Most of the applications I have grown to love and use are available via a simple point-and-click installer. Full kudos to the Cygwin team for taming Windows and making it a usable and productive platform. Nevertheless, even with Cygwin, there are very many nasty sharp corners with Windows, and I want to document these peeves: why Windows is not ready for the desktop.

  1. Cygwin is a second class citizen. By this, I mean a couple of things. The filesystem as seen by Cygwin is rooted somewhere like c:/cygwin. The actual drive letters are mounted in the form of /cygdrive/c, etc. Regular Windows software doesn’t understand Cygwin paths, and Cygwin programs often have trouble with regular Windows paths (Cygwin’s cygpath translator, sketched after this list, helps at the seams). Trying to interface the output of, say, Microsoft’s Visual C compiler with something that the compilation buffer of emacs understands is fraught. Similarly, generating paths that Visual C understands from within Cygwin’s make utility is convoluted and messy. Another aspect of this problem is that you can’t expect bash to be available on all systems, meaning you have to descend into the pit of hell that is cmd.exe programming. More on this later.
  2. Opening a file locks it for deletion. This “feature” of Windows has caused uncountable aggravation for users (not just developers). Ever wondered why you had to reboot your computer after installing software? It’s because the installer has replaced some dynamic libraries (aka dlls), but existing running software still has the old version of the dll in memory. Similarly, as a developer creating programs, you find your builds fail because either there is a spurious compilation running in the background (see Peeve No. 3), or because the program you’re building is still running (eg within a debugger). It doesn’t actually need to be that way. On posix systems, deleting a file (or overwriting it) simply removes it from the directory. If there are no other references to that file (eg links), the file still exists, but is effectively unnamed until all processes using that file close their file handles (see the sketch after this list). Works a treat. Pity Microsoft chose the less useful semantics when they implemented NTFS.
  3. Signals not propagated. This means that ^C and ^S do not work with native Windows programs, only Cygwin programs, because Cygwin has made an effort to support signals. It also means, for example, that when emacs kills a subordinate compilation process, only the toplevel cmdproxy process dies. This usually means a hunt and kill on the now orphaned compilation processes. A useful tool is Process Explorer, which has sufficient smarts to allow you to kill a whole tree of processes. However, it is manual – it would be so nice if the operating system supported this out of the box.
  4. X-windows copy/cut/paste. If you have ever used the 3 mouse buttons to select, copy, cut and paste in X-windows, you will realise just what a timesaver it is, let alone the saving on RSI from not having to repeatedly type ^C, ^X, ^V in an intensive edit session. The problem is that Windows does not understand the middle mouse button at all, and the right mouse button is reserved for context menus. X-windows programs running under Cygwin do understand these, as does emacs, but that’s about it. To alleviate this lack of functionality, TXMouse works passably well. The right mouse functionality is not supported, but you do get swipe to copy, and paste on the middle button. Well, mostly – some Windows programs refuse to cooperate (here’s looking at you, Visual Studio). It also does a quite passable job of fixing Peeve No. 6.
  5. No virtual desktop. One of the things I love about my Linux netbook is its light weight. The downside of such a small machine is its small screen – 1024×600. Does that mean I’m limited in what I can do? No. I run a virtual desktop, which is 4000×3000 pixels. The window manager allows me to drag the physical screen around the virtual one – navigation is quite easy, and allows me to have literally hundreds of windows open, spread over multiple projects I might be working on. Even applications requiring larger window sizes (Visual Studio, for instance, is unusable on screens less than 800 pixels high) are not difficult to use. I have, for instance, requested 1200×1000 remote desktops to manage Visual Studio development from my netbook. It actually works a treat. It’s a toss-up as to whether it is a better working environment than a dual monitor setup, but it is a hell of a lot more portable. The bog standard window managers for Linux (KDE and Gnome) sport workspaces, which give something like a virtual desktop (not as good IMHO, though), as does MacOSX, but Windows 7 lacks this functionality out of the box. I tried a workspaces-like solution from sysinternals called Desktops, but sadly it didn’t play well with the Cygwin programs I need to use.
  6. Click to focus. Click to focus has been the mode of operation since the first Macintosh, and was probably inherited from the original Xerox Parc GUI research. X-windows introduced an alternative mode called “focus follows mouse”. Here, the window under the mouse pointer has focus. It’s a bit confusing going from one system to the other – I frequently find myself typing text into the wrong window, since my eye (and mouse) are focussed on one window, but the operating system is focussed on another. So why would I argue that focus follows mouse is superior to click to focus? A very common operation I find myself doing is copying and pasting selected text from one window into another (possibly a completely different application). With focus follows mouse and X-style copy/paste, this is extremely simple. Just have the source window on top, with a small part of the destination window showing at one edge. Then select text using a swipe (or left button–right button when the selection is large), move the mouse to the small part of the destination window, and press the middle mouse button. Click to focus destroys this fundamental mode of working, as you need a separate click to bring the destination window to the front (which may take some time to redraw), followed by the paste, followed by another click to bring the original source window to the front. Focus follows mouse is not to everybody’s taste, but it is painful when the computer cannot be configured to use it. Note that TXMouse partially supports focus follows mouse, although windows are auto-raised in this case, which defeats the purpose somewhat. Also, Visual Studio really behaves badly when TXMouse is running.
  7. Modal dialogs prevent windows from being moved. This is a doozy! If an application pops up a modal dialog box, the app’s window remains fixed in place until the modal dialog is dismissed. Even worse is when the application becomes busy, not responding to its event loop. The problem is that in Windows, applications are responsible for moving, raising, lowering and iconising windows – events which should be the responsibility of the window manager!
  8. No symbolic links. Symbolic links are an extremely useful feature for organising the file system, and have (yay!) been added to Windows 7. Cygwin fakes symlinks in its own way, because many posix programs assume the ability to create them. The problem comes from Windows software that knows nothing about symbolic links, making navigation around the filesystem traumatic.
  9. Not all programs use ‘/’ consistently. Back in the bad old days of MSDOS, ‘\’ (slosh) was used as the directory separator, even though ‘/’ (slash) had been in use for some years in the unix world. When Windows grew up (gained the NT kernel), the slash became acceptable as a directory separator to most Windows software, with slosh retained for backwards compatibility. But not all – some software insists on slosh being used (which is hard to input at the bash console), and occasionally software will fail because a path has been entered with a mix of slashes and sloshes.
  10. General slowness of NTFS. This is a bit hard to quantify, but on similarly specced hardware running Windows XP and OpenSUSE Linux, I have noticed a factor of 10 performance difference in writing files. Some of this might be due to Cygwin’s posix emulation layer, and supposedly Windows 7 has a livelier I/O subsystem, but one thing that is painfully obvious is that when the system is performing I/O, such as compiling C++ code, the user interface becomes extremely sluggish (taking tens of seconds to respond to mouse clicks). This behaviour is not evident on my OpenSUSE Linux system, unless it is actually swapping.
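
To illustrate peeve 1, Cygwin’s cygpath utility translates between the two path worlds, and is about the only sane way of gluing native tools into Cygwin makefiles. The paths here are illustrative, and I assume Visual C’s cl is on the PATH:

cygpath -u 'C:\Users\me\project'    # -> /cygdrive/c/Users/me/project
cygpath -w /cygdrive/c/Users/me     # -> C:\Users\me
# eg hand a Windows-style path to Visual C from a Cygwin shell
cl /c "$(cygpath -w src/main.cpp)"

And a sketch of the posix delete semantics described in peeve 2, which you can try on any Linux box:

echo testing > build.log
tail -f build.log &    # a running process now holds the file open
rm build.log           # posix merely removes the name...
# ...tail keeps reading the now unnamed file until it closes its handle;
# on Windows the delete would be refused because the file is "in use"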

Windows 7

While Windows 7 has improved some things, such as remote desktops for all versions, and symbolic links (see point 8 above), unfortunately its better security model is not well delivered. On Linux, there is a very simple security model – normal users have privileges to access only their own areas, and a single privileged user can do anything on the system. To escalate privileges, there are the su and sudo commands, which easily allow one to execute commands with the correct privilege level, as the sketch below shows.
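
For comparison, privilege escalation on Linux is a one-liner (zypper here is just my OpenSUSE example):

sudo zypper install emacs    # run a single command with root privilege
su -                         # or escalate to a full root shell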

Here are the new peeves with Windows 7.

  1. Unfortunately, in Windows 7, there is no equivalent to the sudo command. This means that all shells need to be run as administrator, otherwise you will quickly find that you cannot do something. As a consequence, many other programs will then also need to be run as administrator, as they need to access files that have been created or copied by the shell.
  2. “Resource temporarily unavailable”, which afflicts Cygwin processes being forked. The Cygwin team claim this is a problem with the way Microsoft has implemented its process control – I don’t really want to get into the details, but the effect is to degrade the user experience, as it often takes 2–3 goes to get software to start.

Microsoft Word Rage

Way back in the dim distant past (well, the 1980s, for those who remember), whilst a PhD student of theoretical physics, I noticed a distemper amongst my colleagues attempting to write their theses using a word processor. It involved much swearing and cursing at the computer, slamming of doors, and running full tilt down the corridor screaming at the top of one’s lungs.

To be fair, it was not just Microsoft. Microsoft Word did exist at the time, but this was well before its market dominance. Other word processors existed, and were used, but all seemed to suffer the same flaw.

What was the cause of this most antisocial behaviour? After having added more than a handful of equations or graphical images, the program developed its own personality, reminiscent of Beelzebub. Equations and figures would be randomly selected for deletion and sometimes (if you were lucky) insertion at some random point elsewhere in the document.

I took heed, and joined the document processing revolution. In particular, I started using LaTeX, and never looked back. I have written 4 books (including my thesis) using hardware that is considered laughable today, and it was a joy to use.

However, this is not a post for exhorting the virtues of LaTeX.

What prompted me into writing this is that one would have expected that, with two decades of computer development in both software and hardware (with the hardware being 10,000 times more powerful now than when I wrote my thesis), this condition of “Word rage” would be a thing of the past. Not so. My son was recently writing up a report on his school science assignment. This was no book! It was around 30 pages, and yes, it had quite a few figures and tables, but I found him swearing at the computer, complaining of Word “crashing and running slowly”, in an eerily similar way to my PhD colleagues all those years ago.

Unfortunately, I don’t think Libre Office is much better than Microsoft Word.

We need a word for this phenomenon. The obvious Greek neologisms “lexicomania” and “leximania” are no longer available, having already been taken for an excessive obsession with words and as a synonym for logorrhea (ie verbal diarrhea) respectively. Hopefully, with more visibility of this problem, the frustrations of scientists and technical writers might finally be addressed by the writers of our word processing software.

How I got bitten by the Tiger.

I have had a mediocre experience with Tiger – the first couple of flights were delayed significantly, taking 5 hours to travel between Melbourne and Sydney each way, and more recently I was caught up in the CASA grounding of the airline in July of this year, and had to make alternative travel arrangements.

But to be fair, there were times when the Tiger experience went well, and was about what you’d expect from a budget airline. I thought I’d give Tiger the benefit of the doubt.

However, I recently tried to book tickets for a return trip from Sydney to Melbourne, and discovered after I received my booking confirmation that my forward leg was booked for a different time from the one I had selected. Since I was meeting someone flying in on a different flight, this was unacceptable.

I cannot explain what happened. I do know that I missed the flight details in the small box at the bottom left corner of the web page (where it is typically not visible on a netbook’s screen). I was expecting a more prominent “review order details” page before committing to the flight. Nor did I expect my flight selection to have changed.

Just like other people have found, it is very difficult to contact customer support to rectify such an issue. Given that the problem was detected only minutes after the order went in, it would have been no skin off Tiger’s nose to correct the order to what I originally requested then and there. However, it took some hours before I was able to get through to customer support on the phone. In the meantime, I filed a website problem report about the issue, to which I have yet to receive a response.

Customer support, instead of being helpful, decided it was a better use of their time to spend more than half an hour arguing with me than to fix the problem.

I then attempted to change the flight details of the forward leg, thinking that even if I ended up not getting the $50 change charge refunded, I might still be better off than if I flew with an alternative airline. Two problems occurred – my debit card maxed out, so the transaction was declined, but more importantly, even though I selected a flight two slots earlier than the current one, the system booked me on the previous day! Customer support refused to reverse this entry as well, and the upshot is that the return leg (which was booked correctly) is now unusable. It is actually cheaper to start the booking all over again than to pay an additional change fee.

This is just not worth it. Tiger’s booking system is a game of roulette, with no customer service to fix up problems when they occur. I have no faith that spending another $120 will get me the flights I need.

Banned from Fabric of Reality list

On Thursday 25th of August, I was banned from posting to the FoR discussion list by the moderator Alan Forrester, on the basis of flouting a newly made-up posting rule, without warning or, it appears, even a reply.

Many people have written to me privately in support, or posted public expressions of support to the FoR list. Thank you for your kind words.

At the request of a number of people, I have now established an alternative discussion list free from the tyranny of arbitrary censorship.

Alan Forrester’s Notice
My response to Alan Forrester
The posting that caused the ban
