Date: Wed, 24 Aug 2011 14:29:05 +1000

From: Russell Standish <lists@hpcoders.com.au>

To: fabric-of-reality@yahoogroups.com

Subject: Induction

Summary:

Recently, I had a posting to the BoI group rejected by the group

moderator, on the grounds of it being off-topic, as it was not using

induction as defined in BoI or FoR. This was initially puzzling to me,

as I was using the term in a purely conventional way, and didn’t think

David Deutsch was using it differently either. I still don’t think so

after rereading the relevant chapters of FoR and BoI, and reading some

further papers on induction on the internet.

However, having looked into this now, it has given me a better understanding of some of the postings I received earlier this

year. I’ll highlight one of the most peculiar: a claim made by

Elliot Temple on this list on March 26th this year that Popper had a

detailed refutation of Solomonoff’s algorithmic information theory

(AIT) in his work “Logic of Scientific Discovery”, which was published

in 1959, a full decade before AIT was fully developed. Furthermore,

that book is a rework of a book originally published in German in

1934. How much does this criticism reflect what was known about

induction in the 1950s – or does it really just represent criticism of

what was known in the 1930s? In either case, the criticism would have

to be out of date.

The point is induction does work. It works in machines. It is the primary mode of learning in non-human animals, and most likely was the

primary mode of learning in humans until the scientific

revolution. David’s point can be made succinctly: the scientific

method of conjecture and refutation is simply better than

induction. It is a pity that he chose a fairly simplistic

characterisation of induction to contrast with C&R, but no matter –

IMHO, it is not essential to David’s arguments.

For the interested, I enclose the rejection notice, my response, and

finally the original posting that was rejected.

The reason given for the rejection was:

> By “induction” I meant what BoI means by it. Your post does not engage with or

> discuss BoI’s position on induction, so it’s off topic. We don’t need committed

> anti-Popperians implicitly trashing and explicitly ignoring BoI’s worldview;

> that’s one of the things wrong with FoR list.

Whilst he wasn’t explicit, I assume that he is labelling me a

“committed anti-Popperian”.

The first comment I wish to make is that the post was not

anti-Popperian. Popper was not mentioned by name at all, and frequent

mentions were made of Popper’s key idea of falsification (ie

conjecture & refutation) in a positive and context-sensitive manner,

as can be seen in the original quoted message below.

The more important point is “induction as BoI means it”. Up until now,

I hadn’t even considered that there was more than one sort of

induction, other than mathematical induction (which I won’t mention

again, as it’s not relevant).

So I reread chapter 1 of BoI and also chapter 3 of FoR. I had a good

laugh at Russell’s chicken. Also, for good measure, I consulted the

Wikipedia entry for induction (and inductive inference), as well as

the Stanford Encyclopedia of Philosophy entry written by John Vickers. I realised that there isn’t

more than one sort of induction, but there are degrees of

sophistication between different formulations of it. The version

presented in both FoR and BoI is but a caricature, good for

explaining some of the epistemological problems people have had with

induction, but not really presenting the true insight that came

through algorithmic information theory – the work of Solomonoff,

Levin, Chaitin and so on. To put it fairly bluntly, the sort of

inductive reasoning described in FoR and BoI is to the inductive

reasoning found in something like IBM’s Watson as a skyrocket is to a

Saturn V. The skyrocket will not get you to the moon. But it is still

the same thing.

Now I pardon David’s use of what is effectively a strawman (indeed I

never noticed until now), because I think his argument is correct that an

understanding is necessary for real progress in science, and the only

path to understanding is via conjecture and refutation. I do have some

qualms about his presentation of this in BoI (which will be the

subject of a future posting), but ultimately think he will be

vindicated in this. Nevertheless, he shouldn’t deny that induction

works – it very clearly does, as pretty much all knowledge prior to

the Scientific Revolution was gathered by informal inductive

means. And in the snail example brought up by John Clark (IIRC), the

snail learnt stuff about the world via induction.

Now for the rejected post from myself.

> On Fri, Aug 19, 2011 at 11:41:45AM -0700, Elliot Temple wrote:

> >

> >

> > Also there is no such thing as “induction in action”. Induction is not the

> > name of any set of steps or approach which is possible to follow. Induction

> > works as follows:

> >

> > 1) get data

> > 2) induce a theory

> > 3) profit

> >

> > However, there is no method of “inducing” a theory from data. It’s not just

> > that no method is specified but also that it’s impossible. The only way to get

> > theories is by conjecture. (And the only way to get good theories is by

> > conjecture and refutation.)

>

> That rather depends on what you mean by “induce a theory”. The classic

> archetype of induction is the example of Kepler inducing his laws of

> planetary motion from Brahe’s data. However, what probably happened

> was that Kepler tried a few different simple mathematical formulae,

> and checked to see if they fit the data. That is, conjecture and

> refutation. Just not formalised as such.
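The Kepler example just described is really a small conjecture-and-refutation loop, and it can be sketched in a few lines of code (purely illustrative; the data and candidate formulae below are invented, not taken from the original post):

```python
# Synthetic "observations" obeying Kepler's third law T = a**1.5
# (a = semi-major axis, T = period, in units where the constant is 1).
data = [(1.0, 1.0), (4.0, 8.0), (9.0, 27.0)]

# Conjectures: a few simple candidate relations between a and T.
candidates = {
    "T = a": lambda a: a,
    "T = a**2": lambda a: a ** 2,
    "T = a**1.5": lambda a: a ** 1.5,
}

def survives(model, observations, tol=1e-9):
    """A conjecture survives refutation iff it reproduces every observation."""
    return all(abs(model(a) - T) <= tol for a, T in observations)

# Refutation: discard every conjecture the data contradicts.
surviving = [name for name, f in candidates.items() if survives(f, data)]
```

Only the conjecture consistent with all the observations survives; the loop itself is agnostic about where the candidate formulae come from.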

>

> However, there is another approach. One could start off with a

> “universal model”, then tweak the parameters to make it fit the

> data. For example, one could suppose that planetary motion is given by

> a polynomial relationship between the coordinates, then fit the

> polynomial by means of the Gaussian least squares procedure. For any

> dataset with n entries, it is possible to find a perfect fit to a

> polynomial with n coefficients. However, such a model is pretty

> useless, as Occam’s razor testifies. So instead, we examine other

> lower order models to find the one with fewest parameters that also

> fits well. If you understand the notion of a Pareto front, it is a

> model from the middle of the Pareto front that exhibits best

> predictive power.
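The fit-and-simplify procedure described above (fit polynomials of increasing degree, then prefer the lowest-order one that fits adequately) can be sketched as follows. This is a minimal illustration with invented data; the normal-equations solve is deliberately bare-bones and not numerically robust:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination (no external libraries)."""
    n = degree + 1
    # Normal equations A c = b, where A[i][j] = sum_k x_k^(i+j)
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (b[i] - s) / A[i][i]
    return coeffs  # [c0, c1, ...] for c0 + c1*x + ...

def residual(coeffs, xs, ys):
    """Sum of squared errors of the fitted polynomial on the data."""
    return sum((sum(c * x ** k for k, c in enumerate(coeffs)) - y) ** 2
               for x, y in zip(xs, ys))

# Invented data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# Occam-style search: accept the lowest degree whose fit is adequate
for d in range(len(xs)):
    coeffs = polyfit(xs, ys, d)
    if residual(coeffs, xs, ys) < 1e-8:
        best_model = (d, coeffs)
        break
```

On data generated from a straight line, the search stops at degree 1: the higher-degree perfect fits are never needed, which is the Occam's razor point made above.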

>

> Now many machine learning algorithms work in just this sort of

> way. For example, it is known that fuzzy inference systems can model

> exactly the same class of phenomena as artificial neural

> networks. They are both “universal models” in a certain

> sense. Learning on these models typically involves some form of

> optimisation of the model on a training set – whether by an

> evolutionary method (your conjecture and refutation), a local method

> (sometimes known as hill climbing), or various other methods too

> numerous to mention.
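Of the optimisation methods listed, hill climbing is simple enough to sketch directly. This is a toy one-parameter example; the objective function, step size and iteration cap are invented for illustration:

```python
def hill_climb(loss, w0, step=0.1, iters=200):
    """Greedy local search: accept a neighbouring value only if it
    strictly lowers the loss; stop when no neighbour improves."""
    w = w0
    for _ in range(iters):
        moved = False
        for cand in (w - step, w + step):
            if loss(cand) < loss(w):
                w, moved = cand, True
        if not moved:
            break
    return w

# Toy "training": fit a single parameter by squared error.
w_best = hill_climb(lambda w: (w - 2.5) ** 2, w0=0.0)
```

The same skeleton generalises: swap the neighbour-generation step for mutation plus selection over a population and you have an evolutionary method instead.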

>

> This is what I mean by induction, and yes, it is known to be

> effective. And no, it does not always involve evolution (C&R), but

> does sometimes.