Sunday, 1 November 2009

God and Proof: Part IV


I will introduce you to what I consider to be the ‘four cardinal rules of logic’, for we cannot show precisely why those who demand proofs quarantine themselves from good reasoning until we have looked at the four different ways in which we construct logic. Once we have done so, I can then show why the demands for proof are not only ill-conceived but solecisms against robust enquiry.

Gödel’s theorem warns us that the axiomatic method of making logical deductions from given assumptions cannot in general provide a system which is both provably complete and consistent. There will always be truths that lie beyond human scope; truths that cannot be reached from a finite collection of axioms. This also means that no door in the labyrinthine palace of empiricism opens directly onto the ‘Absolute’ - there are hints of the infinite in mathematics (Cantor’s Absolute), but any complete unity must include itself, and thus we hit the self-referencing problem of Russell’s paradox. But what we do know from our cognitive set-up is that we are created with a vacancy in our hearts that Christ is waiting to fill. Moreover, that our ‘minds’ can attune themselves to the ‘nature’ of the Absolute gives us the biggest hint that we are truly meant to be here, and that this life is but a shadow of a deeper and more astounding reality.

In the past on Network Norwich and Norfolk we have had two periodic contributors, both called Mike (Mike H and Mike 2), both of whom claim to be unbelievers, and both of whom set up their framework for objection by continually insisting that there is no proof for God, and that until there is, they will remain unfalteringly sceptical. It is here that we must cover something very important – the world isn’t quite like that, and the two Mikes might end up in a perpetual cycle of disappointment because they have made demands that were never strictly realistic. The truth of the matter is this: there will always be statements which are true but which one cannot prove to be true.

Take the following statement (call it ‘S’):

S - “Mike cannot prove this statement to be true”

Suppose Mike were to arrive at the conclusion that S is true - this means that the content of S will have been falsified, because Mike will have just proved it. But if S is falsified, S cannot be true. Thus if Mike answers ‘true’ to S, he will have arrived at a false conclusion, contradicting the assumption that his reasoning is sound. Hence Mike cannot answer ‘true’. But that is exactly what S asserts, so S is true; and in arriving at that conclusion we have demonstrated that Mike cannot arrive at it. This means we know something to be true that Mike cannot demonstrate to be true.
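For readers who like the skeleton of the argument laid bare, here is a minimal formal sketch. The shorthand is mine, not part of the original puzzle: I write Prov_M(S) for ‘Mike can prove S’, and I assume throughout that Mike’s proof methods are sound (he never proves a falsehood).

```latex
% Minimal sketch, assuming Mike's proofs are sound.
% Prov_M(S) abbreviates "Mike can prove S".
\begin{align*}
  S \;&\equiv\; \neg\,\mathrm{Prov}_M(S)
      &&\text{($S$ asserts its own unprovability by Mike)}\\
  \mathrm{Prov}_M(S) \;&\Rightarrow\; S
      &&\text{(soundness: whatever Mike proves is true)}\\
  \mathrm{Prov}_M(S) \;&\Rightarrow\; \neg\,\mathrm{Prov}_M(S)
      &&\text{(substituting the definition of $S$)}\\
  \therefore\;\; &\neg\,\mathrm{Prov}_M(S)
      &&\text{(Mike cannot prove $S$)}\\
  \therefore\;\; &S
      &&\text{($S$ is true, by its own definition)}
\end{align*}
```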

Now consider something a little different. Let us say that we meet a man for five minutes whom we have never met before and will never meet again. Our job is to find out whether he can speak English. If he remains silent throughout the five minutes and disappears never to be seen again, we cannot prove that he cannot speak English, but we have no evidence that he can. If, however, he were to say the words ‘My name is Robert and I can speak English’, we would, of course, have deductive proof that he can speak English, as the English words themselves would be contained within the statement.

This is different from the first example, for the first example concerns the axiomatic method of logical proof itself and is not a property of the statements one is trying to prove or disprove. One can always make the truth of a statement that is unprovable in a given axiom system ITSELF an axiom of some extended system (adding the unprovable Gödel sentence of arithmetic as a new axiom, for instance). But then there will always be other statements in that selfsame extended system that are unprovable.

Now we come to something else that is key - in the first example there is no new predicate that can be added to S (without changing its intrinsic structure) that can alter the fact that S cannot be shown to be true. Very obviously that is not the case with the second example - if we were told that Robert was an English lecturer and that there was footage of one of his lectures, we could show that ‘Robert can speak English’ is a fact without having to prove it in those five minutes. Moreover, unless we start flirting with nonsense, there are a number of things we could find out about Robert that improve the probability that he can speak English.

S1 - Robert was born and raised in Ghana
S2 - Robert was born and raised in Mozambique

It is very clear which of S1 and S2 is more likely to be suffixed with the statement ‘Robert speaks English’ - S1, because Ghana is a former British colony whereas Mozambique is a former Portuguese colony. Of course, S1 and S2 might only shift the probability slightly, but this is what we do in all walks of life. Knowing, as most mathematicians do, that there are some statements of logic that cannot be proved to be true (one can also read about the distinctions between realism and antirealism), we use our perceptive and investigative toolkit to reason our way through these things, as sketched below. If, for example, Robert lives and works in this country, it is much more likely that he speaks English than if he lives and works in Ecuador.
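Here is a toy sketch of that kind of probabilistic dot-joining, using Bayes’ rule. Every number in it is a hypothetical placeholder chosen purely for illustration; nothing hangs on the particular values:

```python
# Toy Bayesian update: how much does learning where Robert was raised
# shift our confidence that he speaks English? All probabilities are
# hypothetical placeholders, not real demographic figures.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

prior = 0.5  # before we learn anything about Robert

# S1: raised in Ghana (a former British colony)
p_s1 = update(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.3)

# S2: raised in Mozambique (a former Portuguese colony)
p_s2 = update(prior, p_evidence_if_true=0.4, p_evidence_if_false=0.8)

print(f"P(speaks English | S1) = {p_s1:.2f}")  # 0.75 - probability rises
print(f"P(speaks English | S2) = {p_s2:.2f}")  # 0.33 - probability falls
```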

One must also bear in mind that there are many axioms or regularities that only become so by our adding something to the facts. Look at these six number sets (every digit used is below six):

234232 - 344232 - 121232 - 523334 - 552555 - 122311

A Turing machine can show which number sequences are computable given a set of rules. If the rule is, say, take the first digit of the first set and add 1 (giving us 3), do the same to the second digit in the second set (giving us 5), the third in the third (giving us 2), and so on, we find that with that rule in place we have the number 352462. Unlike the “Mike cannot prove this statement to be true” example, this time we have created a rule or procedure and shown that logical deductions can be reached without messing around with the axioms of logic. For example, given this rule, I know that none of the six sets can be the exact sequence 352462, because the answer is constructed to differ from the nth set at its nth digit.
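The rule is simple enough to mechanise. Here is a minimal sketch in Python; the sets and the ‘+1’ rule are exactly those above, though the function name is my own:

```python
# Diagonal-style rule from the text: take the nth digit of the nth set,
# add 1, and join the results. Because the answer differs from the nth
# set at its nth digit, it can never equal any set in the list.

def diagonal_plus_one(sets):
    return "".join(str(int(s[i]) + 1) for i, s in enumerate(sets))

sets = ["234232", "344232", "121232", "523334", "552555", "122311"]
answer = diagonal_plus_one(sets)

print(answer)          # 352462, as in the text
print(answer in sets)  # False - guaranteed by the construction
```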

We are beginning to see why demands for proof of God’s existence are knottier than the sceptics realise, and that greater mental prudence is needed before such demands are made. We certainly do not completely abandon a mechanical procedure for investigating mathematics because of Gödel’s theorem and Turing’s halting problem. Those unprovables are rare elements of mathematics and can be sifted out, allowing us to continue on a logical trajectory, and that is precisely what we do with our enquiries about God; we do not churlishly shout for evidence or make unreasonable empirical demands, we take the sagacious approach and realise that sense-making is about joining the dots, not demanding the whole picture in front of our eyes. How silly it is to stridently decree ‘Unless there is proof of God’s existence, I’m going to carry on believing that He doesn’t exist’.

Now we reach the last of the cardinal rules – demonstrated in Chaitin’s theorem. Having shown you, with Turing, that there are mathematical problems that cannot be solved by any fixed mechanical procedure, we now move on to how we know whether what we know (or contend) is right, or whether further compressibility is required.

It also ought to be remembered that one can compress something too far, into logical nonentity - the biggest example being self-referencing paradoxes such as ‘This statement is false’. Here we have something that is too compressed to be logical, because if it’s true then it’s false and if it’s false then it’s true. It is nonsensical because there is no subject or predicate extended to ‘false’. The statement “3 + 5 = 9 is false” is true because ‘false’ has mathematical subjects extended to it - it has the ideal compressibility for deductive analysis.

Now in Chaitin’s theorem, a computer is given this command - “Search for a string of digits that can only be generated by a program longer than this one”. Obviously, if the search succeeds, the search program itself will have generated the digit string. But then the digit string cannot be “one that can only be generated by a program longer than this”. It follows that the search must fail, even if it runs forever. The search was intended to find a digit string that needed a generating program longer than the search program, which is to say that every shorter program had to be ruled out. But as the search fails, we cannot be sure there is no shorter program, as we never know whether a given digit string can be encoded in a program shorter than the one we happen to have discovered.
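Put formally, the self-defeat is immediate. The shorthand here is mine: K(x) for the length of the shortest program that generates the string x, and |q| for the length of the search program q itself:

```latex
% Shorthand (mine): K(x) = length of the shortest program generating x;
% |q| = length of the search program q itself.
\begin{align*}
  &q \text{ searches for a string } x \text{ with } K(x) > |q|.\\
  &\text{If } q \text{ outputs } x, \text{ then } q \text{ itself, of length } |q|, \text{ has generated } x,\\
  &\text{so } K(x) \le |q|, \text{ contradicting } K(x) > |q|.\\
  &\therefore\ q \text{ can never succeed, no matter how long it runs.}
\end{align*}
```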

Now here’s the rub - a random sequence is one that cannot be algorithmically compressed, but as I have just shown, you cannot know whether or not a shorter program exists for generating a given sequence. The cardinal point in these algorithmic programs is that you never know whether you have turned over every stone in trying to shorten the description. Therefore you cannot prove that a sequence is random, although you can disprove it by actually finding a compression. If you are paying close attention you will see that this is congruous with the Robert-speaking-English model - there you have much greater access to empirical evidence (certainly less complex than algorithmic mathematics), and you can find the shortest compression (so to speak) by hearing Robert speak English.
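An off-the-shelf compressor makes the asymmetry vivid. This is only a sketch - zlib is a crude stand-in for ‘the shortest program’ - but it shows that finding a compression settles the question one way, while failing to find one settles nothing:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # zlib gives an UPPER BOUND on the shortest description of the
    # data; it can never certify that no shorter description exists.
    return len(zlib.compress(data, 9))

patterned = b"123456" * 100   # an obviously rule-generated string
noisy = os.urandom(600)       # 600 bytes from the OS entropy pool

for name, data in [("patterned", patterned), ("noisy", noisy)]:
    print(f"{name}: {len(data)} bytes -> {compressed_size(data)} compressed")

# The patterned string compresses heavily, DISPROVING its randomness.
# The noisy string barely compresses, but that proves nothing: a
# cleverer, shorter description might still exist, and by Chaitin's
# argument we can never rule that possibility out.
```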

The practical conclusion here is twofold. In the first place, one can prove mathematically that almost all digit strings are random, but one can never point to a particular string and prove that it is. But more essentially for everyday purposes, taking the cosmos as an algorithmic whole, events or activities that appear random may not be random at all - even things like the indeterminism of quantum mechanics. The cardinal point here is not that something like, say, Heisenberg’s Uncertainty Principle probably belongs to uniform laws of which we as yet know nothing. The cardinal point is that we might never be able to know - in fact, Chaitin’s theorem ensures that we can never ‘prove’ that quantum mechanical measurement outcomes are random.

Falsification and verification:
Some philosophers, most notably Karl Popper, contended that because science aims at making universally quantified statements (for example, ‘all Xs are Y’), the principal issue is not whether such statements are verifiable but whether they are falsifiable. In other words, according to Popper, falsifiability carries greater meaning than verifiability.

However, as a theist one must remember that the boundaries must be reconstituted to make way for infinite complexity within a God-created reality. Given our present (finite) limitations, we must also remind ourselves of what we covered in the foregoing sections – that within this theory there will be some things that are axiomatically true yet non-falsifiable (non-contextually, in the Absolute sense), and that the efficacy of a contention is not predicated on its falsifiability. What Popper meant by falsifiability should not be misunderstood as an accusation levelled at reason itself; it merely constructs parameters and reconstitutes boundaries within the edifice of reasoning and rationale. Furthermore, it is difficult, often impossible, to apply falsifiability to the psychological, historical, sociological, and emotional aspects of life, as these are rarely amenable to it, being individual and unique events or facts. What we can use are our perceptive qualities and, more importantly, our ability to assess the validity of a theory based upon its appearance before our perceptive tools.

The greater a theory’s potential for falsification, the better and more enduring it can be; we see so often that a theory that could easily be falsified but never is endures in a most robust way. For example, the theory 'all plants have DNA' is not as good as the theory 'all life has DNA', because the set of objects 'plants' is a subset of 'life', and hence the 'life' theory has more scope for falsification - it is a more general and robust theory (see the sketch below). The Popperian demarcation of ‘probability’ and ‘degree of corroboration’ is reasonable, provided the burden of verifiability isn’t too great. The statement ‘It is going to rain somewhere in the world in the next hundred years’ carries a much lighter burden of verifiability than the statement ‘It is going to rain on Buckingham Palace at 1:23pm on Friday 23 April 2010’. In the sense of our assumptions about the Divinely created order, logical improbability does not, of course, reside in the predicative inner-context of the sentence alone (as is the case with the second prediction about the rain on Buckingham Palace). It will have to be admitted that thoughts of falsification and non-verification add no weight to an argument so grand in scope; it envelops principles higher than those in the purview of ‘logical probability’ and ‘logical improbability’, although with the concept of Aseity one can make a good assessment based on a structurally underwritten logical framework, particularly if it exposes the infinite regress problem.
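The subset point can be made concrete in a few lines. In this trivial sketch (the organisms named are placeholders of my own choosing), every potential falsifier of the narrower theory is also a potential falsifier of the broader one, but not conversely:

```python
# "All plants have DNA" can only be refuted by a plant lacking DNA;
# "all life has DNA" can be refuted by ANY organism lacking DNA.
# The organisms listed are illustrative placeholders.

plants = {"oak", "fern", "moss"}
life = plants | {"yeast", "salmon", "honeybee"}

falsifiers_of_plant_theory = plants  # only a plant could refute it
falsifiers_of_life_theory = life     # any organism could refute it

assert falsifiers_of_plant_theory <= falsifiers_of_life_theory
print("Extra exposure of the broader theory:",
      falsifiers_of_life_theory - falsifiers_of_plant_theory)
```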

The falsification principle falls down with the realisation that no individual theory is anything other than a constituent part of a ‘chain of validity’ – a chain that we apprehend bit by bit, by the aforementioned method of joining the ontological dots. A theory can thus be positively affirmed in a partially isolated context yet at the same time amount to a discordant cell on one link of the ‘chain of validity’; in other words, falsifying something (singularly) might not upset the link in the chain. This is a slight reworking of the ‘verisimilitude’ of a theory - that is, its appearance of truth: the extent to which a theory corresponds to the totality of reality, rather than just to those parts in the immediate proximity. (I would say that any providential being with a priori complexity would need to furnish us with the perceptive qualities necessary for such an understanding, if we are to have a relationship with Him.)

Having already shown that there are many tenets of existence that are not amenable to the test-and-refute procedural analysis, and are therefore not disposable in the sense that atheists wish for, I think the Popperian caricature is relevant to their thinking; for the general rule of sense-making must readily include the theistic ventures which Christians say are an essential part of our epistemic framework. This is compounded by the fact that the vast non-testable domains covered by our best efforts at analysis, along with the limitations of human perceptual resources, allow only a very sparse interrelational sampling of life. Formalisations of our best theories are, in the strictest sense, simulations unless they are conflated with experience to provide us with ideas of validity; otherwise the trail would stop dead. In the truest Popperian sense, and given the stakes, I would say that many atheists treat the question of God’s existence far too frivolously and unconscientiously as they attempt to decide which of theism and atheism makes the more easily refutable claims. All they end up doing is forming an allegiance with the side that makes the best impression on their emotions, and as any who have witnessed the cults ensnaring people at their most vulnerable will tell you, this is a potentially dangerous allegiance.
