Monday, May 17, 2010

The Humanity in Evolutionarily Derived Proof-Seeking Systems

I've been wondering lately about the limitations of evolutionarily derived systems of the following sort. Take a pool of already-proved theorems, and build a "primordial soup" of proof systems equipped with only the most rudimentary basics of arithmetic (perhaps even weaker than a Gödel-incomplete system). In each generation, cull the systems that prove the fewest theorems and promote the reproduction of the best ones. If we also allow for many modes of duplication, such as exact copies, mutations, mergings and branchings, we are left with systems that encompass more and more theorems. We have here an evolutionary system: there is reproduction, mutation and selective pressure.
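To make the setup concrete, here is a minimal sketch of the loop described above. Everything in it is an illustrative stand-in, not a real proof calculus: "theorems" are just integers, a "system" is a set of integer rules, and a system "proves" a theorem when one of its rules divides it. The names (`proves`, `fitness`, `mutate`, `merge`, `evolve`) and all parameters are my own inventions for the sketch.

```python
import random

random.seed(0)  # reproducible toy run

THEOREMS = list(range(2, 200))  # toy stand-in for the pool of known theorems


def proves(system, theorem):
    # A "system" is a set of integer rules; it "proves" a theorem
    # (an integer) if any rule divides it -- a toy notion of provability.
    return any(theorem % rule == 0 for rule in system)


def fitness(system):
    # Selective pressure: how many theorems from the pool it proves.
    return sum(proves(system, t) for t in THEOREMS)


def mutate(system):
    # Mutation: randomly drop one rule or add a new random rule.
    child = set(system)
    if random.random() < 0.5 and len(child) > 1:
        child.remove(random.choice(sorted(child)))
    else:
        child.add(random.randint(2, 20))
    return child


def merge(a, b):
    # Merging: offspring inherits the rules of both parents.
    return set(a) | set(b)


def evolve(generations=30, pop_size=20):
    # Primordial soup: rudimentary one-rule systems.
    population = [{random.randint(2, 20)} for _ in range(pop_size)]
    for _ in range(generations):
        # Cull the systems that prove the fewest theorems...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and promote reproduction of the best: copies, mutations, mergings.
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            if random.random() < 0.3:
                offspring.append(merge(*random.sample(survivors, 2)))
            else:
                offspring.append(mutate(random.choice(survivors)))
        population = survivors + offspring
    return max(population, key=fitness)


best = evolve()
```

Note that nothing in the fitness function rewards coherence: a system that proves many theorems for the wrong reasons scores exactly as well as one that proves them for the right ones, which is the point of the thought experiment.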
The question is not whether such systems would ever be successful at opening up new truths for mathematics, but rather what sort of system could possibly survive in this environment. There would be the expected systems, which rigidly keep faith with the laws of mathematics as we understand them, and these are the ones we would hope would open new doors for us. At the other end of the scale, we have those that barely scrape past the last percentile to survive, and we expect those to die off within a few generations. The kind that interests us is the one that keeps a solid foundation of mathematics, as the first kind does, but also allows certain proofs to go through at the expense of other proofs. What we'd find here is a system that could disregard or extend fundamental theorems so that a larger number of theorems are proved. Such a system would be able to prove many "surface" theorems, but would be fundamentally flawed. I believe we would have here something similar to phlogiston, the element once thought by early chemists to be present in combustible matter and released in fire. It had explanatory and predictive power, but was later found to be fundamentally flawed. Such a system could potentially compartmentalize most of its theorems, so that some of its rules apply to some theorems and other rules apply to others.

Now, you might ask, "But if the pool of theorems is based only on theorems we already know, wouldn't the whole idea favor systems most like the ones we already have?" It certainly would, but that is not the purpose of the thought experiment. We are not trying to find the "truest" system, only the sorts of systems that could be built this way. The fundamentally flawed systems are those best able to "work in their environment" without subscribing to fundamentally correct or coherent behaviour. Such a system is simply trying to do its best in the situation it is in, without regard to any fully teleological bottom-up approach.

The question is: how far from this sort of system is a human being in its environment? It would seem at first glance that humans, unlike the proof-seeking systems, do take a teleological approach to our environment. But it hasn't always been that way. The first multicellular beings had no such approach; they responded as our systems do, simply trying to optimize results for the given requirements. As we move along the evolutionary spectrum, these 'deterministic' approaches give way, and we arrive at a point where the information gathered by a being can be analysed, used to judge other information, and used to promote actions. But there remains the nagging fact that vestiges of that evolution persist (compartmentalization, emotions and reflexes are all ways to circumvent the bottom-up approach of teleology), and I wonder whether there are fundamental flaws in our reasoning that we circumvent without being aware of it, because it was more favorable that we did not think that way.
