In my last two posts on Computational Theory, I first explained the Church-Turing Thesis which can be summarized as the idea that all (full-featured) computers are equivalent. I then went on to summarize some Computational Theory principles we can study and research once we assume that the Church-Turing Thesis is true. This research is primarily based around the limits of what a Turing Machine can do or how fast it can perform.

In this post I’m going to explore some of the philosophical ramifications of the Church-Turing Thesis, if it actually holds true. And so far (with one interesting exception) it has held true. Though in the end, I suspect many readers will feel they need to ultimately reject the Church-Turing Thesis. But even if it does ultimately prove false, the very fact that it holds true in every case we currently know how to devise still makes it a useful scientific principle, for now.

**Is the Church-Turing Thesis True?**

If you are ever the sceptic – and I certainly am – you probably want to know why we should even assume that the Church-Turing Thesis is true. It has, after all, never been proven true. So why should we simply assume it’s true — as Computational Theorists do?

To answer this question, I must refer you to my epistemology posts. This summary in particular will be helpful. Here we have our first interaction of epistemology and computational theory.

The basic principle is very simple. We do not need to prove the Church-Turing Thesis to be true because we can simply treat it like any other scientific theory or explanation. We can *conjecture* it is true and we can *test it through experiment*. We have done this (as described in this post) and it was not refuted. So we can now *tentatively accept it as true* and therefore *embrace it as if it’s true until a better scientific theory comes along.*

The idea that Computational Theory is really an empirical science will probably make you a bit dizzy for a while. That’s okay. This tends to happen when bad thinking habits fall away.

**The Church Principle vs. The Turing Principle**

Now that we are tentatively accepting the Church-Turing Thesis as wholly true (I know that sounds paradoxical, but as summarized in this post, it makes perfect sense) we can further discuss an important difference between how Church saw the Thesis vs. how Turing saw it.

Roger Penrose, in his book *The Emperor’s New Mind*, points out that Church and Turing did not agree on how far-reaching the Church-Turing Thesis is. Church was more circumspect about it. He claimed only that any sort of program (we call them ‘algorithms’) that can possibly exist can be run on a Turing Machine (or expressed in the equivalent Church notation). In other words, he stated that there are no *algorithms in existence* that are not Turing compatible. This is what Penrose calls “The Church Principle.” We might summarize it like this:

There is no such thing as a computation or mathematical algorithm that can’t be run on a Turing Machine.
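To make the Church Principle concrete, here is a minimal sketch of a Turing machine in Python. The transition table is my own toy example (nothing from Church or Turing): an algorithm is nothing more than a finite table of rules driving a read/write head over a tape.

```python
# A minimal Turing machine: a tape, a head, and a transition table.
# The table below is a toy example: it increments a unary number
# (a run of '1's) by scanning right and writing one more '1'.
def run_turing_machine(table, tape, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        # Grow the tape with blanks as needed (conceptually infinite).
        if head == len(tape):
            tape.append(blank)
        symbol = tape[head]
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# (state, symbol) -> (write, move, next_state)
increment = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, done
}

print(run_turing_machine(increment, "111"))  # unary 3 -> unary 4: "1111"
```

The Church Principle asserts that *any* algorithm whatsoever can, in principle, be captured by some such table of rules.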

Alan Turing, however, noticed something that Church apparently did not or was unwilling to add to his conjectured explanation. Turing noticed that just as we can assume the Church-Turing Thesis to be true based on observation and a failure to refute it, so can we go one step further and conjecture that *absolutely everything ever discovered by the physical sciences can be broken down into computations that are Turing compatible!*

**The Computational Nature of Reality**

In other words, Turing used the Church-Turing Thesis to assert the Computational Nature of Reality. This is a principle so taken for granted by us moderns (our ancient counterparts assumed otherwise) that we don’t even think about it anymore. We just assume that if we have a scientific theory (particularly a basic physical one) it will be described mathematically (i.e. computationally).

No one ever *expects* scientists to one day announce “we’ve found this strange phenomenon that apparently can’t be described mathematically!” If that ever did happen it would blow our minds out the back of our heads. We just take for granted that reality can be described using math (i.e. computation) and that science describes all its most basic theories using math. And we assume even most of its non-basic theories will be largely translated into math – including biology and now even psychology. In fact we often view the maturity of a scientific explanation as directly related to how much of it we’ve been able to describe via mathematical laws.

Think of the example of Darwin’s theory of natural selection. In *On the Origin of Species* Darwin didn’t use any math at all. So people consider natural selection to be a non-mathematical theory. But that isn’t entirely correct. First of all, Darwin was very specific about how he felt natural selection worked: precise and specific enough that, had computers existed at the time, it would have been possible to create a program out of the steps he described. Secondly, the discovery of DNA taught us that natural selection operates on something very much like the tape of a Turing machine! Our DNA is essentially a chemical Turing tape that can be copied and simulated by modern computers. So biological natural selection is in fact wholly Turing compatible.
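Darwin’s steps really can be written as a program. The sketch below is my own toy illustration (the population size, mutation rate, target, and fitness function are all made-up parameters): the variation/selection/inheritance loop as explicit, Turing-compatible code.

```python
import random

# A toy rendering of the natural-selection loop (my own construction,
# not Darwin's words): variation, selection, and inheritance applied
# to bit-string "genomes" scored against an arbitrary target.
def evolve(target, population_size=50, generations=200, seed=1):
    rng = random.Random(seed)
    length = len(target)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Inheritance with variation: copies with occasional mutation.
        children = [[bit ^ (rng.random() < 0.02) for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
print(best)
```

Nothing in the loop is anything but ordinary computation, which is the point: the process Darwin described in prose reduces to steps a Turing machine can run.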

**What if We Did Find a Natural Process that Exceeded Turing Machines?**

Now let’s explore this idea further via a thought experiment. What if some scientist did one day announce that she has discovered a natural process that can outperform a Turing Machine as per our previous discussion about the limits of Turing Machines?

In fact, this has already happened once in a very limited form. Quantum mechanics does in fact perform computations at speeds a Turing machine never can, and can even perform some very limited esoteric functions that a Turing machine can’t. But when such a case arises, it doesn’t somehow invalidate the Church-Turing Thesis per se; rather, it just expands it.

Specifically, the Church-Turing Thesis, once quantum mechanics is taken into consideration, becomes the Church-Turing-Deutsch Thesis. The reason such a discovery never truly invalidates the thesis is that for every new physical phenomenon you find that can outperform a Turing machine, you can then build a new type of computer using that physical phenomenon, and that new type of computer becomes the new highest sort of computational machine. In the case of quantum mechanics, we simply invented the concept of the quantum computer.

The fact that invalidating a computational thesis really just expands it is precisely like any other scientific theory; you only invalidate the theory by replacing it with a new one.

This also places hard limits on what we can expect in nature. For example, one commenter suggested that a biological body can perform functions that an electronic computer can’t. As of yet, we have no evidence that this is the case. So far biological functions really are just Turing compatible. But if we do find biological functions that can do something an electronic computer can’t, we’ll just invent a new type of biological computer that utilizes the same chemical processes biology does. The end result will just be a new Turing-like Thesis to replace the old one.

**Non-Computational Phenomenon?**

One possible escape from the above dilemma is to propose that perhaps nature (let’s say biology) actually has some sort of non-computational phenomenon in it. In this case, when we discover the non-computational phenomenon, perhaps we can’t then take it and create a new type of computer out of it.

Now it’s difficult for me to conceive of this as a possibility, for the simple reason that I’m not clear how this fictional scientist could possibly know for certain that she had discovered a non-computational process. And being a scientist, wouldn’t there be an expectation that she’d write up a paper describing the laws by which this new non-computational process works? But to write such a paper describing the laws this process follows would require using math to describe the laws precisely — and you see where I’m going with this — all math can be run on a Turing machine.

But for the sake of this hypothetical argument let’s pretend that this scientist somehow discovered a non-computational phenomenon. The simple truth is that we are now not talking so much about a non-computational phenomenon as about a non-lawful one. So let’s make up a rule to suggest this connection:

Science studies repeatable laws, which by definition can be described mathematically and therefore are computational by definition.

Given this rule, there is simply no way to describe the laws that this non-computational physical phenomenon follows, because the way we describe such laws scientifically *is mathematically.* So if we find such a non-computational phenomenon, we have in fact found something inscrutable to science altogether!

Now maybe many people would find this exciting. But I’d hope you’d also see why this would be a rather scary discovery. If there is even one mathematically/scientifically inscrutable aspect of reality, then this implies that science only works by chance. The universe is not lawful, and insofar as we’ve discovered lawfulness so far, it is only because we happened to explore a part of reality that appears lawful. But this lawful part of reality must, of course, exist as part of an overall non-lawful reality. This may excite you personally, but it also means that reality is scarily incomprehensible to us.

In short, so long as every newly discovered scientific process is in fact logically and lawfully describable, and therefore comprehensible, *we are absolutely guaranteed that nothing in physical reality will ever be non-computational!*

**The Turing Principle Defined At Last**

Now if you’ve followed my line of logic and the thought experiments, you have probably already derived for yourselves what the Turing Principle is. We might write it something like this:

All math we currently know plus all we will ever know can be programmed on a physical computer of some sort. Since all of nature can be described via math, that means we will never come across natural phenomenon that can’t be simulated on a computer.

However you might feel about the above Turing Principle, there are some very cool ramifications that follow logically from it. Here are a few:

- All of physical reality can be described via computation and we understand all possible computations.
- Therefore we know we can comprehend all of reality. If it exists, we can understand it and comprehend it.
- There are no limits placed on human knowledge.

Though it’s beyond the scope of this post (I’ll do a follow-up post later on) there are also many obvious religious ramifications of the Turing Principle. Even ramifications about God Himself and our relationship to Him.

Now many will immediately see some possible negative ramifications for religion. Does this mean God’s knowledge is limited by the laws of computation? Does this mean there is no free will because the mind is purely computational?

But bear in mind that questions like this are only scary if we both assume the Turing Principle to be true and assume that no computational machine above a Turing machine will ever be discovered. Since we’ve already discovered one (very limited) exception, I’d caution people against jumping for their knives right off the bat in what may or may not ultimately prove an impossible fight for religion, comparable to evolution. (To be honest, it is not comparable to evolution. The Turing Principle is on much stronger ground scientifically than evolution, since all science that succeeds, including evolution itself, is also a confirmation of the Turing Principle.)

Further, not all the ramifications of the Turing Principle are negative for religion, particularly for Mormonism. For example, the Turing Principle must assert that there is no unbridgeable divide between God and Humankind. And it directly backs up Joseph Smith’s claim that there is no such thing as immaterial matter. In fact, given Joseph Smith’s doctrine on that point, the Turing Principle is presumably compatible with all of Mormonism with the possible exception of the doctrine of Intelligences, as explained in D&C 93:29-31. But even here I’d suggest caution for two reasons. First, because it is not yet clear if any of our speculations about Intelligences are correct. Second, because even if Intelligences do end up being some non-computational thing, so long as they are physical — and Joseph Smith insists they are — they could also be incorporated into some future Super-Turing machine.

So I ask readers who want to fight the Turing Principle on religious grounds to instead accept it as a current best scientific assumption, but with the jury still out on whether or not it will ultimately prove correct once we factor God into the equation. (Pun intended.)

**What If I Don’t Accept the Turing Principle?**

Okay, what if you don’t like some of the ramifications of the Turing principle? Some people (including Roger Penrose) can’t accept it as true in its current Turing machine form. However, we need to keep epistemology in mind. While you are personally under no obligation to accept the Turing Principle, remember that the reason why Alan Turing conjectured it as an explanation about reality in the first place was precisely because it explained why…

a) All full-featured computational machines are equivalent

b) All of reality (so far) can be described via computation

And remember that those two points are really just two ways of looking at a single point. The *explanation* for why all of reality that we have ever discovered can be described via computations (i.e. math) is *because* all full-featured computational machines are equivalent – including any sort of computation nature does.

So you are indeed free to reject the Turing Principle if you wish for the very same reason you are free to reject any scientific theory if you wish. But keep in mind that this is our current best explanation. When you reject the Turing Principle you are not rejecting it by refuting it with a better explanation, you are merely being a Rejectionist and refusing to accept the best explanation but offering no alternative.

Perhaps there is nothing wrong with you doing this. Truth be told, if there were not at least some scientists out there rejecting the prevailing theories, we’d never generate the conjectures necessary to make progress. But keep in mind the difference between rejecting a theory because an alternative has been offered and rejecting it without any such alternative.

**Conclusion: What Would a Non-Computational World Look Like?**

In conclusion I want to briefly consider what a non-computational reality would be like. Because we are modern people and because we take the existence of scientific explanations for granted now, it’s probably a bit hard for us to imagine what a non-computational reality would be like.

But our ancestors lived in such a world. To the ancient pagans, the weather was not a computational process that could be fully explained; it was a matter of the agency of capricious gods. What if our ancestors had turned out to be right? That is what a non-computational reality would have looked like.

Imagine such a world for a moment. All things that move or happen do so due to capricious choices of gods rather than due to comprehensible laws. It’s impossible to ever really know why the gods do what they do. Reality is simply incomprehensible. You can make a sacrifice to the gods, perhaps, but there is no guarantee it will make any sort of difference. The gods will do what the gods decide to do. It’s an unlawful reality where nothing really makes sense and there is no reason to believe that it should make sense.

To some people — I think maybe myself included — this does not sound like a very appealing world to live in. I think it is understandable that many would out of hand, and perhaps even with religious fervor, reject a non-scientific world view precisely because such a view comes across to them like the above world.


Tell me if I’m missing something.

We can describe (mathematically/logically) everything observable; therefore everything observable is describable mathematically.

If there is something that COULDN’T be described (mathematically/logically), we can’t truly know that it couldn’t be described. We assume that the lack is in us, not in the capacity for description.

A law that can’t be described doesn’t exist, the temporary appearance of such a law is only that we don’t have the language to describe it yet.

I think that the hangup for me is you’re taking something descriptive and turning it around to make it seem prescriptive. If I’m understanding that correctly, the whole thought experiment seems kind of pointless. It’s like saying we can name everything because if there’s anything we can’t name, we can still make up a new name. Ergo, everything can be named.

To me, that doesn’t mean that math is the only language, only that it’s a sufficient one because we’ve built it to be infinitely expandable (like a linguistic language vs. a more rigid programming language.)

Not to get all meta, but the mere fact that we have the capacity to expand math infinitely suggests to me that the true flexibility, the true capacity to understand the universe, is in us, not in the math. The math is simply the decoding algorithm we’ve created to communicate our understanding. Maybe you’ve completely lost me; I’m not used to stretching my comprehension in this particular direction. *L*

Also, it would mean that there is nothing truly random.

Haven’t computers failed to mimic genuine randomness? And is there not evidence for randomness in life? Or is the posit simply that our perceived randomness is an illusion generated by a limited capacity to circumscribe it?

I’m kind of thrown how the Turing Thesis failed, so they changed the Turing Thesis so it could never fail. Doesn’t that make it a different Thesis?

I’m still researching quantum computing, since what little I’ve read of it makes it seem more of a pipe dream. A lot of money is being poured into it, but I don’t see it being able to work, since you’re trying to work with states that are by definition unobservable. Anyway.

I don’t think your example of non-computational works with the possibility of Gods. I think it would still be able to be explained and placed in mathematical functions, but like the shift with Turing, the scope would have to be greatly expanded. If you could quantify what each weather god would do when, you could predict their actions before they happened. It’s just the amount of our understanding that is insufficient, kinda like it’s insufficient now to explain all the aspects of weather. We can predict about 24 hours ahead, but the further out we go, the poorer our predictions will be.

Silver Rain,

The thing you are missing on the first question is that the Turing principle states that math can always be broken down to a small set of basic principles (i.e. we can make it run on a Turing machine or equivalent that has a very limited number of options available). I hope you can see the ramifications here, but let me try to explain a bit further.

You refer to ‘stretching math’ and liken it to making up words (names for things). But math and words are not similar. We might try to build some sort of analogy, like saying words are built out of small building blocks (sounds/letters) just like math. But again, the analogy is bad. The sounds or letters that make up a word are not related to the meaning of the word. But the building blocks of math are directly related to the math or computations we build out of them. Words are not reducible to sounds or letters when it comes to meaning, whereas math is. That’s the difference.
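To illustrate what being reducible to building blocks means here, consider the standard Peano-style construction, sketched below in Python: addition, multiplication, and exponentiation all bottom out in a single primitive operation, the successor function.

```python
# Peano-style reduction (a standard construction, only sketched here):
# every natural-number operation below is built from one primitive,
# the successor function, plus recursion.
def succ(n):
    return n + 1  # the one primitive: "the next number"

def add(a, b):
    # a + b is "apply successor to a, b times"
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):
    # a * b is "add a to itself, b times"
    return 0 if b == 0 else add(a, mul(a, b - 1))

def power(a, b):
    # a ** b is "multiply a by itself, b times"
    return 1 if b == 0 else mul(a, power(a, b - 1))

print(power(2, 8))  # 256, computed entirely via successor steps
```

Unlike a word, whose letters carry none of its meaning, every operation here inherits its meaning directly from the primitive it is built on.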

The ramification of this, if true, is that no matter how complex our math is in the future, it will always be meaningful to *anyone*. The idea of some minds being able to grasp certain subjects while others can’t isn’t really correct under the Turing Principle (again, if it’s actually true). Certainly some minds have natural advantages. But any normal adult can, at least in theory, understand any subject if they take the time to learn it. Presumably interest determines whether they take the time or not.

The end result of this is that everything is *comprehensible* by anyone, at least in principle, if they take the time to understand and learn it. This to me seems like a pretty cool thing, if true. It doesn’t sound at all negative to me. I am not entirely sure why people see this as a negative. I would assume it’s because they worry this means “humans are computers” and they dislike the idea of an electronic computer being similar to us. But this is a misunderstanding too. It’s like saying “human bodies are made up of chemicals that do chemical processes” and then getting upset because reducing our bodies to chemicals somehow destroys the beauty of human bodies. Only if you let it.

I suppose the other issue is that this might imply “human minds are computers” and therefore “we have no free will.” But at least for this post, I make no such assumptions and give several different ways to think of “computers” that might, say, include ‘intelligences’ and are therefore unlike modern electronic computers. The hard part here is that for some reason it is VERY difficult for people to break the hard connection they have in their minds between an ‘electronic computer’ and ‘any type of computer.’ But the word ‘computer’ has existed a lot longer than electronic computers and really doesn’t imply electronic computers at all.

I think the hard thing here is the idea of an expanding Turing Principle. The Turing Principle, as it stands in science today, *would* imply that the human mind does nothing more special than an electronic computer, or at least a quantum computer (if the brain uses quantum processes.) But that is just the theory as of today. Who knows what the future holds. Some future Turing Principle might refer to computers of a very different nature. Yet the basic ideas of the principle could still hold, namely that we can always invent a new type of computer that handles whatever new discoveries in computation show in nature.

As for your question about randomness: I thought about explaining this, but felt it would just muddy the point in a post of this size. Essentially, you are correct for a pure Turing machine: it is not random. But there is also such a thing as a Turing machine with a random generator attached to it. This, of course, is a computer that can produce true randomness. (And these exist in real life, say in casinos.) There is then a debate over what, if anything, a Turing machine plus random generator can do that a regular Turing machine cannot do using pseudo-random numbers (which is what normal computers use), which are, to an average human being, for all intents and purposes the same as random numbers.

For the purposes of this post, assume that any time I use the term “Turing Machine” I am actually referring to a “Turing machine plus random generator.” Therefore I’m assuming we’re dealing with a computer that can handle true randomness.
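To illustrate the difference, here is a small Python sketch (the seeds and ranges are arbitrary choices of mine): a pseudo-random generator is fully deterministic given its seed, while the “random generator attachment” corresponds to drawing from an external entropy source instead.

```python
import random
import os

# Pseudo-random numbers are fully deterministic: the same seed always
# replays the same "random" sequence, because a Turing machine on its
# own has no source of genuine unpredictability.
gen1 = random.Random(42)
gen2 = random.Random(42)
seq1 = [gen1.randint(0, 9) for _ in range(5)]
seq2 = [gen2.randint(0, 9) for _ in range(5)]
print(seq1 == seq2)  # True: identical sequences from identical seeds

# A "Turing machine plus random generator" corresponds to drawing from
# an external entropy source, e.g. the operating system's:
print(os.urandom(4).hex())  # 4 unpredictable bytes, different each run
```

For most everyday purposes the two are indistinguishable, which is exactly the debate mentioned above.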

Frank, this is no different than Newton’s theory giving way to Einstein’s. Yes, that makes it a new theory. Yes, that makes it a new, expanded thesis. The Church-Turing Thesis gave way to the Church-Turing-Deutsch Thesis. But the underlying idea of everything still being comprehensible and mathematically based holds under both theses, and it is difficult to see (and this is my main point near the end) how one could actually invalidate such a thesis in such a way that math would no longer apply to it. And since math is always reducible to a very primitive set of actions, that means that even as the Turing Principle expands, we should expect it to do so in a way that continues to be comprehensible to us.

The real exception to this would be something like ‘intelligence’ computers, as I suggest. If there is some hard, atomic, incomprehensible thing out there, then we could still build a computer around it, but that one hard atomic incomprehensible thing would itself be inscrutable to science or math. Can such a thing exist? You suggest maybe it can’t (as per your example with the gods). In my post, I am staying open on this possibility.

I have researched quantum computation extensively. I bought a textbook and went through it all the way, making sure I understood it mathematically. It is NOT a pipe dream. We will almost assuredly eventually figure out how to engineer one. The issue right now is really the amount of time it takes: the degradation of quantum states happens too fast. Perform the computation fast enough and you can get the result out of the quantum state before it degrades.

And, yes, you can’t actually observe it directly. But this is where the total coolness comes in. Essentially quantum computation is a series of ‘tricks’ where you figure out how to queue up the computation such that when you finally observe it and the states collapse, they give you a hint as to what the possible answer was. You then re-run the computation a few times, gathering hints. Finally you use a regular computer to finish the computation at a speed unimaginable for modern computers. It’s like having a sort of ‘oracle machine’ attached to a normal computer that just *somehow* knows the right answer and keeps offering hints as to how to find it.

One more thing on quantum computers. They are *extremely* limited in application as of today. That might change as researchers find new algorithms to use on them (once we know how to engineer one). But as of today, the *only* exponential speed-up we know of is finding the prime factors that make up a security key, thereby undermining all current crypto-key based schemes.

There is also a cute little algorithm that allows for a quadratic speed-up of *any* brute-force search, which is pretty cool. But that is just not enough of a speed-up to get super excited over.
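For the curious, the quadratic speed-up referred to here is Grover’s search algorithm. The back-of-envelope sketch below (my own illustration, with rough textbook query counts) just compares how many “tries” each approach needs: about N/2 expected guesses for classical brute force versus roughly (pi/4)·sqrt(N) oracle queries for Grover.

```python
import math

# Rough query counts for searching an unstructured space of 2**n_bits
# items. These are back-of-envelope figures only, ignoring constants
# and error-correction overhead.
def classical_queries(n_bits):
    return 2 ** n_bits // 2            # expected brute-force tries

def grover_queries(n_bits):
    return math.ceil((math.pi / 4) * math.sqrt(2 ** n_bits))

for n in (10, 20, 40):
    print(n, classical_queries(n), grover_queries(n))
```

A quadratic saving is real but modest: a search that takes 2^40 classical steps still takes around 2^20 Grover steps, which is why it is not the game-changer an exponential speed-up would be.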

In the end, the true break through would be if someone could figure out how to use a quantum computer to do an exponential speed up of *any* algorithm. Now that *would* be a game changer. Quantum computers would instantly become nearly unlimited in scope and far more interesting and useful than they are as of today. You’d probably see billions of dollars poured into developing one at that point because of the incredible value of such a computational speed up. But as of today, we don’t know any algorithm that can do this and aren’t even sure if such an algorithm exists at all.

Silver Rain,

Let me try one other way to explain the difference between what you are saying and the actual principle being explained.

Yes, we can make up all sorts of new types of math. But we DO NOT need to go build a new type of computer for each new type of math. It should be obvious then that this means that all new types of math we create are actually reducible to whatever basic set of instructions are built into a computer as of today.

Though this set of basic instructions differs from computer to computer, the whole point of the Turing Principle is that once you get past a certain level of complexity in the types of instructions your computer can execute, all computers become functionally equivalent in terms of power and ability to run programs.

Even in the case of the quantum computer, really anything you can do on a quantum computer you can also do on a regular old Turing machine. The only real difference is the speed. Quantum computers exceed the speed limits of computation implied by the Church-Turing Thesis by performing some functions faster than a Turing machine ever could.

There is some confusion here, and you have to have read my previous posts to make sense of this. People tend to think of computers going ‘faster’ in terms of their processor speed. But this isn’t what I’m talking about. Yes, you can speed up computers in terms of processor speed from one generation to the next. Eventually we’ll hit a max speed and that increase will stop. But there are certain types of programs (for example, cracking crypto-key encryption) that are so computationally intensive that no matter how much you speed up your computer, they will still not be able to return a result within a period of time less than billions or trillions of years. But a quantum computer (which might actually be quite a bit slower than a regular computer in terms of processor speed) has the means to run an algorithm that simply requires fewer steps to perform the same computation. So it can crack crypto-key encryption in a meaningfully short period of time despite actually being a slower type of computer in terms of processing speed.

So the real difference between a classic Turing machine and a quantum computer isn’t that it can do things the Turing machine can’t, but that it has ways to compute the result faster. In fact, I plan to write a quantum computer simulator program on my computer to play around with what a quantum computer is like. This can be done precisely because Turing machines can run anything, including a quantum computer simulation. But because of the physical difference between quantum computers and classical computers, I would be limited in the number of qubits I can simulate before I run out of memory on a classical computer. In fact, my plan is to simulate only 8 qubits, because frankly I worry that beyond that I’ll bog the simulation down too much, and 8 qubits still ought to allow me to play around with what a quantum computer would really be like.
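Here is roughly what the core of such a simulator looks like (a minimal sketch of my own, not the actual program being planned): the state of n qubits is a vector of 2^n amplitudes, and a gate like the Hadamard just mixes pairs of those amplitudes. The exponential size of that vector is exactly the memory limit described above.

```python
import math

def fresh_state(n_qubits):
    # 2**n_qubits amplitudes: this exponential growth is the memory
    # wall that limits how many qubits a classical simulator handles.
    state = [0.0] * (2 ** n_qubits)
    state[0] = 1.0  # all qubits start in |00...0>
    return state

def apply_hadamard(state, qubit):
    # Apply the Hadamard gate to one qubit: mix each pair of
    # amplitudes whose indices differ only in that qubit's bit.
    new = state[:]
    bit = 1 << qubit
    s = 1 / math.sqrt(2)
    for i in range(len(state)):
        if not i & bit:
            a, b = state[i], state[i | bit]
            new[i] = s * (a + b)
            new[i | bit] = s * (a - b)
    return new

n = 8  # the 8 qubits mentioned above: a 256-entry state vector
state = fresh_state(n)
for q in range(n):
    state = apply_hadamard(state, q)
# Hadamards on every qubit give a uniform superposition of all 256
# basis states, each with probability 1/256.
print(len(state), round(state[0] ** 2, 6))  # 256 0.003906
```

At 8 qubits the vector has only 256 entries; at 40 qubits it would already need about a trillion, which is why simulation on a Turing machine is possible in principle but quickly impractical.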

“We can always invent a new type of computer that handles whatever new discoveries in computation show in nature.”

I think that’s part of my point. If *WE* can always invent a new computer that handles all new discoveries, we are still limited by ourselves. In essence, since we are a computer ourselves (and I don’t have the problems you outline regarding that) we would either 1) have to have been invented by something with a greater computational capacity, ad infinitum or 2) at our ultimate natures, we are not computers. I.e. there is something in us that is not inherently computation that has the capacity to operate on other than mathematical means.

This is because math is descriptive. It is basically a translation of laws, but is not the substance of law. This is where it starts to get fuzzy for me. Being able to break something down to a small set of principles, to me, only means that everything that happens is describable. But if those principles are universal only in a limited system, and there are other systems out there, they are not truly unlimited. They are only unlimited in a limited sense (which is borne out by observed behavior from macro- to microcosms.)

Basically this: either there are things out there that are comprehensible on principles other than logic and math, or there are not. To claim the latter is, to me, a bit short-sighted. As you pointed out, logic and math are a fairly recent foundation on which to build comprehension of the world. They are infinitely useful to describe a certain set of realities. But there are realities that exist outside of a logical, mathematical comprehension. As we have discussed before, I have experienced some of them. In other words, you could try to say that such leaps of intuition or synergy (for lack of a better description) are simply operating on subconscious higher principles of math, or there is something else going on. Some other set of principles on which to base an understanding of reality that can access other truths.

My personal opinion is that there are (at least and probably limited to) two frameworks to reach truth, an animus and anima if you will. The animus of math and logic can wholly frame reality as it is capable of perceiving it and likewise with chaotic anima. But it is when you master both that creation and destruction balance, and you truly comprehend.

Just as you can see the whole world with only one eye, when you have two you gain a previously unknowable perspective. Yet a third eye adds no similar dimension.

From what I understand, there are no computational random generators that can produce true randomness, only an approximation of randomness. I fully admit I may be wrong on that, though.

Your second stab at explanation seems to reflect fractality. Basically, you’re saying there is a baseline. But what principles describe the basic principles? Does 1+1=2 because we have defined it to be so? Or is there something essentially reflective in that which describes reality? What is the underlying fractal equation? The entirety of the fractal can be described by the math, but not truly encompassed by it.

In a way of thinking, the Turing principle, etc. is based on is/is-not: Boolean principles. Everything more complex is based on Boolean descriptions. But what makes something 0 vs. 1? Boolean logic describes what happens, but it doesn’t cause it. That question, what causes things to operate the way they do, is what I’m trying to get to. Not to deny what you’re saying, but to add that there’s more.

And I’m probably totally skewing your point in the process, but this is some pretty heavy philosophical stuff. *L* I don’t get to exercise that very often any more. I’m not really up to par on it.

And, for the record, I have read your previous posts, but it’s been a while.

SR,

You do not seem to be arguing the basic point of comprehensibility implied by the Turing principle. This would seem to be wholly implied in your idea of math being descriptive. So as far as this post is concerned, I think you get the point and I think you agree.

You are, indeed, taking this beyond any point I’ve made. But there is nothing wrong with that. Well, there is one thing wrong with it. It’s a complex subject that has no easy answer. But I think this is your point.

Let’s take a few things that I can easily clarify:

“From what I understand, there are no computational random generators that can produce true randomness, only an approximation of randomness. I fully admit I may be wrong on that, though.”

This is a definitional problem, not a real problem. Some people would, by definition, consider randomness to not be computational. Given that definition, it is trivially true that there are no computational random generators. This is almost assuredly what you’ve heard and why you keep wondering about this.

This seems like an unnecessary hard line to me. In casinos they use things like white noise from heat to generate truly random numbers. (There is a philosophical question here as to whether truly random numbers exist at all, or whether we simply don’t have all the information. You hint at this. For now, let’s assume there are truly random things out there.) You can then hook a computer up to this truly random process and voilà! You now have a computer that can handle truly random numbers. This is what they do in real life. Whether you want to call this “a computer that has access to truly random numbers” or “a computer that can handle truly random numbers” is a completely silly and arbitrary distinction unless I’m a researcher specifically trying to decide what a Turing machine with or without a random number generator can do. So I am taking the line that we can assume computers can handle random numbers if we want them to.
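A minimal Python sketch of this distinction (the operating system’s entropy pool stands in for the casino’s thermal noise; on most systems it mixes in physical noise sources such as interrupt timing or a hardware RNG):

```python
import os
import random

# A seeded pseudo-random generator is a deterministic algorithm:
# the same seed always reproduces the same "random" sequence.
prng_a = random.Random(42)
prng_b = random.Random(42)
seq_a = [prng_a.random() for _ in range(5)]
seq_b = [prng_b.random() for _ in range(5)]
print(seq_a == seq_b)  # True: pseudo-randomness is pure computation

# os.urandom reads from the OS entropy pool instead. The computer
# isn't computing this randomness, it is *reading* it -- just like
# the casino computer hooked up to thermal white noise.
physical = os.urandom(16)
print(len(physical))
```

The point of the sketch is only the asymmetry: the first source can be replayed, the second cannot, yet both are "numbers a computer can handle."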

“But what principles describe the basic principles?”

Wow! Fascinating question. I’ve given some thought to this and, surprisingly, there is an answer to this question. It’s actually implied in something I said in a previous comment. There essentially *aren’t* basic principles of math, per se. For example, all computation can be reduced to NOT-AND (NAND) operations, so NAND is typically offered up as the most basic type of computation. Yet it should be obvious that NAND can itself be broken into a NOT and an AND. So there are no ‘atomic particles of computation,’ so to speak. Everything is reducible to something else as far as you want to go.
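The NAND reduction is easy to see in code; a quick sketch:

```python
def nand(a: bool, b: bool) -> bool:
    """The 'universal' gate: false only when both inputs are true."""
    return not (a and b)

# Every other Boolean operation can be assembled from NAND alone...
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

# ...yet NAND itself decomposes right back into a NOT and an AND
# (see its definition above), so there is no bottom-most "atom"
# of computation. Verify against the ordinary truth tables:
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (not a)
print("all truth tables match")
```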

“Does 1+1=2 because we have defined it to be so? Or is there something essentially reflective in that which describes reality?”

Ah, now we’re getting to the heart of the matter. Computational theory, at its heart, implies that the laws of physics do in fact dictate what can be computed. So there is a hard tie between the laws of physics and computation. Indeed, one might even think of them as different views of the same thing. Change the laws of physics and you change the laws of computation. So the answer to your question should therefore be that 1+1 = 2 because this reflects something in reality.

Where I am struggling with what you are saying, SR, is that you keep making a hard division between descriptive and being encompassed. Specifically you say “The entirety of the fractal can be described by the math, but not truly encompassed by it.”

But honestly, I don’t see why this is the case. And your anima/animus didn’t help me understand why you are making this division. It seems like trying to make a hard division between reading and writing to me. Obviously math can describe anything (that we currently know about), and obviously the reason why it can is because the universe (that we currently know about) only does things that are algorithmic. There are (so far as we know) no physical processes that aren’t in fact also algorithms. So I just honestly can’t see the division you are making here as meaningful yet. The concept is bi-directional, not uni-directional as far as I can see. Each implies constraints on the other.

It seems to me that the real question is what you say here:

“But if those principles are universal only in a limited system, and there are other systems out there, they are not truly unlimited. They are only unlimited in a limited sense” (I didn’t understand your statement about “which is borne out by observed behavior from macro- to microcosms” and I am unclear in what sense that could be true, but the rest I understand.)

I think this is the most interesting point. Are there, in fact, “other systems out there”? Does, for example, God comprehend things using the same math we use based on the same reducible principles? Or is there a sort of “God math” out there that is some wholly different system that simply has no corollary to anything in our reality and is thus utterly incomprehensible to us? Wish I knew the answer to this question. I try to hold my mind open to both possibilities.

But what I can say at this point is this: our science is wholly built on the assumption that math can literally describe everything, period. What other assumption could we make at this time?

This is then the real point. “Basically this: either there are things out there that are comprehensible on principles other than logic and math, or there are not. To claim the latter is, to me, a bit short-sighted.”

Is it short-sighted? Maybe in one sense, yes. You should always leave your mind open to new possibilities. But that’s where epistemology comes in. We hold to our best explanations tentatively but also as wholly true, and then drive them to their logical conclusions.

The reason we do this is because until a new explanation exists, we can’t really draw any conclusions about it.

I read a book by some brain scientists who wanted to convince the scientific community that it should take dualism seriously. But at no point did they suggest how this could actually be done. They spent quite a bit of time pointing to what they saw as issues with materialistic assumptions, or to possible scenarios that in their opinion disproved materialism (such as situations suggestive of life after death).

But despite their insistence that science take dualism seriously, they couldn’t come up with even a single example of how it would.

This is where I think the Turing principle basically proves Joseph Smith correct. For the sake of argument, let’s say that some day we understand the spiritual world completely. Just like our world, we can use math to describe the laws by which it operates. Further, we can explain (again, using math to describe laws) how the spirit world interacts with ours (i.e. how does a spirit get put into a body? How does it communicate with the body’s brain? What is the division of labor between the two?)

Obviously, for a brain and a spirit to communicate at all, there must be some sort of describable force by which the spirit can in fact interact with our physical world and vice versa.

But now we come to the true issue. If all this is the case, how is the spirit world not in fact really just part of our physical world? How are the laws of the physical world not in fact really just part of the laws of physics of our world that we just didn’t yet know about? In short, how is dualism in fact really not dualism but just an arbitrary divide between two parts of the physical world? So Joseph Smith is correct. If spirits exist they are in some sense physical. Period.

Yet, until we have a theory of physics that encompasses these additional forces and laws, just exactly how would science even begin to interact with the concept of the spirit world? Obviously it can’t! Therefore the spirit world IS outside of science as of today, though that might change in the future. But the way it will change in the future will be by having physics absorb it into a single comprehensive theory.

If you followed that logic — then YES, it is a good assumption for science to assume there is nothing out there comprehensible only on principles other than logic and math. For it is the only assumption that can be made at this time. And, in fact, if there were things out there not comprehensible on principles of logic and math, then how would we ever merge them into our science at all? It would be impossible. Therefore, science MUST always make this assumption. For it has no other choice.

Yes, exactly. I’m not arguing against anything. I agree, as far as it goes. I should have said that. *L*

“Everything is reducible to something else as far as you want to go.”

Yes, that’s the fractality I was referring to. Not only that everything can be broken down infinitely, but it can be infinitely expanded, and that they are similar in all iterations. (Which has interesting theological implications, especially regarding LDS theology.)

“It seems like trying to make a hard division between reading and writing to me.”

Kind of, yes. Because reading and writing both use the same tool, but they do not amount to the same information. In other words, even beyond the imprecision of language, there is a breakdown between what is written and what is read. (And a similar breakdown between what is comprehended and what is written.) To me, math is a tool similar (though more precise) to language. It may eventually be adaptable to infinite precision, but it is still explaining something that ultimately needs no explanation because it *is*.

I’m making the distinction because I’m essentially saying that math is a brilliant descriptor of reality, but it doesn’t have to be the ONLY descriptor. In other words, leaving our minds open to other methods of describing and approaching reality in no way diminishes the usefulness of mathematics. It enhances it. In other words, in this modern limited world, math may be the best transferable access to describing reality. But when we mistake that description FOR the reality, we close ourselves off to other ways to describe it. I’m a big proponent of mastering as many tools in the box as possible. In fact, other tools may give us the ability to properly adapt math and logic to the world.

Of course, I can’t offer those other tools as better alternatives for two reasons. First, they AREN’T “better,” only different. It would be like saying a screwdriver is a better tool than a hammer. Secondly, I can’t logically communicate them any more than I can drive a screw with a hammer. They can only be communicated by their own parameters.

To use the best possible mutual example (which I’ve been avoiding,) it is like the Spirit. Can the influence of the Spirit be described mathematically? Or can it only be described spiritually? Can spiritual things be communicated mathematically? (I’d say yes!) But there is obviously another way to access knowledge that is NOT circumscribed by math or logic. Take anything ephemeral. There are chemical processes that are associated with love and hate, for example, but are those processes the love and hate, or are they merely the physical manifestations of something uncircumscribed by logic? I, of course, believe the latter.

There may be no physical processes that are independent of algorithms, as far as we perceive physicality, but there is ample evidence of forces at work beyond physicality. We know there is a limit to our ability to describe those forces. My point is that even the division between what is physical and what is not is fuzzier than we’d like to think. I believe there is synergy. Not that there is a higher “God math,” but that God has the full toolbox (whatever that entails beyond our current comprehension.) We are here, perhaps, to explore one or two tools (say math as the hammer and “intuition” as the screwdriver) and really get to know them without being distracted with the others. But I do believe there are others.

Like what you’re saying at the end. I think this reflects back on conversations we’ve had before. You are very logical and reasoned. (Which is a good thing.) So naturally, you gravitate to and know better how to use logic and reason. But that doesn’t mean that is the only tool. I find value in recognizing that there are things beyond logic and reason so they do not become Gods in and of themselves.

We see far too much of that, these days.

Good response, SR.

I suppose one thing I should admit is that I believe that communication from the Spirit is in fact some sort of ‘physical’ process. And therefore I believe intuition to be so as well. Now it might be a physical process we currently know nothing about (in fact it pretty much has to be.) But like any physical process, it may well be describable by math and therefore functions under some set of computational laws itself.

In the contrast of logic and reason (my strength) and intuition, I’d say that that division makes sense in so far as we experience life. Intuition is NOT the same as logic and reason to us. Yet, once you look under the hood, they should still happen as ‘physical processes’ that ARE describable down to logic. This does, in some sense, mean intuition has logic underlying its implementation. But it doesn’t really make logic and intuition one and the same. Perhaps this is even the point you are making.

The question of whether math is separate from reality takes us into some strange areas. Specifically into the question of simulation vs. reality. There is no clear division — possibly no division — between the two.

The thing that is hard to accept is that math and reality are one and the same, but this isn’t really implied in any way by the Turing principle. What is really implied is that computation and reality are one and the same. I can run the computation that is a waterfall either on a computer simulation of the waterfall, or on the waterfall itself. It is really the same computation either way, and these are really just two types of computer that run that computation.

It’s tempting to say that one is a simulation and one is real, but if you were to create that waterfall simulation inside a simulation of the entire universe, including people, it’s hard to see in what sense there is still a hard line between reality and simulation any more, at least from the point of view of the ‘simulated’ people, who are in fact real people living in an honest-to-gosh physical universe as far as they are concerned. To them, the waterfall is a real waterfall.

I admit I’ve long favored the ‘matrix’ view of reality. I love the idea that we’re all really a simulation running on God’s giant Urim and Thummim and therefore when we explore our reality with science, we are finding truth — but only truth about the simulation.

If we were to find out that God has us all living in ‘the matrix’ and in fact we’re programs in that matrix (using that vaunted God math that everyone keeps hearing about), it wouldn’t make us any less ‘real’ in any sense I can think of. It is therefore possible that simulation and reality aren’t different, and therefore that computation and reality aren’t either. The idea that the universe is a giant computer isn’t merely an analogy if the Turing principle is true. It is in fact a giant computer in some legitimate sense.

Ok, I probably shouldn’t even bring this up… but I think it’s kind of interesting and funny…

So there is a school of thought in philosophy that asks the whole simulation vs. reality question. The question it asks is “what breathes life into a computational universe?” So using the example from the previous post, in one reality — the top level — the waterfall exists as a computation being done by a mass of water molecules falling according to the laws of physics. The other is implemented inside a computer that is mimicking an entire universe as if it is doing its normal physical computation.

Why is it that we need some giant computer? And if so, what is this top level of reality implemented on?

So your answer was ‘it just is’ and that is fine. But they suggested another possibility, namely that the fundamental building blocks of reality are math itself. Under this proposal every mathematically consistent universe exists because math exists.

What will they think of next?

I’m skeptical about the idea that everything can be modeled as an algorithm. Specifically, I’m not sure you can model a person’s free will that way. And as far as I know science hasn’t had much luck in doing so either. We’ve got some decent biological models of parts of the brain, but creating an entire algorithm of the mind is far beyond us. Of course, this is an issue you covered above. Is this because we just haven’t studied the human brain enough, or because there is something genuinely non-Turing about it?

Personally I believe we live in a universe of 99.9999% computational matter with a relatively small number of unique non-deterministic agents. It fits my view of human free will and seems to mesh nicely with that second Nephi verse about how some things act and some things are acted upon.

But I can’t say I feel particularly strongly one way or the other. Like most of the interesting questions about reality, pinning down a hard answer is impossible. No matter how much of the brain is proven to be a Turing device, the opposition can always claim that the unique spark of consciousness and human agency is lurking just beyond, at a scale we can’t yet see, fiddling with the biochemical noise.

Anyways, an incredibly well written article on a rather difficult topic. I enjoyed it immensely.

JSG,

I certainly think the majority view of LDS doctrine is pretty much what you said, i.e. that “intelligences” are an exception and are non-computational.

I think the challenge there is: in what sense can something be non-computational? I.e. does it simply follow no laws at all? Does it behave in ways that can’t be described by math? Are they just atomic, with no way to reduce them physically or logically? It seems the answer to all of these must be ‘yes’ or in fact “intelligences” would actually be computational.

As an alternative, I think you could create a view that intelligences are in fact also computational and that free will is as well. On the surface this seems contradictory. If it’s just an algorithm, isn’t the outcome predictable? But in fact the vast majority of computations are entirely unpredictable. It’s impossible to simulate them and find out what the end result would be beforehand. This seems to leave at least a little bit of space for the idea that computation and free will are not mutually exclusive, and thus that “intelligences” can be computational as well.

I’m undecided, myself. I suppose at a minimum I do think that everything we discover about the brain will prove computational and that those that believe the brain isn’t computational will need to hide the non-computational part in disappearing islands of what we still don’t understand about the brain. Certainly, for the most part, brains compute things.

“But in fact the vast majority of computations are entirely unpredictable. It’s impossible to simulate them and find out what the end result would be beforehand.”

I’m not sure what you mean. If you’re referring to non-deterministic algorithms, that’s just a case of can-kicking, either to initial conditions (the fudginess of which should be obvious) or ultimately to some incarnation of the almighty RANDOM – all hail! (Indeed, I consider the scientific world to be idol-worshippers of the Random-shaped gap!) The reason it has to appeal to Random is because by definition randomness cannot be mathematically described. If it could, it wouldn’t be random. Thus, due to its supposed non-intelligence, it is the approved available mechanism for non-determinism. Never mind that it explains absolutely nothing…

So yeah, for me agency, and thus intelligences, must be non-computational.

“I think the challenge there is: in what sense can something be non-computational? I.e. does it simply follow no laws at all? Does it behave in ways that can’t be described by math? Are they just atomic, with no way to reduce them physically or logically? It seems the answer to all of these must be ‘yes’ or in fact “intelligences” would actually be computational.”

Certainly 2 is ‘Yes’. I expect 3 is also ‘Yes’ (intelligences are meant to be something fundamental after all) but we’re reaching the point where we have to question what physical/logical reduction really means in this context. As for 1, again I’m not certain we can really comprehend the question, but the way the question is presented, I would say ‘Yes’.

After all – If it’s computational, what’s intelligent about it?

I’m also of the opinion that brains are mostly just very efficient computers. But I don’t think it takes much free will to guide the direction of the brain machine. It’s like my computer: 99.9% of the work it does is deterministic algorithm crunching, but my 0.1% human input is enough to make the difference between the algorithm rendering a screen saver for twelve hours and the algorithm instead rendering this website.

In any case, I’m not sure that the impossibility of knowing what an algorithm will do without actually running it is a valid argument for free will. An un-run algorithm has no freedom to choose its outcome. It will always give the same answer (or get stuck in the same loop) for the same input when/if we finally get around to running it.

As for the nature of a non-computational intelligence… I suppose I would mathematically describe it as a unique process that produces non-random yet simultaneously non-deterministic output when provided with input. You might be able to statistically note that it tends toward certain behaviors, but its future behavior will always be uncertain.

I think that’s enough to break the Turing definition, since with this sort of black box we could build devices that couldn’t be mimicked. Two identical devices given identical input but attached to different non-computational intelligences will behave differently in both the short term and the long term. This is unlike a Turing machine with mere randomness, for while those may not synchronize in the short term, they will behave statistically similarly after a sample of billions of inputs and outputs.

In some ways this is similar to the un-run algorithm idea: There is no way to tell what a non-computational intelligence will do to a certain piece of input without actually giving it that input. The big difference here is that the non-computational intelligence can’t be simulated. Only the original will behave like itself. Each non-computational intelligence is unique.

As for how such a non-computational intelligence would function… I have no clue. All I can say is that my gut instinct and personal experience, along with the most obvious interpretation of my religion, tells me I have some degree of free will. Of course, I might just be computationally predetermined to believe that.

Just a thought…maybe it isn’t that they follow no rules, but that they can choose which rules they follow.

In other words, total chaos doesn’t exist, but the ability to choose which computational constructs one is framed by does, and that choice itself is not, and cannot be, computational.

Though the suggestion of omniscience as commonly understood would seem to indicate that you are right, and everything is ultimately computational.

While not having read the post very carefully, I will venture the following:

I frankly don’t care whether the computational theory can be universally generalized or not because universality does not entail exclusivity. Just because the computational theory can be applied to everything doesn’t mean that reality “is really just” computational stuff or that non-computational theories aren’t just as valid. All it would mean is that the computational theory can (as opposed to must!) be applied to everything. Big deal.

Again, I didn’t read the post too closely, so be gentle if my comment makes that all too obvious.

JSG,

I had plans to do a post about the possibility of “intelligences” being a “non-random yet simultaneously non-deterministic output” but you just said it so well, I’m not sure exactly what I could add now.

Gee! Thanks for stealing my thunder!

Jeff G,

Silver Rain has argued the very point you argued and did so very well even after reading the article in detail.

I do think the post suggests that there is more of a problem here than can be merely dismissed on a “can” vs. “must” basis without more of an argument to explain how that makes a difference. But I also think there are many that will ultimately agree with you.

Fraggle,

My comment that most algorithms are unpredictable probably does need a bit more explanation. Rudy Rucker’s “The Lifebox, the Seashell, and the Soul” is about unpredictable computations. I’d have to go break out that book and do a post on just that subject.

The basic idea is that most computations have a heavy chaos factor. Unless you can simulate them to an infinite level of precision (which violates the laws of physics), you simply can’t know what the outcome will be without running it. And because most have no shortcuts (unlike most of the things we do with our science), and because computation speeds are constrained by the laws of physics / laws of computation, there is in fact no way to ‘build a faster computer’ to determine the outcome in advance if you’re comparing something happening in the real world vs. something happening in a simulation. Despite years of SciFi that has left us convinced computers are faster than us and can simulate whole universes in virtual reality, the simple truth is that this violates the laws of physics as we currently understand them. We compute faster than any currently conceivable computer, not the other way around. And we do things we think of computers as being super fast at — such as search and retrieval of information — far more efficiently than any computer. (If you *really* want to find the best information on a subject, the quickest way to do it is to ask a human expert.)
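The chaos point can be made concrete with the logistic map, a standard toy example (the numbers here are illustrative, not taken from Rucker’s book): two starting states that agree to twelve decimal places end up bearing no resemblance to each other, so any finite-precision simulation eventually loses the real system’s trajectory.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; at r = 4.0 it is fully chaotic."""
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12       # two states identical to 12 decimal places
max_gap = abs(x - y)
for _ in range(100):          # iterate both trajectories in lockstep
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial difference is amplified roughly 2x per step, so the
# trajectories diverge by many orders of magnitude within a few dozen
# iterations -- there is no shortcut to predicting x without running it.
print(max_gap > 1e-3)
```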

This sort of reality check on what computation can do compared to what fiction says it can do isn’t something we usually do; we just sort of assume fiction has it right.

The net result is, simulations are almost always slower than reality because reality is already computing at maximum possible speed.

Having been involved in far too many algorithms, I’ve found that most practical computational algorithmic methods, when compared to reality, follow what we sonar folks call a receiver operating characteristic (ROC) curve. Apparently other folks use ROC curves as well: ROC curve: applications and principles in biology. A ROC curve shows the probability of a true alert compared to the probability of a false alert as one tunes the algorithm.

Reality is the gold standard. The fact that the processes involved in reality can be expressed mathematically, given sufficient data points and sub-process specific algorithms, isn’t so much a problem. But one can quickly develop a set of algorithms that would be completely incapable of computing the real-time result of real processes.

As we then simplify our algorithms to allow real-time computation, we encounter the phenomenon that sometimes the algorithm predicts a result that wasn’t the real result.
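To sketch what a single ROC point is (the scores below are an invented toy detector, not sonar data): each threshold setting trades the chance of catching a real contact against the chance of a false alert.

```python
# Hypothetical detector scores: higher = "more likely a real contact".
positives = [0.9, 0.8, 0.7, 0.55, 0.4]   # trials where signal was present
negatives = [0.6, 0.5, 0.35, 0.3, 0.1]   # noise-only trials

def roc_point(threshold: float) -> tuple[float, float]:
    """Return (P(false alert), P(true alert)) for one tuning of the detector."""
    tpr = sum(s >= threshold for s in positives) / len(positives)
    fpr = sum(s >= threshold for s in negatives) / len(negatives)
    return fpr, tpr

# Sweeping the threshold traces out the ROC curve: a looser threshold
# catches more real contacts but also raises the false-alert rate.
for t in (0.2, 0.45, 0.65, 0.85):
    fpr, tpr = roc_point(t)
    print(f"threshold={t:.2f}  P(false alert)={fpr:.1f}  P(true alert)={tpr:.1f}")
```

The overlap between the two score lists is what forces the trade-off: no threshold separates them perfectly, which is exactly the "algorithm predicts a result that wasn’t the real result" phenomenon.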

So as Bruce himself has indicated and others have asserted, the fact that algorithms can computationally describe reality doesn’t mean that reality is limited to the simplistic computations arm-chair pseudo-scientists might assert. This is particularly true when it comes to phenomena associated with faith.

I read all three of your posts on this topic with great interest. I think I was able to follow your reasoning and explanations fairly well.

You did a nice job demonstrating that “all of nature can be described via math,” and that “we will never come across natural phenomenon that can’t be simulated on a computer”, which I don’t see any reason to disagree with. Where you lose me is when you conclude that “[t]herefore we know we can comprehend all of reality” and that “[t]here are no limits placed on human knowledge.”

The latter follows from the former, only if “nature” or “physical reality” are the same as “all reality”. But as far as I can see you don’t demonstrate this equivalence, but merely assume it.

I also disagree with this conclusion: “If there is one mathematically / scientifically inscrutable aspect of reality, then this implies that science only works by chance.”

On the contrary, it’s possible that God is a scientifically inscrutable aspect of reality, yet if this is true, it doesn’t logically follow that science only works by chance. It may work not by chance, but because God has intentionally designed and created, and continually maintains, physical reality such that its internal phenomena are understandable via science, math, etc.

In terms of how scary reality would be if we reject the Turing Principle, I don’t think it makes much difference whether a hurricane is explainable by math or not. Whether a capricious god sends a hurricane or a hurricane happens as the end result of a chain of physical reactions, the hurricane is equally out of our control, and for all practical purposes appears equally capricious and frightening from the perspective of human experience.

But in fact, I would argue that even if I accept the Turing Principle (as stated by you), I can still believe in aspects of reality that are scientifically inscrutable. This is so because the statement “Since all of nature can be described via math, that means we will never come across natural phenomenon that can’t be simulated on a computer”, doesn’t logically imply the conclusion that there exists *nothing* that cannot be described via math, but only that all of *nature* can be so described.

“It would be like saying a screwdriver is a better tool than a hammer.”

Ok, I’ll say it. I’d bet on The Doctor over Thor any day

Dude! Thor would be toast!

Men!