- 100 Years Since Principia Mathematica By Stephen Wolfram. At blog.stephenwolfram.com
I learned of the book Principia Mathematica (PM) in about 1991, as a college student. At the time my math wasn't good enough for the prospect of reading it. By the early 2000s, i was ready to read the book. Always wanted to, but never did; in fact, i never saw a single page of it. Though, today, i don't think the book is worth a detailed reading even if you specialize in math logic. Rather, it's a required acquaintance.
Here's some explanation of the notation used.
- The Notation in Principia Mathematica By Bernard Linsky. At plato.stanford.edu
It seems that to read the book with some basic understanding would require at least a year full time (assuming you already understand the basics of logic and some history of the foundations of math).
- Principia Mathematica - Volume One By Alfred North Whitehead, Bertrand Russell. Buy at amazon
- Principia Mathematica - Volume Two Buy at amazon
- Principia Mathematica - Volume Three Buy at amazon
Wolfram said that Russell commented that only about 6 people have read the whole thing. Today, the number of people alive who have read it might be under a hundred.
Also, Wolfram said, quote:
In my own work, Mathematica shares with Principia Mathematica the goal of formalizing mathematics—but by building on the concept of computation, it takes a rather different approach, with a quite different outcome. And in A New Kind of Science, one of my objectives, also like Principia Mathematica, was to understand what lies beneath mathematics—though again my conclusion is quite different from Principia Mathematica.
That's typical bullshit from Wolfram. PM was intended to be a foundation of math. Mathematica is a computer algebra system, and was never any sort of math foundation, nor even a computer-based proof system. At best, one might call Mathematica the best math calculator that exists today, but the nature of a math calculator is very different from that of a proof system. Even if we consider Wolfram|Alpha to be 10 times better, it still can't do a bit of proof. There are proof systems, such as Coq, HOL Light, Isabelle, etc. (See: State of Theorem Proving Systems 2008)
But Wolfram later says:
Beyond these notational issues, there is a much more fundamental difference between the formalization of mathematics in Principia Mathematica and in Mathematica. For in Principia Mathematica the objective is to exhibit true theorems of mathematics, and to represent the processes involved in proving them. But in Mathematica, the objective is instead to compute: to take mathematical expressions, and evaluate them.
Well, so it is a calculator, not a proof system.
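To make the compute-vs-prove distinction concrete, here is a toy sketch of my own (not from Wolfram's post): a calculator evaluates an expression under one assignment, while establishing a theorem means it holds under every assignment. The brute-force tautology check below is only a crude stand-in for what proof systems like Coq or Isabelle do; they check a symbolic derivation, which also works over infinite domains.

```python
# A minimal sketch of computing vs proving, in propositional logic.
from itertools import product

def evaluate(expr, env):
    """Compute: given one assignment, return the value.
    This is what a calculator does - one evaluation, one answer."""
    return eval(expr, {}, env)  # expr uses Python's and/or/not

def is_tautology(expr, vars):
    """'Prove' by brute force: check the formula under EVERY assignment.
    Real proof systems instead verify a symbolic derivation."""
    return all(evaluate(expr, dict(zip(vars, vals)))
               for vals in product([False, True], repeat=len(vars)))

# Evaluation: a single computation.
print(evaluate("p and not q", {"p": True, "q": False}))  # True

# Theoremhood: law of excluded middle and De Morgan hold; "p or q" does not.
print(is_tautology("p or not p", ["p"]))                                    # True
print(is_tautology("(not (p and q)) == ((not p) or (not q))", ["p", "q"]))  # True
print(is_tautology("p or q", ["p", "q"]))                                   # False
```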
I think what's interesting is Wolfram's systematic exploration of all possible axiom systems. He went on to talk about that.
Wolfram says that the logic PM “chose” to build on is not basic, and compared that to his investigation of all simple computations in his A New Kind of Science (ANKS). I appreciate very much his systematic investigation of all simple computations, but i find it hard to accept the claim that logic is not basic. These 2 are different things, in different contexts. You can consider symbolic logic or formal math as a computational system, or vice versa. If you do consider logic as a computational system, then of course it's not the simplest. All cellular automata can be rephrased as logic, and logic can be rephrased as a particular cellular automaton. In either case, they become more complex.
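As a small illustration (my own, not from either book) of rephrasing a cellular automaton as logic: Rule 110's 8-entry lookup table is exactly the boolean formula (q OR r) AND NOT (p AND q AND r) over the neighborhood (p, q, r). The sketch below checks the equivalence exhaustively and runs one CA step using the formula.

```python
# Sketch: a cellular automaton rule rephrased as a logic formula.
from itertools import product

# Rule 110's lookup table, indexed by the (left, center, right) neighborhood.
RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def rule110_logic(p, q, r):
    """The same rule as a boolean formula: (q OR r) AND NOT (p AND q AND r)."""
    return int((q or r) and not (p and q and r))

# The formula and the table agree on all 8 neighborhoods.
for p, q, r in product([0, 1], repeat=3):
    assert RULE_110[(p, q, r)] == rule110_logic(p, q, r)

def step(cells):
    """One CA step on a row of cells (fixed 0 boundary)."""
    padded = [0] + cells + [0]
    return [rule110_logic(*padded[i:i+3]) for i in range(len(cells))]

print(step([0, 0, 0, 1, 0, 0]))  # [0, 0, 1, 1, 0, 0]
```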
I guess he's saying, in the context of building up a formal math system based on some sort of cellular automata or simple rules… conceivable, but nobody has done that, and nobody knows what the result would be like, or which simple system should be the “foundation”. Logic is logic because it is based on something we understand and are in touch with: true, false, and sets (things).
So can one say that the idea of logic somehow underlying mathematics is wrong? At a conceptual level, I think so. But in a strange twist of history, logic is currently precisely what is actually used to implement mathematics.
Hard to take this. He says “logic is currently precisely what is actually used to implement mathematics.” But most professional mathematicians do not take logicism as their approach, nor do the vast majority of math publications. Basically, they “implement math using logic” only in the sense that they use reasoning. In this sense, what possibly could be the alternative? Does Wolfram imagine that when mathematicians do a proof, the proof should go like “because CA xyz at step n is this way, therefore this lemma is true”?
We know from computational universality—and more precisely from the Principle of Computational Equivalence—that things do not have to work this way, and that there are many very different bases for computation that could be used. And indeed, as computers move to a molecular scale, standard logic will most likely no longer be the most convenient basis to use.
But so why is logic used in today’s computers? I suspect it actually has quite a bit to do with none other than Principia Mathematica. For historically Principia Mathematica did much to promote the importance and primacy of logic, and the glow that it left is in many ways still with us today. It is just that we now understand that logic is just one possible basis for what we can do—not the only conceivable one.
In the above, he makes a good point related to his ANKS: for a particular type of computer, such as a DNA computer, CA may be a better model than logic. Though, the way he suggests why today's computers are based on logic seems silly. Computers use electronics, with electricity flowing or not flowing as on and off, true and false, which seems the simplest thing to do with electronics. That's basically why, if one wants to say it's based on logic. Rather, it's not “based on logic”, but a “2 states system”. We might call it boolean algebra, or we might say it's based on math, or we might just say it's based on properties of electricity or physics. What does it mean to say that today's computers are based on “logic” anyway?
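The “2 states system” point can be sketched concretely (my own toy example): from a single two-state primitive that is easy to realize with transistors, NAND, one can generate NOT, AND, OR, XOR, and then arithmetic. Whether you call this “logic” or just properties of electricity is exactly the naming question above.

```python
# Sketch: building all boolean operations, and a half adder, from NAND alone.
def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

# The gates really do arithmetic: a + b == sum + 2 * carry for all bit pairs.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert a + b == s + 2 * c

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```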
I suppose he's suggesting that the process of designing a CPU, memory, bus, circuit design, etc, could instead be based on some arbitrary CA or other system.
To resolve this, Russell introduced what is often viewed as his most original contribution to mathematical logic: his theory of types—which in essence tries to distinguish between sets, sets of sets, etc. by considering them to be of different “types”, and then restricts how they can be combined. I must say that I consider types to be something of a hack. And indeed I have always felt that the related idea of “data types” has very much served to hold up the long-term development of programming languages. (Mathematica, for example, gets great flexibility precisely from avoiding the use of types.)
That the introduction of types is a hack is a common opinion. What's interesting is his opinion about “data types” in programming languages. This is also a common debate. I was never good at typed languages… One obvious reason is that most of them are low-level langs (C, C++, Java), and i despise low-level langs (i was never a systems programmer). However, i thought i might like typed high-level langs such as Haskell and OCaml. Tried to learn them around 2007 but didn't get far.
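For readers who haven't seen the paradox that types were invented to block, here is a toy sketch of my own. Model Russell's “set of all sets that do not contain themselves” as a predicate r; asking whether r contains itself, r(r), demands the value of “not r(r)”, an infinite regress. A type system simply rejects the self-application r(r) as ill-typed, which is in essence Russell's fix.

```python
# Sketch: Russell's paradox as self-application in untyped Python.
# r is the "set" of all sets that do not contain themselves,
# modeled as a predicate on predicates.
r = lambda s: not s(s)

# Asking "does r contain itself?" never settles on True or False:
# r(r) = not r(r) = not not r(r) = ... and the recursion blows up.
try:
    r(r)
    print("r(r) returned a value")  # never reached
except RecursionError:
    print("paradox: r(r) never settles on True or False")
```

In a typed language like Haskell or OCaml, `r` would need a type of the form `t -> Bool`, and `r r` would require `t` to equal `t -> Bool`, so the compiler rejects it before it ever runs.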