Around 1998, i tried to write down in a Mathematica notebook the essence of the book Linear Algebra and Its Applications by David C Lay. The notebook is here: linearAlgebraNotes.nb. PDF version: linearAlgebraNotes.pdf.

I've been a Mathematica expert since about 1995. I love Mathematica, and tend to view it as a codification of mathematics. I'm into Bertrand Russell's philosophy that mathematics is no more than a logical system, and also into Hilbert's formalism school of thought that all math is merely a transformation of symbols, which in itself need not have meaning. With the advent of computer algebra systems like Mathematica, i want mathematics to become a computable system of theories, a pure symbolic manipulation system to which we can give meaning in the traditional way.

Thus, in my math notes in Mathematica, i have always tried to write the theorems down as computational formulas or equations. This has turned out to be very difficult. (i have since come to understand that this is no trivial matter.) Trying to recast many theorems from Lay's book as computational equivalents forces one to think about logic at many levels, and about philosophical problems. So in the end, my notebook is merely a restatement in the way traditional math literature is written: as statements of concepts and notations, as opposed to a system of symbolic logic or formulaic transformation.

This made me think that perhaps it is fruitful as it is that math be thought of as concepts, with notations as symbols that represent those concepts, and that it is problematic to view math as a pure formalistic system of symbol transformation. On the other hand, my difficulty may merely reflect the size of the problem. The efforts of great mathematicians to turn the corpus of math into pure symbolic logic have not met with satisfaction; the project itself meets great difficulties and requires an enormous amount of math knowledge. And perhaps the difficulty is also due to logicism and formalism not being popular, nor the de facto way people do math, so that their literature and understanding are comparatively meager.

The codification of mathematics is not a trivial problem, although today, with the incredible advancement of computation technologies and understanding, i believe it is quietly making advances even without explicit effort by logicians. Look at computer algebra systems like Mathematica, which already encode a good portion of math knowledge far beyond any single mathematician (who is limited by finite human expertise in a field), and which can also generate surprising results in particular fields (integration and identities are commonly cited examples). Even though it is merely a calculator in essence, as a symbolic processing machine it has already proved to be *indispensable* in any future mathematics, at *least* as a *tool*. If we look at Mathematica, one interesting question relevant to this discussion is “what do those symbols mean?”. And when designing such a computer algebra system, how can one design a system that allows one to do general mathematics?

For example, how would one turn the concept of a “group” into some computational-equivalence statement? (⁖ in Mathematica, it might be *something* like `group[f_,set_]:={Function[_],_List}`.) Turning discrete fields of mathematics such as algebra into computational systems is rather easy in comparison to non-discrete systems such as analysis. Think about how one would turn a concept like orientability of surfaces into code, or other ideas such as compact spaces or metric spaces.
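To make the idea concrete, here is a sketch in Python (rather than Mathematica) of one way to treat a group computationally: a finite set plus a binary operation, with the group axioms checked by brute force. The function name and encoding are my own invention, for illustration only.

```python
from itertools import product

def is_group(op, elems):
    """Check the group axioms for a finite set `elems` under binary operation `op`."""
    elems = list(elems)
    # Closure: a∘b must stay in the set.
    if any(op(a, b) not in elems for a, b in product(elems, elems)):
        return False
    # Associativity: (a∘b)∘c must equal a∘(b∘c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, elems, elems)):
        return False
    # Identity: some e with e∘a == a∘e == a for all a.
    ids = [e for e in elems if all(op(e, a) == a == op(a, e) for a in elems)]
    if not ids:
        return False
    e = ids[0]
    # Inverses: every a has some b with a∘b == b∘a == e.
    return all(any(op(a, b) == e == op(b, a) for b in elems) for a in elems)

# Integers mod 5 under addition form a group:
print(is_group(lambda a, b: (a + b) % 5, range(5)))   # True
# {1, 2, 3} under multiplication mod 4 do not (2*2 = 0 leaves the set):
print(is_group(lambda a, b: (a * b) % 4, [1, 2, 3]))  # False
```

Note that this only *tests* finite structures; it does not capture the concept of a group in general, which is exactly the difficulty discussed above.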

Computer algebra systems are not the only practical things that face these questions. In recent years there's the design of MathML, which faces exactly the same questions. Ultimately, the question faced is: “what is the relation of math notations to mathematics?” Note that MathML is not merely a notation system, but has a computable aspect. This is in contrast to Donald Knuth's TeX typesetting system. As a typesetter, TeX is a purely notational system that does not embody mathematics. Thus, the design of a typesetting system like TeX does not face the difficult problem of the relation between math and notation, but merely that of positioning glyphs.
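MathML in fact has two vocabularies that mirror this very distinction: presentation markup, which only says how the notation looks, and content markup, which says what the expression means. For instance, x² in both:

```xml
<!-- Presentation MathML: how it looks (an x with a superscript 2) -->
<msup><mi>x</mi><mn>2</mn></msup>

<!-- Content MathML: what it means (the power operation applied to x and 2) -->
<apply><power/><ci>x</ci><cn>2</cn></apply>
```

The presentation form could just as well render xⁿ as “x to the n” in any notation; only the content form is computable.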

The Introduction section of MathML specification captures some of my sentiments here. http://www.w3.org/TR/2001/REC-MathML2-20010221/

For detailed exposition on TeX problems, see: the TeX pestilence.

During my math learning over the years, somehow i developed a peculiar habit of always wanting to understand or view math as some logical system. Thus, in later years i always have problems digesting common text books. The majority of math text books discuss concepts that are often not self-contained, relying on many implicit assumptions and non-math concepts (such as from physics). For example, calculus books teach integration for the length of a curve, yet the “length” is often left mathematically undefined, relying on implicit understanding; the math concept depends on everyday experience. I'm often stuck wanting to resolve these concepts into self-contained logical equivalents, and this often ends up botching my learning of the subject. (for example, how to define curve length. This particular example may be simply defined as an integration. However, other math notions may be harder, and are often not given as such in normal text books, unless the text book takes an exact axiomatic treatment (which is still different from a pure symbolic logic or formula-based system). There are relatively many math texts based on a pure axiomatic approach, but almost zero math texts based on a formula/symbolic logic system.)
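The curve-length example can indeed be made self-contained: define the length as the limit of the lengths of inscribed polygons, which for a smooth curve y = f(x) equals the integral ∫ √(1 + f′(x)²) dx. A Python sketch of the polygonal definition (the function name is mine):

```python
import math

def curve_length(f, a, b, n=100000):
    """Approximate the length of the curve y = f(x) on [a, b] by summing
    the lengths of n straight segments joining points on the curve.
    As n grows, this converges to the arc-length integral."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(math.dist((xs[i], f(xs[i])), (xs[i + 1], f(xs[i + 1])))
               for i in range(n))

# Arc length of y = x^2 on [0, 1]; the integral evaluates to about 1.47894.
print(round(curve_length(lambda x: x * x, 0, 1), 4))  # 1.4789
```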

Some math books i totally love. Euclid's Elements, for example, is a work i wouldn't have a problem with. (i haven't read it though.) Coxeter's The Real Projective Plane is also a great book, of which i've read 6/10. It treats projective geometry on the plane purely axiomatically, even though it is highly condensed. These axiomatic books are often alienating to most readers, because they are “dry”: they don't connect to everyday experience, and seem pointless. I love them because they give a logical, precise structure of the subject matter. Interpretation isn't a problem. Interpretation should be added on top, separately from the subject matter, so that we understand the subject content precisely and logically, and its precise relation to daily conceptions.

An axiomatic system is not equivalent to a symbolic logic system; it seems to be only a start. Even with an axiomatic system such as Euclid's, there's the question of how to turn it into a computable system. In other words, how to turn concepts like triangles and angles into strings of symbols, together with manipulation rules that generate new strings as new theorems.
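As a toy illustration of “theorems as strings”, here is a Python sketch of a formal system in this sense: an axiom string plus rewrite rules, with the theorems being exactly the strings derivable from the axiom. The particular axiom and rules are made up for illustration.

```python
from collections import deque

def theorems(axiom, rules, max_len=6):
    """Generate all theorems of a toy formal system: strings derivable from
    the axiom by repeatedly applying rewrite rules (old, new), capped at
    max_len characters so the search terminates."""
    seen, queue = {axiom}, deque([axiom])
    while queue:
        s = queue.popleft()
        for old, new in rules:
            # Try applying the rule at every position in the string.
            for i in range(len(s) - len(old) + 1):
                if s[i:i + len(old)] == old:
                    t = s[:i] + new + s[i + len(old):]
                    if t not in seen and len(t) <= max_len:
                        seen.add(t)
                        queue.append(t)
    return seen

# A made-up system: axiom "ab"; rules: "b" → "bb", and "ab" → "ba".
print(sorted(theorems("ab", [("b", "bb"), ("ab", "ba")], max_len=4)))
# ['ab', 'abb', 'abbb', 'ba', 'bab', 'babb', 'bba', 'bbab', 'bbba']
```

Note that the machine derives theorems with no notion of what the strings mean, which is the formalist stance in miniature.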

I think all these concepts are related: axiomatic systems, symbolic logic systems, Hilbert's formalism, and formal mathematics. The last, “formal math”, refers to mathematics done with some rigor, but not necessarily axiomatically or as systematic formulas. Also, i don't quite understand the relation between symbolic logic systems, Hilbert's formalism, and Russell's logicism. Literature explaining these is rare, and rarer still is their comparison or reconciliation. This is partly because these are esoteric research topics, and they are seldom presented to non-specialists even when it is possible.

During the 18th century, there was a period when mathematicians thought that the subject matter of math had exhausted itself, that all questions were solved or would be soon, and that math was running out of questions. As things turned out, the opposite is happening. As 20th century abstract math shows, even today's body of math is so vast that no single mathematician, or group of them, can understand it all, yet the overall math knowledge is but a drop in the sea. The fields of math have diversified to such a degree that no one mathematician knows the basic body of knowledge of every other field, yet in almost any single field tremendous territories remain unexplored or unasked.

During the last century, computers spurred the brand new field of discrete mathematics, with algorithmic studies and approaches that are totally new. Computers, aside from aiding traditional math tremendously by sheer computing power, have spurred brand new fields of traditional math, such as the analysis of fractals. Things such as “experimental math” have come onto the scene, as have simulations. Stephen Wolfram, the megalomaniac genius, went as far as to say that simulation is the future for facing the vast number of unasked questions, and that traditional math is reaching an impasse.

See also: Notes on A New Kind of Science.

A related idea is that English, or any natural language, is terrible as a logical and precise communication tool. Thus, it would be great if all math, including text books, were written in symbolic logic. Russell's work spurred logical positivism and logical studies of language. As of today, we are in luck. Around 1955, a logical language called Loglan was invented by James Cooke Brown, out of linguistic concerns, without direct ties to math foundations or analytic philosophy. Loglan is an artificial language based on predicate logic, designed for human communication. After decades of development, the Loglan community split, spinning off Lojban for political reasons. I've been learning lojban for several months.

For some learning notes about lojban, see: Xah's lojban Tutorial.

Today, i understood clearly for the first time the difference between propositional logic and first-order logic (FOL), from Wikipedia's First-order logic article, first section, “Why is first-order logic needed?”.

Basically, propositional logic, when translated into symbols (as a formal system, part of the movement to rigorize the study of logic in the 1800s and 1900s), leaves something to be desired. Specifically, it treats whole statements as indivisible atoms, so it cannot talk about the individuals a statement is about, nor quantify over them. First-order logic (sometimes aka predicate logic, or symbolic logic) basically came about in the process of turning logic into a system of symbols (i.e. a formal system) that can express such statements.

Another thing i learned, which cleared up a long-time confusion of mine, is about the “predicate”. The “predicate” in FOL is actually not a function. It is a unary relation. Here's a quote from Wikipedia:

Note that in first-order logic, Man(x) is not a function, as functions convert inputs to outputs through a specific process. Man(x) is a unary relation which simply means that x is described by “Man”, or that x is a “Man”, or that x falls under the category of “Man”.

For example, in the following:

∀ x, p[x] → q[x] (also written p[x] ⇒ q[x])

The p and q are actually not functions. I had this confusion because in computer languages (⁖ Mathematica, Lisp), a function that returns true/false is called a predicate. If we consider p and q as functions that return true/false, then the above is merely a boolean-valued expression, and it needs the additional assertion that it evaluates to true for every x.

For example, the theorem
`∀ x p[x] → q[x]`

can have 2 interpretations, depending on whether we consider p (or q) to be a unary relation or a function that returns true/false. If the latter, then the statement simply asserts a relation between the outputs of the 2 functions.
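Under the function reading, checking such a statement over a finite domain is just a computation. A Python sketch (the function name and example predicates are mine):

```python
def forall_implies(domain, p, q):
    """Check ∀x, p(x) → q(x) over a finite domain, reading p and q as
    boolean-valued functions: the programmer's sense of "predicate",
    as in Lisp or Mathematica. p → q is equivalent to (not p) or q."""
    return all((not p(x)) or q(x) for x in domain)

# Over 0..99: "x is divisible by 4" implies "x is even" — holds.
print(forall_implies(range(100), lambda x: x % 4 == 0, lambda x: x % 2 == 0))  # True
# "x is even" implies "x is divisible by 4" — fails (⁖ x = 2).
print(forall_implies(range(100), lambda x: x % 2 == 0, lambda x: x % 4 == 0))  # False
```

In FOL proper, p is not evaluated at all; Man(x) is just a symbol that the rules of inference manipulate, which is precisely the distinction the quote above makes.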