Chapter 3: The World Of Simple Programs
the search for general features p51
2-page non-technical description of how this chapter sets out to survey simple programs, and how their behavior falls into a few types: • repetition • nesting • randomness • localized structures
more cellular automata p53
Introduces the naming scheme for simple 1D CA. ➢ for example: why rule 30 is “30”.
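The numbering scheme can be made concrete with a short sketch (my reconstruction, not code from the book): the 8 possible neighborhoods, listed from 111 down to 000, are assigned the 8 bits of the rule number.

```python
def rule_table(n):
    """Decode an elementary CA rule number into its lookup table.

    The 8 neighborhoods, ordered 111, 110, ..., 000, get the 8 bits
    of n; e.g. 30 = 00011110 in binary, hence "rule 30"."""
    bits = f"{n:08b}"
    neighborhoods = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return {nb: int(bit) for nb, bit in zip(neighborhoods, bits)}

table = rule_table(30)
# e.g. neighborhood 100 maps to 1, neighborhood 111 maps to 0
```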
p57 Says that 2/3 of all these elementary CA remain fixed in size (i.e. don't grow to the side(s)), while the rest (1/3) grow forever. Says that 14% grow with complicated kinds of patterns.
p60 Says that rule 110 is unique (up to symmetry of rules) in that it generates a “complex mixture of regular and irregular parts”.
p60 Introduces totalistic CA with 3 colors. Here, “totalistic” means the rule considers only the average color of a cell and its two neighbors (the cell itself included), not the exact state of each. p61 gives an illustration of 50 such CA. Suggests that there's nothing more interesting here that differs from the elementary CA.
In general, totalistic 3-color CA are examined. The book remarks cursorily that they are not much different from the elementary CA shown before. Some examples of 3-color totalistic CA are shown, with remarks: some with nesting behavior, some with random-like behavior. Remarks that it takes quite a while (8282 steps) for code 1599 to settle down. p70.
Mobile Automata p71. (Ant CA)
Introduces the rule for mobile automata. The motivation given is that a CA updates all cells in parallel, and Wolfram wants to see if it makes any difference for a CA-like system to update a single cell at a time.
Mobile Automata (MA) is not well named. It might be better named “Ant CA”, because “ant” connotes a single moving cell better than “mobile” does. And “Cellular” should be kept, because the system is still grid-based.
One may think that CA is parallel and MA is non-parallel. However, “parallel” is probably a misleading characterization. “Parallel” connotes parallel computation, the concept of many things happening all at once, but the difference here isn't parallel computation.
The CA considered before seems to be a parallel computation system only because of its row-by-row change. A CA can actually be considered non-parallel if we think of updating cell by cell. So if we look for some intrinsic difference between CA and MA, it is not parallelism.
A better phrase that describes their mathematical difference is global change vs local change. In CA, the change happens everywhere in the entire space. In MA, the change happens only at one (or more) locations.
p73 Says that of all 65,536 possible MA rules, all are repetitive. The text then broadens the definition to include updating cells that neighbor the moving cell. Says there are 4,294,967,296 possible rules of this type, and claims that by random sampling, 99% of these show just repetitive behavior.
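The rule counts can be checked with a back-of-envelope sketch (my reconstruction of the rule format, not the book's code): a basic MA rule maps each of the 2^3 neighborhoods of the active cell to a new active-cell color and a move direction; the extended form also rewrites both neighbors.

```python
# Basic mobile automaton: rule maps (left, active, right) colors
# -> (new active-cell color, move left/right).
neighborhoods = 2 ** 3          # 8 possible colorings of the 3 cells
basic_outcomes = 2 * 2          # new color x move direction
n_basic = basic_outcomes ** neighborhoods
print(n_basic)                  # 65536 rules (p73)

# Extended form: the rule may also rewrite the two neighbor cells,
# so an outcome is (3 new colors, move direction).
extended_outcomes = 2 ** 3 * 2
n_extended = extended_outcomes ** neighborhoods
print(n_extended)               # 4294967296 rules (p73)
```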
p73 Claims that once in every few thousand rules, nesting behavior is found.
p74 Claims that after searching over 50k rules, random behavior like that of rule 30 is found. An example is shown. Says that although the coloring pattern is random, the path of the moving cell is still regular.
“generalized MA” (Ants-CA)
p75 Introduces an MA such that a moving cell may split into 2 or more moving cells. The motivation given is that interesting behavior in MA seems very hard to come by, while it is common in elementary CA. So Wolfram wants to see what causes these 2 systems to differ, by investigating their intermediate form: specifically, systems that update at more than one location.
p76 Says that the vast majority of these “generalized MA” still exhibit no interesting behavior, and when they do, there is often a large number of moving cells. Concludes that the reason elementary CA so easily generate complex behavior is probably the very fact that they update multiple cells at once.
Wolfram seems to think that the reason MA usually do not show interesting behavior is that not much computation is going on.
I think this is interesting and should be given more weight. We should give some more precise mathematical definitions and theorems that give insight into why this is so.
For example, the book emphasizes that simple programs easily generate complex behavior. MA are simple programs, simpler than CA. Why do they usually not generate complex behavior? Why is it that CA, with a global updating rule, more easily generate complex behavior?
turing machines p78. (Moody-lone-ant CA)
Introduces a CA he calls “turing machines” (TM). Basically, a mobile CA with a single moving cell (I'll call it the Ant), but such that the ant itself has its own states, independent of the cells' states/colors.
Says that for a “turing machine” where the ant has 2 states and the cells 2 colors, there are 4096 such cases; p79 shows 6 examples. Says that repetitive and nested behavior are seen, but nothing more complicated is found.
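The 4096 figure follows from the rule format (my reconstruction, assuming each (head state, cell color) case maps to a new color, a move direction, and a new head state):

```python
def n_tm_rules(states, colors):
    """Number of possible rules for a head with `states` states over
    `colors` cell colors: each of the states*colors cases maps to a
    (new color, move left/right, new state) triple."""
    cases = states * colors
    outcomes = colors * 2 * states
    return outcomes ** cases

print(n_tm_rules(2, 2))  # 4096 (p78)
print(n_tm_rules(3, 2))  # 2985984, i.e. "about 3 million" (p79)
```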
p79 Says, in a non-firm way, that if the head has 3 states, then there are about 3 million possible CA, and they all show just repetitive or nested behavior when started with all white cells.
In this section, calling these CA “Turing Machines” may be an abuse of the term. “Turing machine” means 2 things: (1) a particular abstract model of computation involving a tape and a writing head (we might refer to this as Alan's Turing Machine); (2) any abstract model of universal computation, often called Universal Machines.
In the book, a 2-color CA with a moving head that has states is called a “Turing Machine”. This is an abuse of the term because such a CA is not Alan's Turing Machine, and it is not necessarily a Universal Machine. A better term for this type of CA might be Turing-Machine CA (TMCA), or moody-Ant CA.
It is interesting that Wolfram seems not to have found a TMCA that exhibits what he calls class 4 behavior, i.e. one with localized structures.
The whole section's examination of TMCA is too cursory. In just 4 pages (2 of which are pictures), it says that TMCA with 4 head states start to show random behavior. It does not state firmly whether behavior with localized patterns is found. Also, the examples given are too few.
Here we might want to define more precisely the relation between having localized structures and being random. I think having localized structures implies randomness, while being random does not imply localized structures. Thus, systems with localized structures might just be a particular phenomenon among systems exhibiting randomness, and this may have mathematical significance that is not covered by simply saying a system is random. Having localized structures may indicate that the CA might be a universal machine.
substitution systems p82
Defines “substitution system”: pretty much a string-replacement system, but done using cells, i.e. a list of rules that define how a cell (or sequence of cells) is to be replaced by other cells (a cell has 2 possible colors). The replacement is done from left to right.
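A minimal sketch of the neighbor-independent case (the rule shown is illustrative, not one of the book's numbered examples): each cell is replaced, left to right, by the block its color maps to.

```python
def step_substitution(cells, rules):
    """One step of a neighbor-independent substitution system:
    replace each cell, left to right, by its block."""
    out = []
    for c in cells:
        out.extend(rules[c])
    return out

# illustrative rule: 1 -> 1,0 and 0 -> 0,1 gives the nested
# Thue-Morse pattern when started from a single black cell
rules = {1: [1, 0], 0: [0, 1]}
row = [1]
for _ in range(3):
    row = step_substitution(row, rules)
print(row)  # [1, 0, 0, 1, 0, 1, 1, 0]
```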
The motivation given is that, unlike the previously studied systems, which don't change the fixed width of the cell grid, substitution systems can change the number of cells.
The statement that his Substitution Systems (SS) are variable-width while his MA and his TM are fixed-width is not true in any reasonable way. Wolfram's MA and his so-called Turing Machines do not have a fixed cell-frame width.
A more proper characterization of MA and TM width compared to SS is that the former's width only grows, while in SS it may shrink. To be more precise, we could give a mathematical definition to a term Universe Width, which should be the size of the whole universe (grid) the CA system works in. If we consider CA, MA, TM all to be fixed-width, then there's the question of what happens at the edge of the grid, or when the change activity reaches the edge. For example, in CA, we'd need to define special rules for the edge columns, or use a toroidal topology, which changes the CA's properties in non-trivial ways. In SS, we can think of it as a variable-size-universe system.
p82 The paragraph “And with this kinds of rules,…”: the whole paragraph is untrue. It is basically empty of content, as is a significant portion of the book's text. It tries to justify a CA image-rendering technique of variable cell width by saying that too many cells is “unwieldily”. In fact, CA images shown on previous pages have a far greater number of cells. It is not technically necessary to represent Wolfram's “substitution system” by widening the first few cells, nor does this widening provide any mathematical insight, nor is it necessarily better in some aesthetic sense.
p85 The paragraph “One feature…”: this paragraph is literally incorrect. Pages 82 to 85 are full of bloat that could be reduced to half its length.
p86 In a non-firm way, seems to conclude that substitution systems with 2 colors (including ones that delete strings) are mostly uninteresting.
p87 Shows 6 substitution systems with 3 or 4 possible colors. One example shows repetitive behavior, one shows nesting behavior; the others seem random.
p88 First 2 paragraphs: says that SS is “almost exactly like a CA…”. I think these 2 paragraphs are technically incorrect or meaningless.
The whole section is too wishy-washy, and there does not seem to be any concrete technical knowledge. In summary, the text seems to suggest that the so-called substitution systems with 2 colors are generally uninteresting, while SS with 3 or more colors show more random behavior.
Note: unlike previous sections, the exploration did not start by stating how many total possible rules there are. In the SS case, we could define the number of rules, and for each rule how many cells are involved, and thus give the total number of possible rules. For example, we might code it as “ss[c][r][n]”, where c is the number of possible colors, r is the number of rules, and n is the maximum number of cells in each rule (both in pattern and replacement). Then, given an ss code, we can say how many possible rules there are, and perhaps give some statistical summary of what percentage shows uninteresting or interesting behavior.
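Under the proposed ss[c][r][n] coding (a hypothetical convention of these notes, not the book's), the count could be tallied like this, assuming each rule maps a pattern of 1..n cells to a replacement of 0..n cells (length 0 meaning deletion):

```python
def n_ss_rules(c, r, n):
    """Count rule sets for the hypothetical ss[c][r][n] coding:
    r ordered rules, each mapping a pattern of 1..n cells in c
    colors to a replacement of 0..n cells (empty = deletion)."""
    patterns = sum(c ** k for k in range(1, n + 1))
    replacements = sum(c ** k for k in range(0, n + 1))
    return (patterns * replacements) ** r

print(n_ss_rules(2, 1, 1))  # 2 patterns x 3 replacements = 6
print(n_ss_rules(2, 2, 2))  # (6 x 7)^2 = 1764
```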
sequential substitution systems p88
p89 Sequential Substitution: consider the cells as strings of letters. Given a sequence of string-replacement rules, try them one by one in order. As soon as a match is found, replace it, then start the next generation. With 2 colors and 3 rules, viewing their evolution, nesting behavior and random behavior are found.
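One generation can be sketched as follows (the rule list is illustrative, not one of the book's examples):

```python
def step_sequential(s, rules):
    """One step of a sequential substitution system: try the rules
    in order; apply the first whose pattern occurs, at its leftmost
    occurrence, then stop for this generation."""
    for pat, rep in rules:
        i = s.find(pat)
        if i != -1:
            return s[:i] + rep + s[i + len(pat):]
    return s  # no rule matches: the system halts

# illustrative 2-letter rules; order matters, since only the
# first matching rule fires at each step
rules = [("ABA", "AAB"), ("A", "ABA")]
s = "A"
for _ in range(4):
    s = step_sequential(s, rules)
print(s)  # AABAB
```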
Trying out the rules in order is important.
In this section, as in the previous one, no systematic knowledge is presented. All there is are some examples of SSS showing repetition and “apparent randomness” behavior.
tag systems p93
p93 Tag system: consider a sequence of cells. Remove one or more from the head. Depending on what colors are removed, add one or more cells to the end. Suggests faintly that this system is also capable of complicated behavior.
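A sketch of the mechanism (parameters and rule illustrative, not the book's examples): remove a fixed number of cells from the front, and append a block chosen by the color of the first cell removed.

```python
from collections import deque

def run_tag(seq, rules, remove=2, steps=10):
    """Evolve a tag system, returning the sequence length at each
    step.  `rules` maps the color of the first removed cell to the
    block appended at the end."""
    q = deque(seq)
    lengths = [len(q)]
    for _ in range(steps):
        if len(q) < remove:
            break  # too short to continue: the system halts
        head = q[0]
        for _ in range(remove):
            q.popleft()
        q.extend(rules[head])
        lengths.append(len(q))
    return lengths

# illustrative rule: the length fluctuates rather than growing steadily
print(run_tag([1, 0, 1], {1: [1, 0], 0: [0]}, remove=2, steps=3))
# [3, 3, 3, 2]
```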
This section on tag systems is just 2 pages, most of it pictures. There is no mathematical substance. The very few tag-system examples shown indicate that some of them do have random behavior, in particular in their length. However, no nesting or localized structures are shown or even mentioned.
The substitution-system and tag-system sections seem pointless. The examples given are weak indications that these systems can generate random numbers, but not much else.
cyclic tag systems p95
p95 Cyclic tag system: like the tag system, except that there are 2 sets of rules, and they take turns.
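A sketch of one common formulation (the blocks are illustrative, not necessarily the book's example): one cell is removed per step; if it is black, the current block is appended; the blocks take turns regardless of what was removed.

```python
def run_cyclic_tag(seq, blocks, steps=10):
    """Evolve a cyclic tag system, returning the sequence length
    at each step."""
    q = list(seq)
    lengths = [len(q)]
    for t in range(steps):
        if not q:
            break  # empty sequence: nothing left to remove
        first = q.pop(0)
        if first == 1:               # only a black cell appends
            q.extend(blocks[t % len(blocks)])
        lengths.append(len(q))
    return lengths

# illustrative blocks: the length fluctuates up and down
print(run_cyclic_tag([1], [[1, 1], [0]], steps=4))
# [1, 2, 2, 3, 2]
```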
A somewhat pointless section. The only content shown is that in this rather contrived system, the sequence length fluctuates.
register machines p97
symbolic systems p103
some conclusions p105
1 page. States that each system covered in this chapter shows complex behavior, with the conclusion that the complex behavior of CA doesn't depend on CA's particular setup, such as the fixed grid or row updating. States that the phenomenon of complexity is universal and not dependent on particular systems. Suggests that if the rules of a system are too simple, only repetitive behavior shows; if the rules are a bit more complex, nesting behavior shows up; and for more complex behavior, some threshold in the underlying rules must be crossed.
how the discoveries in this chapter were made p108
States that the discoveries were made by computer experimentation. Gives some defense of such experimentation: how easily it is done, and its simplicity. States that it is better to do mindless, massive, systematic study of simple programs than carefully crafted select examples. States that pictorial representation is a good way to understand massive amounts of experimental data. States his late realization of the significance of rule 30.
p112 Gives some of his history studying CA, in particular how he initially searched ten million MA rules without seeing complex behavior. States similar experiences with the other systems he studied in this chapter.
Some comments about .
p53 The basic 1D CA, numbered from rule 0 to rule 255, have their behavior classified into classes 1 to 4. Make a comprehensive list of their classifications. Also, determine whether their class differs between an init condition of 1 cell vs a random one. See p227, p55.
p60 A totalistic CA is a CA whose rule for the next generation considers only some average of a cell and its neighbors (as opposed to the exact state of each neighbor). Totalistic CA were introduced to reduce the number of possible rules of a CA. ➢ for example: Game of Life is (outer) totalistic, because it disregards the positions of neighbors. Another example is a simple 1D CA with more than 2 colors as states, where one takes the average of the cells' colors.
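The totalistic coding can be sketched as follows (my reconstruction, assuming the standard convention that the base-3 digits of the code number give the new color for each possible sum of the 3 cells):

```python
def totalistic_table(code, colors=3):
    """Decode a totalistic code number: the possible sums of the 3
    cells (0 .. 3*(colors-1)) index the base-`colors` digits of
    `code`."""
    table = {}
    for s in range(3 * (colors - 1) + 1):
        code, table[s] = divmod(code, colors)
    return table

# 3 colors: 7 possible sums, so 3^7 = 2187 distinct rules (p61)
print(3 ** 7)  # 2187

# for code 1599 the all-white sum maps back to white, so the
# background stays blank while the pattern grows (see p70)
print(totalistic_table(1599)[0])  # 0
```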
p61 Some examples of 3-color 1D CA with totalistic rules are given. There are 2187 such rules; 85% produce regular behavior (probably class 1 or 2, though probably class 3 is included as well). Class 4 behavior is of course also seen. Some examples are given.
Question: the totalistic 3-color 1D CA examples given all start with a single init cell. Is there an examination of random init conditions on them? In general, is CA behavior from a single-cell init condition the same as from a random init condition? According to p250, the answer seems to be yes.
Question: when examining the 3-color 1D CA, why not start with a random selection of all possible ones? Why examine only totalistic ones? There doesn't seem to be an obvious relation between the totalistic ones and the basic ones.
ant 1D CA (single-cell-updating)
p71 Single-cell-updating automata (Mobile Automata). There are 256^2 cases here. We consider the normal 1D basic CA, but with a dot that moves to the left or right. It is asserted that all 256^2 of them have uninteresting repetitive behavior. p73 Nesting behavior shows up when one extends the rule to include updating the colors of the neighbors as well. This rule extension is fairly complex: it has 4,294,967,296 possible rules, 99% of which are simply repetitive, though nesting behavior appears.
p75 Random behavior appears too, and the ant's path also seems random. This is after searching a million rules.
p76 A new mobile CA that allows the ant to split into two. It is found that such a CA is likely to exhibit complex behavior when there are multiple ants crawling at the same time. Note: no numbering system is given for mobile automata.
p78 A mobile automaton with the extension that the active cell itself has states. This is what the book calls Turing Machines. One examines such machines with heads having n states and 2-color cells. It is found that when the head has 4 states, complex behavior appears easily. p81 gives one example with complex behavior.