Guy Steele Says: Don't Iterate, Recurse, and Get rid of cons!
A fascinating talk by the computer scientist Guy Steele. (well known as one of the authors of Scheme Lisp)
〔How to Think about Parallel Programming: Not! By Guy Steele. @ http://www.infoq.com/presentations/Thinking-Parallel-Programming〕
Here's a paper written by Guy Steele in 2009, on the same topic, but with lots of Lisp and Fortress code 〔Organizing Functional Code for Parallel Execution (or, foldl and foldr Considered Slightly Harmful). Guy Steele. @ ICFPAugust2009Steele.pdf〕 At the end of the document, he says, in big red letters: “Get rid of cons!”.
The talk is a bit long, at 70 minutes. In the first 26 minutes, he goes thru 2 computer programs written for 1970s machines. It's quite interesting to see how software on punch cards worked. Most of us have never seen a punch card. He goes thru it “line by line”, actually “hole by hole”. Watching it gives you a sense of what computers were like in the 1970s.
At 00:27, he starts talking about “automating resource management”, and quickly moves to the main point of his talk: what sort of programing paradigms are good for parallel programing. Here, parallel programing means solving a problem by utilizing multiple CPUs. This is important today, because CPUs don't get much faster anymore; instead, each computer gets more CPUs (multi-core).
In the remaining 40 minutes of the talk, he steps thru 2 programs that solve the simple problem of splitting a sentence into words. The first program is in typical procedural style, using a do-loop (accumulator). The second program is written in his language Fortress, in functional style. He then summarizes a few key problems with traditional programing patterns, and itemizes a few functional programing patterns that he thinks are critical for programing languages to automate parallel computing.
In summary, as a premise, he believes that programers should not worry about parallelism at all; the programing language should do it automatically. Then, he illustrates that there are a few programing patterns we must stop using, because code written in those paradigms is very hard to parallelize, either manually or by the compiler.
If you are a functional programer and have read FP news in the last couple of years, his talk won't seem that new. However, i find it very interesting, because:
- ① This is the first time i've seen Guy Steele talk. He talks slightly fast. Very bright guy.
- ② The detailed discussion of punch card code on a 1970s machine is quite an eye-opener for those of us who haven't been there.
- ③ You get to see Fortress code, and its use of fancy Unicode characters.
- ④ Thru the latter half of the talk, you get a concrete sense of some critical “do's ＆ don'ts” in coding paradigms: what makes automated parallel programing possible or impossible.
In much of the 2000s, i did not understand why compilers couldn't just do parallelism automatically. My thought had always been that a piece of code is just a concrete specification of an algorithm, and machine hardware is irrelevant. But i realized that any concrete specification of an algorithm is, by nature, specific to particular hardware. 〔➤ Why Must Software be Rewritten for Multi-Core Processors?〕
Parallel Computing vs Concurrency Problems
Note that parallel programing and concurrency problems are related but not the same thing.
- Parallel computing is about writing code that can use multiple CPUs.
- Concurrency problems are about multiple concurrent computations using the same resource at the same time (typical issues: race conditions, file locking).
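To make the distinction concrete, here is a minimal Python sketch of my own (not from the talk): two threads sharing one counter is a concurrency problem, and the lock is what guards the shared resource. The names `bump` and `counter` are just for illustration.

```python
import threading

# A concurrency problem (not parallelism): two threads share one counter.
# Without the lock, the read-modify-write in `counter += 1` can interleave
# between threads and lose updates -- a race condition.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared resource
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 200_000   # with the lock, no updates are lost
```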
Elegant Functional Style ≠ Automatic Parallelism
I've been doing functional programing for about 17 years (since 1993), pretty much ever since i started programing with my first language Mathematica. Two paradigms i use frequently are sequencing functions (aka chaining functions, piping) and recursion (⁖ fold, nest). In some sense, most of my programs are one single chain of functions. The input is fed to a function, then its output to another, then another, until the last function spits out the desired output. It is extremely elegant, no state whatsoever, and often i even go the extra mile to avoid using any local temp variables or constants in my functions. But after watching this video, i learned that function chaining is NOT good for parallelism, despite being considered a very elegant functional paradigm.
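To illustrate the point (a Python sketch of my own, not code from the talk): a function chain is inherently sequential, because each stage consumes the entire output of the previous stage, while a map over independent elements has no such ordering and could run one element per CPU. The helper name `chain` is mine.

```python
# A chain of functions: elegant and stateless, but inherently sequential --
# stage n+1 cannot start until stage n has produced its whole output.
def chain(x, fns):
    out = x
    for f in fns:
        out = f(out)
    return out

# Feed a list through three stages: sort, double each element, then sum.
doubled_sum = chain([3, 1, 2], [sorted, lambda xs: [2 * x for x in xs], sum])
# → 12

# By contrast, mapping a pure function over independent elements has no
# dependence between elements: each one could go to a separate CPU.
squares = list(map(lambda x: x * x, [3, 1, 2]))
# → [9, 1, 4]
```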
As functional programers, we are typically familiar with all the FP style constructs, and we strive for such elegance. The important thing i learned from this video is that elegance in functional style does not mean the code will be good for parallelism. Similarly, software written in a purely functional language such as Haskell is not automatically good for parallelism. (though it will be a lot better off than procedural langs, of course.) FP and automatic parallelism share some aspects, such as declarative code and not maintaining state, but they are not the same issue.
Don't Iterate, Recurse, and Get rid of cons!
The following are items from his slides.
Good Properties of Functions
- Associativity: grouping doesn't matter. ⁖
f[f[a,b],c] == f[a,f[b,c]]〔➤ the Nature of Associative Property of Algebra〕
- Commutativity: “order doesn't matter!”. ⁖
f[a,b] == f[b,a]
- Idempotent: “duplicates don't matter”. ⁖
f[x,x] == x
- Identity: “this value doesn't matter!”. ⁖
f[x, id] == x
- Zero: “other values don't matter!”. ⁖
f[x, zero] == zero
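These properties are what license a compiler to regroup work across CPUs. Here's a Python sketch of my own (the names `tree_reduce` and `add` are mine): for an associative operation, reducing left-to-right and reducing as a balanced tree give the same answer, and the tree's two halves are independent, so they could run in parallel.

```python
from functools import reduce

def tree_reduce(f, xs, identity):
    """Divide-and-conquer reduction: valid whenever f is associative.
    The two recursive halves are independent of each other."""
    if not xs:
        return identity
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return f(tree_reduce(f, xs[:mid], identity),
             tree_reduce(f, xs[mid:], identity))

add = lambda a, b: a + b
xs = list(range(1, 101))

# Associativity: left-to-right grouping and balanced-tree grouping agree.
assert reduce(add, xs) == tree_reduce(add, xs, 0) == 5050

# Subtraction is NOT associative, so the two groupings disagree:
sub = lambda a, b: a - b
assert reduce(sub, [8, 4, 2]) != tree_reduce(sub, [8, 4, 2], 0)
```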
Things to Avoid
- DO loops.
- linear linked lists. (lisp's cons)
- Java style iterator.
- even arrays are suspect.
- as soon as you say “first, SUM = 0” you are hosed.
- accumulators are BAD. They encourage sequential dependence and tempt you to use non-associative updates.
- If you say, “process sub-problems in order,” you lose.
- The great tricks of the sequential past WON'T WORK.
- The programing idioms that have become second nature to us as everyday tools for the last 50 years WON'T WORK.
- A program organized according to linear problem decomposition principles can be really hard to parallelize. (⁖ accumulator, do loop)
- A program organized according to independence and divide-and-conquer principles is easy to run in parallel or sequentially, according to available resources.
- The new strategy has costs and overheads. They will be reduced over time but will not disappear.
- in a world of parallel computers of wildly varying sizes, this is our only hope for program portability in the future.
- Better language design can encourage “independent thinking” that allows parallel computers to run programs effectively.
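Steele's word-splitting program is the model of the divide-and-conquer alternative. Here is a rough Python sketch of the idea (his version is in Fortress; the names `combine`, `glue`, and `split_words`, and the tuple representation, are my own): each character maps to a small partial result, and an associative `combine` merges adjacent partial results, so the reduction can be grouped as a balanced tree across CPUs instead of an accumulator loop.

```python
from functools import reduce

# A partial result is either ("chunk", s) -- text with no space seen yet --
# or ("seg", left, words, right): a left fragment, complete words, and a
# right fragment still open on the right edge.

def glue(s):
    # A fragment becomes a word only if it is nonempty.
    return [s] if s else []

def combine(a, b):
    # Associative merge of two adjacent partial results.
    if a[0] == "chunk" and b[0] == "chunk":
        return ("chunk", a[1] + b[1])
    if a[0] == "chunk":
        _, l, ws, r = b
        return ("seg", a[1] + l, ws, r)
    if b[0] == "chunk":
        _, l, ws, r = a
        return ("seg", l, ws, r + b[1])
    _, l1, w1, r1 = a
    _, l2, w2, r2 = b
    return ("seg", l1, w1 + glue(r1 + l2) + w2, r2)

def split_words(s):
    # Map each char to a partial result, then reduce with `combine`.
    # Because `combine` is associative, the reduction could be done as a
    # balanced tree on many CPUs instead of left to right.
    parts = [("seg", "", [], "") if c == " " else ("chunk", c) for c in s]
    total = reduce(combine, parts, ("chunk", ""))
    if total[0] == "chunk":
        return glue(total[1])
    _, l, ws, r = total
    return glue(l) + ws + glue(r)

print(split_words("This is a sample"))  # → ['This', 'is', 'a', 'sample']
```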
Fortress ＆ Unicode
It's interesting that Fortress language freely uses Unicode chars. See: Computer Languages, ASCII Jam, Fortress, Unicode.
Parallel Programing Exercise
Here's an interesting parallel programing exercise: Parallel Programing Problem: asciify-string.