Programming Language and Its Machine
C Is Not a Low-level Language. Your computer is not a fast PDP-11. And, Machine Architecture
interesting article. good one.
[C Is Not a Low-level Language. Your computer is not a fast PDP-11 By David Chisnall. At https://queue.acm.org/detail.cfm?id=3212479 ]
esr wrote a follow-up thought with respect to golang. To read.
[Embrace the SICK By Eric Raymond. At http://esr.ibiblio.org/?p=7979 ]
so, in recent years, i became more aware that a programming language, or any algorithm, REQUIRES a machine to work with. This is the CPU design. this is fascinating. what machine you have changes the nature of your language entirely. it also changes the profile of efficiency and the type of tasks.
if you are writing pseudo code, or hand waving in theory, you need no machine. but as soon as you actually do compute, a painstaking nitty-gritty language becomes necessary, and, an actual machine for the language to work on.
the design of a machine dictates what your language's gonna be like, and what kind of tasks it does well. e.g. lisp had lisp machines. Java talked about a java cpu ~1999. (note: java runs on the Java VIRTUAL MACHINE, which in turn is translated to an actual machine.)
Turing tape, formal languages, automata, are abstract machines. Integers, or set theory, are also abstract machines, that math works on. As soon as you talk about step by step instructions (aka algorithm), you need a machine.
abacus is a machine. babbage's difference engine is a machine. 5 stones on sand, is a machine. 10 fingers, is a machine.
a virtual abstract machine is easy. It exists on paper. But for actual running hardware, we have fingers, to stones, to metal cogs and wheels, to vacuum tubes, to solid state transistors, to printed circuits, to the incomprehensible silicon die. and there are DNA and quantum machines.
if we just think about cpu/gpu of the current era, there are lots of designs, aka architectures. i don't know much about this, but the questions are: what are the possible designs, the pros and cons, what are the barriers to a new design, and how does it affect the languages and computing tasks we do today.
Instruction Set Architecture (ISA)
a machine can be abstracted into an “Instruction Set Architecture”, i.e. a set of instructions. These make up a machine language, the lowest form of programming language. Each instruction does things like move this stone over there, or put this number here, add 2 numbers, etc.
e.g. the 10 fingers machine. Its instruction set is: 0up, 0down, 1up, 1down, etc. (position your hands palms facing you. Left thumb is id 0. Right thumb is id 9. up means raise the finger, down means lower it.)
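The 10 fingers machine above can be sketched as a tiny interpreter. (A minimal sketch; the `run_fingers` name and the instruction tuples are an encoding i made up for illustration, not from any real ISA.)

```python
# toy ISA for the 10 fingers machine.
# each instruction raises or lowers one finger.
# state is 10 bits; left thumb is id 0, right thumb is id 9.
def run_fingers(program):
    fingers = [0] * 10          # all fingers down
    for op, fid in program:     # fetch and execute, one by one
        fingers[fid] = 1 if op == "up" else 0
    return fingers

# raise finger 0 and finger 9, then lower finger 0
print(run_fingers([("up", 0), ("up", 9), ("down", 0)]))
# → [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
```

Note the machine language knows nothing but up and down; any counting or arithmetic has to be built out of those primitives, just as with a real ISA.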
one example of Instruction Set Architecture (ISA), is the x86 instruction set.
Given an ISA, there can be many different implementations in hardware. The design of such an implementation is called microarchitecture. it involves the actual design and making of the circuits and electronics.
so now, there are 2 interesting things about designing a machine to run a computer language. 1 is the Instruction Set Architecture, and sub to it is the so-called microarchitecture, which is the actual design and building of the machine, with the constraints of electronics or physical reality.
for our interest in programming language design and its bondage to the design of the machine it runs on, the interesting thing is the Instruction Set Architecture, which is like the API to a machine.
i guess the design of a cpu has as much baggage from human habit as computer languages do. for languages, it has to wait till a generation dies off. For cpus, the thing is corporate power struggle to keep (their) design in use, and the hard barrier to entry. (of course, there is also backward compatibility, for both.)
so now when we see programming language speed comparisons, we can say, and it's technically true, that it's not because haskell or functional programming are slower than C, but rather, the cpus we use today are designed to run C-like instructions.
a good ram chip will sometimes flip a bit
a good ram chip will sometimes flip a bit, due to cosmic rays. This is partly why Error-correcting code memory (ECC memory) was invented. ECC is used in intel xeon chips. xeon is x86 compatible but with added niceties such as ecc, more cores, more ram cache, etc.
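The idea behind ECC can be sketched with the classic Hamming(7,4) code: 4 data bits get 3 parity bits, and recomputing the parities on read gives a "syndrome" that points directly at any single flipped bit. (A minimal sketch; the function names are mine, not from any ECC library, and real ECC RAM uses a wider SECDED code in hardware.)

```python
def hamming74_encode(d):
    # d: list of 4 data bits. Returns a 7-bit codeword,
    # positions 1..7 = p1 p2 d1 p4 d2 d3 d4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    # recompute parities; the syndrome is the 1-based position
    # of a single flipped bit (0 means no error detected)
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s4 * 4
    if pos:
        c[pos - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                   # cosmic ray flips one bit
print(hamming74_correct(word))  # → [1, 0, 1, 1], data recovered
```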
look into cpu designs, random notes
apparently, there is a completely different cpu design from the 1980s, called the Transputer: chips designed to be connected together for parallel computing. The language for it is occam. very nice. The language is more tightly tied to the cpu than C is to x86 chips.
In this foray into the relations of algorithm and machine, i learned quite a lot of bits. Many familiar terms: ISA, x86, XEON, CISC, RISC, MIPS, ECC RAM, superscalar, instruction level parallelism, FLOPS, i now understand in a coherent way.
here's a Wikipedia topic tree that gives you an overview of the ontology
All Software Needs to Be Rewritten, Again and Again
Before, whenever a new chip came into popular products, such as Apple switching to PowerPC in 1994, then to Intel in 2006, or the rise of ARM cpus in smart phones, i read that all software needs to be rewritten. I was always annoyed and shocked, and didn't know why.
because: an algorithm is fundamentally meaningful only to a specific machine. Note, new cpu designs are inevitable. That means, all software needs to be rewritten, again. (practically, it needs to be recompiled. the compilers need to be rewritten. This takes years to mature. And, new languages may be invented to better fit describing instructions for the new machine.)
dedicated hardware is a magnitude speedup
what if we don't rewrite, but instead rely on an emulation layer? A magnitude slower! That's why IBM's Deep Blue chess machine had its own dedicated chips, audio wants its own processor (DSP), Google invented its own chip for machine learning (the Tensor Processing Unit), and video games need a GPU.
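why is emulation a magnitude slower? Every guest instruction costs a whole fetch-decode-dispatch round trip on the host machine, instead of 1 native instruction. A toy sketch (the 2-instruction machine here is hypothetical, just to show the interpreter loop):

```python
# toy emulator: each guest instruction goes through a full
# fetch-decode-dispatch cycle, so a native "add" that takes
# 1 cpu instruction costs dozens of host instructions here
def run(program):
    regs = [0, 0]
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]      # fetch
        if op == "set":             # decode and dispatch
            regs[a] = b             # execute
        elif op == "add":
            regs[a] += regs[b]
        pc += 1
    return regs

# compute 2 + 3 on the toy machine
print(run([("set", 0, 2), ("set", 1, 3), ("add", 0, 1)]))
# → [5, 3]
```

The same overhead shows up in any general-purpose chip running a workload it wasn't shaped for, which is why dedicated hardware wins by a magnitude.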
I was playing Second Life (a 3D virtual world, like a 3d video game) around 2007 to 2010, and didn't have a dedicated graphics card. I thought, if i just turn all settings to minimal, i'll be fine. No matter what i tried, it was super slow, no comparison to anyone who just had a basic gpu.
[see Xah Second Life]
back in the 1990s, i saw some SGI (Silicon Graphics) machine demos of its 3d prowess. Or, in 1999 a gamer coworker was showing me his latest fancy graphics card with a bunch of jargon, something something texture, something something shadowing. But i asked him about doing 3D raytracing; turns out it had 0 speedup, took hours to render 1 picture. I was greatly puzzled. how is it possible, some fantastically expensive graphics machine, yet useless for rendering 3d??
Now i know. The tasks a GPU is designed to do are in general different from raytracing 3d scenes. Showing pre-recorded video (example: watching a movie on YouTube), rendering 3d video game scenes, drawing 2d windows on a computer screen (your browser), manipulating photos in photoshop, and lots of other computation that we all think of as graphics, are actually all very different in terms of creating dedicated hardware to run those algorithms.
See also: CUDA
interesting read [Every 7.8μs your computer's memory has a hiccup By Marek Majkowski. At cloudflare.com ]
cpu hidden instructions [Breaking the x86 ISA By Christopher Domas. At githubusercontent.com/xoreaxeaxeax ]
If you have a question, put $5 at patreon and message me.