Costs of Computer Language Operations
documenting time+space costs of computer language core operations
In my recent study of the relation between a programming language and its machine [see Programing Language and Its Machine], I noticed that the cost of operations in a programming language is critical. That is, each operator or function should have documentation about its costs: a “CPU cycle” cost and a memory cost.
One primitive way to get the CPU and memory cost of each operation (core operator, statement, function) of a language is simply by profiling: run the operation, count CPU cycles, and measure memory footprint. The results are then normalized to a scale, say 1 to 10.

But note that the cost also depends on the compiler/implementation, and on the specific hardware. Still, it seems reasonable to have a “score” of time and space costs for each and every core operation of a language.
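A minimal sketch of the profiling idea, in Python. It times a handful of core operations with `timeit`, then normalizes the raw timings to a 1-to-10 score. The particular operations and the linear scaling are my own illustrative assumptions, not a standard; the numbers will of course vary by implementation and hardware, as noted above.

```python
# Sketch: profile a few core operations and normalize to a 1..10 score.
# The chosen ops and the linear scaling are illustrative assumptions.
import timeit

# candidate core operations, as code snippets
ops = {
    "int add": "3 + 4",
    "float mul": "3.1 * 4.2",
    "list index": "xs[5]",
    "dict lookup": "d['k']",
    "str concat": "s + 'x'",
}
setup = "xs = list(range(10)); d = {'k': 1}; s = 'abc'"

# raw time per operation, in seconds (1 million runs each)
raw = {name: timeit.timeit(code, setup=setup, number=1_000_000)
       for name, code in ops.items()}

# normalize: cheapest op scores 1, most expensive scores 10
lo, hi = min(raw.values()), max(raw.values())
scores = {name: 1 + round(9 * (t - lo) / (hi - lo))
          for name, t in raw.items()}

for name in ops:
    print(f"{name:12s} score {scores[name]:2d}")
```

A real version would also measure memory (e.g. with `tracemalloc`) and pin down the compiler/interpreter version and hardware alongside the scores.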
Explicit documentation of operation costs of computer languages has strong advantages. It makes all programmers painfully aware of the costs, and aware that these costs are inherent, not man-made complexity. It also makes it easy to estimate a program's efficiency.
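A toy illustration of that last point, with hypothetical cost scores of my own invention: once every core operation has a documented cost, a rough estimate of a program's cost is just the sum over the operations it performs.

```python
# Hypothetical per-operation CPU cost scores (illustrative numbers only).
cpu_cost = {"add": 1, "mul": 2, "array_index": 1, "func_call": 4, "alloc": 7}

# Operation trace of some hypothetical program.
program = ["alloc", "func_call", "array_index", "mul", "add", "add"]

# Estimated cost = sum of documented per-op costs.
total = sum(cpu_cost[op] for op in program)
print(total)  # → 16
```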
Sometimes when coding, no matter what you do, there always seems to be an inelegant extra variable, init, slot, assignment, push, copy…. When studying CPUs, e.g. superscalar, RISC, CISC…, you see lots of cycles wasted or memory copied in vain. So, “inelegance” seems INEVITABLE in algorithms.
Programing Languages and their Machines
If you have a question, put $5 at patreon and message me.