Best-fit synthesis: coherent areas grow until they reach some insurmountable obstacle. There is a certain distribution of the insurmountability of obstacles, dependent on the norm used. This distribution gives rise to the distribution of chunk sizes seen. Considered as a system evolving over time, the pattern of behaviour is a kind of self-organizing criticality.
Ghost Diagrams: when it hits an unsolvable configuration, Ghost Diagrams destroys a surrounding area of random size. The distribution of this destruction can be set explicitly. So the form of criticality is explicitly imposed... but there is probably an optimum distribution, dependent on the tile set -- on how much long-range coherence it has -- and it may be possible to adapt the algorithm to produce a correct self-organizing criticality.
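A minimal sketch of what explicitly setting that destruction-size distribution could look like. The truncated power law and the parameter names (alpha, r_min, r_max) are my assumptions for illustration, not Ghost Diagrams' actual code or settings:

```python
import random

def destruction_radius(alpha=1.5, r_min=1, r_max=64, rng=random):
    """Sample a destruction radius from a truncated power law P(r) ~ r^-alpha.

    Uses inverse-transform sampling on the continuous density over
    [r_min, r_max], then rounds to a whole number of cells. All parameter
    values here are illustrative assumptions.
    """
    u = rng.random()
    a = 1.0 - alpha
    # Inverse CDF of p(r) proportional to r^-alpha on [r_min, r_max].
    r = (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)
    return round(r)

# Small destructions dominate; large ones are rare but do occur.
samples = [destruction_radius(rng=random.Random(0)) for _ in range(5)]
print(samples)
```

Varying alpha would be one concrete way to experiment with matching the imposed criticality to a tile set's long-range coherence.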
Linking this into the whole L1/L2 thing a bit more: mean vs. median. The median (the L1-minimizing estimate of the center of a distribution) can display self-organizing criticality as data builds up: it sometimes makes big leaps if the data source is bi-modal. The mean (the L2-minimizing estimate) does not.
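The median's leaps can be shown with a small deterministic example (the cluster positions and counts are illustrative): with the two modes nearly balanced, a few more points on one side drag the median across the gap in a single step, while the mean only creeps.

```python
import statistics

# Illustrative bimodal data: a lower cluster near 0.0, an upper near 10.0.
low = [0.01 * i for i in range(50)]          # 50 points: 0.00 .. 0.49
high = [10.0 + 0.01 * i for i in range(49)]  # 49 points: 10.00 .. 10.48
data = low + high

medians, means = [], []
for x in (10.5, 10.5, 10.5):  # data builds up on the upper mode
    data.append(x)
    medians.append(statistics.median(data))
    means.append(statistics.fmean(data))

print("medians:", medians)  # leaps across the gap between modes
print("means:  ", means)    # moves by only ~0.05 per new point
```

The median jumps by nearly the full distance between the modes as the balance tips; the mean's step size is bounded by roughly (gap / number of points) no matter what.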
There remains a certain chicken-and-egg question. Are all systems with self-organizing criticality related in some way to L1 -- is there some way of viewing every such system in which the L1 norm is explicit? Or is the self-organizing criticality itself the key aspect, with various ways in which it may arise?
Update 5/11/04: Reading Stuart Kauffman's "Investigations". A thought occurs: how many ways are there to link two edges in a Ghost Diagram separated by a certain space? It is something like a collection of Feynman diagrams describing all the ways in which a certain quantum transition could occur. Two edges a certain distance apart will have some probability of being linked within a certain time by a random process. In some tile-sets there will be a great many ways to link any two edges; in others the set of edges that may be linked will be somewhat complex.
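As a toy stand-in for this counting, here is a tally of nearest-neighbour walks of a given length between two points on a square grid -- each walk playing the role of one "diagram". This is only an analogy: a real tile set would prune the admissible connections far more heavily.

```python
from collections import defaultdict

def count_walks(dx, dy, n):
    """Count length-n nearest-neighbour walks on the square grid
    from (0, 0) to (dx, dy), by propagating a table of walk counts
    one step at a time (a simple dynamic program)."""
    counts = {(0, 0): 1}
    for _ in range(n):
        nxt = defaultdict(int)
        for (x, y), c in counts.items():
            for mx, my in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + mx, y + my)] += c
        counts = nxt
    return counts.get((dx, dy), 0)

# Two cells apart, linked in exactly two steps: only one way.
print(count_walks(2, 0, 2))  # -> 1
# Allow four steps and many more "diagrams" contribute.
print(count_walks(2, 0, 4))
```

The count grows quickly with the allowed path length, which is one way to make "some probability of being linked within a certain time" concrete: more admissible diagrams, higher linking probability.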