A consequence of computation being easy in this universe: a random stimulus applied to the universe will provoke a random form of computation, and the response will then depend on the outcome of that computation. So in general the universe will respond in a very scale-free (alpha=1) manner.
I noted in my last post that Wolfram distinguishes between brute-force basic science and rather more structured engineering. Here is an interpretation of that using the above:
- Brute-force basic science is like poking at the universe (or the symbolic systems that underlie all possible universes, or idea-evaluation modules in the scientist's own mind) in a rather uniform manner. The number of times any particular area of the universe is poked is rather Gaussian (alpha=2).
- Engineering uses the catalogue of responses created by brute-force science to achieve specific ends. The tendency will be to poke at any particular part of the universe in inverse proportion to the response that part produces, in order to elicit a roughly uniform response from the universe. Therefore, engineers poke at the universe rather more selectively, in a way that might be characterized as having lower alpha.
But we might expect that, having elicited whatever response from the universe, people will pay attention to it in simple proportion to that response. So the basic scientists will live in a rather exciting universe and thus display diverse levels of excitation, whereas the engineers will have engineered for themselves a quite predictable universe and will therefore have a rather more consistent level of excitation overall. (On further thought, I am probably talking from my posterior here. Attention and excitement patterns will probably have exactly the same distribution as "poking".)
... this is all just words though, and I don't trust words. If I can plug some maths into it I might trust it better... A first approximation that I can pin some maths to is that an engineering-type search algorithm would adapt quickly to a ridge in a fitness landscape, whereas a basic-science search algorithm would spend much of its time jumping off the ridge in the hope that it was just a local feature.
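A minimal sketch of that distinction, on a made-up one-dimensional landscape (the function names, step sizes and jump probability are all my own invention for illustration, not anything from Ghost Diagrams):

```python
import random

def fitness(x):
    # hypothetical landscape: a single broad ridge peaking at x = 0
    return -abs(x)

def engineer_search(x, steps=200, seed=0):
    # engineering-type search: small local moves, keep any improvement,
    # so it climbs the ridge quickly and then stays on it
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.1)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

def scientist_search(x, steps=200, seed=0):
    # basic-science-type search: frequently leap far off the ridge,
    # checking whether it was merely a local feature
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        if rng.random() < 0.3:
            candidate = rng.uniform(-10.0, 10.0)   # big exploratory jump
        else:
            candidate = x + rng.gauss(0, 0.1)      # small local move
        if fitness(candidate) >= fitness(x):
            x = candidate
        if fitness(x) > fitness(best):
            best = x
    return best
```

Both start from the same point; the engineer converges on the nearest ridge and refines it, while the scientist keeps gambling that a better ridge exists somewhere else.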
Later... Ah, I think I can implement this in Ghost Diagrams. The scientist enumerates all small blocks of tiles, then the engineer assembles them into a larger structure. The scientist produces a representative sampling of small assemblages, the engineer produces a heavily biased sampling of large assemblages.
Currently Ghost Diagrams only implements the engineer. The algorithm I arrived at after some fiddling has a power-law backtrack mechanism. This fits the above theory.
Why are engineers biased? Well, if the engineer backtracks less than 100%, assemblages similar to a failed assemblage get a slight boost to their probability. If he instead threw out his entire assemblage the moment he found an unsatisfiable condition, he would eventually produce an unbiased sample assemblage. But this would take an unreasonably long time, and a biased sample is better than none at all.
(Another source of bias would be if the assembler were to produce only a fragment of a larger (possibly unbounded) assemblage. It might turn out that there was no way to construct such a larger assemblage, or that there were unrepresentatively few or many ways to do so.)
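A power-law backtrack can be sketched by inverse-CDF sampling of a Pareto-like depth; this is just an illustration of the idea, not the actual Ghost Diagrams code:

```python
import random

def backtrack_depth(rng, alpha=1.0, max_depth=1000):
    # draw a depth d >= 1 with P(depth >= d) ~ d**(-alpha):
    # usually a tiny retreat, occasionally throwing out nearly everything
    u = rng.random()                        # uniform in [0, 1)
    d = int((1.0 - u) ** (-1.0 / alpha))    # inverse of the tail CDF
    return min(d, max_depth)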
This connects with my recent rant on Occam's razor and bias.
And now I understand everything, for this predicts that while autistic people will produce ideas of fairly strictly bounded size (the likelihood of an idea of a certain size perhaps decaying exponentially), they will display abrupt shifts in attention. For example, as it relates to eye movement, they will investigate some hypothesis until they find a single mismatch to it, throw it out, and begin to consider some almost completely unrelated hypothesis, in the process likely displaying an abrupt saccade. On further reflection, no, this is too much of a stretch.
(One final piece and I'm done, honest... there is one trick a scientist has up her sleeve upon meeting a contradiction: she may mutate the assemblage, so long as she does so in such a way that she is as likely to mutate it from A to B as from B to A. (This condition excludes backtracking, as many, many A's map directly to a single backtracked B, but a corresponding move from B to one of the A's cannot occur directly.) Metropolis-Hastings is an instance of this. So essentially, to remain unbiased one must either backtrack almost totally or only a little, but not some medium amount. That is a rather good description of a low-alpha Lévy stable distribution. ... Also, a preference for doing the Metropolis walk will look like obsessively focussed behaviour -- it is well known that the Metropolis-Hastings algorithm will often get stuck in some local region for an unreasonably long time, especially in high dimensions -- while a preference for huge backtracks will look ADHDish and in the extreme case possibly even epileptic.)
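The symmetric-mutation trick can be sketched as a bare-bones Metropolis sampler; the integer state and the geometric "score" here are arbitrary stand-ins I've chosen for illustration, nothing to do with tile assemblages:

```python
import random

def metropolis_step(x, score, rng):
    # symmetric mutation: x -> x+1 and x+1 -> x are proposed with equal
    # probability, which is exactly the A-to-B = B-to-A condition above
    candidate = x + rng.choice((-1, 1))
    # accept with probability min(1, score(candidate) / score(x))
    if rng.random() < score(candidate) / score(x):
        return candidate
    return x

def score(x):
    # arbitrary target distribution: geometric falloff away from 0
    return 0.5 ** abs(x)

rng = random.Random(0)
x, samples = 0, []
for _ in range(50000):
    x = metropolis_step(x, score, rng)
    samples.append(x)
```

Because every accepted move is local, the chain really can dawdle in one small region for a long stretch, which is the "obsessively focussed" end of the spectrum described above.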
Sigh, one final note: my eye movement simulation (I did a blog entry on it some time ago) used 1<alpha<2 for saccade sizes. This is probably incorrect. I expect it should be something like 0.5<alpha<1. After some algorithm tweaking on Ghost Diagrams, this is looking like a reasonable range for it also. ... Or just plain larger alpha for autistic people. Which would be much simpler really.
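For what it's worth, saccade sizes with 0.5<alpha<1 can be drawn with the Chambers-Mallows-Stuck method for symmetric alpha-stable variates (valid for alpha != 1); the scale is arbitrary and the choice of alpha=0.75 below is just a point in the suggested range:

```python
import math
import random

def stable_sample(alpha, rng):
    # Chambers-Mallows-Stuck sampler for a symmetric alpha-stable
    # variate (alpha != 1); heavy tails with P(|X| > t) ~ t**(-alpha)
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

rng = random.Random(2)
jumps = [abs(stable_sample(0.75, rng)) for _ in range(10000)]
```

Most jumps come out modest, but the tail routinely produces enormous ones; pushing alpha lower makes the giant leaps more common, which is the "huge backtrack" regime, while larger alpha tames them.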