Problem with Occam's razor




(Just listened to a talk by Prof. Alan Hájek on paradoxes in decision theory, which inspired this entry.)

Occam's razor seems necessary if there is an unbounded list of possible models: if every model were assigned more than some fixed probability, the total probability would be infinite. The only sensible thing to do seems to be to assign probability as a decreasing function of a model's description length.
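As a minimal sketch of how this can work (my formalisation, not from the talk): if model $M$ has a description of length $\ell(M)$ bits in some prefix-free coding, one can take

$$ P(M) \propto 2^{-\ell(M)}, \qquad \sum_{M} 2^{-\ell(M)} \le 1 $$

where the bound is Kraft's inequality for prefix-free codes, so the prior normalises even over an unbounded list of models.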

It's a nice and useful principle for coming up with priors, but it's not enough...

Suppose there is some sequence of models, each predicting a utility for you, whose magnitudes increase faster than the models' probabilities decrease. Then, even though you can assign a definite probability to each possible model, you cannot compute your expected utility: the sum diverges. And you can't sensibly make choices that maximise your utility if you can't compute it.
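Here is a concrete (invented) instance of such a sequence. Let model $M_n$ have prior probability $2^{-n}$ and predict utility $3^n$ for some action. Each probability is perfectly well defined, yet

$$ \mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 3^{n} = \sum_{n=1}^{\infty} \left(\tfrac{3}{2}\right)^{n} = \infty, $$

so the expected utility of that action diverges, much as in the St. Petersburg paradox.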

For example, people who assign non-zero probability to a perfect (infinite-utility) heaven or hell do all kinds of crazy stuff (compare Pascal's wager).

I would guess that such sequences will arise in any sufficiently general assignment of probabilities to models.




Uh-oh.




This would seem to require a further, and possibly quite strict, limitation on sensible a priori beliefs in terms of the utilities they predict.



