This leads quite naturally to a "selfish" theory of how people act:
- People act so as to maximize their happiness in the future.
There are problems with this way of looking at it though. People become unconscious if knocked on the head or given a general anaesthetic. We become unconscious every night when we sleep. So our stream of consciousness has gaps in it. Maybe it isn't the fundamental unit after all.
What if the fundamental unit of consciousness is an event in space-time? We might say a consciousness event comprises our feelings at this instant, and the things that are in our immediate short term memory. In this ontology, time is just another dimension, and continuity in time is given no great importance.
Then our stream of consciousness just becomes a set of events (possibly infinite) along a time-like curve. Note the similarity to the Sparrowfall state machine model (see the Sparrowfall Design Notes). Gaps in the stream make much more sense now.
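As a minimal sketch of this ontology (the names here, like `ConsciousEvent`, are my own illustration, not the actual Sparrowfall model), a stream of consciousness can be represented as an unordered set of time-stamped events. Gaps such as sleep then need no special treatment: they are just intervals containing no events.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsciousEvent:
    """One conscious event: a point on a time-like curve."""
    t: float                  # time coordinate
    feelings: str             # affective state at this instant
    short_term_memory: tuple  # immediate short-term memory contents

# A "stream of consciousness" is just a set of events; continuity in
# time is given no special importance.
stream = {
    ConsciousEvent(0.0, "calm", ("woke up",)),
    ConsciousEvent(1.0, "happy", ("woke up", "coffee")),
    # asleep (no events) until t = 16
    ConsciousEvent(16.0, "groggy", ("alarm",)),
}

def gaps(events, threshold=1.5):
    """Return the intervals longer than threshold containing no events."""
    ts = sorted(e.t for e in events)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > threshold]

print(gaps(stream))  # [(1.0, 16.0)] -- the sleep gap
```

Nothing in this representation privileges an unbroken stream, which is the point: the event, not the continuous stream, is the unit.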
Under this model, how do people (person-events) act? An event is a fixed thing; it cannot do anything to make itself happier. This seems to leave only one reasonable possibility:
- A person-event will try to make future happy person-events, and prevent future sad person-events.
Doing this usually involves coordinated action with past and future person-events, since a single conscious event might only be able to initiate the barest twitch of a muscle. Future person-events might be associated with other people, not just with the person of the event that is acting. So selfishness is unnatural under this model: by default, people will act altruistically.
Why, then, do people act selfishly? Maybe because people understand themselves best, and are therefore most expert at making themselves happy. They might not realize the consequences of an action that makes others unhappy. This explanation also leads us to expect people to do things that make the people they know well happy (friends, family, people of the same culture) -- which is what people actually do.
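This explanation can be made concrete with a toy decision rule (entirely illustrative; the function name and numbers are my own, not part of any stated model): a person-event scores an action by the predicted happiness of the future person-events it creates, weighted by how certain the prediction is. There is no "self" term anywhere; apparently selfish behaviour falls out of the fact that predictions about one's own future events are more certain.

```python
def value(action, future_events):
    """Score an action by the total predicted happiness of the future
    person-events it brings about -- whoever those events belong to.

    future_events: list of (predicted_happiness, certainty) pairs.
    """
    return sum(h * c for h, c in future_events)

# You understand yourself best, so predictions about your own future
# events are more certain than predictions about a stranger's.
own_future      = [(8.0, 0.75)]  # modest payoff, confident prediction
strangers_future = [(10.0, 0.5)] # bigger payoff, uncertain prediction

print(value("help self", own_future))        # 6.0
print(value("help stranger", strangers_future))  # 5.0
```

Raising the certainty of the second prediction (knowing the other person well) flips the preference, matching the observation that people act to benefit friends and family, not only themselves.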
So this model allows for some things that are hard to explain otherwise: altruism, sleep, gaps in consciousness. It also leads to a natural way to model people mathematically, the Sparrowfall model. So I think it is preferable to the sentient-being-as-unit-of-consciousness model.
We can even use this model to think sensibly about weird stuff that's going to happen close to the singularity:
- Copying a person's mind
- Joining and leaving group minds
Is it possible to think in these terms, to not distinguish between yourself and others? Possibly. I feel like I've come close a few times, but it's an utterly alien mindset. Our whole culture and language are tilted against it. But it's certainly something worth attempting.