Saturday, July 11, 2009

Nozick's experience machine

Robert Nozick holds that people want real experiences, not merely subjective ones.

I used to think he was entirely mistaken: the machine just sounds bad, the answer depends on how you phrase the question, and most people cannot realistically understand the question anyway.

Now I have two more points.
1) In line with the idea that many of our intuitions are rational as general decision rules (Gigerenzer's "ecological rationality" and "simple heuristics that make us smart"), possibly we have a strong intuition against false deals and fake stuff. The whole experience machine sounds like cheating. Taken to its logical conclusion, it is a good deal when taken seriously, but most people follow their thinking rules, which yield a strong no. Were they able to truly comprehend it, they would have liked it.


2) Goals are another tricky issue. Do we want to feel good, or do we want status, sex, and food?
It sounds contradictory, but human motivation need not be defined in terms of feeling good. Humans seem in fact to have multiple motivations, and not all of them are always compatible; nor can we say that people know how to trade off between different goals.
If the evolutionarily embedded goals are specific, we can understand why people refuse the experience machine: we have in mind the embedded goal of food, not of its experience.

Does this imply that it is not experience that matters? Surely not. It is quite possible that, in a sense, people prefer the good experience to the real stuff.
In a sense, the desire to feel good has better rational justifications, and is more coherent when looking at life as a whole, than taking the various goals themselves seriously.


[P.S. Goals as means: sometimes these goals bring the good life indirectly, as when being immersed in an "important" task makes a person happy. One must truly believe in the essential value of the goal in order to have the experience.

If goals are means to the good life, then they will tell us to refuse the experience machine, but this will be misleading, since they are only means and the experience inside the machine is the very end.]



Some will say this is no good, since we need a way to determine what people want. How will we determine preferences, wants, and goals if people are so unreliable?
The question is ill-posed. It is very possible that we simply have no way to ascertain what people want. Maybe people themselves do not know.

This does not contradict the idea that we should use certain rules for practical purposes: even without knowing "really" and "for sure", we may adopt the rules that seem best for us.

But I believe that if we want to know what people want, we have to look at the variety of people's actions AND their related thoughts: analyzing actions and preferences in light of what people think and know when they decide and act. This is very subtle and unreliable. Smart skeptics will say it is impossible. But we do get a glimpse.

2 comments:

Anonymous said...

This post points to the essential question of how one recognizes nonconscious needs and motives. The ability to think about and plan for the future allows people to pursue goals. However, if those goals come into conflict with nonconscious needs and motives, pursuing the former may become problematic. If one accepts that conscious processes through which goals are constructed develop out of nonconscious processes, then the potential for conflict arises in selecting an experience via Nozick's experience machine. In other words, the consciously selected experience may run counter to nonconscious motives. In some regard this problem appears akin to that of optimisation, wherein optimising one variable or set of variables may be ecologically irrational.

One other point on Nozick's experience machine: If one expressly retains agency over when to unplug the machine, in the midst of the experience, then the problem is partially mitigated. Giving up even an illusion of control may not feel right at some nonconscious level.

Jazi Zilber said...

An objection to the experience machine coming from nonconscious processes is the crux of the problem.

Can we say that these nonconscious motives are reliable? Or are we better off seeing them as intuitions about preferences that mislead us, because ultimately we want only to feel good?

Hard to decide. But in my heart I tend toward seeing all these preferences as means to the end of feeling good. (That is, not descriptively, since they are partly there as a given, regardless of my analysis, but rather something like normatively.)