Wednesday, February 20, 2008
"If you make something idiot-proof, the universe creates a better idiot."
Sunday, February 17, 2008
"... we began with the more basic question of how people choose to code multiple events such as a gain of $30 followed by a loss of $9. One approach we used was to ask people their preferences about temporal spacing. For two specified financial outcomes, we asked subjects who would be happier, someone who had these two events occur on the same day, or a week or two apart? The reasoning for this line of inquiry was that temporal separation would facilitate cognitive segregation. So if a subject wanted to segregate the outcomes x and y, he would prefer to have them occur on different days, whereas if he wanted to integrate them, he would prefer to have them occur together. A large majority of subjects thought temporal separation of gains produced more happiness. But [...] subjects thought separating losses was also a good idea. Why?
The intuition for the hypothesis that people would want to combine losses comes from the fact that the loss function displays diminishing sensitivity. Adding one loss to another should diminish its marginal impact. By wishing to spread out losses, subjects seem to be suggesting that they think that a prior loss makes them more sensitive towards subsequent losses, rather than the other way around. In other words, subjects are telling us that they are unable to simply add one loss to another (inside the value function parentheses). Instead, they feel that losses must be felt one by one, and that bearing one loss makes one more sensitive to the next.
There are two important implications of these results for mental accounting. First, we would expect mental accounting to be as hedonically efficient as possible. For example, we should expect that opportunities to combine losses with larger gains will be exploited wherever feasible. Second, loss aversion is even more important than the prospect theory value function would suggest, as it is difficult to combine losses to diminish their impact. This result suggests that we should expect to see that some of the discretion inherent in any accounting system will be used to avoid having to experience losses."
Thaler, Richard H. (1999) Mental accounting matters. Journal of Behavioral Decision Making 12:183-206.
I'm no economist, but this should be qualitatively comprehensible.
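The arithmetic behind Thaler's point is easy to make concrete. Here is a minimal sketch using the standard Tversky-Kahneman (1992) value function with their median parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25); these parameters come from that later paper, not from the quoted text, and are used here purely for illustration:

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 form).
# alpha < 1 gives diminishing sensitivity in both directions;
# lam > 1 makes losses loom larger than equivalent gains.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Two $15 losses felt separately hurt more than one combined $30 loss,
# so on paper people should prefer to integrate losses:
separate = value(-15) + value(-15)   # about -48.8
combined = value(-30)                # about -44.9

# Two $15 gains segregated feel better than one $30 gain,
# so gains should be savored one at a time:
seg_gains = value(15) + value(15)    # about 21.7
int_gain = value(30)                 # about 19.9

# And a small ($9) loss is best absorbed into a larger ($30) gain:
integrated = value(30 - 9)           # about 14.6
segregated = value(30) + value(-9)   # about 4.4
```

The puzzle in the quoted passage is exactly that subjects' stated preferences contradict the first computation: the curvature of the function says combined losses should hurt less, yet people say they would rather spread losses out.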
I re-read Dennett's Darwin's Dangerous Idea and no longer think that the chapter on utilitarianism is ludicrous. Probably because it was never about utilitarianism per se in the first place. Dennett gave an example of a very human decision-making situation right after he discussed utilitarianism, and thereafter trampolined his thoughts off that example instead of discussing utilitarianism further, as I'd first thought he would. The most interesting take-home message from that example is as follows:
The mistake that is sometimes made is to suppose that there is or must be a single (best or highest) perspective from which to assess ideal rationality. Does the ideally rational agent have the all-too-human problem of not being able to remember certain crucial considerations when they would be most telling, most effective in resolving a quandary? If we stipulate, as a theoretical simplification, that our imagined ideal agent is immune to such disorders, then we don't get to ask the question of what the ideal way might be to cope with them.
That paragraph in itself is not quite satisfying. I reproduce the preceding paragraphs for you to chew on (italics in the original):
The fundamentality of satisficing - the fact that it is the basic structure of all real decision-making, moral, prudential, economic, or even evolutionary - gives birth to a familiar and troubling slipperiness of claim that bedevils theory in several quarters. To begin with, notice that merely claiming that this structure is basic is not necessarily saying that it is best, but that conclusion is certainly invited - and inviting. We began this exploration, remember, by looking at a moral problem and trying to solve it: the problem of designing a good (justified, defensible, sound) candidate-evaluation process. Suppose we decide that the system we designed is about as good as it could be, given the constraints. A group of roughly rational agents - us - decide that this is the right way to design the process, and we have reasons for choosing the features we did.
Given this genealogy, we might muster the chutzpah to declare that this is optimal design - the best of all possible designs. This apparent arrogance might have been imputed to me as soon as I set the problem, for did I not propose to examine how anyone ought to make moral decisions by examining how we in fact make a particular moral decision? Who are we to set the pace? Well, who else should we trust? If we can't rely on our own good judgment, it seems we can't get started:
"Thus, what and how we do think is evidence for the principles of rationality, what and how we ought to think. This itself is a methodological principle of rationality; call it the Factunorm Principle. We are (implicitly) accepting the Factunorm Principle whenever we try to determine what or how we ought to think. For we must, in that very attempt, think. And unless we can think that what and how we do think there is correct - and thus is evidence for what and how we ought to think - we cannot determine what or how we ought to think. (Wertheimer 1974, pp. 110-111; see also Goodman 1965, p. 63)"
Optimality claims have a way of evaporating, however; it takes no chutzpah at all to make the modest admission that this was the best solution we could come up with, given our limitations. The mistake that is sometimes made is to suppose that there is or must be a single (best or highest) perspective from which to assess ideal rationality. Does the ideally rational agent have the all-too-human problem of not being able to remember certain crucial considerations when they would be most telling, most effective in resolving a quandary? If we stipulate, as a theoretical simplification, that our imagined ideal agent is immune to such disorders, then we don't get to ask the question of what the ideal way might be to cope with them.
And, two paragraphs down:
[regarding the Prisoner's Dilemma] What does the "ideally rational" player do? Perhaps, as some say, he sees the rationality in adopting the meta-strategy of turning himself into a less than ideally rational player - in order to cope with the less than ideally rational players he knows he is apt to face. But then in what sense is that new player less than ideally rational? It is a mistake to suppose this instability can be made to go away if we just think carefully enough about what ideal rationality is.
Gabriel has an interesting post on irrational agents.
Tuesday, February 12, 2008
Passage. And the creator's statement:
"It presents an entire life, from young adulthood through old age and death, in the span of five minutes. As you age in the game, your character moves closer and closer to the right edge of the screen. Upon reaching that edge, your character dies. [...]
The world in Passage is infinite. As you head east, you'll find an endless expanse of constantly-changing landscape, and you are rewarded for your exploration. However, even if you spent your entire lifetime exploring, you'd never have a chance to see everything that there is to see. If you spend your time plumbing the depths of the maze, however, you will only see a tiny fraction of the scenery.
You have the option of joining up with a spouse on your journey (if you missed her, she's in the far north near your original starting point). Once you team up with her, however, you must travel together, and you are not as agile as you were when you were single. Some rewards deep in the maze will no longer be reachable if you're with your spouse. You simply cannot fit through narrow paths when you are walking side-by-side. In fact, you will sometimes find yourself standing right next to a treasure chest, yet unable to open it, and the only thing standing in your way will be your spouse. On the other hand, exploring the world is more enjoyable with a companion, and you'll reap a larger reward from exploration if she's along. When she dies, though, your grief will slow you down considerably.
As I said before, there's no right way to play this game. Part of the goal, in fact, is to get you to reflect on the choices that you make while playing. The rewards in Passage come in the form of points added to your score, and you have two options for scoring points: treasure chests, which give 100 points for each hit, and exploration, which gives double-points if you walk with your spouse. There's a pretty tight balance between these two options---there's no optimal choice between the two.
Yes, you could spend your five minutes trying to accumulate as many points as possible, but in the end, death is still coming for you. Your score looks pretty meaningless hovering there above your little tombstone. This treatment of character death stands in stark contrast with the way death is commonly used in video games (where you die countless times during a given game and emerge victorious---and still alive---in the end). Passage is a game in which you die only once, at the very end, and you are powerless to stave off this inevitable loss."
Sunday, February 10, 2008
Of the two colleagues I spend the most time with, one chooses conversation topics of no interest to me whatsoever, and the other has the empathy of a sharp rock. I shouldn't be spending time with people who make me feel this way. At all. Colleague relationships notwithstanding.
And I would rant more about Chinese New Year, but right now I'm just too tired. Too tired of being ignored, of being expected to do things I cannot do well, and of being accommodating to people's terrible, ill-thought-out wants. Far too tired.