Archive of UserLand's first discussion group, started October 5, 1998.

Re: Heart Attack

Author: Paul Snively
Posted: 8/10/2000; 10:25:22 AM
Topic: Heart Attack
Msg #: 19634 (In response to 19631)
Prev/Next: 19633 / 19635

Jacob Levy, quoting Dave Winer: I want to live forever

Why set yourself up for such an ultimate disappointment? :)

It gets worse: for some definitions of "rational," it's far from clear that an agent who lived forever would be able to act rationally. In particular, a Decision-Theoretic agent that lived forever would have no basis for constructing a meaningful utility function for any of its actions (basically, the problem of "why do at any particular time what you can literally put off indefinitely?"), and without a meaningful utility function you can't reason Decision-Theoretically. There's a great sidebar on this issue in <http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=0131038052>. In practical terms, this means that someone developing a Decision-Theoretic agent of some kind has to factor "death" (the system stops working for some reason) into the agent's utility function ("it'd be good if I could do X before I die!").
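The deferral problem can be sketched numerically. This toy model is my own illustration, not from the post or the book it links: it treats "death" as a per-step survival probability, so the expected utility of doing an action k steps from now is the reward discounted by the chance of still being around. All names here are hypothetical.

```python
# Toy model: expected utility of deferring an action k steps, when the
# agent survives each step with probability survive_p.

def expected_utility_of_deferring(reward: float, survive_p: float, k: int) -> float:
    """Expected utility of performing an action k steps from now,
    given a per-step survival probability survive_p."""
    return reward * survive_p ** k

# A mortal agent (survive_p < 1): every deferral strictly lowers expected
# utility, so "do X before I die" has real force -- act sooner, not later.
mortal_now = expected_utility_of_deferring(10.0, 0.99, 0)      # 10.0
mortal_later = expected_utility_of_deferring(10.0, 0.99, 100)  # about 3.66

# An immortal agent (survive_p == 1): every deferral has identical expected
# utility, so the utility function gives no basis for choosing *when* to act.
immortal_now = expected_utility_of_deferring(10.0, 1.0, 0)
immortal_later = expected_utility_of_deferring(10.0, 1.0, 10**6)

assert mortal_now > mortal_later
assert immortal_now == immortal_later
```

With survive_p = 1 the discount factor vanishes and all timings tie, which is the sense in which an agent that lives forever loses its basis for ranking "now" over "indefinitely later."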


This page was archived on 6/13/2001; 4:56:02 PM.

© Copyright 1998-2001 UserLand Software, Inc.