Temporal discounting (also known as delay discounting or time discounting) [12] is the tendency of people to discount rewards as they approach a temporal horizon in the future or the past (i.e., become so distant in time that they cease to be valuable or to have additive effects). To put it another way, it is the tendency to give greater value to rewards as they move away from their temporal horizons and toward the present.
The term present bias was coined in the second half of the 20th century. In the 1930s, economic research began investigating time preferences. The findings led to the model of exponential discounting, and thus to time-consistent discounting. However, later research concluded that time preferences were in fact not consistent, but inconsistent over time.
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law", which states that when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. [8]
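One common formalization (an illustrative sketch; the notation, and the hyperbolic value function it implies, follow later work on the matching law rather than Herrnstein's original statement): for two reward sources with reinforcement rates R1 and R2, amounts A1 and A2, and delays D1 and D2, the matching law predicts that behavior is allocated as

    B1/B2 = (R1/R2) · (A1/A2) · (D2/D1)

Applied to a single reward of amount A at delay D, the inverse dependence on delay corresponds to a hyperbolic value function V = A / (1 + k·D), where k is a subject-specific discounting parameter.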
Exponential discounting yields time-consistent preferences. Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker's selves will agree with the choices made by each self. Any decision that the individual makes for himself in advance will ...
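To make the contrast concrete, here is a minimal Python sketch (the amounts, delays, and the parameter values beta and k are illustrative assumptions, not taken from the source). It shows that an exponential discounter's ranking of a smaller-sooner versus a larger-later reward does not reverse as time passes, while a hyperbolic discounter's ranking can flip:

def exp_value(amount, delay, beta=0.9):
    # Exponential discounting: V = beta**delay * amount
    return beta ** delay * amount

def hyp_value(amount, delay, k=1.0):
    # Hyperbolic discounting: V = amount / (1 + k*delay)
    return amount / (1 + k * delay)

small, large = 50, 100  # hypothetical reward amounts
for label, value in (("exponential", exp_value), ("hyperbolic", hyp_value)):
    # Viewed from far away: small arrives in 10 periods, large in 12.
    prefers_large_early = value(small, 10) < value(large, 12)
    # Ten periods later: small is immediate, large arrives in 2 periods.
    prefers_large_late = value(small, 0) < value(large, 2)
    print(label, prefers_large_early, prefers_large_late)

# exponential: True, True  (the ranking is stable as time passes)
# hyperbolic:  True, False (the preference reverses toward the
#                           smaller, sooner reward once it is imminent)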
The pragmatic factor usually results in a "pure time preference" term in the social discount rate: the view that a pleasurable experience at a given date is intrinsically more valuable than the exact same experience at a later date, and that the life of a person born sooner has more intrinsic value than the life of a person born later.
It is calculated as the present discounted value of future utility, and for people with a time preference for sooner rather than later gratification, it is less than the future utility. The utility of an event x occurring at future time t under utility function u, discounted back to the present (time 0) using discount factor β, is β^t u(x).
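As a minimal numeric sketch of this calculation (the utility stream and the value of β are assumptions for illustration):

def present_value(utilities, beta=0.95):
    # Present discounted value of a utility stream u_0, u_1, ..., u_T:
    # the sum over t of beta**t * u_t.
    return sum(beta ** t * u for t, u in enumerate(utilities))

print(present_value([10, 10, 10]))  # 10 + 9.5 + 9.025 = 28.525

With β < 1 the result (28.525) is smaller than the undiscounted sum (30), matching the claim above that present discounted value falls short of future utility.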
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information. [1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.
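As one concrete sketch of learning from pairwise comparisons, the snippet below fits a Bradley-Terry style logistic model by gradient ascent (the model choice, data, and hyperparameters are illustrative assumptions, not a specific method from the source):

import math
import random

# Items have a latent "true" utility; we observe only pairwise
# preferences and fit per-item scores s so that
# P(i preferred over j) = sigmoid(s[i] - s[j]).
random.seed(0)
true_utility = [0.0, 1.0, 2.0, 3.0]
pairs = []
for _ in range(500):
    i, j = random.sample(range(4), 2)
    p = 1 / (1 + math.exp(-(true_utility[i] - true_utility[j])))
    pairs.append((i, j) if random.random() < p else (j, i))

scores = [0.0] * 4
lr = 0.05
for _ in range(200):  # gradient ascent on the log-likelihood
    for winner, loser in pairs:
        p = 1 / (1 + math.exp(-(scores[winner] - scores[loser])))
        scores[winner] += lr * (1 - p)
        scores[loser] -= lr * (1 - p)

# The fitted scores should rank the items in the same order
# as their true utilities.
print([round(s, 2) for s in scores])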