In behavioral economics, time preference (or time discounting, [1] delay discounting, temporal discounting, [2] long-term orientation [3]) is the current relative valuation placed on receiving a good at an earlier date compared with receiving it at a later date. [1] These preferences have applications in finance, health, and climate change.
Hyperbolic discounting is the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Hyperbolic discounting leads to choices that are inconsistent over time: people make choices today that their future selves would prefer not to have made, despite using the same reasoning. [52]
Exponential discounting yields time-consistent preferences. Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker's selves will agree with the choices made by each self. Any decision that the individual makes for himself in advance will ...
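The contrast is easiest to see numerically. Below is a minimal Python sketch (not from the sources above; the one-parameter forms δ^t and 1/(1 + kt) and all parameter values are illustrative assumptions) showing that an exponential discounter ranks two delayed rewards the same way at every point in time, while a hyperbolic discounter's ranking can reverse as the rewards draw near:

```python
# Minimal sketch comparing exponential and hyperbolic discounting.
# Discount forms and parameters (delta, k, amounts, delays) are assumptions
# chosen for illustration, not values taken from the sources above.

def exponential_discount(delay, delta=0.9):
    """Exponential discount factor: a constant per-period discount rate."""
    return delta ** delay

def hyperbolic_discount(delay, k=0.5):
    """Hyperbolic discount factor: discounting slows as the delay grows."""
    return 1.0 / (1.0 + k * delay)

def present_value(amount, delay, discount_fn):
    """Value of a reward of `amount` received after `delay` periods."""
    return amount * discount_fn(delay)

# Choice: 100 at t=10 vs 110 at t=11, evaluated today (t=0) and again at t=10.
for discount_fn in (exponential_discount, hyperbolic_discount):
    now_small   = present_value(100, 10, discount_fn)  # viewed from t=0
    now_large   = present_value(110, 11, discount_fn)
    later_small = present_value(100, 0, discount_fn)   # viewed from t=10
    later_large = present_value(110, 1, discount_fn)
    print(discount_fn.__name__,
          "| at t=0 prefers larger-later:", now_large > now_small,
          "| at t=10 prefers larger-later:", later_large > later_small)
```

With these assumed parameters, the exponential agent ranks the two rewards the same way at both evaluation times, while the hyperbolic agent's ranking flips once the smaller reward becomes immediate, which is exactly the kind of time-inconsistent choice described above.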
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law", which states that when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. [8]
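As a rough sketch of that connection, one common textbook form of the concatenated matching law can be written as follows (the notation below is an illustrative assumption, not necessarily Herrnstein's original formulation):

```latex
% Concatenated matching law (illustrative notation):
%   B_i = behavior allocated to source i, R_i = rate of reward,
%   A_i = amount of reward, D_i = delay of reward.
\[
  \frac{B_1}{B_2} \;=\; \frac{R_1}{R_2}\cdot\frac{A_1}{A_2}\cdot\frac{D_2}{D_1}
\]
% The implied value of a single delayed reward, V \propto A / D,
% falls off as 1/D, i.e. hyperbolically in the delay.
```

Reading the value of a single reward off this relation gives a value that declines in inverse proportion to its delay, and that 1/D fall-off is the sense in which hyperbolic discounting is implicit in the matching law.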
The term "present bias" was coined in the second half of the 20th century. In the 1930s, economic research began investigating time preferences. The findings led to the model of exponential discounting, and thus to time-consistent discounting. However, later research led to the conclusion that time preferences were indeed not consistent, but ...
Discounted utility is calculated as the present discounted value of future utility, and for people with a time preference for sooner rather than later gratification, it is less than the future utility. The utility of an event x occurring at future time t under utility function u, discounted back to the present (time 0) using discount factor β, is β^t u(x).
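For instance (illustrative numbers, not from the source): with β = 0.9, t = 2, and u(x) = 100, the present discounted utility is 0.9² × 100 = 81, which is indeed less than the undiscounted future utility of 100.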
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information. [1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.
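As a concrete illustration of learning from pairwise comparisons, here is a minimal Python sketch (not from the source) that fits a linear scoring function by logistic regression on feature differences, in the spirit of a Bradley-Terry style model; the feature vectors, dimensions, and parameters are all assumptions made for the example:

```python
import numpy as np

# Sketch of pairwise preference learning: learn weights w so that preferred
# items score higher than their alternatives. Synthetic data and all settings
# below are illustrative assumptions.

rng = np.random.default_rng(0)

def fit_pairwise(preferred, other, lr=0.1, steps=500):
    """Fit w so that w @ preferred[i] > w @ other[i] for each comparison."""
    diffs = preferred - other                       # one row per comparison
    w = np.zeros(diffs.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-diffs @ w))        # P(preferred item wins | w)
        w += lr * diffs.T @ (1.0 - p) / len(diffs)  # average log-likelihood gradient
    return w

# Toy data: 200 item pairs with 3 features; true preferences follow w_true.
w_true = np.array([2.0, -1.0, 0.5])
a = rng.normal(size=(200, 3))
b = rng.normal(size=(200, 3))
prefers_a = (a - b) @ w_true > 0
preferred = np.where(prefers_a[:, None], a, b)
other     = np.where(prefers_a[:, None], b, a)

w_hat = fit_pairwise(preferred, other)
print("learned weights:", np.round(w_hat, 2))
```

The learned weight vector should roughly align (up to scale) with the vector used to generate the synthetic preferences; with real data, the comparison pairs would instead come from observed choices or rankings.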
Intuitively, it seems odd that the welfare of an 80-year-old born in 1970 is intrinsically superior to the welfare of an 80-year-old born in 1980; in the context of social (rather than private) discount rates, when asked for their preferences over the welfare of others, most people's apparent "pure time preferences" become smaller or even ...