In behavioral economics, time preference (or time discounting, [1] delay discounting, temporal discounting, [2] long-term orientation [3]) is the current relative valuation placed on receiving a good at an earlier date compared with receiving it at a later date. [1] Applications of these preferences include finance, health, and climate change.
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law", which states that when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. [8]
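The matching law above can be sketched in a few lines. This is one illustrative reading, assuming each reward source is summarized by a rate, a size, and a delay, with effort allocated in proportion to rate × size / delay; the function and variable names are this sketch's own, not Herrnstein's notation.

```python
def matching_shares(sources):
    """Allocate effort across reward sources per a matching-law reading:
    weight each source by rate * size / delay, then normalize to shares."""
    weights = [rate * size / delay for rate, size, delay in sources]
    total = sum(weights)
    return [w / total for w in weights]

# Two sources identical except that the first pays out at twice the rate:
shares = matching_shares([(2.0, 1.0, 1.0), (1.0, 1.0, 1.0)])
# The matching prediction is a 2:1 split of time or effort.
```

Doubling a source's delay halves its weight, mirroring the "inverse proportion to their delays" clause in the text.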
Exponential discounting yields time-consistent preferences. Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker's selves will agree with the choices made by each self. Any decision that the individual makes for himself in advance will ...
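The time-consistency property can be checked numerically: under exponential discounting, shifting both options by a common delay multiplies both present values by the same factor, so the ranking never flips. A minimal sketch, with the discount factor beta = 0.9 chosen purely for illustration:

```python
def exponential(value, delay, beta=0.9):
    """Present value of `value` received after `delay` periods: beta**delay * value."""
    return beta ** delay * value

# $50 soon vs $100 five periods later, evaluated from two vantage points:
today   = (exponential(50, 0),  exponential(100, 5))
earlier = (exponential(50, 10), exponential(100, 15))
# Both comparisons pick the same option: the earlier self agrees with the later self.
```

Because the shift contributes a common factor beta**10 to both options, every "self" ranks the pair identically, which is exactly the agreement-across-selves claim in the text.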
Prices can increase over time; increasing the number of periods can decrease efficiency. Grossman and Perry [4] study sequential bargaining between a buyer and a seller over an item's price, where the buyer knows the gains-from-trade but the seller does not. They consider an infinite-turn game with time discounting.
Hyperbolic discounting is the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Hyperbolic discounting leads to choices that are inconsistent over time—people make choices today that their future selves would prefer not to have made, despite using the same reasoning. [52]
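The preference reversal described above can be reproduced with Mazur's hyperbolic form V = A / (1 + k·delay); the discount rate k = 1.0 here is an illustrative choice, not an empirical estimate:

```python
def hyperbolic(value, delay, k=1.0):
    """Hyperbolically discounted value (Mazur form): value / (1 + k * delay)."""
    return value / (1.0 + k * delay)

# $50 at some delay d vs $100 at delay d + 5.
# Up close (d = 0), the smaller-sooner reward dominates:
near_small, near_large = hyperbolic(50, 0), hyperbolic(100, 5)     # 50.0 vs ~16.7
# Far in advance (d = 10), the larger-later reward dominates:
far_small, far_large = hyperbolic(50, 10), hyperbolic(100, 15)     # ~4.5 vs 6.25
```

The ranking flips as the choice pair approaches, which is the time inconsistency the text describes: today's self picks the immediate $50 that the advance-planning self would have passed up.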
It is calculated as the present discounted value of future utility, and for people with time preference for sooner rather than later gratification, it is less than the future utility. The utility of an event x occurring at future time t under utility function u, discounted back to the present (time 0) using discount factor β, is β^t · u(x).
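The present-discounted-utility calculation can be sketched directly. The choices of log utility and beta = 0.9 below are illustrative assumptions, not part of the definition:

```python
import math

def discounted_utility(x, t, beta=0.9, u=math.log):
    """Utility of outcome x at future time t, discounted to time 0: beta**t * u(x)."""
    return beta ** t * u(x)

# For beta < 1, the same reward is worth less the further away it sits:
now = discounted_utility(100.0, t=0)    # undiscounted u(100)
later = discounted_utility(100.0, t=5)  # scaled down by beta**5
```

At t = 0 the discount factor is beta**0 = 1, so present utility equals undiscounted utility, matching the "less than the future utility" claim for any t > 0.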
In real-world situations, "discounting makes sense because of the inherent uncertainty of future payoffs". [55] One study looked at how reward discounting is context specific. [18] By varying the time and space between small and large rewards, they were able to test how these factors affected the decision making in tamarins and marmosets ...
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information. [1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.