Temporal discounting (also known as delay discounting or time discounting) [12] is the tendency of people to discount rewards as they approach a temporal horizon in the future or the past (i.e., become so distant in time that they cease to be valuable or to have additive effects). To put it another way, it is a tendency to give greater value to rewards received sooner than to those received later.
An abstract, high-level construal of an activity (e.g., "learning to speak French") may lead to a more positive evaluation of that activity than a concrete, low-level construal (e.g., "learning to conjugate the irregular French verb 'avoir'"). Thus, CLT predicts that we will think about the value of the low-level construals when evaluating an activity in the near future.
Given two similar rewards, humans show a preference for the one that arrives in the more prompt timeframe. Humans are said to discount the value of the later reward by a factor that increases with the length of the delay. In the financial world, this process is normally modeled in the form of exponential discounting, a time-consistent model of discounting.
It is calculated as the present discounted value of future utility, and for people with a time preference for sooner rather than later gratification, it is less than the future utility. The utility of an event x occurring at future time t under utility function u, discounted back to the present (time 0) using discount factor β, is β^t u(x).
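The present-discounted-value calculation described above can be sketched in a few lines. This is an illustrative helper, not from any particular library; the function name and example numbers are assumptions.

```python
# Minimal sketch of exponential discounting: the utility u(x) of an event
# at future time t, discounted back to the present (time 0) as beta**t * u(x).

def discounted_utility(u_x: float, t: float, beta: float) -> float:
    """Present value of utility u_x received at future time t,
    with per-period discount factor beta (0 < beta < 1)."""
    return (beta ** t) * u_x

# A reward worth 100 utils three periods from now, with beta = 0.9:
present_value = discounted_utility(100.0, 3, 0.9)  # ~72.9
```

Because β < 1, the present value falls geometrically with delay, which is what makes exponential discounting time-consistent: delaying both options equally rescales both values by the same factor.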
In real-world situations, "discounting makes sense because of the inherent uncertainty of future payoffs". [55] One study looked at how reward discounting is context-specific. [18] By varying the time and space between small and large rewards, the researchers were able to test how these factors affected decision making in tamarins and marmosets.
Hyperbolic discounting is the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Hyperbolic discounting leads to choices that are inconsistent over time—people make choices today that their future selves would prefer not to have made, despite using the same reasoning. [52]
The term "present bias" was coined in the second half of the 20th century. In the 1930s, economic research began investigating time preferences. The findings led to the model of exponential discounting, i.e., time-consistent discounting. However, later research led to the conclusion that time preferences were in fact not consistent over time.
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information. [1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.
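A minimal sketch of supervised learning from pairwise comparisons, as described above, can use a Bradley–Terry-style model: each item i gets a latent score s_i, and the probability that i is preferred over j is sigmoid(s_i − s_j). The dataset, function names, and training loop below are illustrative assumptions, not a specific library's API.

```python
import math

# Bradley-Terry-style preference learning from (winner, loser) pairs.
# Scores are fit by simple gradient ascent on the log-likelihood.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_scores(pairs, n_items, lr=0.1, epochs=200):
    """pairs: list of (winner, loser) index pairs; returns learned scores."""
    s = [0.0] * n_items
    for _ in range(epochs):
        for w, l in pairs:
            p = sigmoid(s[w] - s[l])  # model's current P(w preferred over l)
            g = 1.0 - p               # gradient of log-likelihood wrt s[w]
            s[w] += lr * g
            s[l] -= lr * g
    return s

# Hypothetical observations: item 2 beats everyone, item 0 loses to everyone.
pairs = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0), (2, 1)]
scores = fit_scores(pairs, 3)
# The learned scores should recover the ranking: item 2 > item 1 > item 0.
```

The same idea scales to ranking data by decomposing each ranking into its implied pairwise comparisons, which is one common way preference information of different kinds is reduced to a single supervised objective.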