The baseline model, referred to as Best Estimate And Simulation Techniques (BEAST), assumes that Option A is strictly preferred over Option B, after r trials, if and only if:

\[ [BEV_A(r) - BEV_B(r)] + [ST_A(r) - ST_B(r)] + e(r) > 0, \]

where BEVA(r) − BEVB(r) is the advantage of A over B based on the best estimates of the expected values, STA(r) − STB(r) is the advantage of A over B based on mental simulations, and e(r) is an error term. In trivial choices, when one of the options dominates the other, e(r) = 0. In all other cases e(r) is drawn from a normal distribution with mean 0 and standard deviation σi (a property of agent i).
When the payoff distributions are known (as in the non-ambiguous problems in our study), the best estimates of the expected values are the actual objective ones. That is, BEVj(r) equals the expected value of option j, EVj (for all r). The simulation-based estimate of option j, STj(r), equals the average of κi (a property of i) outcomes, each drawn (from option j's possible outcomes) in one mental simulation.
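To make these definitions concrete, the following R sketch (R being one of the languages of the baseline-model code examples noted at the end of this section) computes BEVj, STj, and the choice rule for one pre-feedback trial in a non-ambiguous problem. The prospect layout and all function names are illustrative assumptions rather than the published code; the draws here are independent per option, and the trivial-choice case (e(r) = 0 under dominance) is omitted. The model's actual pre-feedback draws are correlated through the luck-level procedure described next.

```r
# Minimal sketch of one pre-feedback choice in a non-ambiguous problem.
# The prospect layout (outcomes `x` with probabilities `p`) and all function
# names are illustrative assumptions, not the authors' published code.
bev <- function(prospect) sum(prospect$x * prospect$p)  # objective EV
st_unbiased <- function(prospect, kappa_i) {
  # mean of kappa_i mentally simulated outcomes (independent draws here;
  # the model's actual pre-feedback draws use the luck-level procedure)
  idx <- sample.int(length(prospect$x), size = kappa_i,
                    replace = TRUE, prob = prospect$p)
  mean(prospect$x[idx])
}
choose_A <- function(A, B, kappa_i, sigma_i) {
  e <- rnorm(1, mean = 0, sd = sigma_i)  # error term e(r)
  (bev(A) - bev(B)) + (st_unbiased(A, kappa_i) - st_unbiased(B, kappa_i)) + e > 0
}
# Problem 9: "1" vs "100, .01; 0"
A <- list(x = 1, p = 1)
B <- list(x = c(100, 0), p = c(.01, .99))
choose_A(A, B, kappa_i = 3, sigma_i = 7)
```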
Each simulation uses one of four techniques. Simulation technique Unbiased implies random and unbiased draws, either from the options' described distributions or from the options' observed history of outcomes. Before obtaining feedback (decisions in trials 1 to 6), the draws are taken from the objective distributions using a luck-level procedure: the agent first draws a luck level, a uniform number between zero and one. Then, for each prospect, the agent uses the same luck level as a percentile in the prospect's cumulative distribution function and draws the outcome that fits that percentile. When the agents can rely on feedback (trials 7 to 25), they first sample one of the previous trials (all trials are equally likely to be sampled), and the drawn outcomes for both options are those observed in that trial.
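The luck-level procedure itself can be sketched as follows (the helper name is an assumption; the prospect objects A and B come from the sketch above). One uniform draw serves as a shared percentile for both prospects:

```r
# Sketch of the pre-feedback luck-level draw (helper name is ours), reusing
# the prospect objects A and B from the sketch above.
draw_by_luck <- function(prospect, luck) {
  ord <- order(prospect$x)                # outcomes from worst to best
  cdf <- cumsum(prospect$p[ord])          # cumulative distribution function
  prospect$x[ord][which(cdf >= luck)[1]]  # outcome at the luck-level percentile
}
luck <- runif(1)  # one luck level, shared by both prospects in this simulation
c(A = draw_by_luck(A, luck), B = draw_by_luck(B, luck))
# After feedback (trials 7 to 25), Unbiased instead samples a past trial at
# random and uses the outcomes that were actually observed for both options.
```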
The other three techniques are "biased": they can be described as mental draws from distributions that differ from the objective distributions. The probability of choosing one of the biased techniques decreases when the participants receive feedback. Specifically, it equals:

\[ P_{Bias}(t) = \frac{\beta_i}{\beta_i + 1 + t^{\theta_i}}, \]
where βi > 0 captures the magnitude of the agent's initial tendency to use one of the biased techniques, t is the number of trials with feedback, and θi > 0 captures agent i's sensitivity to feedback.
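In code, the reconstructed expression is a one-liner (argument names are ours):

```r
# Probability of using a biased simulation technique after t feedback trials,
# per the reconstructed formula above (beta_i and theta_i are agent traits).
p_bias <- function(t, beta_i, theta_i) beta_i / (beta_i + 1 + t^theta_i)
p_bias(t = 0,  beta_i = 2.6, theta_i = 1)  # ~0.72 before any feedback
p_bias(t = 19, beta_i = 2.6, theta_i = 1)  # ~0.12 late in the feedback phase
```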
Notice that when κi is small, even unbiased sampling can lead to deviations from maximization: small κi implies reliance on small samples, and thus underweighting of rare events. The assumption that the probability of using the unbiased technique increases with feedback was introduced to capture the observation that feedback decreased the weighting of rare events.
The three biased techniques are each used with equal probability, PBias(t)/3. Simulation technique Uniform yields each of the possible outcomes with equal probability (see a related idea in Birnbaum, 2008), using the luck-level procedure described above (the draws are made from the uniform cumulative distribution function even after feedback is obtained). This technique enables the model to capture overweighting of rare events and the splitting effect.
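A sketch of technique Uniform follows directly, reusing draw_by_luck from the sketch above (again an illustrative helper, not the published code):

```r
# Sketch of technique Uniform: the objective probabilities are replaced by
# equal weights over the m possible outcomes, so a .01 rare event is
# simulated as if it had probability 1/m, which overweights it.
st_uniform_draw <- function(prospect, luck) {
  m <- length(prospect$x)
  draw_by_luck(list(x = prospect$x, p = rep(1 / m, m)), luck)
}
st_uniform_draw(B, luck = 0.6)  # yields 100 for half of the luck levels
```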
Simulation technique Contingent Pessimism is similar to the priority heuristic (Brandstätter et al., 2006); it depends on the sign of the best possible payoff (SignMax) and the ratio of the minimum payoffs (RatioMin). When SignMax > 0 and RatioMin ≤ γi (0 < γi < 1 is a property of i), this simulation yields the worst possible payoff of each option (MINA and MINB). This helps the model capture loss aversion and the certainty effect. When one of the two conditions is not met, the current simulation implies random choice among the possible payoffs (identically to technique Uniform). RatioMin is computed as:

\[ RatioMin = \begin{cases} \dfrac{\min(|MIN_A|,\ |MIN_B|)}{\max(|MIN_A|,\ |MIN_B|)} & \text{if } sign(MIN_A) = sign(MIN_B) \text{ and } \max(|MIN_A|,\ |MIN_B|) \neq 0, \\[4pt] 1 & \text{if } MIN_A = MIN_B = 0, \\[2pt] 0 & \text{otherwise.} \end{cases} \]
For example, RatioMin = 0 in Problem 9 ("1" or "100, .01; 0"), and 0.5 in Problem 10 ("2" or "101, .01; 1"). The contingencies capture two regularities. The sensitivity to SignMax implies less pessimism (less risk aversion) in the loss domain, hence the reflection effect. The second contingency, RatioMin, implies less pessimism when the minimal outcomes appear similar (have the same sign and are close in magnitude). This implies that the addition of a constant to all the payoffs decreases risk aversion in the gain domain. In addition, it implies higher sensitivity to rare events in problems like Problem 10 and Problem 61 (large RatioMin) than in problems like Problem 9 and Problem 25 (small RatioMin).
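The definition and both worked examples translate to the following sketch (the helper and its case logic follow the reconstruction above):

```r
# RatioMin per the reconstructed definition above (helper name is ours).
ratio_min <- function(min_a, min_b) {
  if (min_a == 0 && min_b == 0) return(1)
  if (sign(min_a) != sign(min_b)) return(0)
  min(abs(min_a), abs(min_b)) / max(abs(min_a), abs(min_b))
}
ratio_min(1, 0)  # Problem 9:  0   -> pessimism plausible (RatioMin <= gamma_i)
ratio_min(2, 1)  # Problem 10: 0.5 -> pessimism only if gamma_i >= 0.5
```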
Simulation technique Sign implies high sensitivity to the payoff sign. It is identical to technique Unbiased with one important exception: positive drawn values are replaced by R, and negative outcomes are replaced by −R, where R is the payoff range (the difference between the best and worst possible payoffs in the current problem; e.g., 100 in Problem 9 and Problem 10).

When the probabilities of the different outcomes are unknown (as in the problems with ambiguous Option B), they are initially estimated with a pessimistic bias (Gilboa & Schmeidler, 1989). The initial expected value of the ambiguous option is estimated as a weighted average of three terms: EVA, MINB, and UEVB, which is the estimated EV of Option B under the assumption that all the possible outcomes are equally likely. We assume the same weighting for EVA and UEVB, and capture the weighting of MINB with 0 ≤ φi ≤ 1, an ambiguity-aversion trait of i. That is,

\[ BEV_B(0) = (1 - \phi_i)\,\frac{EV_A + UEV_B}{2} + \phi_i\, MIN_B. \]
For example, assuming φi = 0.05, BEVB(0) in Problem 22 ("1" or "10, p; 0") equals .95(5 + 1)/2 + .05(0) = 2.85. In the no-feedback trials (1 to 6), the probabilities of the m possible outcomes are estimated under the assumption that the subjective probability of the worst outcome, SPMINB, is higher than 1/m, and that each of the other m − 1 subjective probabilities equals (1 − SPMINB)/(m − 1). Specifically, SPMINB is computed as the value that minimizes the difference between BEVB(0) and the estimated expected value of Option B based on the subjective probabilities, SPMINB·MINB + (1 − SPMINB)·UBh, where UBh = (m·UEVB − MINB)/(m − 1) denotes the average of the best m − 1 outcomes. This assumption implies that

\[ SPMIN_B = \frac{UB_h - BEV_B(0)}{UB_h - MIN_B}. \]
That is, in Problem 22 with φi = 0.05, SPMINB = (10 − 2.85)/(10 − 0) = 0.715.
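The two ambiguity formulas can be checked against this worked example (a sketch under the assumed prospect layout; function and argument names are ours):

```r
# Initial valuation of an ambiguous option and the implied subjective
# probability of its worst outcome (function and argument names are ours;
# the call reproduces the Problem 22 worked example).
bev_b0 <- function(ev_a, x_b, phi_i) {
  uev_b <- mean(x_b)  # UEV_B: all possible outcomes treated as equally likely
  (1 - phi_i) * (ev_a + uev_b) / 2 + phi_i * min(x_b)
}
sp_min_b <- function(bev0, x_b) {
  m    <- length(x_b)
  ub_h <- (m * mean(x_b) - min(x_b)) / (m - 1)  # mean of the best m-1 outcomes
  (ub_h - bev0) / (ub_h - min(x_b))
}
b0 <- bev_b0(ev_a = 1, x_b = c(10, 0), phi_i = .05)  # 2.85
sp_min_b(b0, x_b = c(10, 0))                          # 0.715
```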
Each trial with feedback in the ambiguous problems moves BEVB(r) toward EVB. Specifically,

\[ BEV_B(r) = \left(1 - \frac{1}{T}\right) BEV_B(r-1) + \frac{1}{T}\, O_B(r), \]
where T is the expected number of trials with feedback (20 in the current setting) and OB(r) is the observed payoff generated from the ambiguous Option B at trial r.
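Under this reconstruction, one feedback trial is a weighted average with weight 1/T on the new observation (sketch; names are ours):

```r
# Sketch of the reconstructed feedback update for the ambiguous option:
# each observed payoff O_B(r) gets weight 1/T.
update_bev_b <- function(bev_prev, obs_payoff, T = 20) {
  (1 - 1 / T) * bev_prev + obs_payoff / T
}
update_bev_b(2.85, obs_payoff = 10)  # one draw of 10 moves BEV_B to ~3.21
```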
The six properties of each agent are assumed to be drawn from uniform distributions determined by the model's parameters: σi ~ U(0, σ), κi ~ U{1, 2, 3, ..., κ}, βi ~ U(0, β), θi ~ U(0, θ), γi ~ U(0, γ), and φi ~ U(0, φ). Namely, the model has six free parameters: σ, κ, β, γ, φ, and θ. Notice that only four of these parameters are needed to capture decisions under risk without feedback (the class of problems addressed by prospect theory): σ, κ, β, and γ. The parameter φ captures attitude toward ambiguity, and θ abstracts the reaction to feedback.
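Sampling an agent's traits is then direct (a sketch; the defaults anticipate the best-fit values reported next):

```r
# Drawing one agent's six traits (sketch; names are ours), with the best-fit
# population parameters reported in the next paragraph as defaults.
draw_agent <- function(sigma = 7, kappa = 3, beta = 2.6,
                       gamma = .5, phi = .07, theta = 1) {
  list(sigma_i = runif(1, 0, sigma),
       kappa_i = sample(1:kappa, 1),  # discrete uniform on {1, ..., kappa}
       beta_i  = runif(1, 0, beta),
       theta_i = runif(1, 0, theta),
       gamma_i = runif(1, 0, gamma),
       phi_i   = runif(1, 0, phi))
}
```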
We estimated BEAST’s parameters using the Mean Squared Deviation (MSD) measure and 14 additional constraints that correspond to the 14 qualitative phenomena summarized in Table 1. Specifically, we used a grid search procedure to find the set of parameters that minimizes the MSD over the 450 B-rates (90 problems times 5 blocks) and also reproduces the 14 qualitative phenomena. Best fit was obtained with the parameters σ = 7, κ = 3, β = 2.6, γ = .5, φ = .07, and θ = 1. The MSD score is 0.007. The right-hand graphs in Figures 1 through 10 present the predictions of BEAST with these parameters.
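The MSD component of this procedure is simply (sketch):

```r
# The fit measure (sketch): mean squared deviation between predicted and
# observed B-rates over the 450 problem-by-block cells. The grid search kept
# only parameter sets that also reproduce the 14 qualitative phenomena.
msd <- function(predicted, observed) mean((predicted - observed)^2)
```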
Code examples implementing the baseline model are available in SAS, Matlab, and R.