Economists view people’s decisions as reflecting stable inherent constructs such as “risk attitudes.” Psychologists, on the other hand, argue that decisions are mostly contextual in nature. My research focuses on the intersection between these two approaches by evaluating how stable constructs and situational factors interact. My colleagues and I discern stable constructs that underlie judgments, and use them to predict risk-taking behavior and explain individual differences. At the same time, we highlight the boundaries of these constructs and evaluate the relative effect of contextual factors such as losses, exposure to others’ decisions, and past experience.
Adaptive decision-making and decisions from experience
Constructs of risk-taking and risk perception
Cognitive modelling and choice prediction competitions
Trust and cooperation
2005-2008 Ph.D., Technion, Israel (Behavioral and Management Sciences)
2003-2005 M.Sc., Technion, Israel (Behavioral and Management Sciences), Cum Laude
2000-2003 B.A., University of Haifa, Israel (Economics and Psychology, Double Major)
2017-present Associate Professor, Department of Environmental Economics and Management, Hebrew University
2010-2017 Senior Lecturer, Department of Environmental Economics and Management, Hebrew University
2008 - 2010 CLER research fellow, Harvard Business School
2010 Computational Modeling Prize in Applied Cognition (Cognitive Science 2010 awards)
2007 "Jacobs Award" for excellence in research
2007 "Sandor Szego Award" for continued excellence in teaching, Technion
2006 "Sandor Szego Award" for continued excellence in teaching, Technion
2005 "Vivian Konigsberg Award" for excellence in teaching, Technion
2019-present: Associate Editor: Journal of Economic Psychology
2015-present: Journal of Behavioral Decision Making
2015-present: Journal of Travel Research
2011 Guest Editor (with Ido Erev and Al Roth): Special issue “Predicting Behavior in Games,” Games
Choice prediction competitions aim to promote the development of models that can predict, rather than merely describe, behavior in the social sciences. Each competition focuses on a different area of interest in behavioral decision/economic research.
Each competition starts with a large experiment that includes many choice problems, randomly selected from the space of relevant problems. The results of this experiment are then posted on the competition website, and we invite researchers to fit models to these data. We then challenge them to submit their models to compete on predicting the results of another large experiment (the competition set), which includes similar but different problems randomly selected from the same space. Researchers thus focus on predicting behavior, rather than merely fitting previous results.
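The procedure above ranks submitted models by their prediction error on the competition set. A minimal sketch of that ranking step, assuming mean squared deviation (MSD) as the error measure; the choice problems, observed rates, and model names below are hypothetical, for illustration only:

```python
# Sketch: rank submitted models by mean squared deviation (MSD)
# between predicted and observed choice rates on the competition set.
# All data and model names here are made up for illustration.

def msd(predicted, observed):
    """Mean squared deviation between two equal-length lists of choice rates."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

# Observed choice rates (proportion choosing option A) in the competition set.
observed_rates = [0.62, 0.48, 0.75, 0.30]

# Each submission maps the same problems to predicted choice rates.
submissions = {
    "baseline_model": [0.55, 0.50, 0.70, 0.35],
    "submitted_model": [0.60, 0.45, 0.80, 0.25],
}

# Lower MSD = better prediction; the winner is the first entry.
ranking = sorted(submissions, key=lambda m: msd(submissions[m], observed_rates))
for model in ranking:
    print(model, round(msd(submissions[model], observed_rates), 6))
```

Because the competition problems are drawn from the same distribution as the estimation problems but are not identical to them, a model that merely overfits the estimation data tends to score poorly on this criterion.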
The projects were initiated by Ido Erev, Al Roth, and myself, and have been funded by the BSF. This page summarizes the competitions we have run thus far; for more details, see our competition websites and papers. In 2010 the new journal Games published a special issue on “predicting behavior in games” that includes the 2nd and 3rd competitions described on this page, as well as papers by colleagues who participated in these competitions.
Experimental studies of human choice behavior have documented clear violations of rational economic theory and triggered the development of behavioral economics. Yet the impact of these careful studies on applied economic analyses and policy decisions is not large. One justification for the tendency to ignore the experimental evidence involves the assertion that the behavioral literature highlights contradicting deviations from maximization, and it is not easy to predict which deviation is likely to be more important in specific situations. To address this problem, Kahneman and Tversky (1979) proposed a model (prospect theory) that captures the joint effect of four of the most important deviations from maximization: the certainty effect (Allais paradox; Allais, 1953), the reflection effect, overweighting of low-probability extreme events, and loss aversion (see top four rows in Table 1). The current paper extends this and similar efforts (see e.g., Thaler & Johnson, 1990; Brandstätter, Gigerenzer, & Hertwig, 2006; Birnbaum, 2008; Wakker, 2010; Erev et al., 2010) by facilitating the derivation and comparison of models that capture the joint impact of the four "prospect theory effects" and ten additional phenomena (see Table 1). These choice phenomena were replicated under one "standard" setting (Hertwig & Ortmann, 2001): choice with real stakes in a space of experimental tasks wide enough to replicate all the phenomena illustrated in Table 1. The results suggest that all 14 phenomena emerge in our setting, yet their magnitude tends to be smaller than in the original demonstrations. The current choice prediction competition focuses on developing models that can capture all of these phenomena but also predict behavior in other choice problems. To calibrate the models, we ran an “estimation set” study that included 60 randomly selected choice problems.
Two independent, but related, choice prediction competitions are organized that focus on behavior in simple two-person distribution games (http://sites.google.com/site/extformpredcomp/): one focuses on predicting the choices of the first mover and the other on predicting the choices of the second mover. The competitions are based on an estimation experiment and a competition experiment. The two experiments use the same methods and subject pool, and examine games randomly selected from the same distribution. The current introductory paper presents the results of the estimation experiment and clarifies the descriptive value of some baseline models. The best baseline model assumes that each choice is made based on one of several rules. The rules include: rational choice, level-1 reasoning, an attempt to maximize joint payoff, and an attempt to increase fairness. The probability of using the different rules is assumed to be stable over games. The estimated parameters imply that the most popular rule is rational choice; it is used in about half the cases. To participate in the competitions, researchers are asked to email the organizers models (implemented in computer programs) that read the incentive structure as input and derive the predicted behavior as output. The submission deadline is December 1, 2011; the results of the competition experiment will not be revealed until that date. The submitted models will be ranked based on their prediction error. The winners of the competitions will be invited to write a paper that describes their model.
A choice prediction competition is organized that focuses on decisions from experience in market entry games (http://sites.google.com/site/gpredcomp/; http://www.mdpi.com/si/games/predict-behavior/). The competition is based on two experiments: an estimation experiment and a competition experiment. The two experiments use the same methods and subject pool, and examine games randomly selected from the same distribution. The current introductory paper presents the results of the estimation experiment and clarifies the descriptive value of several baseline models. The experimental results reveal the robustness of eight behavioral tendencies that were documented in previous studies of market entry games and individual decisions from experience. The best baseline model (I-SAW) assumes reliance on small samples of experiences, and strong inertia when the recent results are not surprising. The competition experiment will be run in May 2010 (after the completion of this introduction), but its results will not be revealed until September. To participate in the competition, researchers are asked to email the organizers models (implemented in computer programs) that read the incentive structure as input and derive the predicted behavior as output. The submitted models will be ranked based on their prediction error. The winners of the competition will be invited to publish a paper that describes their model.
Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: one-shot decisions from description (decisions under risk), one-shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: an estimation dataset and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge: the last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from description were obtained with a stochastic variant of prospect theory assuming that the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples.