TEL Vol. 6 No. 4, August 2016
Screening Agents in Belief Eliciting Mechanisms
ABSTRACT
This paper considers the problem of a decision maker (DM) who needs to hire an agent to assess the probability of occurrence of an event that is of interest to her. To determine the agent’s reward, the DM proposes a mechanism that pays based on the agent’s reported subjective probability and the actual outcome of the event. The reward mechanism needs to incentivize the expert to honestly reveal his subjective probability, and the reward has to be non-negative in all cases in order to ensure the agent’s participation. In such a situation, it is possible that some agents who lack the expertise to assess the situation still participate to collect a sure non-negative payoff. The DM wants to screen such uninformed agents out from the informed ones. This work considers two mechanisms and analyzes the behavior of both types of agents under each. It shows that, in some cases, screening is possible alongside belief elicitation.

Received 2 August 2016; accepted 19 August 2016; published 22 August 2016

1. Introduction

The problem considered here is that of assessment of the subjective probability of an event by an agent who is an expert and has no stakes in the outcome. The problem is of interest to a decision maker (DM), to whom the resolution of this uncertainty will be helpful in making appropriate decisions. For example, bidders in an oil field auction may be interested in knowing the chances of finding oil at a particular location. A pharmaceutical company may be interested in knowing the chances of success of a drug trial. In such cases, the DM hires an agent who is an expert in the matter, and she offers him a reward based on his prediction and the actual outcome.

The DM wants to devise a mechanism that will incentivize the agent to use his expertise and arrive at a correct assessment of the situation. One such method was proposed by Brier (1950) [1], using scoring rules for elicitation of subjective probabilities that incentivize the agent to report truthfully. In such methods, the payoff to the agent has to be non-negative to ensure his participation. This non-negative payoff attracts some agents who are either incapable of assessing the situation or unwilling to exert any effort. The DM wants to screen out such agents. The present work considers the problem where the DM uses a mechanism that incentivizes the agent who works to correctly assess the situation, and that makes it possible to identify the agent who lacks expertise or who shirks.

The next sub-section reviews the literature on belief elicitation methods. Section 2 describes the problem setting and assumptions. The two mechanisms and the behavior of both types of agents under each are studied in Section 3. The last section discusses the scope for further work and the theoretical and practical implications.

1.1. Literature Review

Brier (1950) [1] first proposed Quadratic Scoring Rules (QSR) to evaluate the performance of weather forecasts. These are proper scoring rules in the sense that they induce the agent to truthfully report his subjective probability. QSRs work when the agent is risk neutral. Related methods are proper scoring rules, the promissory notes method, and the lotteries method (Kadane and Winkler (1988) [2]; Savage (1971) [3]).

Allen (1987) [6] proposed a method for eliciting the probability of occurrence of an event when the agent’s utility function is not known. Here, the agent receives a fixed reward if the reported probability is less than a randomly drawn number, and a lesser reward otherwise. This random number is drawn from one of two different distributions, depending upon whether the event has occurred or not. Karni (2009) [7] has developed a novel mechanism where the resulting game ensures that agents truthfully reveal their subjective probabilities.

Hossain and Okui (2013) [8] have presented a parsimonious method for constructing scoring rules that can be used to elicit an agent’s beliefs without any assumptions on the agent’s risk preference. Here, the agent receives a fixed prize if the prediction error loss function is less than a randomly generated number, or else the prize is smaller. The method can be adapted to various contexts by suitably defining the loss function. The rewards are fixed and the only thing that gets decided is which of the two rewards is to be given. Hence, the rule is labeled as the Binarized Scoring Rule.

All these methods consider the probability elicitation problem under the assumption that the agent has the expertise to correctly assess the situation. To ensure participation from agents, the rewards are strictly non-negative. It is quite possible that some agents do not have the expertise to assess the situation, but they still participate to get a non-negative reward by “playing the system” (Brier, 1950) [1]. There is a need to screen out such participants. Sandroni (2014) [9] has shown that a contract exists where the informed agents truthfully report their assessments, and the uninformed agents echo the decision maker’s (DM’s) prior belief. The existence of such a contract is proven, but the contract itself is not elaborated. Hao and Houser (2012) [10] study a belief elicitation mechanism in a laboratory setup where they consider agents who can analyze and respond to incentives, along with other naïve agents whose responses add noise to the information being elicited. Olszewski and Pęski (2011) [11] discuss the problem that the DM cannot verify the expertise of the agent, but show that contracts do exist that can be used to get the best payoffs without knowing whether the agent is an expert or not.

The belief elicitation problem has also been studied in laboratory experiments (Armantier and Treich (2013) [12]; Gachter and Renner (2010) [13]; Hao and Houser (2012) [10]). A survey of laboratory studies is provided by Schotter and Trevino (2014) [14].

2. Problem Setting

A decision maker (DM) is interested in knowing the probability of occurrence of an event E, and hires an agent to assess the situation and report his subjective probability for the event E. The DM designs a mechanism where she offers a contract in the form of a scoring rule to the agent. The scoring rule determines the payment to the agent, which is based on the prediction made by him and the actual outcome.

2.1. Types of Agents: Expert and Novice

The agent may or may not have the capability to assess the situation, and may be willing or unwilling to exert effort. We identify two cases arising out of this scenario. In the first case, the agent is capable of assessing the situation and is willing to exert effort (we label this type of agent as an “expert”); in the second case, the agent is either incapable of assessing the situation or is unwilling to exert any effort (we label this type of agent as a “novice”). Sandroni (2014) [9] has labelled the cases as informed and uninformed agents. Experts assess the situation, obtain the correct estimate, but report the number that maximizes their utility payoff based on the scoring rule. Novices cannot (or do not) assess the situation and report the number that maximizes their utility based on the scoring rule and their assumptions about the occurrence of event E. The expert exerts effort and assesses p to be the probability of the event E. The novice assumes that the probability p of the event E is a draw from a uniform distribution on [0, 1].

A bet on an event has a payoff x if the event occurs and y if the event does not occur, and is denoted as (x, y). A lottery l_p gives a payoff of x with probability p, and a payoff of y with probability 1 − p. The expert’s preference relation on the set of lotteries displays probabilistic sophistication and dominance in the sense that, for all p, q in [0, 1], l_p is preferred to l_q if and only if p > q (as in Karni, 2009). The novice is risk neutral and goes by expected utility maximization.

3. Eliciting Mechanism and Analysis

We consider two eliciting mechanisms, and analyze the behavior of both types of agents under each. The first mechanism is the same as in Karni (2009); the second is a modification of Allen (1987).

3.1. Eliciting Mechanism-1

Mechanism 1: Karni (2009)

1) A random number r is selected from a uniform distribution on [0, 1], and the agent is asked to submit his report, π, of his assessment of the subjective probability of the event.

2) The mechanism gives the agent the bet (x, y) if π ≥ r, and the lottery l_r if π < r, where the lottery l_r pays x with probability r and y with probability 1 − r, and x > y.
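As a concrete sketch, the payoff rule of Mechanism 1 can be simulated as follows. This is a minimal illustration assuming the standard Karni (2009) setup (report π, r uniform on [0, 1], a bet paying x if E occurs and y otherwise, and a lottery l_r paying x with probability r); the function name and default values are illustrative choices, not part of the original text.

```python
import random

def mechanism1_payoff(report, event_occurred, x=1.0, y=0.0, rng=random):
    """Payoff rule of Mechanism 1 (Karni-style, as assumed here).

    Draw r uniformly from [0, 1). If report >= r, the agent holds the
    bet: payoff x if the event occurred, y otherwise. If report < r,
    the agent holds the lottery l_r, which pays x with probability r
    and y with probability 1 - r.
    """
    r = rng.random()
    if report >= r:
        return x if event_occurred else y
    return x if rng.random() < r else y
```

For example, an agent who reports π = 1 always holds the bet, so his payoff equals x exactly when the event occurs.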

3.2. Agent’s Behavior in Mechanism-1

3.2.1. Case 1: Agent Is an Expert

An expert exerts effort and correctly assesses p to be the probability of the event E. He reports the right estimate, π = p, as truthful reporting is his unique dominant strategy (Karni, 2009). If the expert reports π > p, the agent’s payoff is the same as under truthful reporting for all values of the random number r except for r in (p, π], where the expert gets the bet (x, y) instead of the lottery l_r. Since, in the expert’s assessment, l_r is preferred to (x, y) for r > p, the expert will report truthfully. Similarly, if the expert reports π < p, he stands to lose, as he gets l_r instead of (x, y) for r in (π, p], and (x, y) is preferred to l_r for r < p.

3.2.2. Case 2: Agent Is a Novice

A novice cannot (or does not) assess the situation, and in the absence of any information tries to maximize his reward based on the scoring rule and his prior notion of the occurrence of the event. If the novice reports π to be his subjective probability of the event, his payoff is the bet (x, y) for π ≥ r, and the lottery l_r for π < r. The novice goes by his assumption that the probability of event E is a draw from a uniform distribution on [0, 1].

To simplify the analysis, we consider x = 1 and y = 0. The analysis is not affected by these values, as this is a linear transformation: it amounts to deducting y from the payoff and dividing by x − y. The payoff for the novice is now the bet (1, 0) for π ≥ r, and the lottery l_r for π < r. The expected payoff of the novice is given by

W(π) = ∫_0^π (1/2) dr + ∫_π^1 r dr = (1 + π − π²)/2.    (1)

The expected payoff is maximized for π = 0.5, and hence the novice will report this as his subjective probability.
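The novice’s problem can be checked numerically. This sketch assumes the normalization x = 1, y = 0 and a uniform prior over the event probability, so that the bet is worth 1/2 in expectation to the novice and the lottery l_r is worth r:

```python
def novice_expected_payoff(q):
    # Expected payoff of a novice reporting q under Mechanism 1:
    # W(q) = integral_0^q (1/2) dr + integral_q^1 r dr = (1 + q - q^2) / 2
    return (1 + q - q * q) / 2

# Grid search over reports q in [0, 1] for the maximizer.
best_report = max((i / 1000 for i in range(1001)), key=novice_expected_payoff)
print(best_report)  # prints 0.5
```

Since W is strictly concave with vertex at q = 1/2, the grid search recovers the report 0.5 claimed in the text.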

In this mechanism, the expert truthfully reveals his subjective probability, as his payoff is maximized when the reported probability is the same as the actual assessed probability. The novice reports the subjective probability to be 0.5, as his expected utility is maximized there. Based on the reports from both types of agents, the DM will be able to screen out the novice, except in the case where the subjective probability in the expert’s assessment is also 0.5. Thus, the mechanism proposed by Karni (2009) also serves the screening purpose, except in this one case.

3.3. Eliciting Mechanism-2

Mechanism 2: Proposed Mechanism

Consider two distributions A and B.

Distribution A has density function f_A(r) = 2r for r in [0, 1].

Distribution B has density function f_B(r) = 2(1 − r) for r in [0, 1].

1) A coin is tossed. In case of Heads, a random number r is selected from distribution A, and in case of Tails, it is selected from distribution B. The agent is asked to submit his report, π, of his assessment of the subjective probability of the event.

2) The mechanism gives the agent the bet (x, y) if π ≥ r, and the lottery l_r if π < r.

Allen (1987) has proposed a similar mechanism, where the distribution is chosen later, depending upon the occurrence of the event E. In the present mechanism, however, the distribution is chosen randomly by a coin toss.
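The draw of r in step 1 can be sketched by inverse-transform sampling. This assumes the triangular densities f_A(r) = 2r and f_B(r) = 2(1 − r) on [0, 1], an Allen-style pair whose equal mixture is uniform; the helper names are illustrative.

```python
import math
import random

def sample_A(u):
    """Inverse CDF of f_A(r) = 2r: F_A(r) = r^2, so r = sqrt(u)."""
    return math.sqrt(u)

def sample_B(u):
    """Inverse CDF of f_B(r) = 2(1 - r): F_B(r) = 1 - (1 - r)^2,
    so r = 1 - sqrt(1 - u)."""
    return 1.0 - math.sqrt(1.0 - u)

def draw_r(rng=random):
    """Step 1 of Mechanism 2: toss a coin, then draw r from A or B."""
    u = rng.random()
    return sample_A(u) if rng.random() < 0.5 else sample_B(u)
```

Because the coin mixes the two densities equally, the marginal density of r is (2r + 2(1 − r))/2 = 1, i.e. uniform on [0, 1], which is consistent with the expert’s incentives being unchanged relative to Mechanism 1.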

3.4. Agent’s Behavior in Mechanism-2

3.4.1. Case 1: Agent Is an Expert

In this case, a truthful report is in the expert’s best interest, as in the previous mechanism: the distribution of r does not affect the reasoning, since the comparison between the bet and the lottery l_r is made separately for each realized value of r. So, an expert correctly assesses p to be the probability of the event E, and reports π = p.

3.4.2. Case 2: Agent Is a Novice

As in the earlier case, the novice cannot (or does not) assess the situation, and assumes that the probability of occurrence of the event comes from a uniform distribution on [0, 1]. He calculates the expected value of his payoffs, and submits the report π that maximizes his expected payoff.

If distribution A is chosen, the expected payoff of the novice is given by

W_A(π) = ∫_0^π (1/2)·2r dr + ∫_π^1 r·2r dr = 2/3 + π²/2 − (2/3)π³.    (2)

The expected payoff is maximized at π = 0.5, and minimized at π = 1. The novice will report 0.5 to be his subjective probability.

If distribution B is chosen, the expected payoff of the novice is given by

W_B(π) = ∫_0^π (1/2)·2(1 − r) dr + ∫_π^1 r·2(1 − r) dr = 1/3 + π − (3/2)π² + (2/3)π³.    (3)

The expected payoff is maximized at π = 0.5, and minimized at π = 0. The novice will report 0.5 to be his subjective probability.
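The two expected-payoff functions can also be checked numerically. This is a sketch under the same assumptions as before (x = 1, y = 0, a uniform prior for the novice) together with the assumed triangular densities f_A(r) = 2r and f_B(r) = 2(1 − r):

```python
def W_A(q):
    # Novice's expected payoff under distribution A:
    # integral_0^q (1/2)(2r) dr + integral_q^1 r(2r) dr
    #   = 2/3 + q^2/2 - (2/3) q^3
    return 2/3 + q * q / 2 - (2/3) * q**3

def W_B(q):
    # Novice's expected payoff under distribution B:
    # integral_0^q (1/2)(2(1-r)) dr + integral_q^1 r(2(1-r)) dr
    #   = 1/3 + q - (3/2) q^2 + (2/3) q^3
    return 1/3 + q - 1.5 * q * q + (2/3) * q**3

grid = [i / 1000 for i in range(1001)]
print(max(grid, key=W_A), max(grid, key=W_B))  # prints 0.5 0.5
```

Both functions increase up to q = 1/2 and decrease beyond it on [0, 1], so the novice’s optimal report is 0.5 under either distribution, matching the analysis above.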

The expert truthfully reveals his subjective probability in this case as well. The novice reports 0.5 under both distributions of the random number. Based on these reports, it is possible for the DM to screen out the novice, as in the previous case.

4. Discussion

The problem of assessing subjective probabilities with the help of an expert was discussed here. A decision maker (DM) hires an agent to assess the probability of occurrence of an event, and pays him based on the reported probability and the actual outcome. Two mechanisms were considered, in which the informed agent (expert) truthfully reveals his subjective probability assessment. The possibility that the non-negative reward mechanism might attract uninformed agents (novices) was also considered. It is found that both mechanisms can screen out the novices, except in the case when the expert’s assessment is also 0.5.

There is scope for further work in this area, firstly by considering novices with more general assumptions about the prior distribution of the occurrence of the event. Mechanisms that screen out novices in all cases also need to be devised. There is ample scope for work using laboratory experiments to verify the effectiveness of the various belief elicitation mechanisms that have been proposed in theory. Laboratory experiments with experts and novices can be set up simply by giving different information to the agents, thereby making them informed or uninformed in the experimental setup.

The problem is practically relevant: decision makers who participate in bidding for natural resources often find themselves in such a situation. It is important for the decision maker to be able to screen out the novices, so that only the suggestions provided by the experts are taken into consideration. Other applications include screening out recommendations of naïve stock market analysts and naïve respondents in large-scale surveys. This short paper suggests a way forward for addressing the joint problem of screening out novices and incentivizing experts in belief elicitation mechanisms.

Acknowledgements

I would like to thank the anonymous reviewers for their helpful comments.

Cite this paper
Chawan, V. (2016) Screening Agents in Belief Eliciting Mechanisms. Theoretical Economics Letters, 6, 783-788. doi: 10.4236/tel.2016.64082.
References
[1]   Brier, G.W. (1950) Verification of Forecasts Expressed in Terms of Probability. Monthly Weather Review, 78, 1-3.
http://dx.doi.org/10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2

[2]   Kadane, J.B. and Winkler, R.L. (1988) Separating Probability Elicitation from Utilities. Journal of the American Statistical Association, 83, 357-363.
http://dx.doi.org/10.1080/01621459.1988.10478605

[3]   Savage, L.J. (1971) Elicitation of Personal Probabilities and Expectations. Journal of the American Statistical Association, 66, 783-801.
http://dx.doi.org/10.1080/01621459.1971.10482346

[4]   Schlag, K.H. and van der Weele, J.J. (2013) Eliciting Probabilities, Means, Medians, Variances and Covariances without Assuming Risk Neutrality. Theoretical Economics Letters, 3, 38-42.
http://dx.doi.org/10.4236/tel.2013.31006

[5]   Sandroni, A. and Shmaya, E. (2013) Eliciting Beliefs by Paying in Chance. Economic Theory Bulletin, 1, 33.
http://dx.doi.org/10.1007/s40505-013-0009-1

[6]   Allen, F. (1987) Discovering Personal Probabilities When Utility Functions Are Unknown. Management Science, 33, 542-544.
http://dx.doi.org/10.1287/mnsc.33.4.542

[7]   Karni, E. (2009) A Mechanism for Eliciting Probabilities. Econometrica, 77, 603-606.
http://dx.doi.org/10.3982/ECTA7833

[8]   Hossain, T. and Okui, R. (2013) The Binarized Scoring Rule. The Review of Economic Studies, 80, 984-1001.
http://dx.doi.org/10.1093/restud/rdt006

[9]   Sandroni, A. (2014) At Least Do No Harm: The Use of Scarce Data. American Economic Journal: Microeconomics, 6, 1-4.
http://dx.doi.org/10.1257/mic.6.1.1

[10]   Hao, L. and Houser, D. (2012) Belief Elicitation in the Presence of Naive Respondents: An Experimental Study. Journal of Risk and Uncertainty, 44, 161-180.
http://dx.doi.org/10.1007/s11166-011-9133-1

[11]   Olszewski, W. and Pęski, M. (2011) The Principal-Agent Approach to Testing Experts. American Economic Journal: Microeconomics, 3, 89-113.
http://dx.doi.org/10.1257/mic.3.2.89

[12]   Armantier, O. and Treich, N. (2013) Eliciting Beliefs: Proper Scoring Rules, Incentives, Stakes and Hedging. European Economic Review, 62, 17-40.
http://dx.doi.org/10.1016/j.euroecorev.2013.03.008

[13]   Gachter, S. and Renner, E. (2010) The Effects of (Incentivized) Belief Elicitation in Public Goods Experiments. Experimental Economics, 13, 364-377.
http://dx.doi.org/10.1007/s10683-010-9246-4

[14]   Schotter, A. and Trevino, I. (2014) Belief Elicitation in the Laboratory. Annual Review of Economics, 6, 103-128.
http://dx.doi.org/10.1146/annurev-economics-080213-040927
