Inference of Median Subjective Threshold in Psychophysical Experiments


1. Introduction

In psychophysical experiments, it is always desirable to understand the mechanisms that give rise to the observed behavior [1]. Three types of psychophysical methods are commonly used in estimating the point of subjective equality and the just noticeable difference: the method of adjustment, the method of limits and the method of constant stimuli [2]. In the present study, we restrict our attention to the method of limits and the method of constant stimuli in psychophysical experiments.

We consider the response of a test subject upon a skin area being heated with a contact surface [3] [4] or with an electromagnetic wave [5]. In the case of contact heating, when the size and temperature of the contact surface are fixed (or in the case of electromagnetic heating, when the cross-section and power density of the beam are fixed), the strength of the stimulus is solely described by the time period of heating, which we shall call the exposure duration. At any time, the test subject has two options: allowing the heating to continue or escaping from the heating when the heat induced exceeds the subject’s personal pain tolerance. The subject’s response by any given time is binary: escape or no escape. Once the subject moves away from the heating, that particular trial is completed and the time elapsed from the start of heating to the escape is recorded as the escape time. The measured escape time in each trial provides a sample value of subjective threshold for a particular subject in a particular realization. The point of subjective equality (PSE) is given by the median of subjective threshold: when the exposure duration is set to the median subjective threshold, 50% of subjects are expected to escape. The objective of this study is to infer the median subjective threshold by adaptively adjusting test procedures and using the data collected in trials. We examine four different approaches and thereby identify an optimal approach for estimating the median subjective threshold.

2. Mathematical Formulation

Due to the biovariability and other uncertainties, the escape time of the test subject will be different from one trial to another. We model the subject response as solely determined by the stimulus (the exposure duration) and a subjective threshold that varies among test subjects and varies among different trial realizations for a given subject. The outside stimulus and the internal threshold are connected via the pain signal generated at the heat-sensitive nociceptors activated by the skin temperature increase [6]. The internal threshold on the nociceptors signal is manifested outside as the observable threshold on the exposure duration.

Let *Z* be the subjective threshold. We set
$t=0$ as the start of heating exposure. By time
$t=u$, the cumulative stimulus (the exposure duration) is *u* and the subject response is determined by stimulus *u* and subjective threshold *Z*.

$\text{Subject response}\left(u\right)=\begin{cases}\text{no escape} & \text{if } u<Z\\ \text{escape} & \text{if } u>Z\end{cases}$ (1)

In a particular trial realization, *Z* is fixed but unknown. In (1), we write the subject response as a function of stimulus *u* to emphasize that in a trial, as the exposure duration *u* increases, the escape response will eventually occur. Over many subjects and over many trial realizations, mathematically *Z* is a random variable, reflecting the biovariability and other uncertainties. By definition, *Z* is the minimum exposure duration needed to induce the escape response for a particular subject in a particular trial. Since *Z* is naturally constrained by
$Z>0$, we use a Weibull distribution to model *Z*. The actual distribution of *Z* may be different. We will investigate the consequence of this discrepancy on the inference accuracy. The probability density function (PDF) of the Weibull distribution is

${\rho}_{Z}\left(z\right)=\mathrm{exp}\left(\frac{-{z}^{\beta}}{{\alpha}^{\beta}}\right)\frac{\beta {z}^{\beta -1}}{{\alpha}^{\beta}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}z>0$ (2)

which is described by two parameters: shape parameter
$\beta $ and scale parameter
$\alpha $. The mean, median and standard deviation of *Z* have the expressions

$E\left(Z\right)=\alpha \Gamma \left(1+\frac{1}{\beta}\right)$

${z}^{\left(m\right)}\equiv \text{med}\left(Z\right)=\alpha {\left(\mathrm{ln}2\right)}^{\frac{1}{\beta}}$ (3)

$\text{std}\left(Z\right)=\alpha \sqrt{\Gamma \left(1+\frac{2}{\beta}\right)-\Gamma {\left(1+\frac{1}{\beta}\right)}^{2}}$

Here
$\text{med}\left(Z\right)$ denotes the median of *Z*, and
$\Gamma \left(\text{\hspace{0.05em}}\right)$ is the gamma function.
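The expressions in (3) can be checked numerically. The following is a minimal sketch (not part of the original study); the helper name `weibull_stats` and the parameter values are illustrative.

```python
import math

def weibull_stats(alpha, beta):
    """Mean, median and standard deviation of Weibull(alpha, beta), Equation (3)."""
    mean = alpha * math.gamma(1.0 + 1.0 / beta)
    median = alpha * math.log(2) ** (1.0 / beta)
    std = alpha * math.sqrt(math.gamma(1.0 + 2.0 / beta)
                            - math.gamma(1.0 + 1.0 / beta) ** 2)
    return mean, median, std

# Choosing alpha = (ln 2)^(-1/beta) normalizes the median to 1 (cf. Section 4.2)
alpha = math.log(2) ** (-1.0 / 2.5)
mean, median, std = weibull_stats(alpha, 2.5)
```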

Two types of psychophysical experiments may be conducted using, respectively, the method of limits (ML) [7] [8] or the method of constant stimuli (MCS) [7] [9]. In the method of limits, the heating is kept on until the subject escapes. In other words, the stimulus (exposure duration) keeps increasing until escape. The recorded escape time gives the sample value of threshold for that particular trial realization. In this way, independent samples of the subjective threshold can be collected from multiple trials. In the method of constant stimuli, the stimulus (exposure duration) is prescribed. At the end of exposure, the escape may or may not occur, depending on the prescribed stimulus and the (unknown) value of threshold in that particular trial. The binary outcome, escape or no escape, is recorded in each trial.

Let *z* be the sample value of the subjective threshold in a particular trial. Under the condition that the test subject is unaware of the test type used in the trial, it is reasonable to expect that threshold *z* is independent of the test type. In a test using the method of constant stimuli, let *u* be the prescribed stimulus (exposure duration). For a random trial with stimulus *u*, under the assumption that *Z* has a Weibull distribution, the probability of escape is given by the Weibull psychometric function denoted by
$P\left(u\right)$, which is the cumulative distribution function (CDF) of the Weibull distribution.

$\text{Pr}\left(\text{escape}\right)=\text{Pr}\left(Z<u\right)={\displaystyle {\int}_{0}^{u}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\rho}_{Z}\left(z\right)\text{d}z=1-\mathrm{exp}\left(\frac{-{u}^{\beta}}{{\alpha}^{\beta}}\right)\equiv P\left(u\right)$ (4)

The median subjective threshold ${z}^{\left(m\right)}\equiv \text{med}\left(Z\right)$ satisfies $P\left({z}^{\left(m\right)}\right)=\mathrm{50\%}$. In the context of psychometric function $P\left(u\right)$, $u={z}^{\left(m\right)}$ is the point of subjective equality (PSE), the stimulus level at which it is equally likely that the heat induced is above or below the subjective pain tolerance. If we view $P\left(u\right)$ as a dose-response relation, $u={z}^{\left(m\right)}$ is the half maximum response dose. We write the psychometric function $P\left(u\right)$ in terms of ${z}^{\left(m\right)}$ and $\beta $.

$P\left(u\right)=1-\mathrm{exp}\left(\frac{-\left(\mathrm{ln}2\right){u}^{\beta}}{{\left({z}^{\left(m\right)}\right)}^{\beta}}\right)$ (5)
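As a quick numerical check of (5), the sketch below (with an illustrative function name) evaluates the Weibull psychometric function and confirms the defining property of the PSE: the escape probability equals 50% at $u={z}^{\left(m\right)}$ for any shape parameter.

```python
import math

def psychometric(u, z_m, beta):
    """Weibull psychometric function P(u) of Equation (5)."""
    return 1.0 - math.exp(-math.log(2) * (u / z_m) ** beta)
```

For example, `psychometric(z_m, z_m, beta)` returns 0.5, and the function increases monotonically in the stimulus `u`.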

The goal of this study is to estimate ${z}^{\left(m\right)}$ from data/observations. We choose to infer ${z}^{\left(m\right)}=\text{med}\left(Z\right)$ instead of $E\left(Z\right)$ for several reasons:

· ${z}^{\left(m\right)}$ is the point of subjective equality (PSE) in the psychometric function.

· If we measure samples of *Z* using the method of limits, the sample median is less susceptible to outliers, which may occur in real experiments.

· The median is preserved in any monotonic transformation, which allows us to work with random variable
$Y={\mathrm{log}}_{10}Z$ to infer
$\text{med}\left(Y\right)$ in the adaptive Bayesian method. Notice that while *Z* is constrained by
$Z>0$, variable *Y* is unconstrained.

· In the adaptive Bayesian method, the convergence of the inferred median is independent of whether the assumed model is correct. In our simulation analysis, we will demonstrate that even when the assumed model deviates from the actual psychometric function, the inferred median still converges to the correct value. If we choose $E\left(Z\right)$ as the inference variable, however, this convergence property is lost.

3. Inference Methods

We discuss four inference methods. Each method is based on different assumptions and thus is applicable in different situations.

3.1. Method 1: Sample Median

This method does not assume any particular distribution form of *Z*. It is applicable for estimating the median of any random variable. We measure independent samples of subjective threshold *Z* in psychophysical tests using the method of limits. The data set is

${Z}^{\left(data\right)}=\left\{{z}_{j},j=1,2,\cdots ,N\right\}$ (6)

We use ${\stackrel{^}{z}}^{\left(m\mathrm{,1}\right)}$ to denote the inference result of method 1 for $\text{med}\left(Z\right)$.

${\stackrel{^}{z}}^{\left(m\mathrm{,1}\right)}=\text{sample median of }\left\{{z}_{j}\mathrm{,}j=\mathrm{1,2,}\cdots \mathrm{,}N\right\}$ (7)
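Method 1 can be sketched as follows. The snippet (illustrative, not from the study) draws independent Weibull escape times by inverse-CDF sampling, mimicking measurements with the method of limits, and takes the sample median; `alpha = 1`, `beta = 2.5` and `N = 4000` are assumed values.

```python
import math
import random
import statistics

def sample_threshold(alpha, beta, rng):
    """Draw one sample of Z from the Weibull distribution (2) by inverting its CDF."""
    return alpha * (-math.log(1.0 - rng.random())) ** (1.0 / beta)

rng = random.Random(0)
samples = [sample_threshold(1.0, 2.5, rng) for _ in range(4000)]
z_hat_1 = statistics.median(samples)   # Equation (7)
```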

3.2. Method 2: Maximum Likelihood Estimation with 2 Unknown Variables

Like method 1 above, the maximum likelihood estimation (MLE) requires data set
${Z}^{\left(\text{data}\right)}$, consisting of measured independent samples of *Z*. MLE (2 variables) assumes that the subjective threshold *Z* has the Weibull distribution given in (2), with shape parameter
$\beta $ and scale parameter
$\alpha $ both unknown. MLE (2 variables) is applicable only when *Z* has the Weibull distribution. In MLE (2 variables), we infer
$\left(\alpha \mathrm{,}\beta \right)$ simultaneously by maximizing the likelihood and then calculate
${z}^{\left(m\right)}$ from
$\left(\alpha \mathrm{,}\beta \right)$. For mathematical convenience, let
$\eta \equiv {\alpha}^{\beta}$. In terms of
$\left(\eta \mathrm{,}\beta \right)$, the likelihood function and the log-likelihood have the expressions:

$L\left(\eta \mathrm{,}\beta \mathrm{;}{Z}^{\left(\text{data}\right)}\right)={\displaystyle \underset{j=1}{\overset{N}{\prod}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\rho}_{Z}\left({z}_{j}\right)={\displaystyle \underset{j=1}{\overset{N}{\prod}}}\mathrm{exp}\left(\frac{-{z}_{j}^{\beta}}{\eta}\right)\frac{\beta {z}_{j}^{\beta -1}}{\eta}$

$\mathcal{l}\left(\eta ,\beta ;{Z}^{\left(\text{data}\right)}\right)=\mathrm{ln}L\left(\eta ,\beta ;{Z}^{\left(\text{data}\right)}\right)=\frac{-1}{\eta}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{j}^{\beta}-N\mathrm{ln}\eta +{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\mathrm{ln}\left(\beta {z}_{j}^{\beta -1}\right)$ (8)

We maximize the log-likelihood $\mathcal{l}\left(\eta ,\beta ;{Z}^{\left(\text{data}\right)}\right)$ to estimate $\left(\stackrel{^}{\eta}\mathrm{,}\stackrel{^}{\beta}\right)$.

$\left(\stackrel{^}{\eta}\mathrm{,}\stackrel{^}{\beta}\right)=\mathrm{arg}\underset{\left(\eta \mathrm{,}\beta \right)}{\mathrm{max}}\mathcal{l}\left(\eta \mathrm{,}\beta \mathrm{;}{Z}^{\left(\text{data}\right)}\right)$ (9)

Once $\stackrel{^}{\eta}$ and $\stackrel{^}{\beta}$ are determined, the inference result of method 2 for $\text{med}\left(Z\right)$, denoted by ${\stackrel{^}{z}}^{\left(m\mathrm{,2}\right)}$, is calculated using Equation (3) and $\stackrel{^}{\alpha}={\stackrel{^}{\eta}}^{\frac{1}{\stackrel{^}{\beta}}}$.

${\stackrel{^}{z}}^{\left(m\mathrm{,2}\right)}={\left(\stackrel{^}{\eta}\mathrm{ln}2\right)}^{\frac{1}{\stackrel{^}{\beta}}}$ (10)
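The two-variable maximization (9) can be sketched by profiling out $\eta $: for fixed $\beta $, the optimal $\eta $ is the sample mean of ${z}_{j}^{\beta}$ (cf. Equation (12)), and the profiled log-likelihood is then maximized over $\beta $ by a coarse grid search. The grid range, step and sample size below are illustrative assumptions, not choices from the study.

```python
import math
import random

def weibull_mle(samples):
    """Maximize the log-likelihood (8) over (eta, beta) via a profiled grid search."""
    n = len(samples)
    def profile_loglik(beta):
        eta = sum(z ** beta for z in samples) / n
        # At eta = mean(z_j^beta), the first term of (8) reduces to -n
        return (-n - n * math.log(eta)
                + sum(math.log(beta * z ** (beta - 1.0)) for z in samples))
    beta_hat = max((b / 50.0 for b in range(10, 501)), key=profile_loglik)
    eta_hat = sum(z ** beta_hat for z in samples) / n
    return eta_hat, beta_hat

rng = random.Random(1)
samples = [(-math.log(1.0 - rng.random())) ** (1.0 / 2.5) for _ in range(1000)]
eta_hat, beta_hat = weibull_mle(samples)
z_hat_2 = (eta_hat * math.log(2)) ** (1.0 / beta_hat)   # Equation (10)
```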

3.3. Method 3: Maximum Likelihood Estimation with 1 Unknown Variable

Like methods 1 and 2 above, method 3 requires data set
${Z}^{\left(\text{data}\right)}$, consisting of measured independent samples of *Z*. This method assumes 1) the subjective threshold *Z* has the Weibull distribution of the form (2) and 2) the shape parameter
$\beta $ is known, which leaves the scale parameter
$\alpha $ as the only unknown. MLE (1 variable) is applicable only when *Z* has the Weibull distribution and the true value of
$\beta $ is given. We write the log-likelihood in terms of
$\eta \equiv {\alpha}^{\beta}$.

$\mathcal{l}\left(\eta ;{Z}^{\left(\text{data}\right)}\right)=\mathrm{ln}L\left(\eta ;{Z}^{\left(\text{data}\right)}\right)=\frac{-1}{\eta}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{j}^{\beta}-N\mathrm{ln}\eta +{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\mathrm{ln}\left(\beta {z}_{j}^{\beta -1}\right)$ (11)

The log-likelihood has the derivative

$\frac{\text{d}}{\text{d}\eta}\mathcal{l}\left(\eta ;{Z}^{\left(\text{data}\right)}\right)=\frac{1}{{\eta}^{2}}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{j}^{\beta}-\frac{N}{\eta}.$

The maximization problem of $\mathcal{l}\left(\eta \mathrm{;}{Z}^{\left(\text{data}\right)}\right)$ has an analytical solution.

$\stackrel{^}{\eta}=\mathrm{arg}\underset{\eta}{\mathrm{max}}\mathcal{l}\left(\eta ;{Z}^{\left(\text{data}\right)}\right)=\frac{1}{N}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{j}^{\beta}$ (12)

The inference result of method 3 for $\text{med}\left(Z\right)$, denoted by ${\stackrel{^}{z}}^{\left(m\mathrm{,3}\right)}$, is calculated using (3)

${\stackrel{^}{z}}^{\left(m,3\right)}={\stackrel{^}{\eta}}^{\frac{1}{\beta}}={\left(\frac{1}{N}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{j}^{\beta}\right)}^{\frac{1}{\beta}}$ (13)
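Since (12)-(13) are closed-form, method 3 reduces to a one-line computation. A minimal sketch (with an illustrative function name):

```python
import math

def mle_median_known_beta(samples, beta):
    """Equations (12)-(13): eta_hat = mean(z_j^beta), z_hat = (eta_hat * ln 2)^(1/beta)."""
    eta_hat = sum(z ** beta for z in samples) / len(samples)
    return (eta_hat * math.log(2)) ** (1.0 / beta)
```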

3.4. Method 4: Adaptive Bayesian Method

Unlike methods 1-3 above, the adaptive Bayesian method does not require measured independent samples of *Z*. Rather it utilizes binary outcomes observed in tests using the method of constant stimuli (MCS). Suppose a sequence of *n* trials of the MCS type have already been completed with prescribed stimuli (exposure durations)
$\left\{{u}_{j},j=1,2,\cdots ,n\right\}$. The observed binary outcome
${I}_{j}$ of each trial indicates either
${z}_{j}>{u}_{j}$ (if no escape occurs) or
${z}_{j}<{u}_{j}$ (if escape occurs).

${I}_{j}=\begin{cases}0\ \left(\text{no escape}\right) & \text{if } {z}_{j}>{u}_{j}\\ 1\ \left(\text{escape}\right) & \text{if } {z}_{j}<{u}_{j}\end{cases}$ (14)

Here
${z}_{j}$ is the (unknown) value of subjective threshold in trial *j*. In trials of the MCS type, values of
$\left\{{z}_{j}\right\}$ are not fully revealed. Instead, only
${z}_{j}>{u}_{j}$ or
${z}_{j}<{u}_{j}$ is observed.

Following the approach used in the Quest package [10], we use variable
$x\equiv {\mathrm{log}}_{10}u$ to quantify the stimulus. Thus, stimulus *x* corresponds to exposure duration
$u={10}^{x}$. Notice that
$x\equiv {\mathrm{log}}_{10}u$ is mathematically unconstrained and covers a wide range of exposure duration *u* when *x* is varied moderately. The data set from the *n* trials consists of *n* pairs of
$\left(x\mathrm{,}I\right)$ where *x* is the stimulus and *I* is the corresponding outcome.

${D}^{\left(n\right)}=\left\{\left({x}_{j},{I}_{j}\right),j=1,2,\cdots ,n\right\}$ (15)

To infer ${z}^{\left(m\right)}\equiv \text{med}\left(Z\right)$ in a Bayesian framework, we treat ${y}^{\left(m\right)}\equiv {\mathrm{log}}_{10}{z}^{\left(m\right)}$ as a random variable and use a normal distribution as the prior.

$\text{Prior of }{y}^{\left(m\right)}\sim N\left({\mu}_{y}\mathrm{,}{\sigma}_{y}^{2}\right)$ (16)

For a trial with stimulus *x* (exposure duration
$u={10}^{x}$ ), under the assumption that the subjective threshold *Z* has a Weibull distribution, the probability of escape is described by the psychometric function
$P(\cdot )$ in (5)

$\begin{array}{c}\mathrm{Pr}\left(I=1\right)=\mathrm{Pr}\left(Z<{10}^{x}\right)=P\left({10}^{x}\right)\\ =1-\mathrm{exp}\left(-{10}^{\beta \left(x-{y}^{\left(m\right)}\right)+\delta}\right)\equiv F\left(x\right)\mathrm{,}\quad \delta \equiv {\mathrm{log}}_{10}\left(\mathrm{ln}2\right)\end{array}$

The model psychometric function $F\left(x\right)$ has parameters $\beta $ and ${y}^{\left(m\right)}={\mathrm{log}}_{10}{z}^{\left(m\right)}$. In experiments, however, the probability of escape is governed by the actual psychometric function ${F}_{\text{actual}}\left(x\right)$, which may be different from $F\left(x\right)$.

In the adaptive Bayesian method, we assume $\beta $ is given and our goal is to infer ${y}^{\left(m\right)}$. For that purpose, we need to view the probability of escape as a function of ${y}^{\left(m\right)}$. We write $F\left(x\right)$ in terms of $\left(x-{y}^{\left(m\right)}\right)$ and $\beta $ as

$F\left(x\right)=\varphi \left(x-{y}^{\left(m\right)}\mathrm{;}\beta \right)$ (17)

where $\varphi \left(s\mathrm{;}\beta \right)$ is a parameter-free function defined as

$\varphi \left(s\mathrm{;}\beta \right)\equiv 1-\mathrm{exp}\left(-{10}^{\beta s+\delta}\right)\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\delta \equiv {\mathrm{log}}_{10}\left(\mathrm{ln}2\right)$ (18)

To facilitate the inference of ${y}^{\left(m\right)}$, we shift ${y}^{\left(m\right)}$ by its prior mean and consider $\xi \equiv {y}^{\left(m\right)}-{\mu}_{y}$. The prior distribution of $\xi $ is

${\rho}_{\xi}^{\left(\text{pri}\right)}\left(\xi \right)\sim \mathrm{exp}\left(\frac{-{\xi}^{2}}{2{\sigma}_{y}^{2}}\right)$ (19)

(19) is our prior knowledge about
$\xi $. When data set
${D}^{\left(n\right)}$ from *n* trials becomes available, the posterior distribution of
$\xi $ is calculated according to the Bayes theorem.

${\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(n\right)}\right)\sim {\rho}_{\xi}^{\left(\text{pri}\right)}\left(\xi \right){\displaystyle \underset{j=1}{\overset{n}{\prod}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}B\left(\varphi \left({x}_{j}-{\mu}_{y}-\xi \mathrm{;}\beta \right)\mathrm{,}{I}_{j}\right)$ (20)

where
$B\left(p\mathrm{,}k\right)\equiv {p}^{k}{\left(1-p\right)}^{1-k}$ is the Bernoulli distribution. A key component of the adaptive Bayesian method is to select the stimulus for the next trial based on the current posterior of
$\xi $. Suppose *n* trials have been conducted and the data set
${D}^{\left(n\right)}$ is used to calculate the posterior
${\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(n\right)}\right)$ in (20). Let
${x}_{n+1}$ denote the stimulus for the next trial. The optimal choice of
${x}_{n+1}$ is the point of subjective equality (PSE) of the actual psychometric function where
${F}_{\text{actual}}\left({x}_{n+1}\right)=\mathrm{50\%}$. The PSE of the actual psychometric function is unknown. It is the inference variable. We use the best available estimate of PSE as the next stimulus. We set stimulus
${x}_{n+1}$ to the posterior median of
${y}^{\left(m\right)}={\mu}_{y}+\xi $, which reflects the prior of
${y}^{\left(m\right)}$ and the observed outcomes of the preceding *n* trials.

${x}_{n+1}={\mu}_{y}+\text{med}\left({\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(n\right)}\right)\right)$ (21)

Once stimulus ${x}_{n+1}$ is prescribed, trial $\left(n+1\right)$ is carried out and the observed binary outcome ${I}_{n+1}$ is used to update the posterior distribution of $\xi $.

${\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(n+1\right)}\right)\sim {\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(n\right)}\right)B\left(\varphi \left({x}_{n+1}-{\mu}_{y}-\xi \mathrm{;}\beta \right)\mathrm{,}{I}_{n+1}\right)$ (22)

It is important to emphasize that the probability of escape in real experiments is governed by $\text{Pr}\left({I}_{n+1}=1\right)={F}_{\text{actual}}\left({x}_{n+1}\right)$, independent of the assumed model distribution.

We repeat the process of adaptively selecting stimulus *x* based on the current posterior distribution, carrying out the next trial and using the observed outcome to update the posterior distribution. The iterative process of (21)-(22) is repeated either for a specified number of trials or until the standard deviation of posterior
$\xi $ is less than a specified tolerance. When the adaptive Bayesian iteration is completed, the inference result of method 4 for
$\text{med}\left(Z\right)$, denoted by
${\stackrel{^}{z}}^{\left(m\mathrm{,4}\right)}$, is calculated as the posterior median of
${z}^{\left(m\right)}={10}^{{y}^{\left(m\right)}}$.

${\stackrel{^}{z}}^{\left(m\mathrm{,4}\right)}={10}^{{\stackrel{^}{y}}^{\left(m\mathrm{,4}\right)}}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{y}}^{\left(m\mathrm{,4}\right)}={\mu}_{y}+\text{med}\left({\rho}_{\xi}^{\left(\text{post}\right)}\left(\xi \mathrm{|}{D}^{\left(N\right)}\right)\right)$ (23)
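The full iteration (19)-(23) can be sketched with a discrete grid posterior, in the spirit of the Quest package. Everything below is an illustrative sketch: the grid range and resolution, the prior parameters, the trial count and the simulated subject are assumptions, and `respond(x)` stands in for carrying out a real trial at stimulus *x*.

```python
import math
import random

def phi(s, beta):
    """Parameter-free Weibull psychometric function, Equation (18)."""
    return 1.0 - math.exp(-10.0 ** (beta * s + math.log10(math.log(2))))

def posterior_median(grid, weights):
    """Median of a distribution given by point masses `weights` on `grid`."""
    half = sum(weights) / 2.0
    acc = 0.0
    for xi, w in zip(grid, weights):
        acc += w
        if acc >= half:
            return xi
    return grid[-1]

def adaptive_bayes(mu_y, sigma_y, beta, n_trials, respond):
    """Adaptive Bayesian inference of y_m = log10(med(Z)), Equations (19)-(23)."""
    grid = [-4.0 * sigma_y + 8.0 * sigma_y * i / 400 for i in range(401)]
    post = [math.exp(-xi ** 2 / (2.0 * sigma_y ** 2)) for xi in grid]  # prior (19)
    for _ in range(n_trials):
        x_next = mu_y + posterior_median(grid, post)        # stimulus rule (21)
        outcome = respond(x_next)                           # binary trial outcome
        for k, xi in enumerate(grid):                       # Bayesian update (22)
            p = phi(x_next - mu_y - xi, beta)
            post[k] *= p if outcome == 1 else 1.0 - p
    return mu_y + posterior_median(grid, post)              # inference result (23)

# Simulated subject with true y_m = 0 (i.e. med(Z) = 1) and beta = 2.5
rng = random.Random(7)
respond = lambda x: 1 if rng.random() < phi(x - 0.0, 2.5) else 0
y_hat = adaptive_bayes(mu_y=0.1, sigma_y=0.2, beta=2.5, n_trials=200, respond=respond)
z_hat_4 = 10.0 ** y_hat
```

The grid posterior needs no normalization, since only the posterior median is extracted from it.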

4. Relative Errors of Inference Results: Direct Monte Carlo Simulations

We use Monte Carlo simulations to evaluate the inference accuracy for each of the four methods described in the previous section.

4.1. Measurement of Relative Error

We define the relative error as the root-mean-square (RMS) relative difference between the inference result and the true value. For each inference method, we carry out *M* rounds of Monte Carlo simulations. For methods 1 - 3, each round of simulations involves generating a data set of *N* samples and calculating an inference result from each simulated data set. For method 4, each round of simulations consists of repeating for *N* trials the Bayesian process of adaptively selecting the stimulus based on the posterior distribution, carrying out the trial and updating the posterior distribution according to the observed outcome. Let

· ${z}^{\left(m\mathrm{,}e\right)}$ be the true value of $\text{med}\left(Z\right)$ and

· ${\stackrel{^}{z}}^{\left(m\mathrm{,}k\mathrm{,}i\right)}$ be the inferred $\text{med}\left(Z\right)$, using method *k* in round *i* of simulations.

From the Monte Carlo simulations, we calculate the relative error of method *k* as

${\text{err}}^{\left(k\right)}=\sqrt{\frac{1}{M}{\displaystyle \underset{i=1}{\overset{M}{\sum}}}{\left(\frac{{\stackrel{^}{z}}^{\left(m\mathrm{,}k\mathrm{,}i\right)}-{z}^{\left(m\mathrm{,}e\right)}}{{z}^{\left(m\mathrm{,}e\right)}}\right)}^{2}}$ (24)
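Equation (24) amounts to the following computation (illustrative helper name):

```python
import math

def rms_relative_error(estimates, true_value):
    """RMS relative difference between inferred and true median, Equation (24)."""
    return math.sqrt(sum(((z_hat - true_value) / true_value) ** 2
                         for z_hat in estimates) / len(estimates))
```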

4.2. Parameters in Simulations

We use $M={10}^{6}$ rounds of Monte Carlo simulations in the calculation of each relative error (each entry in Table 1 and Table 2). We calculate the relative error of each inference method, for each prior and for each sample size of $N=2$, 20, 40 or 80 trials.

Before we select values of $\left(\alpha \mathrm{,}\beta \right)$, we point out that the random variable *Z* and the exact and inferred (mean, median, std) of *Z* are all proportional to the scale parameter $\alpha $. In particular, both
${\stackrel{^}{z}}^{\left(m\mathrm{,}k\mathrm{,}i\right)}$ and
${z}^{\left(m\mathrm{,}e\right)}$ are proportional to
$\alpha $. Thus, the relative error is independent of
$\alpha $.

In simulations of this section, we use $\beta =2.5$ and 5. For the convenience of setting up simulations, we set $\alpha ={\left(\mathrm{ln}2\right)}^{\frac{-1}{\beta}}$ to make $\text{med}\left(Z\right)=1$. Figure 1 displays the Weibull distributions and Weibull psychometric functions for three values of $\beta $. For a smaller value of $\beta $, the Weibull distribution is wider (the normalized standard deviation is larger).

4.3. Prior Distribution of ${\mathrm{log}}_{10}{z}^{\left(m\right)}$ in Method 4

In the adaptive Bayesian method, an estimate of ${z}^{\left(m\right)}$ is obtained via inferring ${y}^{\left(m\right)}\equiv {\mathrm{log}}_{10}{z}^{\left(m\right)}$. The prior for ${y}^{\left(m\right)}$ is a normal distribution: ${y}^{\left(m\right)}\sim N\left({\mu}_{y}\mathrm{,}{\sigma}_{y}^{2}\right)$. The corresponding prior of ${z}^{\left(m\right)}$ is a log-normal distribution.

$\rho \left({z}^{\left(m\right)}=s\right)=\frac{1}{{q}_{0}s\sqrt{2\pi {\sigma}_{y}^{2}}}\mathrm{exp}\left(\frac{-{\left({\mathrm{log}}_{10}s-{\mu}_{y}\right)}^{2}}{2{\sigma}_{y}^{2}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{q}_{0}=\mathrm{ln}10$ (25)

Figure 1. Effect of shape parameter $\beta $. Left panel: 3 Weibull distributions with median = 1. Right panel: 3 Weibull psychometric functions with PSE = 1.

The prior (mean, median and standard deviation) of ${z}^{\left(m\right)}$ are related to $\left({\mu}_{y}\mathrm{,}{\sigma}_{y}\right)$ by

$E\left({z}^{\left(m\right)}\right)={10}^{{\mu}_{y}}\sqrt{\mathrm{exp}\left({q}_{0}^{2}{\sigma}_{y}^{2}\right)}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{q}_{0}=\mathrm{ln}10$

${\mu}_{z}\equiv \text{med}\left({z}^{\left(m\right)}\right)={10}^{{\mu}_{y}}$

${\sigma}_{z}\equiv \text{std}\left({z}^{\left(m\right)}\right)={10}^{{\mu}_{y}}\sqrt{\mathrm{exp}\left({q}_{0}^{2}{\sigma}_{y}^{2}\right)\left[\mathrm{exp}\left({q}_{0}^{2}{\sigma}_{y}^{2}\right)-1\right]}$ (26)

Suppose we know the prior median and standard deviation of
${z}^{\left(m\right)}$ from existing data of measurements on the subjective threshold *Z*. Given
$\left({\mu}_{z}\mathrm{,}{\sigma}_{z}\right)$, we can build the normal prior of
${y}^{\left(m\right)}$ by solving for
$\left({\mu}_{y}\mathrm{,}{\sigma}_{y}\right)$ in (26)

${\mu}_{y}={\mathrm{log}}_{10}{\mu}_{z}$

${\sigma}_{y}=\frac{1}{{q}_{0}}\sqrt{\mathrm{ln}\left(\frac{1}{2}+\frac{1}{2}\sqrt{1+4{\left(\frac{{\sigma}_{z}}{{\mu}_{z}}\right)}^{2}}\right)}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{q}_{0}=\mathrm{ln}10$ (27)

For small $\left({\sigma}_{z}/{\mu}_{z}\right)$, the Taylor expansion of the RHS yields

${\sigma}_{y}=\frac{1}{{q}_{0}}\left(\frac{{\sigma}_{z}}{{\mu}_{z}}\right)\left[1-\frac{3}{4}{\left(\frac{{\sigma}_{z}}{{\mu}_{z}}\right)}^{2}+\cdots \right]\approx \frac{1}{\mathrm{ln}10}\left(\frac{{\sigma}_{z}}{{\mu}_{z}}\right)$ (28)
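The conversion (27) and its small-ratio approximation (28) can be sketched as follows (illustrative function name):

```python
import math

def sigma_y_from_ratio(ratio):
    """Exact prior sigma_y from the ratio sigma_z / mu_z, Equation (27)."""
    q0 = math.log(10)
    return math.sqrt(math.log(0.5 + 0.5 * math.sqrt(1.0 + 4.0 * ratio ** 2))) / q0
```

For a small ratio such as 0.05, the exact value agrees with the leading-order approximation `ratio / ln 10` up to the cubic correction in (28).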

The prior relative uncertainty of ${z}^{\left(m\right)}$ is described by the ratio ${\sigma}_{z}/{\mu}_{z}$. From the given ${\sigma}_{z}/{\mu}_{z}$, the prior standard deviation of ${y}^{\left(m\right)}$ is calculated using Equation (27). In Monte Carlo simulations, the prior mean of ${y}^{\left(m\right)}$ is set to a random variable

${\mu}_{y}\sim N\left({y}^{\left(m\mathrm{,}e\right)}\mathrm{,}{\sigma}_{y}^{2}\right)$

where ${y}^{\left(m\mathrm{,}e\right)}={\mathrm{log}}_{10}{z}^{\left(m\mathrm{,}e\right)}$ is the exact median of ${y}^{\left(m\right)}={\mathrm{log}}_{10}{z}^{\left(m\right)}$. This randomness of ${\mu}_{y}$ in simulations is both realistic and necessary. In real applications, the exact value of ${y}^{\left(m\mathrm{,}e\right)}$ is unknown. In the adaptive Bayesian method, the inference result for ${y}^{\left(m\right)}$ is given by the median of the posterior of ${y}^{\left(m\right)}$. If we set ${\mu}_{y}={y}^{\left(m\mathrm{,}e\right)}$, then before any Bayesian updating, the prior distribution already gives the exact solution, which is unreasonable. Setting ${\mu}_{y}\sim N\left({y}^{\left(m\mathrm{,}e\right)}\mathrm{,}{\sigma}_{y}^{2}\right)$ is consistent with the prior error of ${y}^{\left(m\right)}$ being ${\sigma}_{y}$.

4.4. Simulation Results of the Four Methods

In all simulations, data sets are generated using the actual distribution. In Section 4, the actual distribution and the assumed model distribution have the same Weibull form and the same shape parameter $\beta $. The median of the actual distribution, ${z}^{\left(m\mathrm{,}e\right)}$, is set to 1 while the median of the model distribution, ${z}^{\left(m\right)}$, is the inference variable. Not all methods utilize the Weibull distribution form or the value of $\beta $.

· Method 1 (the sample median) does not assume any distribution form.

· Method 2 (MLE, 2 variables) assumes the Weibull distribution form of *Z* but leaves shape parameter
$\beta $ as an unknown variable.

· Method 3 (MLE, 1 variable) both assumes the Weibull distribution form and uses the value of shape parameter $\beta $.

· Method 4 (adaptive Bayesian) both assumes the Weibull distribution form and uses the value of shape parameter $\beta $.

The situation where the actual distribution deviates from the assumed model distribution will be investigated in Section 5.

The relative errors for $\beta =2.5$ are reported in Table 1. Methods 1 - 3 are based on measured escape times in tests using the method of limits. Of these three methods, the sample median has the largest relative error (except for $N=2$ ). This is reasonable since the sample median does not assume any distribution form.

Table 1. Relative errors of the four inference methods: $\beta =2.5$.

Accordingly, MLE (1 variable) has the smallest relative error since it assumes both the correct distribution form and the correct shape parameter. Method 4 is based on binary outcomes (escape or no escape) observed in tests using the method of constant stimuli. Intuitively, a binary outcome provides less information than a full sample of escape time. The relative error of method 4 is larger than that of method 3 even though they have the same assumptions. However, we will learn in Section 5 that method 4 is very robust with respect to an incorrect model while for method 3, an incorrect model is fatal. The relative error of method 4 is comparable to that of method 1. This feature will be examined in more detail in Section 5. An interesting result of Table 1 is that with 20 trials ( $N=20$ ), the relative error is well above 10% unless we use the correct distribution form and the correct shape parameter in method 3, or use a very narrow prior in method 4 (adaptive Bayesian).

The relative errors for $\beta =5$ are reported in Table 2. As $\beta $ increases, both the actual distribution and the model distribution (they are the same in Section 4) get narrower (see Figure 1). A narrower distribution reduces the relative errors in all methods.

4.5. Relative Error vs Sample Size *N*

We study numerically the relative error (
${\text{err}}^{\left(k\right)}$ ) of each inference method as the number of trials (*N*) is increased. The simulation results for
$\beta =2.5$ are shown in Figure 2. For each of the three methods based on measured samples of random variable *Z* (left panel), the relative error is proportional to
$1/\sqrt{N}$. Method 1 has the largest proportionality coefficient while method 3 has the smallest.

Table 2. Relative errors of the four inference methods: $\beta =5$.

Figure 2. Relative error vs *N* for $\beta =2.5$. Left panel: three methods based on measured samples of *Z* in the method of limits. Right panel: adaptive Bayesian method based on observed binary outcomes in the method of constant stimuli.

Method 4 (adaptive Bayesian) updates the prior distribution based on binary outcomes observed in the method of constant stimuli. Unlike the three methods in the left panel, method 4 has a finite relative error (the prior) before any binary outcome is observed. Intuitively, the prior can be viewed as an existing set of ${n}_{0}$ binary outcomes. A narrower prior corresponds to a larger existing set (larger ${n}_{0}$ ). Consequently, the relative error is expected to be proportional to $1/\sqrt{N+{n}_{0}}$.

$\text{err}\left(N\right)=\frac{{C}_{4}}{\sqrt{N+{n}_{0}}}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{n}_{0}={\left(\frac{{C}_{4}}{{\text{err}}^{\left(\text{prior}\right)}}\right)}^{2}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\text{err}}^{\left(\text{prior}\right)}=\frac{{\sigma}_{z}}{{\mu}_{z}}$ (29)
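Relation (29) can be written as a small helper; the values of `C4` and `err_prior` used below are placeholders, not the fitted coefficients from the simulations.

```python
import math

def predicted_error(N, C4, err_prior):
    """Relative error vs number of trials, Equation (29)."""
    n0 = (C4 / err_prior) ** 2   # the prior acts like n0 pre-existing binary outcomes
    return C4 / math.sqrt(N + n0)
```

At `N = 0` the prediction reduces to the prior error, and it decays like $1/\sqrt{N}$ once $N\gg {n}_{0}$.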

The right panel of Figure 2 confirms that the relative error of the adaptive Bayesian method is well explained by relation (29).

4.6. Effect of Shape Parameter *β*

We explore how the relative error changes as the shape parameter $\beta $ of the Weibull psychometric function is varied. We reiterate that in all simulations of Section 4, both the actual and the model psychometric function have the same Weibull form and the same shape parameter $\beta $. The situation where the actual psychometric function deviates from the model will be studied in the next section.

Figure 3 compares the results of
$\beta =2.5$ and
$\beta =5$. For
$\beta =5$, the behaviors of relative error vs *N* are qualitatively the same as those for
$\beta =2.5$. The proportionality coefficients are noticeably smaller for
$\beta =5$, consistent with the observation that the Weibull distribution is narrower for
$\beta =5$ (see Figure 1).
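The $1/\sqrt{N}$ decay of the sample-median error (method 1) seen in Figure 2 and Figure 3 is straightforward to reproduce. The sketch below is an illustrative Monte Carlo check, not the paper's exact simulation setup; the Weibull scale is chosen so the true median is 1, which makes the absolute error a relative error.

```python
import math
import random
import statistics

def sample_median_rel_error(N, beta, n_rep=2000, seed=0):
    """Mean relative error of the sample median of N Weibull-distributed thresholds."""
    rng = random.Random(seed)
    alpha = math.log(2.0) ** (-1.0 / beta)   # scale chosen so that med(Z) = 1
    errs = []
    for _ in range(n_rep):
        z = [rng.weibullvariate(alpha, beta) for _ in range(N)]
        errs.append(abs(statistics.median(z) - 1.0))   # relative error since med = 1
    return statistics.mean(errs)
```

Quadrupling *N* should roughly halve the error, consistent with $1/\sqrt{N}$ scaling.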

5. Inference Errors When the Actual Psychometric Function Deviates from the Model

As demonstrated in Figure 3, relative errors of inference are smaller when $\beta $ is larger. To reduce the inference error, one may attempt to use a large $\beta $ in the model, disregarding the true $\beta $ of the actual psychometric function (or when the true $\beta $ is unknown). In this section, we focus on the two methods that require the value of $\beta $ as an input: the MLE (1 variable) and the adaptive Bayesian method. We examine the behaviors of relative errors in several situations where the model deviates from the actual psychometric function.

Figure 3. Comparison of relative error vs *N* for $\beta =2.5$ and for $\beta =5$. Left panel: MLE (2 variables) and MLE (1 variable). Right panel: adaptive Bayesian method.

5.1. Situation 1: The Actual Psychometric Function and the Model are Both Weibull

When both the actual psychometric function and the model have the Weibull form, we write them as two functions of $x={\mathrm{log}}_{10}u$ with different shape parameters and medians.

${F}_{\text{actual}}\left(x\right)=\varphi \left(x-{y}^{\left(m\mathrm{,}e\right)}\mathrm{;}{\beta}^{(\; e\; )}\right)$

${F}_{\text{model}}\left(x\right)=\varphi \left(x-{y}^{\left(m\right)}\mathrm{;}\beta \right)$

where function $\varphi \left(s\mathrm{;}\beta \right)$ is defined in (18). In the actual psychometric function, ${\beta}^{\left(e\right)}$ is the true shape parameter and ${y}^{\left(m\mathrm{,}e\right)}$ the true median of ${\mathrm{log}}_{10}Z$. In the model psychometric function, $\beta $ is the prescribed shape parameter and ${y}^{\left(m\right)}$ the inference variable.
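To make the notation concrete, here is one way to code the two psychometric functions, assuming $\varphi \left(s\mathrm{;}\beta \right)=1-\mathrm{exp}\left(-\left(\mathrm{ln}2\right){10}^{\beta s}\right)$, a form consistent with the median sitting at $s=0$ (the exact expression of (18) is given in the earlier section).

```python
import math

def phi(s, beta):
    # assumed form of (18): Weibull CDF in x = log10(u), with phi(0, beta) = 0.5
    return 1.0 - math.exp(-math.log(2.0) * 10.0 ** (beta * s))

def F_actual(x, y_me, beta_e):   # true median y_me = y^(m,e), true shape beta_e
    return phi(x - y_me, beta_e)

def F_model(x, y_m, beta):       # inference variable y_m, prescribed shape beta
    return phi(x - y_m, beta)
```

Both functions cross 0.5 at their own median regardless of the shape parameter; a larger $\beta $ only steepens the curve around the median.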

The results for the 4 cases of $\left(\beta \mathrm{,}{\beta}^{\left(e\right)}\right)=\left\{\left(\mathrm{2.5,2.5}\right)\mathrm{;}\left(\mathrm{2.5,5}\right)\mathrm{;}\left(\mathrm{5,2.5}\right)\mathrm{;}\left(\mathrm{5,5}\right)\right\}$ are displayed in Figure 4. It is clear that when $\beta \ne {\beta}^{\left(e\right)}$, the MLE (1 variable) does not converge to the true median (left panel of Figure 4). Therefore, we arrive at the conclusion below regarding the method of MLE (1 variable):

· the $\beta $ value cannot be set arbitrarily in the MLE (1 variable); and

· when the true ${\beta}^{\left(e\right)}$ of the actual psychometric function is unknown, the method of MLE (1 variable) is simply not applicable.

One striking feature of the adaptive Bayesian method is that it converges to the true median even when a wrong
$\beta $ value is prescribed in the model (right panel of Figure 4). At any given *N*, the relative error is mainly influenced by the true
${\beta}^{\left(e\right)}$ of the actual psychometric function. A smaller
${\beta}^{\left(e\right)}$ corresponds to a wider actual distribution and leads to a larger relative error at any given *N*. We discuss separately
$\beta >{\beta}^{\left(e\right)}$ and
$\beta <{\beta}^{\left(e\right)}$.

Figure 4. Relative error vs *N* when $\beta \ne {\beta}^{\left(e\right)}$. Left panel: MLE (1 variable). Right panel: adaptive Bayesian method. Both methods require an input of $\beta $.

When the prescribed $\beta $ in ${F}_{\text{model}}\left(x\right)$ is larger than the true ${\beta}^{\left(e\right)}$ in ${F}_{\text{actual}}\left(x\right)$, the model distribution is narrower than the actual distribution. When $\beta $ deviates from ${\beta}^{\left(e\right)}$ in the direction of $\beta >{\beta}^{\left(e\right)}$, the relative error becomes noticeably larger than that of $\beta ={\beta}^{\left(e\right)}$. In addition, the decay of relative error vs *N* no longer follows the trend of $1/\sqrt{N+{n}_{0}}$. Fitting $C/{\left(N+{n}_{0}\right)}^{d}$ to the relative error vs *N* for $\left\{\beta =5,{\beta}^{\left(e\right)}=2.5\right\}$ yields an exponent of $d=0.43$ instead of 0.5. The fitting curves respectively for $\beta >{\beta}^{\left(e\right)}$ and for $\beta ={\beta}^{\left(e\right)}$ have the expressions

$\text{err}=\frac{0.50}{{\left(N+1.6\right)}^{0.43}}\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\beta =5>{\beta}^{\left(e\right)}=2.5$

$\text{err}=\frac{0.58}{\sqrt{N+2.1}}\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\beta ={\beta}^{\left(e\right)}=2.5$

When the prescribed
$\beta $ in
${F}_{\text{model}}\left(x\right)$ is smaller than the true
${\beta}^{\left(e\right)}$ in
${F}_{\text{actual}}\left(x\right)$, the model distribution is wider than the actual distribution. When
$\beta $ deviates from
${\beta}^{\left(e\right)}$ in the direction of
$\beta <{\beta}^{\left(e\right)}$, the relative error increases only slightly from that of
$\beta ={\beta}^{\left(e\right)}$. The relative error vs *N* is still approximately described by the law of
$1/\sqrt{N+{n}_{0}}$, with the coefficient slightly above that of
$\beta ={\beta}^{\left(e\right)}$. The fitting curves respectively for
$\beta <{\beta}^{\left(e\right)}$ and for
$\beta ={\beta}^{\left(e\right)}$ are

$\text{err}=\frac{0.35}{\sqrt{N+0.74}}\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\beta =2.5<{\beta}^{\left(e\right)}=5$

$\text{err}=\frac{0.30}{\sqrt{N+0.56}}\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\beta ={\beta}^{\left(e\right)}=5$

In the right panel of Figure 4, the relative errors for $\beta =5>{\beta}^{\left(e\right)}=2.5$ (triangles) are noticeably higher than those for $\beta ={\beta}^{\left(e\right)}=2.5$ (diamonds), which are already relatively large due to the small ${\beta}^{\left(e\right)}$. In comparison, the relative errors for $\beta =2.5<{\beta}^{\left(e\right)}=5$ (squares) are only slightly higher than those for $\beta ={\beta}^{\left(e\right)}=5$ (circles), which are relatively small due to the large ${\beta}^{\left(e\right)}$. This observation suggests that when facing an uncertainty of ${\beta}^{\left(e\right)}\in \left[{\beta}^{\left(e\mathrm{,}L\right)}\mathrm{,}{\beta}^{\left(e\mathrm{,}H\right)}\right]$, the best strategy for minimizing the inference error is to use the lower end of the range, $\beta ={\beta}^{\left(e\mathrm{,}L\right)}$, in the model. In this way, if ${\beta}^{\left(e\right)}$ is near the lower end, then it is well captured by $\beta $ ; if ${\beta}^{\left(e\right)}$ is near the higher end, then the relative error for $\beta ={\beta}^{\left(e\right)}$ is small and the relative error for $\beta <{\beta}^{\left(e\right)}$ is only slightly higher. Notice that the lower end of ${\beta}^{\left(e\right)}$ corresponds to the wider end of the distribution width. In summary, when facing an uncertainty in the actual distribution width, the best strategy for minimizing the inference error is to use a wide distribution in the inference model.

5.2. Situation 2: The Actual Psychometric Function Is Gamma and the Model Is Weibull

We consider the situation where the model distribution of subjective threshold *Z* is Weibull while the actual distribution of *Z* is gamma, which has the form

${\rho}^{\left(\text{gamma}\right)}\left(z\right)=\frac{1}{\Gamma \left(\kappa \right)\theta}{\left(\frac{z}{\theta}\right)}^{\kappa -1}\mathrm{exp}\left(\frac{-z}{\theta}\right)$ (30)

where $\kappa $ is the shape parameter and $\theta $ the scale parameter of the gamma distribution. Two Weibull distributions for $\beta =\left\{\mathrm{2.5;5}\right\}$ and three gamma distributions for $\kappa =\left\{\mathrm{4;8;16}\right\}$ are compared in the left panel of Figure 5. In each distribution, the scale parameter is selected to make median = 1 so all distributions are aligned at the same median. In terms of $x={\mathrm{log}}_{10}u$, the actual and the model psychometric functions are

${F}_{\text{actual}}\left(x\right)=\frac{1}{\Gamma \left(\kappa \right)}\gamma \left(\kappa \mathrm{,}\frac{{10}^{x}}{\theta}\right)$

${F}_{\text{model}}\left(x\right)=\varphi \left(x-{y}^{\left(m\right)}\mathrm{;}\beta \right)$

where $\gamma \left(\kappa \mathrm{,}s\right)$ is the lower incomplete gamma function defined as

$\gamma \left(\kappa \mathrm{,}s\right)={\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{t}^{\kappa -1}{\text{e}}^{-t}\text{d}t$
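For simulations, the scale $\theta $ that aligns a gamma distribution at median = 1 can be estimated by sampling: since $\text{med}\left(\text{gamma}\left(\kappa \mathrm{,}\theta \right)\right)=\theta \cdot \text{med}\left(\text{gamma}\left(\kappa \mathrm{,1}\right)\right)$, it suffices to estimate the unit-scale median. A stdlib-only sketch (the sample size is an arbitrary choice, not the paper's setting):

```python
import random
import statistics

def gamma_scale_for_unit_median(kappa, n=200_000, seed=0):
    """Return theta such that med(gamma(kappa, theta)) = 1, via Monte Carlo."""
    rng = random.Random(seed)
    # empirical median of gamma(kappa, 1); the scale parameter just rescales it
    unit_median = statistics.median(rng.gammavariate(kappa, 1.0) for _ in range(n))
    return 1.0 / unit_median
```

For $\kappa =4$ the unit-scale median is near $\kappa -1/3\approx 3.67$, so $\theta \approx 0.27$.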

As in the case of Weibull distribution, when the actual distribution of *Z* is gamma, the relative error of inferred
${z}^{\left(m\right)}\equiv \text{med}\left(Z\right)$ is independent of the scale parameter
$\theta $.

We test the adaptive Bayesian method using the Weibull model with
$\beta =2.5$ on three actual psychometric functions of gamma distributions with respectively
$\kappa =4$, 8, and 16. Relative errors vs *N* are plotted in the right panel of Figure 5. The results clearly demonstrate that the adaptive Bayesian method converges to the true median even when the assumed model and the actual psychometric function are of different types.

Now we look at the most important reason why we choose to infer $\text{med}\left(Z\right)$ instead of $E\left(Z\right)$. In the assumed model (Weibull), the mean is related to the median by

$E\left(Z\right)=\Gamma \left(1+\frac{1}{\beta}\right){\left(\mathrm{ln}2\right)}^{\frac{-1}{\beta}}\text{med}\left(Z\right)$ (31)

which varies with $\beta $. In the actual psychometric function, the relation between the mean and the median is in general different from (31). Given the convergence of inferred $\text{med}\left(Z\right)$, it follows that if we choose to infer $E\left(Z\right)$, then the inferred $E\left(Z\right)$ does not converge to the true $E\left(Z\right)$. That is, if we choose to infer $E\left(Z\right)$, the convergence of inference is lost.

Figure 5. Left panel: two Weibull and three gamma distributions with median = 1. Right panel: inference errors of the adaptive Bayesian method using the Weibull model with $\beta =2.5$ on three actual psychometric functions of gamma distribution with $\kappa =4$, 8, and 16.
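Relation (31) is easy to check numerically. The sketch below compares the sample mean/median ratio of Weibull draws against the closed form; the unit scale is an arbitrary choice, since the ratio is scale-free.

```python
import math
import random
import statistics

def mean_median_ratio(beta, n=200_000, seed=0):
    """Monte Carlo estimate of E(Z) / med(Z) for a Weibull-distributed Z."""
    rng = random.Random(seed)
    z = [rng.weibullvariate(1.0, beta) for _ in range(n)]
    return statistics.mean(z) / statistics.median(z)

beta = 2.5
# closed form of eq. (31): E(Z) / med(Z) = Gamma(1 + 1/beta) * (ln 2)^(-1/beta)
theory = math.gamma(1.0 + 1.0 / beta) * math.log(2.0) ** (-1.0 / beta)
```

For $\beta =2.5$ the ratio is about 1.03; for a gamma-distributed *Z* the mean/median ratio would differ, which is exactly why inferring $\text{med}\left(Z\right)$ is the robust choice.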

In the left panel of Figure 5, the three gammas span a range of distribution width. In the right panel of Figure 5, we carried out inferences using the Weibull model of $\beta =2.5$, which is more consistent with the wider end ( $\kappa =4$ ) of the three gammas. Next we try inferences using the Weibull model of $\beta =5$, which is more consistent with the narrower end ( $\kappa =16$ ) of the three gammas. Figure 6 compares the relative errors of the two strategies of selecting $\beta $ in the inference model: 1) selecting a smaller $\beta $ to accommodate the wider end in a possible range of the actual distribution width or 2) selecting a larger $\beta $ to accommodate the narrower end.

The results indicate that it is more important to accommodate the wider end in a possible range of the actual distribution width. If the actual distribution has a large width, failing to accommodate it will lead to a significant increase in relative error (compare triangles and diamonds in Figure 6). On the other hand, if the actual distribution has a small width, failing to accommodate it will increase the relative error only slightly (compare squares and circles in Figure 6). In addition, the relative errors for the case of small actual distribution width are significantly below those for the case of large actual distribution width. All these observations suggest that we should accommodate the wider end in a possible range of the actual distribution width.

Figure 6. Comparison of the two strategies of selecting $\beta $ in the inference model: 1) smaller $\beta $ (wider distribution) or 2) larger $\beta $ (narrower distribution).

5.3. Relation between the Actual Relative Error and the Prediction from the Posterior Standard Deviation

In the adaptive Bayesian method, the posterior distribution of ${y}^{\left(m\right)}={\mathrm{log}}_{10}{z}^{\left(m\right)}$ not only produces a posterior median as the inference result for ${y}^{\left(m\mathrm{,}e\right)}$, but also yields a posterior standard deviation ${\sigma}_{y}^{\left(\text{post}\right)}$. When the sample size *N* is not very small, the posterior of ${y}^{\left(m\right)}$ is approximately a normal distribution. A normal posterior for ${y}^{\left(m\right)}$ translates to a log-normal posterior for ${z}^{\left(m\right)}$. We view the posterior standard deviation of ${z}^{\left(m\right)}$ divided by the posterior median of ${z}^{\left(m\right)}$ as the predicted relative error in the inferred ${z}^{\left(m\right)}$. This prediction is practically operational since it comes out naturally from the posterior. The predicted relative error is expressed in terms of ${\sigma}_{y}^{\left(\text{post}\right)}$ using (26).

$\begin{array}{l}\left(\text{Predicted relative error in inferred}{z}^{\left(m\right)}\right)\equiv \frac{{\sigma}_{z}^{\left(\text{post}\right)}}{{\mu}_{z}^{\left(\text{post}\right)}}\\ =\sqrt{\mathrm{exp}\left({q}_{0}^{2}{\left({\sigma}_{y}^{\left(\text{post}\right)}\right)}^{2}\right)\left[\mathrm{exp}\left({q}_{0}^{2}{\left({\sigma}_{y}^{\left(\text{post}\right)}\right)}^{2}\right)-1\right]}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{q}_{0}=\mathrm{ln}10\end{array}$ (32)
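In code, the predicted relative error (32) is a one-line transformation of the posterior standard deviation; this is just a transcription of the log-normal coefficient of variation, not new machinery.

```python
import math

def predicted_rel_error(sigma_y_post):
    """Predicted relative error in z^(m) from the posterior std of y^(m), eq. (32)."""
    q0 = math.log(10.0)                      # q0 = ln 10, since y = log10(z)
    v = math.exp((q0 * sigma_y_post) ** 2)
    return math.sqrt(v * (v - 1.0))
```

For small posterior widths this reduces to $\left(\mathrm{ln}10\right){\sigma}_{y}^{\left(\text{post}\right)}$, the familiar small-variance limit of the log-normal coefficient of variation.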

Figure 7 compares the actual and the predicted relative errors when the adaptive Bayesian method using the Weibull model is applied on actual psychometric functions of the gamma-type (Situation 2). As we discussed in Figure 4 (Situation 1) and Figure 5 & Figure 6 (Situation 2), the actual relative error is mainly influenced by the width of the actual distribution of *Z*: the larger the width, the larger the error. The predicted relative error, on the other hand, is determined by the width of the model distribution.

The blue line and red line in the left panel of Figure 7 are almost identical to those in the right panel while the actual distributions (specified by $\kappa $ ) are very different for these two panels. For an actual distribution that is relatively wide ( $\kappa =4$ in the left panel), if we use $\beta =5$, the narrow model distribution significantly increases the actual error and significantly decreases the predicted error at the same time. Thus, using a narrow model distribution gives a false sense of inference accuracy: the predicted error is small while the actual error is much larger. We want to avoid this situation, which is another reason for not using a narrow model distribution. For an actual distribution that is relatively narrow ( $\kappa =16$ in the right panel), if we use $\beta =2.5$, the wide model distribution increases the actual error only slightly and increases the predicted error significantly.

Figure 7. Comparison of the actual and the predicted relative errors. The model distribution is Weibull with $\beta $. The actual distribution of *Z* is gamma with $\kappa $.

Thus, using a wide model distribution yields a predicted error that is too conservative. A wide model distribution is less of a problem for minimizing the actual error than for precisely predicting the error. Synthesizing the results of subsections 5.2 and 5.3, we draw two conclusions regarding the adaptive Bayesian method:

· when presented with the choice of two likely candidate model distributions, to minimize the actual error of inference in this uncertain situation, the best strategy is to select the wider one; and

· the error predicted from the posterior standard deviation is dominated by the assumed model distribution and is unreliable; the actual error of inference is mainly influenced by the actual distribution of the subjective threshold.

5.4. Performance Comparison of the Adaptive Bayesian Method and the Sample Median

The sample median is calculated from independent samples of the subjective threshold *Z* measured in the method of limits. The adaptive Bayesian method utilizes binary outcomes observed in the method of constant stimuli. Intuitively, each binary outcome contains less information than a full sample value of *Z*: it only indicates whether the value of *Z* in that trial is below or above the prescribed stimulus. Given the difference in information content between these two types of data, we would like to find out whether the adaptive Bayesian method based on binary outcomes is less efficient in inferring $\text{med}\left(Z\right)$ than the sample median calculated from full samples of *Z*.

Figure 8. Comparison of the sample median and the adaptive Bayesian method. The actual distribution of *Z* is gamma with $\kappa =4$. The assumed model is Weibull with $\beta =2.5$.

Figure 8 compares the relative errors vs *N* of the two methods. In the Monte Carlo simulations, the actual distribution of *Z* is gamma with $\kappa =4$. In the adaptive Bayesian method, the model distribution is Weibull with $\beta =2.5$. A fairly uninformative prior ( ${\sigma}_{z}/{\mu}_{z}=\mathrm{100\%}$ ) and a moderately informative prior ( ${\sigma}_{z}/{\mu}_{z}=\mathrm{40\%}$ ) are tested. Despite the fact that the assumed model is different from the actual psychometric function, the Bayesian method is as efficient as the sample median in estimating $\text{med}\left(Z\right)$. For moderately large *N*, the relative errors of the Bayesian method are indistinguishable from those of the sample median, and are fairly independent of the prior. Thus, we conclude that for estimating the point of subjective equality (PSE) in psychophysical experiments, the method of constant stimuli is as efficient as the method of limits.

6. Concluding Remarks

We considered psychophysical experiments in which a skin area of a test subject is exposed to electromagnetic heating or contact heating. Once the heat induced exceeds the subject’s personal pain tolerance, the subject escapes from the heating. In this stimulus-response problem, when the specifications of heating (the spot size and power density of the electromagnetic beam) are fixed, the stimulus is solely described by the exposure duration, and the escape response is governed by a subjective threshold that varies among different subjects and varies among different realizations for the same subject. Mathematically, the subjective threshold is a random variable (*Z*). In the context of the psychometric function, the median subjective threshold is the point of subjective equality (PSE). We studied the problem of inferring the median subjective threshold in psychophysical experiments. There are two types of experiments: a) the method of limits, and b) the method of constant stimuli. In the method of limits, the heating is kept on until escape (*i.e.*, the stimulus keeps increasing until escape). The recorded escape time gives a sample value of the subjective threshold for that particular trial. In this way, independent samples of the subjective threshold are measured. In the method of constant stimuli, the heating is kept on only for a prescribed duration (*i.e.*, the stimulus is prescribed). The observed outcome is binary: escape or no escape. In the inference, we modeled the subjective threshold as a Weibull distribution described by shape parameter $\beta $ and scale parameter $\alpha $. The actual distribution of the subjective threshold may deviate from the assumed model.

We examined four methods for inferring the median subjective threshold: 1) sample median, 2) maximum likelihood estimation (MLE) with 2 unknown variables ( $\alpha $ and $\beta $ are both unknown), 3) MLE with 1 unknown variable ( $\beta $ is given and $\alpha $ is unknown), and 4) adaptive Bayesian method. Methods 1 - 3 are based on independent samples of subjective threshold measured in the method of limits. Out of methods 1 - 3, method 1 does not assume any distribution form; not surprisingly it has the largest inference error. Method 3 assumes both the Weibull distribution form and the exact value of $\beta $ ; it has the smallest inference error. However, when the assumed $\beta $ value is wrong or the actual distribution deviates from the assumed Weibull model, method 3 produces an incorrect result.

A large part of our study was focused on the adaptive Bayesian method, which assumes a Weibull model distribution and uses it to extract information from binary outcomes observed in the method of constant stimuli. Based on the Monte Carlo simulation results, we draw several key conclusions about the adaptive Bayesian method.

· Even when the actual psychometric function deviates from the assumed model, the inferred $\text{med}\left(Z\right)$ in the adaptive Bayesian method converges to the true value.

· If we choose to infer $E\left(Z\right)$ (the mean subjective threshold) in the adaptive Bayesian method, this nice convergence property is lost.

· At any fixed *N* (number of trials), the actual relative error of the adaptive Bayesian method is mainly influenced by the width of the actual distribution of *Z*: a wider actual distribution leads to a larger relative error.

· When the width of the assumed model distribution is wider than that of the actual distribution, the actual relative error is fairly insensitive to the inconsistency. In contrast, when the width of the assumed model distribution is narrower than that of the actual distribution, the actual relative error increases significantly with the inconsistency.

· The predicted relative error from the posterior standard deviation is mainly influenced by the width of the assumed model distribution. Using a model distribution that is too narrow produces both a large actual error and a false sense of accuracy (small predicted error). On the other hand, using a model distribution that is too wide yields a predicted error that is too conservative.

· When facing an uncertainty in the actual distribution width, the best strategy is to select a smaller $\beta $ in the Weibull model to accommodate the wider end in a possible range of the actual distribution width.

· The actual errors of the adaptive Bayesian method are indistinguishable from those of the sample median, and are fairly independent of the prior used.

In summary, the adaptive Bayesian method is as efficient as the sample median for estimating the point of subjective equality in psychophysical experiments. This efficiency of the Bayesian method is remarkable since the sample median requires measuring full sample values of the subjective threshold in the method of limits, whereas the adaptive Bayesian method utilizes only the binary outcomes observed in the method of constant stimuli. In addition, the Bayesian method achieves this efficiency even when the actual psychometric function deviates from the assumed model. In the adaptive Bayesian method, the error predicted from the posterior standard deviation is unreliable; it does not reflect the actual error. The actual error is mainly determined by the actual distribution of *Z* while the predicted error is dominated by the assumed model. To minimize the actual inference error when the actual distribution width is uncertain, it is best to use a wider model distribution.

Acknowledgements and Disclaimer

The authors acknowledge the Joint Intermediate Force Capabilities Office of U.S. Department of Defense and the Naval Postgraduate School for supporting this work. The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

References

[1] Waskom, M., Okazawa, G. and Kiani, R. (2019) Designing and Interpreting Psychophysical Investigations of Cognition. Neuron, 104, 100-112. https://doi.org/10.1016/j.neuron.2019.09.016

[2] Kuroda, T. and Hasuo, E. (2014) The Very First Step to Start Psychophysical Experiments. Acoustical Science and Technology, 35, 1-9. https://doi.org/10.1250/ast.35.1

[3] Douglass, D.K., Carstens, E. and Watkins, L.R. (1992) Spatial Summation in Human Thermal Pain Perception: Comparison within and between Dermatomes. Pain, 50, 197-202. https://doi.org/10.1016/0304-3959(92)90161-4

[4] Defrin, R. and Urca, G. (1996) Spatial Summation of Heat Pain: A Reassessment. Pain, 66, 23-29. https://doi.org/10.1016/0304-3959(96)02991-0

[5] Parker, J.E., Nelson, E.J., Beason, C.W. and Cook, M.C. (2017) Effects of Variable Spot Size on Human Exposure to 95-GHz Millimeter Wave Energy. Technical Report, AFRL-RH-FS-TR-2017-0017.

[6] Wang, H., Burgei, W.A. and Zhou, H. (2020) A Concise Model and Analysis for Heat-Induced Withdrawal Reflex Caused by Millimeter Wave Radiation. American Journal of Operations Research, 10, 31-81. https://doi.org/10.4236/ajor.2020.102004

[7] Herrick, R.M. (1967) Psychophysical Methodology: VI. Random Method of Limits. Perceptual & Motor Skills, 24, 915-922. https://doi.org/10.2466/pms.1967.24.3.915

[8] Herrick, R.M. (1973) Psychophysical Methodology: Comparison of Thresholds of the Method of Limits and of the Method of Constant Stimuli. Perception & Psychophysics, 13, 548-554. https://doi.org/10.3758/BF03205818

[9] (2009) Method of Constant Stimuli. In: Binder, M.D., Hirokawa, N. and Windhorst, U. (Eds.), Encyclopedia of Neuroscience, Springer, Berlin, Heidelberg.

[10] Watson, A.B. and Pelli, D.G. (1983) QUEST: A Bayesian Adaptive Psychometric Method. Perception & Psychophysics, 33, 113-120. https://doi.org/10.3758/BF03202828