The amount of trading data in finance has exploded thanks to the continuing progress of high-frequency techniques. This forces practitioners to use ever more state-of-the-art algorithms to deal with this overwhelming amount of information. Computers and algorithms are increasingly efficient, but decision making is still based on both the quantity and the quality of information. Thus, errors and speculation that can make the financial market toxic, i.e. conducive to crashes, are still possible. Past examples, such as the “Flash Crash” of May 6, 2010, have shown that algorithmic trading in finance has introduced a new kind of crash characterized by its suddenness. Such quick crashes seem dangerous because of a kind of inherent unpredictability. However, a theoretical framework to model this new phenomenon exists.
Easley, Engle, O’Hara and Wu designed a model of the high-frequency financial market based on flows of informed and uninformed traders. In this model, informed traders are aware of the future evolution of the price and thus of which decision to take (buy or sell). The authors managed to show that information is a key parameter of the spread between ask and bid prices, as they demonstrate that the probability of being informed within their theoretical framework is proportionally linked to it. They named this key parameter the Probability of Informed Trading (PIN). A high value of the PIN is an indicator of the level of toxicity of this high-frequency trading market, as it would mean it relies on too many informed traders. Later, Easley, Lopez de Prado and O’Hara designed a tool, nicknamed Volume-synchronized Probability of Informed Trading (VPIN), supposed to approximate the PIN. It appeared it could have predicted the “Flash Crash” of May 6, 2010 a few hours before it happened. A number of papers have been written, and it has been proposed to use it for regulation through a VPIN contract. However, critics pointed out some flaws, questioning its reliability. For example, Andersen and Bondarenko have shown that the VPIN is quite sensitive to the starting point at which one begins computing it on a data set. This indeed calls the VPIN's prediction quality into question. Moreover, they have also shown that the VPIN is sensitive to other parameters, such as the trade classification rule used, or how one defines the average daily volume of trades. Changing the classification rule may drastically change the VPIN behavior. Tomas Pöppe, Sebastian Moos and Dirk Schiereck arrived at the same conclusions with a different approach: using a different classification rule can change the VPIN's prediction power toward a crash (in their paper, a German blue-chip stock). Besides, controlling ex-ante parameters seems to give poorer prediction quality.
This point has also been checked by D. Abad, M. Massot and R. Pascual. Controlling for ex-ante realized volatility and trading intensity, as did T. G. Andersen and O. Bondarenko, the prediction quality seems to vanish. Going deeper, they have also underlined that it is not obvious how one should define a VPIN prediction, analyzing toxic and non-toxic halts, as well as toxic events, more precisely. Furthermore, Torben G. Andersen and Oleg Bondarenko interpret the VPIN as being too sensitive to trading intensity. They have also explained that the VPIN metric is sometimes unexpectedly correlated with other usual ones (such as the VIX or RV). Moreover, it has been shown that the VPIN does not approximate the PIN, as the PIN was built on a time-clock theoretical framework, and the VPIN on a volume-clock paradigm. In this study, we propose another way to estimate the PIN within its original time-clock framework.
The purpose of this paper is to improve the PIN theoretical framework. Some concerns have been raised about its theoretical foundations. For this reason we assess, step by step, all the different theoretical ideas of the PIN model. More precisely, we first want to make explicit the entire theoretical framework of the PIN and VPIN models in order to get a better view of all the subtleties of their assumptions. This secondly makes it possible to point out some approximation errors in the formula used to approximate the PIN and to propose an exact way to compute it. In the following, we first recall the PIN model (Section 2). Second, after introducing the original VPIN ideas, we analyze the original first-order approximation and then recall the difference between the time-clock and volume-clock paradigms (Section 3). Finally, we suggest another way to compute the PIN (Section 4).
2. The PIN Model
2.1. The Time-Clock Framework
The Probability of Informed Trading (PIN) is computed on a simple model of information among traders. Let’s describe it with the following tree (Figure 1), originally designed in . Suppose that, prior to the beginning of any trading day, Nature determines whether an information event relevant to the value of the asset occurs. Suppose information events are independently distributed and occur with a Bernoulli probability of value , which can be seen on the first two branches on the left-hand side of the tree. These events are good news with a Bernoulli probability (i.e. signal High), or bad news with probability (i.e. signal Low). After the end of trading on any day, and before Nature moves again, the full information value of the asset is realized. Hence, for any of the three leaves of the tree in Figure 1, an informed trader would know which action to take. Trade arises from both informed traders (those who have seen a signal) and uninformed traders. On any day, arrivals of uninformed buyers and uninformed sellers are described by independent Poisson processes of respective intensities and . Individuals trade a single risky asset and money with a market maker over trading days. Within any trading day, time is continuous and is indexed by . Let’s define, for and a given trading day, and as the events that a sell order and a buy order respectively arrive at time t. Let be the market maker’s prior belief about the events “no news” (n), “bad news” (b) and “good news” (g) at time t1. Within this model we compute the spread at t, which is equal to , where and are the ask and bid at time t (respectively the minimum price a seller is willing to receive and the maximum price a buyer
Figure 1. A tree summarizing the theoretical framework.
is willing to pay). Within this framework, is the expectation of the asset value, denoted , conditional on the history prior to t and on a sell order . Similarly, is the expectation of conditional on the history prior to t and on a buy order . Let us note , and respectively the value of the asset under the conditions of good news, no news and bad news. We have of course the following inequalities: .
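The day-level mechanism just described can be sketched in a few lines. This is a minimal Python sketch, not the authors' code; the parameter names alpha (event probability), delta (bad-news probability), mu (informed arrival rate) and eps_b, eps_s (uninformed arrival rates) are our own labels, since the source symbols did not survive extraction:

```python
import math
import random

def _poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the moderate rates used here."""
    if lam <= 0:
        return 0
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        p *= rng.random()
        k += 1
    return k - 1

def simulate_day(alpha, delta, mu, eps_b, eps_s, rng):
    """One trading day of the tree: Nature first draws whether an information
    event occurs (prob. alpha) and whether it is bad news (prob. delta);
    informed traders (rate mu) then join the uninformed flow on one side.
    Returns the day's (buys, sells)."""
    event = rng.random() < alpha
    bad = event and rng.random() < delta
    good = event and not bad
    buys = _poisson(eps_b + (mu if good else 0.0), rng)
    sells = _poisson(eps_s + (mu if bad else 0.0), rng)
    return buys, sells
```

Averaged over many days, the expected buy count is eps_b plus the informed contribution alpha·(1−delta)·mu, and symmetrically for sells.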
2.2. Computation of the Spread
Let us now make the content of more explicit. Let’s compute the bid; the ask follows exactly the same idea2:
It can be re-written this way using the different possibilities of the tree on an event:
Let’s compute the first term ; the others follow the same idea. Using Bayes’ rule one finds the following:
so, by decomposing the denominator:
Let’s have a look at the term , which is the probability at t that there will be a sell order at t under the constraint of no news. is a transition rate. To compute it, one must first calculate the transition probability for a strictly positive time length, say h. Formally, if one notes the number of jumps of the corresponding Poisson process up to t under the condition of no event, we know its intensity is under the constraint of no news. For any strictly positive and small enough h, we take the limit of the number as h goes to zero while remaining strictly positive, which defines the transition rate. At first order in h, one finds:
Dividing by h, one indeed recovers the intensity of the Poisson process, which is a special case of a Markov jump process. Applying the same reasoning to the other cases (“bad event”, “good event”), we finally have the following:
As the probabilities with sum to one we get the following expression:
Finally the bid has this expression:
With the same reasoning the ask has this expression:
Actually one may simplify these expressions a bit, as the expectation of V has the following form:
So the spread equals:
In the special case where one finds the following simple form:
If we make the hypothesis that is constant, then we have the following:
Thus, with the assumptions: , i.e. and , the PIN equals the following:
We will keep the same hypotheses for the rest of the paper.
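The expression just obtained is the classical PIN formula, the ratio of the expected informed arrival rate to the total expected arrival rate, PIN = αμ/(αμ + εb + εs), which under εb = εs = ε reduces to αμ/(αμ + 2ε). A one-line helper (parameter names are ours, since the source symbols did not survive extraction):

```python
def pin(alpha, mu, eps_b, eps_s):
    """Probability of informed trading: expected informed arrival rate
    (alpha * mu) over the total expected arrival rate."""
    return alpha * mu / (alpha * mu + eps_b + eps_s)
```

For instance, with alpha = 0.5, mu = 100 and eps_b = eps_s = 50, the PIN is 1/3.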
3. Analysis of the First-Order Approximation within the Time-Clock Framework
The idea behind the VPIN is to find an easy way to compute the last expression of the PIN above using a volume-clock paradigm. More precisely, it aims at finding a way to easily compute the expressions obtained for the numerator and denominator ( ). The key heuristic behind the VPIN is to take advantage of a supposedly good property of the expectation of the absolute difference between Poisson random variables within a volume-clock framework to approximate , i.e.: , where X and Y are Poisson variables. We will see that this heuristic does not really make it possible to conclude as expected. More precisely, in the first subsection we will see which idea has been used to approximate the PIN within a time-clock framework. Secondly, we will see that the first-order approximations used are not correct, as the framework does not verify a required hypothesis. We analyze more precisely the first-order approximations which can be made in the time-clock framework. In the third subsection, we describe the volume-clock framework and explain why its hypotheses lead to different results compared to the time-clock framework. Finally, we illustrate our results with simulations.
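A quick Monte Carlo experiment already suggests why the heuristic is fragile. This is our own illustrative sketch (parameter names are ours), estimating E|S − B| under the tree model and comparing it with the heuristic value α·μ:

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler (fine for moderate intensities)."""
    if lam <= 0:
        return 0
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        p *= rng.random()
        k += 1
    return k - 1

def mean_abs_imbalance(alpha, delta, mu, eps, n_days=20000, seed=1):
    """Monte Carlo estimate of E|S - B| under the tree model with
    eps_b = eps_s = eps, to compare against the heuristic value alpha * mu."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_days):
        u = rng.random()
        if u < alpha * delta:               # bad-news day: informed sellers
            s, b = poisson_draw(eps + mu, rng), poisson_draw(eps, rng)
        elif u < alpha:                     # good-news day: informed buyers
            s, b = poisson_draw(eps, rng), poisson_draw(eps + mu, rng)
        else:                               # no-news day
            s, b = poisson_draw(eps, rng), poisson_draw(eps, rng)
        total += abs(s - b)
    return total / n_days
```

With, say, alpha = delta = 0.5, mu = 20 and eps = 30, the estimate lands visibly above alpha·mu = 10 (the no-news days alone contribute a strictly positive |S − B|, and Jensen's inequality bounds the event-day term from below by mu), illustrating that the identification E|S − B| ≈ αμ can be imprecise.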
3.1. The Design of a New Heuristic
In this first subsection we see which idea has been used to approximate the PIN within a time-clock framework. We now refer to the related work of Easley et al. . Considering the previous framework, the probability to obtain, over the same time , S sells and B buys for a day t of length one is:
So, if one notes the total number of trades for this day, one finds, conditioning on all the possibilities of the model:
S and B are independent Poisson processes, so one can, in each case, sum their respective intensities to obtain new Poisson processes. Thus:
Note the following:
· Remark 1: the time period is fixed, thus S and B can take any possible positive integer values, which would not be the case if S + B were fixed.
· Remark 2: intensities are rates, thus the equation has a meaning because one implicitly multiplies it by one (trading day).
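The day-level likelihood above is just a three-branch mixture of Poisson products, which is simple to evaluate. A sketch under our notation, writing delta for the probability that an event is bad news (the source's symbol did not survive extraction):

```python
import math

def p_poisson(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def p_day(s, b, alpha, delta, mu, eps_b, eps_s):
    """Likelihood of observing S = s sells and B = b buys over a unit-length
    day, mixing the three branches of the tree (no news / bad news / good news)."""
    no_news = (1 - alpha) * p_poisson(s, eps_s) * p_poisson(b, eps_b)
    bad = alpha * delta * p_poisson(s, eps_s + mu) * p_poisson(b, eps_b)
    good = alpha * (1 - delta) * p_poisson(s, eps_s) * p_poisson(b, eps_b + mu)
    return no_news + bad + good
```

Summed over all (s, b), this mixture integrates to one, which is an easy sanity check on the branch probabilities.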
The authors propose to compute the expectation of the absolute value of the random number K = S − B with an approximation. This is the intuition behind the computation of the VPIN. They refer to the paper of Katti but do not make any calculation explicit. They assert that holds thanks to a first-order approximation, without explaining what this means. Let’s first describe the content of this reference and the assumptions it makes. Then let’s describe which computations are involved within this time-clock framework.
3.1.1. Katti’s Reference Assumptions
The reference proposes several ways to compute the expectation of the absolute value of the difference of two random variables that follow the same discrete positive distribution but with possibly different parameters. The case of Poisson processes is treated. Let’s describe the beginning of Katti’s paper . Let’s note and two Poisson random variables of intensities and . We would like to compute the number . One can write the following:
where the summations are over and and . Then, one can develop it as follows:
with and the two different sums. In order to simplify the calculus and use a trick, the author makes the following assumption: , where is a constant no longer linked to nor . It thus implies a relation between the two variables (for example ). Thanks to this assumption he can do the following:
where is a confluent hypergeometric function. Operating by ( ) it finally leads to:
The particular case of cannot be treated with this trick, because it would imply that equal numbers are linked by an inverse relation so that the product is independent of . But then is no longer a constant in the main parameters and , so applying the operator does not give the previous results. One may use here another reference, cited by Katti . We will detail the same ideas later for the VPIN framework specifically. Anyway, this case leads to the following:
where is a modified Bessel function of first kind.
3.1.2. Using the References’ Work as Far as Possible to Approximate the VPIN in a Time-Clock Framework
First, let’s put ourselves in the context where we only have differences of Poisson processes. It is pretty simple; one just has to condition the expectation of for each case:
Then, recall . S and B are, under the model assumptions, Poisson processes describing the numbers of sells and buys in one trading day. We only need two different kinds of Poisson processes to describe the mixture of Poisson processes resulting from informed and uninformed traders in each case (“good event”, “bad event” and “no event”). Let’s note them as follows: , , and , with S and B labelling sells and buys. One then finds:
As all the Poisson processes are independent, one can sum them to produce new Poisson processes, as follows3:
One can thus sum the two first terms and obtain the following:
One has to treat finally two different cases:
· different intensities: first term
· same intensities: second term
3.2. How to Reach a First-Order Approximation
In this subsection we will first see that the main assumption needed to use Katti’s result does not hold, so it cannot be used to approximate the PIN. Therefore, to approximate the PIN following the authors’ intuition, we then describe the following two steps:
· one way to reach the numerator's exact value consists in using Ramasubban’s ideas ,
· a first-order asymptotic analysis involves separate cases to study the sensitivity of the approximation to the parameters’ values.
3.2.1. Katti’s Assumptions Are Not Met in the New Setting
We have seen that Katti’s reference uses the assumption that the Poisson intensities are linked by a relation of the form , where is independent of these parameters. Here the respective parameters would be and . The product clearly has no reason to be a constant. One could create some contrived cases, but the model is not meant to be limited to them (indeed, one may for example want to fit the PIN parameters by maximising a likelihood, as in ). Thus the assumption is not met and the reference cannot be invoked to say that at first order, as was done in for example.
3.2.2. Computation of
Anyway, let’s nevertheless do the calculations to compute . We follow the same natural ideas as T. A. Ramasubban in his paper, which treats only the case of equal Poisson intensities . We begin with:
Let’s start with the easier calculation: the case where Poisson intensities are equal.
As all the sums exist separately, we can split them into two different ones:
One recognizes here a modified Bessel function of the first kind: for an integer n and, say, a scalar x, . Here we obtain:
which is the result of Ramasubban’s quoted paper. The computation with different intensities follows the same idea, except that the symmetry of the two initial sums is broken, so we have to compute them separately.
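For equal intensities, the standard closed form consistent with Ramasubban's result is E|X − Y| = 2λ e^(−2λ) [I₀(2λ) + I₁(2λ)]. A self-contained sketch (the Bessel function is evaluated by its power series rather than an external library):

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function of the first kind I_n(x), via its power
    series; 60 terms is ample for the moderate arguments used here."""
    return sum((x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def expected_abs_diff(lam):
    """Closed form for E|X - Y| when X, Y are i.i.d. Poisson(lam):
    2 * lam * exp(-2*lam) * (I_0(2*lam) + I_1(2*lam))."""
    x = 2.0 * lam
    return x * math.exp(-x) * (bessel_i(0, x) + bessel_i(1, x))
```

For λ around 10, the value is close to the normal approximation 2·sqrt(λ/π), as one would expect for the absolute value of a symmetric difference with variance 2λ.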
Let’s calculate the first sum and then the second:
which separates as follows as all sums exist separately:
Replacing the first sum of the right-hand side by modified Bessel functions of the first kind, we finally find:
For the second sum, we do an equivalent calculus and find the following:
If we put together all the terms we find:
Rearranging the last two sums of the left-hand side of the equality, we finally get:
With an arbitrary time length t for a trading period, we find:
3.2.3. Analysis of the First-Order Approximation
Recall that and are rates of uninformed and informed traders per day (in the original PIN model). Thus, these parameters are typically large: this is the first intuition behind the first-order approximation. Moreover, Hankel derived an asymptotic expansion of the modified Bessel function of the first kind as follows:
We first apply this expansion to under the conditions and , as we consider that there are many informed and uninformed traders per day (compared to 1). We find the following:
Let’s now distinguish these three cases:
· and are of same order,
If and are of same order,
in this case: , thus one can neglect the corresponding term. We obtain:
if then it reduces to:
If , then:
Thus, we can see that the first-order approximation depends strongly on:
· the respective values of and ,
· and, in many cases, on the weighted average, against a given Poisson distribution, of the difference between cumulative distribution functions from opposite parts of the tail of another Poisson distribution, i.e.:
The first-order approximation proposed in is not incorrect, as we will see in the simulations, but it is sometimes imprecise.
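Hankel's leading-order expansion used above, I₀(x) ≈ e^x / sqrt(2πx), can be checked numerically against the power series; this is a small self-contained sketch, not code from the paper:

```python
import math

def bessel_i0_series(x, terms=60):
    """I_0(x) via its power series; 60 terms is plenty for x up to ~40."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def bessel_i0_hankel(x):
    """Leading term of Hankel's large-x expansion: I_0(x) ~ e^x / sqrt(2*pi*x)."""
    return math.exp(x) / math.sqrt(2.0 * math.pi * x)
```

The relative error of the leading term behaves like 1/(8x), i.e. a few tenths of a percent already at x = 40, which is why the approximation is reasonable when the daily trader rates are large.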
3.3. The Volume-Clock Paradigm: The Implicit Change of Model Assumptions
In this subsection, we describe the volume-clock framework and explain why its hypotheses lead to a different value of the PIN compared to the time-clock framework. More precisely, we first describe the new assumptions. Secondly, we make the computations within this new framework, which lead to a new value of the PIN.
3.3.1. The New Assumptions
In  D. Easley, M. de Prado and M. O’Hara describe a new model to compute easily the VPIN and therefore the PIN using the above previous results:
· , supposedly at first order,
They introduce the paradigm of volume clock and time bars. Let’s first describe it and see that the assumptions are implicitly changed, but this change is ignored. The idea is pretty simple. Consider a trade described by a time series of prices, say , labelled with time t. First, they package trades into objects called “bars” that have a fixed time length, i.e. they aggregate the time series into, for example, one-minute time bars. This is equivalent to a sampling of the time series. Each bar is a kind of new trade with several rules to determine its price. Second, they aggregate these time bars to form “buckets” of fixed volume. Say these buckets have a volume V.
· Remark 1: nothing ensures that buckets will have a fixed volume size. Indeed, each time bar is sensitive to trading intensity. The last time bar can often be too big to be aggregated into a fixed-size bucket. This means that if one wants to force the bucket size to be constant, then many time bars won’t be of one-minute length. If, on the contrary, one wants to preserve a constant time size, many buckets might not be of constant volume size.
Suppose anyway that everything is ideal and that each bucket is of constant volume. The authors note the label of a bucket of volume V, , and and respectively the total numbers of sells and buys that occurred in this bucket. They then refer to the result of their previous work : . But even though the result does not hold, as previously shown, one must note the following:
· First: here the bucket is constant in volume, thus the filling time is random. It is a really strong hypothesis, as we then have that holds almost surely,
· Second: they indeed use the result to say that, as , the expectation equals .
· But finally, one should remark that this equality lacks a time, as we are talking of rates of traders. In the first model, the time was one day, and implicitly one would multiply rates by one day within the time-clock framework. Here, in the volume-clock framework, one no longer controls time. One should take into account the bucket-filling time, which is a new random variable. At first glance, the expression is inhomogeneous and, even if right, it is far from trivial.
Indeed, the authors specify: “recall that we divide the trading day into equal-sized volume buckets and treat each volume bucket as equivalent to a period for information arrival”. This is misleading. Recall that in the initial model time is fixed (one day) and thus volume is random. Here one has the contrary: volume is fixed and time is thus random. Let’s detail the calculus a bit more with the new assumptions. To do so, let’s specify the new implicit framework a bit more.
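Remark 1 above is easy to see in code: forcing buckets to a fixed volume requires splitting whatever trade (or time bar) overflows the bucket, so the time span of each bucket becomes random. A minimal sketch of such a bucketing routine (our own illustration, not the authors' implementation):

```python
def volume_buckets(trade_volumes, bucket_size):
    """Group a stream of trade (or time-bar) volumes into buckets of fixed
    total volume, splitting any trade that overflows the current bucket.
    Returns a list of buckets; each sums to bucket_size, except possibly a
    final partial one."""
    buckets, current, filled = [], [], 0
    for v in trade_volumes:
        while filled + v >= bucket_size:   # this trade completes a bucket
            take = bucket_size - filled
            current.append(take)
            buckets.append(current)
            current, filled = [], 0
            v -= take
        if v > 0:                          # remainder starts the next bucket
            current.append(v)
            filled += v
    if current:                            # leftover partial bucket
        buckets.append(current)
    return buckets
```

Running this on the stream [3, 5, 4, 10, 2] with bucket_size = 6 yields four full buckets spanning different numbers of trades, which is exactly the point: equal volume, unequal (hence random) filling time.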
3.3.2. A New Computation of
In fact we now want to compute , as the bucket volume is fixed. Note the filling time of the bucket and then note the following:
with , , and the Poisson processes of the total sells up to t and , and and the total buys up to t and . One has the following in distribution:
where and are Poisson processes describing total sells and buys in the bucket labelled with . The variables being independent, we can thus write the following:
One must note that there is still the constraint of the volume of a bucket:
Thus, imposing one value, imposes the other. Let’s calculate . First, one can condition the events: “good event” (g), “bad event” (b) and “no event” (n):
On each event, one knows the distribution of and . One can then re-write it the following way4:
The first two terms, corresponding to “good” and “bad” events, are equal in distribution, which is why we have:
Before going further, let’s write down the joint probability density function of, for example, sells and buys and the respective bucket-filling time t-t’ in the case of a bad event. Let’s note it . Now, we summarize and refer to the ideas of the proof of Kin and Le . Remark first the following:
and as follows a Poisson law of intensity , then classically follows an Erlang law with parameters . Second, as almost surely, we have the following equalities:
We know and, considering for example a continuous bounded function g, one can easily carry out the computation using the fact that . We find a binomial law, i.e.:
The “no event” case is similar. We thus find the following:
And after integrating over the random variable t-t’:
Taking the previous joint probability into account, we are thus in fact computing the following expectations of, say, X and Y:
Moreover, if x follows the binomial distribution whose p.d.f. is , then using Jensen’s inequality for the concave function we have:
· for large enough m and p differing from
Thus, for large enough V:
Thus, the VPIN metric approximates the following for large enough n, as shown by Kin and Le :
which is indeed different from .
3.4. Some Simulation Verification
We present here some simulation verification. First we present the framework and the experiments tested. Second, we present the results.
3.4.1. Framework and Experiments Tested
For purposes of illustration, we compare the empirical form of with the PIN and the asymptotic limit5 found within the time-clock framework for different cases of and . It is pretty easy to do: controlling ex ante all the parameters of the model, one just has to generate the appropriate Poisson processes to obtain all the values. We illustrate the results with three examples:
· and of the same order as : we took and ,
· and of the same order as : we took and ,
· of the same order as : we took6 and .
· We compute 20 values for each choice of and in the three cases above,
· For each of the 20 values, the empirical expectations are computed as an average over 10,000 values,
· To compute the sum , considering the values of and , we have bounded the sum at , where the probability values become very small.
In each case, we first plot the empirical numerator , , and the asymptotic limit found (Figure 2, Figure 4 and Figure 6). Second, we plot , the PIN (i.e. ) and the asymptotic limit divided by (Figure 3, Figure 5 and Figure 7).
In Figure 2, the first-order and asymptotic estimations are very close.
Figure 2. Empirical, asymptotic and first order numerators.
Figure 3. Empirical, asymptotic and first order approximations of the PIN.
Figure 4. Empirical, asymptotic and first order numerators.
Figure 5. Empirical, asymptotic and first order approximations of the PIN.
Figure 6. Empirical, asymptotic and first order numerators.
Figure 7. Empirical, asymptotic and first order approximations of the PIN.
4. Another Suggestion to Compute the PIN
In this section, we propose another way to compute the PIN. Indeed, as seen in the last section, the first-order approximation of the PIN within the time clock is not always precise and its theoretical foundation is not correct. Furthermore, the alternative we derived is only asymptotic and not easy to compute. Hence we propose an exact formula to compute the PIN in the time-clock framework. More precisely, in the first subsection we describe how to compute the numerator, and then the PIN, exactly. Secondly, we describe how one can numerically design at least one methodology to compute the PIN. Finally, we present some simulation verification of our results.
4.1. One PIN Upgrade
In this subsection, we detail how to compute exactly the PIN. Recall that the probability to obtain S sells and B buys during a period of length t is:
Recall that to compute the PIN we have the assumption: , thus we have:
So, if one notes the total number of trades for this day, we find:
and we even have:
So, to estimate the PIN denominator, one can first use, for an arbitrary time period, an average of S, B or TT. Let’s work with S and take a time period of length t. Let’s estimate the numerator . To do this, we first write the marginal probability function of obtaining S sells in a time period of length t, and second we compute its first three moments. Third, we explain how to compute and hence the numerator, which finally leads to a new PIN formula.
4.1.1. Marginal Probability Function
The probability to obtain S sells during a time period of length t is the following:
4.1.2. Computation of First Three Moments
Let’s compute the moment-generating function of this process. We will estimate the numerator using relations between moments. Let u be a real value, let be the random variable representing the volume of sells and t the fixed time period associated. We have:
Let’s compute the first three moments of :
· First moment:
· Second moment:
i.e. we have the classic decomposition:
· Third moment:
i.e. we just wrote:
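The three raw moments above admit a simple closed form once one notices that, under εb = εs = ε, the daily sell count is a two-component Poisson mixture: Poisson(ε) unless the day carries bad news, in which case it is Poisson(ε + μ). A sketch under our notation, with q = αδ the bad-news probability:

```python
def mixture_moments(alpha, delta, mu, eps):
    """First three raw moments of the daily sell count S, assuming
    eps_b = eps_s = eps: S ~ Poisson(eps) with probability 1 - q and
    S ~ Poisson(eps + mu) with probability q = alpha * delta."""
    q = alpha * delta

    def raw(lam):
        # Raw moments of Poisson(lam): E[X], E[X^2], E[X^3].
        return lam, lam + lam ** 2, lam + 3 * lam ** 2 + lam ** 3

    m0, m1 = raw(eps), raw(eps + mu)
    return tuple((1 - q) * a + q * b for a, b in zip(m0, m1))
```

From these, the mean is ε + qμ and the variance is ε + qμ + q(1 − q)μ², which is the decomposition the estimation of α below exploits through the skewness.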
4.1.3. Estimation of α
Remark the following:
Then with the same idea let’s compute the following:
and we know that and that , so:
If we use again the formula, we can then replace by :
If we rearrange the expression a bit on the numerator and denominator of the left-hand side of the equation, we remark the following:
· Remark 1:
· Remark 2:
Introducing the skewness and the following notations: , and , we finally obtain:
Skewness, standard deviation and expectation are measured from data. To estimate , we thus just have to solve the following second-order equation in :
The discriminant is positive: . As is a probability, we finally find:
which is indeed between 0 and 1.
4.1.4. Estimation of
We know that , so let’s replace the on the right-hand side of the equality (not ) by the previous expression. We then estimate . With the previous notations, we finally obtain the following:
4.1.5. A New PIN Formula
Finally we obtain the following equivalent exact formula:
or after simplifying a bit:
One then just has to estimate, over an arbitrary time length t, m, and to estimate the PIN number. The difficulty then lies in estimating, over this time period, the volume of trades in each direction. We describe below a possible framework to compute this number. One can verify numerically that these two formulas give exactly the same PIN values.
4.2. A New Framework to Compute the PIN
In this subsection we explain how at least one framework can be designed to compute the PIN. We would like to compute the PIN number from a time, say t, and a period length, say , i.e. from t to . Within the previous framework, we obviously have:
as all the numbers , and are defined on this time t and period . And we also have:
where m, and are calculated for the volume of sells between t and .
Thus, two things must be implemented to estimate the PIN well:
· the empirical averages implicitly behind m, and : we will have to put some hypotheses on the time series of volumes to use classic theorems,
· the volume of sells: one needs a classification model to estimate, over a given amount of time, the number of sells within the total volume of trades.
Estimation of m, σ and γ
We would like to use the law of large numbers. We basically need independent and identically distributed random variables. Here, note the Poisson process of sells at time t (i.e. the number of sells at t). Then we have:
According to the model, Nature chooses the parameters , and independently each day. So is a sequence of (successive, non-overlapping) independent random variables. But the are not identically distributed; nothing guarantees it. Indeed, Nature’s choices won’t necessarily be the same, and so , and . To handle this, one can do the following. We need a statistically significant mean. Within the time period 7, Nature’s choice is the same, so considering n intervals of length within , the random variables are then independent and identically distributed. For n high enough, the following approximations hold:
Thus the choices to make here are:
· the time length ,
· the number n of sub-intervals to have a precise average.
To reduce the standard deviation of , one direct way is to take the average of the PIN estimated using the volume of sells (let’s now note it ) and the PIN estimated using the volume of buys (let’s note it ). Indeed, the previous calculations are exactly the same if one had used volumes of buys instead of sells. And within the PIN framework, and are independent random variables. So:
and so, an estimate of is a function of the process ( ) only, and the same function of the process ( ). Thus, these two estimates being independent, if one notes the standard deviation using only the process , the standard deviation using both processes equals:
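Since the sell-based and buy-based estimates are independent with the same variance, averaging them should divide the standard deviation by sqrt(2). A toy numerical illustration of this variance reduction (our own sketch, using Gaussian noise as a stand-in for the estimator's sampling error):

```python
import random

def estimator_std(use_both, n_rep=4000, seed=3):
    """Toy illustration: averaging two independent, equally noisy estimates
    divides the standard deviation of the result by sqrt(2)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_rep):
        a = rng.gauss(0.0, 1.0)   # estimate from the sell process
        b = rng.gauss(0.0, 1.0)   # independent estimate from the buy process
        vals.append((a + b) / 2 if use_both else a)
    m = sum(vals) / n_rep
    return (sum((v - m) ** 2 for v in vals) / n_rep) ** 0.5
```

The ratio estimator_std(True) / estimator_std(False) comes out close to 1/sqrt(2) ≈ 0.707.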
4.3. Some Simulation Verification
We finally present some simulation verification. First we describe its framework. Second we present the results. The parameter values tested are exactly the same as in the last framework, as we would like to compare the previous results with the values of our new formula. The only difference, which slightly changes our framework, is that computing the new formula needs more samples. We detail this now.
4.3.1. Framework and Experience Tested
For purposes of illustration, we compare the empirical form with the PIN and the new formula8 found within the time-clock framework for different cases of and . It is pretty easy to do: controlling ex ante all the parameters of the model, one just has to generate the appropriate Poisson processes to obtain all the values. We illustrate the results with three examples:
· and of the same order as : we took and ,
· and of the same order as : we took and ,
· of the same order as : we took9 and .
· We compute 20 values for each choice of and in the three cases above,
· For each of the 20 values, for a choice of and , we generate 1,000,000 Poisson values, which we divide into 100 consecutive intervals of 10,000 values. For each of the 100 intervals we compute empirical averages to approximate the mean m, standard deviation and skewness . We then compute an approximation of the PIN as an average of these 100 values10.
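The interval-averaging scheme just described can be sketched in a few lines; this is our own generic illustration (the statistics are computed with population normalization, as a simple choice):

```python
import math

def interval_stats(samples, n_intervals):
    """Split a long sample into consecutive intervals and average the
    per-interval mean, standard deviation and skewness, as in the text."""
    size = len(samples) // n_intervals
    ms, sds, sks = [], [], []
    for i in range(n_intervals):
        chunk = samples[i * size:(i + 1) * size]
        m = sum(chunk) / size
        var = sum((x - m) ** 2 for x in chunk) / size
        sd = math.sqrt(var)
        sk = (sum((x - m) ** 3 for x in chunk) / size / sd ** 3) if sd > 0 else 0.0
        ms.append(m)
        sds.append(sd)
        sks.append(sk)
    n = n_intervals
    return sum(ms) / n, sum(sds) / n, sum(sks) / n
```

Fed with the simulated volumes, the three averaged statistics are then plugged into the new PIN formula.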
In Figure 8, the new formula (NPIN) and the PIN are very close.
In Figure 9 one can see the difference better when no longer changes.
This last case, in Figure 10, illustrates a market where the numbers of informed and uninformed traders are of the same order. The VPIN only very slightly over-estimates the true PIN value.
In every case one sees that the estimate from the new formula is closer than the VPIN one. Incidentally, we have checked that the new PIN formula equals the true PIN formula for any parameters , and of the model.
In this last section, we first present a general summary of our findings. Then we propose suggestions for further research on this topic.
Figure 8. Old (VPIN) and new approximation (NPIN) of the PIN.
Figure 9. Old (VPIN) and new approximation (NPIN) of the PIN.
Figure 10. Old (VPIN) and new approximation (NPIN) of the PIN.
In this study we have analyzed the theoretical foundation of the PIN model and we have shown that its time-clock framework makes it hard to apply the original VPIN heuristic to estimate the probability of informed trading. Indeed, the first-order asymptotics are not that simple to estimate, theoretically and in practice. That is why we propose another way to estimate the PIN, which is theoretically exact and hence more precise than the asymptotic formula, as confirmed by our first tests. Moreover, the study recalls and highlights the difference between the volume-clock and time-clock paradigms, which leads to a different formula for the PIN, and whose respective hypotheses cannot therefore be used simultaneously to approximate the PIN.
Here are some ideas to further study this precise subject:
· test and compare the performance of the new formula within the time-clock framework on real trading data: find locally optimal parameters (n, , trade classification algorithm, …) to maximize prediction quality,
· analyze and assess the stability of the new formula and compare it to other ones.
We thank the Editor and the referee for their comments. Useful guidance and discussions in the LBNL team are gratefully acknowledged.
1We summarize here the theoretical framework as described in . Formally, considering the random variables St and Bt corresponding to the order arrivals of sells and buys, we associate the canonical respective filtrations to later define conditional expectations. They are still noted as the events “St” and “Bt”.
2We use the same notations as the author, distinguishing the events “t” and “ ”.
3The S and B labels no longer matter; to differentiate the Poisson processes of the last expectation we have thus just used labels one and two to distinguish the “no event” case.
4The labels 1, 2, 3, … are used to denote that these are the same distributions, but they are still different random variables.
6This case is trickier, and actually the asymptotic limit is closer to the empirical value than the first-order approximation proposed by the authors, but the trend is not obvious and needs more study. We present here the good case that works fine. Further study may be needed.
7Let’s suppose that choices are made in this time interval, to not bother about possibly overlapping Nature’s choice.
8We use in these simulations for symmetry reasons this formula.
9This case is trickier, and actually the asymptotic limit is closer to the empirical value than the first-order approximation proposed by the authors, but the trend is not obvious and needs more study. We present here the good case that works fine. Further study may be needed.
10This double average equals the traditional VPIN formula as the values are consecutive.