The problem of measurement in quantum mechanics has been defined in various ways, originally by scientists, and more recently by philosophers of science who question the foundations of quantum mechanics. Measurements are described with diverse concepts in quantum physics, such as:
• wave-functions (probability amplitudes) which according to the linear Schrödinger equation involve a unitary and deterministic operator, thus preserving information,
• superposition of states: linear combinations of wave-functions with complex coefficients that carry phase information and produce interference effects, known as the principle of superposition,
• quantum jumps between states accompanied by the “collapse” of the wave-function, which according to Dirac’s projection postulate, in what von Neumann called Process 1, can destroy or create information,
• collapse and jump probabilities given by the square of the absolute value of the wave-function for a given state,
• values for possible measurements given by the eigenvalues associated with the eigenstates of the combined measuring apparatus and measured system, in other words, the axiom of measurement,
• Heisenberg’s uncertainty principle.
The original problem stems from Niels Bohr’s “Copenhagen interpretation” of quantum mechanics since our measuring instruments, which are usually macroscopic objects and treatable with classical physics, can give us information about the microscopic world of atoms and subatomic particles like electrons and photons.
Bohr’s idea of “complementarity” insisted that a specific experiment could reveal only partial information, for example, the position of the particle, whereas “exhaustive” information requires complementary experiments, for example, to determine the momentum of the particle as well, subject to the limits of Heisenberg’s uncertainty principle.
Some of us define the problem of measurement simply as the logical contradiction between two laws describing the motion of quantum systems: the first one talks about the unitary, continuous, and deterministic time evolution of the Schrödinger equation, whereas the second one involves their complete opposite counterpart, i.e., the non-unitary, discontinuous, and indeterministic collapse of the wave-function. John von Neumann saw a problem with two distinct (indeed, opposing) processes.
The mathematical formalism of quantum mechanics does not provide a way to predict when the wave-function stops evolving in a unitary fashion and collapses. Experimentally and practically, however, we can say that this occurs when the microscopic system interacts with a measuring apparatus.
Others define the measurement problem as the failure to observe macroscopic superpositions.
Decoherence theorists, e.g., H. Dieter Zeh and Wojciech Zurek, who use various non-standard interpretations of quantum mechanics that deny the projection postulate―quantum jumps―and even the existence of particles, define the measurement problem as the failure to observe superpositions such as Schrödinger’s cat. They also claim that unitary time evolution of the wave-function according to the Schrödinger wave-equation should produce such macroscopic superpositions.
The physics of quantum information treats a measuring apparatus quantum mechanically by describing its parts as being in a metastable state, like the excited states of an atom: the critically poised electrical potential energy in the discharge tube of a Geiger counter, or the supersaturated water and alcohol molecules of a Wilson cloud chamber. The pi-bond orbital rotation from cis- to trans- in the light-sensitive retinal molecule is another example of a critically poised apparatus.
Excited (metastable) states are poised to collapse when an electron (or photon) collides with the sensitive detector elements in the apparatus. This collapse is macroscopic and irreversible, generally a cascade of quantum events that release large amounts of energy, increasing the (Boltzmann) entropy. But in a “measurement” there is also a local decrease in the entropy (negative entropy or information). The increase in the global entropy is normally orders of magnitude more than the decrease in the small local entropy (an increase in stable information or Shannon entropy) that constitutes the “measured” experimental data available to human observers.
The creation of new information in a measurement follows the same two core processes of all information creation―quantum cooperative phenomena and thermodynamics. These two are involved in the formation of microscopic objects like atoms and molecules, as well as macroscopic objects like galaxies, stars, and planets.
According to the correspondence principle, all the laws of quantum physics asymptotically approach the laws of classical physics in the limit of large quantum numbers and large numbers of particles. Thus, Quantum Mechanics can be used to describe large macroscopic systems.
Does this mean that the positions and momenta of macroscopic objects are uncertain? Yes, it does. Although the uncertainty becomes vanishingly small for large objects, it is not zero. Niels Bohr used the uncertainty of macroscopic objects to defeat several of Albert Einstein’s objections to quantum mechanics at the 1927 Solvay conference.
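As a rough numerical illustration of just how small this residual uncertainty is, the following sketch evaluates the Heisenberg bound Δx ≥ ħ/(2mΔv) for a 1 kg object with a purely illustrative, assumed velocity uncertainty:

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
m = 1.0                   # mass of a macroscopic object, kg
dv = 1e-6                 # assumed velocity uncertainty, m/s (illustrative)

dx_min = hbar / (2 * m * dv)   # Heisenberg bound: dx >= hbar / (2 m dv)
assert 0 < dx_min < 1e-27      # ~5e-29 m: nonzero but vanishingly small
```

The bound is roughly 5 × 10⁻²⁹ m, some twenty orders of magnitude below anything measurable on a kilogram-scale object, yet strictly nonzero.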
But Bohr and Heisenberg also insisted that a measuring apparatus must be regarded as a purely classical system; they cannot have it both ways, classical and quantum. So, can the macroscopic apparatus be treated by quantum physics or not? Can it be described by the Schrödinger equation? And can it be regarded as being in a superposition of states?
The most famous examples of macroscopic superposition are perhaps Schrödinger’s cat, claimed to be in a superposition of being alive and dead at the same time inside a box, and the Einstein-Podolsky-Rosen experiment, in which entangled electrons or photons are in a superposition of two-particle states that collapse over macroscopic distances, exhibiting properties “non-locally” at a speed faster than light.
These treatments of macroscopic systems with quantum mechanics were intended to expose inconsistencies and incompleteness in quantum theory. The critics hoped to restore determinism and “local reality” to Physics. They resulted in some strange and extremely popular “mysteries” about “quantum reality”, such as the “many-worlds” interpretation, “hidden variables”, and signaling at a faster-than-light speed.
Physics has since developed a quantum-mechanical treatment of macroscopic systems, especially of a measuring apparatus, to show how it can create new information. If the apparatus were describable only by classical deterministic laws, no new information could come into existence. The apparatus only needs to be adequately determined, i.e., “classical” to a sufficient degree of accuracy.
Everything said so far indicates how sensitive quantum computing is to the correct measurement of quantum states.
On the other hand, a new technology allows us to avoid the problem of quantum measurement. However, this technology lets us work exclusively with Computational Basis States (CBS), i.e., pure and orthogonal quantum base states.
In other words, none of the quantum measurement techniques currently in use (weak measurement, strong measurement, projective measurement, and quantum state tomography) allows a correct recovery of a generic quantum state at the output of a quantum algorithm without destructively distorting that state. This problem turns several (almost all) areas of Quantum Information Processing into mere theoretical speculation, namely: Quantum Algorithms, Quantum Image Processing, Quantum Signal Processing, and Quantum Neural Networks, among others, which work fundamentally with generic qubits. Obviously, a new procedure to accurately estimate a generic quantum state becomes imperative as quantum technology advances.
Therefore, a new method of quantum measurement for generic qubits (i.e., not just for CBS), more accurate than the methods currently in use, becomes imperative. Thus, in this work, we present a novel proposal to recover the quantum state at the output of a quantum algorithm after its measurement via a modified Kalman filter, and a Recursive Least Squares (RLS) filter, too. This is the essence of this work, which is organized as follows: preliminaries to the new quantum measurement method are outlined in Section 2. A tour from the Schrödinger equation to quantum algorithms is presented in Section 3. The new method (optimal state estimator) is outlined in Section 4. Finally, Section 5 provides conclusions and future work proposals.
2. The Quantum Measurement Problem
In this section, we present the following topics:
− Wave-function collapse.
− Quantum Measurement Problem.
− Before and after measurement.
− Types of measurement and state reconstruction.
2.1. Wave-Function Collapse
In quantum mechanics, wave-function collapse is the phenomenon in which a wave-function (initially in a superposition of several eigenstates) appears reduced to a single eigenstate after interaction with a measuring apparatus. It is the essence of measurement in quantum mechanics, and connects the wave-function with classical observables like position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is continuous evolution via the Schrödinger equation. However, in this role, collapse is merely a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence predict apparent wave-function collapse when a superposition forms between the quantum system’s states and the environment’s states. Significantly, the combined wave-function of the system and environment continues to obey the Schrödinger equation.
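The statistical signature of collapse, namely outcome frequencies equal to the squared amplitudes, can be illustrated with a short simulation. This is a sketch with illustrative amplitudes, not a model of the collapse mechanism itself:

```python
import numpy as np

rng = np.random.default_rng(42)
# illustrative superposition a|0> + b|1> with |a|^2 = 0.36, |b|^2 = 0.64
psi = np.array([0.6, 0.8j])
probs = np.abs(psi) ** 2                 # squared amplitudes (Born rule)
outcomes = rng.choice([0, 1], size=100_000, p=probs)
freq1 = outcomes.mean()                  # observed frequency of outcome 1
assert abs(freq1 - 0.64) < 0.01
```

Each simulated measurement yields a single definite eigenstate; only the ensemble of many repetitions reveals the underlying probability distribution.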
When the Copenhagen interpretation was first expressed, Niels Bohr postulated that wave-function collapse cut the quantum world off from the classical one. This tactical move allowed quantum theory to develop without distractions from interpretational worries. Nevertheless, it was debated whether the collapse was a fundamental physical phenomenon rather than just an epiphenomenon of some other process. If so, it would mean that nature is fundamentally stochastic, i.e., nondeterministic, an undesirable attribute for a theory. This issue remained unresolved until quantum decoherence entered mainstream opinion after its reformulation in the 1980s. Decoherence explains the perception of wave-function collapse in terms of interacting large- and small-scale quantum systems, and is commonly taught at the graduate level, e.g., in the Cohen-Tannoudji textbook. The quantum filtering approach and the introduction of the quantum causality non-demolition principle allow us to think about a classical-environment derivation of wave-function collapse from the stochastic Schrödinger equation.
2.2. The Quantum Measurement Problem Itself
The measurement problem in quantum mechanics is the unresolved problem of how (or if) wave-function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. The wave-function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement “did something” to the process under examination. Whatever that “something” is does not appear to be explained by the basic theory.
To express matters differently (according to Steven Weinberg), the Schrödinger wave-equation determines the wave-function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave-function, why can we not predict precise results for measurements, but only probabilities? As a general question: how can one establish a correspondence between quantum and classical reality?
2.3. Before and after Measuring
In quantum mechanics, measurement is a non-trivial and highly counter-intuitive process. First, because measurement outcomes are inherently probabilistic, i.e. regardless of the carefulness in the preparation of a measurement procedure, the possible outcomes of such measurement will be distributed according to a certain probability distribution. Secondly, once a measurement has been performed, a quantum system is unavoidably altered due to the interaction with the measurement apparatus. Consequently, for an arbitrary quantum system, pre-measurement and post-measurement quantum states are different in general .
Postulate. Quantum measurements are described by a set of measurement operators {M_m}, where the index m labels the different possible measurement outcomes. These operators act on the state space of the system being measured. Measurement outcomes correspond to values of observables, such as position, energy, and momentum, which are Hermitian operators corresponding to physically measurable quantities.
Let |ψ⟩ be the state of the quantum system immediately before the measurement. Then, the probability that the m-th result occurs is given by

p(m) = ⟨ψ|M_m†M_m|ψ⟩, (1)

where M_m† is the adjoint of M_m, and the post-measurement quantum state is

|ψ_m⟩ = M_m|ψ⟩ / √(⟨ψ|M_m†M_m|ψ⟩). (2)

The operators must satisfy the completeness relation, i.e.,

Σ_m M_m†M_m = I, (3)

because it guarantees that the probabilities sum to one:

Σ_m p(m) = Σ_m ⟨ψ|M_m†M_m|ψ⟩ = 1.
Let us work out a simple example, assuming we have a polarized photon with associated polarization orientations “horizontal” and “vertical”, where the horizontal polarization direction is denoted by |h⟩ = (1, 0)ᵀ, the vertical polarization direction is denoted by |v⟩ = (0, 1)ᵀ, and (•)ᵀ is the transpose of (•). Thus, an arbitrary initial state for our photon can be described by the generic quantum state |ψ⟩ = α|h⟩ + β|v⟩, where α and β are complex numbers constrained by the normalization condition |α|² + |β|² = 1, and {|h⟩, |v⟩} is the computational basis spanning the Hilbert space ℂ².
Now, we construct two measurement operators, M_h = |h⟩⟨h| and M_v = |v⟩⟨v|, for the two measurement outcomes {h, v}. Then, the full observable used for measurement in this experiment is the sum of these projectors weighted by the eigenvalues assigned to the two outcomes. According to the Postulate, the probabilities of obtaining outcome h or outcome v are given by p(h) = ⟨ψ|M_h†M_h|ψ⟩ = |α|² and p(v) = ⟨ψ|M_v†M_v|ψ⟩ = |β|². The corresponding post-measurement quantum states are as follows: if the outcome is h, then |ψ_h⟩ = (α/|α|)|h⟩; if the outcome is v, then |ψ_v⟩ = (β/|β|)|v⟩.
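The polarization example above can be checked with a small NumPy sketch, assuming basis states |h⟩ = (1, 0)ᵀ and |v⟩ = (0, 1)ᵀ and purely illustrative amplitudes α = 0.6, β = 0.8i:

```python
import numpy as np

h = np.array([1, 0], dtype=complex)   # |h>
v = np.array([0, 1], dtype=complex)   # |v>
alpha, beta = 0.6, 0.8j               # illustrative: |alpha|^2 + |beta|^2 = 1
psi = alpha * h + beta * v

Mh = np.outer(h, h.conj())            # projector onto |h>
Mv = np.outer(v, v.conj())            # projector onto |v>
# completeness relation: the M†M operators sum to the identity
assert np.allclose(Mh.conj().T @ Mh + Mv.conj().T @ Mv, np.eye(2))

p_h = np.real(psi.conj() @ Mh.conj().T @ Mh @ psi)   # = |alpha|^2
p_v = np.real(psi.conj() @ Mv.conj().T @ Mv @ psi)   # = |beta|^2
assert np.isclose(p_h + p_v, 1.0)                    # probabilities sum to one
assert np.isclose(p_h, 0.36) and np.isclose(p_v, 0.64)

post_h = Mh @ psi / np.sqrt(p_h)      # post-measurement state for outcome h
assert np.allclose(post_h, h)         # reduces to |h> (up to a global phase)
```

The assertions verify, in turn, completeness, the Born-rule probabilities, and the collapse of the superposition to the measured basis state.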
2.4. Types of Measurement and State Reconstruction
As we have seen in the previous subsection, quantum measurement is not a minor issue. In fact, it is an issue still unresolved, which would frustrate every practical effort to implement any genuine quantum algorithm in general and any quantum image processing algorithm in particular. Actually, it is an inherited problem of quantum physics, also known as the paradox of measurement.
From a practical point of view, inside the context of quantum image processing, the problem is reduced to the following: suppose we develop a quantum algorithm for filtering classic images. Clearly, the first problem would be how to introduce a classical noisy image into a quantum computer, i.e., the design of the classical-to-quantum and quantum-to-classical interfaces. The second would be how to measure the results of a quantum filtering algorithm and carry the result of that filtering process back to the classical world; in other words, the recovery of the classical version of the filtered image into its original space: the classical world where it was generated. It is obvious that an absolutely accurate measurement technique is needed. Unfortunately, all efforts in this regard have been useless.
However, in the last decade there have been several efforts to remedy this situation, namely:
− Weak measurement
− Restoring the quantum state
− Quantum state tomography
Weak measurement is a technique to measure the average value of a quantum observable without appreciably affecting the initial state of the system being measured. Weak measurements differ from normal (sometimes called “strong” or “von Neumann”) measurements in two ways:
1) If A has a discrete spectrum (which we assume for simplicity), a strong measurement yields an eigenvalue of A when the system is in a state |ψ⟩. If the measurement is repeated many times, starting each time with the system in the state |ψ⟩, one obtains a sequence of eigenvalues of A which, when averaged, yields an approximation to ⟨ψ|A|ψ⟩, the expectation of A in the state |ψ⟩.
By contrast, a weak measurement only yields a sequence of numbers which average to ⟨ψ|A|ψ⟩. For example, a strong measurement of the spin of a spin-1/2 particle must yield spin 1/2 or −1/2, but a particular weak measurement could yield spin 100, while a subsequent weak measurement on an identical system might yield −128.3. Typically, a single weak measurement gives little information; only the average of a large number of such measurements is meaningful.
2) A strong measurement changes, or projects, an initial pure state |ψ⟩ to an eigenvector of A. The particular eigenvector obtained cannot be predicted, though its probability is determined. This substantially changes the state unless |ψ⟩ happened to be close to that eigenvector.
However, a weak measurement does not substantially change the initial state.
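The statistical contrast in points 1) and 2) can be caricatured numerically. In the sketch below, strong measurements sample the eigenvalues ±1/2 with Born probabilities, while weak readings are modeled, as a deliberately crude toy assumption, as the expectation value buried in large meter noise. Both sequences average to ⟨A⟩, but individual weak readings fall far outside the spectrum:

```python
import numpy as np

rng = np.random.default_rng(7)
# spin-1/2 observable with eigenvalues +1/2 and -1/2
probs = np.array([0.2, 0.8])             # illustrative Born probabilities
exp_A = 0.5 * 0.2 + (-0.5) * 0.8         # expectation value <A> = -0.3

# strong measurements: eigenvalues sampled with Born probabilities
strong = rng.choice([0.5, -0.5], size=50_000, p=probs)
# toy model of weak readings: <A> buried in large meter noise (assumption)
weak = exp_A + rng.normal(scale=50.0, size=50_000)

assert abs(strong.mean() - exp_A) < 0.01        # averages to <A>
assert abs(weak.mean() - exp_A) < 1.0           # also averages to <A>, slowly
assert weak.min() < -100 and weak.max() > 100   # readings far outside the spectrum
```

The toy noise model ignores the actual system-meter coupling dynamics; it only reproduces the headline statistics of weak measurement.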
Weak measurements are usually implemented by coupling the original system S to be measured with an auxiliary quantum meter system M. The measurement along a scale involves, in practice, various microscopic quantum systems. The composite system is mathematically represented as the tensor product of S with M, denoted S ⊗ M. A product state in this tensor product is typically denoted |ψ⟩ ⊗ |φ⟩, where |ψ⟩ is a state of S and |φ⟩ is a state of M. States which are not product states are called entangled states.
The results obtained by this technique are as weak as its name suggests; therefore, we proceed to the next.
Restoring the quantum state is an effort to recover the original state from the alleged reversibility of a measurement operator, through the inversion of the matrix that represents such an operator, that is to say, the operators M_m of Subsection 2.3. Parrott’s work is presented in opposition to the technique of weak measurement in general and the work of Katz et al. in particular. Other relevant works mediate between the two, also without success.
Nowadays, we know from Stochastic Processes and Adaptive Filtering that a single matrix inversion in an estimation or identification process does not restore the state of a hidden system behind such a matrix. This is due to the need to model the state and measurement noises, and to define the architecture of the estimator accurately, for a correct recovery of the system state from the observables. This deficiency explains why Wiener’s filter was completely replaced by Kalman’s filter in the presence of such noises. Therefore, this technique is as weak as the one it opposes.
Quantum state tomography is the process of reconstructing the quantum state (via a density matrix) for a source of quantum systems by measurements done on the systems coming from that source. The density matrix for pure or mixed states is ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|, where p_i is the probability of the pure state |ψ_i⟩.
The source may be any device or system which prepares quantum states either consistently into quantum pure states or otherwise into general mixed states. To be able to uniquely identify the state, the measurements must be tomographically complete. That is, the measured operators must form an operator basis on the Hilbert space of the system, providing all the information about that state. Such a set of observations is sometimes called a quorum. On the other hand, in quantum process tomography, known quantum states are used to probe a quantum process in order to find out how that process can be described. Similarly, quantum measurement tomography works to find out what measurement is being performed. The general principle behind quantum state tomography is that, by repeatedly performing many different measurements on quantum systems described by identical density matrices, frequency counts can be used to infer probabilities. These probabilities are combined with Born’s rule to determine the density matrix which best fits the observations. Obviously, this method is a coarse estimator of the density matrix and not of the states themselves. In fact, it only monitors the elements of the matrix. Therefore, our problem persists.
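The quorum idea can be sketched for a single qubit, where the Pauli operators X, Y, Z form a tomographically complete set. Assuming ideal projective Pauli measurements on many identical copies (all values illustrative), the density matrix is reconstructed from simulated frequency counts as ρ = (I + ⟨X⟩X + ⟨Y⟩Y + ⟨Z⟩Z)/2:

```python
import numpy as np

rng = np.random.default_rng(3)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# unknown (illustrative) pure state whose density matrix we reconstruct
psi = np.array([np.cos(0.4), np.exp(0.9j) * np.sin(0.4)])
rho_true = np.outer(psi, psi.conj())

def sampled_expectation(P, n=200_000):
    """Simulate n ideal projective measurements of Pauli P on copies of psi."""
    w, V = np.linalg.eigh(P)
    probs = np.abs(V.conj().T @ psi) ** 2   # Born rule for each eigenvector
    return rng.choice(w, size=n, p=probs).mean()

r = [sampled_expectation(P) for P in (X, Y, Z)]
rho_est = 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)   # Bloch reconstruction
assert np.max(np.abs(rho_est - rho_true)) < 0.01
```

Note that this estimates the density matrix, not the state vector itself, and only to statistical accuracy, which is exactly the limitation pointed out above.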
3. From Schrödinger’s Equation to Quantum Algorithms
3.1. Schrödinger’s Equation and the Unitary Operators
A quantum state can be transformed into another state by a unitary operator, symbolized as U, with U†U = UU† = I, where U† is the adjoint of U and I is the identity matrix. Unitarity is required to preserve inner products: if we transform |ψ⟩ and |φ⟩ to U|ψ⟩ and U|φ⟩, then ⟨φ|U†U|ψ⟩ = ⟨φ|ψ⟩, with |ψ⟩ and |φ⟩ being two wave-functions. In particular, unitary operators preserve lengths: ⟨ψ|U†U|ψ⟩ = ⟨ψ|ψ⟩.
On the other hand, the unitary operator satisfies the following differential equation, known as the Schrödinger equation:

iħ ∂U(t)/∂t = H U(t), (4)

where H represents the Hamiltonian matrix of the Schrödinger equation, i = √(−1), and ħ is the reduced Planck constant. Multiplying both sides of Equation (4) by |ψ(0)⟩ and setting |ψ(t)⟩ = U(t)|ψ(0)⟩ yields

iħ ∂|ψ(t)⟩/∂t = H |ψ(t)⟩. (5)

The solution to the Schrödinger equation is given by the matrix exponential of the Hamiltonian matrix for the time-invariant case:

U(t) = e^(−iHt/ħ). (6)

Thus, the probability amplitudes evolve across time according to the following equation:

|ψ(t)⟩ = e^(−iHt/ħ) |ψ(0)⟩. (7)
Equation (7) is the main piece in building quantum circuits, gates, and algorithms, with U representing such elements.
Finally, the discrete version of Equation (5) is

|ψ_(k+1)⟩ = A |ψ_k⟩, with A = e^(−iHΔt/ħ) for a time step Δt. (8)

Equation (8) is the foundation on which we build the optimal estimator of quantum states.
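The evolution in Equation (7) can be sketched numerically by exponentiating a Hamiltonian via its eigendecomposition. The Hamiltonian below is an arbitrary illustrative Hermitian matrix, and ħ = 1 is assumed for convenience:

```python
import numpy as np

hbar = 1.0                                   # natural units (assumption)
H = np.array([[1.0, 0.5], [0.5, -1.0]])      # illustrative Hermitian Hamiltonian
t = 0.7

# U(t) = exp(-i H t / hbar), built from the eigendecomposition of H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0                                # amplitudes evolve as in Equation (7)
assert np.isclose(np.linalg.norm(psi_t), 1.0)   # lengths are preserved
```

Evaluating the same exponential at a fixed step Δt yields the matrix A of the discrete evolution in Equation (8).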
3.2. Quantum Circuits, Gates and Algorithms
As we can see in Figure 1, and recalling Equation (8), a quantum algorithm (with identical considerations for circuits and gates) can be seen as a transfer function (making an input-to-output mapping) that has two types of output:
a) the result of the algorithm (circuit or gate), and
b) part of the input, in order to impart reversibility to the circuit, which is a critical need in quantum computing.
Besides, we can clearly see a module for measurement (which will be discussed extensively in the next section) with its respective output, and a number of elements needed for the physical implementation of the quantum algorithm (circuit or gate), namely: control, ancilla, and trash.
Figure 1. Measurement module, quantum algorithm, and the elements needed for their physical implementation.
In this figure, as well as in the rest of them, a single fine line represents a wire carrying 1 qubit or N qubits (a qudit), interchangeably, while a single thick line represents a wire carrying 1 or N classical bits, interchangeably, too.
However, the mentioned concept of reversibility is closely related to energy consumption, and hence to Landauer’s principle.
On the other hand, computational complexity studies the amount of time and space required to solve a computational problem. Another important computational resource is energy. In this section, we study the energy requirements for computation. Surprisingly, it turns out that computation, both classical and quantum, can in principle be done without expending any energy. Energy consumption in computation turns out to be deeply linked to the reversibility of the computation.
What is the connection between energy consumption and irreversibility in computation? Landauer’s principle provides the connection, stating that, in order to erase information, it is necessary to dissipate energy. More precisely, Landauer’s principle may be stated as follows:
Landauer’s principle (first form): Suppose a computer erases a single bit of information. The amount of energy dissipated into the environment is at least kBT ln 2, where kB is a universal constant known as Boltzmann’s constant, and T is the temperature of the environment around the computer.
According to the laws of thermodynamics, Landauer’s principle can be given an alternative form stated not in terms of energy dissipation, but rather in terms of entropy:
Landauer’s principle (second form): Suppose a computer erases a single bit of information. The entropy of the environment increases by at least kB ln 2, where kB is Boltzmann’s constant.
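A quick check of the numbers in Landauer's bound, assuming room temperature for illustration:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                        # room temperature, K (illustrative)
E_min = k_B * T * math.log(2)    # minimum dissipation to erase one bit
assert 2.8e-21 < E_min < 2.9e-21 # about 2.87e-21 joules
```

Erasing a single bit at room temperature must dissipate at least about 2.87 × 10⁻²¹ J, an extraordinarily small but fundamentally unavoidable cost.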
Consider a gate, such as NAND, which takes two bits as input and produces a single bit as output. This gate is intrinsically irreversible because, given the output of the gate, the input is not uniquely determined. For example, if the output of the NAND gate is 1, then the input could have been any one of 00, 01, or 10. On the other hand, the NOT gate is an example of a reversible logic gate because, given the output of the gate, it is possible to infer what the input must have been. Another way of understanding irreversibility is to think of it in terms of information erasure. If a logic gate is irreversible, then some of the information input to the gate is lost irretrievably when the gate operates; that is, some of the information has been erased by the gate.
Conversely, in a reversible computation, no information is ever erased, because the input can always be recovered from the output. Thus, saying that a computation is reversible is equivalent to saying that no information is erased during the computation.
Summing up, the above justifies the inexcusable need for the presence of part of the input at the output of the quantum gate.
4. Optimal State Estimator (OSE)
4.1. Classical State Estimator in Noiseless Environments
In order to develop an optimal estimate of quantum states, we start by defining everything on a classical type of estimator called Recursive Least Squares (RLS), derived from the famous Kalman filter. Such an estimator (in its discrete-time version and in a noiseless environment) is based on Figure 2, in which:
M: measurement operator
Δ: unitary delay
X: state to be estimated
ε: estimate error
K: Kalman’s gain
X̂: estimated state
Y: output of the estimator
Figure 2. RLS.
Then, we can define the a priori and a posteriori (respectively) estimate errors as:

e⁻ = X − X̂⁻ and e = X − X̂.

The a priori estimate error covariance is then

E[e⁻(e⁻)ᵀ],

where E[•] means the mean square error of “•”, and (•)ᵀ means the transpose of “(•)”. On the other hand, the a posteriori estimate error covariance is

E[e eᵀ]. (16)
This adaptation process is based on the minimization of the mean square error criterion defined in the last equation. Developing Equation (16), rearranging terms, and minimizing the mean square error with respect to the estimator coefficients, we obtain Wiener’s filter for stationary signals, where the solution involves the autocorrelation matrix of M and the cross-correlation vector of M and Y. In the following equation, we formulate a recursive, time-updated, and adaptive version of Equation (17). In fact, the autocorrelation matrix can be expressed in a recursive fashion as
To introduce adaptability to the time variations of the signal statistics, the autocorrelation estimate in Equation (18) can be windowed by an exponentially decaying window, where λ is the so-called adaptation, or forgetting, factor, in the range 0 < λ ≤ 1. Similarly, the cross-correlation vector can be calculated in a recursive form, and this equation can again be made adaptive using an exponentially decaying forgetting factor.
For a recursive solution of the least-squares error Equation (21), we need to obtain a recursive time-update formula for the inverse matrix, in the form of its previous value plus an update term actualized at each time step. After an extensive series of considerations, developments, and replacements, we get the following set of equations of the RLS adaptation algorithm, in a form very similar to Kalman’s filter, where I is the identity matrix and the initialization constant is a number different from 0.
Filter gain matrix:
Error signal equation:
Inverse correlation matrix update:
Discrete estimator time-update equations:
Indeed, A and M are time-invariant. However, Equation (30) should be modified to work in noisy environments, which are the most realistic scenarios in which the filter will be used.
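The RLS recursion just outlined can be sketched numerically. The toy example below (illustrative dimensions and values; the variable names are ours) estimates a constant hidden state from noiseless linear measurements using the standard gain, error-signal, and inverse-correlation updates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w_true = rng.normal(size=n)      # hidden state X to be estimated
lam = 0.99                       # forgetting factor, 0 < lam <= 1
P = 1e3 * np.eye(n)              # inverse correlation matrix, initialized as a large multiple of I
w_hat = np.zeros(n)              # estimated state

for _ in range(500):
    m = rng.normal(size=n)               # measurement (regressor) vector
    y = m @ w_true                       # noiseless observation
    k = P @ m / (lam + m @ P @ m)        # filter gain
    e = y - m @ w_hat                    # error signal (a priori)
    w_hat = w_hat + k * e                # estimator time-update
    P = (P - np.outer(k, m @ P)) / lam   # inverse correlation matrix update

assert np.allclose(w_hat, w_true, atol=1e-6)
```

In the noiseless case the estimate converges to the hidden state essentially exactly, which is the baseline behavior before noise is introduced in Section 4.3.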
4.2. Quantum State Estimator in Noiseless Environments
From Equation (2), we have that the post-measurement state is M_m|ψ⟩ normalized by √(⟨ψ|M_m†M_m|ψ⟩), which is the norm of M_m|ψ⟩. In fact, we can take any norm of M_m|ψ⟩, even for an m different from the original. Thus, we will have an estimate of |ψ⟩ for each m, i.e., a battery of estimators, as shown in Figure 3.
According to Figure 3, A will be the quantum algorithm (circuit or gate), and we can get an estimate for each m with this estimator. Therefore, the complete set of equations is:
Inside Quantum Computer:
|ψ_(k+1)⟩ = A |ψ_k⟩ (quantum algorithm) (34)
Y_k = M |ψ_k⟩ (quantum measurement) (35)
Optimal State Estimator (OSE):
Some important considerations:
• although A is time-invariant, this methodology also withstands the time-variant version. In fact, we can make similar considerations for M. Besides, A arises from Equation (7), i.e., U(t) = e^(−iHt/ħ), so that |ψ(t)⟩ = e^(−iHt/ħ)|ψ(0)⟩, which in its discrete version becomes |ψ_(k+1)⟩ = A|ψ_k⟩,
• OSE is a reorganized RLS/Kalman filter, algorithmically the same as them; we start with a poor measurement, but as the OSE evolves, the accuracy of the estimate improves through successive measurements.
Figure 3. Modified RLS.
Figure 4. Quantum algorithm (circuit or gate), measurement and OSE.
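As a toy scalar analogue of the point above (not the full OSE architecture of Figure 3), the following sketch recursively refines an estimate of |α|² of a qubit from successive simulated projective measurement outcomes; the accuracy improves as measurements accumulate:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha2 = 0.36                         # |alpha|^2 of a hypothetical qubit
p_hat, n = 0.0, 0
for _ in range(100_000):
    outcome = rng.random() < alpha2   # one simulated projective measurement
    n += 1
    p_hat += (outcome - p_hat) / n    # recursive update: a running average
assert abs(p_hat - alpha2) < 0.01
```

A single measurement gives only one bit; the recursive update is what turns a stream of poor individual measurements into a progressively better estimate.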
4.3. Quantum State Estimator in Noisy Environments
We assume the existence of state and measurement noises, as seen in Figure 5, with the equations inside a quantum computer being

|ψ_(k+1)⟩ = A |ψ_k⟩ + w_k (quantum algorithm) (39)
Y_k = M |ψ_k⟩ + v_k (quantum measurement) (40)

where the random variables w_k and v_k represent the state and measurement noises, respectively. Both are assumed to be independent of each other. In practice,
− the state noise covariance matrix, and
− the measurement noise covariance matrix
might change with each time step or measurement; however, here we assume that both are constant. Thus, only three equations change with respect to the classic estimator, namely:
Filter gain matrix:
Inverse correlation matrix update:
Discrete estimator time update equation:
However, since the OSE is a linear system, we can move the state noise to the output and work with a single noise that represents both. Therefore, the last equation is not used.
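A minimal discrete Kalman filter for the noisy model of Equations (39)-(40) can be sketched as follows. This is a classical toy system with assumed A, M, Q, and R values, not the quantum OSE itself; it shows the filtered estimate tracking the state more closely than the raw measurements do:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (assumed values)
M = np.array([[1.0, 0.0]])               # measurement operator
Q = 1e-4 * np.eye(2)                     # state noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 1.0])                 # true state
x_hat = np.zeros(2)                      # estimate
P = np.eye(2)                            # estimate error covariance

meas_err = est_err = 0.0
for _ in range(300):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)   # Equation (39)
    y = M @ x + rng.multivariate_normal(np.zeros(1), R)   # Equation (40)
    x_hat = A @ x_hat                                     # predict
    P = A @ P @ A.T + Q
    K = P @ M.T @ np.linalg.inv(M @ P @ M.T + R)          # filter gain
    x_hat = x_hat + (K @ (y - M @ x_hat)).ravel()         # update
    P = (np.eye(2) - K @ M) @ P
    meas_err += float((y - M @ x) ** 2)                   # raw measurement error
    est_err += float((x[0] - x_hat[0]) ** 2)              # filtered position error
assert est_err < meas_err
```

The accumulated error of the filtered estimate is well below that of the noisy measurements, which is the property the OSE exploits after quantum measurement.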
All these noises may be associated with different factors: quantum noise, quantum decoherence, and measurement errors. The accuracy of our estimator (OSE) depends on two aspects:
• our ability to model these noises, and
• the greater or lesser presence of such noises in the experiment.
Figure 5. Modified Kalman’s estimator for noisy environments.
5. Conclusions and Future Works
In this paper, we have presented an optimal estimator of quantum states based on a modified Kalman filter. Such an estimator acts after state measurement, allowing us to obtain an optimal estimate of the quantum state at the output of any quantum algorithm (circuit or gate). Finally, the OSE allows us a complete estimate of the quantum state in a much more accurate way than the methods currently in use: weak measurement, strong measurement, projective measurement, and quantum state tomography.
All of them fail to give an exact value for the state of a generic qubit resulting from a quantum algorithm. This shortcoming can be seen explicitly in those algorithms involved in Quantum Image Processing (QImP), where it is clearly shown that quantum measurement itself acts as a noise that disturbs what is measured; e.g., if the quantum algorithm consists of a filter which eliminates the noise of an image (inside the quantum machine), the quantum measurement, on the way out, will add new noise to the resulting image, i.e., the image becomes noisy again. A question automatically arises: why introduce the image into a quantum machine if, after all, the filtering must be done in the classical environment, that is, outside the quantum machine? For this reason, it is very important to apply the innovation of this paper to those algorithms used in QImP.
Finally, based on our current study, the solution presented in this paper for an optimal estimate of a generic quantum state is essential to effectively and efficiently face the simulation of all types of quantum algorithms involved in quantum information processing in general, and quantum signal processing and quantum neural networks in particular.
Acknowledgements
We would like to thank Luis and Federico Guyet from Merx Communications LLC for their tremendous help and support.
References
Ghirardi, G.C., Rimini, A. and Weber, T. (1980) A General Argument against Superluminal Transmission through the Quantum Mechanical Measurement Process. Lettere al Nuovo Cimento, 27, 293-298.
Katz, N., et al. (2008) Reversal of the Weak Measurement of a Quantum State in a Superconducting Phase Qubit. Physical Review Letters, 101, Article ID: 200401.
Berry, M.V., Brunner, N., Popescu, S. and Shukla, P. (2011) Can Apparent Superluminal Neutrino Speeds Be Explained as a Quantum Weak Measurement? Journal of Physics A: Mathematical and Theoretical, 44, Article ID: 492001.
Altepeter, J.B., Jeffrey, E.R. and Kwiat, P.G. (2005) Photonic State Tomography. Advances in Atomic, Molecular, and Optical Physics, 52, 105-159.
Dini, D.H. and Mandic, D.P. (2012) Class of Widely Linear Complex Kalman Filters. IEEE Transactions on Neural Networks and Learning Systems, 23, 775-786.
Master, C.P. (2005) Quantum Computing under Real-World Constraints: Efficiency of an Ensemble Quantum Algorithm and Fighting Decoherence by Gate Design. PhD Thesis, Stanford University, Stanford.