Quantum-Classical Algorithm for an Instantaneous Spectral Analysis of Signals: A Complement to Fourier Theory


1. Introduction

The main concepts related to Quantum Information Processing (QIP) may be grouped into the following topics: the quantum bit (qubit, the elemental unit of quantum information), the Bloch sphere (a geometric setting for qubit representation), Hilbert space (which generalizes the notion of Euclidean space), the Schrödinger equation (a partial differential equation that describes how the quantum state of a physical system changes with time), unitary operators, and quantum circuits. In quantum information theory, a quantum circuit is a model for quantum computation in which a computation is a sequence of quantum gates, i.e., reversible transformations on a quantum-mechanical analog of an n-bit register (this analogous structure is referred to as an n-qubit register). A quantum logic gate is a basic quantum circuit operating on a small number of qubits within the quantum circuit model of computation. Finally, quantum algorithms, which run on a realistic model of quantum computing, are the most common use of quantum circuits for computation [1] [2] [3] [4] . Nowadays, other concepts complement our knowledge of QIP. The most important ones related to this work are:

1.1. Quantum Signal Processing (QSP)

The main idea is to take a classical signal, sample it, quantize it (for example, between 0 and 2^8 − 1), use a classical-to-quantum interface, give an internal representation of that signal, process that quantum signal (by denoising it, compressing it, etc.), measure the result, use a quantum-to-classical interface, and subsequently detect the classical outcome signal. Interestingly, and as we will see later, quantum image processing has aroused more interest than QSP; quoting its creator: “Many new classes of signal processing algorithms have been developed by emulating the behavior of physical systems. There are also many examples in the signal processing literature in which new classes of algorithms have been developed by artificially imposing physical constraints on implementations that are not inherently subject to these constraints” [5] . Therefore, QSP is a signal processing framework [6] that is aimed at developing new or modifying existing signal processing algorithms by borrowing from the principles of quantum mechanics and some of its fascinating axioms and constraints. However, in contrast with such fields as quantum computing and quantum information theory, it does not inherently depend on the physics associated with quantum mechanics. Consequently, in developing the QSP framework, we are at liberty to impose quantum mechanical constraints that we find useful and to avoid those that are not. This framework provides a unifying conceptual structure for a variety of traditional processing techniques and a precise mathematical setting for developing generalizations and extensions of algorithms, leading to a potentially useful paradigm for signal processing, with applications in areas including frame theory, quantization and sampling methods, detection, parameter estimation, covariance shaping, and multiuser wireless communication systems.
The truth is that, to date, papers on this discipline number fewer than half a dozen, and their practical application is virtually non-existent. Moreover, although what has been developed so far is an interesting idea, it does not go much beyond that.

1.2. Quantum Fourier Transform (QFT)

In quantum computing, the QFT is a linear transformation on quantum bits and it is the quantum version of the discrete Fourier transform. The QFT is a part of many quantum algorithms: especially Shor’s algorithm for factoring and computing the discrete logarithm; the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator; and algorithms for the hidden subgroup problem.

The QFT can be performed efficiently on a quantum computer, with a particular decomposition into a product of simpler unitary matrices. Using a simple decomposition, the discrete Fourier transform can be implemented as a quantum circuit consisting of only O(n^{2}) Hadamard gates and controlled phase shift gates, where n is the number of qubits [1] . This compares with the classical discrete Fourier transform, which takes O(n2^{n}) gates (where n is the number of bits), exponentially more than O(n^{2}). However, the quantum Fourier transform acts on a quantum state, whereas the classical Fourier transform acts on a vector; therefore, not every task that uses the classical Fourier transform can take advantage of this exponential speedup. Moreover, the best QFT algorithms known today require only O(n log n) gates to achieve an efficient approximation [7] .

Finally, this work is organized as follows: Fourier theory is outlined in Section 2, where we present the Fourier Transform, the Discrete Fourier Transform, the Fast Fourier Transform, and the Fourier uncertainty principle. Section 3 provides a brief review of quantum information processing. Section 4 presents the proposed new spectral method, QSA-FIT, with its consequences, followed by conclusions and a proposal for future work.

2. Fourier’s Theory

In this section, we discuss the tools needed to understand the full extent of QSA. These tools are: the Fourier Transform, the Discrete Fourier Transform (DFT), and the Fast Fourier Transform (FFT). They were developed around one main concept, the uncertainty principle, which is fundamental to understanding the theory behind QSA-FIT. Other transforms, which are also members of Fourier theory, such as the Fractional Fourier Transform (FrFT), the Short-Time Fourier Transform (STFT), and the Gabor transform (GT), and undoubtedly including the wavelet transform in general and the Haar basis in particular, make a poor contribution toward solving the problem of Fourier theory described in the Abstract, that is to say, the need for a time-dependent spectral analysis.

The ubiquity of QSA in the context of a much larger, modern, and full spectral analysis should be clear by the end of this section.

On the other hand, this section will allow us to better understand the role of QSA as the origin of several tools in current use in Digital Signal Processing (DSP), Digital Image Processing (DIP), Quantum Signal Processing (QSP), and Quantum Image Processing (QIP). Finally, it will become clear why we say that QSA crowns a set of tools that has been insufficient to date.

2.1. Fourier Transform

The Fourier Transform decomposes a function of time (a signal) into the frequencies that make it up, in the same way as a musical chord can be expressed as the amplitude (or loudness) of its constituent notes. The Fourier transform of a function of time is itself a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function, and whose complex argument is the phase offset of the basic sinusoid at that frequency.

The Fourier transform is called the frequency domain representation of the original signal. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a common language, the domain of the original function is frequently referred to as the time domain. For many functions of practical interest, we can define an operation that reverses this: the inverse Fourier transformation, also called Fourier synthesis of a frequency domain representation, which combines the contributions of all the different frequencies to recover the original function of time [8] .

Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so that some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain. Concretely, this means that any linear time-invariant system, such as a filter applied to a signal, can be expressed in a relatively simple way as an operation on frequencies. After performing the desired operations, the result can be transformed back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are “simpler” in one or the other, and has deep connections to almost all areas of modern mathematics [8] .

Functions that are localized in the time domain have Fourier transforms (FT) that are spread out across the frequency domain and vice versa, a phenomenon that is known as the Uncertainty Principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The FT of a Gaussian function is another Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer where Gaussian functions appear as solutions of the heat equation [8] .

2.2. Discrete Fourier Transform (DFT)

In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered according to their frequencies. The frequency domain has the same number of values as the original time-domain samples. The DFT can be said to convert the sampled function from its original domain (often time or position along a line) to the frequency domain [9] .

Both the input samples (which are complex numbers, although in practice they are usually real) and the output coefficients are complex. The frequencies of the output sinusoids are integer multiples of a fundamental frequency whose corresponding period is the length of the sampling interval. The combination of sinusoids obtained through the DFT is therefore periodic with that same period. The DFT differs from the discrete-time Fourier transform (DTFT) in that its input and output sequences are both finite; it is therefore said to be the Fourier analysis of finite-domain (or periodic) discrete-time functions [9] .

Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms “FFT” and “DFT” are often used interchangeably. Prior to its current usage, the “FFT” acronym may also have been used for the ambiguous term “Finite Fourier Transform” [9] .

No Compact Support. If the DFT is written as the product X = Wx, where X is the complex output vector, W is a matrix of complex twiddle factors, and x is the real input vector, then each element X_{k} of the output vector results from multiplying the kth row of the matrix by the complete input vector; that is to say, each element X_{k} of the output vector depends on every element of the input vector. A direct consequence of this is that the DFT spills the energy of each input sample across its entire output; in other words, the DFT gives a poor treatment of the output energy. Therefore, no compact support is equivalent to:

・ DFT has a bad treatment of energy in the output;

・ DFT is not a time-varying transform, but a frequency-varying transform.
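This lack of compact support can be verified directly. The following NumPy sketch (an illustrative addition with arbitrary values, not part of the original text) builds the twiddle-factor matrix W explicitly and shows that an impulse localized at a single time sample excites every output bin of X = Wx:

```python
import numpy as np

N = 8
n = np.arange(N)
# Complex twiddle-factor matrix W of the DFT: W[k, n] = exp(-2*pi*i*k*n/N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.zeros(N)
x[3] = 1.0            # an event localized at a single time sample

X = W @ x             # X = Wx: each X_k uses the complete input vector
# Every output bin is nonzero: the localized event spreads over all frequencies.
assert np.all(np.abs(X) > 0)
assert np.allclose(X, np.fft.fft(x))
```

The same experiment run with any other localized input gives the same qualitative result: no output coefficient is confined to a region of the input.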

Time-domain vs. frequency-domain measurements. As we can see in Figure 1, thanks to the DFT we have a new perspective on the measurement of signals, i.e., the spectral view [10] [11] .

Figure 1. Time-domain vs. frequency-domain measurements.

Both points of view allow us to make an almost complete analysis of the main characteristics of the signal [8] - [13] . As noted above, the DFT is the product of a complex matrix and a real vector (the signal), which yields an output vector that is also complex [10] [11] . Therefore, for practical reasons, it is more useful to use the Power Spectral Density (PSD) [8] - [13] , and in this way work with all the values involved as real numbers, without loss of generality or analytical power.
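As an illustration of this point, the following NumPy sketch (an added example; the sampling rate and test tone are arbitrary values) computes a periodogram-style PSD from the complex DFT output, yielding an all-real spectrum whose peak sits at the tone frequency:

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz (illustrative value)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)    # real 50 Hz test tone

X = np.fft.rfft(x)                          # complex spectrum of the real signal
psd = (np.abs(X) ** 2) / (fs * len(x))      # periodogram-style PSD: all values real
freqs = np.fft.rfftfreq(len(x), 1 / fs)

assert np.isrealobj(psd)                    # no complex values left to handle
assert freqs[np.argmax(psd)] == 50.0        # peak at the tone frequency
```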

Spectral Analysis. When the DFT is used for signal spectral analysis, the $\left\{{x}_{n}\right\}$ sequence usually represents a finite set of uniformly spaced time samples of some signal x(t), where t represents time. The conversion from continuous time to samples (discrete time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called Aliasing. The choice of an appropriate sample rate (see Nyquist rate) is the key to minimizing that distortion.

Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called Leakage, which is manifested as a loss of detail (also known as Resolution) in the DTFT. The choice of an appropriate length for the sub-sequence is the primary key to minimizing that effect. When the available data (and the time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example, to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, calculating the average of the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a Periodogram in this context). Two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called Spectral Estimation.
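The averaging procedure just described can be sketched as follows (a minimal Bartlett-style example with illustrative parameter values; a production implementation would typically use a library routine such as SciPy's welch):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_seg, seg_len = 1024, 64, 256
t = np.arange(n_seg * seg_len) / fs
x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)  # noisy 100 Hz tone

# Bartlett-style estimate: split into non-overlapping segments and
# average the per-segment periodograms to reduce the variance.
segs = x.reshape(n_seg, seg_len)
periodograms = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
avg = periodograms.mean(axis=0)

freqs = np.fft.rfftfreq(seg_len, 1 / fs)
assert freqs[np.argmax(avg)] == 100.0   # the buried tone stands out after averaging
```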

The DFT itself can also lead to distortion (or perhaps illusion), because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. Increasing the resolution of the DFT can mitigate the problem. That procedure is illustrated by sampling the DTFT [10] [11] .

・ The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued samples is more than offset by the inherent efficiency of the FFT.

・ As already noted, leakage imposes a limit on the inherent resolution of the DTFT. Therefore, benefits obtained from a fine-grained DFT are limited.
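Both points can be illustrated numerically. In the following sketch (with arbitrary illustrative values), zero-padding refines the frequency grid of the DFT and improves the located peak of an off-bin tone, even though the underlying leakage-limited DTFT resolution is unchanged:

```python
import numpy as np

n = np.arange(32)
x = np.sin(2 * np.pi * 3.3 * n / 32)    # tone lying between DFT bins

X_coarse = np.abs(np.fft.rfft(x))           # 32-point DFT: coarse samples of the DTFT
X_fine = np.abs(np.fft.rfft(x, n=512))      # zero-padded: denser samples of the SAME DTFT

# Peak location in units of cycles per 32 samples.
peak_coarse = np.argmax(X_coarse) * 32 / 32
peak_fine = np.argmax(X_fine) * 32 / 512

# The fine grid localizes the peak closer to 3.3 than the coarse one can.
assert abs(peak_fine - 3.3) < abs(peak_coarse - 3.3)
```

Note that the zero-padded spectrum interpolates the coarse one; it adds no new information about the signal, in line with the leakage limit stated above.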

The most important disadvantages of DFT are summarized below.

Disadvantages:

・ The DFT fails at the edges. This is why the JPEG algorithm (used in image compression) uses the DCT instead of the DFT [14] - [17] . What is more, the discrete Hartley transform outperforms the DFT in DSP and DIP [14] [15] .

・ As there is no compact support, the element-by-element correspondence between the two domains (time and frequency) is lost on the way to the frequency domain, resulting in a poor treatment of energy.

・ As a consequence of not having compact support, the DFT is not present in time; in fact, it moves away from the time domain. For this reason, in recent decades the scientific community has created palliative tools with better performance in both domains simultaneously, i.e., time and frequency. Such tools are: the STFT, the GT, and wavelets.

・ DFT has phase uncertainties (indeterminate phase for magnitude = 0) [10] [11] .

・ As it arises from the product of a matrix and a vector, its computational cost is O(N^{2}) for signals (1D) and O(N^{4}) for images (2D).

All this would seem to indicate that it is an inefficient transform; however, several advantages justify its use over the last two centuries. See [10] [11] .

2.3. Fast Fourier Transform (FFT)

The FFT inherits all the disadvantages of the DFT except the computational complexity. In fact, unlike the DFT, the computational cost of the FFT is O(N*log_{2}N) for signals (1D), and O(N^{2}*log_{2}N) for images (2D). This is why it is called the fast Fourier transform.

The FFT is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain (often time or space) to the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors [18] . As a result, it reduces the complexity of computing the DFT from O(N^{2}), which arises if we simply apply the definition of the DFT, to O(N*log_{2}N), where N is the data size. The computational cost of this technique is never greater than that of the conventional approach; in fact, it is usually significantly less. Further, the computational cost as a function of N is fairly smooth, so that linear convolutions of sizes somewhat larger than a power of two remain efficient.
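The factorization argument can be made concrete with a minimal recursive radix-2 Cooley-Tukey implementation (an illustrative sketch for power-of-two lengths, not an optimized FFT):

```python
import numpy as np

def fft_radix2(x):
    """Recursive Cooley-Tukey FFT for power-of-two lengths: O(N log N) operations."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])          # DFT of the even-indexed samples
    odd = fft_radix2(x[1::2])           # DFT of the odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Butterfly: combine the two half-size DFTs into the full-size one.
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(2).standard_normal(128)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Each recursion level performs O(N) butterfly operations over log2(N) levels, which is exactly the sparse-factor decomposition of the DFT matrix mentioned above.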

The FFT is widely used for many applications in engineering, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms were derived as early as 1805 [19] . In 1994, Gilbert Strang described the fast Fourier transform as the most important numerical algorithm of our lifetime [20] , and it was included in the Top 10 Algorithms of the 20th Century by the IEEE journal Computing in Science & Engineering [21] .

2.4. Fourier Uncertainty Principle

In quantum mechanics, the uncertainty principle [1] , also known as Heisenberg’s uncertainty principle, is any of a variety of mathematical inequalities that set a fundamental limit on the precision with which certain pairs of physical properties of a particle, known as complementary variables, can be known simultaneously, such as energy E and time t, or momentum p and position x.

Such pairs cannot be simultaneously measured with arbitrarily high precision: there is a minimum for the product of the uncertainties of the two measurements. First introduced in 1927 by the German physicist Werner Heisenberg, the uncertainty principle states that the more precisely the position of a particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the uncertainty in energy $\Delta E$ and the uncertainty in time $\Delta t$ was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:

$\Delta E\Delta t\ge \hslash \text{/2}$ (1)

where ħ is the reduced Planck constant, h/2π. The energy associated with such a system is

$E=\hslash \omega $ (2)

where ω = 2πf, f being the frequency and ω the angular frequency. Then any uncertainty in ω is transferred to the energy; that is to say:

$\Delta E=\hslash \Delta \omega $ (3)

Replacing Equation (3) into (1), we will have:

$\hslash \Delta \omega \Delta t\ge \hslash /2$ (4)

Finally, simplifying Equation (4), we will have:

$\Delta \omega \Delta t\ge 1/2$ (5)

Equation (5) tells us that simultaneous decimation in time and frequency is impossible for the FFT. Therefore, we must make do with decimation in time or in frequency, but not both at once. For the last four transforms mentioned (STFT, GT, FrFT, and WT), linking each sample in time with its counterpart in frequency in a one-to-one correspondence has been a futile effort to date. That is to say, they are transforms without compact support, with the exception of the WT, which sometimes has it [22] [23] [24] .
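Equation (5) can be checked numerically on the critical case mentioned in Section 2.1, the Gaussian pulse, for which the time-bandwidth product attains the minimum 1/2. The following sketch (with an arbitrary pulse width) estimates the RMS widths of an energy-normalized Gaussian in both domains:

```python
import numpy as np

dt = 0.001
t = np.arange(-10, 10, dt)
sigma = 0.7                                   # arbitrary pulse width
g = np.exp(-t**2 / (2 * sigma**2))
g = g / np.sqrt(np.sum(np.abs(g)**2) * dt)    # normalize energy to 1

w = 2 * np.pi * np.fft.fftfreq(len(t), dt)    # angular-frequency grid
dw = w[1] - w[0]
G = np.fft.fft(g) * dt                        # discrete approximation of the FT
G = G / np.sqrt(np.sum(np.abs(G)**2) * dw)    # normalize energy to 1

dt_rms = np.sqrt(np.sum(t**2 * np.abs(g)**2) * dt)   # Δt (RMS width in time)
dw_rms = np.sqrt(np.sum(w**2 * np.abs(G)**2) * dw)   # Δω (RMS width in frequency)

# The Gaussian saturates the bound: Δω·Δt = 1/2 (up to discretization error).
assert abs(dt_rms * dw_rms - 0.5) < 1e-2
```

A narrower pulse (smaller sigma) shrinks dt_rms and inflates dw_rms by the same factor; the product never drops below 1/2, in accordance with Equation (5).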

3. A Brief on Quantum Information Processing

In this section, we present the three main players of quantum information processing: the elemental unit of quantum information, the qubit (i.e., quantum bit); the Schrödinger equation; and the quantum measurement problem.

3.1. Quantum Bit (Qubit)

Since Quantum Mechanics is formulated in projective Hilbert space, we need to appeal to the Bloch sphere; see Figure 2, where we can see three axes (x, y, z), an equator, two poles or qubit basis states ( $|0\rangle \equiv \text{North}\text{\hspace{0.17em}}\text{Pole}$ , $|1\rangle \equiv \text{South}\text{\hspace{0.17em}}\text{Pole}$ ), two angles $\left(\theta ,\varphi \right)$ , and a generic wave function $|\psi \rangle $ .

In this context, the complete wave function will be:

Figure 2. Bloch sphere.

$|\psi \rangle ={\text{e}}^{i\gamma}\left(\mathrm{cos}\frac{\theta}{2}|0\rangle +{\text{e}}^{i\varphi}sin\frac{\theta}{2}|1\rangle \right)={\text{e}}^{i\gamma}\left(\mathrm{cos}\frac{\theta}{2}|0\rangle +\left(\mathrm{cos}\varphi +i\mathrm{sin}\varphi \right)sin\frac{\theta}{2}|1\rangle \right)$ (6)

where $0\le \theta \le \text{\pi}$ , $0\le \varphi <2\text{\pi}$ . We can ignore the factor ${\text{e}}^{i\gamma}$ because it has no observable effects [1] [2] [3] , and for that reason, we can effectively write:

$|\psi \rangle =\mathrm{cos}\frac{\theta}{2}|0\rangle +{\text{e}}^{i\varphi}sin\frac{\theta}{2}|1\rangle $ (7)

The numbers $\theta $ and $\varphi $ define a point on the unit three-dimensional Bloch sphere, as shown in Figure 2, with, $\alpha =\mathrm{cos}\frac{\theta}{2}$ , and $\beta ={\text{e}}^{i\varphi}sin\frac{\theta}{2}$ , then, replacing them into Equation (7),

$|\psi \rangle =\alpha |0\rangle +\beta |1\rangle $ . (8)

Besides, the column vector ${\left[\alpha \text{\hspace{0.17em}}\beta \right]}^{\text{T}}$ , where (•)^{T} means transpose of (•), is called a ket vector $|\psi \rangle $ ; while the row vector $\left[{\alpha}^{*}\text{\hspace{0.17em}}{\beta}^{*}\right]$ is called a bra vector $\langle \psi |$ . The numbers ${\alpha}^{*}$ and ${\beta}^{*}$ are the complex conjugates of the numbers $\alpha $ and $\beta $ , respectively, although for many purposes it does not hurt to think of them as real numbers. In other words, the state of a qubit is a vector in a two-dimensional complex vector space. The special states $|0\rangle $ and $|1\rangle $ are of crucial importance in quantum computing; they are known as Computational Basis States (CBS) and form an orthonormal basis for this vector space, being

Spin down $=|\downarrow \rangle =|0\rangle =\left[\begin{array}{c}1\\ 0\end{array}\right]=$ qubit basis state = North Pole (9)

and

Spin up $=|\uparrow \rangle =|1\rangle =\left[\begin{array}{c}0\\ 1\end{array}\right]=$ qubit basis state = South Pole (10)

Finally, if the wave function is on the sphere, $|\psi \rangle $ will be a pure state, with,

$\langle \psi |\psi \rangle =\left[\begin{array}{cc}{\alpha}^{*}& {\beta}^{*}\end{array}\right]\left[\begin{array}{c}\alpha \\ \beta \end{array}\right]={\left|\alpha \right|}^{2}+{\left|\beta \right|}^{2}=1$ (11)
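Equations (7), (8), and (11) can be verified with a few lines of code (an illustrative sketch; the angles are arbitrary values):

```python
import numpy as np

def qubit(theta, phi):
    """Pure state on the Bloch sphere: |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    alpha = np.cos(theta / 2)
    beta = np.exp(1j * phi) * np.sin(theta / 2)
    return np.array([alpha, beta], dtype=complex)

psi = qubit(np.pi / 3, np.pi / 4)

# <psi|psi> = |alpha|^2 + |beta|^2 = 1 for any point on the sphere (Equation (11)).
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```

Here np.vdot conjugates its first argument, so it computes exactly the bra-ket product of Equation (11).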

3.2. Schrödinger Equation

A quantum state can be transformed into another state by a unitary operator, symbolized as U (an operator U: H → H on a Hilbert space H is called unitary if it satisfies ${U}^{\u2020}U=U{U}^{\u2020}=I$ , where ${(\u2022)}^{\u2020}$ is the adjoint of (•) and I is the identity matrix), which is required to preserve inner products: if we transform $|\chi \rangle $ and $|\psi \rangle $ to $U|\chi \rangle $ and $U|\psi \rangle $ , then $\langle \chi |{U}^{\u2020}U|\psi \rangle =\langle \chi |\psi \rangle $ . In particular, unitary operators preserve lengths:

$\langle \psi |{U}^{\u2020}U|\psi \rangle =\langle \psi |\psi \rangle =1$ , (12)

That is to say, it is equal to Equation (11). Besides, the unitary operator satisfies the following differential equation known as the Schrödinger equation [1] - [3] :

$\frac{\text{d}}{\text{d}t}U\left(t+\Delta t,t\right)=\frac{-i\stackrel{^}{H}}{\hslash}U\left(t+\Delta t,t\right)$ (13)

where $\stackrel{^}{H}$ represents the Hamiltonian matrix of the Schrödinger equation, $i=\sqrt{-1}$ , and $\hslash $ is the reduced Planck constant, i.e., $\hslash =h/2\text{\pi}$ . Multiplying both sides of Equation (13) by $|\psi \left(t\right)\rangle $ and setting

$|\psi \left(t+\Delta t\right)\rangle =U\left(t+\Delta t,t\right)|\psi \left(t\right)\rangle $ (14)

With $U\left(t+\Delta t,t\right)=U\left(t+\Delta t-t\right)=U\left(\Delta t\right)$ a unitary transform (operator and matrix), this yields

$\frac{\text{d}}{\text{d}t}|\psi \left(t\right)\rangle =\frac{-i\stackrel{^}{H}}{\hslash}|\psi \left(t\right)\rangle $ (15)

The Hamiltonian operator represents the total energy of the system and controls the evolution process. In the most general case, the Hamiltonian comprises kinetic and potential energy. However, if the particle is stationary, the kinetic energy vanishes, leaving only the potential energy, which is the only term linked to the external forces applied to the particle. Thus, control of the external forces is at the same time control of the evolution of the states of the system [1] [2] [3] [25] [26] [27] [28] . For example, bosons (in particular, photons) possess integer spin (i.e., ${m}_{s}=\pm 1$ ); consequently, we have the momentum term

$\sigma \cdot P=\left(\begin{array}{cc}{P}_{z}& {P}_{x}-i\text{\hspace{0.05em}}{P}_{y}\\ {P}_{x}+i\text{\hspace{0.05em}}{P}_{y}& -{P}_{z}\end{array}\right)$ , (16)

being $\sigma =\left({\sigma}_{x}\text{\hspace{0.05em}},{\sigma}_{y},{\sigma}_{z}\right)$ Pauli’s matrices, that is to say:

${\sigma}_{x}=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\text{\hspace{1em}}{\sigma}_{y}=\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right),\text{\hspace{1em}}{\sigma}_{z}=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right),$ (17)

while spin will be,

$S=\hslash {m}_{s}\sigma =\hslash {m}_{s}\left({\sigma}_{x},{\sigma}_{y},{\sigma}_{z}\right)$ . (18)

Then, the Hamiltonian takes the following form,

$H=\frac{cS\cdot P}{\hslash}=\frac{c{m}_{s}\hslash \sigma \cdot P}{\hslash}=\frac{c{m}_{s}\hslash}{\hslash}\left(\begin{array}{cc}{P}_{z}& {P}_{x}-i{P}_{y}\\ {P}_{x}+i{P}_{y}& -{P}_{z}\end{array}\right)=\hslash {m}_{s}\Omega $ (19)

being c the speed of light, $\Omega $ will result in this case:

$\Omega =\frac{c}{\hslash}\left(\begin{array}{cc}{P}_{z}& {P}_{x}-i\text{\hspace{0.05em}}{P}_{y}\\ {P}_{x}+i\text{\hspace{0.05em}}{P}_{y}& -{P}_{z}\end{array}\right)$ (20)

Now, if we consider a spatially isotropic and homogeneous $\Omega $ and a polarization of spin regarding the z-axis exclusively, thus,

${\Omega}_{z}=\frac{c}{\hslash}\left(\begin{array}{cc}{P}_{z}& 0\\ 0& -{P}_{z}\end{array}\right)=\frac{c}{\hslash}{P}_{z}\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)=\frac{c}{\hslash}{P}_{z}{\sigma}_{z}$ (21)

with

$H=\hslash {m}_{s}{\Omega}_{z}=\hslash {m}_{s}\frac{c}{\hslash}{P}_{z}{\sigma}_{z}=\hslash {m}_{s}\omega {\sigma}_{z}$ (22)

where $\omega $ is the angular frequency.

Finally, solving Equation (15) depending on the Hamiltonian of Equation (22), we will have the solution to the Schrödinger equation given by the matrix exponential of the Hamiltonian matrix, that is to say;

$|\psi \left(t+\Delta t\right)\rangle ={\text{e}}^{\frac{-i\stackrel{^}{H}\Delta t}{\hslash}}|\psi \left(t\right)\rangle $ (if Hamiltonian is not time-dependent) (23)

or

$|\psi \left(t+\Delta t\right)\rangle ={\text{e}}^{\frac{-i}{\hslash}{\displaystyle {\int}_{t}^{t+\Delta t}\stackrel{^}{H}\text{d}t}}|\psi \left(t\right)\rangle $ (if Hamiltonian is time-dependent) (24)

The discrete versions of Equations (23) and (24) for a time-independent (or time-dependent) Hamiltonian, with k the discrete time, are:

$|{\psi}_{k+\Delta k}\rangle ={\text{e}}^{\frac{-i\stackrel{^}{H}\Delta k}{\hslash}}|{\psi}_{k}\rangle ={\text{e}}^{-i{m}_{s}{\omega}_{k}{\sigma}_{z}\Delta k}|{\psi}_{k}\rangle $ (if Hamiltonian is not time-dependent) (25)

and

$|{\psi}_{k+\Delta k}\rangle ={\text{e}}^{\frac{-i}{\hslash}{\displaystyle \underset{k}{\overset{k+\Delta k}{\sum}}{\stackrel{^}{H}}_{k}}}|{\psi}_{k}\rangle ={\text{e}}^{-i{m}_{s}{\sigma}_{z}{\displaystyle \underset{k}{\overset{k+\Delta k}{\sum}}{\omega}_{k}}}|{\psi}_{k}\rangle $ (if Hamiltonian is time-dependent) (26)

$|{\psi}_{k+1}\rangle ={\text{e}}^{-i{m}_{s}{\sigma}_{z}{\displaystyle \underset{i=1}{\overset{k+1}{\sum}}{\omega}_{i}}}|{\psi}_{0}\rangle $ (with $\Delta k=1$ and starting from initial state $|{\psi}_{0}\rangle $ [1] ) (27)

On the other hand, replacing Equation (22) into Equation (23), we will have another main equation for this paper,

$|\psi \left(t+\Delta t\right)\rangle ={\text{e}}^{-i{m}_{s}\omega \left(t\right){\sigma}_{z}\Delta t}|\psi \left(t\right)\rangle $ , (28)

and into Equation (24)

$|\psi \left(t+\Delta t\right)\rangle ={\text{e}}^{-i{m}_{s}{\sigma}_{z}{\displaystyle {\int}_{t}^{t+\Delta t}\omega \left(t\right)\text{d}t}}|\psi \left(t\right)\rangle $ . (29)

Finally, considering an incremental approximation of Equation (15), in its continuous form as well as in its discrete version, and making the proper replacements from Equation (22), the two versions of the Schrödinger equation take the following forms, respectively:

$\frac{|\Delta \psi \left(t\right)\rangle}{\Delta t}=-i{m}_{s}\omega \left(t\right){\sigma}_{z}|\psi \left(t\right)\rangle $ (30)

and

$|\Delta {\psi}_{k}\rangle =\frac{|{\psi}_{k+1}-{\psi}_{k-1}\rangle}{2}=-i{m}_{s}{\omega}_{k}{\sigma}_{z}|{\psi}_{k}\rangle $ (31)

These last equations will be fundamental in Section 4.
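Equation (28) is straightforward to simulate: since ${\sigma}_{z}$ is diagonal, its matrix exponential reduces to two phase factors. The following sketch (with arbitrary state and frequency values) applies one evolution step and checks that the norm, as in Equation (12), is preserved:

```python
import numpy as np

sigma_z = np.diag([1.0, -1.0])

def evolve(psi, omega, dt, m_s=1):
    # sigma_z is diagonal, so exp(-i m_s omega sigma_z dt) is just the
    # diagonal matrix diag(e^{-i m_s omega dt}, e^{+i m_s omega dt}).
    phases = np.exp(-1j * m_s * omega * dt * np.diag(sigma_z))
    return np.diag(phases) @ psi

psi = np.array([np.cos(0.3), np.exp(0.5j) * np.sin(0.3)])   # arbitrary pure state
psi_next = evolve(psi, omega=2.0, dt=0.01)

# Unitary evolution preserves <psi|psi> = 1 (Equation (12)).
assert np.isclose(np.vdot(psi_next, psi_next).real, 1.0)
```

Only the relative phase between the two components changes; the populations |α|² and |β|², and hence the expectation of σ_z, stay constant under this Hamiltonian.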

3.3. Quantum Measurement Problem

In quantum mechanics, measurement is a non-trivial and highly counter-intuitive process [1] ; in fact, it is a destructive process responsible for the collapse of the wave function. Firstly, measurement outcomes are inherently probabilistic: regardless of how carefully the measurement procedure has been prepared, the possible outcomes will be distributed according to a certain probability distribution [1] . Secondly, once the measurement has been performed, the quantum system is unavoidably altered due to the interaction with the measurement apparatus. Consequently, for an arbitrary quantum system, the pre-measurement and post-measurement quantum states are in general different [1] , with one exception, which occurs when we work with CBS.

Quantum measurements are described by a set of measurement operators $\left\{{\stackrel{^}{M}}_{m}\right\}$ , where the index m labels the different measurement outcomes, acting on the state space of the system being measured. That is to say, measurement outcomes correspond to values of observables, such as position, energy, and momentum, which are Hermitian operators [1] corresponding to physically measurable quantities. Let $|\psi \rangle $ be the state of the quantum system immediately before the measurement. Then, the probability that result m occurs is given by

$p\left(m\right)=\langle \psi |{\stackrel{^}{M}}_{m}^{\u2020}{\stackrel{^}{M}}_{m}|\psi \rangle $ (32)

and the post-measurement quantum state is

${|\psi \rangle}_{pm}=\frac{{\stackrel{^}{M}}_{m}|\psi \rangle}{\sqrt{\langle \psi |{\stackrel{^}{M}}_{m}^{\u2020}{\stackrel{^}{M}}_{m}|\psi \rangle}}$ (33)

where subscript pm means post-measurement. Besides, operators ${\stackrel{^}{M}}_{m}$ must satisfy the completeness relation of Equation (34), because that guarantees that probabilities will sum to one; see Equation (35) [1] :

${\sum}_{m}{\stackrel{^}{M}}_{m}^{\u2020}{\stackrel{^}{M}}_{m}=I$ (34)

${\sum}_{m}\langle \psi |{\stackrel{^}{M}}_{m}^{\u2020}{\stackrel{^}{M}}_{m}|\psi \rangle ={\sum}_{m}p\left(m\right)=1$ (35)

Let us illustrate with a simple example: assume we have a polarized photon with associated polarization orientations ‘horizontal’ and ‘vertical’. The horizontal polarization direction is denoted by $|0\rangle $ and the vertical polarization direction by $|1\rangle $ . Therefore, an arbitrary initial state for our photon can be described by the quantum state $|\psi \rangle =\alpha |0\rangle +\beta |1\rangle $ (recalling Subsection 3.1, Equation (8)), where $\alpha $ and $\beta $ are complex numbers constrained by the famous normalization condition ${\left|\alpha \right|}^{2}+{\left|\beta \right|}^{2}=1$ , and $\left\{|0\rangle ,|1\rangle \right\}$ is the computational basis (or CBS) spanning ${{\rm H}}^{2}$ . Then, we construct two measurement operators ${\stackrel{^}{M}}_{0}=|0\rangle \langle 0|$ and ${\stackrel{^}{M}}_{1}=|1\rangle \langle 1|$ with two measurement outcomes ${a}_{0},{a}_{1}$ . Thus, the full observable used for measurement in this experiment is the diagonal matrix $\stackrel{^}{M}={a}_{0}|0\rangle \langle 0|+{a}_{1}|1\rangle \langle 1|$ . According to the postulate, the probabilities of obtaining outcome ${a}_{0}$ or outcome ${a}_{1}$ are given by $p\left({a}_{0}\right)={\left|\alpha \right|}^{2}$ and $p\left({a}_{1}\right)={\left|\beta \right|}^{2}$ . The corresponding post-measurement quantum states are as follows: if the outcome is ${a}_{0}$ , then ${|\psi \rangle}_{pm}=|0\rangle $ ; if the outcome is ${a}_{1}$ , then ${|\psi \rangle}_{pm}=|1\rangle $ .
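The photon example can be reproduced numerically (an illustrative sketch; the amplitudes are arbitrary values satisfying the normalization condition):

```python
import numpy as np

alpha, beta = 0.6, 0.8j            # |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta])

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
M0 = np.outer(ket0, ket0)          # M_0 = |0><0|
M1 = np.outer(ket1, ket1)          # M_1 = |1><1|

# Completeness relation (34): sum of M†M equals the identity.
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

p0 = np.vdot(psi, M0 @ psi).real   # p(a0) = |alpha|^2, Equation (32)
p1 = np.vdot(psi, M1 @ psi).real   # p(a1) = |beta|^2
assert np.isclose(p0 + p1, 1.0)    # probabilities sum to one, Equation (35)

post0 = (M0 @ psi) / np.sqrt(p0)   # Equation (33): collapse onto |0>
assert np.allclose(post0, ket0)
```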

4. Quantum Spectral Analysis: Frequency in Time (QSA-FIT)

This tool plays a central role in the study of quantum entanglement [25] ; at the same time, it is a key piece when applied to signal analysis, in a much more elegant way than the Fourier theory, in particular for the practical calculation of the bandwidth of any type of signal [9] [12] [13] . In fact, a quantum time-dependent spectral analysis, or simply quantum spectral analysis: frequency in time (QSA-FIT), complements and completes the Fourier theory, especially its maximum exponent, i.e., the fast Fourier transform (FFT) [10] [11] [18] [19] . For all the above, QSA-FIT is the first true temporal-spectral bridge [29] [30] . Finally, QSA-FIT is a metric which assesses the impact of the flanks of a signal on its frequency spectrum at each instant, something not taken into account by the Fourier theory, much less in real time.

4.1. Application to a Quantum State

There are several versions of QSA-FIT [29] [30] ; in this case, we will deduce the operator in its continuous and discrete forms from Equations (30) and (31), respectively. Therefore, if we multiply both sides of Equation (30) by $\langle \psi |$ , we will have:

$\langle \psi \left(t\right)|\frac{\Delta \psi \left(t\right)}{\Delta t}\rangle =-i{m}_{s}\omega \left(t\right)\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle $ (36)

then,

$\Delta \omega \left(t\right)={m}_{s}\omega \left(t\right)=i\frac{1}{\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle}\langle \psi \left(t\right)|\frac{\Delta \psi \left(t\right)}{\Delta t}\rangle $ . (37)

Now, if we multiply both sides of Equation (31) by $\langle {\psi}_{k}|$ , we will have:

$\frac{\langle {\psi}_{k}|{\psi}_{k+1}-{\psi}_{k-1}\rangle}{2}=-i{m}_{s}{\omega}_{k}\langle {\psi}_{k}|{\sigma}_{z}|{\psi}_{k}\rangle $ (38)

then,

$\Delta {\omega}_{k}={m}_{s}{\omega}_{k}=i\frac{\langle {\psi}_{k}|{\psi}_{k+1}-{\psi}_{k-1}\rangle}{2\langle {\psi}_{k}|{\sigma}_{z}|{\psi}_{k}\rangle}=i\frac{\left(\langle {\psi}_{k}|{\psi}_{k+1}\rangle -\langle {\psi}_{k}|{\psi}_{k-1}\rangle \right)}{2\langle {\psi}_{k}|{\sigma}_{z}|{\psi}_{k}\rangle}$ (39)

That is to say, we are going to have a Δω at each instant of the signal (continuous or discrete, classical or quantum). On the other hand, a very interesting attribute of this operator is that it is not affected by the quantum measurement problem, because its output is a classical scalar; in other words, it can be measured with complete accuracy. In fact, the operator Δω is a hybrid algorithm with quantum and classical parts, as we can see in Figure 3, where a single fine line represents a wire carrying 1 or N qubits, while a single thick line represents a wire carrying 1 or N classical bits. Moreover, the quantum part of the operator Δω must respect the concept of reversibility, which is closely related to energy consumption and hence to Landauer’s Principle [1] ; for this reason, $|{\psi}_{k}\rangle $ also appears at the output. Thus,

Quantum part:

$\begin{array}{l}{a}_{k}=\langle {\psi}_{k}|{\psi}_{k+1}\rangle \\ {b}_{k}=\langle {\psi}_{k}|{\psi}_{k-1}\rangle \\ {c}_{k}=\langle {\psi}_{k}|{\sigma}_{z}|{\psi}_{k}\rangle \end{array}$ (40)

Classical part:

$\Delta {\omega}_{k}={m}_{s}{\omega}_{k}=i\frac{\left({a}_{k}-{b}_{k}\right)}{2{c}_{k}}$ (41)

Finally, for all mentioned cases, that is to say, continuous or discrete, classical or quantum signals, the bandwidth BW will result from the difference between the maximum and the minimum frequency of such signal,

$BW={f}_{\mathrm{max}}-{f}_{\mathrm{min}}=\frac{1}{2\text{\pi}}\left(\Delta {\omega}_{\mathrm{max}}-\Delta {\omega}_{\mathrm{min}}\right)$ (42)
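The hybrid scheme of Equations (40) and (41) can be sketched numerically. The test stream below is a hypothetical choice (a qubit precessing under ${\sigma}_{z}$ at 5 Hz, sampled every millisecond); note that, since the centered difference of Equation (39) carries no division by Δt, the recovered value is the per-sample quantity sin(ωΔt) ≈ ωΔt:

```python
import numpy as np

def qsa_fit(states):
    """Sketch of Eqs. (40)-(41): Delta-omega_k = i (a_k - b_k) / (2 c_k)."""
    sz = np.diag([1.0, -1.0])                      # Pauli sigma_z
    dws = []
    for k in range(1, len(states) - 1):
        a = np.vdot(states[k], states[k + 1])      # a_k = <psi_k|psi_{k+1}>
        b = np.vdot(states[k], states[k - 1])      # b_k = <psi_k|psi_{k-1}>
        c = np.vdot(states[k], sz @ states[k])     # c_k = <psi_k|sigma_z|psi_k>
        dws.append((1j * (a - b) / (2 * c)).real)  # Eq. (41): a classical scalar
    return np.array(dws)

# Hypothetical test stream: |psi_k> = exp(-i omega sigma_z t_k)|psi_0>
omega, dt = 2 * np.pi * 5.0, 1e-3                  # 5 Hz tone, 1 ms sampling
t = np.arange(100) * dt
c0, s0 = np.cos(0.3), np.sin(0.3)                  # so that <sigma_z> != 0
states = [np.array([c0 * np.exp(-1j * omega * tk),
                    s0 * np.exp(+1j * omega * tk)]) for tk in t]
dw = qsa_fit(states)                               # per-sample ~ sin(omega dt)
```

For this precessing state, every Δω_k comes out as the same real number, the discrete trace of a pure tone.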

4.2. Application to Classical Signals

The application of QSA-FIT is nowhere more conspicuous than in this case. There are several versions of, and ways to apply, QSA-FIT to a classical signal [29] [30] .

Figure 3. A hybrid algorithm with quantum and classical parts.

However, the direct classical continuous version of Equations (37) and (39) will be of the form:

$\omega \left(t\right)=\frac{\eta}{s\left(t\right)}\frac{\text{d}s\left(t\right)}{\text{d}t}$ , (43)

where s(t) is the signal, and η is an adjustment factor. While the discrete version will be:

${\omega}_{k}=\frac{\eta}{{s}_{k}}\frac{\left({s}_{k+1}-{s}_{k-1}\right)}{2}$ . (44)

The problem with Equations (43) and (44) is the indeterminacy of Δω at any instant where the signal is null. Then, we will use a modified version of the signal, called baseline-less (BLL), which consists of,

$\omega \left(t\right)=\frac{1}{{s}_{BLL}}\frac{\text{d}s\left(t\right)}{\text{d}t}$ , (45)

with η = 1, where,

${s}_{BLL}=\frac{{s}_{\mathrm{max}}-{s}_{\mathrm{min}}}{2}$ , (46)

then,

$\omega \left(t\right)=\frac{1}{\left(\frac{{s}_{\mathrm{max}}-{s}_{\mathrm{min}}}{2}\right)}\frac{\text{d}s\left(t\right)}{\text{d}t}$ , (47)

with,

${f}_{\mathrm{max}}=\frac{1}{2\text{\pi}}\frac{1}{\left(\frac{{s}_{\mathrm{max}}-{s}_{\mathrm{min}}}{2}\right)}{\left(\frac{\text{d}s\left(t\right)}{\text{d}t}\right)}_{\mathrm{max}}=\frac{{\left(\text{d}s\left(t\right)/\text{d}t\right)}_{\mathrm{max}}}{\text{\pi}\left({s}_{\mathrm{max}}-{s}_{\mathrm{min}}\right)}$ , (48)

and,

${f}_{\mathrm{min}}=\frac{1}{\text{2\pi}}\frac{1}{\left(\frac{{s}_{\mathrm{max}}-{s}_{\mathrm{min}}}{2}\right)}{\left(\frac{\text{d}s\left(t\right)}{\text{d}t}\right)}_{\mathrm{min}}=\frac{{\left(\text{d}s\left(t\right)/\text{d}t\right)}_{\mathrm{min}}}{\text{\pi}\left({s}_{\mathrm{max}}-{s}_{\mathrm{min}}\right)}$ . (49)

Now, if we consider a signal like Figure 4 (in blue),

$s\left(t\right)=A\mathrm{cos}\left(\omega t+\phi \right)+B$ , (50)

where A is the amplitude, φ is the phase, and B is the baseline, with,

$\frac{\text{d}s\left(t\right)}{\text{d}t}=-A\omega \mathrm{sin}\left(\omega t+\phi \right)$ , (51)

then,

$\begin{array}{l}{s}_{\mathrm{max}}=A+B\\ {s}_{\mathrm{min}}=-A+B\end{array}$ (52)

Now, replacing Equations (51) and (52) into (47), we will have:

Figure 4. Example of a cosine signal (in blue), its QSA-FIT (in green), and its |FFT| (in red). Five cycles of a 30 [hertz] cosine signal with unitary modulus and without baseline, with 1024 FFT samples, are shown. The amplitude of the QSA-FIT coincides with the distance between the two maximum peaks of the |FFT|. Such distance or separation is known as the bandwidth of the signal.

$\begin{array}{l}\omega \left(t\right)=\frac{1}{\left(\frac{\left(A+B\right)-\left(-A+B\right)}{2}\right)}\left(-A\omega \mathrm{sin}\left(\omega t+\phi \right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=-\omega \mathrm{sin}\left(\omega t+\phi \right)\end{array}$ , (53)

in green in Figure 4; then,

$\begin{array}{l}{f}_{\mathrm{max}}=\frac{\omega}{\text{2\pi}}=\frac{2\text{\pi}f}{\text{2\pi}}=f\\ {f}_{\mathrm{min}}=\frac{-\omega}{\text{2\pi}}=\frac{-2\text{\pi}f}{\text{2\pi}}=-f\end{array}$ (54)

So, replacing Equation (54) into (42), we will have:

$BW={f}_{\mathrm{max}}-{f}_{\mathrm{min}}=f-\left(-f\right)=2f$ . (55)
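The chain of Equations (46)-(49) and (42) can be checked numerically for this cosine. A minimal sketch, assuming a pure 30 Hz tone densely sampled and the derivative approximated by finite differences:

```python
import numpy as np

# Check of Eqs. (46)-(49) and (42): for a cosine of frequency f, BW = 2f.
f, A, B, phi = 30.0, 1.0, 0.0, 0.0          # 30 Hz tone, no baseline
t = np.linspace(0.0, 5.0 / f, 4096)          # five cycles, densely sampled
s = A * np.cos(2 * np.pi * f * t + phi) + B  # Eq. (50)

s_bll = (s.max() - s.min()) / 2.0            # Eq. (46): baseline-less scale
w = np.gradient(s, t) / s_bll                # Eq. (47): instantaneous omega(t)

f_max = w.max() / (2 * np.pi)                # Eq. (48): ~ +f
f_min = w.min() / (2 * np.pi)                # Eq. (49): ~ -f
bw = f_max - f_min                           # Eq. (42): ~ 2 f = 60 Hz
```

The numerical bandwidth lands on 2f up to the finite-difference error, in agreement with Equation (55).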

This result can be seen in the lower part of Figure 4, between QSA-FIT and |FFT|: it is the total aperture of the QSA-FIT (in green) and, at the same time, the distance between the peaks of the |FFT| (in red). Now, if we consider a perfect gate signal like that of Figure 5 (in blue), where a perfect gate means a gate signal with infinite slope in its transitions from one state to another, with

$s\left(t\right)=A\,gate\left(\omega t+\phi \right)+B$ , (56)

where A is the amplitude, φ is the phase, and B is the baseline; with,

Figure 5. Example of a perfect gate signal (in blue), its QSA-FIT (in green), and its |FFT| (in red). One and a half cycles of a 3 [hertz] perfect gate signal with a modulus equal to ½ and an equal baseline, with 1024 FFT samples, are shown. The thick gray lines represent the infinite extension of the range of frequencies, which otherwise would not fit in the figure. Besides, these lines also represent the monotonous values of both the QSA-FIT and the |FFT| in that frequency range; since it does not make sense to graph the same thing indefinitely, said range is replaced with the aforementioned line. The ends of the QSA-FIT coincide with those of the |FFT|, and in this way an infinite bandwidth is obtained for this signal.

$\frac{\text{d}s\left(t\right)}{\text{d}t}=A\omega \frac{\text{d}gate\left(\omega t+\phi \right)}{\text{d}t}$ , (57)

where the derivative of the gate can have only 3 possible values,

$\frac{\text{d}s\left(t\right)}{\text{d}t}=\{\begin{array}{l}A\omega \infty \\ 0\\ -A\omega \infty \end{array}$ (58)

then, if,

$\begin{array}{l}{s}_{\mathrm{max}}=A+B\\ {s}_{\mathrm{min}}=-A+B\end{array}$ (59)

So far, we have obtained results similar to those of the previous case in relation to s_{max} and s_{min}; however, the true difference lies in everything related to the derivative. In this case, the derivative of the perfect gate takes values $\pm \infty $ . Now, replacing Equations (58) and (59) into (47), we will have:

$\omega \left(t\right)=\frac{1}{\left(\frac{\left(A+B\right)-\left(-A+B\right)}{2}\right)}\{\begin{array}{l}A\omega \infty \\ 0\\ -A\omega \infty \end{array}=\{\begin{array}{l}\omega \infty \\ 0\\ -\omega \infty \end{array}$ (60)

in green in Figure 5, where, we have represented with a gray thick line an infinite discontinuity in the graphics of QSA-FIT (in green) and |FFT| (in red). Therefore,

$\begin{array}{l}{f}_{\mathrm{max}}=\frac{\omega \infty}{\text{2\pi}}=\frac{2\text{\pi}f\infty}{\text{2\pi}}=f\infty =\infty \\ {f}_{\mathrm{min}}=\frac{-\omega \infty}{\text{2\pi}}=\frac{-2\text{\pi}f\infty}{\text{2\pi}}=-f\infty =-\infty \end{array}$ (61)

Then, replacing Equation (61) into (42), we will have:

$BW={f}_{\mathrm{max}}-{f}_{\mathrm{min}}=\infty -\left(-\infty \right)=2\infty =\infty $ (62)
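The infinite bandwidth of Equation (62) can be glimpsed numerically: for a sampled gate, the finite-difference slope at each flank grows as the sampling step shrinks, so the measured BW diverges with the resolution. A sketch under illustrative assumptions (the 3 Hz gate and the sample counts are arbitrary choices):

```python
import numpy as np

def gate_bw(n):
    """BW of a sampled square gate via Eq. (47); n = samples per period."""
    f = 3.0
    t = np.linspace(0.0, 1.0 / f, n, endpoint=False)
    s = 0.5 * np.sign(np.cos(2 * np.pi * f * t)) + 0.5   # gate in {0, 1}
    s_bll = (s.max() - s.min()) / 2.0                    # Eq. (46)
    w = np.gradient(s, t) / s_bll                        # Eq. (47)
    return (w.max() - w.min()) / (2 * np.pi)             # Eq. (42)

# The discrete BW grows without bound as the sampling step shrinks,
# the numerical trace of the infinite bandwidth of Eq. (62).
bws = [gate_bw(n) for n in (256, 1024, 4096)]
```

Here each refinement of the grid multiplies the finite-difference slope at the flanks, so the bandwidth scales linearly with the number of samples per period.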

4.3. Application to Entangled States

Quantum Information Processing has two fundamental tools permanently used in Quantum Computing and Communications: the Principle of Superposition and Quantum Entanglement [25] . These tools are based on the work of Erwin Schrödinger [26] [27] , who defined entangled pure states as the pure quantum states of composite systems that cannot be represented in the form of simple tensor products of subsystem state-vectors, i.e.:

$|{\Psi}_{AB}\rangle \ne |{\psi}_{A}\rangle \otimes |{\psi}_{B}\rangle $ (63)

where “ $\otimes $ ” indicates the Kronecker product (also known as a tensor product), while $|{\psi}_{A}\rangle $ and $|{\psi}_{B}\rangle $ are vectors providing the states of both subsystems, such as elementary particles [26] [27] . The product states [25] are those states of composite systems which can be represented as tensor products of subsystem states; they constitute the complement in the set of pure states. In fact, states of the composite system that can be represented in this form are called separable states. Then, since not all states are separable (and thus product states), we will carry out the following analysis. We establish a pair of bases: $\left\{|{u}_{A}\rangle \right\}$ for H_{A} and $\left\{|{v}_{B}\rangle \right\}$ for H_{B}. In H_{A} ⊗ H_{B}, the most general state is of the form:

$|{\Psi}_{AB}\rangle ={\sum}_{u,v}{r}_{uv}|{u}_{A}\rangle \otimes |{v}_{B}\rangle $ . (64)

This state is separable if there are vectors $\left[{r}_{u}^{A}\right]$ , $\left[{r}_{v}^{B}\right]$ such that ${r}_{uv}={r}_{u}^{A}{r}_{v}^{B}$ , yielding $|{\psi}_{A}\rangle ={\sum}_{u}{r}_{u}^{A}|{u}_{A}\rangle $ and $|{\psi}_{B}\rangle ={\sum}_{v}{r}_{v}^{B}|{v}_{B}\rangle $ . It is inseparable if, for any pair of vectors $\left[{r}_{u}^{A}\right]$ , $\left[{r}_{v}^{B}\right]$ , at least for one pair of coordinates ${r}_{u}^{A}$ , ${r}_{v}^{B}$ we have ${r}_{uv}\ne {r}_{u}^{A}{r}_{v}^{B}$ . If a state is inseparable, it is called an entangled state.

Moreover, in 1935 Albert Einstein, Boris Podolsky and Nathan Rosen (EPR) suggested a thought experiment by which they tried to demonstrate that the wave function did not provide a complete description of physical reality (which gives rise to the famous EPR paradox), and hence that the Copenhagen interpretation is unsatisfactory. Resolutions of the paradox have important implications for the interpretation of quantum mechanics [31] . The essence of the paradox is that particles can interact in such a way that it is possible to measure both their position and their momentum more accurately than Heisenberg’s uncertainty principle allows [28] , unless measuring one particle instantaneously affects the other to prevent this accuracy, which would involve information being transmitted faster than light [32] [33] [34] , as forbidden by the theory of relativity (spooky action at a distance) [28] [31] [35] [36] [37] [38] [39] . This consequence had not been previously noticed and seemed unreasonable at the time; the phenomenon involved is now known as quantum entanglement [25] [28] .

On the other hand, in 1964 John S. Bell introduced his famous theorem [35] , associated with 4 states, i.e., 2-qubit vectors in a combined Hilbert space ${{\rm H}}_{AB}={{\rm H}}_{A}^{2}\otimes {{\rm H}}_{B}^{2}$ , relative to two subsystems A and B,

$|{\Phi}_{AB}^{\pm}\rangle =\frac{1}{\sqrt{2}}\left(|{0}_{A},{0}_{B}\rangle \pm |{1}_{A},{1}_{B}\rangle \right),\text{\hspace{1em}}|{\Psi}_{AB}^{\pm}\rangle =\frac{1}{\sqrt{2}}\left(|{0}_{A},{1}_{B}\rangle \pm |{1}_{A},{0}_{B}\rangle \right)$ . (65)

These are called Bell states, and are also known as EPR pairs. The theorem raises an inequality whose violation by quantum mechanics establishes the non-locality present in the entanglement of two subsystems such as A and B. Besides, a later reformulation of this inequality due to Clauser, Horne, Shimony, and Holt (CHSH) leads to a form more conducive to experimental testing [40] .

As we can see in Equation (65), each Bell state has two components. In particular, one of the components of $|{\Phi}_{AB}^{+}\rangle $ is $|{0}_{A},{0}_{B}\rangle =|00\rangle $ , while the other one is $|{1}_{A},{1}_{B}\rangle =|11\rangle $ . Applying Equation (30) to each component individually, we can calculate the spectral analysis thanks to the QSA-FIT operator:

$\frac{\text{d}|00\rangle}{\text{d}t}=\left(-i{m}_{|00\rangle}\omega \right)\left[{\sigma}_{z}.\oplus {\sigma}_{z}\right]|00\rangle $ . (66)

We need to use a new operator “ $.\oplus $ ” (which is easy to generalize) on the Pauli matrix ${\sigma}_{z}$ of Equation (17). This new operator is the only substantial difference between Equations (30) and (66), and it accounts for the dimensional difference between the two equations. Thus, if

$A=\left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]$ , and $B=\left[\begin{array}{cc}{b}_{11}& {b}_{12}\\ {b}_{21}& {b}_{22}\end{array}\right]$ ,

therefore,

$\begin{array}{c}A.\oplus B=\left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right].\oplus \left[\begin{array}{cc}{b}_{11}& {b}_{12}\\ {b}_{21}& {b}_{22}\end{array}\right]=\left[\begin{array}{cc}\left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]+{b}_{11}& \left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]+{b}_{12}\\ \left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]+{b}_{21}& \left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]+{b}_{22}\end{array}\right]\\ =\left[\begin{array}{cc}\left[\begin{array}{cc}{a}_{11}+{b}_{11}& {a}_{12}+{b}_{11}\\ {a}_{21}+{b}_{11}& {a}_{22}+{b}_{11}\end{array}\right]& \left[\begin{array}{cc}{a}_{11}+{b}_{12}& {a}_{12}+{b}_{12}\\ {a}_{21}+{b}_{12}& {a}_{22}+{b}_{12}\end{array}\right]\\ \left[\begin{array}{cc}{a}_{11}+{b}_{21}& {a}_{12}+{b}_{21}\\ {a}_{21}+{b}_{21}& {a}_{22}+{b}_{21}\end{array}\right]& \left[\begin{array}{cc}{a}_{11}+{b}_{22}& {a}_{12}+{b}_{22}\\ {a}_{21}+{b}_{22}& {a}_{22}+{b}_{22}\end{array}\right]\end{array}\right]\\ =\left[\begin{array}{cccc}{a}_{11}+{b}_{11}& {a}_{12}+{b}_{11}& {a}_{11}+{b}_{12}& {a}_{12}+{b}_{12}\\ {a}_{21}+{b}_{11}& {a}_{22}+{b}_{11}& {a}_{21}+{b}_{12}& {a}_{22}+{b}_{12}\\ {a}_{11}+{b}_{21}& {a}_{12}+{b}_{21}& {a}_{11}+{b}_{22}& {a}_{12}+{b}_{22}\\ {a}_{21}+{b}_{21}& {a}_{22}+{b}_{21}& {a}_{21}+{b}_{22}& {a}_{22}+{b}_{22}\end{array}\right]\end{array}$ (67)

Now, applying the new operator on the Pauli matrices

$\begin{array}{c}{\sigma}_{z}.\oplus {\sigma}_{z}=\left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right].\oplus \left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]=\left[\begin{array}{cc}\left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]+1& \left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]+0\\ \left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]+0& \left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]-1\end{array}\right]\\ =\left[\begin{array}{cc}\left[\begin{array}{cc}1+1& 0+1\\ 0+1& -1+1\end{array}\right]& \left[\begin{array}{cc}1+0& 0+0\\ 0+0& -1+0\end{array}\right]\\ \left[\begin{array}{cc}1+0& 0+0\\ 0+0& -1+0\end{array}\right]& \left[\begin{array}{cc}1-1& 0-1\\ 0-1& -1-1\end{array}\right]\end{array}\right]=\left[\begin{array}{cccc}2& 1& 1& 0\\ 1& 0& 0& -1\\ 1& 0& 0& -1\\ 0& -1& -1& -2\end{array}\right]\end{array}$ (68)
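The “ $.\oplus $ ” outer addition of Equation (67) is straightforward to code; a minimal sketch for equal-sized square matrices, checked against Equation (68):

```python
import numpy as np

def dotoplus(A, B):
    """The '.+' operator of Eq. (67): block (i, j) of the result is A + b_ij."""
    J = np.ones_like(B)                    # assumes A and B have equal shapes
    return np.kron(J, A) + np.kron(B, J)   # inner copies of A, outer shifts b_ij

sigma_z = np.array([[1, 0], [0, -1]])
M = dotoplus(sigma_z, sigma_z)             # should reproduce Eq. (68)
```

Writing the operator as a sum of two Kronecker products makes the block structure of Equation (67) explicit: one term replicates A in every block, the other adds the scalar b_ij to block (i, j).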

Then, if we multiply both sides of the Equation (66) by $\langle 00|$ ,

$\langle 00|\text{d}00/\text{d}t\rangle =\left(-i{m}_{|00\rangle}\omega \right)\langle 00|{\sigma}_{z}.\oplus {\sigma}_{z}|00\rangle $ . (69)

Then, if ${m}_{|00\rangle}=+1$ for photons,

$\Delta {\omega}_{\mathrm{max}}=\Delta {\omega}_{|00\rangle}={m}_{|00\rangle}\omega =\omega =\frac{i\langle 00|\text{d}00/\text{d}t\rangle}{\langle 00|{\sigma}_{z}.\oplus {\sigma}_{z}|00\rangle}$ . (70)

That is, Equations (37) and (70) coincide in their form, and independently of the rightmost term of Equation (70), it is clear that the spectral analysis for its counterpart with ${m}_{|11\rangle}=-1$ will be:

$\Delta {\omega}_{\mathrm{min}}=\Delta {\omega}_{|11\rangle}={m}_{|11\rangle}\omega =-\omega $ . (71)

Then, the bandwidth of the original entangled spins will be:

$B{W}_{\text{original}}=\frac{1}{\text{2\pi}}\left(\Delta {\omega}_{\mathrm{max}}-\Delta {\omega}_{\mathrm{min}}\right)=\frac{1}{\text{2\pi}}\left(\omega -\left(-\omega \right)\right)=\frac{2\omega}{\text{2\pi}}=\frac{\omega}{\text{\pi}}$ . (72)

That is to say, the bandwidth of the link between the original spins is finite.

4.4. Trade-Off between Δω and Δt

Another important concept regarding QSA-FIT emerges from Equation (37). That equation shows the trade-off between $\Delta t$ and $\Delta \omega $ : a change in one produces a change in the other. That is to say, this attribute of functional dependence is interchangeable. This very strong dependence ensures the projection of QSA-FIT onto elements as important to Quantum Physics as Quantum Entanglement [25] [28] [41] , in particular its implication in Quantum Communication [42] [43] [44] [45] [46] . In other words, everything revolves around Equation (37), which allows us to deduce the trade-off:

$\Delta \omega \Delta t=i\frac{\langle \psi \left(t\right)|\Delta \psi \left(t\right)\rangle}{\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle}$ (73)

Now, if we consider the division of the derivative by 2 and take modulus on the right side of the equality,

$\Delta \omega \Delta t=\frac{1}{2}\left|\frac{i\langle \psi \left(t\right)|\Delta \psi \left(t\right)\rangle}{\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle}\right|$ (74)

Therefore, the trade-off becomes,

$\begin{array}{c}\Delta \omega \Delta t=\frac{1}{2}\left|\frac{i\langle \psi \left(t\right)|\Delta \psi \left(t\right)\rangle}{\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle}\right|=\frac{1}{2}\left|i\right|\left|\frac{\langle \psi \left(t\right)|\Delta \psi \left(t\right)\rangle}{\langle \psi \left(t\right)|{\sigma}_{z}|\psi \left(t\right)\rangle}\right|\\ =\frac{1}{2}{\left\{{\left(i\right)}^{*}\left(i\right)\right\}}^{1/2}=\frac{1}{2}{\left\{\left(-i\right)\left(i\right)\right\}}^{1/2}\ge \frac{1}{2}\end{array}$ (75)

Although Equation (75) is similar to Equation (5) of the Fourier Uncertainty Principle from Subsection 2.4, the concept here is completely different: while in the FFT, Equation (5) tells us that a simultaneous decimation in time and frequency is impossible, in QSA-FIT this trade-off means that the shorter the change in time in the state of a signal, the higher the spectral tone that represents that change.

4.5. Application to Quantum Signals

Let us now present QSA-FIT as a procedure. Given a stream (or time sequence) of quantum states $\left\{|{\psi}_{0}\rangle ,|{\psi}_{1}\rangle ,|{\psi}_{2}\rangle ,\cdots ,|{\psi}_{N-1}\rangle \right\}$ , we will do:

1) If the time sequence is cyclic, then, we will use the modified time sequence,

$\left\{|{\psi}_{N-1}\rangle ,|{\psi}_{0}\rangle ,|{\psi}_{1}\rangle ,|{\psi}_{2}\rangle ,\cdots ,|{\psi}_{N-1}\rangle ,|{\psi}_{0}\rangle \right\}$

else-if, we will use the modified time sequence based on a $|0\rangle \text{-padding}$ criterion,

$\left\{|0\rangle ,|{\psi}_{0}\rangle ,|{\psi}_{1}\rangle ,|{\psi}_{2}\rangle ,\cdots ,|{\psi}_{N-1}\rangle ,|0\rangle \right\}$

end-if

2) According to Figure 3, and applying Equations (40) and (41),

$\left\{\Delta {\omega}_{0},\Delta {\omega}_{1},\Delta {\omega}_{2},\cdots ,\Delta {\omega}_{N-1}\right\}$

3) Finally, carrying out classical measurements (with all the required precision), we will obtain,

$\left\{\Delta {\omega}_{0},\Delta {\omega}_{1},\Delta {\omega}_{2},\cdots ,\Delta {\omega}_{N-1}\right\}$ .

Clearly, the sketch of Figure 3 represents a hybrid algorithm (quantum-classical, with a first quantum stage and a second classical part), which at the moment of measurement is not subject to or affected by the quantum measurement problem [47] [48] . That is, the only limitation to obtaining exact values of $\Delta \omega $ lies in the expertise of the research team, the measurement technique employed, and the quality of the instrumentation.
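Step 1 of the procedure above can be sketched as a small helper; the state vectors are assumed to be NumPy arrays, and the example stream is an arbitrary illustrative choice:

```python
import numpy as np

def prepare_sequence(states, cyclic):
    """Step 1 of the QSA-FIT procedure: extend the stream at both ends so
    that the centered difference of Eq. (39) is defined for every sample."""
    if cyclic:                                   # wrap the time sequence
        return [states[-1], *states, states[0]]
    ket0 = np.zeros_like(states[0])              # |0>-padding criterion
    ket0[0] = 1.0
    return [ket0, *states, ket0]

stream = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
padded = prepare_sequence(stream, cyclic=False)  # |0>, psi_0, psi_1, |0>
```

Either branch yields a length N + 2 sequence, so steps 2 and 3 can produce one Δω per original sample.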

5. A Pair of Practical Simulations

In this section, we present two very important simulations, which expose the complete potential of the new tool. If we wanted to do the same through the Quantum Fourier Transform (QFT), we would have the serious inconvenience that it loses the direct and biunivocal relationship with time, since the QFT (as we mentioned above) does not have an important attribute of functional analysis called compact support. This dysfunctionality is inherited from its classical counterpart, i.e., the FFT. This has dire consequences when trying to spectrally analyze a signal formed by quantum states in a quantum streaming fashion.

Therefore, we have prepared two simulations of very different characteristics: the first one has a circular temporal transition between samples (quantum states), as we can see on the left of Figure 6, where both components of each wave function $|\psi \rangle $ can be observed: α (in red) and β (in blue). On the other hand, the right-hand side of the same figure shows the frequency in hertz in a direct relationship with time. Besides, this quantum signal will have a bandwidth equal to 3.9751 × 10^{−15} hertz, which is absolutely reasonable considering the type of signal.

The second simulation consists of a very different type of signal from the last one. In this case, we have chosen a sequence with a completely random temporal transition in the orientation of the successive spins inside the Bloch sphere; i.e., each quantum state which is part of the quantum signal makes a sudden jump in its direction with respect to its predecessor and its successor, without the slightest commitment to a typical functional relation. In fact, both α (in red) and β (in blue) follow a random sequence of Gaussian distribution with null mean value; see the left-hand side of Figure 7. The right-hand side of that figure shows the frequency in hertz, with a bandwidth equal to 57.4453 hertz.

6. Conclusions and Future Works

This work began with an extensive tour of traditional spectral techniques based on Fourier’s Theory, without compact support and completely disconnected from the link between time and frequency (this analysis included the wavelet transform, which sometimes has compact support), and the responsibility of each flank with respect to the final spectral components of a signal, as we can see in Section 4.2. Besides, these attributes extend to image and video [29] [30] . For that reason, QSA-FIT was created: to cover such space and also as a complement to the aforementioned Fourier’s Theory, in particular the FFT. A simple comparison between QSA-FIT and FFT sheds light on some initial conclusions, synthesized in Table 1.

Figure 6. The graph to the left shows α and β for a circular evolution in terms of time. Actually, α is circular like a cosine, while β will be equal to $\sqrt{1-{\left|\alpha \right|}^{2}}$ . The graph to the right shows QSA-FIT in hertz (i.e., instantaneous frequency) in terms of time, where the peak in the middle of the graph represents the spectral behavior of the wave function $|\psi \rangle $ with an unmistakable characteristic of pure tone.

Specifically, and as we have seen, the FFT does not have compact support; therefore, we say that the FFT is a non-local process, while FIT has compact support, so we say that FIT is a local process, with all that this implies when we apply this tool to the study of quantum entanglement. It is worth mentioning that FIT is an important tool to assess the importance of the flanks (or edges, in the case of images) in a compression process, weighting in real time and sample by sample (or pixel by pixel) the importance of temporal spectral components in the final result [29] [30] .

Figure 7. The graph to the left shows α (top, in red) and β (bottom, in blue) for a random evolution (with a normalized Gaussian distribution and null mean value) in terms of time. Actually, α has a random evolution, while β is also random, but as a consequence of arising from $\sqrt{1-{\left|\alpha \right|}^{2}}$ . The graph to the right shows QSA-FIT in hertz (i.e., instantaneous frequency) in terms of time, where the distribution of the different peaks represents the spectral behavior of the wave function $|\psi \rangle $ with a typical characteristic of random jumps on the Bloch sphere.

Table 1. Comparison between FFT and FIT.

On the other hand, and considering that when the wave function collapses we pass from QSA to FIT, it is critical to mention that the applications of FIT are obvious for a better understanding of Information Theory and Quantum Information Theory, in particular Quantum Signal and Image Processing, Quantum Communications, and, fundamentally, quantum entanglement. In fact, a finite bandwidth for entanglement is not a trivial or accessory subject at all. If we take into account Equation (72), the finite bandwidth arises from a procedure based on the individual components of the Bell state $|{\Phi}_{AB}^{+}\rangle $ , although this fact is absolutely concomitant with the possible values (and especially the signs) that the spin m_{s} can take, i.e., positive and negative, for ${m}_{|00\rangle}$ and ${m}_{|11\rangle}$ , respectively. The pending task is to delve deeper into the linkage between this new tool, QSA-FIT, and entanglement.

Acknowledgements

M. Mastriani thanks all the technical staff of several laboratories of the National Commission of Atomic Energy for the help they gave him in the preparation of the experiments. It is impossible to name them all here, simply, thank you all.

References

[1] Nielsen, M.A. and Chuang, I.L. (2004) Quantum Computation and Quantum Information. Cambridge University Press, Cambridge.

[2] Kaye, P., Laflamme, R. and Mosca, M. (2004) An Introduction to Quantum Computing. Oxford University Press, Oxford.

[3] Stolze, J. and Suter, D. (2007) Quantum Computing: A Short Course from Theory to Experiment. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

[4] Busemeyer, J.R., Wang, Z. and Townsend, J.T. (2006) Quantum Dynamics of Human Decision-Making. Journal of Mathematical Psychology, 50, 220-241.

https://doi.org/10.1016/j.jmp.2006.01.003

[5] Eldar, Y.C. (2001) Quantum Signal Processing. PhD Thesis, MIT, Boston.

[6] Eldar, Y.C. and Oppenheim, A.V. (2002) Quantum Signal Processing. IEEE Signal Processing Magazine, 19, 12-32.

[7] Weinstein, Y.S., Lloyd, S. and Cory, D.G. (2001) Implementation of the Quantum Fourier Transform. Physical Review Letters, 86, 1889-1891.

[8] Tolimieri, R., An, M. and Lu, C. (1997) Mathematics of Multidimensional Fourier Transform Algorithms. Springer, New York.

[9] Tolimieri, R., An, M. and Lu, C. (1997) Algorithms for Discrete Fourier Transform and Convolution. Springer, New York.

[10] Oppenheim, A.V., Willsky, A.S. and Nawab, S.H. (1997) Signals and Systems. 2nd Edition, Prentice Hall, Upper Saddle River.

[11] Oppenheim, A.V. and Schafer, R.W. (1975) Digital Signal Processing. Prentice Hall, Englewood Cliffs.

[12] Briggs, W.L. and Van Emden, H. (1995) The DFT: An Owner’s Manual for the Discrete Fourier Transform. SIAM, Philadelphia.

[13] Hsu, H.P. (1970) Fourier Analysis. Simon & Schuster, New York.

[14] Jain, A.K. (1989) Fundamentals of Digital Image Processing. Prentice Hall, Englewood Cliffs.

[15] Gonzalez, R.C. and Woods, R.E. (2002) Digital Image Processing. Prentice Hall, Englewood Cliffs.

[16] Gonzalez, R.C., Woods, R.E. and Eddins, S.L. (2004) Digital Image Processing Using Matlab. Pearson Prentice Hall, Upper Saddle River.

[17] Schalkoff, R.J. (1989) Digital Image Processing and Computer Vision. Wiley, New York.

[18] Van Loan, C. (1992) Computational Frameworks for the Fast Fourier Transform. SIAM, New York.

[19] Heideman, M.T., Johnson, D.H. and Burrus, C.S. (1984) Gauss and the History of the Fast Fourier Transform. IEEE ASSP Magazine, 1, 14-21.

https://doi.org/10.1109/MASSP.1984.1162257

[20] Strang, G. (1994) Wavelets. American Scientist, 82, 256-266.

[21] Dongarra, J. and Sullivan, F. (2000) Guest Editors Introduction to the Top 10 Algorithms. Computing in Science Engineering, 2, 22-23.

[22] Ding, J.J. (2007) Time-Frequency Analysis and Wavelet Transform Class Note. Department of Electrical Engineering, National Taiwan University, Taipei.

[23] Meyer, Y. (1992) Wavelets and Operators. Cambridge University Press, Cambridge.

[24] Chui, C.K. (1992) An Introduction to Wavelets. Academic Press, San Diego.

[25] Jaeger, G. (2009) Entanglement, Information, and the Interpretation of Quantum Mechanics. Springer, Berlin.

[26] Schrödinger, E. (1935) Die gegenwaertige Situation in der Quantenmechanik. Die Naturwissenschaften, 23, 807-812.

[27] Schrödinger, E. (1935) Discussion of Probability Relations between Separated Systems. Proceedings of the Cambridge Philosophical Society, 31, 555.

[28] Audretsch, J. (2007) Entangled Systems: New Directions in Quantum Physics. Wiley-VCH Verlag GmbH & Co., Berlin.

[29] Mastriani, M. (2018) Quantum Spectral Analysis: Frequency in Time.

[30] Mastriani, M. (2018) Quantum Spectral Analysis: Frequency in Time with Applications to Signal and Image Processing.

[31] Einstein, A., Podolsky, B. and Rosen, N. (1935) Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47, 777-780.

[32] Einstein, A., Lorentz, H.A., Minkowski, H. and Weyl, H. (1952) The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity. Courier Dover Publications, New York.

[33] Herbert, N. (1982) FLASH—A Superluminal Communicator Based upon a New Kind of Quantum Measurement. Foundations of Physics, 12, 1171-1179. https://doi.org/10.1007/BF00729622

[34] Eberhard, P.H. and Ross, R.R. (1989) Quantum Field Theory Cannot Provide Faster-than-Light Communication. Foundations of Physics Letters, 2, 127-149. https://doi.org/10.1007/BF00696109

[35] Bell, J. (1964) On the Einstein Podolsky Rosen Paradox. Physics, 1, 195-200. https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195

[36] Vaidman, L. (2014) Quantum Theory and Determinism. Quantum Studies: Mathematics and Foundations, 1, 5-38. https://doi.org/10.1007/s40509-014-0008-4

[37] Dieks, D. (1982) Communication by EPR Devices. Physics Letters A, 92, 271-272. https://doi.org/10.1016/0375-9601(82)90084-6

[38] Ghirardi, G.C., Grassi, R., Rimini, A. and Weber, T. (1988) Experiments of the EPR Type Involving CP-Violation Do Not Allow Faster-than-Light Communication between Distant Observers. Europhysics Letters, 6, 95-100.

[39] Aspect, A., Grangier, P. and Roger, G. (1982) Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell’s Inequalities. Physical Review Letters, 49, 91-94. https://doi.org/10.1103/PhysRevLett.49.91

[40] Clauser, J.F., Horne, M.A., Shimony, A. and Holt, R.A. (1969) Proposed Experiment to Test Local Hidden-Variable Theories. Physical Review Letters, 23, 880-884. https://doi.org/10.1103/PhysRevLett.23.880

[41] Horodecki, R., Horodecki, P., Horodecki, M. and Horodecki, K. (2009) Quantum Entanglement. Reviews of Modern Physics, 81, 865-942.

[42] NIST (2014) Quantum Computing and Communication. CreateSpace Independent Publishing Platform, New York.

[43] Pathak, A. (2013) Elements of Quantum Computation and Quantum Communication. CRC Press, New York.

[44] Cariolaro, G. (2015) Quantum Communications. Springer International Publishing, New York.

[45] Mishra, V.K. (2016) An Introduction to Quantum Communication. Momentum Press, New York.

[46] Imre, S. and Gyongyosi, L. (2012) Advanced Quantum Communications: An Engineering Approach. Wiley-IEEE Press, New York.

[47] Schlosshauer, M. (2005) Decoherence, the Measurement Problem, and Interpretations of Quantum Mechanics. Reviews of Modern Physics, 76, 1267-1305. https://doi.org/10.1103/RevModPhys.76.1267

[48] Busch, P., Lahti, P., Pellonpaa, J.P. and Ylinen, K. (2016) Quantum Measurement. Springer, New York.