Received 20 April 2016; accepted 8 July 2016; published 11 July 2016
Seismic migration is a process that turns the reflectors on a stacked image into true geological interfaces. It can be divided into ray-based and wave-equation-based methods. Ray-based migration, such as the Kirchhoff (Gray and May, 1994; Bevc, 1997) and Gaussian beam (Gray, 2005) methods, is a popular imaging approach for prestack and poststack data due to its computing efficiency. While ray-based methods offer efficient computation, they mostly use single-path ray tracing. Multi-path arrivals, however, are needed for reasonable imaging in complex subsurface structures; since multiple arrivals must be mapped in migration, it is difficult to calculate correct travel times for complex structures. Wave-equation migration is the other class of methods. Since wave-equation methods handle multi-path arrivals, they can give better images than ray methods for complex subsurface structures. There are two-way and one-way wave-equation migrations. The first, like reverse time migration (RTM) (Baysal et al., 1983; Liu et al., 2013), gives a more correct subsurface image but imposes a heavy computational burden: for shot-domain migration, RTM must compute the wavefields of both forward propagation from the source and backward propagation from the receivers (Loewenthal and Mufti, 1983; Whitmore, 1983; Chattopadhyay et al., 2008). Migration by the one-way wave equation is the most popular scheme due to its lower computing time and high efficiency. These methods are well known as phase-shift migration (Gazdag, 1978) and the split-step Fourier (SSF) method (Stoffa et al., 1990). However, phase-shift migration is strictly valid only for horizontally layered media. In the case of strong lateral velocity variations, the phase-shift-plus-interpolation (PSPI) method is appropriate (Gazdag and Sguazzero, 1984). A PSPI variant using a Monte Carlo wavefield imaging condition has also been proposed (Bonomi et al., 2006).
Since the power spectrum of a seismic source is band-limited with a cutoff frequency far below the temporal Nyquist, imaging data in the space-frequency domain allows significant data compression. The phase-shift formulation of the migration leads to a parallel implementation: the depth extrapolation is composed of entirely concurrent operations, and only the imaging condition requires inter-processor communication or remote memory access. This is beneficial for parallel coding of the PSPI algorithm. Since subsurface depth imaging requires large seismic data volumes, which must accordingly be reduced, transformed, and interpreted to obtain meaningful information, the algorithm must be parallelized. Here, we describe the practical implementation of 3D PSPI using the MPI library, and the method is tested on a synthetic data set from the 3D SEG/EAGE salt model (Aminzadeh et al., 1997).
2. 3D PSPI
The PSPI method in isotropic media, presented by Gazdag and Sguazzero (1984), is briefly reviewed and extended to 3D. The 3D acoustic wave equation with constant density is

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = (1/v²) ∂²u/∂t²,   (1)

where u = u(x, y, z, t) is the pressure field and v = v(x, y, z) is the velocity model of the geological medium. The Fourier representation of the pressure field over x, y and t is

u(x, y, z, t) = ∭ U(k_x, k_y, z, ω) exp[i(k_x x + k_y y − ωt)] dk_x dk_y dω.   (2)
Taking the second partial derivatives of Equation (2) with respect to x, y and t, and substituting Equation (2) into Equation (1), gives

∂²U/∂z² + k_z² U = 0,   k_z² = ω²/v² − k_x² − k_y².   (3)
Equation (3) is a second-order ordinary differential equation, in the wavenumber-frequency domain (k_x, k_y, ω), for the homogeneous acoustic wave equation. Although Equation (3) has two characteristic solutions, exp(±i k_z z), relating the field at depth z, we choose only the one corresponding to downward continuation of the signals in reverse time:

U(k_x, k_y, z, ω) = U(k_x, k_y, 0, ω) exp(i k_z z).   (4)
A solution for the one-way wave equation over one depth step is then

U(k_x, k_y, z + Δz, ω) = U(k_x, k_y, z, ω) exp(i k_z Δz),   (5)

which is the basic tool used by other migration methods to approximate one-way wave propagation in the space-frequency domain (x, y, ω). To map the solution back, an inverse Fourier transform is applied,

u(x, y, z + Δz, t) = ∭ U(k_x, k_y, z + Δz, ω) exp[i(k_x x + k_y y − ωt)] dk_x dk_y dω,   (6)

and the imaging condition then determines the image u(x, y, z, t = 0) of the migrated data at z + Δz:

u(x, y, z + Δz, t = 0) = ∬ [∫ U(k_x, k_y, z + Δz, ω) dω] exp[i(k_x x + k_y y)] dk_x dk_y.   (7)
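The extrapolation and imaging steps above can be sketched numerically. The following is a minimal illustration in Python, not the authors' code: a few wavefield components are kept in a dictionary keyed by (k_x, k_y, ω), one depth step applies the phase-shift factor of the extrapolation formula, and the imaging condition sums the components over frequency; evanescent components (negative radicand in the dispersion relation) are simply discarded in this sketch.

```python
import cmath
import math

def kz(omega, v, kx, ky):
    """Vertical wavenumber from kz^2 = (omega/v)^2 - kx^2 - ky^2.
    Returns None for evanescent components (negative radicand)."""
    r = (omega / v) ** 2 - kx ** 2 - ky ** 2
    return math.sqrt(r) if r >= 0.0 else None

def phase_shift_step(U, v, dz):
    """One depth step of phase-shift extrapolation: U(z+dz) = U(z) * exp(i*kz*dz).
    U maps (kx, ky, omega) -> complex amplitude; evanescent energy is dropped."""
    out = {}
    for (kx_, ky_, om), amp in U.items():
        k = kz(om, v, kx_, ky_)
        if k is not None:
            out[(kx_, ky_, om)] = amp * cmath.exp(1j * k * dz)
    return out

def image_point(U):
    """Imaging condition at t = 0: sum the extrapolated components over
    frequency (the inverse spatial transform is omitted in this sketch)."""
    return sum(U.values())

# A single vertically traveling 20 Hz component in a 2000 m/s medium
omega = 2.0 * math.pi * 20.0
U0 = {(0.0, 0.0, omega): 1.0 + 0.0j}
U1 = phase_shift_step(U0, v=2000.0, dz=20.0)
```

Note that the phase-shift operator is a pure phase: the amplitude of each propagating component is preserved, which is why the method is unconditionally stable for any depth step.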
Equations (4), (5) and (7) are the basis of the phase shift migration. Since we cannot directly apply the phase shift to the imaging condition in the case of lateral velocity variations, the phase shift formula is split into vertical and horizontal components, and is then modified to handle wave propagation inside the layer between z and z + Δz with a laterally variable velocity field. The first phase shift, for the vertically traveling waves, is

U*(x, y, z, ω) = U(x, y, z, ω) exp(i k Δz),   (8)

where k = ω/v and v = v(x, y, z). The second phase shift, for the horizontal components with a reference velocity v_j, one of a set of reference velocities chosen for the layer, is

U_j(k_x, k_y, z + Δz, ω) = U*(k_x, k_y, z, ω) exp[i(k_{z,j} − ω/v_j) Δz],   (9)
where k_{z,j} = (ω²/v_j² − k_x² − k_y²)^(1/2) is evaluated for the reference velocity v_j. After Fourier transforming back to the space-frequency domain, the fields U_j(x, y, z + Δz, ω) serve as reference data from which the final result is obtained by linear interpolation. The depth-continued wavefield is given by

U(x, y, z + Δz, ω) = U_j + [(v − v_j)/(v_{j+1} − v_j)] (U_{j+1} − U_j)   (10)

for all points (x, y, z) with v_j ≤ v(x, y, z) ≤ v_{j+1}. PSPI requires that the reference velocities sample the velocity range finely enough that propagation at all but small dip angles is adequately characterized by the interpolation.
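The two phase shifts and the linear interpolation described above can be sketched as follows. This is a minimal illustration under our own naming, not the paper's code, and the spatial Fourier transforms between the two steps are omitted: each lateral point receives the vertical shift with its local velocity, the horizontal step is evaluated once per reference velocity, and the final field is interpolated between the two bracketing reference-velocity fields.

```python
import cmath

def vertical_shift(u, v_local, omega, dz):
    """First phase shift (vertically traveling waves): multiply by
    exp(i*(omega/v)*dz) using the local velocity v(x, y, z)."""
    return u * cmath.exp(1j * (omega / v_local) * dz)

def pspi_interpolate(u_ref, vrefs, v_local):
    """Linear interpolation between the two reference-velocity wavefields
    u_j, u_{j+1} whose reference velocities bracket the local velocity."""
    for j in range(len(vrefs) - 1):
        if vrefs[j] <= v_local <= vrefs[j + 1]:
            w = (v_local - vrefs[j]) / (vrefs[j + 1] - vrefs[j])
            return (1.0 - w) * u_ref[j] + w * u_ref[j + 1]
    raise ValueError("local velocity outside the reference velocity range")
```

In the full algorithm the second phase shift is applied in the wavenumber domain once for each reference velocity; the resulting fields are transformed back to space and then blended point by point with the weight above, so the cost grows with the number of reference velocities needed per layer.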
3. Parallel PSPI
When we parallelize a problem, we must first decompose it into smaller problems and then assign those to processors to be worked on simultaneously. There are two ways to decompose a problem: domain decomposition and functional decomposition. Domain decomposition, which entails data parallelization, divides the data into pieces of approximately the same size and maps them to different processors. Each processor works on its portion of the data and communicates with the other processors to exchange data. The Finite Difference Method (FDM) is a good example of domain decomposition. Functional decomposition, also known as task parallelization, decomposes the problem into a large number of tasks that are assigned to the processors; the tasks are allocated to a group of slave processes by a master process. Matrix-vector multiplication is a good example of functional decomposition.

The PSPI algorithm is intrinsically data parallel because the extrapolation decouples in the frequency domain, and each depth-extrapolation step consists of concurrent calculations. Only the imaging condition at each depth requires a sum over frequency of the extrapolation results. Because of this data-parallel nature, the PSPI code is naturally structured for efficiency on a Linux cluster. In the 3D PSPI, each node performs wavefield extrapolation and interpolation for lateral velocity variation to produce a single image, which it then sends to the master node. The migrated image is the summation of all single images. In Table 1, MPI_PSPI consists of a library for parallel processing, initialization for parallel processing, sending input data, the wavefield extrapolation process, stacking of common image gathers, and a final process. The initialization for parallelization consists of MPI_Init, MPI_Comm_size, and MPI_Comm_rank.
The sending-input-data process transmits the velocity data, shot gathers, and parameters for wavefield extrapolation to each computing node by MPI_Bcast. Each computing node performs a wavefield extrapolation. After finishing extrapolation, the results are sent to the master node by MPI_Reduce, and each single image gather is stacked to image the subsurface. Figure 1 shows the basic flow chart for the parallel 3D PSPI algorithm.
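The distribute/extrapolate/reduce pattern of Figure 1 can be illustrated without an MPI installation. The sketch below uses Python threads purely as a stand-in for the computing nodes, and migrate_shot is a hypothetical placeholder for the per-node PSPI work of Table 1: each worker returns one single image gather, and the master sums them, which is the role MPI_Reduce plays in the actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_shot(shot_id):
    """Placeholder for one node's work: extrapolate one shot gather and
    return its single image gather (here just a tiny list of floats)."""
    return [0.5 * shot_id, 0.5 * shot_id]

def stack_images(shot_ids, workers=4):
    """Master-side stacking: run the per-shot migrations concurrently and
    sum the single image gathers into one stacked image (the MPI_Reduce
    step of the flow chart)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        images = list(pool.map(migrate_shot, shot_ids))
    stacked = [0.0] * len(images[0])
    for img in images:
        for i, val in enumerate(img):
            stacked[i] += val
    return stacked
```

Because the stacking is a commutative sum, the order in which workers finish does not affect the result, which is what makes the reduction step the only synchronization point in the algorithm.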
4. Numerical Model Test
We conducted a numerical model test to verify prestack depth migration by the 3D PSPI. We applied it to the 3D SEG/EAGE salt velocity model. The dimensions of the geological model are 10.8 km × 10.4 km × 4.2 km. The velocity ranges from 1500 m/s to 4482 m/s. Figure 2 shows the velocity model at inline 7760 m, xline 5320 m, and zline 1040 m, respectively. The grid size is Δx = Δy = Δz = 20 m and the number of grid points is 541 × 521 × 211. The acquisition system consists of 8 streamers, each with 68 receivers at 40 m spacing.
Table 1. A pseudo code for MPI_PSPI.
Figure 1. Basic parallel algorithm for 3D PSPI. The master node distributes input data such as shot gathers and a velocity model to all nodes. Each computing node conducts the 1st and 2nd phase shifts, wavefield extrapolation, and interpolation with reference velocities, then generates a single image gather. Each single image gather on the computing nodes is sent to the master node, which makes a 3D stacked image in the depth domain after summation of the single image gathers.
Figure 2. Velocity model (a) at inline 7760 m, (b) xline 5320 m, and (c) zline 1040 m. The 3D velocity model shows geological interfaces, fault lines, and a saltbody. The dark region with the highest velocity is the saltbody.
The shot interval is 80 m and there are 96 shots on each survey line; a shot gather thus consists of 544 traces. The total number of survey lines is 50, with 160 m spacing. The sampling interval is 8 ms for a recording time of 5 s, and the source frequency is 20 Hz. A common shot gather at the 30th shot point is shown in Figure 3. It shows the direct wave, multiples, reflections, and various scattered waves due to the complex velocity model.
Figure 3. Synthetic shot gathers at shot point number 30. Each shot gather has 68 receivers with 40 m spacing.
The prestack imaging by 3D PSPI entails extrapolation of the wavefields recorded on the surface in the wavenumber-frequency domain, followed by evaluating the wavefields at t = 0. A phase shift was performed in the vertical and horizontal directions, respectively, and then interpolated at each node, giving a single image gather for each shot gather. After summation of every single image gather, we obtained a stacked image in the depth domain. The results of the wavefield extrapolation are shown in Figure 4, which presents parts of the extrapolated wavefields at depths of 220 m, 1020 m, 1820 m, 2220 m, 3020 m, and 4020 m from top to bottom. Wavefields from the fault zone are visible at 1020 m depth. Figure 5 shows the prestack migration results by 3D PSPI along the inline direction. The top image is the result at inline 6360 m, which shows some portions of the reflectors above the saltbody. The 2nd image is the subsurface image at inline 7760 m, which shows some portions of the reflectors, the saltbody, and the subsalt reflectors. The 3rd and 4th images are the results at inline 9360 m and 10,360 m. Figure 6 compares the 3D velocity model and the 3D PSPI image in the depth domain at inline 7760 m. The top of the saltbody is well imaged, but the geological interfaces of the lower parts of the saltbody are poorly imaged. This means that 3D PSPI handles lateral velocity variations reasonably well, but it is limited for imaging below the saltbody. Figure 7 shows the prestack migration results by 3D PSPI along the xline direction. The top image is the result at xline 1540 m, and shows the reflectors and the fault zone. The 2nd and 3rd images are the subsurface images at xline 5320 m and 7540 m, which show some portions of the reflectors, the saltbody, and the subsalt reflectors. The 4th is the result at xline 8640 m.
Figure 8 compares the 3D velocity model and the 3D PSPI image at xline 5320 m. From Figure 8, the outline of the saltbody is well imaged, but the lower parts of the saltbody are poorly imaged. Figure 9 shows the results of 3D prestack migration in the depth direction. The top image presents a depth slice at zline 400 m, showing the fault line and the saltbody cross section. The 2nd and 3rd results, at 1000 m and 1040 m, show the saltbody, the reflectors, and the fault zone, respectively. The bottom image is the result at zline 4000 m. Figure 10 compares the 3D velocity model and the 3D PSPI result at zline 1040 m; they show good agreement with each other. As presented in Figure 6, Figure 8, and Figure 10, there is strong agreement between the reflecting boundaries selected by 3D PSPI prestack migration and the interfaces of the velocity cube. However, some images in the lower parts of the saltbody are unclear. These phenomena show the limitation of 3D PSPI in the case of fast lateral velocity variations and strong vertical velocity contrasts. Although the numerical test shows reasonable results, the performance of the parallel code still needs to be verified and the method applied to field data. It also needs to be compared with other prestack depth migration algorithms such as prestack Kirchhoff migration and RTM.
Figure 4. Some parts of wavefields extrapolation at depth 220 m, 1020 m, 1820 m, 2220 m, 3020 m, and 4200 m from top left to bottom right.
Figure 5. The results of 3D prestack depth migration at inline 6360 m, 7760 m, 9360 m, and 10,360 m from top to bottom. The images of the lower parts of saltbody are unclear due to a big velocity difference between the upper and the lower parts of saltbody.
Figure 6. Comparison of (a) velocity model and (b) 3D PSPI image at inline 7760 m.
Figure 7. The results of 3D prestack depth migration at xline 1540 m, 5320 m, 7540 m, and 8640 m from top to bottom. The results show fault lines and the saltbody, but some geological interfaces of the lower parts of the saltbody are unclear.
Figure 8. Comparison of (a) velocity model and (b) 3D PSPI image at xline 5320 m.
Figure 9. The results of 3D prestack depth migration at zline 400 m, 1000 m, 1040 m, and 3000 m. The results show fault line and saltbody.
Figure 10. Comparison of (a) velocity model and (b) 3D PSPI image at zline 1040 m.
Since 3D PSPI migration is efficient and requires less computation than RTM, it is a suitable tool for prestack depth migration, which involves heavy data input-output and long computing times. We implemented 3D prestack depth migration based on the PSPI algorithm using MPI and verified its performance on the synthetic 3D SEG/EAGE salt velocity model. The numerical model test shows that the results are in good agreement with the original geological model. The outline of the saltbody is well imaged, but some geological interfaces of the lower parts of the saltbody are poor. Although the numerical test shows reasonable results, for further study we need to perform a field data application focused on developing a more elaborate velocity model. We also need to develop an algorithm using GPU code for faster computation.
This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20132510100060).