WIDE-OFFSET-RANGE PRE-STACK DEPTH MIGRATION METHOD FOR SEISMIC EXPLORATION
Conventional seismic exploration in difficult geologic environments often fails to provide satisfactory imaging below outcropping carbonates, superficial volcanics (e.g. basalts), salt or anhydrite layers, complex thrust belts, etc., in both onshore and offshore surveys.
The main physical reason for this is the complex interaction of the elastic wave field with the complex geology, which causes: energy absorption through high-impedance bodies, scattering by rugose interfaces (e.g. sub-basalt exploration) or sharp laterally-varying velocity fields (e.g. exploration of sub-thrust plays), reverberations through sequences of interbedded sediments with sharp velocity inversions (e.g. basalt flows intercalated with clastic sediments), multiples and shallow complexities.
The resulting problems for seismic imaging can be summarized as: scarce penetration and low signal-to-noise ratio of the seismic reflected phases recorded at conventional offsets, destructive interference produced by scattering and reverberations, unresolved velocity fields, and poor imaging results in time and depth sections.
Finding solutions to overcome exploration problems in such difficult environments is extremely important to improve the success rate of drilling wells and the general understanding of oil generation, migration and storage in deep reservoirs.
It is an object of the present invention to provide a seismic exploration method suitable to overcome the above problems.
This object is achieved, according to the invention, by using the wide-offset-range pre-stack depth migration method disclosed in independent claim 1.
Advantageous features of the invention are apparent from the dependent claims.
By using, according to claim 1, large offsets in seismic acquisition, with an offset-to-depth ratio greater than 1:1 (preferably up to 3:1 or 4:1), but with standard geophone group intervals and shot spacing (thus significantly increasing the number of live channels), a number of advantages can be achieved, including:
- undershooting local high-impedance bodies that make penetration of the seismic energy difficult;
- high signal-to-noise ratio of the reflected phases recorded at large offsets for near- or post-critical reflections;
- very large fold at all offset ranges;
- attenuation of multiples, scattering and reverberation effects at large offsets;
- better velocity analysis and model building by using a number of seismic phases generally disregarded in the usual near-vertical processing sequence, such as: wide-angle reflections, direct waves, diving waves, head waves, and P-to-S converted waves.
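By way of purely illustrative example (not part of the claimed method), the acquisition geometry implied by the above offset-to-depth ratios can be quantified with a minimal sketch; the numeric values below are hypothetical, not taken from the claims:

```python
def max_offset(target_depth_m: float, offset_to_depth_ratio: float) -> float:
    """Maximum source-receiver offset implied by a given offset-to-depth ratio."""
    return target_depth_m * offset_to_depth_ratio

def live_channels(max_offset_m: float, group_interval_m: float) -> int:
    """Live channels on one side of the spread when the offset is extended
    while keeping a standard geophone group interval."""
    return int(max_offset_m / group_interval_m)

# Hypothetical example: 4 km deep target, 4:1 ratio, 25 m group interval
offset = max_offset(4000.0, 4.0)        # 16000.0 m of maximum offset
channels = live_channels(offset, 25.0)  # 640 channels per side
```

This illustrates why keeping standard group intervals while extending the offset significantly augments the number of live channels.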
DESCRIPTION OF THE DRAWINGS
Further characteristics of the invention will be apparent from the detailed description that follows, referring to a purely exemplary and therefore non-limiting embodiment thereof, illustrated in the appended drawings, in which:
- Figure 1 is a block diagram describing the method to which the present invention refers;
- Figure 2 is a block diagram describing a preferred embodiment of the time processing step of figure 1;
- Figure 3 is a block diagram describing a preferred embodiment of the reflection tomographic step of figure 1;
- Figure 4 is a block diagram describing a preferred embodiment of the optimised time processing step of figure 1.
PREFERRED EMBODIMENT
A general block diagram describing a preferred embodiment of the method according to the invention is shown in figure 1.
The said method, aimed at using all the available offsets (including long offsets) for depth imaging purposes, starts from seismic data acquired with extended offsets, preferably with offsets on the order of 3-4 times the depth of the target.
The use of large-offset data (offset-to-depth ratio ~ 4:1) for imaging purposes is not straightforward and requires the use of specific "tools" (or means) for the analysis of the laterally varying velocity field, as well as for flattening and stacking the large-offset seismic phases in the time and depth migrated domains (i.e. non-hyperbolic move-out, stretching).
Most of the above problems should be addressed by performing velocity analysis in both time and depth taking advantage of the information carried by seismic phases such as: direct waves, diving waves, head waves, reflected waves and post-critical (wide-angle) reflected waves.
It is known that seismic tomography is the proper "tool" to perform velocity analysis in depth by simultaneously inverting the above-mentioned seismic phases for the interval velocity structure as well as for the reflecting interface geometry.
It is also known that tomographic methods are suitable for updating the velocity field during the migration velocity analysis procedure. The deviation in depth from horizontality of the migrated reflected phases (on Common Image Gathers) is used to update the velocity field at all offsets by means of already known ray-tracing and tomographic inversion methods. The tomographic "tool" is then used both for velocity model building in depth (to be used as the starting model for the pre-stack depth migration) and to perform migration velocity analysis.
Already known pre-stack depth migration (PSDM) procedures - such as, for example, the one known as Kirchhoff - are used for properly handling both short- and large-offset reflected phases, with particular emphasis on the generation of the travel-time tables, which allow the definition of travel-time values even at very large offsets while avoiding the calculation of arrival times of undesired seismic phases (e.g. head waves with finite-difference travel-time calculation schemes).
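For illustration only (a minimal sketch, not the claimed implementation), the core summation of a Kirchhoff-type PSDM using precomputed travel-time tables can be written as follows; true-amplitude weights, interpolation and anti-aliasing filters are deliberately omitted:

```python
import numpy as np

def kirchhoff_psdm(traces, t_src, t_rcv, dt):
    """Minimal Kirchhoff pre-stack depth migration sketch.

    traces : (ntraces, nt) recorded samples
    t_src  : (ntraces, nx, nz) travel times from each shot to the image points
    t_rcv  : (ntraces, nx, nz) travel times from each receiver to the image points
    dt     : sample interval (s)

    For each image point, the total source-to-image-to-receiver time selects
    the trace sample to be summed into the image (nearest-neighbour picking).
    """
    ntraces, nt = traces.shape
    _, nx, nz = t_src.shape
    image = np.zeros((nx, nz))
    for i in range(ntraces):
        # total travel time over the diffraction surface of trace i
        it = np.rint((t_src[i] + t_rcv[i]) / dt).astype(int)
        valid = (it >= 0) & (it < nt)       # keep only in-range samples
        image[valid] += traces[i, it[valid]]
    return image
```

Excluding undesired phases (e.g. head waves) from the travel-time tables, as stated above, simply means that those tables never contain the head-wave arrival times.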
The Wide Offset Range Pre-Stack Depth Migration method to which the present invention refers is used for imaging purposes adopting the correct strategies to migrate the long-offset reflected phases as well as the near-vertical phases.
Since the said method combines data with different frequency content and phase characteristics (i.e. near offset/shallow reflections versus near-/post-critical deep wide angle reflections), processing and depth imaging procedures have been established in a depth/time-varying and offset-varying fashion so as to use distinct groups of data with homogeneous characteristics at each step.
Finally, procedures to reduce spatial-aliasing effects based on ray tracing and re-sorting of traces on actual common reflection points instead of common mid points are implemented in the pre-stack depth migration code.
A preferred embodiment of the Wide Offset Range Pre-Stack Depth Migration method is now disclosed with reference to figure 1.
The method provides "tools" (steps 2-5 and 7) for handling seismic data acquired in step 1 with a wide range of offsets, including long-offset arrivals for velocity analysis in depth.
In particular, in step 2 first-break picking and transmission tomography are performed on the data after application of band-pass filtering; the picking is performed for all the available offsets to perform a simultaneous tomographic inversion of the travel-time residuals for the velocity model.
Since the offsets in acquisition extend up to 3-4 times the depth of the expected target, the continuously refracted diving waves, represented by the first break picks, penetrate down to the depth of interest allowing an accurate reconstruction of the superficial velocity field and the definition of the macro-velocity model for the deep structures.
Picking is performed for each shot gather using an already known automated picking algorithm, which is followed by manual editing.
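The method does not fix the automated picker; as a purely illustrative sketch, one common choice (an assumption, not mandated by the claims) is the classical STA/LTA ratio picker:

```python
import numpy as np

def sta_lta_pick(trace, dt, sta_win=0.02, lta_win=0.2, threshold=3.0):
    """First-break pick via the STA/LTA ratio (one common automated picker).

    Returns the time (s) of the first sample where the short-term average of
    |amplitude| exceeds `threshold` times the long-term average, or None.
    Window lengths and threshold are hypothetical defaults.
    """
    a = np.abs(np.asarray(trace, float))
    ns, nl = int(sta_win / dt), int(lta_win / dt)
    for i in range(nl, len(a) - ns):
        lta = a[i - nl:i].mean()      # long-term (background) average
        sta = a[i:i + ns].mean()      # short-term (onset) average
        if lta > 0 and sta / lta >= threshold:
            return i * dt
    return None
```

In practice such automatic picks are then manually edited, as stated above.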
The dataset is simultaneously inverted in an iterative mode for a velocity field parameterised as cells of constant velocity, by using an already known LSQR algorithm, finite-difference ray tracing and smoothness constraints. The convergence of the iterative inversion procedure is checked by using already known travel-time residual reduction criteria (r.m.s. time residual), statistical F-tests and model-sharpening (increase of standard deviation) criteria.
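As an illustrative sketch only (the actual implementation is an already known algorithm), one LSQR update of the cell slownesses with a first-difference smoothness constraint might look as follows; the 1-D roughening operator and the damping weight are simplifying assumptions:

```python
import numpy as np
from scipy.sparse import vstack, csr_matrix
from scipy.sparse.linalg import lsqr

def tomo_update(G, residuals, ncells, damp=1.0):
    """One regularised LSQR step of travel-time tomography (sketch).

    G         : (nrays, ncells) sparse matrix of ray lengths per cell
    residuals : (nrays,) travel-time residuals (observed - predicted)
    Solves for a slowness perturbation ds minimising
        ||G ds - residuals||^2 + damp^2 ||L ds||^2
    where L is a first-difference roughening operator (1-D neighbour
    differences, for simplicity) acting as the smoothness constraint.
    """
    rows, cols, vals = [], [], []
    for i in range(ncells - 1):
        rows += [i, i]
        cols += [i, i + 1]
        vals += [1.0, -1.0]
    L = csr_matrix((vals, (rows, cols)), shape=(ncells - 1, ncells))
    A = vstack([G, damp * L])                       # data + smoothness rows
    b = np.concatenate([residuals, np.zeros(ncells - 1)])
    return lsqr(A, b)[0]
```

The r.m.s. of `G ds - residuals` after each such step provides the travel-time residual reduction criterion mentioned above.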
The amount of data is generally very large for all offset classes, which provides stable velocity determinations at all depths.
The output of transmission tomography is a preliminary macro-velocity model (step 3) having a variable depth resolution, which depends on the maximum offset involved and on the velocity field distribution.
The assessment of the tomographic results can be carried out through a variety of already known "tools", which depend also on the inversion algorithm used. One of the most frequently used "tools" relies on the use of synthetic models where the complete loop of forward calculation, error contamination and inversion is performed to determine which portions of the model have been reliably reconstructed by the tomographic inversion. Other known "tools" rely on the analysis of the resolution matrix with the evaluation of the spread of the averaging vectors.
From the preliminary macro-velocity model (step 3) a velocity model is built through already known depth-imaging procedures involving at least a Pre-Stack Depth Migration (PSDM; step 4) and a layer identification (step 5) including the evaluation of external constraints such as geologic constraints and/or geophysical constraints. The final goal is the definition of a reliable starting velocity model for PSDM.
One of the major tasks performed for velocity model building is the derivation of a velocity model parameterised in terms of reflecting interfaces and interval velocities.
PSDM is performed at the beginning of the model-building flow by using as input the velocity field (step 3) derived from transmission tomography (step 2) and the output from a time processing (step 8). The joint interpretation of the results of PSDM, of the transmission tomography velocity field and of the possible constraints deriving from external data (i.e. geological and/or geophysical data) allows the creation of a starting velocity model parameterised in terms of interfaces and interval velocities.
Starting from the said parameterised velocity model (step 5), already known "tools" are used (step 6) for a forward calculation of travel-times for reflected waves and refracted waves (such as: diving waves and head waves), as well as for converted waves and direct waves.
Synthetic seismogram calculations and acoustic/elastic modelling are also already known "tools" that are - or can be - used in step 6.
The forward calculation is used to guide the picking of refracted/reflected travel-times and to associate the data to the corresponding model layers/interfaces; its main objective is to achieve a better understanding of the recorded wave-field by recognizing the seismic events on the seismic gathers and to allow their association with the corresponding interfaces of the velocity model.
The method now proceeds (step 7) in a layer-stripping fashion starting from the shallower interface identifiable on the velocity model.
Picking of reflected travel-times is followed by pick quality control.
The travel-times relative to various seismic phases (i.e. direct, reflected and refracted) are simultaneously inverted for the velocity field.
A block diagram describing a preferred embodiment of a tomographic reflection step 7 is shown by figure 3.
The tomographic inversion code is used - in already known ways - for handling forward and inverse modelling of a number of seismic phases such as: direct waves, diving waves, head waves, near-vertical and wide-angle reflections.
Fast and accurate travel-time forward and inversion algorithms are used for this purpose, such as Fermat-based minimum-time ray-tracing algorithms for the forward problem and LSQR for the inversion.
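Purely as an illustration of the Fermat-based minimum-time idea (not the code actually claimed), first arrivals on a gridded slowness model can be obtained as shortest paths; the nearest-neighbour connectivity below is a simplifying assumption, since production codes use denser stencils or eikonal solvers:

```python
import heapq

def grid_traveltimes(slowness, h, src):
    """Minimum-time first arrivals on a 2-D grid by Dijkstra's algorithm,
    a shortest-path realisation of Fermat's principle (sketch).

    slowness : 2-D nested list of cell slownesses (s/m)
    h        : grid spacing (m)
    src      : (ix, iz) source node
    """
    nx, nz = len(slowness), len(slowness[0])
    t = [[float("inf")] * nz for _ in range(nx)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src[0], src[1])]
    while pq:
        tt, i, j = heapq.heappop(pq)
        if tt > t[i][j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nx and 0 <= nj < nz:
                # edge time: mean slowness of the two cells times the spacing
                cand = tt + h * 0.5 * (slowness[i][j] + slowness[ni][nj])
                if cand < t[ni][nj]:
                    t[ni][nj] = cand
                    heapq.heappush(pq, (cand, ni, nj))
    return t
```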
The model parameterisation is designed in an adaptive fashion to allow an accurate model parameter description in areas of dense data sampling and a less accurate but robust model representation in areas of sparse data sampling.
The output of the tomographic reflection step 7 is an updated velocity field for the generic layer - in figure 1, the ith layer - taken into consideration (step 10).
The time processing step (step 8) improves the data in terms of signal-to-noise ratio and removes spurious seismic phases before proceeding to model building (steps 4 and 5).
The "tools" applied at step 8 are known, as most of them are common to standard data processing procedures, while a few further "tools" - like the combined use of transmission tomography and pre-stack wave-equation datuming, which are of great utility for removing surface-related noise and the scattering effects caused by near-surface complex velocity fields and/or rugose interfaces - have been disclosed in published documents.
A preferred embodiment of the time processing step 8 used in the block diagram of figure 1 is shown in figure 2.
The outputs of the time processing step (step 8) are the time-processed data (step 9) to be used for PSDM (step 4) and the r.m.s. velocity field deriving from the time-velocity analysis, which can be used to perform the optimised PSDM-TC (Pre-Stack Depth Migration - Time Converted) procedure (step 17).
Previous steps 2-9 generate - for a generic ith layer - an updated preliminary velocity field (step 10), which is managed by a so-called "migration loop" (figure 1; steps 6-15): an iterative procedure that analyses each reflecting interface layer by layer, proceeding from the top to the bottom of the model.
For the ith layer the "migration loop" involves at least a migration step (PSDM; step 11) and an analysis of the residuals of the Common Image Gathers (CIGs; step 12).
If the residual resulting from the CIG analysis is not less than a preset value (step 13), the velocity field is further updated (step 7); otherwise, if the ith layer is the last one (step 14), a final stack step (step 16) is performed.
Otherwise, the next layer is identified (step 15) and the forward travel-time calculation (step 6) is performed again.
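The control flow of the "migration loop" can be sketched, for illustration only, as a toy program in which each numbered step is replaced by a hypothetical stand-in (here a simple relaxation of a scalar layer velocity) so that the loop structure can run end-to-end; the real steps involve ray tracing, tomography and PSDM:

```python
def migration_loop(layer_true_v, v0, tol=1.0, max_iter=50):
    """Toy sketch of the layer-stripping 'migration loop' of figure 1.

    layer_true_v : 'correct' interval velocity of each layer (hypothetical)
    v0           : starting guess used for every layer
    Each layer is iterated until the stand-in for the CIG depth residual
    falls below `tol`, then the next layer down is processed.
    """
    model = []
    for true_v in layer_true_v:            # top-to-bottom layer stripping
        v = v0                             # step 6: initial guess for the layer
        for _ in range(max_iter):
            residual = true_v - v          # steps 11-12: PSDM + CIG residual
            if abs(residual) < tol:        # step 13: flatness test
                break
            v += 0.5 * residual            # step 7: tomographic update
        model.append(v)                    # steps 14-15: move to next layer
    return model                           # step 16: model ready for final stack
```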
The updating of the velocity field through a tomographic reflection (step 7) is able to accommodate strong non-hyperbolic effects in the long-offset reflections by introducing lateral velocity changes into the model.
Advantageously, the migration step 11 is implemented through an already known Kirchhoff migration (excluding head-waves from the calculation of travel-time tables used for migration) due to its flexibility in handling irregular acquisition geometries that often occur in rough topography areas such as those characterizing the thrust-belt zones.
Without departing from the scope of the invention "tools" - different from the Kirchhoff migration - can be used for reducing the effects of spatial aliasing during pre-stack depth migration and for allowing the use of limited migration angles even in the presence of dipping reflectors.
The said "tools" involve bent-ray tracing, identification of the actual reflection point, and migration from the reflection point instead of from the mid-point between source and receiver.
In fact, in complex and contrasted velocity fields the actual reflection point on the subsurface may significantly differ from the theoretical reflection point defined as the midpoint between source and receiver.
When Kirchhoff depth migration is used, the migration angle (and consequently the aperture) should be large enough to include the point that is going to be imaged. The use of large migration angles (and apertures) can introduce disturbances in the migrated section, especially in the presence of noisy data or spatial aliasing.
The use of smaller migration angles is sometimes desirable, since it provides better focusing of the migrated image, but it also means that important events with dips larger than the migration angle are skipped.
The said problem is solved through the above-mentioned "tools" by performing ray tracing with the actual geometry and velocity field and then mapping the actual reflection points on the interface to the corresponding source-receiver pairs. After mapping, the traces are assigned to the locations corresponding to the actual reflection points instead of to the theoretical midpoints: this operation corresponds to positioning the vertex of the migration fan on the vertical of the point that is going to be imaged, thus allowing a significant reduction of the angle to be used for migration.
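The difference between the actual reflection point and the source-receiver midpoint can be illustrated, in the simplest possible setting (a plane dipping reflector in a constant-velocity medium, an assumption made only for this sketch), by the image-source construction:

```python
import numpy as np

def reflection_point(src, rcv, z0, dip_deg):
    """Specular reflection point on a plane reflector z = z0 + x*tan(dip)
    below a flat surface, for a constant-velocity medium (sketch).

    The source is mirrored across the reflector plane (image-source method)
    and the line from the image source to the receiver is intersected with
    the plane. For a non-zero dip the result differs from the midpoint.
    """
    dip = np.radians(dip_deg)
    n = np.array([-np.sin(dip), np.cos(dip)])   # unit normal to the plane
    p0 = np.array([0.0, z0])                    # a point on the plane
    s, r = np.array(src, float), np.array(rcv, float)
    s_img = s - 2.0 * np.dot(s - p0, n) * n     # mirror the source
    d = r - s_img                               # ray: image source -> receiver
    t = np.dot(p0 - s_img, n) / np.dot(d, n)    # intersection parameter
    return s_img + t * d
```

For a flat reflector the reflection point sits exactly below the midpoint, while a dipping reflector shifts it updip; re-sorting traces on such points, rather than on midpoints, is what allows the smaller migration angles described above.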
The Common Image Gathers (CIGs) are then analysed (step 12) in a known way - for the interval of interest - for the depth residual that would produce the flattening of the reflected phases. If the reflected phases in the CIGs are not flattened, the velocity field is not accurate and a further updating of the velocity for the interval velocity above the analysed reflection is needed (step 7).
The reflecting horizon is picked on the stack section and the updated geometry is used for the next iteration tomographic inversion (step 7), which updates the interval velocity field. This loop is repeated until the corresponding reflected energy in the common image gather CIG is flattened and the reflected event on the stack section is maximized in character and lateral coherency.
The same "migration loop" procedure is repeated for the successive visible reflected event on the stack section.
When the "migration loop" is concluded, i.e. when the last layer has been satisfactorily migrated (step 11), a group of operations involving a final stack step 16 - such as stretching removal on the CIGs, stacking and band-pass filtering, if any - is carried out before obtaining (step 19) the image in depth.
Figure 2 is a block diagram describing a preferred embodiment of the time processing step 8 of figure 1, which includes the steps of:
- dephasing (step 21) the pre-stack raw data (step 1);
- BP-FK filtering the dephased data (step 22);
- wave-equation datuming of the filtered data (step 23) and their deconvolution (step 24);
- SNR enhancement of the said data (step 25) and muting of head waves (step 26); the above steps (steps 21-26) produce the processed pre-stack gathers or Time Processed Data (step 9);
- performing time velocity analysis (step 28) and carrying out an NMO correction (step 29);
- applying the residual statics (step 201);
- if the residual statics are considered satisfactory (step 204), performing a new time velocity analysis (step 202), preparing an RMS velocity model (step 205) and sending it to the optimised time processing step (step 17); otherwise carrying out an inverse NMO (NMO⁻¹) correction (step 203) and returning to step 28.
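For illustration only, the NMO correction of step 29 and its inverse (the NMO⁻¹ of step 203) act, sample by sample, on travel times through the standard hyperbolic move-out relation; the following sketch shows the two time mappings (trace resampling and stretch muting are omitted):

```python
import math

def inverse_nmo_time(t0, offset, v_rms):
    """Inverse NMO: reapply the hyperbolic move-out to a corrected
    zero-offset time, t(x) = sqrt(t0^2 + x^2 / v^2)."""
    return math.sqrt(t0 ** 2 + (offset / v_rms) ** 2)

def nmo_correct_time(t_x, offset, v_rms):
    """NMO correction: map a recorded time t(x) back to the zero-offset
    time t0 = sqrt(t(x)^2 - x^2 / v^2), clipped at zero."""
    return math.sqrt(max(t_x ** 2 - (offset / v_rms) ** 2, 0.0))
```

Applying the inverse mapping and then the correction recovers the original time, which is why step 203 can return the data to step 28 for a new velocity analysis.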
Figure 3 is a block diagram describing a preferred embodiment of the reflection tomographic step 7 of figure 1, which includes at least the steps of: analysing the inversion parameters (step 31); inverting the interval velocity of interest (step 32); inverting the interface (step 33); verifying (step 34) whether a residual value is less than a preset value: if not, going back to step 31; otherwise assessing the inversion results (step 35).
Steps 31-35 belonging to the reflection tomographic step 7 are not further disclosed, as they can be implemented by a person skilled in the art without involving inventive steps.
Figure 4 is a block diagram describing a preferred embodiment of the optimised time processing step 17 of figure 1, which can be performed by using the output of the PSDM (step 11) before stack (the CIGs).
The CIGs output from the PSDM (step 40) are used to perform a time processing of the data. The CIGs are converted from depth to time (step 41) and an inverse NMO (NMO⁻¹) correction is carried out (step 42) to generate a new dataset organized in CDPs (step 43). The existing velocity field (step 44) is used to perform the depth-to-time conversion (step 41) and the NMO⁻¹ correction (step 42).
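The depth-to-time conversion of step 41 can be illustrated, as a sketch only, by the vertical two-way time curve of a layered interval-velocity model; real implementations also resample the CIG amplitudes along this curve:

```python
def depth_to_time(depths, velocities):
    """Vertical two-way time at layer boundaries, t(z) = 2 * sum(dz / v).

    depths     : boundary depths [z0 = 0, z1, ..., zn] in metres
    velocities : interval velocity within each layer (m/s), length n
    Returns the two-way times (s) at each boundary.
    """
    times = [0.0]
    for k in range(len(velocities)):
        dz = depths[k + 1] - depths[k]            # layer thickness
        times.append(times[-1] + 2.0 * dz / velocities[k])
    return times
```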
The new dataset (step 43) is then used to perform (step 45) a further velocity analysis in time domain by assuming that the non-hyperbolic effects have been corrected by the migration procedure. The energy is also shifted to the actual reflection position thanks to migration and this allows a reliable time velocity analysis also for reflected phases at large offsets which were originally affected by non-hyperbolic move-out.
The new r.m.s. velocities deriving from time analysis (step 45) are then used to correct the data (step 46) and a stack is performed (step 49).
It is then evaluated (step 50) whether the stack is satisfactory: if so, the requested time section has been obtained (step 53); otherwise the procedure is further iterated by performing a new PSDM (step 52) after application of an NMO⁻¹ correction to the data (step 51).
The whole procedure tends to converge in a few iterations.