GB2429526A - Identifying continuous features in a seismic data set - Google Patents


Info

Publication number
GB2429526A
GB2429526A (application GB0517245A)
Authority
GB
United Kingdom
Prior art keywords
data
seismic
semblance
location
reflectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB0517245A
Other versions
GB0517245D0 (en)
Inventor
Øyvind Magne Skjaeveland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Equinor ASA
Original Assignee
Statoil ASA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Statoil ASA filed Critical Statoil ASA
Priority to GB0517245A priority Critical patent/GB2429526A/en
Publication of GB0517245D0 publication Critical patent/GB0517245D0/en
Priority to PCT/EP2006/065077 priority patent/WO2007023080A2/en
Publication of GB2429526A publication Critical patent/GB2429526A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/30Analysis
    • G01V1/301Analysis for determining seismic cross-sections or geostructures

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Environmental & Geological Engineering (AREA)
  • Geology (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Geophysics (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Computer And Data Communications (AREA)
  • Exchange Systems With Centralized Control (AREA)

Abstract

A method is provided for identifying a continuous feature in a seismic data set. Such a method may be used for identifying flat reflector surfaces, for example representing possible interfaces between fluids. The method comprises choosing a location within the data set and, at that location, performing a semblance measurement on a subset of the data representing a surface containing the location. The semblance measurement is then repeated for one or more further subsets of data representing other surfaces at the location.

Description

Seismic Data Processing

The present invention relates to a method of processing a seismic dataset, and in particular to a method of processing a seismic dataset in order to identify reflector surfaces within the dataset.
In seismic exploration, seismic data is obtained using geophones and/or hydrophones which act as sensors to receive energy transmitted through the ground and/or water from an energy source at the surface and reflected back to the surface from subsurface interfaces. The timing and amplitude of the energy received at the sensors can be analysed in order to gain some understanding of the subsurface structure.
Processed seismic data is important in the study of underground formations that contain hydrocarbons. Known seismic processing techniques cannot easily identify fluid-fluid interfaces, such as a hydrocarbon-water interface, due to the generally low reflection coefficient at such an interface. It is desirable for the hydrocarbon exploration industry to be able to locate such interfaces more clearly, since this would indicate that hydrocarbons are indeed present and also how far the reservoir is filled.
European Patent No. 0796442 discloses a method of locating discontinuous subsurface features. The method involves dividing a set of seismic data into an array of cubes and, for each cube, determining a measure of the similarity of the data within that cube to at least two predetermined directions. The algorithm disclosed in this patent returns the maximum value at each test location, since a low maximum value points towards possible faults.
United States Patent No. 6,473,697 also discloses a method of processing seismic data which involves splitting the data volume up into a series of cubes. In each cube, the local orientation of the seismic data is determined, and a smoothing operation performed to smooth the data in the direction of the local orientation of the data.
Due to the nature of the experimental process, recorded seismic data generally contains noise. Surfaces can be masked by noise, especially surfaces which have low reflection coefficients (such as the above-described water-hydrocarbon interface). Such noise is generally random in nature. A further noise source is systematic noise which can be, for example, multiple reflections within a subsurface layer. Such systematic noise typically creates weak surfaces that can be difficult to detect when mixed with primary reflections and random noise. It is desirable to be able to identify these weak systematic noise surfaces.
According to a first aspect of the invention, there is provided a method as defined in the appended claim 1.
According to a second aspect of the invention, there is provided a system as defined in the appended claim 24.
Embodiments of the invention are defined in the other appended claims.
For a better understanding of the present invention and in order to show how the same may be carried into effect, reference will now be made, by way of example, to the accompanying drawings, in which:

Figure 1 illustrates the variation in semblance value with the orientation of a test surface;
Figure 2 illustrates how real reflectors in a dataset may be approximated by a set of test reflectors;
Figure 3 illustrates the values of a semblance function for a range of dip values;
Figure 4 illustrates how adjacent test surfaces can be blended to image reflectors;
Figure 5 illustrates characteristics of two different surfaces; and
Figure 6 illustrates a normal seismic dataset and a pseudo-seismic dataset.
The embodiments described below provide a method of enhancing surfaces in a seismic dataset. The embodiments particularly enhance surfaces that are relatively large and smooth. The embodiments are particularly directed towards enhancing weak interfaces such as fluid-fluid interfaces. Two factors contribute to making such interfaces invisible to a human interpreter with known seismic processing methods. These are (1) the relatively weak amplitude of such reflectors compared to the amplitude of the noise and amplitude of other geological reflectors, and (2) interference between geological reflectors and the fluid-fluid interface.
In general, it is difficult to spot the weaker of two interfering crossed surfaces. In the presence of noise (random or systematic), the conclusion may be drawn that any irregularity of the main reflector is due to noise, whereas in reality the irregularity may be due to a weaker crossing reflector. Crossing reflectors are even more difficult to separate when considering the Fresnel zone effect, whereby one trace really represents data from a lateral area. It also becomes more difficult to distinguish two crossing interfaces as the angle between them decreases.
The methods described below provide an objective way of detecting interfering surfaces by implementing three particular features:

1. Noise reduction, accomplished by stacking a relatively large number of single traces.
2. Using data over a relatively large area to define a surface, thereby making it easier to resolve crossing reflectors as the relative area of interference between the surfaces decreases.
3. Simplifying the seismic image by representing the dataset only by the large-scale components.
The embodiments described below utilise an algorithm that forms a stack of traces over lateral areas of the data in different dip directions and depths, acting to filter or smooth the data. The dips that give the "best" results are detected, and surfaces corresponding to those dips and depths are displayed in such a way as to make detection simpler than on a display including full details of the dataset. In other words, at each of a number of points within the seismic dataset, a test surface is applied, the test surface acting as an ideal reflector. The orientation and depth of the reflector surface are varied and, for each position, a stack of traces over the surface is performed. The position which gives the "best" stack corresponds to the position of the reflector surface which best matches the seismic data, and that surface position is chosen for that particular point in the seismic data space. Repeating the process for each point in the seismic dataset allows the user to build up a representation of the dataset using a plurality of test surfaces. The "best" stacks are those that most resemble what one would expect if a true reflector were present in the data. The selection of the positions that give the best results is performed by the algorithm (and not the user).

An important parameter in the algorithm is the size of the surface over which stacking is performed. Variations in the data on a smaller scale than the surface will not be accurately detected, and the algorithm thus works as a size filter, keeping only those components of the data that have a large lateral size. If the size of the surface over which stacking is performed can be chosen so as to be larger than the typical size of geological variation but less than the typical size of variation of a fluid-fluid interface (which has variations when observed in two-way seismic time and not true depth), the method will enhance fluid-fluid interfaces at the expense of geological surfaces.
It has been found that, for surfaces whose size is of the order of one square kilometre, such enhancement takes place. Although a surface area greater than or equal to one square kilometre may be preferred, it is believed that advantages result from adopting a surface area greater than approximately 0.1 square kilometres.
Mathematical formulation

The aim of the method is to enhance planar or near-planar structures in the seismic data at the expense of more rugged structures. Seismic surveys as described above result in a dataset consisting of seismic amplitudes as a function of position, i.e. A(x,y,z), where the depth coordinate z is in units of time. A plane structure can be identified in the seismic dataset as a consistent value of A over a surface in the dataset.
This is the same as saying that a surface S(x,y) in an area around the point (x0,y0,z0) can be well approximated by

z = z0 + tx*(x-x0) + ty*(y-y0)

for not very large values of (x-x0) and (y-y0). A plane rectangular surface is defined by dimensions Rx = max(|x-x0|) and Ry, with a corresponding definition. Thus the rectangle has area (2*Rx)*(2*Ry), with x values from x0-Rx to x0+Rx and y values from y0-Ry to y0+Ry. In the following, this rectangle is referred to as the test reflector surface T. Rx and Ry are chosen by the user to give a test surface of a fixed size. It is assumed that the seismic data volume contains a surface S that can be considered as flat over the test surface. This means that by selecting position and dips (x0, y0, z0, tx and ty, where tx is the dip in the x direction, i.e. dz/dx, and ty is the dip in the y direction), the true surface S can be reasonably modelled by the surface T(x0,y0,z0,tx,ty,Rx,Ry).
The intention is to use the surface T(x0,y0,z0,tx,ty,Rx,Ry) to detect the surface S. In the following, fixed values of Rx and Ry are used and the variables x0,y0,z0,tx,ty are varied in order to locate a real surface S in the seismic data. There is a need for a scalar measure f(T) that indicates how "good" an approximation to a real surface the surface T is. The measure f(T) will be described hereinafter as a semblance measure.
A semblance measure can be chosen in many ways. The semblance measure must include data from not just the point (x0,y0,z0), but from some or all of the test reflector surface T. The simplest semblance measure takes the sum of the seismic amplitudes A over the surface T, that is

f1(x0,y0,z0) = sum_{dx=-Rx..Rx} sum_{dy=-Ry..Ry} A(x0+dx, y0+dy, z0+tx*dx+ty*dy)

If the surface S is characterized by the seismic amplitude A having a maximum, the semblance measure f1 would have a theoretical maximum when T = S. Alternative semblance measures are equally valid, and some examples will now be described. Instead of just forming a sum of the amplitude, one could use information from a time interval by taking the scalar product between the amplitude of the seismic trace A and a synthetic test trace P in a band of half-width Rz around z0, giving

B(x,y,z) = sum_{dz=-Rz..Rz} A(x,y,z+dz) * P(dz)

giving the alternative semblance measure

f2(x0,y0,z0) = sum_{dx=-Rx..Rx} sum_{dy=-Ry..Ry} B(x0+dx, y0+dy, z0+tx*dx+ty*dy)

The reason for doing this is that "real" features will be represented by a waveform A(z) over a certain time interval, and if this waveform is known to some degree of certainty, the noise can be reduced. Random noise in the seismic amplitude will be diminished when using data from a window (as this implies some form of stacking of data), thus the variations of the value of the semblance measure with time are expected to be smaller when a window is used. This gives rise to fewer local maxima induced by random noise.
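As an illustration (not part of the patent text), the simplest sum-of-amplitudes measure f1 might be sketched in Python as follows. The function and argument names are chosen here for clarity and do not come from the source; the data cube is assumed to be indexed as A[x, y, z] with z in time samples and the dips tx, ty in samples per trace.

```python
import numpy as np

def semblance_sum(A, x0, y0, z0, tx, ty, Rx, Ry):
    """Simplest semblance measure f1: sum of amplitudes over the planar
    test surface z = z0 + tx*(x-x0) + ty*(y-y0).  Illustrative sketch;
    A is a 3-D array A[x, y, z] with z in time samples."""
    total = 0.0
    nz = A.shape[2]
    for dx in range(-Rx, Rx + 1):
        for dy in range(-Ry, Ry + 1):
            z = int(round(z0 + tx * dx + ty * dy))
            if 0 <= z < nz:          # skip samples outside the cube
                total += A[x0 + dx, y0 + dy, z]
    return total
```

A flat test surface aligned with a flat reflector sums all its samples coherently, while a mis-oriented test surface picks the reflector up on only a few traces, so its f1 value is much lower.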
Typically, Rz would be chosen to be around half of the seismic wavelength and P would be chosen to be the shape of a typical pulse. A simple cosine window over a few traces could be used, or a more sophisticated pulse calculated from the frequency spectra of the dataset itself. The latter can be achieved by performing a Fourier transform of the seismic amplitude as a function of time over a time window in the area of interest and taking the average of this over a lateral area of interest (e.g. over the test surface). By zero-phasing in the Fourier domain (Re(F0) = abs(F), Im(F0) = 0) and performing an inverse Fourier transform, a zero-phased wavelet is obtained that will be a good representation of a typical seismic pulse. The length of the test pulse could then be selected as being of the order of the wavelength of the main pulse, or a few (for example five) samples covering the main maximum of the pulse.
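The zero-phasing recipe above (average the amplitude spectrum, set the phase to zero, inverse-transform, keep a short window around the main maximum) can be sketched as below. This is an illustrative implementation, not the patent's; the function name and the choice of a real FFT are assumptions.

```python
import numpy as np

def zero_phase_wavelet(traces, n_out):
    """Estimate a zero-phased test pulse P from the data themselves:
    average the amplitude spectrum over an area of interest, zero the
    phase (Re = |F|, Im = 0), inverse-transform, and keep n_out samples
    around the main maximum.  `traces` is (n_traces, n_samples)."""
    spec = np.abs(np.fft.rfft(traces, axis=1)).mean(axis=0)  # average |F|
    w = np.fft.irfft(spec)            # zero phase puts the peak at sample 0
    w = np.fft.fftshift(w)            # centre the peak in the window
    c = int(np.argmax(np.abs(w)))
    half = n_out // 2
    pulse = w[c - half: c - half + n_out]
    return pulse / np.linalg.norm(pulse)   # normalised, as the text assumes
```

Because the spectrum is real and non-negative, the resulting wavelet is symmetric about its maximum, which is the defining property of a zero-phased pulse.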
One should observe that the semblance measure achieved by forming the scalar product of the signal and a pulse and summing this over an area at a given dip is in fact the same as the projection of the real seismic dataset onto an "ideal" reflector with constant shape in time. Further, this measure can be separated into the product of the correlation of the signal with the pulse and the amplitude of the signal. Given that the test pulse is normalized, this would be f2 = fc * fa, where fc = (A . P)/|A| is the correlation of the signal with the pulse and fa = |A| is the amplitude of the signal. The scalar product scheme above can be slightly modified by spreading the resulting pulse, e.g. by forming a convolution with a Gaussian. The reason for doing this is that even a 100% flat physical surface (flat in spatial coordinates x,y,z) will be a bit rugged or spread out as a result of the inaccuracy in time-depth conversion during the seismic data stacking process. Thus, something that is flat when the measure is metres below sea level will not be completely flat when the measure is two-way time of the seismic signal.
A further alternative semblance measure uses the energy in the traces. The energy of a trace is the "normal" squared vector length if one considers each trace as a vector. Thus, the energy in the trace section between time samples a and b for a trace is

E = sum_{i=a..b} A_i^2

(here, A_i is the amplitude at sample time i).
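The energy-of-stack measure described in the next paragraph, i.e. the energy of the sum of the traces over the surface, can be sketched as follows (an illustrative function, not from the patent; each trace is assumed to have already been windowed and shifted to the test dip):

```python
import numpy as np

def energy_semblance(traces):
    """Energy-of-stack semblance: sum the (already dip-aligned) traces
    over the test surface and take the squared vector length of the
    stacked trace.  Invariant to the shape of the pulse."""
    stack = traces.sum(axis=0)
    return float(np.sum(stack ** 2))
```

Coherent traces add constructively (energy grows as the square of the trace count), while incoherent traces largely cancel, which is what makes this measure useful without any assumption about the pulse shape.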
The measure is then the energy of the sum of traces over the surface S. This measure differs from the previous measures by being invariant to the shape of the pulse. Thus, one need not make any assumptions as to the shape of the ideal reflector. Depending on the nature of the reflectors that one would like to detect, one might want to use a more sophisticated measure that takes into consideration not only the amplitude of the data but also the phase.
It is possible to evaluate a number of semblance measures and select that which returns the maximum value. An example of this, elaborated in more detail, is to use pulses of different width and select the pulse that gives the best match in terms of highest semblance value. A further example is to use pulses of different phase, acknowledging that one cannot in general be sure that a "true" reflector will be zero-phased. The phase that gives the best match is then used.
Yet another example is to perform the stack in a given dip direction and then take the Fourier transform of the sum, comparing this to the expected frequency content of a real reflector, e.g. to the average frequency content of a greater cube. This gives the opportunity to compare only the amplitudes of the components and not the phases, thus avoiding the need to operate with different phases.
The semblance measure that seems to work best (in the sense of being able to map out weak surfaces where they are known to be) is the scalar product with a typical pulse generated from the dataset by using the frequency content of the data. The width of this pulse is selected to cover the main energy of a typical wavelet. This pulse is zero-phased. Pulses of different phase are generated from this by fixing the phase to values other than zero, in the range from 0 to 360 degrees (e.g. at intervals of 10 degrees).
Further, these pulses are spread by forming the convolution with a Gaussian with a standard deviation typically between 0 and 12 ms (e.g. at intervals of 0.4 ms). The set of pulses is all normalized. The signal is then correlated with each of the resulting pulses (i.e. performing the scalar product of the signal and the pulse) and the pulse giving the highest value is selected as the best representation of the reflector at that position.
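The pulse family described above (phase rotation of the zero-phased wavelet, followed by Gaussian spreading and normalisation) might be generated as in the sketch below. The use of the analytic signal to rotate the phase is an assumption on my part, as is every name in the code; the Gaussian kernel is assumed to be shorter than the pulse.

```python
import numpy as np

def pulse_bank(base, phases_deg, sigmas_ms, dt_ms=4.0):
    """Generate a family of test pulses: rotate the zero-phased pulse
    `base` to each phase (via the analytic signal), spread by convolution
    with a Gaussian of each standard deviation, and normalise."""
    n = len(base)
    # analytic signal via FFT (equivalent to a Hilbert transform)
    F = np.fft.fft(base)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(F * h)
    out = []
    for ph in np.deg2rad(phases_deg):
        rotated = np.real(analytic * np.exp(-1j * ph))
        for s in sigmas_ms:
            if s > 0:
                t = np.arange(-3 * s, 3 * s + dt_ms, dt_ms)
                g = np.exp(-t ** 2 / (2 * s ** 2))
                p = np.convolve(rotated, g / g.sum(), mode="same")
            else:
                p = rotated.copy()
            out.append(p / np.linalg.norm(p))
    return out
```

Correlating the windowed signal with each pulse in the bank and keeping the best score then implements the selection step described in the text.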
If the data contain surfaces giving very strong reflections (e.g. volcanic intrusions), it may prove more prudent to use a shorter time window so as to avoid picking up the energy of the strong reflection everywhere in its neighbourhood. A window of 20 ms has been used around the maximum of the zero-phased wavelet. In this case, the phase is not varied (since phase variations would require a longer window) and the detection is performed using only zero-phase test wavelets. Usually the phase of the targets of interest is known to be approximately zero. The spreading of the test pulse and the correlation are done as in the phase variation scheme.
To summarise, a semblance measure f(T) can be obtained from any given test surface T. The location, shape and size of the surface T is selected, and the measure f(T) is a scalar that says something about the data in the location of the test surface T. If the test surface T happens to match a real surface S, the value of f(T) will be high. The better the match and the "stronger" the surface S, the higher the value of the semblance function will be. If there is no surface at the position of surface T or the alignment between T and S is poor, the value of f(T) is lower.
Figure 1 illustrates a two-dimensional example of the variation in the semblance value as the orientation of the test surface T is varied about an axis perpendicular to the plane of the drawing. Figure 1 shows at (i) to (vi) the same geological reflector 1. The geological reflector 1 has a varied topology. A test surface 2 is centred at (x0,y0,z0) and rotated about that point to have the different orientations shown at (ii) to (vi).
Figure 1 (iv) has the highest semblance value (as shown on the right of each of (ii) to (vi)) because the test surface 2 most closely matches the topology at that orientation.
For a fixed test surface shape (we will use a plane rectangle with sides 2Rx and 2Ry as before) one can scan through all positions x0,y0,z0 and all dips tx,ty using the semblance function f(T) (which in the case of constant shape of T becomes f(x0,y0,z0,tx,ty)).
For "useful" semblance functions f, that is, functions with a maximum value when the alignment with a real reflector is good, it is now possible to locate the positions and dips of the reflectors in the dataset by locating all local maxima of the semblance function G. A local maxima should be a maxima in dimensions z,tx and ty, that is: G(x0m,yO m, Z0 m,tX m,ty m)> G(xOm,yO m,z0m+özOm,tx m+&x m,ty m+ötym), for all combinations of small 6's positive and negative.
The real reflectors in the dataset may now be approximated by a set of test reflectors, as illustrated in Figure 2. The real surface 1 is approximated by four test surfaces 2a to 2d, each centred on an axis (x1,y1) to (x4,y4), respectively. Each of the test surfaces 2a to 2d returns the maximal value of the semblance function for test surface orientations about the respective axis.
The process may also be described as performing a 3D linear radon transform (also known as "slant stack"), convolving with target wavelet(s) in the radon domain, filtering in the radon domain by keeping only local maxima, and then performing a crude pseudo-inverse radon transform to obtain a filtered data set representing the large-scale variations in the data set.
Effecting the Algorithm

The process of moving through the dataset and evaluating semblance measures can be summarised as follows:

1) Select a dataset and an area of it to investigate.
2) Select the size and shape of the test surface. It should normally be smaller than or comparable to the size of the typical structure one wants to find, depending on its curvature etc. To find oil-water contacts, a circular disk or a square of one square kilometre may be a suitable size. If the prospect is elongate in one specific direction, an ellipse of the same area and elongate in the direction of the prospect may be used.
3) Select a semblance function, for example the sum of amplitudes over the surface, or the square of this if negative pulses should also be detected.
4) Select a sampling rate in the time dimension for the calculation of the semblance function. The typical sampling rate used for seismic data is 4 milliseconds. It would seem appropriate to use the same here.
5) Select a sampling rate in the space domain (crossline and inline direction). This should be as fine as possible, subject to computing practicalities. A practical value for a large area would be somewhat less than half the size of the test rectangle, in order to give continuity in the data. 125 meters in each direction (20 lines with line distance 6.25 meters) could be a typical choice.
6) Select a sampling rate in the dip direction. A preferred choice would be to select a sampling rate that gives the same expectation of the standard error caused by the dip sampling rate as that caused by the time sampling rate, i.e.

<(∂f/∂z * Δz)^2> = <(∂f/∂tx * Δtx)^2>

where Δz is the step in the time direction and Δtx is the step in the dip direction. The same should be the case for ty. This choice would yield the optimum error performance and thus the least computing time for a given resolution.
For the simple semblance function formed by summing amplitudes over a rectangle of sides 2Rx by 2Ry, the relation above gives Δtx = √3 * Δz / Rx. It should be sufficient to use this relation for other slightly different semblance functions.
7) Select a dip interval of interest. If searching for rather flat features (such as hydrocarbon/water interfaces), one could choose to limit the dip search between -1 and 1 ms/line (one line being 12.5 meters).
8) Pick the first (x,y) coordinate of the area to be investigated. For each time step z = za + i*Δz, where za is the top of the area, build a table of values of the semblance function f as a function of the dips (using the discrete dip sampling rate). Locate the local maxima in this table and check whether they are also maxima in the time direction. If so, print the coordinates for investigation by the user, the coordinates being (x,y,z,tx,ty,f), where f is the semblance measure scalar value.
9) Investigate the printed coordinates or display them or track out individual surfaces to reveal the reflectors.
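Steps 4 to 8 at a single (x0, y0) position can be sketched as below, using the simple amplitude-sum semblance. This is an illustrative reduction of the procedure, with assumed names; a full implementation would loop over all (x0, y0) positions and report every local maximum rather than only the table maximum.

```python
import numpy as np

def scan_location(A, x0, y0, Rx, Ry, dips, z_lo, z_hi):
    """At one (x0, y0): tabulate the amplitude-sum semblance for every
    time step and every (tx, ty) dip, and return the table maximum as
    (z, tx, ty, f).  `dips` is a list of (tx, ty) in samples per trace."""
    nz = A.shape[2]
    best = None
    for z0 in range(z_lo, z_hi):
        for tx, ty in dips:
            f = 0.0
            for dx in range(-Rx, Rx + 1):
                for dy in range(-Ry, Ry + 1):
                    z = int(round(z0 + tx * dx + ty * dy))
                    if 0 <= z < nz:
                        f += A[x0 + dx, y0 + dy, z]
            if best is None or f > best[3]:
                best = (z0, tx, ty, f)
    return best
```

A dipping reflector in the data is recovered with its correct depth and dip because only the matching (z0, tx, ty) combination stacks all of its samples coherently.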
An example of the output is shown in Figure 3, which illustrates the values of the semblance function for a range of values of dip in the x and y directions for a proven location. The output appears like a contour map, with regions of common semblance values connected by a line. The maximum semblance value occurs at point A (there may be multiple maxima), for which the x and y dip values are approximately -0.2 and -0.1 respectively.
The exact sampling rates are not crucial, other than that they should be less than the distance between the reflectors in each dimension and should not be so small that round-offs and other numerical artifacts could create "false" local maxima. The example values given here should be well within these limits. The important part of the algorithm is the choice of an appropriate semblance function, and especially the size and shape of the test surface.
The larger the surface, the more weight will be given to very smooth structures.
When calculating the table of values of the semblance function, one needs to perform a sum in a dip direction. Quadratic interpolation between the sample values can be used to approximate the amplitude of the seismic trace between samples.
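The quadratic interpolation mentioned above might look like the following (an illustrative sketch with crude boundary clamping; the function name is not from the patent):

```python
def quad_interp(trace, z):
    """Quadratic (three-point) interpolation of a sampled trace at a
    fractional time z, for use when summing along a dip that falls
    between time samples."""
    i = int(round(z))
    i = max(1, min(len(trace) - 2, i))   # keep a full 3-point stencil
    d = z - i
    ym, y0, yp = trace[i - 1], trace[i], trace[i + 1]
    # parabola through the three nearest samples, evaluated at offset d
    return y0 + d * (yp - ym) / 2.0 + d * d * (yp - 2.0 * y0 + ym) / 2.0
```

By construction the interpolant reproduces any quadratic trend in the samples exactly, which is more accurate along steep dips than nearest-sample or linear interpolation.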
There can also be a step 8a, being a "refinement step". The process from steps 1 to 8 gives output on a grid. Step 8a is a recalculation of the reflectors: interpolation and decreasing steps in the tx, ty and z directions can be used to give more accurate values for z, tx, ty and f.
Refinement of the method

The following sections describe how the general principle can be exploited in practice, and also how some of the constraints given in the description above can be relaxed to better suit real data applications.
Distinction between "good" and "bad" surfaces

The semblance measure is used to orient the test surface in space, because the orientations that give local maxima of the semblance value are selected. However, it is possible to calculate other attributes of the reflector. These values can then be used for filtering purposes. The simplest example is to use the dip as a filter parameter to output only reflectors that have a dip in an interesting range, or to avoid dips in a specific range. Prior knowledge of the physical structure can be used to enhance the surface detection. Alternatively, if there is an estimate of the time-depth variation for the area, this could be used to output only reflectors that are more or less "flat" in true depth. Aside from direct filtering, one could let the "strength" of the displayed data be modified by a quality parameter. As an example, one could output [Semblance]/[Semblance standard deviation] instead of simply [Semblance].
By evaluating the semblance measure on each individual trace in the test surface T, one can form the standard deviation of the semblance measure over the surface.
The semblance measure is used to identify the reflectors and then the standard deviation is used as a way to put more emphasis on "good" reflectors, i.e. those that fulfil certain criteria.
The standard deviation of a random scalar variable is defined as

σ = sqrt( (1/N) * sum_{i=1..N} (x_i - μ)^2 )

where μ is the mean, μ = (1/N) * sum_{i=1..N} x_i.
As the "x" in the present case is a pulse and not a single number, the standard deviation must be rewritten in a different format. The simple generalisation of the expressions for a scalar variable is: ji= where, is now a trace.
This definition of the standard deviation gives an indication of how "different" each trace is from the mean trace. A very smooth reflector would have a low standard deviation, and a more rugged reflector would have a larger value. This measure, however, proves to be of little value when it comes to discriminating between smooth and rugged reflectors: the value of the standard deviation is rather high under all circumstances. This can be understood by noting that, in the present case, the magnitude of the mean trace μ is much less than the magnitude of the individual traces a_i, due to noise in all dimensions.
A better definition of σ for the present purpose is

σ = sqrt( (1/N) * sum_{i=1..N} ((a_i - μ) . u)^2 )

where u is the unit vector in the direction of μ, in other words a pulse with the same shape as the mean pulse but with energy equal to 1.
This definition of the standard deviation considers only deviations of the projection onto the mean. A lot of the variations that are normal to the mean are thereby filtered out.
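The projected standard deviation just described can be sketched as below (illustrative code with assumed names; `traces` holds one windowed trace per position on the test surface T):

```python
import numpy as np

def projected_std(traces):
    """Standard deviation of the single-trace contributions, using only
    the component of each deviation along the mean pulse, so variations
    normal to the mean are filtered out.  `traces` is (n_traces, n_samples)."""
    mu = traces.mean(axis=0)
    u = mu / np.linalg.norm(mu)    # unit vector along the mean pulse
    proj = (traces - mu) @ u       # scalar deviation along the mean
    return float(np.sqrt(np.mean(proj ** 2)))
```

Deviations orthogonal to the mean pulse contribute nothing, which is exactly why this definition discriminates better between smooth and rugged reflectors than the plain vector-norm version.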
It is useful to calculate the standard deviation, as this can show if the semblance value has its origin from a strong but rough surface or a weak and smooth surface. One would expect a smooth surface to have less standard deviation than a rough surface. Figure 5 illustrates this principle. A strong and rough surface is shown at (a) and the standard deviation is relatively high. The surface shown at (b) is much smoother, but the amplitude of reflections is significantly weaker. However, the standard deviation at (b) is much lower than that at (a), resulting in an improved measure of semblance to the data.
Interpreting the results

After having identified the local maxima of the semblance function, the output is a long list of coordinates, with the dip and the value of the local maximum of the semblance function at each coordinate and dip. The list might look like the following table.
Trace   Line   Depth (ms)   dipx (ms/trace)   dipy (ms/line)   Semblance
1200    1000   2340         0.2               -0.4             350
1200    1000   2356         0.4                0.8             120
1200    1020   2332         0.2               -0.4             370
1220    1000   2344         0.3               -0.4             340
etc.

To analyse this, it is necessary to visualize the data, preferably in a way that makes it easy to compare against what "standard" seismic analysis shows.
The basic idea behind visualisation is to display the data in a way that reveals lateral continuity of reflectors.
One way of visualising the data is to generate a "pseudo" seismic dataset that can be visualized using standard visualisation tools (e.g. Seisworks™). One way of doing this would be to use the semblance value as the amplitude of the new dataset and, for example, insert the value 350 in the sample corresponding to the coordinates (1200,1000,2340). If the sampling distance in the x and y directions is more than 1, one could extrapolate to get values for lines between the sampled points by using the dip information.
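Building such a pseudo-seismic cube from the detection list is straightforward; a minimal sketch (names assumed, not from the patent) is:

```python
import numpy as np

def pseudo_seismic(shape, detections):
    """Build a 'pseudo' seismic cube: start from an empty volume and write
    each semblance value into the sample at its (x, y, z) coordinates.
    `detections` is a list of (x, y, z, semblance) tuples."""
    vol = np.zeros(shape)
    for x, y, z, f in detections:
        vol[x, y, z] = f
    return vol
```

In practice the (x, y, z) values from the detection list would first be converted from trace/line/time coordinates to array indices.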
Figure 4 shows how multiple adjacent test surfaces 21 to 24, such as those of Figure 2, can be blended to image reflectors 25, 26. Preferably, the lateral sampling rate is less than the size of the test surface. Thus, the reflector in Figure 2 will be imaged by piecewise linear segments that overlap. One would expect that a "real" surface in the data would be detected by test surfaces at adjacent positions, whereas noisy data could give rise to detections that are not reproduced at neighbouring positions. By using lateral oversampling of the data and outputting the "blended" reflectors, so that the amplitude output at a specific point is a combination not only of the output from the test position with centre at that point, but also from nearby positions, more weight is given to laterally continuous features in the output. One specific way of doing this is to represent each detected reflector at position (x0,y0,z0) as a box with size 2rx,2ry,2rz, where the amplitude (or "blackness" on a black-and-white output) of each point dies off from the centre of the box according to a cos^2 function in all directions, that is:

A(x,y,z) = A0 * cos^2(π(x-x0)/(2rx)) * cos^2(π(y-y0)/(2ry)) * cos^2(π(z-[z0+tx(x-x0)+ty(y-y0)])/(2rz))

where A0 is the semblance value for the detected reflector. The displayed amplitude is then the sum of all such boxes from all positions.
When rx is an integer multiple of the sampling interval in the x direction and ry is an integer multiple of the sampling interval in the y direction, the result gives a blend.
Taking rx = sample interval gives only interpolation of amplitudes, whereas using rx = 2*sample interval gives a blend of the neighbours. The value of rx should not exceed Rx - which is the same as stating that a reflector should not be "drawn" outside the area which was used to calculate it.
The optimum value of rx and ry will depend on the expected curvature of the features to be investigated, as well as the sampling rate and so on. In a specific example, the size of the test surface used was 51 × 51 lines × crosslines, and the sampling interval was 10 lines and 10 crosslines, respectively. In this case, rx = 20 and ry = 20 gave the subjectively best result. With rx and ry equal to 10 the picture looked more noisy, and with rx and ry equal to 30 the image became too smeared.
The choice of rz gives the "thickness" of the reflectors. As the reflectors are curves in real life but here modelled as straight lines (in the case of planar test surfaces) the blend is best if the lines have some thickness. A suitable value for rz can be 2 or 3 samples.
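The cos² taper above can be sketched as a small function. This is an illustrative implementation of the blending formula, evaluated for a single reflector box; parameter values are the example ones from the text:

```python
import numpy as np

def box_amplitude(x, y, z, x0, y0, z0, tx, ty, rx, ry, rz, a0):
    """cos^2 taper around a detected reflector at (x0, y0, z0) with dips tx, ty.

    The amplitude dies off from the box centre in all three directions; the
    z offset is measured from the local plane z0 + tx*(x-x0) + ty*(y-y0).
    Returns 0 outside the box |dx| < rx, |dy| < ry, |dz| < rz.
    """
    dx, dy = x - x0, y - y0
    dz = z - (z0 + tx * dx + ty * dy)
    if abs(dx) >= rx or abs(dy) >= ry or abs(dz) >= rz:
        return 0.0
    return (a0
            * np.cos(np.pi * dx / (2 * rx)) ** 2
            * np.cos(np.pi * dy / (2 * ry)) ** 2
            * np.cos(np.pi * dz / (2 * rz)) ** 2)

# At the box centre the full semblance value A0 is recovered:
print(box_amplitude(0, 0, 0, 0, 0, 0, 0.2, -0.4, 20, 20, 3, 350.0))  # -> 350.0
```

The displayed amplitude at each output sample is then the sum of `box_amplitude` over all detected reflectors whose boxes cover that sample.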
Figure 6 shows at (a) a "normal" seismic dataset as would be obtained using known seismic analysis. The lines marked 10 and 11 show the gas-oil and oil-water contacts, respectively. Figure 6 shows at (b) a pseudo seismic dataset evaluated as described above for the same region of space. The location of the contacts 10 and 11 are again superimposed on (b). Whilst both methods detect the contact marked with "A", only the pseudo seismic dataset generated using the present method has detected the lower oil-water contact, the location of which is marked with "B" in each image. The limitation on the range of dip angles of the test reflector surface prevents the detection of more dipping reflectors in the pseudo seismic dataset.
It should be noted that by operating with a lateral sampling interval that is smaller than the size of the test surface, it is possible to put more emphasis on laterally continuous features. The process can be summarised as follows:
* Start off with an empty dataset.
* For each detected reflector: - "Put" a 3D body into the dataset with an amplitude that is proportional to the semblance measure of the reflector. ("Putting" means adding this reflector into the dataset, causing a "blend" of all 3D bodies).
- The 3D body has the dip of the reflector and the same lateral size as the test surface or less. (One could let the amplitude die off at the edges to avoid sharp edges).
Repeat.
As multiple reflectors are detected from the same volume without removal, a problem can arise in that areas where reflectors cross are given too much weight. If one uses the maximum of the semblance function instead of the sum, this would be more correct, but the noise reduction effect mentioned above would be reduced.
A compromise between these two is possible by processing all reflectors having the same lateral position using the maximum of the semblance function, and then summing the results for each lateral position to get the total result. As data from different lateral positions are always summed, this process never reduces the effect of enhancing laterally consistent features, it only reduces adding due to different dip.
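The compromise can be sketched in a few lines. The contribution values and lateral positions below are illustrative, not from the patent:

```python
# Contributions to one output sample (x, y, z), grouped by the lateral test
# position that produced them (illustrative values).
contributions = {
    (1200, 1000): [350.0, 120.0],   # two crossing dips from the same position
    (1200, 1020): [370.0],
    (1220, 1000): [340.0],
}

# Compromise scheme: take the maximum over reflectors that share a lateral
# position (avoids over-weighting crossings), then sum across positions
# (keeps the enhancement of laterally continuous features).
amplitude = sum(max(vals) for vals in contributions.values())
print(amplitude)  # 350 + 370 + 340 = 1060.0
```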
If both negative and positive amplitudes are used when putting reflectors into the dataset, one could take the minimum of the negative amplitudes, the maximum of the positive amplitudes and then form the sum of these to use as the "maximum".
However, to output both negative and positive amplitudes will give destructive interference between the reflectors. At reflector crossings this will make the interpretation more difficult. As one of the advantages of the current method is the detection of crossing interfaces, the preferred output is to display positive data in one cube and negative data in another, and not to represent the detected pulse by its real shape but with a simple "line", giving only positive pulses (or only negative pulses in the negative cube), thus making crossings easier to identify.
If detection is carried out using a semblance measurement where different phases of the comparison pulse P are used, output is preferably achieved by placing the "line" in time at the position where the pulse has its maximum. This ensures that if the phase of the pulse that gives the maximum semblance varies along the reflector (e.g. due to noise) the line drawn will still be continuous.
Instead of using the semblance value as the amplitude of the output, one could instead output the trace of the stack in the direction of the maximum. This has the effect that the detection phase of the algorithm merely calculates a set of dominating dip directions and accompanying depths for each lateral position, and the data at this point are replaced by the median value. To avoid interference resulting from using both positive and negative amplitudes, the negative amplitude values may be replaced by zero and the resulting trace may be scaled so that the energy of the new positive-only trace is the same as that of the original trace.
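The zero-negatives-and-rescale step can be sketched as follows, taking "energy" to be the sum of squared amplitudes:

```python
import numpy as np

def positive_only_equal_energy(trace):
    """Replace negative samples by zero, then rescale so that the energy
    (sum of squared amplitudes) matches that of the original trace."""
    pos = np.where(trace > 0, trace, 0.0)
    e_orig = np.sum(trace ** 2)
    e_pos = np.sum(pos ** 2)
    if e_pos == 0:
        return pos                      # all-negative trace: nothing to keep
    return pos * np.sqrt(e_orig / e_pos)

t = np.array([1.0, -1.0, 2.0, -2.0])
out = positive_only_equal_energy(t)
print(np.sum(out ** 2), np.sum(t ** 2))  # energies match (both 10, up to rounding)
```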
Removal of a detected surface before further detection The basic detection scheme described above always starts out with the full dataset. If, say, one has two crossing reflectors, it means that the data in the overlapping area will contribute to both reflectors. In other words, detection without removal will put more emphasis on the "crossings".
A further embodiment comprises a scheme that first detects the strongest reflector, then subtracts it from the data, then detects the second strongest and so on. The most "correct" way of doing this is to iterate: after all reflectors are found, one restores the original dataset, removes all but the strongest and then re-detects the strongest and so on until convergence.
The reflectors can be removed in different ways. One way is to subtract from each trace the projection of that trace onto the mean trace or the shape of the test signal, that is: a_i = a_i − (a_i · p)·p, where p is the (unit-normalised) "best" approximation of the true reflector (either the test pulse P or the actual stack).
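A minimal sketch of this projection subtraction, with an assumed reflector shape p and an example trace:

```python
import numpy as np

def remove_reflector(trace, p):
    """Subtract from `trace` its projection onto the unit-normalised
    reflector estimate p (the test pulse or the actual stack)."""
    p_hat = p / np.linalg.norm(p)
    return trace - np.dot(trace, p_hat) * p_hat

p = np.array([0.0, 1.0, 0.0, -1.0])     # assumed reflector shape (illustrative)
a = np.array([0.5, 2.0, 0.3, -1.0])     # a trace crossing that reflector
residual = remove_reflector(a, p)
print(np.dot(residual, p))  # ~0: the reflector component has been removed
```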
The preferred way of removing reflectors is by FK-filtering, which technique is known in this technical field. This involves performing a 3D Fourier transform of the dataset that enters into the calculation of the reflector, removing or attenuating the components that correspond to the dip of the detected reflector (a small dip window must be used in practice), and then performing the inverse transform. The effect of this is to remove or attenuate all components of the data with that dip. Such removal of components with a given dip is common in the art. One way of doing this is to "flatten" the reflector to be removed by shifting the data so as to make the actual reflector flat. With a 3D Fourier transform, the components making up the reflector will then be readily identified as those with zero dip. These can be removed or attenuated, followed by an inverse transform and inverse shift.
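A sketch of the zero-dip removal step after flattening. Once the reflector has been shifted flat, its energy lies in the Fourier components with zero lateral wavenumber (kx = ky = 0), which is equivalent to the mean trace; the synthetic cube below is illustrative:

```python
import numpy as np

def remove_flat_component(cube):
    """Remove laterally constant (zero-dip) energy from a (nx, ny, nz) cube:
    after the reflector has been shifted flat, its energy lives in the
    Fourier components with kx = ky = 0, which are zeroed here.
    (Equivalent to subtracting the mean trace, shown here via the FFT.)"""
    spec = np.fft.fftn(cube)
    spec[0, 0, :] = 0.0          # components with zero lateral wavenumber
    return np.real(np.fft.ifftn(spec))

rng = np.random.default_rng(0)
noise = rng.normal(size=(8, 8, 16))
flat = np.sin(np.linspace(0, np.pi, 16))        # a flat reflector pulse in z
cube = noise + flat[None, None, :]
cleaned = remove_flat_component(cube)
# The laterally constant pulse is removed; the mean trace is now ~0.
print(np.abs(cleaned.mean(axis=(0, 1))).max())
```

In practice one would attenuate a small dip window around zero rather than a single wavenumber line, and apply the inverse shift afterwards.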
Tracking to create surfaces By investigating the above table of example output from the method, one will find that rows 1, 3 and 4 are consistent in the sense that if we start at point (1200,1000,2340) and move 20 traces "forward" at a dip of 0.2 we arrive at point (1220,1000,2344) (2344 = 2340 + 20 * 0.2), the position of row 4. It is possible to automate such tracking by starting at a specific local maximum, then going to a neighbouring position predicted by the dip of the first, and checking whether there is a local maximum there within an acceptable deviation of position and dip. It is also possible to include the semblance measure in such tracking by, for example, demanding that the value should be over a certain limit.
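The consistency check at the heart of this tracking can be sketched directly from the example rows. The depth and dip tolerances below are assumed values, not taken from the patent:

```python
# Rows 1 and 4 of the example output table.
row1 = dict(trace=1200, line=1000, depth=2340, dipx=0.2, dipy=-0.4)
row4 = dict(trace=1220, line=1000, depth=2344, dipx=0.3, dipy=-0.4)

def consistent(a, b, depth_tol=4.0, dip_tol=0.15):
    """Predict b's depth from a's dip, then check that depth and dip agree
    within (assumed) tolerances."""
    dtrace = b["trace"] - a["trace"]
    dline = b["line"] - a["line"]
    predicted = a["depth"] + a["dipx"] * dtrace + a["dipy"] * dline
    return (abs(b["depth"] - predicted) <= depth_tol
            and abs(b["dipx"] - a["dipx"]) <= dip_tol
            and abs(b["dipy"] - a["dipy"]) <= dip_tol)

print(consistent(row1, row4))  # True: 2340 + 20 * 0.2 = 2344
```

A semblance threshold could be added as a further condition in the same function.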
Typically, the lateral extent of the test surface is much more than the lateral jump when tracking. This means that the data that enters into the calculations will be much the same for the neighbour. Thus, one would expect good continuity for "real" reflectors.
Tracking of surfaces is a standard tool in most seismic display packages, and it is in general possible to track the surfaces of interest by such tools by generating a pseudo dataset and using the tools on that.
However, there is much more information stored in the table of reflectors than in the pseudo dataset because each reflector is given with its dip and its values. When tracking directly on the list of reflectors, one should be able to follow weak reflectors through strong ones in a much easier way than in the pseudo dataset.
If it is possible for a user to give a seed point on a surface as input, tracking can be carried out by calculating only along the direction(s) of the surface(s) found at that location, thus avoiding bulk calculations over a volume.
Filtering on surface size A large number of the reflectors that are detected show little or no lateral continuity.
These can be removed because we are only interested in reflectors with a lateral size comparable with that of the test surface, and such reflectors should be continuous when the tracking step is much less than the size of the test surface. It is thus possible to filter out noise by joining reflector points together in reflector surfaces and filter out those surfaces with a small number of points.
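One simple way to sketch this joining-and-filtering step is a flood fill over a lateral grid of detected points, dropping groups below a minimum size. The point coordinates, depths and tolerances below are illustrative:

```python
from collections import deque

# Illustrative reflector points on a lateral grid: (i, j) -> depth.
points = {(0, 0): 2340, (1, 0): 2344, (2, 0): 2348,   # a continuous line
          (5, 5): 2100}                                # an isolated detection

def filter_small_surfaces(points, min_points=2, depth_tol=6):
    """Join laterally adjacent points with similar depths into surfaces
    (breadth-first flood fill) and drop surfaces with too few points."""
    seen, kept = set(), {}
    for start in points:
        if start in seen:
            continue
        surface, queue = [], deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            surface.append((i, j))
            for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if (n in points and n not in seen
                        and abs(points[n] - points[(i, j)]) <= depth_tol):
                    seen.add(n)
                    queue.append(n)
        if len(surface) >= min_points:
            for p in surface:
                kept[p] = points[p]
    return kept

print(sorted(filter_small_surfaces(points)))  # the isolated point (5, 5) is removed
```

In the method itself the adjacency test would also use the stored dips, as in the tracking step above.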
Displaying tracked surfaces as horizons It is common to display tracked surfaces as horizons "on top" of the seismic data. This is also useful for surfaces tracked automatically by this method.
Removing interfering surfaces from the dataset If a tracked surface is readily identified as "not of interest", it is possible to subtract the surface from the dataset, either by removing the average trace from each trace over the surface or by for each trace subtracting the projection of that trace onto the test pulse, or by 3D Fourier transform: by shifting the dataset to "flatten" the reflector surface, performing a 3D Fourier transform, removing flat components, performing the inverse transform and "unflattening" the data.
Adapting the shape of the test surface One of the most important parameters in this method is the size and shape of the test surface, the area to sum over. As the statistics get better with an increasing number of traces in the sum, one would want to sum over as many traces as possible. However, a large area can also give problems: the real surface we want to detect may be smaller than our test surface, and/or the real surface is less likely to be totally flat over the whole area.
It is possible to let the surface adapt itself to the data. Since the method is statistical in nature and depends on a sum over an area of some size, it follows that one cannot detect a surface in great detail, on a trace-by-trace basis, as this would imply that one could identify the signal in each trace alone.
It is thus necessary to work with shapes with fewer degrees of freedom than the number of points. An alternative to a rectangle is an ellipse, a simple shape that is convex yet offers the possibility of extending more along one direction than the other.
An ellipse can be parameterized by 6 coordinates: position of centre of mass of ellipse (3 coordinates), number of points in the ellipse (equivalent to the area of the ellipse), stretch of the ellipse defined as the length of the longest half axis divided by the shortest half axis, and finally the direction of the ellipse (the angle between the x axis and the longest half axis of the ellipse). The main reason for choosing this way of describing the ellipse is that it keeps the number of points in the sum constant. Since the method is constrained to a grid, this is most easily achieved by selecting the number of points as a parameterisation variable.
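One way to realise a constant-point-count ellipse on a grid is to rank candidate grid points by an elliptical distance (for a given stretch and angle) and keep exactly the N closest. This is an illustrative construction, not necessarily the one used in the patent:

```python
import math

def ellipse_points(x0, y0, n_points, stretch, angle, grid_radius=10):
    """Select exactly n_points grid points around (x0, y0) forming an
    ellipse with the given stretch (long/short half-axis ratio) and
    orientation angle: rank points by elliptical distance, keep the closest."""
    ca, sa = math.cos(angle), math.sin(angle)

    def elliptic_dist(p):
        dx, dy = p[0] - x0, p[1] - y0
        u = ca * dx + sa * dy          # coordinate along the long axis
        v = -sa * dx + ca * dy         # coordinate along the short axis
        return (u / stretch) ** 2 + v ** 2

    candidates = [(x0 + i, y0 + j)
                  for i in range(-grid_radius, grid_radius + 1)
                  for j in range(-grid_radius, grid_radius + 1)]
    return sorted(candidates, key=elliptic_dist)[:n_points]

pts = ellipse_points(0, 0, 21, stretch=3.0, angle=0.0)
print(len(pts))  # always exactly 21 points, whatever the stretch or angle
```

Keeping the point count fixed means the semblance sums for different stretches and angles remain directly comparable.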
One way of letting the test surface adapt to the data is then: * Keep the coordinates of the centre fixed, as well as the number of points.
* Let the spread and the angle of the ellipse vary over a range and measure the semblance with that ellipse as input (possibly subject to optimisation of slope and depth for each position).
* Select the orientation that gives the highest semblance value as the best representation of the surface.
Relaxing the criterion of a totally plane test surface Especially in the case of large test surfaces, the constraint that the surface should be absolutely flat may be too rigid. There are variations in the average seismic velocities as a function of lateral position (due, for example, to slope and jumps in the sea-bed). This means that even a surface that is truly horizontal, in the sense that it is parallel to the sea surface in a normal coordinate system, will not appear as a plane in the seismic dataset.
If the test surface is large, there might come a point where the surface is so rugged that negative interference is obtained.
It is thus of interest to allow the surface to vary somewhat. Since the objective is to detect structures that are plane, the constraint that the surface is relatively smooth must be maintained.
One way of doing this is to first run the method as described above, i.e. using a completely flat test surface to obtain the maximal semblance value.
Subsequently:
1) For each trace i, perform the correlations C(i(t−dt), mean), C(i(t), mean) and C(i(t+dt), mean).
2) Adjust the trace in the direction that gives the maximum (if, say, C(i(t+dt), mean) is the maximum, move trace i down by dt).
3) Smooth the traces by replacing each trace i by the average over the traces in its neighbourhood (the traces that lie within a given radius R of trace i).
4) Calculate a new mean by summing over the traces.
5) Go to point 1) again. Repeat until convergence.
The radius R will now decide how smooth the surface is. If R is zero, then the surface can be very rugged. If the aim is to make something flat stand out in comparison to something not so flat, a large R should be selected. If one has already detected something flat, it could be interesting to let R be relatively small to be able to look at how the surface will adjust itself when given the freedom to do so.
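The iteration above can be sketched for a 1D line of traces. This simplified version smooths the per-trace time shifts (rather than the traces themselves) over the neighbourhood radius, which is one possible reading of step 3; the synthetic drifting pulse is illustrative:

```python
import numpy as np

def relax_surface(traces, n_iter=5, radius=1):
    """Iteratively adjust per-trace time shifts so each trace best aligns
    with the mean trace, smoothing the shifts over a lateral neighbourhood
    of `radius` traces (a 1D line of traces for simplicity).
    Returns the per-trace shifts in samples."""
    n, nt = traces.shape
    shifts = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        aligned = np.array([np.roll(traces[i], -shifts[i]) for i in range(n)])
        mean = aligned.mean(axis=0)                      # step 4: new mean
        for i in range(n):                               # steps 1-2: move each
            corrs = [np.dot(np.roll(traces[i], -(shifts[i] + d)), mean)
                     for d in (-1, 0, 1)]                # trace toward best lag
            shifts[i] += (-1, 0, 1)[int(np.argmax(corrs))]
        for i in range(n):                               # step 3: smooth shifts
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            shifts[i] = int(round(np.mean(shifts[lo:hi])))
    return shifts

# Traces whose pulse arrival drifts by one sample per trace:
pulse = np.zeros(32)
pulse[10:13] = 1.0
traces = np.array([np.roll(pulse, i) for i in range(5)])
print(relax_surface(traces))  # one sample of relative drift per trace
```

With radius 0 the shifts are unconstrained and the surface can become rugged; a larger radius enforces smoothness, as described above.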
A different approach to relaxing the criterion of a plane surface is to approximate the surface not by a plane but by a more complex function. One choice of such a more complex function would be to add two terms to the function describing the test surface T. These terms may be cx·(x − x0)² and cy·(y − y0)², and optionally cxy·(x − x0)·(y − y0).
It is then possible to vary the parameters cx and cy (and optionally cxy) in the same way as tx and ty to locate the best match. It is computationally most efficient to include such further terms only in the refining phase. As such second order terms do not alter the median dip, it is possible first to locate the best dip, fixing tx and ty, and then to locate the best values for cx and cy (and optionally cxy). As the stacked trace will change in shape, it is also necessary to let the z value change. It is also possible to use other complex functions and even more terms. However, if too many degrees of freedom were added to the surface by adding too many terms, there would be a risk of not being able to detect the target surfaces, as the detection method requires the degrees of freedom of the surface to be constrained in order to stack out noise. In general, it is possible and desirable, or even essential, to add more terms if the size of the test surface is increased.
Letting the test surface size vary The test surface can be positioned on a possible surface, and the surface allowed to expand, to vary its shape (e.g. by selecting the best ellipse with a given number of points) and to move laterally.

Claims (38)

  1. CLAIMS: 1. A method of identifying a continuous feature in a seismic data
    set comprising: (i) choosing a location within the data set; (ii) at that location, performing a semblance measurement on a first subset of the data set that represents a surface containing that location; and (iii) repeating step (ii) for at least one further subset of the data set representing at least one further surface at that location.
  2. 2. A method according to claim 1, wherein the or each further surface has a different spatial orientation to the first surface.
  3. 3. A method according to claim 2, wherein the method is repeated for a plurality of locations within the data, with varying spatial orientation and depth.
  4. 4. A method according to any preceding claim, comprising the further step of: (iv) storing the parameters of at least one surface spatial orientation.
  5. 5. A method according to claim 4, wherein the stored parameters correspond to the surfaces having the highest local values of semblance measurement.
  6. 6. A method according to claim 4 or 5, wherein the stored parameters of surfaces in the seismic data set are combined to generate an estimate of reflectors present in the data.
  7. 7. A method according to claim 6, wherein adjacent surfaces are smoothed into one another.
  8. 8. A method according to claim 7, wherein the smoothing is effected using one of a Gaussian function and a squared sinusoidal function.
  9. 9. A method according to any preceding claim, wherein the method is applied to stacked seismic data.
  10. 10. A method according to any preceding claim, wherein the semblance measurement comprises comparing the subset of data to a set of reference data.
  11. 11. A method according to claim 10, wherein the reference data comprises a theoretical seismic pulse.
  12. 12. A method according to claim 10, wherein the reference data comprises a seismic pulse generated from data present in the seismic data set.
  13. 13. A method according to any preceding claim, wherein step (ii) comprises evaluating a function f(T) to yield a scalar value representing the semblance, where T is the surface intersecting the location.
  14. 14. A method according to claim 13, wherein the function f(T) performs the sum of all seismic amplitudes over the surface.
  15. 15. A method according to claim 13, wherein the function f(T) performs the scalar product between seismic traces present in the surface and a synthetic seismic trace.
  16. 16. A method according to claim 15, wherein the synthetic seismic trace is generated using a frequency domain averaging process over a plurality of seismic traces present in the seismic data set.
  17. 17. A method according to claim 15, wherein the synthetic seismic trace is generated using a frequency domain averaging process over a plurality of seismic traces present in the first subset.
  18. 18. A method according to any preceding claim, wherein step (ii) comprises running a plurality of different semblance measurements.
  19. 19. A method according to claim 18, wherein the plurality of different semblance measurements comprises comparisons based upon synthetic traces having different pulse widths.
  20. 20. A method according to claim 18, wherein the plurality of different semblance measurements comprises comparisons based upon synthetic traces having different pulse phases.
  21. 21. A method according to claim 19 or 20, wherein the synthetic traces are pulse broadened prior to the comparison being made.
  22. 22. A method according to claim 21, wherein the pulse broadening is effected by convolving the synthetic seismic trace with a Gaussian function.
  23. 23. A method according to any one of claims 18 to 22 when dependent on claim 3, wherein step (iv) comprises storing the parameters for the semblance measurement which returns the highest value.
  24. 24. A method according to any preceding claim, wherein the size and/or shape of the surface can be varied.
  25. 25. A method according to any preceding claim, wherein the surface has an area equal to or greater than substantially 0.1 square kilometres.
  26. 26. A method according to claim 25, wherein the surface has an area greater than or equal to substantially 1 square kilometre.
  27. 27. A method according to any preceding claim, wherein the step (ii) further comprises: calculating the standard deviation of the semblance measurement across the surface.
  28. 28. A method according to any preceding claim, comprising the further steps of removing the reflectors which most closely match the seismic data; and iterating steps (ii) and (iii) to identify progressively weaker reflecting surfaces in the data.
  29. 29. A method according to claim 27, wherein the iterative process comprises removing weaker reflecting surfaces to obtain an estimate of the strongest reflecting surface, restoring the original data, removing the estimated strongest reflecting surface from the original data, and repeating to remove the successive strongest reflecting surfaces.
  30. 30. A method according to claim 28 or 29, wherein removal of reflectors is effected using a 3D Fourier transform.
  31. 31. A method according to any preceding claim, wherein the surface is planar.
  32. 32. A method according to any one of claims 1 to 30, wherein the surface is non- planar.
  33. 33. A method of detecting interfaces in seismic data, comprising processing the data in accordance with any one of the preceding claims, and constructing interfaces from a plurality of reflector surfaces that most closely fit the seismic data.
  34. 34. A system for identifying a continuous feature in a seismic data, comprising: - means for choosing a location in the data; - processing means for performing a semblance measurement on a first subset of the data that represents a surface intersecting that location; and - control means for controlling the processing means to repeat the semblance measurement for at least one further subset of the data representing a further surface intersecting the location, the further surface having a different spatial orientation to the first surface.
  35. 35. A program for controlling a computer to perform a method as claimed in any one of claims 1 to 33.
  36. 36. A program as claimed in claim 35, stored in a storage medium.
  37. 37. Transmission of a program as claimed in claim 35 across a communication network.
  38. 38. A computer programmed to perform a method as claimed in any one of claims 1 to 33.
GB0517245A 2005-08-24 2005-08-24 Identifying continuous features in a seismic data set Pending GB2429526A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0517245A GB2429526A (en) 2005-08-24 2005-08-24 Identifying continuous features in a seismic data set
PCT/EP2006/065077 WO2007023080A2 (en) 2005-08-24 2006-08-04 Seismic data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0517245A GB2429526A (en) 2005-08-24 2005-08-24 Identifying continuous features in a seismic data set

Publications (2)

Publication Number Publication Date
GB0517245D0 GB0517245D0 (en) 2005-09-28
GB2429526A true GB2429526A (en) 2007-02-28

Family

ID=35098145

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0517245A Pending GB2429526A (en) 2005-08-24 2005-08-24 Identifying continuous features in a seismic data set

Country Status (2)

Country Link
GB (1) GB2429526A (en)
WO (1) WO2007023080A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2522778A (en) * 2013-12-30 2015-08-05 Pgs Geophysical As Methods and systems for optimizing generation of seismic images
WO2020130845A1 (en) 2018-12-18 2020-06-25 Equinor Energy As Method of analysing seismic data to detect hydrocarbons
WO2020226506A1 (en) * 2019-05-03 2020-11-12 Equinor Energy As Method of analysing seismic data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE38229E1 (en) * 1994-12-12 2003-08-19 Core Laboratories Global N.V. Method and apparatus for seismic signal processing and exploration
US6141622A (en) * 1996-11-15 2000-10-31 Union Oil Company Of California Seismic semblance/discontinuity method
MY125603A (en) * 2000-02-25 2006-08-30 Shell Int Research Processing seismic data
WO2002003099A2 (en) * 2000-06-30 2002-01-10 Exxonmobil Upstream Research Company Method for imaging discontinuities in seismic data using dip-steering
FR2826734B1 (en) * 2001-06-29 2004-01-16 Geophysique Cie Gle IMPROVEMENTS IN SURFACE TIP TECHNIQUES

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2522778A (en) * 2013-12-30 2015-08-05 Pgs Geophysical As Methods and systems for optimizing generation of seismic images
US9857490B2 (en) 2013-12-30 2018-01-02 Pgs Geophysical As Methods and systems for optimizing generation of seismic images
GB2522778B (en) * 2013-12-30 2020-09-02 Pgs Geophysical As Methods and systems for optimizing generation of seismic images
WO2020130845A1 (en) 2018-12-18 2020-06-25 Equinor Energy As Method of analysing seismic data to detect hydrocarbons
US11815640B2 (en) 2018-12-18 2023-11-14 Equinor Energy As Method of analysing seismic data to detect hydrocarbons
WO2020226506A1 (en) * 2019-05-03 2020-11-12 Equinor Energy As Method of analysing seismic data
GB2596734A (en) * 2019-05-03 2022-01-05 Equinor Energy As Method of analysing seismic data
US12061303B2 (en) 2019-05-03 2024-08-13 Equinor Energy As Method of analysing seismic data

Also Published As

Publication number Publication date
WO2007023080A2 (en) 2007-03-01
WO2007023080A3 (en) 2007-04-19
GB0517245D0 (en) 2005-09-28

Similar Documents

Publication Publication Date Title
US5892732A (en) Method and apparatus for seismic signal processing and exploration
AU2021203520B2 (en) Seismic acquisition method
CA2414631C (en) Method for imaging discontinuities in seismic data using dip-steering
US6853922B2 (en) System for information extraction from geologic time volumes
US6092026A (en) Seismic signal processing and exploration
EP1058131B1 (en) Method of determining spatial changes in subsurface structure stratigraphy, lithology and fluid content and of reducing seismic noise
EP1149311A2 (en) Spectral decomposition for seismic interpretation
AU2001271490A1 (en) Method for imaging discontinuities in seismic data using dip-steering
US20100223012A1 (en) Method for Determination of Diffractor Locations at Sea Bottom for the Purpose of Attenuating Such Energy
US9310504B2 (en) Systems and methods for detecting swell noise in a seismic gather
Wang et al. Separation and imaging of seismic diffractions using a localized rank-reduction method with adaptively selected ranks
EA023300B1 (en) Method for dynamic aperture determination for three-dimensional surface-related multiple elimination
US4688198A (en) Entropy guided deconvolution of seismic signals
CA2455810C (en) System for information extraction from geologic time volumes
WO2003009004A1 (en) System for information extraction from geologic time volumes
AU2002329615A1 (en) System for information extraction from geologic time volumes
GB2429526A (en) Identifying continuous features in a seismic data set
Kneib et al. 3D targeted multiple attenuation
US6101446A (en) Method for charting drillable zones in an oilfield avoiding anomaly zones
Mesquita et al. Horizon picking algorithm using global optimization and coherence measurement in poststacked seismic data
MXPA97010022A Method and apparatus for processing and exploring seismic signals
CA2888252A1 (en) Method of enhancing flat spots in three-dimensional seismic interpretation
Chakraborty et al. Improved seismic data processing and interpretation for strati-structural petroleum plays in the north of Al-Khafji area, Middle-East