CN109144405B - Travel time data caching method and device - Google Patents

Travel time data caching method and device

Info

Publication number
CN109144405B
CN109144405B CN201710505971.5A
Authority
CN
China
Prior art keywords
travel
time data
data
time
travel time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710505971.5A
Other languages
Chinese (zh)
Other versions
CN109144405A (en)
Inventor
杨祥森
赵改善
魏嘉
亢永敢
刘百红
陈金焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Petroleum and Chemical Corp
Sinopec Geophysical Research Institute
Original Assignee
China Petroleum and Chemical Corp
Sinopec Geophysical Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Petroleum and Chemical Corp and Sinopec Geophysical Research Institute
Priority to CN201710505971.5A
Publication of CN109144405A
Application granted
Publication of CN109144405B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0643 Management of files
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method and a device for caching travel-time data are disclosed. The caching method comprises the following steps: traversing the seismic trace data and counting the travel-time information used by each seismic trace; establishing a cache priority level and a position index for each seismic trace; sorting the seismic trace data with the shot number and trace number as keys to form a plurality of small shot gathers; setting travel-time data cache regions of a plurality of priority levels and, as the travel-time data are used, storing them into the corresponding cache regions according to their priority levels; when prefetching travel-time data, retrieving them in order of cache-region priority; and refreshing the travel-time data in the cache regions based on the retrieval result. The invention improves the data-access hit rate and the data prefetch rate, avoids the IO contention and network congestion caused by frequent access to remote disks, and significantly improves the computational performance of prestack depth migration.

Description

Travel time data caching method and device
Technical Field
The invention relates to the field of oil and gas geophysical exploration, in particular to a travel time data caching method and device.
Background
The travel-time data is one of the inputs to Kirchhoff prestack depth migration. Since the Kirchhoff method uses approximate travel-time calculation with limited adaptability to lateral velocity variation, and the travel-time data volume is large, the travel-time data is generally precomputed on a sparse grid. When a seismic trace is migrated, the travel-time volumes of the 4 grid points surrounding its shot point or receiver point are read from the travel-time file, the travel-time volumes of the shot point and receiver point are obtained by 4-point interpolation, and the migration is then computed. This leads to frequent reading and replacement of travel-time data, which significantly degrades migration performance and generates excessive network traffic.
The common current approach is to allocate as large a memory buffer as possible, read travel-time data into a memory queue, and refresh the queue in a first-in-first-out manner, thereby reducing the number of IO operations and improving retrieval efficiency. Memory is nevertheless limited: the hit rate of travel-time retrieval remains low, travel-time data still has to be read into or evicted from the buffer frequently, and the computational performance, though improved, still falls short. There is therefore a need for a travel-time data caching method and apparatus that can significantly improve computing performance.
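The first-in-first-out memory queue described in this background can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the class and attribute names are hypothetical.

```python
from collections import OrderedDict

class FifoTravelTimeCache:
    """Baseline cache: evicts the oldest travel-time volume when full (pure FIFO)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # shot/receiver id -> travel-time volume

    def get(self, key):
        # No reordering on access: FIFO, not LRU, as in the baseline described above.
        return self.store.get(key)

    def put(self, key, volume):
        if key in self.store:
            self.store[key] = volume
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the oldest entry
        self.store[key] = volume
```

Because eviction ignores how often a volume is reused, a frequently needed travel time can be pushed out by one-off reads, which is exactly the low hit rate the invention targets.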
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to improve the reuse rate and access hit rate of travel-time data, reduce data IO time, and reduce network traffic through data prediction and a multi-level caching method, thereby improving the computational performance of Kirchhoff prestack depth migration.
According to one aspect of the invention, a travel-time data caching method is provided. The caching method can comprise the following steps:
traversing the seismic trace data and counting the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset-gather data;
establishing a cache priority level and a position index for each seismic trace;
sorting the seismic trace data with the shot number and trace number as keys to form a plurality of small shot gathers, wherein each small shot gather shares one shot-point travel time;
setting travel-time data cache regions of a plurality of priority levels and, as the travel-time data are used, storing them into the corresponding cache regions according to their priority levels;
when prefetching travel-time data, retrieving them in order of cache-region priority;
and refreshing the travel-time data in the cache regions based on the retrieval result.
Preferably, the travel-time information comprises the number of shots whose travel times are used, the number of times each shot point's travel time is used, the minimum and maximum trace numbers using each shot's travel-time data, and the read count.
Preferably, the priority level of the travel-time data is determined based on its travel-time use count.
Preferably, the travel-time data cache regions of the plurality of priority levels comprise: a local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region.
Preferably, when prefetching travel-time data, retrieving according to the priority of the travel-time cache regions comprises: first searching the cache region of the highest priority level and, if the required travel-time data is not found, searching the next-level cache region; and if the required travel-time data is not found in any travel-time data cache region, reading it directly from the original travel-time file.
Preferably, refreshing the travel-time data in the travel-time cache regions based on the retrieval result comprises, for travel-time data that is read directly from the original travel-time file and not cached in any travel-time data cache region, executing the following steps when the travel-time data is receiver-point or shot-point travel-time data:
if the dedicated receiver-point or shot-point travel-time memory cache region is not full, adding the receiver-point or shot-point travel-time data read from the original travel-time file into that cache region; if the cache region is full, replacing expired travel-time data;
if no expired travel-time data exists in the dedicated receiver-point or shot-point travel-time memory cache region, making the following judgment:
if the priority of the receiver-point or shot-point travel-time data to be added is higher than that of the travel-time data in the dedicated cache region, replacing the travel-time data of the lowest priority in that cache region;
and if the priority of the receiver-point or shot-point travel-time data to be added is not higher than that of the travel-time data in the dedicated cache region, evicting the earliest-added travel-time data and moving it into the next-level cache region.
According to another aspect of the invention, a travel-time data caching device is provided. The caching device comprises: a receiver for receiving seismic trace data, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
traversing the seismic trace data and counting the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset-gather data;
establishing a cache priority level and a position index for each seismic trace;
sorting the seismic trace data with the shot number and trace number as keys to form a plurality of small shot gathers, wherein each small shot gather shares one shot-point travel time;
setting travel-time data cache regions of a plurality of priority levels and, as the travel-time data are used, storing them into the corresponding cache regions according to their priority levels;
when prefetching travel-time data, retrieving them in order of cache-region priority;
and refreshing the travel-time data in the cache regions based on the retrieval result.
Preferably, when prefetching travel-time data, retrieving according to the priority of the travel-time cache regions comprises: first searching the cache region of the highest priority level and, if the required travel-time data is not found, searching the next-level cache region; and if the required travel-time data is not found in any travel-time data cache region, reading it directly from the original travel-time file.
Preferably, the travel-time data cache regions of the plurality of priority levels comprise: a local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region.
The travel-time data caching method and device realize multi-level caching and refreshing of travel-time data for prestack depth migration. Through deductive prediction and data sorting, the invention significantly improves the reuse rate of travel-time data; through multi-level caching and data refreshing, it improves the data-access hit rate and the data prefetch rate, avoids the IO contention and network congestion caused by frequent access to remote disks, and significantly improves the computational performance of Kirchhoff prestack depth migration.
The method and apparatus of the present invention have other features and advantages which will be apparent from or are set forth in detail in the accompanying drawings and the following detailed description, which are incorporated herein, and which together serve to explain certain principles of the invention.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts.
FIG. 1 is a flow diagram of a travel-time data caching method according to an exemplary embodiment of the present invention;
FIG. 2 shows the statistics and description of travel-time data usage;
FIG. 3 is a schematic diagram of single-shot seismic data acquisition;
FIG. 4 is a diagram illustrating the multi-level cache design and data flow; and
FIG. 5 is a flow chart of data refreshing.
Detailed Description
The invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The travel-time data is one of the inputs to Kirchhoff prestack depth migration. Because memory is limited and the travel-time data volume is large, travel-time data for different points must be fetched frequently during migration, which greatly degrades migration performance. The invention therefore deduces, counts, and analyzes the usage pattern of the travel-time data, reorders the offset-grouped seismic data by shot point, and sets up three levels of cache space: a local disk, a shared memory, and dedicated memories. As the travel-time data are used, they are moved into and out of the different cache spaces according to their priority and usage period. The following describes Kirchhoff prestack depth migration of 512 MB of common-offset-gather data.
Referring to fig. 1, a travel-time data caching method according to an exemplary embodiment of the present invention mainly includes the following steps:
1) Traverse the seismic trace data and count the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset-gather data.
First, the seismic trace data are traversed before migration, and the travel-time information used by each seismic trace is counted.
The travel-time information can comprise, among other things, the number of shots whose travel times are used, the travel-time use count of each shot point, the minimum and maximum trace numbers using each shot's travel-time data, and the read count. Those skilled in the art will appreciate that the travel-time information may also include other data.
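The statistics pass above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; trace records are reduced to (shot number, trace number) pairs and the field names are hypothetical.

```python
def collect_travel_time_stats(traces):
    """Traverse trace headers and tally, per shot, how often its travel time
    is used and the min/max trace numbers that use it."""
    stats = {}
    for shot, trace_no in traces:  # each trace uses its shot's travel time once
        s = stats.setdefault(
            shot, {"uses": 0, "min_trace": trace_no, "max_trace": trace_no}
        )
        s["uses"] += 1
        s["min_trace"] = min(s["min_trace"], trace_no)
        s["max_trace"] = max(s["max_trace"], trace_no)
    return stats
```

The per-shot use count collected here is what later determines the cache priority level of each shot's travel-time volume.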
2) Establish the cache priority level and position index of each seismic trace.
After the statistics are complete, the corresponding cache priority level, position index, and the like of each seismic trace are established, as shown in FIG. 2.
3) Sort the seismic trace data with the shot number and trace number as keys to form a plurality of small shot gathers, wherein each small shot gather shares one shot-point travel time.
Although the input seismic trace data is common-offset data sorted by the keys (offset sequence number, line number, crossline number), the offset gathers within a tolerance range necessarily contain many traces that share a shot point. The 512 MB common-offset gather is therefore sorted a second time with the shot number and trace number (i.e., (shot number, trace number)) as keys to form small shot gathers of the offset-grouped data, as shown in FIG. 3. In FIG. 3, the black dot in the center is the shot position, and each grid point of the acquisition grid is a receiver point. Each concentric ring corresponds to an offset group, and the traces collected on a concentric ring form a common-offset gather. For example, the traces corresponding to the white dots in the figure share a shot point and can form a small shot gather; when the travel-time data is cached, one shot-point travel time can thus be shared.
Because these small shot gathers share the shot-point travel time, the IO volume of the travel-time data is reduced by nearly half.
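The second-pass sort into small shot gathers can be sketched as follows. This is illustrative only; trace records are reduced to dictionaries with hypothetical `shot` and `trace` fields.

```python
from itertools import groupby

def to_small_shot_gathers(offset_gather):
    """Sort a common-offset gather by (shot number, trace number) and group
    consecutive traces of the same shot into small shot gathers, each of
    which can then share one shot-point travel-time volume."""
    ordered = sorted(offset_gather, key=lambda t: (t["shot"], t["trace"]))
    # groupby needs the input sorted on the grouping key, which sorted() ensures
    return [list(g) for _, g in groupby(ordered, key=lambda t: t["shot"])]
```

Migrating one small shot gather at a time means the shot-point travel time is read once per gather rather than once per trace.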
4) Set travel-time data cache regions of a plurality of priority levels and, as the travel-time data are used, store them into the corresponding cache regions according to their priority levels.
The travel-time data used for Kirchhoff prestack depth migration is typically stored in a disk array or a distributed file system. To improve access speed, several levels of cache regions are set according to IO performance, network bandwidth, and access rate: a compute-node local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region. The local disk cache region stores travel-time data read from the disk array or distributed file system (i.e., the original travel-time file), or travel-time data displaced from the shared memory cache region. The shared memory cache region stores new data read from the disk array or distributed file system, or re-reads from the local disk cache region data that expired and was evicted from the shared memory cache region, thereby reducing network traffic and avoiding IO conflicts; this cache region is made as large as possible and stores as much travel-time data as possible to improve the access hit rate. For a small shot gather within an offset group, the shot point is fixed while the receiver points vary, yet the receiver points can be reused within a certain range. The dedicated shot-point travel-time memory cache region therefore stores shot-point travel-time data that has already been interpolated and can be used directly for migration, while the dedicated receiver-point travel-time memory cache region stores frequently used but not-yet-interpolated receiver-point travel-time data. When a receiver-point travel time is needed, 4 travel times are fetched from the dedicated cache region and interpolated. The reuse rate of travel-time data is thereby greatly improved.
The priority of travel-time data retrieval is, from high to low: the dedicated shot-point travel-time memory cache region, the dedicated receiver-point travel-time memory cache region, the shared memory cache region, and the local disk cache region, as shown in FIG. 4.
In the travel-time usage statistical-analysis stage, priorities are set according to the statistically analyzed use count of each shot's travel time: the greater the use count, the smaller the priority number and the higher the priority.
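The priority numbering described above (more uses, smaller number, higher priority) can be sketched as follows. Illustrative only; ties are broken arbitrarily, which the patent does not specify.

```python
def assign_priorities(use_counts):
    """Rank shots by travel-time use count: the most-used shot gets
    priority number 1 (highest); larger numbers mean lower priority."""
    ranked = sorted(use_counts, key=use_counts.get, reverse=True)
    return {shot: rank + 1 for rank, shot in enumerate(ranked)}
```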
5) When prefetching travel-time data, retrieve them in order of cache-region priority.
When prefetching travel-time data, the cache region of the highest priority level is searched first; if the required travel-time data is not found, the next-level cache region is searched. If the required travel-time data is not found in any travel-time data cache region, it is read directly from the original travel-time file and cached into the dedicated receiver-point travel-time memory cache region.
6) Refresh the travel-time data in the travel-time cache regions based on the retrieval result.
FIG. 5 shows the flow for refreshing the travel-time cache regions with travel-time data that was read directly from the original travel-time file and is not cached in any travel-time data cache region.
If the dedicated receiver-point or shot-point travel-time memory cache region is not full, the receiver-point or shot-point travel-time data read from the original travel-time file is added into that cache region; if the cache region is full, expired travel-time data is replaced.
If no expired travel-time data exists in the dedicated receiver-point or shot-point travel-time memory cache region, the following judgment is made:
if the priority of the receiver-point or shot-point travel-time data to be added is higher than that of the travel-time data in the dedicated cache region, the travel-time data of the lowest priority in that cache region is replaced;
and if the priority of the receiver-point or shot-point travel-time data to be added is not higher than that of the travel-time data in the dedicated cache region, the earliest-added travel-time data is evicted and moved into the next-level cache region.
Whether a given travel time has expired is judged from the computation progress of all compute processes, the use priority, the maximum cache count, and the like.
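The refresh flow of FIG. 5 can be sketched as follows. This is an illustrative sketch: the callback names are hypothetical, and the assumption that the newcomer is still inserted after the earliest-added entry is demoted is not spelled out in the text.

```python
def refresh_dedicated_cache(cache, capacity, key, volume, priority,
                            is_expired, demote):
    """Refresh a dedicated travel-time cache per the described policy.

    `cache` maps key -> (volume, priority) in insertion order; a smaller
    priority number means higher priority. Expired entries are replaced
    first; otherwise the lowest-priority entry is replaced if the newcomer
    outranks it; otherwise the earliest-added entry is demoted to the
    next-level cache via `demote`.
    """
    if len(cache) < capacity:
        cache[key] = (volume, priority)          # room left: just add
        return
    expired = [k for k in cache if is_expired(k)]
    if expired:
        del cache[expired[0]]                    # replace an expired entry
    else:
        lowest = max(cache, key=lambda k: cache[k][1])
        if priority < cache[lowest][1]:          # newcomer outranks the lowest
            del cache[lowest]
        else:
            oldest = next(iter(cache))           # earliest-added entry
            demote(oldest, cache.pop(oldest))    # move it to the next-level cache
    cache[key] = (volume, priority)
```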
According to another embodiment of the invention, a travel-time data caching device is provided. The caching device comprises: a receiver for receiving seismic trace data, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
traversing the seismic trace data and counting the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset-gather data;
establishing a cache priority level and a position index for each seismic trace;
sorting the seismic trace data with the shot number and trace number as keys to form a plurality of small shot gathers, wherein each small shot gather shares one shot-point travel time;
setting travel-time data cache regions of a plurality of priority levels and, as the travel-time data are used, storing them into the corresponding cache regions according to their priority levels;
when prefetching travel-time data, retrieving them in order of cache-region priority;
and refreshing the travel-time data in the cache regions based on the retrieval result.
Preferably, when prefetching travel-time data, retrieving according to the priority of the travel-time cache regions comprises: first searching the cache region of the highest priority level and, if the required travel-time data is not found, searching the next-level cache region; and if the required travel-time data is not found in any travel-time data cache region, reading it directly from the original travel-time file.
Preferably, the travel-time data cache regions of the plurality of priority levels comprise: a local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region.
According to the invention, deductive prediction and data sorting improve the reuse rate of travel-time data; multi-level caching and data refreshing improve the data-access hit rate and the data prefetch rate, avoid the IO contention and network congestion caused by frequent access to remote disks, and significantly improve the computational performance of prestack depth migration.
Application example
The implementation of the present invention and its effects are described below, taking the application of the invention to travel-time data as an example.
Kirchhoff prestack depth migration software incorporating the travel-time data caching method was deployed in the Hadoop environment of a 64-node cluster, and 72 GB of seismic data and a 400 MB depth-domain velocity model from an exploration work area were selected for Kirchhoff prestack depth migration processing. The test shows that with the multi-level caching and refreshing technique, the CPU utilization of each node rose from 55% to 95%, the network traffic fell from 30 GB/s to below 3 GB/s, memory use remained stable, and the overall computational performance improved by a factor of 5.8.
It will be appreciated by persons skilled in the art that the above description of embodiments of the invention is intended only to illustrate the benefits of embodiments of the invention and is not intended to limit embodiments of the invention to any examples given.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A travel-time data caching method, characterized by comprising the following steps:
traversing seismic trace data and counting the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a position index for each seismic trace;
sorting the seismic trace data using the shot number and trace number of each trace as keys to form a plurality of small shot gathers, wherein each small shot gather shares one set of shot-point travel-time data;
setting travel-time data cache regions with a plurality of priority levels, and, as travel-time data are used, storing them into the corresponding cache region according to their priority level;
when prefetching travel-time data, retrieving them from the cache regions in order of priority;
and refreshing the travel-time data in the cache regions based on the retrieval result.
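As a rough illustration of the grouping step in claim 1 (a sketch, not part of the patent; the trace representation and the field names `shot` and `trace` are assumptions), common-offset traces can be sorted by shot and trace number so that each small shot gather collects exactly the traces sharing one shot-point travel time:

```python
from collections import defaultdict

def build_shot_gathers(traces):
    """Group common-offset traces into small shot gathers.

    Each trace is a dict with hypothetical 'shot' and 'trace' keys;
    all traces with the same shot number share one shot-point
    travel-time table, so they form one small gather.
    """
    gathers = defaultdict(list)
    # Sort by (shot, trace) so each gather's traces come out in order.
    for t in sorted(traces, key=lambda t: (t["shot"], t["trace"])):
        gathers[t["shot"]].append(t)
    return dict(gathers)

traces = [
    {"shot": 2, "trace": 1}, {"shot": 1, "trace": 3},
    {"shot": 1, "trace": 1}, {"shot": 2, "trace": 2},
]
gathers = build_shot_gathers(traces)
# Two small shot gathers: one per shot number.
```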
2. The travel-time data caching method according to claim 1, wherein the travel-time information comprises the shot numbers whose travel times are used, the number of times each shot point's travel time is used, the minimum and maximum trace numbers using each shot's travel-time data, and the number of reads per shot.
3. The travel-time data caching method of claim 1, wherein the priority level of travel-time data is determined by the number of times the travel-time data is used.
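The statistics of claim 2 and the use-count-based priority of claim 3 might be gathered in one traversal, roughly as follows (a hedged sketch; the field names `shot` and `trace` and the returned keys are assumptions, not the patent's data layout):

```python
from collections import Counter

def travel_time_stats(traces):
    """Per-shot travel-time usage statistics from one traversal:
    use count plus the min/max trace numbers touching each shot's
    travel time. Per claim 3, the use count can serve as priority."""
    uses = Counter()
    lo, hi = {}, {}
    for t in traces:
        s, n = t["shot"], t["trace"]
        uses[s] += 1
        lo[s] = min(lo.get(s, n), n)
        hi[s] = max(hi.get(s, n), n)
    return {s: {"uses": uses[s], "min_trace": lo[s], "max_trace": hi[s]}
            for s in uses}

traces = [{"shot": 1, "trace": 5}, {"shot": 1, "trace": 2},
          {"shot": 2, "trace": 7}]
stats = travel_time_stats(traces)
```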
4. The travel-time data caching method of claim 1, wherein retrieving travel-time data according to the priority of the cache regions comprises: first searching the highest-priority cache region, and, if the required travel-time data is not found, searching the next-level cache region; if the required travel-time data is not found in any of the travel-time data cache regions, reading it directly from the original travel-time file.
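The priority-ordered lookup of claim 4, with its fall-through to the original travel-time file, can be sketched as below (an illustration only; `caches` as plain mappings and the `read_from_file` callback are hypothetical stand-ins for the patent's cache regions and file reader):

```python
def lookup_travel_time(key, caches, read_from_file):
    """Search cache regions from highest to lowest priority;
    fall back to the original travel-time file on a full miss."""
    for cache in caches:               # caches ordered high -> low priority
        data = cache.get(key)
        if data is not None:
            return data, cache         # hit: report which level answered
    return read_from_file(key), None   # miss in every level

# Hypothetical usage: three cache levels modeled as dicts.
l1, l2, l3 = {"s1": "tt-a"}, {}, {"s2": "tt-b"}
data, hit = lookup_travel_time("s2", [l1, l2, l3], lambda k: f"file:{k}")
data2, hit2 = lookup_travel_time("s9", [l1, l2, l3], lambda k: f"file:{k}")
```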
5. The travel-time data caching method of claim 1, wherein the travel-time data cache regions of the plurality of priority levels comprise: a local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region.
6. The travel-time data caching method according to claim 5, wherein the travel-time data cache regions, in order of priority from high to low, are: the dedicated shot-point travel-time memory cache region, the dedicated receiver-point travel-time memory cache region, the shared memory cache region, and the local disk cache region.
7. The travel-time data caching method according to claim 6, wherein refreshing the travel-time data in the cache regions based on the retrieval result comprises, for travel-time data that was not found in any cache region and was read directly from the original travel-time file, when the data is receiver-point or shot-point travel-time data, performing the following steps:
if the dedicated receiver-point or shot-point travel-time memory cache region is not full, adding the receiver-point or shot-point travel-time data read from the original travel-time file to that cache region; if the cache region is full, replacing expired travel-time data;
if no expired travel-time data exists in the dedicated receiver-point or shot-point travel-time memory cache region, making the following judgment:
if the priority of the receiver-point or shot-point travel-time data to be added is higher than that of travel-time data already in the dedicated cache region, replacing the lowest-priority travel-time data in that cache region;
and if the priority of the receiver-point or shot-point travel-time data to be added is not higher than that of the travel-time data already in the dedicated cache region, evicting the earliest-added travel-time data and moving it to the next-level cache region.
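One literal reading of the refresh rules in claim 7 is sketched below; this is an illustration under stated assumptions, not the patent's implementation. The entry layout, the `expired` flag, and the insertion-order stamp are all hypothetical choices:

```python
import itertools

_seq = itertools.count()  # monotone stamp recording insertion order

def refresh_cache(cache, next_level, key, data, priority, capacity):
    """Insert freshly-read travel-time data into a dedicated cache.

    `cache` and `next_level` map key -> [data, priority, order, expired].
    Rules: add if there is room; else replace an expired entry; else
    displace the lowest-priority entry if the newcomer outranks it;
    else demote the earliest-added entry to the next cache level.
    """
    entry = [data, priority, next(_seq), False]
    if len(cache) < capacity:                       # room left: just add
        cache[key] = entry
        return
    expired = [k for k, v in cache.items() if v[3]]
    if expired:                                     # replace an expired entry
        del cache[expired[0]]
    else:
        lowest = min(cache, key=lambda k: cache[k][1])
        if priority > cache[lowest][1]:             # displace lowest priority
            del cache[lowest]
        else:                                       # demote earliest-added entry
            oldest = min(cache, key=lambda k: cache[k][2])
            next_level[oldest] = cache.pop(oldest)
    cache[key] = entry

cache, nxt = {}, {}
refresh_cache(cache, nxt, "a", "ta", 1, 2)
refresh_cache(cache, nxt, "b", "tb", 5, 2)
refresh_cache(cache, nxt, "c", "tc", 3, 2)   # full: evicts lowest-priority "a"
refresh_cache(cache, nxt, "d", "td", 2, 2)   # not higher: demotes oldest "b"
```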
8. A travel-time data caching apparatus, comprising: a receiver for receiving seismic trace data, a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps:
traversing seismic trace data and counting the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a position index for each seismic trace;
sorting the seismic trace data using the shot number and trace number of each trace as keys to form a plurality of small shot gathers, wherein each small shot gather shares one set of shot-point travel-time data;
setting travel-time data cache regions with a plurality of priority levels, and, as travel-time data are used, storing them into the corresponding cache region according to their priority level;
when prefetching travel-time data, retrieving them from the cache regions in order of priority;
and refreshing the travel-time data in the cache regions based on the retrieval result.
9. The travel-time data caching apparatus according to claim 8, wherein retrieving from the cache regions in order of priority when prefetching travel-time data comprises: first searching the highest-priority cache region, and, if the required travel-time data is not found, searching the next-level cache region; if the required travel-time data is not found in any of the travel-time data cache regions, reading it directly from the original travel-time file.
10. The travel-time data caching apparatus according to claim 8, wherein the travel-time data cache regions of the plurality of priority levels comprise: a local disk cache region, a shared memory cache region, a dedicated shot-point travel-time memory cache region, and a dedicated receiver-point travel-time memory cache region.
CN201710505971.5A 2017-06-28 2017-06-28 Travel time data caching method and device Active CN109144405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710505971.5A CN109144405B (en) 2017-06-28 2017-06-28 Travel time data caching method and device

Publications (2)

Publication Number Publication Date
CN109144405A CN109144405A (en) 2019-01-04
CN109144405B true CN109144405B (en) 2021-05-25

Family

ID=64805451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710505971.5A Active CN109144405B (en) 2017-06-28 2017-06-28 Travel time data caching method and device

Country Status (1)

Country Link
CN (1) CN109144405B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748466B (en) * 2019-10-30 2024-03-26 China National Petroleum Corporation Fresnel-based travel time field data processing method and device
CN110865947B (en) * 2019-11-14 2022-02-08 National University of Defense Technology Cache management method for prefetching data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102841379A (en) * 2012-09-06 2012-12-26 China University of Petroleum (East China) Method for analyzing prestack time migration and velocity based on common scatter point gathers
CN103605162A (en) * 2013-10-12 2014-02-26 China National Petroleum Corporation Method and device for geophone joint-array simulation response analysis based on seismic data
CN103901468A (en) * 2014-03-18 2014-07-02 Geophysical Exploration Company of CNPC Chuanqing Drilling Engineering Co., Ltd. Seismic data processing method and device
CN104133240A (en) * 2014-07-29 2014-11-05 China National Petroleum Corporation Large-scale parallel Kirchhoff prestack depth migration method and device
CN106842304A (en) * 2017-01-03 2017-06-13 China National Petroleum Corporation Prestack depth migration method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1721190A4 (en) * 2003-12-12 2012-06-06 Exxonmobil Upstream Res Co Method for seismic imaging in geologically complex formations

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hadoop-based parallel algorithm for Kirchhoff prestack time migration; Kang Yonggan et al.; Oil Geophysical Prospecting; 2015-12-31; Vol. 50, No. 6; full text *

Similar Documents

Publication Publication Date Title
Lin et al. Pagraph: Scaling gnn training on large graphs via computation-aware caching
US10984073B2 (en) Dual phase matrix-vector multiplication system
US20170371807A1 (en) Cache data determining method and apparatus
CN103631730A Cache optimization method for in-memory computing
CN109144405B (en) Travel time data caching method and device
CN108228110A Method and apparatus for migrating resource data
Tauheed et al. SCOUT: prefetching for latent feature following queries
US11567952B2 (en) Systems and methods for accelerating exploratory statistical analysis
CN110297787A (en) The method, device and equipment of I/O equipment access memory
CN103901468B (en) Seismic data processing method and device
CN105359142B (en) Hash connecting method and device
CN106874332B (en) Database access method and device
CN115712583A (en) Method, device and medium for improving distributed cache cross-node access performance
CN110245094B (en) Block-level cache prefetching optimization method and system based on deep learning
US20160092133A1 (en) Data allocation control apparatus and data allocation control method
Pan et al. A global user-driven model for tile prefetching in web geographical information systems
KR102006283B1 (en) Dataset loading method in m-tree using fastmap
Leal et al. TKSimGPU: A parallel top-K trajectory similarity query processing algorithm for GPGPUs
CN113157605B (en) Resource allocation method, system, storage medium and computing device for two-level cache
CN111290305A (en) Multi-channel digital quantity acquisition and processing anti-collision method and system for multiple sets of inertial navigation systems
US11899642B2 (en) System and method using hash table with a set of frequently-accessed buckets and a set of less frequently-accessed buckets
Wu et al. Neist: a neural-enhanced index for spatio-temporal queries
Park Flash-Aware Cost Model for Embedded Database Query Optimizer.
CN111880900A (en) Design method of near data processing system for super fusion equipment
CN111880739A (en) Near data processing system for super fusion equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant