CN109144405A - Travel time data caching method and device - Google Patents

Travel time data caching method and device

Info

Publication number
CN109144405A
CN109144405A CN201710505971.5A
Authority
CN
China
Prior art keywords
travel time
data
buffer area
priority
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710505971.5A
Other languages
Chinese (zh)
Other versions
CN109144405B (en)
Inventor
杨祥森
赵改善
魏嘉
亢永敢
刘百红
陈金焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Petroleum and Chemical Corp
Sinopec Geophysical Research Institute
Original Assignee
China Petroleum and Chemical Corp
Sinopec Geophysical Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Petroleum and Chemical Corp, Sinopec Geophysical Research Institute filed Critical China Petroleum and Chemical Corp
Priority to CN201710505971.5A priority Critical patent/CN109144405B/en
Publication of CN109144405A publication Critical patent/CN109144405A/en
Application granted granted Critical
Publication of CN109144405B publication Critical patent/CN109144405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0643 Management of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device

Abstract

A travel-time data caching method and device are disclosed. The caching method comprises the following steps: traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace; establishing a cache priority level and a location index for each seismic trace; sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers; setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level; when prefetching travel-time data, searching the buffers in order of priority; and refreshing the travel-time data in the buffers based on the search result. The invention improves the hit rate and prefetch rate of data access and avoids the I/O contention and network congestion caused by frequent access to remote disks, thereby markedly improving the computational performance of prestack depth migration.

Description

Travel time data caching method and device
Technical field
The present invention relates to the field of geophysical exploration for oil and gas, and more particularly to a travel-time data caching method and device.
Background art
Travel-time data are one of the inputs of Kirchhoff prestack depth migration. Because the Kirchhoff approximation adapts poorly to lateral velocity variation and the travel-time data volume is large, the travel-time data are generally precomputed on a sparse grid. When migrating a seismic trace, the travel-time volumes of the four grid points surrounding the shot or receiver are first read from file, the three-dimensional travel-time volumes of the shot and the receiver are obtained by four-point interpolation, and the migration is then computed. As a result, travel-time data are read in and replaced frequently, migration performance drops sharply, and network traffic becomes excessive.
The current common practice is to allocate as much memory as possible, read as many travel-time datasets as possible into a memory queue, and refresh the travel-time data in the queue in a first-in-first-out manner, so as to reduce the number of I/O operations and improve data retrieval efficiency. However, since memory is always limited, the hit rate of travel-time data retrieval remains low, and travel-time data still have to be read into memory or evicted from it frequently; computational performance improves, but it needs to be improved further. There is therefore a need for a travel-time data caching method and device that can markedly improve computational performance.
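For illustration only, the following is a minimal sketch of the first-in-first-out memory-queue approach described above; the class and function names are illustrative assumptions and are not taken from the original disclosure.

```python
from collections import OrderedDict

class FifoTravelTimeCache:
    """Baseline approach: keep travel-time volumes in a memory queue and
    evict the oldest one when the queue is full (first in, first out)."""

    def __init__(self, capacity, read_from_file):
        self.capacity = capacity              # maximum number of cached volumes
        self.read_from_file = read_from_file  # fallback reader (assumed callable)
        self.queue = OrderedDict()            # key -> travel-time volume, in insertion order

    def get(self, key):
        if key in self.queue:                 # hit: no disk or network I/O
            return self.queue[key]
        volume = self.read_from_file(key)     # miss: read from the remote travel-time file
        if len(self.queue) >= self.capacity:
            self.queue.popitem(last=False)    # evict the oldest entry
        self.queue[key] = volume
        return volume
```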
The information disclosed in this Background section is only intended to deepen the understanding of the general background of the invention and should not be regarded as an acknowledgement or any form of suggestion that this information constitutes prior art already known to a person skilled in the art.
Summary of the invention
The object of the present invention is to improve the reusability and access hit rate of travel-time data through data prediction and a multi-level caching method, reducing the data I/O time and the network traffic and thereby improving the computational performance of Kirchhoff prestack depth migration.
According to one aspect of the invention, a travel-time data caching method is proposed. The caching method may include the following steps:
traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a location index for each seismic trace;
sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume;
setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level;
when prefetching travel-time data, searching the buffers in order of priority;
refreshing the travel-time data in the buffers based on the search result.
Preferably, the travel-time information includes the shot numbers used during travel-time computation, the number of times each shot's travel time is used, and the minimum trace number, maximum trace number, and read count of the data used for each shot travel time.
Preferably, the priority level of the travel-time data is determined based on the number of times they are used.
Preferably, the travel-time data buffers of the multiple priority levels include: a local disk buffer, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times.
Preferably, the travel-time data buffers of the multiple priority levels are, in order of priority from high to low: the dedicated memory cache for shot travel times, the dedicated memory cache for receiver travel times, the shared memory buffer, and the local disk buffer.
Preferably, when prefetching travel-time data, searching the buffers in order of priority includes: first searching in the buffer with the highest priority level and, if the required travel-time data are not found, searching in the next-level buffer; if the required travel-time data are not found in any of the travel-time data buffers, reading them directly from the original travel-time file.
Preferably, refreshing the travel-time data in the buffers based on the search result includes, for travel-time data that are read directly from the original travel-time file because they are not cached in any of the travel-time data buffers, performing the following steps when the travel-time data are receiver travel-time data or shot travel-time data:
if the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times is not full, adding the receiver or shot travel-time data read from the original travel-time file into the corresponding dedicated memory cache; if the cache is full, replacing travel-time data that have expired;
if no travel-time data in the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times have expired, making the following judgment:
if the priority of the receiver or shot travel-time data to be added is higher than the priority of the travel-time data already in the corresponding dedicated memory cache, replacing the lowest-priority travel-time data in that cache;
if the priority of the receiver or shot travel-time data to be added is not higher than the priority of the travel-time data in the corresponding dedicated memory cache, evicting the travel-time data that were added earliest into the next-level buffer.
According to another aspect of the invention, a travel-time data caching device is proposed. The caching device includes: a receiver for receiving seismic trace data, a processor, a memory, and a computer program that is stored on the memory and can run on the processor, wherein the processor, when executing the program, can implement the following steps:
traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a location index for each seismic trace;
sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume;
setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level;
when prefetching travel-time data, searching the buffers in order of priority;
refreshing the travel-time data in the buffers based on the search result.
Preferably, when prefetching travel-time data, searching the buffers in order of priority includes: first searching in the buffer with the highest priority level and, if the required travel-time data are not found, searching in the next-level buffer; if the required travel-time data are not found in any of the travel-time data buffers, reading them directly from the original travel-time file.
Preferably, the travel-time data buffers of the multiple priority levels include: a local disk buffer, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times.
The travel-time data caching method and device proposed by the present invention implement multi-level caching for travel-time data prefetching and refreshing in Kirchhoff prestack depth migration. Through deduction, prediction, and data sorting, the invention markedly improves the reusability of travel-time data; through multi-level caching and data refreshing, it improves the hit rate and prefetch rate of data access and avoids the I/O contention and network congestion caused by frequent access to remote disks, thereby markedly improving the computational performance of Kirchhoff prestack depth migration.
The method and device of the present invention have other features and advantages, which will be apparent from, or are set forth in detail in, the accompanying drawings incorporated herein and the following detailed description; the drawings and the detailed description together serve to explain certain principles of the invention.
Brief description of the drawings
Exemplary embodiments of the present invention are described in more detail with reference to the accompanying drawings, from which the above and other objects, features, and advantages of the invention will become more apparent; in the exemplary embodiments of the invention, the same reference numerals generally denote the same components.
Fig. 1 is a flow chart of a travel-time data caching method according to an exemplary embodiment of the present invention.
Fig. 2 shows the statistical analysis results of travel-time data usage and their explanation.
Fig. 3 is a schematic diagram of single-shot seismic data acquisition.
Fig. 4 shows the design of the multi-level buffers and the data flow.
Fig. 5 is a flow chart of data refreshing.
Detailed description of embodiments
The present invention will be described in more detail below with reference to the accompanying drawings. Although preferred embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
Travel-time data are one of the inputs of Kirchhoff prestack depth migration. Because memory is limited and the travel-time data volume is large, different travel-time datasets have to be fetched frequently during migration, which causes migration performance to drop sharply. The present invention therefore deduces, counts, and analyzes the usage pattern of the travel-time data, re-sorts the offset-grouped seismic traces by shot, and sets up three levels of cache space: local disk, shared memory, and dedicated memory. While the travel-time data are in use, they are read into the different caches according to the priority and service life of the data. The invention is described below using Kirchhoff prestack depth migration of 512 MB of common-offset gather data as an example.
With reference to Fig. 1, the travel-time data caching method according to an exemplary embodiment of the present invention mainly comprises the following steps:
1) Traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data.
First, before migration, the seismic trace data are traversed once and the travel-time information used by each seismic trace is counted.
The travel-time information may include the shot numbers used, the number of times each shot's travel time is used, and the minimum trace number, maximum trace number, and read count of the data used for each shot travel time, among others. Those skilled in the art will appreciate that the travel-time information may also include other data.
2) Establishing the cache priority level and location index of each seismic trace.
After the statistics are complete, the cache priority level, location index, and related attributes corresponding to each seismic trace are established, as shown in Fig. 2.
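A minimal sketch of steps 1) and 2), assuming each trace header exposes `shot` and `trace` number fields (the field and function names are illustrative, not from the original disclosure):

```python
def collect_travel_time_stats(trace_headers):
    """One pass over the trace headers: count, per shot, how often its
    travel-time volume is used and the trace-number range, then rank the
    shots so that a higher use count gives a smaller (higher) priority."""
    stats = {}
    for hdr in trace_headers:                       # hdr has .shot and .trace (assumed)
        s = stats.setdefault(hdr.shot, {"uses": 0, "reads": 0,
                                        "min_trace": hdr.trace, "max_trace": hdr.trace})
        s["uses"] += 1
        s["reads"] += 1                             # one read of the shot travel time per trace
        s["min_trace"] = min(s["min_trace"], hdr.trace)
        s["max_trace"] = max(s["max_trace"], hdr.trace)
    ranked = sorted(stats, key=lambda shot: -stats[shot]["uses"])
    priority = {shot: rank for rank, shot in enumerate(ranked)}  # 0 = highest priority
    return stats, priority
```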
3) Sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume.
Although the input seismic traces are common-offset data sorted with (offset sequence number, inline number, crossline number) ((Offset, Inline, Crossline)) as the key, within the offset tolerance range there are necessarily multiple traces that share a common shot. Therefore, the 512 MB common-offset gather is sorted a second time with shot number and trace number (i.e. (shot number, trace number)) as the key, forming small shot gathers of the offset-grouped data, as shown in Fig. 3. In Fig. 3, the black dot at the center is the shot position, and each grid point of the acquisition grid is a receiver. Each concentric ring corresponds to one offset group, and the traces acquired on a ring form one common-offset gather. For example, the traces corresponding to the white dots in the figure share one shot and can form one small shot gather; during travel-time data caching they can share one shot travel-time volume.
Because these small shot gathers can share shot travel times, the I/O volume of travel-time data is reduced by nearly half.
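A minimal sketch of the secondary sort in step 3), again assuming `shot` and `trace` header fields (illustrative names only):

```python
from itertools import groupby

def build_small_shot_gathers(common_offset_traces):
    """Re-sort one common-offset gather by (shot, trace) so that all traces
    sharing a shot form one small shot gather and can share one shot
    travel-time volume."""
    ordered = sorted(common_offset_traces, key=lambda t: (t.shot, t.trace))
    return {shot: list(traces)
            for shot, traces in groupby(ordered, key=lambda t: t.shot)}
```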
4) Setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level.
During travel-time computation, the travel-time data used by Kirchhoff prestack depth migration are usually stored in a disk array or a distributed file system. To improve the access speed of the travel-time data, buffers of multiple levels are set up according to I/O performance, network bandwidth, and access rate: a local disk buffer on each compute node, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times. The local disk buffer stores travel-time data read from the disk array or distributed file system (i.e. the original travel-time files), or travel-time data evicted from the shared memory buffer. The shared memory buffer stores new data read from the disk array or distributed file system, or data that once expired and were moved out of the shared memory buffer and are re-read from the local disk buffer, which reduces network traffic and avoids I/O conflicts; this buffer is made as large as possible so that as many travel-time datasets as possible are kept in it, improving the access hit rate. For the small shot gathers within one offset group, the shot is fixed while the receivers vary, yet receivers within a certain range can be reused. Therefore, the dedicated memory cache for shot travel times stores shot travel-time data that have already been interpolated and can be used directly for migration, whereas the dedicated memory cache for receiver travel times stores frequently used receiver travel-time data that have not been interpolated; when the travel time of a particular receiver is needed, four travel-time volumes are fetched from this dedicated cache and interpolated. In this way, the reuse rate of travel-time data is greatly increased.
The priority of travel-time data retrieval is, from high to low: the dedicated memory cache for shot travel times, the dedicated memory cache for receiver travel times, the shared memory buffer, and the local disk buffer, as shown in Fig. 4.
In the statistical analysis stage of travel-time usage, priorities are set according to the shot travel-time access counts obtained from the statistical analysis: the more accesses, the smaller the priority number and the higher the priority.
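The four buffer levels and their retrieval order can be modeled schematically as follows; the class, the capacities, and the key layout are assumptions made for illustration and are not specified in the original disclosure.

```python
class CacheLevel:
    """One cache level: a bounded keyed store searched in a fixed order."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.store = {}          # key (e.g. shot or receiver id) -> travel-time volume

    def get(self, key):
        return self.store.get(key)

    def is_full(self):
        return len(self.store) >= self.capacity

# Retrieval priority from high to low, as in Fig. 4 (capacities are placeholders).
cache_levels = [
    CacheLevel("shot dedicated memory cache", capacity=8),        # interpolated shot travel times
    CacheLevel("receiver dedicated memory cache", capacity=64),   # uninterpolated receiver travel times
    CacheLevel("shared memory buffer", capacity=1024),
    CacheLevel("local disk buffer", capacity=16384),
]
```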
5) When prefetching travel-time data, searching the buffers in order of priority.
When prefetching travel-time data, the buffer with the highest priority level is searched first; if the required travel-time data are not found, the next-level buffer is searched. If the required travel-time data are not found in any of the travel-time data buffers, they are read directly from the original travel-time file and cached in the dedicated memory cache for receiver travel times.
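Using the `cache_levels` list sketched above, the prefetch search of step 5) can be written as a simple cascade; `read_original_file` stands in for reading the original travel-time file and is an assumed callable.

```python
def prefetch_travel_time(key, cache_levels, read_original_file):
    """Search the buffers from the highest-priority level downwards and fall
    back to the original travel-time file on a complete miss."""
    for level in cache_levels:                 # ordered from high to low priority
        volume = level.get(key)
        if volume is not None:
            return volume, level.name          # hit at this level
    volume = read_original_file(key)           # miss everywhere: read from the original file
    # Cache the newly read data in the receiver dedicated memory cache; the full
    # replacement rules applied on insertion are those of step 6) below.
    cache_levels[1].store[key] = volume
    return volume, "original travel-time file"
```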
6) Refreshing the travel-time data in the buffers based on the search result.
For travel-time data that are read directly from the original travel-time file because they are not cached in any of the travel-time data buffers, the flow chart for refreshing the travel-time data in the buffers is shown in Fig. 5.
If the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times is not full, the receiver or shot travel-time data read from the original travel-time file are added into the corresponding dedicated memory cache; if the cache is full, travel-time data that have expired are replaced.
If no travel-time data in the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times have expired, the following judgment is made:
if the priority of the receiver or shot travel-time data to be added is higher than the priority of the travel-time data already in the corresponding dedicated memory cache, the lowest-priority travel-time data in that cache are replaced;
if the priority of the receiver or shot travel-time data to be added is not higher than the priority of the travel-time data in the corresponding dedicated memory cache, the travel-time data that were added earliest are evicted into the next-level buffer.
Whether a given travel-time dataset has expired is determined from the computation progress of all compute processes, its usage priority, its maximum caching count, and so on.
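A condensed sketch of the refresh decision of Fig. 5 for one dedicated memory cache, assuming each cached entry records its priority number, insertion time, and an expired flag (this entry layout is an assumption, and demotion of the evicted entry to the next-level buffer is omitted):

```python
def refresh_dedicated_cache(cache, key, volume, priority, now):
    """Insert newly read shot or receiver travel-time data into a dedicated
    memory cache following the priority-based replacement rules."""
    entry = {"data": volume, "priority": priority, "added": now, "expired": False}
    if not cache.is_full():
        cache.store[key] = entry               # free space: just add the data
        return
    expired = [k for k, e in cache.store.items() if e["expired"]]
    if expired:
        victim = expired[0]                    # replace data that have expired
    else:
        # Smaller priority number means higher priority (more accesses).
        lowest = max(cache.store, key=lambda k: cache.store[k]["priority"])
        if priority < cache.store[lowest]["priority"]:
            victim = lowest                    # new data outrank the lowest-priority entry
        else:
            # Otherwise evict the earliest-added entry (to the next-level buffer).
            victim = min(cache.store, key=lambda k: cache.store[k]["added"])
    del cache.store[victim]
    cache.store[key] = entry
```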
According to another embodiment of the present invention, a travel-time data caching device is proposed. The caching device includes: a receiver for receiving seismic trace data, a processor, a memory, and a computer program that is stored on the memory and can run on the processor, wherein the processor, when executing the program, can implement the following steps:
traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a location index for each seismic trace;
sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume;
setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level;
when prefetching travel-time data, searching the buffers in order of priority;
refreshing the travel-time data in the buffers based on the search result.
Preferably, when prefetching travel-time data, searching the buffers in order of priority includes: first searching in the buffer with the highest priority level and, if the required travel-time data are not found, searching in the next-level buffer; if the required travel-time data are not found in any of the travel-time data buffers, reading them directly from the original travel-time file.
Preferably, the travel-time data buffers of the multiple priority levels include: a local disk buffer, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times.
Through deduction, prediction, and data sorting, the present invention improves the reusability of travel-time data; through multi-level caching and data refreshing, it improves the hit rate and prefetch rate of data access and avoids the I/O contention and network congestion caused by frequent access to remote disks, thereby markedly improving the computational performance of prestack depth migration.
Application example
The implementation process and the related effects of the present invention are illustrated below with travel-time data used in carrying out the invention.
Kirchhoff prestack depth migration software incorporating the travel-time data caching method of the present invention was deployed in a Hadoop runtime environment on a 64-node cluster, and 72 GB of seismic data and 400 MB of depth-domain velocity model data from an exploration work area were selected for Kirchhoff prestack depth migration processing. The test shows that, with multi-level caching and refreshing, the CPU utilization of each node increased from the original 55% to 95%, the network traffic dropped from 30 GB/s to below 3 GB/s, memory usage remained stable, and the overall computational performance improved by a factor of 5.8.
It will be understood by those skilled in the art that the above description of the embodiments of the present invention is intended only to illustrate the beneficial effects of the embodiments and is not intended to limit the embodiments of the invention to any example given.
Various embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A travel-time data caching method, characterized in that the caching method comprises the following steps:
traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a location index for each seismic trace;
sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume;
setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level;
when prefetching travel-time data, searching the buffers in order of priority;
refreshing the travel-time data in the buffers based on the search result.
2. The travel-time data caching method according to claim 1, characterized in that the travel-time information includes the shot numbers used during travel-time computation, the number of times each shot's travel time is used, and the minimum trace number, maximum trace number, and read count of the data used for each shot travel time.
3. The travel-time data caching method according to claim 1, characterized in that the priority level of the travel-time data is determined based on the number of times they are used.
4. The travel-time data caching method according to claim 1, characterized in that, when prefetching travel-time data, searching the buffers in order of priority comprises: first searching in the buffer with the highest priority level and, if the required travel-time data are not found, searching in the next-level buffer; if the required travel-time data are not found in any of the travel-time data buffers, reading them directly from the original travel-time file.
5. The travel-time data caching method according to claim 1, characterized in that the travel-time data buffers of the multiple priority levels comprise: a local disk buffer, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times.
6. The travel-time data caching method according to claim 5, characterized in that the travel-time data buffers of the multiple priority levels are, in order of priority from high to low: the dedicated memory cache for shot travel times, the dedicated memory cache for receiver travel times, the shared memory buffer, and the local disk buffer.
7. The travel-time data caching method according to claim 6, characterized in that refreshing the travel-time data in the buffers based on the search result comprises, for travel-time data that are read directly from the original travel-time file because they are not cached in any of the travel-time data buffers, performing the following steps when the travel-time data are receiver travel-time data or shot travel-time data:
if the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times is not full, adding the receiver or shot travel-time data read from the original travel-time file into the corresponding dedicated memory cache; if the cache is full, replacing travel-time data that have expired;
if no travel-time data in the dedicated memory cache for receiver travel times or the dedicated memory cache for shot travel times have expired, making the following judgment:
if the priority of the receiver or shot travel-time data to be added is higher than the priority of the travel-time data already in the corresponding dedicated memory cache, replacing the lowest-priority travel-time data in that cache;
if the priority of the receiver or shot travel-time data to be added is not higher than the priority of the travel-time data in the corresponding dedicated memory cache, evicting the travel-time data that were added earliest into the next-level buffer.
8. A travel-time data caching device, characterized in that the caching device comprises: a receiver for receiving seismic trace data, a processor, a memory, and a computer program that is stored on the memory and can run on the processor, wherein the processor, when executing the program, can implement the following steps:
traversing the seismic trace data and collecting statistics on the travel-time information used by each seismic trace, wherein the seismic trace data are common-offset gather data;
establishing a cache priority level and a location index for each seismic trace;
sorting the seismic traces with shot number and trace number as keys to form multiple small shot gathers, the traces in each small shot gather sharing one shot travel-time volume;
setting up travel-time data buffers of multiple priority levels and, while the travel-time data are in use, saving each travel-time dataset to the corresponding buffer according to its priority level;
when prefetching travel-time data, searching the buffers in order of priority;
refreshing the travel-time data in the buffers based on the search result.
9. The travel-time data caching device according to claim 8, characterized in that, when prefetching travel-time data, searching the buffers in order of priority comprises: first searching in the buffer with the highest priority level and, if the required travel-time data are not found, searching in the next-level buffer; if the required travel-time data are not found in any of the travel-time data buffers, reading them directly from the original travel-time file.
10. The travel-time data caching device according to claim 8, characterized in that the travel-time data buffers of the multiple priority levels comprise: a local disk buffer, a shared memory buffer, a dedicated memory cache for shot travel times, and a dedicated memory cache for receiver travel times.
CN201710505971.5A 2017-06-28 2017-06-28 Travel time data caching method and device Active CN109144405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710505971.5A CN109144405B (en) 2017-06-28 2017-06-28 Travel time data caching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710505971.5A CN109144405B (en) 2017-06-28 2017-06-28 Travel time data caching method and device

Publications (2)

Publication Number Publication Date
CN109144405A true CN109144405A (en) 2019-01-04
CN109144405B CN109144405B (en) 2021-05-25

Family

ID=64805451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710505971.5A Active CN109144405B (en) 2017-06-28 2017-06-28 Travel time data caching method and device

Country Status (1)

Country Link
CN (1) CN109144405B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110865947A (en) * 2019-11-14 2020-03-06 中国人民解放军国防科技大学 Cache management method for prefetching data
CN112748466A (en) * 2019-10-30 2021-05-04 中国石油天然气集团有限公司 Travel time field data processing method and device based on Fresnel body

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050143921A1 (en) * 2003-12-12 2005-06-30 Exxonmobil Upstream Research Company Method for seismic imaging in geologically complex formations
CN102841379A (en) * 2012-09-06 2012-12-26 中国石油大学(华东) Method for analyzing pre-stack time migration and speed based on common scatter point channel set
CN103605162A (en) * 2013-10-12 2014-02-26 中国石油天然气集团公司 Method and device for earthquake detection united combination simulation response analysis and based on earthquake data
CN103901468A (en) * 2014-03-18 2014-07-02 中国石油集团川庆钻探工程有限公司地球物理勘探公司 Seismic data processing method and device
CN104133240A (en) * 2014-07-29 2014-11-05 中国石油天然气集团公司 Large-scale collateral kirchhoff prestack depth migration method and device
CN106842304A (en) * 2017-01-03 2017-06-13 中国石油天然气集团公司 A kind of prestack depth migration method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050143921A1 (en) * 2003-12-12 2005-06-30 Exxonmobil Upstream Research Company Method for seismic imaging in geologically complex formations
CN102841379A (en) * 2012-09-06 2012-12-26 中国石油大学(华东) Method for analyzing pre-stack time migration and speed based on common scatter point channel set
CN103605162A (en) * 2013-10-12 2014-02-26 中国石油天然气集团公司 Method and device for earthquake detection united combination simulation response analysis and based on earthquake data
CN103901468A (en) * 2014-03-18 2014-07-02 中国石油集团川庆钻探工程有限公司地球物理勘探公司 Seismic data processing method and device
CN104133240A (en) * 2014-07-29 2014-11-05 中国石油天然气集团公司 Large-scale collateral kirchhoff prestack depth migration method and device
CN106842304A (en) * 2017-01-03 2017-06-13 中国石油天然气集团公司 A kind of prestack depth migration method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
亢永敢 et al.: "A Hadoop-based parallel algorithm for Kirchhoff prestack time migration", 《石油地球物理勘探》 (Oil Geophysical Prospecting) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748466A (en) * 2019-10-30 2021-05-04 中国石油天然气集团有限公司 Travel time field data processing method and device based on Fresnel body
CN112748466B (en) * 2019-10-30 2024-03-26 中国石油天然气集团有限公司 Fresnel-based travel time field data processing method and device
CN110865947A (en) * 2019-11-14 2020-03-06 中国人民解放军国防科技大学 Cache management method for prefetching data
CN110865947B (en) * 2019-11-14 2022-02-08 中国人民解放军国防科技大学 Cache management method for prefetching data

Also Published As

Publication number Publication date
CN109144405B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN103218435B (en) Method and system for clustering Chinese text data
CN103631730B (en) The cache optimization method that internal memory calculates
CN110187383B (en) Method for rapidly sorting COV (coherent optical) gathers of offshore wide-azimuth seismic data
CN112180433B (en) Method and device for picking up first arrival wave of earthquake
CN109144405A (en) 2019-01-04 Travel time data caching method and device
CN107451233A (en) Storage method of the preferential space-time trajectory data file of time attribute in auxiliary storage device
CN108228110A (en) A kind of method and apparatus for migrating resource data
CN111381275A (en) First arrival picking method and device for seismic data
CN115327616B (en) Automatic positioning method for mine microseism focus driven by massive data
JP2019212243A (en) Learning identification device and learning identification method
JP2019212171A (en) Learning device and learning method
CN105359142B (en) Hash connecting method and device
CN111638551A (en) Seismic first-motion wave travel time chromatography method and device
CN105447519A (en) Model detection method based on feature selection
US11567952B2 (en) Systems and methods for accelerating exploratory statistical analysis
CN104459781A (en) Three-dimensional pre-stack seismic data random noise degeneration method
CN109657197A (en) A kind of pre-stack depth migration calculation method and system
CN103901468B (en) Seismic data processing method and device
CN112099082B (en) Seismic folding wave travel time inversion method for coplanar element common azimuth gather
CN104849751B (en) The method of Prestack seismic data imaging
CN106199693B (en) Geological data normal-moveout spectrum automatic pick method and device
CN113534259A (en) Vibroseis efficient acquisition real-time prestack time migration imaging method
CN113157605B (en) Resource allocation method, system, storage medium and computing device for two-level cache
Münchmeyer PyOcto: A high-throughput seismic phase associator
Wu et al. Neist: a neural-enhanced index for spatio-temporal queries

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant