CN107180153A - Method and system for realizing full waveform inversion using MPI - Google Patents

Method and system for realizing full waveform inversion using MPI

Info

Publication number
CN107180153A
CN107180153A (application CN201610141098.1A)
Authority
CN
China
Prior art keywords
velocity
parallel processes
task
MPI
calculation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610141098.1A
Other languages
Chinese (zh)
Inventor
朱成宏
罗明秋
董宁
陈业全
魏哲枫
刘玉金
徐蔚亚
张春涛
高鸿
庞海玲
张建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Petroleum and Chemical Corp
Sinopec Exploration and Production Research Institute
Original Assignee
China Petroleum and Chemical Corp
Sinopec Exploration and Production Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Petroleum and Chemical Corp and Sinopec Exploration and Production Research Institute
Priority to CN201610141098.1A
Publication of CN107180153A
Current legal status: Pending

Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00 — Subject matter not provided for in other main groups of this subclass

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and system for realizing full waveform inversion using MPI. In the invention, each of a plurality of MPI parallel processes is regarded as a master process: each MPI parallel process is configured to obtain, by accessing a shared disk, the task it currently has to perform and the seismic data required for that task, and to return the calculation result of the task to the corresponding calculation result file, and different MPI parallel processes are configured not to communicate with one another. With the invention, full waveform inversion can accommodate dynamic changes in the hardware, the computing architecture becomes more stable, heterogeneous devices can be supported, and hardware cost is reduced; in addition, the limitation that the network communication bottleneck between MPI parallel processes places on full-waveform-inversion efficiency is eliminated.

Description

Method and system for realizing full waveform inversion using MPI
Technical field
The present invention relates to the field of seismic data imaging, and more particularly to a method and system for realizing full waveform inversion (FWI) using MPI.
Background art
Full waveform inversion is a key, cutting-edge technology in seismic imaging. Because its computational load is enormous, it usually relies on the parallel computation of a large high-performance computer cluster. Even on such a cluster, a single project typically runs for weeks or months. How to parallelize full waveform inversion so that a project can survive hardware failures of the cluster while still computing efficiently has therefore become a key difficulty in industrializing full-waveform-inversion technology.
All full-waveform-inversion parallel implementations disclosed to date are realized with MPI (Message Passing Interface). An MPI parallel implementation is typically a master node coordinating a number of slave nodes in real time. As shown in Fig. 1, each slave node is configured to execute the gradient calculation and the step-length calculation of single shots in sequence, while the master node stacks the gradients of the many shots and determines the final step length from the per-shot step lengths. Throughout the computation shown in Fig. 1, data are exchanged between the master node and the slave nodes, so there is real-time network communication between the nodes.
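For orientation, the following minimal sketch (Python with mpi4py) illustrates the conventional master/slave scheme of Fig. 1 described above. The per-shot routines, shot count, model size and the final model update are placeholders for this sketch and are not taken from the patent.

```python
# Minimal sketch of the prior-art master/slave MPI scheme of Fig. 1.
# compute_shot_gradient / compute_shot_step are placeholder stubs; the model
# update at the end is schematic only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_shots = 1000                          # number of shots in the work area (example value)
model = np.zeros((201, 201))            # current velocity model (placeholder)

def compute_shot_gradient(model, shot): # placeholder single-shot gradient kernel
    return np.zeros_like(model)

def compute_shot_step(model, shot):     # placeholder single-shot step-length kernel
    return 0.0

local_grad = np.zeros_like(model)       # partial sums over the shots handled by this node
local_step = 0.0
for shot in range(rank, n_shots, size): # static round-robin shot assignment
    local_grad += compute_shot_gradient(model, shot)
    local_step += compute_shot_step(model, shot)

# The master node (rank 0) stacks the per-shot gradients and step lengths over the
# network; these collective calls are the real-time communication criticised below.
total_grad = comm.reduce(local_grad, op=MPI.SUM, root=0)
total_step = comm.reduce(local_step, op=MPI.SUM, root=0)
if rank == 0:
    model -= (total_step / n_shots) * total_grad   # schematic velocity update
```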
In practical applications, this approach has several notable defects:
1) Dynamic addition and removal of hardware is not supported: once an MPI parallel job has been launched, compute nodes can be neither added to nor removed from the computation. A large data-processing center, however, has a great many compute nodes and a great many jobs running at once, so the number of available nodes fluctuates strongly; resources can be fully exploited only if devices can be added and removed dynamically.
2) Poor stability: under this MPI scheme a large number of compute nodes calculate simultaneously and depend on one another, and a full-waveform-inversion run is long (several weeks to several months), so as soon as one node suffers a hardware fault the entire computation fails.
3) Heterogeneous devices are not supported: this MPI scheme cannot run on a heterogeneous cluster; it requires all compute nodes to be identical, and if they differ the job runs at the efficiency of the slowest node. In production, nodes are purchased at different times and computer hardware evolves quickly, so a cluster inevitably consists of different node types and is ill-suited to this kind of MPI parallel computation.
4) High hardware cost: the MPI parallel implementation places high demands on the speed and stability of the network between cluster nodes. Users are forced to deploy InfiniBand and similar equipment to raise network communication efficiency, and to buy more expensive hardware to raise system stability, which clearly increases the procurement and deployment cost of the high-performance computing equipment.
5) Low efficiency for large-scale parallel computation: the parallel efficiency of this MPI scheme drops as the number of compute nodes grows; once the node count exceeds ten, the bottleneck effect of MPI on computational efficiency becomes apparent.
Therefore, although the MPI parallel method described above is currently in general use for implementing full-waveform-inversion technology, its defects seriously constrain the further development of the technology.
Summary of the invention
The present invention proposes a system that uses MPI to realize full waveform inversion and overcomes the defects described above. The invention also provides a corresponding method.
According to one aspect of the invention, a method for realizing full waveform inversion using MPI is proposed. The method includes: configuring each MPI parallel process of a plurality of MPI parallel processes to obtain, by accessing a shared disk, the task the MPI parallel process currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file, with different MPI parallel processes configured not to communicate with one another. The shared disk is used to store a shared status file and a shared data body: the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process, and the storage path of the calculation result file of each MPI parallel process's current task; the shared data body contains the seismic data required for the full waveform inversion.
According to another aspect of the invention, a system for realizing full waveform inversion using MPI is proposed. The system includes a shared disk and a plurality of MPI parallel processes. The shared disk is used to store a shared status file and a shared data body: the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process, and the storage path of the calculation result file of each MPI parallel process's current task; the shared data body contains the seismic data required for the full waveform inversion. Each MPI parallel process of the plurality of MPI parallel processes is configured to obtain, by accessing the shared disk, the task the MPI parallel process currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file, and different MPI parallel processes are configured not to communicate with one another.
In each aspect of the invention, every MPI parallel process is regarded as a master process: it obtains the task to be performed and exchanges data by accessing the shared disk, and different MPI parallel processes do not communicate with one another. Full waveform inversion can therefore accommodate dynamic changes in the hardware, the computing architecture becomes more stable, heterogeneous devices can be supported, and hardware cost is reduced; in addition, the limitation that the network communication bottleneck between MPI parallel processes places on full-waveform-inversion efficiency is eliminated.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following more detailed description of exemplary embodiments of the invention with reference to the accompanying drawings, in which the same reference numbers generally denote the same parts.
Fig. 1 is a schematic diagram of realizing full waveform inversion using MPI in the prior art.
Fig. 2 is a schematic diagram of a distributed parallel implementation of full waveform inversion according to an exemplary embodiment of the invention.
Fig. 3 is a schematic diagram of a system for realizing full waveform inversion using MPI according to an embodiment of the invention.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Although the drawings show preferred embodiments, the invention may be embodied in various forms and should not be limited by the embodiments set forth herein; rather, these embodiments are provided so that the disclosure is thorough and complete and fully conveys the scope of the invention to those skilled in the art.
Embodiment 1
The invention discloses a method for realizing full waveform inversion using MPI. The method performs full waveform inversion using a shared disk and a plurality of MPI parallel processes. Each MPI parallel process is configured to obtain, by accessing the shared disk, the task it currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file; different MPI parallel processes are configured not to communicate with one another. The shared disk is used to store a shared status file and a shared data body: the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process, and the storage path of the calculation result file of each MPI parallel process's current task; the shared data body contains the seismic data required for the full waveform inversion.
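As an illustration only, the shared status file described above might be laid out as in the following sketch. The JSON encoding, the field names and the paths are assumptions made for this sketch; the patent does not prescribe a file format.

```python
# Illustrative layout of the shared status file on the shared disk.
# Field names, JSON encoding and paths are assumptions made for this sketch.
import json

status = {
    "max_cycles": 50,                  # maximum cycle count of the full waveform inversion
    "current_cycle": 3,                # current cycle count
    "process_tasks": {                 # task currently performed by each MPI parallel process
        "rank_0": "gradient",
        "rank_1": "gradient",
    },
    "result_files": {                  # storage path of each process's current result file
        "rank_0": "/shared/fwi/results/cycle_003/rank_0.npy",
        "rank_1": "/shared/fwi/results/cycle_003/rank_1.npy",
    },
    "shots_done": [],                  # shots already processed in the current phase
}

with open("/shared/fwi/status.json", "w") as f:   # assumed mount point of the shared disk
    json.dump(status, f, indent=2)
```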
In this embodiment, every MPI parallel process is regarded as a master process: it obtains the task to be performed and exchanges data by accessing the shared disk, and different MPI parallel processes do not communicate with one another. Full waveform inversion can therefore accommodate dynamic changes in the hardware, the computing architecture becomes more stable, heterogeneous devices can be supported, and hardware cost is reduced; in addition, the limitation that the network communication bottleneck between MPI parallel processes places on full-waveform-inversion efficiency is eliminated.
The task may be one of: gradient calculation, step-length calculation, or velocity update. The gradient calculation may include: calculating the velocity-update gradient of a single shot, stacking the calculated single-shot gradient onto the current interim total gradient to obtain a new interim total gradient, and returning the new interim total gradient to the corresponding calculation result file. The step-length calculation may include: calculating the velocity-update step length of a single shot, updating the current interim total step length with the calculated single-shot step length to obtain a new interim total step length, and returning the new interim total step length to the corresponding calculation result file. The velocity update includes calculating the updated single-shot velocity from the velocity update amount of the current cycle obtained from the shared disk, and returning the updated single-shot velocity to the corresponding calculation result file. As is known to those skilled in the art, the velocity update amount of the current cycle can be obtained from the final total gradient obtained after all shots in the work area have been traversed and the total step length obtained after all shots in the work area have been traversed.
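A minimal sketch of the gradient-calculation task described above, assuming the interim total gradient is kept as a NumPy file on the shared disk; compute_shot_gradient is a placeholder, and the read-stack-write sequence would in practice be protected by the file lock described later.

```python
# Sketch of the gradient-calculation task: stack a single-shot gradient onto the
# current interim total gradient and return the new interim total to the result file.
# compute_shot_gradient is a placeholder; interim_total_path is assumed to end in ".npy".
import numpy as np

def compute_shot_gradient(model, shot):                # placeholder single-shot gradient kernel
    return np.zeros_like(model)

def gradient_task(model, shot, interim_total_path):
    shot_grad = compute_shot_gradient(model, shot)     # single-shot velocity-update gradient
    try:
        interim_total = np.load(interim_total_path)    # current interim total gradient
    except FileNotFoundError:
        interim_total = np.zeros_like(shot_grad)       # first shot of this cycle
    interim_total += shot_grad                         # stack this shot's gradient
    np.save(interim_total_path, interim_total)         # return the new interim total gradient
    return interim_total
```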
After the gradient calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file may be rewritten as step-length calculation; after the step-length calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file is rewritten as velocity update. In other words, the gradient-calculation task is first traversed over all shots in the work area, then the step-length-calculation task is traversed over all shots in the work area, and then the velocity-update task is traversed over all shots in the work area.
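The sketch below shows one way the task entries in the shared status file could be rewritten once every shot in the work area has been traversed. The field names follow the earlier status-file sketch and are assumptions; the read-modify-write would again run under the file lock described below, which is omitted here for brevity.

```python
# Sketch: switch the task of each process (gradient -> step -> update) once all
# shots in the work area have been traversed in the current phase.
import json

def advance_phase_if_done(status_path, n_shots):
    with open(status_path, "r+") as f:
        status = json.load(f)
        if len(status["shots_done"]) < n_shots:
            return                                            # shots remain in the current phase
        order = {"gradient": "step", "step": "update"}
        current = next(iter(status["process_tasks"].values())) # all processes share the phase here
        next_task = order.get(current)
        if next_task is None:
            return                                            # "update" is the last phase of a cycle
        for rank in status["process_tasks"]:
            status["process_tasks"][rank] = next_task         # rewrite the task of each process
        status["shots_done"] = []                             # restart the shot traversal
        f.seek(0)
        json.dump(status, f, indent=2)
        f.truncate()
```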
Full waveform inversion may be terminated when either of the following conditions is met: the velocity update amount of the current cycle is smaller than a given threshold, or the current cycle count reaches the maximum cycle count. If neither condition is met, the next cycle is started with the updated velocity as the new initial velocity model.
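A sketch of the termination test described above; the threshold value and the measure used for the velocity update amount are illustrative choices, not specified by the patent.

```python
# Sketch of the termination test: stop when the velocity update amount of this cycle
# is below the given threshold or when the maximum cycle count is reached; otherwise
# the updated velocity becomes the initial model of the next cycle.
import numpy as np

def fwi_should_stop(velocity_update, current_cycle, max_cycles, threshold=1e-3):
    if np.max(np.abs(velocity_update)) < threshold:   # update amount below the given threshold
        return True
    if current_cycle >= max_cycles:                   # maximum cycle count reached
        return True
    return False
```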
The plurality of MPI parallel processes may access the shared status file and the shared data body on the shared disk by means of file locks. A file lock is a file read-write mechanism that allows only one MPI parallel process to access a given file at any moment, which guarantees the uniqueness and correctness of the file version.
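A minimal sketch of such a file lock using POSIX advisory locks (fcntl.flock), assuming the shared disk's file system honours advisory locks across nodes; whether it does depends on the cluster's storage setup and is not addressed by the patent.

```python
# Sketch of an exclusive file lock around the shared status file, so that only one
# MPI parallel process reads or writes it at any moment.
import fcntl
import json
from contextlib import contextmanager

@contextmanager
def locked(path):
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # block until the exclusive lock is granted
        try:
            yield f
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release the lock

# Example: an atomic read-modify-write of the shared status file.
with locked("/shared/fwi/status.json") as f:
    status = json.load(f)
    status["current_cycle"] += 1           # e.g. start the next cycle
    f.seek(0)
    json.dump(status, f, indent=2)
    f.truncate()
```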
Fig. 2 shows a schematic diagram of a distributed parallel implementation of full waveform inversion according to an exemplary embodiment of the invention.
Embodiment 2
Fig. 3 shows a schematic diagram of a system for realizing full waveform inversion using MPI according to an embodiment of the invention. In this embodiment the system includes a shared disk 101 and MPI parallel processes 201, 202 and 203. The shared disk 101 is used to store a shared status file and a shared data body: the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process, and the storage path of the calculation result file of each MPI parallel process's current task; the shared data body contains the seismic data required for the full waveform inversion. The MPI parallel processes 201, 202 and 203 are each configured to obtain, by accessing the shared disk 101, the task the process currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file. The MPI parallel processes 201, 202 and 203 are configured not to communicate with one another. Fig. 3 shows only three MPI parallel processes 201, 202 and 203 as an example; those skilled in the art will appreciate that this is for illustration only and does not limit the number of MPI parallel processes.
The task may be one of: gradient calculation, step-length calculation, or velocity update. The gradient calculation may include: calculating the velocity-update gradient of a single shot, stacking the calculated single-shot gradient onto the current interim total gradient to obtain a new interim total gradient, and returning the new interim total gradient to the corresponding calculation result file. The step-length calculation may include: calculating the velocity-update step length of a single shot, updating the current interim total step length with the calculated single-shot step length to obtain a new interim total step length, and returning the new interim total step length to the corresponding calculation result file. The velocity update may include calculating the updated single-shot velocity from the velocity update amount of the current cycle obtained from the shared disk, and returning the updated single-shot velocity to the corresponding calculation result file. As is known to those skilled in the art, the velocity update amount of the current cycle can be obtained from the final total gradient and the total step length obtained after all shots in the work area have been traversed.
After the gradient calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file may be rewritten as step-length calculation; after the step-length calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file may be rewritten as velocity update.
Full waveform inversion may be terminated when either of the following conditions is met: the velocity update amount of the current cycle is smaller than a given threshold, or the current cycle count reaches the maximum cycle count. If neither condition is met, the next cycle is started with the updated velocity as the new initial velocity model.
The plurality of MPI parallel processes may access the shared status file and the shared data body on the shared disk by means of file locks. A file lock is a file read-write mechanism that allows only one MPI parallel process to access a given file at any moment, which guarantees the uniqueness and correctness of the file version.
Because the distributed MPI parallel processes according to the present invention have no tight dependence on one another, the invention offers the following advantages in practice:
1) Dynamic addition and removal of hardware is supported: the nodes participating in the computation can be added and removed dynamically, and each node or process can adjust its work entry point through the shared status file and shared data body (see the sketch after this list).
2) Good stability: because the compute nodes are not coupled to one another, the whole job continues even if some nodes fail and drop out during a long full-waveform-inversion run.
3) Heterogeneous devices are supported: nodes of different computational efficiency update the shared status file and regulate their own workload, without interfering with the computation of other nodes.
4) Low hardware cost: the parallel computation places comparatively low demands on system stability and network communication, so low-cost hardware can be used.
5) Parallel efficiency is not affected by the number of nodes: since there is no inter-node network-communication bottleneck, adding nodes causes no extra loss of computational efficiency.
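The following sketch illustrates point 1: a process, including one started after the job has begun, claims its next unprocessed shot from the shared status file under the file lock, so nodes can join or leave without affecting the others. The "shots_claimed" field and the locking scheme are assumptions of this sketch, not taken from the patent.

```python
# Sketch: a process (possibly newly added while the job is running) picks its work
# entry point by claiming the next unprocessed shot from the shared status file.
import fcntl
import json

def claim_next_shot(status_path, n_shots):
    with open(status_path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)              # serialize access to the status file
        try:
            status = json.load(f)
            unavailable = set(status["shots_done"]) | set(status.get("shots_claimed", []))
            for shot in range(n_shots):
                if shot not in unavailable:
                    status.setdefault("shots_claimed", []).append(shot)
                    f.seek(0)
                    json.dump(status, f, indent=2)
                    f.truncate()
                    return shot                    # this shot now belongs to the caller
            return None                            # no shots left in the current phase
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```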
Table 1 below compares the key technical indicators of the prior-art MPI-parallel full waveform inversion with those of the MPI-parallel full waveform inversion according to the present invention.
| Technical indicator | Prior art | Present invention |
| --- | --- | --- |
| Dynamic addition/removal of compute nodes | Not possible | Possible |
| Failure of some nodes | Whole job is interrupted | Whole job continues |
| Support for heterogeneous compute nodes | Not possible | Possible |
| Node hardware performance requirement | High | Low |
| Parallel efficiency and average per-node efficiency | Efficiency decreases as nodes are added | Efficiency unchanged as nodes are added |
Various embodiments of the present invention have been described above. The description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or improvements over technology available in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for realizing full waveform inversion using MPI, the method comprising:
configuring each MPI parallel process of a plurality of MPI parallel processes to obtain, by accessing a shared disk, the task the MPI parallel process currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file, different MPI parallel processes being configured not to communicate with one another; wherein
the shared disk is used to store a shared status file and a shared data body, the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process and the storage path of the calculation result file of the current task of each MPI parallel process, and the shared data body contains the seismic data required for the full waveform inversion.
2. The method according to claim 1, wherein the task comprises gradient calculation, step-length calculation and velocity update;
the gradient calculation comprises: calculating the velocity-update gradient of a single shot, stacking the calculated single-shot gradient onto the current interim total gradient to obtain a new interim total gradient, and returning the new interim total gradient to the corresponding calculation result file;
the step-length calculation comprises: calculating the velocity-update step length of a single shot, updating the current interim total step length with the calculated single-shot step length to obtain a new interim total step length, and returning the new interim total step length to the corresponding calculation result file;
the velocity update comprises: calculating the updated single-shot velocity from the velocity update amount of the current cycle obtained from the shared disk, and returning the updated single-shot velocity to the corresponding calculation result file.
3. The method according to claim 2, wherein
after the gradient calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file is rewritten as step-length calculation;
after the step-length calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file is rewritten as velocity update.
4. The method according to claim 1, wherein
the full waveform inversion is terminated when either of the following conditions is met: the velocity update amount of the current cycle is smaller than a given threshold, or the current cycle count reaches the maximum cycle count;
otherwise, the next cycle is started with the updated velocity as the new initial velocity model.
5. The method according to claim 1, wherein the plurality of MPI parallel processes access the shared status file and the shared data body on the shared disk by means of file locks.
6. A system for realizing full waveform inversion using MPI, the system comprising a shared disk and a plurality of MPI parallel processes, wherein:
the shared disk is used to store a shared status file and a shared data body, the shared status file stores the maximum cycle count and the current cycle count of the full waveform inversion, the task currently performed by each MPI parallel process and the storage path of the calculation result file of the current task of each MPI parallel process, and the shared data body contains the seismic data required for the full waveform inversion;
each MPI parallel process of the plurality of MPI parallel processes is configured to obtain, by accessing the shared disk, the task the MPI parallel process currently has to perform and the seismic data required for that task, and to return the calculation result obtained by performing the task to a corresponding calculation result file, and different MPI parallel processes are configured not to communicate with one another.
7. The system according to claim 6, wherein the task comprises gradient calculation, step-length calculation and velocity update;
the gradient calculation comprises: calculating the velocity-update gradient of a single shot, stacking the calculated single-shot gradient onto the current interim total gradient to obtain a new interim total gradient, and returning the new interim total gradient to the corresponding calculation result file;
the step-length calculation comprises: calculating the velocity-update step length of a single shot, updating the current interim total step length with the calculated single-shot step length to obtain a new interim total step length, and returning the new interim total step length to the corresponding calculation result file;
the velocity update comprises: calculating the updated single-shot velocity from the velocity update amount of the current cycle obtained from the shared disk, and returning the updated single-shot velocity to the corresponding calculation result file.
8. The system according to claim 7, wherein
after the gradient calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file is rewritten as step-length calculation;
after the step-length calculation has been performed for every shot in the work area, the task currently performed by some or all of the MPI parallel processes in the shared status file is rewritten as velocity update.
9. The system according to claim 6, wherein
the full waveform inversion is terminated when either of the following conditions is met: the velocity update amount of the current cycle is smaller than a given threshold, or the current cycle count reaches the maximum cycle count;
otherwise, the next cycle is started with the updated velocity as the new initial velocity model.
10. The system according to claim 6, wherein the plurality of MPI parallel processes access the shared status file and the shared data body on the shared disk by means of file locks.
CN201610141098.1A 2016-03-11 2016-03-11 Method and system for realizing full waveform inversion using MPI Pending CN107180153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610141098.1A 2016-03-11 2016-03-11 Method and system for realizing full waveform inversion using MPI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610141098.1A 2016-03-11 2016-03-11 Method and system for realizing full waveform inversion using MPI

Publications (1)

Publication Number Publication Date
CN107180153A true CN107180153A (en) 2017-09-19

Family

ID=59830845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610141098.1A Method and system for realizing full waveform inversion using MPI (Pending) 2016-03-11 2016-03-11

Country Status (1)

Country Link
CN (1) CN107180153A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100161911A1 (en) * 2007-05-31 2010-06-24 Eric Li Method and apparatus for mpi program optimization
CN103076627A (en) * 2011-10-26 2013-05-01 中国石油化工股份有限公司 Smoothing optimization method of velocity model
CN102902514A (en) * 2012-09-07 2013-01-30 西安交通大学 Large-scale parallel processing method of moving particle semi-implicit method
CN104463010A (en) * 2014-10-31 2015-03-25 华为技术有限公司 File lock implementation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张文生 et al., "Frequency multiscale full waveform velocity inversion", Chinese Journal of Geophysics (《地球物理学报》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344135A * 2018-10-18 2019-02-15 中国海洋石油集团有限公司 A file-lock-based parallel seismic processing job scheduling method with automatic load balancing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170919)