GB2518172A - Improvements in or relating to optimisation techniques - Google Patents

Improvements in or relating to optimisation techniques

Info

Publication number
GB2518172A
Authority
GB
United Kingdom
Prior art keywords
objective
solutions
scores
worker
evaluating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1316208.6A
Other versions
GB201316208D0 (en)
Inventor
Andrew Spence
Jack Leslie Talbot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Epistem Ltd
Original Assignee
Epistem Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Epistem Ltd filed Critical Epistem Ltd
Priority to GB1316208.6A priority Critical patent/GB2518172A/en
Publication of GB201316208D0 publication Critical patent/GB201316208D0/en
Priority to US14/482,910 priority patent/US10402727B2/en
Publication of GB2518172A publication Critical patent/GB2518172A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V11/00Prospecting or detecting by methods combining techniques covered by two or more of main groups G01V1/00 - G01V9/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/17Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • G06F17/175Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method of multidimensional data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Disclosed is a method of optimising an objective function by evaluating an objective score for each of a set of candidate solutions. The solutions are then updated based on the scores, and the evaluating and updating is repeated until a stop criterion is met. The evaluating of the objective scores is done by a set of worker processors, and a master processor receives the scores and updates the solutions asynchronously. Each candidate solution may be a particle in a particle swarm optimisation algorithm, or may be a chromosome in an evolutionary or genetic algorithm, which may be a differential evolution algorithm. The objective function may relate to the history matching of a hydrocarbon reservoir model, i.e. oil wells.

Description

IMPROVEMENTS IN OR RELATING TO OPTIMISATION TECHNIQUES
BACKGROUND
The present disclosure relates to improvements in or relating to optimisation techniques, and in particular to systems and methods for the optimisation of problems that take varying times to compute, and to new systems and methods for history matching analysis of fluid reservoirs such as hydrocarbon reservoirs.
Optimisation techniques involving metaheuristic and stochastic techniques such as particle swarm optimisation (PSO) and differential evolution (DE) algorithms are computationally expensive. Even with parallelisation of the processing of candidate solutions, processing times for these techniques can be very slow.
SUMMARY OF THE DISCLOSURE
According to a first aspect of the disclosure there is provided a computerised method for optimising an objective function comprising: evaluating an objective score for each of a plurality of candidate solutions; updating one or more solutions based on the evaluated objective score(s); repeating said steps of evaluating and updating until a stopping criterion is met; wherein: evaluating objective scores is carried out across a plurality of worker processors; and a master processor receives objective scores from the worker processors and updates said one or more solutions asynchronously.
Optionally, each worker processor evaluates one candidate solution at a time.
Optionally, work is allocated to the worker processors when they are idle.
Optionally, during an evaluation period for a first candidate solution, one or more successive evaluation and update cycles can be performed for one or more other candidate solutions.
Optionally, each candidate solution is a particle in a particle swarm optimisation algorithm.
Optionally, each candidate solution is a chromosome in an evolutionary or a genetic algorithm.
Optionally, said evolutionary algorithm is a differential evolution algorithm.
Optionally, said optimising an objective function is carried out for history matching of a hydrocarbon reservoir model.
According to a second aspect of the disclosure there is provided apparatus for optimising an objective function comprising a master processor and one or more worker processors, wherein said worker processors are arranged to evaluate an objective score for each of a plurality of candidate solutions; and said master processor is arranged to receive objective scores from the worker processors and asynchronously update one or more solutions based on the evaluated objective score(s); and wherein said evaluation and updating are repeated until a stopping criterion is met.
According to a third aspect of the disclosure there is provided a method of building a model of a hydrocarbon reservoir by optimising an objective function that scores simulated results against a model; comprising evaluating an objective score for each of a plurality of simulated solutions; updating one or more agents based on the evaluated objective score(s) from their candidate solutions; repeating said steps of evaluating and updating until a stopping criterion is met; wherein: evaluating objective scores is carried out across a plurality of worker processors; and a master processor receives objective scores from the worker processors and updates said one or more solutions asynchronously.
Optionally, the objective function comprises an error or misfit between simulated and observed data.
According to a fourth aspect of the disclosure there is provided apparatus for building a model of a hydrocarbon reservoir by optimising an objective function that scores simulated results against a model; comprising a master processor and one or more worker processors, wherein said worker processors are arranged to evaluate an objective score for each of a plurality of candidate solutions; and said master processor is arranged to receive objective scores from the worker processors and asynchronously update one or more solutions based on the evaluated objective score(s); and wherein said evaluation and updating are repeated until a stopping criterion is met.
According to a fifth aspect of the disclosure there is provided a computer program product that includes instructions that, when run on a computer, enable it to act as a master processor or as a worker processor; such that a master processor in combination with one or more worker processors can be provided for the performance of the methods or the provision of the apparatus of the previous aspects.
The computer program product may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fibre optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infra-red, radio, and microwave, then the coaxial cable, fibre optic cable, twisted pair, DSL, or wireless technologies such as infra-red, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The instructions or code associated with a computer-readable medium of the computer program product may be executed by a computer, e.g., by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described below, by way of example only, with reference to the accompanying drawings, in which: Figure 1 shows a flowchart outlining the operation of a sampling algorithm; Figure 2 shows a sequence diagram for synchronous implementation of a sampling algorithm in parallel; Figure 3 shows a sequence diagram for synchronous implementation of a sampling algorithm in parallel where evaluation times vary; Figure 4 shows a sequence diagram for asynchronous implementation of a sampling algorithm in parallel where evaluation times vary; and Figure 5 shows a sequence diagram for asynchronous implementation of a sampling algorithm in parallel where evaluation times vary, illustrating parallel evaluation of an agent.
DETAILED DESCRIPTION
Sampling algorithms utilise a population of candidate solutions, also known as "agents", to explore the topography of an optimisation problem with a view to finding a globally optimal solution. Each sampling algorithm (SA) guides the search process in its own unique way but a general approach is illustrated in Figure 1. First of all, each agent is randomly initialised by randomly defining its attributes, for example its position. An agent's position is evaluated to obtain a corresponding score. The agent's attributes are then updated based on the corresponding score, which may result in an updated population state; then the agents are moved again and the evaluation and update of agent attributes and population state is repeated until a stopping criterion is reached.
The procedure detailed in Figure 1 could be implemented on a single thread or a single processor (CPU), because each agent is moved and evaluated in turn. Whilst each calculation has access to the very latest state of the population, a serial implementation is dependent on the evaluation time of each candidate solution, so in practice the optimisation process can take a large amount of time for complex evaluation stages.
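By way of illustration only, the following Python sketch shows such a serial implementation of the general loop of Figure 1. The function and variable names (objective, optimise, max_evaluations) and the random-perturbation move are illustrative assumptions and are not prescribed by this disclosure.

```python
import random

# A minimal serial sketch of the generic sampling-algorithm loop of Figure 1.
def optimise(objective, n_agents=10, n_dims=3, max_evaluations=1000):
    # Randomly initialise each agent's attributes (here, just a position).
    agents = [[random.uniform(-1.0, 1.0) for _ in range(n_dims)]
              for _ in range(n_agents)]
    best_position, best_score = None, float("inf")
    evaluations = 0
    while evaluations < max_evaluations:          # stopping criterion
        for i, position in enumerate(agents):
            score = objective(position)           # evaluate the agent's position
            evaluations += 1
            if score < best_score:                # update the population state
                best_position, best_score = position[:], score
            # Move the agent: here a small random perturbation of its position.
            agents[i] = [x + random.gauss(0.0, 0.1) for x in position]
    return best_position, best_score

# Example usage on a simple sphere function.
if __name__ == "__main__":
    print(optimise(lambda p: sum(x * x for x in p)))
```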
The speed of optimising a model can be improved by carrying out the evaluation of candidates in parallel, across multiple threads and/or multiple CPUs. The concept of an agent readily lends itself to this idea. Figure 2 illustrates an implementation of a sampling algorithm on multiple CPUs. The potential for speeding up processing is clear as agent candidate solutions may be evaluated simultaneously.
The figure illustrates a master CPU issuing instructions to two worker CPUs which each process one agent, although it is to be appreciated that this is for ease of illustration and in fact any number of worker CPUs can be provided, one CPU can process multiple agents, and one agent can be processed by different CPUs in successive evaluation cycles. Every agent candidate solution in the population must be evaluated and the agent updated (i.e. a complete generation) before the algorithm continues. This synchronous approach is in keeping with the design and intention of a sampling algorithm which evolves only when the population state is up to date, ensuring the latest information is utilised when an agent generates a new candidate solution. This works well when the evaluation time is likely to be around the same on each CPU.
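By way of illustration only, the following Python sketch shows the synchronous, generation-based evaluation of Figure 2 using a process pool. The objective function is a placeholder assumption; the point of the sketch is that the blocking map() call acts as the synchronisation barrier, so the master cannot update the population until every worker has returned its score.

```python
from multiprocessing import Pool

def objective(position):
    # Placeholder objective: a simple sphere function.
    return sum(x * x for x in position)

def run_generation(positions, n_workers=2):
    with Pool(processes=n_workers) as pool:
        # map() is a barrier: it returns only when all evaluations of the
        # generation are complete, so one slow candidate delays the whole
        # update (the situation shown in Figure 3).
        scores = pool.map(objective, positions)
    return scores

if __name__ == "__main__":
    print(run_generation([[0.5, 0.5], [1.0, -1.0]]))
```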
However, in many cases, for example reservoir history matching evaluation (i.e. simulation), there can be a wide variation in processing times, due to three potential reasons: 1. The candidate solution that the agent generates (the agent's position) creates a more computationally complex model which takes longer to evaluate.
2. Varying CPU power across a non-homogeneous cluster/grid which results in unpredictable evaluation time.
3. Increased parallelisation of a problem results in bottlenecks within data access on the network which can increase wait times within the evaluation process.
This means that synchronous parallel implementations are subject to potentially significant delays which can counteract any speed gains that a parallel implementation has over a serial implementation. This is illustrated in Figure 3, where the master CPU must wait for CPU #2 to complete its processing before it can allow the sampling algorithm to continue.
This issue may be addressed by implementing the sampling algorithm update and candidate solution generation in an asynchronous manner. With an asynchronous implementation the population state is updated with every new evaluation that is returned. However, because work is continuing in parallel it is possible that candidate solutions are generated by an agent whilst its attributes are out of date with respect to itself, i.e. it has generated multiple future positions before the previous ones have been returned from evaluation. So although the system is as up to date as possible, it will generate new work without having all work in progress returned with a score. This contrasts with a synchronous implementation, which would halt the generation of new positions until work in progress has been evaluated. The effect of this is that the asynchronous implementation can be described as more explorative, although when the evaluations return, the trajectory of the agent can sharply change. All agents within the system continue to generate positions whilst their previous ones are being evaluated.
An example of this is shown in figure 4. In this case when a worker CPU is idle or has capacity for evaluating an agent, it sends a work request to the master CPU. The master CPU responds to work requests by either issuing work to the worker CPU or informing the worker CPU that it should die: this arrangement is based on the worker farm design pattern.
This asynchronous approach means that prolonged processing in the evaluation phase on one CPU does not disrupt processing on the other CPUs. As illustrated in figure 4, even though the evaluation of agent position P2 by worker CPU#2 is prolonged, the population state and agent attributes can be updated once the evaluation of the first agent position P1 is complete, and the worker CPU#1 can be deployed to evaluate the next position P1' of the first agent and update the population state and agent attributes once that evaluation is complete, all before the evaluation of the agent position P2 is completed.
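By way of illustration only, the following Python sketch shows one way such an asynchronous arrangement could be realised with a process pool: the master updates its state as each score returns and immediately issues new work to the now-idle worker, built from the latest state even while other evaluations are still in flight. The explicit work-request and "die" messages of the worker farm pattern are abstracted away by the pool; objective(), propose() and the bookkeeping names are illustrative assumptions.

```python
import random
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def objective(position):
    # Placeholder objective: a simple sphere function.
    return sum(x * x for x in position)

def propose(best):
    # Generate the next candidate from the latest known state, which may be
    # stale with respect to evaluations still in flight.
    return [x + random.gauss(0.0, 0.1) for x in best]

def optimise_async(n_workers=4, max_evaluations=200):
    best = [1.0, 1.0, 1.0]
    best_score = objective(best)
    evaluations = 1
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # Seed every worker with an initial candidate solution.
        pending = {}
        for _ in range(n_workers):
            candidate = propose(best)
            pending[pool.submit(objective, candidate)] = candidate
        while pending:
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                candidate = pending.pop(fut)
                score = fut.result()
                evaluations += 1
                if score < best_score:            # update the population state
                    best, best_score = candidate, score
                if evaluations < max_evaluations:
                    # The idle worker gets new work at once, generated from the
                    # latest state, without waiting for the other evaluations.
                    new_candidate = propose(best)
                    pending[pool.submit(objective, new_candidate)] = new_candidate
    return best, best_score

if __name__ == "__main__":
    print(optimise_async())
```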
Implementing sampling algorithms in an asynchronous manner has the potential to influence the way in which SA algorithms function in practice. It can be inferred from the sequence diagram in Figure 4 that new candidate solutions are generated for idle processors without requiring the evaluation of all candidate solutions to return. Although the generation of P1' would benefit from knowing the score from P2, the SA generates the best position possible based on the state it has at the time.
However, the use of incomplete data is not an issue, as many more evaluations are possible within the same time frame. As evaluations are returned, the algorithm updates to reflect this new information: even though the population state may be well beyond the evaluation returned, if the returned score demonstrates that the continued development of the agent was poor, the new data are used to guide subsequent candidate solutions. This update ensures that although an evaluation took a long time, its score is still adhered to as per the algorithm definition.
The algorithm is not adversely affected by allowing an agent to continue past an evaluation, should more work be needed, without its score being returned. This is because of the stochastic nature of the problem space. By introducing the incomplete data effect we enable the algorithm to explore the solution space more widely without being drawn into local (and not always the global) minima too quickly. This makes the algorithm more efficient in terms of processing time but also more robust to the unpredictable solution surface.
It is also possible for a specific agent to be evaluated on multiple CPUs at the same time. Figure 5 illustrates this scenario. Whilst CPU#2 is evaluating the position of agent 2, CPU#1 requests work and is given another position for agent 2. Apart from a prolonged evaluation phase on a CPU, this could also occur in the case where there are more CPUs available for processing than the number of agents.
The effect of this simultaneous agent processing is specific to each particular SA.
In one embodiment the SA is a particle swarm optimisation (PSO) algorithm. In this case the agent is modelled as a particle in a swarm of particles exploring the parameter space. The effect of requesting a secondary position would be to move the particle on from its primary position (the one being evaluated). Upon completion of multiple evaluations, only the global best position for the swarm or the local best position for the particle are updated if an improved score has been returned. The implication is that the particle will not return to its primary position were that found to be better. However, subsequent moves will be influenced by the new global or local best positions such that its direction of movement is affected by its earlier position.
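By way of illustration only, the following Python sketch shows a simplified asynchronous PSO update consistent with the above: when a score is returned, the particle's personal best and the swarm's global best are replaced only if the returned score is an improvement, while new positions continue to be generated from the latest bests. The coefficient values and all names here are illustrative assumptions.

```python
import random

W, C1, C2 = 0.7, 1.4, 1.4   # inertia and acceleration coefficients (assumed)

def on_score_returned(particle, score, position, swarm):
    # Called by the master when a worker returns a score for 'position'.
    if score < particle["best_score"]:
        particle["best_score"], particle["best_pos"] = score, position
    if score < swarm["best_score"]:
        swarm["best_score"], swarm["best_pos"] = score, position

def next_position(particle, swarm):
    # Generate the particle's next candidate from the latest personal and
    # global bests, even if earlier candidates are still being evaluated.
    new_velocity, new_position = [], []
    for x, v, pb, gb in zip(particle["pos"], particle["vel"],
                            particle["best_pos"], swarm["best_pos"]):
        v_new = (W * v
                 + C1 * random.random() * (pb - x)
                 + C2 * random.random() * (gb - x))
        new_velocity.append(v_new)
        new_position.append(x + v_new)
    particle["vel"], particle["pos"] = new_velocity, new_position
    return new_position

if __name__ == "__main__":
    p = {"pos": [1.0, 1.0], "vel": [0.0, 0.0],
         "best_pos": [1.0, 1.0], "best_score": 2.0}
    s = {"best_pos": [1.0, 1.0], "best_score": 2.0}
    candidate = next_position(p, s)
    on_score_returned(p, sum(x * x for x in candidate), candidate, s)
    print(s["best_score"])
```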
In one embodiment the SA is a differential evolution (DE) algorithm. In this case the agent is modelled as a chromosome in a population exploring the parameter space. The effect of requesting a secondary position would be to produce (breed) another trial chromosome.
Upon completion of any evaluation the chromosome's current position is only replaced by that of the trial chromosome if an improved score is apparent. In this case there is scope for a chromosome to move to a sibling (a trial position generated by its current position's parent) location were it found to be an improvement. Similarly, should any of its previous position's trial positions show an improvement over its new position, it will move to them (i.e. its nephew).
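By way of illustration only, the following Python sketch shows a simplified DE step consistent with the above: a trial chromosome is bred from three other population members, and the target chromosome is replaced only if the trial scores better. The parameter names F and CR follow common DE usage; everything else here is an illustrative assumption.

```python
import random

F, CR = 0.8, 0.9   # differential weight and crossover rate (assumed values)

def breed_trial(target, population):
    # Pick three distinct chromosomes other than the target.
    a, b, c = random.sample([p for p in population if p is not target], 3)
    j_rand = random.randrange(len(target))   # ensure at least one mutated gene
    return [a[i] + F * (b[i] - c[i]) if (random.random() < CR or i == j_rand)
            else target[i]
            for i in range(len(target))]

def select(target, target_score, trial, trial_score):
    # Replace the current chromosome only if the trial shows an improvement.
    if trial_score < target_score:
        return trial, trial_score
    return target, target_score

if __name__ == "__main__":
    score = lambda x: sum(v * v for v in x)
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(6)]
    target = population[0]
    trial = breed_trial(target, population)
    print(select(target, score(target), trial, score(trial)))
```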
These techniques may have application in various areas. One such area is history matching for fluid reservoirs, such as hydrocarbon reservoirs.
A hydrocarbon reservoir may comprise a petroleum reservoir, a natural gas reservoir, or a reservoir with a combination of petroleum and natural gas. Petroleum comprises a mixture of various hydrocarbons and other compounds in liquid form, while natural gas comprises a mix of hydrocarbon gases. The term "hydrocarbon" as used herein is a generic term for petroleum and/or natural gas unless the context dictates otherwise.
In hydrocarbon reservoirs, the hydrocarbons are generally stored within rock formations and are extracted by drilling a well bore and recovering the hydrocarbons through various techniques which include (as a non-exhaustive set of examples): natural underground pressure, injection of water, acid or gas, or (for petroleum) by injection of steam to reduce petroleum viscosity to make it easier to extract.
Hydrocarbon reservoirs can be modelled in computer-based reservoir simulation models.
Many different techniques are available but generally speaking the reservoir is modelled as a set of three-dimensional volume elements, known as cells. A set of parameters is then modelled on a per-cell basis, which define various geological or petrochemical properties for each cell. Variations in these parameters between cells are used to model the variations in the rock formations of the reservoir. Example properties that can be defined include porosity, permeability and water saturation.
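By way of illustration only, the following Python sketch shows one possible per-cell parameterisation of such a model, assuming a regular grid of cells each carrying a few geological properties. The field names, units and grid indexing are illustrative assumptions and not prescribed by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    porosity: float          # fraction of bulk volume that is pore space
    permeability_md: float   # permeability in millidarcies (assumed unit)
    water_saturation: float  # fraction of pore space filled with water

# A toy 2 x 2 x 1 reservoir model indexed by (i, j, k) cell coordinates.
model = {
    (i, j, 0): Cell(porosity=0.2, permeability_md=150.0, water_saturation=0.3)
    for i in range(2) for j in range(2)
}
print(len(model), "cells")
```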
Parameters for the reservoir model are determined by various inspection techniques such as seismic surveys and sampling of specific points or areas of the reservoir. However, there is inherently some uncertainty in the model due to the inhomogeneity of the reservoir and the relatively sparse set of samples or readings that can be taken.
One use of a reservoir model is to predict the hydrocarbon output of a reservoir over time.
However, because of the uncertainties involved in the formation of the model, it is difficult to make accurate estimations. Therefore, it has been proposed to compare simulations with historical production data, in a process known as history matching. A sampling algorithm acts to minimise an objective function which is representative of the misfit (also referred to as mismatch) between measured and estimated parameters. In single objective history matching, a single match quality number is defined that is used by the algorithm to seek better solutions. In multi-objective history matching, the objective is broken down into separate match quality components and an optimal trade-off between objectives is determined via a Pareto optimisation.
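By way of illustration only, the following Python sketch shows one possible single-objective history-matching misfit, assuming observed and simulated production series sampled at the same time steps. The normalisation by a single sigma value is an illustrative choice, not one prescribed by the disclosure.

```python
def misfit(observed, simulated, sigma=1.0):
    # Sum of squared, normalised differences between measured and simulated
    # data; the sampling algorithm seeks candidate models that minimise this.
    return sum(((o - s) / sigma) ** 2 for o, s in zip(observed, simulated))

# Example: comparing a simulated oil-rate series against field measurements.
print(misfit([100.0, 95.0, 90.0], [102.0, 93.0, 91.0]))
```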
The techniques of this disclosure may be used for history matching of reservoir models. Each agent of the sampling algorithms employed may represent one possible solution to the objective function to be minimised.
Employing the asynchronous methods of the present disclosure in the context of history matching for hydrocarbon reservoirs means that simulations can be performed more quickly without affecting the reliability of the predictions. The computing power of multiple CPUs is used efficiently for the simulation of candidate solutions.
The history matching may be used to simulate the hydrocarbon output from a reservoir. Alternatively, the history matching can be used for reservoir optimisation, running simulations to predict the effect of proposed new wells or boreholes on hydrocarbon production.
It is also to be appreciated that the techniques of this disclosure can be applied to any optimisation problem in general, including, without limitation and as examples only, other areas such as the modelling of nuclear waste and of other subsurface reservoirs, such as carbon capture and storage.
Various modifications and improvements can be made to the above without departing from
the scope of the disclosure.

Claims (13)

  1. A computerised method for optimising an objective function comprising: evaluating an objective score for each of a plurality of candidate solutions; updating one or more solutions based on the evaluated objective score(s); repeating said steps of evaluating and updating until a stopping criterion is met; wherein: evaluating objective scores is carried out across a plurality of worker processors; and a master processor receives objective scores from the worker processors and updates said one or more solutions asynchronously.
  2. The method of claim 1, wherein each worker processor evaluates one candidate solution at a time.
  3. The method of claim 1 or claim 2, wherein work is allocated to the worker processors when they are idle.
  4. The method of any preceding claim, wherein, during an evaluation period for a first candidate solution, one or more successive evaluation and update cycles can be performed for one or more other candidate solutions.
  5. The method of any preceding claim, wherein each candidate solution is a particle in a particle swarm optimisation algorithm.
  6. The method of any of claims 1 to 4, wherein each candidate solution is a chromosome in an evolutionary or a genetic algorithm.
  7. The method of claim 6, wherein said evolutionary algorithm is a differential evolution algorithm.
  8. The method of any preceding claim, wherein said optimising an objective function is carried out for history matching of a hydrocarbon reservoir model.
  9. Apparatus for optimising an objective function comprising a master processor and one or more worker processors, wherein said worker processors are arranged to evaluate an objective score for each of a plurality of candidate solutions; and said master processor is arranged to receive objective scores from the worker processors and asynchronously update one or more solutions based on the evaluated objective score(s); and wherein said evaluation and updating are repeated until a stopping criterion is met.
  10. A method of building a model of a hydrocarbon reservoir by optimising an objective function that scores simulated results against a model; comprising evaluating an objective score for each of a plurality of simulated solutions; updating one or more agents based on the evaluated objective score(s) from their candidate solutions; repeating said steps of evaluating and updating until a stopping criterion is met; wherein: evaluating objective scores is carried out across a plurality of worker processors; and a master processor receives objective scores from the worker processors and updates said one or more solutions asynchronously.
  11. The method of claim 10, wherein the objective function comprises an error or misfit between simulated and observed data.
  12. Apparatus for building a model of a hydrocarbon reservoir by optimising an objective function that scores simulated results against a model; comprising a master processor and one or more worker processors, wherein said worker processors are arranged to evaluate an objective score for each of a plurality of candidate solutions; and said master processor is arranged to receive objective scores from the worker processors and asynchronously update one or more solutions based on the evaluated objective score(s); and wherein said evaluation and updating are repeated until a stopping criterion is met.
  13. A computer program product that includes instructions that, when run on a computer, enable it to act as a master processor or as a worker processor; such that a master processor in combination with one or more worker processors can be provided for the performance of the methods or the provision of any of the preceding claims.
GB1316208.6A 2013-09-11 2013-09-11 Improvements in or relating to optimisation techniques Withdrawn GB2518172A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1316208.6A GB2518172A (en) 2013-09-11 2013-09-11 Improvements in or relating to optimisation techniques
US14/482,910 US10402727B2 (en) 2013-09-11 2014-09-10 Methods for evaluating and simulating data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1316208.6A GB2518172A (en) 2013-09-11 2013-09-11 Improvements in or relating to optimisation techniques

Publications (2)

Publication Number Publication Date
GB201316208D0 GB201316208D0 (en) 2013-10-23
GB2518172A true GB2518172A (en) 2015-03-18

Family

ID=49487082

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1316208.6A Withdrawn GB2518172A (en) 2013-09-11 2013-09-11 Improvements in or relating to optimisation techniques

Country Status (1)

Country Link
GB (1) GB2518172A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105137717A (en) * 2015-08-05 2015-12-09 哈尔滨工业大学 Compact Differential Evolution algorithm-based soft-measurement method for mechanical parameters of mask table micropositioner of lithography machine
CN106991122A (en) * 2017-02-27 2017-07-28 四川大学 A kind of film based on particle cluster algorithm recommends method
CN108389209A (en) * 2018-02-28 2018-08-10 江西理工大学 Using the grape image partition method of multi-mode Differential Evolution Algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002003716A1 (en) * 2000-06-30 2002-01-10 British Telecommunications Public Limited Company A memetic method for multiobjective optimisation
US20020099929A1 (en) * 2000-11-14 2002-07-25 Yaochu Jin Multi-objective optimization
US20100293121A1 (en) * 2009-05-15 2010-11-18 The Aerospace Corporation Systems and methods for parallel processing with infeasibility checking mechanism
US20110010142A1 (en) * 2008-08-19 2011-01-13 Didier Ding Petroleum Reservoir History Matching Method Using Local Parametrizations
WO2012003007A1 (en) * 2010-06-29 2012-01-05 Exxonmobil Upstream Research Company Method and system for parallel simulation models

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002003716A1 (en) * 2000-06-30 2002-01-10 British Telecommunications Public Limited Company A memetic method for multiobjective optimisation
US20020099929A1 (en) * 2000-11-14 2002-07-25 Yaochu Jin Multi-objective optimization
US20110010142A1 (en) * 2008-08-19 2011-01-13 Didier Ding Petroleum Reservoir History Matching Method Using Local Parametrizations
US20100293121A1 (en) * 2009-05-15 2010-11-18 The Aerospace Corporation Systems and methods for parallel processing with infeasibility checking mechanism
WO2012003007A1 (en) * 2010-06-29 2012-01-05 Exxonmobil Upstream Research Company Method and system for parallel simulation models

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105137717A (en) * 2015-08-05 2015-12-09 哈尔滨工业大学 Compact Differential Evolution algorithm-based soft-measurement method for mechanical parameters of mask table micropositioner of lithography machine
CN106991122A (en) * 2017-02-27 2017-07-28 四川大学 A kind of film based on particle cluster algorithm recommends method
CN106991122B (en) * 2017-02-27 2021-02-02 四川大学 Movie recommendation method based on particle swarm optimization
CN108389209A (en) * 2018-02-28 2018-08-10 江西理工大学 Using the grape image partition method of multi-mode Differential Evolution Algorithm

Also Published As

Publication number Publication date
GB201316208D0 (en) 2013-10-23

Similar Documents

Publication Publication Date Title
US9187984B2 (en) Methods and systems for machine-learning based simulation of flow
US8386227B2 (en) Machine, computer program product and method to generate unstructured grids and carry out parallel reservoir simulation
US9177086B2 (en) Machine, computer program product and method to carry out parallel reservoir simulation
US20180181693A1 (en) Method and System for Stable and Efficient Reservoir Simulation Using Stability Proxies
US20130096900A1 (en) Methods and Systems For Machine - Learning Based Simulation of Flow
US20130096899A1 (en) Methods And Systems For Machine - Learning Based Simulation of Flow
MX2015003997A (en) Analyzing microseismic data from a fracture treatment.
AU2011283190A1 (en) Methods and systems for machine-learning based simulation of flow
EP2614460B1 (en) Machine, computer program product and method to generate unstructured grids and carry out parallel reservoir simulation
MX2014008897A (en) Systems and methods for estimating fluid breakthrough times at producing well locations.
CA3027332A1 (en) Runtime parameter selection in simulations
GB2518172A (en) Improvements in or relating to optimisation techniques
WO2017171576A1 (en) Method for predicting perfomance of a well penetrating
US11112514B2 (en) Systems and methods for computed resource hydrocarbon reservoir simulation and development
CN113052968A (en) Knowledge graph construction method of three-dimensional structure geological model
Rezaei et al. Applications of Machine Learning for Estimating the Stimulated Reservoir Volume (SRV)
CN116911216B (en) Reservoir oil well productivity factor assessment and prediction method
Balabaeva et al. Optimal Wells Placement to Maximize the Field Coverage Using Derivative-Free Optimization
US20240037413A1 (en) Computer-implemented method and computer-readable medium for drainage mesh optimization in oil and/or gas producing fields
US20230140905A1 (en) Systems and methods for completion optimization for waterflood assets
US20210340858A1 (en) Systems and Methods for Dynamic Real-Time Hydrocarbon Well Placement
Parashar et al. Dynamic Data-Driven Application Systems for Reservoir Simulation-Based Optimization: Lessons Learned and Future Trends
Zhang et al. A Multi-Stencil Fast Marching Method with Path Correction for Efficient Reservoir Simulation and Automated History Matching
Golestan et al. Towards Bayesian Quantification of Permeability in Micro-scale Porous Structures–The Database of Micro Networks
CN117875154A (en) Land natural gas hydrate energy production prediction method, system and electronic equipment

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)