WO2019190003A1 - Method for optimizing locations of multiple wellbores in oil and gas reservoir by using artificial neural network


Info

Publication number
WO2019190003A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
neural network
artificial neural
grid
input data
Prior art date
Application number
PCT/KR2018/009385
Other languages
French (fr)
Korean (ko)
Inventor
장일식
오세은
강현정
Original Assignee
조선대학교 산학협력단
Application filed by 조선대학교 산학협력단 (Industry-Academic Cooperation Foundation of Chosun University)
Publication of WO2019190003A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • E FIXED CONSTRUCTIONS
    • E21 EARTH DRILLING; MINING
    • E21B EARTH DRILLING, e.g. DEEP DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
    • E21B41/00 Equipment or details not covered by groups E21B15/00 - E21B40/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Electricity, gas or water supply

Definitions

  • The present invention relates to the field of energy and resources, and in particular to a method of optimizing the locations of multiple wellbores in an oil and gas reservoir through modeling.
  • For the production of oil and gas, a simulation is conducted using a computational model of the oil field or gas field.
  • For example, the gas field is partitioned into a three-dimensional grid composed of a plurality of cells.
  • A gas content is assigned to each cell, and the amount of oil and gas released as the cell pressure declines is specified as a function.
  • More specifically, in the case of a coalbed methane (CBM) gas field, once a production well is constructed the cell pressure is drawn down and methane gas desorbs from the coal.
  • The amount of desorption can be specified as a function of pressure.
  • The desorbed methane gas is then transported along the fracture network in the grid to the production well.
  • The state of each cell changes over time, so the modeling is dynamic.
  • The entire development process of an oil and gas field is modeled using such a computational model.
  • Research on artificial neural networks (ANNs) has recently been very active.
  • An artificial neural network imitates the human brain and is composed of an input layer, hidden layers, and an output layer; it builds a model by learning the nonlinear relationship between input values and output values. Once the nonlinear input-output model is built, it is used to predict the output when a new input is given.
  • The input-output data used for training are obtained from the existing computational model.
  • The neural network has the advantage of processing inputs and deriving outputs very quickly.
  • Continued use of a neural network model also has the advantage that, as data accumulate, the nonlinear input-output relationship can be modeled ever more precisely.
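  • As an illustration of such a proxy (a minimal sketch under assumed names, not the patent's implementation), the snippet below trains a small input-hidden-output network on samples produced by a stand-in for the computational model and then predicts outputs for new candidate inputs; the toy simulator and the two-dimensional input encoding are assumptions.

```python
# Minimal sketch of an ANN proxy trained on computational-model samples.
# toy_simulator() is a stand-in; in practice the training outputs would come
# from the reservoir production-forecast computational model.
import numpy as np
from sklearn.neural_network import MLPRegressor  # input / hidden / output layers

rng = np.random.default_rng(0)

def toy_simulator(x):
    """Hypothetical placeholder for the computational model: maps an input
    encoding (e.g. a well-location descriptor) to a production value."""
    return np.sin(x[:, 0]) + 0.5 * np.cos(x[:, 1])

# Training data: input encodings and the simulator's outputs for them.
X_train = rng.uniform(0.0, 10.0, size=(20, 2))
y_train = toy_simulator(X_train)

proxy = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
proxy.fit(X_train, y_train)

# Once trained, the proxy predicts outputs for new inputs far faster than
# running the computational model itself.
X_new = rng.uniform(0.0, 10.0, size=(5, 2))
print(proxy.predict(X_new))
```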
  • The present invention provides a modeling method that can stably derive a global solution even when the search space grows exponentially, and in particular, by applying this method to the modeling of an oil and gas reservoir, aims to provide a method for determining the optimal locations of multiple wellbores installed in the reservoir.
  • The grid size in the search space is gradually reduced until it matches, within a predetermined range, the size of the basic grid, and steps (b) through (e) are performed repeatedly until the optimal locations of the multiple wellbores are determined in step (c).
  • The artificial neural network model is used to select the optimal locations of the multiple wellbores in the oil and gas reservoir, and by applying the artificial neural network sequentially and in stages, the optimal locations can be selected very effectively even when the number of data items is very large.
  • In addition, when the optimal locations are selected by applying the present invention, the probability of agreeing with the result of the computational model is higher than when the artificial neural network model is applied directly in a single pass, so the reliability of the optimal location selection increases.
  • FIG. 1 is a schematic flowchart of a method for optimizing the locations of multiple wellbores in an oil and gas reservoir using an artificial neural network according to an embodiment of the present invention.
  • FIG. 2 is an image showing the reservoir that is the subject of the present invention, gridded with a conventional computational model.
  • FIGS. 3 to 7 illustrate the method according to the present invention.
  • FIG. 3 shows the basic grid system and the first-stage grid system.
  • FIG. 4 shows the top-ranked grids evaluated through the artificial neural network on the first-stage grid system, and the area to be expanded in the second stage.
  • FIG. 5 shows the search space determined in the second stage and the second-stage grid system.
  • FIG. 6 shows the top-ranked grids found through the artificial neural network in the second stage and the area to be expanded.
  • FIG. 7 shows the search space and the third-stage grid system in the third stage.
  • FIG. 8 is a flowchart of the "method for calculating a global solution using an artificial neural network".
  • FIG. 9 is an image of the reservoir used as an application example of the method for calculating a global solution using the artificial neural network.
  • FIGS. 10 to 13 are graphs comparing the output prediction data produced by the artificial neural network model with the actual output data obtained by inputting the search group into the computational model, according to the method for calculating a global solution using an artificial neural network.
  • FIG. 14 shows the first-stage grid system of the reservoir used to explain an embodiment of the present invention.
  • FIGS. 15(a) to 15(d) show the process of constructing and rebuilding the artificial neural network through the procedure shown in FIG. 8.
  • FIG. 16 shows the top-ranked grids (hatched) after the optimal locations are selected in the first-stage grid system.
  • FIG. 17 shows the second-stage grid system, and FIG. 18 shows the result of obtaining a global solution for the second-stage grid system.
  • FIG. 19 shows the result of constructing the third-stage grid system.
  • FIGS. 20(a) to 20(d) illustrate the process of selecting the optimal locations in the third-stage grid system using the artificial neural network of FIG. 8.
  • FIG. 21 shows the optimal locations found by applying the process of FIGS. 20(a) to 20(d) to the third-stage grid system.
  • FIG. 22 compares, through the computational model, the production profiles of the six existing production wells with the production profiles that additionally include the two optimally located production wells.
  • The grid size in the search space is gradually reduced until it matches, within a predetermined range, the size of the basic grid, and steps (b) through (e) are performed repeatedly until the optimal locations of the multiple wellbores are determined in step (c).
  • When the search space is partitioned into grids larger than the basic grid in step (a) or step (e), a plurality of basic grids are bundled to form each single grid.
  • When the search space is reconstructed in step (d), the base area occupied by the top-ranked grids and an additional area extending from that base area into the surroundings within a predetermined range are together reconstructed into the new search space.
  • The additional area preferably extends only over a range that does not include the central portion of the grids immediately adjacent to the top-ranked grids. More specifically, when determining the additional area, it is preferable to expand it in units of the basic grid.
  • The analysis target data are data on the locations of the grids in which the multiple wellbores are installed.
  • The model building step may include: a first modeling step of constructing an artificial neural network model that predicts the data output from the computational model, using the training data; a second modeling step of obtaining output prediction data by inputting each data item included in the search group into the artificial neural network model; a third modeling step of determining, based on the output prediction data, whether the artificial neural network model should be rebuilt; and a fourth modeling step of determining new training data, which include the existing training data, and a new search group when the third modeling step determines that the artificial neural network model needs to be rebuilt.
  • The process is then repeated from the first modeling step using the new training data and the new search group.
  • The third modeling step may include: a first selection step of selecting, from the output prediction data calculated in the second modeling step, the output prediction data that meet a predetermined selection criterion; a selected-data extraction step of extracting the input data corresponding to each output prediction data item selected in the first selection step; a non-included data extraction step of extracting, from the input data extracted in the selected-data extraction step, the input data not included in the training data used to construct the artificial neural network; and a rebuild determination step of determining that the artificial neural network model does not need to be rebuilt if the number of input data items extracted in the non-included data extraction step is less than a predetermined criterion number, and that the model needs to be rebuilt if that number is equal to or greater than the criterion number.
  • The criterion number may be equal to the number of data items selected as input data from the analysis target data in the initial training data generation step.
  • The fourth modeling step may include: a second selection step of selecting some of the input data extracted in the non-included data extraction step; a data addition step of generating new training data by adding the input data selected in the second selection step to the training data; and a group addition step of setting the input data extracted in the non-included data extraction step, together with the input data included in the training data, as a new search group.
  • In the second selection step, it is preferable to select from the input data extracted in the non-included data extraction step as many data items as were selected as training data in the initial training data generation step, or a smaller number.
  • The data addition step may include: an additional-data acquisition step of obtaining the actual output data produced by inputting the input data selected in the second selection step into the computational model; and a data generation step of generating new training data by adding the input data selected in the second selection step and the actual output data acquired in the additional-data acquisition step to the training data.
  • The present invention relates to a modeling method for simulating which locations are best suited for installing a plurality of wellbores in a gas field, oil field or the like. The method according to the present invention is therefore implemented as software executed on a computer.
  • The plurality of wellbores may be, for example, a plurality of production wells for producing gas or oil.
  • They may also be a plurality of injection wells for injecting fluid into the reservoir in order to promote the production of gas or oil.
  • In carbon capture and storage (CCS) technology for storing carbon dioxide, they may be carbon dioxide injection wells.
  • The present invention mainly targets an oil field or gas field that is producing or under development, but it may also be applied to a reservoir used simply to store carbon dioxide, as in the CCS field.
  • The present invention is not necessarily limited to modeling for determining the locations of a plurality of wellbores, and may be used to solve various problems through modeling on a grid system composed of a plurality of grids. More specifically, it can be applied to any case in which the grids themselves serve as input data in the grid system. For example, in an embodiment of the present invention, two grids (their grid identification numbers) form one data set indicating in which grids the two wellbores are located.
  • FIG. 1 is a schematic flowchart of the location optimization method according to the present invention.
  • The location optimization method assumes a basic grid system.
  • A grid system here refers to a system in which a certain space is divided into a plurality of grids. In this example, the reservoir is composed of a plurality of grids arranged contiguously up, down, left and right. Each grid may be assigned an identification number, or a separate identifier according to its coordinates.
  • An example in which the reservoir is rendered as a basic grid system using a computational modeling program is shown in FIG. 2.
  • The reservoir is curved and consists of multiple layers, giving it a three-dimensional structure.
  • The six arrows indicate the production wells currently installed.
  • FIG. 3 is a simplified plan view of the basic grid system, provided to explain the present invention easily. The plane of an actual basic grid system generally consists of curved surfaces as in FIG. 2; in FIG. 3 it is represented as a simple plane and the number of grids is greatly reduced.
  • The smallest square grid, drawn with thin solid lines in FIG. 3, is the basic grid (a) of the basic grid system.
  • The computational model and grid system described above are widely used, for example for modeling remaining reserves and modeling production volumes.
  • In the first step, the entire (plan) area of the basic grid system is set as the search space.
  • The search space is the target area in which the plurality of wellbores can be installed. That is, in the first step the target region in which the multiple wellbores may be placed is set to its widest extent.
  • The basic grid system is made up of a plurality of basic grids (a); the first-stage grid system differs in that, within the search space, its grids are larger than the basic grid.
  • The first-stage grid (b) is a grid in which the basic grids (a) are bundled in a 5×5 group.
  • Its area is therefore 25 times that of the basic grid.
  • The first-stage grid (b) need only be larger than the basic grid (a); its size is determined by the user.
  • However, the first-stage grid (b) should be specified as a bundle of basic grids (a), for example a 5×5, 6×6, 7×7 or 5×7 group of basic grids (a).
  • The number of grids is then significantly reduced compared to the basic grid system. In FIG. 3 the basic grid system has 2,500 basic grids (a) in a 50×50 arrangement, whereas the first-stage grid system has 25 times fewer grids: 100 first-stage grids (b) in a 10×10 arrangement.
  • The present invention seeks the optimal locations for a plurality of wellbores.
  • A location here means a grid.
  • In the first-stage grid system there are first-stage grids (b) numbered 1 to 100, and the task is to evaluate which pair of grids is most advantageous for production from the wellbores.
  • The two grids in which the two wellbores are placed together form one data item.
  • The total number of data items is 100C2, i.e. 4,851 in this example. If the basic grid system were used instead of the first-stage grid system, the number of data items would grow sharply: using the basic grids (a) gives 2500C2, i.e. 3,123,750 data items.
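  • To make the reduction in the number of cases concrete, the short sketch below (an illustration only, not code from the patent) counts the unordered two-well placements on the 50×50 basic grid and on the 10×10 first-stage grid; the counts stated in the description can be somewhat smaller where grids that cannot host a well are excluded.

```python
# Counting candidate two-well placements at two grid resolutions (illustrative).
import math

n_basic = 50 * 50    # basic grids in the 50 x 50 basic grid system
n_stage1 = 10 * 10   # first-stage grids after bundling the basic grids 5 x 5

print(math.comb(n_basic, 2))   # unordered pairs of basic grids: 3,123,750 cases
print(math.comb(n_stage1, 2))  # unordered pairs of first-stage grids: a few thousand cases
```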
  • The present invention therefore proceeds step by step.
  • First, a plurality of basic grids (a) are bundled into a first-stage grid system to reduce the number of cases, and a global maximum is derived through the artificial neural network described later.
  • The search is then repeated while the grid size is reduced again. This is explained in detail below.
  • First, the first-stage grid system is constructed, and the locations of the multiple wellbores are evaluated through the artificial neural network model.
  • The optimal locations can be expressed as a ranking. For example, if the wellbore locations with the highest expected production are evaluated, the production can be quantified and the 4,851 data items above can be ranked.
  • The artificial neural network model is described in detail later.
  • FIG. 4 shows the top ranks, from first to fifth place, in the result obtained through the artificial neural network model. Since two first-stage grids (b) form one data item, ranks one to five would normally designate a total of ten first-stage grids (b). In FIG. 4 only nine first-stage grids (b) are marked, because one first-stage grid (b) appears in two data items: for example, (grids 14-94) and (grids 24-94) are both in the top five, and both contain grid 94.
  • The number of top ranks may be set arbitrarily by the user.
  • It may be limited to fifth place, as here, or extended to lower ranks.
  • In the figure, black marks the position where a production well would be located, taken as the representative point of that grid.
  • Next, the search space is set again.
  • The search space may be designated as the first-stage grids (b) that entered the top ranks in the evaluation through the preceding artificial neural network model.
  • In this example, however, the search space is extended to the dotted line outside the hatched first-stage grids (b).
  • Since the first-stage grid (b) consists of 5×5 basic grids (a), when the search space is extended to the surrounding grids it extends only up to two columns or two rows of basic grids (a).
  • In FIG. 4, the region extended to the dotted line covers only two columns and two rows of basic grids (a); it could, of course, be extended by only one column or one row.
  • That is, when the search space is extended, it is extended in units of the basic grid (a).
  • Within this search space, a second-stage grid system is constructed.
  • The second-stage grid (c) is formed by bundling the basic grids (a) in 3×3 units.
  • The important point here is that the grid size must be reduced gradually as the stages proceed. In the first stage the basic grids (a) were bundled 5×5, but in the second stage the grid size is reduced by bundling them 3×3.
  • The bundling unit should remain aligned with the basic grid (a). The search space is narrowed, while precision is sought by increasing the resolution within the narrowed area.
  • Since the second-stage grids (c) may partially extend beyond the search space, or may not completely fill it, the second-stage grid system can be understood as slightly expanding or slightly reducing the search space. In any case, it is preferable that all grid areas recorded in the top ranks be included.
  • The optimal locations are then evaluated using the artificial neural network model as in the first stage, and the result, according to the preset number of top ranks, is shown in FIG. 6.
  • The result is the seven second-stage grids (c) hatched in FIG. 6. The same process is now repeated: as shown in FIG. 7, the area is extended around the top-ranked grids to define a new search space, and a third-stage grid system is constructed.
  • The third-stage grid used in the third-stage grid system is the basic grid (a) itself. The optimal locations are again evaluated through the artificial neural network model, and the optimal locations of the multiple wellbores evaluated here are determined as the final result.
  • In FIG. 1, the basic grid system is already constructed at the "START" state.
  • First, the first-stage grid system (i = 1) is constructed.
  • At this point the size of the first-stage grid (e.g. a 5×5 bundle) is specified.
  • The artificial neural network model is then used to calculate the global solutions, which are expressed as a ranking.
  • That is, a plurality of basic grids are initially bundled into larger grids, and the process is repeated over successive stages while the grid size is gradually reduced.
  • The algorithm then proceeds to the next step.
  • A new search space is constructed based on the top ranks. "i" is then incremented by 1 to advance to the next stage, the grid system for that stage is constructed, and the evaluation through the artificial neural network is performed again. This process repeats cyclically.
  • The cycle ends when the grid of the current stage has the same size as the basic grid. If the grid sizes are the same, the global solution obtained from the neural network at that stage is determined as the final solution; that is, it is determined as the optimal locations at which the multiple wellbores are to be installed.
  • Even if the grid of the current stage is slightly larger than the basic grid, the cycle may be terminated once its size is within a predetermined range. The process does not necessarily have to be repeated until the current grid and the basic grid are exactly the same size; the permissible range may be somewhat extended.
  • The point of the present invention is to lower the uncertainty that the true optimal solution will not be found when the number of data items is too large to derive a global solution in a single pass through an artificial neural network.
  • First, a large grid is used to reduce the number of data items, i.e. the number of cases, so that the region containing the optimal locations is not missed.
  • The optimal locations are then detected precisely while the resolution is increased, and the global solution is finally found. This not only significantly increases the probability of finding the actual optimal solution but also simplifies the search process, which is a major advantage in practice. The overall staged procedure is sketched below.
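  • The sketch below outlines this coarse-to-fine loop under assumed names (an illustration, not the patented implementation): the ANN-based ranking is stubbed out, the expansion margin mirrors the two-row/two-column padding described above, and the stage bundle sizes follow the 5, 3, 1 schedule used in the embodiment described later.

```python
# Schematic skeleton of the staged coarse-to-fine search (illustrative only).
from itertools import combinations

def rank_pairs_with_ann(cells, top_k=5):
    """Stub for the ANN evaluation: rank all two-well placements over `cells`
    and return the cells appearing in the top-ranked pairs plus those pairs.
    A real implementation would use the proxy-model procedure of FIG. 8."""
    pairs = list(combinations(sorted(cells), 2))
    top_pairs = pairs[:top_k]                    # placeholder for the ANN ranking
    return {c for p in top_pairs for c in p}, top_pairs

def coarsen(cells, bundle):
    """Bundle basic-grid cells (i, j) into bundle x bundle stage grids."""
    return {(i // bundle, j // bundle) for i, j in cells}

def refine(stage_cells, bundle, margin, base):
    """Map top-ranked stage grids back to basic grids and pad the region by
    `margin` basic-grid rows/columns, clipped to the reservoir boundary."""
    fine = set()
    for bi, bj in stage_cells:
        for i in range(max(0, bi * bundle - margin), min(base, (bi + 1) * bundle + margin)):
            for j in range(max(0, bj * bundle - margin), min(base, (bj + 1) * bundle + margin)):
                fine.add((i, j))
    return fine

BASE = 50
search_space = {(i, j) for i in range(BASE) for j in range(BASE)}  # whole reservoir
best_pairs = []
for bundle, margin in [(5, 2), (3, 1), (1, 0)]:   # stage grid sizes; the last equals the basic grid
    stage_cells = coarsen(search_space, bundle) if bundle > 1 else search_space
    top_cells, best_pairs = rank_pairs_with_ann(stage_cells)
    if bundle == 1:                               # grid size equals the basic grid: stop
        break
    search_space = refine(top_cells, bundle, margin, BASE)

print(best_pairs[0])   # best two-well placement found at the finest stage
```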
  • To evaluate and determine the optimal locations of the multiple wellbores, an artificial neural network model is used. A genetic algorithm, a statistical algorithm or the like could be used in place of the artificial neural network model, but in this embodiment the artificial neural network model developed by the researchers of the present invention is used.
  • This artificial neural network model was filed in the Republic of Korea as Patent Application No. 10-2017-0017703, "Method of calculating a global solution using an artificial neural network", which had not yet been published at the time the present invention was filed.
  • The artificial neural network model used in the present invention is described below with reference to the accompanying drawings.
  • The present embodiment determines the optimal locations of multiple wellbores, but for convenience of description the selection of the optimal location of a single wellbore is taken as an example below.
  • FIG. 8 is a flowchart illustrating the method for calculating a global solution using an artificial neural network.
  • The method for calculating a global solution using an artificial neural network includes an initial training data generation step (S100), an initial search group generation step (S200), a model building step (S300), and a global solution calculation step (S400).
  • The initial training data generation step (S100) generates the training data for building the artificial neural network model.
  • The number of input data items to select may be determined according to the number of analysis target data items; selecting 10 or 20 input data items from the analysis target data is preferable.
  • An analysis target data item here means a grid. For example, when only one wellbore location is to be selected using the first-stage grid system described above, the number of cases, i.e. the number of analysis target data items, is 100. With multiple wellbores, the number of data items is far larger.
  • The computational model analyzes the analysis target data through computer simulation.
  • A computational model such as an oil and gas production simulation model, which calculates oil and gas production according to wellbore location, is applied.
  • In this way, accurate output data can be obtained.
  • The present invention aims to reduce the computation time for the global solution by using the trained artificial neural network model.
  • The selected input data and the actual output data are classified as training data for constructing the artificial neural network model and stored in a database (not shown).
  • The initial search group generation step (S200) classifies the analysis target data into a search group. In this step, all analysis target data, including the selected input data, are classified into the search group.
  • In the model building step (S300), an artificial neural network model that predicts the data output from the computational model is constructed using the training data, and whether to rebuild the model is determined based on the output data obtained when the search group is input to the artificial neural network model. If it is determined that the neural network model should be rebuilt, the training data are reset and the artificial neural network model is reconstructed.
  • The model building step includes the first to fourth modeling steps (S310 to S340).
  • The first modeling step (S310) constructs an artificial neural network model that predicts the data output from the computational model, using the training data.
  • The artificial neural network model is constructed from the training data using an algorithm for building artificial neural network models. Since such algorithms are in general use, a detailed description is omitted.
  • The second modeling step (S320) obtains output prediction data by inputting each data item included in the search group into the artificial neural network model.
  • That is, the operator inputs the data included in the search group into the artificial neural network model and obtains the output prediction data.
  • The third modeling step (S330) determines whether to rebuild the artificial neural network model based on the output prediction data, and includes a first selection step (S331), a selected-data extraction step (S332), a non-included data extraction step (S333), and a rebuild determination step (S334).
  • The first selection step (S331) selects, from the output prediction data calculated in the second modeling step (S320), the output prediction data that meet a predetermined selection criterion. If the output prediction data are numerical values such as oil and gas production, the output prediction data are ranked in descending or ascending order in the first selection step (S331), and the selection criterion is set as the ranks from first place down to a predetermined reference rank.
  • The reference rank is set to the product of the number of output prediction data items produced in the second modeling step (S320) and a preset calculation ratio.
  • The calculation ratio may be set to 20% or 30%, but is not limited thereto and may be set arbitrarily according to the number of analysis target data items. For example, when 100 output prediction data items are calculated in the second modeling step (S320) and the calculation ratio is set to 20%, the reference rank is 20th.
  • In that case, the output prediction data corresponding to ranks 1 to 20 in the determined ranking are selected.
  • When the output data with the largest value are sought, the output prediction data are ranked in descending order.
  • When the output data with the lowest value are to be calculated as the global solution, it is preferable to rank the output prediction data in ascending order.
  • The selected-data extraction step (S332) extracts the input data corresponding to the output prediction data selected in the first selection step (S331).
  • The non-included data extraction step (S333) extracts, from the input data extracted in the selected-data extraction step (S332), the input data (T) not included in the training data used to construct the artificial neural network.
  • The operator stores the extracted input data in the database.
  • The rebuild determination step (S334) determines whether to rebuild the artificial neural network model based on the number of input data items extracted in the non-included data extraction step (S333). If the number of input data items extracted in the non-included data extraction step (S333) is less than the predetermined criterion number, it is determined that rebuilding of the artificial neural network model is not necessary; if it is equal to or greater than the criterion number, it is determined that the neural network model needs to be rebuilt.
  • The criterion number may be set equal to the number of data items selected as input data from the analysis target data in the initial training data generation step (S100), but it is not limited thereto, and an appropriate number may be set arbitrarily according to the number of analysis target data items. For example, if 20 input data items are selected in the initial training data generation step (S100), 20 is applied as the criterion number.
  • In the fourth modeling step (S340), new training data, which include the existing training data, and a new search group are determined.
  • It includes a second selection step (S341), a data addition step (S342), and a group addition step (S343).
  • The second selection step (S341) selects some of the input data extracted in the non-included data extraction step (S333).
  • In this step, it is preferable to select from the input data extracted in the non-included data extraction step (S333) as many data items as were selected as training data from the analysis target data in the initial training data generation step (S100); the number is not limited thereto, however, and fewer data items than were selected as training data in the initial training data generation step (S100) may be selected instead.
  • The data addition step (S342) generates new training data by adding the input data selected in the second selection step (S341) to the training data, and includes an additional-data acquisition step (S344) and a data generation step (S345).
  • The additional-data acquisition step (S344) obtains the actual output data produced by inputting the input data selected in the second selection step (S341) into the computational model.
  • The computational model used here is the one used in the initial training data generation step (S100).
  • The data generation step (S345) generates new training data by adding the input data selected in the second selection step (S341) and the actual output data acquired in the additional-data acquisition step (S344) to the training data.
  • The group addition step (S343) sets the input data extracted in the non-included data extraction step (S333), together with the input data included in the training data, as a new search group.
  • Once the new training data and the new search group are determined, the model building step (S300) preferably repeats the first modeling step (S310) to the fourth modeling step (S340) using them.
  • When the first modeling step (S310) to the fourth modeling step (S340) have been repeated a number of times and the number of input data items extracted in the non-included data extraction step (S333) of the third modeling step (S330) becomes less than the predetermined criterion number, it is determined that rebuilding of the artificial neural network model is no longer necessary.
  • The global solution calculation step (S400) calculates the global solution once the model building step determines that rebuilding is not necessary, and includes an input data calculation step (S410), an output data calculation step (S420), and a completion step (S430).
  • The input data calculation step (S410) obtains the input data corresponding to the output prediction data selected in the first selection step (S331), i.e. the input data that, when input to the artificial neural network model, yield the selected output prediction data.
  • The output data calculation step (S420) calculates, using the computational model, the actual output data for the input data obtained in the input data calculation step (S410). That is, the input data obtained in the input data calculation step (S410) are input to the computational model to obtain the actual output data. The computational model used here is the one used in the initial training data generation step (S100).
  • The completion step (S430) determines the input data corresponding to the actual output data having the largest or smallest value among the actual output data calculated in the output data calculation step (S420) and the actual output data contained in all of the training data.
  • When the output data with the maximum value are calculated as the global solution, the input data corresponding to the actual output data with the largest value are determined; when the output data with the minimum value are calculated as the global solution, it is preferable to determine the input data corresponding to the actual output data with the smallest value. This whole procedure is sketched below.
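  • As an illustration of this loop (steps S100 to S400), the sketch below uses a toy one-dimensional "computational model" and a small scikit-learn network as the proxy; all names, sizes and the stand-in simulator are assumptions rather than the patented implementation.

```python
# Illustrative sketch of the proxy-refinement loop of FIG. 8 (S100-S400).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def computational_model(x):
    """Toy stand-in for the reservoir simulator: production vs. location index."""
    return np.sin(x / 15.0) + 0.001 * x

# S100: initial training data - a handful of candidates run through the simulator.
candidates = np.arange(100, dtype=float).reshape(-1, 1)    # analysis target data
idx = rng.choice(100, size=10, replace=False)
X_train, y_train = candidates[idx], computational_model(candidates[idx, 0])

search_group = candidates                                  # S200: initial search group
N_CRITERION, RATIO = 10, 0.3

while True:
    # S310: build (or rebuild) the ANN proxy from the current training data.
    proxy = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    proxy.fit(X_train, y_train)

    # S320-S331: predict over the search group and keep the top RATIO fraction.
    preds = proxy.predict(search_group)
    top = search_group[np.argsort(preds)[::-1][: int(len(search_group) * RATIO)]]

    # S332-S333: top-ranked inputs not yet contained in the training data (T).
    not_in_train = np.array([x for x in top
                             if not (X_train == x).all(axis=1).any()])

    # S334: once fewer than the criterion number remain, no rebuild is needed.
    if len(not_in_train) < N_CRITERION:
        break

    # S340: run some of them through the simulator, grow the training data, and
    # shrink the search group to these inputs plus the training-data inputs.
    new_X = not_in_train[:N_CRITERION]
    X_train = np.vstack([X_train, new_X])
    y_train = np.concatenate([y_train, computational_model(new_X[:, 0])])
    search_group = np.unique(np.vstack([not_in_train, X_train]), axis=0)

# S400: verify the remaining top candidates with the simulator and pick the best.
final_X = np.vstack([not_in_train, X_train]) if len(not_in_train) else X_train
final_y = computational_model(final_X[:, 0])
print("estimated optimal location index:", float(final_X[int(np.argmax(final_y)), 0]))
```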
  • In the application example, the existing production wells collecting oil and gas are located in the middle of the reservoir, and the problem is to calculate one drilling location for a horizontal well that, in addition to the existing production wells, maximizes oil and gas production.
  • For verification of the global solution, the actual output data for the 3,200 data items were obtained in advance using the computational model.
  • From the analysis target data, i.e. the data on the drilling locations forming the reservoir, 20 drilling locations are selected as input data, and each of the 20 selected drilling locations is input to the computational model.
  • The computational model here is a computational model of oil and gas production according to the drilling location information.
  • The 20 actual output data items produced by the computational model are classified as training data together with the input data.
  • Next, the analysis target data are classified into a search group: 2,858 grids of the reservoir, excluding the outer boundary and the locations of the existing production wells, are classified as the search group.
  • The model building step (S300) applied to this example proceeds as follows.
  • An artificial neural network model is constructed from the 20 input data items and the 20 actual output data items in the first modeling step (S310), and in the second modeling step (S320) the 2,858 data items constituting the search group are input into the artificial neural network model to obtain 2,858 output prediction data items.
  • FIG. 10 is a graph comparing the output prediction data produced by the initial artificial neural network model with the actual output data obtained by inputting the 2,858 search-group data items into the computational model.
  • The X axis represents the actual output data values obtained by inputting the 2,858 search-group data items into the computational model, and the Y axis represents the output prediction values produced by the initial neural network model for the same 2,858 items.
  • The triangle at the upper right is the true global solution, i.e. the data item with the maximum production; the squares in the middle correspond to the training data used to construct the neural network model; and the dotted line is the baseline marking the 30% calculation ratio. Points above the dotted line fall within the calculation ratio, and points below it do not.
  • The calculated output prediction data are ranked in descending order.
  • With the calculation ratio set to 30%, the output prediction data from rank 1 to rank 857 are selected.
  • In the selected-data extraction step (S332), the input data corresponding to each of the 857 selected output prediction data items are extracted, and in the non-included data extraction step (S333) the input data not included in the training data used to build the artificial neural network are extracted from them.
  • Since the number of input data items extracted in the non-included data extraction step (S333) is 20 or more, i.e. at least the predetermined criterion number, the initial artificial neural network model needs to be rebuilt.
  • Accordingly, new training data and a new search group are determined through the fourth modeling step (S340). That is, 20 of the input data items extracted in the non-included data extraction step (S333) are selected, and these 20 input data items are input to the computational model to calculate the actual output data. The new training data are formed by adding the 20 newly selected input data items and their actual output data to the existing training data. In addition, the input data extracted in the non-included data extraction step (S333) and the training data used to construct the initial artificial neural network model are set as the new search group.
  • FIG. 11 compares the output prediction data produced by the second artificial neural network model with the actual output data obtained by inputting the search group into the computational model; FIG. 12 shows the same comparison for the third artificial neural network model; and FIG. 13 shows it for the fourth artificial neural network model.
  • The markers in FIGS. 11 to 13 have the same meanings as those in FIG. 10.
  • It can be seen that the number of search-group items decreases as the artificial neural network model is rebuilt more times, and that the output prediction values produced by the artificial neural network model approach the actual output values produced by the computational model. This shows that an artificial neural network can be used to calculate the true global solution, i.e. a highly accurate global solution for the data item with the actual maximum value.
  • At the point where a total of five artificial neural network models have been constructed, the number of input data items extracted in the non-included data extraction step (S333) is less than the criterion number of 20, so it is determined that rebuilding of the artificial neural network model is not necessary, and the eight extracted input data items are input to the computational model to produce the actual output data.
  • Among these actual output data and the actual output data of all the training data, the input data item corresponding to the actual output data with the largest value is determined as the global solution. This concludes the process of constructing an artificial neural network to calculate the globally optimal solution.
  • In the present invention, the stages proceed sequentially from the first-stage grid system to the n-th-stage grid system, and for each stage an artificial neural network is constructed according to the above process to obtain the optimal solution for the multiple wellbores.
  • That is, at each stage the grids of the new grid system are used to create training data, an artificial neural network model is built from them, and the model is rebuilt while its search group shrinks. Finally, the constructed neural network is used to calculate the solution for the optimal locations at that stage.
  • An embodiment of the present invention determines the two drilling locations that maximize oil and gas production in the oil and gas reservoir shown in the figure.
  • The total number of possible combinations is 2,545,896 cases (in plan view).
  • These input data could, in principle, be applied directly to the artificial neural network algorithm shown in FIG. 8 to find the two optimal locations sought.
  • However, if the algorithm of FIG. 8 is applied directly as it is, the probability of failing to find the actual optimal locations may increase.
  • The present invention was made to solve this problem: while the algorithm of FIG. 8 developed by the researchers of the present invention is used, it is applied sequentially, in stages. In other words, a multi-stage grid system is applied to reduce the number of cases.
  • Here n denotes the grid size at each stage; n1 means that in the first-stage grid system the basic grids are bundled 5×5 to form one grid.
  • These values may vary depending on the embodiment, and the number of stages may be increased.
  • The one-stage (first-stage) grid system constructed in this way is shown in FIG. 14.
  • The dark solid lines show the constructed one-stage grid system, and the black grids indicate the positions where a wellbore can be placed in this system.
  • The thin solid lines indicate the basic grid system.
  • The number of cases in which the two wellbores can be located, i.e. the size of the search space, is thereby dramatically reduced to a total of 3,486.
  • The algorithm for finding the global solution shown in FIG. 8 is applied to this grid system.
  • FIG. 15 illustrates the process of constructing and rebuilding the artificial neural network through the procedure shown in FIG. 8; FIGS. 15(a) to 15(d) correspond to FIGS. 10 to 13.
  • The data item indicated by the triangle at the upper right of FIG. 15(d) is the global solution in the first stage, i.e. the optimal locations of the multiple wellbores at that stage.
  • The second-stage grid system was constructed by further expanding the top-five-rank region shown in FIG. 16.
  • The two-stage grid system is shown in dark solid lines in FIG. 17, and the positions where wells can be drilled are marked in black.
  • The size of the search space in the second-stage grid system is 280.
  • The grids hatched in FIG. 18 correspond to the top-five-rank solutions; this area is expanded further to set the search space, and the third-stage grid system is then constructed as shown in FIG. 19. Referring to FIG. 19, the grid size of the third-stage grid system is the same as that of the basic grid system, and the search space contains 2,800 cases (input data).
  • The input data of the global solution obtained in this way are shown in FIG. 21.
  • Since the grid of this stage equals the basic grid, the solution derived in the third stage is determined as the final solution without updating the grid system further.
  • The two grids marked r in FIG. 21 are the optimal locations where the two wellbores will be installed.
  • FIG. 22 compares, through the computational model, the production profiles of the six existing production wells with the production profiles that additionally include the two optimally located production wells.
  • The algorithm for finding a global solution using a multi-stage grid system thus has the advantage of finding the global solution very easily even when the number of cases to be considered increases exponentially.
  • A neural network must be constructed anew at each stage, but artificial neural networks can now be built very quickly, so this poses no problem in applying the present invention.
  • The "method of calculating a global solution using an artificial neural network" developed by the researchers of the present invention has a practical limitation in finding the actual global solution when it is applied directly in a single pass to a very large number of input data.
  • With the multi-stage approach of the present invention, not only is the optimal solution found very quickly, but the probability that the solution found coincides with the actual global solution also increases. The present invention is therefore expected to be able to replace simulation using a conventional computational model.
  • The present invention is implemented as a program on a computer. Accordingly, the present invention also provides a recording medium containing a program for executing the above method on a computer.
  • The artificial neural network model used in the present invention has been described with reference to the form shown in FIG. 8 as an example, but the present invention is not limited thereto; other previously developed global search models, such as genetic algorithms, as well as other artificial neural network models, could also be used.
  • The present invention has been described for a plurality of wellbore locations, but it can equally be used to find the location of a single wellbore.
  • Regarding N, the criterion number for determining whether to rebuild the artificial neural network model: the criterion is not limited to N itself. For example, it may be extended so that rebuilding is judged unnecessary when T is smaller than a predetermined value between N and 3N, and it may be extended further to apply values above 3N. A small illustration of this criterion follows.
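  • As a small, hypothetical illustration of this criterion (names and values are assumptions, not the patent's code), the helper below signals a rebuild when T, the number of top-ranked inputs not yet in the training data, reaches k*N for a chosen multiplier k of roughly 1 to 3.

```python
# Hypothetical rebuild criterion: T = top-ranked inputs not yet in the training
# data, N = number of inputs in the initial training data, k = extension factor.
def needs_rebuild(T: int, N: int, k: float = 1.0) -> bool:
    """Return True when the artificial neural network model should be rebuilt."""
    return T >= k * N

print(needs_rebuild(T=25, N=20))          # True with the basic criterion (k = 1)
print(needs_rebuild(T=25, N=20, k=3.0))   # False with the extended criterion (3N)
```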

Abstract

The present invention relates to a method for determining the optimal locations of multiple wellbores in an oil and gas reservoir. The present invention proposes a selection method using an artificial neural network in place of an existing computational model. Furthermore, directly applying an artificial neural network when the amount of data is very large faces a practical limit and growing uncertainty. In this regard, the present invention proposes a method in which the artificial neural network is applied sequentially and in stages. It is therefore expected that the optimal locations of multiple wellbores can be selected effectively and reliably with this method even when the amount of data is very large.

Description

Method for optimizing the locations of multiple wellbores in an oil and gas reservoir using an artificial neural network
TECHNICAL FIELD: The present invention relates to the field of energy and resources, and in particular to a method of optimizing the locations of multiple wellbores in an oil and gas reservoir through modeling.
For the production of oil and gas, so-called "production modeling" is performed. In other words, a simulation is conducted using a computational model of the oil field or gas field. For example, the gas field is partitioned into a three-dimensional grid composed of a plurality of cells. A gas content is assigned to each cell, and the amount of oil and gas released as the cell pressure declines is specified as a function. More specifically, in the case of a CBM gas field, once a production well is constructed the cell pressure is drawn down and methane gas desorbs from the coal; the amount of desorption can be specified as a function of pressure. The desorbed methane gas is then transported along the fracture network in the grid to the production well. Since the state of each cell changes over time, the modeling is dynamic. The entire development process of an oil and gas field is modeled using such a computational model.
Meanwhile, research on artificial neural networks (ANNs) has recently been very active. An artificial neural network imitates the human brain and is composed of an input layer, hidden layers, and an output layer; it builds a model by learning the nonlinear relationship between the input values and output values given as training data. Once the nonlinear input-output model is built, it is used to predict the output when a new input is given. Here, the input-output data used for training are obtained from the existing computational model. The neural network has the advantage of processing inputs and deriving outputs very quickly. Continued use of a neural network model also has the advantage that, as data accumulate, the nonlinear input-output relationship can be modeled ever more precisely.
Research is also under way to use artificial neural networks for modeling oil and gas reservoirs. However, with existing artificial neural networks, when the problem is to find the input at which the output is a maximum or minimum, the actual value (the value given by computational modeling) for the input that is found may, owing to the prediction error of the artificial neural network model, fail to be the maximum or minimum over the domain.
That is, using the conventional method, the region in which the global solution is sought (the search space) can be reduced by means of an artificial neural network. By applying this process repeatedly (sequentially), the search space can be reduced step by step, and a true global solution can finally be derived.
However, even with this method, if the search space is too large, the uncertainty in reducing the search space with an artificial neural network grows, and the possibility that the global solution is dropped increases.
The present invention provides a modeling method that can stably derive a global solution even when the search space grows exponentially, and in particular aims, by applying this method to the modeling of an oil and gas reservoir, to provide a method for determining the optimal locations of multiple wellbores installed in the reservoir.
It is also an object of the present invention to provide a computer-readable recording medium on which a program for executing the above method is recorded.
Meanwhile, other objects of the present invention not explicitly stated will be additionally considered within the scope that can readily be inferred from the following detailed description and its effects.
A method for optimizing the locations of multiple wellbores using an artificial neural network in an oil and gas reservoir according to the present invention, for achieving the above object, comprises:
(a) setting the entire area of a basic grid system, formed by partitioning an oil or gas reservoir into a plurality of basic grids, as the search space, and re-partitioning the search space into grids larger than the basic grid;
(b) evaluating, by applying an artificial neural network evaluation model, in which of the grids in the search space the multiple wellbores are best located, and expressing the suitability of each grid as a ranking;
(c) if the current grid size used in the evaluation step is larger than the basic grid size by more than a predetermined range, proceeding to the subsequent steps for rebuilding the search space, and if it is within the range, determining the result of the evaluation step as the optimal locations;
(d) reconstructing the grids recorded in the top ranks, up to a predetermined rank, of the per-grid suitability from the evaluation step into a new search space; and
(e) re-partitioning the reconstructed search space into grids that are at least as large as the basic grid but smaller than the grid size used in the preceding evaluation step;
wherein the grid size in the search space is gradually reduced until it matches, within a predetermined range, the size of the basic grid, and steps (b) through (e) are performed repeatedly until the optimal locations of the multiple wellbores are determined in step (c).
In the present invention, the artificial neural network model is used to select the optimal locations of the multiple wellbores in the oil and gas reservoir, and by applying the artificial neural network sequentially and in stages, the optimal locations can be selected very effectively even when the number of data items is very large.
In addition, when the optimal locations are selected by applying the present invention, the probability of agreeing with the result of the computational model is higher than when the artificial neural network model is applied directly in a single pass, so the reliability of the optimal location selection increases.
Meanwhile, even effects not explicitly mentioned here, the effects described in the following specification that are expected from the technical features of the present invention, and their provisional effects, are treated as if described in this specification.
FIG. 1 is a schematic flowchart of a method for optimizing the locations of a plurality of wellbores in an oil and gas reservoir by using an artificial neural network according to an example of the present invention.
FIG. 2 is an image showing a reservoir to which the present invention is applied, gridded by a conventional computational model.
FIGS. 3 to 7 illustrate the method according to the present invention; FIG. 3 shows the basic grid system and the first-stage grid system.
FIG. 4 shows the upper ranks evaluated through the artificial neural network on the basis of the first-stage grid system and the area to be expanded in the second stage, and FIG. 5 shows the search space determined in the second stage and the second-stage grid system.
FIG. 6 shows the upper ranks found through the artificial neural network in the second stage and the area to be expanded, and FIG. 7 shows the search space in the third stage and the third-stage grid system.
FIG. 8 is a flowchart of the "method for calculating a global solution using an artificial neural network."
FIG. 9 is an image of the reservoir used in an application example of the method for calculating a global solution using the artificial neural network.
FIGS. 10 to 13 are graphs comparing the output prediction data produced by the artificial neural network model, according to the method for calculating a global solution using an artificial neural network, with the actual output data obtained by inputting the search group into the computational model.
FIG. 14 shows the first-stage grid system of the reservoir for explaining an embodiment of the present invention.
FIGS. 15(a) to 15(d) show the process of constructing and rebuilding the artificial neural network through the procedure shown in FIG. 8.
FIG. 16 shows the upper ranks (hatched grid cells) after the optimal locations are selected in the first-stage grid system.
FIG. 17 shows the second-stage grid system, and FIG. 18 shows the result of obtaining the global solution for the second-stage grid system.
FIG. 19 shows the result of constructing the third-stage grid system.
FIGS. 20(a) to 20(d) show the process of selecting the optimal locations in the third-stage grid system using the artificial neural network of FIG. 8.
FIG. 21 shows the optimal locations found by applying the process of FIGS. 20(a) to 20(d) to the third-stage grid system.
FIG. 22 compares, through the computational model, the production profile of the six existing production wells with the production profile obtained when two additional production wells are added at the optimal locations.
※ The accompanying drawings are presented by way of reference to aid understanding of the technical idea of the present invention, and the scope of the present invention is not limited thereby.
According to the present invention, when the search space is partitioned into grid cells larger than the basic grid cells in step (a) or step (e), a plurality of basic grid cells are grouped to form each larger grid cell.
When the search space is rebuilt in step (d), the basic area occupied by the upper-rank grid cells and an additional area extended outward from the basic area within a predetermined range are combined to form the new search space.
Here, the additional area preferably extends only up to a range that does not include the central portion of the grid cells immediately neighboring the upper-rank grid cells. More specifically, when the additional area is determined, it is preferably expanded in units of the basic grid cells that make up the larger grid cells.
Meanwhile, the evaluation method using the artificial neural network model comprises:
an initial learning data generation step of classifying, as learning data, input data selected from a plurality of analysis target data together with the actual output data produced by inputting the input data into a computational model constructed to analyze the analysis target data;
an initial search group generation step of classifying the analysis target data into a search group;
a model building step of constructing an artificial neural network model for predicting the data output by the computational model using the learning data, determining whether the artificial neural network model needs to be rebuilt on the basis of the data output when the search group is input into the artificial neural network model, and, when it is determined that rebuilding is necessary, resetting the learning data and rebuilding the artificial neural network model; and
a global solution calculation step of calculating the global solution when it is determined through the model building step that rebuilding of the artificial neural network model is not necessary. Here, the analysis target data are data on the positions of the grid cells in which the plurality of wellbores are to be installed.
In an example of the present invention, the model building step comprises: a first modeling step of constructing an artificial neural network model for predicting the data output by the computational model using the learning data; a second modeling step of obtaining output prediction data by inputting each of the data included in the search group into the artificial neural network model; a third modeling step of determining, on the basis of the output prediction data, whether the artificial neural network model needs to be rebuilt; and a fourth modeling step of determining new learning data, which include the existing learning data, and a new search group when it is determined through the third modeling step that the artificial neural network model needs to be rebuilt, wherein the first to fourth modeling steps are repeated using the new learning data and the new search group.
The third modeling step comprises: a first selection step of selecting, from the output prediction data calculated in the second modeling step, the output prediction data that satisfy a preset selection criterion; a selection data extraction step of extracting the input data respectively corresponding to the output prediction data selected in the first selection step; an unincluded data extraction step of extracting, from the input data extracted in the selection data extraction step, the input data not included in the learning data used to construct the artificial neural network; and a rebuilding determination step of determining that rebuilding of the artificial neural network model is not necessary if the number of input data extracted in the unincluded data extraction step is less than a preset criterion number, and that rebuilding is necessary if that number is equal to or greater than the preset criterion number.
In the third modeling step, the criterion number is preferably equal to the number of data selected as input data from the analysis target data in the initial learning data generation step.
In an example of the present invention, the fourth modeling step comprises: a second selection step of selecting some of the input data extracted in the unincluded data extraction step; a data addition step of generating new learning data by adding the input data selected in the second selection step to the learning data; and
a group addition step of setting, as a new search group, the input data selected in the unincluded data extraction step and the input data included in the learning data.
In the second selection step, data are selected from the input data extracted in the unincluded data extraction step in a number equal to, or smaller than, the number of input data selected as learning data in the initial learning data generation step.
The data addition step comprises: a data acquisition step of obtaining the actual output data produced by inputting the input data selected in the second selection step into the computational model; and a data generation step of generating new learning data by adding the input data selected in the second selection step and the actual output data obtained in the data acquisition step to the learning data.
In describing the present invention, detailed descriptions of related well-known functions are omitted when it is judged that they are obvious to those skilled in the art and could unnecessarily obscure the gist of the present invention.
The present invention relates to a modeling method for simulating which locations are most suitable when a plurality of wellbores are to be installed in a gas field, an oil field, or the like. The method according to the present invention is therefore implemented in the form of software executed on a computer.
In the present invention, the plurality of wellbores may be, for example, a plurality of production wells for producing gas or oil. Conversely, when fluid is injected into the reservoir to promote the production of gas or oil, they may be a plurality of injection wells. Likewise, in Carbon Capture and Storage (CCS) technology for storing carbon dioxide, they may be carbon dioxide injection wells.
The present invention is mainly directed to oil or gas fields under development or in production, but it may also be applied to reservoirs used simply for storing carbon dioxide, as in the CCS field.
Furthermore, the present invention is not necessarily limited to modeling for determining the locations of a plurality of wellbores, and it can be used to solve various problems through modeling in a grid system composed of a large number of grid cells. More specifically, it can be applied to any case in which the grid cells themselves serve as the input data of the grid system. For example, in one example of the present invention, two grid cells (each assigned an identification number) form one data set, and the method determines in which grid cell each of the two wellbores should be located.
Hereinafter, with reference to the accompanying drawings, the method for optimizing the locations of a plurality of wellbores in an oil and gas reservoir by using an artificial neural network according to an example of the present invention (hereinafter, the "location optimization method") is described in more detail.
FIG. 1 is a schematic flowchart of the location optimization method according to the present invention. Referring to FIG. 1, the location optimization method is premised on a basic grid system.
A grid system is a system in which a given space is partitioned into a large number of grid cells. That is, in this example, the reservoir is composed of a large number of grid cells arranged contiguously up, down, left, and right. Each grid cell may be assigned an identification number according to its coordinates, or a separate identification number.
An example in which a reservoir is built as a basic grid system using a computational model program is shown in FIG. 2. Referring to FIG. 2, the reservoir is formed as a curved surface and consists of a plurality of layers, giving it a three-dimensional structure. The six arrows indicate the currently installed production wells.
FIG. 3 schematically simplifies the plane of the basic grid system in order to explain the present invention easily. That is, the plane of an actual basic grid system is generally a curved surface as in FIG. 2, but it is shown as a simple plane in FIG. 3, and the number of grid cells has been greatly reduced.
The smallest square cells delimited by the thin solid lines in FIG. 3 are the basic grid cells (a) of the basic grid system. In the energy and resources field, the computational models and grid systems described above are widely used for reserve modeling, production modeling, and the like, so a further detailed description is omitted.
In the present invention, the entire area (planar area) of the basic grid system is first set as the search space. Here, the search space is the target area in which the plurality of wellbores can be installed. That is, in the first step, the target area in which the plurality of wellbores may be formed is set to its widest extent.
A grid system is then newly constructed for this search space. Since it is the first grid system newly constructed from the basic grid system, it is referred to as the first-stage grid system. The basic grid system is made up of a large number of basic grid cells (a), whereas the first-stage grid system differs in that the cells within the search space are larger than the basic grid cells.
The cells delimited by the thick solid lines in FIG. 3 are the first-stage grid cells (b). As can be seen in the figure, each first-stage grid cell (b) is a bundle of 5×5 basic grid cells (a), so its area is 25 times that of a basic grid cell. Of course, the first-stage grid cell (b) only needs to be larger than the basic grid cell (a), and its size is determined by the user. Preferably, the first-stage grid cell (b) is defined as a bundle of basic grid cells (a), for example 5×5, 6×6, 7×7, or 5×7 basic grid cells (a).
When the first-stage grid system is formed in this way, the number of grid cells is significantly reduced compared with the basic grid system. That is, in FIG. 3 the basic grid system has 2,500 basic grid cells (a) arranged 50×50, whereas the first-stage grid system has 100 cells (b) arranged 10×10, a 25-fold reduction.
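For illustration only, and not as part of the claimed method, the following Python sketch groups basic-cell coordinates into first-stage cell numbers; the 50×50 grid and the 5×5 block of the FIG. 3 example are assumed, and the helper name coarsen_index is hypothetical.

```python
# Minimal sketch: mapping a 50x50 basic grid onto a 10x10 first-stage grid (5x5 blocks).
# The method only requires that each coarse cell be an integer bundle of basic cells
# (e.g. 5x5, 6x6, 7x7, 5x7); the numbering scheme below is an assumption of this sketch.

NX, NY = 50, 50        # basic grid dimensions in the FIG. 3 example
BLOCK = 5              # 5x5 basic cells per first-stage cell

def coarsen_index(i, j, block=BLOCK, nx=NX):
    """Map a basic-cell coordinate (i, j) to a first-stage cell number (0..99)."""
    ci, cj = i // block, j // block
    return ci * (nx // block) + cj

assert coarsen_index(0, 0) == 0
assert coarsen_index(49, 49) == 99
```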
The present invention seeks to find the optimal locations at which a plurality of wellbores are to be placed, and a location here means a grid cell. For example, in the first-stage grid system there are first-stage grid cells (b) numbered 1 to 100, and the question is which two of these cells are most favorable for production when wellbores are formed in them. The two cells in which the two wellbores are respectively placed constitute one data item, so the total number of data items is 100C2, i.e. 4,950. If the basic grid system were used instead of the first-stage grid system, the number of data items would grow explosively: with the basic grid cells (a), 2500C2 = 3,123,750 data items would result.
Although two wellbores are taken as the example here, if the optimal locations of three or more wellbores were to be determined, the number of data items would increase exponentially.
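These counts can be checked directly with the standard library; the snippet below is illustrative only:

```python
from math import comb

print(comb(100, 2))    # 4,950 two-well placements on the 10x10 first-stage grid
print(comb(2500, 2))   # 3,123,750 two-well placements on the 50x50 basic grid
print(comb(2500, 3))   # 2,601,042,500 three-well placements on the basic grid
```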
When the number of data items is large, the uncertainty in the evaluation using the artificial neural network described later increases, and the probability of failing to find the global maximum rises. To compensate for this, the present invention proposes a stepwise solution. First, the basic grid cells (a) are grouped to create the first-stage grid system, which reduces the number of cases, and a global maximum is derived through the artificial neural network described later. The maximum and the upper ranks are then set as the new search space, the grid-cell size is reduced, and this refinement is repeated until the solution is found precisely. This is described in detail below.
The first-stage grid system is created as shown in FIG. 3, and the artificial neural network model is used to evaluate at which locations the plurality of wellbores, two wellbores in this embodiment, are best installed. The optimal locations can be expressed as a ranking. For example, if the evaluation concerns the wellbore locations expected to yield the largest production, the production can be quantified and the 4,950 data items above can be ranked. The artificial neural network model is described in detail later.
FIG. 4 marks the upper ranks, from first to fifth place, among the results obtained through the artificial neural network model. Since two first-stage grid cells (b) constitute one data item, up to ten first-stage grid cells (b) can be designated by the top five ranks. In FIG. 4, nine first-stage grid cells (b) are designated, because one first-stage grid cell (b) is shared by two data items: (No. 14, No. 94) and (No. 24, No. 94) are both ranked within the top five, and both include cell No. 94.
Here, the number of upper ranks can be chosen arbitrarily by the user. It may be limited to the top five as in this embodiment, or extended to lower ranks.
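For illustration, the sketch below collects the unique cells appearing in the top-ranked pairs; only the pairs (14, 94) and (24, 94) are taken from the FIG. 4 description, and the remaining pair numbers are hypothetical placeholders:

```python
# Top-5 ranked well pairs; (14, 94) and (24, 94) follow FIG. 4, the rest are placeholders.
top_pairs = [(14, 94), (24, 94), (3, 57), (41, 72), (8, 66)]

# Cells designated for the next stage are the union of the top pairs; a shared cell
# (here No. 94) is counted once, so five pairs can designate fewer than ten cells.
designated_cells = sorted({cell for pair in top_pairs for cell in pair})
print(len(designated_cells), designated_cells)   # 9 unique cells in this example
```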
For reference, in FIGS. 3 and 4 the black mark at the center of a grid cell indicates, as the representative point of that cell, the area where the production well is located.
After the upper ranks of the locations where the plurality of wellbores may be placed have been determined using the first-stage grid system as described above, the search space is set again.
The search space could simply be designated as the first-stage grid cells (b) recorded in the upper ranks of the preceding evaluation by the artificial neural network model. However, it is preferable to expand the search space slightly outward, as in this embodiment, because extending the area around the upper-rank cells further increases the probability of finding the global maximum.
Referring to FIG. 4, the search space is expanded outward from the hatched first-stage grid cells (b) up to the dotted line. However, even when the search space is expanded, it is not desirable to extend it as far as the central portion of the neighboring cells. For example, since a first-stage grid cell (b) consists of 5×5 basic grid cells (a), the expansion into a neighboring cell is limited to at most two rows or two columns of basic grid cells (a). As shown in FIG. 4, the area expanded up to the dotted line extends only two columns and two rows of basic grid cells (a); of course, the expansion may also be limited to one row or one column. In addition, when the search space is expanded, it is expanded in units of basic grid cells (a).
When the search space is redefined as described above, the hatched area in FIG. 4 together with the area expanded up to the dotted line becomes the new search space, an area greatly reduced from the original search space.
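A minimal sketch of this expansion rule, assuming the basic grid is indexed by (row, column) pairs and the upper-rank coarse cells are given as sets of basic cells; all names are hypothetical:

```python
# Expand a set of basic cells outward by a margin of basic cells, clipped to the grid.
# margin=2 mirrors the "at most two rows/columns" rule for 5x5 coarse cells in FIG. 4.

def expand_search_space(base_cells, margin=2, nx=50, ny=50):
    expanded = set(base_cells)
    for (i, j) in base_cells:
        for di in range(-margin, margin + 1):
            for dj in range(-margin, margin + 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < nx and 0 <= nj < ny:
                    expanded.add((ni, nj))
    return expanded

# Example: one 5x5 coarse cell occupying rows/cols 10..14 grows to rows/cols 8..16.
coarse_cell = {(i, j) for i in range(10, 15) for j in range(10, 15)}
print(len(expand_search_space(coarse_cell)))   # 81 basic cells (9x9)
```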
For the rebuilt search space, a second-stage grid system is constructed, as shown in FIG. 5. Referring to FIG. 5, in the second-stage grid system the basic grid cells (a) are grouped in 3×3 units to form the second-stage grid cells (c). The important point is that the grid-cell size must be reduced gradually from stage to stage. That is, in the first stage the basic grid cells (a) were grouped 5×5, whereas in the second stage they are grouped 3×3, reducing the cell size. In the third stage the size is reduced further, and in the final stage it must coincide with the basic grid cell (a). The search space is narrowed down, while the resolution within the narrowed area is increased to pursue precision.
However, referring to FIG. 5, grouping the basic grid cells (a) by 3×3 may leave the search space slightly over- or under-covered. That is, in FIG. 5 some second-stage grid cells (c) extend partly beyond the search space, and some parts of the search space are not completely filled; this can be understood, as before, as slightly expanding or slightly reducing the search space. Even in this case, however, it is preferable that all the cell areas previously recorded in the upper ranks be included.
Once the search space and grid system of the second stage have been constructed, the optimal locations are evaluated using the artificial neural network model as in the first stage, and the result, presented according to the preset number of upper ranks, is shown in FIG. 6, where seven hatched second-stage grid cells (c) appear. The same process is then repeated: as shown in FIG. 7, the area is expanded around the cells recording the upper ranks, the search space is defined again, and a third-stage grid system is constructed. The third-stage grid cells used in the third-stage grid system are the basic grid cells (a) themselves. The optimal locations are then evaluated through the artificial neural network model, and the optimal locations of the plurality of wellbores are thereby determined.
The overall procedure of the present invention and the meaning of each step have been described above. Since the present invention is implemented through software on a computer, it is now described as a computer algorithm, returning to FIG. 1.
Referring to the algorithm of FIG. 1, the basic grid system has already been constructed in the "START" state. "i = 1" denotes the first stage. Accordingly, the first-stage grid system (the first grid system) is constructed; here the number of basic cells per first-stage cell (b) (e.g., 5×5) is specified. The global solution is then calculated using the artificial neural network model, and the results are ranked.
After the results are obtained, the current grid-cell size is compared with the basic grid-cell size. As mentioned above, in the present invention the cells are initially made large by grouping a plurality of basic grid cells, and the process is repeated while the cell size is gradually reduced over successive stages.
In the first stage the cell size is naturally larger than the basic grid cell, so the algorithm proceeds to the next step. A new search space is constructed on the basis of the upper ranks of the first-stage evaluation. Then "i" is incremented by 1 to advance the stage, the grid system of that stage is constructed, and the evaluation through the artificial neural network is performed again. This process is repeated cyclically.
As the above process is repeated with increasing stage number, a stage is eventually reached at which, as described above, the current cell size equals the basic grid-cell size. When the cell sizes are equal, the global solution obtained through the artificial neural network at that stage is determined as the final solution; that is, the global solution obtained at the current stage is determined as the optimal locations at which the plurality of wellbores are to be installed.
For reference, in another embodiment the iteration may also be terminated, and the algorithm ended, when the current cell size is slightly larger than the basic grid cell but within a predetermined range. That is, the present invention does not necessarily have to be repeated until the current cell size and the basic grid-cell size become identical, and the termination range may be extended.
As described above, the present invention aims to reduce the uncertainty that the true optimum will not be found when the number of data items is too large to derive the global solution in a single pass through the artificial neural network. In the first stage, a large cell size is used to reduce the number of data items, that is, the number of cases, so that the area containing the optimal locations is not missed. Then, in the progressively narrowed search space, the optimal locations are detected precisely with increasing resolution, and the global solution is finally found. This not only significantly raises the probability of finding the true optimum but also simplifies the search process, which is a major advantage at the process level.
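Purely as an illustration of the loop of FIG. 1, the sketch below wires the stages together; the callables passed in stand for the re-gridding, ANN ranking, and margin-expansion operations described above and are assumptions of this sketch, not functions defined by the present invention.

```python
# Coarse-to-fine loop sketched after FIG. 1.  The callables build_stage_grid,
# rank_placements_with_ann, and expand_space stand for the re-gridding step,
# the ANN fitness ranking, and the outward-margin expansion described above.

def optimize_well_locations(full_area, build_stage_grid, rank_placements_with_ann,
                            expand_space, block_sizes=(5, 3, 1), top_n=5):
    search_space = full_area                               # step (a): whole reservoir area
    for block in block_sizes:                              # e.g. 5x5 -> 3x3 -> basic cells
        stage_cells = build_stage_grid(search_space, block)        # steps (a)/(e)
        ranking = rank_placements_with_ann(stage_cells)            # step (b): ranked well pairs
        if block == 1:                                             # step (c): basic size reached
            return ranking[0]                                      # optimal placement
        top_cells = {c for pair in ranking[:top_n] for c in pair}  # step (d): upper ranks
        search_space = expand_space(top_cells, block)              # step (d): add outward margin
    raise ValueError("block_sizes must end with the basic-cell size 1")
```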
The artificial neural network model used in the present invention is described below. In the present invention, the artificial neural network model is used to evaluate and determine the optimal locations of the plurality of wellbores. A genetic algorithm, a statistical algorithm, or the like could also be used for this evaluation, but in this embodiment the artificial neural network model developed by the present inventors is used. This artificial neural network model was filed as Korean Patent Application No. 10-2017-0017703, "Method for calculating a global solution using an artificial neural network," which had not been published at the time the present application was filed. The artificial neural network model used in the present invention is described below with reference to the accompanying drawings.
Although this embodiment determines the optimal locations of a plurality of wellbores, for convenience of explanation the following description takes the selection of the optimal location of a single wellbore as an example.
FIG. 8 is a flowchart of the method for calculating a global solution using an artificial neural network according to the present invention.
Referring to the figure, the method for calculating a global solution using an artificial neural network includes an initial learning data generation step (S100), an initial search group generation step (S200), a model building step (S300), and a global solution calculation step (S400).
The initial learning data generation step (S100) generates the learning data for constructing the artificial neural network model. First, some of the numerous analysis target data are selected as input data. The number of input data to be selected can be determined according to the number of analysis target data; selecting 10 or 20 of the analysis target data as input data is preferable. Mapped onto the present invention, the analysis target data here are the grid cells. For example, when only one wellbore is to be placed using the first-stage grid system described above, the number of cases, that is, the number of analysis data, equals the 100 first-stage grid cells. For a plurality of wellbores, the number of data items becomes much larger, as described above.
Next, the input data are entered into the computational model constructed to analyze the analysis target data, and the actual output data are obtained. The computational model analyzes the analysis target data through computer simulation; an analysis computational model is applied, such as an oil and gas production simulation model that calculates the oil and gas production when the information on a drilling location is input. Accurate output data can be obtained with the computational model, but when the number of analysis target data is large, its calculation takes a long time, so the present invention seeks to reduce the computation time for the global solution by using a trained artificial neural network model.
The input data selected as described above and the actual output data are classified as learning data for constructing the artificial neural network model and are stored in a database (not shown).
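As an illustrative sketch of S100 only, assuming a hypothetical reservoir_simulator callable standing in for the computational model:

```python
import random

def generate_initial_learning_data(candidate_locations, reservoir_simulator,
                                   n_initial=20, seed=0):
    """S100 sketch: pick a small sample of candidate locations and label them with
    the (expensive) computational model to form the initial learning data."""
    rng = random.Random(seed)
    inputs = rng.sample(list(candidate_locations), n_initial)
    learning_data = [(x, reservoir_simulator(x)) for x in inputs]   # (input, actual output)
    return learning_data
```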
The initial search group generation step (S200) classifies the analysis target data into a search group. In this step, the entire set of analysis target data, including the selected input data, is classified as the search group.
The model building step (S300) constructs the artificial neural network model for predicting the data output by the computational model using the learning data, determines whether the model needs to be rebuilt on the basis of the data output when the search group is input into the artificial neural network model, and, when rebuilding is judged necessary, resets the learning data and rebuilds the model. The model building step includes first to fourth modeling steps (S310 to S340).
The first modeling step (S310) constructs the artificial neural network model for predicting the data output by the computational model using the learning data. The model is built from the learning data using a conventional algorithm for constructing artificial neural network models, so a detailed description is omitted.
The second modeling step (S320) obtains output prediction data by inputting each of the data included in the search group into the artificial neural network model. The operator enters the data included in the search group into the artificial neural network model one by one and collects the output prediction data.
The third modeling step (S330) determines, on the basis of the output prediction data, whether the artificial neural network model needs to be rebuilt, and includes a first selection step (S331), a selection data extraction step (S332), an unincluded data extraction step (S333), and a rebuilding determination step (S334).
The first selection step (S331) selects, from the output prediction data calculated in the second modeling step (S320), the output prediction data that satisfy a preset selection criterion. When the output prediction data are numerical values such as oil and gas production, in the first selection step (S331) the output prediction data are ranked in descending or ascending order, and the selection criterion is set as the range from the first rank to a preset reference rank.
Here, the reference rank is set to a value corresponding to the product of the number of output prediction data produced in the second modeling step (S320) and a preset selection ratio. The selection ratio may be set to 20% or 30%, but is not limited thereto and may be set arbitrarily to a suitable value according to the number of analysis target data. For example, if 100 output prediction data are calculated in the second modeling step (S320) and the selection ratio is set to 20%, the reference rank is the 20th rank, and in the first selection step (S331) the output prediction data from the first to the 20th rank are selected.
Meanwhile, when the output data having the maximum value is to be calculated as the global solution, the output prediction data are ranked in descending order in the first selection step (S331). In another embodiment of the present invention, the output data having the minimum value may be calculated as the global solution, in which case the output prediction data are preferably ranked in ascending order.
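A minimal sketch of S331 under these conventions (names are hypothetical):

```python
def select_top_predictions(predictions, ratio=0.3, maximize=True):
    """S331 sketch: rank (input, predicted output) pairs and keep the top `ratio` fraction.

    predictions : list of (input_data, predicted_output) tuples from the ANN (S320).
    """
    ranked = sorted(predictions, key=lambda p: p[1], reverse=maximize)
    reference_rank = int(len(ranked) * ratio)        # e.g. 2858 predictions * 0.3 -> 857
    return ranked[:reference_rank]
```

With 2,858 predictions and a 30% selection ratio, the reference rank evaluates to 857, which matches the application example described later.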
The selection data extraction step (S332) extracts the input data respectively corresponding to the output prediction data selected in the first selection step (S331).
The unincluded data extraction step (S333) extracts, from the input data extracted in the selection data extraction step (S332), the input data not included in the learning data used to construct the artificial neural network. The operator stores the extracted input data in the database.
The rebuilding determination step (S334) determines whether the artificial neural network model needs to be rebuilt on the basis of the number of input data extracted in the unincluded data extraction step (S333). That is, if the number of input data extracted in the unincluded data extraction step (S333) is less than a preset criterion number, it is determined that rebuilding of the artificial neural network model is not necessary; if that number is equal to or greater than the preset criterion number, it is determined that rebuilding is necessary.
Here, the criterion number may be set equal to the number of data selected as input data from the analysis target data in the initial learning data generation step (S100), but is not limited thereto and may be set arbitrarily to a suitable value according to the number of analysis data. For example, if 20 input data were selected in the initial learning data generation step (S100), a criterion number of 20 is applied.
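An illustrative sketch of S332 to S334, again with hypothetical names; locations are assumed to be hashable so that set membership can be used:

```python
def needs_rebuilding(selected_predictions, learning_data, criterion_number=20):
    """S332-S334 sketch: the model is rebuilt while the top-ranked predictions still
    contain at least `criterion_number` inputs the ANN has never been trained on."""
    trained_inputs = {x for (x, _) in learning_data}                 # inputs already labeled
    unseen_inputs = [x for (x, _) in selected_predictions
                     if x not in trained_inputs]                     # S333: not in learning data
    return len(unseen_inputs) >= criterion_number, unseen_inputs     # S334: rebuild decision
```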
The fourth modeling step (S340) determines new learning data, which include the existing learning data, and a new search group when it is determined through the third modeling step (S330) that the artificial neural network model needs to be rebuilt, and includes a second selection step (S341), a data addition step (S342), and a group addition step (S343).
The second selection step (S341) selects some of the input data extracted in the unincluded data extraction step (S333). In this step, it is preferable to select from the input data extracted in the unincluded data extraction step (S333) a number of data equal to the number of data selected as learning data in the initial learning data generation step (S100); however, the number is not limited thereto, and a number smaller than the number of data selected as learning data in the initial learning data generation step (S100) may be selected instead.
For example, if 20 input data were selected in the initial learning data generation step (S100), 20 of the input data extracted in the unincluded data extraction step (S333) are selected at random in the second selection step (S341).
The data addition step (S342) generates new learning data by adding the input data selected in the second selection step (S341) to the learning data, and includes a data acquisition step (S344) and a data generation step (S345).
The data acquisition step (S344) obtains the actual output data produced by inputting the input data selected in the second selection step (S341) into the computational model. The computational model used in the initial learning data generation step (S100) is applied.
The data generation step (S345) generates new learning data by adding the input data selected in the second selection step (S341) and the actual output data obtained in the data acquisition step (S344) to the learning data.
The group addition step (S343) sets, as a new search group, the input data selected in the unincluded data extraction step (S333) and the input data included in the learning data.
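An illustrative sketch of S341 to S343 (hypothetical names; reservoir_simulator again stands for the computational model):

```python
import random

def rebuild_learning_data(unseen_inputs, learning_data, reservoir_simulator,
                          n_add=20, seed=0):
    """S341-S343 sketch: label a random subset of the unseen top-ranked inputs with the
    computational model, enlarge the learning data, and redefine the search group."""
    rng = random.Random(seed)
    chosen = rng.sample(unseen_inputs, min(n_add, len(unseen_inputs)))                  # S341
    new_learning_data = learning_data + [(x, reservoir_simulator(x)) for x in chosen]   # S342
    new_search_group = set(unseen_inputs) | {x for (x, _) in new_learning_data}         # S343
    return new_learning_data, new_search_group
```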
Meanwhile, when it is determined through the third modeling step (S330) that the artificial neural network model needs to be rebuilt, the model building step (S300) preferably repeats the first modeling step (S310) through the fourth modeling step (S340) using the new learning data and the new search group. After the first modeling step (S310) through the fourth modeling step (S340) have been repeated a number of times, when the number of input data extracted in the unincluded data extraction step (S333) of the third modeling step (S330) becomes less than the preset criterion number, it is determined that the artificial neural network model at that point no longer needs to be rebuilt.
The global solution calculation step (S400) calculates the global solution when it is determined through the model building step that rebuilding is not necessary, and includes an input data calculation step (S410), an output data calculation step (S420), and a completion step (S430).
The input data calculation step (S410) calculates the input data corresponding to the output prediction data selected in the first selection step (S331), that is, the input data that were fed into the artificial neural network model to produce the output prediction data selected in the first selection step (S331).
The output data calculation step (S420) calculates, using the computational model, the actual output data for the input data selected in the input data calculation step (S410). That is, the input data calculated in the input data calculation step (S410) are entered into the computational model to obtain the actual output data. The computational model used in the initial learning data generation step (S100) is applied.
The completion step (S430) calculates the input data corresponding to the actual output data having the largest or smallest value among the actual output data calculated in the output data calculation step (S420) and the actual output data constituting all the learning data. In this step, when the output data having the maximum value is to be calculated as the global solution, the input data corresponding to the actual output data having the largest value are calculated; when the output data having the minimum value is to be calculated as the global solution, the input data corresponding to the actual output data having the smallest value are preferably calculated.
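A sketch of S410 to S430 under the same assumptions as the previous sketches:

```python
def calculate_global_solution(selected_predictions, learning_data, reservoir_simulator,
                              maximize=True):
    """S410-S430 sketch: re-evaluate the final top-ranked inputs with the computational
    model and return the location whose actual output is best overall."""
    final_inputs = [x for (x, _) in selected_predictions]                # S410
    evaluated = [(x, reservoir_simulator(x)) for x in final_inputs]      # S420
    pool = evaluated + list(learning_data)                               # S430: include learning data
    best_fn = max if maximize else min
    best_input, best_output = best_fn(pool, key=lambda p: p[1])
    return best_input, best_output
```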
An application example of the method for calculating a global solution using an artificial neural network is described below in more detail.
(Application Example)
As an application example, the above method is applied to the problem of determining the drilling location that maximizes oil and gas production in a reservoir in which oil and gas are present. The reservoir has the structure shown in FIG. 9, which is gridded into 4,800 cells (I direction × J direction × K direction = 40 × 40 × 3) for simulation by the computational model.
An existing production well for extracting oil and gas is set at the center of the reservoir, and the problem is to calculate the drilling location of one horizontal well, in addition to the existing production well, that maximizes oil and gas production. Since a horizontal well at a given point can be oriented in either of the two lateral directions, the analysis target data are data on 3,200 drilling locations (= 40 × 40 × 2 directions).
먼저 3200개 자료에 대해 전산모델을 사용하여 실제 출력자료를 미리 획득한 후 광역해를 확인하였고, 이를 제안한 인공신경망모델의 광역해 도출 능력을 검증하는 자료로 사용하였다First, the computer model was used for 3200 data, and the actual output data was obtained in advance, and then the wide-range solution was verified.
초기 학습자료 생성단계(S100)에서, 저류층을 이루는 시추위치에 대한 데이터들 즉, 분석대상 데이터에서 20개의 시추위치를 입력데이터로 선정하고, 입력 데이터로 선정된 20개의 시추위치에 대한 각각의 데이터를 기구축된 전산모델에 입력하여 시뮬레이션을 진행한다. 이때, 전산모델은 시추위치의 정보에 따른 유가스 생산량에 대한 전산모델이다. 전산모델에 의해 출력된 20개의 실제 출력 데이터를 입력 데이터와 함께 학습자료로 분류한다.In the initial learning data generation step (S100), the data on the drilling positions forming the reservoir layer, that is, 20 drilling positions are selected as the input data from the analysis target data, and each data for the 20 drilling positions selected as the input data. Is inputted into the instrumented computer model to perform the simulation. At this time, the computational model is a computational model for oil gas production according to the drilling position information. The 20 real output data outputted by the computational model are classified into the training data along with the input data.
초기 탐색집단 생성단계(S200)에서, 분석대상 데이터들을 탐색집단으로 분류하는데, 저류층 중 외부경계부분, 기존 생산정 설치부분을 제외한 2858개를 탐색집단으로 분류한다.In the initial search group generation step (S200), the analysis target data are classified into search groups, and 2858 of the reservoirs are classified as search groups except for the external boundary part and the existing production well installation part.
다음, 모델 구축 단계(S300)를 통해 학습자료를 토대로 인공신경망모델을 구축한다. 이때, 실시 예에 적용되는 모델 구축 단계(S300)를 보다 상세히 설명하면 다음과 같다.Next, build an artificial neural network model based on the training material through the model building step (S300). In this case, the model building step (S300) applied to the embodiment will be described in detail as follows.
First, in the first modeling step (S310), an artificial neural network model is constructed from the 20 input data and the 20 actual output data, and in the second modeling step (S320), the 2,858 data constituting the search group are entered into the artificial neural network model to obtain 2,858 output prediction data.
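A minimal sketch of these two modeling steps is given below. The patent does not tie the method to any particular neural network library or architecture; the use of scikit-learn's MLPRegressor, the toy production function standing in for the reservoir computational model, and all variable names are assumptions made only for illustration (the boundary and existing-well exclusions are omitted for brevity).

    # Sketch of S310/S320: fit a small neural-network surrogate on 20 simulated
    # samples, then predict production for every candidate in the search group.
    import random
    from sklearn.neural_network import MLPRegressor

    def simulator(candidate):
        # Toy stand-in for the reservoir computational model.
        i, j, orientation = candidate
        return -((i - 25) ** 2 + (j - 7) ** 2) + (5.0 if orientation == "I" else 0.0)

    def encode(candidate):
        i, j, orientation = candidate
        return [i, j, 1.0 if orientation == "I" else 0.0]

    random.seed(0)
    search_group = [(i, j, o) for i in range(40) for j in range(40) for o in ("I", "J")]
    training_inputs = random.sample(search_group, 20)              # S100: 20 sampled locations
    training_outputs = [simulator(c) for c in training_inputs]     # actual output data

    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    ann.fit([encode(c) for c in training_inputs], training_outputs)   # S310: build the ANN
    predictions = ann.predict([encode(c) for c in search_group])      # S320: predict for the search group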
FIG. 10 is a graph comparing the output prediction data produced by the initial artificial neural network model with the actual output data obtained by entering the 2,858 data of the search group into the computational model. The X-axis shows the values of the actual output data obtained from the computational model for the 2,858 data of the search group, and the Y-axis shows the values of the output prediction data produced for the same data by the initial artificial neural network model. The triangle at the upper right is the true global solution, i.e., the data point with the maximum production; the squares near the center correspond to the training data used to construct the artificial neural network model; and the dotted line is the reference line marking the 30% selection ratio, so that points above the dotted line are included in the selection ratio and points below it are not. If an imaginary line y = x (not shown) were added to FIG. 10, points above that line would be cases in which the initial artificial neural network model predicted a larger value than the computational model, and points below it would be cases in which the model predicted a smaller value.
Next, in the first selection step (S331), the computed output prediction data are ranked in descending order. With the selection ratio set to 30%, the output prediction data from rank 1 to rank 857 are selected.
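In code, the descending ranking and the 30% selection ratio amount to the few lines below; the toy prediction values are illustrative only.

    # Sketch of S331: rank the output prediction data in descending order and
    # keep the top 30% (857 of the 2,858 predictions in this application example).
    predictions = [5.2, 7.9, 6.4, 3.1, 8.8, 4.7, 6.0, 7.1, 2.5, 5.9]
    order = sorted(range(len(predictions)), key=lambda k: predictions[k], reverse=True)
    n_keep = len(predictions) * 30 // 100        # 30% selection ratio
    selected = order[:n_keep]
    print(selected)    # indices of the top 30% of predictions: [4, 1, 7]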
Next, in the screening data extraction step (S332), the input data corresponding to each of the 857 output prediction data are extracted, and in the non-included data extraction step (S333), the input data not included in the training data used to construct the artificial neural network are extracted from the 857 selected data.
At this point, in the step of determining whether reconstruction is required, the initial artificial neural network model is judged to need reconstruction, because the number of input data selected in the non-included data extraction step (S333) is at least 20, the predetermined criterion number.
Next, new training data and a new search group are determined through the fourth modeling step (S340). That is, 20 input data are selected from the input data chosen in the non-included data extraction step (S333), and the selected 20 input data are entered into the computational model to compute their actual output data. The newly selected 20 input data and their actual output data are added to the existing training data to form the new training data. In addition, the input data selected in the non-included data extraction step (S333) and the input data contained in the training data used to construct the initial artificial neural network model are set as the new search group.
Next, the first modeling step (S310) through the fourth modeling step (S340) are repeated using the new training data and the new search group, until it is determined in the third modeling step (S330) that reconstruction of the artificial neural network model is no longer required.
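Taken together, steps S310 to S340 form the refinement loop sketched below. It reuses the same kind of toy simulator and encoding as the earlier sketch; the batch size of 20, which doubles as the rebuild criterion, follows the application example, while the function and variable names are illustrative assumptions rather than the authors' implementation.

    # Self-contained sketch of the S310-S340 loop: rebuild the ANN, keep the top
    # 30% of predictions, and stop once fewer than 20 of them are unsimulated.
    import random
    from sklearn.neural_network import MLPRegressor

    def simulator(c):                                   # toy stand-in for the computational model
        i, j, o = c
        return -((i - 25) ** 2 + (j - 7) ** 2) + (5.0 if o == "I" else 0.0)

    def encode(c):
        return [c[0], c[1], 1.0 if c[2] == "I" else 0.0]

    random.seed(0)
    N = 20                                              # training-batch size and rebuild criterion
    search_group = [(i, j, o) for i in range(40) for j in range(40) for o in ("I", "J")]
    evaluated = {c: simulator(c) for c in random.sample(search_group, N)}   # S100

    while True:
        ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        ann.fit([encode(c) for c in evaluated], list(evaluated.values()))   # S310: (re)build
        preds = ann.predict([encode(c) for c in search_group])              # S320: predict
        ranked = sorted(zip(search_group, preds), key=lambda p: p[1], reverse=True)
        top = [c for c, _ in ranked[: len(search_group) * 30 // 100]]       # S331/S332
        unseen = [c for c in top if c not in evaluated]                     # S333
        if len(unseen) < N:                             # rebuild no longer needed
            evaluated.update({c: simulator(c) for c in unseen})
            break
        for c in random.sample(unseen, N):              # S340: augment the training data
            evaluated[c] = simulator(c)
        search_group = list(set(unseen) | set(evaluated))                   # new search group

    best = max(evaluated, key=evaluated.get)            # completion step: global solution
    print(best, evaluated[best])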
FIG. 11 is a graph comparing the output prediction data produced by the second artificial neural network model with the actual output data obtained by entering the search group into the computational model; FIG. 12 is the corresponding graph for the third artificial neural network model; and FIG. 13 is the corresponding graph for the fourth artificial neural network model. The symbols shown in FIGS. 11 to 13 have the same meanings as those in FIG. 10. Referring to FIGS. 10 to 13, as the number of reconstructions of the artificial neural network model increases, the size of the search group decreases and the output prediction data produced by the artificial neural network model become closer to the actual output data produced by the computational model, showing that the artificial neural network can be used to compute, with high accuracy, the true global solution, i.e., the data point with the actual maximum value.
In this application example, when construction of the fifth artificial neural network model was completed, the number of input data selected in the non-included data extraction step (S333) was 8, which is smaller than the predetermined criterion number of 20; it was therefore determined that reconstruction of the artificial neural network model was not required, and the 8 input data were entered into the computational model to obtain their actual output data. The input data corresponding to the actual output data with the largest value, among these outputs and the actual output data of all the training data, was determined to be the global solution. This completes the description of the process of constructing an artificial neural network and computing the global optimum.
The description now returns to the present invention.
In the present invention, as shown in FIG. 1, the search space is gradually reduced while the resolution is increased, and the optimal solution for a plurality of wellbores is derived through the artificial neural network described above. In other words, based on the basic grid system, grid systems are constructed sequentially from the first-stage grid system to the n-th-stage grid system, and at each stage an artificial neural network is constructed according to the above process to find the optimal solution for the plurality of wellbores. Each time the stage index (i) increases by one, training data are generated using the cells of the new grid system, an artificial neural network model is constructed from them, and the network is rebuilt while the search group is reduced. Finally, the constructed artificial neural network is used to compute the solution for the optimal locations at that stage.
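The multi-stage idea can be illustrated with the runnable toy below, which searches for a single best location to keep the sketch short. The per-stage artificial neural network search is replaced here by direct evaluation of a smooth toy objective; that substitution, the objective itself, the stage parameters, and the cell-grouping rule are all assumptions, and only the coarse-to-fine narrowing of the search space reflects the procedure described above.

    # Toy coarse-to-fine search on a 61 x 37 basic grid: score coarse blocks,
    # keep the top-r blocks, refine the grid, and repeat until the cell size
    # equals the basic cell size. Direct evaluation stands in for the ANN stage.
    def objective(i, j):                        # toy proxy for simulated production
        return -((i - 42) ** 2 + (j - 11) ** 2)

    region = [(i, j) for i in range(61) for j in range(37)]
    for size, r in ((5, 5), (3, 5), (1, 1)):    # (n1, r1), (n2, r2), (n3, final pick)
        blocks = {}
        for (i, j) in region:                   # group basic cells into size x size blocks
            blocks.setdefault((i // size, j // size), []).append((i, j))
        ranked = sorted(blocks.values(),
                        key=lambda cells: max(objective(i, j) for i, j in cells),
                        reverse=True)
        region = [c for cells in ranked[:r] for c in cells]   # new, smaller search space
    print(max(region, key=lambda c: objective(*c)))           # (42, 11), the basic-grid optimum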
The configuration and flow of the present invention have now been fully described; a specific embodiment is described next.
(Embodiment)
The embodiment of the present invention determines two drilling locations that maximize oil and gas production in the oil and gas reservoir shown in FIG. 2.
The reservoir has the structure shown in FIG. 2 and, for the computational-model simulation, is discretized into a grid of I direction × J direction × K direction = 61 × 37 × 3. Six existing production wells (hatched cells, w) are already producing in the reservoir, and the problem is to select the optimal locations for two additional production wells that maximize the oil and gas production of the entire reservoir.
When two vertical production wells are added, the total number of possible combinations is 2,545,896 (in the areal directions). The above input data could, of course, be applied directly to the artificial neural network algorithm shown in FIG. 8 to find the two optimal locations we seek. With this many input data, however, applying the algorithm of FIG. 8 as it is increases the probability of failing to find the true optimal locations. The present invention was devised to solve this problem: the algorithm of FIG. 8, developed by the present inventors, is used, but it is applied sequentially in stages. That is, a multi-stage grid system is applied to keep the number of cases small. The design variables were set to n1 = 5, n2 = 3, n3 = 1 and r1 = r2 = 5. Here, n denotes the cell size at each stage: n1 = 5 means that in the first-stage grid system 5 × 5 basic cells are grouped into one cell, n2 = 3 means that in the second stage each cell consists of 3 × 3 basic cells, and n3 = 1 means that in the third stage the cells are identical to the basic cells. r1 = r2 = 5 means that in the first and second stages only the top five ranks are retained. These values can, of course, vary by embodiment, and the number of stages may be much larger.
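As a side note, the quoted combination count can be reproduced as shown below; reading 2,545,896 as the number of unordered pairs of areal cells is an observation consistent with the figure in the text, not an explicit statement of the disclosure.

    # Arithmetic behind the search-space size quoted above.
    import math
    planar_cells = 61 * 37                      # 2,257 areal positions
    print(math.comb(planar_cells, 2))           # 2545896 candidate pairs of vertical wells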
First, to construct the first-stage grid system, n1 was set to 5 and each 5 × 5 block of basic cells was combined into one cell. The first-stage grid system is shown in FIG. 14: the heavy solid lines are the constructed first-stage grid system, the black cells indicate positions where a wellbore can be placed in this system, and the thin solid lines represent the basic grid system. In the first-stage grid system, the number of ways to place the two wellbores, i.e., the search space, is sharply reduced to a total of 3,486. The global-solution algorithm shown in FIG. 8 is then applied to this grid system. FIG. 15 shows the process of constructing and rebuilding the artificial neural network through the procedure of FIG. 8; FIGS. 15(a) to 15(d) correspond to FIGS. 10 to 13 above.
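A minimal sketch of building the first-stage grid system is given below. The exclusion of boundary and existing-well cells is not reproduced, and noting that 3,486 equals C(84, 2), i.e., pairs drawn from 84 admissible coarse cells, is an inference consistent with the quoted figure rather than an explicit statement of the text.

    # Group 5 x 5 blocks of basic cells into stage-1 coarse cells.
    import math
    n1 = 5
    coarse_cells = {(i // n1, j // n1) for i in range(61) for j in range(37)}
    print(len(coarse_cells))                    # 104 coarse cells (13 x 8) before exclusions
    print(math.comb(84, 2))                     # 3486, the stage-1 search-space size quoted above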
The data point marked with a triangle at the upper right of FIG. 15(d) is the global solution of the first stage, i.e., the optimal locations of the plurality of wellbores.
With r1 = 5, the hatched cells in FIG. 16 are the solutions of the top five ranks collected from the final result of FIG. 15(d). Because many cells overlap among the top five ranks, only six cells were actually selected.
The second-stage grid system was constructed by slightly expanding the region around the top-five-rank cells of FIG. 16. In FIG. 17 the second-stage grid system is drawn with heavy solid lines and the admissible drilling positions are shown in black. The size of the search space in the second-stage grid system is 280; the artificial neural network model of FIG. 8 was constructed again to obtain the global solution, and the top five ranks are shown in FIG. 18. The hatched cells in FIG. 18 correspond to the top-five-rank solutions; this region is expanded slightly further and set as the search space, and the third-stage grid system is then constructed as shown in FIG. 19. Referring to FIG. 19, the third-stage grid system has the same cell size as the basic grid system, and the search space contains 2,800 cases (input data).
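The slight expansion of the top-rank region before re-gridding can be sketched as below. The two-cell margin and the example coordinates are illustrative assumptions; the text only requires a modest expansion around the hatched cells.

    # Sketch of expanding the hatched top-rank cells into the next search space.
    def expand(cells, margin, ni=61, nj=37):
        grown = set()
        for (i, j) in cells:
            for di in range(-margin, margin + 1):
                for dj in range(-margin, margin + 1):
                    if 0 <= i + di < ni and 0 <= j + dj < nj:
                        grown.add((i + di, j + dj))
        return grown

    top_rank_cells = {(20, 10), (21, 10), (25, 12)}   # toy coordinates for the hatched cells
    next_search_space = expand(top_rank_cells, margin=2)
    print(len(next_search_space))                     # basic cells available at the next stage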
The global solution is then derived for the third-stage grid system, again using the artificial neural network model shown in FIG. 8. The process of constructing and rebuilding the artificial neural network is shown sequentially in FIGS. 20(a) to 20(d). The point marked with a triangle at the upper right of FIG. 20(d) is the final global solution, and its input data are shown on the third-stage grid system in FIG. 21.
Because the cell size of the third-stage grid system is the same as the basic cell size, the grid system is not updated further and the solution derived in the third stage is determined to be the final solution. The two cells marked r in FIG. 21 are the optimal locations at which the two wellbores are to be installed.
For reference, FIG. 22 compares, through the computational model, the production profile of the six existing production wells with the production profile that includes the two additional wells at their optimal locations.
As described above, if the basic grid system were used directly and the artificial neural network model were applied once to find the two optimal locations, all 2,545,896 cases would have to be handled. In the present invention, by contrast, the multi-stage grid system approach requires considering only 6,566 (= 3,486 + 280 + 2,800) cases over a total of three stages.
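The reduction claimed above is simple arithmetic, reproduced here for completeness.

    # Total cases handled by the three-stage search versus the single-stage search.
    staged = 3486 + 280 + 2800
    print(staged)                               # 6566 cases in total
    print(2545896 / staged)                     # about 387.7, i.e. nearly a 400-fold reduction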
That is, using the algorithm of the present invention, which finds the global solution with a multi-stage grid system, the global solution can be found very easily even when the number of cases to be considered grows exponentially. An artificial neural network must be newly constructed at each stage, but because neural networks can now be built very quickly, this poses no problem in applying the present invention.
The "method for computing a global solution using an artificial neural network" developed by the present inventors had a practical limitation: when the input data are very numerous, it is difficult to find the true global solution by applying the method directly in a single pass. With the multi-stage approach of the present invention, however, the process of finding the optimal solution is not only very fast, but the global solution found is also more likely to coincide with the true global solution. The present invention is therefore expected to be able to replace simulations that rely solely on the conventional computational model.
As described above, the present invention is executed as a program on a computer. Accordingly, the present invention provides a recording medium storing a program for executing the above method on a computer.
Although the artificial neural network model used in the present invention has been described with reference to the form shown in FIG. 8 as an example, the present invention is not limited thereto; other global-search models, such as previously developed genetic algorithms, or other artificial neural network models may also be used.
In addition, although the present invention has been described as determining the locations of a plurality of wellbores, it can also be used to find a single wellbore.
Furthermore, although the criterion number for deciding whether to reconstruct the artificial neural network model has been described and illustrated as N, the criterion number is not limited thereto; for example, it can be extended to the case where T is smaller than a value predetermined between N and 3N, and it may be extended further to values of 3N or more.
The scope of protection of the present invention is not limited to the descriptions and expressions of the embodiments explicitly set forth above. It is further noted that the scope of protection of the present invention cannot be limited by changes or substitutions that are obvious in the technical field to which the present invention pertains.

Claims (12)

  1. (a) setting, as a search space, the entire area of a basic grid system formed by partitioning an oil or gas reservoir into a plurality of basic cells, and re-partitioning the search space into cells of a size larger than the basic cells;
    (b) evaluating, by applying an artificial neural network evaluation model, in which cell among the plurality of cells in the search space one or more wellbores are suitably located, and ranking the cells by suitability;
    (c) when the current cell size used in the evaluation step is larger than the size of the basic cell by more than a predetermined range, proceeding to the subsequent steps for reconstructing the search space, and when it is within the predetermined range, determining the result of the evaluation step to be the optimal locations;
    (d) reconstructing, as a new search space, the cells recorded in the upper ranks, up to a predetermined rank, of the cell-by-cell suitability of the evaluation step; and
    (e) re-partitioning the reconstructed search space into cells of a size equal to or larger than the basic cell size but smaller than the cell size used in the preceding evaluation step,
    wherein the cell size in the search space is gradually reduced until it becomes equal, within a predetermined range, to the size of the basic cell, and steps (b) to (e) are repeatedly performed until the optimal locations of the wellbores are determined in step (c), in a method for optimizing locations of a plurality of wellbores in an oil and gas reservoir using an artificial neural network.
  2. The method of claim 1,
    wherein, when the search space is partitioned into cells of a size larger than the basic cells in step (a) or step (e), a plurality of the basic cells are grouped together to form one cell.
  3. The method of claim 1,
    wherein, when the search space is reconstructed in step (d),
    a basic region occupied by the cells of the upper ranks and an additional region extended outward from the basic region within a predetermined range are combined and reconstructed as the search space.
  4. The method of claim 3,
    wherein the additional region is extended only up to a range that does not include the center of a cell immediately neighboring the cells of the upper ranks.
  5. The method of claim 4,
    wherein, when the additional region is determined, the additional region is extended in units of the basic cells within the cells.
  6. The method of claim 1,
    wherein the evaluation using the artificial neural network model comprises:
    an initial training data generation step of classifying, as training data, input data selected from among a plurality of data to be analyzed together with actual output data obtained by entering the input data into a computational model constructed to analyze the data to be analyzed;
    an initial search group generation step of classifying the data to be analyzed into a search group;
    a model construction step of constructing, using the training data, an artificial neural network model for predicting the data output from the computational model, determining whether the artificial neural network model needs to be reconstructed on the basis of the data output when the search group is entered into the artificial neural network model, and, when it is determined that reconstruction of the artificial neural network model is necessary, resetting the training data and reconstructing the artificial neural network model; and
    a global solution calculation step of computing a global solution when it is determined through the model construction step that reconstruction of the artificial neural network model is not necessary,
    wherein the data to be analyzed are data on the locations of the cells at which the plurality of wellbores are to be installed.
  7. The method of claim 6,
    wherein the model construction step comprises:
    a first modeling step of constructing, using the training data, an artificial neural network model for predicting the data output from the computational model;
    a second modeling step of obtaining output prediction data by entering each of the data included in the search group into the artificial neural network model;
    a third modeling step of determining, on the basis of the output prediction data, whether the artificial neural network model is to be reconstructed; and
    a fourth modeling step of determining, when it is determined through the third modeling step that reconstruction of the artificial neural network model is necessary, new training data including the existing training data and a new search group,
    wherein the first modeling step through the fourth modeling step are repeated using the new training data and the new search group.
  8. The method of claim 7,
    wherein the third modeling step comprises:
    a first selection step of selecting, from among the output prediction data computed through the second modeling step, the output prediction data corresponding to a predetermined selection criterion;
    a screening data extraction step of extracting the input data respectively corresponding to the output prediction data selected through the first selection step;
    a non-included data extraction step of extracting, from among the input data extracted through the screening data extraction step, the input data not included in the training data used to construct the artificial neural network; and
    a reconstruction determination step of determining that reconstruction of the artificial neural network model is not necessary when the number of input data extracted in the non-included data extraction step is less than a predetermined criterion number, and determining that reconstruction of the artificial neural network model is necessary when the number of input data selected in the non-included data extraction step is equal to or greater than the predetermined criterion number.
  9. The method of claim 8,
    wherein, in the third modeling step,
    the criterion number is equal to the number of data selected as the input data from among the data to be analyzed in the initial training data generation step.
  10. The method of claim 8,
    wherein the fourth modeling step comprises:
    a second selection step of selecting some of the input data extracted in the non-included data extraction step;
    a data addition step of generating new training data by adding the input data selected in the second selection step to the training data; and
    a group addition step of setting, as a new search group, the input data selected in the non-included data extraction step and the input data included in the training data.
  11. The method of claim 10,
    wherein, in the second selection step,
    data are selected from among the input data extracted in the non-included data extraction step in a number equal to, or smaller than, the number of input data selected as the training data in the initial training data generation step.
  12. The method of claim 10,
    wherein the data addition step comprises:
    an additional data acquisition step of obtaining actual output data by entering the input data selected in the second selection step into the computational model; and
    a data generation step of generating new training data by adding, to the training data, the input data selected in the second selection step and the actual output data obtained in the additional data acquisition step.
PCT/KR2018/009385 2018-03-30 2018-08-16 Method for optimizing locations of multiple wellbores in oil and gas reservoir by using artificial neural network WO2019190003A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180037167A KR102124315B1 (en) 2018-03-30 2018-03-30 Method for optimization of multi-well placement of oil or gas reservoirs using artificial neural networks
KR10-2018-0037167 2018-03-30

Publications (1)

Publication Number Publication Date
WO2019190003A1 true WO2019190003A1 (en) 2019-10-03

Family

ID=68062226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/009385 WO2019190003A1 (en) 2018-03-30 2018-08-16 Method for optimizing locations of multiple wellbores in oil and gas reservoir by using artificial neural network

Country Status (2)

Country Link
KR (1) KR102124315B1 (en)
WO (1) WO2019190003A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11875371B1 (en) 2017-04-24 2024-01-16 Skyline Products, Inc. Price optimization system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210230981A1 (en) * 2020-01-28 2021-07-29 Schlumberger Technology Corporation Oilfield data file classification and information processing systems
US11741359B2 (en) 2020-05-29 2023-08-29 Saudi Arabian Oil Company Systems and procedures to forecast well production performance for horizontal wells utilizing artificial neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101007985B1 (en) * 2009-10-26 2011-01-14 한국지질자원연구원 Method for estimating grade of minerals using neural network and recording medium recorded program therefor
KR101474874B1 (en) * 2013-05-22 2014-12-30 동아대학교 산학협력단 computing system for well placement optimization developed by SA/ANN and well placement optimization method using Thereof
KR20160143512A (en) * 2015-06-04 2016-12-14 더 보잉 컴파니 Advanced analytical infrastructure for machine learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ILSIK JANG ET AL.: "Well-placement optimisation using sequential artificial neural networks", ENERGY EXPLORATION & EXPLOITATION, vol. 36, no. 3, 6 September 2017 (2017-09-06), pages 433 - 449, XP055510156 *
KANG HYEON JEONG: "Optimization of multi-well placement using artificial intelligence", MASTER'S DEGREE, October 2017 (2017-10-01), Chosun University, Retrieved from the Internet <URL:http://www.riss.kr/search/detail/DetailView.do?p_m2t_type=be54d9b8bc7cd509&control-no=e45b784a5748aca5ffeObdc3ef48d4l9> *

Also Published As

Publication number Publication date
KR20190114450A (en) 2019-10-10
KR102124315B1 (en) 2020-06-18

Similar Documents

Publication Publication Date Title
WO2019190003A1 (en) Method for optimizing locations of multiple wellbores in oil and gas reservoir by using artificial neural network
Majdi et al. Evolving neural network using a genetic algorithm for predicting the deformation modulus of rock masses
Xiong et al. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds
Caicedo et al. A novel evolutionary algorithm for identifying multiple alternative solutions in model updating
CN112528365A (en) Method for predicting health evolution trend of underground infrastructure structure
CN110322509B (en) Target positioning method, system and computer equipment based on hierarchical class activation graph
US20210383034A1 (en) Automated steel structure design system and method using machine learning
Maxwell et al. Cell-aware diagnosis: Defective inmates exposed in their cells
CN115544264B (en) Knowledge-driven intelligent construction method and system for digital twin scene of bridge construction
CN108121530A (en) A kind of conceptual design analysis method of multidisciplinary complex product
Croce et al. Connecting geometry and semantics via artificial intelligence: from 3D classification of heritage data to H-BIM representations
CN101894063B (en) Method and device for generating test program for verifying function of microprocessor
Rusek et al. Bayesian networks and Support Vector Classifier in damage risk assessment of RC prefabricated building structures in mining areas
Garcia et al. Automatic generation of geological stories from a single sketch
CN115203797A (en) Method for judging beam support by combining intelligent identification with manual reinspection
CN116307015A (en) Environmental performance prediction method and device based on pix2pix
Whiteman et al. Convolutional Neural Network Approach for Vibration-Based Damage State Prediction in a Reinforced Concrete Building
CN114414090A (en) Surface temperature prediction method and system based on remote sensing image and multilayer sensing
CN109101643A (en) The building of data information table, anti-pseudo- point global registration method, apparatus and robot
Cheng et al. Optimization of life-cycle cost of retrofitting school buildings under seismic risk using evolutionary support vector machine
Abidin et al. A Computerized Tool Based on Cellular Automata and Modified Game of Life for Urban Growth Region Analysis
KR102525249B1 (en) Method, device and system for monitoring and analyzing anomalies in photovoltaic power plants through artificial intelligence-based image processing
CN115267883B (en) Earthquake response prediction model training and predicting method, system, equipment and medium
JP4468736B2 (en) Similar image retrieval device, similar image retrieval method, and similar image retrieval program
GHADIMI et al. Structural damage prognosis by evaluating modal data orthogonality using chaotic imperialist competitive algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18911699

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18911699

Country of ref document: EP

Kind code of ref document: A1