WO2002010797A1 - System and method for optimizing a mining operation - Google Patents

System and method for optimizing a mining operation

Publication number
WO2002010797A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
neural network
mine
seismic
training
Prior art date
Application number
PCT/US2001/023428
Other languages
English (en)
Inventor
Ronald R. Bush
Original Assignee
Scientific Prediction, Inc.
Priority date
Filing date
Publication date
Application filed by Scientific Prediction, Inc.
Priority to AU2001280782A1
Publication of WO2002010797A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/30Analysis

Definitions

  • This invention relates to mining operations.
  • this invention is drawn to a system and method for optimizing a mining operation by gathering seismic data during the mining operation.
  • the present invention relates to a system, method, and process for delineating objects in one (1), two (2), or three (3) dimensional space from data that contains patterns related to the existence of said objects.
  • seismic data frequently contains patterns from which hydrocarbon accumulations can be detected through the identification of bright spots, flat spots, and dim spots.
  • training sets consisting of data from areas where it is known that certain conditions exist and do not exist. In the case of hydrocarbon accumulations and prior to the disclosures of the present invention, this would have required expensive drilling of oil and gas wells before the data for the training sets could have been acquired.
  • Automated delineation of hydrocarbon accumulations from seismic data will be used as a non-exclusive, actual example to describe the system, method, and process of the present invention.
  • the method disclosed is also applicable to a wide range of applications other than hydrocarbon accumulations, such as but not limited to, aeromagnetic profiles, astronomical clusters from radio-telescope data, weather clusters from radiometers, objects from radar, sonar, and infrared returns, etc.
  • Many other applications will be obvious to those skilled in the pertinent art. Accordingly, it is intended by the appended claims to cover all such applications as fall within the true spirit and scope of the present invention.
  • the method of the present invention provides a process of spatially delineating accumulations of various types and properties. For example, it provides an automated process for delineating hydrocarbon accumulations from seismic data.
  • One particular hydrocarbon accumulation is the gas below the cap, i.e. gas cap, in an oil and/or gas field. Being able to accurately delineate the gas cap, from 2D and 3D seismic data, before the interpretation process even begins, will prove to be very valuable to the oil and gas industry. See, for example, U.S. Pat. Nos. 4,279,307,
  • U.S. Pat. No. 5,732,697 discloses a "Shift-Invariant Artificial Neural Network for Computerized Detection of Clustered Microcalcifications in Mammography." In this disclosure "a series of digitized medical images are used to train an artificial neural network to differentiate between diseased and normal tissue." The present invention might also find application in delineating diseased tissue from the normal or healthy tissue.
  • U.S. Pat. No. 5,775,806 discloses an Infrared Assessment System for evaluating the "functional status of an object by analyzing its dynamic heat properties using a series of infrared images." The present invention might also be used to delineate zones of differing functionality in a series of infrared images.
  • U.S. Pat. No. 5,777,481 discloses an invention that uses "atmospheric radiation as an indicator of atmospheric conditions." The present invention can be used to delineate the regions of atmospheric water vapor, cloud water, and ice; and it might be used in conjunction with the cited patent to also identify the content of the regions delineated.
  • the present invention provides such a system, method, and process.
  • neural networks can be used to delineate spatially dependent objects from patterns in the data acquired from some sensing process. It is yet another objective of the present invention to disclose how the technique is applied to the automated delineation of hydrocarbon accumulations from seismic data.
  • This objective is accomplished by combining the methods for detecting and delineating hydrocarbon accumulations, and subdivisions within the accumulations, directly from seismic data with a priori knowledge related to completion times, production, and pressure properties, thereby providing a method for reservoir simulation based on the actual parameters present in a particular hydrocarbon accumulation.
  • the system, method, and process of the present invention are based on the utilization of a neural network to discriminate between differing regions, accumulations, or clusters that can be detected from the patterns present in the data arising out of some sensing process.
  • the neural network classifies particular areas of the data as being either In or Out of a particular region, accumulation, or cluster.
  • a method of the invention for mining a targeted material comprising the steps of setting one or more charges in a mine to facilitate excavation of the mine, gathering seismic data from one or more of the charges, and determining locations of the targeted material based on the gathered seismic data.
  • Another embodiment of the invention provides a method of mining a desired material in a mine comprising the steps of setting a charge in the mine to facilitate the creation and extension of one or more mine shafts in the mine, taking a seismic survey from the set charge, and delineating spatially dependent objects based on the seismic survey.
  • Another embodiment of the invention provides a method of forming mine shafts in a mine comprising the steps of setting a charge in the mine, gathering seismic data from the set charge, determining locations of spatially dependent objects in the mine from the seismic data, and setting another charge in the mine based on the determined locations of the spatially dependent objects in the mine.
  • FIG. 1 is a schematic diagram of a neural network.
  • FIG. 2 shows a schematic diagram of the conceptual sliding window used by the present invention.
  • FIG. 3 shows information flow between the layers of a neural network while using back propagation for training.
  • FIG. 4 shows a neural network with an input layer, a hidden layer and an output layer.
  • FIG. 5 depicts the relationship between training data, test data, and the complete data set.
  • FIG. 6 shows the steps required for training the neural network.
  • FIG. 7(a) shows a hard-limited activation function.
  • FIG. 7(b) shows a threshold logic activation function.
  • FIG. 7(c) shows a sigmoid activation function.
  • FIG. 8 depicts an embodiment of a node in a neural network.
  • FIG. 9 shows a neural network model with its weights indicated.
  • FIG. 10 shows the contrast of the mean squared error as it is related to the variance from a test set.
  • FIG. 11 shows a flow chart of the typical process to be followed in delineating a spatially dependent object.
  • FIG. 12 shows a hypothetical seismic layout.
  • FIG. 13 shows a Common Depth Point (CDP) gather.
  • FIG. 14 shows a hypothetical seismic layout with a split-sliding window.
  • FIG. 15 shows a hypothetical seismic layout in a hypothetical Oil and Gas field.
  • FIGS. 16-19 are plan views illustrating the optimization of a mining operation using seismic surveys.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Node: a single neuron-like computational element in a neural network.
  • Weight: an adjustable value or parameter associated with a connection between nodes in a network. The magnitude of the weight determines the intensity of the connection. Negative weights inhibit node firing while positive weights enable node firing.
  • Connection: a pathway between nodes, corresponding to the axons and synapses of neurons in the human brain, that connects the nodes into a network.
  • Training Law: an equation that modifies all or some of the weights in a node's local memory in response to input signals and the values supplied by the activation function.
  • the equation enables the neural network to adapt itself to examples of what it should be doing and to organize information within itself and thereby learn.
  • Learning laws for weight adjustment can be described as supervised learning or unsupervised learning or reinforcement learning.
  • Supervised learning assumes that the desired output of the node is known or can be determined from an overall error. This is then used to form an error signal, which is used to update the weights.
  • In unsupervised learning the desired output is not known and learning is based on input/output values.
  • In reinforcement learning the weights associated with a node are not changed in proportion to the output error associated with a particular node but instead are changed in proportion to some type of global reinforcement signal.
  • Activation function or "transfer function": a formula that determines a node's output signal as a function of the most recent input signals and the weights in local memory.
  • Back propagation: in a neural network, the supervised learning method in which an output error signal is fed back through the network, altering connection weights so as to minimize that error.
  • Input layer: the layer of nodes that forms a passive conduit for data entering a neural network.
  • Hidden layer: a layer of nodes not directly connected to a neural network's input or output.
  • Output layer: a layer of nodes that produces the neural network's results.
  • Optimum Training Point is that point in the training of a neural network where the variance of the neural network has reached a minimum with respect to results from a test set 202 which is, in the case of the present invention, taken from the conceptual sliding window 205 that is comprised of data from some sensing process.
  • the invention described below relates in general to a method and system for data processing and, in particular, to a method and system for the automated delineation of anomalies or objects in one, two, and/or three dimensional space from data that contains patterns related to the existence of the objects.
  • seismic data frequently contains patterns from which hydrocarbon accumulations can, by use of the present invention, be detected and delineated through the use of neural networks.
  • Using the invention in this manner may include the following steps. First, developing a neural network. Second, applying the neural network to the entire seismic survey. Third, using the neural network to predict production from contemplated wells.
  • the invention is based on the utilization of a neural network to discriminate between differing regions, accumulations, or clusters of hydrocarbon accumulations that can be detected from the patterns present in seismic data.
  • the neural network classifies particular areas of the data as being either In or Out of a particular region, accumulation, or cluster.
  • the present invention provides a method for automating the process of analyzing and interpreting seismic data.
  • a neural network architecture(s) 101 having an input layer, one or more hidden layers, and an output layer, where each layer has one or more nodes and all nodes in the input layer are connected to an adjacent but different portion of the data from some sensing process.
  • Each node in the input layer is connected to each node in the first, and possibly only, hidden layer, each node in the first hidden layer is connected to each node in the next hidden layer, if it exists, and each node in the last hidden layer is connected to each node in the output layer.
  • Each connection between nodes has an associated weight.
  • the output layer outputs a classification 109 (described below).
  • Neural network 101 further includes a training process (not illustrated in FIG. 1) for determining the weights of each of the connections of the neural network.
  • a conceptual sliding window composed of a training/test set combination, consisting of three adjacent lines each of which contains linearly adjacent portions of the data derived from some sensing process (described in more detail below).
  • the middle of the three lines shown in FIG. 2 comprises the training set 201, while the outer two lines make up the test set 202.
  • approximately half of the data in each of the three lines is pre-assigned the classification of Out while the other half is pre-assigned the classification of In.
  • Each of the three lines of data is adjacent to one another, and each data point within each line is linearly adjacent to its closest neighboring data point.
  • the classifications of Out and In are contiguous while making up approximately half of the data points in each line.
  • all of the lines, which for the exemplary case is three, are spatially aligned with one another.
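As a concrete illustration of the window layout described in the preceding bullets, the following is a minimal sketch, not taken from the patent; the NumPy array layout, names, and shapes are illustrative assumptions only.

```python
# Hypothetical sketch of the three-line sliding window: the middle line is
# the training set, the two outer lines form the test set, and the first
# half of each line is labelled Out (0) while the second half is In (1).
import numpy as np

def build_sliding_window(lines, start, length):
    """lines: array of shape (3, n_points, n_features) holding three adjacent,
    spatially aligned lines of sensor data (an assumed layout)."""
    window = lines[:, start:start + length]           # three aligned segments
    labels = np.zeros(length)
    labels[length // 2:] = 1                           # half Out, half In
    train_x, train_y = window[1], labels               # middle line -> training set
    test_x = np.concatenate([window[0], window[2]])    # outer lines -> test set
    test_y = np.concatenate([labels, labels])
    return train_x, train_y, test_x, test_y
```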
  • the training process applies training set 201 to the neural network in an iterative manner, where the training set is formed from the middle line in the sliding window derived from the data arising out of the sensing process. Following each iteration, the training process determines a difference between the classification produced by the neural network and the classification assigned in the training set. The training process then adjusts the weights of the neural network based on the difference. The error assigned to each node in the network may be assigned by the training process via the use of back propagation.
  • cessation of training is optimized by executing the following process after each of the training iterations: saving the neural network weights, indexed by iteration number; testing the neural network on the test set 202 portion of the sliding window which is separate from the data in the training set 201; calculating the difference, which is herein referred to as the variance, between the classification produced by the neural network on the test set and the test set's pre-assigned classification; saving the iteration number and current variance when the current variance is less than any preceding variance; and monitoring the variance until it has been determined that the variance is increasing instead of decreasing.
  • the iteration number at which the lowest value of the variance was achieved, is then utilized to retrieve the optimal set of neural network weights for the current position of the sliding window.
  • the variance between the optimal fit to the test set and the values pre-assigned to the test set can either be obtained by applying the optimal set of neural network weights to the test set or by retrieving the variance from storage, if it has been previously stored by the training process during the iterative process .
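The cessation-of-training procedure in the preceding bullets can be outlined as follows; this is an illustrative sketch only, and the network object with its train_one_iteration, classify, get_weights, and set_weights methods is a hypothetical stand-in, not an API from the patent.

```python
# Train one window position to its optimum point: save weights per iteration,
# compute the variance on the test set, and stop once the variance has stopped
# making new minima (i.e. has begun to increase).
def train_to_optimum(net, train_x, train_y, test_x, test_y, patience=50, max_iter=10000):
    saved = {}                                  # weights indexed by iteration number
    best_var, best_iter = float("inf"), 0
    for it in range(1, max_iter + 1):
        net.train_one_iteration(train_x, train_y)
        saved[it] = net.get_weights()
        variance = float(((net.classify(test_x) - test_y) ** 2).mean())
        if variance < best_var:
            best_var, best_iter = variance, it  # new minimum: remember it
        elif it - best_iter > patience:         # variance has been rising: stop
            break
    net.set_weights(saved[best_iter])           # retrieve the optimal weight set
    return best_var, best_iter
```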
  • the sliding window 205 is advanced one data point in relation to the data from the sensing process. That is, starting from the left, the first Out points are dropped from each of the three lines comprising the sliding window. Next, the first three In points become Out points; and finally three new In points are added to the sliding window.
  • the window may move from left to right, right to left, top to bottom, or bottom to top.
  • the neural network training process then begins again and culminates in a new variance at the optimum cessation of training point. While the sliding window remains entirely outside of a region, accumulation, or cluster the variances at each position of the sliding window will remain high and close to constant. As the sliding window enters a region, accumulation, or cluster to be detected the variance will begin to drop and it will reach a minimum when the sliding window is centered on the edge of the region, accumulation, or cluster to be detected.
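A small sketch of the one-data-point advance described above; the list-based bookkeeping and the n_out parameter are assumptions made for illustration.

```python
# Advance the window one position: drop the leading Out point of each line,
# append one new point at the trailing (In) end, and recompute the labels so
# that the Out/In boundary shifts by one point.
def advance_window(window_lines, new_points, n_out):
    """window_lines: three aligned lists of data points; new_points: the next
    point from each line; n_out: how many points per line are labelled Out."""
    for line, point in zip(window_lines, new_points):
        line.pop(0)          # the first Out point is dropped
        line.append(point)   # one new In point is added
    length = len(window_lines[0])
    labels = [0] * n_out + [1] * (length - n_out)   # first former In point is now Out
    return window_lines, labels
```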
  • the region, accumulation, or cluster can be delineated by presenting the complete data to the neural network weights that were obtained where the edge was detected.
  • the present invention is a neural network method and system for delineating spatially dependent objects such as hydrocarbon accumulations.
  • the process relies on a neural network to generate a classification.
  • FIG. 1 shows a neural network 101, input data from a sliding window 105, preprocessing block 107, and a classification as to Out or In 109.
  • the neural network 101 generates a classification 109 from input data applied to its input layer.
  • the inputs to the neural network are selected from the data arising out of some sensing process .
  • the preprocessing block 107 as shown in FIG. 1 may preprocess data input to the neural network. Preprocessing can be utilized, for example, to normalize the input data.
  • FIG. 2 depicts a sliding window 205 comprised of a combination training set 201 and a test set 202.
  • the sliding window 205 comprised of the training/test set combination consists, in the exemplary embodiment, of three adjacent lines each of which contains linearly adjacent portions of the data derived from the seismic data (FIG. 14).
  • the middle of the three lines 201 shown in Fig. 2 comprises the training set, while the outer two lines 202 make up the test set.
  • Approximately, and preferably, half of the data in each of the three lines is assigned the classification of Out while the other half is assigned the classification of In.
  • FIG. 5 depicts the relationship between the complete data 509, the sliding window 505, the training data 501, and the test data 502 for an arbitrary point in the complete data from some sensing process.
  • the neural network 101 operates in four basic modes: training, testing, operation and retraining.
  • training the neural network 101 is trained by use of a training process that presents the neural network with sets of training data.
  • the training set 201 consists of linearly adjacent data divided approximately equally into Out and In classifications.
  • the neural network 101 generates a classification based on the similarity or diversity of the data in the training set. This classification is then compared with the classifications previously assigned in the training set. The difference between the classification produced by the neural network and the pre-assigned classifications is the error value used to adjust the weights.
  • the test set 202 is presented to the neural network. This test set 202 consists of adjacent data taken from the sensing process. The test set 202 is also pre-assigned the classifications of Out and In as for the training set 201, but the data in the test set 202 does not duplicate any of the data in the training set 201.
  • the test set 202 data is taken from adjacent lines, and it is spatially aligned with and taken from both sides of the training data.
  • the classification resulting from the test set 202 being presented to the neural network is then compared with the pre-assigned classifications from the test set 202 and a variance 1001 is calculated.
  • the variance 1001 is monitored at the end of each iteration to determine the point when the variance starts increasing, see FIG. 10 and the variance curve 1001. At the point where the variance 1001 starts increasing, i.e. has reached a minimum, training is halted.
  • the neural network weights (FIG. 9) that occurred at the point where the minimum variance 1001 was obtained are either retrieved from storage, if they were stored during the iterative process, or they are recalculated to obtain the optimal set of neural network weights for the current position of the sliding window 205.
  • the variance 1001, between the test set 202 classifications as calculated by the neural network at the optimal cessation of training point and the pre-assigned values in the test set 202, can either be obtained by applying the optimal set of neural network weights to the test set 202 or by retrieving the variance 1001 from storage, if it has been previously stored by the training process during the iterative process.
  • the sliding window 205 is advanced one data point in relation to the data from the sensing process. That is, starting from the left, the first Out points are dropped from each of the three lines comprising the sliding window 205. Next, the first three In points become Out points; and finally three new In points are added to the sliding window 205.
  • the neural network training process then begins again and culminates in a new variance 1001 at the optimum cessation of training point. While the sliding window 205 remains entirely outside of a region, accumulation, or cluster the variances 1001 at each position of the sliding window 205 will remain high and close to constant. As the sliding window 205 enters a region, accumulation, or cluster to be detected the variance 1001 will begin to drop and it will reach a minimum when the sliding window 205 is centered on the edge of the region, accumulation, or cluster to be detected.
  • FIG. 6 describes the training and test modes of the neural network.
  • the region, accumulation, or cluster can be delineated by presenting the complete data 509 to the neural network weights that were obtained where the edge was detected. This mode of operation is called operational mode.
  • Neural networks are trained by a training process that iteratively presents a training set to the neural network through its input layer 405.
  • the goal of the training process is to minimize the average sum-squared error 1003 over all of the training patterns. This goal is accomplished by propagating the error value back after each iteration and performing appropriate weight adjustments (FIG. 6).
  • the weights (FIG. 9) in the neural network begin to take on the characteristics or patterns in the data. Determining when, i.e. the iteration number at which, the neural network has taken on the appropriate set of characteristics has, prior to the method disclosed in U.S. Patent 6,119,112, "Optimum Cessation of Training in Neural Networks," (incorporated by reference herein) been a problem.
  • Artificial or computer neural networks are computer simulations of a network of interconnected neurons.
  • a biological example of a neural network is the interconnected neurons of the human brain. It should be understood that the analogy to the human brain is important and useful in understanding the present invention.
  • the neural networks of the present invention are computer simulations, which provide useful classifications based on input data provided in specified forms, which in the case of the present invention is data from some sensing process.
  • a neural network can be defined by three elements: a set of nodes, a specific topology of weighted interconnections between the nodes and a learning law, which provides for updating the connection weights.
  • a neural network is a hierarchical collection of nodes (also known as neurons, neurodes, elements, processing elements, or perceptrons), each of which computes the results of an equation (transfer or activation function).
  • the equation may include a threshold.
  • Each node's activation function uses multiple input values but produces only one output value.
  • the outputs of the nodes in a lower level (that is closer to the input data) can be provided as inputs to the nodes of the next highest layer.
  • the highest layer produces the output(s).
  • a neural network where all the outputs of a lower layer connect to all nodes in the next highest layer is commonly referred to as a feed forward neural network.
  • FIG. 4 a representative example of a neural network is shown. It should be noted that the example shown in FIG. 4 is merely illustrative of one embodiment of a neural network. As discussed below other embodiments of a neural network can be used with the present invention.
  • the embodiment of FIG. 4 has an input layer 405, a hidden layer 403, and an output layer 401.
  • the input layer 405 includes a layer of input nodes which take their input values 407 from the external input which, in the case of the present invention, consists of data from some sensing process and pre-assigned Out/In classifications.
  • the input data is used by the neural network to generate the output 409 which corresponds to the classification 109.
  • Although input layer 405 is referred to as a layer of the neural network, input layer 405 does not contain any processing nodes; instead it uses a set of storage locations for input values.
  • the next layer is called the hidden or middle layer 403.
  • a hidden layer is not required, but is usually used. It includes a set of nodes as shown in FIG. 4. The outputs from nodes of the input layer 405 are used as inputs to each node in the hidden layer 403. Likewise the outputs of nodes of the hidden layer 403 are used as inputs to each node in the output layer 401. Additional hidden layers can be used. Each node in these additional hidden layers would take the outputs from the previous layer as their inputs. Any number of hidden layers can be utilized.
  • the output layer 401 may consist of one or more nodes. As their input values they take the output of nodes of the hidden layer 403.
  • the output(s) of the node(s) of the output layer 401 are the classification(s) 409 produced by the neural network using the input data 407 which, in the case of the present invention, consists of data from some sensing process and the pre-assigned classifications.
  • Each connection between nodes in the neural network has an associated weight, as illustrated in FIG. 9. Weights determine how much relative effect an input value has on the output value of the node in question. Before the network is trained, as illustrated in the flow chart of FIG. 6, random values 600 are selected for each of the weights. The weights are changed as the neural network is trained. The weights are changed according to the learning law associated with the neural network (as described below) .
  • the neural network shown in FIG. 4 is a fully connected feed forward neural network.
  • a neural network is built by specifying the number, arrangement and connection of the nodes of which it is comprised.
  • the configuration is fairly simple. For example, in a fully connected network with one middle layer (and of course including one input and one output layer), and no feedback, the number of connections and consequently the number of weights is fixed by the number of nodes in each layer. Such is the case in the example shown in FIG. 4.
  • the total number of nodes in each layer has to be determined. This determines the number of weights and total storage needed to build the network. Note that more complex networks require more configuration information, and therefore more storage.
  • the present invention will shortly disclose a method for the selection of the appropriate number of nodes and activation function to include in a neural network used to delineate spatially dependent objects.
  • the present invention contemplates many other types of neural network configurations for use in delineating spatially dependent objects. All that is required for a neural network is that the neural network be able to be trained so as to provide the needed classification(s).
  • Input data 407 is provided to input storage locations called input nodes in the input layer 405.
  • the hidden layer 403 nodes each retrieve the input values from all of the inputs in the input layer 405.
  • Each node has a weight with each input value.
  • Each node multiplies each input value times its associated weight, and sums these values for all of the inputs. This sum is then used as input to an equation (also called a transfer function or activation function) to produce an output or activation for that node.
  • the processing for nodes in the hidden layer 403 can be performed in parallel, or it can be performed sequentially.
  • the output values or activations would then be computed. For each output node, the output values or activations from each of the hidden nodes is retrieved. Each output or activation is multiplied by its associated weight, and these values are summed. This sum is then used as input to an equation which produces as its result the output data or classification 409. Thus, using input data 407 a neural network produces a classification or output 409, which is the predicted classification.

Nodes
  • a typical node is shown in FIG. 8.
  • the output of the node is a nonlinear function of the weighted sum of its inputs.
  • the input/output relationship of a node is often described as the transfer function or activation function.
  • the activation function can be represented symbolically as Output = f(ΣWiXi).
  • the activation function determines the activity level or excitation level generated in the node as a result of an input signal of a particular size. Any function may be selected as the activation function. However, for use with back propagation a sigmoidal function is preferred.
  • the sigmoidal function is a continuous S-shaped monotonically increasing function which asymptotically approaches fixed values as the input approaches plus or minus infinity. Typically the upper limit of the sigmoid is set to +1 and the lower limit is set to either 0 or -1.
  • a sigmoidal function is shown in FIG. 7(c) and can be represented as follows: f(x) = 1 / (1 + e^(-(x - T)))
  • where x is a weighted input (i.e., ΣWiXi) and T is a simple threshold or bias.
  • the threshold T in the above equation can be eliminated by including a bias node in the neural network.
  • the bias node has no inputs and outputs a constant value (typically a +1) to all output and hidden layer nodes in the neural network.
  • the weight that each node assigns to this one output becomes the threshold term for the given node.
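A minimal sketch of the sigmoid just described, assuming the conventional form f(x) = 1/(1 + e^-(x-T)); the bias-node equivalence noted above simply folds T into an extra weight on a constant +1 input.

```python
import math

def sigmoid(x, T=0.0):
    """S-shaped activation approaching 0 and 1 asymptotically; T is the threshold."""
    return 1.0 / (1.0 + math.exp(-(x - T)))

def node_output(inputs, weights, T=0.0):
    """Weighted sum of the inputs passed through the sigmoid."""
    x = sum(w * v for w, v in zip(weights, inputs))
    return sigmoid(x, T)
```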
  • This neural network has an input layer that distributes the weighted input to the hidden layer, which then transforms that input and passes it to an output layer, which performs a further transformation and produces an output classification.
  • the hidden layer contains three nodes H1, H2, and H3 as shown in FIG. 9. Each node acts as a regression equation by taking the sum of its weighted inputs as follows:
  • Hi(in) = W0i + W1iX1 + ... + WniXn
  • each hidden node transforms this input using a sigmoidal activation function such that:
  • Hi(out) = 1 / (1 + e^(-Hi(in))), where Hi(out) is the output of hidden node Hi.
  • the output of each hidden node is multiplied by the weight of its connection to the output node (i.e., bi).
  • the results of these multiplications are summed to provide the input to the output layer node; thus the input of the activation function of the output node is defined as: Y(in) = b0 + b1H1(out) + b2H2(out) + b3H3(out)
  • the forecast or predicted value, Y, is obtained by a sigmoidal transformation of this input: Y = 1 / (1 + e^(-Y(in)))
  • connection weights [(W01, ..., Wn1), (W02, ..., Wn2), (W03, ..., Wn3)], [b0, b1, b2, b3] are determined through training. See the section below that describes training of the neural network. Note that although a sigmoidal activation function is the preferred activation function, the present invention may be used with many other activation functions.
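The forward pass implied by the preceding equations can be sketched as below; this is an illustrative NumPy rendering with assumed array shapes, not code from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, w0, b, b0):
    """x: input vector (n,); W: hidden weights (3, n); w0: hidden biases (3,);
    b: output weights (3,); b0: output bias. Returns hidden outputs and Y."""
    h_in = w0 + W @ x             # Hi(in) = W0i + W1i*X1 + ... + Wni*Xn
    h_out = sigmoid(h_in)         # Hi(out), one value per hidden node
    y_in = b0 + b @ h_out         # input to the output node
    return h_out, sigmoid(y_in)   # Y, the predicted classification
```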
  • FIG. 7(a) depicts a hard-limiter activation function.
  • FIG. 7(b) depicts a threshold logic activation function.
  • FIG. 7(c) depicts a sigmoidal activation function. Other activation functions may be utilized with the present invention as well.
  • a neural network accepts input data 407 via its input layer 405 (FIG. 4).
  • this input takes the form of data from some sensing process as well as pre-assigned classifications as to Out or In.
  • the optimal training point variance 1001 is lower than it is at points adjacent to the edge location of the sliding window 205.
  • each connection between nodes in the neural network has an associated weight .
  • Weights determine how much relative effect an input value has on the output value of the node in question.
  • random values are selected for each of the weights.
  • the weights are changed as the neural network is trained.
  • the weights are changed according to the learning law associated with the neural network.
  • the weights used in a neural network are adjustable values which determine (for any given neural network configuration) the predicted classification for a given set of input data.
  • Neural networks are superior to conventional statistical models for certain tasks because neural networks can adjust these weights automatically and thus they do not require that the weights be known a priori.
  • neural networks are capable of building the structure of the relationship (or model) between the input data and the output data by adjusting the weights, whereas in a conventional statistical model the developer must define the equation(s) and the fixed constant(s) to be used in the equation.
  • Training a neural network requires that training data 201 (FIG. 2) be assembled for use by the training process. In the case of the present invention, this consists of the data from some sensing process and pre-assigned classifications as to Out or In.
  • the training process then implements the steps shown in FIG. 6 and described below. Referring now to FIG. 6, the present invention is facilitated by, but not dependent on, this particular approach for training the neural network.
  • In step 600 the weights are initialized to random values. When retraining the neural network, step 600 may be skipped so that training begins with the weights computed for the neural network from the previous training session(s).
  • In step 601 a set of input data is applied to the neural network.
  • this input causes the nodes in the input layer to generate outputs to the nodes of the hidden layer, which in turn generates outputs to the nodes of the output layer which in turn produces the classification required by the present invention.
  • This flow of information from the input nodes to the output nodes is typically referred to as forward activation flow. Forward activation is depicted on the right side of FIG. 3.
  • Associated with the input data applied to the neural network in step 601 is a desired (actual or known or correct) output value.
  • this consists of the pre-assigned Out/In classifications, although they are not actually known in this case.
  • the classification produced by the neural network is compared with the pre-assigned classifications .
  • the difference between the desired output, i.e. pre-assigned classifications, and the classification produced by the neural network is referred to as the error value.
  • This error value is then used to adjust the weights in the neural network as depicted in step 605.
  • One suitable approach for adjusting weights is called back propagation (also commonly referred to as the generalized delta rule).
  • Back propagation is a supervised learning method in which an output error signal is fed back through the network, altering connection weights so as to minimize that error.
  • Back propagation uses the error value and the learning law to determine how much to adjust the weights in the network.
  • the error between the forecast output value and the desired output value is propagated back through the output layer and through the hidden layer (s).
  • Back propagation distributes the overall error value to each of the nodes in the neural network, adjusting the weights associated with each node's inputs based on the error value allocated to it. The error value is thus propagated back through the neural network. This accounts for the name back propagation. This backward error flow is depicted on the left-hand side of FIG. 3.
  • the node's weights can be adjusted.
  • One way of adjusting the weights for a given node is as follows: Wnew = Wold + βEX, where:
  • E is the error signal associated with the node;
  • X represents the inputs (i.e., as a vector);
  • Wold is the current weights (represented as a vector);
  • Wnew is the weights after adjustment; and
  • β is a learning constant or rate.
  • The learning constant β can be thought of as the size of the steps taken down the error curve.
  • an error value for each node in the hidden layer is computed by summing the errors of the output nodes each multiplied by its associated weight on the connection between the hidden node in the hidden layer and the corresponding output nodes in the output layer. This estimate of the error for each hidden layer node is then used in the manner described above to adjust the weights between the input layer and the hidden layer.
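The weight adjustment and the hidden-layer error estimate just described can be sketched as follows; beta stands for the learning constant, and the array shapes are assumptions made for illustration.

```python
import numpy as np

def adjust_weights(w_old, error, inputs, beta=0.1):
    """Delta-rule form given above: W_new = W_old + beta * E * X."""
    return w_old + beta * error * np.asarray(inputs)

def hidden_errors(output_errors, hidden_to_output_weights):
    """Estimated error for each hidden node: the output errors summed after
    weighting by the connections from that hidden node to the output nodes.
    hidden_to_output_weights is assumed to have shape (n_output, n_hidden)."""
    return hidden_to_output_weights.T @ np.asarray(output_errors)
```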
  • In step 607 a test is used to determine whether training is complete or not. Commonly this test simply checks that the error value be less than a certain threshold over a certain number of previous training iterations, or it simply ends training after a certain number of iterations.
  • a preferred technique is to use a set of testing data 202 and measure the error generated by the testing data.
  • the testing data is generated so that it is mutually exclusive of the data used for training.
  • the neural network is allowed to train until the optimum point for cessation of training is reached.
  • the optimum training point is that point in the training of a neural network where the variance 1001 of the neural network classification has reached a minimum with respect to known results from a test set 202 taken from some sensing process and pre-assigned classifications of Out/In.
  • Although test data 202 is used to determine when training is complete, the weights are not adjusted as a result of applying the testing data to the neural network. That is, the test data is not used to train the network.
  • the weights are usually initialized by assigning them random values, step 600.
  • the neural network uses its input data to produce predicted output data as described above in step 601. These output data values are used in combination with training input data to produce error data, step 603.
  • the error data is the difference between the output from the output nodes and the target or actual data which, in the case of the present invention, consists of the pre-assigned Out/In classifications. These error data values are then propagated back through the network through the output node(s) and used in accordance with the activation function present in those nodes to adjust the weights, step 605.
  • a test on the variance 1001 is used to determine if training is complete or more training is required, step 607.
  • In reinforcement learning a global reinforcement signal is applied to all nodes in the neural network. The nodes then adjust their weights based on the reinforcement signal. This is decidedly different from back propagation techniques, which essentially attempt to form an error signal at the output of each neuron in the network. In reinforcement learning there is only one error signal which is used by all nodes.
  • each training set 501 has a set of data items 503 from some sensing process and a pre-assigned classification value Out or In.
  • the testing set 202 is identical to the training set 201 in structure, but the testing set 202 is distinctly different from the training set 201 in that it does not contain any of the same data items as the training set.
  • one of the data sets is used as the training set 201, and two other adjacent and aligned data sets are combined to form the testing set 202.
  • the test set 202 is configured with one set of data items falling on each side of the training line. The purpose of this data configuration will be disclosed shortly.
  • the preprocessing function 107 is depicted in FIG. 1.
  • Preprocessing of the input values may be performed as the inputs are being applied to the neural network or the inputs may be preprocessed and stored as preprocessed values in an input data set. If preprocessing is performed, it may consist of one or more steps. For instance, classical back propagation has been found to work best when the input data is normalized either in the range [-1, 1] or [0, 1]. Note that normalization is performed for each factor of data. For example, in the case of seismic data the amplitudes at each two-way time are normalized as a vector. The normalization step may also be combined with other steps such as taking the natural log of the input.
  • preprocessing may consist of taking the natural log of each input and normalizing the input over some interval.
  • the logarithmic scale compacts large data values more than smaller values.
  • Normalizing to the range [0.2, 0.8] uses the heart of the sigmoidal activation function.
  • Other functions may be utilized to preprocess the input values.
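A short sketch of the preprocessing described above (natural log followed by normalization into [0.2, 0.8], the heart of the sigmoid); the shift applied before the log is an assumption to keep its argument positive, and in the seismic case this would be applied per factor, e.g. to the amplitudes at each two-way time.

```python
import numpy as np

def preprocess(values, lo=0.2, hi=0.8):
    v = np.asarray(values, dtype=float)
    v = np.log(v - v.min() + 1.0)                 # log compacts the large values
    v = (v - v.min()) / (v.max() - v.min())       # normalize to [0, 1]
    return lo + v * (hi - lo)                     # rescale into [0.2, 0.8]
```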
  • Calculating the variance of the neural network's classifications from the pre-assigned classifications in the test set 202 (shown as step 609 of FIG. 6), and using this variance to determine the optimum point for ceasing further training, facilitates, but is not required by, the present invention.
  • This facilitating aspect, which is the preferred embodiment of the present invention, is now described.
  • the neural network is presented with a test set 202.
  • a variance 1001 is then calculated between the neural network's classification and the pre-assigned classifications in the test set 202. This variance is then used to determine if training has achieved the optimal response from the given neural network, step 607, in which case, training is halted.
  • Two questions associated with achieving the optimal result are 1) what constitutes the variance, and 2) how is it determined that the optimal variance has been achieved.
  • FIG. 10 two curves, that are both a function of the number of iterations that the neural network has been trained, are presented.
  • One is the mean square error 1003 derived from the training set 201, and the other is the variance 1001 derived from the test set 202.
  • the goal of the neural network while it is training, is to minimize the mean square error 1003 by adjusting the neural network weights after each training iteration.
  • the neural network fits the training set with a greater and greater degree of accuracy with each iteration, while the mean square error curve 1003 asymptotically attempts to approach zero.
  • It is possible for the neural network to fit a given pattern to any arbitrarily chosen degree of accuracy.
  • This is not the overall goal of using a neural network approach to make classifications.
  • the overall goal is to produce a neural network that will generalize on other sets of data that are presented to it. Therefore, there is a point in the iterative process when the neural network has learned the underlying patterns in the training data and is subsequently memorizing the training data including any noise that it may contain.
  • This over-fitting or over-training problem can be avoided if the neural network trains on the training data 201, but measures its ability to generalize on another set of data, called the testing data 202. This is accomplished by calculating the variance 1001 between the neural network's classification and the pre-assigned classifications from the testing data 202.
  • the variance can be any function that the system developer finds to be most appropriate for the problem at hand.
  • the variance 1001 could be the mean square error on the testing data 202, the chi-square test, or simply the number of incorrectly determined responses .
  • Step 609 in FIG. 6 represents the point, in the iterative process, at which the variance is calculated.
  • the iteration at which the variance 1001 reaches a minimum is the optimum point 1005, for any given set of testing data 202, to cease training.
  • the neural network has finished learning the pattern (s) in the training set and is beginning to over-fit or memorize the data.
  • the optimal point to cease training can also be calculated by a variety of methods. It is the point at which the variance ceases to decrease with further training and begins to increase instead. For example, this inflection point can be determined most simply by observing that the variance has not made a new minimum within some given number of iterations, or more complicatedly by performing a running linear regression on the variance for some number of iterations in the past and observing when the slope of the line becomes positive.
  • Step 609 of FIG. 6 is the point in the iterative process where the calculations to determine the minimum are carried out.
  • the neural network weights may be saved for an appropriate number of iterations in the past. These weights are indexed by the iteration number at which they were achieved. When it has been determined that the inflection point has been reached, the iteration number with the lowest value of the variance is used to retrieve the optimum neural network weights.
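The two ways of detecting the inflection point mentioned above, no new minimum within some number of iterations, or a running linear regression whose slope turns positive, can be sketched as follows; the patience and lookback values are illustrative assumptions.

```python
import numpy as np

def no_new_minimum(variances, patience=25):
    """True once the most recent minimum is more than `patience` iterations old."""
    return len(variances) - 1 - int(np.argmin(variances)) > patience

def regression_slope_positive(variances, lookback=25):
    """Fit a line to the last `lookback` variances; a positive slope means rising."""
    if len(variances) < lookback:
        return False
    recent = np.asarray(variances[-lookback:], dtype=float)
    slope = np.polyfit(np.arange(lookback), recent, 1)[0]
    return slope > 0
```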
  • U.S. Patent 6,119,112, "Optimum Cessation of Training in Neural Networks," discloses how to optimally halt the training process. This is something that has, heretofore, been a long-standing problem in the use of neural networks. However, a similar problem still exists. That is, how to determine the best number of nodes, i.e. the network architecture, and what activation function(s) to use in a specific neural network architecture. It is, therefore, one objective of the present invention to disclose how to determine the appropriate number of nodes and the activation function to use in a neural network prior to starting the overall process as illustrated in FIG. 11 for delineating spatially dependent objects.
  • the number of nodes required to best solve a particular neural network problem is primarily dependent on the overall structure of the problem, for example the number of variables, the number of observations, the number of output nodes, etc.
  • the actual data values have very little effect on the appropriate number of nodes to use.
  • the data values have much more influence on the number of training iterations that are required. Therefore, the first step 1101 in the process of delineating spatially dependent objects is to determine the best number of nodes to use. This is accomplished by configuring the sliding window 205, locating the window in some area of the data that is thought to be consistent, for example see FIG. 12, and then temporarily and consistently modifying the actual data in the area of the In portion of the sliding window 1206.
  • In step 1102 of FIG. 11 a similar process is used to determine the best activation function, examples of which are shown in FIG. 7.
  • Activation functions perform differently on different types of data, e.g. whether the data is smooth or subject to spikes can affect the performance of different activation functions. Therefore, after obtaining the best number of nodes, i.e. the network architecture, and before restoring the data to its original state, various activation functions are tried on the stationary-sliding window 1206 using the best number of nodes.
  • the variance against the test set 202 for each activation function that is tried is stored and tracked. Finally, the original data is restored, and the activation function that produced the lowest variance is selected as the activation function to use throughout the delineation process.
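Steps 1101 and 1102 can be summarized by the sketch below: with the window held stationary on the consistently modified data, candidate node counts and then candidate activation functions are tried, keeping whichever gives the lowest test-set variance. The evaluate callable is a hypothetical stand-in for configuring and training a network to its optimum point.

```python
def select_architecture(evaluate, node_counts=(2, 3, 5, 8),
                        activations=("sigmoid", "tanh", "hard_limit")):
    """evaluate(n_nodes, activation) -> variance against the test set at the
    optimum training point (supplied by the caller)."""
    best_nodes = min(node_counts, key=lambda n: evaluate(n, "sigmoid"))   # step 1101
    best_act = min(activations, key=lambda a: evaluate(best_nodes, a))    # step 1102
    return best_nodes, best_act
```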
  • this knowledge might come from aeromagnetic profiles or gravity surveys, or even from the experience and judgement of seismic interpreters and geologists.
  • FIG. 12 it is common practice to start the seismic shots outside of the suspected oil and/or gas zones and run them in lines across the area under consideration.
  • CDP gathers in a corner of the layout will be outside of a suspected oil and/or gas zone while the CDP gathers in the suspected oil and/or gas zone will be found in the middle of the seismic layout.
  • In face recognition, a difficult and important spatially dependent neural network problem, it is common to image a person's face against a uniform background.
  • In face recognition we can expect to find the person's face in the middle of the data while the background can be expected to be found in the corners. We can use this type of partial knowledge, intuition, or expectation to expedite the delineation process.
  • the third step 1103 in the process of delineating spatially dependent objects is the incorporation of partial knowledge, intuition, or expectation.
  • FIG. 14 which extends the exemplary seismic layout of FIG. 12, we see that the sliding window 1206 of FIG. 12 has been split into two portions 1401 and 1402 in FIG. 14.
  • the Out portion of the split-sliding window 1401 is made stationary in a corner of the seismic layout, while the In portion 1402, which is allowed to slide, is initially located in the middle of the seismic layout 1400.
  • the neural network, composed of both portions of the sliding window is then trained to the optimum point using the number of nodes and activation function found in steps 1101 and 1102 of the delineation process .
  • a quick convergence to a minimum variance that is small in magnitude indicates that some type of accumulation, region, or cluster exists. If the neural network does not quickly converge to a small variance, it may be desirable to move the In sliding window to another position and repeat the process. If the method of the present invention is being used to delineate a major object, full delineation of the object can often be completed after training with partial knowledge, intuition, or expectation. Thus in FIG. 11, a decision is made at block 1107 whether or not delineation is complete after completion of training. If so, the process proceeds to block 1106, which is discussed below. If, on the other hand, delineation is not complete after completion of training, the process proceeds to block 1104.
  • Information related to the process can, in some circumstances, be derived as result of the way that the sliding window is configured. If one side of the test set 202 converges while the other side does not, it can be concluded that the In portion of the sliding window is sitting on an edge of an accumulation, as shown in 505. Therefore, moving the In portion 502 of the sliding window toward the converging side, i.e. down in FIG. 5, is likely to bring about convergence across both sides of the sliding window. This is the reason for having the test set evenly configured on both sides of the training set.
  • one objective of the present invention i.e. detecting the direction in which an object, accumulation, or cluster lies when the sliding window of the present invention is sitting on the edge or corner of the object, accumulation, or cluster, is achieved for both edges.
  • the complete data set 509 is then passed against the resulting neural network weights to delineate the entire accumulation, region, or cluster.
  • This is accomplished in step 1104 of FIG. 11 by traversing the entire data set with the sliding window 1206.
  • the sliding window is not split, and it is generally started at some corner as shown in FIG. 12.
  • the training process is carried out to the optimum point as before and after each convergence the data set is advanced one data point. That is, the first Out points are dropped from each of the three lines comprising the exemplary sliding window 205. Next, the first three In points become Out points; and finally three new In points are added to the sliding window.
  • the neural network training process then begins again and culminates in a new variance at the optimum cessation of training point. While the sliding window remains entirely outside of a region, accumulation, or cluster the variances at each position of the sliding window will remain high and close to constant.
  • the variance will begin to drop and it will reach a minimum when the sliding window is centered on the edge of the region, accumulation, or cluster to be detected.
  • the complete data set 509 is passed against the resulting neural network weights to delineate the entire accumulation, region, or cluster. If significant convergence is not achieved, the existence of accumulations, regions, or clusters is unlikely.
  • This objective may be accomplished in step 1105 of FIG. 11 even when no a priori knowledge as to the existence of such sub-delineation, sub-accumulation, sub-region, or sub-cluster exists.
  • the complete sliding window 1501 is positioned at a point on the edge of the major object on a line along which a sub-object is thought to exist.
  • the sliding window is positioned completely inside the major object with the Out portion adjacent to the edge of the major object.
  • the sliding window is trained to the optimum point and then advanced as previously described. Again the variance at the optimum point is monitored to detect the window position at which the variance is a minimum.
  • the complete data set 509 or some subset of the complete data set can be passed against the resulting neural network weights to delineate the sub-object.
  • the entire region of the major object can be systematically traversed.
  • the variance, when sub-objects are delineated, can be expected to be greater and the minimum not as distinct as it is in the case of a major object.
  • the optimum-point-variance that occurs when the sliding window is centered on the edge of the gas cap is expected to be greater than it would be when the Out portion of the sliding window is completely outside of the oil and gas accumulations and the In portion of the sliding window is centered well within the combined oil and gas accumulation.
  • the sliding window is at the edge of the OWC and one data point away, assuming movement to the right, from being centered on the edge of the gas cap.
  • This objective can be achieved in step 1106 of FIG. 11 by first delineating all of the Out and In values, process step 1103 or 1104, for the classification under consideration.
  • An appropriately sized sample for a training set, such as the size used in the sliding window, is then randomly selected from the complete delineation.
  • the training set is trained to the optimum point and the resulting neural network weights are used to reclassify the complete data set 509, less the randomly selected training set, for the classification under consideration.
  • the variance from the original classification is recorded.
  • a new training set is again randomly selected and trained to the optimum point.
  • the reclassification of the entire set of Out and In values is again performed and the variance from the original classification is again recorded. This randomly select, train, and reclassify procedure is repeated at least thirty (30) times.
  • Standard statistical methods are then used to calculate the mean and confidence interval of the neural network variance for the particular classification under consideration.
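The randomly-select, train, and reclassify repetition described above can be outlined as follows; the two callables are hypothetical stand-ins for the steps in the text, and the 95% normal-approximation interval is one reasonable choice of confidence interval.

```python
import numpy as np

def classification_confidence(split_sample, train_and_classify, n_repeats=30):
    """split_sample() -> ((train_x, train_y), (rest_x, rest_y)), a random split;
    train_and_classify(train_x, train_y, rest_x) -> reclassification of rest_x."""
    variances = []
    for _ in range(n_repeats):
        (tr_x, tr_y), (rest_x, rest_y) = split_sample()
        preds = np.asarray(train_and_classify(tr_x, tr_y, rest_x))
        variances.append(float(np.mean((preds - rest_y) ** 2)))   # variance vs. original classes
    mean = float(np.mean(variances))
    half = 1.96 * np.std(variances, ddof=1) / np.sqrt(n_repeats)  # ~95% confidence interval
    return mean, (mean - half, mean + half)
```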
  • Major objects in an oil and/or gas field may show a variance of zero, while the sub-objects such as differing porosity zones show a non-zero variance within a narrow confidence interval. This occurs because seismic data overlaps different porosity, permeability and productivity zones.
  • Another novel method for determining the degree of accuracy a given prediction or classification has achieved is described in the section pertaining to the delineation of hydrocarbon accumulations below, and is included in the present invention by the appended claims.
  • the pulling together of the variances can be quickly accomplished over a network.
  • Another example of the use of parallel processing in the application of the present invention occurs during the determination of the appropriate number of nodes. In this case, a different number of nodes is trained on each machine and the resulting variances are brought together for evaluation at the end of the parallel run. Again this combining of the variances can be quickly accomplished across a network.
  • a number of other parallel processing implementations can be achieved using the concepts of the present invention. Accordingly, it is intended by the appended claims to cover all such applications as fall within the true spirit and scope of the present invention.
  • the concepts of the present invention can be expedited by embedding the neural network function in hardware. Therefore, the present invention contemplates that various hardware configurations can be used in conjunction with the concepts of the present invention. In fact, neural network integrated circuit chips are commercially available, and could be configured to implement the concepts of the present invention. Accordingly, it is intended by the appended claims to cover all such applications as fall within the true spirit and scope of the present invention.
  • a description of how to apply the concepts of the present invention, in an experimental application of the invention, to the delineation of a gas cap in an Oil and Gas Field is used as a non-limiting exemplary embodiment of the application of the present invention.
  • the Enterprise Miner software from SAS Institute, Inc. can be used in the following experimental, exemplary embodiment to provide the neural network framework in which the present invention is applied.
  • the first task is to define the data to be used in the analysis, and to download it from SEG-Y format to SAS data sets.
  • 3D seismic data, acquired using dynamite with receivers located at twenty-five (25) meter spacing, is used.
  • a fold of 72 traces per CDP gather (FIG. 13) is used in the example that follows.
  • the two-way-time to the basement is 1.2 sec and the sampling interval is 2 msec.
  • the entire depositional environment is taken into consideration. This is done so that not only the hydrocarbon accumulation itself is considered; but also such characteristics as traps, migration paths from source rocks, and the underlying basins are considered in the analysis.
  • all of the amplitudes from the surface to the basement were used and the neural network was allowed to determine where the ground-roll stopped, which it did at around 90 msec. The point where ground-roll ceases is determined by using a sliding window in the vertical direction, instead of horizontally as heretofore described.
  • a delineation of the hydrocarbon accumulation is initially accomplished by using all of the amplitudes from the surface down to the basement.
  • a small number of amplitudes (25 in the cited example) is included in a vertically sliding window which is started at the surface and moved downward one amplitude at a time until the results from the 25 amplitudes begin to contribute to the signal strength of the hydrocarbon delineation function, i.e. the 25 amplitudes alone begin to offer a positive contribution toward discrimination on the test set.
  • This point is where ground-roll is no longer the overriding influence.
  • a similar process is performed below the hydrocarbon reservoir to locate the point at which the environmental deposition is no longer an influence in the delineation of the hydrocarbon accumulation. The amplitudes above and below these points are then deleted from further calculations, thereby enhancing the discrimination function on the hydrocarbon accumulation.
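A sketch of the vertical sliding-window search for the ground-roll cutoff described above; the `discrimination_gain` callable is a hypothetical placeholder for a routine that trains on only the windowed amplitudes and reports whether they make a positive contribution to the In/Out discrimination on the test set.

```python
import numpy as np

def find_ground_roll_cutoff(traces, labels, discrimination_gain,
                            window=25, sample_interval_ms=2):
    """traces: (n_traces, n_samples) amplitudes from surface to basement.
    labels: In(1)/Out(0) classification of each trace.
    Returns the two-way time (ms) at which the 25-amplitude window first
    contributes positively to the discrimination, i.e. where ground-roll
    is no longer the overriding influence."""
    n_samples = traces.shape[1]
    for top in range(n_samples - window + 1):
        gain = discrimination_gain(traces[:, top:top + window], labels)
        if gain > 0:                      # window amplitudes now help discrimination
            return top * sample_interval_ms
    return None                           # no cutoff found
```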
  • the classification into In (1) or Out (0) is done for each trace in each CDP gather that is either In or Out.
  • since the fold is 72, each of the 72 traces, or observations, in a CDP is classified as either 1 or 0 depending on whether the CDP is In or Out.
  • the best results from a neural network are normally obtained when observations in the range of 1.5 to 2 times the number of variables, i.e. all of the amplitudes plus some of the trace header variables in the case of seismic data, are used. Therefore, for a two-way time (TWT) of 1.2 seconds sampled at 2 millisecond intervals in the example cited, in the neighborhood of 900 to 1200 observations are required.
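As a quick check of the observation count quoted above (a worked example only, not additional disclosure):

```python
# 1.2 s two-way time sampled every 2 ms gives 600 amplitude variables per trace
twt_s, dt_s = 1.2, 0.002
n_amplitudes = int(twt_s / dt_s)         # 600 variables
low, high = 1.5 * n_amplitudes, 2 * n_amplitudes
print(n_amplitudes, low, high)           # 600 900.0 1200.0 observations needed
```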
  • Pre-determination of the appropriate number of nodes 1101, and the activation function (1102 and FIG. 7) was carried out as disclosed in the present invention. Furthermore, training to determine the appropriate number of nodes ceased within twenty-five or so iterations of what was later found to be the optimum point in the real classification runs. Since partial knowledge of the gas cap was available, all traces in eight (8) CDP gathers on the periphery of the seismic layout were classified as Out, and all traces in eight (8) centrally located CDP gathers were classified as In. This data was used to make up the training set 201 in the split-sliding window 1401 and 1402. The test set 202 was similarly configured according to the disclosure of the present invention.
  • the split window was run to the optimum cessation of training point, and the remainder of the complete data 509 was then classified.
  • the validation step 1106 revealed that all CDP gathers in the complete data 509 were correctly classified with 100% confidence.
  • the sliding window was then advanced along a line from the OWC in order to detect the gas cap as shown in FIG. 15.
  • each trace in a CDP that is to be scored as either In or Out is presented to the neural network, i.e. each trace is multiplied by the weight vector, to obtain a score between 0 and 1. Rarely, if ever, do the traces score exactly 0 or 1. It is therefore necessary to determine at what point between 0 and 1 the CDP scores as Out or In.
  • All of the trace scores in a given CDP are averaged to obtain the CDP score, which lies between 0 and 1.
  • the CDP's that are In are clearly distinguishable from those that are Out: all scores for CDP's that are In are greater than 0.5 and all scores for CDP's that are Out are less than or equal to 0.5.
  • the points in the CDP score that correctly discriminate the definitely In and definitely Out CDP's can be directly determined from the known classified CDP's. Furthermore, by determining the number of CDP's between the definitely In and definitely Out points, it is possible to determine the degree of accuracy a given prediction or classification has achieved by using the method disclosed above with the known data.
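A minimal sketch of the CDP scoring and discrimination-point determination just described, assuming the per-trace scores have already been produced by the trained network; all function and variable names are illustrative.

```python
import numpy as np

def score_cdps(trace_scores_by_cdp):
    """trace_scores_by_cdp: dict mapping CDP id -> array of its trace scores (0..1).
    Returns dict of CDP id -> averaged CDP score in [0, 1]."""
    return {cdp: float(np.mean(s)) for cdp, s in trace_scores_by_cdp.items()}

def discrimination_points(cdp_scores, known_in, known_out):
    """Determine the score interval separating definitely-In from definitely-Out CDPs.
    known_in / known_out: iterables of CDP ids whose classification is known a priori."""
    max_out = max(cdp_scores[c] for c in known_out)   # highest score among known Out CDPs
    min_in = min(cdp_scores[c] for c in known_in)     # lowest score among known In CDPs
    # CDPs falling between the two points are the ambiguous ones; counting them
    # gives one measure of how sharp the classification is.
    ambiguous = [c for c, s in cdp_scores.items() if max_out < s < min_in]
    return max_out, min_in, ambiguous
```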
  • Yet another objective of the present invention is disclosure of a novel method for determining the degree of accuracy a given prediction or classification has achieved when no a priori knowledge is available with which to determine such accuracy.
  • the variables used to augment the trace header and amplitude variables are assigned to each trace in the closest CDP to the wellbore. Data from the latest actual wells is not used in the training set and is reserved for the test set. Training of the neural network continues until the variance from this test set is at a minimum.
  • the present invention contemplates that the system, method, and process for hydrocarbon reservoir simulation will be used in conjunction with 4D seismic surveys. Accordingly, it is intended by the appended claims to cover all such applications as fall within the true spirit and scope of the present invention.
  • the present invention contemplates that those skilled in the art will find uses, other than the delineation of spatially dependent objects, for the methods disclosed for determining the best number of nodes, the activation function, the inclusion of partial knowledge or intuition, when to stop training, etc. for use in neural networks related to other applications. Accordingly, it is intended by the appended claims to cover all such applications as fall within the true spirit and scope of the present invention.
  • the preferred embodiment of the present invention comprises one or more software systems.
  • a software system is a collection of one or more executable software programs, and one or more storage areas, for example, RAM or disk.
  • a software system should be understood to comprise a fully functional software embodiment of a function, which can be added to an existing computer system to provide a new function to that computer system.
  • a software system is thus understood to be a software implementation of a function, which can be assembled in a layered fashion to produce a computer system providing new functionality. Also, in general, the interface provided by one software system to another software system is well defined. It should be understood in the context of the present invention that delineations between software systems are representative of the preferred implementation. However, the present invention may be implemented using any combination or separation of software systems.
  • neural networks can be implemented in any way.
  • the preferred embodiment uses a software implementation of a neural network.
  • any form of implementing a neural network can be used in the present invention, including physical analog and digital forms .
  • the neural network may be implemented as a software module in a computer system.
  • the neural network of the present invention may be implemented on one computer system during training and another during operational mode.
  • a neural computer using parallel processing could be utilized during the computationally intensive training stage; then, once the weights have been adapted, the weights and the neural network could be embodied in a number of other computing devices to generate the required classification using the required operational input data.
  • the neural network might be trained on a single processor and then distributed to a number of parallel processors in the operational mode.
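For illustration, the adapted weights might be exported after training on one processor and re-applied elsewhere in operational mode along the following lines; the file format, layer layout, and ReLU/logistic activations are assumptions and not part of the disclosure.

```python
import numpy as np

def export_weights(coefs, intercepts, path="adapted_weights.npz"):
    """coefs/intercepts: lists of per-layer weight matrices and bias vectors."""
    np.savez(path, n_layers=len(coefs),
             **{f"W{i}": W for i, W in enumerate(coefs)},
             **{f"b{i}": b for i, b in enumerate(intercepts)})

def score_with_weights(path, X):
    """Reload the adapted weights and score new traces without retraining."""
    f = np.load(path)
    n = int(f["n_layers"])
    a = X
    for i in range(n - 1):
        a = np.maximum(a @ f[f"W{i}"] + f[f"b{i}"], 0)   # hidden layers (ReLU assumed)
    logits = a @ f[f"W{n-1}"] + f[f"b{n-1}"]
    return 1.0 / (1.0 + np.exp(-logits))                 # output scores between 0 and 1
```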
  • the neural network training process may, in a variant of the present invention, be implemented as a single software system.
  • This single software system could be delivered to a computer installation to provide the functions of the present invention.
  • a neural network configuration function (or program) could also be included in this software system.
  • a neural network configuration module can be connected in a bi-directional path configuration with the neural network.
  • the neural network configuration module is used by the user (developer) to configure and control the neural network in a fashion as discussed above in connection with the step and module or in connection with the user interface discussion contained below.
  • a number of commercial packages contain neural networks operating in this manner, e.g. Enterprise Miner from SAS
  • the neural network contains a neural network model.
  • the present invention contemplates all presently available and future developed neural network models and architectures.
  • the neural network model can have a fully connected aspect, or a no feedback aspect. These are just examples. Other aspects or architectures for the neural network model are contemplated.
  • the neural network has access to input data and access to locations in which it can store output data and error data.
  • One embodiment of the present invention uses an approach where the data is not kept in the neural network. Instead, data pointers are kept in the neural network, which point to data storage locations (e.g., a working memory area) in a separate software system. These data pointers, also called data specifications, can take a number of forms and can be used to point to data used for a number of purposes. For example, an input data pointer and an output data pointer may be specified. The pointer can point to or use a particular data source system for the data, a data type, and a data item pointer.
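One possible reading of the data-pointer arrangement described above, sketched as a small Python structure; the field names mirror the terms used in the item above, but the class itself is only an assumption about how such a specification could be represented.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class DataSpecification:
    """A data pointer kept inside the neural network in place of the data itself."""
    source_system: str                      # e.g. a SAS data set or a working memory area
    data_type: str                          # e.g. "amplitude", "trace_header", "output"
    item_pointer: Any                       # key, index, or offset locating the item
    fetch: Optional[Callable[["DataSpecification"], Any]] = None  # retrieval hook

    def resolve(self) -> Any:
        # retrieve the referenced data from the separate storage software system
        return self.fetch(self) if self.fetch is not None else None
```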
  • the neural network also has a data retrieval function and a data storage function.
  • the preferred method is to have the neural network utilize data from some sensory process.
  • the neural network itself can retrieve data from a database, or another module could feed data to the areas specified by the neural network's pointers.
  • the neural network also needs to be trained, as discussed above. As stated previously, any presently available or future developed training method is contemplated by the present invention.
  • the training method also may be somewhat dictated by the architecture of the neural network model that is used. Examples of aspects of training methods include back propagation, generalized delta, and gradient descent, all of which are well known in the art.
  • the neural network needs to know the data type that is being specified. This is particularly important since it can utilize more than one type of data. Finally, the data item pointer is specified. It is thus seen that the neural network can be constructed so as to obtain desired input data or to provide output data in any intended fashion. In the preferred embodiment of the present invention, this is all done through menu selection by the user (developer) using a software-based system on a computer platform.
  • the present invention can utilize a template and menu driven user interface, which allows the user to configure, reconfigure and operate the present invention. This approach makes the present invention very user friendly. It also eliminates the need for the user to perform any computer programming, since the configuration, reconfiguration and operation of the present invention is carried out in a template and menu format not requiring any actual computer programming expertise or knowledge.
  • Two such interfaces are a graphical user interface (GUI) and an application programmer's interface (API).
  • the Neural Network Utility (NNU) GUI runs on Intel-based machines using OS/2 or DOS/Windows and on RISC/6000 machines using AIX.
  • the API is available not only on those platforms but also on a number of mainframe platforms, including VM/CMS and OS/400. Other platforms such as variations of Windows are contemplated.
  • Available hardware for improving neural network training and run-time performance includes the IBM Wizard, a card that plugs into MicroChannel buses.
  • Other vendors with similar software and/or hardware products include NeuralWare, Nestor and Hecht-Nielsen Co.
  • While the present invention has been described in the context of using seismic data to delineate hydrocarbon accumulations, the present invention is not limited to this particular application.
  • the present invention may be utilized in any number of fields including but not limited to: weather forecasting from radiometers, analysis of aeromagnetic profiles, delineation of astronomical clusters from radio-telescope data, delineation of objects from radar, sonar, and infrared returns, etc.
  • Another aspect of the invention relates to mining operations.
  • In mining operations where targeted materials are extracted from an underground mine, the desired materials are typically found in clumps, or accumulations.
  • In the example of a gold or uranium mine, the targeted material is found in "veins" running through the ground.
  • In the example of a diamond mine, diamonds are found in kimberlite pipes running through the ground.
  • Other types of materials are also typically found in some sort of accumulations. It is therefore desired to tunnel through mines in locations where the desired materials are most likely to be found. Typically, miners will try to determine where the desired materials will be found based on results of previous tunneling efforts.
  • the present invention provides a way of optimizing a mining operation based on seismic surveys.
  • seismic data is gathered on the ground surface while charges (e.g., dynamite, TNT, etc.) are set in the mine.
  • seismic data is gathered during the explosions that result from a tunneling effort, rather than setting charges specifically for the purpose of gathering seismic data.
  • the gathered seismic data is analyzed and subsequent tunneling efforts are optimized based on the seismic data.
  • the seismic data may be analyzed in any desired manner, including manual interpretation by a geologist or other trained person, or automated interpretation (such as the interpretation processes described above).
  • spatial locations of targeted materials can be precisely determined by the delineation process described above.
  • Figures 16-19 are plan views illustrating the application of the invention to a mining operation.
  • Figure 16 shows a mine area 1610.
  • the mine area 1610 may represent an entire mine, or only a portion of a mine.
  • Figure 16 shows a tunnel 1612 extending partially into the mine area 1610.
  • a charge is detonated at the charge location 1614, shown in Figure 16 as an "X".
  • seismic readings are taken within a seismic survey area 1616.
  • the seismic survey area 1616 may include only a small portion near the charge location 1614 or a larger portion of the mine area 1610. In addition, the seismic survey area 1616 may comprise the entire mine area 1610. Within the seismic survey area 1616, a plurality of cross lines 1618, each having a plurality of seismic sensors 1620, is laid across the seismic survey area 1616.
  • seismic data is gathered from the plurality of seismic sensors 1620. After the seismic data is gathered, the data is analyzed and used for optimizing the mining operation.
  • the seismic data may be analyzed manually by a geologist or some other trained user, or may be analyzed using an automated process such as one of the processes described above.
  • Figure 17 shows the mine area 1610 including locations of accumulations 1722 of the targeted material, as determined by the interpretation of the seismic data. While Figure 17 illustrates the locations of the accumulations 1722 in two dimensions, the accumulations 1722 may also be illustrated in three dimensions, if desired. As shown, the accumulations 1722 within survey area 1616 include a number of small accumulations, as well as a large accumulation. In the example of a gold mine, the large accumulation 1722 may represent the location of a gold vein which may extend into other areas of the mine area 1610. Based on the interpreted seismic data, the tunnel 1612 is extended in a direction or directions which will most likely result in the optimal extraction of the targeted material. Figure 17 shows the path of an additional tunnel 1724 extending generally in the direction of the largest accumulation 1722. In this way, the targeted material may be mined as the tunnel 1724 is created.
  • Figure 17 also shows a new charge location 1714 near the end of the new tunnel 1724.
  • the location of the charge location 1714 is selected based on the interpreted seismic data.
  • the interpreted seismic data indicates the likelihood that the large accumulation 1722 will continue to extend past the boundary of the seismic survey area 1616 in the direction of the new charge location 1714.
  • the new charge location could also be located within the seismic survey area 1616, depending on the distance tunneled (in addition to other factors) between gatherings of seismic data.
  • Figure 18 shows the mine area 1610 with a new seismic survey area 1816 corresponding to the new charge location 1714.
  • the seismic survey area 1816 includes a plurality of cross lines 1818 laid across the seismic survey area 1816.
  • Each of the cross lines 1818 includes a plurality of seismic sensors 1820.
  • the cross lines 1818 and seismic sensors 1820 may be the identical cross lines and seismic sensors used on the previous seismic survey, or may be separate.
  • workers move the seismic sensors from the seismic survey area 1616 to the seismic survey area 1816. Also note that if the seismic survey areas overlap, it may be possible to move only a portion of the seismic sensors, rather than moving the entire array of sensors.
  • Figure 19 shows the mine area 1610 including the locations of accumulations 1922 of the targeted material within the seismic survey area 1816.
  • the accumulations 1922 include a number of small accumulations as well as larger accumulations.
  • the tunnel 1724 is extended in a direction or directions which will most likely result in the maximum extraction of the desired material. In this way, the desired material may be mined as the tunnel 1924 is created. As shown, the new tunnel 1924 extends to a new charge location 1914.
  • the location of the charge location 1914 is selected based on the interpreted seismic data.
  • the interpreted seismic data indicates the likelihood that the larger accumulation 1922 will continue to extend past the boundary of the seismic survey area 1816 in the direction of the new charge location 1914.
  • the interpreted seismic data also appears to indicate that a significant amount of the desired material may be extracted by creating a fork in the tunnel 1924 to the right.
  • a new seismic survey area can be created for collecting seismic data from a charge detonated at the new charge location 1914.
  • the tunneling efforts may be directed in other directions based on patterns in the interpreted seismic data.
  • the entire mine area 1610 is covered with seismic sensors to gather seismic data for the entire mine area 1610 for each set charge.
  • the resolution of the interpreted seismic data can be controlled by selecting the number of seismic sensors for any given area.
  • the seismic survey area 1616 is shown with 16 seismic sensors. To increase the resolution, more seismic sensors could be used. If less resolution is required, fewer seismic sensors could be used. In this case, the sensors could be spread out, increasing the size of the survey area.
  • seismic readings are taken during every charge that is set.
  • a seismic survey can be taken as often or as seldom as desired.
  • seismic data is gathered at routine intervals (e.g., once per week, every few days, etc.). Also note that as the size of the seismic surveys increases, the frequency at which surveys are conducted can decrease.
  • the process described above for optimizing a mining operation can allow users of the process to dynamically profile the targeted material as it is being mined. This can greatly increase the efficiency of the mining operation since the targeted material can be "seen" prior to its extraction.
  • the invention can be applied to any type of mining operation for any targeted material including, but not limited to, gold, uranium, coal, precious metals, semi-precious stones (rubies, emeralds, etc.), metals, salt domes, rocks (e.g., granite), etc.
  • the process of creating and extending tunnels applies to man-made tunnels as well as existing tunnels (such as those found in caves).
  • the invention can be used to map an existing cave, without the requirement of manually surveying the interior of the cave. While the present invention has been described in detail herein in accord with certain preferred embodiments thereof, modifications and changes therein may be effected by those skilled in the art. Accordingly, it is intended by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Environmental & Geological Engineering (AREA)
  • Geology (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Geophysics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

This invention relates to a neural-network-based system (101, 105, 107, 109), and to a method (600, 601, 603, 605, 609, 607) and process, for automatically delineating spatially dependent objects. The method is applicable to objects such as hydrocarbon accumulations, aeromagnetic profiles, astronomical clusters, weather clusters, and objects detected from radar, sonar, seismic, and infrared return data, etc. One novel aspect of the invention is that the method can be used whether or not known data are available to provide training sets. The output is a classification of the input data into clearly delineated accumulations, clusters, objects, etc., of various types and differing properties. A preferred, but non-exclusive, application of the invention is the automated delineation of hydrocarbon accumulations, and of sub-regions of differing properties within those accumulations, in oil and gas operations, before drilling even begins. The invention also relates to a system and method that use seismic survey data to optimize a mining operation.
PCT/US2001/023428 2000-07-31 2001-07-25 Systeme et procede permettant d'optimiser une operation miniere WO2002010797A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001280782A AU2001280782A1 (en) 2000-07-31 2001-07-25 System and method for optimizing a mining operation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US62896000A 2000-07-31 2000-07-31
US09/628,960 2000-07-31

Publications (1)

Publication Number Publication Date
WO2002010797A1 true WO2002010797A1 (fr) 2002-02-07

Family

ID=24521014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/023428 WO2002010797A1 (fr) 2000-07-31 2001-07-25 Systeme et procede permettant d'optimiser une operation miniere

Country Status (3)

Country Link
AU (1) AU2001280782A1 (fr)
CA (1) CA2323241A1 (fr)
WO (1) WO2002010797A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6574565B1 (en) 1998-09-15 2003-06-03 Ronald R. Bush System and method for enhanced hydrocarbon recovery
EP1398649A1 (fr) * 2002-09-12 2004-03-17 Totalfinaelf S.A. Méthode de calage d'un puits de forage
CN109490974A (zh) * 2017-09-12 2019-03-19 核工业二0八大队 一种提高铀多金属综合勘查效率的铀矿地质填图工作方法
CN109932746A (zh) * 2019-04-09 2019-06-25 山东省物化探勘查院 一种深部含金构造的地震探测方法
CN110794478A (zh) * 2019-11-13 2020-02-14 中铁十局集团有限公司 一种非煤系地层隧道有害气体综合探测方法
CN111505705A (zh) * 2020-01-19 2020-08-07 长江大学 基于胶囊神经网络的微地震p波初至拾取方法及系统
CN112543879A (zh) * 2018-04-13 2021-03-23 沙特阿拉伯石油公司 增强地震图像

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627657A (zh) * 2021-07-23 2021-11-09 核工业北京地质研究院 一种使用机器学习模型的砂岩型铀成矿有利区预测方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3877373A (en) * 1969-11-19 1975-04-15 Du Pont Drill-and-blast process
US3721471A (en) * 1971-10-28 1973-03-20 Du Pont Drill-and-blast module

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MOWREY G.L., IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS, vol. 24, no. 4, July 1988 (1988-07-01) - August 1988 (1988-08-01), pages 660 - 665, XP002947132 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6574565B1 (en) 1998-09-15 2003-06-03 Ronald R. Bush System and method for enhanced hydrocarbon recovery
EP1398649A1 (fr) * 2002-09-12 2004-03-17 Totalfinaelf S.A. Méthode de calage d'un puits de forage
CN109490974A (zh) * 2017-09-12 2019-03-19 核工业二0八大队 一种提高铀多金属综合勘查效率的铀矿地质填图工作方法
CN109490974B (zh) * 2017-09-12 2020-03-17 核工业二0八大队 一种提高铀多金属综合勘查效率的铀矿地质填图工作方法
CN112543879A (zh) * 2018-04-13 2021-03-23 沙特阿拉伯石油公司 增强地震图像
CN109932746A (zh) * 2019-04-09 2019-06-25 山东省物化探勘查院 一种深部含金构造的地震探测方法
CN109932746B (zh) * 2019-04-09 2020-07-28 山东省物化探勘查院 一种深部含金构造的地震探测方法
CN110794478A (zh) * 2019-11-13 2020-02-14 中铁十局集团有限公司 一种非煤系地层隧道有害气体综合探测方法
CN110794478B (zh) * 2019-11-13 2021-10-29 中铁十局集团有限公司 一种非煤系地层隧道有害气体综合探测方法
CN111505705A (zh) * 2020-01-19 2020-08-07 长江大学 基于胶囊神经网络的微地震p波初至拾取方法及系统
CN111505705B (zh) * 2020-01-19 2022-08-02 长江大学 基于胶囊神经网络的微地震p波初至拾取方法及系统

Also Published As

Publication number Publication date
AU2001280782A1 (en) 2002-02-13
CA2323241A1 (fr) 2002-01-31

Similar Documents

Publication Publication Date Title
US6574565B1 (en) System and method for enhanced hydrocarbon recovery
AU743505B2 (en) System and method for delineating spatially dependent objects, such as hydrocarbon accumulations from seismic data
CN111783825B (zh) 一种基于卷积神经网络学习的测井岩性识别方法
US10948618B2 (en) System and method for automated seismic interpretation
US20190353811A1 (en) Method for detecting geological objects in a seismic image
CN111596978A (zh) 用人工智能进行岩相分类的网页显示方法、模块和系统
WO2001031366A1 (fr) Agregation de donnees d'apres des graphiques multiresolution
EP3857267B1 (fr) Système et procédé d'interprétation sismique automatisée
Wu et al. Automated stratigraphic interpretation of well-log data
WO2002010797A1 (fr) Systeme et procede permettant d'optimiser une operation miniere
Hoyle COMPUTER TECHNIQUES FOR THE ZONING AND CORRELATION OF WELL‐LOGS
Ramu et al. Multi-attribute and artificial neural network analysis of seismic inferred chimney-like features in marine sediments: a study from KG Basin, India
Gu et al. Carbonate lithofacies identification using an improved light gradient boosting machine and conventional logs: a demonstration using pre-salt lacustrine reservoirs, Santos Basin
AU3559102A (en) System and method for delineating spatially dependent objects, such as hydrocarbon accumulations from seismic data
US20240219602A1 (en) Systems and methods for digital gamma-ray log generation using physics informed machine learning
US20240176036A1 (en) Automatic salt geometry detection in a subsurface volume
Heydarpour et al. Applying deep learning method to develop a fracture modeling for a fractured carbonate reservoir using geologic, seismic and petrophysical data
CN116522251A (zh) 一种应用于石油钻探的岩性识别方法及系统
WO2024058932A1 (fr) Systèmes et procédés d'analyse d'une incertitude et d'une sensibilité de populations de failles
Hassibi et al. High resolution reservoir heterogeneity characterization using recognition technology
Benbernou et al. A fuzzy multi-criteria decision approach for enhanced auto-tracking of seismic events
Aminzadeh Image Processing and Pattern Recognition in Exploration Geophysics
Simaan Texture-based techniques for interpretation of seismic images
Kuo et al. Artificial intelligence in formation evaluation
Toumani Fuzzy classification for lithology determination from well logs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP