US20230021210A1 - Supervised machine learning-based wellbore correlation - Google Patents
- Publication number
- US20230021210A1 (application number US17/305,861)
- Authority
- US
- United States
- Prior art keywords
- control point
- wellbore
- processor
- input
- well
- Prior art date
- Legal status: Pending (the listed status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V11/00—Prospecting or detecting by methods combining techniques covered by two or more of main groups G01V1/00 - G01V9/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V1/00—Seismology; Seismic or acoustic prospecting or detecting
- G01V1/40—Seismology; Seismic or acoustic prospecting or detecting specially adapted for well-logging
- G01V1/44—Seismology; Seismic or acoustic prospecting or detecting specially adapted for well-logging using generators and receivers in the same well
- G01V1/48—Processing data
- G01V1/50—Analysing data
-
- E—FIXED CONSTRUCTIONS
- E21—EARTH DRILLING; MINING
- E21B—EARTH DRILLING, e.g. DEEP DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
- E21B47/00—Survey of boreholes or wells
- E21B47/04—Measuring depth or liquid level
-
- E—FIXED CONSTRUCTIONS
- E21—EARTH DRILLING; MINING
- E21B—EARTH DRILLING, e.g. DEEP DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
- E21B47/00—Survey of boreholes or wells
- E21B47/12—Means for transmitting measuring-signals or control signals from the well to the surface, or from the surface to the well, e.g. for logging while drilling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- E—FIXED CONSTRUCTIONS
- E21—EARTH DRILLING; MINING
- E21B—EARTH DRILLING, e.g. DEEP DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
- E21B2200/00—Special features related to earth drilling for obtaining oil, gas or water
- E21B2200/22—Fuzzy logic, artificial intelligence, neural networks or the like
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V2210/00—Details of seismic processing or analysis
- G01V2210/60—Analysis
- G01V2210/61—Analysis by combining or comparing a seismic data set with other data
- G01V2210/616—Data from specific type of measurement
- G01V2210/6169—Data from specific type of measurement using well-logging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
Definitions
- the disclosure generally relates to the field of wellbore formation analysis and, more particularly, to aligning wellbore log data across multiple wellbores.
- Well-to-well log correlation is often performed to determine a consistency or change between patterns or signatures in well log data measured from sub-surface geological formations.
- a well log may be a visual representation of a measurement of a property of a formation surrounding a wellbore plotted against a depth of the wellbore.
- Well log data from multiple wells can be evaluated to determine corresponding depths within each well at which the surrounding formation shares a similar formation property.
- FIG. 1 depicts an example system for performing well correlation across multiple wellbores using a machine-learning model, according to some embodiments.
- FIG. 2 depicts an example training workflow for training a machine-learning model for assisted well correlation (AWC), according to some embodiments.
- FIG. 3 depicts an example input tile to a machine-learning model for well correlation, according to some embodiments.
- FIG. 4 depicts an example input array having multiple channels for well correlation, according to some embodiments.
- FIG. 5 depicts a graph of an example transform for representing null data of a set of input signals, according to some embodiments.
- FIG. 6 depicts an example input array having null data mapped to zero, according to some embodiments.
- FIG. 7 depicts the example input array of FIG. 6 with null data represented by a pseudo-random pattern, according to some embodiments.
- FIG. 8 depicts an example output set of upper and lower control points corresponding to a set of input signals, according to some embodiments.
- FIGS. 9 A- 9 D depict example alternative visual representations of control points output by a machine-learning model for well correlation, according to some embodiments.
- FIG. 10 depicts an example prediction workflow employing a machine-learning model for performing well correlation, according to some embodiments.
- FIG. 11 depicts a flowchart of example operations for training a machine-learning model to perform well correlation, according to some embodiments.
- FIG. 12 depicts a flowchart of example operations for performing well correlation using a trained machine-learning model, according to some embodiments.
- FIG. 13 depicts an example computer, according to some embodiments.
- Data gathered from drilling a wellbore and performing other downhole operations can be interpreted to evaluate a property of a formation surrounding the wellbore at a given depth.
- the formation property can be mapped across a depth of the wellbore to create a well log.
- Multiple well logs can be generated from a single wellbore, where each well log represents a different property of the formation.
- Well correlation can include aligning well logs of multiple wellbores to determine depths within each wellbore that correspond to a certain formation property.
- Well correlation is often performed by aligning pairs of digital signals representing the well logs of two wellbores.
- conventional approaches for aligning signals are limited in the number of signals (i.e., number of wellbores being correlated) and number of well logs for each wellbore that can be aligned at once.
- Conventional approaches to aligning digital signals can be slow, particularly when applied to multiple signals.
- Dynamic Time Warping (DTW), for example, can align pairs of signals.
- DTW can be inefficient, as DTW correlates depths across only two wellbores at a time.
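To make the pairwise-cost point concrete, a minimal textbook DTW can be sketched as follows. This is a generic illustration, not code from the disclosure; it shows that each well pair costs O(len(a) × len(b)) and that only two signals are aligned at a time:

```python
# Minimal dynamic time warping (DTW) sketch: dynamic-programming
# cumulative cost between two 1-D signals. Pairwise only, so correlating
# N wells requires O(N^2) pairwise runs, which motivates the ML approach.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

For identical or merely stretched signals the warped cost is zero, e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])` returns `0.0`.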
- many signal alignment methods cannot efficiently process multi-channel data (i.e., multiple well logs for a single well). This further reduces computing efficiency for large amounts of data, as processing time increases with each additional well log and wellbore data set.
- null or missing data can be present when wellbores have differing depths or types of data sets.
- An inability to process well logs having null data can severely limit the ability to perform well correlation between wellbores in which differing operations and/or measurements were performed. While some well logs of a wellbore may be aligned with those of other wellbores, large amounts of data may go unprocessed when a data set for a wellbore has null or missing data in one or more well logs.
- a machine-learning (ML) model can be trained to perform assisted well correlation (AWC) by aligning sets of well signals comprising more than two signals and/or more than one channel.
- example embodiments can perform well correlation for 20 different wellbores for four different formation properties (e.g., porosity, resistivity, permeability, and temperature) in each of the 20 different wellbores.
- Each formation property can correspond to a channel for a given wellbore. Therefore, in this example, there would be four channels (one per formation property) for each of the 20 different wellbores.
- An input array can be generated by combining signals representing well data from multiple wellbores.
- the input array can be represented visually as an input tile, where signals of the input array are represented in heatmap format.
- One or more control points can be defined at fixed depths for a reference signal of the input array in order to determine corresponding depths within other wellbores of the input array where properties of the wellbore are the same or similar.
- the input array can be input into a trained ML model to generate control point mappings.
- the control point mappings can indicate positions within signals of the input array that correspond to the control points of the reference signal.
- a probability can be associated with each control point of the control point mappings.
- Control point mappings generated by the ML model can be used to determine a set of shifts for signals of the input array relative to the reference signal. The signals of the input array can then be aligned based on the set of shifts.
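As a rough sketch of this shift-and-align step (the function names and the padding convention are our own illustrative assumptions, not the patent's), the shift for each signal is the difference between the reference control point index and the mapped index, and alignment moves each signal's samples by that shift:

```python
# Illustrative shift computation from control point mappings: given the
# depth index of a control point in the reference signal and the mapped
# index predicted for each other signal, the per-signal shift is the
# index difference.
def compute_shifts(reference_index, mapped_indices):
    return [reference_index - idx for idx in mapped_indices]

# Apply a shift to a list of depth samples, padding exposed samples.
def align(signal, shift, pad=None):
    if shift > 0:
        return [pad] * shift + signal[:len(signal) - shift]
    if shift < 0:
        return signal[-shift:] + [pad] * (-shift)
    return list(signal)
```

For example, `compute_shifts(5, [3, 5, 7])` yields `[2, 0, -2]`: the first signal moves down two samples, the second is already aligned, and the third moves up two samples.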
- the ML model can be trained based on input arrays that include a set of transformed signals.
- the transformed signals can be generated based on a single original signal representing well data from a single wellbore.
- One or more control points can be defined for the original signal.
- locations of the control points within the transformed signals are known.
- a set of control point mappings can be generated from the transformed signals, and the control point mappings and corresponding input arrays can be used to train the ML model.
- FIG. 1 depicts an example system for performing well correlation across multiple wellbores using a machine-learning model, according to some embodiments.
- An example system 100 for performing well correlation across multiple wellbores can include multiple modules stored on a machine-readable medium 118 of a computer 106 .
- the computer 106 can also include a processor 109 and a bus 111 .
- the multiple modules (that include a preprocessing module 108 , a trained neural network 110 , an alignment module 112 , a training module 114 ) can be executed by the processor 109 .
- Wellbore data 102 can be input into the preprocessing module 108 of the computer 106 to generate an input array 104 representing the wellbore data 102 .
- the wellbore data 102 can include any data collected relating to a wellbore for a plurality of wellbores.
- the wellbore data 102 can include wireline data and/or seismic data.
- Example types of wellbore data can include formation type, porosity, density, gamma ray data, acoustic data, and/or any data characterizing a formation surrounding the wellbore.
- the wellbore data 102 can be data collected across a range of depths within a wellbore where the collected data is associated with a depth at which it was collected.
- Wellbore data from a given wellbore can be represented as one or more signals within the input array 104 .
- An input tile can represent the input array 104 , where the input tile is to be input into the trained neural network 110 to determine correlated locations across wellbores of the wellbore data 102 .
- the training module 114 can aid in supervised learning based on stored labeled data 116 .
- the trained neural network 110 can be trained based on transformed signals representing data from one or more wellbores.
- FIG. 2 depicts an example training workflow for training a machine-learning model for assisted well correlation (AWC), according to some embodiments.
- FIG. 2 depicts a training workflow 200 for training a neural network 212 to perform assisted well correlation (AWC).
- the neural network 212 may be an untrained U-Net.
- FIG. 2 depicts three signals, signal 201 , signal 202 , and signal 203 . Each of the signals 201 - 203 can include well data from distinct wellbores.
- One or more control points can be defined at a fixed position within a signal, where the position of a control point indicates a depth at which wellbore data represented by the signal was collected.
- FIG. 2 depicts an upper control point 205 and a lower control point 207 defined for the signal 201 .
- the upper control point 205 can correspond to a first depth within the wellbore and the lower control point 207 can correspond to a second depth within the wellbore. Positioning of control points is described in more detail below in reference to FIG. 3
- the signal can be transformed to generate a number of input signals, N T, for training the neural network 212 .
- a signal can be transformed by one or more of the following operations: (1) shifting the signal, (2) scaling the signal, (3) adjusting (increasing and/or decreasing) an amplitude of the signal, (4) adding noise to the signal, (5) setting some signal values to null values, etc.
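The listed transformations can be sketched as simple augmentation functions. This is a hedged illustration of the kind of operations described (shift, amplitude scaling, noise, nulling); the function names and parameter choices are our own, and the circular shift in particular is one convenient simplification:

```python
import random

# Illustrative training-data transforms of the kind listed above.
def shift(sig, k):
    """Circularly shift samples by k positions (a simplification)."""
    k %= len(sig)
    return sig[-k:] + sig[:-k] if k else list(sig)

def scale_amplitude(sig, factor):
    """Increase or decrease the signal's amplitude."""
    return [v * factor for v in sig]

def add_noise(sig, sigma, seed=0):
    """Add reproducible Gaussian noise with standard deviation sigma."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in sig]

def null_out(sig, start, stop, null=None):
    """Set a depth interval of samples to null values."""
    out = list(sig)
    out[start:stop] = [null] * (stop - start)
    return out
```

Because each transform is deterministic given its parameters, the known control point positions in the original signal can be carried through to every transformed copy, which is what makes the generated training labels reliable.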
- the transformed signals representing well data from a single wellbore can be compiled to create an input tile.
- transforms of the signal 201 can be processed to create an input tile 206 , where the input tile 206 includes the signal 201 and transformed signals 209 .
- Positions of control points within the transformed signals are also combined to create at least one set of control points for each input tile.
- FIG. 2 depicts example input tiles for each of the signals 201 , 202 , and 203 .
- the input tiles can have a height and a width such that an aspect ratio of the input tile is approximately one.
- Corresponding set(s) of control points can be generated for each signal.
- multiple control points may be identified in a well signal.
- each signal will have a set of control points for each control point defined in the original signal.
- an upper and lower control point can be defined at fixed positions within the signal 201
- a first set of control points 208 and a second set of control points 210 can be generated with the input tile 206 .
- the first set of control points 208 can correspond to the upper control point 205 and the second set of control points 210 can correspond to the lower control point 207 .
- Because the control points for a given signal are defined at a fixed position within the untransformed signal, positions of the control points within the transformed signals are known.
- the input tiles and sets of control points can then be used to train the neural network 212 .
- dimensions of the input array 104 can be determined based on a type of the trained neural network 110 .
- the trained neural network 110 may be a U-Net that requires an input tile having a set number of rows and a number of columns such that an aspect ratio of the input tile is approximately one.
- the dimensions of the input array can include a height, a width, and a number of channels and can be based on how the trained neural network 110 is trained.
- the height of the input array 104 can be a number of samples, N d , of a signal for a given wellbore.
- the number of samples can be equal to a number of rows required by the trained neural network 110 . If the number of samples for a signal is not equal to the number of rows required by the trained neural network 110 , the signals can be scaled such that the number of samples equals the number of rows required. For example, if the number of samples is less than the number of rows required by the trained neural network 110 , the signal can be stretched. If the number of samples of a signal is greater than the number of rows required by the trained neural network 110 , the signal can be compressed. Alternatively, a segment of the signal having the number of rows required by the trained neural network 110 can be selected.
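The stretch/compress step described above can be sketched as a linear interpolation to the required row count. This is a minimal illustration under our own naming; the actual resampling scheme used is not specified in the text:

```python
# Resample a signal to exactly n_rows depth samples by linear
# interpolation: stretches when the signal is shorter than the
# network's required row count, compresses when it is longer.
def resample(signal, n_rows):
    if len(signal) == n_rows:
        return list(signal)
    out = []
    step = (len(signal) - 1) / (n_rows - 1)
    for i in range(n_rows):
        x = i * step                       # fractional source position
        lo = int(x)
        hi = min(lo + 1, len(signal) - 1)
        frac = x - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out
```

For instance, `resample([0, 1], 3)` stretches two samples to `[0.0, 0.5, 1.0]`, while a five-sample signal compressed to three rows keeps its endpoints.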
- the width of the input array 104 can be adjusted to make the aspect ratio of the input array 104 approximately equal to one. Each signal can be repeated a number of times in a horizontal dimension of the input array 104 to make the input array more “square” (i.e. aspect ratio closer to one).
- the width of the input array can be defined by Equation 1:
- Width = N T × w (1)
- N T is a total number of input signals and w is an integer representing the number of times each signal is repeated in the horizontal dimension of the input array 104 . If the total number of input signals is less than a number of input signals required by the trained neural network 110 , one or more of the input signals can be repeated until N T is equal to a number of signals to be taken as input by the trained neural network 110 .
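Equation 1 amounts to laying the N T signals side by side with each one repeated w times as adjacent columns. A small sketch (our own illustrative code, not the patent's) of building such a tile:

```python
# Build an input tile as rows (depth samples) by columns, where each of
# the N_T signals is repeated w times horizontally, so the tile width
# equals N_T * w as in Equation 1. Repetition makes the aspect ratio
# closer to one for networks that expect roughly square inputs.
def build_tile(signals, w):
    n_rows = len(signals[0])
    tile = []
    for r in range(n_rows):
        row = []
        for sig in signals:
            row.extend([sig[r]] * w)   # repeat this signal's sample w times
        tile.append(row)
    return tile
```

With two signals of two samples each and w = 3, the tile has width 2 × 3 = 6 columns, matching Equation 1.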
- the number of channels, N L, of the input array 104 can be a number of well logs for a wellbore when a signal represents well data.
- the input array 104 can include multiple well logs (or components) for one or more of the wellbores from which the wellbore data 102 was collected. Examples described herein generally refer to three channels/well logs, as it allows for signals to be easily visualized as a color (RGB) image.
- FIG. 3 depicts an example input tile to a machine-learning model for well correlation, according to some embodiments.
- An input tile 300 can include multiple signals 301 - 311 that represent wellbore data. Each of the signals 301 - 311 can represent wellbore data from distinct wellbores.
- FIG. 3 depicts 11 signals.
- the signal 301 (depicted as a first column of the input tile 300 ) can correspond to a first wellbore
- the signal 302 (depicted as a second column of the input tile 300 ) can correspond to a second wellbore
- the signal 303 (depicted as a third column of the input tile 300 ) can correspond to a third wellbore
- the signal 304 (depicted as a fourth column of the input tile 300 ) can correspond to a fourth wellbore
- the signal 305 (depicted as a fifth column of the input tile 300 ) can correspond to a fifth wellbore
- the signal 306 (depicted as a sixth column of the input tile 300 ) can correspond to a sixth wellbore
- the signal 307 (depicted as a seventh column of the input tile 300 ) can correspond to a seventh wellbore
- the signal 308 (depicted as an eighth column of the input tile 300 ) can correspond to an eighth wellbore
- the signal 309 (depicted as a ninth column of the input tile 300 ) can correspond to a ninth wellbore, and so on through the signal 311 .
- Upper and lower control points 352 and 354 can be defined for a reference signal (here, the signal 306 ). While FIG. 3 depicts the reference signal as the signal 306 , any of the signals 301 - 311 can be selected as the reference signal. Further, FIG. 3 depicts two control points. However, in some embodiments, a greater or lesser number of control points may be defined for a reference signal. A greater number of control points can result in an increased confidence in corresponding control points output by a trained neural network.
- control points 352 and 354 can be positioned where there is a visual change in the reference signal 306 .
- the input tile 300 depicts the reference signal (signal 306 ) as having a change in intensity of color of the input tile 300 between a first portion 351 of the signal 306 and a second portion 353 of the signal 306 , and a third portion 355 of the signal 306 and a fourth portion 356 of the signal 306 .
- the upper control point 352 can be defined at a first depth where the first portion 351 and the second portion 353 meet, and the lower control point 354 can be defined at a second depth where the third portion 355 and the fourth portion 356 meet.
- the control points 352 and 354 can be defined at fixed and pre-determined locations within the reference signal 306 .
- the input tile 300 can be a visual representation of an input array having multiple well logs/channels.
- An intensity of a color of a signal can be correlated to a strength of the signal.
- Each of the signals 301 - 311 can represent wellbore data of multiple types.
- each channel can be assigned a color (i.e. red, green, or blue) and signals of the input tile 300 can be colored and represented as an RGB image.
- the coloring and intensity of a signal can represent wellbore data from the multiple well logs that make up an input array.
- FIG. 4 depicts an example input array having multiple channels for well correlation, according to some embodiments.
- An input array 400 can include well logs 450 A, 450 B, and 450 C for a number of wellbores. As depicted in FIG. 4 , the input array 400 includes wellbore data for 11 wellbores. Each well log includes 11 signals: the well log 450 A includes signals 401 A- 411 A, the well log 450 B includes signals 401 B- 411 B, and the well log 450 C includes signals 401 C- 411 C.
- Each wellbore's data is represented as a signal in each well log.
- the signals 401 A, 401 B, and 401 C depicted as first columns of the well logs 450 A, 450 B, and 450 C, respectively, correspond to a first wellbore.
- the signals 402 A, 402 B, and 402 C depicted as second columns of the well logs 450 A, 450 B, and 450 C, respectively, correspond to a second wellbore.
- a given signal represents how a given formation property changes over a depth of the wellbore.
- the highest point in the column for a given signal can correspond to a value of a formation property at a location that is closest to a surface of the wellbore.
- the different shading along a column represents different values of the formation property of the wellbore.
- Each well log can contain different wellbore data.
- the well log 450 A can contain data relating to porosity for each wellbore
- the well log 450 B can contain data relating to formation density for each wellbore
- the well log 450 C can contain data relating to gamma ray measurements for each wellbore.
- the signal 401 A can represent porosity
- the signal 401 B can represent formation density
- the signal 401 C can represent gamma ray data for the first wellbore.
- An intensity (i.e. brightness) of a signal in the well logs 450 A, 450 B, and 450 C can be proportional to an amplitude of the signal.
- Control points corresponding to locations of control points defined for a reference signal of an input tile can be defined across the well logs 450 A, 450 B, and 450 C at the same depth of the control points defined for the input tile.
- an upper control point 452 A and a lower control point 454 A can be defined at the first and second depths, respectively, within the signal 406 A.
- an upper control point 452 B and a lower control point 454 B can be defined at the first and second depths, respectively, within the signal 406 B.
- an upper control point 452 C and a lower control point 454 C can be defined at the first and second depths, respectively, within the signal 406 C.
- FIG. 4 depicts signals corresponding to 11 wellbores with no missing data. However, when comparing wellbores of differing depths or data sets, there may be null and/or missing data. For example, during different logging runs of a wellbore, parts of the data may not have been collected, resulting in null and/or missing data.
- well signals can be transformed according to a function in order to represent null data.
- FIGS. 5 - 6 depict a transform for mapping null data to a zero value and an example input tile having null data mapped to zero, respectively.
- FIG. 5 depicts a graph of an example transform for representing null data of a set of input signals, according to some embodiments.
- FIG. 5 shows a graph 500 having an x-axis 504 (input x) and a y-axis 506 (output y).
- Real-valued data from input signals can be an input to a transform function to output a compressed range of values.
- a transform function can be used to map input values within a range defined by an input lower value 508 and an input upper value 510 to a compressed range defined by an output lower value 512 and an output upper value 514 .
- An example transform function can be defined by Equation 2:
- y = f(x) (2)
- where y is a transformed value, x is an input value, and f(x) is a transform function
- the transform function can map the input lower value 508 to the output lower value 512 , where the input lower value 508 is the input value to the transform function and the output lower value 512 is the transformed value output from the transform function.
- the input lower value 508 is equal to a
- the upper input value 510 is equal to b
- the output lower value 512 is equal to a′
- the output upper value 514 is also equal to b.
- input values having an original interval defined by the lower and upper input values 508 and 510 , respectively, i.e. an interval [a, b]
- well data input to the transform function can be represented by a compressed range of values and null values in the original well data can be mapped to a zero value, where the lower output value 512 is greater than the lower input value 508 and the lower input value 508 is greater than or equal to zero.
- Transformed data (represented by a curve 502 ) then can have no data in an interval defined between zero and the lower output value 512 . This interval (i.e. an interval [0, a′]) can then act as a buffer between null data and real-valued data when the transformed data is processed for signal alignment.
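The text does not give f(x) explicitly, but one choice consistent with the description is an affine map of the interval [a, b] onto [a′, b] that sends null values to zero, leaving (0, a′) as the buffer interval. The following is a sketch under that assumption:

```python
# Hedged sketch of the null-mapping transform: the patent's exact f(x)
# is not stated; this affine map satisfies the described behavior:
# a -> a_prime, b -> b, None -> 0.0, with (0, a_prime) left empty as a
# buffer between null data and real-valued data.
def make_transform(a, b, a_prime):
    slope = (b - a_prime) / (b - a)
    def f(x):
        if x is None:          # null/missing sample
            return 0.0
        return a_prime + slope * (x - a)
    return f
```

For example, with a = 0, b = 1, and a′ = 0.5, the transform maps 0 to 0.5, 1 to 1, and null to 0, so no transformed real value falls inside (0, 0.5).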
- FIG. 6 depicts an example input array having null data mapped to zero, according to some embodiments.
- FIG. 6 depicts an example input array 600 including well logs 650 A, 650 B, and 650 C for a number of wellbores.
- the input array 600 includes wellbore data for 11 wellbores, where each wellbore's data is represented within a different column of each of the well logs 650 A, 650 B, and 650 C.
- each wellbore's data can be represented within a single signal.
- FIG. 6 depicts the well log 650 A as having 11 signals 601 A- 611 A, the well log 650 B as having 11 signals 601 B- 611 B, and the well log 650 C as having 11 signals 601 C- 611 C.
- a signal can include data from the same wellbore across each well log (i.e., data of the signal 601 A, the signal 601 B, and the signal 601 C can be collected from a single wellbore).
- Upper control points 652 A, 652 B, and 652 C and lower control points 654 A, 654 B, and 654 C corresponding to control points defined for a reference signal of an input tile are depicted.
- the well logs 650 A, 650 B, and 650 C each contain one or more signals having null values mapped to zero.
- a signal of a well log can be null data when a corresponding signal of another well log contains real-valued data.
- a signal of an input tile (the signal 307 of the input tile 300 , for example) can represent well data from a first wellbore
- the well log 650 A can contain data relating to porosity for each wellbore
- the well log 650 B can contain data relating to formation density for each wellbore
- the well log 650 C can contain data relating to gamma ray measurements for each wellbore.
- one or more types of data was not collected for a given wellbore. For example, if porosity data was not collected from a wellbore, but formation density data and gamma ray data were, then the wellbore data from that wellbore would have null (i.e. missing) porosity data. That null data can be mapped to zero by transforming the wellbore data (as described in reference to FIG. 5 , for example). Thus, a signal of the well log 650 A (porosity, in this example) for that wellbore would have values mapped to zero. Continuing the example of the signal 307 , the corresponding signal 607 A can have values mapped to zero and real-valued (but compressed) data represented by the signals 607 B and 607 C in the well logs 650 B and 650 C, respectively.
- Multiple wellbores may include differing types of data and/or null data.
- FIG. 6 depicts null data mapped to zero for the signals 602 B, 603 B, and 605 B within the well log 650 B and the signals 601 C and 608 C within the well log 650 C. While FIG. 6 depicts each of the well logs 650 A, 650 B, and 650 C as having null data, in some cases, a subset of a total number of well logs of an input array may have null data. For example, only two well logs of the input array 600 may have a signal having null data. Alternatively, there may be only a single well log containing null data. Further, some wellbore data may include null data across multiple well logs (i.e. two or more signals corresponding to a single wellbore are null data). Additionally, there may be a greater or lesser number of null data signals within a well log than depicted in FIG. 6 .
- null data can be represented by adding an additional binary signal for each well log of the original signal and the binary signals can be used to indicate whether a signal contains null or real-valued data.
- the input array 600 may include a binary signal for each of the well logs 650 A, 650 B, and 650 C, where the binary signal for the well log 650 A indicates that signals 601 A- 606 A and 608 A- 611 A contain real-valued data (e.g. a binary value of 1) and that the signal 607 A contains null data (e.g. a binary value of 0).
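A hedged sketch of that indicator idea (our own minimal formulation): one binary flag per signal of a well log, set to 1 when the signal contains real-valued data and 0 when it is entirely null:

```python
# Illustrative binary null-indicator channel: for each signal in a well
# log, emit 1 if it carries any real-valued data and 0 if it is all null.
def null_indicator(signals, null=None):
    return [0 if all(v == null for v in sig) else 1 for sig in signals]
```

A downstream network can then distinguish a genuinely zero-valued signal from a signal whose data was simply never collected.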
- null data may be represented using a pseudo-random pattern.
- the signals 607 A, 602 B, 603 B, 605 B, 601 C, and 608 C can be represented by a pseudo-random pattern rather than black boxes (which are interpreted as a zero value when input into a trained neural network for signal alignment).
- a horizontal dimension of each signal must be greater than one (i.e., w>1).
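One way this pseudo-random fill could be sketched (assuming NaN marks null samples and a uniform [0, 1) pattern with a fixed seed — illustrative choices, not specified above):

```python
import numpy as np

def fill_nulls_pseudorandom(signal, seed=0):
    """Replace null (NaN) samples with a reproducible pseudo-random
    pattern so a trained network does not confuse missing data with a
    true zero value.
    """
    signal = np.asarray(signal, dtype=float)
    rng = np.random.default_rng(seed)
    out = signal.copy()
    mask = np.isnan(out)
    out[mask] = rng.random(mask.sum())  # uniform [0, 1) fill
    return out
```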
- FIG. 7 depicts the example input array of FIG. 6 with null data represented by a pseudo-random pattern, according to some embodiments.
- FIG. 7 depicts an example input array 700 generated from the same well data represented by FIG. 6 , but with null data represented by a pseudo-random pattern in place of black boxes.
- the input array 700 includes three well logs 750 A, 750 B, and 750 C, each having 11 signals.
- the well log 750 A includes signals 701 A- 711 A
- the well log 750 B includes signals 701 B- 711 B
- the well log 750 C includes signals 701 C- 711 C.
- the signals 707 A, 702 B, 703 B, 705 B, 701 C, and 708 C (which correspond to the signals 607 A, 602 B, 603 B, 605 B, 601 C, and 608 C, respectively) contain null data that is represented by a pseudo-random pattern.
- one or more input control points can be defined for a reference signal of an input tile representing the input array 104 (as described in reference to FIGS. 3 - 7 ), and the trained neural network 110 can output one or more control points for each wellbore where each output control point represents a depth within each wellbore that corresponds to a depth within the reference wellbore at which the input control point is defined.
- output control points can be an array having dimensions of a height, a width, and a number of channels.
- the height of the output array can be equal to a number of samples, N D (i.e. a number of rows of the output array).
- the number of samples of the output array can be equal to the number of samples of the input array 104 .
- the width of the output array can be equal to the width of the input array 104 and can be defined by Equation (1) as recited above in reference to FIG. 1 .
- the number of channels, N C , of the output array can be equal to a number of control points defined for a reference signal of the input tile representing the input array 104 .
- the output array can have 2 channels for the input tile 300 , where the reference signal 306 has two control points 352 and 354 defined. Examples described herein depict the number of channels of the output array as equal to 2, however, a greater or lesser number of channels may be included in the output array.
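The output dimensions described above can be sketched as follows (the sample count and width below are placeholder values for illustration; Equation (1), which defines the width, is described in reference to FIG. 1 and is not reproduced here):

```python
import numpy as np

# Illustrative shape of the output control-point array:
# height = number of samples N_D (matches the input array),
# width  = input-array width (per Equation (1)),
# channels = N_C, one per control point on the reference signal.
n_samples = 256       # N_D, assumed value for illustration
width = 64            # input-array width, assumed value for illustration
n_control_points = 2  # N_C: e.g. one upper and one lower control point

output_array = np.zeros((n_samples, width, n_control_points))
```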
- the output control points can indicate depths within each wellbore where formation properties and/or downhole measurements are similar.
- the upper control point 352 can be at a first depth within a first wellbore and an output control point can indicate a depth within one or more wellbores where the formation properties are similar.
- the output control points can be a visual representation of correlated depths.
- an intensity and/or diffusivity of the output control points can be related to a confidence associated with the identified output control points.
- FIG. 8 depicts an example output set of upper and lower control points corresponding to a set of input signals, according to some embodiments.
- FIG. 8 depicts an example set of control point mappings 800 for an input tile having two control points defined for a reference signal of the input tile.
- the set of control point mappings 800 may be a set of output control points that correspond to the input tile 300 of FIG. 3 .
- the set of control point mappings 800 includes a set of upper control point mappings 820 A and a set of lower control point mappings 820 B.
- the sets of upper and lower control point mappings 820 A and 820 B, respectively, can be images divided into columns.
- FIG. 8 depicts output images from a trained neural network (e.g. the trained neural network 110 of FIG. 1 ).
- each column corresponds to a signal of an input tile.
- the columns 801 - 811 can correspond to the signals 301 - 311 of the input tile 300 .
- An output upper control point 822 A for a reference signal of the input tile can be located at the same position within the reference signal as the input upper control point 352 and an output lower control point 822 B can be located at the same position within the reference signal as the input lower control point 354 .
- the input and output upper control points 352 and 822 A can correspond to a depth within a wellbore, where the reference signal 306 represents well data from that wellbore across a range of depths. Each control point can be assigned a value between zero and 1 that indicates the probability that the location of the control point is correct. Because the output upper control points of the set 820 A are determined based on the identified reference signal, and the reference signal 306 corresponds to the column 806 , the output upper control point 822 A can represent a probability of 1. The same can be true for the input and output lower control points 354 and 822 B.
- an intensity and/or diffusivity of output control points of the sets of upper and lower control point mappings 820 A and 820 B can represent a probability that the location of the output control point is accurate.
- An output lower control point 855 B for the signal 308 corresponds to the lower control point 354 .
- the output lower control point 855 B has a lower intensity relative to an output lower control point 855 A for the signal 308 . This can indicate a lower probability associated with a determined corresponding depth within that wellbore (of the signal 308 ).
- a set of control point mappings may include more than one corresponding upper and/or lower control point for a single signal.
- FIG. 8 depicts the column 806 as having two output upper control points, 815 A and 815 B, and two output lower control points, 825 A and 825 B.
- the output upper control point 815 B has a greater visual intensity (i.e. less diffuse) relative to the output upper control point 815 A, indicating a higher probability that the position (depth) of the output upper control point 815 B corresponds to the position of the control point 822 A compared to the position of the output upper control point 815 A.
- FIG. 8 depicts the output control points 825 A and 825 B similar to the output control points 815 A and 815 B.
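One plausible way to render intensity and diffusivity as a probability, as described above, is a Gaussian peak per column — a small sigma yields an intense, sharp mark (high confidence), while a large sigma yields a diffuse one (low confidence). The Gaussian encoding below is an illustrative assumption, not taken from the text:

```python
import numpy as np

def control_point_column(n_samples, depth_index, sigma):
    """Render one column of a control-point mapping as a 1-D Gaussian
    whose peak sits at the predicted depth and whose spread (sigma)
    reflects the (inverse) confidence of the prediction.
    """
    rows = np.arange(n_samples)
    col = np.exp(-0.5 * ((rows - depth_index) / sigma) ** 2)
    return col / col.max()  # normalize: peak value 1 at the prediction
```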
- FIGS. 9 A- 9 D depict example alternative visual representations of control points output by a machine-learning model for well correlation, according to some embodiments.
- output control points can be depicted on an input tile.
- FIG. 9 A depicts a set of upper control point mappings 915 A and a set of lower control point mappings 915 B overlaid on an example input tile 900 .
- a trained machine-learning model can output solely the output control point mappings associated with one or more input control points.
- FIG. 9 B depicts a set of upper control point mappings 925 A and a set of lower control point mappings 925 B.
- an area below an output control point can be assigned a value of 1 and an area above the output control point can be assigned a value of 0.
- FIGS. 9 C- 9 D depict, respectively, an alternative visual representation of a set of upper control point mappings and a set of lower control point mappings.
- FIG. 9 C depicts an image 906 A illustrating an example set of upper control point mappings where an area below an output upper control point is assigned a value of 1 and an area above the output upper control point is assigned a value of 0.
- FIG. 9 D depicts an image 906 B illustrating an example set of lower control point mappings where an area below the output lower control point is assigned a value of 1 and an area above the output lower control point is assigned a value of 0.
- an area above the output control point can be assigned a value of 1 and an area below the output control point can be assigned a value of 0.
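A minimal sketch of this step-style representation (FIGS. 9C-9D), with the below/above values exposed as parameters so either convention can be produced; "below" is taken here to mean deeper, i.e. a larger row index (an assumption for illustration):

```python
import numpy as np

def step_mask(n_samples, control_point_index, below=1.0, above=0.0):
    """Alternative control-point representation: every sample below the
    control point takes one value and every sample above takes the
    other, as in the images 906 A and 906 B.
    """
    mask = np.full(n_samples, above)
    mask[control_point_index:] = below
    return mask
```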
- the control point mappings, output as an array from the trained neural network 110 , can then be input into an alignment module 112 to align the signals of the input array 104 .
- the control point mappings can be stored as stored labelled data 116 to be used by the training module 114 in supervised learning processes.
- the alignment module 112 may be a pre-trained machine-learning model.
- the alignment module 112 and the trained neural network 110 can be combined into a single trained machine-learning model.
- the output array from the trained neural network 110 can represent estimated control point mappings for each signal of the input tile representing the input array 104 . These estimated control point mappings can be formulated as an over-constrained system of equations.
- the input array 104 can be modified to generate additional control point mappings.
- signals of the input array 104 can be flipped vertically to generate a second input array, which can be input into the trained neural network 110 to obtain additional sets of output control points.
- signals of the input array 104 can be rearranged.
- an order of the signals of the input array can be modified.
- additional input arrays can be generated from the input array 104 by vertically flipping the signals and shuffling the order of the signals. This can be repeated until a sufficient number of control point mappings is obtained to formulate the over-constrained system of equations.
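The flip-and-shuffle augmentation described above can be sketched as follows (assuming a samples-by-signals NumPy array; the returned permutation records how to map the extra control-point mappings back to the original signal order):

```python
import numpy as np

def augment_input_array(input_array, rng):
    """Create a modified input array by vertically flipping the signals
    and shuffling their order, so the trained network can produce
    additional sets of control-point mappings from the same well data.
    """
    flipped = input_array[::-1, :]                 # vertical flip (depth axis)
    order = rng.permutation(input_array.shape[1])  # shuffled signal order
    return flipped[:, order], order
```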
- the alignment module 112 can solve the over-constrained system of equations to generate a set of shifts between control points of the output array.
- the alignment module 112 can solve the over-constrained system of equations using a least squares regression.
- the set of shifts between control points generated by the alignment module 112 can then be used to generate a set of aligned signals 120 .
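One plausible formulation of the over-constrained system is a set of pairwise observations "signal i sits `offset` samples deeper than signal j", solved by least squares with the reference signal's shift pinned to zero. The exact system the alignment module 112 solves is not given above, so the sketch below is an assumption:

```python
import numpy as np

def solve_shifts(observations, n_signals, ref=0):
    """Solve an over-constrained system for per-signal depth shifts.

    `observations` is a list of (i, j, offset) tuples, each stating that
    signal i appears `offset` samples deeper than signal j according to
    one control-point mapping. The reference shift is pinned to zero and
    the rest are recovered by least squares regression.
    """
    rows, rhs = [], []
    for i, j, offset in observations:
        row = np.zeros(n_signals)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(offset)
    # Pin the reference shift so the system has a unique solution.
    pin = np.zeros(n_signals)
    pin[ref] = 1.0
    rows.append(pin)
    rhs.append(0.0)
    shifts, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                                 rcond=None)
    return shifts
```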
- FIG. 10 depicts an example prediction workflow employing a machine-learning model for performing well correlation, according to some embodiments.
- FIG. 10 depicts an example prediction workflow 1000 for an input tile 1002 having 11 signals.
- the input tile 1002 can represent an input array that is similar to any of the input arrays described herein.
- the input tile 1002 can represent an input array having multiple well logs.
- the input tile 1002 may represent the input array 400 .
- the input tile 1002 may include null data mapped to zero.
- the input tile 1002 can represent the input array 600 .
- the input tile 1002 can include null data represented as a pseudo-random pattern.
- the input tile 1002 can represent the input array 700 .
- the input tile 1002 can include one or more control points defined for a reference signal of the input tile 1002 .
- the input tile 1002 can include an upper control point and a lower control point.
- the input tile 1002 can be the input tile 300 and have the upper control point 352 and the lower control point 354 defined for the reference signal 306 .
- FIG. 10 depicts the example prediction workflow 1000 for the input tile 1002 having two control points. However, the input tile 1002 may have a greater or lesser number of control points.
- the input tile 1002 can be input into a trained neural network 1006 to generate an output array representing a first set of control point mappings 1008 A.
- the trained neural network 1006 can be a U-NET.
- the trained neural network 1006 may be the trained neural network 110 .
- the set of control point mappings 1008 A can include output control points for each signal corresponding to the control point(s) defined for the reference signal of the input tile 1002 .
- the set of control point mappings 1008 A can be represented as an image, where an intensity and/or diffusivity of each output control point is correlated to a probability.
- the set of control point mappings 1008 A may be the set of control point mappings 800 .
- the set of control point mappings 1008 A can then be used to formulate a system of equations to determine a set of shifts for signals of the input tile 1002 in order to generate a set of aligned signals 1010 .
- FIG. 10 depicts the modified input tile 1004 as being a vertically flipped image of the input tile 1002 .
- the modified input tile 1004 can be generated by rearranging an order of signals of the input tile 1002 .
- the modified input tile 1004 can be input into the trained neural network 1006 to generate a second set of control point mappings 1008 B corresponding to the signals of the input tiles 1002 / 1004 .
- the second set of control point mappings 1008 B can be represented similar to the first set of control point mappings 1008 A.
- Additional constraints for the system can be determined based on the second set of control point mappings 1008 B.
- the set of shifts can be determined using a least squares regression. The set of shifts can then be used to align the signals of the input tile 1002 to generate the set of aligned signals 1010 .
- the input array 104 can be processed by the trained neural network 110 and the alignment module 112 to generate a first set of aligned signals 120 .
- this process can be performed iteratively to increase a resolution of the set of aligned signals 120 .
- the set of aligned signals 120 may then be input into the trained neural network 110 to obtain a second output array representing a set of control point mappings corresponding to the set of aligned signals 120 .
- the second output array can have a set of control point mappings having an increased probability.
- the second output array can then be processed by the alignment module 112 to generate a second set of aligned signals having a higher resolution than the set of aligned signals 120 . This process can be repeated until a resolution of the set of aligned signals 120 and/or a probability associated with control points of the output array output from the trained neural network 110 is above a threshold.
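The iterative refinement described above can be sketched as a loop; `predict_mappings` and `align` below stand in for the trained neural network 110 and the alignment module 112, and their interfaces (including the scalar confidence) are assumptions for illustration:

```python
def iterative_alignment(input_array, predict_mappings, align,
                        max_iters=5, good_enough=0.95):
    """Iteratively refine alignment: predict control-point mappings,
    align the signals, then feed the aligned signals back in until the
    confidence associated with the mappings clears a threshold.
    """
    signals = input_array
    for _ in range(max_iters):
        mappings, confidence = predict_mappings(signals)
        signals = align(signals, mappings)
        if confidence >= good_enough:
            break
    return signals
```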
- Operations for training a machine-learning model to perform well correlation and for performing well correlation using the trained model are now described with reference to FIGS. 11 - 12 , respectively.
- Operations of flowcharts 1100 - 1200 can be performed by software, firmware, hardware, or a combination thereof. Operations of the flowcharts 1100 - 1200 are described in reference to the example system 100 of FIG. 1 . However, other systems and components can be used to perform the operations now described.
- FIG. 11 depicts a flowchart of example operations for training a machine-learning model to perform well correlation, according to some embodiments. The operations of the flowchart 1100 are now described with reference to the example training workflow of FIG. 2 . The operations of the flowchart 1100 begin at block 1102 .
- a reference signal is generated based on data from a well log.
- the well log can be derived during logging of the wellbore.
- data from the well log can be from synthetic data (not derived from actual logging of a wellbore).
- the reference signal can be a visual representation of data of the well log.
- the data can be represented in heatmap form as the well signal 201 .
- Data of the well log can include raw and/or processed data for a wellbore.
- the data can include porosity data, formation type data, gamma ray measurement data, and/or any other data type of a downhole and/or formation attribute.
- a computer can be used to generate the reference signal from data from the well log.
- data from the well log can be input into the pre-processing module 108 of the computer 106 to generate a reference signal.
- one or more control points are defined for the reference signal.
- the one or more control points can be defined at a position within the reference signal that corresponds to a depth within the wellbore from which the data was acquired.
- an upper control point and a lower control point can be defined for the reference signal.
- the upper control point 205 and the lower control point 207 are defined at a first and second position within the signal 201 .
- additional control points can be defined within the reference signal.
- the reference signal is transformed to generate a plurality of transformed signals.
- the reference signal can be transformed by performing one or more transforms on the reference signal.
- transformed signals can be generated from the well signal 201 by shifting the signal 201 , scaling the signal 201 , adjusting (increasing and/or decreasing) an amplitude of the signal 201 , adding noise to the signal 201 , setting some signal values to null values, etc.
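The listed transforms can be sketched as one augmentation function (parameter ranges below — shift magnitude, amplitude scale, noise level, null fraction — are illustrative assumptions):

```python
import numpy as np

def make_transformed_signal(reference, rng):
    """Generate one training signal from the reference signal by a mix
    of the transforms listed above: shifting, adjusting the amplitude,
    adding noise, and setting some values to null.
    """
    sig = np.asarray(reference, dtype=float).copy()
    sig = np.roll(sig, rng.integers(-5, 6))        # depth shift
    sig = sig * rng.uniform(0.8, 1.2)              # amplitude adjustment
    sig = sig + rng.normal(0.0, 0.01, sig.shape)   # additive noise
    drop = rng.random(sig.shape) < 0.05            # ~5% values set to null
    sig[drop] = np.nan
    return sig
```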
- an input tile and one or more sets of control point mappings for the input tile are created based on the reference signal and the plurality of transformed signals.
- the input tile can be created by combining the reference signal and the plurality of transformed signals.
- the input tile 206 can be generated by combining the well signal 201 with the transformed signals 209 generated from the well signal 201 .
- the sets of control point mappings can be determined based on the control points defined for the reference signal and the transformed signals.
- the sets of control point mappings 208 and 210 can be determined based on the control points 205 and 207 and the transformed signals 209 .
- the first set of control point mappings 208 can correspond to the upper control point 205 and the second set of control point mappings 210 can correspond to the lower control point 207 .
- the input tile(s) and the one or more sets of control point mappings are input into a machine-learning model to train the machine-learning model.
- the machine-learning model can generate sets of control point mappings based on the input tile(s) and the generated sets of control point mappings can be compared to the input sets of control point mappings to determine accuracy.
- the input tile 206 , the first set of control point mappings 208 , and the second set of control point mappings 210 can be input into a machine-learning model. Operations of the flowchart 1100 are complete.
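A minimal sketch of one supervised training step, comparing generated mappings against the input (labelled) mappings: `ToyModel` below is a stand-in for the neural network (the real model would be, e.g., a U-NET), and the mean-squared-error comparison is an illustrative choice:

```python
import numpy as np

class ToyModel:
    """Stand-in for the trainable network: predicts the input plus a
    learned bias. Purely illustrative."""
    def __init__(self):
        self.bias = 0.0
    def predict(self, input_tile):
        return np.asarray(input_tile, dtype=float) + self.bias
    def update(self, error, lr):
        self.bias -= lr * float(np.mean(error))

def training_step(model, input_tile, target_mappings, lr=0.5):
    """One supervised step: compare the model's generated control-point
    mappings against the labelled mappings and nudge the model toward
    them, returning the loss."""
    error = model.predict(input_tile) - np.asarray(target_mappings,
                                                  dtype=float)
    loss = float(np.mean(error ** 2))  # accuracy via mean squared error
    model.update(error, lr)
    return loss
```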
- FIG. 12 depicts a flowchart of example operations for performing well correlation using a trained machine-learning model, according to some embodiments.
- the operations of the flowchart 1200 are described with reference to the example prediction workflow of FIG. 10 .
- the operations of the flowchart 1200 begin at block 1202 .
- a first input array having a plurality of signals is generated based on wellbore data from a set of wellbores.
- Signals of the first input array can represent well data obtained from the set of wellbores.
- each signal of the first input array can correspond to a different wellbore.
- each of the signals 301 - 311 can represent data obtained from distinct wellbores (i.e. a set of 11 wellbores).
- one or more signals can be repeated to create an input array having an aspect ratio of approximately one.
- the input array can include multiple well logs.
- the input array 400 includes the well logs 450 A, 450 B, and 450 C.
- Each well log of the input array can include different data types.
- the well log 450 A can represent porosity data
- the well log 450 B can represent formation type data
- the well log 450 C can represent gamma ray measurement data.
- the input array can include null data.
- Null data can be present in the input array when a type of data is unavailable for a given wellbore.
- null data can be mapped to zero and real-valued data of the well log can be mapped to a compressed range.
- the input array 600 includes null data in each of the well logs 650 A, 650 B, and 650 C, where the signals 607 A, 602 B, 603 B, 605 B, 601 C, and 608 C include null data mapped to zero.
- null data can be represented as a pseudo-random pattern.
- the input array 700 includes null data represented by a pseudo-random pattern for the signals 707 A, 702 B, 703 B, 705 B, 701 C, and 708 C.
- an input tile is created based on the input array.
- the input tile can be a visual representation of data of the input array, where an intensity of a signal is correlated to an amplitude of that signal.
- the input tile 1002 can be generated based on an input array having 11 signals.
- the input tile can represent data from multiple well logs.
- the input tile 300 can represent the input array 400 .
- a reference signal is selected from the plurality of signals of the input tile.
- the reference signal can be any one of the plurality of signals of the input tile.
- the reference signal can be the signal 306 .
- the reference signal can be selected to determine depths within other wellbores where a formation property is similar to a formation property of the wellbore corresponding to the reference signal.
- a given wellbore can correspond to the reference signal 306 to determine equivalent depths within wellbores corresponding to the signals 301 - 305 and 307 - 311 .
- one or more control points are defined for the reference signal of the input tile.
- the one or more control points can be positioned within the reference signal and can correspond to a depth within a reference wellbore.
- the upper control point 352 can correspond to a first depth within the reference wellbore and the lower control point 354 can correspond to a second depth within the reference wellbore.
- the input tile is input into a machine-learning model to generate one or more sets of control point mappings corresponding to signals of the input tile.
- the machine-learning model can output a set of control point mappings for each control point defined for the reference signal of the input tile. For example, with reference to FIG. 8 , the machine-learning model can output the set of upper control point mappings 820 A and the set of lower control point mappings 820 B for an input tile having two control points defined for the reference signal (e.g. the input tile 300 , for example).
- the sets of control point mappings can indicate a probability associated with each control point of the set of control point mappings.
- the probability associated with a control point mapping can be represented visually as an intensity or diffusivity.
- FIG. 8 depicts the control point 855 B having a lower intensity (more diffuse or “spread out”) relative to the control point 855 A. This can indicate a lower probability associated with the determined position of the control point 855 B compared to the probability associated with the determined position of the control point 855 A.
- a modified input tile is generated based on the first input tile.
- the modified input tile can be generated by transforming the first input tile.
- the first input tile can be transformed to create a modified input tile by flipping/mirroring the first input tile and/or rearranging an order of the signals of the input tile.
- the modified input tile 1004 can be generated from the input tile 1002 by vertically flipping the input tile 1002 .
- the modified input tile can then be input into the machine-learning model to generate a second set of control point mappings.
- the modified input tile 1004 can be input into the neural network 1006 to generate the second set of control point mappings 1008 B. In some embodiments, this process can be repeated to generate additional sets of control point mappings.
- a set of shifts for the plurality of signals of the input array is determined based on the sets of control point mappings output from the machine-learning model.
- the set of shifts can be determined by solving the system of equations formulated from the sets of control point mappings for the input tile, once the system is over-constrained. For example, the set of shifts can be determined by performing a least squares regression.
- the signals of the plurality of signals of the input array are aligned based on the set of shifts to generate a set of aligned signals.
- Each signal of the input tile can be shifted based on a corresponding shift of the set of shifts to align the signal with the reference signal of the input tile.
- signals of the input tile 1002 can be shifted based on the set of shifts to generate the set of aligned signals 1010 .
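Applying the solved shifts can be sketched as a per-column shift (assuming a samples-by-signals NumPy array; `np.roll` is used for brevity — a production version would pad rather than wrap around):

```python
import numpy as np

def apply_shifts(signals, shifts):
    """Shift each signal (column) by its solved offset so that
    correlated depths line up with the reference signal."""
    aligned = np.empty_like(signals)
    for col, shift in enumerate(shifts):
        aligned[:, col] = np.roll(signals[:, col], -int(round(shift)))
    return aligned
```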
- the set of aligned signals is used as an input tile and operations continue at block 1206 where a reference signal is selected from the set of aligned signals.
- a reference signal is selected from the set of aligned signals.
- the set of aligned signals 1010 can be used as an input tile.
- correlated depths for the set of wellbores are determined based on the set of aligned signals.
- the correlated depths for wells of the set of wellbores can be defined relative to the reference well.
- the set of aligned signals can be used to determine a depth within each wellbore of the set of wellbores at which a formation property/attribute of the wellbore is substantially equivalent or similar to the formation property/attribute of the reference wellbore at the depth(s) that correspond to the one or more control points defined for the reference signal. For example, with reference to FIG. 3 ,
- the upper control point 352 can be defined at a depth of 50 feet within a reference wellbore and a formation surrounding the reference wellbore can have a measured (or determined) porosity at that depth. Based on the set of aligned signals, a correlated depth within another wellbore can be determined, where the porosity of the other wellbore (of the set of signals of the input tile) at the correlated depth is substantially equivalent to the porosity of the reference wellbore at a depth of 50 feet. Operations of the flowchart 1200 are complete.
- FIGS. 11 - 12 are annotated with a series of numbers. These numbers represent stages of operations. Although these stages are ordered for this example, the stages illustrate one example to aid in understanding this disclosure and should not be used to limit the claims. Subject matter falling within the scope of the claims can vary with respect to the order and some of the operations.
- aspects of the disclosure may be embodied as a system, method or program code/instructions stored in one or more machine-readable media. Accordingly, aspects may take the form of hardware, software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.”
- the functionality presented as individual modules/units in the example illustrations can be organized differently in accordance with any one of platform (operating system and/or hardware), application ecosystem, interfaces, programmer preferences, programming language, administrator preferences, etc.
- the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
- a machine readable storage medium may be, for example, but not limited to, a system, apparatus, or device, that employs any one of or combination of electronic, magnetic, optical, electromagnetic, infrared, or semiconductor technology to store program code.
- More specific examples (a non-exhaustive list) of the machine readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a machine readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a machine readable storage medium is not a machine readable signal medium.
- a machine readable signal medium may include a propagated data signal with machine readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a machine readable signal medium may be any machine readable medium that is not a machine readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a machine readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as the Java® programming language, C++ or the like; a dynamic programming language such as Python; a scripting language such as Perl programming language or PowerShell script language; and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on a stand-alone machine, may execute in a distributed manner across multiple machines, and may execute on one machine while providing results and/or accepting input on another machine.
- the program code/instructions may also be stored in a machine readable medium that can direct a machine to function in a particular manner, such that the instructions stored in the machine readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- FIG. 13 depicts an example computer, according to some embodiments.
- a computer 1300 includes a processor 1301 (possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.).
- the computer 1300 includes a memory 1307 .
- the memory 1307 may be system memory or any one or more of the above already described possible realizations of machine-readable media.
- the computer 1300 also includes a bus 1303 and a network interface 1305 .
- the computer 1300 also includes a signal processor 1311 .
- the signal processor 1311 may perform one or more operations described herein. Any one of the previously described functionalities may be partially (or entirely) implemented in hardware and/or on the processor 1301 . For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processor 1301 , in a co-processor on a peripheral device or card, etc. Further, realizations may include fewer or additional components not illustrated in FIG. 13 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.).
- the processor 1301 and the network interface 1305 are coupled to the bus 1303 . Although illustrated as being coupled to the bus 1303 , the memory 1307 may be coupled to the processor 1301 .
- Embodiment 1 A method comprising performing wellbore correlation across multiple wellbores, the performing comprising, predicting a depth alignment across the multiple wellbores based on at least one geological feature of subsurface formations in which the multiple wellbores are located, wherein the predicting comprises, selecting a reference wellbore from among the multiple wellbores; defining at least one control point in a reference signal of a reference well log for the reference wellbore, wherein the reference well log includes changes in the at least one geological feature over a depth of the reference wellbore; generating an input tile that comprises the reference signal, the at least one control point, and a number of non-reference well logs, wherein the number of non-reference well logs corresponds to a set of non-reference wellbores, and wherein each of the number of non-reference well logs includes changes in the at least one geological feature over a depth of each non-reference wellbore of the set of non-reference wellbores; and inputting the input tile into a machine-learning model to output a corresponding control point for each of the number of non-reference well logs.
- Embodiment 2 The method of Embodiment 1, wherein predicting the depth alignment comprises aligning the reference well log and the set of non-reference well logs based on the at least one control point and the corresponding control point for each of the number of non-reference well logs that is output from the machine-learning model.
- Embodiment 3 The method of Embodiment 2, wherein the reference well log and each of the number of non-reference well logs comprises multiple channels, wherein each channel corresponds to a different geological feature of the at least one geological feature, wherein defining the at least one control point comprises defining at least one control point for each channel in the reference well log, and wherein aligning comprises aligning, using the machine-learning model, the at least one control point for each channel in the reference well log with a point in a corresponding channel of each of the number of non-reference well logs.
- Embodiment 4 The method of any one of Embodiments 1-3, wherein predicting the depth alignment across the multiple wellbores comprises: prior to inputting the reference well log and the non-reference well logs, identifying null data in each of the non-reference well logs; and transforming the null data into non-null data.
- Embodiment 5 The method of Embodiment 4, wherein transforming the null data into non-null data comprises setting the null data to a same value.
- Embodiment 6 The method of Embodiment 5, wherein the same value is zero.
- Embodiment 7 The method of Embodiment 4, wherein transforming the null data into non-null data comprises setting the null data to values according to a pseudo-random number pattern.
- Embodiment 8 The method of any one of Embodiments 1-7, further comprising: training the machine-learning model, wherein the training comprises, inputting, into the machine-learning model, a number of well logs and at least one control point for each of the number of well logs; inputting, into the machine-learning model, a mapping among the at least one control point across each of the number of well logs; and training the machine-learning model based on the number of well logs and the mapping among the at least one control point across each of the number of well logs.
- Embodiment 9 The method of Embodiment 8, wherein the number of well logs are based on actual well logging operations.
- Embodiment 10 The method of Embodiments 8 or 9, wherein the number of well logs includes synthetic data.
- Embodiment 11 One or more non-transitory machine-readable media comprising program code executable by a processor to cause the processor to: select a reference wellbore from among multiple wellbores; define at least one control point in a reference signal of a reference well log for the reference wellbore, wherein the reference well log includes changes in at least one geological feature over a depth of the reference wellbore; generate an input tile that comprises the reference signal, the at least one control point, and a number of non-reference well logs, wherein the number of non-reference well logs corresponds to a set of non-reference wellbores, and wherein each of the number of non-reference well logs includes changes in the at least one geological feature over a depth of each non-reference wellbore of the set of non-reference wellbores; input the input tile into a machine-learning model; and in response to inputting the input tile into the machine-learning model, output, from the machine-learning model, a corresponding control point for each of the number of non-reference well logs.
- Embodiment 12 The one or more non-transitory machine-readable media of Embodiment 11, wherein the program code comprises program code executable by the processor to cause the processor to: align the reference well log and the number of non-reference well logs based on the at least one control point and the corresponding control point for each of the number of non-reference well logs that is output from the machine-learning model.
- Embodiment 13 The one or more non-transitory machine-readable media of Embodiments 11 or 12, wherein the reference well log and each of the number of non-reference well logs comprises multiple channels, wherein each channel corresponds to a different geological feature of the at least one geological feature, wherein the program code comprises program code executable by the processor to cause the processor to: define at least one control point for each channel in the reference well log; and align the at least one control point for each channel in the reference well log with a point in a corresponding channel of each of the number of non-reference well logs.
- Embodiment 14 The one or more non-transitory machine-readable media of any one of Embodiments 11-13, wherein the program code comprises program code executable by the processor to cause the processor to: prior to inputting the reference well log and the non-reference well logs, identify null data present in each of the non-reference well logs; and transform the null data into non-null data.
- Embodiment 15 The one or more non-transitory machine-readable media of Embodiment 14, wherein the null data is transformed into the non-null data by setting the null data to a same value.
- Embodiment 16 The one or more non-transitory machine-readable media of Embodiment 14, wherein the null data is transformed into the non-null data by setting the null data to values according to a pseudo-random number pattern.
- Embodiment 17 An apparatus comprising: a processor; and a machine-readable medium having program code executable by the processor to cause the processor to, train a neural network for performing wellbore correlation across multiple wellbores, wherein the program code executable by the processor to cause the processor to train the neural network comprises program code executable by the processor to cause the processor to, generate a reference well log based on wellbore data for a wellbore; define at least one reference control point for the reference well log; apply one or more transformations to the reference well log having the at least one reference control point defined to create a plurality of transformed well logs; create a training input tile that comprises the reference well log, the at least one reference control point for the reference well log, and the plurality of transformed well logs; create a training set that comprises the training input tile and an input mapping among the at least one reference control point across each of the plurality of transformed well logs, wherein the input mapping includes at least one corresponding control point for each transformed well log of the plurality of transformed well logs, wherein the at least one corresponding control point corresponds to the at least one reference control point; and train the neural network using the training set.
- Embodiment 18 The apparatus of Embodiment 17, wherein the program code executable by the processor to cause the processor to apply the one or more transformations to the reference well log comprises program code executable by the processor to cause the processor to: apply at least one of shifting, compressing, stretching, an amplitude increase, an amplitude decrease, adding noise, and setting a value of at least a portion of data to null.
- Embodiment 19 The apparatus of Embodiments 17 or 18, wherein the program code executable by the processor to cause the processor to train the neural network using the training set comprises program code executable by the processor to cause the processor to: output, from the neural network, a predicted mapping among the at least one reference control point across each of the plurality of transformed well logs, wherein the predicted mapping includes at least one predicted corresponding control point for each transformed well log of the plurality of transformed well logs, wherein each of the at least one predicted corresponding control point is associated with a probability, wherein the probability associated with each of the at least one predicted corresponding control points is based on the training input tile and the predicted mapping.
- Embodiment 20 The apparatus of any one of Embodiments 17-19, wherein the reference well log comprises multiple channels, wherein each channel corresponds to a different geological feature of a formation surrounding the wellbore, and wherein the program code executable by the processor to cause the processor to define the at least one reference control point for the reference well log comprises program code executable by the processor to cause the processor to define at least one reference control point for each channel of the reference well log.
Abstract
Description
- The disclosure generally relates to the field of wellbore formation analysis and, more particularly, to aligning wellbore log data across multiple wellbores.
- Well-to-well log correlation is often performed to determine a consistency or change between patterns or signatures in well log data measured from sub-surface geological formations. A well log may be a visual representation of a measurement of a property of a formation surrounding a wellbore plotted against a depth of the wellbore. Well log data from multiple wells can be evaluated to determine corresponding depths within each well at which the surrounding formation shares a similar formation property.
- Embodiments of the disclosure may be better understood by referencing the accompanying drawings.
-
FIG. 1 depicts an example system for performing well correlation across multiple wellbores using a machine-learning model, according to some embodiments. -
FIG. 2 depicts an example training workflow for training a machine-learning model for assisted well correlation (AWC), according to some embodiments. -
FIG. 3 depicts an example input tile to a machine-learning model for well correlation, according to some embodiments. -
FIG. 4 depicts an example input array having multiple channels for well correlation, according to some embodiments. -
FIG. 5 depicts a graph of an example transform for representing null data of a set of input signals, according to some embodiments. -
FIG. 6 depicts an example input array having null data mapped to zero, according to some embodiments. -
FIG. 7 depicts the example input array ofFIG. 6 with null data represented by a pseudo-random pattern, according to some embodiments. -
FIG. 8 depicts an example output set of upper and lower control points corresponding to a set of input signals, according to some embodiments. -
FIGS. 9A-9D depict example alternative visual representations of control points output by a machine-learning model for well correlation, according to some embodiments. -
FIG. 10 depicts an example prediction workflow employing a machine-learning model for performing well correlation, according to some embodiments. -
FIG. 11 depicts a flowchart of example operations for training a machine-learning model to perform well correlation, according to some embodiments. -
FIG. 12 depicts a flowchart of example operations for performing well correlation using a trained machine-learning model, according to some embodiments. -
FIG. 13 depicts an example computer, according to some embodiments. - The description that follows includes example systems, methods, techniques, and program flows that embody embodiments of the disclosure. However, it is understood that this disclosure may be practiced without these specific details. For instance, this disclosure refers to multiple signal alignment for well correlation in illustrative examples. Embodiments of this disclosure can also be applied to data sets of multi-channel systems for signal alignment. In other instances, well-known instruction instances, protocols, structures, and techniques have not been shown in detail in order not to obfuscate the description.
- Data gathered from drilling a wellbore and performing other downhole operations can be interpreted to evaluate a property of a formation surrounding the wellbore at a given depth. The formation property can be mapped across a depth of the wellbore to create a well log. Multiple well logs can be generated from a single wellbore, where each well log represents a different property of the formation. Well correlation can include aligning well logs of multiple wellbores to determine depths within each wellbore that correspond to a certain formation property. Well correlation is often performed by aligning pairs of digital signals representing the well logs of two wellbores. However, conventional approaches for aligning signals are limited in the number of signals (i.e., number of wellbores being correlated) and number of well logs for each wellbore that can be aligned at once.
- Conventional approaches to aligning digital signals can be slow, particularly when applied to multiple signals. For example, Dynamic Time Warping (DTW) can align pairs of signals. To align well logs across more than two wellbores, DTW can be inefficient, as DTW correlates depths across only two wellbores at a time. Further, many signal alignment methods cannot efficiently process multi-channel data (i.e., multiple well logs for a single well). This further reduces computational efficiency for large data sets, as processing time increases with each additional well log and wellbore data set.
- In addition, conventional alignment approaches are not capable of handling null or missing data. Null or missing data can be present when wellbores have differing depths or types of data sets. An inability to process well logs having null data can severely limit the ability to perform well correlation between wellbores in which differing operations and/or measurements were performed. While it may be possible to align some well logs of a wellbore with those of other wellbores, large amounts of data may not be processed in cases where a data set for a wellbore may have null or missing data in one or more well logs.
- In example embodiments, a machine-learning (ML) model can be trained to perform assisted well correlation (AWC) by aligning sets of well signals that include more than two signals and/or more than one channel. For instance, example embodiments can perform well correlation for 20 different wellbores for four different formation properties (e.g., porosity, resistivity, permeability, and temperature) in each of the 20 different wellbores. Each formation property can correspond to a channel for a given wellbore. Therefore, in this example, there would be four channels (one per formation property) for each of the 20 different wellbores. An input array can be generated by combining signals representing well data from multiple wellbores. The input array can be represented visually as an input tile, where signals of the input array are represented in heatmap format. One or more control points can be defined at fixed depths for a reference signal of the input array in order to determine corresponding depths within other wellbores of the input array where properties of the wellbore are the same or similar.
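As a rough sketch of the data layout just described (an input array combining signals from multiple wellbores, with one channel per formation property), assuming a simple nested-list representation; all names and values below are illustrative, not from the patent:

```python
# Hypothetical in-memory layout for the input array described above:
# Nd depth samples (rows) x NT signals/wellbores (columns) x NL channels
# (one channel per formation property).

def make_input_array(wellbores):
    """wellbores: NT items, each a list of NL channel signals of equal
    length Nd. Returns array[row][column][channel]."""
    n_d = len(wellbores[0][0])
    return [[[channel[r] for channel in wb] for wb in wellbores]
            for r in range(n_d)]

# Two wellbores, two channels (e.g., porosity and density), three samples.
wells = [
    [[0.1, 0.2, 0.3], [2.1, 2.2, 2.3]],
    [[0.4, 0.5, 0.6], [2.4, 2.5, 2.6]],
]
array = make_input_array(wells)
```

Any equivalent layout (e.g., a height x width x channels tensor) would serve; the point is only that depth, wellbore, and formation property each occupy one axis.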
- The input array can be input into a trained ML model to generate control point mappings. The control point mappings can indicate positions within signals of the input array that correspond to the control points of the reference signal. A probability can be associated with each control point of the control point mappings. Control point mappings generated by the ML model can be used to determine a set of shifts for signals of the input array relative to the reference signal. The signals of the input array can then be aligned based on the set of shifts.
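A minimal sketch of the shift-and-align step just described, under the assumption that control points are sample indices along depth; the function names and sample data are illustrative, not from the patent:

```python
def shifts_from_control_points(ref_cp, predicted_cps):
    """Per-signal shift (in depth samples) that moves each predicted
    corresponding control point onto the reference control point."""
    return [ref_cp - p for p in predicted_cps]

def shift_signal(signal, shift, fill=None):
    """Shift a sampled signal along depth; samples shifted out of range
    are dropped and exposed samples are padded with `fill` (None models
    null data)."""
    n = len(signal)
    out = [fill] * n
    for i, v in enumerate(signal):
        if 0 <= i + shift < n:
            out[i + shift] = v
    return out

ref = [0, 0, 5, 5, 1, 1]    # reference signal; control point at index 2
other = [0, 5, 5, 1, 1, 1]  # non-reference signal
predicted = [1]             # model's predicted corresponding control point
shifts = shifts_from_control_points(2, predicted)
aligned = [shift_signal(other, d) for d in shifts]
```

With multiple control points per signal, a per-interval stretch rather than a single rigid shift could be derived the same way; the rigid shift is the simplest illustration.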
- The ML model can be trained based on input arrays that include a set of transformed signals. The transformed signals can be generated based on a single original signal representing well data from a single wellbore. One or more control points can be defined for the original signal. When the original signal is transformed, locations of the control points within the transformed signals are known. A set of control point mappings can be generated from the transformed signals, and the control point mappings and corresponding input arrays can be used to train the ML model.
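The training-data generation just described can be sketched as transforms that return both a transformed signal and the known, transformed control-point position (one entry of a control point mapping). The transform set mirrors the operations listed for the training workflow; the names and the use of None for nulls are assumptions:

```python
import random

def shift(signal, cp, k, fill=0.0):
    """Shift by k samples; the control point moves with the signal."""
    n = len(signal)
    out = [fill] * n
    for i, v in enumerate(signal):
        if 0 <= i + k < n:
            out[i + k] = v
    return out, cp + k

def scale_amplitude(signal, cp, g):
    """Amplitude change; depth (and so the control point) is unchanged."""
    return [v * g for v in signal], cp

def add_noise(signal, cp, rng, sigma=0.1):
    """Additive Gaussian noise; control point unchanged."""
    return [v + rng.gauss(0.0, sigma) for v in signal], cp

def null_out(signal, cp, start, stop):
    """Set a span of samples to null (modeled as None here)."""
    s = list(signal)
    for i in range(start, stop):
        s[i] = None
    return s, cp

ref = [0.0, 0.0, 1.0, 1.0, 0.5, 0.5]
cp = 2                                   # known control point in the original
shifted, shifted_cp = shift(ref, cp, 1)  # one control point mapping entry
rng = random.Random(0)
noisy, _ = add_noise(ref, cp, rng)
```

Because each transform records where the control point moved, the transformed signals come with labels "for free," which is what makes the supervised training described above possible.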
-
FIG. 1 depicts an example system for performing well correlation across multiple wellbores using a machine-learning model, according to some embodiments. An example system 100 for performing well correlation across multiple wellbores can include multiple modules stored on a machine-readable medium 118 of a computer 106. The computer 106 can also include a processor 109 and a bus 111. The multiple modules (which include a preprocessing module 108, a trained neural network 110, an alignment module 112, and a training module 114) can be executed by the processor 109. -
Wellbore data 102 can be input into the preprocessing module 108 of the computer 106 to generate an input array 104 representing the wellbore data 102. The wellbore data 102 can include any data collected relating to a wellbore for a plurality of wellbores. The wellbore data 102 can include wireline data and/or seismic data. Example types of wellbore data can include formation type, porosity, density, gamma ray data, acoustic data, and/or any data characterizing a formation surrounding the wellbore. The wellbore data 102 can be data collected across a range of depths within a wellbore, where the collected data is associated with a depth at which it was collected. Wellbore data from a given wellbore can be represented as one or more signals within the input array 104. - An input tile can represent the
input array 104, where the input tile is to be input into the trained neural network 110 to determine correlated locations across wellbores of the wellbore data 102. The training module 114 can aid in supervised learning based on stored labeled data 116. The trained neural network 110 can be trained based on transformed signals representing data from one or more wellbores. - To help illustrate,
FIG. 2 depicts an example training workflow for training a machine-learning model for assisted well correlation (AWC), according to some embodiments. FIG. 2 depicts a training workflow 200 for training a neural network 212 to perform AWC. The neural network 212 may be an untrained U-NET. FIG. 2 depicts three signals: signal 201, signal 202, and signal 203. Each of the signals 201-203 can include well data from distinct wellbores. One or more control points can be defined at a fixed position within a signal, where the position of a control point indicates a depth at which wellbore data represented by the signal was collected. FIG. 2 depicts an upper control point 205 and a lower control point 207 defined for the signal 201. The upper control point 205 can correspond to a first depth within the wellbore and the lower control point 207 can correspond to a second depth within the wellbore. Positioning of control points is described in more detail below in reference to FIG. 3. - Once the control points for a signal are defined, the signal can be transformed to generate a number of input signals, NT, for training the
neural network 212. A signal can be transformed by one or more of the following operations: (1) shifting the signal, (2) scaling the signal, (3) adjusting (increasing and/or decreasing) an amplitude of the signal, (4) adding noise to the signal, (5) setting some signal values to null values, etc. - The transformed signals representing well data from a single wellbore can be compiled to create an input tile. For example, transforms of the
signal 201 can be processed to create an input tile 206, where the input tile 206 includes the signal 201 and transformed signals 209. Positions of control points within the transformed signals are also combined to create at least one set of control points for each input tile. FIG. 2 depicts example input tiles for each of the signals 201, 202, and 203. Because the neural network 212 can be a U-NET, the input tiles can have a height and a width such that an aspect ratio of the input tile is approximately one. - Corresponding set(s) of control points can be generated for each signal. In some embodiments, multiple control points may be identified in a well signal. In such cases, each signal will have a set of control points for each control point defined in the original signal. For example, an upper and lower control point can be defined at fixed positions within the
signal 201, and a first set of control points 208 and a second set of control points 210 can be generated with the input tile 206. The first set of control points 208 can correspond to the upper control point 205 and the second set of control points 210 can correspond to the lower control point 207. Because the control points for a given signal are defined at a fixed position within the untransformed signal, positions of the control points within the transformed signals are known. The input tiles and sets of control points can then be used to train the neural network 212. - Returning to
FIG. 1, dimensions of the input array 104 can be determined based on a type of the trained neural network 110. For example, the trained neural network 110 may be a U-NET that requires an input tile having a set number of rows and a number of columns such that an aspect ratio of the input tile is approximately one. The dimensions of the input array can include a height, a width, and a number of channels and can be based on how the trained neural network 110 is trained. - The height of the
input array 104 can be a number of samples, Nd, of a signal for a given wellbore. The number of samples can be equal to a number of rows required by the trained neural network 110. If the number of samples for a signal is not equal to the number of rows required by the trained neural network 110, the signals can be scaled such that the number of samples equals the number of rows required. For example, if the number of samples is less than the number of rows required by the trained neural network 110, the signal can be stretched. If the number of samples of a signal is greater than the number of rows required by the trained neural network 110, the signal can be compressed. Alternatively, a segment of the signal having the number of rows required by the trained neural network 110 can be selected. - The width of the
input array 104 can be adjusted to make the aspect ratio of the input array 104 approximately equal to one. Each signal can be repeated a number of times in a horizontal dimension of the input array 104 to make the input array more "square" (i.e., aspect ratio closer to one). The width of the input array can be defined by Equation 1: -
Width = NT * w (1)
input array 104. If the total number of input signals is less than a number of input signals required by the trainedneural network 110, one or more of the input signals can be repeated until NT is equal to a number of signals to be taken as input by the trainedneural network 110. - The number of channels, NL, of the
input array 104 can be a number of well logs for a wellbore when a signal represents well data. In some embodiments, the input array 104 can include multiple well logs (or components) for one or more of the wellbores from which the wellbore data 102 was collected. Examples described herein generally refer to three channels/well logs, as this allows signals to be easily visualized as a color (RGB) image. - To help illustrate,
FIG. 3 depicts an example input tile to a machine-learning model for well correlation, according to some embodiments. An input tile 300 can include multiple signals 301-311 that represent wellbore data. Each of the signals 301-311 can represent wellbore data from distinct wellbores. FIG. 3 depicts 11 signals. Thus, the signals 301-311 (depicted as the first through eleventh columns of the input tile 300, respectively) can correspond to a first through an eleventh wellbore, respectively. - Upper and
lower control points 352 and 354 can be defined for a reference signal of the input tile 300. Although FIG. 3 depicts the reference signal as the signal 306, any of the signals 301-311 can be selected as the reference signal. Further, FIG. 3 depicts two control points. However, in some embodiments, a greater or lesser number of control points may be defined for a reference signal. A greater number of control points can result in an increased confidence in corresponding control points output by a trained neural network. - In some embodiments, the control points 352 and 354 can be positioned where there is a visual change in the
reference signal 306. For example, the input tile 300 depicts the reference signal (signal 306) as having a change in intensity of color of the input tile 300 between a first portion 351 of the signal 306 and a second portion 353 of the signal 306, and between a third portion 355 of the signal 306 and a fourth portion 356 of the signal 306. The upper control point 352 can be defined at a first depth where the first portion 351 and the second portion 353 meet, and the lower control point 354 can be defined at a second depth where the third portion 355 and the fourth portion 356 meet. Alternatively, the control points 352 and 354 can be defined at fixed and pre-determined locations within the reference signal 306. - The
input tile 300 can be a visual representation of an input array having multiple well logs/channels. An intensity of a color of a signal can be correlated to a strength of the signal. Each of the signals 301-311 can represent wellbore data of multiple types. In some embodiments, each channel can be assigned a color (i.e., red, green, or blue) and signals of the input tile 300 can be colored and represented as an RGB image. Thus, the coloring and intensity of a signal can represent wellbore data from the multiple well logs that make up an input array. - To help illustrate,
FIG. 4 depicts an example input array having multiple channels for well correlation, according to some embodiments. An input array 400 can include well logs 450A, 450B, and 450C for a number of wellbores. As depicted in FIG. 4, the input array 400 includes wellbore data for 11 wellbores. Each well log includes 11 signals—the well log 450A includes signals 401A-411A, the well log 450B includes signals 401B-411B, and the well log 450C includes signals 401C-411C. - Each wellbore's data is represented as a signal in each well log. Thus, the
signals 401A, 401B, and 401C can correspond to a first wellbore, the signals 402A, 402B, and 402C can correspond to a second wellbore, and so on through the signals 411A, 411B, and 411C, which can correspond to an eleventh wellbore. Although FIG. 4 depicts the signals 401A-411A, 401B-411B, and 401C-411C as representing distinct wellbores, in some embodiments, one or more signals may be repeated (and thus one or more signals of the input array 400 may correspond to the same wellbore). - A given signal represents how a given formation property changes over a depth of the wellbore. In particular, the highest point in the column for a given signal can correspond to a value of a formation property at a location that is closest to a surface of the wellbore. Thus, the different shading along a column represents different values of the formation property of the wellbore. Each well log can contain different wellbore data. For example, the well log 450A can contain data relating to porosity for each wellbore, the well log 450B can contain data relating to formation density for each wellbore, and the well log 450C can contain data relating to gamma ray measurements for each wellbore. Thus, the
signal 401A can represent porosity, the signal 401B can represent formation density, and the signal 401C can represent gamma ray data for the first wellbore. An intensity (i.e., brightness) of a signal in the well logs 450A, 450B, and 450C can be proportional to an amplitude of the signal. - Control points corresponding to locations of control points defined for a reference signal of an input tile can be defined across the well logs 450A, 450B, and 450C at the same depths as the control points defined for the input tile. For the well log 450A, an
upper control point 452A and a lower control point 454A can be defined at the first and second depths, respectively, within the signal 406A. For the well log 450B, an upper control point 452B and a lower control point 454B can be defined at the first and second depths, respectively, within the signal 406B. For the well log 450C, an upper control point 452C and a lower control point 454C can be defined at the first and second depths, respectively, within the signal 406C. -
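Pulling together the sizing rules described earlier for the input array (resampling each signal to the row count the network requires, and repeating columns per Equation 1, Width = NT * w), a minimal sketch follows; the function names and the assumption that n_rows is at least 2 are illustrative, not from the patent:

```python
def resample(signal, n_rows):
    """Linearly stretch or compress a signal to the row count the
    network expects (assumes n_rows >= 2)."""
    m = len(signal)
    out = []
    for i in range(n_rows):
        x = i * (m - 1) / (n_rows - 1)   # fractional position in the input
        lo = int(x)
        hi = min(lo + 1, m - 1)
        t = x - lo
        out.append(signal[lo] * (1 - t) + signal[hi] * t)
    return out

def build_tile(signals, n_rows, w):
    """Width = NT * w (Equation 1): repeat each column signal w times so
    the tile's aspect ratio approaches one; rows are depth samples."""
    cols = [resample(s, n_rows) for s in signals for _ in range(w)]
    return [[c[r] for c in cols] for r in range(n_rows)]

# Two signals stretched to three rows, each repeated twice (width 2 * 2).
tile = build_tile([[0.0, 1.0], [4.0, 6.0]], n_rows=3, w=2)
```

For multi-channel input, the same tiling would be applied once per channel, yielding a height x width x NL array.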
FIG. 4 depicts signals corresponding to 11 wellbores with no missing data. However, when comparing wellbores of differing depths or data sets, there may be null and/or missing data. For example, because of differing logging operations across wellbores, parts of the data may not have been collected, resulting in null and/or missing data. In some embodiments, well signals can be transformed according to a function in order to represent null data. FIGS. 5-6 depict a transform for mapping null data to a zero value and an example input array having null data mapped to zero, respectively. - To help illustrate,
FIG. 5 depicts a graph of an example transform for representing null data of a set of input signals, according to some embodiments. FIG. 5 shows a graph 500 having an x-axis 504 (input x) and a y-axis 506 (output y). Real-valued data from input signals can be an input to a transform function to output a compressed range of values. A transform function can be used to map input values within a range defined by an input lower value 508 and an input upper value 510 to a compressed range defined by an output lower value 512 and an output upper value 514. An example transform function can be defined by Equation 2: -
y = f(x) (2)
- The transform function can map the input
lower value 508 to the output lower value 512, where the input lower value 508 is the input value to the transform function and the output lower value 512 is the transformed value output from the transform function. As depicted in FIG. 5, the input lower value 508 is equal to a, the input upper value 510 is equal to b, the output lower value 512 is equal to a′, and the output upper value 514 is also equal to b. Thus, input values having an original interval defined by the lower and upper input values 508 and 510, respectively (i.e., an interval [a, b]), can be transformed to have a compressed interval defined by the lower and upper output values 512 and 514, respectively (i.e., an interval [a′, b]).
lower output value 512 is greater than thelower input value 508 and thelower input value 508 is greater than or equal to zero. Transformed data (represented by a curve 502) then can have no data in an interval defined between zero and thelower output value 512. This interval (i.e. an interval [0, a′]) can then act as a buffer between null data and real-valued data when the transformed data is processed for signal alignment. - To help illustrate,
FIG. 6 depicts an example input array having null data mapped to zero, according to some embodiments. FIG. 6 depicts an example input array 600 including well logs 650A, 650B, and 650C for a number of wellbores. As depicted in FIG. 6, the input array 600 includes wellbore data for 11 wellbores, where each wellbore's data is represented within a different column of each of the well logs 650A, 650B, and 650C. - Similar to
FIG. 4, each wellbore's data can be represented within a single signal. FIG. 6 depicts the well log 650A as having 11 signals 601A-611A, the well log 650B as having 11 signals 601B-611B, and the well log 650C as having 11 signals 601C-611C. A signal can include data from the same wellbore across each well log (i.e., data of the signal 601A, the signal 601B, and the signal 601C can be collected from a single wellbore). Upper and lower control points (similar to those defined for the reference signal 306 of the input tile 300, for example) are also depicted. - As depicted in
FIG. 6, the well logs 650A, 650B, and 650C each contain one or more signals having null values mapped to zero. A signal of a well log can be null data even when a corresponding signal of another well log contains real-valued data. For example, a signal of an input tile (the signal 307 of the input tile 300, for example) can represent well data from a first wellbore, and the well log 650A can contain data relating to porosity for each wellbore, the well log 650B can contain data relating to formation density for each wellbore, and the well log 650C can contain data relating to gamma ray measurements for each wellbore. However, it may be the case that one or more types of data was not collected for a given wellbore. For example, if porosity data was not collected from a wellbore, but formation density data and gamma ray data were, then the wellbore data from that wellbore would have null (i.e., missing) porosity data. That null data can be mapped to zero by transforming the wellbore data (as described in reference to FIG. 5, for example). Thus, a signal of the well log 650A (porosity, in this example) for that wellbore would have values mapped to zero. Continuing the example of the signal 307, the corresponding signal 607A can have values mapped to zero and real-valued (but compressed) data represented by the signals 607B and 607C in the well logs 650B and 650C, respectively. - Multiple wellbores may include differing types of data and/or null data.
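A sketch of the per-log assembly just described, with invented names and assuming the real-valued signals have already been compressed away from zero (as in FIG. 5), so that zeroing a never-collected signal cannot collide with real data:

```python
import numpy as np

def assemble_input_array(well_logs):
    """Stack several well logs, each of shape (n_samples, n_wellbores)
    with NaN marking a data type never collected for a wellbore, into a
    single (n_samples, n_wellbores, n_logs) input array. Null signals
    are mapped to zero; real values are assumed pre-compressed to a
    range that excludes zero."""
    channels = [np.nan_to_num(np.asarray(log, dtype=float), nan=0.0)
                for log in well_logs]
    return np.stack(channels, axis=-1)
```

Here a wellbore whose porosity was never logged would contribute an all-NaN porosity column, which becomes an all-zero column in the assembled array while its density and gamma ray columns pass through unchanged.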
FIG. 6 depicts null data mapped to zero for a number of signals of the well logs 650A, 650B, and 650C. Although FIG. 6 depicts each of the well logs 650A, 650B, and 650C as having null data, in some cases a subset of a total number of well logs of an input array may have null data. For example, only two well logs of the input array 600 may have a signal having null data. Alternatively, there may be only a single well log containing null data. Further, some wellbore data may include null data across multiple well logs (i.e., two or more signals corresponding to a single wellbore are null data). Additionally, there may be a greater or lesser number of null data signals within a well log than depicted in FIG. 6. - In some embodiments, null data can be represented by adding an additional binary signal for each well log, and the binary signals can be used to indicate whether a signal contains null or real-valued data. For example, the
input array 600 may include a binary signal for each of the well logs 650A, 650B, and 650C, where the binary signal for the well log 650A indicates that signals 601A-606A and 608A-611A contain real-valued data (e.g., a binary value of 1) and that the signal 607A contains null data (e.g., a binary value of 0). - In some embodiments, null data may be represented using a pseudo-random pattern. For example, the
null-data signals may be filled with a pseudo-random pattern rather than being mapped to zero. - To help illustrate,
FIG. 7 depicts the example input array of FIG. 6 with null data represented by a pseudo-random pattern, according to some embodiments. FIG. 7 depicts an example input array 700 generated from the same well data represented by FIG. 6, but with null data represented by a pseudo-random pattern in place of black boxes. The input array 700 includes three well logs 750A, 750B, and 750C. The well log 750A includes signals 701A-711A, the well log 750B includes signals 701B-711B, and the well log 750C includes signals 701C-711C. As depicted, the signals that contained null data in FIG. 6 are represented by a pseudo-random pattern, while the remaining signals contain real-valued data. - Returning to
FIG. 1, one or more input control points can be defined for a reference signal of an input tile representing the input array 104 (as described in reference to FIGS. 3-7), and the trained neural network 110 can output one or more control points for each wellbore, where each output control point represents a depth within each wellbore that corresponds to a depth within the reference wellbore at which the input control point is defined. Similar to the input array 104, output control points can be an array having dimensions of a height, a width, and a number of channels. - The height of the output array can be equal to a number of samples, ND (i.e., a number of rows of the output array). The number of samples of the output array can be equal to the number of samples of the
input array 104. The width of the output array can be equal to the width of the input array 104 and can be defined by Equation (1) as recited above in reference to FIG. 1. The number of channels, NC, of the output array can be equal to a number of control points defined for a reference signal of the input tile representing the input array 104. For example, with reference to FIG. 3, the output array can have 2 channels for the input tile 300, where the reference signal 306 has two control points (the upper control point 352 and the lower control point 354). - The output control points can indicate depths within each wellbore where formation properties and/or downhole measurements are similar. For example, with reference to
FIG. 3, the upper control point 352 can be at a first depth within a first wellbore and an output control point can indicate a depth within one or more wellbores where the formation properties are similar. The output control points can be a visual representation of correlated depths. In some embodiments, an intensity and/or diffusivity of the output control points can be related to a confidence associated with the identified output control points. - To help illustrate,
FIG. 8 depicts an example output set of upper and lower control points corresponding to a set of input signals, according to some embodiments. FIG. 8 depicts an example set of control point mappings 800 for an input tile having two control points defined for a reference signal of the input tile. For example, the set of control point mappings 800 may be a set of output control points that correspond to the input tile 300 of FIG. 3. The set of control point mappings 800 includes a set of upper control point mappings 820A and a set of lower control point mappings 820B. The sets of upper and lower control point mappings 820A and 820B can be represented as output images. FIG. 8 depicts output images from a trained neural network (e.g., the trained neural network 110 of FIG. 1) divided into 11 columns, 801-811, where each column corresponds to a signal of an input tile. For example, with reference to FIG. 3, the columns 801-811 can correspond to the signals 301-311 of the input tile 300. - An output
upper control point 822A for a reference signal of the input tile (the signal 306, in this example) can be located at the same position within the reference signal as the input upper control point 352, and an output lower control point 822B can be located at the same position within the reference signal as the input lower control point 354. The input and output upper control points 352 and 822A can correspond to a depth within a wellbore, where the reference signal 306 represents well data from that wellbore across a range of depths. Each control point can be assigned a value between zero and 1 that indicates a probability that the control point corresponds to the depth of the input control point. Because the output upper control points 820A are determined based on the identified reference signal, and the reference signal 306 corresponds to the column 806, the output upper control point 822A can represent a probability of 1. The same can be true for the input and output lower control points 354 and 822B. - In some embodiments, an intensity and/or diffusivity of output control points of the sets of upper and lower
control point mappings 820A and 820B can indicate a probability associated with each control point. For example, an output lower control point 855B for the signal 308 corresponds to the lower control point 354. However, the output lower control point 855B has a lower intensity relative to an output lower control point 855A for the signal 808. This can indicate a lower probability associated with a determined corresponding depth within that wellbore (of the signal 808). - In some cases, a set of control point mappings may include more than one corresponding upper and/or lower control point for a single signal.
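One plausible way (an assumption for illustration, not a method the disclosure specifies) to turn such a per-depth response column into a depth estimate and a confidence score that falls as the response becomes more diffuse:

```python
import numpy as np

def pick_control_point(column):
    """From one column of a control point mapping (a per-depth response
    for one wellbore), take the most probable depth (the peak index) and
    a confidence score: peak value over total mass, so a diffuse
    (spread-out) response scores lower than a sharp one."""
    column = np.asarray(column, dtype=float)
    depth = int(np.argmax(column))
    total = float(column.sum())
    confidence = float(column[depth]) / total if total > 0 else 0.0
    return depth, confidence
```

A single-sample spike yields a confidence of 1, while a response spread over several depths yields the same peak location with a proportionally lower confidence.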
FIG. 8 depicts the column 806 as having two output upper control points, 815A and 815B, and two output lower control points, 825A and 825B. As depicted, the output upper control point 815B has a greater visual intensity (i.e., is less diffuse) relative to the output upper control point 815A, indicating a higher probability that the position (depth) of the output upper control point 815B corresponds to the position of the control point 822A compared to the position of the output upper control point 815A. FIG. 8 depicts the output control points 815A and 825A as more diffuse than the output control points 815B and 825B. -
FIGS. 9A-9D depict example alternative visual representations of control points output by a machine-learning model for well correlation, according to some embodiments. In some embodiments, output control points can be depicted on an input tile. FIG. 9A depicts a set of upper control point mappings 915A and a set of lower control point mappings 915B overlaid on an example input tile 900. Alternatively, a trained machine-learning model can output solely the output control point mappings associated with one or more input control points. FIG. 9B depicts a set of upper control point mappings 925A and a set of lower control point mappings 925B. - In some embodiments, an area below an output control point can be assigned a value of 1 and an area above the output control point can be assigned a value of 0. To help illustrate,
FIGS. 9C-9D depict, respectively, an alternative visual representation of a set of upper control point mappings and a set of lower control point mappings. FIG. 9C depicts an image 906A illustrating an example set of upper control point mappings where an area below an output upper control point is assigned a value of 1 and an area above the output upper control point is assigned a value of 0. FIG. 9D depicts an image 906B illustrating an example set of lower control point mappings where an area below the output lower control point is assigned a value of 1 and an area above the output lower control point is assigned a value of 0. Alternatively, an area above the output control point can be assigned a value of 1 and an area below the output control point can be assigned a value of 0. - Returning to
FIG. 1, the control point mappings can be an output array from the trained neural network 110 and can then be input into an alignment module 112 to align the signals of the input array 104. In some embodiments, the control point mappings can be stored as stored labelled data 116 to be used by the training module 114 in unsupervised learning processes. The alignment module 112 may be a pre-trained machine-learning model. In some embodiments, the alignment module 112 and the trained neural network 110 can be combined into a single trained machine-learning model. The output array from the trained neural network 110 can represent estimated control point mappings for each signal of the input tile representing the input array 104. These estimated control point mappings can be formulated as an over-constrained system of equations. - In some embodiments, the
input array 104 can be modified to generate additional control point mappings. For example, signals of the input array 104 can be flipped vertically to generate a second input array, which can be input into the trained neural network 110 to obtain additional sets of output control points. Alternatively or in addition, signals of the input array 104 can be rearranged. For example, an order of the signals of the input array can be modified. In some embodiments, additional input arrays can be generated from the input array 104 by vertically flipping the signals and shuffling the order of the signals. This can be repeated until a sufficient number of control point mappings has been generated to formulate the over-constrained system of equations. - Returning to
FIG. 1, the alignment module 112 can solve the over-constrained system of equations to generate a set of shifts between control points of the output array. In some embodiments, the alignment module 112 can solve the over-constrained system of equations using a least squares regression. The set of shifts between control points generated by the alignment module 112 can then be used to generate a set of aligned signals 120. - To help illustrate,
FIG. 10 depicts an example prediction workflow employing a machine-learning model for performing well correlation, according to some embodiments. FIG. 10 depicts an example prediction workflow 1000 for an input tile 1002 having 11 signals. The input tile 1002 can represent an input array that is similar to any of the input arrays described herein. The input tile 1002 can represent an input array having multiple well logs. For example, with reference to FIG. 4, the input tile 1002 may represent the input array 400. In some embodiments, the input tile 1002 may include null data mapped to zero. For example, with reference to FIG. 6, the input tile 1002 can represent the input array 600. Alternatively, the input tile 1002 can include null data represented as a pseudo-random pattern. For example, with reference to FIG. 7, the input tile 1002 can represent the input array 700. - The
input tile 1002 can include one or more control points defined for a reference signal of the input tile 1002. In some embodiments, the input tile 1002 can include an upper control point and a lower control point. For example, with reference to FIG. 3, the input tile 1002 can be the input tile 300 and have the upper control point 352 and the lower control point 354 defined for the reference signal 306. FIG. 10 depicts the example prediction workflow 1000 for the input tile 1002 having two control points. However, the input tile 1002 may have a greater or lesser number of control points. - The
input tile 1002 can be input into a trained neural network 1006 to generate an output array representing a first set of control point mappings 1008A. The trained neural network 1006 can be a U-NET. For example, the trained neural network 1006 may be the trained neural network 110. The set of control point mappings 1008A can include output control points for each signal corresponding to the control point(s) defined for the reference signal of the input tile 1002. The set of control point mappings 1008A can be represented as an image, where an intensity and/or diffusivity of each output control point is correlated to a probability. For example, with reference to FIG. 8, the set of control point mappings 1008A may be the set of control point mappings 800. The set of control point mappings 1008A can then be used to formulate a system of equations to determine a set of shifts for signals of the input tile 1002 in order to generate a set of aligned signals 1010. - If the formulated system is under-constrained, additional constraints can be obtained by processing a modified
input tile 1004 that is generated from the input tile 1002. FIG. 10 depicts the modified input tile 1004 as being a vertically flipped image of the input tile 1002. Alternatively or in addition, the modified input tile 1004 can be generated by rearranging an order of signals of the input tile 1002. The modified input tile 1004 can be input into the trained neural network 1006 to generate a second set of control point mappings 1008B corresponding to the signals of the input tiles 1002/1004. The second set of control point mappings 1008B can be represented similarly to the first set of control point mappings 1008A. Additional constraints for the system can be determined based on the second set of control point mappings 1008B. Once the system is over-constrained, the set of shifts can be determined using a least squares regression. The set of shifts can then be used to align the signals of the input tile 1002 to generate the set of aligned signals 1010. - Returning to
FIG. 1, for example, the input array 104 can be processed by the trained neural network 110 and the alignment module 112 to generate a first set of aligned signals 120. In some embodiments, this process can be performed iteratively to increase a resolution of the set of aligned signals 120. The set of aligned signals 120 may then be input into the trained neural network 110 to obtain a second output array representing a set of control point mappings corresponding to the set of aligned signals 120. The second output array can have a set of control point mappings having an increased probability. The second output array can then be processed by the alignment module 112 to generate a second set of aligned signals having a higher resolution than the set of aligned signals 120. This process can be repeated until a resolution of the set of aligned signals 120 and/or a probability associated with control points of the output array output from the trained neural network 110 is above a threshold. - To further illustrate, operations for training a machine-learning model to perform well correlation and performing well correlation using said model are now described with reference to
FIGS. 11-12, respectively. Operations of the flowcharts 1100-1200 can be performed by software, firmware, hardware, or a combination thereof. Operations of the flowcharts 1100-1200 are described in reference to the example system 100 of FIG. 1. However, other systems and components can be used to perform the operations now described. FIG. 11 depicts a flowchart of example operations for training a machine-learning model to perform well correlation, according to some embodiments. The operations of the flowchart 1100 are now described with reference to the example training workflow of FIG. 2. The operations of the flowchart 1100 begin at block 1102. - At
block 1102, a reference signal is generated based on data from a well log. The well log can be derived during logging of a wellbore. Alternatively, data from the well log can be synthetic data (not derived from actual logging of a wellbore). The reference signal can be a visual representation of data of the well log. For example, with reference to FIG. 2, the data can be represented in heatmap form as the well signal 201. Data of the well log can include raw and/or processed data for a wellbore. For example, the data can include porosity data, formation type data, gamma ray measurement data, and/or any other data type of a downhole and/or formation attribute. In some embodiments, a computer can be used to generate the reference signal from data from the well log. For example, with reference to FIG. 1, data from the well log can be input into the pre-processing module 108 of the computer 106 to generate a reference signal. - At
block 1104, one or more control points are defined for the reference signal. The one or more control points can be defined at a position within the reference signal that corresponds to a depth within the wellbore from which the data was acquired. In some embodiments, an upper control point and a lower control point can be defined for the reference signal. For example, the upper control point 205 and the lower control point 207 are defined at first and second positions within the signal 201. Optionally, additional control points can be defined within the reference signal. - At
block 1106, the reference signal is transformed to generate a plurality of transformed signals. The reference signal can be transformed by performing one or more transforms on the reference signal. For example, transformed signals can be generated from the well signal 201 by shifting the signal 201, scaling the signal 201, adjusting (increasing and/or decreasing) an amplitude of the signal 201, adding noise to the signal 201, setting some signal values to null values, etc. - At
block 1108, an input tile and one or more sets of control point mappings for the input tile are created based on the reference signal and the plurality of transformed signals. The input tile can be created by combining the reference signal and the plurality of transformed signals. For example, the input tile 206 can be generated by combining the well signal 201 with the transformed signals 209 generated from the well signal 201. The sets of control point mappings can be determined based on the control points defined for the reference signal and the transformed signals. For example, the first set of control point mappings 208 can correspond to the upper control point 205 and the second set of control point mappings 210 can correspond to the lower control point 207. - At
block 1110, a determination is made of whether there is additional well data. If there is determined to be additional well data, operations of the flowchart 1100 continue at block 1102 and an additional reference signal is generated based on the additional well data. If it is determined that there is no additional well data, operations of the flowchart 1100 continue at block 1112. - At
block 1112, the input tile(s) and the one or more sets of control point mappings are input into a machine-learning model to train the machine-learning model. The machine-learning model can generate sets of control point mappings based on the input tile(s), and the generated sets of control point mappings can be compared to the input sets of control point mappings to determine accuracy. For example, the input tile 206, the first set of control point mappings 208, and the second set of control point mappings 210 can be input into a machine-learning model. Operations of the flowchart 1100 are complete. -
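The training-data generation of blocks 1102-1108 might be sketched as follows; the transform parameters (shift range, scale range, noise level) and the function name are assumptions for illustration, not values from the disclosure:

```python
import numpy as np

def make_training_tile(reference, control_depths, n_variants, rng):
    """Build an input tile from a reference signal plus shifted, scaled,
    and noised variants, and record where each control point lands in
    every column (the control point mappings used as training targets)."""
    n = reference.shape[0]
    columns = [reference]
    targets = [list(control_depths)]           # reference keeps its depths
    for _ in range(n_variants):
        shift = int(rng.integers(-5, 6))
        variant = np.roll(reference, shift)              # shift in depth
        variant = variant * rng.uniform(0.8, 1.2)        # scale amplitude
        variant = variant + rng.normal(0.0, 0.01, n)     # add noise
        columns.append(variant)
        targets.append([(d + shift) % n for d in control_depths])
    return np.stack(columns, axis=1), np.array(targets)
```

Because each variant's shift is known, the target depth of every control point in every column is known exactly, which is what makes the supervised comparison at block 1112 possible.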
FIG. 12 depicts a flowchart of example operations for performing well correlation using a trained machine-learning model, according to some embodiments. The operations of the flowchart 1200 are described with reference to the example prediction workflow of FIG. 10. The operations of the flowchart 1200 begin at block 1202. - At
block 1202, a first input array having a plurality of signals is generated based on wellbore data from a set of wellbores. Signals of the first input array can represent well data obtained from the set of wellbores. In some embodiments, each signal of the first input array can correspond to a different wellbore. For example, with reference to FIG. 3, each of the signals 301-311 can represent data obtained from distinct wellbores (i.e., a set of 11 wellbores). Alternatively, if there is an insufficient number of unique signals, one or more signals can be repeated to create an input array having an aspect ratio of approximately one. - The input array can include multiple well logs. For example, with reference to
FIG. 4, the input array 400 includes the well logs 450A, 450B, and 450C. Each well log of the input array can include different data types. For example, the well log 450A can represent porosity data, the well log 450B can represent formation type data, and the well log 450C can represent gamma ray measurement data. - In some embodiments, the input array can include null data. Null data can be present in the input array when a type of data is unavailable for a given wellbore. In some embodiments, null data can be mapped to zero and real-valued data of the well log can be mapped to a compressed range. For example, with reference to
FIG. 6, the input array 600 includes null data in each of the well logs 650A, 650B, and 650C, where the null-data signals are mapped to zero. As another example, with reference to FIG. 7, the input array 700 includes null data represented by a pseudo-random pattern. - At
block 1204, an input tile is created based on the input array. The input tile can be a visual representation of data of the input array, where an intensity of a signal is correlated to an amplitude of that signal. For example, with reference to FIG. 10, the input tile 1002 can be generated based on an input array having 11 signals. The input tile can represent data from multiple well logs. For example, with reference to FIGS. 3-4, the input tile 300 can represent the input array 400. - At block 1206, a reference signal is selected from the plurality of signals of the input tile. The reference signal can be any one of the plurality of signals of the input tile. For example, with reference to
FIG. 3, the reference signal can be the signal 306. The reference signal can be selected to determine depths within other wellbores where a formation property is similar to a formation property of the wellbore corresponding to the reference signal. For example, a given wellbore can correspond to the reference signal 306 to determine equivalent depths within wellbores corresponding to the signals 301-305 and 307-311. - At
block 1208, one or more control points are defined for the reference signal of the input tile. The one or more control points can be positioned within the reference signal and can correspond to a depth within a reference wellbore. For example, with reference to FIG. 3, the upper control point 352 can correspond to a first depth within the reference wellbore and the lower control point 354 can correspond to a second depth within the reference wellbore. - At
block 1210, the input tile is input into a machine-learning model to generate one or more sets of control point mappings corresponding to signals of the input tile. The machine-learning model can output a set of control point mappings for each control point defined for the reference signal of the input tile. For example, with reference to FIG. 8, the machine-learning model can output the set of upper control point mappings 820A and the set of lower control point mappings 820B for an input tile having two control points defined for the reference signal (the input tile 300, for example). - In some embodiments, the sets of control point mappings can indicate a probability associated with each control point of the set of control point mappings. The probability associated with a control point mapping can be represented visually as an intensity or diffusivity. For example,
FIG. 8 depicts the control point 855B having a lower intensity (more diffuse or "spread out") relative to the control point 855A. This can indicate a lower probability associated with the determined position of the control point 855B compared to the probability associated with the determined position of the control point 855A. - At
block 1212, a determination is made of whether additional control point mappings are needed. Additional control point mappings can be needed when a formulated system representing correlations between wellbores is under-constrained. If a determination is made that additional control point mappings are needed, operations of the flowchart 1200 continue at block 1214. If a determination is made that additional control point mappings are not needed, operations of the flowchart continue at block 1216. - At
block 1214, a modified input tile is generated based on the first input tile. The modified input tile can be generated by transforming the first input tile. The first input tile can be transformed to create a modified input tile by flipping/mirroring the first input tile and/or rearranging an order of the signals of the input tile. For example, with reference to FIG. 10, the modified input tile 1004 can be generated from the input tile 1002 by vertically flipping the input tile 1002. The modified input tile can then be input into the machine-learning model to generate a second set of control point mappings. For example, the modified input tile 1004 can be input into the neural network 1006 to generate the second set of control point mappings 1008B. In some embodiments, this process can be repeated to generate additional sets of control point mappings. - At
block 1216, a set of shifts for the plurality of signals of the input array is determined based on the sets of control point mappings output from the machine-learning model. The set of shifts can be determined by solving the formulated system which correlates wells once the system is over-constrained by the sets of control point mappings for the input tile. For example, the set of shifts can be determined by performing a least squares regression. - At block 1218, the signals of the plurality of signals of the input array are aligned based on the set of shifts to generate a set of aligned signals. Each signal of the input tile can be shifted based on a corresponding shift of the set of shifts to align the signal with the reference signal of the input tile. For example, with reference to
FIG. 10, signals of the input tile 1002 can be shifted based on the set of shifts to generate the set of aligned signals 1010. - At
block 1220, a determination is made of whether the resolution of the set of aligned signals is acceptable. If the resolution of the set of aligned signals is not acceptable, the set of aligned signals can be used as an input tile to further refine alignment of the signals. If the resolution of the set of aligned signals is acceptable, operations of the flowchart 1200 continue at block 1224. - At
block 1222, the set of aligned signals is used as an input tile and operations continue at block 1206, where a reference signal is selected from the set of aligned signals. For example, with reference to FIG. 10, the set of aligned signals 1010 can be used as an input tile. - At
block 1224, correlated depths for the set of wellbores are determined based on the set of aligned signals. The correlated depths for wells of the set of wellbores can be defined relative to the reference well. The set of aligned signals can be used to determine a depth within each wellbore of the set of wellbores at which a formation property/attribute of the wellbore is substantially equivalent or similar to the formation property/attribute of the reference wellbore at the depth(s) that correspond to the one or more control points defined for the reference signal. For example, with reference to FIG. 3, the upper control point 352 can be defined at a depth of 50 feet within a reference wellbore and a formation surrounding the reference wellbore can have a measured (or determined) porosity at that depth. Based on the set of aligned signals, a correlated depth within another wellbore can be determined, where the porosity of the other wellbore (of the set of signals of the input tile) at the correlated depth is substantially equivalent to the porosity of the reference wellbore at a depth of 50 feet. Operations of the flowchart 1200 are complete. -
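The solve-and-correlate steps (blocks 1216-1224) might be sketched as a least squares problem over pairwise shift estimates; the constraint layout, function names, and sign convention here are illustrative assumptions, not the disclosure's specified formulation:

```python
import numpy as np

def solve_shifts(pairs, offsets, n_signals):
    """Solve the over-constrained system of pairwise shift estimates
    (shift_j - shift_i = offset for each observed pair) by least squares
    regression, pinning the reference signal (index 0) at zero shift."""
    A = np.zeros((len(pairs) + 1, n_signals))
    b = np.zeros(len(pairs) + 1)
    for row, ((i, j), d) in enumerate(zip(pairs, offsets)):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, d
    A[-1, 0] = 1.0                       # reference signal: zero shift
    shifts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return shifts

def correlated_depths(reference_depth, shifts):
    """Depth in each wellbore correlated to a control point depth in the
    reference wellbore, under the assumed convention that a positive
    shift means the formation sits deeper in that wellbore."""
    return reference_depth + np.asarray(shifts)
```

With consistent offsets the least squares solution reproduces them exactly; with the noisy, redundant offsets produced by multiple (flipped and shuffled) tiles, it returns the best-fit compromise, which is the point of over-constraining the system.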
FIGS. 11-12 are annotated with a series of numbers. These numbers represent stages of operations. Although these stages are ordered for this example, the stages illustrate one example to aid in understanding this disclosure and should not be used to limit the claims. Subject matter falling within the scope of the claims can vary with respect to the order and some of the operations. - The flowcharts are provided to aid in understanding the illustrations and are not to be used to limit scope of the claims. The flowcharts depict example operations that can vary within the scope of the claims. Additional operations may be performed; fewer operations may be performed; the operations may be performed in parallel; and the operations may be performed in a different order. For example, the operations depicted in
blocks 1204 and 1206 can be performed in parallel or concurrently. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by program code. The program code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable machine or apparatus. - As will be appreciated, aspects of the disclosure may be embodied as a system, method or program code/instructions stored in one or more machine-readable media. Accordingly, aspects may take the form of hardware, software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The functionality presented as individual modules/units in the example illustrations can be organized differently in accordance with any one of platform (operating system and/or hardware), application ecosystem, interfaces, programmer preferences, programming language, administrator preferences, etc.
- Any combination of one or more machine readable medium(s) may be utilized. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable storage medium may be, for example, but not limited to, a system, apparatus, or device, that employs any one of or combination of electronic, magnetic, optical, electromagnetic, infrared, or semiconductor technology to store program code. More specific examples (a non-exhaustive list) of the machine readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a machine readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A machine readable storage medium is not a machine readable signal medium.
- A machine readable signal medium may include a propagated data signal with machine readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A machine readable signal medium may be any machine readable medium that is not a machine readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a machine readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as the Java® programming language, C++ or the like; a dynamic programming language such as Python; a scripting language such as Perl programming language or PowerShell script language; and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on a stand-alone machine, may execute in a distributed manner across multiple machines, and may execute on one machine while providing results and/or accepting input on another machine.
- The program code/instructions may also be stored in a machine readable medium that can direct a machine to function in a particular manner, such that the instructions stored in the machine readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
-
FIG. 13 depicts an example computer, according to some embodiments. A computer 1300 includes a processor 1301 (possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.). The computer 1300 includes a memory 1307. The memory 1307 may be system memory or any one or more of the above already described possible realizations of machine-readable media. The computer 1300 also includes a bus 1303 and a network interface 1305. - The computer 1300 also includes a
signal processor 1311. The signal processor 1311 may perform one or more operations described herein. Any one of the previously described functionalities may be partially (or entirely) implemented in hardware and/or on the processor 1301. For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processor 1301, in a co-processor on a peripheral device or card, etc. Further, realizations may include fewer or additional components not illustrated in FIG. 13 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.). The processor 1301 and the network interface 1305 are coupled to the bus 1303. Although illustrated as being coupled to the bus 1303, the memory 1307 may be coupled to the processor 1301. - While the aspects of the disclosure are described with reference to various implementations and exploitations, it will be understood that these aspects are illustrative and that the scope of the claims is not limited to them. In general, techniques for multiple signal alignment as described herein may be implemented with facilities consistent with any hardware system or hardware systems. Many variations, modifications, additions, and improvements are possible.
- Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure. In general, structures and functionality presented as separate components in the example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure.
- Use of the phrase "at least one of" preceding a list with the conjunction "and" should not be treated as an exclusive list and should not be construed as a list of categories with one item from each category, unless specifically stated otherwise. A clause that recites "at least one of A, B, and C" can be infringed with only one of the listed items, multiple of the listed items, and one or more of the items in the list and another item not listed.
- Embodiment 1: A method comprising performing wellbore correlation across multiple wellbores, the performing comprising, predicting a depth alignment across the multiple wellbores based on at least one geological feature of subsurface formations in which the multiple wellbores are located, wherein the predicting comprises, selecting a reference wellbore from among the multiple wellbores; defining at least one control point in a reference signal of a reference well log for the reference wellbore, wherein the reference well log includes changes in the at least one geological feature over a depth of the reference wellbore; generating an input tile that comprises the reference signal, the at least one control point, and a number of non-reference well logs, wherein the number of non-reference well logs corresponds to a set of non-reference wellbores, and wherein each of the number of non-reference well logs includes changes in the at least one geological feature over a depth of each non-reference wellbore of the set of non-reference wellbores; inputting the input tile into a machine-learning model; and in response to inputting the input tile into the machine-learning model, outputting, from the machine-learning model, a corresponding control point for each of the number of non-reference well logs that corresponds to the at least one control point of the reference well log.
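The flow of Embodiment 1 can be sketched as follows. This is a hedged illustration: `build_input_tile` and `predict_corresponding_points` are hypothetical names, and a windowed cross-correlation stands in for the trained machine-learning model, whose internals the embodiment does not constrain:

```python
import numpy as np

def build_input_tile(reference_log, control_point_idx, other_logs):
    """Stack the reference signal, a one-hot control-point channel, and the
    non-reference logs into a 2-D array (one row per channel)."""
    cp_channel = np.zeros_like(reference_log)
    cp_channel[control_point_idx] = 1.0
    return np.vstack([reference_log, cp_channel] + list(other_logs))

def predict_corresponding_points(tile, half=5):
    """Stand-in for the trained model: for each non-reference log, pick the
    sample whose local window best matches the reference signal around the
    control point (assumes the control point is at least `half` samples
    from either end)."""
    reference, cp_channel = tile[0], tile[1]
    cp = int(np.argmax(cp_channel))
    ref_win = reference[cp - half:cp + half + 1]
    out = []
    for log in tile[2:]:
        scores = [np.dot(ref_win, log[i - half:i + half + 1])
                  for i in range(half, len(log) - half)]
        out.append(int(np.argmax(scores)) + half)
    return out

# Toy logs: the same formation feature appears 15 samples deeper in the
# non-reference well.
depth = np.arange(100.0)
reference = np.exp(-(depth - 40.0) ** 2 / 20.0)
other = np.exp(-(depth - 55.0) ** 2 / 20.0)
tile = build_input_tile(reference, 40, [other])
print(predict_corresponding_points(tile))  # [55]
```

The control point at sample 40 of the reference log maps to sample 55 of the other log, which is the "corresponding control point" the embodiment has the model output.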
- Embodiment 2: The method of
Embodiment 1, wherein predicting the depth alignment comprises aligning the reference well log and the set of non-reference well logs based on the at least one control point and the corresponding control point for each of the number of non-reference well logs that is output from the machine-learning model. - Embodiment 3: The method of Embodiment 2, wherein the reference well log and each of the number of non-reference well logs comprises multiple channels, wherein each channel corresponds to a different geological feature of the at least one geological feature, wherein defining the at least one control point comprises defining at least one control point for each channel in the reference well log, and wherein aligning comprises aligning, using the machine-learning model, the at least one control point for each channel in the reference well log with a point in a corresponding channel of each of the number of non-reference well logs.
- Embodiment 4: The method of any one of Embodiments 1-3, wherein predicting the depth alignment across the multiple wellbores comprises prior to inputting the reference well log and the non-reference well logs, identifying null data in each of the non-reference well logs; and transforming the null data into non-null data.
- Embodiment 5: The method of Embodiment 4, wherein transforming the null data into non-null data comprises setting the null data to a same value.
- Embodiment 6: The method of
Embodiment 5, wherein the same value is zero. - Embodiment 7: The method of Embodiment 4, wherein transforming the null data into non-null data comprises setting the null data to values according to a pseudo-random number pattern.
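Embodiments 5 through 7 describe two ways of turning null samples into non-null data. A minimal sketch of both modes is below; the function name and interface are assumptions for illustration, and the -999.25 sentinel follows a common LAS-file convention rather than anything required by the disclosure:

```python
import numpy as np

def replace_nulls(log, mode="zero", null_value=-999.25, seed=0):
    """Replace null samples (a sentinel value or NaN) either with a single
    constant (Embodiments 5-6) or with pseudo-random values (Embodiment 7)."""
    log = np.asarray(log, dtype=float).copy()
    mask = (log == null_value) | np.isnan(log)
    if mode == "zero":
        log[mask] = 0.0  # set all nulls to the same value, here zero
    elif mode == "random":
        rng = np.random.default_rng(seed)  # pseudo-random number pattern
        finite = log[~mask]
        lo, hi = (finite.min(), finite.max()) if finite.size else (0.0, 1.0)
        log[mask] = rng.uniform(lo, hi, mask.sum())
    return log

print(replace_nulls([1.0, -999.25, 3.0]))  # [1. 0. 3.]
```

Drawing the pseudo-random replacements from the observed value range is one plausible choice; the embodiment itself only requires that a pseudo-random number pattern be used.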
- Embodiment 8: The method of any one of Embodiments 1-7, further comprising: training the machine-learning model, wherein the training comprises, inputting, into the machine-learning model, a number of well logs and at least one control point for each of the number of well logs; inputting, into the machine-learning model, a mapping among the at least one control point across each of the number of well logs; and training the machine-learning model based on the number of well logs and the mapping among the at least one control point across each of the number of well logs.
- Embodiment 9: The method of Embodiment 8, wherein the number of well logs are based on actual well logging operations.
- Embodiment 10: The method of Embodiments 8 or 9, wherein the number of well logs includes synthetic data.
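Assembling the training pairs of Embodiment 8 — input tiles plus the mapping of each reference control point across the other well logs — might look like the following sketch. All names and the dict-of-dicts mapping layout are assumptions for illustration:

```python
import numpy as np

def make_training_set(well_logs, control_points, mapping):
    """Pair each training input tile with its label. For every reference
    control point cp, `mapping[cp][k]` gives the corresponding sample index
    in well log k (log 0 is the reference)."""
    tiles, labels = [], []
    for cp in control_points:
        cp_channel = np.zeros_like(well_logs[0])
        cp_channel[cp] = 1.0  # mark the control point as an extra channel
        tiles.append(np.vstack([well_logs[0], cp_channel] + well_logs[1:]))
        labels.append([mapping[cp][k] for k in range(1, len(well_logs))])
    return np.stack(tiles), np.array(labels)

# One reference log and one other log, with the control point at sample 40
# mapped to sample 55 in the other log.
logs = [np.zeros(100), np.zeros(100)]
tiles, labels = make_training_set(logs, [40], {40: {1: 55}})
print(tiles.shape, labels.tolist())  # (1, 3, 100) [[55]]
```

Per Embodiments 9 and 10, the logs fed to this step could come from actual well logging operations, synthetic data, or a mixture of both.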
- Embodiment 11: One or more non-transitory machine-readable media comprising program code executable by a processor to cause the processor to: select a reference wellbore from among multiple wellbores; define at least one control point in a reference signal of a reference well log for the reference wellbore, wherein the reference well log includes changes in at least one geological feature over a depth of the reference wellbore; generate an input tile that comprises the reference signal, the at least one control point, and a number of non-reference well logs, wherein the number of non-reference well logs corresponds to a set of non-reference wellbores, and wherein each of the number of non-reference well logs includes changes in the at least one geological feature over a depth of each non-reference wellbore of the set of non-reference wellbores; input the input tile into a machine-learning model; and in response to inputting the input tile into the machine-learning model, output, from the machine-learning model, a corresponding control point for each of the number of non-reference well logs that corresponds to the at least one control point of the reference well log.
- Embodiment 12: The one or more non-transitory machine-readable media of Embodiment 11, wherein the program code comprises program code executable by the processor to cause the processor to: align the reference well log and the number of non-reference well logs based on the at least one control point and the corresponding control point for each of the number of non-reference well logs that is output from the machine-learning model.
- Embodiment 13: The one or more non-transitory machine-readable media of Embodiments 11 or 12, wherein the reference well log and each of the number of non-reference well logs comprises multiple channels, wherein each channel corresponds to a different geological feature of the at least one geological feature, wherein the program code comprises program code executable by the processor to cause the processor to: define at least one control point for each channel in the reference well log; and align the at least one control point for each channel in the reference well log with a point in a corresponding channel of each of the number of non-reference well logs.
- Embodiment 14: The one or more non-transitory machine-readable media of any one of Embodiments 11-13, wherein the program code comprises program code executable by the processor to cause the processor to: prior to inputting the reference well log and the non-reference well logs, identify null data present in each of the non-reference well logs; and transform the null data into non-null data.
- Embodiment 15: The one or more non-transitory machine-readable media of Embodiment 14, wherein the null data is transformed into the non-null data by setting the null data to a same value.
- Embodiment 16: The one or more non-transitory machine-readable media of Embodiment 14, wherein the null data is transformed into the non-null data by setting the null data to values according to a pseudo-random number pattern.
- Embodiment 17: An apparatus comprising: a processor; and a machine-readable medium having program code executable by the processor to cause the processor to, train a neural network for performing wellbore correlation across multiple wellbores, wherein the program code executable by the processor to cause the processor to train the neural network comprises program code executable by the processor to cause the processor to, generate a reference well log based on wellbore data for a wellbore; define at least one reference control point for the reference well log; apply one or more transformations to the reference well log having the at least one reference control point defined to create a plurality of transformed well logs; create a training input tile that comprises the reference well log, the at least one reference control point for the reference well log, and the plurality of transformed well logs; create a training set that comprises the training input tile and an input mapping among the at least one reference control point across each of the plurality of transformed well logs, wherein the input mapping includes at least one corresponding control point for each transformed well log of the plurality of transformed well logs, wherein the at least one corresponding control point corresponds to the at least one reference control point; and train the neural network using the training set.
- Embodiment 18: The apparatus of Embodiment 17, wherein the program code executable by the processor to cause the processor to apply the one or more transformations to the reference well log comprises program code executable by the processor to cause the processor to: apply at least one of shifting, compressing, stretching, an amplitude increase, an amplitude decrease, adding noise, and setting a value of at least a portion of data to null.
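The transformations of Embodiment 18 could be composed as in the sketch below. The particular parameter ranges and ordering are assumptions; a real pipeline would also have to transform the control-point indices consistently with the log, which this sketch omits, and NaN here stands in for the null value:

```python
import numpy as np

def augment(log, rng):
    """Apply the Embodiment-18 transformations to one well log:
    stretch/compress, shift, amplitude scaling, additive noise, and
    setting a segment to null. Output length matches input length."""
    n = len(log)
    # Stretch or compress by resampling at a randomly scaled coordinate.
    out = np.interp(np.linspace(0, n - 1, n) * rng.uniform(0.9, 1.1),
                    np.arange(n), log)
    out = np.roll(out, rng.integers(-5, 6))   # shift up or down the depth axis
    out = out * rng.uniform(0.8, 1.2)         # amplitude increase/decrease
    out = out + rng.normal(0.0, 0.01, n)      # additive noise
    i = rng.integers(0, n - 10)
    out[i:i + 10] = np.nan                    # set a portion of the data to null
    return out

rng = np.random.default_rng(0)
augmented = augment(np.sin(np.linspace(0.0, 6.0, 200)), rng)
```

Repeating this with different random draws yields the plurality of transformed well logs from which the Embodiment-17 training input tile is built.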
- Embodiment 19: The apparatus of Embodiments 17 or 18, wherein the program code executable by the processor to cause the processor to train the neural network using the training set comprises program code executable by the processor to cause the processor to: output, from the neural network, a predicted mapping among the at least one reference control point across each of the plurality of transformed well logs, wherein the predicted mapping includes at least one predicted corresponding control point for each transformed well log of the plurality of transformed well logs, wherein each of the at least one predicted corresponding control point is associated with a probability, wherein the probability associated with each of the at least one predicted corresponding control point is based on the training input tile and the predicted mapping.
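The per-depth probabilities of Embodiment 19 are consistent with a softmax over candidate depth positions, sketched below. The embodiment does not name an output activation, so softmax is an assumption, as is the function name:

```python
import numpy as np

def control_point_probabilities(logits):
    """Turn the network's per-depth scores for one transformed well log into
    a probability for each candidate corresponding control point
    (numerically stable softmax over depth positions)."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

scores = np.array([0.1, 0.2, 3.0, 0.2, 0.1])  # toy per-depth scores
p = control_point_probabilities(scores)
predicted = int(np.argmax(p))  # depth index 2 is the most probable point
```

During training, these probabilities can be compared against the input mapping's true corresponding control points (e.g., with a cross-entropy loss), tying the predicted mapping back to the training input tile.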
- Embodiment 20: The apparatus of any one of Embodiments 17-19, wherein the reference well log comprises multiple channels, wherein each channel corresponds to a different geological feature of a formation surrounding the wellbore, and wherein the program code executable by the processor to cause the processor to define the at least one reference control point for the reference well log comprises program code executable by the processor to cause the processor to define at least one reference control point for each channel of the reference well log.
Claims (20)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/305,861 US20230021210A1 (en) | 2021-07-15 | 2021-07-15 | Supervised machine learning-based wellbore correlation |
GB2316088.0A GB2620524A (en) | 2021-07-15 | 2021-07-16 | Supervised machine learning-based wellbore correlation |
NO20231178A NO20231178A1 (en) | 2021-07-15 | 2021-07-16 | Supervised machine learning-based wellbore correlation |
PCT/US2021/070891 WO2023287454A1 (en) | 2021-07-15 | 2021-07-16 | Supervised machine learning-based wellbore correlation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/305,861 US20230021210A1 (en) | 2021-07-15 | 2021-07-15 | Supervised machine learning-based wellbore correlation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230021210A1 true US20230021210A1 (en) | 2023-01-19 |
Family
ID=84890703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/305,861 Pending US20230021210A1 (en) | 2021-07-15 | 2021-07-15 | Supervised machine learning-based wellbore correlation |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230021210A1 (en) |
GB (1) | GB2620524A (en) |
NO (1) | NO20231178A1 (en) |
WO (1) | WO2023287454A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060052937A1 (en) * | 2004-09-07 | 2006-03-09 | Landmark Graphics Corporation | Method, systems, and computer readable media for optimizing the correlation of well log data using dynamic programming |
US20200174149A1 (en) * | 2018-11-29 | 2020-06-04 | Bp Exploration Operating Company Limited | Event Detection Using DAS Features with Machine Learning |
US11360233B2 (en) * | 2017-09-12 | 2022-06-14 | Schlumberger Technology Corporation | Seismic image data interpretation system |
US20220187496A1 (en) * | 2019-05-21 | 2022-06-16 | Schlumberger Technology Corporation | Geologic model and property visualization system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10459098B2 (en) * | 2013-04-17 | 2019-10-29 | Drilling Info, Inc. | System and method for automatically correlating geologic tops |
US20150088424A1 (en) * | 2013-09-20 | 2015-03-26 | Schlumberger Technology Corporation | Identifying geological formation depth structure using well log data |
US11377931B2 (en) * | 2016-08-08 | 2022-07-05 | Schlumberger Technology Corporation | Machine learning training set generation |
WO2019204555A1 (en) * | 2018-04-20 | 2019-10-24 | Schlumberger Technology Corporation | Well log correlation and propagation system |
-
2021
- 2021-07-15 US US17/305,861 patent/US20230021210A1/en active Pending
- 2021-07-16 WO PCT/US2021/070891 patent/WO2023287454A1/en unknown
- 2021-07-16 GB GB2316088.0A patent/GB2620524A/en active Pending
- 2021-07-16 NO NO20231178A patent/NO20231178A1/en unknown
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060052937A1 (en) * | 2004-09-07 | 2006-03-09 | Landmark Graphics Corporation | Method, systems, and computer readable media for optimizing the correlation of well log data using dynamic programming |
US11360233B2 (en) * | 2017-09-12 | 2022-06-14 | Schlumberger Technology Corporation | Seismic image data interpretation system |
US20200174149A1 (en) * | 2018-11-29 | 2020-06-04 | Bp Exploration Operating Company Limited | Event Detection Using DAS Features with Machine Learning |
US20220187496A1 (en) * | 2019-05-21 | 2022-06-16 | Schlumberger Technology Corporation | Geologic model and property visualization system |
Non-Patent Citations (5)
Title |
---|
B. Zhang, "Deep Machine Learning In Assisted Well Correlation Analysis", [online] retrieved on 4/30/2021 <from https://slidetodoc.com/deep-machine-leaming-in-assisting-well-correlation-analysis/>, 55 pages (Year: 2021) * |
H. Fang and et al, "Mimicking the process of manual sequence stratigraphy well correlation", Interpretation, Vol. 9, No. 3 (August 2021), published ahead of production 25 March 2021; published online 15 June 2021 (Year: 2021) * |
N. A. Sidorovskaia and et al, "Some Aspects of Time-to-Depth Conversion for Depth Imaging,", Offshore Technology Conference, Houston, Texas, 03-May-1999 (Year: 1999) * |
S. Brazell and et al, "A Machine-Learning-Based Approach to Assistive Well-Log Correlation", Petrophysics, Vol. 60, No. 4, August 2019; Pages 469-479; 9 Figures. DOI: 10.30632/PJV60N4-2019al (Year: 2019) * |
Wheeler, "Automatic and simultaneous correlation of multiple well logs", UMI, Dissertation Publishing, UMI 1589685, Published by ProQuest LLC 2015 (Year: 2015) * |
Also Published As
Publication number | Publication date |
---|---|
WO2023287454A1 (en) | 2023-01-19 |
NO20231178A1 (en) | 2023-11-02 |
GB202316088D0 (en) | 2023-12-06 |
GB2620524A (en) | 2024-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wu et al. | Semiautomatic first-arrival picking of microseismic events by using the pixel-wise convolutional image segmentation method | |
US7363158B2 (en) | Method for creating a stratigraphic model using pseudocores created from borehole images | |
Brazell et al. | A machine-learning-based approach to assistive well-log correlation | |
Wei et al. | Characterizing rock facies using machine learning algorithm based on a convolutional neural network and data padding strategy | |
Chevitarese et al. | Seismic facies segmentation using deep learning | |
Gonzalez et al. | Integrated multi-physics workflow for automatic rock classification and formation evaluation using multi-scale image analysis and conventional well logs | |
Gupta et al. | A deep-learning approach for borehole image interpretation | |
Koeshidayatullah et al. | Faciesvit: Vision transformer for an improved core lithofacies prediction | |
Masroor et al. | A multiple-input deep residual convolutional neural network for reservoir permeability prediction | |
Liang et al. | A machine learning framework for automating well log depth matching | |
Jiang et al. | Deep-learning-based vuggy facies identification from borehole images | |
Zhang et al. | Applying convolutional neural networks to identify lithofacies of large-n cores from the Permian Basin and Gulf of Mexico: The importance of the quantity and quality of training data | |
US20230021210A1 (en) | Supervised machine learning-based wellbore correlation | |
CN116168224A (en) | Machine learning lithology automatic identification method based on imaging gravel content | |
Simoes et al. | Deep Learning for Multiwell Automatic Log Correction | |
Shakirov et al. | Quantitative assessment of rock lithology from gamma-ray and mud logging data | |
Liu et al. | Automatic fracture segmentation and detection from image logging using mask R-CNN | |
Wylie Jr et al. | Well-log tomography and 3-D imaging of core and log-curve amplitudes in a Niagaran reef, Belle River Mills field, St. Clair County, Michigan, United States | |
Akkurt et al. | Machine learning for well log normalization | |
Parimontonsakul et al. | A Machine Learning Based Approach to Automate Stratigraphic Correlation through Marker Determination | |
Sharifi et al. | Estimation of pore types in a carbonate reservoir through artificial neural networks | |
CN114021700A (en) | Permeability calculation method and device based on petrophysical constraint neural network | |
Waggoner | Lessons learned from 4D projects | |
Zoraster et al. | Curve alignment for well-to-well log correlation | |
RU2530324C2 (en) | Method for determining position of marker depth coordinates when constructing geological model of deposit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LANDMARK GRAPHICS CORPORATION, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SERVAIS, MARC PAUL;BAINES, GRAHAM;SIGNING DATES FROM 20210713 TO 20210714;REEL/FRAME:056872/0932 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |