US20220215526A1 - Machine learning model for identifying surfaces in a tubular - Google Patents
- Publication number
- US20220215526A1 (application US 17/540,251)
- Authority
- US
- United States
- Prior art keywords
- tubular
- images
- acoustic
- regions
- internal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- E—FIXED CONSTRUCTIONS
- E21—EARTH OR ROCK DRILLING; MINING
- E21B—EARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
- E21B47/00—Survey of boreholes or wells
-
- E—FIXED CONSTRUCTIONS
- E21—EARTH OR ROCK DRILLING; MINING
- E21B—EARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
- E21B47/00—Survey of boreholes or wells
- E21B47/002—Survey of boreholes or wells by visual inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/36—Detecting the response signal, e.g. electronic circuits specially adapted therefor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4445—Classification of defects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4481—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
- G06T2207/10136—3D ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20168—Radial search
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Definitions
- the output of the U-Net may be probabilities for regions of being inner (i.e. 21 b ) or not (i.e. 21 c and 21 a ).
- the output may be multiple classes 21 a , 21 b , 21 c (most outer, inner, most inner).
- the boundaries 24 are determined as the locations of gradient changes in these regions' class or probability. In this application, it may be optimal to consider gradients in three dimensions.
- the output may be the probabilities of being one of three regions (internal, external, void).
- a void may be a rupture or perforation of the tubular, i.e. there is no metal at a location but there should be by inference to the surrounding metal regions.
- the image of the tubular near a void tends to return abnormal and characteristic reflections from edges of the void.
- the model is trained on images containing such voids with pixels/voxels labelled as ‘void.’
- the system may remove all image data (i.e. set pixels to clear) for internal and external pixels and display only pixels from within some width of the identified boundary pixels. This is useful in visualizing the tubular itself with its surface features, such as cracks and perforations.
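This display step can be sketched as a per-scan-line mask. The helper name `mask_to_surface` and the band width are hypothetical choices for illustration, not names from the patent:

```python
import numpy as np

def mask_to_surface(image, boundary, width=3):
    # Clear (zero) all pixels except those within `width` samples of the
    # per-scan-line boundary index, keeping only the tubular surface band.
    out = np.zeros_like(image)
    r = np.arange(image.shape[1])
    for row, b in enumerate(boundary):
        keep = np.abs(r - b) <= width
        out[row, keep] = image[row, keep]
    return out

# Two scan lines of ten radial samples each; boundaries at samples 4 and 6.
masked = mask_to_surface(np.ones((2, 10)), boundary=[4, 6], width=1)
```

Only the kept band would then be rendered, which highlights surface features such as cracks and perforations.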
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mining & Mineral Resources (AREA)
- Geology (AREA)
- Pathology (AREA)
- Health & Medical Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Quality & Reliability (AREA)
- Environmental & Geological Engineering (AREA)
- Geophysics (AREA)
- Signal Processing (AREA)
- Fluid Mechanics (AREA)
- General Life Sciences & Earth Sciences (AREA)
- Geochemistry & Mineralogy (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)
Abstract
Description
- This application claims priority to United Kingdom Patent Application Number 2100058.3, filed Jan. 4, 2021, the disclosure of which is incorporated herein by reference in its entirety.
- The invention relates generally to inspection of fluid-carrying tubulars, in particular acoustic sensors detecting internal and external regions in oil and gas wells and pipelines.
- Cylindrical conduits such as well casings, tubulars and pipes may be imaged using ultrasound sensors mounted to a tool propelled through the conduit. Existing ultrasound tools comprise an array of piezoelectric elements distributed radially around the tool housing. The top surface of each element faces radially away from the tool towards the wall of the conduit. The reflected waves are received by the same elements and the pulse-echo times of the waves are used to deduce the distances to the internal and external walls and voids therebetween. The elements may be angled slightly off radial, such that some of the energy reflects away from the transducer and some backscatters off features, per PCT Application WO 2016/201583 published Dec. 22, 2016, to Darkvision Technologies.
- The reflections are image processed to generate a 2D or 3D geometric model of the conduit, then rendered for visualization at a monitor. However, there are numerous errors in the logging process that need to be corrected to represent the surface smoothly. The reflected signals often contain noise from particles in the fluid, secondary reflections, and ringing in the conduit material. Moreover, there can be dead sensor elements, or the whole tool can be decentralized. This tends to lead to discontinuities and skewing in the visualization even though the conduit is generally cylindrical with a smooth surface.
- In accordance with a first aspect of the invention there is provided a method of processing acoustic images of a tubular, the method comprising: receiving acoustic images of the tubular from an acoustic logging tool; convolving the acoustic images with a Machine Learning model to output probabilities that regions of the images are internal regions or non-internal regions; determining a boundary between internal and non-internal regions based on their probabilities; identifying surface pixels in the acoustic images corresponding to the boundary; and storing a result of the processing in a datastore.
- In accordance with a second aspect of the invention there is provided an apparatus for processing acoustic images of a tubular comprising: a non-transitory computer readable medium having instructions executable by a processor to perform operations comprising: receiving acoustic images of the tubular; convolving the acoustic images with a Machine Learning model to output probabilities that regions of the images are internal regions or non-internal regions; determining a boundary between internal and non-internal regions based on their probabilities; identifying surface pixels in the acoustic images corresponding to the boundary; and storing a result of the processing in a datastore.
- Further aspects of the invention are set out below and in the appended claims. Thus preferred embodiments of the invention enable the device to automatically identify surfaces of imaged tubulars, such as pipes and wells, and remove noise from rendering of it to a user.
- Various objects, features, and advantages of the invention will be apparent from the following description of embodiments of the invention and illustrated in the accompanying drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of various embodiments of the invention.
- FIG. 1 is a block diagram of computing modules in accordance with one embodiment.
- FIG. 2 is a block diagram of an encoder-decoder model's architecture.
- FIG. 3 is a block diagram of the encoder in FIG. 2.
- FIG. 4 is a block diagram of the decoder in FIG. 2.
- FIG. 5 is a cross-section of an imaging tool capturing acoustic images in a tubular.
- FIG. 6A is an unwrapped acoustic image of a single transverse frame for training.
- FIG. 6B is an unwrapped acoustic image of a single transverse frame after processing.
- FIG. 7 is a workflow for building and using a neural net model.
- FIG. 8A is an unwrapped ultrasonic image.
- FIG. 8B is a wrapped ultrasonic image.
- FIG. 9 is an illustration of surface finding in a multi-region situation.
- FIG. 10 is an illustration of geometric features in an imaged tubular.
- FIG. 11 is a filter for mapping pixel probabilities to surface pixels.
- FIG. 12 is a rendered image of a tubular after processing.
- The present invention may be described by the following preferred embodiments with reference to the attached drawings. Disclosed are a method and system to automatically identify internal and external image regions in a logged well from ultrasound images using a computer model. The model is a Machine Learning model. Advantageously, the present system may identify the tubular surface, i.e. the boundary between internal and external regions in the tubular, which may be difficult for a human operator to trace. In difficult cases, the tubular surface might exhibit degrading features including glints, ringing, frequency changes, refraction, depth information, mating surfaces, and materials affecting the speed of sound.
- The transducers are preferably a phased array operating in the ultrasound band. The present imaging tool is preferably that of the leading technology in this area, exemplified by: U.S. Pat. No. 10,781,690 filed 6 Oct. 2016 and entitled “Devices and methods for imaging wells using phased array ultrasound”; Patent Applications US20200055196A1, filed 13 Aug. 2019 entitled “Device and Method to Position an End Effector in a Well”, both incorporated by reference.
- In these disclosures, an array of ultrasound transducers uses beamforming to capture images of a downhole casing. Typically, images are captured as frames from the whole array, while the tool is conveyed through the casing to log a long section of the casing. The result is a 3D ultrasound image with millimeter resolution, which may be stored raw, demodulated, or data compressed into local storage and then transmitted to a remote workstation 19 for further image processing, as described herein below.
- As an example, the imaging array 12 may be a radial array of 256 elements that captures cross-sectional frames of the tubular at a given axial position z. An example image frame is shown in FIG. 8A, unwrapped, showing each scan line vs reflections in time. The sinusoidal nature of the surface is due to uncorrected eccentricity of the tool. This image may be shown wrapped, per FIG. 8B, to give a more intuitive image in the transverse (X-Y) plane. Here a dashed line indicates the estimated location of the surface reflections. Other reflections come from particles or secondary reflections of the tubular.
- The processor applies a Machine Learning (ML) model to the ultrasound image. Because pipes and wells are typically several thousand meters long, the ML model is applied to smaller image segments and later recompiled into a rendering 27 of the longer pipe or well. For each selected image segment, the ML model returns a probability Pinternal that each image region is internal or external to the tubular.
- As used herein, a ‘region’ may be a single pixel or a contiguous area/volume of pixels sharing a common probability of being internal. The region does not necessarily correspond to pixels in the image space, but that is a convenient correspondence for computation. For example, the outcome may be that the physical region beyond 10 cm is ‘external’, which actually corresponds to thousands of pixels in image space.
- Without loss of generality the ultrasound image may be 3-dimensional, in which case the exemplary neural nets provided herein below have an extra dimension to convolve. However, to reduce processing time, the ultrasound image may be a 2D image with depth (z) and azimuthal (Θ) coordinates, as discussed above.
- As shown in FIG. 1, a raw ultrasound image 15 from transducer array 12 may be convolved with a Neural Net 20 to output a probability map 21 that regions within the ultrasound image correspond to internal regions in the tubular. The probability that a region is ‘external’ is simply the complement of it being ‘internal’. Further processing may involve finding the tubular surface by locating the transition between internal and external regions (i.e. boundary 22), which identifies the surface of the tubular. This further output may be stored as pixel locations (z, r and Θ), where the determined surface is.
- The system maintains a database 25 to store image and surface metadata. For each log, the system may store raw images, modified images, surface image pixels, internal probabilities, and boundary locations (z and Θ).
- Image segment selection preferably involves only images that have been collected from a tubular. Invalid regions, including faulty hardware or images for which the acoustic sensor has not yet been inserted, need not be processed. This a priori knowledge may be provided from a human operator, as entries in a database, or as the result of a different ML model.
- Even for valid segments, it might not be desirable to process all images uniformly along the tubular due to the sheer number of images. Given that the tubular boundary is smooth and changes slowly as the acoustic sensors move through it, in some embodiments the method only operates on a subset of these image segments. The choice of which image segment to process might simply involve skipping regular intervals, or it could be the output of a different ML model.
- The image size of the segment selected for processing preferably relates (in terms of pixels) to the amount of data that can be stored on the GPU for efficient matrix operations, and relates (in terms of physical units) to the size of the apparatus. These are both related by the ultrasound scan resolution (pixels/mm or pixels/radian). In preferred embodiments, a region may be from 50 cm to 2 m axially, or may be 200-1000 pixels in either azimuthal or axial dimensions (not necessarily a square).
- In the preferred system, the input data is represented in three main axes: Θ, R and Z. The Z axis is also the logging axis, separated in time by frames; R is the radial distance from the transducer array (or major axis of the tool), in directions transverse to the logging axis, measurable in time-sampled pixels or physical distance; and Θ corresponds to the azimuthal angle of a scan line in the transverse plane. Here, the Θ-R plane represents data collected from a cross-sectional slice of the tubular at a specific axial position (z) or logging time instant (t). One efficient representation is averaging the intensities over R for each scan line. Hence, the entire well or pipe can be represented by a 2-dimensional stream of 2D segments in the Θ-z plane, where every pixel along the Θ-axis at a given z represents averaged line intensities. The size of the image to process may be based on the estimated apparatus size.
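The radial averaging described above can be sketched in a few lines of numpy. This is an illustrative sketch with invented array sizes, not the patent's implementation:

```python
import numpy as np

# Hypothetical frame stack with axes (n_frames_z, n_scanlines_theta, n_samples_r).
rng = np.random.default_rng(0)
frames = rng.random((4, 256, 512)).astype(np.float32)

# Collapse the radial axis R: mean intensity per scan line, giving a
# (z, theta) image where each pixel is an averaged line intensity.
theta_z_image = frames.mean(axis=2)
```

Each row of `theta_z_image` is one frame; stacking many such rows over the log yields the Θ-z stream described above.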
- The ML model presented in this embodiment may use a U-Net architecture. The output of this model partitions the image into internal and external regions. While the input to this model is a gray-scale image (e.g. 256z×256Θ×1), without loss of generality a sequence of images could be fed to similar architectures. The network uses encoder and decoder modules to assign pixels to internal and external regions.
- The purpose of the encoder module is to provide a compact representation of its input. This module may comprise five convolution layers, but fewer or more layers could be used, trading off accuracy and processing time. Alternatively, spatial attention layers could be used instead of convolution layers. For a sequence of images, Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) or spatio-temporal attention models could be used.
- For activation functions, the encoder architecture uses Rectified Linear Units (ReLU), but other activation functions such as Leaky or Randomized ReLUs could also be used to improve the accuracy of the model. The architecture further employs a Batch Normalization layer, which normalizes and scales the input feature maps to each convolutional layer. Batch Normalization layers help speed up the training process and reduce the possibility of the model overfitting the training data. Because Batch Normalization helps reduce overfitting, the model does not need Dropout layers.
- The decoder module creates the probability map of a pixel belonging to an external or internal region. The input of the decoder module is the compact representation given by the encoder. This module comprises five convolution layers with ReLU activations, but fewer or more layers could be used as well. Similar to the encoder module, RNNs, LSTMs or spatio-temporal attention layers could be used for sequential input. In order to expand the compact representation, up-sampling functions are used in-between convolution layers.
- The architecture may employ an Adam optimizer for training the ML model, as it is easier to tune than a stochastic gradient descent optimizer. A stochastic gradient descent with momentum is also an alternative. A learning rate scheduler may be used to reduce the learning rate as a function of the current training epoch. The loss function for the optimizer is the binary cross entropy function.
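The loss and scheduler choices can be sketched as follows. The `step_lr` helper and its drop factor/period are assumptions for illustration, not values disclosed in the patent:

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-7):
    # Mean per-pixel BCE between predicted internal-probabilities p and labels y.
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def step_lr(base_lr, epoch, drop=0.5, every=10):
    # Reduce the learning rate as a function of the current training epoch.
    return base_lr * (drop ** (epoch // every))
```

In practice these would be supplied to the framework's Adam optimizer and scheduler rather than computed by hand; a more confident prediction of a positive label yields a smaller loss.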
- The system initially builds a training dataset of apparatuses with different orientation angles, intensities, geometries and sizes. The training set may be generated by data augmentation of collected ultrasound images with labelled regions or pixels (‘INTERNAL’, ‘NON-INTERNAL’). The training set may also comprise augmented images flipped around an axis, or with changed brightness and contrast, without affecting the estimated label.
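Such label-preserving augmentation can be sketched as below; the flip axis and jitter ranges are illustrative assumptions:

```python
import numpy as np

def augment(image, label, rng):
    # Flip around the azimuthal axis; the label mask flips with the image,
    # so the label assignment is unaffected.
    if rng.random() < 0.5:
        image, label = image[:, ::-1], label[:, ::-1]
    # Brightness/contrast jitter changes intensities only, so the
    # internal/non-internal label mask is unchanged.
    contrast = rng.uniform(0.8, 1.2)
    brightness = rng.uniform(-0.1, 0.1)
    image = np.clip(image * contrast + brightness, 0.0, 1.0)
    return image, label

rng = np.random.default_rng(2)
img = rng.random((64, 64))
lbl = (img > 0.5).astype(np.float32)
aug_img, aug_lbl = augment(img, lbl, rng)
```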
- The boundary 22 between these internal and external regions is determined by the process to be the tubular surface. Thus, the system may automatically identify the tubular surface as this boundary, by looking for a threshold change in probability of being ‘internal’ along a scan line. That is, for a given scan line, some contiguous internal radius will be above the probability threshold of being internal, followed by a contiguous external radius below that threshold.
- This threshold can be a fixed probability or computed dynamically to find an optimal threshold that is able to differentiate internal and external regions. For example, there may be cases where most regions are computed to have very high probability of being internal, so the dynamic threshold is set higher to return boundaries, preferably close to the known radius of the tubular. After thresholding, the processor may then classify regions as ‘internal’, ‘tubular’ or ‘external’, rather than a probability of such.
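Per scan line, the threshold test reduces to finding the first radial sample whose probability of being internal falls below the threshold. A minimal sketch, assuming a fixed threshold:

```python
import numpy as np

def boundary_index(p_internal, threshold=0.5):
    # For one scan line (probabilities ordered by increasing radius),
    # return the first sample index whose P(internal) falls below threshold.
    below = p_internal < threshold
    return int(np.argmax(below)) if below.any() else len(p_internal)

# High internal probability up to sample 3, then external.
line = np.array([0.95, 0.9, 0.8, 0.3, 0.1, 0.05])
```

A dynamic threshold would replace the `threshold` argument with a value computed from the probability distribution of the whole frame.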
- In one embodiment, the processor calculates the gradient of the probability map and scans it row by row. The first peak along the gradient image may be declared the boundary. This tends to be fast but less accurate.
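A simplified sketch of the row-by-row gradient scan; for brevity the strongest gradient step stands in for the "first peak", which is an assumption rather than the patented procedure:

```python
import numpy as np

def strongest_gradient_column(prob_map):
    # Row-by-row scan: locate the steepest change in P(internal)
    # along each scan line of the probability map.
    grad = np.abs(np.diff(prob_map, axis=1))
    return grad.argmax(axis=1)  # one boundary column per row

pm = np.array([[0.9, 0.9, 0.2, 0.1],
               [0.8, 0.7, 0.6, 0.0]])
cols = strongest_gradient_column(pm)
```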
- Alternatively, the process may use seam carving, as disclosed in GB2019014401 filed 4 Oct. 2019 entitled “Surface extraction for ultrasonic images using path energy.” This technique generally finds the brightest “column”. This tends to be more accurate but slower.
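The cited application discloses the actual technique; the following is only a generic dynamic-programming sketch of tracing the brightest connected column through an energy image, not the referenced method:

```python
import numpy as np

def brightest_seam(energy):
    # Accumulate, row by row, the best (brightest) 8-connected path
    # from the top of the image, then backtrack the winning seam.
    h, w = energy.shape
    acc = energy.copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            acc[r, c] += acc[r - 1, lo:hi].max()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(acc[-1].argmax())
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(acc[r, lo:hi].argmax())
    return seam

# Bright "column" drifting one pixel to the right in the last row.
img = np.array([[0., 5., 0.],
                [0., 5., 0.],
                [0., 0., 5.]])
```

The accumulation pass makes this slower than the single gradient scan, which matches the accuracy/speed trade-off noted above.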
- Although the simpler case is finding a circular one-dimensional boundary in a frame between regions of probability, the architecture may be structured and trained to find two-dimensional surfaces of other shapes. This is relevant in downhole and pipeline applications, where objects become stuck (known as “fish”) or the structure becomes deformed.
FIG. 9 shows a perspective view through a cross-section of a crushed pipe located within another crushed pipe. This may be seen as four surfaces, two boundaries 24, and three regions 21a, 21b, 21c. In this example, the pipes were imaged using an axial-facing ultrasound array, such as that disclosed in Patent Application GB1813356.1, filed Aug. 8, 2018, entitled “Device and method to position an end effector in a well.” Such a device may be moved around within the pipe to capture reflections from different angles. - In one preferred embodiment, the U-Net model receives ultrasound intensity images in polar coordinates. To improve training time and accuracy, these images are first normalized per scanline. The U-Net model comprises multiple layers of “standard-units”. These standard-units are the atomic building blocks of the model and comprise batch normalization, convolution and activation units.
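One such standard-unit might be sketched in PyTorch as follows; the ordering, kernel size and channel counts are assumptions, since the text specifies only the three constituent units:

```python
import torch
import torch.nn as nn

def standard_unit(in_ch, out_ch):
    """One 'standard-unit': batch normalization, convolution, activation.

    The exact ordering and hyperparameters are illustrative.
    """
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),                               # batch normalization
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # convolution
        nn.ReLU(),                                           # activation
    )

unit = standard_unit(1, 8)
features = unit(torch.rand(2, 1, 32, 32))   # spatial size preserved, channels grow
```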
- In the U-Net model, these standard-units are followed by pooling layers to decrease the dimensionality of their outputs in a downward path. The successive operation of standard-units and pooling layers gradually decreases the dimensionality of features into a bottleneck, which is a compact representation of the entire image. After the bottleneck, the standard-units are followed by unpooling layers to increase the dimensionality of feature maps to the original image size in an upward path.
- Skip connections between downward and upward paths are used to concatenate feature maps. These skip connections create a gradient highway, which decreases training time and improves accuracy.
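Putting the downward path, bottleneck, upward path and a skip connection together, a minimal single-level sketch might look like this in PyTorch (channel counts and depth are illustrative; the described model stacks several such levels of standard-units):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal sketch of the down/up paths with one skip connection."""

    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(1, 8, 3, padding=1)       # standard-unit stand-in
        self.pool = nn.MaxPool2d(2)                     # downward path: pooling
        self.bottleneck = nn.Conv2d(8, 16, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2)           # upward path: unpooling
        self.fuse = nn.Conv2d(16 + 8, 8, 3, padding=1)  # after skip concatenation
        self.head = nn.Conv2d(8, 1, 1)                  # per-pixel probability map

    def forward(self, x):
        d = torch.relu(self.down(x))
        b = torch.relu(self.bottleneck(self.pool(d)))   # compact representation
        u = self.up(b)                                  # back to input resolution
        u = torch.cat([u, d], dim=1)                    # skip: concat feature maps
        return torch.sigmoid(self.head(torch.relu(self.fuse(u))))

probs = TinyUNet()(torch.rand(1, 1, 64, 64))            # same spatial size as input
```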
- The output of the U-Net may be probabilities for regions of being inner (i.e. 21b) or not (i.e. 21c and 21a). Alternatively, the output may be multiple classes. Boundaries 24 are determined as the locations of gradient changes in these regions' class or probability. In this application, it may be optimal to consider gradients in three dimensions. - In certain embodiments the output may be the probabilities of being one of three regions (internal, external, void). Here a void may be a rupture or perforation of the tubular, i.e. there is no metal at a location where, by inference from the surrounding metal regions, there should be. There will be an internal region, then a void region and then an external region, even though there are no direct surface reflections to find these boundaries. In such situations, the image of the tubular near a void tends to return abnormal and characteristic reflections from the edges of the void. The model is trained on images containing such voids, with pixels/voxels labelled as ‘void.’
- The system may then selectively operate on image data for the determined internal/external/surface regions. These operations may include filtering, contrasting, smoothing, or hole-finding of the image data. The system may visualize and render the tubular to a user on a display 29 using the modified internal/external/surface pixels. - For example, the system may remove all image data (i.e. set pixels to clear) for internal and external pixels and display only pixels within some width of the identified boundary pixels. This is useful in visualizing the tubular itself with its surface features, such as cracks and perforations.
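This boundary-band masking might be sketched as follows; the zero "clear" value and the `width` default are assumptions:

```python
import numpy as np

def keep_near_boundary(image, boundary, width=5):
    """Clear all pixels except those within `width` samples of the boundary.

    `image` is a polar frame (rows = scan lines, columns = radial samples);
    `boundary` gives the detected surface index per row. Illustrative only.
    """
    cols = np.arange(image.shape[1])[None, :]
    mask = np.abs(cols - boundary[:, None]) <= width   # band around the surface
    return np.where(mask, image, 0.0)                  # 'clear' pixels set to zero

frame = np.ones((4, 50))
surf = np.full(4, 25)                                  # surface at sample 25
tubular_only = keep_near_boundary(frame, surf, width=3)
```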
- Alternatively, the system could remove external image data to render just acoustic reflections from particles in the fluid. Alternatively, the system could remove internal image data to render just acoustic reflections from the external cement bond, rock formation or apparatus attached to the tubular.
- The results of the surface detection and image processing may also be used to compute features of the tubular, without the need to visualize it for a user.
FIG. 10 illustrates geometric features that may be identified from the determined surface. For example, the processor may compute: ovality of the tubular, using an ellipse fit to the boundary 24; wall thickness of the tubular over region 26; and a speed of sound correction for the fluid, from knowledge of the tubular diameter versus the time-of-flight (ToF) of the ultrasound to the determined surface. - Quantifiable features such as dents (low-frequency variation in radius), surface corrosion (high-frequency variation in radius), and sand build-up may also be computed and reported with the relevant location.
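Two of these computations might be sketched as follows; the speed-of-sound relation assumes a two-way time of flight with the tool on the tubular axis, and the ovality measure here is a simple stand-in for the ellipse fit described:

```python
import numpy as np

def fluid_speed_of_sound(known_radius_m, two_way_tof_s):
    """Speed-of-sound correction from the known tubular radius and the
    two-way time of flight to the detected surface (tool assumed on axis)."""
    return 2.0 * known_radius_m / two_way_tof_s

def ovality(radii):
    """Simple ovality measure from per-scan-line surface radii.

    A least-squares ellipse fit (as in the text) is more robust; the
    (max - min) / mean ratio here is an illustrative stand-in.
    """
    return (radii.max() - radii.min()) / radii.mean()

c = fluid_speed_of_sound(0.05, 67.5e-6)        # ~1481 m/s, a water-like fluid
radii = np.array([1.00, 1.10, 0.90, 1.00])     # per-scan-line surface radii (m)
```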
- Terms such as “top”, “bottom”, “distal”, “proximate”, “downhole”, “uphole”, “below”, “above”, “upper”, “downstream”, “axial”, and “lateral” are used herein for simplicity in describing the relative positioning of elements of the conduit or device, as depicted in the drawings or with reference to the surface datum. Although the present invention has been described and illustrated with respect to preferred embodiments and preferred uses thereof, it is not to be so limited, since modifications and changes can be made therein which are within the scope of the appended claims as understood by those skilled in the art.
Claims (20)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2100058.3A GB2602495B (en) | 2021-01-04 | 2021-01-04 | Machine Learning Model for Identifying Surfaces in a Tubular |
GB2100058 | 2021-01-04 | ||
GB2100058.3 | 2021-01-04 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220215526A1 true US20220215526A1 (en) | 2022-07-07 |
US11983860B2 US11983860B2 (en) | 2024-05-14 |
Family
ID=74566388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/540,251 Active 2042-12-21 US11983860B2 (en) | 2021-01-04 | 2021-12-02 | Machine learning model for identifying surfaces in a tubular |
Country Status (3)
Country | Link |
---|---|
US (1) | US11983860B2 (en) |
CA (1) | CA3143180C (en) |
GB (1) | GB2602495B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2624002A (en) * | 2022-11-03 | 2024-05-08 | Darkvision Tech Inc | Method and system for characterizing perforations in a tubular |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160327675A1 (en) * | 2013-10-03 | 2016-11-10 | Halliburton Energy Services, Inc. | Downhole inspection with ultrasonic sensor and conformable sensor responses |
CN110060248A (en) * | 2019-04-22 | 2019-07-26 | 哈尔滨工程大学 | Sonar image submarine pipeline detection method based on deep learning |
WO2019236832A1 (en) * | 2018-06-08 | 2019-12-12 | Schlumberger Technology Corporation | Methods for characterizing and evaluating well integrity using unsupervised machine learning of acoustic data |
CN111784645A (en) * | 2020-06-15 | 2020-10-16 | 北京科技大学 | Crack detection method for filling pipeline |
US20230358911A1 (en) * | 2019-11-08 | 2023-11-09 | Darkvision Technologies Inc | Using an acoustic tool to identify external devices mounted to a tubular |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016187242A1 (en) * | 2015-05-18 | 2016-11-24 | Schlumberger Technology Corporation | Method for analyzing cement integrity in casing strings using machine learning |
GB2555978B (en) | 2015-06-17 | 2022-08-17 | Darkvision Tech Inc | Ultrasonic imaging device and method for wells |
US10781690B2 (en) | 2015-10-09 | 2020-09-22 | Darkvision Technologies Inc. | Devices and methods for imaging wells using phased array ultrasound |
GB2572834B8 (en) | 2018-08-16 | 2021-08-11 | Darkvision Tech Inc | Downhole imaging device and method of using same |
GB2588102B (en) | 2019-10-04 | 2023-09-13 | Darkvision Tech Ltd | Surface extraction for ultrasonic images using path energy |
2021
- 2021-01-04 GB GB2100058.3A patent/GB2602495B/en active Active
- 2021-12-02 US US17/540,251 patent/US11983860B2/en active Active
- 2021-12-20 CA CA3143180A patent/CA3143180C/en active Active
Also Published As
Publication number | Publication date |
---|---|
CA3143180C (en) | 2023-06-13 |
GB2602495B (en) | 2023-01-25 |
GB202100058D0 (en) | 2021-02-17 |
CA3143180A1 (en) | 2022-07-04 |
GB2602495A (en) | 2022-07-06 |
US11983860B2 (en) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11733380B2 (en) | Using an acoustic device to identify external apparatus mounted to a tubular | |
JP4839338B2 (en) | Ultrasonic flaw detection apparatus and method | |
JP5090315B2 (en) | Ultrasonic flaw detection apparatus and ultrasonic flaw detection method | |
US20180197292A1 (en) | Interactive image segmenting apparatus and method | |
US11983860B2 (en) | Machine learning model for identifying surfaces in a tubular | |
WO2019200242A1 (en) | Evaluating casing cement using automated detection of clinging compression wave (p) arrivals | |
US11378550B2 (en) | Surface extraction for ultrasonic images using path energy | |
Lee et al. | Probability-based recognition framework for underwater landmarks using sonar images | |
US20230358911A1 (en) | Using an acoustic tool to identify external devices mounted to a tubular | |
US20220415040A1 (en) | Machine learning model for measuring perforations in a tubular | |
EP4170339B1 (en) | Ultrasonic inspection of complex surfaces | |
EP4120192A1 (en) | Computing device comprising an end-to-end learning-based architecture for determining a scene flow from two consecutive scans of point clouds | |
US20240153057A1 (en) | Method and system for characterizing perforations in a tubular | |
US11054398B2 (en) | Ultrasonic inspection method, ultrasonic inspection device, and computer-readable storage medium | |
KR20130038748A (en) | Adaptive signal processing method and apparatus thereof | |
KR102614954B1 (en) | Ultrasonic scanner for scanning an interior of offshore structures and ships including the same | |
CA3105688C (en) | Accurate rendering of acoustic images | |
Benslimane et al. | Automated Corrosion Analysis with Prior Domain Knowledge-Informed Neural Networks | |
Patel | Three-dimensional underwater acoustic image interpretation for ROV navigation | |
EP3001223A1 (en) | Method for detecting and correcting artifacts in ultrasound imaging data | |
JP2024025405A (en) | Ranging device and ranging method | |
Cuenca et al. | Hamburger Beiträge | |
JP5578472B2 (en) | Ultrasonic flaw detector and image processing method of ultrasonic flaw detector | |
JP2013130591A (en) | Ultrasonic flaw detector and image processing method thereof | |
Mapurisa | Reconstruction of industrial piping installations from laser point clouds using profiling techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DARKVISION TECHNOLOGIES INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAHMOUD, ANAS;KHALLAGHI, SIAVASH;SIGNING DATES FROM 20211114 TO 20211122;REEL/FRAME:058263/0191 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ALLOWED -- NOTICE OF ALLOWANCE NOT YET MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |