GB2606632A - A method to increase fracture efficiency using variability in optical measurements - Google Patents

A method to increase fracture efficiency using variability in optical measurements

Info

Publication number
GB2606632A
Authority
GB
United Kingdom
Prior art keywords
images
perforation
perspective
parameters
image
Prior art date
Legal status
Withdrawn
Application number
GB2204841.7A
Other versions
GB202204841D0 (en)
Inventor
Luu Timothy
Current Assignee
Darkvision Technologies Inc
Original Assignee
Darkvision Technologies Inc
Priority date
Filing date
Publication date
Application filed by Darkvision Technologies Inc filed Critical Darkvision Technologies Inc
Priority to GB2204841.7A priority Critical patent/GB2606632A/en
Publication of GB202204841D0 publication Critical patent/GB202204841D0/en
Publication of GB2606632A publication Critical patent/GB2606632A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • EFIXED CONSTRUCTIONS
    • E21EARTH OR ROCK DRILLING; MINING
    • E21BEARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
    • E21B47/00Survey of boreholes or wells
    • E21B47/002Survey of boreholes or wells by visual inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mining & Mineral Resources (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Fluid Mechanics (AREA)
  • Geophysics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geochemistry & Mineralogy (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method of manipulating images from a downhole camera to achieve a desired result. Images of a wellbore are captured using a camera, the images are uploaded to an image server, and parameters of the image are compared with expected parameters to determine a perspective error. The images are then modified using software to reduce the perspective error and generate new parameters.

Description

Invention Title: A Method to Increase Fracture Efficiency Using Variability in Optical Measurements
FIELD OF THE INVENTION
[0001] The invention is directed to the enormous problems with operating a camera in a wellbore. Methods and systems are disclosed for manipulating camera images to get a desired result.
BACKGROUND OF THE INVENTION
[0002] As known in the art of wellbore logging, cameras are used to image perforations. However, they suffer from perspective issues and therefore require extensive post processing to estimate the true size of the perforation. Fisheye lens effects, tool eccentricity, and particulates in the fluid obstructing the view lead to operators having very little confidence in the images.
[0003] Figure 1 shows how images of a perforation in the casing of an oil well change depending on the relative position and distortion of the camera with respect to the perforation when the original image was captured. This problem leaves the analyst confused about the true size of the perforation.
[0004] The inherent problems with using cameras downhole are known in this field and are demonstrated by patents: WO2005069603 A1 "A camera system", EP2831671 A2, US8979401 "Optical Element", US11215917 B2 "Inspection assembly viewport", US11174719 B2 "Inspection assembly lighting system", and CA3091872 A1 "Estimating inspection tool velocity and depth". These discuss problems with lighting, eccentricity, perspective, and fluid interference.
[0005] Additionally, downhole operations, like perforating and fracturing, are very complicated and difficult to optimize. The intent of operators is to have an even distribution of evenly sized perforations across the entire lateral of the well. This can be measured by optically imaging the perforations after the well has been fractured. Proppant ingress into the perforations can be correlated with the size of the perforations measured after fracturing. Attempts to optimize fracturing can therefore be assessed by the even growth of the perforations as measured by this technology.
SUMMARY OF THE INVENTION
[0006] To address the shortcomings of downhole camera tools, a new downhole tool and method are provided that compensate for poor camera performance by matching expectation to observation and reducing any difference therebetween.
[0007] In accordance with a first aspect of the invention there is provided a method of imaging a wellbore comprising: capturing images of the wellbore using a camera; uploading the images to an image server; receiving a set of expected parameters for perforations of that wellbore; determining observation parameters of a perforation captured by the images; comparing the observed and expected parameters to calculate a perspective error; modifying the perspective of the images to reduce the perspective error; and reporting the modified images or their parameters.
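The compare-and-modify loop of this aspect can be sketched minimally as follows. This Python sketch is illustrative only and not part of the original disclosure; the function names, tolerance, and parameter values are assumptions.

```python
# Minimal sketch of the claimed workflow (paragraph [0007]).
# All names and values here are illustrative assumptions, not the patent's implementation.

def perspective_error(observed: dict, expected: dict) -> float:
    """Aggregate difference between observed and expected perforation parameters."""
    return sum(abs(observed[k] - expected[k]) for k in expected)

def modify_perspective(observed: dict, expected: dict, step: float = 0.5) -> dict:
    """Nudge observed parameters toward the expected ones (stand-in for the
    image-space perspective correction described in the detailed description)."""
    return {k: observed[k] + step * (expected[k] - observed[k]) for k in expected}

expected = {"diameter_mm": 38.1, "area_mm2": 1140.0}   # e.g. a 1.5" perforation
observed = {"diameter_mm": 31.0, "area_mm2": 755.0}    # raw measurement from one frame

while perspective_error(observed, expected) > 1.0:     # predefined acceptance range
    observed = modify_perspective(observed, expected)

print("reported parameters:", observed)
```

The loop terminates once the residual perspective error falls within the predefined acceptance range, mirroring the repetition recited in claim 2.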
[0008] Further compensation for camera performance is provided in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Various objects, features and advantages of the invention will be apparent from the following description of embodiments of the invention, as illustrated in the accompanying drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of various embodiments of the invention.
FIG. 1 is a set of images of perspective errors affecting the observed perforation.
FIG. 2 is a GUI with inputs for modifying an image.
FIG. 3 is an illustration comparing a perforation viewed from two frames varying in well depth (Z).
FIG. 4 is an illustration comparing a perforation with rotational perspective error.
FIG. 5 is an illustration comparing a perforation with different fisheye spacing correction.
FIG. 6 is an illustration comparing a perforation viewed from different perspective due to eccentricity of the tool in the well.
FIG. 7 is an illustration comparing a perforation using different illumination settings to affect edge detection.
FIG. 8 is a flow diagram for modifying images to match an expectation.
FIG. 9 is a cross-sectional view of an imaging device deployed in a wellbore in accordance with one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0010] With reference to the accompanying figures, devices and methods are disclosed for capturing downhole images of perforations and analyzing them offline with image modifications made as necessary to achieve consistency in the area of perforations. This saves time and money for the service companies that make the perforations and those that measure them. For example, fracturing companies can simplify and speed up their perforation and fracturing operations knowing that any problems can be fixed later in image-processing.
[0011] Logging is achieved by pulling a camera tool through the wellbore axially, while continually capturing images of the casing and storing the frames on the tool's memory. Any stick-slip in the wireline or eccentricity of the tool can be fixed with image processing.
[0012] The raw images are uploaded to a server, where an analyst examines them and compares them to a desired result, which is typically an expected area of the perforations and some level of consistency desired over the length of the lateral. The analyst receives this set of parameters from the well operator regarding the expected perforations. The expectations come from the perforation charge size, normal wear rates from proppant, and whatever was promised to the wellsite operator. These expected parameters are compared to the observed raw images to determine what manipulations need to be made to the images that are reported back to the customer.
[0013] For example, if a 1" charge was used, then subjected to proppant for long enough to achieve a 1.5" diameter perforation, then any deviation from this in the camera image must be due to tool and perspective error. Camera tools are, after all, highly prone to errors and these need to be corrected. The analyst is then tasked with finding what modifications are needed to return a believable result in the report. Variations in perforation size over the length of a section must also be attributed to errors that can be corrected on a perf-by-perf basis to fit within an acceptable range.
[0014] The prime parameters of interest to an oilwell operator are the diameter, effective diameter, and area of a perforation, and the variation in these parameters. These may be of interest and logged before or after each of perforating, fracturing or production to understand the well's operation. These parameters are determined by identifying the edges of the perforation in the image and converting between pixels and mm. Effective diameter is calculated by the equation:

$D_{\text{eff}} = 2\sqrt{\dfrac{\text{Area}_{\text{px}}}{\pi}}\, L_{\text{px}\to\text{mm}}$

where $\text{Area}_{\text{px}}$ is the perforation area in pixels and $L_{\text{px}\to\text{mm}}$ is the pixel-to-mm conversion.

[0015] As demonstrated in Figure 1, the raw observed image can vary greatly due to perspective problems with the camera. There are several parameters that affect the perspective and therefore the reported value of the perforation: x, y, z positions, rotation about Z, field of view, lens distortion, and tool eccentricity. Figure 2 illustrates a GUI comprising sliders that correct for or manipulate the raw image to synthesize an ideal image, which is reported to the customer. The GUI also includes virtual calipers that the analyst can move relative to the image based on their estimate of where the perforation is. The caliper may be a pair of spaced-apart lines or an ellipse, from which a diameter or area is derived using the estimated pixel-to-mm conversion.
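An illustrative calculation of area and effective diameter from a segmented perforation is sketched below. The pixel-to-mm scale and the synthetic circular mask are assumptions for demonstration only, not values from the disclosure.

```python
import numpy as np

# Illustrative only: compute area and effective diameter from a binary
# perforation mask, assuming a single pixel-to-mm scale for the frame.
mm_per_pixel = 0.25                      # assumed calibration for this frame

mask = np.zeros((400, 400), dtype=bool)  # stand-in segmentation of one perforation
yy, xx = np.ogrid[:400, :400]
mask[(yy - 200) ** 2 + (xx - 200) ** 2 < 80 ** 2] = True

area_mm2 = mask.sum() * mm_per_pixel ** 2
d_eff_mm = 2.0 * np.sqrt(area_mm2 / np.pi)   # diameter of the equal-area circle

print(f"area = {area_mm2:.1f} mm^2, effective diameter = {d_eff_mm:.1f} mm")
```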
[0016] Figure 3 illustrates how the tool depth into the wellbore changes the axial position of the camera with respect to the perforation, such that successive frames capture the casing thickness at the bottom or top edge of the image, which affects the calculated area. The slider 11 can be used to step through frames up or down until the frame whose perforation best matches the expected perforation is chosen. That frame, or a synthesis of frames, is reported.
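Reduced to code, this frame-selection step amounts to picking the frame whose measured parameter is closest to the expectation, as in the following sketch; the frame indices and areas are placeholders.

```python
# Sketch of the frame-selection step in paragraph [0016]: among frames that
# capture the same perforation at slightly different depths, keep the one
# whose measured area is closest to the expected area. Values are made up.
expected_area_mm2 = 1140.0

frames = [                       # (frame index, measured area) placeholders
    (101, 980.0),
    (102, 1115.0),
    (103, 1290.0),
]

best_frame, best_area = min(frames, key=lambda f: abs(f[1] - expected_area_mm2))
print(f"report frame {best_frame} with area {best_area} mm^2")
```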
[0017] Figure 4 illustrates a tool in a casing (lower portion) having perforations, with the unmodified calculated cross-sectional area of each (top portion). As seen, these areas range from 0.3" to 0.45", which is unacceptable. Part of the problem here is that the camera is not centered on the perforation, such that the images include the casing thickness on one edge and not the other. Thus, we see that the imaged size of the perforation is arbitrary and simply needs to be manipulated to get the desired result. By adjusting slider 33 on the GUI, a new synthesized image is generated for each angle and the calculated area changes, until the analyst stops and saves the synthesized image to the report.
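A simplified stand-in for this angle-synthesis step is sketched below using an in-plane image rotation; a full implementation would re-project through the camera model, and the frame contents and slider value shown are assumptions.

```python
import cv2
import numpy as np

# Simplified stand-in for the "synthesize a new image for that angle" step:
# re-render the frame with an in-plane rotation chosen via the GUI slider.
# A full treatment would use the camera model; this only illustrates the idea.
frame = np.zeros((480, 640, 3), dtype=np.uint8)          # placeholder frame
cv2.circle(frame, (320, 240), 60, (255, 255, 255), -1)   # fake perforation

angle_deg = 12.0                                          # slider value (assumed)
h, w = frame.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
synthesized = cv2.warpAffine(frame, M, (w, h))
```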
[0018] Figure 5 demonstrates the problem with the fisheye effect of the wide-angle lenses used in downhole cameras. The calculated area therefore depends on where the perforation sits within the field of view and on what space-to-pixel conversion is used. That is, perforations in the corner of the image will appear smaller than those in the middle, and one cannot be sure how close the camera is to the casing.
[0019] Even a calibration ruler in one part of the image will not inform the pixel/mm density elsewhere. Not only do the mentioned perspective errors affect the observed calibration ruler itself, but the analyst must use guesswork to select the edges of the ruler in order to guess the pixels/mm density. Using the inventive method, another slider on the GUI allows the analyst to enter the pixel density that gives the desired values for the perforation.
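One conventional way to reduce the fisheye distortion before measuring is to undistort the frame with OpenCV's fisheye model, sketched below; the intrinsic matrix and distortion coefficients are placeholder values, not a calibration of the actual downhole camera.

```python
import cv2
import numpy as np

# Undistort a frame with OpenCV's fisheye model before measuring.
# K and D below are assumed placeholder values, not a real calibration.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([[0.1], [0.01], [0.0], [0.0]])        # k1..k4 fisheye coefficients (assumed)

frame = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder captured frame
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
```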
[0020] Using a very wide-angle lens or multiple overlapping cameras, it may be possible to capture the same perforation multiple times. Each instance of the capture is analyzed to determine parameters and the instance closest to the expected parameters is reported. The other instances must be erroneous and can be deleted.
[0021] Similarly, Figure 6 shows the different perspectives of the perforation when the camera translates laterally (x, y). This can be used to correct for eccentricity of the tool. Since the eccentricity is unknown, it is estimated and then used to correct the image area using the Circular Reasoning Theorem. From the raw image, the observed area is calculated and compared to the expected area to determine the eccentricity that must have occurred during logging. This provides a correction factor, which is used to multiply the observed area to get a final area that is reported. The GUI here illustrates how the tool might be eccentric in the casing and what amount of X or Y translation would correct for this. In the display, the synthesized images show how the view can be centered onto the perforation and moved closer or further away until the desired size is achieved.
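Reduced to its arithmetic, the eccentricity correction described above amounts to the following sketch; the area values are placeholders.

```python
# The eccentricity correction of paragraph [0021], reduced to its arithmetic:
# infer a correction factor by comparing observed and expected areas, then
# scale the observed area by it. Numbers are placeholder values.
observed_area_mm2 = 920.0
expected_area_mm2 = 1140.0

correction_factor = expected_area_mm2 / observed_area_mm2
reported_area_mm2 = observed_area_mm2 * correction_factor

print(f"correction factor = {correction_factor:.3f}, "
      f"reported area = {reported_area_mm2:.1f} mm^2")
```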
[0022] As discussed above, area or effective area is determined from the edges of the perforation, and determining those edges is affected by many of the perspectives mentioned, plus judgement of where to place the calipers in the GUI. Adding to this problem, the intensity values of pixels are highly variable due to variations in the lighting source, camera sensitivity and attenuation of the fluid. So a pixel that appears dark enough to be a perforation in one frame may be light enough to be considered casing in the next (see Figure 7). Whether using a human or computer for edge detection, this poses a problem for the segmentation task: the pixels interior to the perforation are usually dark, whereas the casing is light. By adjusting the illumination slider 38, one can make pixels near the edge appear or disappear to make the perforation appear smaller or larger, until the desired area is matched.
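The effect of the illumination slider on the segmented area can be illustrated with a synthetic image whose perforation edge is a smooth intensity ramp: moving the threshold moves the detected boundary. The image and threshold values below are assumptions.

```python
import numpy as np

# Illustration of paragraph [0022]: with dark perforation pixels and light
# casing, the segmented area depends on where the intensity threshold is set,
# which the illumination slider effectively moves. Synthetic image only.
yy, xx = np.ogrid[:200, :200]
r = np.sqrt((yy - 100) ** 2 + (xx - 100) ** 2)
# dark hole (30) blending into light casing (200) over a 10-pixel edge
frame = np.clip(30 + 170 * (r - 35) / 10, 30, 200).astype(np.uint8)

for threshold in (60, 110, 160):
    area_px = int((frame < threshold).sum())
    print(f"threshold {threshold}: {area_px} perforation pixels")
```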
[0023] In an advanced approach, image processing is used to auto-select the edges. This task can be improved by inputting the desired diameter or area so that the auto-select function will include or exclude pixels to match the desired result. For example, a Sobel filter with adjustable threshold can be used.
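A sketch of this Sobel-based auto-selection is shown below; the gradient threshold is the adjustable parameter, and the synthetic frame and threshold value are assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

# Sketch of the auto-select option in paragraph [0023]: Sobel gradients with an
# adjustable threshold pick out candidate perforation edges. The threshold is
# the free parameter the text proposes tuning toward the desired result.
frame = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(frame, (100, 100), 40, 255, -1)          # placeholder perforation

gx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)

edge_threshold = 200.0                               # adjustable (assumed value)
edges = magnitude > edge_threshold
print(f"{int(edges.sum())} edge pixels at threshold {edge_threshold}")
```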
[0024] To overcome the inherent limitation of cameras providing only two-dimensional information, the analyst may need to use some judgement to decide how deep into the casing to start the perforation's cross-sectional area. In the (right side) image of Figure 2, the perforation area is arguably the outermost place where the deformation starts or the innermost place where there is a clear hole. Since the required perforation parameters may have been intended for the interior, exterior or anywhere in between, it is reasonable to set the calipers at any depth. For perforations that flare radially outwards, it is reasonable to assume there are hidden edges that could constitute the intended perforation measurement, and thus those may be selected.
[0025] Decimal places are one way of conveying accuracy and resolution in reports. The invention provides means to select the number of decimals reported. It is preferred to show six decimal places in reports even though the precision is ±1 cm due to perspective errors. In the literature, this is called Spatial Precision Upsampling.
[0026] Another inherent problem is that the camera and lighting are not axially aligned, which adds additional perspective errors, like parallax and shadows. Thus, the skilled person will appreciate that pixels and edges should be treated differently depending on where the lighting is coming from. The analyst should include some pixels in the perforation on one side that would be excluded on the other side. This lighting bias can be varied until the expected area is obtained. This can be used to explain why measurement calipers in the GUI do not appear to be consistently placed around the perforation.
[0027] In an alternative embodiment, the analyst loses their job and is replaced by a computer with machine learning. While this sounds very hard to do, the skilled person would probably employ neural nets, decentralized blockchain, and virtual reality. Quantum computing may also be used in which the state of the observed perforation may be said to exist in all possible permutations at once, with the expected parameters input to resolve the singularity, as taught in US20070042672 Al.
[0028] The manipulated images and their parameters are aggregated into a single report to be shown to interested parties. The original, raw camera images are then deleted along with the unmodified measured parameters.
[0029] It should also be understood that the image processing correction and the order in which the respective steps of each method are performed are purely exemplary. Depending on implementation, the steps may be performed in any order or in parallel, unless indicated otherwise in the present disclosure. Further, the GUI and program are not related or limited to any particular programming language, and may comprise one or more modules that execute on one or more processors in a distributed, non-distributed or multiprocessing environment. The program may be embodied as a video game.
[0030] Modifications and alternative embodiments of this invention will be apparent to those skilled in the art in view of this description of the invention. Accordingly, this description teaches those skilled in the art the manner of carrying out the invention and is intended to be construed as illustrative only. The forms of the invention shown and described constitute the present embodiments. Persons skilled in the art may make various changes in the shape, size and arrangement of parts. For example, persons skilled in the art may substitute equivalent elements for the elements illustrated and described here. Moreover, persons skilled in the art after having the benefit of this description of the invention may use certain features of the invention independently of the use of other features, without departing from the scope of the invention.

Claims (6)

  1. A method of imaging a wellbore comprising: capturing images of the wellbore using a camera; uploading the images to an image server; receiving a set of expected parameters for perforations of that wellbore; determining observation parameters of a perforation captured by the images; comparing the observed and expected parameters to calculate a perspective error; modifying the perspective of the images to reduce the perspective error; and reporting the modified images or their parameters.
  2. The method of claim 1, further comprising repeating the steps of modifying and comparing images until the reported images are within a predefined range of the expected parameters.
  3. The method of claim 1, further comprising synthesizing a modified image using the inputted perspective correction.
  4. The method of claim 1, further comprising selecting another image frame that captures the same perforation and choosing the frame that captures the perforation closest to the expected parameters.
  5. The method of claim 1, wherein the modified perspective includes rotating or translating the perspective of the camera relative to the wellbore.
  6. The method of claim 1, further comprising correcting for eccentricity of the tool by computing a lateral offset of the tool that would reduce said perspective error and then synthesizing an image from that offset position.
GB2204841.7A 2022-04-01 2022-04-01 A method to increase fracture efficiency using variability in optical measurements Withdrawn GB2606632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2204841.7A GB2606632A (en) 2022-04-01 2022-04-01 A method to increase fracture efficiency using variability in optical measurements

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2204841.7A GB2606632A (en) 2022-04-01 2022-04-01 A method to increase fracture efficiency using variability in optical measurements

Publications (2)

Publication Number Publication Date
GB202204841D0 GB202204841D0 (en) 2022-05-18
GB2606632A true GB2606632A (en) 2022-11-16

Family

ID=81581606

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2204841.7A Withdrawn GB2606632A (en) 2022-04-01 2022-04-01 A method to increase fracture efficiency using variability in optical measurements

Country Status (1)

Country Link
GB (1) GB2606632A (en)


Also Published As

Publication number Publication date
GB202204841D0 (en) 2022-05-18


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)