WO2022172048A1 - Inspection methods for small bore pipes - Google Patents

Inspection methods for small bore pipes

Info

Publication number
WO2022172048A1
WO2022172048A1 · PCT/IB2021/000088
Authority
WO
WIPO (PCT)
Prior art keywords
pipe
images
image
point cloud
video camera
Prior art date
Application number
PCT/IB2021/000088
Other languages
French (fr)
Inventor
Graeme Michael WEST
Colin FORRESTER
Dayi Zhang
Gordon Ian DOBIE
Original Assignee
Inspectahire Instrument Company Limited
Priority date
Filing date
Publication date
Application filed by Inspectahire Instrument Company Limited filed Critical Inspectahire Instrument Company Limited
Priority to PCT/IB2021/000088 priority Critical patent/WO2022172048A1/en
Publication of WO2022172048A1 publication Critical patent/WO2022172048A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/073 Transforming surfaces of revolution to planar images, e.g. cylindrical surfaces to planar images
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G06T7/0004 Industrial image inspection
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10068 Endoscopic image
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30136 Metal
    • G06T2207/30244 Camera pose

Definitions

  • FIGURE 1 is a block diagram of the components of the system to implement the present invention listing major functional blocks thereof
  • FIGURE 2 is an example of point cloud and camera poses calculated therefore for an interior of a rectangular pipe
  • FIGURE 3 is an example of a reconstructed 3D model fitted to a pipe with known geometry
  • FIGURE 4 illustrates camera position and projected rays as part of image unwrapping
  • FIGURE 5 shows an example of projected rays fitted to the geometry of an interior of a round pipe
  • FIGURE 6 shows an example of a raw image seen by a camera inside a round pipe
  • FIGURE 7 shows a postprocessing step of fitting projected rays onto a raw image from Fig. 6,
  • FIGURE 8 is another example of a raw image seen by the camera inside a round pipe
  • FIGURE 9 is an example of an unwrapped image from that of Fig. 8,
  • FIGURE 10 is yet another example of a raw image from a camera positioned inside a round pipe
  • FIGURE 11 is an example of unwrapping of the image of Fig. 10,
  • FIGURE 12 is an example of manual image stitching using a plurality of unwrapped images
  • FIGURE 13 is an example of an unwrapped raw image of the internal surface of a round pipe
  • FIGURE 14 is an example of a weighted image of Fig. 13 to adjust the lighting thereof
  • FIGURE 15 is an illustration of using camera poses for automated image stitching
  • FIGURES 16a to 16d shows examples of using Structure-from-Motion and image stitching to develop a smooth unwrapped image of the interior surface of the pipe.
  • the present invention may be useful for video inspection of a variety of internal surfaces of pipes, containers, canisters, reservoirs, and other cavities, it has a particularly advantageous utility in inspecting small diameter pipes, such as 25 mm or smaller. This is because it only uses a feed of video or still images provided by a camera at the end of the scope, which may be obtained with additional lighting from an array of small size LEDs. No other sensor, probe, or other hardware is required for applying the method of the invention, which makes the scope size as small as practical given the size of the suitable camera and light source.
  • FIG. 1 illustrates a general block-diagram of the main components of the system implementing the present invention.
  • a conventional videoscope 10 (shown in a dashed line) may include a handheld controller 12 with a flexible scope cable extending therefrom.
  • a miniature camera 14 and a light source 16 (for example an array of LEDs) may be mounted at the end of the flexible scope cable.
  • the entire videoscope may be configured to provide raw video and still images as the scope is advanced throughout the internal space of a pipe under inspection.
  • Examples of suitable, commercially available videoscopes include the Olympus IPLEX series (Olympus Corporation, Tokyo, Japan), which can be as small as 2.4 mm in diameter, as well as the GE Video Borescope (General Electric, Boston, MA). Recent advances have produced even smaller videoscopes approaching 1 mm in diameter, for example the super-ultra-small industrial video borescope HNL-0.95CAM120 by SPI Engineering (Nagano, Japan), which is only 0.95 mm in diameter. Such small videoscopes are appropriate for evaluating pipes of even smaller diameters, such as sub-millimeter pipes, using the methods of the present invention.
  • While commercially available videoscopes feature a small built-in display for directly observing the images supplied from the video camera, at least some also feature an option to store collected images using an internal or an external media storage element 20.
  • a media storage element may include any suitable removable or embedded computer memory devices such as memory cards, sticks, tapes, etc. as the present invention is not limited in this regard.
  • a further example of a media storage device 20 may include a plug-in cable configured to transmit the video images from the handheld videoscope 10 to any external computer 30, which itself may include a media storage element 20 built-in or removably attached thereto.
  • the method of the invention may be used to post-process the images obtained using the video camera either simultaneously with the process of advancing the videoscope along the pipe or at a later time using the recorded plurality of images, as the invention is not limited in this regard.
  • novel methods of the invention may be implemented as software loaded on a suitable electronic device 30 directly attached to or otherwise capable of receiving the successive images of the internal surface of the pipe from the videoscope 10, for example via a memory stick file transfer.
  • suitable electronic computer device 30 capable of operating the software to implement the methods of the invention may include a personal computer, a smartphone, a tablet, a laptop, a smartwatch, etc. as the present invention is not limited in this regard.
  • the images from the video camera 14 may be transmitted remotely (such as via the Internet) to a central server, which in this case may be configured to post-process the images, generate an unwrapped final result image and then transmit it back to the user or store in memory for subsequent review and further analysis.
  • a suitable user interface such as a suitable website configured for uploading the images to a central server may be provided as part of the system of the present invention.
  • Such remote post-processing of images is included in the scope of the notion of the computer 30 and post-processing unit 40 as illustrated in Fig. 1 .
  • the post-processing unit 40 may include the following functional blocks: video camera pose estimation block 42, image unwrapping block 44, light adjustment block 46, and image stitching block 48, all of which are described below in greater detail. While in some embodiments, one or more of these blocks may be implemented as physically stand-alone units, in other embodiments they may be implemented as elements of a single computer software product. In further yet embodiments of the invention, at least some of these functional blocks may be implemented as separate software programs configured to operate with each other or with other commercially available image processing software products.
  • the output of the results of the post-processing transformation of the raw images obtained from the video camera 14 may be displayed for the observer in a functional block 50, which may be implemented as a computer monitor showing one or more of the unwrapped stitched images of the pipe interior surface.
  • the results may be presented to the observer as a print-out or in other suitable graphically observable ways.
  • a video inspection of an interior surface of a pipe may start with a step of defining the geometrical size and shape of the interior surface of the pipe. This may be accomplished by examining the pipe itself, if it is accessible, or by inspecting the records of the pipe collected during its construction.
  • the internal surface of the pipe may be defined as a 3D geometrical coordinate record of the axis of the pipe combined with a record of its cross-sectional area.
  • Accurate knowledge of the interior geometry of the pipe, in some embodiments to sub-millimeter accuracy, may be instrumental in the proper fitting of the raw images obtained by the video camera to the actual dimensions defining the interior surface, as will be explained below in greater detail.
  • a mathematical (or virtual) model of the interior surface of the pipe may then be created in order to define the boundaries of the surface which naturally limits the field of view of the video camera during a pipe inspection.
  • the actual video inspection may then be carried out. It may include providing a videoscope equipped with a camera and a light source sized to fit inside the pipe. Advancing of the videoscope may be conducted while acquiring raw images from the video camera as discussed above. An optional step of observing the video feed from the camera may be instrumental in assuring the proper advancement of the videoscope and in identifying any gross defects of the pipe.
  • a series of steps of the post-processing part of the process may be initiated.
  • a first, or preliminary determination of camera poses and point cloud may be created using a variety of techniques.
  • One suitable technique for post-processing of the raw images is called Structure-from-Motion, or SfM.
  • The SfM technique involves recognizing and extracting unique features in successive images so that the geometrical relationship of one image to the next can be determined.
  • SfM technique allows reconstruction of a 3D image from a series of 2D images recorded in a known succession.
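As an illustrative sketch of the geometric core of SfM, the following Python snippet triangulates a 3D point from its projections in two calibrated views using linear (DLT) triangulation. All camera intrinsics, poses, and point coordinates here are synthetic assumptions for demonstration, not values from this disclosure:

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t]: maps homogeneous world points to image coordinates."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous 3D point
    return X[:3] / X[3]

# synthetic setup: camera 2 is translated 1 unit along x relative to camera 1
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-1.0, 0, 0]))

X_true = np.array([0.2, 0.1, 2.0])
Xh = np.append(X_true, 1.0)
x1 = (P1 @ Xh)[:2] / (P1 @ Xh)[2]   # observed pixel in view 1
x2 = (P2 @ Xh)[:2] / (P2 @ Xh)[2]   # observed pixel in view 2
X_est = triangulate(P1, P2, x1, x2)
```

In a full SfM pipeline this triangulation step is repeated for every matched feature track, yielding the point cloud from which the camera poses are jointly refined.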
  • the accuracy of the reconstructed 3D model highly depends on the quality of available 2D images.
  • Low light intensity for example, can reduce image quality, which may cause a reduction of the number of unique features available for reconstructing the 3D model.
  • While videoscopes are equipped with an LED light source, such LEDs may not completely cover the field of view of the camera or may not provide sufficiently bright illumination of the pipe interior. These factors tend to introduce errors during the 3D reconstruction process and texture rendering, making the final result less than optimal for detecting pipe defects.
  • Fig. 2 shows an example of a first, or preliminary 3D reconstruction of a rectangular pipe using a point cloud (shown as a plurality of raw points with varying density) and camera poses 15, as better seen in a circular zoom-in insert on the right of Fig. 2.
  • An assumption is made that all images are captured by the same camera so the camera pose may be extracted from the images by matching these unique features between various raw images.
  • Each point is shown in varying degrees of black, with lighter shading indicating fewer features extracted from the raw images, typically because of blur or low light intensity.
  • the point cloud in this example contains many outliers, empty zones, discontinuities, and considerable noise, all of which may negatively impact the final result.
  • Artificial discontinuities, for example, may create an impression of a defect in a place where no physical defect is present, which may create a need for an unnecessary intervention.
  • the present invention contemplates an additional step of fitting the preliminary first 3D reconstruction to a virtual model of a pipe created from the known pipe geometry and shape.
  • the point cloud may be separated into individual slices in order to accommodate pipe bends and other complex geometry.
  • the fitting procedure may be carried out by tuning the poses of the camera and fitted pipe slices, for example with the goal of minimizing the average of the Euclidean distance between the fitted pipe and the corresponding preliminary point cloud slice.
  • the point cloud slices may be adjusted so as to fit the virtual pipe as a whole, as seen in Fig. 3, representing a second pipe point cloud.
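The slicing-and-fitting step above can be sketched in Python as follows. This is an illustration under simplifying assumptions (a straight pipe along the z axis, and an algebraic least-squares "Kasa" circle fit per slice), not the specific fitting procedure of the disclosure; the residual between the fitted circle and the known bore radius could then drive the pose tuning:

```python
import numpy as np

def slice_point_cloud(points, z_min, z_max, n_slices):
    """Bin Nx3 points into slices along the (assumed straight) pipe axis z."""
    edges = np.linspace(z_min, z_max, n_slices + 1)
    idx = np.clip(np.searchsorted(edges, points[:, 2], side="right") - 1,
                  0, n_slices - 1)
    return [points[idx == k] for k in range(n_slices)]

def fit_circle(xy):
    """Kasa algebraic circle fit: solve x^2 + y^2 = 2ax + 2by + c."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, c = sol
    return np.array([a, b]), np.sqrt(c + a**2 + b**2)

# synthetic cloud: 12.5 mm bore whose cross-section is centred at (1.5, -0.7),
# i.e. the reconstructed pipe sits slightly off the nominal axis
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([1.5 + 12.5 * np.cos(theta),
                       -0.7 + 12.5 * np.sin(theta),
                       np.linspace(0.0, 10.0, 200)])
slices = slice_point_cloud(pts, 0.0, 10.0, 5)
center, radius = fit_circle(slices[0][:, :2])
```

Minimizing the distance between such fitted cross-sections and the known pipe model, slice by slice, is one plausible way to realize the Euclidean-distance objective described above.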
  • Image unwrapping is a term generally describing a projection of points in world coordinates onto a 2D Cartesian coordinate system.
  • a projection of the 3D textured model onto a 2D surface is conducted in order to create a final result of the inspection to be presented to the observer of the test.
  • the unwrapping step may be initiated by creating an unwrapped image with multiple rays as seen in Fig. 4.
  • Shown in Fig. 4 is an example of a round pipe with a camera positioned along its central axis (after adjusting for camera poses as explained above based on the point cloud and initial camera pose) and a plurality of rays oriented across the pipe projected in front of the camera. These rays are projected onto a virtual model of the round pipe as seen in Fig. 5.
  • the correlated points of the rays may then be projected onto the raw 2D images, for example, see an image shown in Fig. 6 to create an image with ray projections as seen in Fig. 7.
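A minimal Python sketch of such a cylindrical unwrapping follows, assuming an ideal pinhole camera positioned on the pipe axis; real borescope optics would also need lens-distortion correction, which is omitted here, and all numeric parameters are illustrative assumptions:

```python
import numpy as np

def unwrap_ring(image, K, radius, z_near, z_far, n_theta=90, n_z=50):
    """Sample the pipe wall on a (theta, z) grid and look up each sample's
    pixel in the raw image via an ideal pinhole projection."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_z, n_theta), dtype=float)
    for i, z in enumerate(np.linspace(z_near, z_far, n_z)):
        # wall point (r cos t, r sin t, z) projects to (fx*x/z + cx, fy*y/z + cy)
        u = np.rint(fx * radius * np.cos(thetas) / z + cx).astype(int)
        v = np.rint(fy * radius * np.sin(thetas) / z + cy).astype(int)
        ok = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
        out[i, ok] = image[v[ok], u[ok]]
    return out

# synthetic raw frame: horizontal gradient so unwrapped values are predictable
frame = np.tile(np.arange(640, dtype=float), (480, 1))
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
unwrapped = unwrap_ring(frame, K, radius=12.5, z_near=25.0, z_far=100.0)
```

Each row of the output corresponds to one ring of the pipe wall at a fixed depth, which is the flattened counterpart of the projected rays shown in Figs. 5 and 7.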
  • a further advantageous step of the present invention includes lighting adjustment of the raw images, which can be carried out during the step of image unwrapping.
  • Low light intensity and the nature of a small size video camera may result in at least some portion of the acquired images containing blurry areas and distortions. Mixing these “featureless” parts with other sharper and more detailed parts of the image may reduce the quality and sharpness of the final result.
  • the methods of the present invention use a weight factor assigned to at least some or even every pixel on each image in order to reduce the contribution of unfocused pixels and increase the contribution of pixels with high focus and sharpness.
  • the raw image has points with maximum contribution close to the center of the image, while peripheral parts of the image have lower sharpness and therefore are assigned lower weight factors.
  • A linear increase of sharpness from the periphery towards the center of the image may be assumed, and a corresponding linear increase in weight factors may therefore be applied before unwrapping the image.
  • Fig. 9 shows an example of using weight factors in unwrapping the image of Fig. 8.
  • a lower contribution is represented by higher transparency, up to a point of abandoning some of the image parts at the lower edge of the image.
  • Abandoned parts of the image may be compensated for by sharper images of the same portion of the pipe obtained from other raw images, so that blending of focused and unfocused pixels of the same portion of the image is avoided, resulting in increased focus of the final result.
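The weighting and blending described above can be sketched as follows. The linear radial falloff stands in for a per-pixel sharpness estimate and is an illustrative assumption; the disclosure does not prescribe this exact weighting function:

```python
import numpy as np

def radial_weights(h, w):
    """Weight 1.0 at the image centre, falling linearly to 0.0 at the
    farthest corner; a stand-in for a per-pixel sharpness estimate."""
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(xs - (w - 1) / 2.0, ys - (h - 1) / 2.0)
    return 1.0 - d / d.max()

def blend(images, weights, eps=1e-8):
    """Per-pixel weighted average; zero-weight ('abandoned') pixels are
    filled entirely from overlapping frames with nonzero weight."""
    num = sum(w * im for w, im in zip(weights, images))
    den = sum(weights) + eps
    return num / den

w_mask = radial_weights(101, 101)
merged = blend([np.full((101, 101), 3.0), np.full((101, 101), 5.0)],
               [w_mask, w_mask])
```

With equal weights the two overlapping frames average to their mean; in practice each frame would carry its own mask, so sharp pixels dominate wherever frames overlap.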
  • FIG. 10 shows the raw image of the pipe interior
  • Fig. 11 shows an adjusted unwrapped image of the same, created using Fig. 10 as a starting point.
  • Image stitching to create a panoramic combined image of the interior surface of the pipe may be the next step in the method of the invention.
  • Manually conducted image stitching is a time-consuming process when it comes to consolidating a large number of images, which can easily run into the hundreds or more.
  • variable lighting conditions may occur when different adjacent images are taken, due to movement of the video camera, light source repositioning, blind spots in illumination, and for other reasons. Variability in lighting may cause the occurrence of light artifacts in the final panoramic image as can be seen in Fig. 12, for example. Abrupt transitions between well-lit and poorly lit areas of the pipe may create an erroneous impression of a presence of a defect in the pipe, which may not be there.
  • Fig. 13 shows an example of a raw unwrapped image.
  • the brightness of the image is a function of lighting, which generally diminishes from the center of the video camera (located generally at the same spot as the light source) towards the periphery of the image.
  • Each image may be processed based on this or another suitable light distribution model.
  • the video camera pose is estimated on the image, and a gradual decrease of brightness away from that point is used to assign respective weight factors to at least some or all pixels of the image. A lower weight factor results in a more transparent result in the weighted image, as seen in Fig. 14.
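One simple way to even out such lighting, shown here as an illustrative flat-field correction rather than the specific pose-anchored weighting scheme of the disclosure, is to estimate the smooth illumination field with a large box blur and divide it out:

```python
import numpy as np

def flat_field(image, k=31):
    """Estimate the smooth illumination field with a separable k x k box
    blur (edge-padded), then divide it out to even the lighting."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    kernel = np.ones(k) / k
    # horizontal then vertical pass of the box blur
    illum = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"),
                                1, padded)
    illum = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"),
                                0, illum)
    return image / np.maximum(illum, 1e-6)

# a uniformly lit frame normalizes to 1.0 everywhere
corrected = flat_field(np.full((40, 60), 5.0))
```

After such normalization, adjacent unwrapped images blend without the abrupt bright-to-dark transitions that can mimic defects.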
  • Fig. 15 shows estimated poses of the video camera extracted from the point cloud during video inspection of a round pipe along a U-shaped curve of the pipe. Successive points indicate individual camera poses at which the images of the interior surface of the pipe were taken. To stitch these images, each Nth point may be assumed to represent the initial coordinates (0, 0, 0). The entire point cloud may then be rotated using the rotation of the nearest fitted pipe section. The distance between two successive images N and N+1 may then be assumed to be the distance between these two points along the path of Fig. 15. Successive images of the pipe interior may then be automatically processed between corresponding poses and positions of the video camera.
  • FIGs. 16a through 16d show a comparison of various methods of image stitching of the prior art (Figs. 16a, 16b, 16c) against that obtained using the method of the present invention shown in Fig. 16d.
  • a clean round copper pipe without any internal surface defects was used to obtain all of these images.
  • Fig. 16a shows a panoramic image obtained from a plurality of individual images of the pipe interior with reconstruction errors on the mesh which appear as defects, highlighted within the three circles and pointed by two arrows. These are artifacts of image processing and are not indicative of actual pipe defects.
  • Fig. 16b shows a panoramic image of the same pipe obtained by manual stitching of the images with variable brightness, indicating light artifacts as seen in the three circles.
  • Fig. 16c is a panoramic image of the same pipe obtained using SfM without weight adjustment of the images, thus producing defect-like artifacts seen within the two circles on the right.
  • Fig. 16d shows a panoramic combined image of the pipe obtained using the present invention with all the weighting and light adjustments as described above. This image has the most uniform lighting, best sharpness, and least artifacts as compared to all of the previous three images.
  • the words “comprising” (and any form of comprising, such as “comprise” and “comprises”), “having” (and any form of having, such as “have” and “has”), “including” (and any form of including, such as “includes” and “include”) or “containing” (and any form of containing, such as “contains” and “contain”) are inclusive or open-ended and do not exclude additional, unrecited elements or method steps.
  • “comprising” may be replaced with “consisting essentially of” or “consisting of”.
  • the phrase “consisting essentially of” requires the specified integer(s) or steps as well as those that do not materially affect the character or function of the claimed invention.
  • the term “consisting” is used to indicate the presence of the recited integer (e.g., a feature, an element, a characteristic, a property, a method/process step or a limitation) or group of integers (e.g., feature(s), element(s), characteristic(s), property(ies), method/process steps or limitation(s)) only.
  • words of approximation such as, without limitation, “about”, “substantial” or “substantially” refer to a condition that, when so modified, is understood to not necessarily be absolute or perfect but would be considered close enough by those of ordinary skill in the art to warrant designating the condition as being present.
  • the extent to which the description may vary will depend on how great a change can be instituted and still have one of ordinary skill in the art recognize the modified feature as still having the required characteristics and capabilities of the unmodified feature.
  • a numerical value herein that is modified by a word of approximation such as “about” may vary from the stated value by at least ±1, 2, 3, 4, 5, 6, 7, 10, 12, 15, 20 or 25%.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

A novel video inspection method for an interior surface of a small diameter pipe may include the steps of advancing a videoscope through the pipe of known size while acquiring raw images therefrom; estimating the video camera pose for at least some of the raw images; sequentially building a pipe point cloud, a mesh, and a 3D textured model of the interior surface of the pipe from the raw images with adjustments for video camera poses; unwrapping the raw images using camera poses and the pipe point cloud to create unwrapped images of the interior surface of the pipe; and creating a panoramic image of the interior surface of the pipe by stitching the unwrapped images together. The method improves on a conventional Structure-from-Motion technique and allows enhanced sharpness, reduced artifacts, and improved uniformity of lighting as a result of reduced model noise and removal of outliers. The method may be implemented with conventional video borescopes by postprocessing raw images using proprietary software.

Description

INSPECTION METHODS FOR SMALL BORE PIPES
BACKGROUND
[001] Without limiting the scope of the invention, its background is described in connection with the non-destructive inspection of pipelines. More particularly, the invention describes improved remote video inspection methods of image processing obtained using a conventional small diameter videoscope to present to the observer an accurate unwrapped combined image of the internal surface of a small bore pipe.
[002] The use of pipe networks is common in several industries, including oil and gas, petrochemical, nuclear, and aerospace as some examples. Regular non-destructive examination and assessment of these pipes to ensure the safe operation is common and often required by safety regulations. Other examples of pipe networks that require inspection include water pipes, and drain pipes. Optical cameras are frequently used where human access is constrained - typically by hazardous conditions or small pipe diameter.
[003] Traditional systems and methods for inspecting the pipes may include a video camera head mounted on a push-cable that is deployed along a pipe to display the interior of the pipe on a camera control unit or another video display. Such video camera heads are essential tools to visually inspect the interior of the pipes and to identify possible defects therein, for example, pipe cracks or breaks, corrosion, leakage, and/or other defects or blockages inside the pipe. Traditional pipe inspection systems, though useful, are limited to manually performed visual inspection and interpretation of the images of pipes by an operator. Existing lateral push-cable camera systems generally include analog cameras with poor image quality due to limitations in power and signal processing down a lengthy push-cable, where the camera head must be sufficiently small to fit into and navigate the bends and turns of commonly used pipe diameters.
[004] The problem may be exacerbated further by a small diameter of the pipe, such as under 25 mm, as it may restrict the view of the camera and further limit the ability of the operator to identify possible defects therein.
[005] The need, therefore, exists for improved pipe inspection methods in order to increase confidence in assessment results and more reliably identify problem areas inside a pipe network.
SUMMARY
[006] Accordingly, it is an object of the present invention to overcome these and other drawbacks of the prior art by providing novel methods for enhanced visual inspection of pipes, and in particular of small bore pipelines and networks.
[007] It is another object of the present invention to provide novel methods for remote visual inspection of pipes with improved image quality.
[008] It is a further object of the present invention to provide novel methods of visual inspection adapted to operate with existing visual inspection equipment and yet provide improved image representation of interior surfaces of small bore pipes.
[009] It is yet a further object of the present invention to provide improved methods for remote visual inspection using a small diameter handheld videoscope equipped with just the video camera and without additional sensors, laser projectors, pipe crawler mechanisms, or camera positioning and centering hardware.
[0010] It is yet another object of the present invention to provide novel visual inspection methods that allow localizing identified pipe defects with greater accuracy along the length of the pipe.
[0011] It is a further yet object of the present invention to provide novel visual inspection methods configured to compensate for the low light intensity of the raw images acquired by the videoscope.
[0012] The novel video inspection method for an interior surface of a pipe of the present invention may include the conventional steps of (a) defining geometrical size and shape of the interior surface of the pipe, (b) providing a videoscope sized to fit inside the pipe, (c) advancing the videoscope through the pipe while acquiring raw images from the video camera of the videoscope. The method further includes the novel steps of (d) estimating the video camera pose for at least some of the raw images of step (c), (e) sequentially building a pipe point cloud, a mesh, and a 3D textured model of the interior surface of the pipe from the raw images with adjustments for video camera poses, (f) unwrapping the raw images using corresponding video camera poses and the pipe point cloud to create unwrapped images of the interior surface of the pipe, and (g) creating a panoramic image of the interior surface of the pipe by stitching the unwrapped images together.
[0013] The step of extracting and matching unique features of the images may be performed using pipe point clouds and Structure-from-Motion techniques as a basis. Beyond that, the method of the invention allows enhanced sharpness, reduced artifacts, and improved uniformity of lighting as a result of reduced model noise and removal of outliers as described in greater detail below.
[0014] The method of the invention may be implemented in a software product configured to perform the steps of the method. The software may be loaded on a suitable computer, tablet, smartphone, or another appropriate electronic device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Subject matter is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through the use of the accompanying drawings, in which:
[0016] FIGURE 1 is a block diagram of the components of the system to implement the present invention listing major functional blocks thereof,
[0017] FIGURE 2 is an example of a point cloud and camera poses calculated therefor for an interior of a rectangular pipe,
[0018] FIGURE 3 is an example of a reconstructed 3D model fitted to a pipe with known geometry,
[0019] FIGURE 4 illustrates camera position and projected rays as part of image unwrapping,
[0020] FIGURE 5 shows an example of projected rays fitted to the geometry of an interior of a round pipe,
[0021] FIGURE 6 shows an example of a raw image seen by a camera inside a round pipe,
[0022] FIGURE 7 shows a postprocessing step of fitting projected rays onto a raw image from Fig. 6,
[0023] FIGURE 8 is another example of a raw image seen by the camera inside a round pipe,
[0024] FIGURE 9 is an example of an unwrapped image from that of Fig. 8,
[0025] FIGURE 10 is yet another example of a raw image from a camera positioned inside a round pipe,
[0026] FIGURE 11 is an example of unwrapping of the image of Fig. 10,
[0027] FIGURE 12 is an example of manual image stitching using a plurality of unwrapped images,
[0028] FIGURE 13 is an example of an unwrapped raw image of the internal surface of a round pipe,
[0029] FIGURE 14 is an example of a weighted image of Fig. 13 to adjust the lighting thereof,
[0030] FIGURE 15 is an illustration of using camera poses for automated image stitching, and
[0031] FIGURES 16a to 16d show examples of using Structure-from-Motion and image stitching to develop a smooth unwrapped image of the interior surface of the pipe.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
[0032] The following description sets forth various examples along with specific details to provide a thorough understanding of claimed subject matter. It will be understood by those skilled in the art, however, that claimed subject matter may be practiced without one or more of the specific details disclosed herein. Further, in some circumstances, well-known methods, procedures, systems, components and/or circuits have not been described in detail in order to avoid unnecessarily obscuring claimed subject matter. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.
[0033] While the present invention may be useful for video inspection of a variety of internal surfaces of pipes, containers, canisters, reservoirs, and other cavities, it has a particularly advantageous utility in inspecting small diameter pipes, such as 25 mm or smaller. This is because it only uses a feed of video or still images provided by a camera at the end of the scope, which may be obtained with additional lighting from an array of small size LEDs. No other sensor, probe, or other hardware is required for applying the method of the invention, which makes the scope size as small as practical given the size of the suitable camera and light source.
[0034] Fig. 1 illustrates a general block-diagram of the main components of the system implementing the present invention. A conventional videoscope 10 (shown in a dashed line) may include a handheld controller 12 with a flexible scope cable extending therefrom. A miniature camera 14 and a light source 16 (for example an array of LEDs) may be mounted at the end of the flexible scope cable. The entire videoscope may be configured to provide raw video and still images as the scope is advanced throughout the internal space of a pipe under inspection.
[0035] Examples of suitable, commercially available videoscopes may include the Olympus IPLEX series of videoscopes (Olympus Corporation, Tokyo, Japan), which are only 2.4 mm in diameter, as well as the GE Video Borescope (General Electric, Boston, MA). Recent advances have produced even smaller videoscopes of about 1 mm in diameter, for example the super-ultra-small industrial video borescope HNL-0.95CAM120 by SPI Engineering (Nagano, Japan), which is only 0.95 mm in diameter. Such small videoscopes are appropriate for evaluating pipes of even smaller diameters, including sub-millimeter pipes, using the methods of the present invention.
[0036] While commercially available videoscopes feature a small built-in display for directly observing the images supplied from the video camera, at least some also feature an option to store collected images using an internal or an external media storage element 20. Examples of a media storage element may include any suitable removable or embedded computer memory devices such as memory cards, sticks, tapes, etc. as the present invention is not limited in this regard. A further example of a media storage device 20 may include a plug-in cable configured to transmit the video images from the handheld videoscope 10 to any external computer 30, which itself may include a media storage element 20 built-in or removably attached thereto.
[0037] The method of the invention may be used to post-process the images obtained using the video camera either simultaneously with the process of advancing the videoscope along the pipe or at a later time using the recorded plurality of images, as the invention is not limited in this regard.
[0038] In either case, novel methods of the invention may be implemented as software loaded on a suitable electronic device 30 directly attached to or otherwise capable of receiving the successive images of the internal surface of the pipe from the videoscope 10, for example via a memory stick file transfer. Examples of such a suitable electronic computer device 30 capable of operating the software to implement the methods of the invention may include a personal computer, a smartphone, a tablet, a laptop, a smartwatch, etc., as the present invention is not limited in this regard. In further embodiments of the invention, the images from the video camera 14 may be transmitted remotely (such as via the Internet) to a central server, which in this case may be configured to post-process the images, generate an unwrapped final result image and then transmit it back to the user or store it in memory for subsequent review and further analysis. In this case, a suitable user interface, such as a website configured for uploading the images to a central server, may be provided as part of the system of the present invention. Such remote post-processing of images is included in the scope of the notion of the computer 30 and post-processing unit 40 as illustrated in Fig. 1.
[0039] The post-processing unit 40 may include the following functional blocks: video camera pose estimation block 42, image unwrapping block 44, light adjustment block 46, and image stitching block 48, all of which are described below in greater detail. While in some embodiments, one or more of these blocks may be implemented as physically stand-alone units, in other embodiments they may be implemented as elements of a single computer software product. In further yet embodiments of the invention, at least some of these functional blocks may be implemented as separate software programs configured to operate with each other or with other commercially available image processing software products.
[0040] The output of the results of the post-processing transformation of the raw images obtained from the video camera 14 may be displayed for the observer in a functional block 50, which may be implemented as a computer monitor showing one or more of the unwrapped stitched images of the pipe interior surface. Alternatively, or in addition to a visual display, the results may be presented to the observer as a print-out or in other suitable graphically observable ways.
[0041] According to the present invention, a video inspection of an interior surface of a pipe may start with a step of defining the geometrical size and shape of the interior surface of the pipe. This may be accomplished by examining the pipe itself if it is accessible, or by inspecting the records of the pipe collected during its construction. The internal surface of the pipe may be defined as a 3D geometrical coordinate record of the axis of the pipe combined with a record of its cross-sectional area. Accurate knowledge of the interior geometry of the pipe, in some embodiments to sub-millimeter accuracy or better, may be instrumental in the proper fitting of the raw images obtained by the video camera to the actual dimensions defining the interior surface, as will be explained below in greater detail. A mathematical (or virtual) model of the interior surface of the pipe may then be created in order to define the boundaries of the surface, which naturally limit the field of view of the video camera during a pipe inspection.
[0042] Either before or after the step of obtaining the geometrical dimensions of the pipe interior surface, the actual video inspection may be carried out. It may include providing a videoscope equipped with a camera and a light source sized to fit inside the pipe. Advancing of the videoscope may be conducted while acquiring raw images from the video camera as discussed above. An optional step of observing the video feed from the camera may be instrumental in assuring the proper advancement of the videoscope and in identifying any gross defects of the pipe.
[0043] Once the record of a plurality of the raw images is obtained, a series of post-processing steps may be initiated. As a first step, a preliminary determination of camera poses and a point cloud may be created using a variety of techniques. One suitable technique for post-processing of the raw images is called Structure-from-Motion, or SfM. Generally speaking, the SfM technique involves the recognition and extraction of unique features of successive images so that the geometrical relationship of one image to the next can be determined. The SfM technique allows reconstruction of a 3D model from a series of 2D images recorded in a known succession.
[0044] As can be appreciated by those skilled in the art, the accuracy of the reconstructed 3D model highly depends on the quality of the available 2D images. Low light intensity, for example, can reduce image quality, which may reduce the number of unique features available for reconstructing the 3D model. While videoscopes are equipped with an LED light source, such LEDs may not completely cover the field of view of the camera or may not provide bright enough illumination of the pipe interior. These factors tend to introduce errors during the 3D reconstruction process and texture rendering, making the final result less than optimal for detecting pipe defects.
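The feature matching that SfM relies on to relate one raw image to the next can be sketched as a nearest-neighbour descriptor search with a ratio test. The sketch below uses plain numpy descriptors as a simplified stand-in for the feature descriptors (e.g., SIFT or ORB) that a production SfM pipeline would extract; the function name and thresholds are illustrative:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two consecutive frames using a
    nearest-neighbour search with a ratio test: a match is accepted only
    when the best candidate is clearly closer than the second-best one,
    which rejects ambiguous correspondences."""
    desc_a = np.asarray(desc_a, float)
    desc_b = np.asarray(desc_b, float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous match only
            matches.append((i, int(best)))
    return matches
```

The accepted matches between consecutive images are what allow the relative camera pose to be estimated, as described above.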
[0045] Fig. 2 shows an example of a first, or preliminary, 3D reconstruction of a rectangular pipe using a point cloud (shown as a plurality of raw points with varying density) and camera poses 15, as better seen in the circular zoom-in insert on the right of Fig. 2. An assumption is made that all images are captured by the same camera, so the camera pose may be extracted from the images by matching unique features between various raw images. Each point is shown in a varying shade of black, with a lighter shade indicating fewer features extracted from the raw images, typically because of blur or low light intensity. As can be observed, the point cloud in this example contains many outliers, empty zones, discontinuities, and considerable noise, all of which may negatively impact the final result. Artificial discontinuities, for example, may create an impression of a defect in a place where no physical defect is present. That may create a need for an unnecessary intervention and a costly unnecessary repair.
[0046] To improve on this preliminary 3D reconstruction, the present invention contemplates an additional step of fitting the preliminary 3D reconstruction to a virtual model of a pipe created from the known pipe geometry and shape. In embodiments, the point cloud may be separated into individual slices in order to accommodate pipe bends and other complex geometry. The fitting procedure may be carried out by tuning the poses of the camera and the fitted pipe slices, for example with the goal of minimizing the average Euclidean distance between the fitted pipe and the corresponding preliminary point cloud slice. Using a virtual pipe model, the point cloud slices may be adjusted so as to fit the virtual pipe as a whole, as seen in Fig. 3, representing a second pipe point cloud. This step may be instrumental in reducing the noise of the images and compensating for empty or dark regions of the pipe, as can be appreciated from the description below.
[0047] Using a 3D textured model built as a result of implementing the steps described above, a step of image unwrapping may then be conducted. Image unwrapping is a term generally describing a projection of points in world coordinates onto a 2D Cartesian coordinate system. In this instance, a projection of the 3D textured model onto a 2D surface is conducted in order to create a final result of the inspection to be presented to the observer of the test.
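The slice-fitting described in [0046] can be sketched for a round pipe as fitting a circle of known radius to each 2D point-cloud slice. The sketch below is a minimal Gauss-Newton fit that minimizes the residuals |p_i - c| - radius, which is one way to realize the goal of minimizing the average Euclidean distance between the virtual pipe and a slice; the function name and solver choice are illustrative, not prescribed by the disclosure:

```python
import numpy as np

def fit_slice_center(points_xy, radius, iters=100, tol=1e-10):
    """Fit a circle of known radius to one 2-D slice of the point cloud.
    Gauss-Newton iteration on the residuals r_i = |p_i - c| - radius,
    starting from the slice centroid as the initial centre estimate."""
    c = points_xy.mean(axis=0)                # initial guess: centroid
    for _ in range(iters):
        diff = points_xy - c
        dist = np.linalg.norm(diff, axis=1)
        resid = dist - radius                 # signed distance to the circle
        J = -diff / dist[:, None]             # Jacobian d(resid)/dc
        step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        c = c + step
        if np.linalg.norm(step) < tol:        # converged
            break
    return c
```

Repeating this per slice, and comparing consecutive fitted centres, gives the adjusted pipe axis against which the point cloud slices are aligned.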
[0048] One advantageous technique for unwrapping inspection images for the purposes of the present invention is depth-image-based rendering with ray tracing. The unwrapping step may be initiated by creating an unwrapped image with multiple rays, as seen in Fig. 4. Shown in Fig. 4 is an example of a round pipe with a camera positioned along its central axis (after adjusting for camera poses as explained above, based on the point cloud and the initial camera pose) and a plurality of rays oriented across the pipe and projected in front of the camera. These rays are projected onto a virtual model of the round pipe, as seen in Fig. 5. The correlated points of the rays may then be projected onto the raw 2D images (see, for example, the image shown in Fig. 6) to create an image with ray projections, as seen in Fig. 7.
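For a camera idealized as sitting on the axis of a round pipe, the ray-traced unwrapping above can be sketched as follows: each output pixel (theta, z) corresponds to a point on the wall of the virtual cylinder, and the ray to that point is projected through a pinhole camera model to find where to sample the raw image. This is a simplified, nearest-neighbour sketch under those assumptions, not the full depth-image-based rendering step:

```python
import numpy as np

def unwrap_cylinder(raw, K, radius, z_range, out_w=256, out_h=64):
    """Unwrap the interior of a cylinder seen by a forward-looking pinhole
    camera (intrinsics K) at the origin, looking along +z. Each output
    pixel (theta, z) is a wall point of the virtual cylinder, projected
    back into the raw image and sampled there."""
    h, w = raw.shape[:2]
    thetas = np.linspace(-np.pi, np.pi, out_w, endpoint=False)
    out = np.zeros((out_h, out_w), dtype=raw.dtype)
    for r, z in enumerate(np.linspace(z_range[0], z_range[1], out_h)):
        # 3-D wall points of one ring of the cylinder at depth z
        pts = np.stack([radius * np.cos(thetas),
                        radius * np.sin(thetas),
                        np.full(out_w, z)], axis=1)
        uv = (K @ (pts / pts[:, 2:3]).T).T        # pinhole projection
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
        out[r] = raw[v, u]                        # nearest-neighbour sample
    return out
```

A production implementation would interpolate rather than round, and would use the fitted (not idealized) camera pose and pipe geometry.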
[0049] A further advantageous step of the present invention includes lighting adjustment of the raw images, which can be carried out during the step of image unwrapping. Low light intensity and the nature of a small-size video camera may result in at least some portion of the acquired images containing blurry areas and distortions. Mixing these “featureless” parts with other, sharper and more detailed parts of the image may reduce the quality and sharpness of the final result. The methods of the present invention use a weight factor assigned to at least some or even every pixel on each image in order to reduce the contribution of unfocused pixels and increase the contribution of pixels with high focus and sharpness.
[0050] In one example seen in Fig. 8, the raw image has points with maximum contribution close to the center of the image, while peripheral parts of the image have lower sharpness and therefore are assigned lower weight factors. A linear increase of sharpness from the periphery towards the center of the image may be assumed, and therefore a corresponding linear increase in weight factors may be implemented before unwrapping the image. Fig. 9 shows an example of using weight factors in unwrapping the image of Fig. 8. A lower contribution is represented by higher transparency, up to a point of abandoning some of the image parts at the lower edge of the image. Abandoned parts of the image may be compensated for by sharper images of the same portion of the pipe obtained from other raw images, so that blending of focused and unfocused pixels of the same portion of the image is avoided - resulting in increased focus of the final result.
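The linear centre-to-periphery weighting and the weighted blending of overlapping unwrapped images can be sketched as below. The linear radial model follows the assumption stated above; the function names are illustrative:

```python
import numpy as np

def radial_weight_map(h, w):
    """Per-pixel weight rising linearly from the image periphery to its
    centre, encoding the assumption that sharpness peaks near the optical
    centre of the raw image: 1.0 at the centre, 0.0 at the farthest corner."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d = np.hypot(ys - cy, xs - cx)
    return 1.0 - d / d.max()

def blend(images, weights, eps=1e-8):
    """Weighted per-pixel average of overlapping unwrapped images, so that
    low-weight (unfocused) pixels contribute little and sharper coverage
    of the same pipe portion dominates the result."""
    num = sum(w * im for w, im in zip(weights, images))
    den = sum(weights)
    return num / (den + eps)     # eps guards pixels no image covers
```

Zero-weight (fully transparent) pixels are effectively abandoned and replaced by sharper coverage from other images, as described above.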
[0051] Another example of such image adjustment is seen in Figs. 10 and 11. Fig. 10 shows the raw image of the pipe interior, while Fig. 11 shows an adjusted unwrapped image of the same, created using Fig. 10 as a starting point.
[0052] Image stitching to create a panoramic combined image of the interior surface of the pipe may be the next step in the method of the invention. Traditionally, manually conducted image stitching is a time-consuming process when it comes to consolidating a large number of images, which can easily run into hundreds of images or more. In addition, variable lighting conditions may occur when different adjacent images are taken, due to movement of the video camera, light source repositioning, blind spots in illumination, and other reasons. Variability in lighting may cause light artifacts in the final panoramic image, as can be seen in Fig. 12, for example. Abrupt transitions between well-lit and poorly lit areas of the pipe may create an erroneous impression of the presence of a defect that is not actually there.
[0053] The present invention addresses this problem by deploying a step of adjusting image brightness for at least some of the images of the pipe, as illustrated in Figs. 13 and 14. Fig. 13 shows an example of a raw unwrapped image. The brightness of the image is a function of lighting, which generally diminishes from the center of the video camera (which is located generally at the same spot as the light source) towards the periphery of the image. Each image may be processed based on this or another suitable light distribution model. In one example, the video camera pose is estimated on the image and a gradual decrease of brightness away from that point on the image is used to assign respective weight factors to at least some or all pixels of the image. A lower weight factor results in a more transparent result in the weighted image - as seen in Fig. 14 around the edges of the image.
[0054] Processing of at least some or all of the images for adjustment of brightness before image stitching results in a smoother final panoramic picture, thereby avoiding lighting artifacts and reducing the likelihood of identifying a false defect in the pipe - as seen from the examples described below.
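The brightness-based weighting in [0053] can be sketched as a per-pixel alpha (transparency) derived from a radial light-distribution model centred on the projection of the camera/light-source pose onto the image. The exponential falloff below is an illustrative model choice - the disclosure only requires a gradual decrease away from that point:

```python
import numpy as np

def lighting_alpha(h, w, light_uv, falloff=3.0):
    """Per-pixel alpha from a simple radial light-distribution model:
    confidence decays with distance from `light_uv`, the (u, v) projection
    of the camera/light-source position onto the image, so poorly lit
    peripheral pixels become more transparent before stitching."""
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(ys - light_uv[1], xs - light_uv[0])
    return np.exp(-falloff * d / max(h, w))   # 1.0 at the light centre
```

These alphas can then be combined with the sharpness weights during blending, so abrupt bright/dark transitions between adjacent images are smoothed out rather than preserved as false defects.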
[0055] Fig. 15 shows estimated poses of the video camera extracted from the point cloud during video inspection of a round pipe along a U-shaped curve of the pipe. Successive points indicate individual camera poses when the images of the interior surface of the pipe were taken. To stitch these images, each Nth point may be assumed to represent the initial coordinates (0, 0, 0). The entire point cloud may then be rotated using the rotation of the nearest fitted pipe section. The distance between two successive images N and N+1 may then be assumed to be the distance between these two points along the path of Fig. 15. Successive images of the pipe interior may then be automatically processed between the corresponding poses and positions of the video camera.
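The pose-based placement of unwrapped strips described above can be sketched as cumulative path distances between successive camera positions, converted to pixel offsets. The function name and the pixels-per-unit scale are illustrative assumptions:

```python
import numpy as np

def strip_offsets(cam_positions, px_per_unit):
    """Axial placement of unwrapped strips from camera poses: the offset of
    image N+1 relative to image N is taken to be the path distance between
    the two pose positions, accumulated along the camera trajectory."""
    pos = np.asarray(cam_positions, float)
    seg = np.linalg.norm(np.diff(pos, axis=0), axis=1)   # N-to-N+1 distances
    dist = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative path length
    return np.round(dist * px_per_unit).astype(int)
```

With these offsets, successive unwrapped images can be stitched automatically without manual alignment.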
[0056] Finally, Figs. 16a through 16d show a comparison of various methods of image stitching of the prior art (Figs. 16a, 16b, 16c) against that obtained using the method of the present invention shown in Fig. 16d. A clean round copper pipe without any internal surface defects was used to obtain all of these images. Fig. 16a shows a panoramic image obtained from a plurality of individual images of the pipe interior with reconstruction errors on the mesh which appear as defects, highlighted within the three circles and pointed to by two arrows. These are artifacts of image processing and are not indicative of actual pipe defects.
[0057] Fig. 16b shows a panoramic image of the same pipe obtained by manual stitching of the images with variable brightness, indicating light artifacts as seen in the three circles.
[0058] Fig. 16c is a panoramic image of the same pipe obtained using SfM without weight adjustment of the images, thus producing defect-like artifacts seen within the two circles on the right.
[0059] Fig. 16d shows a panoramic combined image of the pipe obtained using the present invention with all the weighting and light adjustments as described above. This image has the most uniform lighting, best sharpness, and least artifacts as compared to all of the previous three images.
[0060] It is contemplated that any embodiment discussed in this specification can be implemented with respect to any method of the invention, and vice versa. It will be also understood that particular embodiments described herein are shown by way of illustration and not as limitations of the invention. The principal features of this invention can be employed in various embodiments without departing from the scope of the invention. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, numerous equivalents to the specific procedures described herein. Such equivalents are considered to be within the scope of this invention and are covered by the claims.
[0061] All publications and patent applications mentioned in the specification are indicative of the level of skill of those skilled in the art to which this invention pertains. All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference. Incorporation by reference is limited such that no subject matter is incorporated that is contrary to the explicit disclosure herein, no claims included in the documents are incorporated by reference herein, and any definitions provided in the documents are not incorporated by reference herein unless expressly included herein.
[0062] The use of the word “a” or “an” when used in conjunction with the term “comprising” in the claims and/or the specification may mean “one,” but it is also consistent with the meaning of “one or more,” “at least one,” and “one or more than one.” The use of the term “or” in the claims is used to mean “and/or” unless explicitly indicated to refer to alternatives only or the alternatives are mutually exclusive, although the disclosure supports a definition that refers to only alternatives and “and/or.” Throughout this application, the term “about” is used to indicate that a value includes the inherent variation of error for the device, the method being employed to determine the value, or the variation that exists among the study subjects.
[0063] As used in this specification and claim(s), the words “comprising” (and any form of comprising, such as “comprise” and “comprises”), “having” (and any form of having, such as “have” and “has”), “including” (and any form of including, such as “includes” and “include”) or “containing” (and any form of containing, such as “contains” and “contain”) are inclusive or open-ended and do not exclude additional, unrecited elements or method steps. In embodiments of any of the compositions and methods provided herein, “comprising” may be replaced with “consisting essentially of” or “consisting of”. As used herein, the phrase “consisting essentially of” requires the specified integer(s) or steps as well as those that do not materially affect the character or function of the claimed invention. As used herein, the term “consisting” is used to indicate the presence of the recited integer (e.g., a feature, an element, a characteristic, a property, a method/process step or a limitation) or group of integers (e.g., feature(s), element(s), characteristic(s), property(ies), method/process steps or limitation(s)) only.
[0064] The term “or combinations thereof” as used herein refers to all permutations and combinations of the listed items preceding the term. For example, “A, B, C, or combinations thereof” is intended to include at least one of: A, B, C, AB, AC, BC, or ABC, and if order is important in a particular context, also BA, CA, CB, CBA, BCA, ACB, BAC, or CAB. Continuing with this example, expressly included are combinations that contain repeats of one or more item or term, such as BB, AAA, AB, BBC, AAABCCCC, CBBAAA, CABABB, and so forth. The skilled artisan will understand that typically there is no limit on the number of items or terms in any combination, unless otherwise apparent from the context.
[0065] As used herein, words of approximation such as, without limitation, “about”, “substantial” or “substantially” refer to a condition that, when so modified, is understood to not necessarily be absolute or perfect but would be considered close enough by those of ordinary skill in the art to warrant designating the condition as being present. The extent to which the description may vary will depend on how great a change can be instituted and still have one of ordinary skill in the art recognize the modified feature as still having the required characteristics and capabilities of the unmodified feature. In general, but subject to the preceding discussion, a numerical value herein that is modified by a word of approximation such as “about” may vary from the stated value by at least ±1, 2, 3, 4, 5, 6, 7, 10, 12, 15, 20 or 25%.
[0066] All of the devices and/or methods disclosed and claimed herein can be made and executed without undue experimentation in light of the present disclosure. While the devices and methods of this invention have been described in terms of preferred embodiments, it will be apparent to those of skill in the art that variations may be applied to the devices and/or methods and in the steps or in the sequence of steps of the method described herein without departing from the concept, spirit and scope of the invention. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope and concept of the invention as defined by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A video inspection method for an interior surface of a pipe, the method comprising the steps of:
a. defining geometrical size and shape of the interior surface of the pipe,
b. providing a videoscope sized to fit inside the pipe,
c. advancing the videoscope through the pipe while acquiring raw images from a video camera of the videoscope,
wherein the improvement is characterized by performing the steps of:
d. estimating the video camera pose for at least some of the raw images of step (c),
e. sequentially building a pipe point cloud, a mesh, and a 3D textured model of the interior surface of the pipe from the raw images with adjustments for video camera poses,
f. unwrapping the raw images using corresponding video camera poses and the pipe point cloud to create unwrapped images of the interior surface of the pipe, and
g. creating a panoramic image of the interior surface of the pipe by stitching the unwrapped images together.
2. The method as in claim 1, wherein the videoscope is equipped with only the video camera and illuminating lights, and does not include any camera centering hardware, laser projection hardware, or any other sensor.
3. The method as in claim 1, wherein the pipe is a small bore pipe with an internal diameter or an internal size characterizing a cross-section of the pipe being 25 mm or smaller.
4. The method as in claim 1, wherein in step (d) the estimating of video camera poses is done by extracting and matching unique features from consecutive raw images.
5. The method as in claim 4, wherein said step of extracting and matching unique features is performed using a Structure-from-Motion (SfM) technique.
6. The method as in claim 1, wherein the step of building the pipe point cloud comprises building a first point cloud from the raw images, followed by building a second pipe point cloud to replace the first pipe point cloud using characteristics of the pipe extracted from the first point cloud and known pipe dimensions, thereby reducing defects and discontinuities of the 3D textured model caused by insufficient lighting during image acquisition in step (c).
7. The method as in claim 6, wherein the step of building the pipe point cloud further comprises a step of creating a virtual pipe using known pipe dimensions followed by a step of fitting pipe poses onto the virtual pipe to generate the second pipe point cloud.
8. The method of claim 7, wherein the step of fitting pipe poses onto the virtual pipe is conducted by minimizing an average of Euclidean distances between the virtual pipe and the first pipe point cloud.
9. The method as in claim 1, wherein the step of unwrapping the raw images is performed using Depth-Image-Based-Rendering (DIBR) and ray tracing techniques.
10. The method as in claim 9, wherein the step of unwrapping the raw images further comprises a step of creating an unwrapped image with multiple rays projected onto the virtual pipe in front of the video camera based on a pose thereof and using the second pipe point cloud.
11. The method as in claim 10, wherein the step (g) of creating the panoramic image further comprises a step of assigning a weight factor to at least some of the pixels on at least some of the raw images, thereby creating unwrapped and weighted images and increasing the sharpness of the panoramic image.
12. The method as in claim 11, wherein the step of assigning a weight factor is conducted by gradually increasing the weight factor for each pixel from the periphery of the image towards a center thereof.
13. The method as in claim 12, wherein said step of stitching of unwrapped images is conducted with averaging of the pixels from related images with their respective calculated weight factors.
14. The method as in claim 1 further comprising a step of adjusting image brightness following step (e), wherein the adjusting of image brightness is based on a light distribution model and the video camera pose.
15. The method as in claim 14, wherein the light distribution model is that of a gradual decrease of lighting from a center of the video camera pose towards a periphery of the raw image.
PCT/IB2021/000088 2021-02-15 2021-02-15 Inspection methods for small bore pipes WO2022172048A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2021/000088 WO2022172048A1 (en) 2021-02-15 2021-02-15 Inspection methods for small bore pipes


Publications (1)

Publication Number Publication Date
WO2022172048A1 true WO2022172048A1 (en) 2022-08-18

Family

ID=75267528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/000088 WO2022172048A1 (en) 2021-02-15 2021-02-15 Inspection methods for small bore pipes

Country Status (1)

Country Link
WO (1) WO2022172048A1 (en)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOHANNES KÜNZEL ET AL: "Automatic Analysis of Sewer Pipes Based on Unrolled Monocular Fisheye Images", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 11 December 2019 (2019-12-11), XP081549604, DOI: 10.1109/WACV.2018.00223 *
KAGAMI SHO ET AL: "3D Pipe Network Reconstruction Based on Structure from Motion with Incremental Conic Shape Detection and Cylindrical Constraint", 2020 IEEE 29TH INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), IEEE, 17 June 2020 (2020-06-17), pages 1345 - 1352, XP033800793, DOI: 10.1109/ISIE45063.2020.9152377 *
YOSHIMOTO KAYO ET AL: "Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope", PROGRESS IN BIOMEDICAL OPTICS AND IMAGING, SPIE - INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, BELLINGHAM, WA, US, vol. 10054, 14 February 2017 (2017-02-14), pages 1005412 - 1005412, XP060083541, ISSN: 1605-7422, ISBN: 978-1-5106-0027-0, DOI: 10.1117/12.2250774 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132828A (en) * 2023-08-30 2023-11-28 常州润来科技有限公司 Automatic classification method and system for solid waste in copper pipe machining process
CN117132828B (en) * 2023-08-30 2024-03-19 常州润来科技有限公司 Automatic classification method and system for solid waste in copper pipe machining process

Similar Documents

Publication Publication Date Title
US11386542B2 (en) Training data creation method and device, and defect inspection method and device
JP7009491B2 (en) Methods and devices for asset inspection
US9292915B2 (en) Digital optical comparator
JP4078409B2 (en) Multi-source data processing method and multi-source data processing apparatus
JP2017037075A (en) Borehole inspection device
TW201011284A (en) Device for inspecting resin material and recording medium
US8508591B2 (en) System and method for estimating the height of an object using tomosynthesis-like techniques
JP4331541B2 (en) Endoscope device
JP6114539B2 (en) Method and system for processing an image for inspection of an object
JP2007333639A (en) Inspection system
WO2022172048A1 (en) Inspection methods for small bore pipes
US20220084178A1 (en) Method and device for inspecting hard-to-reach components
JP4279833B2 (en) Appearance inspection method and appearance inspection apparatus
US9953409B2 (en) Tire inspection method and device therefor
US20230386011A1 (en) Inspection methods for small bore pipes
CN117058106A (en) Method for measuring flatness and surface defects of flexible glass based on random forest
CN111105413B (en) Intelligent spark plug appearance defect detection system
JP2001174227A (en) Method and device for measuring diameter distribution of fiber
Moccia et al. Automatic workflow for narrow-band laryngeal video stitching
US20220130032A1 (en) Automated turbine blade to shroud gap measurement
CN113870197A (en) Gear crack detection method based on wavelet multilayer decomposition
WO2023224041A1 (en) Inspection device and learning method
CN110940679A (en) Electronic endoscope crack detection system and method based on FPGA
JP2012122964A (en) Method of detecting surface defect
WO2023089846A1 (en) Inspection device and inspection method, and program for use in same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21714942

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.12.2023)