CN110766669A - Pipeline measuring method based on multi-view vision - Google Patents

Pipeline measuring method based on multi-view vision

Info

Publication number
CN110766669A
Authority
CN
China
Prior art keywords
pipeline
camera
point
central line
points
Prior art date
Legal status
Granted
Application number
CN201910993518.2A
Other languages
Chinese (zh)
Other versions
CN110766669B (en)
Inventor
王健
李明洲
方定君
吕琦
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910993518.2A priority Critical patent/CN110766669B/en
Publication of CN110766669A publication Critical patent/CN110766669A/en
Application granted granted Critical
Publication of CN110766669B publication Critical patent/CN110766669B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004 Industrial image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G01B 11/08 Measuring arrangements characterised by the use of optical techniques for measuring diameters
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G06T 7/13 Edge detection (image analysis; segmentation)
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/30108 Industrial image inspection (indexing scheme for image analysis or image enhancement)
    • G06T 2207/30164 Workpiece; machine component (indexing scheme for image analysis or image enhancement)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a pipeline measuring method based on multi-view vision, which consists of four parts: image acquisition and preprocessing, centerline and outer diameter extraction, camera calibration and camera matrix conversion, and pipeline centerline matching with outer diameter expansion. The cameras are mounted on a purpose-built measuring frame, and image acquisition and preprocessing yield gray-scale pictures of the pipeline. The pipeline centerline is obtained with a distance-transform and minimum-path method, its feature points are extracted, and the feature points are discretized. The corresponding binocular cameras (a front camera and a rear camera) are calibrated, and the transformation matrix from the rear camera coordinate system to the front camera coordinate system is obtained. The three-dimensional information of the pipeline centerline is then recovered by a dynamic matching method from the acquired centerline feature points and the camera calibration parameters, and the outer diameter information obtained from the pictures taken at the different viewing angles is expanded onto the three-dimensional centerline, finally giving the spatial position of the pipeline centerline and the corresponding outer diameter information.

Description

Pipeline measuring method based on multi-view vision
Technical Field
The invention relates to the technical field of vision measurement, in particular to a pipeline measurement method based on multi-view vision.
Background
Pipeline systems are widely used in the nuclear, aviation, aerospace, shipbuilding and automotive industries, where they transport and meter media such as gases and liquids. They are an important component of high-end electromechanical products, and the precision machining of pipeline parts has an important influence on product performance and safety. High-precision pipe machining is therefore a topical issue.
The spatial geometry of a machined pipeline is mainly measured by profiling (template) methods, coordinate measuring machines, laser-CCD-based measuring methods and the like. The profiling method can only roughly check the shape of the pipeline and cannot accurately measure the positions of the pipeline end points or the distance between them. Coordinate measuring machines achieve high accuracy, but they impose strict requirements on the measuring environment and have a limited measuring range. Laser-CCD-based methods require the operator to move the measuring optical fork carefully when measuring an end point, which makes the operation difficult.
In academia, the thesis "Research on multi-view vision-based pipeline digital measurement methods" by Zhang Tian proposes a pipeline reconstruction technique based on centerline matching and machine learning: a machine vision algorithm computes the coordinates of the pipeline control points, the pipeline centerline is fitted from those control point coordinates, and a three-dimensional model of the pipeline is then reconstructed.
From the above analysis, the prior art in the field of pipeline measurement has the following disadvantages: traditional methods for measuring the spatial geometry of a pipeline are either not accurate enough, impose strict requirements on the measuring environment, need expensive instruments, or rely on manual interaction, so the cost is high and the efficiency is low. The present invention differs from the method proposed by Zhang Tian as follows: for centerline extraction, the Zhang Tian thesis uses an imaging method that cannot guarantee that an accurate centerline is extracted, whereas the invention provides a more efficient distance-transform and minimum-path method; for centerline matching, the Zhang Tian thesis adopts NURBS curve fitting, whereas the invention provides a more concise direct calculation method based on dynamic matching; in addition, the invention provides an edge detection method based on centroid transformation to calculate the pipeline outer diameter.
Disclosure of Invention
Purpose of the invention: to overcome the defects of the prior art, the invention provides a pipeline measuring method and system based on multi-view vision, intended to solve the complex workflow, high cost and low precision of current pipeline measuring technology. Using a pipeline reconstruction technique based on centerline matching, the method identifies the pipeline path and outer diameter with high precision, and a complete 3D model of the pipeline can be obtained in combination with modeling software.
Technical scheme: to achieve the above purpose, the invention adopts the following technical scheme:
1. A pipeline measuring method based on multi-view vision, comprising the following steps:
(1) image acquisition and pre-processing
Step 1: set up an image acquisition system and capture pictures with binocular cameras placed on the front and rear sides of the pipeline to obtain gray-scale images of the pipeline; binarize the gray-scale images to separate the background from the pipeline; extract the target region of interest (ROI) from the binary image with a contour comparison technique to obtain the pipeline contour information;
(2) centerline and outside diameter extraction
Step 2.1: from the pipeline contour information, compute the Euclidean distance from each point inside the pipeline to the pipeline contour by Euclidean distance transformation, obtaining a distance transform map of the pipeline that records the Euclidean distance from every point inside the pipeline contour to the pipeline boundary;
Step 2.2: locate the pipeline boundary by image processing and obtain the pixel coordinates of the end points on both sides of the pipeline; then, taking the end point on one side of the centerline as the start and the end point on the other side as the goal, apply Dijkstra's minimum-path algorithm so that the minimum path between the two points is the pipeline centerline;
Step 2.3: reduce the number of feature values with a local feature-point discretization method: regarding the centerline as a function, screen its stationary points to obtain the centerline feature point set, the resulting feature points being a subset of the stationary points of that function;
Step 2.4: obtain and store the outer diameter corresponding to each feature point of the centerline feature point set with an edge detection method based on the shape centroid;
(3) camera calibration and multi-camera matrix conversion
Step 3.1: obtain, through camera calibration, the extrinsic parameters (a rotation matrix and a translation matrix) and the intrinsic parameters of each camera, and from these obtain the matrix that converts pipeline coordinates in the world coordinate system to camera window pixel coordinates;
Step 3.2: convert between camera coordinate systems from the extrinsic information: calibrate the rear camera and the front camera simultaneously with the same calibration board, and solve for the rotation matrix and the translation matrix that convert the rear camera coordinate system into the front camera coordinate system;
(4) pipeline centerline matching and outside diameter expansion
Step 4.1: apply viewport transformation recovery, perspective projection transformation recovery, view transformation recovery and model transformation recovery to the centerline feature points obtained from the front and rear cameras, yielding two families of rays, one from the front camera center to its imaging plane and one from the rear camera center to its imaging plane;
Step 4.2: using the rotation matrix and translation matrix obtained in step 3.2, convert the family of rays from the rear camera center to the rear camera imaging plane into rays expressed in the front camera coordinate system;
Step 4.3: with a dynamic matching method, start matching from the centerline end point on one side obtained in step 2.2 and match point by point until the end point on the other side is reached, obtaining the spatial position of the pipeline centerline represented by a set of polyline segments;
Step 4.4: expand the outer diameter of the centerline feature points under each view to obtain the outer diameter information of the measured pipeline in all directions; with a least-squares curve fitting method, regard the outer diameter of a feature point as the function value and its distance from an end point as the independent variable, fit a polynomial curve to this series of points, and select the fitted curve by the principle of the minimum sum of squared deviations, the resulting function giving the outer diameter length at different distances from the end point;
and reconstruct the measured pipeline from the spatial coordinates of its centerline in the front camera coordinate system and the outer diameter information.
Further, the shape-centroid-based edge detection method in step 2.4 is as follows:
the one-dimensional mathematical model of the image information, without considering the noise, is simply expressed as:
f(x)=u(x)*g(x)
wherein u(x) is the original ideal signal, f(x) is the one-dimensional image information, and g(x) is the point spread function, typically approximated by a Gaussian function:
g(x) = (1 / (√(2π)·σ)) · exp(-x² / (2σ²))
Setting u(x) in turn to an ideal step edge, an ideal pulse edge and an ideal roof edge, the point spread function smooths the sharp edges into blurred edges; a symmetry analysis of the derivative then yields the formula for the edge position:
x0 = ∫ x·|f'(x)| dx / ∫ |f'(x)| dx
To discretize the formula, let f_i be the sampled value of f(x) at x_i; the derivative at x_i is replaced by the mean of the forward and backward differences:
f'_i = (f_(i+1) - f_(i-1)) / 2
the formula may be changed to:
x0 = Σ_i x_i·|f'_i| / Σ_i |f'_i|
Compute the difference matrices by convolving the pipeline gray-scale picture with a row difference template and a column difference template, and record the resulting matrices D1 and D2; select a difference threshold T from the statistical characteristics of the pipeline matrix; set every element of D1 and D2 smaller than T to 0, so that the non-zero continuous intervals of the two matrices are the edge transition intervals; compute the sub-pixel edge point value in each interval with the discretized formula and store it at the edge of the gray-scale image; then, for each point of the centerline feature point set, the outer diameter is the pixel count obtained from the distance transform plus the edge point value computed in the previous step.
Further, the dynamic matching strategy in step 4.3 is as follows:
Matching starts from the end point on one side, using the end point information obtained in the image processing stage, and the ray corresponding to that end point is recorded as the reference ray;
According to the binocular imaging principle, the reference ray and the next ray from the front camera center to the projection points on the front camera imaging plane (in the front camera coordinate system) form one plane, and the reference ray and the next ray from the rear camera center to the projection points on the rear camera imaging plane (in the same coordinate system) form another plane; the intersection line of the two planes is computed. The intersection line and the four rays form four intersection points: the intersection points of the two reference rays recorded before are taken as reference points, and the first point away from them is the centerline feature point of the measured pipeline obtained by this match. That point serves as the reference point for the next match, in which the next ray of each ray family is selected. Repeating the matching yields the spatial feature points of the centerline of the measured pipeline, and connecting them gives the centerline.
Beneficial effects: for the finish machining and measurement of widely used pipeline parts, the method provides a pipeline centerline extraction method based on distance transformation and minimum-path search, and a pipeline reconstruction and measurement method based on centerline matching. A sub-pixel detection method based on centroid transformation is also provided for measuring the pipeline outer diameter. The method effectively addresses the complex process, high cost, low precision and difficult model reconstruction of current pipeline measurement applications; it yields the family of centerline polyline segments and the corresponding outer diameter information, which together express all the information of the pipeline.
Drawings
FIG. 1 is a schematic diagram of the structural relationships of the system of the present invention;
FIG. 2 is a schematic diagram of the system framework of the present invention;
FIG. 3 is a schematic diagram illustrating the conversion process from the pipeline under test to the image obtained by shooting;
FIG. 4 is a schematic perspective projection diagram;
FIG. 5 is a schematic view of the binocular imaging principle of the present invention;
FIG. 6 is a flow chart of the dynamic matching strategy of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The invention provides a pipeline measuring method based on multi-view vision, which mainly comprises (1) image acquisition and preprocessing; (2) extracting a central line and an outer diameter; (3) camera calibration and camera matrix conversion; (4) pipeline centerline matching and outer diameter expansion.
A specific example is given below
(1) Image acquisition and pre-processing
First, an image acquisition system is built: a steel stand about 1.5 m long serves as the frame, and a light-emitting panel is laid at the bottom of the frame to eliminate the shadows cast by overhead light sources and to simplify handling of the pipeline background during image processing. If a desktop were used as the stage, its texture would cause considerable trouble, whereas the strong light of the light-emitting panel makes the background of the pipeline pictures close to white.
According to the precision analysis of a three-dimensional reconstruction simulation with ideal polyline segments, the image acquisition module needs cameras of at least 5 megapixels. The cameras are mounted in the system frame as shown in FIGS. 1-2, one at the front and one at the rear, forming a binocular pair, and they photograph the pipeline to be measured lying on the light-emitting panel. Pictures are captured with Python and OpenCV. Since the subsequent image processing does not need color information, the camera parameters are adjusted so that the captured pictures are as close to gray-scale images as possible.
The captured pipeline image is converted to a gray-scale image and binarized with a suitable threshold to distinguish the background, the pipeline and other boundaries. The ROI is then extracted from the binarized image with the contour comparison technique: because the cameras are aimed at the pipeline, the largest contour in each picture is the pipeline contour. All contours contained in the picture are extracted with OpenCV, small contours are discarded, and the largest contour is kept, completing the ROI extraction, as sketched below.
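The following is a minimal preprocessing sketch in Python with OpenCV; the file name, the use of Otsu thresholding and the OpenCV 4 findContours signature are assumptions of this sketch rather than details fixed by the embodiment.

```python
import cv2
import numpy as np

img = cv2.imread("pipe_front.png")                 # picture captured by the front camera (assumed name)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # gray-scale conversion
# The back-lit panel makes the background nearly white, so an inverted Otsu threshold
# leaves the pipeline as the foreground (non-zero) region.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Contour comparison: extract all contours, discard the small ones, keep the largest.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pipe_contour = max(contours, key=cv2.contourArea)

roi_mask = np.zeros_like(binary)                   # filled pipeline region = ROI
cv2.drawContours(roi_mask, [pipe_contour], -1, 255, thickness=cv2.FILLED)
```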
(2) Centerline and outside diameter extraction
The distance transform computes the shortest distance from each point inside an object to its boundary, and the centerline of a pipe is the set of interior points farthest from the pipe boundary. Distance transforms come in Euclidean and non-Euclidean variants: the non-Euclidean variants are cheap and fast and give similar results, while the Euclidean variant is more expensive but exact. The Euclidean distance transform yields a scalar field recording the Euclidean distance from each point inside the pipeline contour to the pipeline boundary.
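As an illustration, the Euclidean distance transform of the filled pipeline region can be computed with OpenCV (a sketch continuing from the roi_mask above; the mask size of 5 is an assumption):

```python
import cv2

# dist[y, x] is the Euclidean distance, in pixels, from the interior pixel (x, y) to the
# nearest boundary pixel; the centerline runs along the ridge of maxima of this field.
dist = cv2.distanceTransform(roi_mask, distanceType=cv2.DIST_L2, maskSize=5)
```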
Since the pipeline centerline is the set of interior points farthest from the boundary, once the Euclidean distance from every interior point to the boundary is known, centerline extraction becomes a minimum-path problem from the end point on one side of the pipeline to the end point on the other side. The end points are found graphically: the pipeline is thinned repeatedly, identifying and removing the outermost contour until only a single-pixel curve remains, and the end points of that curve are the end points of the two sides of the pipeline. Note that the single-pixel curve deviates somewhat from the pipeline centerline, because the thinning works inward from the contour, leaving an error of one to two pixels with respect to the true centerline.
This gives the pixel coordinates of the end points on both sides of the pipeline. The end point on one side is chosen as the start of the path. Note that, because the two cameras face each other, the captured pictures are mirrored; therefore, if the front camera takes one end point as the start of the path, the rear camera must take the other one. The end points can be distinguished by the size of the sum of the pixel's horizontal and vertical coordinates. The remaining end point is the goal of the path.
The pipeline centerline is then the minimum path between the start and the goal, and the invention adopts Dijkstra's minimum-path algorithm. The process is as follows: starting from the start point, find the adjacent node with the shortest path; select the next node, compare the costs of all paths reaching it, keep the minimum cost and update the path; repeat until the last node is reached; finally, read out the resulting path. A sketch over the distance-transform image is given below.
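A minimal sketch of the minimum-path search over the distance-transform image follows. The per-pixel cost (large near the boundary, small on the ridge) and the 8-connected neighbourhood are assumptions of this sketch; the end points `start` and `end` are taken from the thinning step described above.

```python
import heapq
import numpy as np

def centerline_dijkstra(dist, start, end):
    """Dijkstra minimum path from start to end over the distance-transform image `dist`."""
    h, w = dist.shape
    cost = dist.max() - dist + 1.0            # cheap where the boundary distance is large
    cost[dist == 0] = np.inf                  # never leave the pipeline region
    best = np.full((h, w), np.inf)
    prev = {}
    best[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > best[y, x]:
            continue                          # stale queue entry
        for dy in (-1, 0, 1):                 # 8-connected neighbours
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < best[ny, nx]:
                        best[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], end                      # walk back from the goal to the start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]                         # centerline as a list of (row, col) pixels
```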
After the centerline is acquired, its feature points are discretized. Regarding the centerline as a function, the feature points are certain stationary points of that function, screened as follows: for a candidate feature point, the distance between the two end points of the curve segment is compared with the sum of the distances from the point to those two end points, and the point is kept or discarded according to whether the comparison reaches a set threshold. This yields the discretized centerline feature point set.
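A sketch of one possible screening rule is shown below. The tolerance value and the decision direction (keeping points that bend the curve, discarding nearly collinear ones) are assumptions of this sketch, not details fixed by the embodiment.

```python
import numpy as np

def discretize_centerline(points, tol=2.0):
    """Thin a dense centerline (list of (row, col) pixels) down to a sparse feature point set."""
    pts = [np.asarray(p, dtype=float) for p in points]
    kept = [pts[0]]
    last = pts[-1]
    for p in pts[1:-1]:
        chord = np.linalg.norm(last - kept[-1])                          # end-to-end distance
        via_p = np.linalg.norm(p - kept[-1]) + np.linalg.norm(last - p)  # path through the candidate
        if via_p - chord > tol:          # the candidate bends the curve noticeably: keep it
            kept.append(p)
    kept.append(last)
    return kept
```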
For the pipeline outer diameter, the invention adopts an edge detection method based on the shape centroid. A detailed analysis of the three basic edge types (step edges, pulse edges and roof edges) shows that the first derivative of each is symmetric about the edge point, and the absolute value of the first derivative is an even function about the edge point. Ignoring noise, the one-dimensional mathematical model is simply expressed as:
f(x)=u(x)*g(x)
wherein u(x) is the original ideal signal, f(x) is the one-dimensional image information, and g(x) is the point spread function, typically approximated by a Gaussian function:
g(x) = (1 / (√(2π)·σ)) · exp(-x² / (2σ²))
Setting u(x) in turn to an ideal step edge, an ideal pulse edge and an ideal roof edge, the point spread function smooths the sharp edges into blurred edges; a symmetry analysis of the derivative then yields the formula for the edge position:
x0 = ∫ x·|f'(x)| dx / ∫ |f'(x)| dx
To implement the algorithm on a computer, the formula must be discretized. Let f_i be the sampled value of f(x) at x_i; the derivative at x_i is replaced by the mean of the forward and backward differences, written as:
f'_i = (f_(i+1) - f_(i-1)) / 2
the formula may be changed to:
x0 = Σ_i x_i·|f'_i| / Σ_i |f'_i|
the specific process for implementing the algorithm is as follows: calculating a difference matrix, respectively convolving the difference matrix with the pipeline gray level picture by using a row difference template and a column difference template, and recording an obtained matrix D1,D2(ii) a Selecting a differential threshold value T according to the statistical characteristics of the pipeline matrix; computing and selecting a calculation interval to the matrix D1,D2The element in the two matrixes is less than T and is set to be 0, and the non-zero continuous interval of the two matrixes is an edge transition interval; calculating an edge source point value by using a discretization formula, and storing the edge source point value into the edge of the gray level image; according to the characteristic point set of the central line, the outer diameter is the pixel number obtained in the distance conversion plus the edge point value calculated in the previous step.
(3) Camera calibration and multi-camera matrix conversion
The two cameras are then calibrated. Camera calibration is one of the basic problems of computer vision and a prerequisite for recovering three-dimensional information from two-dimensional images. Calibration uses known information in the two-dimensional images taken by the camera to determine all unknown parameters of the camera imaging model, including the intrinsic parameters describing the camera's internal structure and the extrinsic parameters describing its spatial pose.
Camera calibration methods include ① calibration with a three-dimensional calibration object, ② calibration with a two-dimensional calibration object, ③ calibration with a one-dimensional calibration object, and ④ self-calibration. Calibration with a two-dimensional calibration object is accurate and easy to carry out, and has become the method most commonly used in practice.
The two-dimensional plane calibration method requires a calibration board, whose pattern and manufacturing quality affect the calibration accuracy; the main types are checkerboard boards and circular-dot boards, and the invention uses a checkerboard board. The checkerboard corners serve as calibration points. Pictures are captured with Python: each camera captures 15-20 pictures in which the checkerboard covers as large an area as possible and appears at a different angle in every picture. A calibration tool of an image processing package, such as the calibration tool built into Matlab, then performs the camera calibration and yields the rotation matrix, the translation matrix and the intrinsic parameter matrix of the camera.
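For reference, the same calibration can be sketched with OpenCV instead of the Matlab tool; the board size, square size and file pattern below are illustrative assumptions.

```python
import cv2
import glob
import numpy as np

pattern = (9, 6)                    # inner corners per checkerboard row and column (assumed)
square = 10.0                       # square edge length in millimetres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for name in glob.glob("calib_front_*.png"):                 # the 15-20 calibration pictures
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic parameter matrix; cv2.Rodrigues(rvecs[i]) and tvecs[i] give the
# rotation and translation (extrinsics) of the camera for the i-th calibration view.
rms, K, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```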
Matrix conversion between the cameras is performed from their extrinsic information. Both cameras are calibrated simultaneously with the same calibration board, so they share the same world coordinate system during calibration. For a point P on the calibration board (in the world coordinate system), the extrinsic matrix of the rear camera converts it to Pt in the rear camera coordinate system, and the extrinsic matrix of the front camera converts it to Pr in the front camera coordinate system. Pt and Pr are related by a rotation matrix R and a translation matrix T:
Pr = R·Pt + T
wherein the rotation matrix is
R = Rr·Rt⁻¹
with Rr the rotation matrix obtained by calibrating the front camera alone and Rt⁻¹ the inverse of the rotation matrix obtained by calibrating the rear camera alone, and the translation matrix is
T = Tr - R·Tt
with Tr the translation matrix obtained by calibrating the front camera alone and Tt the translation matrix obtained by calibrating the rear camera alone.
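A short numerical sketch of this conversion (NumPy, with Rr, Rt as 3x3 rotation matrices and Tr, Tt as 3-vectors obtained from the simultaneous calibration):

```python
import numpy as np

def rear_to_front_transform(Rr, Tr, Rt, Tt):
    """R, T such that a point Pt in the rear camera frame maps to Pr = R @ Pt + T in the front frame."""
    R = Rr @ np.linalg.inv(Rt)      # R = Rr * Rt^-1
    T = Tr - R @ Tt                 # T = Tr - R * Tt
    return R, T
```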
(4) Pipeline centerline matching and outside diameter expansion
Image acquisition, preprocessing, centerline extraction and discretization give the pixel positions of the feature points in each camera view; camera calibration gives the matrices for converting from the world coordinate system to the camera coordinate system and the perspective projection matrix; and the camera parameters give the mapping from a point on the imaging plane to a picture pixel. As shown in FIG. 3, every point on the measured pipeline undergoes this chain of transformations and becomes a pixel in the camera picture. The purpose of this module is to recover from those pixels, using the principle of stereo imaging, the positions of the points in the world coordinate system.
Each point of the feature point set of the pipeline picture taken by the rear camera undergoes an inverse view transformation: using the picture size and origin, the pixel is restored to a two-dimensional point on the camera imaging plane in the camera coordinate system. Because depth information is discarded by the perspective projection, as shown in FIG. 4, any point (for example, point M or point N) on the straight line through the camera center (eye) and the point P on the imaging plane projects to the same pixel as P. The original point therefore cannot be recovered by undoing the perspective projection; only the ray from the camera center to the point P on the imaging plane can be recorded, denoted Li and represented by the two points (eye, P) in the rear camera coordinate system.
In the same way, each feature point of the front camera yields a ray from the camera center to a point on its imaging plane (the point obtained by the inverse view transformation), already expressed in the front camera coordinate system. The rays obtained in the rear camera coordinate system in the previous step are converted into the front camera coordinate system with the inter-camera conversion relation obtained by the calibration module. Applying this to all centerline feature points from the pictures of both cameras gives two families of rays in the front camera coordinate system: the rays from the front camera center to the projection points on the front camera imaging plane, and the rays from the rear camera center to the projection points on the rear camera imaging plane, as sketched below.
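The recovery of a viewing ray and its transfer into the front camera frame can be sketched as follows (lens distortion is ignored for brevity; K_rear and the pixel coordinates are assumed inputs):

```python
import numpy as np

def pixel_to_ray(u, v, K):
    """Ray from the camera center through pixel (u, v), in that camera's own frame."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return np.zeros(3), d / np.linalg.norm(d)          # (origin = eye, unit direction)

def rear_ray_in_front_frame(u, v, K_rear, R, T):
    """Express a rear-camera ray in the front camera coordinate system using (R, T)."""
    origin, d = pixel_to_ray(u, v, K_rear)
    return R @ origin + T, R @ d
```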
By the binocular imaging principle, two rays from one camera define a plane, and the pipeline centerline lies on the intersection line of the two planes, as shown in FIG. 5; specifically, it lies between two of the four intersection points that the four rays generate on that line, and which two points is determined by the matching strategy. A dynamic matching method is used: matching starts from the end point on one side, using the end point information obtained in the image processing stage, and the ray corresponding to that end point is recorded as the reference ray. The reference ray and the next ray from the front camera center to the projection points on the front camera imaging plane (in the front camera coordinate system) form one plane, the reference ray and the next ray from the rear camera center to the projection points on the rear camera imaging plane (in the same coordinate system) form another plane, and the intersection line of the two planes is computed. The intersection line and the four rays form four intersection points; the intersection points of the two reference rays recorded before are taken as reference points, and the first point away from them is the centerline feature point of the measured pipeline obtained by this match. That point becomes the reference point for the next match, in which the next ray of each ray family is selected, as shown in FIG. 6. Repeating the matching yields the spatial feature points of the centerline of the measured pipeline, and connecting them gives the centerline. A geometric sketch of one matching step follows.
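One matching step can be sketched geometrically as below: the reference ray and the next ray of each camera span a plane, the two planes meet in a line, and the matched centerline point is read off where that line meets the "next" rays. Each ray is an (origin, unit direction) pair in the front camera frame; the helper functions and the simplified point-selection rule are assumptions of this sketch.

```python
import numpy as np

def ray_plane(o, d_ref, d_next):
    """Plane spanned by two rays through the same origin: returns (normal n, offset n.o)."""
    n = np.cross(d_ref, d_next)
    return n, float(n @ o)

def plane_intersection_line(n1, a1, n2, a2):
    """Direction and one point of the line where the two planes n.x = a meet."""
    d = np.cross(n1, n2)
    p0, *_ = np.linalg.lstsq(np.vstack([n1, n2]), np.array([a1, a2]), rcond=None)
    return p0, d / np.linalg.norm(d)

def line_ray_intersection(p0, d, o, r):
    """Point of the line (p0, d) nearest to the ray (o, r): the 'intersection point' used above."""
    st, *_ = np.linalg.lstsq(np.column_stack([d, -r]), o - p0, rcond=None)
    return p0 + st[0] * d

def match_step(front_ref, front_next, rear_ref, rear_next):
    """One dynamic-matching step; each argument is a ray (origin, unit direction)."""
    n1, a1 = ray_plane(front_ref[0], front_ref[1], front_next[1])
    n2, a2 = ray_plane(rear_ref[0], rear_ref[1], rear_next[1])
    p0, d = plane_intersection_line(n1, a1, n2, a2)
    ref_pt = line_ray_intersection(p0, d, *front_ref)        # reference intersection point
    cand_f = line_ray_intersection(p0, d, *front_next)       # intersections with the two
    cand_r = line_ray_intersection(p0, d, *rear_next)        # "next" rays
    # Take the first candidate away from the reference point as the matched feature point.
    return min((cand_f, cand_r), key=lambda p: float(np.linalg.norm(p - ref_pt)))
```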
The outer diameter of the centerline feature points is then expanded under each view to obtain the outer diameter information of the measured pipeline in all directions. A least-squares curve fitting method is used: the outer diameter of a feature point is taken as the function value, the distance from the feature point to an end point as the independent variable, and a polynomial curve is fitted to this series of points. The fitted curve is selected by the principle of the minimum sum of squared deviations, and the resulting function gives the outer diameter length at different distances from the end point, as sketched below.
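A minimal sketch of this fit with NumPy (the polynomial degree is an assumption of the sketch):

```python
import numpy as np

def fit_outer_diameter(distances, diameters, degree=3):
    """Least-squares polynomial fit of outer diameter against distance from the end point."""
    s = np.asarray(distances, dtype=float)
    r = np.asarray(diameters, dtype=float)
    coeffs = np.polyfit(s, r, degree)       # minimises the sum of squared deviations
    return np.poly1d(coeffs)                # outer diameter as a function of the distance
```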
Finally, the measured pipeline is reconstructed from the spatial coordinates of its centerline (in the front camera coordinate system) and the outer diameter information.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (3)

1. A pipeline measuring method based on multi-view vision, characterized in that the method comprises the following steps:
(1) image acquisition and pre-processing
Step 1: set up an image acquisition system and capture pictures with binocular cameras placed on the front and rear sides of the pipeline to obtain gray-scale images of the pipeline; binarize the gray-scale images to separate the background from the pipeline; extract the target region of interest (ROI) from the binary image with a contour comparison technique to obtain the pipeline contour information;
(2) centerline and outside diameter extraction
Step 2.1: from the pipeline contour information, compute the Euclidean distance from each point inside the pipeline to the pipeline contour by Euclidean distance transformation, obtaining a distance transform map of the pipeline that records the Euclidean distance from every point inside the pipeline contour to the pipeline boundary;
Step 2.2: locate the pipeline boundary by image processing and obtain the pixel coordinates of the end points on both sides of the pipeline; then, taking the end point on one side of the centerline as the start and the end point on the other side as the goal, apply Dijkstra's minimum-path algorithm so that the minimum path between the two points is the pipeline centerline;
Step 2.3: reduce the number of feature values with a local feature-point discretization method: regarding the centerline as a function, screen its stationary points to obtain the centerline feature point set, the resulting feature points being a subset of the stationary points of that function;
Step 2.4: obtain and store the outer diameter corresponding to each feature point of the centerline feature point set with an edge detection method based on the shape centroid;
(3) camera calibration and multi-camera matrix conversion
Step 3.1: obtain, through camera calibration, the extrinsic parameters (a rotation matrix and a translation matrix) and the intrinsic parameters of each camera, and from these obtain the matrix that converts pipeline coordinates in the world coordinate system to camera window pixel coordinates;
Step 3.2: convert between camera coordinate systems from the extrinsic information: calibrate the rear camera and the front camera simultaneously with the same calibration board, and solve for the rotation matrix and the translation matrix that convert the rear camera coordinate system into the front camera coordinate system;
(4) pipeline centerline matching and outside diameter expansion
Step 4.1: apply viewport transformation recovery, perspective projection transformation recovery, view transformation recovery and model transformation recovery to the centerline feature points obtained from the front and rear cameras, yielding two families of rays, one from the front camera center to its imaging plane and one from the rear camera center to its imaging plane;
Step 4.2: using the rotation matrix and translation matrix obtained in step 3.2, convert the family of rays from the rear camera center to the rear camera imaging plane into rays expressed in the front camera coordinate system;
Step 4.3: with a dynamic matching method, start matching from the centerline end point on one side obtained in step 2.2 and match point by point until the end point on the other side is reached, obtaining the spatial position of the pipeline centerline represented by a set of polyline segments;
Step 4.4: expand the outer diameter of the centerline feature points under each view to obtain the outer diameter information of the measured pipeline in all directions; with a least-squares curve fitting method, regard the outer diameter of a feature point as the function value and its distance from an end point as the independent variable, fit a polynomial curve to this series of points, and select the fitted curve by the principle of the minimum sum of squared deviations, the resulting function giving the outer diameter length at different distances from the end point;
and reconstruct the measured pipeline from the spatial coordinates of its centerline in the front camera coordinate system and the outer diameter information.
2. The multi-view vision-based pipeline measuring method according to claim 1, characterized in that the shape-centroid-based edge detection method in step 2.4 is as follows:
the one-dimensional mathematical model of the image information, without considering the noise, is simply expressed as:
f(x)=u(x)*g(x)
wherein u(x) is the original ideal signal, f(x) is the one-dimensional image information, and g(x) is the point spread function, typically approximated by a Gaussian function:
g(x) = (1 / (√(2π)·σ)) · exp(-x² / (2σ²))
Setting u(x) in turn to an ideal step edge, an ideal pulse edge and an ideal roof edge, the point spread function smooths the sharp edges into blurred edges; a symmetry analysis of the derivative then yields the formula for the edge position:
x0 = ∫ x·|f'(x)| dx / ∫ |f'(x)| dx
To discretize the formula, let f_i be the sampled value of f(x) at x_i; the derivative at x_i is replaced by the mean of the forward and backward differences:
f'_i = (f_(i+1) - f_(i-1)) / 2
the formula may be changed to:
x0 = Σ_i x_i·|f'_i| / Σ_i |f'_i|
Compute the difference matrices by convolving the pipeline gray-scale picture with a row difference template and a column difference template, and record the resulting matrices D1 and D2; select a difference threshold T from the statistical characteristics of the pipeline matrix; set every element of D1 and D2 smaller than T to 0, so that the non-zero continuous intervals of the two matrices are the edge transition intervals; compute the sub-pixel edge point value in each interval with the discretized formula and store it at the edge of the gray-scale image; then, for each point of the centerline feature point set, the outer diameter is the pixel count obtained from the distance transform plus the edge point value computed in the previous step.
3. The multi-view vision-based pipeline measuring method according to claim 1, characterized in that the dynamic matching strategy in step 4.3 is as follows:
Matching starts from the end point on one side, using the end point information obtained in the image processing stage, and the ray corresponding to that end point is recorded as the reference ray;
According to the binocular imaging principle, the reference ray and the next ray from the front camera center to the projection points on the front camera imaging plane (in the front camera coordinate system) form one plane, and the reference ray and the next ray from the rear camera center to the projection points on the rear camera imaging plane (in the same coordinate system) form another plane; the intersection line of the two planes is computed. The intersection line and the four rays form four intersection points: the intersection points of the two reference rays recorded before are taken as reference points, and the first point away from them is the centerline feature point of the measured pipeline obtained by this match. That point serves as the reference point for the next match, in which the next ray of each ray family is selected. Repeating the matching yields the spatial feature points of the centerline of the measured pipeline, and connecting them gives the centerline.
CN201910993518.2A 2019-10-18 2019-10-18 Pipeline measuring method based on multi-view vision Active CN110766669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910993518.2A CN110766669B (en) 2019-10-18 2019-10-18 Pipeline measuring method based on multi-view vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910993518.2A CN110766669B (en) 2019-10-18 2019-10-18 Pipeline measuring method based on multi-view vision

Publications (2)

Publication Number Publication Date
CN110766669A true CN110766669A (en) 2020-02-07
CN110766669B CN110766669B (en) 2022-06-21

Family

ID=69332505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993518.2A Active CN110766669B (en) 2019-10-18 2019-10-18 Pipeline measuring method based on multi-view vision

Country Status (1)

Country Link
CN (1) CN110766669B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002071547A (en) * 2000-08-29 2002-03-08 Pal Giken:Kk On-line analyzer for image of particle in liquid
CN102135236A (en) * 2011-01-05 2011-07-27 北京航空航天大学 Automatic non-destructive testing method for internal wall of binocular vision pipeline
CN102410811A (en) * 2011-07-27 2012-04-11 北京理工大学 Method and system for measuring parameters of bent pipe
CN107563356A (en) * 2017-09-29 2018-01-09 西安因诺航空科技有限公司 A kind of unmanned plane inspection pipeline target analysis management method and system
CN109583377A (en) * 2018-11-30 2019-04-05 北京理工大学 A kind of control method, device and host computer that pipeline model is rebuild
CN109615654A (en) * 2019-01-09 2019-04-12 中国矿业大学(北京) Drainage pipeline inside corrosion depth and area measurement method based on binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
盛遵冰 (Sheng Zunbing): "General sub-pixel edge detection algorithm", Journal of Shanghai Jiao Tong University *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666876A (en) * 2020-06-05 2020-09-15 北京百度网讯科技有限公司 Method and device for detecting obstacle, electronic equipment and road side equipment
CN111666876B (en) * 2020-06-05 2023-06-09 阿波罗智联(北京)科技有限公司 Method and device for detecting obstacle, electronic equipment and road side equipment
CN111986257A (en) * 2020-07-16 2020-11-24 南京模拟技术研究所 Bullet point identification automatic calibration method and system supporting variable distance
CN111882488A (en) * 2020-07-22 2020-11-03 浙江大学 Indoor pipeline position recording and projecting method
CN111882488B (en) * 2020-07-22 2022-07-19 浙江大学 Indoor pipeline position recording and projecting method
CN113436209A (en) * 2021-06-23 2021-09-24 南通大学 Novel welding line center line extraction method based on layer-by-layer indentation strategy
CN113436209B (en) * 2021-06-23 2023-11-17 南通大学 Novel weld joint center line extraction method based on layer-by-layer indentation strategy
CN114777658A (en) * 2022-03-31 2022-07-22 深圳禾思众成科技有限公司 Alignment detection method and alignment detection equipment for semiconductor device
CN115115602A (en) * 2022-05-31 2022-09-27 江苏濠汉信息技术有限公司 Algorithm for positioning texture in wire diameter measurement process
CN115115602B (en) * 2022-05-31 2023-09-19 江苏濠汉信息技术有限公司 Algorithm for texture positioning in wire diameter measurement process
CN114897902A (en) * 2022-07-13 2022-08-12 深圳金正方科技股份有限公司 BWFRP pipeline on-line monitoring method and system based on multiple cameras

Also Published As

Publication number Publication date
CN110766669B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN110766669B (en) Pipeline measuring method based on multi-view vision
CN109816703B (en) Point cloud registration method based on camera calibration and ICP algorithm
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN111311689A (en) Method and system for calibrating relative external parameters of laser radar and camera
CN105043350A (en) Binocular vision measuring method
CN109470149B (en) Method and device for measuring position and posture of pipeline
CN104091324A (en) Quick checkerboard image feature matching algorithm based on connected domain segmentation
CN112067233B (en) Six-degree-of-freedom motion capture method for wind tunnel model
CN106996748A (en) A kind of wheel footpath measuring method based on binocular vision
CN113393439A (en) Forging defect detection method based on deep learning
CN114252449B (en) Aluminum alloy weld joint surface quality detection system and method based on line structured light
CN102494663A (en) Measuring system of swing angle of swing nozzle and measuring method of swing angle
CN112001973B (en) Quick three-dimensional human head measuring method based on digital speckle correlation
CN113744351A (en) Underwater structured light measurement calibration method and system based on multi-medium refraction imaging
CN103884294B (en) The method and its device of a kind of infrared light measuring three-dimensional morphology of wide visual field
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN109443319A (en) Barrier range-measurement system and its distance measuring method based on monocular vision
CN117710588A (en) Three-dimensional target detection method based on visual ranging priori information
CN110487254B (en) Rapid underwater target size measuring method for ROV
JP2010009236A (en) Plane area estimation device and program
CN112630469B (en) Three-dimensional detection method based on structured light and multiple light field cameras
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant