CN112767338A - Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision - Google Patents


Info

Publication number
CN112767338A
CN112767338A (application CN202110040531.3A)
Authority
CN
China
Prior art keywords
image
industrial
target
eye camera
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110040531.3A
Other languages
Chinese (zh)
Inventor
李枝军
张辉
徐后生
刘武
徐秀丽
李雪红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luqiao Nanjing Engineering General Corp
Nanjing Tech University
Original Assignee
Luqiao Nanjing Engineering General Corp
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luqiao Nanjing Engineering General Corp, Nanjing Tech University filed Critical Luqiao Nanjing Engineering General Corp
Priority to CN202110040531.3A priority Critical patent/CN112767338A/en
Publication of CN112767338A publication Critical patent/CN112767338A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Abstract

The invention discloses a binocular vision based hoisting and positioning system for prefabricated components of fabricated bridges, comprising: a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, an optical fiber data line, a calibration plate and a target combined structure. The invention also discloses a binocular vision based hoisting and positioning method for prefabricated components of fabricated bridges, which comprises the following steps: step S1, calibrating the cameras and solving their internal and external parameters; step S2, mounting the target combined structure and acquiring a first image and a second image; step S3, rectifying the first image and the second image; step S4, acquiring the pixel coordinate values of the circle-centre positioning point of the target; and step S5, obtaining the spatial three-dimensional coordinates of the circle-centre positioning point by the parallax method. By capturing, with binocular vision, a target mounted on a steel bar, the invention obtains the three-dimensional coordinates of the bar, provides those position coordinates for formulating a bar-adjustment scheme, and saves time and cost in engineering.

Description

Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision
Technical Field
The invention relates to the field of computer vision and key target positioning of prefabricated components of an assembly type building, in particular to a system and a method for hoisting and positioning prefabricated components of an assembly type bridge based on binocular vision.
Background
With the development of new urbanization in China, promoting the modernization of the construction industry is an inevitable trend, and the key is to adapt to the development requirements of new building industrialization and information technology. Fabricated buildings and bridges have obvious benefits in energy conservation and emission reduction, environmental protection, shortened construction periods, reduced cost and improved quality.
The hoisting process is an important part of fabricated construction; optimizing it is one of the key points of fabricated-building research and bears on the healthy development of fabricated buildings.
Traditional hoisting positioning relies mainly on manual measurement and inspection. During hoisting, a trial assembly is first performed to ensure that the positions of the prefabricated holes match the component reinforcing bars so that the component can be seated successfully; in practice, manual trial assembly and alignment inspection are often time-consuming. When laser scanning is used for hoisting positioning and inspection, the three-dimensional laser scanning instrument is expensive, requires a specific environment, takes a long time to process the scanned point cloud, and is difficult to recalibrate once its accuracy drifts.
Therefore, developing a method for intelligently detecting key parts during hoisting is particularly important: the detection data can readily be used to formulate an adjustment scheme during hoisting, saving time and cost for engineering. To address these problems, the invention provides a binocular vision based method and system for positioning the key hoisting parts of prefabricated bridge components.
Disclosure of Invention
In order to solve the problems in the actual engineering, the invention aims to provide a binocular vision-based prefabricated bridge component hoisting and positioning system and method, which are used for solving the problem of positioning of the steel bars of the prefabricated components in the actual hoisting engineering.
In order to achieve the above object, the present invention provides a binocular vision based prefabricated bridge member hoisting and positioning system, comprising: the system comprises a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, an optical fiber data line, a calibration plate and a target combined structure;
the industrial left-eye camera and the industrial right-eye camera are arranged on the fixed support in parallel, and are in communication connection with the trigger through the optical fiber data line respectively, and the trigger is in communication connection with the computer;
a target measuring program is installed in the computer, and the computer performs data processing on images acquired by the industrial left-eye camera and the industrial right-eye camera through the target measuring program;
the target assembly structure includes: the device comprises a target veneer, a hollow rubber mounting handle and an adjusting knob, wherein the hollow rubber mounting handle is connected with the target veneer through a spherical holder connector, the adjusting knob is arranged on the hollow rubber mounting handle, and a target is arranged on the front side of the target veneer;
the industrial left-eye camera and the industrial right-eye camera shoot the calibration plate or the target to acquire corresponding images.
Furthermore, the optical fiber data line is a USB3.0 optical fiber data line, the calibration plate is a chessboard calibration plate, and retroreflective (return-light reflection) marker points are provided on the surface of the calibration plate and on the target.
The invention also provides a hoisting and positioning method of the prefabricated component of the assembled bridge based on binocular vision, which comprises the following steps: step S1, calibrating the industrial left-eye camera and the industrial right-eye camera, and solving internal and external parameters of the industrial left-eye camera and the industrial right-eye camera;
step S2, mounting a target combination structure at the position of a steel bar to be measured of a prefabricated part, shooting the target through the industrial left-eye camera to obtain a first image of the target, and shooting the target through the industrial right-eye camera to obtain a second image of the target;
step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
and step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
Further, the step S1 is specifically:
s101, fixing the industrial left-eye camera and the industrial right-eye camera in front of a prefabricated part to be tested, and placing the calibration plate in the visual field range of the industrial left-eye camera and the industrial right-eye camera;
step S102, acquiring a plurality of calibration plate images at different angles and distances through the industrial left-eye camera and the industrial right-eye camera, and then placing the acquired images into a designated folder;
S103, carrying out corner detection on the images in the designated folder through MATLAB, and carrying out calibration calculation to obtain the parameters, which include: the rotation matrix, translation vector, internal parameter matrix, reprojection error, skewness of the image axes, principal point coordinates and scale factor;
S104, screening out of the designated folder the images that do not meet the reprojection error requirement;
and step S105, carrying out a second calibration on the images remaining in the designated folder, and calculating and saving the parameters.
Further, in step S2, the target combination structure is installed at the steel bar to be measured of the prefabricated component, and the target surface position of the target is adjusted through the spherical pan-tilt connector, so that the target surface is directly opposite to the industrial left-eye camera and the industrial right-eye camera.
Further, in step S3, the distortion correction specifically includes:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting radial distortion and tangential distortion of the point on the normalized plane, specifically using formula (1) and formula (2), the expression is as follows:
x_corrected = x(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p1·x·y + p2·(r^2 + 2x^2) (1)
y_corrected = y(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p2·x·y + p1·(r^2 + 2y^2) (2)
in formulas (1) to (2), x is the abscissa of the space point in the image coordinate system; x_corrected is the corrected abscissa in the image coordinate system; y is the ordinate of the space point in the image coordinate system; y_corrected is the corrected ordinate in the image coordinate system; r^2 = x^2 + y^2; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera;
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, with the expressions:
u = fx · x_corrected + cx (3)
v = fy · y_corrected + cy (4)
in formulas (3) to (4), cx, cy are the offsets of the camera optical axis in the image coordinate system; fx, fy are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system;
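As a rough sketch of steps S311 to S313, the distortion model of formulas (1)-(2) and the pixel projection of formulas (3)-(4) can be written directly in Python; the function and parameter names here are illustrative, not part of the patent:

```python
def correct_point(x, y, k1, k2, k3, p1, p2):
    # Radial + tangential model of formulas (1)-(2), applied to a point
    # (x, y) on the normalized image plane, with r^2 = x^2 + y^2.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_corr = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_corr = y * radial + 2.0 * p2 * x * y + p1 * (r2 + 2.0 * y * y)
    return x_corr, y_corr

def to_pixel(x_corr, y_corr, fx, fy, cx, cy):
    # Formulas (3)-(4): map the corrected normalized point to pixel coordinates.
    return fx * x_corr + cx, fy * y_corr + cy
```

With all distortion coefficients set to zero the mapping reduces to the plain pinhole projection, which gives a quick sanity check.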
in step S3, the stereo correction specifically includes:
step S321, splitting the rotation matrix R between the two cameras into two matrices r_zuo and r_you, with r_zuo = R^(1/2) and r_you = R^(-1/2), so that each camera is rotated by half of R;
step S322, constructing e1, e2, e3 from the offset matrix T so that the left and right epipolar lines become parallel, and then transforming the pole of the left view to infinity through the constructed transformation matrix R_rect; wherein T = [Tx, Ty, Tz]^T, and
e1 = T / ||T||, e2 = [-Ty, Tx, 0]^T / sqrt(Tx^2 + Ty^2), e3 = e1 × e2, R_rect = [e1^T; e2^T; e3^T]
wherein T is the offset matrix; R_rect is the constructed transformation matrix; Tx, Ty, Tz are the components of the offset matrix; e1, e2, e3 are the rows of the transformation matrix;
step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera by the corresponding overall rotation matrices in turn so that the principal optical axes of the two coordinate systems become parallel, wherein each overall rotation matrix is obtained by multiplying the transformation matrix by the half rotation matrix; the expressions are:
R_zuo = R_rect · r_zuo (5)
R_you = R_rect · r_you (6)
in formulas (5) to (6), R_zuo and R_you are the synthesized rotation matrices of the industrial left-eye camera and the industrial right-eye camera respectively; R_rect is the matrix that transforms the camera poles to infinity.
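The construction of e1, e2, e3 and R_rect in step S322 can be sketched as follows. This is a minimal illustration of the Bouguet-style construction, assuming T is the baseline translation vector between the cameras; the function name is illustrative:

```python
import numpy as np

def rect_matrix(T):
    # Build the rectifying rotation R_rect from the offset (translation)
    # vector T between the two cameras, as in step S322.
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)                                 # baseline direction
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])   # orthogonal to e1, in the image plane
    e3 = np.cross(e1, e2)                                      # completes the right-handed frame
    return np.vstack([e1, e2, e3])                             # rows e1, e2, e3
```

For a purely horizontal baseline the result is the identity, and for a general baseline the matrix is orthonormal, which is easy to verify numerically.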
Further, the step S4 specifically includes:
step S401, filtering the corrected first image and the corrected second image with a median filtering algorithm, whose output formula is:
f_hat(x, y) = median{ g(s, t) | (s, t) ∈ S_xy } (7)
in formula (7), g(s, t) is the grey value of the original image within the filter window S_xy centred at (x, y), and f_hat(x, y) is the grey value of the filtered image;
step S402, sharpening the filtered image with the Laplacian operator, whose discrete expression is:
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4·f(x, y) (8)
formula (8) corresponds to a Laplacian mask whose centre coefficient is -4;
s403, setting a proper threshold value through a canny operator to perform edge detection on the image to obtain an image contour, and screening out target contours meeting conditions through detecting the contour by the perimeter, the area and the roundness of the image contour;
fitting the target contour meeting the conditions, obtaining the circle center coordinates of the first image and the second image, averaging the circle center coordinates of the first image and the second image, and storing the circle center coordinates of the average value;
the expression of the canny operator is as follows:
G(x, y, σ) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²)) (9)
I(x, y) = G(x, y, σ) * f(x, y) (10)
M = sqrt(I_X(x, y)² + I_Y(x, y)²) (11)
θ = arctan(I_Y(x, y) / I_X(x, y)) (12)
in formulas (9) to (12), G(x, y, σ) is a two-dimensional Gaussian function; σ is the standard deviation of the Gaussian function, and the filter coverage area grows as σ increases; f(x, y) is the grey value of the original image; I(x, y) is the grey value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of the image I(x, y) in the x and y directions; M is the gradient strength at the point, and θ is its gradient direction.
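Formulas (11)-(12) can be sketched numerically; here central differences via numpy.gradient stand in for the derivatives of the Gaussian-smoothed image I (an assumption for illustration, and the function name is not from the patent):

```python
import numpy as np

def gradient_mag_dir(I):
    # Gradient strength M (formula 11) and direction theta (formula 12)
    # of a smoothed image I, with I_X, I_Y taken as finite differences.
    Iy, Ix = np.gradient(np.asarray(I, dtype=float))  # d/drow, d/dcolumn
    M = np.sqrt(Ix ** 2 + Iy ** 2)
    theta = np.arctan2(Iy, Ix)                        # arctan2 handles I_X = 0 safely
    return M, theta
```

On a horizontal ramp image the strength is uniformly 1 and the direction 0, which is a convenient check.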
Further, step S5 specifically includes:
step S501, obtaining the relation between parallax and the three-dimensional coordinates according to the triangle geometry principle:
b / Z_w = (b - (X_zuo - X_you)) / (Z_w - f) (13)
step S502, letting (u1 - u2) = X_zuo - X_you and rearranging gives:
Z_w = f · b / (u1 - u2) = f · b / d (14)
step S503, calculating the three-dimensional coordinates of the circle-centre position according to formula (15):
X_w = b · u1 / d, Y_w = b · v1 / d, Z_w = b · f / d (15)
in formulas (13) to (15), X_w is the abscissa of the circle-centre point in the world coordinate system; Y_w is the ordinate of the circle-centre point in the world coordinate system; Z_w is the depth coordinate of the circle-centre point in the world coordinate system; X_zuo is the abscissa of the circle-centre point in the image coordinate system of the imaging plane of the industrial left-eye camera; X_you is the abscissa of the circle-centre point in the image coordinate system of the imaging plane of the industrial right-eye camera; u1 is the abscissa of the circle-centre point at its projection in the first image; u2 is the abscissa of the circle-centre point at its projection in the second image; v1 is the ordinate of the circle-centre point at its projection in the first image; (u1 - u2) = X_zuo - X_you is the parallax d of the target point; f is the focal length of the two cameras; b is the baseline distance between the two cameras.
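A minimal numeric sketch of formula (15), assuming rectified images and image coordinates already measured relative to the principal point (the function name is illustrative):

```python
def triangulate(u1, v1, u2, f, b):
    # Formula (15): recover the circle-centre point (Xw, Yw, Zw) from its
    # projections, with disparity d = u1 - u2, focal length f, baseline b.
    d = u1 - u2
    Zw = f * b / d
    Xw = b * u1 / d
    Yw = b * v1 / d
    return Xw, Yw, Zw
```

For example, a point at (1, 0.5, 2) seen with f = 1 and b = 0.5 projects to u1 = 0.5, v1 = 0.25, u2 = 0.25, and the formula recovers the original coordinates.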
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, the target images are acquired by the left and right cameras, the image data is processed and calculated, the two-dimensional coordinates are converted into world three-dimensional coordinates, the absolute position of the reinforcing steel bar is rapidly obtained, the automation of the detection of the large prefabricated part of the assembled bridge is realized, the workload of manual measurement is reduced, the measurement precision is improved, the cost is low, the measurement is convenient, and the result can be processed in real time.
Drawings
Fig. 1 is a schematic view of a hoisting positioning system provided in embodiment 1.
Fig. 2 is a schematic structural diagram of a target assembly structure provided in embodiment 1.
Fig. 3 is a schematic structural diagram of another angle of the target assembly structure provided in embodiment 1.
Fig. 4 is a flowchart of the method of step S1 in embodiment 2.
Fig. 5 is a flowchart of a method for hoisting and positioning the prefabricated assembly bridge member based on binocular vision, which is provided in embodiment 2.
Fig. 6 and 7 are schematic diagrams of the stereo correction in step S3 in embodiment 2.
Fig. 8 is a schematic diagram illustrating coordinate relationship conversion in different coordinate systems in step S3 in example 2.
Fig. 9 is a flowchart of the method of step S4 in embodiment 2.
In the figures: 001 fixed support; 002 industrial left-eye camera; 003 industrial right-eye camera; 004 computer; 005 trigger; 006 USB3.0 optical fiber data line; 007 chessboard calibration plate; 008 target combined structure; 009 target veneer; 010 hollow rubber mounting handle; 011 spherical pan-tilt connector; 012 adjusting knob.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 to 3, the present embodiment provides an assembly type bridge prefabricated part hoisting and positioning system based on binocular vision, including:
the system comprises a fixed support 001, an industrial left-eye camera 002, an industrial right-eye camera 003, a computer 004, a trigger 005, a USB3.0 optical fiber data line 006, a chessboard calibration plate 007 and a target combined structure 008; the industrial left-eye camera 002 and the industrial right-eye camera 003 are arranged on the fixing support 001 in parallel, and the fixing support 001 can adjust and read the distance between the industrial left-eye camera 002 and the industrial right-eye camera 003; the industrial left eye camera 002 and the industrial right eye camera 003 are in communication connection with the trigger 005 through the USB3.0 optical fiber data line 006 respectively, and the trigger 005 is used for ensuring that the industrial left eye camera 002 and the industrial right eye camera 003 can acquire images simultaneously; the trigger 005 is communicatively connected to the computer 004, specifically, connected through the USB3.0 optical fiber data line 006.
The target measuring program is installed in the computer 004, and the computer 004 processes the images acquired by the industrial left-eye camera 002 and the industrial right-eye camera 003 through the target measuring program.
The target combined structure 008 includes: a target veneer 009, a hollow rubber mounting handle 010 and an adjusting knob 012. The hollow rubber mounting handle 010 is connected to the target veneer 009 through a spherical pan-tilt connector 011, the adjusting knob 012 is arranged on the hollow rubber mounting handle 010, and a target is arranged on the front of the target veneer 009. The spherical pan-tilt connector 011 can adjust the angle of the target; a reinforcing bar can be inserted into the inner side of the hollow rubber mounting handle 010, and the outer side of the hollow rubber mounting handle 010 can be inserted into a hole in the cross-section of the prefabricated component for measurement.
Specifically, in this embodiment, the surface of the chessboard calibration plate 007 and the target are provided with retroreflective marker points, which reduce diffuse reflection of the reflected light and allow corner detection to be performed more quickly and effectively.
When the system is in use, the industrial left-eye camera 002 and the industrial right-eye camera 003 are arranged in parallel on the fixed support 001 and photograph the target on the target veneer 009 and the chessboard calibration plate 007 to acquire the corresponding images. The images are transmitted to the computer 004 through the USB3.0 optical fiber data line 006 and the trigger 005, and the computer 004 processes the received images through the built-in target measuring program. The processing specifically includes: 1. calibrating the industrial left-eye camera 002 and the industrial right-eye camera 003 through the acquired images of the chessboard calibration plate 007, and solving the internal and external parameters of the two cameras; 2. rectifying the acquired target images, the rectification comprising distortion correction and stereo correction; 3. processing the rectified target images and saving the pixel coordinate values of the circle-centre positioning point of the target in each image; 4. obtaining the spatial three-dimensional coordinates of the circle-centre positioning point from the obtained pixel coordinate values by the parallax method.
More specifically, in the above four processes, the images of the chessboard calibration plate 007 are acquired by the industrial left-eye camera 002 and the industrial right-eye camera 003 simultaneously, and there are a plurality of such images; likewise, the target images are acquired by the two cameras simultaneously and are also plural.
Example 2
Referring to fig. 4 to 9, the embodiment provides a method for hoisting and positioning an assembly type bridge prefabricated part based on binocular vision, which includes the following steps:
step S1, calibrating the industrial left eye camera and the industrial right eye camera, and solving internal and external parameters of the industrial left eye camera and the industrial right eye camera;
specifically, step S1 specifically includes:
s101, fixing an industrial left-eye camera and an industrial right-eye camera in front of a prefabricated part to be tested, and placing a chessboard calibration plate in the visual field range of the industrial left-eye camera and the industrial right-eye camera;
step S102, acquiring 10-20 chessboard calibration plate images at different angles and distances through the industrial left-eye camera and the industrial right-eye camera, and then placing the acquired images into a designated folder;
S103, carrying out corner detection on the images in the designated folder through MATLAB, and carrying out calibration calculation to obtain the parameters, which include: the rotation matrix, translation vector, internal parameter matrix, reprojection error, skewness of the image axes, principal point coordinates and scale factor;
S104, screening out of the designated folder the images that do not meet the reprojection error requirement;
and step S105, carrying out a second calibration on the images remaining in the designated folder, and calculating and saving the parameters.
More specifically, the reprojection error refers to a difference between a theoretical value of the target point projected onto the imaging plane and a real value on the measured image.
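The screening of step S104 can be sketched by computing, for each calibration image, a summary of the reprojection error just defined; the root-mean-square form and the function name below are illustrative assumptions, since the patent does not fix the exact error metric:

```python
import numpy as np

def reprojection_rmse(projected, measured):
    # Root-mean-square pixel distance between the theoretical (reprojected)
    # corner positions and the corners detected on the measured image.
    # Images whose error exceeds a chosen threshold would be discarded
    # before the second calibration.
    projected = np.asarray(projected, dtype=float)
    measured = np.asarray(measured, dtype=float)
    sq_dist = np.sum((projected - measured) ** 2, axis=1)  # per-corner squared distance
    return float(np.sqrt(np.mean(sq_dist)))
```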
S2, mounting a target combination structure at the position of a steel bar to be measured of the prefabricated part, shooting a target through an industrial left-eye camera to obtain a first image of the target, and shooting the target through an industrial right-eye camera to obtain a second image of the target;
Specifically, in step S2, the target combined structure is installed at the steel bar to be measured of the prefabricated component, and the position of the target surface is adjusted through the spherical pan-tilt connector so that the target surface directly faces the industrial left-eye camera and the industrial right-eye camera; the purpose of this operation is to reduce the error of the subsequent extraction of the target-surface centre coordinates.
Step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
specifically, the generation of image distortion directly affects the subsequent measurement accuracy. Due to processing errors of optical components such as a camera lens in the processing process, the camera has certain distortion in imaging.
The distortion correction is performed with the camera parameters and distortion coefficients obtained in step S1, using the conversion relationship between pixel coordinates and image coordinates; the relationship between the conversions among the different coordinate systems and the calibration parameters is expressed by the following formula:
Z_c · [u, v, 1]^T = K · M_1 · [X_w, Y_w, Z_w, 1]^T = M · [X_w, Y_w, Z_w, 1]^T
In the formula: a_x, a_y are the scale factors of the horizontal and vertical axes of the image respectively; K contains internal parameters such as the focal length and principal point coordinates and is the camera internal parameter matrix; M_1 contains the rotation matrix and translation vector, its parameters being determined by the position of the camera coordinate system relative to the world coordinate system, and is the camera external parameter matrix; the product M of the internal parameter matrix and the external parameter matrix is the projection matrix.
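The composition M = K · M_1 above can be sketched as a generic pinhole projection; the matrix names follow the text, while the function names are illustrative:

```python
import numpy as np

def projection_matrix(K, R, t):
    # Projection matrix M = K [R | t]: product of the internal parameter
    # matrix K and the external parameter matrix [R | t].
    Rt = np.hstack([np.asarray(R, dtype=float),
                    np.asarray(t, dtype=float).reshape(3, 1)])
    return np.asarray(K, dtype=float) @ Rt

def project(M, Xw):
    # Project a world point (Xw, Yw, Zw) to pixel coordinates (u, v).
    p = M @ np.append(np.asarray(Xw, dtype=float), 1.0)  # homogeneous coordinates
    return p[:2] / p[2]                                  # divide by Z_c
```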
More specifically, in step S3, the distortion correction specifically includes:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting the radial distortion and tangential distortion of the point on the normalized plane, specifically through the following two formulas:
x_corrected = x(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p1·x·y + p2·(r^2 + 2x^2)
y_corrected = y(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p2·x·y + p1·(r^2 + 2y^2)
In the formulas, x is the abscissa of the space point in the image coordinate system; x_corrected is the corrected abscissa in the image coordinate system; y is the ordinate of the space point in the image coordinate system; y_corrected is the corrected ordinate in the image coordinate system; r^2 = x^2 + y^2; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera.
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, with the expressions:
u = fx · x_corrected + cx
v = fy · y_corrected + cy
In the formulas, cx, cy are the offsets of the camera optical axis in the image coordinate system; fx, fy are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system.
In step S3, the stereo correction mathematically applies a projective transformation to the left and right views captured of the same scene so that the two imaging planes are parallel to the baseline and the same point lies in the same row in the left and right views, which is referred to as coplanar row alignment. Only after coplanar row alignment is achieved can the three-dimensional coordinates be calculated by the triangulation principle; the correction uses the Bouguet epipolar rectification algorithm of OpenCV. It specifically includes the following steps:
step S321, dividing the rotation matrix into two matrixes, RzAnd RyAnd R isz=R1/2,Ry=R1/2
Step S322, constructing e1, e2, e3 through the offset matrix T, making the left and right epipolar lines parallel through e1, e2, e3, and then transforming the poles of the left view to infinity through the R_rect matrix; wherein T = [Tx, Ty, Tz]^T,
e1 = T / ||T||
e2 = [-Ty, Tx, 0]^T / sqrt(Tx² + Ty²)
e3 = e1 × e2
R_rect = [e1^T; e2^T; e3^T]
wherein T is the offset matrix; R_rect is the constructed transformation matrix, whose rows are e1^T, e2^T and e3^T; Tx, Ty, Tz are the components of the offset matrix; e1, e2, e3 are the components of the transformation matrix.
Step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera by the corresponding overall rotation matrixes in sequence so that the main optical axes of the two coordinate systems become parallel, wherein each overall rotation matrix is obtained by multiplying the transformation matrix by the corresponding half-rotation matrix; the expressions are as follows:
R_zuo = R_rect · r_zuo
R_you = R_rect · r_you
in the formulas, R_zuo and R_you are the synthetic rotation matrixes of the left and right cameras respectively; R_rect is the matrix that transforms the camera poles to infinity.
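The R_rect construction of step S322 can be sketched in a few lines, assuming the standard Bouguet form (e1 along the baseline, e2 orthogonal to it in the image plane, e3 completing the basis); the function name is hypothetical:

```python
import numpy as np

def build_rrect(T):
    """Construct the rectification matrix R_rect of step S322 from the
    offset (translation) vector T = [Tx, Ty, Tz] between the cameras,
    following the standard Bouguet construction."""
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)                              # along the baseline
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])  # in-plane, orthogonal to e1
    e3 = np.cross(e1, e2)                                   # completes the orthonormal basis
    return np.vstack([e1, e2, e3])                          # rows e1^T, e2^T, e3^T
```

For a purely horizontal baseline T = [Tx, 0, 0] the result is the identity matrix, matching the intuition that horizontally aligned cameras need no epipolar rotation.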
Step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
specifically, step S4 specifically includes:
step S401, filtering the corrected first image and the corrected second image by adopting a median filtering algorithm; more specifically, selecting a 3×3 filtering template window so that the pixel point to be filtered on the image coincides with the center of the filtering template, moving the template in sequence, performing the filtering operation on all the gray values covered by the template on the image, sorting the covered gray values from small to large, and selecting the gray value at the center (the median); the gray value selected in this way has the smallest difference from the gray values of the surrounding pixels, thereby effectively removing noise; the median filter output formula is:
f̂(x, y) = median{ g(s, t) : (s, t) ∈ S_xy }
in the formula, g(s, t) is the gray value of the original image, f̂(x, y) is the gray value of the filtered image, and S_xy is the set of coordinates covered by the filtering template centered at (x, y).
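A minimal sketch of the 3×3 median filter of step S401; replicating the border pixels is an assumption, since the patent does not specify edge handling:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filtering of step S401: center the template on each
    pixel, sort the nine covered gray values from small to large, and
    keep the middle one.  Borders are replicated (an assumption)."""
    img = np.asarray(img)
    padded = np.pad(img, 1, mode='edge')     # replicate border pixels
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

A single salt-noise pixel in a flat region is removed entirely, since eight of the nine sorted values are the background gray level.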
Step S402, adopting the Laplacian operator to sharpen the filtered image and enhance details, fusing the filtered image with its Laplacian through the Laplacian template while protecting the background, wherein the Laplacian is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y²
the discrete expression in the image processing process is:
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
the coefficient -4 of f(x, y) in the above formula is the center coefficient of the Laplacian mask.
Step S403, setting a proper threshold through the Canny operator to perform edge detection on the image to obtain the image contours, and screening out the target contours meeting the conditions by detecting the perimeter, area and roundness of each image contour; when the roundness threshold is set to 0.8, the contour of the circular target can be screened out well;
fitting the target contours meeting the conditions to obtain the circle center coordinates in the first image and the second image; because the target is set as concentric circles, averaging the circle center coordinates of the concentric circles in the first image and the second image respectively, and storing the averaged circle center coordinates;
the expression of the Canny operator is:
G(x, y, σ) = 1/(2πσ²) · exp(-(x² + y²)/(2σ²))
I(x, y) = G(x, y, σ) * f(x, y)
M = sqrt(I_X(x, y)² + I_Y(x, y)²)
θ = arctan(I_Y(x, y) / I_X(x, y))
in the formulas, G(x, y, σ) is the two-dimensional Gaussian function; σ is the standard deviation of the Gaussian function, and the filter coverage area increases as σ increases; f(x, y) is the gray value of the original image; I(x, y) is the gray value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of the image I(x, y) in the x and y directions; M is the gradient strength at the point, and θ is its gradient direction.
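The contour screening of step S403 depends on a roundness measure; the common definition 4πA/P², which equals 1 for a perfect circle, is an assumption here, since the patent only states that a threshold of 0.8 works well:

```python
import math

def is_circular_target(area, perimeter, min_roundness=0.8):
    """Screening rule of step S403: keep a contour only if its
    roundness 4*pi*A / P**2 reaches the threshold (0.8 in the patent).
    The roundness formula itself is an assumed common definition."""
    if perimeter <= 0:
        return False
    return 4.0 * math.pi * area / perimeter ** 2 >= min_roundness
```

A circle of radius r has roundness exactly 1 and passes; a square of side s has roundness π/4 ≈ 0.785 and is rejected at the 0.8 threshold, which illustrates why 0.8 separates circular targets from angular clutter.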
And step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
Specifically, step S5 specifically includes:
step S501, obtaining the relation between the parallax and the three-dimensional coordinates according to the similar-triangle geometry principle:
(b - (X_zuo - X_you)) / b = (Z_w - f) / Z_w
step S502, letting (u1 - u2) = X_zuo - X_you = d, obtaining:
Z_w = f·b / d
step S503, calculating the three-dimensional coordinates of the circle center position according to the following three formulas:
X_w = b·u1 / d
Y_w = b·v1 / d
Z_w = b·f / d
in the formulas, X_w is the abscissa of the center point in the world coordinate system; Y_w is the ordinate of the center point in the world coordinate system; Z_w is the vertical coordinate of the center point in the world coordinate system; X_zuo is the abscissa of the center point in the image coordinate system of the imaging plane of the industrial left-eye camera; X_you is the abscissa of the center point in the image coordinate system of the imaging plane of the industrial right-eye camera; u1 is the abscissa of the projection of the center point in the first image; u2 is the abscissa of the projection of the center point in the second image; v1 is the ordinate of the projection of the center point in the first image; (u1 - u2) = X_zuo - X_you is the parallax d of the target point; f is the focal length of the two cameras; b is the baseline distance between the two cameras.
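The parallax method of step S5 can be sketched as below; the function name is hypothetical, and the pixel coordinates are assumed to be measured relative to the principal point:

```python
def triangulate_center(u1, v1, u2, f, b):
    """Parallax method of step S5: recover the world coordinates
    (Xw, Yw, Zw) of the circle center from its pixel coordinates in
    the two rectified images.  u1, v1 come from the first (left)
    image, u2 from the second; f is the focal length in pixels and b
    the baseline.  Coordinates are assumed to be measured relative to
    the principal point."""
    d = u1 - u2                 # parallax (u1 - u2) = X_zuo - X_you
    Xw = b * u1 / d
    Yw = b * v1 / d
    Zw = b * f / d              # depth grows as the parallax shrinks
    return Xw, Yw, Zw
```

For example, with f = 1000 px, b = 0.5 m, and a parallax of 50 px, the depth comes out as Zw = 0.5 × 1000 / 50 = 10 m.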
While the invention has been described in connection with specific embodiments thereof, it will be understood that these should not be construed as limiting the scope of the invention, which is defined in the following claims, and any variations which fall within the scope of the claims are intended to be embraced thereby.

Claims (8)

1. An assembled bridge prefabricated part hoisting and positioning system based on binocular vision, characterized by comprising: a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, optical fiber data lines, a calibration plate and a target combined structure;
the industrial left-eye camera and the industrial right-eye camera are arranged on the fixed support in parallel, and are in communication connection with the trigger through the optical fiber data line respectively, and the trigger is in communication connection with the computer;
a target measuring program is installed in the computer, and the computer performs data processing on images acquired by the industrial left-eye camera and the industrial right-eye camera through the target measuring program;
the target combined structure comprises: a target panel, a hollow rubber mounting handle and an adjusting knob, wherein the hollow rubber mounting handle is connected with the target panel through a spherical pan-tilt connector, the adjusting knob is arranged on the hollow rubber mounting handle, and a target is arranged on the front side of the target panel;
the industrial left-eye camera and the industrial right-eye camera shoot the calibration plate or the target to acquire corresponding images.
2. The binocular vision based assembled bridge prefabricated part hoisting and positioning system according to claim 1, characterized in that the optical fiber data lines are USB3.0 optical fiber data lines, the calibration plate is a chessboard calibration plate, retro-reflective mark points are arranged on the surface of the calibration plate, and retro-reflective mark points are arranged on the targets.
3. A binocular vision based assembled bridge prefabricated part hoisting and positioning method using the system according to any one of claims 1 to 2, characterized in that the method comprises the following steps:
step S1, calibrating the industrial left-eye camera and the industrial right-eye camera, and solving internal and external parameters of the industrial left-eye camera and the industrial right-eye camera;
step S2, mounting a target combination structure at the position of a steel bar to be measured of a prefabricated part, shooting the target through the industrial left-eye camera to obtain a first image of the target, and shooting the target through the industrial right-eye camera to obtain a second image of the target;
step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
and step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
4. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 3, wherein the step S1 is specifically as follows:
s101, fixing the industrial left-eye camera and the industrial right-eye camera in front of a prefabricated part to be tested, and placing the calibration plate in the visual field range of the industrial left-eye camera and the industrial right-eye camera;
step S102, acquiring a plurality of calibration plate images at different angles and different distances through the industrial left-eye camera and the industrial right-eye camera, and then putting the acquired images into a designated folder;
step S103, carrying out corner point detection on the images in the designated folder through MATLAB, and carrying out calibration calculation to obtain parameters, wherein the parameters comprise: a rotation matrix, a translation vector, an internal parameter matrix, a reprojection error, the skewness of the image axes, principal point coordinates and a scale factor;
step S104, screening out the images which do not meet the reprojection error requirement from the designated folder;
and step S105, carrying out a second calibration on the images in the designated folder, and calculating and saving the parameters.
5. The binocular vision based assembled bridge prefabricated part hoisting and positioning method as claimed in claim 4, wherein in the step S2, the target combination structure is installed at the steel bar to be measured of the prefabricated part, and the target surface position of the target is adjusted through the spherical pan-tilt connector, so that the target surface is opposite to the industrial left-eye camera and the industrial right-eye camera.
6. The binocular vision based assembled bridge prefabricated part hoisting and positioning method as recited in claim 5, wherein in the step S3, the distortion correction specifically comprises:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting radial distortion and tangential distortion of the point on the normalized plane, specifically using formula (1) and formula (2), where the expressions are as follows:
x_corrected = x(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p1·xy + p2(r² + 2x²)  (1)
y_corrected = y(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p2·xy + p1(r² + 2y²)  (2)
in formulas (1) to (2), x and y are the abscissa and ordinate of the space point in the image coordinate system; x_corrected and y_corrected are the abscissa and ordinate of the corrected space point in the image coordinate system; r is the distance from the point to the origin of the normalized plane, with r² = x² + y²; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera;
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, where the expressions are as follows:
u = fx·x_corrected + cx  (3)
v = fy·y_corrected + cy  (4)
in formulas (3) to (4), cx and cy are the offsets of the camera optical axis in the image coordinate system; fx and fy are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system;
in step S3, the stereo correction specifically includes:
step S321, splitting the rotation matrix R between the two cameras into two half-rotation matrixes r_zuo and r_you, applied to the left and right cameras respectively, with r_zuo = R^(1/2) and r_you = R^(-1/2);
Step S322, constructing e1, e2, e3 through the offset matrix T, making the left and right epipolar lines parallel through e1, e2, e3, and then transforming the poles of the left view to infinity through the constructed transformation matrix R_rect; wherein T = [Tx, Ty, Tz]^T,
e1 = T / ||T||
e2 = [-Ty, Tx, 0]^T / sqrt(Tx² + Ty²)
e3 = e1 × e2
R_rect = [e1^T; e2^T; e3^T]
wherein T is the offset matrix; R_rect is the constructed transformation matrix; Tx, Ty, Tz are the components of the offset matrix; e1, e2, e3 are the components of the transformation matrix;
step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera by the corresponding overall rotation matrixes in sequence so that the main optical axes of the two coordinate systems become parallel, wherein each overall rotation matrix can be obtained by multiplying the transformation matrix by the corresponding half-rotation matrix; the expressions are as follows:
R_zuo = R_rect · r_zuo  (5)
R_you = R_rect · r_you  (6)
in formulas (5) to (6), R_zuo and R_you are the synthetic rotation matrixes of the industrial left-eye camera and the industrial right-eye camera respectively; R_rect is the matrix that transforms the camera poles to infinity.
7. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 6, wherein the step S4 specifically comprises:
step S401, filtering the corrected first image and the corrected second image by adopting a median filtering algorithm, wherein the median filter output formula is:
f̂(x, y) = median{ g(s, t) : (s, t) ∈ S_xy }  (7)
in formula (7), g(s, t) is the gray value of the original image, f̂(x, y) is the gray value of the filtered image, and S_xy is the set of coordinates covered by the filtering template centered at (x, y);
step S402, adopting the Laplacian operator to sharpen the filtered image, wherein the expression is as follows:
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)  (8)
the coefficient -4 of f(x, y) in formula (8) is the center coefficient of the Laplacian mask;
s403, setting a proper threshold value through a canny operator to perform edge detection on the image to obtain an image contour, and screening out target contours meeting conditions through detecting the contour by the perimeter, the area and the roundness of the image contour;
fitting the target contour meeting the conditions, obtaining the circle center coordinates of the first image and the second image, averaging the circle center coordinates of the first image and the second image, and storing the circle center coordinates of the average value;
the expression of the Canny operator is as follows:
G(x, y, σ) = 1/(2πσ²) · exp(-(x² + y²)/(2σ²))  (9)
I(x, y) = G(x, y, σ) * f(x, y)  (10)
M = sqrt(I_X(x, y)² + I_Y(x, y)²)  (11)
θ = arctan(I_Y(x, y) / I_X(x, y))  (12)
in formulas (9) to (12), G(x, y, σ) is the two-dimensional Gaussian function; σ is the standard deviation of the Gaussian function, and the filter coverage area increases as σ increases; f(x, y) is the gray value of the original image; I(x, y) is the gray value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of the image I(x, y) in the x and y directions; M is the gradient strength at the point, and θ is its gradient direction.
8. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 7, wherein the step S5 specifically comprises:
step S501, obtaining the relation between the parallax and the three-dimensional coordinates according to the similar-triangle geometry principle:
(b - (X_zuo - X_you)) / b = (Z_w - f) / Z_w  (13)
step S502, letting (u1 - u2) = X_zuo - X_you = d, obtaining:
Z_w = f·b / d  (14)
step S503, calculating the three-dimensional coordinates of the circle center position according to formula (15):
X_w = b·u1 / d, Y_w = b·v1 / d, Z_w = b·f / d  (15)
in formulas (13) to (15), X_w is the abscissa of the center point in the world coordinate system; Y_w is the ordinate of the center point in the world coordinate system; Z_w is the vertical coordinate of the center point in the world coordinate system; X_zuo is the abscissa of the center point in the image coordinate system of the imaging plane of the industrial left-eye camera; X_you is the abscissa of the center point in the image coordinate system of the imaging plane of the industrial right-eye camera; u1 is the abscissa of the projection of the center point in the first image; u2 is the abscissa of the projection of the center point in the second image; v1 is the ordinate of the projection of the center point in the first image; (u1 - u2) = X_zuo - X_you is the parallax d of the target point; f is the focal length of the two cameras; b is the baseline distance between the two cameras.
CN202110040531.3A 2021-01-13 2021-01-13 Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision Pending CN112767338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040531.3A CN112767338A (en) 2021-01-13 2021-01-13 Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision


Publications (1)

Publication Number Publication Date
CN112767338A true CN112767338A (en) 2021-05-07

Family

ID=75699898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110040531.3A Pending CN112767338A (en) 2021-01-13 2021-01-13 Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision

Country Status (1)

Country Link
CN (1) CN112767338A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130058581A1 (en) * 2010-06-23 2013-03-07 Beihang University Microscopic Vision Measurement Method Based On Adaptive Positioning Of Camera Coordinate Frame
CN107133983A (en) * 2017-05-09 2017-09-05 河北科技大学 Bundled round steel end face binocular vision system and space orientation and method of counting
CN109166153A (en) * 2018-08-21 2019-01-08 江苏德丰建设集团有限公司 Tower crane high altitude operation 3-D positioning method and positioning system based on binocular vision
CN109493313A (en) * 2018-09-12 2019-03-19 华中科技大学 A kind of the coil of strip localization method and equipment of view-based access control model
CN110189382A (en) * 2019-05-31 2019-08-30 东北大学 A kind of more binocular cameras movement scaling method based on no zone of mutual visibility domain
CN111062990A (en) * 2019-12-13 2020-04-24 哈尔滨工程大学 Binocular vision positioning method for underwater robot target grabbing


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王洋: "基于双目视觉的工件定位与尺寸测量研究", 《中国优秀硕士学位论文全文数据库》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612565A (en) * 2022-05-12 2022-06-10 中建安装集团有限公司 Prefabricated part positioning method and system based on binocular vision
CN115082557A (en) * 2022-06-29 2022-09-20 中交第二航务工程局有限公司 Tower column hoisting relative attitude measurement method based on binocular vision
CN115082557B (en) * 2022-06-29 2024-03-15 中交第二航务工程局有限公司 Binocular vision-based tower column hoisting relative attitude measurement method
CN115126267A (en) * 2022-07-25 2022-09-30 中建八局第三建设有限公司 Optical positioning control system and method applied to concrete member embedded joint bar alignment
CN115306165A (en) * 2022-08-25 2022-11-08 中国建筑第二工程局有限公司 Assembly type prefabricated part mounting system

Similar Documents

Publication Publication Date Title
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN112767338A (en) Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision
US10690492B2 (en) Structural light parameter calibration device and method based on front-coating plane mirror
CN111612853B (en) Camera parameter calibration method and device
CN110146038B (en) Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part
CN110296667B (en) High-reflection surface three-dimensional measurement method based on line structured light multi-angle projection
CN109029299B (en) Dual-camera measuring device and method for butt joint corner of cabin pin hole
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN112907676A (en) Calibration method, device and system of sensor, vehicle, equipment and storage medium
Boochs et al. Increasing the accuracy of untaught robot positions by means of a multi-camera system
CN112816949B (en) Sensor calibration method and device, storage medium and calibration system
CN104315978A (en) Method and device for measuring pipeline end face central points
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN114705122B (en) Large-view-field stereoscopic vision calibration method
CN110827360B (en) Photometric stereo measurement system and method for calibrating light source direction thereof
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN110595374B (en) Large structural part real-time deformation monitoring method based on image transmission machine
KR101597163B1 (en) Method and camera apparatus for calibration of stereo camera
CN116740187A (en) Multi-camera combined calibration method without overlapping view fields
JP3696336B2 (en) How to calibrate the camera
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN114963981B (en) Cylindrical part butt joint non-contact measurement method based on monocular vision
CN116147477A (en) Joint calibration method, hole site detection method, electronic device and storage medium
CN113781579B (en) Geometric calibration method for panoramic infrared camera
CN116071433A (en) Camera calibration method and system, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507