Weld seam localization method and weld offset acquisition method for a vision navigation system of a wall-climbing robot for weld seam inspection
Technical field
The present invention relates to the field of repair and maintenance of wind-power towers.
Background technology
In recent years the wind-power industry has flourished and the number of wind-power towers has surged, so the amount of repair and maintenance work required for the towers has increased accordingly. A typical wind-power tower is 50 to 100 meters high and is divided into several sections, each of which is roll-bent from a single steel plate and welded. Owing to limitations of welding technology and welding precision, the weld seams inevitably exhibit defects such as cold joints, pores, and slag inclusions, which can further develop into cracks while the wind farm is in service. Because most wind farms are built in places with harsh natural environments such as the sea, valleys, and mountain passes, these defects become major safety hazards during turbine operation. Up to now, repair and maintenance of wind-power towers has been carried out by suspended manual operation, which carries high risk, so machines that can work under such extreme conditions are urgently needed to replace manual labor. Under these extreme conditions it is difficult to train the robot's working process by automated guidance, so the robot needs a more intelligent control system to adapt to the working environment and carry out the related work.
Summary of the invention
In order to solve the problem that the prior art cannot realize automated guidance of the repair and maintenance of towers in places with harsh natural environments, a weld seam localization method and a weld offset acquisition method are provided for a vision navigation system of a wall-climbing robot for weld seam inspection.
The vision navigation system of the wall-climbing robot for weld seam inspection of the present invention comprises a charge-coupled device (CCD) camera, a cross laser transmitter, and a computer.
The CCD camera and the cross laser transmitter are fixed at the very front of the robot head, with the cross laser transmitter directly above the CCD camera. The laser emitted by the cross laser transmitter forms a cross-shaped light spot on the surface of the welded workpiece, and the CCD camera photographs this cross spot on the workpiece surface. The angle between the optical axis of the laser beam emitted by the cross laser transmitter and the optical axis of the CCD camera is 45°, and the data output of the CCD camera is connected to the data input of the computer.
The weld seam localization method comprises the following steps:
Step 1: the cross laser transmitter emits a red cross-shaped line laser beam; the beam irradiates the surface of the welded workpiece and forms a cross spot there; this beam scans the workpiece surface, and during the scan the CCD camera captures video of the workpiece surface; go to step 2.
Step 2: the computer receives the video captured by the CCD camera.
Step 3: the computer processes each frame of the video as follows:
Step 3-1: divide the frame into pixels, the value of each pixel representing the red, yellow, and blue color components of that image point;
Step 3-2: store the numerical values of the red, yellow, and blue color components of the frame in three separate arrays;
Step 3-3: the computer applies the two-dimensional maximum-entropy segmentation method to the array of the red component of the frame to obtain the optimal threshold of the gray values corresponding to the red-component values;
Step 3-4: compare the gray value of every pixel of the frame with the optimal threshold obtained in step 3-3; set the gray value of pixels above the threshold to 255 and of pixels below the threshold to 0, converting the frame into a binary image;
Step 3-5: perform edge detection on the binary image obtained in step 3-4 with the Canny operator to obtain the edge information of the binary image, which appears as the edge of the cross spot in the frame;
Step 3-6: extract the edge information from step 3-5 and take the center line of the cross-spot edge as the skeleton, obtaining a smooth skeleton curve one pixel wide;
Step 3-7: apply the Hough transform for straight lines to the skeleton curve obtained in step 3-6 to obtain the two straight lines on either side of the arc, mark the two end points of the arc, locate the arc portion, and thereby detect the position of the weld seam in the frame.
In the weld offset acquisition method, after the weld seam has been localized, three feature points are extracted: the two end points of the weld-seam arc and the center point of the cross light. The distance between the two end points is a, and the distance from the center point to the adjacent end point of the weld seam is b. Let the measured values of a and b be a' and b'. The deviation angle can then be obtained, and the position offset is:
w=b'-b, (15)
When a' = a and b' = b, the state is normal and the robot is traveling correctly; when a' > a and b' ≤ b, the robot has drifted to the left; when a' > a and b' ≥ b, the robot has drifted to the right.
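The decision rules above can be sketched directly in code. The following is a minimal Python illustration; the function name, return convention, and comparison tolerance are assumptions for illustration, not part of the invention:

```python
def classify_drift(a, b, a_meas, b_meas, tol=1e-6):
    """Classify the robot's drift from the nominal distances a, b and their
    measured values a_meas, b_meas (a', b' in the text), per the rules above.
    Returns a label and the position offset w = b' - b of equation (15)."""
    w = b_meas - b
    if abs(a_meas - a) <= tol and abs(b_meas - b) <= tol:
        return "normal", w        # robot travelling correctly
    if a_meas > a and b_meas <= b:
        return "left", w          # drifted to the left
    if a_meas > a and b_meas >= b:
        return "right", w         # drifted to the right
    return "undetermined", w      # outside the cases listed in the text
```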
With the vision navigation system of the wall-climbing robot for weld seam inspection, the present invention enables the robot to search for the weld seam autonomously; the weld seam localization method determines the weld position and tracks the weld direction, and the weld offset acquisition method ensures that the inspection equipment always follows the weld seam without departing from it.
Brief description of the drawings
Fig. 1 is a structural schematic of the vision navigation system of the wall-climbing robot for weld seam inspection; Fig. 2 is the two-dimensional gray-level/neighborhood-gray-average histogram; Fig. 3 is a schematic of the weld bead contour; Fig. 4 is a schematic projection of the intersection of the laser projection and the weld seam in a drift condition; Fig. 5 contrasts the drifted and normal conditions of the laser projection and weld seam intersection; Fig. 6 is the equivalent diagram of Fig. 5.
Embodiment
Embodiment one. This embodiment is described with reference to Fig. 1. The vision navigation system of the wall-climbing robot for weld seam inspection of this embodiment comprises a CCD camera 1, a cross laser transmitter 2, and a computer.
The CCD camera 1 and the cross laser transmitter 2 are fixed at the very front of the robot head, with the cross laser transmitter 2 directly above the CCD camera 1. The laser emitted by the cross laser transmitter 2 forms a cross spot on the surface of the welded workpiece, and the CCD camera 1 photographs this cross spot on the workpiece surface. The angle between the optical axis of the laser beam emitted by the cross laser transmitter 2 and the optical axis of the CCD camera 1 is 45°, and the data output of the CCD camera 1 is connected to the data input of the computer.
Embodiment two. The weld seam localization method based on the vision navigation system of the wall-climbing robot for weld seam inspection described in embodiment one comprises the following steps:
Step 1: the cross laser transmitter 2 emits a red cross-shaped line laser beam; the beam irradiates the surface of the welded workpiece and forms a cross spot there; this beam scans the workpiece surface, and during the scan the CCD camera 1 captures video of the workpiece surface; go to step 2.
Step 2: the computer receives the video captured by the CCD camera.
Step 3: the computer processes each frame of the video as follows:
Step 3-1: divide the frame into pixels, the value of each pixel representing the red, yellow, and blue color components of that image point;
Step 3-2: store the numerical values of the red, yellow, and blue color components of the frame in three separate arrays;
Step 3-3: the computer applies the two-dimensional maximum-entropy segmentation method to the array of the red component of the frame to obtain the optimal threshold of the gray values corresponding to the red-component values;
Step 3-4: compare the gray value of every pixel of the frame with the optimal threshold obtained in step 3-3; set the gray value of pixels above the threshold to 255 and of pixels below the threshold to 0, converting the frame into a binary image;
Step 3-5: perform edge detection on the binary image obtained in step 3-4 with the Canny operator to obtain the edge information of the binary image, which appears as the edge of the cross spot in the frame;
Step 3-6: extract the edge information from step 3-5 and take the center line of the cross-spot edge as the skeleton, obtaining a smooth skeleton curve one pixel wide;
Step 3-7: apply the Hough transform for straight lines to the skeleton curve obtained in step 3-6 to obtain the two straight lines on either side of the arc, mark the two end points of the arc, locate the arc portion, and thereby detect the position of the weld seam in the frame.
The concrete steps of the Hough transform in step 3-7 for obtaining the two straight lines on either side of the arc are:
Step 3-7-1: quantize the (ρ, θ) space; the skeleton curve ρ = x cos θ + y sin θ yields a two-dimensional accumulator matrix A(ρ, θ), which is initialized to all zeros;
Step 3-7-2: for each pixel coordinate (x, y) with non-zero gray value in the image, compute the corresponding ρ for each quantized value of θ, and let a(i, j) + 1 → a(i, j), where a(i, j) is the element in row i, column j of the matrix A(ρ, θ);
Step 3-7-3: after all non-zero-gray-value pixel coordinates (x, y) have been processed, analyze A(ρ, θ); if A(ρ, θ) is greater than a threshold T, a line segment exists and (ρ, θ) are its fitting parameters; T is a non-negative integer determined from prior knowledge of the scene in the image;
Step 3-7-4: determine the line segments in the image jointly from (ρ, θ) and (x, y), and connect broken portions to obtain straight-line segments.
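Steps 3-7-1 to 3-7-3 can be sketched as a voting accumulator. Below is a minimal Python illustration under assumed quantization steps (1-pixel ρ bins, 180 θ bins); it is not the patented implementation, only the standard Hough accumulation the steps describe:

```python
import math

def hough_lines(points, rho_step=1.0, theta_steps=180, threshold=2):
    """Accumulate votes over quantized (rho, theta) cells for the given
    non-zero pixel coordinates, and return every cell whose vote count
    exceeds the threshold T, together with its count (steps 3-7-1 to 3-7-3)."""
    acc = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = k * math.pi / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)  # equation (13)
            cell = (round(rho / rho_step), k)
            acc[cell] = acc.get(cell, 0) + 1                 # a(i,j)+1 -> a(i,j)
    return [(i * rho_step, k * math.pi / theta_steps, n)
            for (i, k), n in acc.items() if n > threshold]
```

For example, collinear skeleton pixels all vote into one (ρ, θ) cell, which is how the two straight lines flanking the weld-seam arc emerge from the accumulator.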
Embodiment three. This embodiment is described with reference to Fig. 2. It differs from embodiment two in the method of obtaining the optimal threshold of the gray values corresponding to the red-component values in step 3-3:
The four pixels immediately above, below, left, and right of a pixel in the image, together with its four diagonal neighbors, form the 8-pixel neighborhood of that pixel. The two-dimensional maximum-entropy segmentation method builds the two-dimensional histogram of the image over pixel gray value and neighborhood gray average, and the optimal threshold f(x, y) of the gray values corresponding to the red-component values is obtained by maximizing the two-dimensional entropy, where (x, y) is the pixel coordinate.
Regions A and B of the two-dimensional histogram, distributed along the diagonal, represent the target and the background respectively; regions C and D, away from the diagonal, represent edges and noise. Determining the optimal threshold within regions A and B by the point-gray/area-gray-average two-dimensional maximum-entropy method maximizes the amount of information that genuinely represents the target and the background. The discriminant function of the entropy is:
f(x_0, y_0) = log_2[P_A(1 − P_A)] + H_A/P_A + (H_L − H_A)/(1 − P_A), (2)
The optimal threshold is:
f(x, y) = max{f(x_0, y_0)}, (3)
Wherein, the number of gray levels of the image is L and the total number of pixels is N = m × n; g_{i,j} is the number of pixels whose gray level is i and whose area gray average is j; p_{i,j} is the probability of occurrence of the gray-level/area-gray-average pair (i, j), that is, p_{i,j} = g_{i,j}/N; and {p_{i,j}, i, j = 1, 2, …, L} is the two-dimensional histogram of the image over gray level and area gray average.
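Under these definitions, the threshold search of equations (2) and (3) can be sketched as an exhaustive scan over candidate (gray, neighborhood-average) thresholds. The following Python sketch works on a toy list of (i, j) pairs rather than a real image, and region handling is simplified (region A is the lower-left block of the histogram); it is illustrative only:

```python
import math

def max_entropy_2d(pairs, L=8):
    """Toy 2D maximum-entropy threshold search. pairs: list of (i, j) with
    1 <= i, j <= L, where i is the gray level and j the area gray average.
    Builds p[i][j] = g[i][j]/N and maximizes the discriminant of eq. (2)."""
    N = len(pairs)
    g = [[0] * (L + 1) for _ in range(L + 1)]
    for i, j in pairs:
        g[i][j] += 1
    p = [[g[i][j] / N for j in range(L + 1)] for i in range(L + 1)]
    # total 2D entropy H_L over all non-zero histogram cells
    H_L = -sum(p[i][j] * math.log2(p[i][j])
               for i in range(1, L + 1) for j in range(1, L + 1) if p[i][j] > 0)
    best, best_f = None, -math.inf
    for s in range(1, L):          # candidate gray threshold
        for t in range(1, L):      # candidate neighborhood-average threshold
            P_A = sum(p[i][j] for i in range(1, s + 1) for j in range(1, t + 1))
            if P_A <= 0 or P_A >= 1:
                continue
            H_A = -sum(p[i][j] * math.log2(p[i][j])
                       for i in range(1, s + 1) for j in range(1, t + 1)
                       if p[i][j] > 0)
            # discriminant function, equation (2)
            f = math.log2(P_A * (1 - P_A)) + H_A / P_A + (H_L - H_A) / (1 - P_A)
            if f > best_f:
                best, best_f = (s, t), f
    return best, best_f
```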
Embodiment four. This embodiment differs from embodiment two in the concrete steps of edge detection on the binary image in step 3-5:
Step 3-5-1: convolve the optimal-threshold image f(x, y) with a Gaussian filter:
g(x,y)=h(x,y,σ)*f(x,y) (7)
Wherein g(x, y) is the convolved image, h(x, y, σ) is the Gaussian filter function, σ is the standard deviation, and * denotes convolution;
Step 3-5-2: obtain the local gradient M(x, y) and the edge direction θ(x, y) of the convolved image g(x, y) by the first-order finite-difference method:
g'_x(x,y) ≈ G_x(x,y) = [g(x+1,y) − g(x,y) + g(x+1,y+1) − g(x,y+1)]/2, (9)
g'_y(x,y) ≈ G_y(x,y) = [g(x,y+1) − g(x,y) + g(x+1,y+1) − g(x+1,y)]/2, (10)
θ(x,y) = arctan(G_x(x,y)/G_y(x,y)), (12)
Wherein G_x(x, y) denotes the horizontal gradient of image g(x, y); G_y(x, y) denotes the vertical gradient of g(x, y); M(x, y) denotes the local gradient of g(x, y); and θ(x, y) denotes the edge direction of g(x, y).
Step 3-5-3: for each pixel of the image, compare its local gradient M(x, y) with the two neighboring pixels along the gradient line; if M(x, y) is less than either of those pixels, set M(x, y) to zero and discard the point as a non-edge point; otherwise retain the point as a candidate edge point;
Step 3-5-4: choose two thresholds from the two-dimensional histogram and apply them to the image obtained by non-maximum suppression. Setting to 0 the gray value of pixels whose gradient is less than the smaller threshold yields image 1; setting to 0 the gray value of pixels whose gradient is less than the larger threshold yields image 2. Taking image 2 as the basis and image 1 as the supplement, link the edges to obtain the edge of the cross spot in the image.
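Equations (9) and (10), together with the gradient magnitude used in step 3-5-3, can be sketched as follows. Storing the image row-major as g[y][x] is an assumption of this illustration:

```python
import math

def finite_diff_gradient(g):
    """First-order finite differences of equations (9)-(10) on image g
    (a list of rows, g[y][x]). Returns (Gx, Gy, M), where M is the gradient
    magnitude taken as the local gradient for non-maximum suppression."""
    h, w = len(g), len(g[0])
    Gx = [[0.0] * w for _ in range(h)]
    Gy = [[0.0] * w for _ in range(h)]
    M = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            # eq. (9): x-difference averaged over rows y and y+1
            gx = (g[y][x + 1] - g[y][x] + g[y + 1][x + 1] - g[y + 1][x]) / 2.0
            # eq. (10): y-difference averaged over columns x and x+1
            gy = (g[y + 1][x] - g[y][x] + g[y + 1][x + 1] - g[y][x + 1]) / 2.0
            Gx[y][x], Gy[y][x] = gx, gy
            M[y][x] = math.hypot(gx, gy)
    return Gx, Gy, M
```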
Embodiment five. This embodiment differs from embodiment four in the concrete steps of linking the edge of the cross spot in step 3-5-4:
Step 3-5-4-1: scan image 2; when a pixel with non-zero gray value is encountered, trace the contour line starting from that pixel until its end point;
Step 3-5-4-2: examine, in image 1, the 8-pixel neighborhood of the pixel corresponding to the end position of the contour line in image 2; if a pixel with non-zero gray value exists in that 8-pixel neighborhood, include it in image 2 as a new starting point, then repeat step 3-5-4-1 until no continuation can be found in either image 1 or image 2;
Step 3-5-4-3: after the linking of the contour line containing a given pixel is completed, mark that contour line as visited;
Repeat steps 3-5-4-1, 3-5-4-2, and 3-5-4-3 until no new contour line can be found in image 2.
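Steps 3-5-4-1 to 3-5-4-3 amount to hysteresis edge linking: pixels of image 2 (above the larger threshold) seed contours that are extended through image-1 pixels found in their 8-neighborhoods. The following minimal sketch operates on coordinate sets rather than pixel arrays, a simplification of this illustration:

```python
def hysteresis_link(strong, weak):
    """Link edges starting from 'strong' seeds (image 2) and extending through
    8-connected 'weak' pixels (image 1). Both arguments are sets of (x, y)
    coordinates of non-zero pixels; returns the set of linked edge pixels."""
    edges = set()
    stack = list(strong)
    while stack:
        p = stack.pop()
        if p in edges:
            continue                      # contour already visited
        edges.add(p)
        x, y = p
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                q = (x + dx, y + dy)
                # continue the contour through any 8-neighbor present in
                # either image; isolated weak pixels are never reached
                if q != p and q not in edges and (q in strong or q in weak):
                    stack.append(q)
    return edges
```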
Embodiment six. This embodiment differs from embodiment two in that the skeleton curve described in step 3-6 is:
ρ=xcosθ+ysinθ; (13)
Wherein ρ = x cos θ + y sin θ maps a point (x, y) of image space to a sinusoid in (ρ, θ) space: x is the abscissa of the pixel, y is the ordinate of the pixel, ρ is the distance from the image-coordinate origin to the straight line, and θ is the angle between the normal of the straight line and the x axis.
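Equation (13) maps each image point to a sinusoid in (ρ, θ) space, so collinear points share a common (ρ, θ). A one-line check of that property (illustrative only):

```python
import math

def rho_curve(x, y, theta):
    """rho = x*cos(theta) + y*sin(theta), equation (13): the sinusoid in
    (rho, theta) space traced by the image point (x, y)."""
    return x * math.cos(theta) + y * math.sin(theta)
```

For example, the points (1, 3), (2, 2), and (3, 1) on the line x + y = 4 all yield the same ρ at θ = π/4, so their sinusoids intersect at the cell representing that line.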
Embodiment seven. This embodiment is described with reference to Figs. 3, 4, 5, and 6. In the weld offset acquisition method based on the weld seam localization method of embodiment two, after the weld seam has been localized, three feature points are extracted: the two end points of the weld-seam arc and the center point of the cross light. The distance between the two end points is a, and the distance from the center point to the adjacent end point is b. Let the measured values of a and b be a' and b'. The deviation angle can then be obtained, and the position offset is:
w=b'-b, (15)
When a' = a and b' = b, the state is normal and the robot is traveling correctly; when a' > a and b' ≤ b, the robot has drifted to the left; when a' > a and b' ≥ b, the robot has drifted to the right.
Fig. 5 shows the situation in which the robot drifts to the right; Fig. 6 contrasts the drift state with the standard state. The angle θ is the deviation angle.
Whether the wall-climbing robot is tracked or wheeled, no sudden lateral displacement can occur while it is crawling vertically, so the situation a' = a and b' ≥ b cannot arise instantaneously. Fig. 5 can therefore be treated as equivalent to Fig. 6, from which the deviation angle θ is determined.
When the robot crawls along a horizontal weld seam, a small downward slip may occur because of gravity; in that case a' = a and b' ≥ b, and the position offset is w = b' − b.
Feeding the computed w and θ back to the PID control section of the robot enables real-time correction of the travel trajectory.
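A generic single-step PID update using w (or θ) as the error signal might look as follows. The gains, time step, and state handling are illustrative assumptions, since the text does not specify the controller internals:

```python
def pid_step(error, state, kp=1.0, ki=0.0, kd=0.0, dt=0.1):
    """One step of a textbook PID loop. 'error' is the feedback quantity
    (e.g. the offset w or angle theta); 'state' is (integral, previous_error).
    Returns the control output and the updated state."""
    integral, prev = state
    integral += error * dt                 # accumulate the integral term
    derivative = (error - prev) / dt       # finite-difference derivative
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)
```

With only the proportional gain active (the defaults above), the correction is simply proportional to the measured drift.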