CN102107179A - Method for controlling single-layer leather gluing based on binocular vision - Google Patents
Method for controlling single-layer leather gluing based on binocular vision

- Publication number: CN102107179A (application CN201010587709A)
- Authority: CN (China)
- Prior art keywords: gluing, camera, point, characteristic point, coordinate
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classifications: Image Processing (AREA)
Abstract
The invention provides a method for controlling single-layer leather gluing based on binocular vision, using a binocular camera and a three-axis gluing manipulator. The method comprises the following steps: 1) performing stereo calibration of the fixed binocular camera; 2) acquiring a binocular image of the single-layer leather and extracting its edge contour; 3) extracting features from the edge contour image of step 2); 4) matching the feature points of step 3); 5) recovering, from the reprojection matrix, the real coordinate values in the gluing-platform system corresponding to the matched feature points of step 4); and 6) feeding the real coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing manipulator to control the manipulator to complete the gluing. The method improves working efficiency, makes it convenient to integrate single-layer leather into production-line processing, achieves high precision, and avoids harm to workers' health.
Description
Technical field
The present invention relates to the field of leather gluing methods, in particular to a gluing method for single-layer leather, and especially to a binocular-vision-based control method for single-layer leather gluing that covers the entire detection, recognition, measurement, and gluing workflow.
Background technology
In current production in China, leather gluing is mainly done by hand. Uneven coating and the toxic gases released during gluing are the main problems of manual gluing. Uneven manual gluing makes the subsequent stitching of the leather difficult and reduces the efficiency of leather production, while the toxic gases released during the gluing process seriously endanger workers' health. Mechanized gluing can therefore not only improve the efficiency and technical level of leather production, but also effectively protect workers' health.
Existing image-processing and binocular-vision techniques are mostly incomplete in function; in single-layer leather gluing there is as yet no method that covers the full workflow from detection through recognition and measurement to gluing. In the invention of application No. 200810232122.8, the pixel equivalent of the camera is obtained from a calibration object of known geometric area, which is sensitive to computational accuracy; the leather area thus obtained may be unsuitable for further industrial application, whereas extracting the leather edge and processing its feature points is clearly better suited to industrial use. The invention of application No. 200710190470.9 uses vision to recognize the class of a windshield and glues it according to stored parameters; it merely applies visual recognition, and for unsampled windshields it requires the cumbersome steps of sampling and entering them into a feature database, while the windshield shape is itself comparatively simple. The invention of application No. 200710123727.9 performs gluing along a three-dimensional or two-dimensional point sequence driven by three servo motion controllers, but a large number of points reduces gluing efficiency; for collinear two-dimensional points, only the start and end points should be kept to reduce the point count.
Summary of the invention
To overcome the low efficiency of the existing purely manual leather gluing method, its unsuitability for production-line integration, its low precision, and its harm to workers' health, the invention provides a binocular-vision-based control method for single-layer leather gluing that improves working efficiency, is easy to integrate into production-line processing, achieves high precision, and avoids harming workers' health.
The technical solution adopted by the invention to solve the technical problem is as follows:
A control method for single-layer leather gluing based on binocular vision. The method adopts a binocular camera and a three-axis gluing manipulator, and comprises the following steps:
1) performing stereo calibration of the fixed binocular camera;
2) acquiring a binocular image of the single-layer leather and extracting its edge contour;
3) extracting features from the edge contour image of step 2):
The single-layer leather edge contour consists of a finite set of two-dimensional coordinate points (x_i, y_i), i = 1, 2, …, n, where n is the number of contour points. For any contour point (x_i, y_i), its curvature K_i, i = 1, 2, …, n, is computed by formula (1).
Here, (x_{i+1}, y_{i+1}) is the contour point following (x_i, y_i), and (x_{i-1}, y_{i-1}) is the contour point preceding it.
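As an illustrative sketch only (formula (1) itself appears as an image in the original and is not reproduced here), the curvature K_i can be estimated with the standard three-point central-difference formula, using the preceding and following contour points exactly as described above; the function name and the NumPy implementation are assumptions, not part of the patent:

```python
import numpy as np

def contour_curvature(xs, ys):
    """Three-point curvature estimate K_i at each interior contour point
    (x_i, y_i), computed from the previous point (x_{i-1}, y_{i-1}) and
    the next point (x_{i+1}, y_{i+1}) by central differences:
    K = |x'y'' - y'x''| / (x'^2 + y'^2)^{3/2}."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    dx = (xs[2:] - xs[:-2]) / 2.0            # x'
    dy = (ys[2:] - ys[:-2]) / 2.0            # y'
    ddx = xs[2:] - 2.0 * xs[1:-1] + xs[:-2]  # x''
    ddy = ys[2:] - 2.0 * ys[1:-1] + ys[:-2]  # y''
    denom = (dx * dx + dy * dy) ** 1.5
    denom[denom == 0] = np.inf               # flat spots get curvature 0
    return np.abs(dx * ddy - dy * ddx) / denom
```

On a sampled circle of radius R this returns approximately 1/R, and on a straight segment it returns 0, which is the behaviour the candidate-point threshold of step (3.2) relies on.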
Feature extraction proceeds as follows:
(3.1) compute the local contour curvature threshold T(i), where B is a proportionality coefficient selected in [1, 2];
(3.2) coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, s < n, where s is the number of candidate feature points;
(3.3) compute the angle θ_{i+1} formed by the two tangent lines at each candidate feature point: with candidate feature point P_i as start point and P_{i+1} as end point, take the midpoint P_{(i+(i+1))/2} of the segment joining them and compute the theoretical circle center by formula (2); likewise, with P_{i+1} as start point and P_{i+2} as end point, take the midpoint P_{((i+1)+(i+2))/2} and obtain its theoretical circle center by formula (2); then use formula (3) to compute the angle between the line joining P_{i+1} to the first circle center and the tangent at P_{i+1}, and the angle between the line joining P_{i+1} to the second circle center and the tangent at P_{i+1};
(3.4) judge from the value of θ_{i+1} whether the point is a required feature point, and mark the segment shape between feature points: if θ_{i+1} is greater than 170 degrees, the local contour curvature at this point is considered to change insignificantly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point, and the segment between feature points is marked Arc;
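Formulas (2) and (3) appear as images in the original, so the tangent construction through theoretical circle centers cannot be reproduced exactly. As a simplified, hypothetical stand-in for steps (3.3)-(3.4), the angle at each interior candidate point can be approximated by the turning angle between its neighbouring candidates and compared against the 170-degree threshold of step (3.4):

```python
import math

def classify_candidates(points, angle_deg=170.0):
    """Simplified stand-in for steps (3.3)-(3.4): the angle at each
    interior candidate P_{i+1} is approximated by the angle between the
    vectors P_{i+1}->P_i and P_{i+1}->P_{i+2}.  Angles above the
    170-degree threshold mean the local curvature change is not obvious,
    so the point is dropped and the segment is marked Straight;
    otherwise the point is kept and the segment is marked Arc."""
    kept, shapes = [], []
    for i in range(1, len(points) - 1):
        (xa, ya), (xb, yb), (xc, yc) = points[i - 1], points[i], points[i + 1]
        v1 = (xa - xb, ya - yb)
        v2 = (xc - xb, yc - yb)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cosine = dot / (math.hypot(*v1) * math.hypot(*v2))
        theta = math.degrees(math.acos(max(-1.0, min(1.0, cosine))))
        if theta > angle_deg:
            shapes.append("Straight")   # curvature change not obvious
        else:
            kept.append(points[i])
            shapes.append("Arc")
    return kept, shapes
```

Collinear candidates yield 180 degrees and are discarded as Straight; a sharp corner yields a small angle and is kept as a feature point with an Arc segment.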
4) match the feature points of step 3). Feature matching proceeds as follows:
(4.1) take the feature points of the left image of the binocular pair as reference, i = 0, 1, 2, …, m, m < s, where m is the number of feature points in the left image;
(4.2) compute the Euclidean distance between the reference point and every feature point of the right image lying within 2 pixels of it, i = 0, 1, 2, …, k, k < m, where k is the number of right-image feature points within the 2-pixel range;
(4.3) take the right-image feature point with the minimum Euclidean distance as the matched point, thereby establishing the one-to-one correspondence between the left- and right-image feature points of the single-layer leather edge contour;
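A minimal sketch of steps (4.1)-(4.3), assuming that after row rectification the "within 2 pixels" neighbourhood is taken along the row direction (the reconstruction tables of the embodiment show left/right pairs with large column disparities but small row differences); the helper name and this reading of the neighbourhood are assumptions:

```python
import math

def match_features(left_pts, right_pts, band=2.0):
    """For each left-image feature point, consider right-image feature
    points whose row (y) differs by at most `band` pixels, and take the
    one with the minimum Euclidean distance as the match.  Points with
    no candidate in range are left unmatched (an assumption; the patent
    does not spell this case out)."""
    matches = {}
    for lp in left_pts:
        in_band = [rp for rp in right_pts if abs(lp[1] - rp[1]) <= band]
        if in_band:
            matches[lp] = min(in_band, key=lambda rp: math.dist(lp, rp))
    return matches
```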
5) recover, by formula (7), the real coordinate values in the gluing-platform system corresponding to the matched feature points of step 4);
6) feed the real coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing manipulator, so that the manipulator is controlled to complete the gluing process.
Further, in step 6), gluing is controlled according to the segment shapes of step 3) and formula (4);
Here, L(P_i, P_{i+1}) denotes the shape of the single-layer leather edge segment between feature points P_i and P_{i+1}: Straight denotes a line segment, glued by linear interpolation, and Arc denotes a circular arc, glued by curve interpolation; i = 0, 1, 2, …, m−1, where m is the number of edge contour feature points.
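Formula (4) appears as an image in the original; the sketch below shows only its dispatch, under stated assumptions: Straight segments are expanded by linear interpolation into the gluing queue, while Arc segments are tagged so the manipulator's motion controller can apply its own curve interpolation, as the text describes. The function name and step size are illustrative:

```python
import math

def gluing_queue(features, shapes, step=1.0):
    """Build the gluing coordinate queue from matched feature points and
    the per-segment shapes of step 3).  Straight segments are expanded
    by linear interpolation with spacing `step`; Arc segments are handed
    to the controller's circular-interpolation mode as endpoint pairs."""
    queue = []
    for (p, q), shape in zip(zip(features, features[1:]), shapes):
        if shape == "Straight":
            n = max(1, int(math.dist(p, q) / step))
            for k in range(n):
                t = k / n
                queue.append(("move",
                              (p[0] + t * (q[0] - p[0]),
                               p[1] + t * (q[1] - p[1]))))
        else:  # Arc: delegate curve interpolation to the controller
            queue.append(("arc", p, q))
    queue.append(("move", features[-1]))
    return queue
```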
Further, in step 1), images of a calibration object photographed in different poses are used for calibration analysis; the intrinsic parameters (k, l, u0, v0, f, θ) and the extrinsic parameters of each camera are obtained from the linear camera model (5), and the 3 × 4 matrix M_{3×4} obtained as the product of the intrinsic and extrinsic camera parameters is the perspective projection matrix;
Here, k and l are the pixel sizes of the binocular camera, (u0, v0) are the coordinates of the camera's optical-axis center, f is the camera focal length, θ is the skew of the camera coordinate system (normally 90°), R_{3×3} is the rotation matrix of the camera, t_{3×1} is the translation matrix of the camera, (u, v, 1)^T is any point in the image, and (X_W, Y_W, Z_W, 1)^T is the corresponding point of the gluing coordinate system;
The image rectification process is as follows:
(1.1) from formula (6) and the intrinsic and extrinsic parameters of the left and right cameras, obtain the rotation and translation matrices of the right camera relative to the left camera, and rectify the left and right images into row alignment:
R = R_r(R_l)^T
T = T_r − R·T_l    (6)
Here, R_l and R_r are the rotation matrices of the left and right cameras, T_l and T_r are the translation matrices of the left and right cameras, and R and T are the rotation and translation matrices of the right camera relative to the left camera.
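The relative pose of formula (6) can be written directly in NumPy; "(R_l)T" in the text denotes the transpose of R_l. The function name is illustrative:

```python
import numpy as np

def relative_pose(R_l, T_l, R_r, T_r):
    """Formula (6): rotation R and translation T of the right camera
    relative to the left camera, from the per-camera extrinsics.
    R = R_r (R_l)^T,  T = T_r - R T_l."""
    R = R_r @ R_l.T
    T = T_r - R @ T_l
    return R, T
```

For two identical cameras this yields R = I and T = 0, as expected; a pure shift in the left camera's extrinsic translation appears with opposite sign in T.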
(1.2) from formula (7) and the camera parameters above, obtain the reprojection matrix Q:
Here, T_x is the x component of the extrinsic translation vector T of the binocular camera, c_x and c_y are the world coordinates of the optical center of the left camera, and f is the focal length of the left camera.
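Formula (7) itself appears as an image in the original. The layout below is the reprojection matrix produced by the standard stereo rectification described in the "Learning OpenCV" reference the embodiment cites, and is therefore an assumption (the two principal points are taken as equal, so the last entry of the bottom row is zero). The reconstruction of step 5) then follows by multiplying Q onto the pixel coordinates and disparity of a matched pair:

```python
import numpy as np

def make_Q(cx, cy, f, Tx):
    """Assumed reprojection matrix layout (cf. Learning OpenCV):
    cx, cy: principal point of the left camera; f: its focal length;
    Tx: x component of the extrinsic translation (the baseline)."""
    return np.array([[1.0, 0.0,  0.0,      -cx],
                     [0.0, 1.0,  0.0,      -cy],
                     [0.0, 0.0,  0.0,        f],
                     [0.0, 0.0, -1.0 / Tx, 0.0]])

def reproject(Q, u_left, v_left, u_right):
    """Recover platform coordinates from a matched pixel pair (step 5);
    the disparity of a row-aligned pair is d = u_left - u_right."""
    d = u_left - u_right
    X, Y, Z, W = Q @ np.array([u_left, v_left, d, 1.0])
    return X / W, Y / W, Z / W
```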
In step 2), the single-layer leather image is converted to a grayscale image, with threshold filtering applied according to the different RGB values of the leather and of the noise; the detailed process is as follows:
(2.1) smooth the image with a Gaussian function;
(2.2) apply the closing operation of mathematical morphology;
(2.3) extract the single-pixel leather edge contour with the Canny operator.
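A plain-NumPy sketch of the preprocessing of steps (2.1)-(2.3): the embodiment uses OpenCV, the threshold value here is illustrative, the 3 × 3 kernel sizes follow the embodiment, and the final Canny edge extraction is left to a library implementation:

```python
import numpy as np

def _filter3(img, reduce_fn):
    """Apply a 3x3 neighbourhood reduction (np.max for dilation,
    np.min for erosion) with edge replication."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return reduce_fn(np.stack(stack), axis=0)

def preprocess(gray, thresh=128):
    """Keep pixels above the gray threshold, smooth with a 3x3 Gaussian
    kernel, then apply one closing (dilation followed by erosion)."""
    mask = np.where(gray > thresh, gray, 0).astype(float)
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    p = np.pad(mask, 1, mode="edge")
    smooth = sum(g[r, c] * p[r:r + mask.shape[0], c:c + mask.shape[1]]
                 for r in range(3) for c in range(3))
    return _filter3(_filter3(smooth, np.max), np.min)
```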
In step 2), light sources are installed around the gluing platform, and the binocular camera is arranged directly above the center of the gluing platform.
The technical concept of the invention is as follows: using vision and image techniques, image data are acquired by a binocular camera and analyzed to obtain fairly accurate leather feature points, which are converted into machining coordinates of the gluing device. The advantages of this method are non-contact gluing, low implementation difficulty, low cost, high accuracy, and no harm to the human body.
The shape of the single-layer leather edge is unknown: a treated piece has a fairly regular edge, while an untreated piece has a rather complex one. Besides edge detection, the edge contour feature points must also be recognized with an improved curvature-based method. During gluing, the feature-point sequence serves as the set of gluing control points, and the edge segments between feature points determine whether the three-axis manipulator glues in straight-line or curve mode. Following this purpose and these requirements, the invention effectively completes the entire detection-recognition-measurement-gluing workflow for single-layer leather.
Integrating this vision-based recognition technique into the conventional leather gluing process effectively raises its level of intelligence and automation, avoids the potential harm of the gluing process to workers' health, and realizes a closed-loop control flow that both protects workers' health and improves production efficiency.
The binocular camera fixed directly above the center of the gluing platform acquires the image of the single-layer leather to be glued, with the light sources providing high image quality. Threshold filtering, Gaussian smoothing, and the closing operation denoise the image to obtain a fairly smooth leather grayscale image; the Canny operator then extracts a single-pixel leather edge contour; the improved curvature-based feature extraction algorithm extracts and matches the edge contour feature points; finally, the matched feature points are reconstructed into coordinates of the corresponding gluing coordinate system, and gluing is completed according to the segment shapes of the edge contour between feature points.
The beneficial effects of the invention are mainly: 1. the method covers the entire detection-recognition-measurement-gluing workflow for leather, aids the industrial automation of gluing, and needs no worker involvement; 2. binocular-vision-based gluing is non-contact and highly precise; 3. the improved curvature-based feature extraction algorithm extracts feature points effectively while reducing the number of coordinates fed into the manipulator's machining program, increasing gluing speed.
Description of drawings
Fig. 1 shows the gluing apparatus of the invention.
Fig. 2 is a flowchart of the gluing method of the invention.
Fig. 3 shows the edge extraction results of the left and right cameras; the left part is the result of the left camera and the right part that of the right camera; (1), (2), and (3) show three leather sample pieces.
Fig. 4 shows the edge feature extraction results of the left and right cameras; the black boxes mark the extracted features; the left part is the result of the left camera and the right part that of the right camera; (1), (2), and (3) show the same three leather sample pieces.
Fig. 5 illustrates the angle θ_{i+1} formed by the two tangent lines of a candidate feature point.
Fig. 6 shows the matching results of the left and right feature points; black numbers mark the matched feature points; the left part is the result for the left image and the right part for the right image; (1), (2), and (3) show the same three leather sample pieces.
Fig. 7 shows the sign quadrant distribution of the reconstruction results.
Fig. 8 shows the error analysis between the reconstruction results and the actual values.
The specific embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1-8: a control method for single-layer leather gluing based on binocular vision. The method adopts a binocular camera and a three-axis gluing manipulator, and comprises the following steps:
1) performing stereo calibration of the fixed binocular camera;
2) acquiring a binocular image of the single-layer leather and extracting its edge contour;
3) extracting features from the edge contour image of step 2):
The single-layer leather edge contour consists of a finite set of two-dimensional coordinate points (x_i, y_i), i = 1, 2, …, n, where n is the number of contour points. For any contour point (x_i, y_i), its curvature K_i, i = 1, 2, …, n, is computed by formula (1).
Here, (x_{i+1}, y_{i+1}) is the contour point following (x_i, y_i), and (x_{i-1}, y_{i-1}) is the contour point preceding it.
Feature extraction proceeds as follows:
(3.1) compute the local contour curvature threshold T(i), where B is a proportionality coefficient selected in [1, 2];
(3.2) coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, s < n, where s is the number of candidate feature points;
(3.3) compute the angle θ_{i+1} formed by the two tangent lines at each candidate feature point: with candidate feature point P_i as start point and P_{i+1} as end point, take the midpoint P_{(i+(i+1))/2} of the segment joining them and compute the theoretical circle center by formula (2); likewise, with P_{i+1} as start point and P_{i+2} as end point, take the midpoint P_{((i+1)+(i+2))/2} and obtain its theoretical circle center by formula (2); then use formula (3) to compute the angle between the line joining P_{i+1} to the first circle center and the tangent at P_{i+1}, and the angle between the line joining P_{i+1} to the second circle center and the tangent at P_{i+1};
(3.4) judge from the value of θ_{i+1} whether the point is a required feature point, and mark the segment shape between feature points: if θ_{i+1} is greater than 170 degrees, the local contour curvature at this point is considered to change insignificantly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point, and the segment between feature points is marked Arc.
4) match the feature points of step 3). Feature matching proceeds as follows:
(4.1) take the feature points of the left image of the binocular pair as reference, i = 0, 1, 2, …, m, m < s, where m is the number of feature points in the left image;
(4.2) compute the Euclidean distance between the reference point and every feature point of the right image lying within 2 pixels of it, i = 0, 1, 2, …, k, k < m, where k is the number of right-image feature points within the 2-pixel range;
(4.3) take the right-image feature point with the minimum Euclidean distance as the matched point, thereby establishing the one-to-one correspondence between the left- and right-image feature points of the single-layer leather edge contour.
5) recover, by formula (6), the real coordinate values in the gluing-platform system corresponding to the matched feature points of step 4);
6) feed the real coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing manipulator, so that the manipulator is controlled to complete the gluing process.
In the present embodiment, four light sources are installed around the gluing platform to eliminate shadows at the edge of the leather to be glued and provide clearer images. The binocular vision acquisition device, composed of two cameras, is located directly above the center of the platform and connected to the host computer by a USB cable; the computer performs image acquisition and the subsequent analysis to complete the single-layer leather gluing function.
1) Perform stereo calibration of the fixed binocular camera and rectify the images. The binocular camera is fixed directly above the center of the gluing platform, and images for calibration analysis are obtained by photographing a calibration object (a black-and-white checkerboard calibration board whose square sizes are all known) in different poses. According to the linear camera model (4) in (Heikkila, Silven, "A Four-step Camera Calibration Procedure with Implicit Image Correction", CVPR '97, pp. 1106-1112), the intrinsic parameters (k, l, u0, v0, f, θ) and the extrinsic parameters (rotation matrix R_{3×3}, translation matrix t_{3×1}) of each camera are obtained. The 3 × 4 matrix M_{3×4} obtained as the product of the intrinsic and extrinsic camera parameters is the perspective projection matrix.
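The composition of M_{3×4} can be sketched as follows. The exact arrangement of formula (4) is an image in the original, so the intrinsic-matrix layout below is the usual one for the named parameters (k, l, u0, v0, f, θ) and is an assumption:

```python
import numpy as np

def projection_matrix(k, l, u0, v0, f, theta, R, t):
    """Perspective projection matrix M_{3x4} = K_{3x3} @ [R | t] of the
    linear camera model, with pixel sizes k and l, principal point
    (u0, v0), focal length f, and skew angle theta (normally 90 deg);
    R and t are the camera's extrinsic rotation and translation."""
    K = np.array([[f / k, -f / (k * np.tan(theta)), u0],
                  [0.0,    f / (l * np.sin(theta)), v0],
                  [0.0,    0.0,                     1.0]])
    return K @ np.hstack([R, t.reshape(3, 1)])
```

With θ = 90° the skew term vanishes (up to floating-point noise), and the matrix reduces to the familiar zero-skew pinhole projection.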
Here, k and l are the pixel sizes of the binocular camera, (u0, v0) are the coordinates of the optical-axis center of the camera, f is the camera focal length, θ is the skew of the camera coordinate system (normally 90°), R_{3×3} is the rotation matrix of the camera, t_{3×1} is the translation matrix of the camera, (u, v, 1)^T is any point in the image, and (X_W, Y_W, Z_W, 1)^T is the corresponding point of the gluing coordinate system. The M_{3×4} matrices of the left and right cameras computed by formula (4) are as follows:
(1.1) from the extrinsic parameters of the binocular camera, obtain by formula (5) the rotation and translation of the right camera relative to the left camera, and rectify the left and right images into row alignment:
R = R_r(R_l)^T
T = T_r − R·T_l    (5)
Here, R_l and R_r are the rotation matrices of the left and right cameras, T_l and T_r are the translation matrices of the left and right cameras, and R and T are the rotation and translation matrices of the right camera relative to the left camera. The result obtained by formula (5) is as follows:
T = (−11.089262292088367, 0.0631498162929007, 0.0921180076963402)
(1.2) from formula (6) and the camera parameters above, obtain the reprojection matrix Q. Here, T_x is the x component of the extrinsic translation vector T of the binocular camera, c_x and c_y are the world coordinates of the optical center of the left camera, and f is the focal length of the left camera. The result obtained by formula (6) is as follows:
2) Acquire the binocular image of the single-layer leather and extract its edge contour. The contour extraction results are shown in Fig. 3.
(2.1) Owing to the light sources of the gluing platform, the single-layer leather image is threshold-filtered according to the different RGB values of the leather and of the noise. Threshold filtering selects a suitable gray value in the interval [0, 1, 2, …, 255] and keeps the pixels whose gray value exceeds it;
(2.2) the image is smoothed with a Gaussian function to reduce the influence of possible noise on the leather edge; a 3 × 3 Gaussian kernel (G. Bradski, A. Kaehler, "Learning OpenCV", 2008, p. 110) is applied to the threshold-filtered grayscale image;
(2.3) the closing operation of mathematical morphology is used to preserve edge irregularities as much as possible while extracting a complete edge (reference: G. Bradski, A. Kaehler, "Learning OpenCV", 2008, pp. 120-121). The structuring element of the closing is a 3 × 3 rectangle, and one iteration of dilation followed by erosion is applied to the smoothed image to make it smoother;
(2.4) the Canny operator (Fleck, "Some defects in finite-difference edge finders", IEEE PAMI, 1992, 14(3): 337-345) is used to extract the single-pixel edge contour of the single-layer leather image; the low and high thresholds of the Canny operator are set to [0.3, 0.35].
3) Extract features from the edge contour image of step 2). The single-layer leather edge contour in fact consists of a finite set of two-dimensional coordinate points (x_i, y_i); the invention adopts an improved curvature-based method on the contour coordinates (X. C. He, "Curvature Scale Space Corner Detector with Adaptive Threshold and Dynamic Region of Support", ICPR, 2004, vol. 2: 791-794) to extract the contour feature points. For any contour point (x_i, y_i), i = 1, 2, …, n, where n is the number of contour points, the curvature is computed by formula (1). The feature extraction results are shown in Fig. 4, where the black-boxed points are feature points.
(3.1) compute the local contour curvature threshold T(i); B is a proportionality coefficient reasonably selected in [1, 2] and chosen as 1.5 in the invention;
(3.2) coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, s < n, where s is the number of candidate feature points;
(3.3) compute the angle θ_{i+1} formed by the two tangent lines at each candidate feature point: with candidate feature point P_i as start point and P_{i+1} as end point, take the midpoint P_{(i+(i+1))/2} of the segment joining them and compute the theoretical circle center by formula (2); likewise, with P_{i+1} as start point and P_{i+2} as end point, take the midpoint P_{((i+1)+(i+2))/2} and obtain its theoretical circle center by formula (2); then use formula (3) to compute the angle between the line joining P_{i+1} to the first circle center and the tangent at P_{i+1}, and the angle between the line joining P_{i+1} to the second circle center and the tangent at P_{i+1};
(3.4) judge from the value of θ_{i+1} whether the point is a required feature point, and mark the segment shape between feature points: if θ_{i+1} is greater than 170 degrees, the local contour curvature at this point is considered to change insignificantly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point, and the segment between feature points is marked Arc.
4) Match the feature points of step 3). The matching results for the left and right images are shown in Fig. 6, where the black-boxed points are feature points.
(4.1) take the feature points of the left image as reference (i = 0, 1, 2, …, m, m < s, where m is the number of left-image feature points);
(4.2) compute the Euclidean distance between the reference point and every right-image feature point lying within 2 pixels of it (i = 0, 1, 2, …, k, k < m, where k is the number of right-image feature points within the 2-pixel range);
(4.3) take the right-image feature point with the minimum Euclidean distance as the matched point, thereby establishing the one-to-one correspondence between the left- and right-image feature points of the single-layer leather edge contour. The Euclidean distance is computed by formula (8) (reference: Yang Shuying, "Pattern Recognition and Intelligent Computation: Implementation with Matlab", 2008: 54).
Here, Euclidean_i denotes the minimum Euclidean distance between a left-image feature point and a right-image feature point, computed from their respective coordinate values.
5) Recover, by formula (6), the real coordinate values in the gluing-platform system corresponding to the matched feature points of step 4). For each feature point, its image coordinates in the left and right leather images obtained in step 4) and the reprojection matrix of formula (6) are used to reconstruct the real coordinate values in the gluing-platform system; in the invention, the gluing platform coordinate system is precisely the motion coordinate system of the three-axis gluing manipulator. For the feature points marked by black numbers in (1)-(3) of Fig. 6, the reconstructed real coordinate values are listed below, where the left-figure and right-figure columns give the pixel coordinates of each feature point and the reconstruction column gives the real coordinate values:
Fig. 1 | Left figure | Right figure | Reconstructed results (cm) |
--- | --- | --- | --- |
1 | (149,187) | (94,185) | (16.85,3.62,50.84) |
2 | (170,262) | (115,259) | (15.41,-1.22,50.84) |
3 | (227,303) | (178,301) | (12.92,-4.33,57.07) |
4 | (225,316) | (178,312) | (13.63,-5.5,59.49) |
5 | (502,364) | (448,361) | (-7.47,-7.94,51.78) |
6 | (511,357) | (454,355) | (-7.67,-7.09,49.05) |
7 | (526,355) | (469,353) | (-8.66,-6.96,49.05) |
8 | (555,267) | (496,267) | (-10.22,-1.44,47.39) |
9 | (283,170) | (227,167) | (7.54,4.63,49.93) |
10 | (559,186) | (501,185) | (-10.66,3.49,48.21) |
11 | (540,93) | (484,92) | (-9.76,9.51,49.93) |
12 | (527,91) | (471,91) | (-8.88,9.63,49.93) |
13 | (519,79) | (464,79) | (-8.5,10.58,50.84) |
14 | (251,87) | (197,87) | (9.86,10.07,50.84) |
15 | (243,101) | (185,100) | (9.87,8.69,48.21) |
16 | (184,126) | (128,125) | (14.2,7.42,49.93) |
Fig. 2 | Left figure | Right figure | Reconstructed results (cm) |
--- | --- | --- | --- |
1 | (167,223) | (110,223) | (15.07,1.25,49.06) |
2 | (176,252) | (117,251) | (13.99,-0.53,47.39) |
3 | (507,347) | (448,347) | (-7.16,-6.24,47.39) |
4 | (508,258) | (448,258) | (-7.1,-0.88,46.6) |
5 | (287,250) | (229,242) | (7.02,-0.42,48.21) |
6 | (292,176) | (238,173) | (7.19,4.41,51.78) |
7 | (326,162) | (259,162) | (3.88,4.29,41.73) |
8 | (514,188) | (454,187) | (-7.48,3.06,46.6) |
9 | (521,99) | (459,99) | (-7.66,8.34,45.1) |
10 | (168,201) | (111,200) | (15.01,2.62,49.06) |
11 | (197,213) | (141,212) | (13.32,1.91,49.93) |
Fig. 3 | Left figure | Right figure | Reconstructed results (cm) |
--- | --- | --- | --- |
1 | (86,252) | (30,249) | (20.79,-0.56,49.93) |
2 | (197,349) | (141,346) | (13.32,-6.71,49.93) |
3 | (248,336) | (187,334) | (9.08,-5.4,45.84) |
4 | (536,423) | (479,420) | (-9.32,-11.19,49.06) |
5 | (602,161) | (548,159) | (-14.45,5.39,51.78) |
6 | (249,129) | (194,126) | (10.00,7.36,50.84) |
7 | (201,170) | (147,166) | (13.54,4.8,51.78) |
8 | (178,175) | (131,172) | (17.4,5.14,59.49) |
9 | (117,167) | (62,164) | (19.05,4.91,50.84) |
In the above reconstruction results, according to the matrix Q of formula (6), the pixel coordinates corresponding to the optical-axis center of the left camera are found to be (394, 243). Combining the definition of Q yields the sign quadrant distribution shown in Fig. 7: (+, +) means that pixels with x-coordinate less than 394 reconstruct to a positive x sign and pixels with y-coordinate less than 243 reconstruct to a positive y sign; (+, −) means x less than 394 gives +, y greater than 243 gives −; (−, −) means x greater than 394 gives −, y greater than 243 gives −; (−, +) means x greater than 394 gives −, y less than 243 gives +.
Figure 8 is the error analysis obtained from the matching results of Figure 6 and the reconstructed results. The measured distances (cm), actual distances (cm), and relative errors are:

Result | Point pair | Measured | Actual | Error |
---|---|---|---|---|
Fig. 1 | 4-14 (vertical) | 15.57 | 16 | 2.7% |
Fig. 1 | 5-13 (vertical) | 18.52 | 18.8 | 1.5% |
Fig. 1 | 11-7 (vertical) | 16.47 | 17 | 3.2% |
Fig. 1 | 10-8 (vertical) | 4.93 | 5.1 | 3.4% |
Fig. 2 | 3-4 (vertical) | 5.36 | 5.4 | 0.7% |
Fig. 2 | 8-9 (vertical) | 5.28 | 5.4 | 2.2% |
Fig. 2 | 10-11 | 1.83 | 1.88 | 2.7% |
Fig. 2 | 11-1 | 1.86 | 1.88 | 1.1% |
Fig. 3 | 1-2 | 9.69 | 10 | 3.2% |
Fig. 3 | 4-5 | 17.36 | 17.9 | 3.1% |
Fig. 3 | 9-1 | 5.74 | 5.9 | 2.8% |

The third coordinate in every coordinate value is the z-axis value, i.e. the distance from the feature point to the optical axis center of the left camera, which here is about 48 cm. With errors of about 3%, the method is feasible.
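The reported errors can be checked directly against the reconstructed coordinates. A sketch, assuming "vertical distance" means the absolute y-difference and "distance" means Euclidean distance in the x-y gluing plane (both readings reproduce the values reported for Fig. 2):

```python
import math

def vertical_distance(p, q):
    # |dy| between two reconstructed points (our reading of "vertical distance")
    return abs(p[1] - q[1])

def planar_distance(p, q):
    # Euclidean distance in the x-y gluing plane (our reading of "distance")
    return math.hypot(p[0] - q[0], p[1] - q[1])

def percent_error(measured, actual):
    return abs(measured - actual) / actual * 100.0

# Fig. 2 reconstructed points from the table above (cm)
p3, p4 = (-7.16, -6.24, 47.39), (-7.1, -0.88, 46.6)
p10, p11 = (15.01, 2.62, 49.06), (13.32, 1.91, 49.93)
```

Here vertical_distance(p3, p4) gives 5.36 and percent_error(5.36, 5.4) is about 0.7%, while planar_distance(p10, p11) gives about 1.83, all as reported.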
6) The true coordinate values of the feature points are input into the gluing coordinate queue of the three-axis gluing manipulator, and the manipulator is controlled to complete the gluing process. Since the gluing platform coordinate system is exactly the coordinate system of the gluing manipulator, the control card converts each feature point coordinate value of the single-layer leather edge contour into a machining program, and performs gluing according to the line-segment shapes between the feature points of step 3) and formula (7):
L(P_i, P_(i+1)) denotes the shape of the single-layer leather edge segment between the i-th and (i+1)-th feature points; Straight denotes a straight segment, glued by linear interpolation; Arc denotes an arc, glued by curve interpolation; i = 0, 1, 2, ..., m-1, where m is the number of edge contour feature points.
Claims (5)
1. A single-layer leather gluing control method based on binocular vision, characterized in that the method adopts a binocular camera and a three-axis gluing manipulator, and comprises the following steps:
1) performing stereo calibration on the fixed binocular camera;
2) collecting a binocular image of the single-layer leather and extracting its edge contour;
3) performing feature extraction on the edge contour image of step 2):
The single-layer leather edge contour consists of a finite set of two-dimensional coordinate points (x_i, y_i), i = 1, 2, ..., n, where n is the number of coordinate points making up the contour. For any coordinate point (x_i, y_i) on the contour, its curvature K_i is calculated by formula (1), i = 1, 2, ..., n;
where (x_(i+1), y_(i+1)) is the coordinate point following (x_i, y_i), and (x_(i-1), y_(i-1)) is the coordinate point preceding (x_i, y_i).
Feature extraction proceeds as follows:
(3.1) calculate the local curvature threshold of the contour by the threshold formula, where B is a proportionality coefficient selected in [1, 2];
(3.2) coordinate points satisfying K_i >= T(i) are selected as candidate feature points P_i, i = 1, 2, ..., s, where s < n is the total number of candidate feature points;
(3.3) calculate the angle θ_(i+1) formed by the two tangent lines at each candidate feature point: taking the candidate feature point P_i as the start point and P_(i+1) as the end point, with P_((i+(i+1))/2) the midpoint of the segment joining them, compute the theoretical circle center according to formula (2); likewise, taking P_(i+1) as the start point and P_(i+2) as the end point, with midpoint P_(((i+1)+(i+2))/2), obtain its theoretical circle center from formula (2); then use formula (3) to calculate the angle between the tangent at P_(i+1) and the line joining P_(i+1) to the first theoretical center, and the angle between the tangent at P_(i+1) and the line joining P_(i+1) to the second theoretical center;
(3.4) judge from the value of θ_(i+1) whether the point is a required feature point, and mark the segment shape between feature points: if θ_(i+1) is greater than 170 degrees, the local contour curvature at this point changes only slightly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point, and the segment between feature points is marked Arc;
4) matching the feature points of step 3); feature matching proceeds as follows:
(4.1) take each feature point of the left image of the binocular pair as the reference, i = 0, 1, 2, ..., m, where m < s is the number of feature points in the left image;
(4.2) calculate the Euclidean distance to every feature point of the right image lying within 2 pixels of it, i = 0, 1, 2, ..., k, where k < m is the number of right-image feature points within the 2-pixel range;
(4.3) take the feature point with the minimum Euclidean distance as the matching point, thereby determining the one-to-one correspondence between the feature points of the left and right single-layer leather edge contours;
5) restoring, according to the re-projection matrix, the true coordinate values in the gluing platform coordinate system corresponding to the feature points of step 4);
6) inputting the true coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing manipulator, thereby controlling the manipulator to complete the gluing process.
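Steps 3) and 4) of claim 1 can be sketched as follows. Formula (1) and the threshold formula are images in the source, so a Menger (three-point) curvature and a threshold of B times the mean curvature stand in for them, and the 2-pixel matching window is read as a constraint on the rectified row coordinate; all three choices are assumptions:

```python
import numpy as np

def curvature(p0, p1, p2):
    # Stand-in for formula (1): Menger curvature of three consecutive
    # contour points, 4 * triangle_area / (product of side lengths).
    a = np.linalg.norm(np.subtract(p1, p0))
    b = np.linalg.norm(np.subtract(p2, p1))
    c = np.linalg.norm(np.subtract(p2, p0))
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return 0.0 if a * b * c == 0 else 4.0 * area / (a * b * c)

def candidate_feature_points(contour, B=1.5):
    # Steps (3.1)-(3.2): keep points whose curvature K_i reaches the
    # threshold T = B * mean curvature (the mean is our assumption).
    ks = [curvature(contour[i - 1], contour[i], contour[i + 1])
          for i in range(1, len(contour) - 1)]
    T = B * sum(ks) / len(ks)
    return [contour[i + 1] for i, k in enumerate(ks) if k >= T]

def match_features(left_pts, right_pts, row_window=2):
    # Steps (4.1)-(4.3): for each left feature point, consider right-image
    # features whose row differs by at most row_window pixels and keep
    # the one at minimum Euclidean distance.
    matches = []
    for lp in left_pts:
        cands = [rp for rp in right_pts if abs(rp[1] - lp[1]) <= row_window]
        if cands:
            best = min(cands,
                       key=lambda rp: (rp[0] - lp[0]) ** 2 + (rp[1] - lp[1]) ** 2)
            matches.append((lp, best))
    return matches
```

On the pixel pairs tabulated earlier, e.g. left (176, 252) and right (117, 251), this row-window reading pairs the points as in the tables.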
2. The single-layer leather gluing control method based on binocular vision according to claim 1, characterized in that in step 6), gluing is controlled according to the line-segment shapes between the feature points of step 3) and formula (4);
where L(P_i, P_(i+1)) denotes the shape of the single-layer leather edge segment between the i-th and (i+1)-th feature points; Straight denotes a straight segment, glued by linear interpolation; Arc denotes an arc, glued by curve interpolation; i = 0, 1, 2, ..., m-1, where m is the number of edge contour feature points.
3. The single-layer leather gluing control method based on binocular vision according to claim 1 or 2, characterized in that in step 1), images of a calibration object in different attitudes are acquired for calibration analysis, and the intrinsic parameters (k, l, u_0, v_0, f, θ) and extrinsic parameters of each camera are obtained according to the linear camera model (5); the 3x4 matrix M_(3x4) obtained as the product of the camera intrinsic and extrinsic parameters is the perspective projection matrix;
where k and l are the pixel sizes of the binocular camera, u_0 and v_0 are the coordinates of the optical axis center of the camera, f is the camera focal length, θ is the skew of the camera coordinate system (generally 90°), R_(3x3) is the rotation matrix of the camera, t_(3x1) is the translation matrix of the camera, (u v 1)^T is any point in the image, and (X_W Y_W Z_W 1)^T is the corresponding coordinate of that image point in the gluing coordinate system;
The image rectification process is as follows:
(1.1) obtain, from formula (6) and the intrinsic and extrinsic parameters of the left and right cameras, the rotation and translation matrices of the right camera with respect to the left camera, and rectify the left and right images into row alignment;
R = R_r (R_l)^T
T = T_r - R T_l
(6)
where R_l and R_r are the rotation matrices of the left and right cameras, T_l and T_r are the translation matrices of the left and right cameras, and R and T are the rotation and translation matrices of the right camera with respect to the left camera;
(1.2) obtain the re-projection matrix Q from formula (7) and the camera parameters above:
where T_x is the x-axis component of the extrinsic translation vector T of the binocular camera, c_x and c_y are the coordinates of the optical center of the left camera, and f is the focal length of the left camera.
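Formula (6) and the reprojection step can be sketched in a few lines. Since formula (7) itself is an image in the source, the standard stereo reprojection matrix is used as a stand-in, and the sign convention on the 1/T_x entry is an assumption:

```python
import numpy as np

def relative_pose(Rl, Tl, Rr, Tr):
    # Formula (6): pose of the right camera with respect to the left camera.
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T

def make_Q(f, cx, cy, Tx):
    # Stand-in for formula (7): standard stereo reprojection matrix built
    # from the left camera's focal length f, optical centre (cx, cy) and
    # baseline component Tx (sign convention assumed).
    return np.array([[1.0, 0.0, 0.0, -cx],
                     [0.0, 1.0, 0.0, -cy],
                     [0.0, 0.0, 0.0,   f],
                     [0.0, 0.0, 1.0 / Tx, 0.0]])

def reproject(u, v, d, Q):
    # Homogeneous reprojection: [X Y Z W]^T = Q [u v d 1]^T.
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return X / W, Y / W, Z / W
```

For example, with f = 500 px, optical centre (320, 240) and Tx = 10 cm, the pixel (420, 240) at disparity 50 reconstructs to (20, 0, 100) cm.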
4. The single-layer leather gluing control method based on binocular vision according to claim 1 or 2, characterized in that in step 2), the single-layer leather image is converted into a grayscale image, with different thresholds separating the leather from noise according to their RGB values; the detailed process is as follows:
(2.1) smooth the image with a Gaussian function;
(2.2) apply the closing operation of mathematical morphology;
(2.3) extract the single-pixel leather edge contour with the Canny operator.
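The three sub-steps above map directly onto common image-processing primitives. A minimal sketch with SciPy stand-ins; a Sobel gradient threshold replaces the Canny operator here to keep the example self-contained:

```python
import numpy as np
from scipy import ndimage

def extract_edge_contour(gray, sigma=1.0, close_size=3, rel_thresh=0.3):
    """Binary edge map from a grayscale leather image: (2.1) Gaussian
    smoothing, (2.2) morphological closing, (2.3) gradient-based edge
    detection (the patent specifies the Canny operator)."""
    img = gray.astype(float)
    smoothed = ndimage.gaussian_filter(img, sigma)                    # (2.1)
    closed = ndimage.grey_closing(smoothed, size=(close_size,) * 2)   # (2.2)
    gx = ndimage.sobel(closed, axis=1)                                # (2.3)
    gy = ndimage.sobel(closed, axis=0)
    mag = np.hypot(gx, gy)
    return mag > rel_thresh * mag.max()
```

In practice one would use an actual Canny implementation (e.g. OpenCV's) for the single-pixel edges the claim requires; the thresholded Sobel magnitude here only marks edge neighbourhoods.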
5. The single-layer leather gluing control method based on binocular vision according to claim 1 or 2, characterized in that in step 2), light sources are installed around the gluing platform, and the binocular camera is arranged directly above the center of the gluing platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010587709 CN102107179B (en) | 2010-12-14 | 2010-12-14 | Method for controlling single-layer leather gluing based on binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102107179A true CN102107179A (en) | 2011-06-29 |
CN102107179B CN102107179B (en) | 2013-07-24 |
Family
ID=44171526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010587709 Active CN102107179B (en) | 2010-12-14 | 2010-12-14 | Method for controlling single-layer leather gluing based on binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102107179B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102354151A (en) * | 2011-08-04 | 2012-02-15 | 浙江工业大学 | Tangential following interpolation method applied to multilayer shoe leather numerical control cutting machine tool |
CN102783769A (en) * | 2012-07-20 | 2012-11-21 | 浙江工业大学 | Glue coating control method at small-curvature turning part |
CN102981406A (en) * | 2012-11-26 | 2013-03-20 | 浙江工业大学 | Sole glue spraying thickness control method based on binocular vision |
CN103316827A (en) * | 2013-06-09 | 2013-09-25 | 深圳众为兴技术股份有限公司 | Adhesive dispensing method and device |
CN103841311A (en) * | 2012-11-20 | 2014-06-04 | 广州三星通信技术研究有限公司 | Method for generating 3D image and portable terminals |
CN106583178A (en) * | 2016-11-01 | 2017-04-26 | 浙江理工大学 | Leather edge positioning method and device of automatic edge painting machine |
CN106868229A (en) * | 2017-01-05 | 2017-06-20 | 四川大学 | A kind of device of the leather processed that stretches tight automatically |
CN107413590A (en) * | 2017-07-05 | 2017-12-01 | 佛山缔乐视觉科技有限公司 | A kind of watchband automatic glue spreaders based on machine vision |
CN107597497A (en) * | 2017-09-08 | 2018-01-19 | 佛山缔乐视觉科技有限公司 | A kind of automatic ceramic glue spreading apparatus and method based on machine vision |
CN107726985A (en) * | 2017-11-13 | 2018-02-23 | 易思维(天津)科技有限公司 | A kind of three-dimensional gluing detection method and device in real time |
CN107976147A (en) * | 2017-12-11 | 2018-05-01 | 西安迈森威自动化科技有限公司 | A kind of glass locating and detecting device based on machine vision |
CN108089544A (en) * | 2017-12-25 | 2018-05-29 | 厦门大学嘉庚学院 | A kind of orbit generation method and control system of sole glue spraying robot |
CN109046846A (en) * | 2018-10-30 | 2018-12-21 | 石家庄辐科电子科技有限公司 | A kind of intelligent circuit board paint spraying apparatus based on linear motor |
CN109522935A (en) * | 2018-10-22 | 2019-03-26 | 易思维(杭州)科技有限公司 | The method that the calibration result of a kind of pair of two CCD camera measure system is evaluated |
CN109798831A (en) * | 2018-12-28 | 2019-05-24 | 辽宁红沿河核电有限公司 | A kind of Binocular vision photogrammetry method for fuel assembly |
CN111122581A (en) * | 2019-12-25 | 2020-05-08 | 北京中远通科技有限公司 | Binocular vision detection system and method and glue spraying device |
CN111664809A (en) * | 2020-06-15 | 2020-09-15 | 苏州亿视智能科技有限公司 | Intelligent high-precision modular three-dimensional detection equipment and method |
CN112197715A (en) * | 2020-10-27 | 2021-01-08 | 上海市特种设备监督检验技术研究院 | Elevator brake wheel and brake shoe gap detection method based on image recognition |
CN115846129A (en) * | 2022-11-08 | 2023-03-28 | 成都市鸿侠科技有限责任公司 | Special-shaped complex curved surface glue joint device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1030917A (en) * | 1996-07-16 | 1998-02-03 | Tsubakimoto Chain Co | Object recognition method and device therefor, and recording medium |
JP2002213929A (en) * | 2000-10-27 | 2002-07-31 | Korea Inst Of Science & Technology | Method and device for three-dimensional visual inspection of semiconductor package |
CN101517615A (en) * | 2006-09-29 | 2009-08-26 | 冲电气工业株式会社 | Personal authentication system and personal authentication method |
CN101876533A (en) * | 2010-06-23 | 2010-11-03 | 北京航空航天大学 | Microscopic stereovision calibrating method |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102354151B (en) * | 2011-08-04 | 2013-06-05 | 浙江工业大学 | Tangential following interpolation method applied to multilayer shoe leather numerical control cutting machine tool |
CN102354151A (en) * | 2011-08-04 | 2012-02-15 | 浙江工业大学 | Tangential following interpolation method applied to multilayer shoe leather numerical control cutting machine tool |
CN102783769A (en) * | 2012-07-20 | 2012-11-21 | 浙江工业大学 | Glue coating control method at small-curvature turning part |
CN102783769B (en) * | 2012-07-20 | 2015-06-03 | 浙江工业大学 | Glue coating control method at small-curvature turning part |
CN103841311A (en) * | 2012-11-20 | 2014-06-04 | 广州三星通信技术研究有限公司 | Method for generating 3D image and portable terminals |
CN102981406B (en) * | 2012-11-26 | 2016-02-24 | 浙江工业大学 | A kind of sole glue spraying method for controlling thickness based on binocular vision |
CN102981406A (en) * | 2012-11-26 | 2013-03-20 | 浙江工业大学 | Sole glue spraying thickness control method based on binocular vision |
CN103316827A (en) * | 2013-06-09 | 2013-09-25 | 深圳众为兴技术股份有限公司 | Adhesive dispensing method and device |
CN103316827B (en) * | 2013-06-09 | 2015-10-28 | 深圳众为兴技术股份有限公司 | A kind of dispensing method, device and spot gluing equipment |
CN106583178A (en) * | 2016-11-01 | 2017-04-26 | 浙江理工大学 | Leather edge positioning method and device of automatic edge painting machine |
CN106583178B (en) * | 2016-11-01 | 2019-01-18 | 浙江理工大学 | A kind of edge positioning method and device of the leather substance of automatic oil edge machine |
CN106868229A (en) * | 2017-01-05 | 2017-06-20 | 四川大学 | A kind of device of the leather processed that stretches tight automatically |
CN107413590A (en) * | 2017-07-05 | 2017-12-01 | 佛山缔乐视觉科技有限公司 | A kind of watchband automatic glue spreaders based on machine vision |
CN107413590B (en) * | 2017-07-05 | 2023-06-02 | 佛山缔乐视觉科技有限公司 | Automatic spreading machine of watchband based on machine vision |
CN107597497A (en) * | 2017-09-08 | 2018-01-19 | 佛山缔乐视觉科技有限公司 | A kind of automatic ceramic glue spreading apparatus and method based on machine vision |
CN107726985A (en) * | 2017-11-13 | 2018-02-23 | 易思维(天津)科技有限公司 | A kind of three-dimensional gluing detection method and device in real time |
CN107976147A (en) * | 2017-12-11 | 2018-05-01 | 西安迈森威自动化科技有限公司 | A kind of glass locating and detecting device based on machine vision |
CN107976147B (en) * | 2017-12-11 | 2019-08-06 | 西安迈森威自动化科技有限公司 | A kind of glass locating and detecting device based on machine vision |
CN108089544A (en) * | 2017-12-25 | 2018-05-29 | 厦门大学嘉庚学院 | A kind of orbit generation method and control system of sole glue spraying robot |
CN108089544B (en) * | 2017-12-25 | 2021-03-30 | 厦门大学嘉庚学院 | Trajectory generation method and control system for sole glue spraying robot |
CN109522935A (en) * | 2018-10-22 | 2019-03-26 | 易思维(杭州)科技有限公司 | The method that the calibration result of a kind of pair of two CCD camera measure system is evaluated |
CN109522935B (en) * | 2018-10-22 | 2021-07-02 | 易思维(杭州)科技有限公司 | Method for evaluating calibration result of binocular vision measurement system |
CN109046846A (en) * | 2018-10-30 | 2018-12-21 | 石家庄辐科电子科技有限公司 | A kind of intelligent circuit board paint spraying apparatus based on linear motor |
CN109798831A (en) * | 2018-12-28 | 2019-05-24 | 辽宁红沿河核电有限公司 | A kind of Binocular vision photogrammetry method for fuel assembly |
CN111122581A (en) * | 2019-12-25 | 2020-05-08 | 北京中远通科技有限公司 | Binocular vision detection system and method and glue spraying device |
CN111664809A (en) * | 2020-06-15 | 2020-09-15 | 苏州亿视智能科技有限公司 | Intelligent high-precision modular three-dimensional detection equipment and method |
CN112197715A (en) * | 2020-10-27 | 2021-01-08 | 上海市特种设备监督检验技术研究院 | Elevator brake wheel and brake shoe gap detection method based on image recognition |
CN112197715B (en) * | 2020-10-27 | 2022-07-08 | 上海市特种设备监督检验技术研究院 | Elevator brake wheel and brake shoe gap detection method based on image recognition |
CN115846129A (en) * | 2022-11-08 | 2023-03-28 | 成都市鸿侠科技有限责任公司 | Special-shaped complex curved surface glue joint device |
CN115846129B (en) * | 2022-11-08 | 2023-12-15 | 成都市鸿侠科技有限责任公司 | Special-shaped complex curved surface cementing device |
Also Published As
Publication number | Publication date |
---|---|
CN102107179B (en) | 2013-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102107179B (en) | Method for controlling single-layer leather gluing based on binocular vision | |
CN107063228B (en) | Target attitude calculation method based on binocular vision | |
CN109903313B (en) | Real-time pose tracking method based on target three-dimensional model | |
CN106824806B (en) | The detection method of low module plastic gear based on machine vision | |
CN107203973B (en) | Sub-pixel positioning method for center line laser of three-dimensional laser scanning system | |
CN101256156B (en) | Precision measurement method for flat crack and antenna crack | |
CN106934813A (en) | A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning | |
CN106683137B (en) | Artificial mark based monocular and multiobjective identification and positioning method | |
CN107578464A (en) | A kind of conveyor belt workpieces measuring three-dimensional profile method based on line laser structured light | |
CN105910583B (en) | A kind of space junk based on spaceborne Visible Light Camera quickly detects localization method | |
CN111126174A (en) | Visual detection method for robot to grab parts | |
CN106338287A (en) | Ceiling-based indoor moving robot vision positioning method | |
CN103727930A (en) | Edge-matching-based relative pose calibration method of laser range finder and camera | |
CN105488503A (en) | Method for detecting circle center image coordinate of uncoded circular ring-shaped gauge point | |
CN113834625B (en) | Aircraft model surface pressure measuring method and system | |
CN104715491B (en) | A kind of sub-pixel edge detection method based on one-dimensional Gray Moment | |
CN108154536A (en) | The camera calibration method of two dimensional surface iteration | |
CN107238374A (en) | A kind of classification of concave plane part and recognition positioning method | |
CN111402330A (en) | Laser line key point extraction method based on plane target | |
Li et al. | Road markings extraction based on threshold segmentation | |
Kurban et al. | Plane segmentation of kinect point clouds using RANSAC | |
CN113884002A (en) | Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion | |
CN110030979B (en) | Spatial non-cooperative target relative pose measurement method based on sequence images | |
Tamas et al. | Relative pose estimation and fusion of omnidirectional and lidar cameras | |
CN108335332A (en) | A kind of axial workpiece central axes measurement method based on binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |