Disclosure of Invention
The present invention provides a visual positioning compensation method for solving the above technical problems.
In order to solve the above technical problems, the invention adopts the following technical scheme: a visual positioning compensation method, which uses a rotating platform and at least two cameras and comprises the following steps:
step 1: the cameras scan the product on the rotating platform and extract the contour line of the product;
step 2: calculating the translation ratio and the rotation ratio of each camera and the rotating platform;
step 3: converting the product contour line extracted by each camera into translation XY and rotation Angle according to the translation ratio and the rotation ratio;
step 4: carrying out overall compensation and station compensation according to the translation XY and the rotation Angle.
Further, the specific steps of calculating the translation ratio and the rotation ratio of each camera and the rotating platform are as follows:
step 2.1: presetting the values of moveX, moveY and roAngle;
step 2.2: moving the rotating platform to the origin position to acquire a reference line;
step 2.3: controlling the rotating platform to move along the X-axis direction, acquiring features, and calculating the xScale movement ratio;
step 2.4: controlling the rotating platform to move along the Y-axis direction, acquiring features, and calculating the yScale movement ratio;
step 2.5: controlling the rotating platform to rotate under the driving of the rotating shaft, acquiring features, and calculating the roScale movement ratio.
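Assuming each intersection point is read as a scalar position along the camera's center line, the calibration of steps 2.1 to 2.5 can be sketched as follows; all numeric values and the helper name `movement_scale` are illustrative, not taken from the embodiment:

```python
def movement_scale(base_mid_pt: float, cur_mid_pt: float, commanded_move: float) -> float:
    """Ratio between the observed shift of the reference-line / center-line
    intersection and the commanded platform motion."""
    return (cur_mid_pt - base_mid_pt) / commanded_move

# Preset platform motions (step 2.1); values are illustrative.
moveX, moveY, roAngle = 5.0, 5.0, 2.0

# Intersections before / after each motion (steps 2.3-2.5), for one camera.
baseMidPt1, curMidPt1 = 100.0, 110.0   # after the X move
baseMidPt2, curMidPt2 = 100.0, 95.0    # after the Y move
baseMidPt3, curMidPt3 = 100.0, 104.0   # after the rotation

xScale = movement_scale(baseMidPt1, curMidPt1, moveX)     # 2.0
yScale = movement_scale(baseMidPt2, curMidPt2, moveY)     # -1.0
roScale = movement_scale(baseMidPt3, curMidPt3, roAngle)  # 2.0
```

In practice the same three calibration moves would be repeated for every camera, giving per-camera xScale, yScale and roScale values.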
Further, the specific steps of controlling the rotating platform to move along the X-axis direction, acquiring features, and calculating the xScale movement ratio are as follows:
step 2.3.1: calculating an intersection baseMidPt1 of a reference line before the rotary platform moves along the direction of the X axis and the central line of the camera;
step 2.3.2: calculating an intersection point curMidPt1 of the reference line and the central line after the rotary platform moves;
step 2.3.3: the X movement ratio is the ratio of the intersection displacement to the preset movement, i.e. xScale = (curMidPt1 - baseMidPt1) / moveX.
Further, the specific steps of controlling the rotating platform to move along the Y-axis direction, acquiring features, and calculating the yScale movement ratio are as follows:
step 2.4.1: calculating an intersection baseMidPt2 of a reference line before the rotary platform moves along the Y-axis direction and the center line of the camera;
step 2.4.2: calculating an intersection point curMidPt2 of the reference line and the central line after the rotary platform moves;
step 2.4.3: the Y movement ratio is the ratio of the intersection displacement to the preset movement, i.e. yScale = (curMidPt2 - baseMidPt2) / moveY.
Further, the specific steps of controlling the rotation of the rotating platform, acquiring features, and calculating the roScale movement ratio are as follows:
step 2.5.1: calculating the intersection point baseMidPt3 of the reference line before the rotation of the rotary platform and the center line of the camera;
step 2.5.2: calculating an intersection point curMidPt3 of the reference line and the central line after the rotation of the rotary platform;
step 2.5.3: the angle movement ratio is the ratio of the intersection displacement to the preset rotation, i.e. roScale = (curMidPt3 - baseMidPt3) / roAngle.
Further, the specific steps of converting the product contour line extracted by each camera into translation XY and rotation Angle according to the translation ratio and the rotation ratio are as follows:
step 3.1: positioning the product contour line extracted by each camera;
step 3.2: calculating the intersection point baseMidPt4 of the datum line and the central line of each camera;
step 3.3: calculating an intersection curMidPt4 of the product contour line extracted by each camera and the central line;
step 3.4: calculating the distance Dis between curMidPt4 and baseMidPt4 of each camera;
step 3.5: calculating the rotation angle A, the Y-axis offset value offsetY and the X-axis offset value offsetX of each camera.
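A minimal sketch of step 3.4, assuming each intersection point is available as an (x, y) pixel coordinate; the camera numbers and coordinate values below are hypothetical:

```python
import math

def dis(baseMidPt4, curMidPt4):
    """Step 3.4: distance between the intersection of the product contour
    line with the camera's center line and the reference intersection."""
    return math.hypot(curMidPt4[0] - baseMidPt4[0],
                      curMidPt4[1] - baseMidPt4[1])

# Hypothetical (baseMidPt4, curMidPt4) pairs for two cameras.
cams = {
    1: ((50.0, 0.0), (50.0, 3.0)),
    3: ((0.0, 40.0), (4.0, 40.0)),
}
dists = {cam: dis(b, c) for cam, (b, c) in cams.items()}  # {1: 3.0, 3: 4.0}
```

The per-camera Dis values computed this way feed the calculations of A, offsetY and offsetX in step 3.5.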
Further, the specific steps of performing overall compensation and station compensation according to the translation XY and the rotation Angle comprise:
step 4.1: according to actual product requirements, the user inputs data X1, Y1 and Y2 and the configured product length proLen to obtain the compensation data X on the X axis, the compensation data Y on the Y axis and the angle compensation data A.
Further, the method also comprises step 5: mechanical compensation.
Further, the rotating platform is a turntable.
The invention has the following beneficial effects: it can quickly calculate the difference between the actual product position and the theoretical position and compensate by the calculated difference, thereby achieving the purpose of positioning compensation.
Detailed Description
In order to facilitate understanding by those skilled in the art, the present invention will be further described with reference to the following examples and the attached drawings, which are not intended to limit the present invention.
The invention provides a visual positioning compensation method, which uses a rotating platform 1 and at least two cameras 2. In this embodiment the number of cameras 2 is 5, their installation positions above the rotating platform 1 are shown in figure 1, and each point on the rotating platform 1 corresponds to a coordinate on the XY axes. The method comprises the following steps:
step 1: the cameras 2 scan the product on the rotating platform 1 and extract the contour line of the product;
step 2: calculating the translation ratio and the rotation ratio of each camera 2 and the rotating platform 1;
step 3: converting the product contour line extracted by each camera 2 into translation XY and rotation Angle according to the translation ratio and the rotation ratio;
step 4: carrying out overall compensation and station compensation according to the translation XY and the rotation Angle.
In the visual positioning compensation method described in this embodiment, the specific steps of calculating the translation ratio and the rotation ratio of each camera 2 and the rotating platform 1 are as follows:
step 2.1: presetting the values of moveX, moveY and roAngle; when rotating, the product edge needs to remain within the field of view of the 5 cameras 2;
step 2.2: the rotating platform 1 moves to the origin position to obtain the reference line;
step 2.3: controlling the rotating platform to move along the X-axis direction, acquiring features, and calculating the xScale movement ratio;
step 2.4: controlling the rotating platform to move along the Y-axis direction, acquiring features, and calculating the yScale movement ratio;
step 2.5: controlling the rotating platform to rotate under the driving of the rotating shaft, acquiring features, and calculating the roScale movement ratio.
As shown in fig. 2, in the visual positioning compensation method according to this embodiment, the specific steps of controlling the rotating platform to move along the X-axis direction, acquiring features, and calculating the xScale movement ratio include:
step 2.3.1: calculating an intersection baseMidPt1 of a reference line before the rotary platform moves along the direction of the X axis and the central line of the camera;
step 2.3.2: calculating an intersection point curMidPt1 of the reference line and the central line after the rotary platform moves;
step 2.3.3: the X movement ratio is the ratio of the intersection displacement to the preset movement, i.e. xScale = (curMidPt1 - baseMidPt1) / moveX.
the method comprises the following steps of acquiring features, wherein the acquiring features include but are not limited to printing acquisition and reference positioning acquisition, and specifically, when a printed product is positioned, the acquired features are contour edges of the product; in the reference positioning, the whole product features are obtained by referring to the reference features to perform comparison and adjustment.
As shown in fig. 3, in the visual positioning compensation method according to this embodiment, the step of controlling the rotation platform to move along the Y-axis direction and obtain the features includes the specific steps of:
step 2.4.1: calculating an intersection baseMidPt2 of a reference line before the rotary platform moves along the Y-axis direction and the center line of the camera;
step 2.4.2: calculating an intersection point curMidPt2 of the reference line and the central line after the rotary platform moves;
step 2.4.3: the Y movement ratio is the ratio of the intersection displacement to the preset movement, i.e. yScale = (curMidPt2 - baseMidPt2) / moveY.
As shown in fig. 4, in the visual positioning compensation method according to this embodiment, the specific steps of controlling the rotation of the rotating platform, acquiring features, and calculating the roScale movement ratio include:
step 2.5.1: calculating the intersection point baseMidPt3 of the reference line before the rotation of the rotary platform and the center line of the camera;
step 2.5.2: calculating an intersection point curMidPt3 of the reference line and the central line after the rotation of the rotary platform;
step 2.5.3: the angle movement ratio is the ratio of the intersection displacement to the preset rotation, i.e. roScale = (curMidPt3 - baseMidPt3) / roAngle.
the 5 cameras 2 are named camera No. 1, camera No. 2, camera No. 3, camera No. 4, and phase No. 5, respectively. Wherein, the difference value of the central point X is taken by the No. 3 camera and the No. 5 camera, and the difference value of the central point Y is taken by the No. 1 camera, the No. 2 camera and the No. 4 camera.
In the visual positioning compensation method described in this embodiment, the specific steps of converting the product contour line extracted by each camera into translation XY and rotation Angle according to the translation ratio and the rotation ratio are as follows:
step 3.1: positioning the product contour line extracted by each camera;
step 3.2: calculating the intersection point baseMidPt4 of the datum line and the central line of each camera;
step 3.3: calculating an intersection curMidPt4 of the product contour line extracted by each camera and the central line;
step 3.4: calculating the distance Dis between curMidPt4 and baseMidPt4 of each camera;
step 3.5: calculating the rotation angle A, the Y-axis offset value offsetY and the X-axis offset value offsetX of each camera.
The following is one of the specific calculation processes in this embodiment: assuming that the X-direction distance is tmpDisX, the corresponding displacements of camera No. 1, camera No. 2 and camera No. 4 are reductionDis1X, reductionDis2X and reductionDis4X.
The values of moveX, moveY and roAngle are preset by the user and are used for calculating the translation ratio and the rotation ratio. Let Dis be the distance between the reference point and the intersection of a line extracted by the camera 2 with the center line; then the following formula can be deduced: Dis = X1 × XS + Y1 × YS + Angle × AS, wherein X1, Y1 and Angle are the motion data of the rotating platform 1, XS is the xScale movement ratio of the corresponding camera 2 (the X motion ratio), YS is the yScale movement ratio of the corresponding camera 2 (the Y motion ratio), and AS is the roScale ratio of the corresponding camera 2 (the rotation motion ratio).
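The per-camera relation Dis = X1 × XS + Y1 × YS + Angle × AS is linear in the platform motion, so with readings from three cameras the motion (X1, Y1, Angle) could in principle be recovered by solving a 3 × 3 linear system. The sketch below illustrates this; it is an alternative to the averaging procedure this embodiment actually uses, and all scale values are made up:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_motion(scales, dis):
    """Solve Dis = X1*XS + Y1*YS + Angle*AS for three cameras via Cramer's rule.
    scales: per-camera rows (xScale, yScale, roScale); dis: measured distances."""
    d = det3(scales)
    solution = []
    for j in range(3):
        m = [row[:] for row in scales]
        for i in range(3):
            m[i][j] = dis[i]          # replace column j with the Dis vector
        solution.append(det3(m) / d)
    return tuple(solution)            # (X1, Y1, Angle)

# Illustrative per-camera scales and a known platform motion (2, -1, 0.5):
S = [[1.0, 0.5, 2.0],
     [0.2, 1.0, -1.0],
     [0.0, 0.3, 1.5]]
true_motion = (2.0, -1.0, 0.5)
D = [sum(s * v for s, v in zip(row, true_motion)) for row in S]
recovered = solve_motion(S, D)  # recovers approximately (2.0, -1.0, 0.5)
```

With more than three cameras the system is overdetermined, which is why the embodiment instead combines specific cameras (No. 3 and No. 5 for X, No. 1, No. 2 and No. 4 for Y) and averages.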
From the translation, the following equations can be obtained (where Dis3 refers to the reference point distance of camera No. 3 and Dis5 to that of camera No. 5):
reductionDis1X=xScaleCam1×tmpDisX;
reductionDis2X=xScaleCam2×tmpDisX;
reductionDis4X=xScaleCam4×tmpDisX.
Therefore, subtracting the scaled X-direction distance tmpDisX from the current intersection distances of camera No. 1, camera No. 2 and camera No. 4 gives the displacements caused by the remaining motions, respectively:
afterReductionDis1Y=Dis1-reductionDis1X;
afterReductionDis2Y=Dis2-reductionDis2X;
afterReductionDis4Y=Dis4-reductionDis4X.
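The three reduction equations above can be sketched directly; the distances and scale values below are hypothetical:

```python
def remove_x_component(dis, x_scale, tmpDisX):
    """Subtract the X-motion contribution (reductionDis = xScale * tmpDisX)
    from a camera's measured intersection distance."""
    return dis - x_scale * tmpDisX

tmpDisX = 2.0
# Hypothetical measured distances and X scales for cameras No. 1, 2 and 4:
Dis = {1: 5.0, 2: 4.0, 4: 6.0}
xScale = {1: 1.0, 2: 0.5, 4: 1.5}
afterReductionDisY = {cam: remove_x_component(Dis[cam], xScale[cam], tmpDisX)
                      for cam in (1, 2, 4)}
# afterReductionDisY == {1: 3.0, 2: 3.0, 4: 3.0}
```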
the X-axis movement ratios of the 5 cameras 2 are: xscale cam1, xscale cam2, xscale cam3, xscale cam4, and xscale cam 5.
The Y-axis movement ratios of the 5 cameras 2 are: yScaleCam1, yScaleCam2, yScaleCam3, yScaleCam4, and yScaleCam 5.
The rotation ratios of the 5 cameras 2 are: rotayscale 1, rotayscale 2, rotaxscale 3, rotayscale 4, and rotaxscale 5.
Calculating the rotation angle A: the movement value of the first camera 2 converted into the movement value of the second camera 2 is:
the rotation scale of the first camera 2 converted into that of the second camera 2 is:
Calculating the Y-axis offset value offsetY (the final Y-direction motion amount is the average of the Y-direction motion amounts corresponding to camera No. 1, camera No. 2 and camera No. 4):
Calculating the X-axis offset value offsetX (the platform X-direction motion amount corresponding to camera No. 3 and camera No. 5): offsetX is the average value of camera No. 3 and camera No. 5 after removing the influence caused by Y-axis movement and rotation:
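The averaging described for offsetY and offsetX can be sketched as follows, assuming the per-camera Y-direction amounts (cameras No. 1, 2, 4) and the corrected X-direction amounts (cameras No. 3 and 5) have already been computed; all values are illustrative:

```python
def offsets(y_amounts, x_amounts):
    """offsetY: mean of the Y-direction amounts from cameras No. 1, 2 and 4;
    offsetX: mean of the corrected X-direction amounts from cameras No. 3 and 5."""
    offsetY = sum(y_amounts) / len(y_amounts)
    offsetX = sum(x_amounts) / len(x_amounts)
    return offsetX, offsetY

# Hypothetical per-camera motion amounts:
offsetX, offsetY = offsets([3.0, 3.2, 2.8], [1.9, 2.1])
# offsetX is about 2.0, offsetY is about 3.0
```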
in the visual positioning compensation method of this embodiment, the specific steps of performing the overall compensation and the station compensation according to the translation XY and the rotation Angle include:
step 4.1: according to actual product requirements, the user inputs data X1, Y1 and Y2 and the configured product length proLen to obtain the compensation data X on the X axis, the compensation data Y on the Y axis and the angle compensation data A.
The compensation data X on the X axis is the input X; the compensation data Y on the Y axis and the angle compensation data A have the following cases:
case 1 is shown in fig. 5:
|Y1| < |Y2|; when Y1 is less than Y2:
when Y1 is greater than Y2:
case 2 is shown in fig. 6:
if |Y1| = |Y2|, then A = 0;
when the value of Y1 is not equal to the value of Y2 and Y1 is a negative number, the angle is negated;
case 3 is shown in fig. 7:
Y = 0; when Y1 is 0 and Y2 is greater than zero, the angle value is negated;
when Y2 is 0 and Y1 is less than zero, the angle value is negated.
In the visual positioning compensation method described in this embodiment, step 5 is mechanical compensation, and the specific steps of the mechanical compensation are as follows:
step 5.1: extracting a reference line of each camera of a reference station;
step 5.2: extracting a corresponding line of each camera of the compensation station;
step 5.3: calculating the deviation value of the reference line and the corresponding line;
step 5.4: calculating the intersection baseMidPt of the datum line and the central line of each camera;
step 5.5: calculating the intersection point curMidPt of the corresponding line and the central line of each camera;
step 5.6: calculating the distance Dis between curMidPt and baseMidPt of each camera;
step 5.7: the remaining process is consistent with that of step 3.
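Steps 5.4 to 5.6 can be sketched as follows, assuming each camera's center line is vertical (x = constant) and each extracted line is given by two points; the coordinates below are hypothetical:

```python
def center_line_intersection(p1, p2, center_x):
    """Intersection of the line through p1 and p2 with the vertical
    camera center line x = center_x."""
    (x1, y1), (x2, y2) = p1, p2
    t = (center_x - x1) / (x2 - x1)
    return (center_x, y1 + t * (y2 - y1))

def station_deviation(base_line, cur_line, center_x):
    """Steps 5.4-5.6: intersect the reference line and the corresponding
    line with the center line, then return the distance Dis between the
    two intersection points."""
    _, base_y = center_line_intersection(*base_line, center_x)
    _, cur_y = center_line_intersection(*cur_line, center_x)
    return abs(cur_y - base_y)

base = ((0.0, 0.0), (10.0, 0.0))  # reference line at the reference station
cur = ((0.0, 1.0), (10.0, 3.0))   # corresponding line at the compensation station
deviation = station_deviation(base, cur, 5.0)  # 2.0
```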
In the visual positioning compensation method described in this embodiment, the camera 2 is a CCD camera 2, but is not limited to the CCD camera 2 and may be a camera 2 of another model.
In the visual positioning compensation method according to this embodiment, the rotating platform 1 is a turntable.
Although the present invention has been described with reference to the above preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.