CN112082482A - Visual positioning method for object with edge characteristic only, application and precision evaluation method - Google Patents


Info

Publication number
CN112082482A
CN112082482A (application CN202010939227.8A)
Authority
CN
China
Prior art keywords
test
workpiece
dimensional coordinates
standard
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010939227.8A
Other languages
Chinese (zh)
Other versions
CN112082482B (en)
Inventor
郭寅
尹仕斌
刘海庆
李晓飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yi Si Si Hangzhou Technology Co ltd
Original Assignee
Isvision Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Isvision Hangzhou Technology Co Ltd filed Critical Isvision Hangzhou Technology Co Ltd
Priority to CN202010939227.8A priority Critical patent/CN112082482B/en
Publication of CN112082482A publication Critical patent/CN112082482A/en
Application granted granted Critical
Publication of CN112082482B publication Critical patent/CN112082482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a visual positioning method for an object with only edge features, comprising the following steps: a multi-line structured light sensor detects a standard workpiece, and standard data are calibrated; during testing, the multi-line structured light sensor acquires the three-dimensional coordinates of each test point on the actually measured workpiece and records them as measured data; an initial rotation-translation matrix is calculated from the calibrated three-dimensional coordinates and the measured data; each auxiliary straight line in the standard data is rotated and translated accordingly, and the intersection points between the auxiliary straight lines and their corresponding reference light planes are recorded into a calibration point set; taking the distances between the measured data and the corresponding points in the calibration point set as the objective function, an optimization method is iterated to obtain the optimal rotation-translation matrix satisfying a convergence condition; this matrix is then used to compensate the coordinates of the test points recorded during calibration, giving the position of the current workpiece and completing positioning. The method effectively solves the problems of edge test-point deviation and inaccurate positioning, and improves the accuracy of visual positioning.

Description

Visual positioning method for object with edge characteristic only, application and precision evaluation method
Technical Field
The invention relates to the field of structured light measurement, in particular to a visual positioning method for an object with only edge characteristics, application and an accuracy evaluation method.
Background
In the field of active vision measurement, structured light measurement is a common technique, the most widely used form of which is line-laser measurement: a line laser projects laser stripes onto the measured object, an image acquisition device captures images of the stripes, and three-dimensional information of the object is recovered by image analysis. When a line-structured light sensor is used for visual positioning, features on the measured object must be selected and extracted to obtain three-dimensional information. In practice, visual positioning is often applied to visual guidance or automated machining and assembly, where workpieces of the same type are placed in turn onto a measuring station by a production line, turntable, roller bed or other transfer equipment. In this process, the actual position of the workpiece to be measured inevitably differs from that of the standard workpiece, owing to the transmission accuracy of the transfer equipment, the positioning error of the fixture, and machining tolerances between workpieces.
The existing positioning method only calculates the rotation-translation relation between the measured point coordinates (the selected measured feature coordinates) on the workpiece to be measured and the corresponding point coordinates on the standard workpiece. If the measured features are closed shapes (such as feature circles, square holes or spheres), positioning is performed from the geometric centers of these features; a slight movement of the measured object then has limited influence on the features, so that as long as the laser stripe still covers the feature, its geometric center can be obtained by structured light even when there is a small error between the feature position on the current object and that on the standard workpiece. However, workpieces come in many types, and some contain no closed graphic features (for example a flat plate, an automobile roof, or a windshield frame), only surface features and edge features. With structured light measurement, edge features must then be selected for workpiece positioning. Because an edge is an extruded shape, a point selected on it is not visually distinguishable from its neighbouring points; whenever the position of the workpiece changes (translation or rotation), an error arises between the measuring point on the current workpiece and that on the standard workpiece, and the consistency of measuring points before and after the deviation cannot be guaranteed. Solving for the workpiece deviation with the existing positioning method therefore produces a large error and degrades the accuracy of visual positioning.
Disclosure of Invention
In order to solve the problems, the invention provides a visual positioning method for an object with only edge characteristics, which improves the existing positioning method by utilizing an optimization idea, effectively solves the problems of edge test point deviation and inaccurate positioning and improves the accuracy of visual positioning.
The technical scheme is as follows:
a visual positioning method for an object with only edge characteristics is provided, wherein at least two edges which are not on the same straight line are formed on the surface of the object with only edge characteristics; the method comprises the following steps:
A multi-line structured light sensor is used to detect different test positions on a standard workpiece, the test results including at least three points on edges that are not on the same straight line (the three-dimensional coordinates of these points make it possible to monitor the deviation of the workpiece along the X, Y and Z directions and its rotation about the X, Y and Z axes, i.e. the points are selected according to the 3-2-1 locating principle); before testing, the auxiliary straight-line equation and the three-dimensional coordinates of the test point at each test position are calibrated and taken as standard data;
The test point and auxiliary straight line corresponding to each test position are obtained as follows: at the test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane in which it lies is the reference light plane; the inflection point of the reference laser stripe on the standard workpiece is marked as the test point; the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line;
during testing, a workpiece is placed on a detection station according to a preset state, and the multi-line structured light sensor respectively acquires the three-dimensional coordinates of each test point and records the three-dimensional coordinates as measured data;
calculating a rotation translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinate data of the test points in the measured data; on the basis, each auxiliary straight line in the standard data is subjected to rotational translation, and intersection points between the auxiliary straight line and the corresponding reference light plane are recorded into a calibration point set;
The distances between the three-dimensional coordinates of each test point in the measured data and the corresponding intersection points in the calibration point set are taken as the objective function, and an optimization method is iterated to obtain the optimal rotation-translation matrix meeting the convergence condition;
and compensating the optimal rotation and translation matrix to the coordinates of each test point in the calibration process to obtain the position of the current workpiece, and finishing positioning.
Further, to ensure that the workpiece edge acquired by the sensor during actual measurement and the edge of the standard workpiece are the same-side edges, when the workpiece is placed on the detection station in the preset state, its deviation from the position of the standard workpiece during calibration is within 30 mm in the transverse and longitudinal directions and within 5° in angle.
Further, the optimization method is a gradient descent method, the Levenberg-Marquardt (LM) method, or the Gauss-Newton method.
Further, the objective function is expressed as min{‖S_i − F_i‖}, i = 1, 2, …, m, where m is the number of test points, S_i denotes the three-dimensional coordinates of the test point in the measured data at the i-th test position, and F_i denotes the three-dimensional coordinates of the intersection point in the calibration point set at the i-th test position;
The convergence condition is as follows: either one distance upper limit per test position, or a single overall upper limit on the objective function.
Furthermore, m multi-line structured light sensors are arranged and are respectively fixed above each test point;
or: mounting a single multi-line structured light sensor on a robot: according to the positions of the test points, teaching a robot; the robot drives the multi-line structured light sensor to project laser stripes to each test point respectively along the taught detection path, and structured light images are obtained.
As an application, the method for grabbing a workpiece by using the visual positioning method of the invention comprises the following steps: and compensating the optimized rotation and translation matrix to the grabbing track of the robot, and guiding the grabbing robot to grab the workpiece according to the actual position.
As another application, a method for machining a workpiece using the visual positioning method of the invention includes: the optimized rotation-translation matrix is compensated into the machining trajectory of the robot, guiding the machining robot to machine the workpiece at its actual position.
The invention also discloses a method for evaluating the precision of the visual positioning method, which comprises the following steps:
S1, acquire the three-dimensional coordinates Q_j (j = 1, 2, …, n, where n is the number of feature points) of specific points on the standard workpiece using a standard instrument; the standard instrument includes a laser tracker, a coordinate measuring machine, or a V-STARS photogrammetry system; the specific points are points marked in advance on the surface or edge of the standard workpiece and include at least three non-collinear points;
detecting different test positions of the standard workpiece by using the multi-line structured light sensor, wherein the test result at least comprises three points on edges which are not on the same straight line; before testing, calibrating the auxiliary linear equation of each testing position and the three-dimensional coordinates of the testing point, and taking the auxiliary linear equation and the three-dimensional coordinates as standard data;
The test point and auxiliary straight line corresponding to each test position are obtained as follows: at the test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane in which it lies is the reference light plane; the inflection point of the reference laser stripe on the standard workpiece is marked as the test point; the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line;
S2, adjust the position of the standard workpiece so that it assumes a pose different from the original one;
The standard instrument is used again to acquire the three-dimensional coordinates Q'_j of the specific points on the adjusted standard workpiece, and the rotation-translation matrix RT' between Q'_j and Q_j is solved by rigid-body transformation (Q'_j and Q_j are both expressed in the global space coordinate system);
The multi-line structured light sensor respectively acquires the three-dimensional coordinates of each test point and records the three-dimensional coordinates as measured data;
calculating a rotation translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinate data of the test points in the measured data; on the basis, each auxiliary straight line in the standard data is subjected to rotational translation, and intersection points between the auxiliary straight line and the corresponding reference light plane are recorded into a calibration point set;
The distances between the three-dimensional coordinates of each test point in the measured data and the corresponding intersection points in the calibration point set are taken as the objective function, and an optimization method is iterated to obtain the optimal rotation-translation matrix RT meeting the convergence condition;
s3, comparing the RT and the RT', judging whether the difference value between the rotation component and the translation component between the RT and the RT is smaller than a preset value, if so, the precision of the current multi-line structured light sensing meets the measurement requirement, and if not, the measurement requirement cannot be met.
The scheme of the invention has the following advantages:
(1) The method not only uses the standard coordinates and the measured coordinates to obtain an initial rotation-translation relation, but also fits a straight line around each measuring point; the fitted auxiliary straight line represents the edge line. When the actual position of the workpiece changes, the position of the measuring point changes but the corresponding auxiliary straight line remains unchanged. The auxiliary line is adjusted using the initial rotation-translation relation; in theory, the intersection of the adjusted auxiliary line with the light plane should coincide with the measured coordinates of the corresponding position. Based on this principle, an objective function is set, a convergence condition is given, and a more accurate rotation-translation matrix is obtained by optimization, realizing high-precision positioning. Applied to visual guidance, the method can assist a robot in precise workpiece grabbing, machining and the like;
(2) The invention also provides an accuracy verification method: a standard instrument and the multi-line structured light sensor each measure the standard workpiece before and after its position is adjusted, and the two rotation-translation matrices thus obtained are compared, effectively evaluating the positioning accuracy of the multi-line structured light sensor and screening out sensors that meet the detection requirement.
Drawings
FIG. 1 is a schematic diagram showing the change of the position of a measuring point between a workpiece to be measured and a standard workpiece;
FIG. 2 is a schematic view of the multiple projected laser stripes and the light plane positions;
fig. 3 is a schematic diagram of an acquired multi-line structured light image.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and the detailed description.
A visual positioning method for an object only with edge characteristics is characterized in that at least two edges which are not on the same straight line are formed on the surface of the object only with the edge characteristics; the method specifically comprises the following steps:
A multi-line structured light sensor is used to detect different test positions on a standard workpiece, the test results including at least three points on edges that are not on the same straight line (the three-dimensional coordinates of these points make it possible to monitor the deviation of the workpiece along the X, Y and Z directions and its rotation about the X, Y and Z axes, i.e. the points are selected according to the 3-2-1 locating principle); before testing, the auxiliary straight-line equation and the three-dimensional coordinates of the test point at each test position are calibrated and taken as standard data;
The test point and auxiliary straight line corresponding to each test position are obtained as follows: at the test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane in which it lies (shown in fig. 2) is the reference light plane; in this embodiment, the multi-line structured light sensor projects 3 to 10 laser stripes at a time (as shown in fig. 2: 9 stripes are projected at a time, and the 5th stripe is marked as the reference stripe); preferably, the laser stripes are parallel, and the distance between two adjacent stripes is less than 5 mm;
The inflection point of the reference laser stripe on the standard workpiece is marked as the test point, and the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line. As shown in fig. 3, the inflection-point coordinates are calculated; depending on the quality of the collected image, the end point at either end of the bent section of the laser stripe (point A or point B) is selected. In this embodiment, the coordinates of end point B at the lower edge of the bent section of the laser stripe are calculated;
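The auxiliary-line fitting described above can be sketched as follows (an illustrative numpy sketch; the least-squares fit via SVD and the function name `fit_line_3d` are assumptions, as the embodiment does not specify a fitting algorithm):

```python
import numpy as np

def fit_line_3d(points):
    """Fit a 3D line to inflection points by least squares (PCA via SVD).

    points: (N, 3) array of inflection-point coordinates.
    Returns (centroid, unit direction); the line is p(t) = centroid + t * direction.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The principal axis of the centered point cloud is the best-fit direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    return centroid, direction / np.linalg.norm(direction)
```

The fitted line can later be rigidly transformed together with the initial rotation-translation estimate, as described in the following steps.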
During testing, a workpiece is placed on the detection station in the preset state (as shown in fig. 1, the position of the actually measured workpiece differs from that of the standard workpiece), and the multi-line structured light sensor acquires the three-dimensional coordinates of each test point and records them as measured data;
calculating a rotation translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinate data of the test points in the measured data; on the basis, each auxiliary straight line in the standard data is subjected to rotational translation, and intersection points between the auxiliary straight line and the corresponding reference light plane are recorded into a calibration point set;
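The initial rotation-translation matrix in the step above can be estimated from the corresponding calibrated and measured test points. A minimal numpy sketch using the SVD-based (Kabsch) rigid-body fit is given below; the patent does not name the estimation algorithm, so this choice and the function name are assumptions:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t such that R @ src_i + t ≈ dst_i (Kabsch / SVD method).

    src: (N, 3) calibrated test-point coordinates (standard data).
    dst: (N, 3) measured test-point coordinates.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```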
The distances between the three-dimensional coordinates of each test point in the measured data and the corresponding intersection points in the calibration point set are taken as the objective function, and an optimization method is iterated to obtain the optimal rotation-translation matrix meeting the convergence condition;
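The intersection of a rotated auxiliary line with its reference light plane, and the resulting distance objective, can be sketched as follows (illustrative numpy code; the plane representation n·x = d, the function names, and the data layout are assumptions not specified in the patent):

```python
import numpy as np

def line_plane_intersection(c, v, n, d):
    """Intersect the line p(t) = c + t*v with the plane n . x = d."""
    t = (d - n @ c) / (n @ v)
    return c + t * v

def objective(R, t, lines, planes, measured):
    """Distances between measured test points and the intersections of the
    rigidly moved auxiliary lines with their reference light planes.

    lines:    list of (c, v) line point/direction pairs from the standard data.
    planes:   list of (n, d) reference light planes.
    measured: list of measured test-point coordinates.
    """
    dists = []
    for (c, v), (n, d), s in zip(lines, planes, measured):
        c2, v2 = R @ c + t, R @ v            # rotate/translate the auxiliary line
        f = line_plane_intersection(c2, v2, n, d)
        dists.append(np.linalg.norm(s - f))
    return np.array(dists)
```

An optimizer (gradient descent, LM, or Gauss-Newton, as listed below) would iterate R and t to drive these distances under the convergence limits.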
and compensating the optimal rotation and translation matrix to the coordinates of each test point in the calibration process to obtain the position of the current workpiece, and finishing positioning.
Specifically, a plurality of workpieces are provided, one workpiece is manually selected as a standard workpiece, and the other workpieces are workpieces to be tested; and acquiring standard data by using the standard workpiece, and respectively calculating an optimized rotation and translation matrix for positioning other workpieces to be measured.
In order to ensure that the workpiece edge acquired by the sensor during actual measurement and the edge of the standard workpiece are the same-side edges, when the workpiece is placed on the detection station in the preset state, its deviation from the position of the standard workpiece during calibration is within 30 mm in the transverse and longitudinal directions and within 5° in angle.
Among them, the optimization method is a gradient descent method, the Levenberg-Marquardt (LM) method, or the Gauss-Newton method.
The objective function is expressed as min{‖S_i − F_i‖}, i = 1, 2, …, m, where m is the number of test points, S_i denotes the three-dimensional coordinates of the test point in the measured data at the i-th test position, and F_i denotes the three-dimensional coordinates of the intersection point in the calibration point set at the i-th test position;
The convergence condition is as follows: either one distance upper limit per test position, or a single overall upper limit on the objective function.
In specific operation: m distance upper limits are set, and the convergence condition is met if each of the m distance values obtained from the objective function is smaller than the upper limit at its corresponding position;
or: a total distance upper limit is set, and the convergence condition is met if the sum (or mean) of the m distance values obtained from the objective function is smaller than this total upper limit.
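The two convergence criteria just described can be sketched as follows (illustrative Python; the function name and the choice of sum versus mean aggregation are assumptions):

```python
def converged(dists, per_point_limits=None, total_limit=None, use_mean=False):
    """Check the two convergence criteria.

    Either every distance is below its own per-position upper limit, or the
    sum (or mean) of all distances is below a single total upper limit.
    """
    if per_point_limits is not None:
        return all(d < lim for d, lim in zip(dists, per_point_limits))
    agg = sum(dists) / len(dists) if use_mean else sum(dists)
    return agg < total_limit
```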
The multi-line structured light sensor can be arranged in the following two ways:
1) m multi-line structured light sensors are provided, each fixed above one test point; m equals the number of test points, and the relative poses of the sensors are calibrated; in this case, the global space coordinate system is set to the coordinate system of one of the multi-line structured light sensors;
2) a single multi-line structured light sensor is mounted on a robot, and the robot is taught according to the positions of the test points; the robot drives the sensor along the taught detection path to project laser stripes onto each test point in turn and acquire structured light images; in this case, hand-eye calibration is performed, and the global space coordinate system is set to the robot base coordinate system.
As an application of this embodiment, a method for grabbing a workpiece by using the visual positioning method of the present invention includes: and compensating the optimized rotation and translation matrix to the grabbing track of the robot, and guiding the grabbing robot to grab the workpiece according to the actual position.
As another application of this embodiment, a method for machining a workpiece using the visual positioning method of the invention includes: the optimized rotation-translation matrix is compensated into the machining trajectory of the robot, guiding the machining robot to machine the workpiece (e.g. welding, cutting) at its actual position.
In order to evaluate the accuracy of the positioning method, a method for evaluating the accuracy of the visual positioning method is also disclosed, which comprises the following steps:
s1, acquiring three-dimensional coordinates Q of specific points on the standard workpiece by using a standard instrumentjJ is 1,2 … … n, n represents the number of feature points; the standard instrument comprises a laser tracker, a three-coordinate machine and V-Sarrs; the specific points are points marked on the surface or edge of the standard workpiece in advance and at least comprise three non-collinear points;
detecting different test positions of the standard workpiece by using the multi-line structured light sensor, wherein the test result at least comprises three points on edges which are not on the same straight line; before testing, calibrating the auxiliary linear equation of each testing position and the three-dimensional coordinates of the testing point, and taking the auxiliary linear equation and the three-dimensional coordinates as standard data;
The test point and auxiliary straight line corresponding to each test position are obtained as follows: at the test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane in which it lies is the reference light plane; the inflection point of the reference laser stripe on the standard workpiece is marked as the test point; the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line;
s2, adjusting the position of the standard workpiece to present the position of the standard workpiece with other positions;
The standard instrument is used again to acquire the three-dimensional coordinates Q'_j of the specific points on the adjusted standard workpiece, and the rotation-translation matrix RT' between Q'_j and Q_j is solved by rigid-body transformation (Q'_j and Q_j are both expressed in the global space coordinate system);
The multi-line structured light sensor respectively acquires the three-dimensional coordinates of each test point and records the three-dimensional coordinates as measured data;
calculating a rotation translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinate data of the test points in the measured data; on the basis, each auxiliary straight line in the standard data is subjected to rotational translation, and intersection points between the auxiliary straight line and the corresponding reference light plane are recorded into a calibration point set;
The distances between the three-dimensional coordinates of each test point in the measured data and the corresponding intersection points in the calibration point set are taken as the objective function, and an optimization method is iterated to obtain the optimal rotation-translation matrix RT meeting the convergence condition;
s3, comparing the RT and the RT', judging whether the difference value between the rotation component and the translation component between the RT and the RT is smaller than a preset value, if so, the precision of the current multi-line structured light sensing meets the measurement requirement, and if not, the measurement requirement cannot be met.
In this embodiment, accuracy verification was performed 5 times, i.e. the pose of the standard workpiece was adjusted 5 times; after each adjustment, measurements were taken with the standard instrument and with the structured light sensor, the rotation-translation matrices were calculated and their rotation and translation components extracted, and the differences between the components obtained by the conventional method and by the present method were compared. The test data are shown in the following table, where dx, dy, dz denote the translation components along X, Y, Z in the rotation-translation matrix; rx, ry, rz denote the rotation components about X, Y, Z; error_dx, error_dy, error_dz denote the differences between the translation components obtained by the conventional method or the present method and those measured by the standard instrument; and error_rx, error_ry, error_rz denote the corresponding differences in the rotation components;
[Test-data table: image in the original publication, not reproduced here]
As can be seen from the table, the component values obtained by the present method are closer to those measured by the standard instrument, i.e. a more accurate rotation-translation matrix is obtained; compared with the conventional calculation method, the accuracy is markedly improved.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable others skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications thereof. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (8)

1. A visual positioning method for an object with only edge characteristics is provided, wherein at least two edges which are not on the same straight line are formed on the surface of the object with only edge characteristics; the method is characterized in that:
detecting different test positions of the standard workpiece by using a multi-line structured light sensor, wherein the test result at least comprises three points on edges which are not on the same straight line; before testing, calibrating the auxiliary linear equation of each testing position and the three-dimensional coordinates of the testing point, and taking the auxiliary linear equation and the three-dimensional coordinates as standard data;
the test point and the auxiliary straight line corresponding to each test position are obtained by the following method: at a test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane where this stripe lies is the reference light plane; the inflection point of the reference laser stripe on the standard workpiece is marked as the test point; the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line;
during testing, a workpiece is placed on a detection station according to a preset state, and the multi-line structured light sensor respectively acquires the three-dimensional coordinates of each test point and records the three-dimensional coordinates as measured data;
calculating a rotation-translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinates of the test points in the measured data; on this basis, each auxiliary straight line in the standard data is rotated and translated, and the intersection point between each auxiliary straight line and the corresponding reference light plane is recorded into a calibration point set;
taking the distances between the three-dimensional coordinates of each test point in the measured data and the intersection points in the corresponding calibration point set as an objective function, and iterating with an optimization method to obtain an optimal rotation-translation matrix that satisfies the convergence condition;
and compensating the optimal rotation-translation matrix into the coordinates of each test point recorded during calibration to obtain the position of the current workpiece, thereby completing positioning.
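The core geometric steps of claim 1 — estimating the rotation-translation from corresponding test points, and intersecting a rotated auxiliary straight line with the reference light plane — can be sketched as follows. This is an illustrative NumPy implementation using standard formulas (SVD/Kabsch rigid alignment and parametric line-plane intersection), not the patented code itself:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ≈ R @ src + t (Kabsch/SVD).
    src, dst: (n, 3) arrays of corresponding test-point coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def line_plane_intersection(p0, direction, plane_point, plane_normal):
    """Intersection of the (rotated) auxiliary line p0 + s*direction with
    the reference light plane given by a point on it and its normal."""
    p0, direction = np.asarray(p0, float), np.asarray(direction, float)
    s = np.dot(plane_point - p0, plane_normal) / np.dot(direction, plane_normal)
    return p0 + s * direction
```

Each intersection point produced this way would be appended to the calibration point set for the corresponding test position.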
2. A method of visually locating an object having only edge features according to claim 1, wherein the predetermined state means that the workpiece placed on the detection station deviates from the nominal workpiece position by no more than 30 mm in position and no more than 5° in orientation.
3. A method for visual positioning of an object having only edge features according to claim 1, wherein the optimization method is gradient descent, Levenberg-Marquardt (LM), or Gauss-Newton.
4. A method for visual positioning of an object having only edge features according to claim 1, characterized in that the objective function is expressed as min ||S_i - F_i||, i = 1, 2, …, m, where m represents the number of test points, S_i represents the three-dimensional coordinates of the test point in the measured data at the i-th test position, and F_i represents the three-dimensional coordinates of the intersection point in the calibration point set of the i-th test position;
the convergence condition is: the objective function falls below a single common upper limit value, or below one upper limit value set for each test position.
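A minimal sketch of the iteration in claim 4, using SciPy's least_squares and parametrizing the pose by three Euler angles plus a translation. For simplicity the intersection points F_i are held fixed rather than recomputed from the rotated auxiliary lines at each iterate — an assumption of this sketch, not the claimed procedure:

```python
import numpy as np
from scipy.optimize import least_squares

def euler_rt(p):
    """6-vector (rx, ry, rz, dx, dy, dz) -> R = Rz @ Ry @ Rx and t."""
    rx, ry, rz = p[:3]
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    r = np.array([[cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
                  [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
                  [-sy,     cy * sx,                cy * cx]])
    return r, np.asarray(p[3:], float)

def refine_rt(measured, intersections, p0=None):
    """Minimise sum_i ||S_i - (R @ F_i + t)||^2 over the six pose
    parameters, where S_i are the measured test points and F_i the
    calibration-set intersection points (held fixed here)."""
    measured = np.asarray(measured, float)
    intersections = np.asarray(intersections, float)

    def residuals(p):
        r, t = euler_rt(p)
        return (measured - (intersections @ r.T + t)).ravel()

    return least_squares(residuals, np.zeros(6) if p0 is None else p0).x
```

With a good initial pose (the workpiece is placed near its nominal position, per claim 2), such a local least-squares refinement converges quickly.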
5. The method of claim 1, wherein m multi-line structured light sensors are used, with one fixed above each test point;
or: a single multi-line structured light sensor is mounted on a robot; the robot is taught according to the positions of the test points; along the taught detection path, the robot drives the multi-line structured light sensor to project laser stripes onto each test point in turn and acquire the structured light images.
6. A method for workpiece grabbing using the visual positioning method of any one of claims 1 to 5, wherein the optimal rotation-translation matrix is compensated into the grabbing trajectory of the robot, guiding the grabbing robot to grab the workpiece at its actual position.
7. A method for machining a workpiece using the visual positioning method of any one of claims 1 to 5, wherein the optimal rotation-translation matrix is compensated into the machining trajectory of the robot, guiding the robot to machine the workpiece at its actual position.
8. A method for evaluating the accuracy of the visual positioning method, comprising the steps of:
s1, acquiring three-dimensional coordinates Q of specific points on the standard workpiece by using a standard instrumentjJ is 1,2 … … n, n represents the number of feature points; the standard instrument comprises a laser tracker, a three-coordinate machine and V-Sarrs; the specific point being on a standard workpiecePoints marked in advance on the surface or edge at least comprise three non-collinear points;
detecting different test positions of the standard workpiece by using the multi-line structured light sensor, wherein the test result at least comprises three points on edges which are not on the same straight line; before testing, calibrating the auxiliary linear equation of each testing position and the three-dimensional coordinates of the testing point, and taking the auxiliary linear equation and the three-dimensional coordinates as standard data;
the test point and the auxiliary straight line corresponding to each test position are obtained by the following method: at a test position, one laser stripe in the multi-line structured light is marked in advance as the reference laser stripe, and the light plane where this stripe lies is the reference light plane; the inflection point of the reference laser stripe on the standard workpiece is marked as the test point; the inflection-point coordinates of the laser stripes in the multi-line structured light on the standard workpiece are fitted to obtain the auxiliary straight line;
S2, adjusting the pose of the standard workpiece so that the standard workpiece presents a different pose;
acquiring again, by using the standard instrument, the three-dimensional coordinates Q'_j of the specific points on the adjusted standard workpiece, and solving the rotation-translation matrix RT' between Q'_j and Q_j by rigid-body transformation;
the multi-line structured light sensor respectively acquires the three-dimensional coordinates of each test point and records the three-dimensional coordinates as measured data;
calculating a rotation-translation matrix based on the three-dimensional coordinates of the test points recorded in the calibration process and the three-dimensional coordinates of the test points in the measured data; on this basis, each auxiliary straight line in the standard data is rotated and translated, and the intersection point between each auxiliary straight line and the corresponding reference light plane is recorded into a calibration point set;
taking the distances between the three-dimensional coordinates of each test point in the measured data and the intersection points in the corresponding calibration point set as an objective function, and iterating with an optimization method to obtain an optimal rotation-translation matrix RT that satisfies the convergence condition;
S3, comparing RT with RT', and judging whether the differences between their rotation components and between their translation components are smaller than preset values; if so, the precision of the current multi-line structured light sensor meets the measurement requirement; if not, it cannot meet the measurement requirement.
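The acceptance check of step S3 can be sketched as below. The tolerance values and the use of the relative rotation angle as a single rotation metric are illustrative assumptions of this sketch, not values from the patent:

```python
import numpy as np

def rt_close(rt_a, rt_b, trans_tol=0.5, rot_tol=0.01):
    """Accept the sensor if the two 4x4 rotation-translation matrices differ
    by less than trans_tol per translation component and less than rot_tol
    radians in relative rotation angle. Tolerances are placeholders."""
    dt = np.abs(rt_a[:3, 3] - rt_b[:3, 3])
    r_rel = rt_a[:3, :3].T @ rt_b[:3, :3]
    # Angle of the relative rotation from its trace, clipped for safety.
    cos_angle = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    return bool(np.all(dt < trans_tol) and angle < rot_tol)
```

Here rt_a would be the matrix RT from the structured light sensor and rt_b the reference matrix RT' from the standard instrument.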
CN202010939227.8A 2020-09-09 2020-09-09 Visual positioning method for workpiece with edge feature only, application and precision evaluation method Active CN112082482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010939227.8A CN112082482B (en) 2020-09-09 2020-09-09 Visual positioning method for workpiece with edge feature only, application and precision evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010939227.8A CN112082482B (en) 2020-09-09 2020-09-09 Visual positioning method for workpiece with edge feature only, application and precision evaluation method

Publications (2)

Publication Number Publication Date
CN112082482A true CN112082482A (en) 2020-12-15
CN112082482B CN112082482B (en) 2021-12-17

Family

ID=73732907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010939227.8A Active CN112082482B (en) 2020-09-09 2020-09-09 Visual positioning method for workpiece with edge feature only, application and precision evaluation method

Country Status (1)

Country Link
CN (1) CN112082482B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112082483A (en) * 2020-09-09 2020-12-15 易思维(杭州)科技有限公司 Positioning method and application of object with edge characteristics only and precision evaluation method
CN114111576A (en) * 2021-11-24 2022-03-01 易思维(杭州)科技有限公司 Aircraft skin clearance surface difference detection method and sensor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103162623A (en) * 2013-03-07 2013-06-19 大连理工大学 Stereoscopic measuring system for double vertically mounted cameras and calibration method
CN107421442A (en) * 2017-05-22 2017-12-01 天津大学 A kind of robot localization error online compensation method of externally measured auxiliary
JP2019018313A (en) * 2017-07-20 2019-02-07 エヌティーツール株式会社 Tip information acquisition device and tip information acquisition method
CN109794963A (en) * 2019-01-07 2019-05-24 南京航空航天大学 A kind of robot method for rapidly positioning towards curved surface member


Also Published As

Publication number Publication date
CN112082482B (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN112082483B (en) Positioning method and application of workpiece with edge characteristics only and precision evaluation method
US10357879B2 (en) Robot zero-point calibration device and method
CN108871216B (en) Robot porous contact type automatic measurement method based on visual guidance
CN111767354B (en) High-precision map precision evaluation method
CN112082482B (en) Visual positioning method for workpiece with edge feature only, application and precision evaluation method
CN102519400B (en) Large slenderness ratio shaft part straightness error detection method based on machine vision
US4638232A (en) Method and apparatus for calibrating a positioning system
CN102458779A (en) Robot calibration apparatus and method for same
US11293745B2 (en) Inspection master
CN113394141B (en) Quality evaluation system and method for chip structure defects
US20220230348A1 (en) Method and apparatus for determining a three-dimensional position and pose of a fiducial marker
CN106989670B (en) A kind of non-contact type high-precision large-scale workpiece tracking measurement method of robot collaboration
US7199881B2 (en) Apparatus for and method of measurements of components
CN103791868A (en) Space calibrating body and method based on virtual ball
CN113421310A (en) Method for realizing cross-field high-precision measurement based on motion position error compensation technology of grating ruler positioning
EP3707569B1 (en) Calibration of a stationary camera system for detecting the position of a mobile robot
CA1310092C (en) Method for determining position within the measuring volume of a coordinate measuring machine and the like and system therefor
CN108627103A (en) A kind of 2D laser measurement methods of parts height dimension
CN110940271A (en) Method for detecting, monitoring and intelligently carrying and installing large-scale industrial manufacturing of ships and the like based on space three-dimensional measurement and control network
CN112082481B (en) Precision evaluation method of visual detection system for detecting thread characteristics
CN109945839B (en) Method for measuring attitude of butt-jointed workpiece
CN107392899B (en) Automatic detection method for horizontal angle of grinding mark of steel ball grinding spot image
DE102016212651B4 (en) Method for measuring a workpiece using at least one reference body
CN113246146B (en) Method, device and system for error correction of parallel robot
CN113670280A (en) Verticality measuring device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051

Patentee after: Yi Si Si (Hangzhou) Technology Co.,Ltd.

Address before: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051

Patentee before: ISVISION (HANGZHOU) TECHNOLOGY Co.,Ltd.