CN117301078A - Robot vision calibration method and system - Google Patents
Robot vision calibration method and system

- Publication number: CN117301078A
- Application number: CN202311581165.8A
- Authority: CN (China)
- Prior art keywords: distortion, data set, real-time, weight, tangential
- Legal status: Granted (the status listed is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1692—Calibration of manipulator
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention relates to the technical field of machine vision calibration, and in particular to a robot vision calibration method and system. The method comprises the following steps: constructing a first data set, a second data set and a third data set for visual calibration of a robot, and acquiring a target image of a measured target in real time; calculating radial distortion weights and tangential distortion weights from the first, second and third data sets; establishing a vision calibration database using the third data set, the radial distortion weights and the tangential distortion weights; predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database; and determining the actual position of the measured target according to the third data set, the real-time radial distortion weights and the real-time tangential distortion weights. The invention takes into account the influence of both radial distortion and tangential distortion on the target image. Compared with existing robot vision calibration methods, it is more accurate and reliable, simple to implement, and easy to popularize.
Description
Technical Field
The invention relates to the technical field of machine vision calibration, and in particular to a robot vision calibration method and system.
Background
The manipulator is widely used in the warehouse logistics industry owing to its operational flexibility. Among the components of a robot, the vision system is one of the key parts: serving as the eyes of the robot, its performance directly determines the accuracy with which the robot grasps and moves. However, because of limitations of the camera lens manufacturing process, and because the camera's optical axis is not perpendicular to the plane of the measured object, the images acquired by the robot's vision system are distorted, mainly by radial distortion and tangential distortion, so that the robot cannot accurately identify the specific position of the measured object. Vision calibration of the robot is therefore necessary.
In the prior art, a distortion model is established to find the mapping between pixel points in an image and the real-world coordinate system, thereby realizing robot vision calibration. However, the distortion of images acquired by a vision system is often very complex, so solving the image's distortion model is difficult and typically involves a large amount of complex calculation. If the calculation is to be simplified, either the radial or the tangential part of the distortion must be ignored, which reduces the accuracy and reliability of the nonlinear distortion model and, in turn, the accuracy and reliability of the robot vision calibration.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a robot vision calibration method and system.
To achieve the above object, in a first aspect, the present invention provides a robot vision calibration method, the method comprising the steps of: constructing a first data set, a second data set and a third data set for visual calibration of a robot, and acquiring a target image of a measured target in real time; calculating radial distortion weights and tangential distortion weights using a distortion weight model from the first data set, the second data set, and the third data set; creating a visual calibration database using the third dataset, the radial distortion weights, and the tangential distortion weights; predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database; and determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model. The invention considers the influence of radial distortion and tangential distortion on the target image, and compared with the existing robot vision calibration method, the method has higher accuracy and reliability, is simple and easy to implement, and is convenient to popularize.
Optionally, the constructing the first data set, the second data set and the third data set for the vision calibration of the robot, and collecting the target image of the measured target in real time includes the following steps:
obtaining a distortion template, and determining a radial distortion model and a tangential distortion model;
setting first distortion coefficients in a plurality of groups of radial distortion models, simulating the distortion templates through the radial distortion models to obtain radial distortion images corresponding to each group of the first distortion coefficients, and further establishing the first data set;
setting second distortion coefficients in a plurality of groups of tangential distortion models, simulating the distortion templates through the tangential distortion models to obtain tangential distortion images corresponding to each group of second distortion coefficients, and further establishing a second data set;
and combining the radial distortion model and the tangential distortion model into a combined distortion model, combining each group of first distortion coefficients and second distortion coefficients one by one to form a plurality of groups of combined distortion coefficients, simulating the distortion template through the combined distortion model to obtain a combined distortion image corresponding to each group of combined distortion coefficients, and further establishing the third data set.
Furthermore, different distortion images are obtained by adjusting distortion coefficients in the radial distortion model, the tangential distortion model and the combined distortion model, so that different data sets are easy to realize, and a reliable data basis is provided for subsequent calculation of corresponding distortion weights.
Optionally, said calculating radial distortion weights and tangential distortion weights using a distortion weight model from said first data set, said second data set and said third data set comprises the steps of:
calculating the first graph similarity of the radial distortion image and the combined distortion image corresponding to the same first distortion coefficient in the first data set and the third data set;
calculating a second graph similarity of the tangential distortion image and the combined distortion image corresponding to the same second distortion coefficient in the second data set and the third data set;
and calculating the radial distortion weight and the tangential distortion weight by using the distortion weight model according to the first graph similarity and the second graph similarity.
Furthermore, calculating the radial distortion weight and the tangential distortion weight can provide a reliable data basis for distributing accurate distortion coefficients to the combined distortion model subsequently, so that the accuracy and the reliability of the obtained actual position of the measured target are improved.
Optionally, the distortion weight model satisfies the following relationship:
(formula not reproduced in the source text)

wherein $\omega_r$ is the radial distortion weight; $k$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the first distortion coefficients in the combined distortion coefficients; $s$ is the first graph similarity; an overbar denotes taking the mean value; $\mathbb{S}_1$ is the set of first graph similarities; $\mathbb{S}_2$ is the set of second graph similarities; $\omega_t$ is the tangential distortion weight; $K$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the second distortion coefficients in the combined distortion coefficients; and $S$ is the second graph similarity.
Optionally, the predicting the real-time radial distortion weight and the real-time tangential distortion weight of the target image based on the visual calibration database comprises the steps of:
taking the combined distortion image as an input, and establishing a first convolution neural network by taking the radial distortion weight and the tangential distortion weight as outputs;
training the first convolutional neural network by using the combined distortion image, the radial distortion weight and the tangential distortion weight in the vision calibration database to obtain a first prediction model;
and inputting the target image into the first prediction model, and further predicting the real-time radial distortion weight and the real-time tangential distortion weight of the target image.
Further, the combination distortion coefficient predicted value obtained later can be corrected by acquiring the real-time radial distortion weight and the real-time tangential distortion weight, so that the accuracy of vision calibration is improved.
Optionally, said determining the actual position of the measured object from the third dataset, the real-time radial distortion weights, the real-time tangential distortion weights and the correction model comprises the steps of:
establishing a second prediction model according to the third data set, and acquiring a combined distortion coefficient predicted value of the target image by using the second prediction model;
correcting the combined distortion coefficient predicted value by using the real-time radial distortion weight, the real-time tangential distortion weight and the correction model;
and carrying the corrected result into the combined distortion model to invert the target image to obtain a repair image, and determining the actual position of the measured target by using the repair image.
Optionally, the establishing a second prediction model according to the third data set, and using the second prediction model to obtain the combined distortion coefficient predicted value of the target image includes the following steps:
taking the combined distortion image as an input, and establishing a second convolution neural network by taking the combined distortion coefficient as an output;
training the second convolutional neural network by using the combined distortion image and the combined distortion coefficient in the third data set, so as to obtain a second prediction model;
and inputting the target image into the second prediction model so as to predict a combined distortion coefficient predicted value of the target image.
Furthermore, using the second prediction model to predict the combined distortion coefficients avoids establishing a complex distortion model while still taking multiple kinds of distortion into account, which both simplifies the process of visually calibrating the robot and improves the accuracy of the vision calibration.
Optionally, said correcting said combined distortion coefficient prediction value using said real-time radial distortion weight, said real-time tangential distortion weight, and said correction model comprises the steps of:
correcting a first distortion coefficient predicted value in the combined distortion coefficient predicted values by using the real-time radial distortion weight and the correction model to obtain a first distortion coefficient accurate value;
and correcting a second distortion coefficient predicted value in the combined distortion coefficient predicted values by using the real-time tangential distortion weight and the correction model to obtain a second distortion coefficient accurate value.
Furthermore, the accuracy of the combined distortion coefficient can be improved by correcting the combined distortion coefficient predicted value by using the real-time radial distortion weight and the real-time tangential distortion weight, so that the accuracy of visual calibration is further improved.
Optionally, the correction model satisfies the following relationship:
(formula not reproduced in the source text)

wherein $k_1$, $k_2$ and $k_3$ are the first distortion coefficient accurate values; $p_1$ and $p_2$ are the second distortion coefficient accurate values; $\omega_r$ is the radial distortion weight; $\omega_t$ is the tangential distortion weight; $k_1'$, $k_2'$ and $k_3'$ are the first distortion coefficient predicted values; and $p_1'$ and $p_2'$ are the second distortion coefficient predicted values.
In summary, the method provided by the invention predicts each distortion parameter of the combined distortion model using the second prediction model and the established visual calibration database, which avoids establishing a complex distortion model while still taking multiple kinds of distortion into account, thereby simplifying the vision calibration process of the robot and improving the accuracy of vision calibration. Meanwhile, the real-time radial distortion weight and the real-time tangential distortion weight of the target image are predicted based on the vision calibration database and are used to correct the predicted values of the combined distortion coefficients, so that the target image can be inverted under a determined combined distortion model to find the actual position of the measured target, further improving the accuracy of vision calibration. Compared with existing robot vision calibration methods, the method is more accurate and reliable, simple and feasible, and convenient to popularize.
In a second aspect, the invention provides a robot vision calibration system, which uses the robot vision calibration method provided by the invention, and the system comprises: the data construction and acquisition module is used for constructing a first data set, a second data set and a third data set for the vision calibration of the robot and acquiring a target image of a measured target in real time; the data processing module is used for calculating radial distortion weights and tangential distortion weights by using a distortion weight model according to the first data set, the second data set and the third data set; creating a visual calibration database using the third dataset, the radial distortion weights, and the tangential distortion weights; predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database; determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model; the data storage module is used for storing the data acquired and generated in the data construction and acquisition module and the data processing module; and the data output module is used for outputting the actual position of the measured target.
Furthermore, the system provided by the invention has the same advantages as the method provided by the invention, can improve the efficiency of the visual calibration of the robot, and is beneficial to promoting the development of the robot to a more intelligent direction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting in scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for calibrating robot vision according to an embodiment of the present invention;
fig. 2 is a schematic frame diagram of a robot vision calibration system according to an embodiment of the present invention.
Detailed Description
Specific embodiments of the invention will be described in detail below, it being noted that the embodiments described herein are for illustration only and are not intended to limit the invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that: no such specific details are necessary to practice the invention. In other instances, well-known circuits, software, or methods have not been described in detail in order not to obscure the invention.
Throughout the specification, references to "one embodiment," "an embodiment," "one example," or "an example" mean: a particular feature, structure, or characteristic described in connection with the embodiment or example is included within at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example," or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Moreover, those of ordinary skill in the art will appreciate that the illustrations provided herein are for illustrative purposes and that the illustrations are not necessarily drawn to scale.
It should be noted in advance that, in the alternative embodiments below, the same symbols or letters have the same meaning in all formulas unless separately described.
In an alternative embodiment, referring to fig. 1, the present invention provides a robot vision calibration method, comprising the steps of:
s1, constructing a first data set, a second data set and a third data set for visual calibration of a robot, and acquiring a target image of a measured target in real time.
The step S1 specifically includes the following steps:
s11, obtaining a distortion template, and determining a radial distortion model and a tangential distortion model.
Specifically, in the present embodiment, the distortion template is a clear image in which no distortion occurs, namely a color image generated using AI drawing. The radial distortion model and the tangential distortion model are prior art; for convenience in understanding the following description, the present embodiment gives their expressions in turn, in the standard form consistent with the coefficient definitions below:

Radial distortion model:

$$x_d = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad y_d = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

Tangential distortion model:

$$x_d = x + 2 p_1 x y + p_2 (r^2 + 2 x^2), \qquad y_d = y + p_1 (r^2 + 2 y^2) + 2 p_2 x y$$

wherein $x_d$ and $y_d$ are respectively the abscissa and ordinate of a distorted pixel point; $x$ and $y$ are respectively the abscissa and ordinate of the pixel point when no distortion occurs; $k_1$, $k_2$ and $k_3$ are the first distortion coefficients; $p_1$ and $p_2$ are the second distortion coefficients; and $r$ is an intermediate quantity with $r^2 = x^2 + y^2$.
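As a minimal illustration of these mappings, the sketch below writes them directly in code. Python is used here although the embodiment performs its simulations in matlab; the function names and the assumption of normalized image coordinates are illustrative, not taken from the patent:

```python
import numpy as np

def radial_distort(x, y, k1, k2, k3):
    """Radial distortion model: scale normalized coordinates by the radial polynomial."""
    r2 = x ** 2 + y ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

def tangential_distort(x, y, p1, p2):
    """Tangential distortion model applied to normalized coordinates (x, y)."""
    r2 = x ** 2 + y ** 2
    xd = x + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    yd = y + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return xd, yd

def combined_distort(x, y, k1, k2, k3, p1, p2):
    """Both models combined, matching the combined distortion model of step S14."""
    r2 = x ** 2 + y ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * factor + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    yd = y * factor + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return xd, yd
```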
Furthermore, the color image generated by using the AI drawing is not distorted, so that an accurate and reliable data basis is provided for the follow-up acquisition of the accurate and reliable first data set, the second data set and the third data set, and the acquisition of the actual position of the measured target is facilitated. In other alternative embodiments, the distortion template may be obtained in other ways as well.
S12, setting first distortion coefficients in a plurality of groups of radial distortion models, simulating the distortion templates through the radial distortion models to obtain radial distortion images corresponding to each group of the first distortion coefficients, and further establishing the first data set.
Specifically, in this embodiment, 100 sets of first distortion coefficients are set in total, and then each set of first distortion coefficients is sequentially brought into the radial distortion model, so that 100 different radial distortion images are simulated by using matlab based on the radial distortion model and the distortion template, each set of first distortion coefficients and the corresponding radial distortion image thereof are used as one set of radial distortion data, so that 100 sets of radial distortion data can be obtained in total, and a first data set is established by using the 100 sets of radial distortion data. For convenience of the following description, the present embodiment further sequentially numbers 100 sets of first distortion coefficients using positive integers of 1 to 100.
Furthermore, under the condition that an accurate and reliable distortion template is determined, the radial distortion images are simulated with matlab by adjusting the first distortion coefficients, so that the establishment of the first data set is easy to realize and an accurate and reliable data basis can be provided for acquiring the actual position of the measured target. Simulating a radial distortion image with matlab based on the radial distortion model and the distortion template is prior art and is not described in detail herein.
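Although the embodiment performs this simulation in matlab, the step can be sketched in Python with OpenCV. In the sketch below, the image size, coefficient ranges, and principal-point/focal-length choices are illustrative assumptions, and the mapping follows OpenCV's remap convention, in which the model is applied to the output pixel grid to find the template location to sample:

```python
import numpy as np
import cv2

template = cv2.imread("template.png")            # the distortion-free distortion template
h, w = template.shape[:2]
cx, cy, f = w / 2.0, h / 2.0, float(max(h, w))   # assumed principal point and focal scale

rng = np.random.default_rng(0)
first_coeffs = rng.uniform(-0.3, 0.3, size=(100, 3))  # 100 assumed groups of (k1, k2, k3)

# Normalized coordinates of every pixel of the output image.
u, v = np.meshgrid(np.arange(w), np.arange(h))
x, y = (u - cx) / f, (v - cy) / f
r2 = x ** 2 + y ** 2

first_data_set = []
for k1, k2, k3 in first_coeffs:
    factor = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    map_x = (x * factor * f + cx).astype(np.float32)  # where to sample the template
    map_y = (y * factor * f + cy).astype(np.float32)
    distorted = cv2.remap(template, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    first_data_set.append(((k1, k2, k3), distorted))
```

The second data set (step S13) and the third data set (step S14) can be generated the same way by substituting the tangential and combined coordinate mappings, respectively.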
S13, setting second distortion coefficients in a plurality of groups of tangential distortion models, simulating the distortion templates through the tangential distortion models to obtain tangential distortion images corresponding to the second distortion coefficients of each group, and further establishing the second data set.
Specifically, in this embodiment, 100 sets of second distortion coefficients are set in total, and then each set of second distortion coefficients is brought into the tangential distortion model in turn, so that 100 different tangential distortion images are simulated by using matlab based on the tangential distortion model and the distortion template, and each set of second distortion coefficients and the corresponding tangential distortion image are used as one set of tangential distortion data, so that 100 sets of tangential distortion data can be obtained in total, and a second data set is established by using the 100 sets of tangential distortion data. For convenience of the following description, the present embodiment further numbers 100 sets of second distortion coefficients sequentially using positive integers of 1 to 100.
Furthermore, under the condition that an accurate and reliable distortion template is determined, a tangential distortion image is simulated by using matlab through adjusting a second distortion coefficient, so that a second data set is easy to realize, and an accurate and reliable data basis can be provided for acquiring the actual position of a measured target. The modeling of tangential distortion images using matlab based on tangential distortion models and distortion templates is prior art and will not be described in detail herein.
S14, combining the radial distortion model and the tangential distortion model into a combined distortion model, combining each group of first distortion coefficients and second distortion coefficients one by one to form a plurality of groups of combined distortion coefficients, simulating the distortion template through the combined distortion model to obtain a combined distortion image corresponding to each group of combined distortion coefficients, and further establishing the third data set.
Specifically, in the present embodiment, the combined distortion model, that is, the radial and tangential models above combined in the standard form, satisfies the following relationship:

$$x_d = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)$$
$$y_d = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y$$

The first distortion coefficients and the second distortion coefficients with the same number are combined into a group of combined distortion coefficients, and each group of combined distortion coefficients keeps the number of the first and second distortion coefficients that compose it, so 100 groups of combined distortion coefficients are obtained. As can be seen from the combined distortion model, each group of combined distortion coefficients comprises a group of first distortion coefficients and a group of second distortion coefficients.
Further, each group of combined distortion coefficients is sequentially brought into the combined distortion model, 100 different combined distortion images are simulated by using matlab based on the combined distortion model and the distortion template, each group of combined distortion coefficients and the corresponding combined distortion image are used as one group of combined distortion data, so that 100 groups of combined distortion data can be obtained in total, and a third data set is established by using the 100 groups of combined distortion data.
Furthermore, under the condition that an accurate and reliable distortion template is determined, the combined distortion image is simulated by using matlab by adjusting the combined distortion coefficient, so that the establishment of a third data set is easy to realize, and an accurate and reliable data basis can be provided for acquiring the actual position of the measured target. The modeling of the combined distorted image using matlab based on the combined distortion model and distortion template is prior art and will not be described in detail herein.
S2, calculating radial distortion weights and tangential distortion weights according to the first data set, the second data set and the third data set by using a distortion weight model.
The step S2 specifically includes the following steps:
s21, calculating the first graph similarity of the radial distortion image and the combined distortion image corresponding to the same first distortion coefficient in the first data set and the third data set.
Specifically, in this embodiment, since the number of the combined distortion coefficient is the same as the numbers of the first distortion coefficient and the second distortion coefficient that constitute it, this step is to calculate the graph similarity of the radial distortion image corresponding to the first distortion coefficient and the combined distortion image, that is, the first graph similarity, in the case that the number of the first distortion coefficient is the same as the number of the combined distortion coefficient, and a total of 100 first graph similarities can be obtained.
Further, the calculation of the similarity of the graphs is known in the art, and will not be described in detail herein.
S22, calculating the second graph similarity of the tangential distortion image and the combined distortion image corresponding to the same second distortion coefficient in the second data set and the third data set.
Specifically, in this embodiment, since the number of each group of combined distortion coefficients is the same as the numbers of the first and second distortion coefficients that constitute it, this step calculates, for each case where the number of a group of second distortion coefficients matches the number of a group of combined distortion coefficients, the graph similarity between the corresponding tangential distortion image and combined distortion image, that is, the second graph similarity; a total of 100 second graph similarities can be obtained.
Further, the calculation of the similarity of the graphs is known in the art, and will not be described in detail herein.
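The patent leaves the similarity measure to the prior art. One common choice is the structural similarity index (SSIM); the sketch below assumes that choice, and that the paired images from the first and third data sets (same number, identical size) are compared in grayscale:

```python
import cv2
from skimage.metrics import structural_similarity

def graph_similarity(img_a, img_b):
    """SSIM between two equally sized images, one possible graph-similarity measure."""
    a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    return structural_similarity(a, b)

# First graph similarities: radial vs. combined image with the same number (S21);
# second graph similarities pair the tangential images instead (S22).
first_similarities = [
    graph_similarity(radial_img, combined_img)
    for (_, radial_img), (_, combined_img) in zip(first_data_set, third_data_set)
]
```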
S23, calculating the radial distortion weight and the tangential distortion weight by using the distortion weight model according to the first graph similarity and the second graph similarity.
Specifically, in the present embodiment, the distortion weight model satisfies the following relationship:
(formula not reproduced in the source text)

wherein $\omega_r$ is the radial distortion weight; $k$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the first distortion coefficients in the combined distortion coefficients; $s$ is the first graph similarity; an overbar denotes taking the mean value; $\mathbb{S}_1$ is the set of first graph similarities; $\mathbb{S}_2$ is the set of second graph similarities; $\omega_t$ is the tangential distortion weight; $K$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the second distortion coefficients in the combined distortion coefficients; and $S$ is the second graph similarity.
Furthermore, the radial distortion weight and the tangential distortion weight calculated by combining the image similarity can accurately reflect the influence of the radial distortion and the tangential distortion on the combined distortion image from an objective angle, so that the radial distortion weight and the tangential distortion weight can provide a reliable data basis for distributing accurate distortion coefficients to a combined distortion model subsequently, and further the accuracy and the reliability of the obtained actual position of the measured target are improved.
S3, establishing a visual calibration database by using the third data set, the radial distortion weight and the tangential distortion weight.
Specifically, in this embodiment, the radial distortion weight and the tangential distortion weight are supplemented to the corresponding combined distortion data, so as to obtain the visual calibration database.
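A minimal sketch of this supplementing step, assuming each combined-distortion record has a corresponding weight pair from step S2 (the record layout and variable names are illustrative):

```python
# third_data_set: list of (coefficients, image) pairs; radial_weights and
# tangential_weights: the per-record weights computed in step S2.
vision_calibration_db = [
    {"image": img, "coeffs": coeffs, "w_radial": wr, "w_tangential": wt}
    for (coeffs, img), wr, wt in zip(third_data_set, radial_weights, tangential_weights)
]
```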
S4, predicting the real-time radial distortion weight and the real-time tangential distortion weight of the target image based on the vision calibration database.
The step S4 specifically includes the following steps:
s41, taking the combined distortion image as an input, and establishing a first convolution neural network by taking the radial distortion weight and the tangential distortion weight as outputs.
Specifically, in this embodiment, the convolutional neural network is established as the prior art, and will not be described in detail herein.
S42, training the first convolutional neural network by using the combined distortion image, the radial distortion weight and the tangential distortion weight in the vision calibration database to obtain a first prediction model.
Specifically, in this embodiment, a combined distortion image, a radial distortion weight and a tangential distortion weight in 70 sets of data in the visual calibration database are selected as a training set, and the combined distortion image, the radial distortion weight and the tangential distortion weight in the remaining 30 sets of data are used as a verification set, so that training and verification of the first convolutional neural network are completed, and a first prediction model is obtained.
Further, training and verification of convolutional neural networks is prior art and will not be described in detail herein.
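The patent treats the network architecture and training as prior art. The PyTorch sketch below is therefore only one plausible realization: the layer sizes, input resolution, and hyperparameters are assumptions, and only the 70/30 split follows the embodiment:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class WeightNet(nn.Module):
    """CNN that maps a combined distortion image to (radial weight, tangential weight)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # two outputs: radial and tangential weights

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_first_model(images, weights, epochs=100):
    """images: (N, 3, H, W) float tensor; weights: (N, 2) tensor of (w_r, w_t)."""
    n_train = int(0.7 * len(images))  # 70/30 train/verification split as in the embodiment
    loader = DataLoader(TensorDataset(images[:n_train], weights[:n_train]),
                        batch_size=16, shuffle=True)
    model, loss_fn = WeightNet(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```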
S43, inputting the target image into the first prediction model, and further predicting the real-time radial distortion weight and the real-time tangential distortion weight of the target image.
Specifically, in this embodiment, the combination distortion coefficient predicted value obtained later can be corrected by acquiring the real-time radial distortion weight and the real-time tangential distortion weight, so as to improve the accuracy of visual calibration.
S5, determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model.
The step S5 specifically includes the following steps:
s51, a second prediction model is established according to the third data set, and a combined distortion coefficient predicted value of the target image is obtained by using the second prediction model.
The step S51 specifically further includes the following steps:
s511, taking the combined distortion image as an input, and establishing a second convolution neural network for the output by the combined distortion coefficient.
Specifically, in this embodiment, the convolutional neural network is established as the prior art, and will not be described in detail herein.
And S512, training the second convolutional neural network by using the combined distortion image and the combined distortion coefficient in the third data set, and further obtaining a second prediction model.
Specifically, in this embodiment, 70 sets of combined distortion data in the third data set are selected as the training set, and the remaining 30 sets of combined distortion data are used as the verification set, so as to complete training and verification of the second convolutional neural network and obtain the second prediction model.
Further, training and verification of convolutional neural networks is prior art and will not be described in detail herein.
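Under the same assumed architecture as the first-model sketch above, the second prediction model differs only in its regression target, the five combined distortion coefficients:

```python
class CoeffNet(WeightNet):
    """Same assumed backbone as WeightNet above; regresses (k1, k2, k3, p1, p2) instead."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(64, 5)  # five outputs: k1, k2, k3, p1, p2
```

Training proceeds exactly as in train_first_model, with the combined distortion coefficients of the third data set as the targets.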
S513, inputting the target image into the second prediction model, and further predicting a combined distortion coefficient predicted value of the target image.
Specifically, in this embodiment, the second prediction model is used to implement prediction of the combined distortion coefficient, so that multiple distortion amounts can be considered while avoiding building a complex distortion model, which not only simplifies the process of performing vision calibration on the robot, but also improves the accuracy of vision calibration.
S52, correcting the combined distortion coefficient predicted value by using the real-time radial distortion weight, the real-time tangential distortion weight and the correction model.
The step S52 specifically further includes the following steps:
s521, correcting the first distortion coefficient predicted value in the combined distortion coefficient predicted value by using the real-time radial distortion weight and the correction model to obtain a first distortion coefficient accurate value.
S522, correcting a second distortion coefficient predicted value in the combined distortion coefficient predicted values by using the real-time tangential distortion weight and the correction model to obtain a second distortion coefficient accurate value.
Specifically, in the present embodiment, the correction model satisfies the following relationship:
(formula not reproduced in the source text)

wherein $k_1$, $k_2$ and $k_3$ are the first distortion coefficient accurate values; $p_1$ and $p_2$ are the second distortion coefficient accurate values; $\omega_r$ is the radial distortion weight; $\omega_t$ is the tangential distortion weight; $k_1'$, $k_2'$ and $k_3'$ are the first distortion coefficient predicted values; and $p_1'$ and $p_2'$ are the second distortion coefficient predicted values. Bringing the real-time radial distortion weight and the real-time tangential distortion weight obtained in step S43, together with the combined distortion coefficient predicted values obtained in step S513, into this relationship yields the first distortion coefficient accurate values and the second distortion coefficient accurate values.
Further, the combined distortion coefficient predicted values output by the second prediction model could be used directly to repair the target image, but they deviate somewhat, higher or lower, from the true combined distortion coefficients, so an ideal repair effect is often not achieved. To further improve the repair of the target image, the real-time radial distortion weight and the real-time tangential distortion weight are used to correct the combined distortion coefficient predicted values; that is, the magnitude of the influence of radial and tangential distortion on the target image is used to redistribute the numerical values among the first and second distortion coefficient predicted values, yielding more reasonable first and second distortion coefficients, namely the first and second distortion coefficient accurate values. This improves the accuracy of the combined distortion coefficients and, in turn, the accuracy of vision calibration.
And S53, introducing the corrected result into the combined distortion model to invert the target image to obtain a repair image, and determining the actual position of the measured target by using the repair image.
Specifically, in this embodiment, the first distortion coefficient accurate values and the second distortion coefficient accurate values are brought into the combined distortion model, so that the resulting model can accurately reflect the distortion process of the target image. Since the coordinates of each pixel on the target image are known, that is, the distorted coordinates $(x_d, y_d)$ of each pixel are known, the coordinates $(x, y)$ of each pixel before distortion can be calculated from the combined distortion model once the accurate values of the first and second distortion coefficients are known. The appearance of the target image without distortion, namely the repair image, is thereby restored, and the vision calibration of the robot is realized using the repair image.
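Inverting the combined model has no closed form; a standard practical route, once the accurate coefficient values are known, is the iterative undistortion implemented in OpenCV. In the sketch below the camera matrix is an illustrative assumption matching the normalized coordinates used for simulation:

```python
import numpy as np
import cv2

def repair_image(target_img, k1, k2, k3, p1, p2):
    """Invert the combined distortion model to recover the repair image."""
    h, w = target_img.shape[:2]
    f = float(max(h, w))                       # assumed focal scale
    camera_matrix = np.array([[f, 0.0, w / 2.0],
                              [0.0, f, h / 2.0],
                              [0.0, 0.0, 1.0]])
    # OpenCV's distortion coefficient order is (k1, k2, p1, p2, k3).
    dist_coeffs = np.array([k1, k2, p1, p2, k3])
    return cv2.undistort(target_img, camera_matrix, dist_coeffs)
```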
Further, the repair image is used to determine the actual position of the object to be measured. The determination of the actual position of the object to be measured using the repair image involves the transformation of the pixel coordinate system into the world coordinate system, and is not described in detail here since it is the prior art.
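For a planar measured object, that prior-art transformation can be sketched as a homography estimated from reference correspondences; the four point pairs below are illustrative assumptions:

```python
import numpy as np
import cv2

# Pixel coordinates of four reference points in the repair image and their known
# world coordinates on the measured plane (illustrative values, world units in metres).
pixel_pts = np.float32([[100, 100], [500, 100], [500, 400], [100, 400]])
world_pts = np.float32([[0.0, 0.0], [0.4, 0.0], [0.4, 0.3], [0.0, 0.3]])

H = cv2.getPerspectiveTransform(pixel_pts, world_pts)

def pixel_to_world(u, v):
    """Map a pixel of the repair image to world coordinates on the measured plane."""
    pt = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return pt[0, 0]  # (X, Y) on the measured plane
```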
It should be noted that, in some cases, the actions described in the specification may be performed in a different order and still achieve desirable results, and in this embodiment, the order of steps is merely provided to make the embodiment more clear, and it is convenient to describe the embodiment without limiting it.
In an alternative embodiment, referring to fig. 2, the present invention further provides a robot vision calibration system, which uses a robot vision calibration method provided by the present invention, and the system includes a data construction and acquisition module A1, a data processing module A2, a data storage module A3, and a data output module A4.
The data construction and acquisition module A1 is used for constructing a first data set, a second data set and a third data set for the vision calibration of the robot, and acquiring a target image of a measured target in real time.
Specifically, in this embodiment, the data construction and acquisition module A1 specifically executes the content described in step S1, which is not described herein.
Further, in other optional embodiments, the data construction and acquisition module A1 may be further specifically divided into a data construction sub-module and a data acquisition sub-module, where the data construction sub-module is configured to construct a first data set, a second data set, and a third data set for the vision calibration of the robot, and the data acquisition sub-module is configured to acquire the target image of the measured target in real time.
The data processing module A2 is used for calculating radial distortion weights and tangential distortion weights by using a distortion weight model according to the first data set, the second data set and the third data set; creating a visual calibration database using the third dataset, the radial distortion weights, and the tangential distortion weights; predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database; and determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model.
Specifically, in this embodiment, the data processing module A2 is connected to the data construction and collection module A1 through a data line, and the data processing module A2 specifically executes the contents described in steps S2 to S5, which are not described herein.
Further, in other alternative embodiments, the data processing module A2 may be connected to the data construction and acquisition module A1 in other manners.
The data storage module A3 is configured to store data collected and generated in the data construction and collection module A1 and the data processing module A2.
Specifically, in this embodiment, the data storage module A3 is connected to the data construction and acquisition module A1 and the data processing module A2 through data lines, and the data stored in the data storage module A3 includes a first data set, a second data set, a third data set, a target image of the target to be measured, a visual calibration database, a real-time radial distortion weight, a real-time tangential distortion weight, and an actual position of the target to be measured.
Further, in other alternative embodiments, the data storage module A3 may be connected to the data construction and acquisition module A1 and the data processing module A2 in other manners.
The data output module A4 is used for outputting the actual position of the measured object.
Specifically, in the present embodiment, the data output module A4 is connected to the data storage module A3 through a data line.
Further, in other alternative embodiments, the data output module A4 may be integrated with the data storage module A3, and the data output module A4 may output other data in the data storage module A3, such as the first data set, the second data set, the third data set, and so on.
In summary, the invention realizes the robot vision calibration by removing the distortion of the target image, thereby obtaining the accurate actual position of the measured target. The method provided by the invention predicts each distortion parameter in the combined distortion model by using the second prediction model and the established visual calibration database, and can take various distortion quantities into consideration while avoiding establishing a complex distortion model, thereby simplifying the visual calibration process of the robot and improving the accuracy of visual calibration. Meanwhile, the real-time radial distortion weight and the real-time tangential distortion weight of the target image are predicted based on the vision calibration database, and the real-time radial distortion weight and the real-time tangential distortion weight are utilized to correct the predicted value of the combined distortion coefficient, so that the target image can be inverted under the condition of determining the combined distortion model to determine the actual position of the measured target, and the accuracy of vision calibration is further improved. Compared with the existing robot vision calibration method, the method has higher accuracy and reliability, is simple and feasible, and is convenient to popularize. In addition, the system provided by the invention has the same advantages as the method provided by the invention, can improve the efficiency of the visual calibration of the robot, and is beneficial to promoting the development of the robot to a more intelligent direction.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention, and are intended to be included within the scope of the appended claims and description.
Claims (10)
1. A method for calibrating the vision of a robot, comprising the steps of:
constructing a first data set, a second data set and a third data set for visual calibration of a robot, and acquiring a target image of a measured target in real time;
calculating radial distortion weights and tangential distortion weights using a distortion weight model from the first data set, the second data set, and the third data set;
creating a visual calibration database using the third dataset, the radial distortion weights, and the tangential distortion weights;
predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database;
and determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model.
2. The method of claim 1, wherein constructing the first data set, the second data set, and the third data set for the vision calibration of the robot, and acquiring the target image of the target in real time comprises the steps of:
obtaining a distortion template, and determining a radial distortion model and a tangential distortion model;
setting first distortion coefficients in a plurality of groups of radial distortion models, simulating the distortion templates through the radial distortion models to obtain radial distortion images corresponding to each group of the first distortion coefficients, and further establishing the first data set;
setting second distortion coefficients in a plurality of groups of tangential distortion models, simulating the distortion templates through the tangential distortion models to obtain tangential distortion images corresponding to each group of second distortion coefficients, and further establishing a second data set;
and combining the radial distortion model and the tangential distortion model into a combined distortion model, combining each group of first distortion coefficients and second distortion coefficients one by one to form a plurality of groups of combined distortion coefficients, simulating the distortion template through the combined distortion model to obtain a combined distortion image corresponding to each group of combined distortion coefficients, and further establishing the third data set.
3. The robot vision calibration method of claim 2, wherein said calculating radial and tangential distortion weights using a distortion weight model from said first, second and third data sets comprises the steps of:
calculating the first graph similarity of the radial distortion image and the combined distortion image corresponding to the same first distortion coefficient in the first data set and the third data set;
calculating a second graph similarity of the tangential distortion image and the combined distortion image corresponding to the same second distortion coefficient in the second data set and the third data set;
and calculating the radial distortion weight and the tangential distortion weight by using the distortion weight model according to the first graph similarity and the second graph similarity.
4. A method of calibrating robot vision according to claim 3, wherein the distortion weight model satisfies the relationship:
(formula not reproduced in the source text)

wherein $\omega_r$ is the radial distortion weight; $k$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the first distortion coefficients in the combined distortion coefficients; $s$ is the first graph similarity; an overbar denotes taking the mean value; $\mathbb{S}_1$ is the set of first graph similarities; $\mathbb{S}_2$ is the set of second graph similarities; $\omega_t$ is the tangential distortion weight; $K$ is the ratio of the maximum distortion coefficient to the minimum distortion coefficient among the second distortion coefficients in the combined distortion coefficients; and $S$ is the second graph similarity.
5. A method of robot vision calibration as claimed in claim 3, wherein predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database comprises the steps of:
taking the combined distortion image as an input, and establishing a first convolution neural network by taking the radial distortion weight and the tangential distortion weight as outputs;
training the first convolutional neural network by using the combined distortion image, the radial distortion weight and the tangential distortion weight in the vision calibration database to obtain a first prediction model;
and inputting the target image into the first prediction model, and further predicting the real-time radial distortion weight and the real-time tangential distortion weight of the target image.
6. The method of robot vision calibration of claim 2, wherein said determining the actual position of the object under test from the third dataset, the real-time radial distortion weights, the real-time tangential distortion weights, and the correction model comprises the steps of:
establishing a second prediction model according to the third data set, and acquiring a combined distortion coefficient predicted value of the target image by using the second prediction model;
correcting the combined distortion coefficient predicted value by using the real-time radial distortion weight, the real-time tangential distortion weight and the correction model;
and carrying the corrected result into the combined distortion model to invert the target image to obtain a repair image, and determining the actual position of the measured target by using the repair image.
7. The method of claim 6, wherein the establishing a second prediction model from the third data set and using the second prediction model to obtain the combined distortion coefficient prediction value of the target image comprises the steps of:
taking the combined distortion image as an input, and establishing a second convolution neural network by taking the combined distortion coefficient as an output;
training the second convolutional neural network by using the combined distortion image and the combined distortion coefficient in the third data set, so as to obtain a second prediction model;
and inputting the target image into the second prediction model so as to predict a combined distortion coefficient predicted value of the target image.
8. The robot vision calibration method of claim 6, wherein said correcting said combined distortion factor predictor using said real-time radial distortion weights, said real-time tangential distortion weights, and said correction model comprises the steps of:
correcting a first distortion coefficient predicted value in the combined distortion coefficient predicted values by using the real-time radial distortion weight and the correction model to obtain a first distortion coefficient accurate value;
and correcting a second distortion coefficient predicted value in the combined distortion coefficient predicted values by using the real-time tangential distortion weight and the correction model to obtain a second distortion coefficient accurate value.
9. The method of claim 8, wherein the correction model satisfies the following relationship:
(formula not reproduced in the source text)

wherein $k_1$, $k_2$ and $k_3$ are said first distortion coefficient accurate values; $p_1$ and $p_2$ are said second distortion coefficient accurate values; $\omega_r$ is the radial distortion weight; $\omega_t$ is the tangential distortion weight; $k_1'$, $k_2'$ and $k_3'$ are said first distortion coefficient predicted values; and $p_1'$ and $p_2'$ are said second distortion coefficient predicted values.
10. A robot vision calibration system using a robot vision calibration method as claimed in any one of claims 1 to 9, comprising:
the data construction and acquisition module is used for constructing a first data set, a second data set and a third data set for the vision calibration of the robot and acquiring a target image of a measured target in real time;
the data processing module is used for calculating radial distortion weights and tangential distortion weights by using a distortion weight model according to the first data set, the second data set and the third data set; creating a visual calibration database using the third dataset, the radial distortion weights, and the tangential distortion weights; predicting real-time radial distortion weights and real-time tangential distortion weights of the target image based on the vision calibration database; determining the actual position of the measured target according to the third data set, the real-time radial distortion weight, the real-time tangential distortion weight and the correction model;
the data storage module is used for storing the data acquired and generated in the data construction and acquisition module and the data processing module;
and the data output module is used for outputting the actual position of the measured target.
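A structural sketch of how the four claimed modules might be organised in software; every class and method name is hypothetical, and the unimplemented methods stand in for the corresponding method steps of claims 1 to 9.

```python
# Hedged structural sketch of the claim-10 system (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class DataStorageModule:
    """Stores the data acquired and generated by the other modules."""
    records: dict = field(default_factory=dict)

    def save(self, key: str, value) -> None:
        self.records[key] = value

class DataConstructionAndAcquisitionModule:
    """Constructs the first, second and third data sets and acquires
    target images of the measured target in real time."""
    def construct_data_sets(self):
        raise NotImplementedError  # per the method steps of claims 1 to 5

    def acquire_target_image(self):
        raise NotImplementedError

class DataProcessingModule:
    """Computes the distortion weights, builds the vision calibration
    database, predicts real-time weights, and determines the actual
    position of the measured target."""
    def locate_target(self, target_image):
        raise NotImplementedError  # per the method steps of claims 6 to 9

class DataOutputModule:
    """Outputs the actual position of the measured target."""
    def output_position(self, position) -> None:
        print(position)
```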
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311581165.8A CN117301078B (en) | 2023-11-24 | 2023-11-24 | Robot vision calibration method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311581165.8A CN117301078B (en) | 2023-11-24 | 2023-11-24 | Robot vision calibration method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117301078A (en) | 2023-12-29
CN117301078B CN117301078B (en) | 2024-03-12 |
Family
ID=89285079
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311581165.8A CN117301078B (en) | Robot vision calibration method and system | 2023-11-24 | 2023-11-24
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117301078B (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030035100A1 (en) * | 2001-08-02 | 2003-02-20 | Jerry Dimsdale | Automated lens calibration |
RU2321888C1 (en) * | 2006-10-16 | 2008-04-10 | Государственное образовательное учреждение высшего профессионального образования Курский государственный технический университет | Method for calibrating distortion of an optical-electronic device |
CN102452081A (en) * | 2010-10-21 | 2012-05-16 | 财团法人工业技术研究院 | Method and device for correcting system parameters of mechanical arm |
US20140340508A1 (en) * | 2011-12-26 | 2014-11-20 | Mitsubishi Heavy Industries, Ltd. | Method for calibrating camera measurement system |
CN103372862A (en) * | 2012-04-12 | 2013-10-30 | 精工爱普生株式会社 | Robot system, calibration method of robot system, robot, calibration device, and digital camera |
CN103411621A (en) * | 2013-08-09 | 2013-11-27 | 东南大学 | Indoor-mobile-robot-oriented optical flow field vision/inertial navigation system (INS) combined navigation method |
CN111325674A (en) * | 2018-12-17 | 2020-06-23 | 北京京东尚科信息技术有限公司 | Image processing method, device and equipment |
CN109632085A (en) * | 2018-12-29 | 2019-04-16 | 中国计量科学研究院 | A kind of low-frequency vibration calibration method based on monocular vision |
US20220138985A1 (en) * | 2020-10-29 | 2022-05-05 | Black Sesame Technologies Inc. | Method, apparatus, electronic device and storage medium for image distortion calibrating |
CN112258588A (en) * | 2020-11-13 | 2021-01-22 | 江苏科技大学 | Calibration method and system of binocular camera and storage medium |
CN113450418A (en) * | 2021-06-24 | 2021-09-28 | 深圳市明日系统集成有限公司 | Improved method, device and system for underwater calibration based on complex distortion model |
CN114298923A (en) * | 2021-12-13 | 2022-04-08 | 吉林大学 | Lens evaluation and image restoration method for machine vision measurement system |
CN115170435A (en) * | 2022-07-28 | 2022-10-11 | 上海海洋大学 | Image geometric distortion correction method based on Unet network |
KR102518913B1 (en) * | 2022-12-14 | 2023-04-10 | 라온피플 주식회사 | Method and apparatus for managing performance of artificial intelligence model |
Non-Patent Citations (2)
Title |
---|
于舒春; 朱延河; 闫继宏; 赵杰: "Calibration of a stereo vision system based on a novel binocular mechanism", 机器人 (Robot), no. 04, pages 353-356 *
唐东林; 游传坤; 丁超; 龙再勇; 汤炎锦: "Binocular vision obstacle detection system for a wall-climbing robot", 机械科学与技术 (Mechanical Science and Technology), no. 05, pages 765-772 *
Also Published As
Publication number | Publication date |
---|---|
CN117301078B (en) | 2024-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI742382B (en) | | Neural network system for vehicle parts recognition executed by computer, method for vehicle part recognition through neural network system, device and computing equipment for vehicle part recognition
CN106599830B (en) | | Face key point positioning method and device
CN111950396B (en) | | Meter reading neural network identification method
CN115439694A (en) | | High-precision point cloud completion method and device based on deep learning
CN111833237A (en) | | Image registration method based on convolutional neural network and local homography transformation
CN114283320B (en) | | Branch-free structure target detection method based on full convolution
CN115660233A (en) | | Photovoltaic power prediction method and device, electronic equipment and storage medium
CN108628164A (en) | | A kind of semi-supervised flexible measurement method of industrial process based on Recognition with Recurrent Neural Network model
CN103679639B (en) | | Image denoising method and device based on non-local mean value
CN111476307A (en) | | Lithium battery surface defect detection method based on depth field adaptation
CN116707331B (en) | | Inverter output voltage high-precision adjusting method and system based on model prediction
JP2024528419A (en) | | Method and apparatus for updating an object detection model
CN116524062A (en) | | Diffusion model-based 2D human body posture estimation method
CN115797808A (en) | | Unmanned aerial vehicle inspection defect image identification method, system, device and medium
CN114298923A (en) | | Lens evaluation and image restoration method for machine vision measurement system
CN117301078B (en) | | Robot vision calibration method and system
CN111553954B (en) | | Online luminosity calibration method based on direct method monocular SLAM
CN113536926A (en) | | Human body action recognition method based on distance vector and multi-angle self-adaptive network
CN117095236A (en) | | Method and system for evaluating test accuracy of blade root wheel groove
CN112329845A (en) | | Method and device for replacing paper money, terminal equipment and computer readable storage medium
CN109886105B (en) | | Price tag identification method, system and storage medium based on multi-task learning
CN116596915A (en) | | Blind image quality evaluation method based on multi-scale characteristics and long-distance dependence
CN110211122A (en) | | A kind of detection image processing method and processing device
CN112488125B (en) | | Reconstruction method and system based on high-speed visual diagnosis and BP neural network
CN116123040A (en) | | Fan blade state detection method and system based on multi-mode data fusion
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||