CN118544360A - Robot vision detection method, system, terminal and medium based on laser compensation
Abstract
The application is suitable for the technical field of computer vision and provides a robot vision detection method, system, terminal and medium based on laser compensation. The method includes first acquiring end marker image set information and then performing the following processing on each piece of end marker image information: acquiring first pixel coordinate information of the theoretical position of the tail end of the mechanical arm, carrying out distortion correction processing on the end marker image information according to the first pixel coordinate information and the distortion coefficient set information, determining second pixel coordinate information of the actually measured position of the tail end of the mechanical arm, effectively determining coordinate error value information of the tail end of the mechanical arm, and finally carrying out compensation processing on the second pixel coordinate information to generate compensated pixel coordinate information. The application can compensate the global error, bringing the accuracy of camera detection close to the motion error detected by the laser tracker, greatly improving the positioning accuracy of the mechanical arm and realizing monocular vision precision detection oriented to planar mechanical arm control.
Description
Technical Field
The application relates to the technical field of computer vision, in particular to a robot vision detection method, a system, a terminal and a medium based on laser compensation.
Background
Computer vision technology was first applied to robot systems in the 1980s; compared with traditional robots, vision-guided robots perform better in adaptability, control precision, robustness and the like. With the increasing use of robots in industrial production, mechanical arm visualization has become an important research direction in the robotics field and is widely regarded as a key direction in the development of modern high technology. The accuracy of mechanical arm visualization directly affects the motion accuracy of operations guided by the mechanical arm; that accuracy in turn depends on the vision calibration of the mechanical arm, whose accuracy directly affects the accuracy of the industrial mechanical arm's motion feedback, while the complexity of calibration determines how quickly the mechanical arm can be calibrated. Achieving high-accuracy positioning, identification and detection is therefore of great importance to successful operation of the mechanical arm.
At present, in monocular vision precision detection for planar mechanical arm control, image deformation occurs when the moving mechanical arm is captured in real time, which causes serious deviation in the detection of the calibration object and degrades the positioning accuracy of the mechanical arm; the accuracy of such detection remains low and needs further improvement.
Disclosure of Invention
Based on the above, the embodiment of the application provides a robot vision detection method, a system, a terminal and a medium based on laser compensation, so as to solve the problem of lower accuracy in the prior art.
In a first aspect, an embodiment of the present application provides a robot vision detection method based on laser compensation, where the method includes:
Continuously acquiring terminal marker image set information based on a preset camera, wherein the terminal marker image set information comprises a plurality of continuous terminal marker image information, a shooting object of the terminal marker image information is a designated mechanical arm terminal, and at least two light source markers are arranged at the mechanical arm terminal;
For each piece of the end marker image information: acquiring first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm, carrying out distortion correction processing on the end marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and determining second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm;
Determining coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset tail end position motion error calculation function;
And carrying out compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information.
Compared with the prior art, the beneficial effects are as follows. According to the robot vision detection method based on laser compensation provided by the embodiment of the application, the terminal device can first acquire end marker image set information with a camera and then perform the following processing on each piece of end marker image information: acquire first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm; carry out distortion correction processing on the end marker image information according to the first pixel coordinate information and the distortion coefficient set information; accurately determine second pixel coordinate information corresponding to the actually measured position of the tail end of the mechanical arm; effectively determine coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and an end-position motion error calculation function; and finally carry out compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information. This compensates the global error, improves the positioning accuracy of the mechanical arm and, to a certain extent, solves the current problem of low accuracy.
In a second aspect, an embodiment of the present application provides a robot vision inspection system based on laser compensation, the system comprising:
The terminal marker image set information acquisition module: configured to continuously acquire terminal marker image set information based on a preset camera, where the terminal marker image set information includes a plurality of pieces of continuous terminal marker image information, the shooting object of the terminal marker image information is the designated tail end of a mechanical arm, and at least two light source markers are arranged at the tail end of the mechanical arm;
The first pixel coordinate information acquisition module: configured, for each piece of said terminal marker image information, to acquire first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm, carry out distortion correction processing on the terminal marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and determine second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm;
The coordinate error value information determining module: configured to determine the coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset end-position motion error calculation function;
The compensation pixel coordinate information generation module: configured to carry out compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flow chart of a robot vision detection method according to an embodiment of the present application;
Fig. 2 is a first schematic illustration of a mechanical arm according to an embodiment of the present application;
Fig. 3 is a second schematic illustration of a mechanical arm according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of the robot vision inspection method according to an embodiment of the present application before step S100;
Fig. 5 is a schematic diagram of camera imaging provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of calibration plate image information according to an embodiment of the present application;
Fig. 7 is a schematic diagram of corner information of a calibration plate according to an embodiment of the present application;
Fig. 8 is a diagram of re-projection error information according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of step S200 in the robot vision inspection method according to an embodiment of the present application;
Fig. 10 is a schematic representation of a first circular marker edge fit provided by an embodiment of the present application;
Fig. 11 is a schematic representation of a second circular marker edge fit provided by an embodiment of the present application;
Fig. 12 is a third schematic illustration of a mechanical arm according to an embodiment of the present application;
Fig. 13 is a schematic illustration of point location distribution provided by an embodiment of the present application;
Fig. 14 is a schematic flowchart of the robot vision inspection method according to an embodiment of the present application after step S240;
Fig. 15 is a schematic flowchart of the robot vision inspection method according to an embodiment of the present application before step S300;
Fig. 16 is a schematic diagram of predicted positions provided by an embodiment of the present application;
Fig. 17 is a schematic flowchart of step S400 in the robot vision inspection method according to an embodiment of the present application;
Fig. 18 is a schematic flowchart of the robot vision inspection method according to an embodiment of the present application after step S400;
Fig. 19 is a block diagram of a robot vision inspection system provided by an embodiment of the present application;
Fig. 20 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In the description of the present specification and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Referring to fig. 1, fig. 1 is a flow chart of a robot vision detection method based on laser compensation according to an embodiment of the application. In this embodiment, the execution subject of the robot vision detection method is a terminal device. It will be appreciated that the types of terminal devices include, but are not limited to, cell phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc.; the embodiments of the present application do not limit the terminal device to any particular type.
Referring to fig. 1, the method for detecting the vision of the robot provided by the embodiment of the application includes the following steps:
in S100, terminal marker image set information is continuously acquired based on a preset camera.
For example, referring to fig. 2, the robot vision inspection method may be applied to a robot arm.
Specifically, the terminal device may shoot the end of the mechanical arm based on a preset camera, and continuously acquire end marker image set information, where the end marker image set information includes a plurality of continuous end marker image information; the terminal marker image information is used for describing an image obtained by shooting the terminal of the mechanical arm by the camera; the tail end of the mechanical arm is used for describing one end of the mechanical arm which directly interacts with a working object or environment; the shooting object of the terminal marker image information is the designated mechanical arm terminal; at least two light source markers are arranged at the tail end of the mechanical arm.
In one possible implementation, the light source markers may be small white lamps. When there are two light source markers, they may be located at diagonal positions: the two small white lamps are installed in circular holes of the part to be fixed, light is guided through an acrylic rod, and light scattering is achieved by pasting a white film on the hole surface.
Without loss of generality, referring to fig. 3, the terminal device may construct a kinematic inverse solution equation relating the motion position of the tail end of the mechanical arm to the joint angles of the two arms of the mechanical arm, so as to determine the combination of joint angles from the operational pose. The forward kinematic relationship may be written as

$$x = l_1\cos\theta_1 + l_2\cos\theta_2,\qquad y = l_1\sin\theta_1 + l_2\sin\theta_2,$$

where $l_1$ is the length corresponding to the first arm of the mechanical arm, $l_2$ is the length corresponding to the second arm, $\theta_1$ is the included angle between the first arm and the positive direction of the X axis, $\theta_2$ is the included angle between the second arm and the positive direction of the X axis, and $(x, y)$ is the end position of the mechanical arm. The corresponding inverse solution follows from the law of cosines: with $\varphi = \theta_2 - \theta_1$,

$$\cos\varphi = \frac{x^2 + y^2 - l_1^2 - l_2^2}{2\,l_1 l_2},\qquad \theta_1 = \operatorname{atan2}(y, x) - \operatorname{atan2}\!\big(l_2\sin\varphi,\; l_1 + l_2\cos\varphi\big),\qquad \theta_2 = \theta_1 + \varphi.$$
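As an illustration, a minimal numerical sketch of this two-link relationship, assuming both joint angles are measured from the positive X axis as defined above (the link lengths and target point are illustrative, not values from the patent):

```python
import numpy as np

def forward(l1, l2, t1, t2):
    """End position of a planar two-link arm; t1 and t2 are the angles of
    the first and second arm measured from the positive X axis (radians)."""
    x = l1 * np.cos(t1) + l2 * np.cos(t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t2)
    return x, y

def inverse(l1, l2, x, y):
    """One (elbow-down) inverse solution under the same angle convention."""
    cos_phi = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))     # relative elbow angle
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(phi), l1 + l2 * np.cos(phi))
    t2 = t1 + phi                                    # second arm, also from the X axis
    return t1, t2

t1, t2 = inverse(0.3, 0.25, 0.4, 0.2)
print(forward(0.3, 0.25, t1, t2))                    # ~ (0.4, 0.2)
```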
In some possible implementations, referring to fig. 4, in order to facilitate improvement of accuracy, before step S100, the method further includes, but is not limited to, the following steps:
In S101, an imaging model of the camera is constructed.
In particular, the terminal device may first build an imaging model of the camera; the camera may be, for example, an industrial camera of model ME2P-2621-15U3M.
Referring to fig. 5, without loss of generality, the camera may be composed of a sensor and a lens; the lens collects the light emitted from a point in the external environment onto a point of the photosensitive sensor to achieve clear imaging. According to the projection mode, lenses may be divided into common lenses and telecentric lenses: the former performs perspective projection and the latter parallel projection. The imaging model of the camera can be simplified into a pinhole imaging model, whose characteristic is that all light rays from the scene pass through the projection center, i.e., the center of the lens.
For example, the whole projection from three-dimensional pinhole coordinates to two-dimensional pixel coordinates can be regarded as a chain of transformations: a coordinate point $P(X_w, Y_w, Z_w)$ in the three-dimensional world coordinate system is transformed into a coordinate point $P(X_c, Y_c, Z_c)$ in the camera coordinate system through the relationship between the two systems; the camera-frame point is then projected to a coordinate point $P(u, v)$ in the pixel coordinate system, and the pixel coordinates can further be transformed into image coordinates $P(x, y)$. The imaging model can therefore be written as

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K\,[R\ \ t]\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix},\qquad K=\begin{bmatrix}f/d_x & s & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix},$$

where $Z_c$ is the depth information of the photographed object; $(u, v)$ are the actual abscissa and ordinate of the image center of the end marker image information in the preset image coordinate system; $K$ is the internal reference (intrinsic) matrix of the camera and contains 5 unknowns ($f/d_x$, $f/d_y$, $s$, $u_0$, $v_0$, with $s$ the skew factor, often zero); $f$ is the focal length information of the camera in millimeters; $d_x$ is the first pixel size with respect to the abscissa; $d_y$ is the second pixel size with respect to the ordinate; $[R\ \ t]$ is the inside-and-outside (extrinsic) parameter information of the camera, in which $R$ is the $3\times 3$ rotation matrix of camera coordinates relative to world coordinates and $t$ is the $3\times 1$ translation vector of camera coordinates relative to world coordinates; and $(u_0, v_0)$ are the theoretical abscissa and ordinate of the image center of the end marker image information in the preset image coordinate system.
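A short sketch of this projection chain, assuming the pinhole model reconstructed above (the intrinsic and extrinsic values below are placeholders except the focal length quoted later in the text):

```python
import numpy as np

def project(K, R, t, Pw):
    """Project a world point Pw = (Xw, Yw, Zw) to pixel coordinates (u, v).
    The depth Zc divides out, which is why it is eliminated in the
    world-to-pixel conversion discussed below."""
    Pc = R @ np.asarray(Pw, dtype=float) + t   # world frame -> camera frame
    uv1 = K @ (Pc / Pc[2])                     # perspective division by Zc
    return uv1[0], uv1[1]

f, dx, dy, u0, v0 = 535.460, 0.1, 0.1, 2560.0, 2560.0   # dx, dy, u0, v0 assumed
K = np.array([[f / dx, 0.0,    u0],
              [0.0,    f / dy, v0],
              [0.0,    0.0,    1.0]])           # zero skew assumed
R, t = np.eye(3), np.array([0.0, 0.0, 1000.0])  # camera 1 m above the plane
print(project(K, R, t, (10.0, -5.0, 0.0)))
```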
It should be noted that, although monocular camera calibration can only obtain the internal and external parameters and cannot determine the image depth of an object, the depth term is eliminated when a three-dimensional world coordinate point is converted into a pixel coordinate point. Therefore, when the camera and the mechanical arm plane are coplanar, the terminal device can determine the specific number of pixels occupied by a unit distance in three-dimensional world coordinates according to the first, second and third functional formulas, and coplanar ranging with the monocular camera is realized by using the pixel count between two points in the plane.
Illustratively, the first function may be $n = \dfrac{f\,D}{Z_c\,d_x}$, where $n$ is the number of pixels spanned in the image by a distance $D$ in the measured plane and $Z_c$ is the depth of that plane.
in S102, calibration plate image information is acquired based on the camera.
Without loss of generality, obtaining three-dimensional geometric information from a two-dimensional image is essential, and the accuracy of camera calibration is very important for vision measurement. The main content of camera calibration is to determine the internal and external parameters of the camera and obtain its distortion coefficients, so that pixel coordinates in the image can be accurately mapped to physical coordinates in the real world.
For example, referring to fig. 6, radial and tangential distortions introduced during the manufacture and use of the camera lens may cause straight lines to bend or corner positions to become inaccurate in the image. The terminal device can calculate the distortion coefficients through camera calibration and then correct the image during image processing, ensuring that objects in the image keep an accurate shape and position so that pixel information of the real position of an object can be obtained. Therefore, after building the imaging model of the camera, the terminal device can acquire calibration plate image information based on the camera, where the calibration plate image information describes an image obtained by shooting a calibration plate with the camera. The calibration plate may be a checkerboard or a circular dot lattice, for example of specification GP290-20-12×9, and may be placed in advance on a plane at the same height as the plane to be detected, which facilitates subsequent detection of the light-source feature pixels of fixed-height planar motion.
In S103, gradation processing is performed on the calibration plate image information to generate gradation image information.
Specifically, after the terminal device obtains the calibration plate image information, the terminal device may perform grayscale processing on the calibration plate image information to generate grayscale image information, where the grayscale image information is used to describe the calibration plate image information after the grayscale processing.
In S104, a plurality of calibration board corner information of the grayscaled image information is determined based on a preset corner detection algorithm and the grayscaled image information.
For example, referring to fig. 7, after generating the grayscale image information, the terminal device may determine a plurality of pieces of calibration plate corner information of the grayscale image information based on a preset corner detection algorithm and the grayscale image information, where the calibration plate corner information describes the corners on the calibration plate used for subsequent computation; the circles in fig. 7 mark the detected corner information.
In S105, for each calibration plate corner information: acquiring actual physical coordinate information of corner points of the corner point information of the calibration plate;
For example, referring to fig. 7, after determining the plurality of pieces of calibration plate corner information, the terminal device may perform the following processing for each piece of calibration plate corner information: acquire the actual physical coordinate information of its corner point, so as to associate image coordinates with physical coordinates. The corner actual physical coordinate information describes the actual physical coordinate of the calibration plate corner, which may be the coordinate of the corresponding grid point on the calibration plate.
In S106, re-projection error information is generated according to a preset euclidean distance calculation function, calibration plate corner information and corner actual physical coordinate information.
Specifically, after the terminal device obtains the angular point actual physical coordinate information, the terminal device may input the angular point information of the calibration board and the angular point actual physical coordinate information into a preset euclidean distance calculation function, and generate re-projection error information, so as to evaluate the calibration result, so as to determine the calibration accuracy, where the re-projection error information is used to describe the projection error value between the angular point information of the calibration board and the angular point actual physical coordinate information.
For example, referring to fig. 8, fig. 8 shows the re-projection error information computed for twenty-five calibration pictures; as can be seen from fig. 8, the maximum re-projection error is 0.1 pixels and the average re-projection error is 0.07 pixels.
It should be noted that the camera resolution may be 5120 × 5120 pixels; the focal length may be 535.460 millimeters; the internal reference matrix may be the one obtained by the calibration above; and the single-pixel representative distance of the plane to be measured may be 0.2235 mm.
In S107, calibration accuracy result information is generated according to the re-projection error information and the preset error threshold information.
Specifically, after the terminal equipment generates the re-projection error information, the terminal equipment can compare the re-projection error information with preset error threshold information to generate calibration accuracy result information, wherein the calibration accuracy result information is calibration qualified information or calibration unqualified information, the calibration qualified information is used for describing calibration qualification, and the calibration unqualified information is used for describing calibration unqualification; the error threshold information may be pre-customized. For example, if the re-projection error information is less than the error threshold information, the terminal device may generate calibration pass information, otherwise the terminal device may generate calibration fail information.
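Steps S102 to S107 follow the standard chessboard calibration flow; a condensed sketch using OpenCV (the board geometry, square size, file paths and pass threshold are assumptions for illustration):

```python
import glob
import cv2
import numpy as np

PATTERN = (12, 9)      # inner-corner grid of the calibration board (assumed)
SQUARE_MM = 20.0       # square size in millimeters (assumed)
ERR_THRESHOLD = 0.1    # qualification threshold on re-projection error, pixels (assumed)

# Actual physical coordinates of the board grid points on the board plane (S105).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):                            # S102
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)    # S103
    found, corners = cv2.findChessboardCorners(gray, PATTERN)    # S104
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

_, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Mean re-projection error over all calibration pictures (S106).
err = 0.0
for op, ip, rv, tv in zip(obj_pts, img_pts, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
    err += cv2.norm(ip, proj, cv2.NORM_L2) / len(proj)
err /= len(obj_pts)
print("calibration", "qualified" if err < ERR_THRESHOLD else "unqualified", err)  # S107
```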
Accordingly, the step S100 includes, but is not limited to, the following steps: comprising:
In S110, if the calibration accuracy result information is calibration qualified information, continuously acquiring terminal marker image set information based on a preset camera.
Specifically, if the calibration accuracy result information is calibration qualified information, the terminal device may continuously acquire terminal marker image set information based on a preset camera.
In S200, for each end marker image information: and acquiring first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm, carrying out distortion correction processing on the image information of the tail end marker according to the first pixel coordinate information and preset distortion coefficient set information, and determining second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm.
Specifically, after the terminal device continues to acquire the end-marker image set information, the terminal device may perform the processing for each end-marker image information: and then, carrying out distortion correction processing on the terminal marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and effectively determining second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm. In one possible implementation manner, the distortion coefficient set information includes first distortion coefficient information and second distortion coefficient information, where the first distortion coefficient information and the second distortion coefficient information can effectively perform de-distortion processing on the image, and it should be noted that the first distortion coefficient information and the second distortion coefficient information may be obtained through camera calibration.
In some possible implementations, referring to fig. 9, in order to implement the distortion removal processing on the image, step S200 includes, but is not limited to, the following steps:
in S210, for each end marker image information: and acquiring first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm.
Specifically, the terminal device may perform this processing for each end-marker image information: and acquiring first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm.
Illustratively, the first pixel coordinate information may be

$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0,$$

where $u$ and $v$ are the abscissa and ordinate of the first pixel coordinate information; $x$ and $y$ are the actual abscissa and ordinate of the image center of the end marker image information in the preset image coordinate system (in millimeters, on the image plane at focal length $f$); $u_0$ and $v_0$ are the theoretical abscissa and ordinate of that image center in the preset image coordinate system; and $d_x$ and $d_y$ are the first pixel size with respect to the abscissa and the second pixel size with respect to the ordinate, respectively.
In S220, distortion correction processing is performed on the terminal marker image information according to the first pixel coordinate information, the preset first distortion coefficient information and the second distortion coefficient information, and distortion correction pixel coordinate information corresponding to the actually measured position of the tail end of the mechanical arm is determined.
Specifically, radial and tangential distortion arise during image acquisition because of the non-ideal characteristics of the camera lens and because the imager and the lens are not perfectly parallel, so the image cannot reflect the real state. When the camera takes a photo to obtain the pixel at the position of a mark point, image distortion causes a deviation between the actual pixel and the ideal pixel; to obtain ideal pixel coordinates, the distortion of the image must be handled.
For example, assuming the ideal undistorted image coordinates are $(x, y)$, the image coordinates actually photographed by the camera may be expressed as $(x_d, y_d)$; that is, the distortion-correction pixel coordinate relationship may be

$$\begin{aligned} x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 xy + p_2\,(r^2 + 2x^2),\\ y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2p_2 xy, \end{aligned}\qquad r^2 = x^2 + y^2,$$

where $x_d$ and $y_d$ are the abscissa and ordinate of the distortion-correction pixel coordinate information; $x$ and $y$ are the abscissa and ordinate of the first pixel coordinate information; $k_1$ is the first distortion coefficient information, describing the first-order radial distortion coefficient; $k_2$ is the second distortion coefficient information, describing the second-order radial distortion coefficient; $k_3$ is a preset third-order radial distortion coefficient; and $p_1$ and $p_2$ are the preset first-order and second-order tangential distortion coefficients.
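A direct transcription of this model, plus the fixed-point iteration commonly used to invert it for correction (the coefficient values in the check are placeholders):

```python
import numpy as np

def distort(x, y, k1, k2, k3=0.0, p1=0.0, p2=0.0):
    """Map ideal (undistorted) image coordinates to the distorted
    coordinates the camera actually records."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, *coef, iters=10):
    """Correction runs the model backwards; a few fixed-point
    iterations are usually enough for small distortion."""
    x, y = xd, yd
    for _ in range(iters):
        xe, ye = distort(x, y, *coef)
        x, y = x - (xe - xd), y - (ye - yd)
    return x, y

xd, yd = distort(0.3, -0.2, 1e-2, 1e-3)
print(undistort(xd, yd, 1e-2, 1e-3))   # ~ (0.3, -0.2)
```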
In S230, for each piece of distortion-correction pixel coordinate information of the first marker and each piece of distortion-correction pixel coordinate information of the second marker: least-squares fitting is performed on the distortion-correction pixel coordinate information to generate ideal circle feature set information of the target ideal circle.
Specifically, after the terminal device determines the distortion correcting pixel coordinate information, the terminal device may perform the processing for each of the distortion correcting pixel coordinate information of the first marker and each of the distortion correcting pixel coordinate information of the second marker: and carrying out least square fitting processing on the distortion correction pixel coordinate information to generate ideal circle feature set information of the target ideal circle, so as to realize that the edges of the two markers are respectively subjected to least square fitting to obtain circle centers of the two markers, wherein the first marker is used for describing any one light source marker, the second marker is used for describing any other light source marker, and the ideal circle feature set information comprises circle center pixel coordinate information and radius information of the target ideal circle.
In one possible implementation, referring to fig. 10 and 11, fig. 10 shows the edge fitting of the first circular marker and fig. 11 shows that of the second circular marker. After the captured picture is gray-scaled so as to retain the light-source marker pixels above a specific gray threshold, the terminal device can use a Canny operator to fit the edge of each retained circular light source from a moment estimate of the neighborhood gray-level distribution of the edge points; the Canny operator improves the noise resistance of the image and the edge connection effect.
Without loss of generality, after obtaining pixel information corresponding to the end position of the mechanical arm detected by the camera, the terminal equipment can install the laser tracking ball at a position aligned with the end position, then collect the end position of the mechanical arm by using the laser tracker, and judge the accuracy of the camera on the end position detection of the mechanical arm by using the information of the laser tracker.
In S240, second pixel coordinate information is determined from the plurality of center pixel coordinate information.
Specifically, after generating the ideal circle feature set information, the terminal device can effectively determine the second pixel coordinate information from the plurality of pieces of circle-center pixel coordinate information: after the circle-center pixel coordinates of the two circular markers are fitted from the extracted edge pixels, the two circle-center coordinates are averaged to determine the pixel coordinate of the mechanical arm end position under camera detection, i.e. the second pixel coordinate information may be

$$u_2 = \frac{u_{c1} + u_{c2}}{2},\qquad v_2 = \frac{v_{c1} + v_{c2}}{2},$$

where $u_2$ and $v_2$ are the abscissa and ordinate of the second pixel coordinate information, and $(u_{c1}, v_{c1})$ and $(u_{c2}, v_{c2})$ are the fitted circle-center pixel coordinates of the two markers.
Illustratively, referring to fig. 12, for a set of discrete measurement points $(x_i, y_i)$, $i = 1, 2, \dots, m$, assume the target ideal circle has center $(A, B)$ and radius $R$. Writing the circle as $x^2 + y^2 + a x + b y + c = 0$ with $A = -a/2$, $B = -b/2$ and $R = \tfrac{1}{2}\sqrt{a^2 + b^2 - 4c}$, the least-squares requirement that the sum of squared distances be minimized leads to minimizing

$$f(a, b, c) = \sum_{i=1}^{m}\big(x_i^2 + y_i^2 + a x_i + b y_i + c\big)^2.$$

Setting $\partial f/\partial a = \partial f/\partial b = \partial f/\partial c = 0$ yields a linear system in $(a, b, c)$, from which the terminal device can determine $a$, $b$ and $c$, and further determine $A$, $B$ and $R$.
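This fit reduces to a small linear least-squares problem; a sketch in which the midpoint of the two fitted centers then gives the second pixel coordinate of step S240 (the synthetic edge points stand in for the Canny-extracted edges):

```python
import numpy as np

def fit_circle(pts):
    """Algebraic least-squares circle: minimize
    sum_i (x_i^2 + y_i^2 + a*x_i + b*y_i + c)^2, then recover center and radius."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(M, -(x**2 + y**2), rcond=None)[0]
    center = (-a / 2.0, -b / 2.0)
    radius = 0.5 * np.sqrt(a * a + b * b - 4.0 * c)
    return center, radius

theta = np.linspace(0.0, 2.0 * np.pi, 50)
edge1 = np.column_stack([100 + 8 * np.cos(theta), 200 + 8 * np.sin(theta)])
edge2 = np.column_stack([140 + 8 * np.cos(theta), 240 + 8 * np.sin(theta)])
(c1, _), (c2, _) = fit_circle(edge1), fit_circle(edge2)
# Midpoint of the two marker centers = measured end position (S240).
print(((c1[0] + c2[0]) / 2.0, (c1[1] + c2[1]) / 2.0))
```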
Without loss of generality, establishing the relationship between the mechanical arm body coordinate system and the pixel coordinate system at the mechanical arm origin relates the mechanical arm end pixel information to the mechanical arm end position. Referring to fig. 13, fig. 13 is a point location distribution diagram of the mechanical arm end positions obtained by the camera detecting 25 points on each of the circular tracks of 100 mm, 200 mm, 300 mm and 400 mm.
In some possible implementations, to implement repeated detection of random errors with a camera, referring to fig. 14, after step S240, the method further includes, but is not limited to, the steps of:
in S241, based on the preset repeated detection number information, a plurality of repeated measurement sample data information of the robot arm end at the same position is acquired.
Specifically, when the camera acquires the pixels at the center of the light source marker, the light source of the light source marker and the light intensity of the environment have very slight changes, which cause slight influence on the extraction of the edges of the two markers, so as to reduce the adverse interference caused by the slight influence, the terminal device may acquire a plurality of pieces of repeated measurement sample data information of the end of the mechanical arm at the same position based on preset repeated detection frequency information, where the repeated detection frequency information may be a preset value.
In S242, repeated measurement sample data average information is generated from the plurality of repeated measurement sample data information.
Specifically, after acquiring the plurality of pieces of repeated-measurement sample data information, the terminal device may generate the repeated-measurement sample data average information

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,$$

where $\bar{x}$ is the repeated-measurement sample data average information, $x_i$ is the $i$-th piece of repeated-measurement sample data information, and $n$ is the repeated detection number.
In S243, standard deviation value information of the robot arm end at the same position is determined from the repeated measurement sample data average value information.
Specifically, after generating the repeated-measurement sample data average information, the terminal device may determine the standard deviation value information of the mechanical arm end at the same position as

$$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\big(x_i - \bar{x}\big)^2},$$

where $\sigma$ is the standard deviation value information.
Without loss of generality, the terminal device can repeatedly detect, 5 times each, thirteen points spaced thirty degrees apart on each of the circular tracks of 100 mm, 200 mm, 300 mm and 400 mm; the resulting standard deviation value information is 0.1 mm, i.e. when the camera collects the two markers and performs the least-squares fitting, the repeated-detection pixel error lies in the range of 0 to 0.05 pixel.
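These two statistics are the ordinary sample mean and standard deviation; a minimal sketch with illustrative readings:

```python
import numpy as np

# Five repeated detections of the same end position (illustrative values, mm).
samples = np.array([102.31, 102.28, 102.35, 102.30, 102.33])
mean = samples.mean()          # S242: repeated-measurement sample data average
std = samples.std(ddof=1)      # S243: sample standard deviation (ddof=1 assumed)
print(f"mean = {mean:.3f} mm, std = {std:.3f} mm")
```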
In S300, the coordinate error value information of the end of the mechanical arm is determined according to the second pixel coordinate information and a preset end position motion error calculation function.
Specifically, after the terminal device determines the second pixel coordinate information, the terminal device may effectively determine the coordinate error value information of the end of the mechanical arm according to the second pixel coordinate information and a preset end position motion error calculation function.
Without loss of generality, since the moving mechanical arm operates in an open-loop motion mode, a certain motion error relative to the ideal circular track exists during point-to-point motion. The terminal device can use the camera to detect the motion tracks of thirteen points spaced thirty degrees apart on each of the circular tracks of 100 mm, 200 mm, 300 mm and 400 mm, and the motion error of the mechanical arm under camera detection is obtained by comparing the detected data with the theoretical position, in combination with the functional formula

$$e = \sqrt{(x_c - x_t)^2 + (y_c - y_t)^2},$$

where $e$ is the motion error, $(x_c, y_c)$ is the camera-detected value of the mechanical arm end position, and $(x_t, y_t)$ is the value of the theoretical position.
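A one-line check of this error, assuming the detected and theoretical positions are expressed in the same planar units (values illustrative):

```python
import math

def motion_error(detected, theoretical):
    """Euclidean distance between camera-detected and theoretical end positions."""
    (xc, yc), (xt, yt) = detected, theoretical
    return math.hypot(xc - xt, yc - yt)

print(motion_error((101.2, 49.7), (100.0, 50.0)))   # ~1.237
```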
The working range of the mechanical arm motion is large, and the detected motion error grows with it: the larger the field of view being detected, the larger the corresponding distortion error, and even after the image is de-distorted a certain systematic error remains in the camera. In other words, the larger the planar motion range of the mechanical arm, the larger the camera detection error. Meanwhile, because the mechanical arm body deviates between repeated runs along the same planar track, the terminal device can move the mechanical arm end repeatedly along the same track several times, detect each run, and compare the average with the value of each individual run to obtain the motion deviation of the mechanical arm under the influence of systematic error. The terminal device can make the same track motion back and forth 5 times in sequence, using the laser tracker to detect the 25 points on each of the circular tracks of 100 mm, 200 mm, 300 mm and 400 mm; comparing the camera's detection of the mechanical arm end position with the laser tracker data then allows the accuracy of the camera detection effect to be judged, and the errors detected by the camera are compared with the errors detected by the laser tracker to evaluate the camera's detection effect.
In some possible implementations, referring to fig. 15, in order to generate the plurality of predicted pixel coordinate information, before step S300, the method further includes, but is not limited to, the following steps:
in S301, spatial interpolation distance information is generated based on a weight interpolation function according to the second pixel coordinate information and a preset distance inverse ratio.
In S302, the equidistant array generates a plurality of predicted pixel coordinate information from the spatial interpolation distance information and the second pixel coordinate information.
Specifically, after the terminal device generates the spatial interpolation distance information, the terminal device may, in which the predicted pixel coordinate information is used to describe the second pixel coordinate information generated by the equidistant array, the distance between the plurality of predicted pixel coordinate information is the spatial interpolation distance information, and the distance between the predicted pixel coordinate information and the second pixel coordinate information is the spatial interpolation distance information.
Accordingly, the step S300 includes, but is not limited to, the following steps:
In S310, the coordinate error value information of the robot arm end is determined according to the abscissa of the second pixel coordinate information and the preset end position motion error calculation function.
Specifically, the terminal device may determine the coordinate error value information of the mechanical arm end according to the abscissa of the second pixel coordinate information and the preset end-position motion error calculation function, which may be

$$E = \frac{1}{n}\sum_{i=2}^{n}\left|u_i - u_{i-1}\right|,$$

where $E$ is the coordinate error value information, $n$ is the total amount of second pixel coordinate information, $i$ is the order of the second pixel coordinate information, and $u_i - u_{i-1}$ is the difference between the abscissa corresponding to the $i$-th piece of second pixel coordinate information and the abscissa corresponding to the previous piece.
For example, since it is difficult for the terminal device to perform camera measurement on the motion of every point of the mechanical arm end position, in order to reduce the limitation of experimental measurement and improve time utilization, the terminal device may first measure part of the points of the whole plane while the mechanical arm end moves in the plane, and then estimate the motion of the desired positions from the known data. For example, the terminal device may predict the positions on the 150 mm, 250 mm and 350 mm motion tracks from the position points detected on the 100 mm, 200 mm, 300 mm and 400 mm motion tracks, obtain error compensation from these estimates, and compare the predictions with actually detected data to evaluate the effect. The distance-inverse-proportion weight interpolation method is one of the spatial interpolation methods: sample points closer to the point to be interpolated are given larger weights, each weight contribution being inversely proportional to distance. It may be combined with the functional formula

$$\hat{Z}(x, y) = \frac{\sum_{i=1}^{k} Z_i / d_i^{\,p}}{\sum_{i=1}^{k} 1 / d_i^{\,p}},\qquad d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2},$$

where $\hat{Z}(x, y)$ is the value of the point to be estimated; $x$ and $y$ are the abscissa and ordinate of the position of the point to be estimated; $Z_i$ are the values of the known points; $d_i$ is the distance between the point to be estimated and the $i$-th known point; and $p$ is the power exponent of the distance (commonly $p = 2$).
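A compact sketch of this distance-inverse-proportion weighting (the sample layout and error values are illustrative, not the measured figures quoted below):

```python
import numpy as np

def idw(query, known_xy, known_vals, power=2.0):
    """Inverse-distance-weighted estimate at `query`: closer samples get
    larger weights, each inversely proportional to distance**power."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_vals = np.asarray(known_vals, dtype=float)
    d = np.linalg.norm(known_xy - np.asarray(query, dtype=float), axis=1)
    if np.any(d == 0.0):                 # query coincides with a known sample
        return float(known_vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, known_vals) / w.sum())

# Errors measured on the 100/200/300/400 mm tracks (illustrative values),
# interpolated to a point lying on the 250 mm track.
xy = [(100.0, 0.0), (200.0, 0.0), (300.0, 0.0), (400.0, 0.0)]
errs = [0.35, 0.71, 1.10, 1.52]
print(idw((250.0, 0.0), xy, errs))
```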
For example, referring to fig. 16, the terminal device may perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensated pixel coordinate information; in fig. 16, the star marks located in the same circle as the star mark indicated by point A indicate actual measurement positions, and the star marks located in the same circle as the star mark indicated by point B indicate predicted positions.
Without loss of generality, the inventor found through a large number of experiments that the error estimates for the positions to be estimated follow the overall motion error trend of the mechanical arm quite closely, which shows that applying the distance-inverse-proportion interpolation method to the selected data points is feasible for estimating position errors.
For example, the terminal device may use the camera to detect the errors of the positions to be estimated interpolated on the 150 mm, 250 mm and 350 mm motion tracks of the mechanical arm end, and then perform error-detection comparison at the same positions with the laser tracker to determine the camera's actual detection error at each position: the average is 0.534 mm for the 150 mm motion track, 0.931 mm for the 250 mm track and 1.405 mm for the 350 mm track, with a total average of 0.967 mm. Meanwhile, after the planar motion error of the positions to be estimated is obtained by distance-inverse-proportion interpolation, the terminal device can compensate, one by one, the errors of the points on the camera-detected motion tracks with the interpolated error values. The inventor found that, after interpolation and compensation, the overall error of the 150 mm track detection positions fell from 0.534 mm to 0.136 mm, that of the 250 mm track from 0.931 mm to 0.229 mm, and that of the 350 mm track from 1.405 mm to 0.262 mm, with the overall average error of the 3 detected tracks falling from 0.967 mm to 0.209 mm.
In a possible implementation, the terminal device may also use an RBF neural network to predict the error of the position to be estimated. The RBF neural network is a radial basis function network: an artificial neural network that uses radial basis functions as activation functions, can approximate any nonlinear function, can handle regularity that is difficult to analyze within a system, has good generalization ability and a fast learning convergence speed, and has been successfully applied to nonlinear function approximation.
Without loss of generality, the RBF neural network may consist of an input layer, a hidden layer and an output layer. The first layer is the input layer, composed of the input data; the second layer is the hidden layer, whose units apply a transformation mapping the low-dimensional input to a high-dimensional space through a nonlinear Gaussian function; the third layer is the output layer, which responds to the input signal and obtains its output from the hidden layer through a linear weighted evaluation with weights $w_i$. The activation function used in the RBF neural network may be the Gaussian radial basis function

$$\varphi_i(x_j) = \exp\!\left(-\frac{r^2}{2\sigma^2}\right),\qquad r = \lVert x_j - c_i\rVert,$$

where $x_j$ is the $j$-th input data, $c_i$ is the $i$-th sample data center point, and $\sigma$ is the average deviation of the kernel function in the hidden layer;
The RBF neural network training output function expression may be

$$y = \sum_{i=1}^{h} w_i\,\varphi_i(x) = \sum_{i=1}^{h} w_i \exp\!\left(-\frac{\lVert x - c_i\rVert^2}{2\sigma^2}\right),$$

where $h$ is the number of hidden-layer units.
Without loss of generality, some other parameters of the RBF neural network may be as follows: 125 training samples, 20 test samples, a radial basis spread of 2.0 and 125 hidden-layer neurons. After the planar motion error of the position to be estimated is predicted by the RBF neural network, the terminal device can compensate, point by point, the errors of the points on the camera-detected motion tracks with the predicted error values. The inventor found that, after RBF compensation, the overall error of the 150 mm track detection positions fell from 0.534 mm to 0.0768 mm, that of the 250 mm track from 0.931 mm to 0.147 mm, and that of the 350 mm track from 1.405 mm to 0.221 mm, with the overall average error of the 3 detected tracks falling from 0.967 mm to 0.148 mm; both the RBF neural network's position-error prediction and the compensation effect of the distance-inverse-proportion interpolation method meet expectations.
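A compact sketch of such a Gaussian-RBF predictor with the output weights solved linearly by least squares; the training data is synthetic, and taking one center per sample with a fixed width is an assumption, not the configuration reported above:

```python
import numpy as np

def gauss_rbf(X, centers, sigma):
    """Hidden-layer design matrix: phi[j, i] = exp(-||x_j - c_i||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-400.0, 400.0, size=(125, 2))        # 125 training positions
y = 0.002 * np.linalg.norm(X, axis=1) + rng.normal(0.0, 0.02, 125)  # synthetic errors

centers, sigma = X, 80.0                 # one center per sample, assumed width
w = np.linalg.lstsq(gauss_rbf(X, centers, sigma), y, rcond=None)[0]

Xq = rng.uniform(-400.0, 400.0, size=(20, 2))        # 20 test positions
pred = gauss_rbf(Xq, centers, sigma) @ w             # predicted position errors
print(pred[:3])
```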
In S400, compensation processing is performed on the second pixel coordinate information based on the coordinate error value information, and compensated pixel coordinate information is generated.
Specifically, after the terminal device determines the coordinate error value information, the terminal device may perform compensation processing on the second pixel coordinate information according to the coordinate error value information, to generate compensated pixel coordinate information, where the compensated pixel coordinate information is used to describe the compensated second pixel coordinate information.
In some possible implementations, referring to fig. 17, to enable generation of compensated pixel coordinate information, step S400 includes, but is not limited to, the steps of:
In S410, compensation direction information is determined from the first pixel coordinate information and the second pixel coordinate information.
Specifically, the terminal device may determine, according to the first pixel coordinate information and the second pixel coordinate information, compensation direction information with the first pixel coordinate information as a start point and the second pixel coordinate information as an end point, where the compensation direction information is used to describe a direction in which compensation is performed.
In S420, compensation processing is performed on the second pixel coordinate information based on the compensation direction information and the coordinate error value information, and compensation pixel coordinate information is generated.
Specifically, after the terminal device determines the compensation direction information, the terminal device may perform compensation processing on the second pixel coordinate information based on the compensation direction information and the coordinate error value information, so as to generate compensation pixel coordinate information, thereby implementing improvement of accuracy of positioning the mechanical arm, where the compensation pixel coordinate information is used for describing the second pixel coordinate information after the compensation of the coordinate error value information to the compensation direction information.
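A sketch of steps S410 and S420 together; whether the error is applied toward or away from the theoretical point is not fully specified above, so the sign convention here is an assumption:

```python
import numpy as np

def compensate(first_px, second_px, error):
    """Shift the measured point along the direction from the theoretical
    point (first_px) to the measured point (second_px) by the error value."""
    p1 = np.asarray(first_px, dtype=float)
    p2 = np.asarray(second_px, dtype=float)
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)   # S410: compensation direction
    return p2 - error * direction                     # S420: pull back toward theory

print(compensate((100.0, 50.0), (101.2, 49.7), 0.8))  # illustrative values
```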
In some possible implementations, referring to fig. 18, after step S400, the method further includes, but is not limited to, the following steps:
In S500, compensation motion trajectory information is generated from the plurality of compensation pixel coordinate information.
Specifically, the terminal device may sequentially connect the plurality of pieces of compensated pixel coordinate information to generate compensated motion trajectory information, wherein the compensated motion trajectory information is used to describe a motion trajectory formed by the plurality of pieces of compensated pixel coordinate information.
In S510, the tail end of the mechanical arm is controlled to move according to the compensated motion trajectory information.
Specifically, after the terminal device generates the compensated motion trajectory information, the terminal device may control the end of the mechanical arm to move according to the compensated motion trajectory information.
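A minimal sketch of S500 and S510 follows: the compensated pixel coordinates are chained, in acquisition order, into a motion trajectory that is then handed to an arm controller. The `ArmController` interface is assumed for illustration and is not an API from this disclosure.

```python
from typing import List, Protocol, Sequence, Tuple

Point = Tuple[float, float]

class ArmController(Protocol):
    def move_to(self, point: Point) -> None: ...

def build_trajectory(compensated_points: Sequence[Point]) -> List[Point]:
    # S500: sequentially connecting the compensated pixel coordinates; the
    # acquisition order defines the compensated motion trajectory.
    return list(compensated_points)

def execute(trajectory: Sequence[Point], arm: ArmController) -> None:
    # S510: drive the tail end of the mechanical arm through each compensated point.
    for point in trajectory:
        arm.move_to(point)
```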
The implementation principle of the laser-compensation-based robot vision detection method in the embodiment of the application is as follows: the terminal device first acquires the end marker image set information via the camera and then processes each piece of end marker image information in turn: it acquires the first pixel coordinate information of the theoretical position of the tail end of the mechanical arm, performs distortion correction processing on the end marker image information based on the first pixel coordinate information, and determines the second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm; it then determines the coordinate error value information of the tail end of the mechanical arm using the tail end position motion error calculation function, and finally performs compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information, thereby compensating the global error and improving the positioning accuracy of the mechanical arm.
It should be noted that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and shall not limit the implementation of the embodiments of the present application in any way.
Embodiments of the present application also provide a laser-compensation-based robot vision inspection system; for ease of illustration, only the portions relevant to the present application are shown. As shown in fig. 19, the system 190 comprises:
The end marker image set information acquisition module 191: used to continuously acquire end marker image set information based on a preset camera, wherein the end marker image set information comprises a plurality of pieces of continuous end marker image information, the shooting object of the end marker image information is a designated tail end of a mechanical arm, and at least two light source markers are arranged at the tail end of the mechanical arm;
The first pixel coordinate information acquisition module 192: used to, for each piece of end marker image information: acquire first pixel coordinate information corresponding to the theoretical position of the tail end of the mechanical arm, perform distortion correction processing on the end marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and determine second pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm;
The coordinate error value information determination module 193: used to determine the coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset tail end position motion error calculation function;
The compensation pixel coordinate information generation module 194: used to perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information.
It should be noted that, because the content of information interaction and execution process between the modules and the embodiment of the method of the present application are based on the same concept, specific functions and technical effects thereof may be referred to in the method embodiment section, and details thereof are not repeated herein.
The embodiment of the present application also provides a terminal device, as shown in fig. 20, the terminal device 200 of the embodiment includes: a processor 201, a memory 202, and a computer program 203 stored in the memory 202 and executable on the processor 201. The steps in the above-described robot vision inspection method embodiment, such as steps S100 to S400 shown in fig. 1, are implemented when the processor 201 executes the computer program 203; or the processor 201, when executing the computer program 203, performs the functions of the modules in the apparatus described above, such as the functions of the modules 191 to 194 shown in fig. 19.
The terminal device 200 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, etc., and includes, but is not limited to, the processor 201 and the memory 202. It will be appreciated by those skilled in the art that fig. 20 is merely an example of the terminal device 200 and does not constitute a limitation of it; the terminal device 200 may include more or fewer components than illustrated, combine certain components, or use different components, and may, for example, further include input-output devices, network access devices, a bus, etc.
The processor 201 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 202 may be an internal storage unit of the terminal device 200, such as a hard disk or memory of the terminal device 200, or an external storage device of the terminal device 200, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the terminal device 200. Further, the memory 202 may include both an internal storage unit and an external storage device of the terminal device 200; the memory 202 may also store the computer program 203 and other programs and data required by the terminal device 200, and may be used to temporarily store data that has been output or is to be output.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form, etc.; the computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are not intended to limit the scope of the present application; therefore, all equivalent changes made according to the method, principle and structure of the present application shall be covered by the protection scope of the present application.
Claims (10)
1. A robot vision detection method based on laser compensation, the method comprising:
Continuously acquiring terminal marker image set information based on a preset camera, wherein the terminal marker image set information comprises a plurality of continuous terminal marker image information, a shooting object of the terminal marker image information is a designated mechanical arm terminal, and at least two light source markers are arranged at the mechanical arm terminal;
image information for each of the end markers: acquiring first pixel coordinate information corresponding to a theoretical position of the tail end of the mechanical arm, carrying out distortion correction processing on the tail end marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and determining second pixel coordinate information corresponding to an actual measurement position of the tail end of the mechanical arm;
Determining coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset tail end position motion error calculation function;
And carrying out compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information.
2. The method of claim 1, wherein prior to the continuously acquiring end marker image set information based on the preset camera, the method further comprises:
Constructing an imaging model of the camera;
based on the camera, obtaining calibration plate image information;
Carrying out graying treatment on the image information of the calibration plate to generate graying image information;
determining a plurality of calibration plate corner information of the gray image information based on a preset corner detection algorithm and the gray image information;
For each calibration plate corner information: acquiring the actual physical coordinate information of the corner points of the corner point information of the calibration plate;
Generating re-projection error information according to a preset Euclidean distance calculation function, the corner information of the calibration plate and the actual physical coordinate information of the corner;
generating calibration accuracy result information according to the re-projection error information and preset error threshold information, wherein the calibration accuracy result information is calibration qualified information or calibration unqualified information;
correspondingly, the continuously acquiring the terminal marker image set information based on the preset camera comprises the following steps:
And if the calibration accuracy result information is the calibration qualified information, continuously acquiring terminal marker image set information based on a preset camera.
3. The method of claim 2, wherein the set of distortion coefficient information comprises first distortion coefficient information and second distortion coefficient information; the image information for each of the end markers: acquiring first pixel coordinate information corresponding to a theoretical position of the tail end of the mechanical arm, and performing distortion correction processing on the tail end marker image information according to the first pixel coordinate information and preset distortion coefficient set information to determine second pixel coordinate information corresponding to an actually measured position of the tail end of the mechanical arm, wherein the method comprises the following steps:
Image information for each of the end markers: acquiring first pixel coordinate information corresponding to a theoretical position of the tail end of the mechanical arm, wherein the first pixel coordinate information is as follows:
$u_1 = \dfrac{x - x_0}{d_x}, \qquad v_1 = \dfrac{y - y_0}{d_y},$
in which $u_1$ is the abscissa of the first pixel coordinate information, $x$ is the actual abscissa of the image center of the end marker image information in a preset image coordinate system, $x_0$ is the theoretical abscissa of the image center of the end marker image information in the preset image coordinate system, $f$ is the focal length information of the camera, in millimeters, $d_x$ is a first pixel size with respect to the abscissa, $v_1$ is the ordinate of the first pixel coordinate information, $y$ is the actual ordinate of the image center of the end marker image information in the preset image coordinate system, $y_0$ is the theoretical ordinate of the image center of the end marker image information in the preset image coordinate system, and $d_y$ is a second pixel size with respect to the ordinate;
carrying out distortion correction processing on the end marker image information according to the first pixel coordinate information, the preset first distortion coefficient information and the second distortion coefficient information, and determining distortion-corrected pixel coordinate information corresponding to the actual measurement position of the tail end of the mechanical arm, wherein the distortion-corrected pixel coordinate information is as follows:
$u_d = u_1\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 u_1 v_1 + p_2\,(r^2 + 2u_1^2),$
$v_d = v_1\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2v_1^2) + 2p_2 u_1 v_1, \qquad r^2 = u_1^2 + v_1^2,$
in which $u_d$ is the abscissa of the distortion-corrected pixel coordinate information, $u_1$ is the abscissa of the first pixel coordinate information, $k_1$ is the first distortion coefficient information and is used to describe a first-order radial distortion coefficient, $k_2$ is the second distortion coefficient information and is used to describe a second-order radial distortion coefficient, $k_3$ is a preset third-order radial distortion coefficient, $p_1$ is a preset first-order tangential distortion coefficient, $p_2$ is a preset second-order tangential distortion coefficient, $v_d$ is the ordinate of the distortion-corrected pixel coordinate information, and $v_1$ is the ordinate of the first pixel coordinate information;
For the distortion-corrected pixel coordinate information of each of the first markers and the distortion-corrected pixel coordinate information of each of the second markers: performing least-squares fitting processing on the distortion-corrected pixel coordinate information to generate ideal circle feature set information of a target ideal circle, wherein the first marker is used for describing any one light source marker, the second marker is used for describing any other light source marker, and the ideal circle feature set information comprises circle center pixel coordinate information and radius information of the target ideal circle;
And determining the second pixel coordinate information according to the plurality of circle center pixel coordinate information.
4. A method according to claim 3, wherein after said determining said second pixel coordinate information from a plurality of said center pixel coordinate information, said method further comprises:
Acquiring a plurality of repeated measurement sample data information of the tail end of the mechanical arm at the same position based on preset repeated detection frequency information;
generating repeated measurement sample data average value information according to a plurality of repeated measurement sample data information;
And determining standard deviation value information of the tail end of the mechanical arm at the same position according to the repeated measurement sample data average value information.
5. The method of claim 4, wherein prior to said determining the coordinate error value information for the robot arm tip based on the second pixel coordinate information and a preset tip position motion error calculation function, the method further comprises:
Generating spatial interpolation distance information according to the second pixel coordinate information and a preset inverse-distance weighting interpolation function;
Generating a plurality of prediction pixel coordinate information by an equidistant array according to the spatial interpolation distance information and the second pixel coordinate information, wherein the prediction pixel coordinate information is used for describing the second pixel coordinate information generated by the equidistant array, the distance between the plurality of prediction pixel coordinate information is the spatial interpolation distance information, and the distance between the prediction pixel coordinate information and the second pixel coordinate information is the spatial interpolation distance information;
Correspondingly, the determining the coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset tail end position motion error calculation function includes:
Determining coordinate error value information of the tail end of the mechanical arm according to the abscissa of the second pixel coordinate information and a preset tail end position motion error calculation function, wherein the tail end position motion error calculation function is as follows:
$E = \dfrac{1}{n}\sum_{i=2}^{n} \Delta x_i, \qquad \Delta x_i = x_i - x_{i-1},$
in which $E$ is the coordinate error value information, $n$ is the total amount of the second pixel coordinate information, $i$ is the order of the second pixel coordinate information, and $\Delta x_i$ is the difference value between the abscissa corresponding to the $i$-th second pixel coordinate information and the abscissa corresponding to the previous second pixel coordinate information.
6. The method of claim 4, wherein compensating the second pixel coordinate information based on the coordinate error value information to generate compensated pixel coordinate information, comprises:
determining compensation direction information according to the first pixel coordinate information and the second pixel coordinate information;
And carrying out compensation processing on the second pixel coordinate information based on the compensation direction information and the coordinate error value information to generate compensation pixel coordinate information, wherein the compensation pixel coordinate information is used for describing the second pixel coordinate information after the coordinate error value information is compensated by the compensation direction information.
7. The method of claim 6, wherein after the compensating the second pixel coordinate information based on the coordinate error value information to generate compensated pixel coordinate information, the method further comprises:
generating compensation motion track information according to the plurality of compensation pixel coordinate information;
and controlling the tail end of the mechanical arm to move according to the compensation motion track information.
8. A laser compensation-based robotic vision inspection system, the system comprising:
The terminal marker image set information acquisition module: used to continuously acquire terminal marker image set information based on a preset camera, wherein the terminal marker image set information comprises a plurality of pieces of continuous terminal marker image information, a shooting object of the terminal marker image information is a designated tail end of a mechanical arm, and at least two light source markers are arranged at the tail end of the mechanical arm;
The first pixel coordinate information acquisition module: used to, for each piece of said terminal marker image information: acquire first pixel coordinate information corresponding to a theoretical position of the tail end of the mechanical arm, perform distortion correction processing on the terminal marker image information according to the first pixel coordinate information and preset distortion coefficient set information, and determine second pixel coordinate information corresponding to an actual measurement position of the tail end of the mechanical arm;
The coordinate error value information determining module: used to determine coordinate error value information of the tail end of the mechanical arm according to the second pixel coordinate information and a preset tail end position motion error calculation function;
The compensation pixel coordinate information generation module: used to perform compensation processing on the second pixel coordinate information according to the coordinate error value information to generate compensation pixel coordinate information.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411001958.2A CN118544360A (en) | 2024-07-25 | 2024-07-25 | Robot vision detection method, system, terminal and medium based on laser compensation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118544360A true CN118544360A (en) | 2024-08-27 |
Family
ID=92453941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411001958.2A Pending CN118544360A (en) | 2024-07-25 | 2024-07-25 | Robot vision detection method, system, terminal and medium based on laser compensation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118544360A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102208098A (en) * | 2010-03-29 | 2011-10-05 | 佳能株式会社 | Image processing apparatus and method of controlling the same |
CN110276806A (en) * | 2019-05-27 | 2019-09-24 | 江苏大学 | Online hand-eye calibration and crawl pose calculation method for four-freedom-degree parallel-connection robot stereoscopic vision hand-eye system |
US20210241491A1 (en) * | 2020-02-04 | 2021-08-05 | Mujin, Inc. | Method and system for performing automatic camera calibration |
CN113524194A (en) * | 2021-04-28 | 2021-10-22 | 重庆理工大学 | Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning |
CN116476046A (en) * | 2023-03-27 | 2023-07-25 | 佛山科学技术学院 | Mechanical arm calibration and control device and method based on particle swarm optimization |
CN116619350A (en) * | 2022-02-14 | 2023-08-22 | 上海理工大学 | Robot error calibration method based on binocular vision measurement |
CN117557657A (en) * | 2023-12-15 | 2024-02-13 | 武汉理工大学 | Binocular fisheye camera calibration method and system based on Churco calibration plate |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113748357B (en) | Attitude correction method, device and system of laser radar | |
CN107270810B (en) | The projector calibrating method and device of multi-faceted projection | |
JP3735344B2 (en) | Calibration apparatus, calibration method, and calibration program | |
CN106408609B (en) | A kind of parallel institution end movement position and posture detection method based on binocular vision | |
Yan et al. | Joint camera intrinsic and lidar-camera extrinsic calibration | |
CN113137920B (en) | Underwater measurement equipment and underwater measurement method | |
CN107481284A (en) | Method, apparatus, terminal and the system of target tracking path accuracy measurement | |
CN109801333B (en) | Volume measurement method, device and system and computing equipment | |
CN106971408B (en) | A kind of camera marking method based on space-time conversion thought | |
CN103649674A (en) | Measurement device and information processing device | |
CN111750804B (en) | Object measuring method and device | |
CN111263142A (en) | Method, device, equipment and medium for testing optical anti-shake of camera module | |
CN112102375B (en) | Point cloud registration reliability detection method and device and mobile intelligent equipment | |
US10628968B1 (en) | Systems and methods of calibrating a depth-IR image offset | |
Ding et al. | A robust detection method of control points for calibration and measurement with defocused images | |
CN115187612A (en) | Plane area measuring method, device and system based on machine vision | |
Gong et al. | High-precision calibration of omnidirectional camera using an iterative method | |
CN113959362B (en) | Calibration method and inspection data processing method of structured light three-dimensional measurement system | |
CN118544360A (en) | Robot vision detection method, system, terminal and medium based on laser compensation | |
CN113592934B (en) | Target depth and height measuring method and device based on monocular camera | |
CN113733078A (en) | Method for interpreting fine control quantity of mechanical arm and computer-readable storage medium | |
Wan et al. | Multiresolution and wide-scope depth estimation using a dual-PTZ-camera system | |
CN117249764B (en) | Vehicle body positioning method and device and electronic equipment | |
EP3708309A1 (en) | A method for determining positional error within a robotic cell environment | |
Koljonen | Computer vision and optimization methods applied to the measurements of in-plane deformations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||