CN115049744A - Robot hand-eye coordinate conversion method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN115049744A (application CN202210809386.5A)
- Authority
- CN
- China
- Prior art keywords
- mark point
- center
- group
- point
- mark
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general; G06T 7/00 — Image analysis)
- G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T 7/13 — Edge detection (G06T 7/10 — Segmentation; Edge detection)
- G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume (G06T 7/60 — Analysis of geometric attributes)
- G06T 7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models (G06T 7/70 — Determining position or orientation of objects or cameras; G06T 7/73 — using feature-based methods)
- G06V 10/761 — Proximity, similarity or dissimilarity measures (G06V — Image or video recognition or understanding; G06V 10/70 — using pattern recognition or machine learning; G06V 10/74 — Image or video pattern matching; Proximity measures in feature spaces)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Graphics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Numerical Control (AREA)
- Manipulator (AREA)
Abstract
The application relates to a robot hand-eye coordinate conversion method, a robot hand-eye coordinate conversion device, a computer device, a storage medium and a computer program product. The method comprises the following steps: acquiring images, by a scanner of the robot, of the robot's end effector on which a calibration plate is mounted, to obtain calibration plate images of the end effector at different poses; detecting each marker point in the calibration plate images of the different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions; judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group; generating marker point sequences of the different poses according to the valid region centers and the central marker point among the valid region centers at each pose; and calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses. The method can improve the accuracy of coordinate conversion between the scanner and the robot.
Description
Technical Field
The present application relates to the field of robot technology, and in particular to a robot hand-eye coordinate conversion method and apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of artificial intelligence technology, robots have been widely used in various industries. In the industrial field, a robot is equipped with a visual perception system, and the robot controls an end effector to execute actions such as machining and installation using the three-dimensional information acquired by the visual perception system. In short, the visual perception system is equivalent to human eyes and the end effector to human hands, and preset action tasks are completed through cooperation between the hands and the eyes.
To ensure that the robot accurately moves a spatial object to a target position, the conversion relation between the coordinate system of the vision system and the coordinate system of the manipulator must be determined. Traditional methods for determining this conversion relation have poor accuracy, and the obtained result deviates considerably from the actual true value.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a robot hand-eye coordinate conversion method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product that can improve calibration accuracy.
In a first aspect, the application provides a robot hand-eye coordinate transformation method. The method comprises the following steps:
acquiring images, by a scanner of the robot, of an end effector of the robot on which a calibration plate is mounted, to obtain calibration plate images of the end effector at different poses;
detecting each marker point in the calibration plate images of the different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions;
judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group;
generating marker point sequences of the different poses according to the valid region centers and the central marker point among the valid region centers at each pose;
and calibrating the coordinate system conversion relation between the end effector and the scanner through the marker point sequences of the different poses.
In one embodiment, the detecting, according to at least two geometric shapes, each marker point in the calibration plate images of different poses to obtain at least two different groups of marker point regions includes:
performing edge extraction detection on each marker point in the calibration plate images according to at least two different shapes, respectively, to obtain at least two different groups of image contours;
screening the image contours in each group by a contour length interval;
and obtaining the at least two different groups of marker point regions based on the screened image contours in each group.
In one embodiment, the image contours include circular contours and polygonal contours, and the obtaining the at least two different groups of marker point regions based on the screened image contours in each group includes:
performing similarity calculation based on the contour area and contour length of each screened circular contour to obtain the circularity similarity of each circular contour;
selecting, based on the circularity similarity, the circular marker point regions corresponding to the marker points from the detected circular contours;
fitting the detected polygonal contours in each group to obtain a polygon-fitted image;
and taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker point regions corresponding to the marker points;
the determining whether the region centers of the marker point regions in the respective groups are valid includes:
judging whether the region center of each circular marker point region is valid based on the distance between the circular marker point region and the quadrilateral marker point region of the same marker point.
In one embodiment, the determining whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group includes:
performing overlap detection on the marker point regions in each group, respectively, and removing the overlapping marker point regions in each group to obtain each group of pruned marker point regions;
comparing the distances between each group of pruned marker point regions with a marker point adjacency distance threshold to obtain a plurality of adjacent point comparison results;
and judging, based on each adjacent point comparison result, whether the region centers in each group of pruned marker point regions are valid.
In one embodiment, the performing overlap detection on the marker point regions in each group and removing the overlapping marker point regions to obtain each group of pruned marker point regions includes:
pairwise combining and comparing the marker point regions in each group;
calculating the overlap detection distance between each compared pair of marker point regions;
when the overlap detection distance meets a contour detection threshold, calculating the region side lengths and region area of each marker point region in the compared pair;
and removing the overlapping marker point regions in each group based on the region side lengths and region areas of the compared marker point regions, to obtain each group of pruned marker point regions.
In one embodiment, the generating marker point sequences of different poses according to the valid region centers and the central marker point among the valid region centers at each pose includes:
performing an averaging calculation on the valid region centers of the same pose to obtain the center-of-gravity position of the region centers for each pose;
finding the central marker point of each pose from the region centers based on the distances between the center-of-gravity position and the valid region centers of the same pose;
determining the central marker point of the region centers at each pose as the polar coordinate origin for that pose;
obtaining the position of each region center in the polar coordinate system of each pose based on the region centers and the polar coordinate origin of that pose;
and sorting the positions of the region centers in the polar coordinate system of each pose according to the angles of the region centers in the polar coordinate system, to obtain the marker point sequences of the different poses.
In one embodiment, the coordinate system conversion relation includes a rotation conversion relation and a translation conversion relation, and the calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of different poses includes:
calibrating the rotation conversion relation between the robot and the scanner through the rotation vectors corresponding to the marker point sequences during the conversion between different poses;
obtaining a first translation vector through the origin translation information of the marker point sequences during the conversion between different poses;
obtaining a second translation vector through calculation with the marker point sphere center fitted from the marker point sequences of different poses and the marker point sequence of a preset pose;
and combining the first translation vector and the second translation vector to obtain the translation conversion relation.
In a second aspect, the application further provides a robot hand-eye coordinate transformation device. The device comprises:
the image acquisition module is used for acquiring images, through a scanner of the robot, of the end effector of the robot on which the calibration plate is mounted, to obtain calibration plate images of the end effector at different poses;
the edge detection module is used for detecting each marker point in the calibration plate images of different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions;
the valid marker point judging module is used for judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group;
the marker point sequence generating module is used for generating marker point sequences of different poses according to the valid region centers and the central marker point among the valid region centers at each pose;
and the hand-eye calibration module is used for calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses.
In a third aspect, the application also provides a computer device. The computer device comprises a memory and a processor, the memory stores a computer program, and the processor implements the steps of the robot hand-eye coordinate conversion in any of the above embodiments when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the robot hand-eye coordinate conversion in any of the embodiments described above.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the steps of the robot hand-eye coordinate conversion in any of the embodiments described above.
According to the robot hand-eye coordinate conversion method and device, the computer device, the storage medium and the computer program product, each marker point in the calibration plate images of different poses is detected according to at least two geometric shapes, and the marker point regions are screened out through this geometric-feature detection of the images. Whether the marker points in the calibration plate images are valid is judged based on the distances between the marker point regions in each group; marker point sequences of different poses are then generated from the valid region centers and the central marker point among them at each pose, and the positions of the marker points in the calibration plate images are determined through the marker point sequences. In this way, the robot hand-eye calibration is converted into computing the translation and rotation relation between point pairs in three-dimensional space, so the corresponding translation and rotation relation can be computed with a PnP-style algorithm without relying on the change relation between poses; the computation error is easy to control, and the method is convenient for application developers to debug.
Drawings
FIG. 1 is a diagram of an application environment of a robot hand-eye coordinate conversion method according to an embodiment;
FIG. 2 is a schematic flowchart of a robot hand-eye coordinate transformation method according to an embodiment;
FIG. 3 is a schematic view of the structure of a marker plate according to an embodiment;
FIG. 4 is a schematic diagram of a marker point in another embodiment;
FIG. 5 is a block diagram showing the structure of a robot hand-eye coordinate transformation apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The robot hand-eye coordinate conversion method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or be located on the cloud or another network server. The terminal 102 acquires images, through a scanner of the robot, of the end effector of the robot on which the calibration plate is mounted, obtaining calibration plate images of the end effector at different poses; detects each marker point in the calibration plate images of the different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions; judges whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group; generates marker point sequences of the different poses according to the valid region centers and the central marker point among them at each pose; and calibrates the coordinate system conversion relation between the end effector and the scanner through the marker point sequences of the different poses.
The terminal 102 may be, but is not limited to, any of various robots, personal computers, notebook computers, smart phones, tablet computers, Internet-of-Things devices, and portable wearable devices; the Internet-of-Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like, and the portable wearable devices may be smart watches, smart bracelets, head-mounted devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a robot hand-eye coordinate conversion method is provided, described here taking its application to the terminal 102 in fig. 1 as an example, and includes the following steps:
Step 202, acquiring images, through a scanner of the robot, of the end effector of the robot on which the calibration plate is mounted, so as to obtain calibration plate images of the end effector at different poses.
The robot comprises a robot body, a scanner, and an end effector. The displacement relation between the robot body and the coordinate system of the scanner is fixed, and the scanner comprises depth-map and grayscale cameras, used for determining the marker points and their pixel coordinates. The end effector is mounted on at least one axis of the robot body, and the manipulator controlled by the end effector carries a marker plate for hand-eye calibration, on which marker points are pasted. The marker plate is a marker target made of light aluminum alloy; coded marker points are pasted on the target, with white paper pasted on it first and the marker points then pasted on the white paper, so as to improve the recognition accuracy of marker points at long distances.
In one embodiment, the marker plate is shown in fig. 3: zone-position marker points are arranged annularly at the edge of the marker plate, a corresponding nearby marker point is arranged near each zone-position marker point to serve as a marked position, and a central-position marker point is arranged at the center of the marker plate. In another embodiment, there are at least 9 zone-position marker points, each with a corresponding nearby marker point; the angular interval between adjacent zone-position marker points, about the central-position marker point, is at least 15 degrees, and these serve as candidate marker points in the calibration plate image, i.e. the marker points used for image detection, while the angular interval between a zone-position marker point and its corresponding nearby marker point, about the central-position marker point, is less than 5 degrees, and the nearby marker point is not a marker point used for image detection. A marker point is a figure made up of a plurality of features, as shown in fig. 4.
In one embodiment, the working range of the robot arm corresponding to the end effector is determined; within this working range, the end effector carrying the marker plate is controlled to move multiple times over as much of the working interval as possible, and the scanner captures images of the end effector during these movements, so as to reduce errors in the subsequent pose-estimation calculation. When the rotation coordinate conversion relation is calculated, the included angle between the tool coordinate system and the robot body coordinate system is kept unchanged during each movement of the end effector.
In one embodiment, capturing images of multiple movements of the end effector with the scanner includes: in the process of calculating the translation coordinate conversion relation, keeping the origin position of the tool coordinate system unchanged during each movement while rotating the robot tool coordinate system; after the movements are finished, acquiring left and right image groups through the left and right cameras of the scanner's binocular camera, so as to obtain left and right calibration plate image sequences of the end effector at different poses; and computing, by triangulation from the left and right calibration plate image sequences and the intrinsic and extrinsic parameters of the binocular camera, the coordinate positions of the marker points at the different poses.
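The triangulation at the end of this step can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the stereo projection matrices are already known from calibration, and the function names and toy camera parameters are assumptions.

```python
import numpy as np
import cv2

def triangulate_markers(pts_left, pts_right, P_left, P_right):
    """pts_left/pts_right: (N, 2) matched marker-center pixel coordinates.
    P_left/P_right: 3x4 projection matrices (intrinsics @ extrinsics)."""
    pl = np.asarray(pts_left, dtype=np.float64).T        # 2xN, as OpenCV expects
    pr = np.asarray(pts_right, dtype=np.float64).T
    Xh = cv2.triangulatePoints(P_left, P_right, pl, pr)  # 4xN homogeneous points
    return (Xh[:3] / Xh[3]).T                            # (N, 3) Euclidean points

# Toy rectified rig: focal length 1000 px, principal point (640, 480),
# right camera translated 0.1 m along -X (all values illustrative).
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
```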
It should be appreciated that capturing images of multiple movements of the end effector with the scanner involves two processes: one computes the rotation coordinate conversion relation, and the other computes the translation coordinate conversion relation. The two processes are complementary and are not limited to the above.
After the scanner collects the images, the calibration plate images with different poses are obtained. In the calibration board images in each pose, there are differences in the shape, coordinate position, and other marker point characteristics of the same marker point, and after detection is performed on an image based on these differences, the image detection results in different poses are also different.
Step 204, detecting each marker point in the calibration plate images of the different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions.
Each marker point in the calibration plate image has its own pixel-coordinate position region, and each such region has corresponding marker point features. These features are detected according to the corresponding shape, and a plurality of marker point regions corresponding to each marker point in the calibration plate image are then determined based on the detection results. The plurality of marker point regions corresponding to each marker point are divided into different groups according to the detected shapes: the marker point regions within a group are all contours of the same shape, but differ in image attributes such as position, side length, and area, and whether each marker point region within a group should be removed can be judged from at least one of these image attributes.
After some marker point regions in a group are removed, the correspondence between marker point regions in different groups is determined based on the positions of the marker point regions, and the corresponding marker point regions across groups are determined based on this correspondence, so as to judge whether the region center of the marker point region in each group is valid.
Step 206, judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group.
The distance between marker point regions in each group may be the distance between marker point regions within the same group or the distance between marker point regions in different groups. When the distance between marker point regions within the same group is smaller than the non-maximum suppression threshold, the terminal treats those regions as the suppression area formed by adjacent and duplicate regions, performs the non-maximum calculation on it, and removes points that are too close together or duplicated.
For the distances between marker point regions in different groups, the terminal calculates the distance between the corresponding marker point regions of the different groups, compares it with the corresponding threshold, and determines whether the region center of the marker point region in the group of a given shape is valid. For example: the marker point regions in the ellipse group have an elliptical contour and those in the quadrilateral group have a quadrilateral contour; when the distance between corresponding marker point regions of the ellipse group and the quadrilateral group is smaller than the corresponding threshold, the region centers of the marker point regions in the ellipse group are taken as valid, and the region centers of the corresponding marker point regions in the quadrilateral group are all invalid.
After the region centers are obtained, it is judged whether the number of region centers in the marker plate image of each pose matches the number of marker points attached to the calibration plate. If they match, the central marker point at the corresponding pose is determined based on the region centers of that pose; if they do not match, exception handling on the number of coded points is performed until the number of region centers in the marker plate image of each pose matches the number of marker points attached to the calibration plate.
The exception handling on the number of region centers comprises: if the number of region centers at a certain pose is smaller than the number of marker points attached to the calibration plate, the pose is adjusted and rescanned until the anomaly disappears; if the number of region centers is larger than the number of marker points attached to the calibration plate, the points whose region centers are farthest from the average pixel coordinate are removed in turn until the anomaly disappears.
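A minimal sketch of this count check, under the assumption that "removing the farthest points in turn" means iteratively dropping the center farthest from the mean pixel coordinate; the function name and error handling are illustrative.

```python
import numpy as np

def fix_center_count(centers, expected):
    """centers: (N, 2) detected region-center pixel coordinates.
    Returns exactly `expected` centers, or raises if too few were found."""
    centers = np.asarray(centers, dtype=np.float64)
    if len(centers) < expected:
        # Too few centers: the pose must be adjusted and the plate rescanned.
        raise RuntimeError("too few region centers: adjust pose and rescan")
    while len(centers) > expected:
        # Drop the center farthest from the average pixel coordinate.
        d = np.linalg.norm(centers - centers.mean(axis=0), axis=1)
        centers = np.delete(centers, np.argmax(d), axis=0)
    return centers
```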
Step 208, generating marker point sequences of different poses according to the valid region centers and the central marker point among the valid region centers at each pose.
The central marker point is one of the region centers at a given pose and represents the central-position marker point arranged on the marker plate. Because the poses of the calibration plate images differ, the center-of-gravity position of the region centers in a calibration plate image is generally not itself the central marker point; rather, the region center closest to that center-of-gravity position is the central marker point.
The calculation of the central marker point at each pose proceeds as follows: the terminal averages the position data of the region centers at a given pose to obtain the mean of the region centers at that pose, which is the center-of-gravity position of the marker points at that pose; it then calculates the distance between this center-of-gravity position and each region center, selects one region center at that pose based on the results, and takes the selected region center as the central marker point. The distance between the center-of-gravity position and the central marker point is smaller than the distance between the center-of-gravity position and any other region center at that pose.
After the central marker point is obtained, the distance between each valid region center and the central marker point is determined, so that the correspondence between each region center and its marker point is constructed; this correspondence fixes the order of the marker points in the calibration plate image and thus builds the marker point sequence. Each marker point in the sequence can be coded so that it can be identified and referenced more easily.
In one embodiment, generating the marker point sequences of different poses according to the valid region centers and the central marker point among the valid region centers at each pose comprises: performing an averaging calculation on the valid region centers of the same pose to obtain the center-of-gravity position of the region centers for each pose; finding the central marker point of each pose from the region centers based on the distances between the center-of-gravity position and the valid region centers of the same pose; determining the central marker point of the region centers at each pose as the polar coordinate origin for that pose; obtaining the position of each region center in the polar coordinate system of each pose based on the region centers and the polar coordinate origin of that pose; and sorting the positions of the region centers in the polar coordinate system of each pose according to their angles in the polar coordinate system, so as to obtain the marker point sequences of the different poses.
After the polar coordinate origin is determined, the region center with the minimum adjacent angle is computed as the position of the starting marker point, and the angle corresponding to the starting marker point is set as the starting angle; a polar coordinate system is then constructed from the polar coordinate origin and the starting marker point. Although the angles of the region centers in the polar coordinate system differ, after rotation and tilt the relative position of each marker point in the sequence can still be recovered from its angle in the polar coordinate system, because the numbering of the marker points and their order in the polar coordinate system are fixed; the marker point sequence in the polar coordinate system is therefore robust to rotation and tilt.
In one embodiment, determining the central marker point of the region centers at each pose as the polar coordinate origin of that pose comprises: averaging the region centers of the same pose to obtain the center-of-gravity position of the region centers for each pose; screening the region centers based on the center-of-gravity position of the same pose and the distance intervals between the region centers to obtain the central marker point of each pose; and determining the central marker point of each pose as the polar coordinate origin of that pose.
In one embodiment, sorting the positions of the region centers in the polar coordinate system of each pose according to their angles to obtain the marker point sequences of different poses includes: calculating the initial polar angles of the central marker point and the region centers, then sorting by these initial polar angles to obtain an initial marker point sequence; calculating the included-angle difference between adjacent points in the initial marker point sequence, and determining the position with the minimum modulus of this difference as the target starting point; and determining the target angles of the target starting point and the region centers in the polar coordinate system according to the position of the target starting point in the initial sequence, and rearranging the region centers based on the target angles to obtain the target polar coordinate sequence.
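A minimal sketch of the whole ordering (centroid, then central marker point, then polar-angle sort, then re-basing at the smallest included angle). It works on 2D centers and treats the exact re-basing convention as an assumption; all names are illustrative.

```python
import numpy as np

def order_marker_sequence(centers):
    """centers: (N, 2) valid region centers of one pose.
    Returns (polar origin, remaining centers in sequence order)."""
    centers = np.asarray(centers, dtype=np.float64)
    gravity = centers.mean(axis=0)                       # center-of-gravity position
    k = int(np.argmin(np.linalg.norm(centers - gravity, axis=1)))
    origin = centers[k]                                  # central marker = polar origin
    ring = np.delete(centers, k, axis=0)
    angles = np.arctan2(ring[:, 1] - origin[1], ring[:, 0] - origin[0])
    order = np.argsort(angles)                           # initial polar-angle sequence
    ring, angles = ring[order], angles[order]
    gaps = np.diff(np.append(angles, angles[0] + 2.0 * np.pi))
    start = int(np.argmin(np.abs(gaps)))                 # smallest included-angle pair
    return origin, np.roll(ring, -start, axis=0)         # re-based target sequence
```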
Step 210, calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses.
In one embodiment, calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of different poses comprises the following steps:
averaging the position coordinates of the marker points in the marker point sequence of the same pose to obtain the three-dimensional reconstruction coordinate mean of each pose, and computing with the three-dimensional reconstruction coordinate means of the poses by a PnP algorithm to obtain a rotation value and a first translation value;
performing sphere fitting on the three-dimensional reconstruction coordinate means of the poses to obtain a marker point fitting sphere, and computing from the sphere center of the fitting sphere and the position mean corresponding to the marker point sequence of the preset pose to obtain a second translation value;
and calibrating the rotation part of the coordinate system conversion relation between the robot and the scanner with the rotation value, and the translation part with the first translation value and the second translation value.
Therefore, after the marker point sequences of different poses are obtained, the coordinate system conversion relation between the robot and the scanner can be accurately calculated based on PnP-related algorithms.
In one embodiment, the coordinate system conversion relation includes a rotation conversion relation and a translation conversion relation, and calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of different poses includes: calibrating the rotation conversion relation between the robot and the scanner through the position information of the movements of the marker point sequences during the conversion between different poses, and calculating a first translation vector; obtaining a second translation vector through calculation with the sphere center fitted from the marker point sequences of different poses and the marker point sequence of the preset pose; and combining the first translation vector and the second translation vector to calibrate the translation conversion relation.
In one embodiment, the process of calculating the first translation vector specifically includes: keeping the rotation of the tool coordinate system relative to the robot coordinate system unchanged, changing the position of the end effector multiple times, recording at each changed position the coordinate value of the tool coordinate system origin and the mean of the three-dimensional reconstruction coordinates of the target marker points obtained by the scanner, and using the two groups of coordinate values in a PnP algorithm to obtain the rotation value and the first translation vector between the robot coordinate system and the scanner coordinate system.
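Since both coordinate groups here are 3D point sets, the PnP-style solve reduces to a rigid 3D-3D fit. The following is one common realization (the SVD-based Kabsch/Umeyama solution), offered as an illustrative sketch rather than the patent's exact algorithm; all names are assumptions.

```python
import numpy as np

def fit_rigid_transform(robot_pts, scanner_pts):
    """Least-squares R, t such that scanner_pts ≈ R @ robot_pts + t.
    robot_pts: (N, 3) tool-origin coordinates in the robot base frame.
    scanner_pts: (N, 3) per-pose means of reconstructed marker coordinates."""
    A = np.asarray(robot_pts, dtype=np.float64)
    B = np.asarray(scanner_pts, dtype=np.float64)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation, det(R) = +1
    t = cb - R @ ca                            # first translation vector
    return R, t
```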
The step of calculating the second translation vector comprises: in the process of calculating the first translation vector, recording the three-dimensional reconstruction coordinate mean of the target marker points in the final state; then, keeping the origin position of the tool coordinate system unchanged, rotating the tool coordinate system multiple times to change its posture, and acquiring the three-dimensional reconstruction coordinate mean of the marker points with the scanner at each posture. The means obtained at the multiple postures are distributed on a spherical surface; the coordinates of the sphere center are fitted and combined with the recorded three-dimensional reconstruction coordinate mean of the marker points to obtain the second translation vector.
Finally, the calculated first translation vector and second translation vector are combined to obtain the translation conversion relation. For example: after 8 groups of marker point sequences of different poses are collected, the rotation conversion relation and the first translation vector can be obtained by PnP computation. Then, at least 7 groups of marker point sequences are fitted to obtain the fitted sphere center, and the sphere center coordinates together with the mean corresponding to the first group of marker point sequences (with rotation count 0) yield the second translation vector. Finally, the second translation vector is added to the first translation vector calculated by PnP to obtain the translation conversion relation between the robot and the three-dimensional scanner.
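The sphere fitting and vector combination can be sketched with an algebraic least-squares fit; the sign convention for the second translation vector below is an assumption, and all names are illustrative.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    points: (N, 3) per-pose marker-coordinate means lying on a sphere
    around the fixed tool-frame origin. Returns (center, radius)."""
    P = np.asarray(points, dtype=np.float64)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)                   # |p|^2 = 2 c·p + (r^2 - |c|^2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = float(np.sqrt(x[3] + center @ center))
    return center, radius

# center, _ = fit_sphere(pose_means)           # fitted tool origin, scanner frame
# t2 = marker_mean_ref - center                # second translation (assumed sign)
# t = t1 + t2                                  # combined translation, per the text
```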
In the robot hand-eye coordinate conversion method, each marker point in the calibration plate images of different poses is detected according to at least two geometric shapes, and the marker point regions are screened out through this geometric-feature detection of the images. Whether the marker points in the calibration plate images are valid is judged based on the distances between the marker point regions in each group; marker point sequences of different poses are then generated from the valid region centers and the central marker point among them at each pose, and the positions of the marker points in the calibration plate images are determined through the marker point sequences. The robot hand-eye calibration is thereby converted into computing the translation and rotation relation between point pairs in three-dimensional space, so the corresponding translation and rotation relation can be computed with a PnP-style algorithm without relying on the change relation between poses; the computation error is easy to control, and application developers can debug conveniently.
In one embodiment, step 204, detecting each marker point in the calibration plate images of different poses according to at least two geometric shapes to obtain at least two different groups of marker point regions, includes: performing edge extraction detection on each marker point in the calibration plate image according to at least two different shapes, respectively, to obtain at least two different groups of image contours; screening the image contours in each group by a contour length interval; and obtaining at least two different groups of marker point regions based on the screened image contours in each group.
Specifically, Canny edge extraction is performed on each marker point in the calibration plate image according to at least two different shapes, respectively, to obtain at least two different groups of image contours; useless contours that are too short or too long are removed from the groups to obtain the screened image contours of each group; and the at least two different groups of marker point regions are obtained by the corresponding calculation on the screened image contours of each group.
In one embodiment, the image contours include circular contours and polygonal contours, and obtaining at least two different groups of marker point regions based on the screened image contours in each group includes: performing similarity calculation based on the contour area and contour length of each screened circular contour to obtain the circularity similarity of each circular contour; selecting, based on the circularity similarity, the circular marker point regions corresponding to the marker points from the detected circular contours; fitting the detected polygonal contours in each group to obtain a polygon-fitted image; and taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker point regions corresponding to the marker points.
After the circularity similarity of each contour is calculated, the circular contours whose circularity meets the similarity threshold are selected (a circle yields a value near 1), giving the circular marker point regions corresponding to the marker points. The circularity is computed as circularity = (4.0 × π × contour area) / (contour perimeter² + 1e-7), and the similarity threshold may be 0.8.
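A minimal sketch of the two-shape detection (Canny edges, contour extraction, length filtering, a circularity test for circles, and polygon fitting for quadrilaterals) using OpenCV; the Canny thresholds, length interval, and approximation epsilon are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_marker_regions(gray, len_min=30.0, len_max=400.0, circ_thresh=0.8):
    """gray: 8-bit grayscale calibration plate image.
    Returns (circular contours, quadrilateral contours)."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    circles, quads = [], []
    for c in contours:
        per = cv2.arcLength(c, True)
        if not (len_min < per < len_max):               # drop too-short/long contours
            continue
        area = cv2.contourArea(c)
        circ = 4.0 * np.pi * area / (per ** 2 + 1e-7)   # ≈ 1 for a circle
        if circ >= circ_thresh:
            circles.append(c)
        else:
            poly = cv2.approxPolyDP(c, 0.02 * per, True)  # polygon fitting
            if len(poly) == 4 and cv2.isContourConvex(poly):
                quads.append(poly)                        # quadrilateral region
    return circles, quads
```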
Correspondingly, step 206, determining whether the region center of the marker point region in each group is valid, includes: judging whether the region center of each circular marker point region is valid based on the distance between the circular marker point region and the quadrilateral marker point region of the same marker point. Because the marker points are circular, the region centers of the circular marker point regions can be taken as valid while the region centers of the quadrilateral marker point regions are invalid, which improves the correspondence accuracy.
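The cross-group validity check can be sketched as a nearest-distance test between the two groups; the pixel threshold and function name are illustrative assumptions.

```python
import numpy as np

def valid_circle_centers(circle_centers, quad_centers, dist_thresh=10.0):
    """Keep a circular region center only if a quadrilateral region detected
    for the same marker point lies within dist_thresh pixels of it."""
    circle_centers = np.asarray(circle_centers, dtype=np.float64)
    quad_centers = np.asarray(quad_centers, dtype=np.float64)
    valid = []
    for c in circle_centers:
        d = np.linalg.norm(quad_centers - c, axis=1)
        if d.size and d.min() < dist_thresh:   # corroborated by the quad group
            valid.append(c)
    return np.array(valid)
```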
In one embodiment, step 206, judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group, comprises: performing overlap detection on the marker point regions in each group, respectively, and removing the overlapping marker point regions to obtain each group of pruned marker point regions; comparing the distances between each group of pruned marker point regions with the marker point adjacency distance threshold to obtain a plurality of adjacent point comparison results; and judging, based on each adjacent point comparison result, whether the region centers in each group of pruned marker point regions are valid.
In the overlap detection performed on the marker point regions of each group, the terminal selects marker point regions within the same group based on the distances between the marker point regions of each group, calculates the overlap-detection attributes of the selected regions, detects each marker point region in the group based on these attributes, and removes the overlapping marker point regions in each group based on the detection results.
In one embodiment, performing overlap detection on the marker point regions in each group and removing the overlapping regions to obtain each group of pruned marker point regions includes: pairwise combining and comparing the marker point regions in each group; calculating the overlap detection distance between each compared pair of marker point regions; when the overlap detection distance meets the contour detection threshold, calculating the region side lengths and region area of each marker point region in the compared pair; and removing the overlapping marker point regions in each group based on the region side lengths and region areas of the compared regions, to obtain each group of pruned marker point regions.
The overlap detection distance is the distance between any two marker point regions within the same group, namely the center distance between marker point regions computed for each pairwise combination of the group.
For example: after detection by quadrilaterals, any two quadrilateral marker point regions in the quadrilateral group are combined and compared, and the overlap detection distance between the two is calculated; when this distance is smaller than the corresponding contour detection threshold, contour detection is performed. During contour detection, the region side lengths and region area of each of the two compared marker point regions are calculated, and the ratio between the product of the side lengths and the region area is computed; when the ratio lies in the valid interval, the areas of the two quadrilaterals are compared, the larger one is removed, and the smaller one is retained, giving the pruned marker point regions of the quadrilateral group.
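A minimal sketch of the pairwise overlap pruning for the quadrilateral group; it implements only the center-distance test and the "remove the larger area" rule, omitting the side-length/area ratio check, and the threshold and names are illustrative assumptions.

```python
import cv2
import numpy as np
from itertools import combinations

def prune_overlapping_quads(quads, dist_thresh=15.0):
    """quads: list of quadrilateral contours of one group.
    Removes, for each overlapping pair, the larger-area duplicate."""
    def center(q):
        m = cv2.moments(q)
        return np.array([m["m10"] / (m["m00"] + 1e-7),
                         m["m01"] / (m["m00"] + 1e-7)])
    keep = set(range(len(quads)))
    for i, j in combinations(range(len(quads)), 2):
        if i not in keep or j not in keep:
            continue
        if np.linalg.norm(center(quads[i]) - center(quads[j])) < dist_thresh:
            # Overlapping duplicates of one marker: keep the smaller (inner) one.
            bigger = i if cv2.contourArea(quads[i]) > cv2.contourArea(quads[j]) else j
            keep.discard(bigger)
    return [quads[k] for k in sorted(keep)]
```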
Thus, the method screens the marker points through the geometric-feature relations of the two-dimensional images, can accurately identify and encode the marker points via the marker point sequences under rotation and translation of the XYZ axes, and improves the reconstruction accuracy of the binocular three-dimensional scanner in combination with sub-pixel processing. Because the hand-eye calibration uses a PnP algorithm over point pairs, the computation process is simple, the errors are easy to control and optimize, and application developers can debug conveniently. By contrast, the traditional hand-eye calibration method needs to solve the AX = XB equation based on the pose-change relation; because that solution uses differences between successive poses, correctly solving the equation depends highly on the accuracy of the measurement data, and even a slight problem in the measurement data causes the result to deviate considerably from the actual true value. The calculation principle of the present method is intuitive, problems arising in practical use can be found easily, and even with robot motion errors, three-dimensional scanner accuracy errors, sphere-center fitting errors, and rotation/translation computation errors in the measurement process, high accuracy can still be guaranteed.
It should be understood that, although the steps in the flowcharts related to the embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless otherwise indicated herein, the steps are not strictly limited to the order described and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a robot hand-eye coordinate conversion device for realizing the robot hand-eye coordinate conversion method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the robot hand-eye coordinate transformation device provided below can be referred to the limitations on the robot hand-eye coordinate transformation method in the above, and are not described again here.
In one embodiment, as shown in fig. 5, there is provided a robot hand-eye coordinate transformation apparatus including: an image acquisition module 502, an edge detection module 504, a valid marker point determination module 506, a marker point sequence generation module 508, and a hand-eye calibration module 510, wherein:
an image acquisition module 502, configured to acquire images, through a scanner of the robot, of the end effector of the robot on which a calibration plate is mounted, to obtain calibration plate images of the end effector at different poses;
an edge detection module 504, configured to detect each marker point in the calibration plate images of different poses according to at least two geometric shapes, to obtain at least two different groups of marker point regions;
a valid marker point determining module 506, configured to judge whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group;
a marker point sequence generating module 508, configured to generate marker point sequences of different poses according to the valid region centers and the central marker point among the valid region centers at each pose;
and a hand-eye calibration module 510, configured to calibrate the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses.
In one embodiment, the edge detection module 504 includes:
the contour detection unit is used for performing edge extraction detection on each marker point in the calibration plate image according to at least two different shapes, respectively, to obtain at least two different groups of image contours;
the length screening unit is used for screening the image contours in each group by a contour length interval;
and the marker point region generating unit is used for obtaining the at least two different groups of marker point regions based on the screened image contours in each group.
In one embodiment, the marker point region generating unit includes:
the similarity calculation unit, used for performing similarity calculation based on the contour area and contour length of each screened circular contour to obtain the circularity similarity of each circular contour;
the circular region screening unit, used for selecting, based on the circularity similarity, the circular marker point regions corresponding to the marker points from the detected circular contours;
the polygon fitting unit, used for fitting the detected polygonal contours in each group to obtain a polygon-fitted image;
and the quadrilateral region screening unit, used for taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker point regions corresponding to the marker points;
the valid marker point determining module 506 includes:
a center validity determination unit, configured to judge whether the region center of each circular marker point region is valid based on the distance between the circular marker point region and the quadrilateral marker point region of the same marker point.
In one embodiment, the valid marker point determining module 506 includes:
the overlap region detection unit, used for performing overlap detection on the marker point regions in each group, respectively, and removing the overlapping marker point regions to obtain each group of pruned marker point regions;
the adjacent point comparison unit, used for comparing the distances between each group of pruned marker point regions with the marker point adjacency distance threshold to obtain a plurality of adjacent point comparison results;
and the center validity judging unit, used for judging, based on each adjacent point comparison result, whether the region centers in each group of pruned marker point regions are valid.
In one embodiment, the overlap region detection unit includes:
the region combination comparison subunit, used for pairwise combining and comparing the marker point regions in each group;
the first detection subunit, used for calculating the overlap detection distance between each compared pair of marker point regions;
the second detection subunit, used for calculating, when the overlap detection distance meets the contour detection threshold, the region side lengths and region area of each marker point region in the compared pair;
and the overlap region removing subunit, used for removing the overlapping marker point regions in each group based on the region side lengths and region areas of the compared regions, to obtain each group of pruned marker point regions.
In one embodiment, the marker point sequence generating module 508 includes:
the center-of-gravity position calculating unit, used for performing an averaging calculation on the valid region centers of the same pose to obtain the center-of-gravity positions of the region centers at the different poses;
the central marker point determination unit, used for finding the central marker point of each pose from the region centers based on the distances between the center-of-gravity position and the valid region centers of the same pose;
the origin generating unit, used for determining the central marker point of the region centers at each pose as the polar coordinate origin for that pose;
the polar coordinate system building unit, used for obtaining the position of each region center in the polar coordinate system of each pose based on the region centers and the polar coordinate origin of that pose;
and the marker point sequence generating unit, used for sorting the positions of the region centers in the polar coordinate system of each pose according to their angles in the polar coordinate system, to obtain the marker point sequences of the different poses.
In one embodiment, the coordinate system conversion relation includes a rotation conversion relation and a translation conversion relation, and the hand-eye calibration module 510 includes:
the rotation relation calibration unit, used for calibrating the rotation conversion relation between the robot and the scanner through the rotation vectors corresponding to the marker point sequences during the conversion between different poses;
the first vector calculation unit, used for obtaining a first translation vector through the origin translation information of the marker point sequences during the conversion between different poses;
the second vector calculation unit, used for obtaining a second translation vector through calculation with the marker point sphere center fitted from the marker point sequences of different poses and the marker point sequence of a preset pose;
and the translation relation calibration unit, used for combining the first translation vector and the second translation vector to obtain the translation conversion relation.
All or part of the modules in the robot hand-eye coordinate conversion device can be realized by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, the processor in the computer device, or be stored in software form in the memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected by a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a robot hand-eye coordinate conversion method. The display unit of the computer device is used for forming a visually observable picture and may be a display screen, a projection device, or a virtual-reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of part of the structure related to the solution of the present application, and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program; the processor, when executing the computer program, implements the steps of the above method embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the above method embodiments.
In one embodiment, a computer program product is provided, including a computer program; the computer program, when executed by a processor, implements the steps of the above method embodiments.
It should be noted that the user information (including but not limited to user device information, user personal information, and the like) and data (including but not limited to data used for analysis, stored data, displayed data, and the like) referred to in the present application are information and data authorized by the user or sufficiently authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors involved in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A robot hand-eye coordinate conversion method, characterized in that the method comprises:
acquiring, by a scanner of the robot, images of an end effector of the robot on which a calibration plate is mounted, to obtain calibration plate images of the end effector at different poses;
detecting each marker point in the calibration plate images of the different poses according to at least two patterns to obtain at least two groups of different marker point regions;
determining whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group;
generating marker point sequences of the different poses according to the valid region centers and the center marker points of the valid region centers in the different poses;
and calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses.
2. The method according to claim 1, wherein the detecting each marker point in the calibration plate images of the different poses according to at least two patterns to obtain at least two groups of different marker point regions comprises:
performing edge extraction detection on each marker point in the calibration plate images according to at least two different patterns respectively to obtain at least two groups of different image contours;
screening the image contours in each group by a contour length interval;
and obtaining the at least two groups of different marker point regions based on the screened image contours in each group.
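As an illustrative aside (not part of the claim language): with OpenCV, the edge extraction and contour-length screening of claim 2 might look like the sketch below, where the Canny thresholds and the contour length interval are assumed placeholder values:

```python
import cv2

def screen_contours(gray, canny_lo=50, canny_hi=150,
                    min_len=40.0, max_len=600.0):
    """Edge-extract contours from a calibration plate image and keep only
    those whose perimeter lies inside the expected length interval.

    gray: single-channel image; all numeric thresholds are placeholders.
    """
    edges = cv2.Canny(gray, canny_lo, canny_hi)            # edge extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only contours whose closed perimeter falls in the interval
    return [c for c in contours
            if min_len <= cv2.arcLength(c, True) <= max_len]
```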
3. The method according to claim 2, wherein the image contours comprise circular contours and polygonal contours, and the obtaining the at least two groups of different marker point regions based on the screened image contours in each group comprises:
performing similarity calculation based on the contour area and the contour length of the screened circular contours to obtain the similarity of each circular contour;
selecting, based on the similarity of the circular contours, the circular marker point regions corresponding to the marker points from the screened circular contours;
fitting the screened polygonal contours in each group to obtain a polygon-fitted image;
and taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker point regions corresponding to the marker points;
the determining whether the region centers of the marker point regions in each group are valid comprises:
and judging whether the region center of each circular marker point region is valid based on the distance between the circular marker point region and the quadrilateral marker point region of each marker point.
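Again for illustration only: claim 3 does not fix the similarity measure. One standard choice that combines contour area and contour length is the isoperimetric circularity ratio, and the quadrilateral side can be obtained with a polygon fit; both are assumptions in the sketch below:

```python
import math
import cv2

def circle_similarity(contour):
    """Similarity of a contour to an ideal circle, in (0, 1].

    Uses 4*pi*area / perimeter**2, which equals 1 for a perfect circle;
    this specific formula is an assumption, since the claim only says the
    similarity is computed from contour area and contour length.
    """
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0:
        return 0.0
    return 4.0 * math.pi * cv2.contourArea(contour) / (perimeter * perimeter)

def is_quadrilateral(contour, eps_frac=0.02):
    """Polygon-fit a contour and report whether it reduces to 4 vertices."""
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, eps_frac * peri, True)
    return len(approx) == 4
```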
4. The method according to claim 1, wherein the determining whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group comprises:
performing overlap detection on the marker point regions in each group respectively, and removing the overlapped marker point regions in each group to obtain the marker point regions remaining in each group after removal;
comparing the distances between the marker point regions remaining in each group against a marker point adjacency distance threshold to obtain a plurality of adjacent point comparison results;
and judging, based on each adjacent point comparison result, whether the region centers of the marker point regions remaining in each group are valid.
5. The method according to claim 4, wherein the performing overlap detection on the marker point regions in each group respectively, and removing the overlapped marker point regions in each group to obtain the marker point regions remaining in each group comprises:
performing pairwise combination comparison on the marker point regions in each group respectively;
calculating the overlap detection distance between the marker point regions of each compared pair;
when the overlap detection distance meets a contour detection threshold, calculating the region side length and the region area of each marker point region in the compared pair;
and eliminating the overlapped marker point regions in each group based on the region side length and the region area of each compared marker point region, to obtain the marker point regions remaining in each group.
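For illustration, a minimal sketch of the pairwise overlap detection and elimination of claims 4 and 5, assuming each region carries a center, a side length, and an area, and assuming (as one possible policy) that the larger of two overlapping regions is discarded:

```python
from itertools import combinations
import numpy as np

def remove_overlaps(regions, dist_thresh=10.0):
    """Drop overlapping marker point regions from one group.

    regions: list of dicts with keys 'center' (x, y), 'side' and 'area'.
    Two regions whose center distance falls below dist_thresh are treated
    as overlapping; which one to eliminate is decided here from the region
    area, with the side length as a tie-breaker (an illustrative policy).
    """
    dropped = set()
    for i, j in combinations(range(len(regions)), 2):
        d = np.linalg.norm(np.subtract(regions[i]['center'],
                                       regions[j]['center']))
        if d < dist_thresh:                                # overlap detected
            a, b = regions[i], regions[j]
            if (a['area'], a['side']) >= (b['area'], b['side']):
                dropped.add(i)                             # drop the larger one
            else:
                dropped.add(j)
    return [r for k, r in enumerate(regions) if k not in dropped]
```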
6. The method according to claim 1, wherein the generating marker point sequences of the different poses according to the valid region centers and the center marker points of the valid region centers in the different poses comprises:
averaging the valid region centers of the same pose to obtain the barycenter position of the region centers for each pose;
finding the center marker point of each pose from the region centers based on the distances between the barycenter position and the valid region centers of the same pose;
taking the center marker point of the region centers in each pose as the polar coordinate origin for that pose;
obtaining the position of each region center in the polar coordinate system of each pose based on each region center and the polar coordinate origin of that pose;
and sorting the positions of the region centers in each pose's polar coordinate system by their polar angles to obtain the marker point sequences of the different poses.
7. The method according to any one of claims 1 to 6, wherein the coordinate system conversion relation comprises a rotation conversion relation and a translation conversion relation, and the calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses comprises:
calibrating the rotation conversion relation between the robot and the scanner through the rotation vectors corresponding to the marker point sequences during the conversion between different poses;
obtaining a first translation vector from the origin translation information of the marker point sequences during the conversion between different poses;
obtaining a second translation vector by calculation with the marker point sphere center fitted from the marker point sequences of the different poses and the marker point sequence of a preset pose;
and combining the first translation vector and the second translation vector to obtain the translation conversion relation.
8. A robot hand-eye coordinate conversion apparatus, characterized by comprising:
the image acquisition module is used for acquiring, through a scanner of the robot, images of the end effector of the robot on which the calibration plate is mounted, to obtain calibration plate images of the end effector at different poses;
the edge detection module is used for detecting each marker point in the calibration plate images of the different poses according to at least two patterns to obtain at least two groups of different marker point regions;
the valid marker point judging module is used for judging whether the region centers of the marker point regions in each group are valid based on the distances between the marker point regions in each group;
the marker point sequence generating module is used for generating marker point sequences of the different poses according to the valid region centers and the center marker points of the valid region centers in the different poses;
and the hand-eye calibration module is used for calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences of the different poses.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210809386.5A CN115049744A (en) | 2022-07-11 | 2022-07-11 | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115049744A (en) | 2022-09-13
Family
ID=83166310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210809386.5A Pending CN115049744A (en) | 2022-07-11 | 2022-07-11 | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049744A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116687569A (en) * | 2023-07-28 | 2023-09-05 | 深圳卡尔文科技有限公司 | Coded identification operation navigation method, system and storage medium |
CN116687569B (en) * | 2023-07-28 | 2023-10-03 | 深圳卡尔文科技有限公司 | Coded identification operation navigation method, system and storage medium |
CN118279399A (en) * | 2024-06-03 | 2024-07-02 | 先临三维科技股份有限公司 | Scanning equipment pose tracking method and tracking equipment |
Similar Documents
Publication | Title
---|---
CN109448090B (en) | Image processing method, device, electronic equipment and storage medium | |
CN108346165B (en) | Robot and three-dimensional sensing assembly combined calibration method and device | |
CN108875524B (en) | Sight estimation method, device, system and storage medium | |
US10636168B2 (en) | Image processing apparatus, method, and program | |
JP6740033B2 (en) | Information processing device, measurement system, information processing method, and program | |
JP6573419B1 (en) | Positioning method, robot and computer storage medium | |
RU2700246C1 (en) | Method and system for capturing an object using a robot device | |
JP6594129B2 (en) | Information processing apparatus, information processing method, and program | |
CN115049744A (en) | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium | |
CN104424630A (en) | Three-dimension reconstruction method and device, and mobile terminal | |
CN113330486A (en) | Depth estimation | |
KR20180050702A (en) | Image transformation processing method and apparatus, computer storage medium | |
Ahmadabadian et al. | Clustering and selecting vantage images in a low-cost system for 3D reconstruction of texture-less objects | |
CN116129037B (en) | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof | |
JP7114686B2 (en) | Augmented reality device and positioning method | |
CN114663463A (en) | Method, system, device, electronic device and storage medium for measuring joint mobility | |
CN115042184A (en) | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium | |
TW202238449A (en) | Indoor positioning system and indoor positioning method | |
CN115830135A (en) | Image processing method and device and electronic equipment | |
CN114859938A (en) | Robot, dynamic obstacle state estimation method and device and computer equipment | |
JP2018173882A (en) | Information processing device, method, and program | |
CN117352126A (en) | Muscle stress visualization method, device, computer equipment and storage medium | |
CN113436269A (en) | Image dense stereo matching method and device and computer equipment | |
CN114202554A (en) | Mark generation method, model training method, mark generation device, model training device, mark method, mark device, storage medium and equipment | |
CN115972202A (en) | Method, robot, device, medium and product for controlling operation of a robot arm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||