CN115810049A - Marker-based pose determination method, device, equipment, medium and product

Publication number: CN115810049A
Application number: CN202211703197.6A
Authority: CN (China)
Prior art keywords: pose, marker, camera, image, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 盛文波, 罗子云, 魏海永, 丁有爽, 邵天兰
Current assignee: Mech Mind Robotics Technologies Co Ltd
Original assignee: Mech Mind Robotics Technologies Co Ltd
Application filed by Mech Mind Robotics Technologies Co Ltd
Priority application: CN202211703197.6A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure provides a marker-based pose determination method, apparatus, device, medium, and product. The method includes: acquiring an image captured by a camera and a reference pose of a marker set, the marker set comprising at least one marker fixed on a placing table for an object to be grabbed; determining the real-time pose of the marker set in the image captured by the camera in the current camera coordinate system; if the reference pose is determined to differ from the real-time pose, determining a transformation relation between the real-time pose and the reference pose; and compensating the initial pose of the object to be grabbed in the image captured by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed. The method avoids errors in the pose of the object to be grabbed determined in the grabbing-device coordinate system that would otherwise arise when camera displacement changes the camera coordinate system, and thereby avoids object grabbing failures.

Description

Marker-based pose determination method, device, equipment, medium and product
Technical Field
The present disclosure relates to the field of electronics, and in particular, to a method, an apparatus, a device, a medium, and a product for determining a pose based on a marker.
Background
At present, in an automatic object-grabbing process, the pose information of the object to be grabbed in the camera coordinate system needs to be acquired and converted into the coordinate system of the grabbing device according to the correspondence between the two coordinate systems (namely the camera external parameters), so that the grabbing device can be controlled to grab the object using the converted pose information.
However, when an external collision displaces the camera, the camera coordinate system changes as well; if the previous correspondence is still used for the conversion, the object to be grabbed cannot be grabbed accurately.
Therefore, a pose determination method is needed to avoid the above technical problems.
Disclosure of Invention
The present disclosure provides a marker-based pose determination method, device, equipment, medium, and product for solving the problem of object grabbing failure caused by camera displacement in the related art.
In a first aspect, the present disclosure provides a marker-based pose determination method, including:
acquiring an image shot by a camera and a reference pose of a marker set; wherein the image shot by the camera comprises the marker set and the object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the set of markers in the camera coordinate system of the camera when the camera's external parameters are determined;
determining real-time poses of a set of markers in an image captured by the camera under a current camera coordinate system;
if the reference pose is determined to be different from the real-time pose, determining a transformation relation between the real-time pose and the reference pose;
compensating and transforming the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed in the current camera coordinate system.
In one possible implementation, determining a real-time pose of a set of markers in an image captured by the camera in a current camera coordinate system includes:
performing edge detection processing on a two-dimensional image in the image captured by the camera to determine an edge image of a marker in the marker set contained in the two-dimensional image; wherein the image captured by the camera comprises the two-dimensional image and a depth image;
determining two-dimensional coordinates of a preset mark point in the marker according to the edge image;
determining the position of the preset mark point according to the two-dimensional coordinates and the depth image of the image captured by the camera;
determining the posture of the preset mark point;
and determining the position of the preset mark point and the posture of the preset mark point as the real-time pose of the marker in the marker set in the image captured by the camera in the current camera coordinate system.
In a possible implementation manner, if the type of the marker in the marker set is a concentric circle type and the preset marker point is a center of a concentric circle, determining a two-dimensional coordinate of the preset marker point in the marker according to the edge image includes:
carrying out image correction processing on the edge image to obtain a corrected image; wherein the image correction processing is used for correcting an elliptical contour in the edge image into a circular contour; the concentric circle type marker is composed of a plurality of circular rings with the same center;
and determining the circle center coordinates of the concentric circles in the corrected image according to the corrected image, and determining the circle center coordinates as the two-dimensional coordinates of the preset mark point.
In one possible implementation, determining the posture of the preset mark point includes:
determining, among a plurality of preset mark points corresponding to the marker set, three non-collinear preset mark points as a first mark point, a second mark point, and a third mark point respectively;
determining an orthogonal coordinate system taking the first mark point as a coordinate origin according to the position of the first mark point, the position of the second mark point and the position of the third mark point;
and determining the pose of the first marker point according to the current camera coordinate system and the orthogonal coordinate system.
In a possible implementation manner, determining a transformation relationship between the real-time pose and the reference pose includes:
and fitting the real-time pose and the reference pose based on a least square method to obtain a transformation relation between the real-time pose and the reference pose.
In one possible implementation, the marker set includes a plurality of markers having different shape parameters;
determining a transformation relationship between the real-time pose and the reference pose, comprising: determining the corresponding relation between the real-time pose and the reference pose according to the shape parameters of the markers and the image shot by the camera; the corresponding relation is used for indicating the real-time pose and the reference pose that represent the pose of the same marker;
and determining the transformation relation between the real-time pose and the reference pose according to the corresponding relation.
In a possible implementation manner, when the marker is a concentric circle type, the shape parameter is the number of circular rings included in the concentric circle and the width of each circular ring.
In one possible implementation, the marker types of the markers of the marker set are concentric circle types, and the marker set includes at least three non-collinear markers; the concentric circle type marker is composed of a plurality of circular rings having the same center.
In one possible implementation, acquiring a reference pose of an image captured by a camera and a set of markers includes:
in the grabbing process, acquiring an image shot by a camera;
and acquiring the reference poses of the marker set in response to receiving error information sent by the grabbing device, wherein the error information is used for indicating that the grabbing device does not grab the object to be grabbed successfully.
In a second aspect, the present disclosure provides a marker-based pose determination apparatus, comprising:
an acquisition unit configured to acquire an image captured by a camera and a reference pose of a marker set; wherein the image shot by the camera comprises the marker set and the object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the set of markers in the camera coordinate system of the camera when the camera's external parameters are determined;
a first determination unit, configured to determine real-time poses of a set of markers in an image captured by the camera in a current camera coordinate system;
the second determining unit is used for determining the transformation relation between the real-time pose and the reference pose if the reference pose is determined to be different from the real-time pose;
the compensation unit is used for carrying out compensation transformation on the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed in the current camera coordinate system.
In one possible implementation manner, the first determining unit includes:
the detection module is used for carrying out edge detection processing on a two-dimensional image in the image shot by the camera and determining an edge image of a marker in a marker set contained in the two-dimensional image; wherein the image shot by the camera comprises a two-dimensional image and a depth image;
the first determining module is used for determining two-dimensional coordinates of preset mark points in the marker according to the edge image;
the second determining module is used for determining the position of the preset mark point according to the two-dimensional coordinates and the depth image of the image shot by the camera;
the third determining module is used for determining the posture of the preset mark point;
and the fourth determining module is used for determining the position of the preset mark point and the posture of the preset mark point as the real-time pose of the marker in the marker set in the image shot by the camera under the current camera coordinate system.
In a possible implementation manner, if the type of the marker in the marker set is a concentric circle type and the preset marker point is a center of the concentric circle, the first determining module is specifically configured to:
carrying out image correction processing on the edge image to obtain a corrected image; wherein the image correction processing is to correct an elliptical contour in the edge image to a circular contour; the concentric circle type marker is composed of a plurality of circular rings with the same circle center;
and determining the center coordinates of concentric circles in the corrected image according to the corrected image, and determining the center coordinates as the two-dimensional coordinates of the preset mark point.
In a possible implementation manner, the third determining module is specifically configured to:
determining three non-collinear preset mark points as a first mark point, a second mark point and a third mark point respectively from a plurality of preset mark points corresponding to the marker set;
determining an orthogonal coordinate system taking the first mark point as a coordinate origin according to the position of the first mark point, the position of the second mark point and the position of the third mark point;
and determining the pose of the first marker point according to the current camera coordinate system and the orthogonal coordinate system.
In a possible implementation manner, the second determining unit is specifically configured to:
and fitting the real-time pose and the reference pose based on a least square method to obtain a transformation relation between the real-time pose and the reference pose.
In one possible implementation, the marker set includes a plurality of markers having different shape parameters; a second determination unit comprising:
a fifth determining module, configured to determine a corresponding relationship between the real-time pose and the reference pose according to the shape parameter of the marker and the image captured by the camera; the corresponding relation is used for indicating the relation between the real-time pose representing the pose of the same marker and the reference pose;
and the sixth determining module is used for determining the transformation relation between the real-time pose and the reference pose according to the corresponding relation.
In a possible implementation manner, when the marker is a concentric circle type, the shape parameter is the number of circular rings included in the concentric circle and the width of each circular ring.
In one possible implementation, the marker types of the markers of the marker set are concentric circle types, and the marker set includes at least three non-collinear markers; the concentric circle type marker is composed of a plurality of circular rings with the same center.
In one possible implementation manner, the obtaining unit includes:
the first acquisition module is used for acquiring an image shot by a camera in the grabbing process;
and the second acquisition module is used for acquiring the reference poses of the marker set in response to receiving error information sent by the grabbing device, wherein the error information is used for indicating that the grabbing device does not grab the object to be grabbed successfully.
In a third aspect, the present disclosure provides an electronic device, comprising: a memory and a processor;
the memory is configured to store instructions executable by the processor;
wherein the processor is configured to perform the method according to any one of the first aspect according to the executable instructions.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method of any one of the first aspects when executed by a processor.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program that, when executed by a processor, implements the method of any one of the first aspects.
The present disclosure provides a marker-based pose determination method, apparatus, device, medium, and product. The method includes: acquiring an image captured by a camera and a reference pose of a marker set, wherein the image captured by the camera contains the marker set and the object to be grabbed, the marker set comprises at least one marker fixed on a placing table for the object to be grabbed, and the reference pose is the pose of the marker set in the camera coordinate system at the time the camera's external parameters were determined; determining the real-time pose of the marker set in the image captured by the camera in the current camera coordinate system; if the reference pose is determined to differ from the real-time pose, determining a transformation relation between the real-time pose and the reference pose; and compensating the initial pose of the object to be grabbed in the image captured by the camera (namely its initial pose in the current camera coordinate system) according to the transformation relation to obtain the actual pose of the object to be grabbed. After the image captured by the camera is acquired, the real-time pose of the marker set in the image is identified and compared with the reference pose of the marker set; if the real-time pose differs from the reference pose, the pose of the object to be grabbed in the image can be compensated according to the transformation relation between the real-time pose and the reference pose. This avoids the situation where camera displacement changes the camera coordinate system, the pose of the object to be grabbed determined in the grabbing-device coordinate system is therefore wrong, and object grabbing fails. In addition, errors in the point-cloud position information in the direction perpendicular to the ground caused by temperature drift of the camera can be avoided.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flowchart of a pose determination method based on a marker according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a second pose determination method based on a marker according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a concentric circular marker provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure;
FIG. 5 is a schematic representation of one marker provided by the present disclosure;
FIG. 6 is a schematic representation of yet another marker provided by the present disclosure;
fig. 7 is a schematic structural diagram of a pose determination apparatus based on a marker according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a second pose determination apparatus based on a marker according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Specific embodiments of the present disclosure have been shown by way of the above drawings and are described in more detail below. The drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure.
Currently, in order to improve grabbing efficiency, objects are usually grabbed by automatically controlling a grabbing device (e.g., a robot arm). When an object is grabbed by the grabbing device, the pose of the object to be grabbed in the camera coordinate system is first determined from an image of the object acquired by the camera; this pose is then converted into the grabbing-device coordinate system through the coordinate conversion relation between the camera coordinate system and the grabbing-device coordinate system (namely the camera external parameters), so that the grabbing device can be moved according to the pose of the object to be grabbed in the grabbing-device coordinate system and controlled to grab the object.
However, in practical application, when the position of the camera is changed by an external force, the coordinate system corresponding to the camera changes as well; if the previous coordinate conversion relation between the camera coordinate system and the grabbing-device coordinate system is still used for the pose conversion, the grabbing device is likely to fail to grab the object.
Alternatively, when the camera undergoes temperature drift, the position information of the object in the direction perpendicular to the ground determined from the image captured by the camera is erroneous, that is, the height of the object cannot be identified accurately; in this case too, if the previous coordinate conversion relation between the camera coordinate system and the grabbing-device coordinate system is still used for the pose conversion, the grabbing device is likely to fail to grab the object.
In the related art, the grabbing device must be manually stopped, the transport of objects to be grabbed onto the placing table must be halted, and the camera external parameters must then be recalibrated to ensure that subsequent grabbing is accurate. However, this approach consumes a lot of time, reduces grabbing efficiency, and requires manual intervention.
The present disclosure provides a marker-based pose determination method, device, equipment, medium, and product to solve the above technical problems.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a pose determination method based on a marker according to an embodiment of the present disclosure, and as shown in fig. 1, the method includes the following steps:
s101, acquiring an image shot by a camera and a reference pose of a marker set; the image shot by the camera comprises a marker set and an object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the marker set under the camera coordinate system of the camera when the external reference of the camera is determined.
For example, in order to acquire an accurate pose of the object to be grabbed, in this embodiment a marker set is first fixed on the placing table on which the object to be grabbed is placed, the marker set comprising at least one marker. It can be understood that the markers in this embodiment are located within the shooting area of the camera that photographs the placing table, so that the markers in the marker set are captured each time the camera subsequently takes an image.
In addition, when the marker set includes one marker, the reference pose of the marker set is the reference pose of that marker; when the marker set includes a plurality of markers, the reference pose of the marker set may be the set of reference poses of some of the markers or of all of the markers.
The reference pose in this embodiment may be understood as the pose of the marker set in the camera coordinate system at the time the camera's external parameters were determined. The external parameters of the camera (i.e., the camera extrinsics) can be understood as the transformation relation between the camera coordinate system and the grabbing-device coordinate system.
When object grabbing is required, the image captured by the camera and the reference pose of the marker set obtained during the determination of the camera's external parameters can be acquired. The image captured by the camera contains both the object to be grabbed and the markers.
And S102, determining the real-time pose of the marker set in the image shot by the camera under the current camera coordinate system.
After the image captured by the camera is obtained, the real-time pose of the marker set in the image captured by the camera in the current camera coordinate system may be determined according to the image captured by the camera.
In one example, when determining the real-time pose of the marker set in the image captured by the camera, the markers in the marker set may first be identified in the image, their positions in the image determined, and the real-time pose of the marker set then determined by combining the camera intrinsic parameters and the depth image captured by the camera. For methods of determining the pose of an object in an image from an image captured by a camera, reference may be made to the related art.
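By way of illustration only, the overall flow of fig. 1 can be sketched as follows in Python. The helper names (detect_markers, poses_equal, fit_correction) are placeholders for the steps detailed later in this description, and all poses are assumed to be 4x4 homogeneous matrices; none of these names come from the original text.

```python
import numpy as np

def actual_object_pose(color_img, depth_img, K, ref_marker_poses, pose_obj_cam,
                       detect_markers, poses_equal, fit_correction):
    """Sketch of the flow of fig. 1: detect the markers, compare their real-time
    poses with the reference poses, and compensate the object's initial pose
    only when the two differ. K is the 3x3 camera intrinsic matrix."""
    live_marker_poses = detect_markers(color_img, depth_img, K)    # step S102
    if poses_equal(ref_marker_poses, live_marker_poses):           # step S103 (no change)
        return pose_obj_cam                                        # initial pose is already correct
    T_comp = fit_correction(live_marker_poses, ref_marker_poses)   # step S103 (transformation relation)
    return T_comp @ pose_obj_cam                                   # step S104 (compensation)
```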
And S103, if the reference pose is determined to be different from the real-time pose, determining a transformation relation between the real-time pose and the reference pose.
In this embodiment, after the real-time pose of the marker set and the reference pose of the marker set are obtained, the two poses are compared. If the reference pose is determined to differ from the real-time pose, the coordinate system corresponding to the pose identified from the image currently captured by the camera is considered to differ from the camera coordinate system corresponding to the reference pose, and the pose of the object to be grabbed in the image needs to be compensated. In other words, the transformation relation between the reference pose and the real-time pose can be determined from the real-time pose and the reference pose corresponding to the marker set, and used to compensate the pose of the object to be grabbed determined from the currently acquired image. The transformation relation between the reference pose and the real-time pose can be used to convert parameter values in the real-time pose into parameter values in the reference pose.
If the reference pose is determined to be the same as the real-time pose, the pose of the object to be grabbed determined from the image captured by the camera can be used directly as the actual pose of the object, and the pose in the grabbing-device coordinate system can be determined from this actual pose and the camera external parameters, so as to control the grabbing device to grab the object.
In one example, when comparing the reference pose and the real-time pose, the position information in the reference pose and the position information in the real-time pose may be compared to determine a first comparison result; comparing the attitude information in the reference pose and the attitude information in the real-time pose to determine a second comparison result; and determining whether the reference pose and the real-time pose are the same according to the first comparison result and the second comparison result.
In one example, if the difference between the position information in the reference pose and the position information in the real-time pose is greater than a first preset value, and/or the difference between the attitude information in the reference pose and the attitude information in the real-time pose is greater than a second preset value, the reference pose and the real-time pose are determined to be different.
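A minimal sketch of such a comparison, assuming each pose is stored as a position vector plus a 3x3 rotation matrix; the threshold values are illustrative stand-ins for the first and second preset values, which the text does not specify.

```python
import numpy as np

def poses_differ(ref_pose, live_pose, pos_tol=0.002, ang_tol_deg=0.5):
    """Return True if the reference pose and the real-time pose differ beyond
    the preset position/attitude thresholds. Poses are (t, R) pairs."""
    t_ref, R_ref = ref_pose
    t_live, R_live = live_pose
    pos_diff = np.linalg.norm(np.asarray(t_live) - np.asarray(t_ref))
    # geodesic angle between the two orientations
    cos_angle = (np.trace(R_ref.T @ R_live) - 1.0) / 2.0
    ang_diff = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_diff > pos_tol or ang_diff > ang_tol_deg
```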
S104, compensating and transforming the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed in the current camera coordinate system.
For example, the initial pose of the object to be grasped in the present embodiment may be understood as a pose in the current camera coordinate system recognized from an image captured by the current camera.
After the transformation relation between the reference pose and the real-time pose is obtained in step S103, the initial pose of the obtained object to be grabbed may be further compensated and transformed according to the transformation relation, so as to obtain the actual pose of the object to be grabbed, and further realize pose compensation in the camera coordinate system.
Then, according to the actual pose and the camera external parameters acquired during the previous camera calibration, the pose of the object to be grabbed in the grabbing-device coordinate system is determined, so that the object to be grabbed can be grabbed.
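A hedged sketch of this compensation step, assuming the compensation transform and all poses are 4x4 homogeneous matrices and that T_cam2gripper denotes the previously calibrated camera external parameters (the names are illustrative):

```python
import numpy as np

def compensate_and_convert(T_comp, T_cam2gripper, pose_obj_cam_initial):
    """Apply the compensation transform to the object's initial pose in the
    current camera frame, then map the result into the grabbing-device frame
    with the previously calibrated extrinsics."""
    actual_pose_cam = T_comp @ pose_obj_cam_initial        # actual pose, camera coordinate system
    pose_obj_gripper = T_cam2gripper @ actual_pose_cam     # pose in the grabbing-device coordinate system
    return pose_obj_gripper
```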
It can be understood that, in this embodiment, after an image captured by the camera is acquired, the real-time pose of the marker set in the image is identified and compared with the reference pose of the marker set. If the real-time pose is determined to differ from the reference pose, the pose of the object to be grabbed in the image can be compensated according to the transformation relation between the real-time pose and the reference pose, which avoids the situation where camera displacement changes the camera coordinate system, the determined pose of the object to be grabbed is therefore wrong, and object grabbing fails. In addition, errors in the point-cloud position information in the direction perpendicular to the ground caused by temperature drift of the camera can be avoided.
In some embodiments, the similarity between the shape parameters of the markers in the marker set arranged on the placing table and those of the object to be grabbed is smaller than a preset threshold, which ensures that the markers in the image can be identified accurately when object recognition is performed on the image captured by the camera.
Fig. 2 is a schematic flowchart of a second pose determination method based on a marker according to an embodiment of the present disclosure, and as shown in fig. 2, the method includes the following steps:
s201, acquiring an image shot by a camera and a reference pose of a marker set; the image shot by the camera comprises a marker set and an object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the marker set under the camera coordinate system of the camera when the external reference of the camera is determined.
For example, the specific principle of step S201 may be referred to step S101, and is not described herein again.
In one example, after each image is captured, the reference pose and the real-time pose may be compared by the method shown in fig. 1 described above to improve the accuracy of object grabbing.
In one example, the marker types of the markers of the marker set are concentric circle types, and the marker set comprises at least three non-collinear markers; the concentric circle type marker is constituted by a plurality of circular rings having the same center.
Exemplarily, in this embodiment, when selecting the markers in the marker set, markers of the concentric circle type may be selected. Fig. 3 is a schematic diagram of a concentric circle marker provided by an embodiment of the present disclosure. The concentric circle type marker is composed of a plurality of circular rings, and the widths of the rings may be the same or different. In addition, in order to improve the accuracy of pose determination, when the markers are set in this embodiment, three or more markers of the concentric circle type may be set, with the centers of the three markers not collinear. Fig. 4 is a schematic view of an application scenario provided by an embodiment of the present disclosure; it is a top view of the table top of the placing table, on which three concentric circle markers are disposed. Further, when there are more than three markers, it is necessary to ensure that the centers of at least three of the markers are not collinear. By setting at least three non-collinear markers, the accuracy of the attitude information in the determined marker poses can be improved. Moreover, the concentric circle type marker shown in fig. 3 can be distinguished from a circular object to be grabbed.
In one example, step S201 includes the steps of: in the grabbing process, acquiring an image shot by a camera; and acquiring the reference poses of the marker sets in response to receiving error information sent by the grabbing device, wherein the error information is used for indicating that the grabbing device does not grab the object to be grabbed successfully.
For example, in this embodiment, when positioning an object to be grabbed, the image captured by the camera may first be acquired. If it is determined that the grabbing device has not returned error information indicating a grabbing failure, the pose information of the object to be grabbed in the current camera coordinate system can be determined directly from the image captured by the camera, and coordinate conversion is then performed using this pose information and the camera external parameters to complete the grabbing. If error information sent by the grabbing device is received after the image captured by the camera is acquired, the real-time pose of the marker set in the image can be further identified, the reference pose of the marker set can be acquired, and the determination of the transformation relation and the pose compensation shown in the embodiment of fig. 1 can be performed. After the compensation, the transformation relation can be stored, so that once subsequent images captured by the camera are acquired, pose compensation is performed directly according to the stored transformation relation.
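The failure-triggered caching described here can be sketched as below; fit_correction stands for the marker-based transformation fit of fig. 1 and is a placeholder, not an interface defined in the original text.

```python
class PoseCompensator:
    """Recompute the correction only when the grabbing device reports a
    failure, then reuse the stored correction for subsequent poses."""

    def __init__(self):
        self.T_comp = None  # 4x4 correction; None means no compensation needed yet

    def on_grasp_error(self, camera_image, reference_marker_poses, fit_correction):
        # grasp failure reported: re-estimate the transformation relation from the markers
        self.T_comp = fit_correction(camera_image, reference_marker_poses)

    def correct(self, pose_obj_cam):
        # apply the cached correction to every subsequently determined object pose
        return pose_obj_cam if self.T_comp is None else self.T_comp @ pose_obj_cam
```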
In one example, after storing the transformation relationship, if the error information is received again, the transformation relationship may be determined again according to the manner shown in fig. 1.
It can be understood that, in this embodiment, the pose compensation shown in fig. 1 may be performed after the grabbing device determines that grabbing has failed, and the transformation relation may be stored to compensate the poses of subsequently grabbed objects. Compared with comparing the reference pose and the real-time pose after every captured image, this reduces the consumption of the device's processing resources.
S202, performing edge detection processing on a two-dimensional image in an image shot by a camera, and determining an edge image of a marker in a marker set contained in the two-dimensional image; the image shot by the camera comprises a two-dimensional image and a depth image.
In the present embodiment, the two-dimensional image and the depth image are included in the image captured by the camera. When the real-time pose of the marker set is determined, firstly, edge detection can be performed on an object contained in a two-dimensional image according to the two-dimensional image in the image shot by the camera, so as to determine an edge image corresponding to the marker contained in the two-dimensional image.
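A minimal OpenCV sketch of this edge-detection step; the Gaussian blur and the Canny thresholds are assumptions, since the patent does not name a specific edge detector or its parameters.

```python
import cv2

def marker_edge_image(color_img):
    """Edge image of the 2-D image captured by the camera."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before edge detection
    return cv2.Canny(blurred, 50, 150)            # binary edge image of the marker contours
```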
And S203, determining the two-dimensional coordinates of the preset mark points in the marker according to the edge image.
For example, in this embodiment, for the markers in the marker set, the pose of the marker point selected in advance in the markers may be used as the pose corresponding to the markers. Further, after the edge image corresponding to the marker in the marker set is determined, the two-dimensional coordinates of the preset marker point in the marker may be further determined.
For example, when a marker is rectangular, the vertices and/or the center point of the marker may be selected as the preset mark points.
In one example, if the type of the marker in the marker set is a concentric circle type and the preset marker point is a center of the concentric circle, step S203 includes the following steps:
the first step of step S203: carrying out image correction processing on the edge image to obtain a corrected image; wherein the image correction processing is for correcting an elliptical contour in the edge image to a circular contour; the concentric type marker is formed of a plurality of rings having the same center.
For example, in this embodiment, the marker is a concentric circle shown in fig. 3, and the preset mark point corresponding to the concentric circle is taken as a center of the circle for explanation. When determining the two-dimensional coordinates corresponding to the center of the concentric circle shown in fig. 3, at this time, firstly, image correction processing may be performed on the obtained edge image, and it can be understood that, due to the influence of the shooting angle of the camera, the concentric circle in the shot two-dimensional image may be displayed as an elliptical shape, so that the elliptical contour in the edge image needs to be corrected to be a circular contour, so as to improve the accuracy of the subsequently determined center coordinates.
The second step of step S203: and determining the circle center coordinates of concentric circles in the corrected image according to the corrected image, and determining the circle center coordinates as the two-dimensional coordinates of the preset mark point.
For example, after the edge image is corrected, the center coordinates of the corresponding center of the circle may be further determined according to the circular contour in the corrected image.
In one example, when the circle center coordinate is determined, the two-dimensional coordinate of the circle center may be determined according to a circular contour corresponding to any one of the concentric circles, or the final two-dimensional coordinate may be determined according to a plurality of circle center coordinates fitted by circular contours of a plurality of circles.
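One possible realization of this step is sketched below: instead of explicitly warping the elliptical contours into circles, it fits an ellipse to each ring contour and averages the fitted centers, which yields the same circle-center coordinates; the contour filtering threshold is an assumption.

```python
import cv2
import numpy as np

def concentric_circle_center(edge_img, min_points=20):
    """Estimate the 2-D center of a concentric-circle marker from its edge image."""
    contours, _ = cv2.findContours(edge_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        if len(c) < min_points:                   # fitEllipse needs >= 5 points; be stricter
            continue
        (cx, cy), _axes, _angle = cv2.fitEllipse(c)
        centers.append((cx, cy))
    if not centers:
        return None
    return tuple(np.mean(np.asarray(centers), axis=0))   # averaged circle-center coordinates
```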
It can be understood that, in this embodiment, when the type of the marker in the marker set is a concentric circle type and the preset marker point of the marker is a circle center, at this time, when determining a circle center coordinate, image correction processing may be performed on the edge image, so as to correct an elliptical contour in the edge image into a circular contour, thereby improving accuracy of determining the two-dimensional coordinate of the circle center. The circle center position of the concentric circle marker is a white area, so that the problem that the depth information of the point cloud at the black-white junction in the image cannot be accurately determined when the camera acquires the depth image can be avoided.
For example, fig. 5 is a schematic view of one marker provided by the present disclosure. In fig. 5, the marker is a square pattern, and in the related art the pose of such a marker is determined by identifying the poses of three or four vertices of its square outline. However, identifying the point cloud at a black-white boundary easily leads to inaccurate pose determination, a problem that can be avoided by selecting the concentric circles of fig. 3. For another example, fig. 6 is a schematic diagram of another marker provided by the present disclosure; similarly, when identifying the point-cloud pose of the circular contour in that figure, the contour lies on a black-white boundary, so the pose of the contour point cloud is determined incorrectly, which affects the subsequent determination of the circle center coordinates. Compared with the marker types shown in fig. 5 and fig. 6, the concentric circle type marker avoids these problems.
And S204, determining the position of the preset mark point according to the two-dimensional coordinates and the depth image of the image shot by the camera.
For example, after the two-dimensional coordinates of the preset mark point in the marker are obtained, a depth value corresponding to the preset mark point may be further determined according to depth information included in the depth image, so as to determine a position of the preset mark point in the current camera coordinate system. In practical applications, when determining the three-dimensional coordinates of the preset mark point (i.e., the position of the preset mark point) according to the two-dimensional coordinates and the depth image, the position of the preset mark point in the current camera coordinate system may be determined by combining the two-dimensional coordinates, the depth image, and preset camera parameters.
In one example, when the depth value corresponding to the center of the concentric circle is determined according to the two-dimensional coordinate of the center of the circle and the depth image, at this time, a plurality of point clouds may be selected near the center of the concentric circle, and the plurality of point clouds are located in the white area where the center of the circle is located in fig. 3, and the depth information of the center of the circle is further determined according to the depth information of the plurality of point clouds, so as to improve the accuracy of determining the position of the center of the circle.
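A sketch of lifting the 2-D circle center to a 3-D position with the depth image and the camera intrinsics; the window size and the use of a median over the white central area are assumptions.

```python
import numpy as np

def center_position_in_camera(center_uv, depth_img, K, window=5):
    """3-D position of the preset mark point in the current camera coordinate system."""
    u, v = int(round(center_uv[0])), int(round(center_uv[1]))
    h = window // 2
    patch = depth_img[v - h:v + h + 1, u - h:u + h + 1].astype(np.float64)
    z = float(np.median(patch[patch > 0]))                 # ignore invalid (zero) depth readings
    xyz = z * np.linalg.inv(K) @ np.array([center_uv[0], center_uv[1], 1.0])
    return xyz                                             # (X, Y, Z) in the camera frame
```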
S205, determining the posture of a preset mark point; and determining the position of the preset mark point and the posture of the preset mark point as the real-time pose of the mark in the mark set in the image shot by the camera under the current camera coordinate system.
Exemplarily, in this embodiment, after the position of the preset mark point is determined, in order to describe the pose of the preset mark point more completely, the posture of the preset mark point is further determined, and the determined posture and position together serve as the pose of the preset mark point. The posture of the preset mark point is used to represent the angular relationship between the preset mark point and the current camera coordinate system.
It can be understood that, in this embodiment, when the real-time pose corresponding to the marker set is determined, the poses of the preset mark points corresponding to the markers may be used as the real-time pose of the marker set, which improves the efficiency of pose determination. When determining the real-time pose of the marker set, edge detection can be performed on the two-dimensional image to identify the edge image of a marker in the two-dimensional image, the two-dimensional coordinates of the mark point are determined based on the edge image, and the two-dimensional coordinates are then converted into three-dimensional coordinates by further combining the depth image; the three-dimensional coordinates serve as the position of the mark point. The posture of the mark point is then determined, giving the pose of the mark point.
In one example, the step "determining the posture of the preset mark point" in the step S205 includes the steps of:
the first step is as follows: determining three non-collinear preset mark points as a first mark point, a second mark point and a third mark point respectively from a plurality of preset mark points corresponding to the marker set;
the second step is as follows: determining an orthogonal coordinate system taking the first mark point as a coordinate origin according to the position of the first mark point, the position of the second mark point and the position of the third mark point;
the third step: and determining the pose of the first marker point according to the current camera coordinate system and the orthogonal coordinate system.
For example, in this embodiment, when determining the posture of a certain preset mark point, three non-collinear mark points may be selected from the plurality of mark points corresponding to the at least one marker in the marker set, and the posture of the preset mark point is then determined from the positions of these three non-collinear mark points. Specifically, the three non-collinear mark points may be referred to as a first mark point, a second mark point, and a third mark point, where the first mark point is the mark point whose posture is to be determined. An orthogonal coordinate system with the first mark point as the coordinate origin can then be constructed from the three mark points. For example, a first vector is determined from the position of the first mark point and the position of the second mark point, and a second vector is determined from the position of the first mark point and the position of the third mark point. The first vector can be taken as a coordinate axis of the orthogonal coordinate system; a third vector, obtained as the cross product of the first vector and the second vector, is taken as the second coordinate axis; and the cross product of the first vector and the third vector is taken as the third coordinate axis. Then, according to the obtained orthogonal coordinate system and the current camera coordinate system, the angular relationship between the coordinate axes of the two coordinate systems is determined as the posture of the first mark point.
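The construction just described can be sketched directly, assuming the three mark-point positions are already expressed in the current camera coordinate system; expressing the resulting axes as columns of a rotation matrix gives the angular relationship to the camera axes.

```python
import numpy as np

def frame_from_three_points(p1, p2, p3):
    """Orthogonal coordinate system with the first mark point as origin."""
    p1, p2, p3 = (np.asarray(p, dtype=np.float64) for p in (p1, p2, p3))
    v1 = p2 - p1                                   # first vector
    v2 = p3 - p1                                   # second vector
    a1 = v1 / np.linalg.norm(v1)                   # first coordinate axis
    a2 = np.cross(v1, v2)
    a2 = a2 / np.linalg.norm(a2)                   # second axis: first vector x second vector
    a3 = np.cross(a1, a2)                          # third axis: first axis x second axis
    R = np.column_stack([a1, a2, a3])              # attitude of the first mark point w.r.t. the camera
    return p1, R                                   # position (origin) and attitude
```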
It can be understood that, in this embodiment, when determining the posture of a preset mark point, two other preset mark points may additionally be selected, and the posture of the preset mark point is then determined based on the orthogonal coordinate system constructed from the three non-collinear preset mark points, thereby obtaining the posture information of each mark point.
And S206, if the reference pose is determined to be different from the real-time pose, determining the transformation relation between the real-time pose and the reference pose.
In one example, the step S206 of determining the transformation relationship between the real-time pose and the reference pose may be implemented by the following steps: and fitting the real-time pose and the reference pose based on a least square method to obtain a transformation relation between the real-time pose and the reference pose.
For example, in this embodiment, when determining the transformation relationship between the real-time pose and the reference pose, the least square method may be used to perform fitting processing on the two poses, so as to determine the transformation relationship between the real-time pose and the reference pose.
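The text only states that a least-squares fit is used; one common realization, shown here as an assumption, is the SVD-based (Kabsch-style) fit of a rigid transform between the corresponding marker positions.

```python
import numpy as np

def fit_rigid_transform(live_pts, ref_pts):
    """Least-squares 4x4 transform T such that T @ live ~= ref, mapping the
    real-time marker positions onto the reference positions.
    live_pts and ref_pts are (N, 3) arrays of corresponding points."""
    live = np.asarray(live_pts, dtype=np.float64)
    ref = np.asarray(ref_pts, dtype=np.float64)
    c_live, c_ref = live.mean(axis=0), ref.mean(axis=0)
    H = (live - c_live).T @ (ref - c_ref)          # 3x3 cross-covariance matrix
    U, _S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection solution
        Vt[-1, :] *= -1.0
        R = Vt.T @ U.T
    t = c_ref - R @ c_live
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```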
It can be understood that, in this embodiment, the fitting of the two poses is performed by using the least square method, so that the fitting efficiency can be improved, and further the determination efficiency of the real-time pose of the object to be grabbed can be improved.
In one example, if the marker set includes a plurality of markers having different shape parameters, the step of determining the transformation relation between the real-time pose and the reference pose in step S206 may include the following steps: determining the corresponding relation between the real-time pose and the reference pose according to the shape parameters of the markers and the image captured by the camera, wherein the corresponding relation indicates which real-time pose and which reference pose represent the pose of the same marker; and determining the transformation relation between the real-time pose and the reference pose according to the corresponding relation.
For example, in this embodiment, when the marker set includes a plurality of markers, it is first necessary, when determining the transformation relation between the real-time poses and the reference poses, to determine which reference pose each of the currently determined real-time poses corresponds to. That is, when the marker set includes, for example, a marker A, a marker B, and a marker C, the 3 reference poses and the 3 real-time poses acquired at this time first need to be placed in one-to-one correspondence. Because the markers in the marker set in this embodiment have different shape parameters, the correspondence between reference poses and real-time poses can be determined from the shape parameters of each marker when the markers are identified; that is, the reference pose and the real-time pose corresponding to the same shape parameters are determined to be the reference pose and the real-time pose that correspond to each other. The transformation relation between the reference poses and the real-time poses is then determined according to the determined correspondence.
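A sketch of establishing that correspondence by shape parameters; the dictionary layout with 'shape' (for example a tuple of ring count and ring widths) and 'pose' keys is purely illustrative and not part of the original text.

```python
def match_markers_by_shape(ref_markers, live_markers):
    """Pair reference and real-time detections that share the same shape parameters."""
    ref_by_shape = {m['shape']: m['pose'] for m in ref_markers}
    pairs = []
    for m in live_markers:
        if m['shape'] in ref_by_shape:             # same shape parameters => same physical marker
            pairs.append((ref_by_shape[m['shape']], m['pose']))
    return pairs
```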
It can be understood that, in this embodiment, a plurality of markers with different shape parameters may be set in the marker set, so that the reference pose and the real-time pose with the corresponding relationship may be determined in the following, and the accuracy of determining the following transformation relationship is further improved.
In one example, based on the above example, when the marker is a concentric circle type, the shape parameter is the number of circular rings included in the concentric circle and the width of each circular ring.
For example, in this embodiment, when the marker set includes a plurality of markers of the concentric circle type, the shape parameters of each marker may be the number of rings contained in the marker and the width of each ring, so that the plurality of markers can be distinguished from one another.
S207, compensating and transforming the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; the initial pose of the object to be grabbed is the initial pose of the object to be grabbed in the current camera coordinate system.
For example, the specific principle of step S207 may refer to step S104, and is not described herein again.
In this embodiment, when the real-time pose corresponding to the marker set is determined, the poses of the preset mark points corresponding to the markers can be used as the real-time pose of the marker set, which improves the efficiency of pose determination. When determining the real-time pose of the marker set, edge detection can be performed on the two-dimensional image to identify the edge image of a marker, the two-dimensional coordinates of the mark point are determined based on the edge image, and the two-dimensional coordinates are converted into three-dimensional coordinates by further combining the depth image; the three-dimensional coordinates serve as the position of the mark point, and the posture of the mark point is then determined to obtain its pose. When the type of the markers in the marker set is the concentric circle type and the preset mark point of the marker is the circle center, image correction processing can be performed on the edge image when determining the circle center coordinates, so that the elliptical contour in the edge image is corrected into a circular contour, improving the accuracy of the determined two-dimensional coordinates of the circle center. Since the circle center of a concentric circle marker lies in a white area, the problem that the depth information of the point cloud at a black-white junction in the image cannot be determined accurately when the camera acquires the depth image is also avoided.
Fig. 7 is a schematic structural diagram of a pose determination apparatus based on a marker according to an embodiment of the present disclosure, and as shown in fig. 7, the apparatus includes:
an acquisition unit 701 configured to acquire an image captured by a camera and a reference pose of a marker set; the image shot by the camera comprises a marker set and an object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the marker set under the camera coordinate system of the camera when the external reference of the camera is determined.
A first determining unit 702 is configured to determine a real-time pose of a set of markers in an image captured by a camera under a current camera coordinate system.
A second determining unit 703, configured to determine a transformation relationship between the real-time pose and the reference pose if the reference pose is determined to be different from the real-time pose.
The compensation unit 704 is used for performing compensation transformation on the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed in the current camera coordinate system.
The apparatus provided in this embodiment is used to implement the technical solution provided by the above method, and the implementation principle and the technical effect are similar and will not be described again.
Fig. 8 is a schematic structural diagram of a second pose determining apparatus based on a marker according to an embodiment of the present disclosure, and based on the apparatus structure shown in fig. 7, a first determining unit 702 in the apparatus according to the embodiment includes:
a detection module 7021, configured to perform edge detection processing on a two-dimensional image in an image captured by a camera, and determine an edge image of a marker in a marker set included in the two-dimensional image; the image shot by the camera comprises a two-dimensional image and a depth image;
a first determining module 7022, configured to determine a two-dimensional coordinate of a preset marker point in the marker according to the edge image;
a second determining module 7023, configured to determine a position of the preset mark point according to the two-dimensional coordinates and a depth image of the image captured by the camera;
a third determining module 7024, configured to determine a posture of the preset marker point;
a fourth determining module 7025, configured to determine the position of the preset marker point and the posture of the preset marker point as the real-time pose of a marker in the marker set in the image captured by the camera in the current camera coordinate system.
In a possible implementation manner, if the type of the marker in the marker set is a concentric circle type and the preset marker point is a center of the concentric circle, the first determining module 7022 is specifically configured to:
carrying out image correction processing on the edge image to obtain a corrected image; wherein the image correction processing is for correcting an elliptical contour in the edge image to a circular contour; the concentric circle type marker is composed of a plurality of circular rings with the same circle center;
and determining the circle center coordinates of concentric circles in the corrected image according to the corrected image, and determining the circle center coordinates as the two-dimensional coordinates of the preset mark point.
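As a rough illustration of the image correction described above, the sketch below fits an ellipse to the outermost ring contour, applies an affine warp that stretches the fitted ellipse into a circle, and then reads the circle center off the corrected image. OpenCV 4.x is assumed, as is the convention that the first axis length returned by cv2.fitEllipse lies along the returned angle; the disclosure does not specify which correction is used, so this is only one plausible realization.

```python
import cv2
import numpy as np

def circle_center_after_correction(edge_image):
    """Correct the elliptical ring contours in an edge image to circles and
    read off the circle center (illustrative sketch only)."""
    contours, _ = cv2.findContours(edge_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    rings = [c for c in contours if len(c) >= 20]
    if not rings:
        return None

    # Fit the outermost ring and build an affine map sending the fitted
    # ellipse onto a circle while keeping the ellipse center fixed.
    outer = max(rings, key=cv2.contourArea)
    (u, v), (d1, d2), angle_deg = cv2.fitEllipse(outer)
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    A = R @ np.diag([1.0, d1 / d2]) @ R.T          # stretch the second axis to the first
    b = np.array([u, v]) - A @ np.array([u, v])    # keep the ellipse center fixed
    M = np.hstack([A, b.reshape(2, 1)]).astype(np.float32)
    corrected = cv2.warpAffine(edge_image, M, edge_image.shape[1::-1])

    # In the corrected image the rings are (approximately) concentric circles;
    # average their circle centers as the two-dimensional marker-point coordinate.
    contours, _ = cv2.findContours(corrected, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    centers = [cv2.minEnclosingCircle(c)[0] for c in contours if len(c) >= 20]
    return tuple(np.mean(centers, axis=0)) if centers else (u, v)
```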
In a possible implementation manner, the third determining module 7024 is specifically configured to:
determining three non-collinear preset mark points as a first mark point, a second mark point and a third mark point respectively from a plurality of preset mark points corresponding to the marker set;
determining an orthogonal coordinate system taking the first mark point as a coordinate origin according to the position of the first mark point, the position of the second mark point and the position of the third mark point;
and determining the pose of the first marker point according to the current camera coordinate system and the orthogonal coordinate system.
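The construction of the orthogonal coordinate system can be illustrated with the sketch below, which builds a right-handed frame from the three marker-point positions and expresses the pose of the first marker point in the current camera coordinate system. The particular axis choice (x toward the second point, z normal to the marker plane) is an assumption; the disclosure only requires an orthogonal coordinate system with the first marker point as the coordinate origin.

```python
import numpy as np

def pose_from_three_points(p1, p2, p3):
    """Build a right-handed orthogonal frame with the first marker point as
    origin from three non-collinear marker-point positions expressed in the
    current camera coordinate system (illustrative sketch)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x = x / np.linalg.norm(x)                 # x-axis toward the second marker point
    z = np.cross(x, p3 - p1)
    z = z / np.linalg.norm(z)                 # z-axis normal to the marker plane
    y = np.cross(z, x)                        # completes the right-handed frame

    R = np.column_stack([x, y, z])            # rotation: marker frame -> camera frame
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p1                             # position: the first marker point
    return T                                  # pose of the first marker point in the camera frame
```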
In a possible implementation manner, the second determining unit 703 is specifically configured to:
and fitting the real-time pose and the reference pose based on a least square method to obtain a transformation relation between the real-time pose and the reference pose.
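The disclosure only states that the transformation relation is obtained by a least-squares fit. One common concrete choice, shown below as an assumption rather than the claimed method, is the closed-form SVD (Kabsch-style) solution computed on corresponding marker-point positions taken from the real-time and reference poses.

```python
import numpy as np

def fit_rigid_transform(live_pts, ref_pts):
    """Least-squares rigid transform (rotation + translation) mapping the
    real-time marker-point positions onto the reference ones. Closed-form
    SVD solution; posing the fit on point positions is an assumption."""
    live = np.asarray(live_pts, dtype=float)
    ref = np.asarray(ref_pts, dtype=float)
    mu_l, mu_r = live.mean(axis=0), ref.mean(axis=0)

    H = (live - mu_l).T @ (ref - mu_r)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # optimal rotation, det(R) = +1
    t = mu_r - R @ mu_l                       # optimal translation

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```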
In one possible implementation, the marker set includes a plurality of markers having different shape parameters; the second determining unit 703 includes:
a fifth determining module 7031, configured to determine a corresponding relation between the real-time poses and the reference poses according to the shape parameters of the markers and the image captured by the camera; wherein the corresponding relation indicates which real-time pose and which reference pose represent the pose of the same marker;
a sixth determining module 7032 is configured to determine, according to the corresponding relationship, a transformation relationship between the real-time pose and the reference pose.
In one possible implementation, when the marker is of the concentric circle type, the shape parameters are the number of circular rings included in the concentric circles and the width of each circular ring.
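To illustrate how the shape parameters could establish the correspondence, the sketch below pairs each detected marker with the reference marker having the same ring count, using the ring widths as a tie-breaker. The data layout (dicts with 'rings', 'widths' and 'pose' keys) and the exact matching criterion are assumptions for illustration, not taken from the disclosure.

```python
def match_markers(detected, reference):
    """Pair each detected marker with the reference marker that has the same
    number of rings and, among those, the closest ring widths. Returns a list
    of (real-time pose, reference pose) pairs."""
    pairs = []
    for d in detected:
        candidates = [r for r in reference if r["rings"] == d["rings"]]
        if not candidates:
            continue
        best = min(candidates,
                   key=lambda r: sum(abs(a - b) for a, b in zip(r["widths"], d["widths"])))
        pairs.append((d["pose"], best["pose"]))
    return pairs
```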
In one possible implementation, the marker types of the markers of the marker set are concentric circle types, and the marker set includes at least three non-collinear markers; the concentric circle type marker is constituted by a plurality of circular rings having the same center.
In a possible implementation manner, the obtaining unit 701 includes:
a first obtaining module 7011, configured to obtain an image captured by a camera in a capturing process;
a second obtaining module 7012, configured to obtain, in response to receiving error information sent by the grabbing device, a reference pose of the set of markers, where the error information is used to indicate that the grabbing device has not successfully grabbed the object to be grabbed.
The apparatus provided in this embodiment is configured to implement the technical solution provided by the foregoing method, and the implementation principle and the technical effect are similar, which are not described again.
The present disclosure provides an electronic device, including: a memory and a processor;
the memory is used for storing processor-executable instructions;
and the processor is used for executing the method described above according to the executable instructions.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, and as shown in fig. 9, the electronic device includes:
a processor (processor) 291; the electronic device further includes a memory (memory) 292, and may further include a communication interface (Communication Interface) 293 and a bus 294. The processor 291, the memory 292 and the communication interface 293 may communicate with one another via the bus 294. The communication interface 293 may be used for information transmission. The processor 291 may call logic instructions in the memory 292 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 292 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium.
The memory 292 is a computer-readable storage medium that can be used for storing software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the software programs, instructions and modules stored in the memory 292, the processor 291 performs functional applications and data processing, thereby implementing the methods in the above method embodiments.
The memory 292 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 292 may include a high-speed random access memory, and may also include a non-volatile memory.
The present disclosure provides a computer-readable storage medium having computer-executable instructions stored therein, which, when executed by a processor, are used to implement any one of the methods described above.
The present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements any one of the methods described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A pose determination method based on a marker is characterized by comprising the following steps:
acquiring an image shot by a camera and a reference pose of a marker set; the image shot by the camera comprises the marker set and an object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the set of markers in the camera coordinate system of the camera when the camera's external parameters are determined;
determining real-time poses of a set of markers in an image captured by the camera under a current camera coordinate system;
if the reference pose is determined to be different from the real-time pose, determining a transformation relation between the real-time pose and the reference pose;
compensating and transforming the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed under the current camera coordinate system.
2. The method of claim 1, wherein determining real-time poses of a set of markers in an image captured by the camera in a current camera coordinate system comprises:
performing edge detection processing on a two-dimensional image in an image shot by the camera, and determining an edge image of a marker in a marker set contained in the two-dimensional image; wherein the image shot by the camera comprises a two-dimensional image and a depth image;
determining two-dimensional coordinates of preset mark points in the marker according to the edge image;
determining the position of the preset mark point according to the two-dimensional coordinates and a depth image of an image shot by the camera;
determining the posture of the preset mark point; and determining the position of the preset mark point and the posture of the preset mark point as the real-time pose of the marker in the marker set in the image shot by the camera under the current camera coordinate system.
3. The method according to claim 2, wherein if the type of the markers in the marker set is a concentric circle type and the preset marker point is a center of a concentric circle, determining two-dimensional coordinates of the preset marker point in the markers according to the edge image comprises:
carrying out image correction processing on the edge image to obtain a corrected image; wherein the image correction processing is to correct an elliptical contour in the edge image to a circular contour; the concentric circle type marker is composed of a plurality of circular rings with the same circle center;
and determining the circle center coordinates of concentric circles in the corrected image according to the corrected image, and determining the circle center coordinates as the two-dimensional coordinates of the preset mark point.
4. The method of claim 2, wherein determining the pose of the preset landmark point comprises:
determining three non-collinear preset mark points as a first mark point, a second mark point and a third mark point respectively from a plurality of preset mark points corresponding to the marker set;
determining an orthogonal coordinate system taking the first mark point as a coordinate origin according to the position of the first mark point, the position of the second mark point and the position of the third mark point;
and determining the pose of the first marker point according to the current camera coordinate system and the orthogonal coordinate system.
5. The method of claim 1, wherein determining the transformation relationship of the real-time pose and the reference pose comprises:
and fitting the real-time pose and the reference pose based on a least square method to obtain a transformation relation between the real-time pose and the reference pose.
6. The method of claim 1, wherein the set of markers includes a plurality of markers having different shape parameters; determining a transformation relationship between the real-time pose and the reference pose, including:
determining the corresponding relation between the real-time poses and the reference poses according to the shape parameters of the markers and the images shot by the camera; wherein the corresponding relation is used for indicating which real-time pose and which reference pose represent the pose of the same marker;
and determining the transformation relation between the real-time pose and the reference pose according to the corresponding relation.
7. The method of claim 6, wherein the shape parameters are the number of rings contained in the concentric circles and the width of each ring when the marker is of the concentric circle type.
8. The method of claim 1, wherein the marker types of the markers of the marker set are concentric circle types and at least three non-collinear markers are included in the marker set; the concentric circle type marker is composed of a plurality of circular rings having the same center.
9. The method according to any one of claims 1-8, wherein acquiring the image shot by the camera and the reference pose of the marker set comprises:
in the grabbing process, acquiring an image shot by a camera;
and acquiring the reference pose of the marker set in response to receiving error information sent by the grabbing device, wherein the error information is used for indicating that the grabbing device has not successfully grabbed the object to be grabbed.
10. A marker-based pose determination apparatus, comprising:
the acquisition unit is used for acquiring an image shot by the camera and a reference pose of the marker set; wherein the image shot by the camera comprises the marker set and the object to be grabbed; the marker set comprises at least one marker, and the marker is fixed on a placing table of an object to be grabbed; the reference pose is the pose of the set of markers in the camera coordinate system of the camera when the camera's external parameters are determined;
a first determination unit, configured to determine real-time poses of a set of markers in an image captured by the camera in a current camera coordinate system;
the second determining unit is used for determining the transformation relation between the real-time pose and the reference pose if the reference pose is determined to be different from the real-time pose;
the compensation unit is used for performing compensation transformation on the initial pose of the object to be grabbed in the image shot by the camera according to the transformation relation to obtain the actual pose of the object to be grabbed; and the initial pose of the object to be grabbed is the initial pose of the object to be grabbed under the current camera coordinate system.
11. An electronic device, comprising: a memory, a processor;
wherein the memory is used for storing processor-executable instructions;
and the processor is configured to perform the method of any one of claims 1-9 according to the executable instructions.
12. A computer-readable storage medium having computer-executable instructions stored therein, which, when executed by a processor, are configured to implement the method of any one of claims 1-9.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-9.
CN202211703197.6A 2022-12-29 2022-12-29 Marker-based pose determination method, device, equipment, medium and product Pending CN115810049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211703197.6A CN115810049A (en) 2022-12-29 2022-12-29 Marker-based pose determination method, device, equipment, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211703197.6A CN115810049A (en) 2022-12-29 2022-12-29 Marker-based pose determination method, device, equipment, medium and product

Publications (1)

Publication Number Publication Date
CN115810049A true CN115810049A (en) 2023-03-17

Family

ID=85487239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211703197.6A Pending CN115810049A (en) 2022-12-29 2022-12-29 Marker-based pose determination method, device, equipment, medium and product

Country Status (1)

Country Link
CN (1) CN115810049A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117428788A (en) * 2023-12-13 2024-01-23 杭州海康机器人股份有限公司 Equipment control method and device, electronic equipment and storage medium
CN117428788B (en) * 2023-12-13 2024-04-05 杭州海康机器人股份有限公司 Equipment control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110426051B (en) Lane line drawing method and device and storage medium
CN108827154B (en) Robot non-teaching grabbing method and device and computer readable storage medium
KR20180120647A (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
US9495750B2 (en) Image processing apparatus, image processing method, and storage medium for position and orientation measurement of a measurement target object
CN110176032B (en) Three-dimensional reconstruction method and device
WO2019114339A1 (en) Method and device for correcting motion of robotic arm
CN111673735A (en) Mechanical arm control method and device based on monocular vision positioning
JP5088278B2 (en) Object detection method, object detection apparatus, and robot system
CN106845354B (en) Part view library construction method, part positioning and grabbing method and device
WO2020228694A1 (en) Camera pose information detection method and apparatus, and corresponding intelligent driving device
CN107300382B (en) Monocular vision positioning method for underwater robot
CN112509036B (en) Pose estimation network training and positioning method, device, equipment and storage medium
CN113524187B (en) Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN109727285B (en) Position and pose determination method and system using edge images
CN115810049A (en) Marker-based pose determination method, device, equipment, medium and product
CN114310901B (en) Coordinate system calibration method, device, system and medium for robot
CN111681186A (en) Image processing method and device, electronic equipment and readable storage medium
US20220327721A1 (en) Size estimation device, size estimation method, and recording medium
CN114310892B (en) Object grabbing method, device and equipment based on point cloud data collision detection
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN115592666A (en) Component positioning method, component positioning device, component positioning system and robot
CN114734444A (en) Target positioning method and device, electronic equipment and storage medium
CN115797332B (en) Object grabbing method and device based on instance segmentation
CN115781698B (en) Method, system, equipment and medium for automatically generating motion pose of layered hand-eye calibration robot
CN109903336A (en) Across the visual field estimation method of attitude of flight vehicle and device based on local feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination