CN113658260A - Robot pose calculation method and system, robot and storage medium - Google Patents

Robot pose calculation method and system, robot and storage medium

Info

Publication number
CN113658260A
Authority
CN
China
Prior art keywords: preset, phase, predicted, information, position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110782453.4A
Other languages
Chinese (zh)
Inventor
郝祁 (Qi Hao)
马睿 (Rui Ma)
郑玺 (Xi Zheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Southern University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202110782453.4A
Publication of CN113658260A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot pose calculation method and system, a robot and a storage medium. The robot pose calculation method comprises the following steps: acquiring a first phase map and a second phase map from before and after the robot moves; determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model; and substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and the specific value of the preset motion parameter is determined as the movement information. The method determines the predicted phase map from the first phase map, the preset motion parameter and the preset motion model, then optimizes the predicted phase map against the actually measured second phase map with the preset optimization algorithm to obtain the specific value of the preset motion parameter, and determines the movement information from that value, so that the movement information of the robot's motion is calculated simply and accurately.

Description

Robot pose calculation method and system, robot and storage medium
Technical Field
The invention relates to the technical field of robots, in particular to a robot pose calculation method, a robot pose calculation system, a robot and a storage medium.
Background
Pose estimation for robots can be roughly classified into three categories according to the sensors and sensing data (images, point clouds) employed: robots that use a single RGB or grayscale camera, robots that use a binocular camera, and robots that use a depth sensor.
Robots with a single RGB or grayscale camera and robots with a binocular camera both rely on feature points and matching of RGB or grayscale images; since the feature points and matching of two-dimensional images are relatively rough, the estimation accuracy of the three-dimensional depth is poor, and the pose estimate is poor. Robots employing a depth sensor perform three-dimensional matching based on three-dimensional point cloud data; the three-dimensional matching is computationally heavy and poorly robust, and tends to converge to a local point rather than the global optimum. Robots employing a camera plus a depth sensor can combine the two methods, first shooting images with the two-dimensional camera for preliminary matching and pose estimation, and then using the three-dimensional point cloud for three-dimensional matching and fine pose estimation; but this also inherits the disadvantages of both methods, namely that the two-dimensional matching is rough and the three-dimensional matching is computationally heavy.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a robot pose calculation method which can calculate the displacement information of the robot more simply and accurately.
The invention further provides a robot pose calculation system.
The invention further provides the robot.
The invention also provides a computer readable storage medium.
In a first aspect, an embodiment of the present invention provides a robot pose calculation method, including:
acquiring a first phase map and a second phase map from before and after the robot moves;
determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model;
and substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and the specific value of the preset motion parameter is determined as the movement information.
The robot pose calculation method provided by the embodiment of the invention at least has the following beneficial effects: the predicted phase map is determined from the first phase map, the preset motion parameter and the preset motion model; the predicted phase map and the actually measured second phase map are then optimized against each other with the preset optimization algorithm to obtain the specific value of the preset motion parameter; and the movement information is determined from that specific value, so that the movement information of the robot's motion is calculated simply and accurately.
According to further embodiments of the robot pose calculation method of the present invention, the first phase map includes first phase information and first pixel position information; the second phase map includes second phase information and second pixel position information; the preset motion model includes a preset phase-motion model and a preset pixel-motion model; and the determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model includes:
determining predicted phase information according to the first phase information, the preset motion parameter and the preset phase-motion model;
determining predicted pixel position information according to the first pixel position information, the preset motion parameter and the preset pixel-motion model;
determining the predicted phase map from the predicted phase information and the predicted pixel position information.
According to another embodiment of the present invention, the preset motion parameters include: a preset rotation parameter and a preset displacement parameter.
According to another embodiment of the present invention, the determining predicted phase information according to the first phase information, the preset motion parameter and the preset phase-motion model includes:
determining initial position information according to the first phase information and a projection equation of a projector and a camera;
determining predicted position information according to the initial position information, a preset rotation parameter and the preset displacement parameter;
and determining the predicted phase information corresponding to the predicted position information according to the predicted position information and a preset projection formula.
According to another embodiment of the present invention, the determining predicted pixel position information according to the first pixel position information, the preset motion parameter and the preset pixel-motion model includes:
determining a camera matrix according to the first pixel position information;
and determining the predicted pixel position information according to the camera matrix, a preset rotation parameter and a preset displacement parameter.
According to another embodiment of the present invention, the substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and a specific value of the preset motion parameter is determined as the movement information, includes:
correlating the second pixel position information and the predicted pixel position information with each other;
calculating an error between the predicted phase information and the second phase information corresponding to the pixel position to obtain a plurality of error values;
calculating a sum of squares of the plurality of error values to obtain a plurality of sums of squares;
and optimizing the plurality of square sums by the preset optimization algorithm to obtain specific values of the preset motion parameters to obtain the movement information.
According to other embodiments of the robot pose calculation method of the present invention, the preset optimization algorithm includes any one of: a gradient descent method and a Gauss-Newton method.
In a second aspect, an embodiment of the present invention provides a robot pose calculation system, including:
an acquisition module for acquiring a first phase map and a second phase map from before and after the robot moves;
a calculation module for determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model;
and an optimization module for substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and the specific values of the preset motion parameters are determined as the movement information.
The robot pose calculation system provided by the embodiment of the invention at least has the following beneficial effects: the pose of the robot is solved from the first phase map and the second phase map of the moving projector, and the specific values of the preset motion parameters are calculated by substituting the second phase map and the predicted phase map into the preset optimization algorithm to obtain the movement information, so that the movement information is calculated quickly and accurately, with strong robustness.
In a third aspect, an embodiment of the present invention provides a robot including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robot pose calculation method of the first aspect.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium: the computer-readable storage medium stores computer-executable instructions for causing a computer to execute the robot pose calculation method according to the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for calculating pose of a robot according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for calculating pose of a robot according to another embodiment of the present invention;
FIG. 3 is a schematic flow chart diagram illustrating a method for calculating pose of a robot according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for calculating pose of a robot according to another embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a robot pose calculation method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for calculating pose of a robot according to another embodiment of the present invention;
FIG. 7 is a block diagram of a robot pose calculation system according to an embodiment of the present invention;
FIG. 8 is a block diagram of an electronic control device according to an embodiment of the present invention.
Reference numerals: 100. an acquisition module; 200. a calculation module; 300. an optimization module; 400. a processor; 500. a memory.
Detailed Description
The concept and technical effects of the present invention will be described clearly and completely below in conjunction with the embodiments, so that the objects, features and effects of the present invention can be fully understood. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; based on these embodiments, all other embodiments obtained by those skilled in the art without inventive effort fall within the protection scope of the present invention.
In the description of the present invention, if an orientation description is referred to, for example, the orientations or positional relationships indicated by "upper", "lower", "front", "rear", "left", "right", etc. are based on the orientations or positional relationships shown in the drawings, only for convenience of describing the present invention and simplifying the description, but not for indicating or implying that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. If a feature is referred to as being "disposed," "secured," "connected," or "mounted" to another feature, it can be directly disposed, secured, or connected to the other feature or indirectly disposed, secured, connected, or mounted to the other feature.
In the description of the embodiments of the present invention, if "a number" is referred to, it means one or more; if "a plurality" is referred to, it means two or more; if "greater than", "less than" or "exceeding" is referred to, the stated number is understood to be excluded; and if "above", "below" or "within" is referred to, the stated number is understood to be included. If "first" or "second" is referred to, it should be understood as distinguishing between features and not as indicating or implying relative importance, implicitly indicating the number of the indicated features, or implicitly indicating the precedence of the indicated features.
The pose estimation of the robot in the related art can be roughly classified into three categories according to the sensors and sensing data (images, point clouds) employed:
the first type: the robot adopting a single RGB or gray camera shoots two pictures with different visual angles before and after the robot moves by using a characteristic point on a picture shot by the camera as a characteristic descriptor, then matches the characteristic points of the two pictures, and finally estimates the change of the pose by using a motion recovery structure technology. The robot adopting the binocular camera extracts the feature points and the descriptors on the same picture as the first type, and then performs feature point matching of the picture, and the difference is that the binocular camera is adopted, the depth of the feature points can be estimated by using a stereoscopic vision technology, once the matching of the feature points of the picture before and after movement is successful and the depth of the feature points is added, the absolute movement pose can be directly estimated by using a multi-view geometry method, and the pose with the zoom factor is not estimated by adopting SfM as the first type.
The second category: robots employing a depth sensor. Depth sensors include structured light, lidar and the like, and their sensing data are output as a three-dimensional point cloud. The most common method is to perform three-dimensional matching on the point cloud data acquired before and after the movement by the Iterative Closest Point (ICP) method, thereby estimating the movement pose.
The third category: robots employing a camera plus a depth sensor. This is substantially similar to the second category, except that the depth of the feature points is obtained directly by the depth sensor instead of by a binocular camera, and the acquired depth precision is higher than that of a binocular system.
However, the methods for estimating the robot pose of these robots have several problems:
first, robots that employ a single RGB or grayscale camera, or a binocular camera. All the algorithms are based on feature points and matching of RGB or gray level images, and the feature points and matching of the two-dimensional images are relatively rough, so that the estimation accuracy of three-dimensional depth is poor, and the estimation of pose is poor.
Second, robots employing a depth sensor: these perform three-dimensional matching based on three-dimensional point cloud data; the three-dimensional matching is computationally heavy and poorly robust, and tends to converge to a local point rather than the global optimum.
Third, robots employing a camera plus a depth sensor: the two methods can be combined, with preliminary matching and pose estimation performed on images shot by the two-dimensional camera, followed by three-dimensional matching and fine pose estimation using the three-dimensional point cloud. But this also inherits the disadvantages of both methods, namely that the two-dimensional matching is rough and the three-dimensional matching is computationally heavy.
Based on the above, the present application discloses a robot pose calculation method and system, a robot and a storage medium, which establish the geometric relationship between the two-dimensional phase map and the three-dimensional motion of the system so as to realize phase-map matching for pose estimation, making the pose estimation more accurate and the computation simple.
Referring to fig. 1, in a first aspect, an embodiment of the present invention discloses a robot pose calculation method, including:
s100, acquiring a first phase diagram and a second phase diagram before and after the robot moves;
s200, determining a predicted phase diagram according to the first phase diagram, a preset motion parameter and a preset motion model;
and S300, substituting the predicted phase diagram and the second phase diagram into a preset optimization algorithm for optimization so as to minimize the difference between the predicted phase diagram and the second phase diagram, and determining the specific value of the preset motion parameter as the movement information.
A first phase map and a second phase map are obtained by acquiring the phase information before and after the robot moves, and a predicted phase map is determined according to the first phase map, the preset motion parameter and the preset motion model; since the preset motion parameter is an unknown quantity, the predicted phase information obtained carries this unknown preset motion parameter. The predicted phase map and the second phase map are then substituted into the preset optimization algorithm, which adjusts the preset motion parameter so that the error between the predicted phase map and the second phase map is minimized, yielding the specific value of the preset motion parameter, and the movement information is determined from this specific value. The pose of the robot can thus be obtained by analyzing the first phase map and the second phase map from before and after the movement, so the calculation of the robot's movement information is simple and fast; moreover, since the three-dimensional motion information is calculated from two-dimensional phase maps, the method is insensitive to the initial pose estimate, robust, and more accurate than the three related-art algorithms.
In some embodiments, the first phase map includes first phase information and first pixel position information; the second phase map includes second phase information and second pixel position information; and the preset motion model includes a preset phase-motion model and a preset pixel-motion model.
When the first phase information and the second phase information are acquired, the projector and the camera are fixed on the robot, and the three-dimensional motion of the robot causes the phase map to change between before and after the movement. This change not only alters the phase of each three-dimensional point in the phase map, but also shifts the pixel position at which the three-dimensional point is projected onto the map, so both the change in phase information and the change in pixel position must be considered.
Referring to fig. 2, step S200 includes:
s210, determining predicted phase information according to the first phase information, the preset motion parameters, the preset phase and the motion model;
s220, determining predicted pixel position information according to the first pixel position information, the preset motion parameters, the preset pixels and the motion model;
and S230, determining a predicted phase map according to the predicted phase information and the predicted pixel position information.
Predicted phase information is determined according to the first phase information, the preset motion parameter and the preset phase-motion model, and predicted pixel position information is determined according to the first pixel position information, the preset motion parameter and the preset pixel-motion model; the predicted phase map is then determined from the predicted pixel position information and the predicted phase information, and the final movement information determined from the predicted phase map and the second phase map is more accurate. Because a phase map comprises both phase information and pixel position information, the predicted phase information and predicted pixel position information are obtained by prediction with the preset phase-motion model and the preset pixel-motion model; after the predicted pixel position information is placed in correspondence with the second pixel position information, the difference between the predicted phase information and the second phase information at the corresponding pixel positions is minimized to determine the specific values of the preset motion parameters, so the movement information is obtained and the pose calculation of the robot is simpler and more accurate.
In some embodiments, the preset motion parameters include a preset rotation parameter and a preset displacement parameter. The three-dimensional motion of the robot is set as ΔX, the difference between the poses before and after the movement, which includes six degrees of freedom:

ΔX = [Δx, Δy, Δz, Δα, Δβ, Δγ]^T

namely the displacements along, and the rotations about, the x, y and z axes. The projector coordinate system before the movement is taken as the world coordinate system: the optical center of the projector is the origin, the x axis is the direction perpendicular to the phase-change direction, the y axis is the phase-change direction, and the z axis is the depth direction. Δy, Δz, Δα, Δβ and Δγ result in changes of both the phase values and the pixel positions, whereas Δx results only in a change of pixel position.
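As an illustration, the sketch below (Python with NumPy; the standard right-handed rotation matrices of equation (2) below are used) builds the rotation from the preset rotation parameters and applies equation (1) below to a world point. It is a minimal sketch of the geometry, not the patent's implementation:

```python
import numpy as np

def rotation_matrix(da, db, dg):
    """R = Rx @ Ry @ Rz for rotations da, db, dg about the x, y and z axes."""
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cg, sg = np.cos(dg), np.sin(dg)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def transform_point(X, dX):
    """Equation (1): Xp = R(X - T), where dX = [dx, dy, dz, da, db, dg]."""
    dX = np.asarray(dX, dtype=float)
    T, R = dX[:3], rotation_matrix(*dX[3:])
    return R @ (np.asarray(X, dtype=float) - T)

# A pure y-displacement changes only the y coordinate, matching the example below:
print(transform_point([0.1, 0.2, 1.0], [0, 0.05, 0, 0, 0, 0]))   # [0.1, 0.15, 1.0]
```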
Referring to fig. 3, in some embodiments, step S210 includes:
s211, determining initial position information according to the first phase information and projection equations of the projector and the camera;
s212, determining predicted position information according to the initial position information, the preset rotation parameter and the preset displacement parameter;
and S213, determining the predicted phase information corresponding to the predicted position information according to the predicted position information and a preset projection formula.
The two-dimensional first phase information is converted into three-dimensional initial position information through the projection equations of the projector and the camera; predicted position information is then determined from the initial position information, the preset rotation parameter and the preset displacement parameter, the predicted position information containing these unknown parameters; finally, the three-dimensional predicted position information is converted into two-dimensional predicted phase information according to the preset projection formula. The two-dimensional first phase information is thus lifted into three dimensions, transformed, and projected back into two dimensions, which makes the predicted phase information easy to calculate.
Specifically, since the first factor in the change of the phase map is the change of the phase itself, let the longitudinal projection resolution of the projector be H, the focal length of the projector be f_p, and the x and y coordinates of the projector's principal point be c_x and c_y. From the first phase map P1 before the movement, the world coordinate of one of the three-dimensional points is obtained as X = [x, y, z]^T. The robot then performs the movement ΔX, comprising a rotation R and a displacement T, and its pose changes from [0, 0, 0, 0, 0, 0]^T to ΔX = [Δx, Δy, Δz, Δα, Δβ, Δγ]^T. After the movement, by geometric principles, the coordinates of the three-dimensional point relative to the new projector position become:

Xp = R(X - T) = Rx Ry Rz (X - T)   (1)

where the rotation matrices Rx, Ry and Rz and the displacement vector T are:

Rx = [[1, 0, 0], [0, cosΔα, -sinΔα], [0, sinΔα, cosΔα]]
Ry = [[cosΔβ, 0, sinΔβ], [0, 1, 0], [-sinΔβ, 0, cosΔβ]]
Rz = [[cosΔγ, -sinΔγ, 0], [sinΔγ, cosΔγ, 0], [0, 0, 1]]
T = [Δx, Δy, Δz]^T   (2)
Using the first phase information before the movement, the initial position information X = [x, y, z]^T is obtained from the projection equations of the projector and the camera; the coordinates of this point relative to the projector after the movement become Xp = [xp, yp, zp]^T according to equation (1). By the projection formula of the projector, the projection of the point after the movement is:

up = f_p * xp / zp + c_x,   vp = f_p * yp / zp + c_y   (3)

Since the direction of the phase change is defined as the y axis, the projection coordinate corresponding to the phase information is the longitudinal coordinate obtained from the above formula:

vp = f_p * yp / zp + c_y   (4)

The position information and the phase information are related through the longitudinal resolution H of the projector; assuming the phase varies linearly with the projector row,

P = 2π * vp / H   (5)

Therefore, the phase information after the movement by the preset motion parameter ΔX, i.e., the predicted phase information, is:

P'2(ΔX) = (2π / H) * (f_p * yp / zp + c_y)   (6)
For example, when there is no rotation and only a y-axis displacement Δy, equation (1) gives Xp = [xp, yp, zp]^T = [x, y - Δy, z]^T, and the predicted phase information after the movement obtained from equation (5) is:

P'2 = (2π / H) * (f_p * (y - Δy) / z + c_y)   (7)

Consider next the example of the three-degree-of-freedom motion ΔX = [Δx, 0, Δz, 0, Δβ, 0]^T. Equation (1) gives Xp = [xp, yp, zp]^T = [cosΔβ * (x - Δx) + sinΔβ * (z - Δz), y, -sinΔβ * (x - Δx) + cosΔβ * (z - Δz)]^T, and the predicted phase information after the movement obtained from equation (5) is:

P'2 = (2π / H) * (f_p * y / (-sinΔβ * (x - Δx) + cosΔβ * (z - Δz)) + c_y)   (8)

In any other motion mode ΔX = [Δx, Δy, Δz, Δα, Δβ, Δγ]^T, the predicted phase information after the movement can be obtained by the same procedure using equations (1), (2) and (5).
Referring to fig. 4, in some embodiments, step S220 includes:
s221, determining a camera matrix according to the first pixel position information;
s222, determining predicted pixel position information according to the camera matrix, the preset rotation parameter and the preset displacement parameter
A preset pixel-motion model is established, which mainly determines a camera matrix from the first pixel position information. As noted above, Δy, Δz, Δα, Δβ and Δγ change both the phase information and the pixel positions, while Δx changes only the pixel positions. The predicted pixel position information is therefore determined from the camera matrix, the preset rotation parameter and the preset displacement parameter, and accurate movement information can be calculated once the calculated predicted pixel position information is placed in correspondence with the second pixel position information.
Specifically, the calibrated camera matrix Mc is determined according to the first pixel position information:

Mc = [[m11, m12, m13, m14], [m21, m22, m23, m24], [m31, m32, m33, m34]]

After the robot moves by the preset rotation parameter and the preset displacement parameter, the predicted pixel position information on the second phase map is:

sc * [uc, vc, 1]^T = Mc * [xp, yp, zp, 1]^T   (9)

where sc = m31 * xp + m32 * yp + m33 * zp + m34, and [xp, yp, zp]^T is as defined above and obtained by equation (1). As can be seen from equation (9), the new predicted pixel position is a function of the motion, i.e., uc = uc(ΔX), vc = vc(ΔX).
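Equation (9) is likewise straightforward to evaluate once Mc is known. A minimal sketch, again reusing transform_point from above; the matrix entries below are placeholder values for an ideally calibrated camera, not calibration results from the patent:

```python
import numpy as np

def predicted_pixel(X, dX, M_c):
    """Equation (9): project the moved point through the 3x4 camera matrix M_c."""
    Xp_h = np.append(transform_point(X, dX), 1.0)   # homogeneous [xp, yp, zp, 1]
    su, sv, s_c = M_c @ Xp_h                        # s_c = m31*xp + m32*yp + m33*zp + m34
    return su / s_c, sv / s_c                       # predicted (u_c(dX), v_c(dX))

# Placeholder camera matrix (focal length 800 px, principal point (512, 384)):
M_c = np.array([[800.0,   0.0, 512.0, 0.0],
                [  0.0, 800.0, 384.0, 0.0],
                [  0.0,   0.0,   1.0, 0.0]])
u_c, v_c = predicted_pixel([0.1, 0.2, 1.0], [0, 0.05, 0, 0, 0, 0], M_c)
```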
Referring to fig. 5 and 6, in some embodiments, step S300 includes:
s310, enabling the second pixel position information and the predicted pixel position information to correspond to each other;
s320, calculating errors of the predicted phase information and the second phase information corresponding to the pixel positions to obtain a plurality of error values;
s330, calculating the sum of squares of the error values to obtain a plurality of sum of squares;
and S340, optimizing the plurality of square sums by a preset optimization algorithm to obtain specific values of preset motion parameters so as to obtain final movement information.
The predicted pixel position information obtained by this calculation must coincide with the real second pixel position information, so the predicted pixel position information is placed in correspondence with the second pixel position information. The error between the predicted phase information and the second phase information at the corresponding pixel positions is then calculated, and the sum of squares of these error values is computed. Since there are many corresponding pixel positions, there are many error values; the resulting sums of squares are substituted into the preset optimization algorithm, which finds the specific values of the preset motion parameters that minimize them, yielding the movement information.
Specifically, steps S210 and S220 describe, through equations (5) and (9), how the phase map after the robot moves depends on the preset motion parameter ΔX = [Δx, Δy, Δz, Δα, Δβ, Δγ]^T. Suppose the first pixel position information on the first phase map P1 before the movement is (u, v). Owing to the robot motion, the phase of the real three-dimensional point corresponding to (u, v) belongs, after the movement, to the second phase map, and the first pixel position changes to the second pixel position information (uc, vc). The preset motion parameter ΔX to be solved is unknown and is treated as an unknown quantity. Its predicted phase information is estimated as P'2(ΔX) from the unknown ΔX according to formula (5) of step S210; meanwhile, the actually measured second phase map P2 after the movement has been obtained, and the true value corresponding to the second pixel position information should lie at the predicted pixel position (uc, vc) corresponding to ΔX. The error between the estimated predicted phase information P'2(ΔX) and the true second phase information P2(uc, vc) is set as:

e_uv(ΔX) = P'2(ΔX) - P2(uc(ΔX), vc(ΔX))   (10)

The cost function F of the preset optimization algorithm is set as the sum of the squares of the phase error values:

F(ΔX) = Σ_{(u,v)∈R} e_uv(ΔX)^2   (11)
where the sum is taken over all pixel position information (u, v) within the region of interest R. For the cost function of equation (11), the solution of the preset optimization algorithm is

ΔX* = arg min_ΔX F(ΔX)   (12)
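Equations (10)-(12) define a nonlinear least-squares problem in the six motion parameters. The sketch below evaluates the cost of equation (11), sampling the measured map P2 at the predicted pixels by nearest-neighbor lookup for brevity (bilinear interpolation would be smoother); predicted_phase and predicted_pixel are the sketches above:

```python
import numpy as np

def cost_F(dX, points, P2, M_c, f_p, c_y, H):
    """Equation (11): F(dX), the sum of squared phase errors over the region R.

    points: Nx3 array of 3D points recovered from the first phase map P1;
    P2: the measured second phase map, a 2D array indexed as P2[v, u].
    """
    rows, cols = P2.shape
    F = 0.0
    for X in points:
        P_pred = predicted_phase(X, dX, f_p, c_y, H)   # P'2(dX), equation (6)
        u_c, v_c = predicted_pixel(X, dX, M_c)         # (u_c, v_c)(dX), equation (9)
        u_i, v_i = int(round(u_c)), int(round(v_c))
        if 0 <= u_i < cols and 0 <= v_i < rows:        # only pixels that land inside P2
            e_uv = P_pred - P2[v_i, u_i]               # equation (10)
            F += e_uv ** 2
    return F
```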
Commonly used preset optimization algorithms for solving equation (12) include the gradient descent method, the Gauss-Newton method and the like. In the present embodiment, the simplest gradient descent method is taken as an example, and the index i is used to index each pixel in R, so that e_uv becomes e_i, for N pixels in total. The first-order Taylor expansion of the cost function F of equation (11) is approximated as:

F(ΔX + δ) ≈ || e + J δ ||^2 = e^T e + 2 e^T J δ + δ^T J^T J δ   (13)

where δ = [δ_1, …, δ_6]^T is the increment of the preset motion parameter ΔX, whose j-th component is ΔX_j (e.g., ΔX_1 = Δx and ΔX_4 = Δα); J is the Jacobian matrix with entries J_ij = ∂e_i/∂ΔX_j; and e = [e_1, e_2, …, e_N]^T. By the gradient descent method, the (k+1)-th update iteration in the solving process is:

ΔX^(k+1) = ΔX^(k) - a J^T e   (14)
where a is the update step size, controlling the update speed. The element J_ij of the Jacobian matrix, i.e., the partial derivative of the error with respect to the motion, is

J_ij = ∂e_i/∂ΔX_j = ∂P'2/∂ΔX_j - (∂P2/∂uc)(∂uc/∂ΔX_j) - (∂P2/∂vc)(∂vc/∂ΔX_j)   (15)

where ∂P'2/∂ΔX_j can be obtained from formula (5) in step S210, ∂uc/∂ΔX_j and ∂vc/∂ΔX_j can be obtained from formula (9) in step S220, and ∂P2/∂uc and ∂P2/∂vc can be obtained by directly calculating the pixel gradients of the second phase information on the measured second phase map P2.
This completes the process of solving the movement information, i.e., the pose change ΔX. Sufficient accuracy can be obtained by iterating equation (14) a sufficient number of times. The specific values of the preset rotation parameter and the preset displacement parameter that minimize the sums of squares are thus obtained by repeated iterative calculation, and the movement information is determined from these specific values, so that the movement information of the robot can be calculated from just the first phase map and the second phase map, simply and accurately.
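A compact solver for equation (12) in this spirit: instead of assembling the analytic Jacobian of equation (15), the sketch below approximates the gradient of the cost (which equals 2 J^T e, the factor 2 being absorbed into a) by central finite differences; the step size and iteration count are illustrative, not values from the patent:

```python
import numpy as np

def solve_motion(cost, dX0, a=1e-7, iters=500, eps=1e-6):
    """Gradient descent per equation (14): dX <- dX - a * grad F(dX).

    cost: callable F(dX), e.g. cost_F above with its data arguments bound;
    dX0: initial guess for the six motion parameters.
    """
    dX = np.asarray(dX0, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros(6)
        for j in range(6):                  # central difference in each component
            step = np.zeros(6)
            step[j] = eps
            grad[j] = (cost(dX + step) - cost(dX - step)) / (2 * eps)
        dX -= a * grad
    return dX

# Usage: solve_motion(lambda dX: cost_F(dX, points, P2, M_c, 800.0, 384.0, 768),
#                     np.zeros(6))
```

In practice, the analytic Jacobian of equation (15), built from the pixel gradients of P2, converges considerably faster than finite differences.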
The robot pose calculation method according to the embodiment of the present invention is described in detail below in a specific embodiment with reference to fig. 1 to 6. It is to be understood that the following description is only exemplary, and not a specific limitation of the invention.
First phase information, first pixel position information, second phase information and second pixel position information before and after the robot moves are acquired. The first phase information is substituted into equation (1) to obtain the initial position information, the preset rotation parameter and the preset displacement parameter are applied to obtain the predicted position information, and the moved point is projected according to the projection formula of the projector, yielding the predicted phase information of equation (5). Next, the calibrated camera matrix is determined from the first pixel position information, and the preset rotation parameter and the preset displacement parameter are substituted into it to obtain the predicted pixel position information. Finally, the predicted pixel position information is placed in correspondence with the second pixel position information, the error values between the predicted phase information and the second phase information at the corresponding pixel positions are calculated, and the sum of squares of the error values is computed and minimized to recover the movement information.
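Pulling the pieces together, the following self-contained sketch synthesizes both phase maps from a known motion over a synthetic scene and recovers the motion. scipy.optimize.least_squares (a trust-region nonlinear least-squares solver) stands in for the patent's gradient descent / Gauss-Newton iteration; the intrinsics, the planar scene and the linear phase-row mapping are all illustrative assumptions, and Δx is held at zero because, as noted above, it does not affect the phase values (the pixel correspondence step, omitted here for brevity, is what makes it observable):

```python
import numpy as np
from scipy.optimize import least_squares

f_p, c_y, H = 800.0, 384.0, 768           # assumed projector intrinsics

def rot(da, db, dg):
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cg, sg = np.cos(dg), np.sin(dg)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def moved(X, dX):                          # equations (1)-(2), applied to Nx3 points
    return (rot(*dX[3:]) @ (X - dX[:3]).T).T

def phase(X):                              # equations (4)-(5), assumed linear map
    return 2 * np.pi * (f_p * X[:, 1] / X[:, 2] + c_y) / H

# Synthetic scene: a grid of 3D points on a tilted plane in front of the projector.
gx, gy = np.meshgrid(np.linspace(-0.3, 0.3, 20), np.linspace(-0.3, 0.3, 20))
X = np.stack([gx.ravel(), gy.ravel(), 1.0 + 0.1 * gx.ravel()], axis=1)

# Ground-truth motion; dx is held at zero since phase alone does not observe it.
dX_true = np.array([0.0, 0.02, -0.015, 0.005, 0.01, -0.008])
P2 = phase(moved(X, dX_true))              # simulated measured second phase

def residuals(p):                          # equation (10), point-wise correspondence
    dX = np.concatenate(([0.0], p))        # p = [dy, dz, da, db, dg]
    return phase(moved(X, dX)) - P2

sol = least_squares(residuals, x0=np.zeros(5))   # equation (12)
print(np.round(sol.x, 5))   # approximately [0.02, -0.015, 0.005, 0.01, -0.008]
```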
In a second aspect, referring to fig. 7, an embodiment of the present invention further discloses a robot pose calculation system, including an acquisition module 100, a calculation module 200 and an optimization module 300. The acquisition module is used for acquiring a first phase map and a second phase map from before and after the robot moves; the calculation module is used for determining a predicted phase map according to the first phase map, the preset motion parameter and the preset motion model; and the optimization module is used for substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and the specific values of the preset motion parameters are determined as the movement information.
The pose of the robot is solved from the first phase map and the second phase map of the moving projector, and the specific values of the preset motion parameters are calculated by substituting the second phase map and the predicted phase map into the preset optimization algorithm to obtain the movement information, so that the movement information is calculated quickly and accurately, with strong robustness.
Referring to fig. 8, in a third aspect, an embodiment of the present invention further discloses a robot, including: at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robot pose calculation method of the first aspect.
In a fourth aspect, the embodiment of the present invention also discloses a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are used to make a computer execute the robot pose calculation method according to the first aspect.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as is well known to those skilled in the art, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A robot pose calculation method, characterized by comprising the following steps:
acquiring a first phase map and a second phase map from before and after the robot moves;
determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model;
and substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and a specific value of the preset motion parameter is determined as the movement information.
2. The robot pose calculation method according to claim 1, wherein the first phase map includes first phase information and first pixel position information; the second phase map includes second phase information and second pixel position information; the preset motion model includes a preset phase-motion model and a preset pixel-motion model; and the determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model comprises:
determining predicted phase information according to the first phase information, the preset motion parameter and the preset phase-motion model;
determining predicted pixel position information according to the first pixel position information, the preset motion parameter and the preset pixel-motion model;
determining the predicted phase map from the predicted phase information and the predicted pixel location information.
3. The robot pose calculation method according to claim 2, wherein the preset motion parameters comprise: a preset rotation parameter and a preset displacement parameter.
4. The robot pose calculation method according to claim 3, wherein the determining predicted phase information according to the first phase information, the preset motion parameter and the preset phase-motion model comprises:
determining initial position information according to the first phase information and a projection equation of a projector and a camera;
determining predicted position information according to the initial position information, a preset rotation parameter and the preset displacement parameter;
and determining the predicted phase information corresponding to the predicted position information according to the predicted position information and a preset projection formula.
5. The robot pose calculation method according to claim 3, wherein the determining predicted pixel position information according to the first pixel position information, the preset motion parameter and the preset pixel-motion model comprises:
determining a camera matrix according to the first pixel position information;
and determining the predicted pixel position information according to the camera matrix, a preset rotation parameter and a preset displacement parameter.
6. The robot pose calculation method according to claim 3, wherein the substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so that the difference between the predicted phase map and the second phase map is minimized and a specific value of the preset motion parameter is determined as the movement information, comprises:
correlating the second pixel position information and the predicted pixel position information with each other;
calculating an error between the predicted phase information and the second phase information corresponding to the pixel position to obtain a plurality of error values;
calculating a sum of squares of the plurality of error values to obtain a plurality of sums of squares;
and optimizing the plurality of square sums by the preset optimization algorithm to obtain specific values of the preset motion parameters to obtain the movement information.
7. The robot pose calculation method according to claim 1, wherein the preset optimization algorithm comprises any one of: a gradient descent method and a Gauss-Newton method.
8. A robot pose calculation system, comprising:
an acquisition module for acquiring a first phase map and a second phase map from before and after the robot moves;
a calculation module for determining a predicted phase map according to the first phase map, a preset motion parameter and a preset motion model;
and an optimization module for substituting the predicted phase map and the second phase map into a preset optimization algorithm for optimization, so as to minimize the difference between the predicted phase map and the second phase map and determine the specific value of the preset motion parameter as the movement information.
9. A robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robot pose calculation method of any one of claims 1 to 7.
10. A computer-readable storage medium characterized in that the computer-readable storage medium stores computer-executable instructions for causing a computer to execute the robot pose calculation method according to any one of claims 1 to 7.
CN202110782453.4A 2021-07-12 2021-07-12 Robot pose calculation method and system, robot and storage medium Pending CN113658260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110782453.4A CN113658260A (en) 2021-07-12 2021-07-12 Robot pose calculation method and system, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110782453.4A CN113658260A (en) 2021-07-12 2021-07-12 Robot pose calculation method and system, robot and storage medium

Publications (1)

Publication Number Publication Date
CN113658260A (en) 2021-11-16

Family

ID=78477269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782453.4A Pending CN113658260A (en) 2021-07-12 2021-07-12 Robot pose calculation method and system, robot and storage medium

Country Status (1)

Country Link
CN (1) CN113658260A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
CN108225345A (en) * 2016-12-22 2018-06-29 乐视汽车(北京)有限公司 The pose of movable equipment determines method, environmental modeling method and device
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU
CN109186596A (en) * 2018-08-14 2019-01-11 深圳清华大学研究院 IMU measurement data generation method, system, computer installation and readable storage medium storing program for executing
CN111238496A (en) * 2020-01-14 2020-06-05 深圳市锐曼智能装备有限公司 Robot posture confirming method, device, computer equipment and storage medium
CN112184809A (en) * 2020-09-22 2021-01-05 浙江商汤科技开发有限公司 Relative pose estimation method, device, electronic device and medium
CN112734852A (en) * 2021-03-31 2021-04-30 浙江欣奕华智能科技有限公司 Robot mapping method and device and computing equipment
CN112752028A (en) * 2021-01-06 2021-05-04 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium
CN113034594A (en) * 2021-03-16 2021-06-25 浙江商汤科技开发有限公司 Pose optimization method and device, electronic equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108225345A (en) * 2016-12-22 2018-06-29 乐视汽车(北京)有限公司 The pose of movable equipment determines method, environmental modeling method and device
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU
CN109186596A (en) * 2018-08-14 2019-01-11 深圳清华大学研究院 IMU measurement data generation method, system, computer installation and readable storage medium storing program for executing
CN111238496A (en) * 2020-01-14 2020-06-05 深圳市锐曼智能装备有限公司 Robot posture confirming method, device, computer equipment and storage medium
CN112184809A (en) * 2020-09-22 2021-01-05 浙江商汤科技开发有限公司 Relative pose estimation method, device, electronic device and medium
CN112752028A (en) * 2021-01-06 2021-05-04 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium
CN113034594A (en) * 2021-03-16 2021-06-25 浙江商汤科技开发有限公司 Pose optimization method and device, electronic equipment and storage medium
CN112734852A (en) * 2021-03-31 2021-04-30 浙江欣奕华智能科技有限公司 Robot mapping method and device and computing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XI ZHENG et al.: "Phase-SLAM: Mobile Structured Light Illumination for Full Body 3D Scanning", 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1 October 2021 (2021-10-01), pages 1-8 *
XI ZHENG et al.: "Phase-SLAM: Phase Based Simultaneous Localization and Mapping for Mobile Structured Light Illumination Systems", IEEE Robotics and Automation Letters, 31 July 2022 (2022-07-31) *

Similar Documents

Publication Publication Date Title
US10636151B2 (en) Method for estimating the speed of movement of a camera
EP3367677B1 (en) Calibration apparatus, calibration method, and calibration program
KR102085228B1 (en) Imaging processing method and apparatus for calibrating depth of depth sensor
US20210232845A1 (en) Information processing apparatus, information processing method, and storage medium
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
US11082633B2 (en) Method of estimating the speed of displacement of a camera
CN107560603B (en) Unmanned aerial vehicle oblique photography measurement system and measurement method
EP3032818B1 (en) Image processing device
US11062475B2 (en) Location estimating apparatus and method, learning apparatus and method, and computer program products
JP2013534616A (en) Method and system for fusing data originating from image sensors and motion or position sensors
CN108332752B (en) Indoor robot positioning method and device
EP3633617A2 (en) Image processing device
CN114022639A (en) Three-dimensional reconstruction model generation method and system, electronic device and storage medium
CN111538029A (en) Vision and radar fusion measuring method and terminal
JP2014216813A (en) Camera attitude estimation device and program therefor
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN115511970B (en) Visual positioning method for autonomous parking
CN113658260A (en) Robot pose calculation method and system, robot and storage medium
CN113504385B (en) Speed measuring method and device for plural cameras
JP2022190173A (en) Position estimating device
KR102225321B1 (en) System and method for building road space information through linkage between image information and position information acquired from a plurality of image sensors
CN113124906A (en) Distance measurement method and device based on online calibration and electronic equipment
CN111986248A (en) Multi-view visual perception method and device and automatic driving automobile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination