CN118096883A - Object positioning method and device - Google Patents


Info

Publication number: CN118096883A
Application number: CN202410249103.5A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Prior art keywords: point, reference point, neighborhood, feature, points
Inventors: 周杨, 陈元吉, 全晓臣
Original and current assignee: Hangzhou Hikrobot Co Ltd
Application filed by Hangzhou Hikrobot Co Ltd; priority to CN202410249103.5A

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an object positioning method and device, relating to the technical field of data processing. The method comprises the following steps: obtaining a first point cloud of an object to be positioned, and selecting first reference points from surface points in the first point cloud; constructing first point pair features between each first reference point and first feature points in a first neighborhood of the first reference point, wherein the first feature points comprise: surface points and edge points in the first point cloud located within the first neighborhood; matching the first point pair features corresponding to each first reference point with second point pair features contained in a pre-constructed point cloud model of the object, and determining a matching pose corresponding to each first reference point based on the matching result; and positioning the object based on the matching pose corresponding to each first reference point. By applying the scheme provided by the embodiments of the present application, an object that a task executed by a robot is directed at can be positioned.

Description

Object positioning method and device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an object positioning method and apparatus.
Background
With the development of science and technology, industry has become increasingly intelligent, and it is increasingly common for robots to replace manual labor in executing various tasks. In some scenarios, a robot needs to locate the object a task is directed at before executing the task. For example, in a workpiece processing scenario, the robot needs to position the workpiece first and then, according to the positioning result, perform specific processing operations on the workpiece such as polishing, cutting, or assembling.
Therefore, in these scenarios, positioning the object a task is directed at is the basis for the robot to execute the task, and is of great significance for the robot to execute the task smoothly.
In view of the foregoing, it is desirable to provide a positioning solution to position an object that a task performed by a robot is directed at.
Disclosure of Invention
The embodiments of the present application aim to provide an object positioning method and device for positioning an object that a task executed by a robot is directed at. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present application provides an object positioning method, including:
Obtaining a first point cloud of an object to be positioned, and selecting a first reference point from surface points in the first point cloud;
constructing first point pair features between each first reference point and first feature points in a first neighborhood of the first reference point, wherein the first feature points comprise: surface points and edge points in the first point cloud located within the first neighborhood;
matching the first point pair features corresponding to each first reference point with second point pair features contained in a pre-constructed point cloud model of the object, and determining a matching pose corresponding to each first reference point based on the matching result, wherein the second point pair features are: point pair features between second reference points among the surface points in a second point cloud of the point cloud model and second feature points in a second neighborhood of each second reference point, the second feature points comprise: surface points and edge points in the second point cloud located within the second neighborhood, and the matching pose is: the relative pose of the object with respect to the point cloud model;
and positioning the object based on the matching pose corresponding to each first reference point.
In a second aspect, an embodiment of the present application provides an object positioning apparatus, including:
a first point cloud obtaining module, configured to obtain a first point cloud of an object to be positioned and select first reference points from surface points in the first point cloud;
a first point pair feature construction module, configured to construct first point pair features between each first reference point and first feature points in a first neighborhood of the first reference point, wherein the first feature points comprise: surface points and edge points in the first point cloud located within the first neighborhood;
a matching pose determining module, configured to match the first point pair features corresponding to each first reference point with second point pair features contained in a pre-constructed point cloud model of the object, and determine a matching pose corresponding to each first reference point based on the matching result, wherein the second point pair features are: point pair features between second reference points among the surface points in a second point cloud of the point cloud model and second feature points in a second neighborhood of each second reference point, the second feature points comprise: surface points and edge points in the second point cloud located within the second neighborhood, and the matching pose is: the relative pose of the object with respect to the point cloud model;
and an object positioning module, configured to position the object based on the matching pose corresponding to each first reference point.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
And a processor, configured to implement the method according to the first aspect when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
In view of the above, when the solution provided by the embodiments of the present application is applied to object positioning, a first point cloud of the object to be positioned is obtained, first reference points are selected from the surface points in the first point cloud, and first point pair features between each first reference point and the first feature points in its first neighborhood are constructed. The first point pair features corresponding to each first reference point are then matched with the second point pair features contained in the pre-constructed point cloud model of the object, so that a matching pose corresponding to each first reference point can be obtained, and the object can then be positioned based on the matching poses corresponding to the first reference points.
Of course, it is not necessary for any one product or method of practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from these drawings by those skilled in the art.
Fig. 1 is a flow chart of a first object positioning method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for constructing point pair features according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an object positioning process according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a point cloud model construction process according to an embodiment of the present application;
FIG. 5 is a flowchart of a second object positioning method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an object positioning apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
First, an execution subject of the scheme provided by the embodiment of the present application is described.
The execution subject of the scheme provided by the embodiments of the present application is any electronic device with data processing, storage, and other functions. For example, it may be a control device deployed on a robot, or a background server.
The following describes the object positioning scheme provided by the embodiment of the present application in detail.
Referring to fig. 1, a flowchart of a first object positioning method according to an embodiment of the present application is shown, where the method includes the following steps S101 to S104.
Step S101: a first point cloud of the object to be positioned is obtained, and a first reference point is selected from surface points in the first point cloud.
The object to be positioned may be an object that a task performed by the robot is directed at.
For example, when the task performed by the robot is a workpiece processing task, the object to be positioned may be the workpiece to be processed; when the task performed by the robot is a cargo handling task, the object to be positioned may be the cargo to be handled, and so on.
The first point cloud is a set of point data obtained by acquiring information of the object by using a measuring instrument.
The embodiment of the application is not limited to a specific manner of obtaining the first point cloud of the object to be positioned, and is described below by way of example.
In one case, a 3D depth camera may be employed to image an object to be positioned, thereby generating a first point cloud of the object to be positioned.
In another case, information acquisition can be performed on the object to be positioned by using a laser radar, a laser scanner and other devices, and a first point cloud of the object is obtained according to the acquired information.
In another case, a binocular camera can be used for image acquisition of the object to be positioned, and the first point cloud of the object is obtained based on the binocular positioning principle.
The surface points in the first point cloud refer to the points in the first point cloud corresponding to the outer surface of the object. The first point cloud also contains edge points, which are the points in the first point cloud corresponding to the edges of the object.
For example, if the object to be positioned is a cube, the surface points in the first point cloud are the points corresponding to the 6 faces of the object, and the edge points in the first point cloud are the points corresponding to the 12 edges of the object.
The embodiment of the application does not limit the way in which the first reference point is selected from the surface points in the first point cloud.
For example, the first reference points may be selected randomly from the surface points. Alternatively, one first reference point may be selected from the surface points, and the remaining first reference points then selected sequentially from that starting point at a preset sampling interval, where the sampling interval is the distance between a newly selected first reference point and the previously selected one. All of the surface points may also be used as first reference points, and so on; the options are not enumerated one by one.
In one implementation, points located near edge points may be selected from the surface points as first reference points.
This helps more edge points fall within the neighborhood of each first reference point, so that more first point pair features between reference points and edge points are constructed subsequently, as described in step S102 below.
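The interval-based selection described above can be sketched in a few lines. The function and parameter names here are illustrative, not taken from the patent, and the greedy pass over the point list is one plausible reading of "sampling interval":

```python
import math

def sample_reference_points(surface_points, interval):
    """Greedily pick reference points so each newly selected point is at
    least `interval` away from the previously selected one, as in the
    sampling-interval option described above (illustrative sketch)."""
    refs = []
    last = None
    for p in surface_points:
        if last is None or math.dist(p, last) >= interval:
            refs.append(p)
            last = p
    return refs
```

With an interval of 1.0, points spaced 0.4 or 0.2 apart are skipped and only points at least 1.0 from the last pick are kept.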
Step S102: first point pair features between each first reference point and first feature points in the first neighborhood of the first reference point are constructed.
Wherein the first feature points comprise: surface points and edge points in the first point cloud that are located within the first neighborhood.
The method of distinguishing surface points from edge points in the first feature points is described below.
In one embodiment, the normal vector of each first feature point may be determined; points whose normal vectors mutate are then determined to be edge points, and the remaining points are surface points. Here, a normal vector mutation means that a point's normal vector differs greatly from those of nearby points. The determination of the normal vector of a first feature point is detailed in step S201 of the embodiment shown in fig. 2 and is not described here.
In another embodiment, discontinuous points may be determined from the first feature points and taken as edge points; points other than the determined edge points are surface points.
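The normal-mutation test in the first embodiment can be sketched as follows. The radius and angle threshold are free parameters (the patent leaves them to the implementer), and the brute-force neighbor scan is for clarity only; a real implementation would use a spatial index:

```python
import math

def classify_points(points, normals, radius, angle_thresh_deg):
    """Label a point 'edge' if some neighbor within `radius` has a unit
    normal deviating by more than `angle_thresh_deg` degrees (the
    normal-mutation test described above); otherwise 'surface'."""
    def angle_deg(a, b):
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        return math.degrees(math.acos(dot))
    labels = []
    for i, (p, n) in enumerate(zip(points, normals)):
        is_edge = any(
            j != i
            and math.dist(p, q) <= radius
            and angle_deg(n, m) > angle_thresh_deg
            for j, (q, m) in enumerate(zip(points, normals))
        )
        labels.append("edge" if is_edge else "surface")
    return labels
```

On two perpendicular faces, the points adjacent to the crease are flagged as edge points while the interior points remain surface points.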
Since the first feature points include surface points and edge points, the first point pair features between the first reference point and the first feature points include: point pair features between the first reference point and surface points, and point pair features between the first reference point and edge points.
The size of the first neighborhood can be set by a worker according to experience or actual requirements. If a larger number of first feature points in the first neighborhood is desired to improve subsequent positioning accuracy, a larger first neighborhood can be set; if a smaller number of first feature points is desired to reduce the computation of the positioning process, a smaller first neighborhood can be set.
The manner in which the first point pair feature is constructed is described below.
First, the first feature points involved in point pair feature construction may be all of the first feature points in the first neighborhood, or may be partial feature points sampled from all of the first feature points.
The surface points and the edge points among the first feature points in the first neighborhood are described separately below.
For surface points:
In one embodiment, a first distance between the first reference point and a surface point within the first neighborhood may be obtained, and a first point pair feature between the first reference point and the surface point constructed based on the obtained first distance.
For example, the first distance may be used directly as the first point pair feature, or a feature value obtained based on the first distance may be used as the first point pair feature.
In another embodiment, the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood may be obtained, and a first point pair feature between the first reference point and the surface point constructed based on these two normal vectors. A specific embodiment is shown in step S202 of the embodiment shown in fig. 2 and is not described in detail here.
For edge points:
In one embodiment, a second distance between the first reference point and an edge point within the first neighborhood may be obtained, and a first point pair feature between the first reference point and the edge point constructed based on the obtained second distance.
For example, the second distance may be used directly as the first point pair feature, or a feature value obtained based on the second distance may be used as the first point pair feature.
In another embodiment, the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood may be obtained, and a first point pair feature between the first reference point and the edge point constructed based on the normal vector and the tangent vector. A specific embodiment is shown in step S203 of the embodiment shown in fig. 2 and is not described in detail here.
Step S103: and matching the first point pair characteristics corresponding to the first reference points with the second point pair characteristics contained in the point cloud model of the pre-constructed object, and determining the matching pose corresponding to the first reference points based on the matching result.
Wherein the second point pair features are: point pair features between second reference points among the surface points in the second point cloud used to construct the point cloud model and second feature points in a second neighborhood of each second reference point; the second feature points comprise: surface points and edge points in the second point cloud located within the second neighborhood; and the matching pose is: the relative pose of the object with respect to the point cloud model.
The above-mentioned point cloud model is constructed according to the second point cloud of the object, and the specific construction mode is described in the following embodiments, which will not be described in detail herein.
Specifically, for each first point pair feature corresponding to each first reference point, a target second point pair feature with the highest similarity to that first point pair feature is determined from the second point pair features contained in the pre-constructed point cloud model of the object; the matching pose corresponding to each first reference point is then determined based on the differences between the first point pair features corresponding to that first reference point and their corresponding target second point pair features.
The difference may be, for example, a feature distance between a first point pair feature and its corresponding target second point pair feature.
The target second reference point corresponding to the target second point pair feature with the highest similarity to a first point pair feature may be considered to correspond to the same point on the object as the first reference point. The differences between the first point pair features corresponding to a first reference point and their corresponding target second point pair features therefore reflect the relative pose of the first point cloud with respect to the point cloud model, that is, the relative pose of the object with respect to the point cloud model. Based on these differences, the matching pose corresponding to each first reference point can be determined reasonably and accurately.
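The highest-similarity lookup can be sketched as a nearest-neighbor search in feature space. The patent does not fix a similarity measure; Euclidean feature distance, which matches the "feature distance" mentioned above, is one plausible choice, and the function name is illustrative:

```python
def best_match(scene_feature, model_features):
    """Return the index of the model point pair feature most similar to
    `scene_feature`, using Euclidean distance in feature space as the
    (assumed) similarity measure; smaller distance means higher similarity."""
    def feat_dist(f, g):
        return sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5
    return min(range(len(model_features)),
               key=lambda i: feat_dist(scene_feature, model_features[i]))
```

In practice this linear scan would be replaced by a hash table over quantized features or a KD-tree, but the selection criterion is the same.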
Step S104: and positioning the object based on the matching pose corresponding to each first reference point.
As can be seen from the above steps, the first point pair features corresponding to different first reference points may differ, as may their corresponding target second point pair features; consequently, the matching poses obtained for the various first reference points may also differ.
Specifically, the object may be positioned based on the matching pose corresponding to each first reference point in the following manner.
In one embodiment, an accuracy representation value of a matching pose corresponding to each first reference point may be determined first, then a target pose of the object relative to the point cloud model is determined based on the matching pose corresponding to each first reference point and the accuracy representation value of the matching pose corresponding to each first reference point, and then the object is positioned based on the determined target pose. The detailed description will be given in the example shown in fig. 5, which will not be described in detail here.
Therefore, the object can be positioned according to the matching pose with higher accuracy, and the positioning accuracy can be improved.
In another embodiment, the average pose of the matching poses corresponding to the first reference points may be determined, and the object then positioned based on the determined average pose.
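The averaging variant can be sketched as a component-wise mean over pose parameters. Note the caveat: a component-wise mean is only a reasonable approximation when the candidate poses are close together, and robust averaging of full 3D rotations (e.g. via quaternions) is beyond this sketch; the pose encoding (x, y, z translation plus one rotation angle in radians) is an assumption for illustration:

```python
def average_pose(poses):
    """Component-wise mean of matching poses, each pose a tuple such as
    (x, y, z, theta). Valid as an approximation only for nearby poses."""
    n = len(poses)
    return tuple(sum(p[k] for p in poses) / n for k in range(len(poses[0])))
```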
In view of the above, when the solution provided by the embodiments of the present application is applied to object positioning, a first point cloud of the object to be positioned is obtained, first reference points are selected from the surface points in the first point cloud, and first point pair features between each first reference point and the first feature points in its first neighborhood are constructed. The first point pair features corresponding to each first reference point are then matched with the second point pair features contained in the pre-constructed point cloud model of the object, so that a matching pose corresponding to each first reference point can be obtained, and the object can then be positioned based on the matching poses corresponding to the first reference points.
The first feature points include both surface points and edge points in the first point cloud located within the first neighborhood; therefore, the first point pair features between the first reference point and the first feature points include point pair features between the first reference point and surface points as well as point pair features between the first reference point and edge points. On the one hand, because the normal vectors of the surface points of a quasi-planar object differ little, point pair features constructed solely from surface points are nearly identical and poorly discriminative; compared with obtaining the matching pose using only such features, this overcomes the low matching pose accuracy caused by feature mismatches. On the other hand, the edges of quasi-planar objects often exhibit symmetric or translational similarity, so point pair features constructed solely from edge points also have low discriminability, and edge points, being determined from the characteristics of points in their neighborhoods, are prone to detection errors.
Therefore, when the scheme provided by the embodiments of the present application is applied to object positioning, surface points and edge points in the object point cloud are considered together: the matching pose is obtained using both point pair features constructed from surface points and point pair features constructed from edge points, so the pose is constrained along two dimensions and the surface and edge structures of the object are both aligned with the model. The advantages of surface points and edge points are thus combined, improving the accuracy of the obtained matching pose, and hence the accuracy of object positioning, without additional positioning cost.
The construction of the first point pair feature is described in detail below by way of the embodiment shown in fig. 2.
Referring to fig. 2, a flow chart of a point pair feature construction method according to an embodiment of the present application is shown, where the method includes the following steps S201 to S203.
Step S201: and obtaining a normal vector of the first reference point, a normal vector of the surface point in the first neighborhood of the first reference point and a tangential vector of the edge point in the first neighborhood.
The manner in which the above-described various vectors are obtained is described below.
Normal vector of the first reference point:
A point cloud set in a third neighborhood of the first reference point may be determined, and then a vector perpendicular to that point cloud set is determined; this vector is the normal vector of the first reference point.
Normal vector of a surface point within the first neighborhood of the first reference point:
Similarly, a point cloud set in a fourth neighborhood of the surface point may be determined, and then a vector perpendicular to that point cloud set is determined; this vector is the normal vector of the surface point.
Tangent vector of an edge point in the first neighborhood:
An edge point cloud set in a fifth neighborhood of the edge point may be determined, and then a vector tangent to that edge point cloud set is determined; this vector is the tangent vector of the edge point.
The sizes of the above third, fourth, and fifth neighborhoods may be set by a worker based on experience and actual requirements.
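As a minimal sketch of these vector estimates: the normal of a locally planar neighborhood can be approximated by the cross product of two in-plane difference vectors (a stand-in for the least-squares plane fit usually applied to the full neighborhood set), and an edge tangent by the normalized direction between two neighboring edge points. Both helpers are illustrative, not the patent's prescribed method:

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the local plane through three neighborhood points;
    a simplified stand-in for fitting a plane to the whole neighborhood."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    return tuple(c / norm for c in n)

def edge_tangent(e_prev, e_next):
    """Unit tangent of an edge point, approximated by the direction
    between its two neighbors along the edge point set."""
    d = [b - a for a, b in zip(e_prev, e_next)]
    norm = sum(c * c for c in d) ** 0.5
    return tuple(c / norm for c in d)
```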
Step S202: and constructing a first point pair feature between the first reference point and the surface point in the first neighborhood based on the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood.
In particular, a first point-to-point feature between a first reference point and a surface point within a first neighborhood may be constructed in the following manner.
In one embodiment, at least one of the following pieces of information may be obtained based on the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood, and the first point pair feature between the first reference point and the surface point constructed from the obtained information:
1. A first included angle between the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood.
2. A second included angle between the normal vector of the first reference point and a first target vector.
Wherein the first target vector is: the direction vector between the first reference point and the surface point within the first neighborhood.
3. A third included angle between the normal vector of the surface point and the first target vector.
The first point pair feature constructed based on the above information may directly include the information, may be obtained by fusing the information, or may be obtained by otherwise processing the information and using the processing result; this is not limited in the embodiments of the present application.
It can be seen that, based on the normal vector of the first reference point and the normal vector of a surface point in the first neighborhood, various information can be used to construct the first point pair feature between the first reference point and the surface point, such as the angle between the two normal vectors and the angles between each normal vector and the target vector. The constructed feature thus reflects richer and more comprehensive information, providing more computational constraints in the subsequent pose matching process, which improves the accuracy of the determined matching pose and hence the accuracy of object positioning.
In another embodiment, the first distance between the first reference point and a surface point in the first neighborhood may be obtained, and the first point pair feature between them constructed based on the normal vector of the first reference point, the normal vector of the surface point, and the obtained first distance.
Specifically, at least one of the above pieces of information may be obtained based on the two normal vectors, and the first point pair feature constructed from the obtained information together with the first distance.
In this way, information from both the vector dimension and the distance dimension is considered when constructing the first point pair features between the first reference point and the surface points in the first neighborhood, so the constructed features reflect richer and more comprehensive information, providing more computational constraints in the subsequent pose matching process, which improves the accuracy of the determined matching pose and hence the accuracy of object positioning.
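A surface point pair feature packing the first distance together with the three included angles can be sketched as follows. This mirrors the classic point pair feature of Drost et al., which the description above resembles; the feature ordering, function name, and the assumption of unit normals are illustrative rather than the patent's specification:

```python
import math

def surface_ppf(p_ref, n_ref, p_srf, n_srf):
    """Point pair feature between a reference point (p_ref, unit normal
    n_ref) and a surface point (p_srf, unit normal n_srf):
    (first distance, angle n_ref vs direction d, angle n_srf vs d,
     angle n_ref vs n_srf). Angles are in radians."""
    d = [b - a for a, b in zip(p_ref, p_srf)]  # first target vector
    dist = math.sqrt(sum(c * c for c in d))
    def ang(a, b):
        na = math.sqrt(sum(c * c for c in a))
        nb = math.sqrt(sum(c * c for c in b))
        dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
    return (dist, ang(n_ref, d), ang(n_srf, d), ang(n_ref, n_srf))
```

For two coplanar points one unit apart with identical upward normals, the feature is (1, pi/2, pi/2, 0), illustrating why surface-only features on quasi-planar objects are nearly identical.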
Step S203: and constructing a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood.
In one embodiment, at least one of the following information may be obtained based on a normal vector of the first reference point and a tangent vector of the edge point in the first neighborhood, and a first point-to-feature between the first reference point and the edge point in the first neighborhood is constructed according to the obtained information:
1. A fourth included angle between the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood.
2. A fifth included angle between the normal vector of the first reference point and the second target vector.
Wherein the second target vector is: a direction vector between the first reference point and the edge point in the first neighborhood.
3. A sixth included angle between the tangent vector of the edge point and the second target vector.
It should be noted that the tangent vector of an edge point has two possible directions, so there are two included angles between the tangent vector of the edge point and the second target vector; in this step, the acute one of the two included angles may be selected as the sixth included angle.
Similarly, the first point pair feature constructed based on the above information may be a feature that directly includes the above information, a feature obtained by fusing the above information, or a feature obtained by performing other processing on the above information and taking the processing result, which is not limited in the embodiment of the present application.
It can be seen that based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood, various information such as an included angle between the normal vector and the tangent vector, an included angle between the normal vector and the target vector, an included angle between the tangent vector and the target vector and the like can be adopted to construct the first point pair feature between the first reference point and the edge point, so that the constructed first point pair feature can reflect richer and more comprehensive information, further more calculation constraints can be provided by the first point pair feature in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is further improved.
In another embodiment, a second distance between the first reference point and the edge point in the first neighborhood may be obtained, and a first point pair feature between the first reference point and the edge point in the first neighborhood is constructed based on a normal vector of the first reference point, a tangent vector of the edge point in the first neighborhood, and the obtained second distance.
Specifically, at least one of the above information may be obtained based on a normal vector of the first reference point and a tangent vector of the edge point in the first neighborhood, and a first point pair feature between the first reference point and the edge point in the first neighborhood may be constructed based on the obtained information and the second distance.
In this way, when the first point pair features of the first reference point and the edge point in the first neighborhood are constructed, the information of the vector dimension and the distance dimension is comprehensively considered, so that the constructed first point pair features can reflect richer and more comprehensive information, the first point pair features can provide more calculation constraints in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is improved.
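The surface + edge construction, including the acute-angle fold for the two possible tangent directions, can be sketched the same way. Names are illustrative; only the sixth included angle is folded to its acute value, as the text specifies.

```python
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def _angle(u, v):
    c = (u[0] * v[0] + u[1] * v[1] + u[2] * v[2]) / (_norm(u) * _norm(v))
    return math.acos(max(-1.0, min(1.0, c)))

def surface_edge_ppf(p_ref, n_ref, p_edge, t_edge):
    """First point pair feature between a first reference point and an
    edge point in its neighborhood: fourth/fifth/sixth included angles
    plus the second distance."""
    d = _sub(p_edge, p_ref)       # second target vector (direction vector)
    a6 = _angle(t_edge, d)        # two candidate angles exist for the tangent
    if a6 > math.pi / 2:          # keep the acute one as the sixth included angle
        a6 = math.pi - a6
    return (
        _angle(n_ref, t_edge),    # fourth included angle: normal vs. tangent
        _angle(n_ref, d),         # fifth included angle: normal vs. direction
        a6,                       # sixth included angle (acute)
        _norm(d),                 # second distance
    )
```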
From the above, on the one hand, the first point pair feature between the first reference point and the surface point in the first neighborhood can be constructed according to the normal vector of the first reference point and the normal vector of the surface point, and since the surface point is a point corresponding to the outer surface of the object, the normal vector can represent the feature of the outer surface of the object, so that the constructed first point pair feature can accurately reflect the feature of the outer surface of the object; on the other hand, the first point pair feature between the first reference point and the edge point in the first neighborhood can be constructed according to the normal vector of the first reference point and the tangent vector of the edge point, and the tangent vector of the edge point can represent the feature of the edge of the object because the edge point is the point corresponding to the edge of the object, so that the constructed first point pair feature can accurately reflect the feature of the edge of the object.
Therefore, the point pair characteristics of the outer surface of the object and the characteristics of the edge of the object can be obtained respectively aiming at the surface points and the edge points, and the rationality of the obtained first point pair characteristics is improved, so that the accuracy of the matching pose obtained based on the first point pair characteristics is improved.
An overall description of an object positioning process provided in an embodiment of the present application is described below with reference to fig. 3.
Referring to fig. 3, a schematic diagram of an object positioning process is provided in an embodiment of the present application.
The following describes the object positioning process through steps S1 to S6.
Step S1: a first point cloud of the object to be positioned is obtained, from which a first edge point is extracted.
It will be appreciated that after the first edge points are determined, the points other than the first edge points in the first point cloud are the first surface points.
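The patent does not specify here how the first edge points are extracted in step S1 (it only notes elsewhere that edge points are determined from the features of points in their neighborhoods). As a loudly-labeled assumption, the sketch below uses a simple one-sided-neighborhood heuristic — a point whose neighbors lie mostly to one side of it is flagged as an edge point — just to make the step concrete; the function name and thresholds are invented for illustration.

```python
import math

def extract_edge_points(points, radius=1.0, offset_ratio=0.5):
    """Flag point i as an edge point when the centroid of its neighbors
    within `radius` is displaced from it by more than offset_ratio * radius,
    i.e. the neighborhood lies mostly to one side of the point."""
    edges = []
    for i, p in enumerate(points):
        nbrs = [q for j, q in enumerate(points)
                if j != i and math.dist(p, q) <= radius]
        if not nbrs:
            continue
        centroid = tuple(sum(q[k] for q in nbrs) / len(nbrs) for k in range(3))
        if math.dist(p, centroid) > offset_ratio * radius:
            edges.append(i)
    return edges
```

On a straight strip of points, only the endpoints have one-sided neighborhoods, so only they are flagged.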
Step S2: selecting a first reference point from the first point cloud, selecting an edge feature point from the first edge points, and selecting a surface feature point from first surface points in the first point cloud except the first edge points.
Step S3: the method vector is calculated for the first reference point and the surface feature point, and the tangent vector is calculated for the edge feature point.
Step S4: and constructing the point pair characteristics of the surface and the edge, and constructing the point pair characteristics of the surface and the surface to obtain a first point pair characteristic of the first reference point.
The above-mentioned surface + edge point pair feature is a first point pair feature between a first reference point and an edge point in its neighborhood, and the surface + surface point pair feature is a first point pair feature between the first reference point and a surface point in its neighborhood.
Step S5: and obtaining the matching pose corresponding to each first reference point based on the first point pair characteristic of each first reference point and the second point pair characteristic of the second reference point in the point cloud model.
And the first point pair features corresponding to the first reference points and the second point pair features can be matched, and the matching pose corresponding to the first reference points is determined based on the matching result. The specific embodiments have been described in the foregoing embodiments, and are not repeated here.
Step S6: and carrying out post-processing and fraction verification on the obtained matching pose to obtain a positioning result of the object.
The post-processing may include performing non-maximum suppression processing, clustering processing, and the like on the matching pose, and the score verification may be determining the target pose of the object relative to the point cloud model based on the accuracy representation value of the matching pose corresponding to each first reference point, where a detailed manner of determining the accuracy representation value is described in the embodiment shown in fig. 5, and is not described in detail herein.
After the target pose of the object relative to the point cloud model is determined, the positioning result of the object can be obtained based on the target pose.
The following describes the construction mode of the point cloud model through the steps A to D.
Step A: a second point cloud of the object is obtained for constructing a point cloud model.
The second point cloud may be obtained from an existing object point cloud model, or may be obtained by using the method for obtaining the first point cloud described in the embodiment shown in fig. 1, which is not described herein.
And (B) step (B): a second reference point is selected from the surface points within the second point cloud.
The selection manner of the second reference point may be obtained based on the selection manner of the first reference point, which is not described herein.
Step C: and constructing a second point pair characteristic between each second reference point and a second characteristic point in a second adjacent area.
The construction method of the second point pair feature may be obtained based on the construction method of the first point pair feature, which is not described herein.
Step D: and storing each constructed second point pair characteristic to obtain a point cloud model containing each constructed second point pair characteristic.
Specifically, the constructed second point pair features may be saved by using a data structure such as an octree or a KD (K-Dimensional) tree, so as to obtain a point cloud model including the constructed second point pair features.
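Step D mentions storing the second point pair features in an octree or KD tree. As a hedged sketch of the same store-and-look-up idea, the fragment below uses a hash map keyed by quantized feature values instead — a common alternative in point-pair-feature matching; the quantization steps and names are assumptions, not values from the patent.

```python
import math
from collections import defaultdict

ANGLE_STEP = math.radians(12.0)   # assumed angular quantization step
DIST_STEP = 0.05                  # assumed distance quantization step

def quantize(feature):
    """Map an (angle, angle, angle, distance) point pair feature to a
    discrete key so that similar features fall into the same bucket."""
    a1, a2, a3, d = feature
    return (int(a1 / ANGLE_STEP), int(a2 / ANGLE_STEP),
            int(a3 / ANGLE_STEP), int(d / DIST_STEP))

def build_model(second_pair_features):
    """Store each (second reference point index, feature) pair under the
    quantized key of its feature to form the point cloud model store."""
    model = defaultdict(list)
    for ref_idx, feature in second_pair_features:
        model[quantize(feature)].append((ref_idx, feature))
    return model

def match(model, first_feature):
    """Return the candidate second point pair features whose quantized
    key matches that of a first point pair feature."""
    return model.get(quantize(first_feature), [])
```

Matching a scene (first) feature then reduces to one hash lookup per point pair.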
Referring to fig. 4, a schematic diagram of a point cloud model construction flow is provided in an embodiment of the present application.
The following describes the above-mentioned point cloud model construction flow through step P1-step P5.
Step P1: obtaining a second point cloud of the object for constructing a point cloud model, and extracting a second edge point from the first point cloud.
Step P2: selecting a second reference point from the second point cloud, selecting an edge feature point from the second edge points, and selecting a surface feature point from second surface points in the second point cloud except for the second edge points.
Step P3: the method vector is calculated for the second reference point and the surface feature point, and the tangent vector is calculated for the edge feature point.
Step P4: constructing a surface + edge point pair feature and constructing a surface + surface point pair feature.
The steps P1-P4 may be obtained based on the steps S1-S4, and the difference is merely that the first point cloud, the first reference point, the first edge point, and the first surface point are replaced by the second point cloud, the second reference point, the second edge point, and the second surface point, which are not described herein again.
The point pair features constructed in the step are the second point pair features of each second reference point.
Step P5: and storing the second point pair characteristics to obtain a point cloud model of the object containing the second point pair characteristics.
Therefore, when pose matching is carried out subsequently, the matching pose can be restrained by adopting the points of two dimensions of the surface point and the edge point, the advantages of the surface point and the edge point are combined, the accuracy of the obtained matching pose is improved, and the accuracy of object positioning is further improved.
On the basis of the embodiment shown in fig. 1, when the object is positioned according to the matching pose corresponding to each first reference point, the alignment reference point corresponding to each first reference point can be obtained based on the matching pose, so that the target pose of the object relative to the point cloud model can be determined based on the difference between the first reference point and the alignment reference point corresponding to the first reference point, and the object is positioned based on the target pose. In view of the above, the embodiment of the present application provides a second object positioning method.
Referring to fig. 5, a flowchart of a second object positioning method according to an embodiment of the present application is shown, where the method includes the following steps S501 to S508.
Step S501: a first point cloud of the object to be positioned is obtained, and a first reference point is selected from surface points in the first point cloud.
Step S502: first point pair features between each first reference point and first feature points in the first neighborhood of the first reference point are constructed.
Step S503: and matching the first point pair characteristics corresponding to the first reference points with the second point pair characteristics contained in the point cloud model of the pre-constructed object, and determining the matching pose corresponding to the first reference points based on the matching result.
The steps S501 to S503 are the same as the steps S101 to S103 in the embodiment shown in fig. 1, and are not described herein.
Step S504: and aligning the second reference points to the first reference points according to the matching pose corresponding to the first reference points to obtain aligned reference points corresponding to the first reference points.
In this step, the alignment of the second reference points to each first reference point means that the second reference points are spatially transformed based on the matching pose corresponding to the first reference points, so as to obtain new second reference points.
Specifically, the matching pose corresponding to the first reference point may include pose transformation matrices such as a rotation matrix and a translation matrix of the object relative to the point cloud model, and the second reference point is spatially transformed according to the pose transformation matrices, so that a new second reference point can be obtained.
The new second reference point may be referred to as an aligned reference point after the second reference point is aligned to the first reference point.
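Aligning the second reference points by the matching pose amounts to applying the rotation matrix R and translation vector t to each point, p' = R·p + t. A minimal sketch, with R as a 3×3 nested list and t as a 3-vector (names assumed):

```python
def align_points(points, rotation, translation):
    """Spatially transform the second reference points by the matching
    pose to obtain the alignment reference points: p' = R @ p + t."""
    aligned = []
    for x, y, z in points:
        aligned.append(tuple(
            rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z
            + translation[i]
            for i in range(3)))
    return aligned
```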
Step S505: and determining alignment degree characterization values between each first reference point and the alignment reference point corresponding to each first reference point.
In one case, the alignment degree characterization value includes at least one of the following information:
1. The ratio between the first number and the total number of alignment reference points corresponding to the first reference point.
Wherein the first number is: the number of points of the alignment reference points corresponding to the first reference point that coincide with the first reference point.
As can be seen from the above steps, the second reference points are spatially transformed to obtain the alignment reference points, and the obtained alignment reference points may include points which coincide with the first reference points as well as points which do not coincide with the first reference points.
2. The distance between the first reference point and the alignment reference point corresponding to the first reference point.
The larger the first number is, the more alignment reference points coincide with the first reference points, and thus the higher the accuracy of the matching pose corresponding to the first reference point is; conversely, the smaller the first number is, the fewer alignment reference points coincide with the first reference points, and the lower the accuracy of the matching pose is. Similarly, the smaller the distance is, the closer the alignment reference point is to the first reference point, and the higher the accuracy of the matching pose corresponding to the first reference point is; conversely, the larger the distance is, the farther the alignment reference point is from the first reference point, and the lower the accuracy of the matching pose is.
Therefore, the first quantity and the distance can be used for representing the alignment degree between each first reference point and the alignment reference point corresponding to each first reference point, and a more comprehensive and accurate alignment degree representation value can be obtained based on the first quantity and the distance.
Step S506: and determining an accuracy characterization value of the matching pose corresponding to each first reference point based on the determined alignment degree characterization value.
As can be seen from the above step S505, the alignment degree characterization value reflects the alignment degree between each first reference point and the alignment reference point corresponding to each first reference point. Therefore, the larger the alignment degree representation value is, the more alignment reference points are overlapped with or close to the first reference point are indicated, and the higher the matching pose accuracy corresponding to the first reference point is further indicated, and vice versa.
Based on this, the accuracy representation value is proportional to the alignment degree representation value, so that the larger the alignment degree representation value corresponding to the first reference point is, the larger the accuracy representation value of the matching pose corresponding to the first reference point is, and vice versa.
Step S507: and determining the target pose of the object relative to the point cloud model based on the matching pose corresponding to each first reference point and the accuracy characterization value of the matching pose corresponding to each first reference point.
The embodiment of the present application is not limited to a specific manner of determining the target pose based on the accuracy characterization value, and will be described below by way of example.
Specifically, the target pose of the object with respect to the point cloud model may be determined in the following manner.
In one embodiment, from the matching poses corresponding to the first reference points, a matching pose with the highest corresponding accuracy representation value can be determined and used as a target pose of the object relative to the point cloud model.
In another embodiment, from the matching poses corresponding to the first reference points, a preset number of matching poses with the highest corresponding accuracy representation values can be determined, and an average pose of the determined matching poses is calculated and used as a target pose of the object relative to the point cloud model.
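Both selection strategies can be sketched as follows. For simplicity the poses here are flat parameter vectors; note that elementwise averaging of rotation parameters is only illustrative — a real implementation would average rotations in a rotation-aware way (e.g. via quaternions). The function name is an assumption.

```python
def select_target_pose(poses, scores, top_k=1):
    """Pick the matching pose with the highest accuracy characterization
    value (top_k == 1), or the elementwise average of the top_k
    highest-scoring matching poses."""
    ranked = sorted(zip(scores, poses), key=lambda sp: sp[0], reverse=True)
    chosen = [pose for _, pose in ranked[:top_k]]
    return tuple(sum(component) / len(chosen) for component in zip(*chosen))
```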
Step S508: the object is positioned based on the determined target pose.
Therefore, when the object is positioned according to the matching pose corresponding to each first reference point, the alignment reference point corresponding to each first reference point can be obtained based on the matching pose, so that the accuracy representation value of the matching pose corresponding to each first reference point can be determined based on the difference between the first reference point and the alignment reference point corresponding to the first reference point, the accuracy representation value reflects the accuracy of the matching pose corresponding to each first reference point, the target pose of the object relative to the point cloud model can be accurately determined based on the accuracy representation value, and the accuracy of the object positioning based on the determined target pose is improved.
Corresponding to the object positioning method, the embodiment of the application also provides an object positioning device.
Referring to fig. 6, a schematic structural diagram of an object positioning device according to an embodiment of the present application is provided, where the device includes the following modules:
A first point cloud obtaining module 601, configured to obtain a first point cloud of an object to be positioned, and select a first reference point from surface points in the first point cloud;
A first point pair feature construction module 602, configured to construct first point pair features between each first reference point and a first feature point in a first neighborhood of the first reference point, where the first feature point includes: surface points and edge points in the first point cloud located in the first neighborhood;
The matching pose determining module 603 is configured to match the first point pair features corresponding to each first reference point with second point pair features contained in a pre-built point cloud model of the object, and determine a matching pose corresponding to each first reference point based on a matching result, where the second point pair feature is: a point pair feature between a second reference point selected from surface points in a second point cloud of the point cloud model and a second feature point in a second neighborhood of the second reference point, where the second feature point includes: surface points and edge points in the second point cloud located in the second neighborhood, and the matching pose is: the relative pose of the object with respect to the point cloud model;
and the object positioning module 604 is configured to position the object based on the matching pose corresponding to each first reference point.
In view of the above, when the solution provided by the embodiment of the present application is applied to object positioning, a first point cloud of an object to be positioned is obtained, a first reference point is selected from surface points in the first point cloud, and first point pair features between each first reference point and a first feature point in a first neighborhood thereof are constructed, so that the first point pair features corresponding to each first reference point can be matched with second point pair features contained in a pre-constructed point cloud model of the object, the matching pose corresponding to each first reference point can be obtained, and the object can then be successfully positioned based on the matching pose corresponding to each first reference point.
Wherein the first feature point includes a surface point and an edge point in the first point cloud that are located in the first neighborhood, and thus, the first point pair feature between the first reference point and the first feature point includes: a point pair feature between the first reference point and the surface point and a point pair feature between the first reference point and the edge point. On one hand, because the normal directions of the surface points of a quasi-planar object differ little, the point pair features constructed based on surface points alone are almost identical and have low distinguishability; compared with obtaining the matching pose by only adopting point pair features constructed based on surface points, this overcomes the defect of low matching pose accuracy caused by mismatched features. On the other hand, the edges of plane-like objects often have symmetric similarity or translational similarity, so the point pair features constructed based on edge points alone also have low distinguishability, and the edge points, which are determined according to the features of the points in their neighborhoods, are prone to errors; compared with obtaining the matching pose by only adopting point pair features constructed based on edge points, this likewise overcomes the resulting loss of accuracy.
Therefore, when the scheme provided by the embodiment of the application is applied to object positioning, the surface points and the edge points in the object point cloud are comprehensively considered, the matching pose is obtained by respectively adopting the point pair features constructed based on the surface points and the point pair features constructed based on the edge points, the matching pose is constrained by the point pair features of the two dimensions of surface points and edge points, and the surface and edge structures of the object are constrained to align with the model, so that the advantages of the surface points and the edge points are combined, the accuracy of the obtained matching pose is improved without adding extra positioning cost, and the object positioning accuracy is improved.
In one embodiment of the present application, the first point pair feature construction module 602 includes:
The first point pair characteristics corresponding to each first reference point are constructed according to the following submodules:
the vector obtaining submodule is used for obtaining a normal vector of the first reference point, a normal vector of the surface point in the first neighborhood of the first reference point and a tangential vector of the edge point in the first neighborhood;
The first construction submodule is used for constructing a first point pair characteristic between a first reference point and a surface point in the first neighborhood based on a normal vector of the first reference point and a normal vector of the surface point in the first neighborhood;
And the second construction submodule is used for constructing a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood.
From the above, on the one hand, the first point pair feature between the first reference point and the surface point in the first neighborhood can be constructed according to the normal vector of the first reference point and the normal vector of the surface point, and since the surface point is a point corresponding to the outer surface of the object, the normal vector can represent the feature of the outer surface of the object, so that the constructed first point pair feature can accurately reflect the feature of the outer surface of the object; on the other hand, the first point pair feature between the first reference point and the edge point in the first neighborhood can be constructed according to the normal vector of the first reference point and the tangent vector of the edge point, and the tangent vector of the edge point can represent the feature of the edge of the object because the edge point is the point corresponding to the edge of the object, so that the constructed first point pair feature can accurately reflect the feature of the edge of the object.
Therefore, the point pair characteristics of the outer surface of the object and the characteristics of the edge of the object can be obtained respectively aiming at the surface points and the edge points, and the rationality of the obtained first point pair characteristics is improved, so that the accuracy of the matching pose obtained based on the first point pair characteristics is improved.
In one embodiment of the present application, the first construction submodule is specifically configured to obtain at least one of the following information based on a normal vector of a first reference point and a normal vector of a surface point in the first neighborhood, and construct a first point pair feature between the first reference point and the surface point in the first neighborhood according to the obtained information: a first included angle between a normal vector of a first reference point and a normal vector of a surface point in the first neighborhood; a second included angle between the normal vector of the first reference point and the first target vector; a third included angle between the normal vector of the surface point and the first target vector; wherein the first target vector is: a direction vector between a first reference point and a surface point within the first neighborhood.
It can be seen that based on the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood, various information such as an included angle between the normal vectors, an included angle between the normal vector and the target vector and the like can be adopted to construct the first point pair feature between the first reference point and the surface point, so that the constructed first point pair feature can reflect richer and more comprehensive information, further more calculation constraints can be provided for the first point pair feature in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is improved.
In one embodiment of the present application, the first constructing sub-module is specifically configured to obtain a first distance between a first reference point and the surface point in the first neighborhood; and construct a first point pair feature between the first reference point and the surface point in the first neighborhood based on the normal vector of the first reference point, the normal vector of the surface point in the first neighborhood and the obtained first distance.
In this way, when the first point pair features of the first reference point and the surface points in the first neighborhood are constructed, the information of the vector dimension and the distance dimension is comprehensively considered, so that the constructed first point pair features can reflect richer and more comprehensive information, further more calculation constraints can be provided by the first point pair features in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is improved.
In one embodiment of the present application, the first construction submodule is specifically configured to obtain at least one of the following information based on a normal vector of a first reference point and a tangent vector of an edge point in the first neighborhood, and construct a first point pair feature between the first reference point and the edge point in the first neighborhood according to the obtained information: a fourth included angle between the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood; a fifth included angle between the normal vector of the first reference point and the second target vector; a sixth included angle between the tangent vector of the edge point and the second target vector; wherein the second target vector is: a direction vector between a first reference point and an edge point in the first neighborhood.
It can be seen that based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood, various information such as an included angle between the normal vector and the tangent vector, an included angle between the normal vector and the target vector, an included angle between the tangent vector and the target vector and the like can be adopted to construct the first point pair feature between the first reference point and the edge point, so that the constructed first point pair feature can reflect richer and more comprehensive information, further more calculation constraints can be provided by the first point pair feature in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is further improved.
In one embodiment of the present application, the first constructing sub-module is specifically configured to obtain a second distance between a first reference point and the edge point in the first neighborhood; and construct a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point, the tangent vector of the edge point in the first neighborhood and the obtained second distance.
In this way, when the first point pair features of the first reference point and the edge point in the first neighborhood are constructed, the information of the vector dimension and the distance dimension is comprehensively considered, so that the constructed first point pair features can reflect richer and more comprehensive information, the first point pair features can provide more calculation constraints in the subsequent pose matching process, the accuracy of determining the matching pose is improved, and the accuracy of object positioning is improved.
In one embodiment of the present application, the object positioning module 604 is specifically configured to align the second reference point to each first reference point according to the matching pose corresponding to each first reference point, so as to obtain an aligned reference point corresponding to each first reference point; determining alignment degree characterization values between each first reference point and alignment reference points corresponding to each first reference point; determining an accuracy characterization value of the matching pose corresponding to each first reference point based on the determined alignment degree characterization value; determining a target pose of the object relative to the point cloud model based on the matching poses corresponding to the first reference points and the accuracy characterization values of the matching poses corresponding to the first reference points; the object is positioned based on the determined target pose.
Therefore, when the object is positioned according to the matching pose corresponding to each first reference point, the aligned reference points corresponding to each first reference point can first be obtained based on the matching pose. An accuracy characterization value of the matching pose corresponding to each first reference point can then be determined from the difference between the first reference point and its corresponding aligned reference points. Since this value reflects how accurate each matching pose is, the target pose of the object relative to the point cloud model can be determined accurately on its basis, which improves the accuracy of positioning the object from the determined target pose.
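An illustrative sketch of this scoring step, assuming a rigid pose (R, t), a brute-force nearest-neighbor search, and a coincidence tolerance `tol`; the function name `pose_accuracy` and the specific combination of inlier ratio and mean distance are assumptions of this sketch:

```python
import numpy as np

def pose_accuracy(model_pts, scene_pts, R, t, tol=0.01):
    """Score a candidate matching pose (R, t): transform the model reference
    points, then characterize alignment with (a) the ratio of aligned points
    within `tol` of a scene point and (b) the mean nearest distance."""
    aligned = model_pts @ R.T + t                                  # aligned reference points
    # nearest scene point for every aligned point (brute force for clarity)
    d = np.linalg.norm(aligned[:, None, :] - scene_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    ratio = float(np.mean(nearest < tol))                          # coincidence ratio
    mean_dist = float(nearest.mean())                              # distance term
    return ratio, mean_dist

# Identity pose on identical clouds aligns perfectly: ratio 1.0, distance 0.0.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
score = pose_accuracy(pts, pts, np.eye(3), np.zeros(3))
```

In practice the nearest-neighbor search would use a spatial index (e.g., a k-d tree) rather than the O(NM) distance matrix shown here.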
In one embodiment of the present application, the alignment degree characterization value includes at least one of the following information:
A ratio between a first number and a total number of aligned reference points corresponding to a first reference point, wherein the first number is: the number of points, which coincide with the first reference point, in the alignment reference points corresponding to the first reference point; a distance between a first reference point and an alignment reference point corresponding to the first reference point.
The larger the first number, the more aligned reference points coincide with the first reference point, and thus the higher the accuracy of the matching pose corresponding to the first reference point; conversely, the smaller the first number, the fewer aligned reference points coincide with the first reference point, and thus the lower that accuracy. Likewise, the smaller the distance, the closer the aligned reference point is to the first reference point and the higher the accuracy of the matching pose; the larger the distance, the farther the aligned reference point is from the first reference point and the lower that accuracy.
Therefore, the first quantity and the distance can be used for representing the alignment degree between each first reference point and the alignment reference point corresponding to each first reference point, and a more comprehensive and accurate alignment degree representation value can be obtained based on the first quantity and the distance.
In one embodiment of the present application, the matching pose determining module 603 is specifically configured to determine, for each first point pair feature corresponding to each first reference point, a target second point pair feature with highest similarity to the first point pair feature from second point pair features included in a point cloud model of the object that is built in advance; and determining the matching pose corresponding to each first reference point based on the difference between the first point pair feature corresponding to each first reference point and the target second point pair feature corresponding to the first point pair feature.
The target second reference point corresponding to the target second point pair feature with the highest similarity to a first point pair feature can be considered to correspond to the same point on the object as the first reference point. The difference between the first point pair feature corresponding to the first reference point and its target second point pair feature therefore reflects the relative pose of the first point cloud with respect to the point cloud model, that is, the relative pose of the object with respect to the point cloud model. On this basis, the matching pose corresponding to each first reference point can be determined reasonably and accurately.
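As a sketch of the "highest similarity" selection, assuming similarity is measured as the (negated) Euclidean distance between feature vectors; the function name `best_matching_feature` is an assumption of this sketch, and a real system would more likely use a quantized hash lookup than a linear scan:

```python
import numpy as np

def best_matching_feature(first_feat, second_feats):
    """Return the index of the target second point pair feature with the
    highest similarity (smallest Euclidean distance) to a first point
    pair feature."""
    d = np.linalg.norm(second_feats - first_feat, axis=1)
    return int(np.argmin(d))

# Three model (second) features; the query is closest to the last one.
model_feats = np.array([[0.1, 0.2], [1.0, 1.0], [0.5, 0.5]])
idx = best_matching_feature(np.array([0.45, 0.55]), model_feats)
```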
In one embodiment of the present application, the point cloud model is constructed in the following manner: obtaining a second point cloud of the object for constructing the point cloud model; selecting a second reference point from the surface points in the second point cloud; constructing second point pair features between each second reference point and second feature points in a second adjacent area of the second reference point; and storing each constructed second point pair characteristic to obtain the point cloud model containing each constructed second point pair characteristic.
Therefore, during subsequent pose matching, the matching pose can be constrained by points of two kinds, surface points and edge points, combining the advantages of both, which improves the accuracy of the obtained matching pose and further improves the accuracy of object positioning.
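The offline model-construction step described above can be sketched as building a lookup table of second point pair features. This is only an illustration of the general idea, assuming the features are quantized with a step size and stored in a hash table keyed by the quantized tuple; the function name `build_ppf_model` and the quantization scheme are assumptions of this sketch:

```python
import numpy as np
from collections import defaultdict

def build_ppf_model(features, step=0.1):
    """Offline point cloud model construction: quantize each second point
    pair feature and store its index in a hash table keyed by the
    quantized tuple, so matching later is a constant-time lookup."""
    table = defaultdict(list)
    for i, f in enumerate(features):
        key = tuple(np.floor(np.asarray(f) / step).astype(int))
        table[key].append(i)
    return table

# Two nearby features fall into the same bucket; the third is separate.
model = build_ppf_model([[0.12, 0.31], [0.13, 0.32], [0.91, 0.05]], step=0.1)
```

Storing indices rather than the raw features lets the matcher recover the originating second reference point for each matched feature.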
In the technical scheme of the application, related operations such as acquisition, storage, use, processing, transmission, provision, disclosure and the like of the personal information of the user are performed under the condition that the authorization of the user is obtained.
The embodiment of the application also provides an electronic device, as shown in fig. 7, including:
a memory 701 for storing a computer program;
The processor 702 is configured to implement the object positioning method when executing the program stored in the memory 701.
The electronic device may further comprise a communication bus and/or a communication interface, through which the processor 702, the communication interface, and the memory 701 communicate with each other.
The communication bus of the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk storage device. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.

The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the above object positioning method.
In yet another embodiment of the present application, a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the above-described object localization methods of the above-described embodiments is also provided.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a Solid State Disk (SSD), or the like.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus, electronic device and storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only needed.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (14)

1. A method of locating an object, the method comprising:
Obtaining a first point cloud of an object to be positioned, and selecting a first reference point from surface points in the first point cloud;
constructing a first point pair feature between each first reference point and a first feature point in a first adjacent area of the first reference point, wherein the first feature point comprises: surface points and edge points in the first point cloud located in the first neighborhood;
matching the first point pair features corresponding to the first reference points with second point pair features contained in a pre-constructed point cloud model of the object, and determining matching poses corresponding to the first reference points based on the matching result, wherein a second point pair feature is: a point pair feature between a second reference point among the surface points in a second point cloud of the point cloud model and a second feature point in a second neighborhood of the second reference point, wherein the second feature point comprises: surface points and edge points in the second point cloud located in the second neighborhood, and the matching pose is: the relative pose of the object with respect to the point cloud model;
and positioning the object based on the matching pose corresponding to each first reference point.
2. The method of claim 1, wherein constructing a first point pair feature between each first reference point and a first feature point within its first neighborhood comprises:
The first point pair features corresponding to each first reference point are constructed as follows:
Obtaining a normal vector of a first reference point, a normal vector of a surface point in a first neighborhood of the first reference point and a tangential vector of an edge point in the first neighborhood;
Constructing a first point pair feature between a first reference point and a surface point in the first neighborhood based on a normal vector of the first reference point and a normal vector of the surface point in the first neighborhood;
And constructing a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood.
3. The method of claim 2, wherein constructing a first point pair feature between a first reference point and a surface point in the first neighborhood based on a normal vector of the first reference point and a normal vector of the surface point in the first neighborhood comprises:
based on the normal vector of the first reference point and the normal vector of the surface point in the first neighborhood, at least one of the following information is obtained, and a first point pair feature between the first reference point and the surface point in the first neighborhood is constructed according to the obtained information:
a first included angle between a normal vector of a first reference point and a normal vector of a surface point in the first neighborhood;
a second included angle between the normal vector of the first reference point and the first target vector;
a third included angle between the normal vector of the surface point and the first target vector;
Wherein the first target vector is: a direction vector between a first reference point and a surface point within the first neighborhood.
4. The method of claim 2, wherein constructing a first point pair feature between a first reference point and a surface point in the first neighborhood based on a normal vector of the first reference point and a normal vector of the surface point in the first neighborhood comprises:
Obtaining a first distance between a first reference point and a surface point in the first neighborhood;
And constructing a first point pair feature between the first reference point and the surface point in the first neighborhood based on the normal vector of the first reference point, the normal vector of the surface point in the first neighborhood and the obtained first distance.
5. The method of claim 2, wherein constructing a first point pair feature between a first reference point and an edge point in the first neighborhood based on a normal vector of the first reference point and a tangent vector of the edge point in the first neighborhood comprises:
Based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood, at least one of the following information is obtained, and a first point pair feature between the first reference point and the edge point in the first neighborhood is constructed according to the obtained information:
a fourth included angle between the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood;
a fifth included angle between the normal vector of the first reference point and the second target vector;
a sixth included angle between the tangent vector of the edge point and the second target vector;
wherein the second target vector is: a direction vector between a first reference point and an edge point in the first neighborhood.
6. The method of claim 2, wherein constructing a first point pair feature between a first reference point and an edge point in the first neighborhood based on a normal vector of the first reference point and a tangent vector of the edge point in the first neighborhood comprises:
Obtaining a second distance between a first reference point and an edge point in the first neighborhood;
and constructing a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point, the tangent vector of the edge point in the first neighborhood and the obtained second distance.
7. The method of claim 1, wherein locating the object based on the matching pose corresponding to each first reference point comprises:
Aligning the second reference points to the first reference points according to the matching pose corresponding to the first reference points to obtain aligned reference points corresponding to the first reference points;
Determining alignment degree characterization values between each first reference point and alignment reference points corresponding to each first reference point;
determining an accuracy characterization value of the matching pose corresponding to each first reference point based on the determined alignment degree characterization value;
Determining a target pose of the object relative to the point cloud model based on the matching poses corresponding to the first reference points and the accuracy characterization values of the matching poses corresponding to the first reference points;
the object is positioned based on the determined target pose.
8. The method of claim 7, wherein the alignment characterization value includes at least one of the following information:
a ratio between a first number and a total number of aligned reference points corresponding to a first reference point, wherein the first number is: the number of points, which coincide with the first reference point, in the alignment reference points corresponding to the first reference point;
A distance between a first reference point and an alignment reference point corresponding to the first reference point.
9. The method according to any one of claims 1-8, wherein the matching the first point pair feature corresponding to each first reference point with the second point pair feature included in the pre-constructed point cloud model of the object, and determining the matching pose corresponding to each first reference point based on the matching result, includes:
for each first point pair feature corresponding to each first reference point, determining a target second point pair feature with highest similarity with the first point pair feature from second point pair features contained in a point cloud model of the object constructed in advance;
And determining the matching pose corresponding to each first reference point based on the difference between the first point pair feature corresponding to each first reference point and the target second point pair feature corresponding to the first point pair feature.
10. The method according to any one of claims 1-8, wherein the point cloud model is constructed as follows:
Obtaining a second point cloud of the object for constructing the point cloud model;
selecting a second reference point from the surface points in the second point cloud;
Constructing second point pair features between each second reference point and second feature points in a second adjacent area of the second reference point;
And storing each constructed second point pair characteristic to obtain the point cloud model containing each constructed second point pair characteristic.
11. An object positioning device, the device comprising:
The device comprises a first point cloud obtaining module, a second point cloud obtaining module and a first point cloud locating module, wherein the first point cloud obtaining module is used for obtaining a first point cloud of an object to be located and selecting a first reference point from surface points in the first point cloud;
the first point pair feature construction module is used for constructing first point pair features between each first reference point and first feature points in a first neighborhood of the first reference point, wherein the first feature points comprise: surface points and edge points in the first point cloud located in the first neighborhood;
The matching pose determining module is configured to match a first point pair feature corresponding to each first reference point with second point pair features included in a pre-constructed point cloud model of the object, and determine a matching pose corresponding to each first reference point based on the matching result, wherein a second point pair feature is: a point pair feature between a second reference point among the surface points in a second point cloud of the point cloud model and a second feature point in a second neighborhood of the second reference point, wherein the second feature point comprises: surface points and edge points in the second point cloud located in the second neighborhood, and the matching pose is: the relative pose of the object with respect to the point cloud model;
And the object positioning module is used for positioning the object based on the matching pose corresponding to each first reference point.
12. The apparatus of claim 11, wherein:
The first point pair feature construction module comprises the following submodules, which construct the first point pair feature corresponding to each first reference point: the vector obtaining submodule is used for obtaining a normal vector of the first reference point, a normal vector of the surface point in the first neighborhood of the first reference point, and a tangent vector of the edge point in the first neighborhood; the first construction submodule is used for constructing a first point pair feature between a first reference point and a surface point in the first neighborhood based on a normal vector of the first reference point and a normal vector of the surface point in the first neighborhood; the second construction submodule is used for constructing a first point pair feature between the first reference point and the edge point in the first neighborhood based on the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood;
Or (b)
The first construction submodule is specifically configured to obtain at least one of the following information based on a normal vector of a first reference point and a normal vector of a surface point in the first neighborhood, and construct a first point pair feature between the first reference point and the surface point in the first neighborhood according to the obtained information: a first included angle between a normal vector of a first reference point and a normal vector of a surface point in the first neighborhood; a second included angle between the normal vector of the first reference point and the first target vector; a third included angle between the normal vector of the surface point and the first target vector; wherein the first target vector is: a direction vector between a first reference point and a surface point within the first neighborhood;
Or (b)
The first construction submodule is specifically configured to obtain a first distance between a first reference point and a surface point in the first neighborhood; and construct a first point pair feature between the first reference point and the surface point in the first neighborhood based on a normal vector of the first reference point, a normal vector of the surface point in the first neighborhood, and the obtained first distance;
Or (b)
The first construction submodule is specifically configured to obtain at least one of the following information based on a normal vector of a first reference point and a tangent vector of an edge point in the first neighborhood, and construct a first point pair feature between the first reference point and the edge point in the first neighborhood according to the obtained information: a fourth included angle between the normal vector of the first reference point and the tangent vector of the edge point in the first neighborhood; a fifth included angle between the normal vector of the first reference point and the second target vector; a sixth included angle between the tangent vector of the edge point and the second target vector; wherein the second target vector is: a direction vector between a first reference point and an edge point in the first neighborhood;
Or (b)
The first construction submodule is specifically configured to obtain a second distance between a first reference point and an edge point in the first neighborhood; and construct a first point pair feature between the first reference point and the edge point in the first neighborhood based on a normal vector of the first reference point, a tangent vector of the edge point in the first neighborhood, and the obtained second distance;
Or (b)
The object positioning module is specifically configured to align the second reference point to each first reference point according to the matching pose corresponding to each first reference point, so as to obtain an aligned reference point corresponding to each first reference point; determining alignment degree characterization values between each first reference point and alignment reference points corresponding to each first reference point; determining an accuracy characterization value of the matching pose corresponding to each first reference point based on the determined alignment degree characterization value; determining a target pose of the object relative to the point cloud model based on the matching poses corresponding to the first reference points and the accuracy characterization values of the matching poses corresponding to the first reference points; positioning the object based on the determined target pose;
Or (b)
The alignment degree characterization value includes at least one of the following information:
A ratio between a first number and a total number of aligned reference points corresponding to a first reference point, wherein the first number is: the number of points, which coincide with the first reference point, in the alignment reference points corresponding to the first reference point; a distance between a first reference point and an alignment reference point corresponding to the first reference point;
Or (b)
The matching pose determining module is specifically configured to determine, for each first point pair feature corresponding to each first reference point, a target second point pair feature with highest similarity to the first point pair feature from second point pair features included in a point cloud model of the object constructed in advance; determining a matching pose corresponding to each first reference point based on the difference between the first point pair feature corresponding to each first reference point and the target second point pair feature corresponding to the first point pair feature;
Or (b)
The point cloud model is constructed in the following manner:
Obtaining a second point cloud of the object for constructing the point cloud model; selecting a second reference point from the surface points in the second point cloud; constructing second point pair features between each second reference point and second feature points in a second adjacent area of the second reference point; and storing each constructed second point pair characteristic to obtain the point cloud model containing each constructed second point pair characteristic.
13. An electronic device, comprising:
a memory for storing a computer program;
A processor for implementing the method of any of claims 1-10 when executing a program stored on a memory.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-10.
CN202410249103.5A 2024-03-05 2024-03-05 Object positioning method and device Pending CN118096883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410249103.5A CN118096883A (en) 2024-03-05 2024-03-05 Object positioning method and device

Publications (1)

Publication Number Publication Date
CN118096883A true CN118096883A (en) 2024-05-28

Family

ID=91157548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410249103.5A Pending CN118096883A (en) 2024-03-05 2024-03-05 Object positioning method and device

Country Status (1)

Country Link
CN (1) CN118096883A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination