CN117746381A - Pose estimation model configuration method and pose estimation method - Google Patents

Pose estimation model configuration method and pose estimation method

Info

Publication number
CN117746381A
Authority
CN
China
Prior art keywords
pose
model
data
model parameter
parameter set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311698845.8A
Other languages
Chinese (zh)
Inventor
牛群
赵杰亮
李宏坤
樊钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Migration Technology Co ltd
Original Assignee
Beijing Migration Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Migration Technology Co ltd
Priority to CN202311698845.8A
Publication of CN117746381A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The disclosure relates to a pose estimation model configuration method, a pose estimation method and device, an electronic device, and a storage medium. The pose estimation model configuration method comprises the following steps: acquiring a real pose data set of a target object in a target scene, and acquiring an estimated pose data set of the target object in the target scene based on model parameter sets of a pose estimation model; comparing the estimated pose data set with the real pose data set to obtain pose comparison data; and calculating objective function values based on the pose comparison data and an objective function model, taking the model parameter set corresponding to the minimum objective function value as the optimal model parameter set of the pose estimation model, and configuring the pose estimation model based on the optimal model parameter set. The pose estimation method comprises: performing pose estimation on three-dimensional image data of a target scene containing the target object, using a pose estimation model configured by the pose estimation model configuration method of the present disclosure, to obtain the pose of the target object.

Description

Pose estimation model configuration method and pose estimation method
Technical Field
The present disclosure relates to the technical fields of 3D vision, robotics, and the like, and in particular to a pose estimation model configuration method, a pose estimation method and device, an electronic device, and a storage medium.
Background
With the continuous development of machine vision technology, vision sensors have been adopted as the main sensing elements in a large number of industrial scenarios.
2D vision has gradually become mature and stable, with good results in image recognition, face recognition, object detection, character recognition, defect detection, and other fields. However, 2D vision cannot acquire the spatial coordinates of an object, which limits it in applications that require stereo information, such as robot navigation, virtual reality, and three-dimensional reconstruction.
3D vision can directly acquire depth information of an object (a depth image or a point cloud) and can recognize texture-less and occluded objects. Where robotics and 3D vision are combined, pose estimation from point cloud data is a common task. Some tasks require determining the pose of an object in a real environment from the object's three-dimensional model. Since practical scenes typically involve objects of different shapes, a pose estimation algorithm should adapt to objects of different shapes and sizes.
In the field of robotics, pose estimation from point cloud data is a common task. Since actual scenes typically involve different objects with different shapes, the pose estimation algorithm must be adjusted accordingly. Many pose estimation algorithms based on point cloud data have been proposed in the related art. However, the effectiveness of these algorithms depends largely on the configuration of their parameters, which must be tuned specifically for different objects. This inherent constraint weakens the overall adaptability of such pose estimation algorithms.
Disclosure of Invention
The disclosure provides a pose estimation model configuration method, a pose estimation device, electronic equipment and a storage medium.
According to one aspect of the present disclosure, there is provided a pose estimation model configuration method, including: acquiring a real pose data set of a target object in a target scene, and acquiring an estimated pose data set of the target object in the target scene based on a model parameter set of a pose estimation model; comparing the estimated pose data set with the real pose data set to obtain pose comparison data; and calculating an objective function value based on the pose comparison data and the objective function model, obtaining a model parameter set corresponding to the minimum value of the objective function value as an optimal model parameter set of the pose estimation model, and configuring the pose estimation model based on the optimal model parameter set.
According to the pose estimation model configuration method of at least one embodiment of the present disclosure, the objective function model includes a pose difference acquisition module that acquires a pose difference based on the pose comparison data and a pose penalty term acquisition module that generates an invalid pose penalty term based on the pose difference, and the objective function model calculates the objective function value based on the pose difference and the invalid pose penalty term.
According to at least one embodiment of the present disclosure, a pose estimation model configuration method obtains a real pose data set of a target object in a target scene, including: acquiring three-dimensional image data of a target scene configured with a target object to obtain a three-dimensional image dataset; and acquiring a real pose data set of the target object based on the three-dimensional image data set.
According to at least one embodiment of the present disclosure, a pose estimation model configuration method obtains an estimated pose data set of the target object in the target scene based on a model parameter set of a pose estimation model, including: sampling a model parameter set from a parameter search space of the pose estimation model to obtain a model parameter set; and estimating the pose of the target object in the three-dimensional image data set based on the model parameter set to obtain an estimated pose data set corresponding to the model parameter set.
According to a pose estimation model configuration method of at least one embodiment of the present disclosure, comparing the estimated pose data set and the true pose data set to obtain pose comparison data includes: obtaining pose errors of each estimated pose data and the corresponding real pose data based on the estimated pose data in the estimated pose data set and the corresponding real pose data in the real pose data set; obtaining a set of the pose errors and using the set of the pose errors as the pose comparison data; the estimated pose data in the estimated pose data set and the corresponding real pose data are obtained based on the same three-dimensional image data in the three-dimensional image data set.
According to at least one embodiment of the present disclosure, a pose estimation model configuration method calculates an objective function value based on the pose comparison data and an objective function model, including: the pose difference is obtained based on pose errors in the set of pose errors for calculating the objective function value.
According to the pose estimation model configuration method of at least one embodiment of the present disclosure, the pose error of the estimated pose data and the corresponding real pose data is obtained by the following method: calculating position errors and rotation errors of the estimated pose data and the corresponding real pose data; and obtaining the pose error based on the position error and the rotation error.
According to at least one embodiment of the present disclosure, a pose estimation model configuration method calculates an objective function value based on the pose comparison data and an objective function model, including: judging whether the pose error of the estimated pose data and the corresponding real pose data is greater than or equal to a pose error threshold; if not, counting the pose error into the pose difference; if so, not counting it into the pose difference and increasing the invalid pose penalty term; and calculating the objective function value based on the pose difference and the invalid pose penalty term.
According to a pose estimation model configuration method of at least one embodiment of the present disclosure, model parameter set sampling is performed from a parameter search space of a pose estimation model to obtain model parameter sets, including: obtaining the model parameters of the (n+1)th model parameter set from the parameter search space based on the model parameters of the 1st through nth model parameter sets, until N model parameter sets are obtained, wherein n is a natural number greater than or equal to 1, and N is the total number of model parameter sets.
According to at least one embodiment of the present disclosure, a pose estimation model configuration method performs pose estimation on a target object in the three-dimensional image data set based on the model parameter sets, and obtains an estimated pose data set corresponding to each model parameter set, including: performing pose estimation on the target object in the three-dimensional image data set for each model parameter set in turn, in the order in which the model parameter sets were obtained, to obtain an estimated pose data set corresponding to each model parameter set.
The pose estimation model configuration method according to at least one embodiment of the present disclosure further includes: performing parameter feature judgment on the obtained (n+1)th model parameter set, and retaining or discarding the (n+1)th model parameter set based on the judgment result.
According to the pose estimation model configuration method of at least one embodiment of the present disclosure, a sampling algorithm set and a pruning algorithm set are preconfigured; a sampling algorithm and a pruning algorithm are called from the sampling algorithm set and the pruning algorithm set to perform model parameter set sampling and parameter feature judgment, and a plurality of collections of model parameter sets are obtained, for obtaining the optimal model parameter set of the pose estimation model based on the different collections.
According to the pose estimation model configuration method of at least one embodiment of the present disclosure, the three-dimensional image data is point cloud data.
According to another aspect of the present disclosure, there is provided a pose estimation method, including: and performing pose estimation on the three-dimensional image data of the target scene configured with the target object by using the pose estimation model configured by the pose estimation model configuration method of any one embodiment of the present disclosure to obtain the pose of the target object.
According to still another aspect of the present disclosure, there is provided a pose estimation model configuration apparatus including: the real pose data set acquisition module acquires a real pose data set of a target object in a target scene; the pose estimation model is used for acquiring an estimated pose data set of the target object in the target scene based on a model parameter set; the pose comparison data acquisition module is used for comparing the estimated pose data set with the real pose data set to obtain pose comparison data; and an optimal model parameter set acquisition module, wherein the optimal model parameter set acquisition module comprises an objective function model, the objective function model calculates an objective function value based on the pose comparison data, and the optimal model parameter set acquisition module acquires a model parameter set corresponding to a minimum value of the objective function value as an optimal model parameter set of the pose estimation model for configuring the pose estimation model.
According to still another aspect of the present disclosure, there is provided a pose estimation device for performing pose estimation on three-dimensional image data of a target scene configured with a target object to obtain a pose of the target object, wherein the pose estimation device is a pose estimation model configured in any one embodiment of the present disclosure.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a memory storing execution instructions; and a processor executing the execution instructions stored by the memory, causing the processor to execute the pose estimation model configuration method of any of the embodiments of the present disclosure and/or to execute the pose estimation method of any of the embodiments of the present disclosure.
According to still another aspect of the present disclosure, there is provided a readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the pose estimation model configuration method of any one embodiment of the present disclosure and/or to implement the pose estimation method of any one embodiment of the present disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is an overall flow diagram of a pose estimation model configuration method of the present disclosure.
Fig. 2 is a schematic diagram of ROI area filtering.
Fig. 3 is a flow diagram of a method of acquiring a real pose data set in some embodiments of the present disclosure.
Fig. 4 is a flow diagram of a method of acquiring an estimated pose dataset in some embodiments of the present disclosure.
Fig. 5 is a flow diagram of obtaining comparative pose data in some embodiments of the present disclosure.
Fig. 6 is a flow diagram of computing objective function values in some embodiments of the present disclosure.
Fig. 7 shows a calculation flow of the objective function value of one embodiment of the present disclosure.
Fig. 8 is a block diagram schematically illustrating a pose estimation model configuration apparatus implemented in hardware using a processor, according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant content and not limiting of the present disclosure. It should be further noted that, for convenience of description, only a portion relevant to the present disclosure is shown in the drawings.
In addition, embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. The technical aspects of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present disclosure.
The use of cross-hatching and/or shading in the drawings is typically used to clarify the boundaries between adjacent components. As such, the presence or absence of cross-hatching or shading does not convey or represent any preference or requirement for a particular material, material property, dimension, proportion, commonality between illustrated components, and/or any other characteristic, attribute, property, etc. of a component, unless indicated. In addition, in the drawings, the size and relative sizes of elements may be exaggerated for clarity and/or descriptive purposes. While the exemplary embodiments may be variously implemented, the specific process sequences may be performed in a different order than that described. For example, two consecutively described processes may be performed substantially simultaneously or in reverse order from that described. Moreover, like reference numerals designate like parts.
When an element is referred to as being "on" or "over", "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element, there are no intervening elements present. For this reason, the term "connected" may refer to physical connections, electrical connections, and the like, with or without intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The pose estimation model configuration method, the pose estimation model configuration device, the pose estimation device, and the like of the present disclosure are described in detail below with reference to fig. 1 to 8.
Fig. 1 is an overall flow diagram of a pose estimation model configuration method of the present disclosure.
Referring to fig. 1, the pose estimation model configuration method of the present disclosure includes:
s100, acquiring a real pose data set of a target object in a target scene, and acquiring an estimated pose data set of the target object in the target scene based on a model parameter set of a pose estimation model.
And S200, comparing the estimated pose data set with the real pose data set to obtain pose comparison data.
And S300, calculating an objective function value based on the pose comparison data and the objective function model, obtaining a model parameter set corresponding to the minimum value of the objective function value as an optimal model parameter set of the pose estimation model, and configuring the pose estimation model based on the optimal model parameter set.
The target object described in the pose estimation model configuration method of the present disclosure may be a simulated object model (e.g., CAD model) or a physical object.
The three-dimensional image data described in the pose estimation model configuration method of the present disclosure may be point cloud data.
Pose estimation algorithms based on point cloud data are widely used in robotics. Point cloud data is a set of discrete points in three-dimensional space, acquired by a lidar or a depth camera, that provides geometric and topological information about the environment. Pose estimation from point cloud data helps the robot perceive and understand its surroundings, enabling tasks such as obstacle avoidance, target tracking, and grasping of target objects (e.g., various types of workpieces, boxes, and the like).
The pose estimation model configuration method is particularly suitable for configuring a feature-based point cloud pose estimation model (namely a point cloud pose estimation algorithm/a point cloud pose estimation method).
The feature-based point cloud pose estimation method needs to provide feature information of objects in a scene, matches the feature information of the objects with features in the scene, and finally estimates the pose of the objects in the scene.
Feature-based point cloud pose estimation methods can be classified into a point cloud pose estimation method based on template matching (e.g., a LINEMOD algorithm: a template matching method combining an RGB image and a depth image) and a point cloud pose estimation method based on shape features (including a method based on global shape feature description, a method based on local shape feature description, etc.).
The pose estimation model is a machine learning model, and hyperparameter optimization plays a vital role in achieving its optimal performance and generalization.
Hyperparameters are parameters that are not updated during machine learning training; they are configuration settings outside the model itself, such as the learning rate, regularization strength, and network architecture. Configuring a model's hyperparameters is a technically demanding task; existing configuration methods include grid search and random search.
Grid search: exhaustively evaluates the performance of the model for each possible combination of hyperparameters within a predefined range.
Random search: samples hyperparameters randomly from their respective ranges, enabling a more efficient search.
To address the limitations of grid search and random search, more advanced hyperparameter optimization techniques have been developed in the related art. One example is Bayesian optimization, which uses probabilistic models to build a surrogate model of the objective function; it evaluates promising hyperparameter configurations in the order predicted by the surrogate, effectively reducing the search space.
The related art provides automated hyperparameter optimization tools, such as Optuna, NNI, and wandb, that offer interfaces for defining a hyperparameter search space.
In the pose estimation model configuration method of the present disclosure, a real pose data set of the target object in the target scene is first acquired, and estimated pose data sets of the target object in the target scene are obtained based on the model parameter sets (i.e., a plurality of model parameter sets) of the pose estimation model to be configured. The model parameter sets of the present disclosure may be configured through a parameter search space.
Target objects with different poses can be arranged in the same target scene, and three-dimensional image data (point cloud data is taken as the example hereinafter) acquired for each arrangement, yielding a set containing a plurality of point clouds, i.e., a point cloud data set. Pose labeling of the target object is then performed on each point cloud in the set, producing the real pose data of the target object corresponding to each point cloud and thereby the real pose data set.
To label each point cloud in the point cloud data set with the pose of the target object, the CAD point cloud model of the target object can be registered to the target object in the point cloud data, yielding the real pose data of the target object in the target scene, for example as sketched below.
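As an illustration of this registration step, the following is a minimal sketch using point-to-point ICP in Open3D. Open3D is not named in the disclosure; the library choice, the file names, and the 5 mm correspondence distance are assumptions for illustration, and a rough initial pose (e.g., set interactively in a viewer) is presumed available.

```python
import numpy as np
import open3d as o3d

# CAD model of the target object (sampled to points) and one captured scene.
model = o3d.io.read_point_cloud("object_cad.ply")
scene = o3d.io.read_point_cloud("scene_000.ply")

init = np.eye(4)  # rough initial alignment, e.g. placed manually in a viewer

# Refine the alignment with point-to-point ICP; the 0.005 m correspondence
# distance is an illustrative choice, not a value from the disclosure.
result = o3d.pipelines.registration.registration_icp(
    model, scene, max_correspondence_distance=0.005, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

ground_truth_pose = result.transformation  # 4x4 pose of the object in the scene
print(ground_truth_pose)
```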
In the present disclosure, point cloud data of a target object in a target scene may be acquired by the following method.
(1) Acquisition in a simulation environment: randomly place the CAD model of the object to be matched (the target object) in the simulation environment, and acquire point cloud data with a virtual camera.
(2) Acquisition with a 3D camera in a real environment: randomly place any number of target objects in the real environment, and acquire point cloud data with the 3D camera.
In some embodiments of the present disclosure, ROI (Region of Interest) filtering may be performed on the collected point cloud data. The ROI may be a cuboid (possibly a cube) region, and ROI filtering removes the parts of the cloud that do not need attention; see Fig. 2 for a schematic diagram of ROI filtering. The point cloud data (comprising the object point cloud and the scene point cloud) is then labeled against the object model (the target object): the object model and the point cloud data can be opened in visualization software, and the object model is matched, based on shape features, to the target object in the point cloud data, yielding the pose of the target object in the point cloud data, i.e., the real pose data.
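The ROI filtering step can likewise be sketched with Open3D's axis-aligned bounding-box crop; the ROI bounds below are placeholders, not values from the disclosure.

```python
import open3d as o3d

scene = o3d.io.read_point_cloud("scene_000.ply")

# Cuboid ROI; min/max bounds (in meters, camera frame) are illustrative only.
roi = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=(-0.3, -0.3, 0.2),
    max_bound=(0.3, 0.3, 1.0),
)
scene_roi = scene.crop(roi)  # keep only points inside the cuboid ROI
```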
In step S100 of the pose estimation model configuration method of the present disclosure, the pose estimation model (i.e., the pose estimation model to be configured) obtains estimated pose data of the target object in the target scene based on each model parameter set, yielding an estimated pose data set. Thus, when there are a plurality of model parameter sets, a plurality of estimated pose data sets are obtained, with each model parameter set corresponding to one estimated pose data set.
Further, in the pose estimation model configuration method of the present disclosure, the estimated pose data set and the real pose data set are compared in step S200 to obtain the pose comparison data.
It should be noted that, when there are a plurality of model parameter sets in the model parameter set, each estimated pose data set is compared with the real pose data set, respectively, so as to obtain a plurality of sets of pose comparison data. The pose comparison data reflects the difference between the estimated pose and the true pose.
Further, in the pose estimation model configuration method of the present disclosure, through step S300, the objective function value is calculated based on the pose comparison data and the objective function model obtained above, and the model parameter set corresponding to the minimum value of the objective function value is obtained as the optimal model parameter set of the pose estimation model, and the pose estimation model is configured based on the optimal model parameter set.
Because the pose estimation model generates one estimated pose data set per model parameter set, and each estimated pose data set is compared with the real pose data set to produce one set of pose comparison data, calculating objective function values from the pose comparison data and the objective function model yields one objective function value per model parameter set. The model parameter set corresponding to the minimum of these objective function values is taken as the optimal model parameter set of the pose estimation model, completing its model parameter configuration, as the sketch below illustrates. The pose estimation model configuration method is particularly suitable for configuring the hyperparameters of the pose estimation model.
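Schematically, selecting the optimal model parameter set reduces to an argmin over the candidate parameter sets, as in the following sketch; `estimate_poses` and `objective` are hypothetical placeholders for the pose estimation model and the objective function model (a code sketch of the latter appears further below).

```python
def select_best_parameter_set(parameter_sets, scenes, real_pose_sets):
    """Return the model parameter set with the minimum objective function value."""
    best_params, best_value = None, float("inf")
    for params in parameter_sets:
        # One estimated pose data set per model parameter set (step S100).
        estimated = [estimate_poses(scene, params) for scene in scenes]
        # Compare with the real pose data set and score the result (S200/S300).
        value = objective(estimated, real_pose_sets)
        if value < best_value:
            best_params, best_value = params, value
    return best_params
```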
In some embodiments of the present disclosure, preferably, the objective function model described above in the present disclosure includes a pose difference acquisition module that acquires a pose difference based on the pose comparison data and a pose penalty term acquisition module that generates an invalid pose penalty term based on the pose difference, and the objective function model of the present disclosure calculates the objective function value described above based on the pose difference and the invalid pose penalty term.
Fig. 3 is a flow diagram of a method of acquiring a real pose data set in some embodiments of the present disclosure.
Referring to fig. 3, in some embodiments of the present disclosure, in step S100 described above, acquiring a real pose data set of a target object in a target scene includes:
s102, acquiring three-dimensional image data of a target scene configured with a target object to obtain a three-dimensional image data set.
S104, acquiring a real pose data set of the target object based on the three-dimensional image data set.
Wherein the three-dimensional image dataset comprises a plurality of sets of three-dimensional image data. In each set of three-dimensional image data, one or more target objects may be arranged in the target scene, and when one target object is arranged, the target objects in each set of three-dimensional image data have different poses.
When two or more target objects are arranged in the target scene, it is preferable that the pose of each target object is arranged to be different from each other.
Fig. 4 is a flow diagram of a method of acquiring an estimated pose dataset in some embodiments of the present disclosure.
Referring to fig. 4, in some embodiments of the present disclosure, in step S100 described above, obtaining an estimated pose data set of a target object in a target scene based on a set of model parameters of a pose estimation model includes:
s110, sampling a model parameter set from a parameter search space of the pose estimation model to obtain a model parameter set.
And S120, carrying out pose estimation on the target object in the three-dimensional image data set based on the model parameter sets to obtain an estimated pose data set corresponding to the model parameter sets (one estimated pose data set corresponding to one model parameter set).
The parameter search space of the pose estimation model in the pose estimation model configuration method of the present disclosure will be described taking find_surface_model (a shape-feature-based point cloud pose estimation algorithm) as an example.
The parameter search space may be configured based on the following eight parameters:
(1) RelSamplingDistance: the scene sampling distance, relative to the diameter of the surface model.
(2) KeyPointFraction: the fraction of sampled scene points used as key points.
(3) max_overlap_dist_rel: specifies the minimum distance between the centers of the axis-aligned bounding boxes of two matches, relative to the diameter of the object. Once a high-scoring object is found, any other match whose bounding-box center is too close to the center of the first object is suppressed.
(4) pose_ref_num_steps: the number of iterations of dense pose refinement. Increasing the number of iterations yields a more accurate pose at the cost of runtime; however, once convergence is reached, increasing the number of steps further cannot improve accuracy. If dense pose refinement is disabled, this parameter is ignored.
(5) pose_ref_sub_sampling: sets the rate of scene points used for dense pose refinement. For example, if the value is set to 5, every 5th point in the scene is used for pose refinement. This parameter allows a simple trade-off between the speed and accuracy of pose refinement: increasing the value reduces the number of points used, giving faster but less accurate refinement; decreasing it has the opposite effect. If dense pose refinement is disabled, this parameter is ignored.
(6) pose_ref_dist_threshold_rel: sets the distance threshold for dense pose refinement, relative to the diameter of the surface model. Only scene points closer to the object than this distance are used for the optimization; other scene points are ignored. If dense pose refinement is disabled, this parameter is ignored.
(7) pose_ref_scoring_dist_rel: sets the distance threshold used for scoring, relative to the surface model diameter.
(8) pose_ref_use_scene_normals: enables or disables the use of scene normals for pose refinement.
The parameter search space may be configured according to the recommended values and ranges given in the algorithm description, and may take the form of a Gaussian distribution, a uniform distribution, a set of optional values, and so on; the specific parameters (Parameters) and their search spaces (Search Space) can be tabulated accordingly.
the configuration of the parameter search space may be performed according to actual requirements, and an exemplary method for configuring a single parameter may be:
the first parameter in the above table, relSamplingDistance, has 5 recommended values, default value of 0.05, and specified range of (0, 1), can be designed into discrete even distribution according to the recommended value, and is designed into [0.03,0.1] range, and the step length is 0.01. The step size and the range can be selected randomly within a specified range, can be designed as a [0.03,0.1] range, can be designed as a [0.01,0.99] range, can be designed as a 0.001 ] range, can be designed as a 0.01 range, and the like.
It may also be designed as a Gaussian distribution, taking the default value as the mean μ and searching within 3 standard deviations σ. Still taking the first parameter RelSamplingDistance as an example, with σ = 0.01 the search range is 0.02 (0.05 - 3 × 0.01) to 0.08 (0.05 + 3 × 0.01), and the search space is designed as X ~ N(0.05, 0.01), with standard deviation σ = 0.01.
For an option-set parameter, the alternatives (all possible cases) form a set; for example, if a parameter can be true or false, the search space is designed as {true, false}. A search space can also be designed as a logarithmic distribution, an exponential distribution, a random distribution, and so on.
The parameter search spaces of all the parameters (one parameter search space for each parameter) constitute the parameter search space of the entire algorithm (pose estimation algorithm).
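As a concrete illustration, such a search space can be expressed through the trial interface of Optuna (one of the tools mentioned above). The sketch below is illustrative only: the RelSamplingDistance range mirrors the example in the text, the remaining bounds are assumptions, and the μ ± 3σ Gaussian design is expressed as the bounded range it induces, since a Gaussian prior is not an Optuna primitive.

```python
import optuna

def sample_parameter_set(trial: optuna.Trial) -> dict:
    """Sample one model parameter set from the parameter search space."""
    return {
        # Discrete uniform over [0.03, 0.1] with step 0.01, as in the text;
        # the Gaussian design N(0.05, 0.01) would instead use the bounded
        # range [0.02, 0.08] induced by mu +/- 3*sigma.
        "RelSamplingDistance": trial.suggest_float(
            "RelSamplingDistance", 0.03, 0.10, step=0.01),
        # Bounds below are illustrative assumptions, not values from the text.
        "KeyPointFraction": trial.suggest_float("KeyPointFraction", 0.1, 1.0),
        "pose_ref_num_steps": trial.suggest_int("pose_ref_num_steps", 1, 20),
        # Option-set parameter: the search space is the set {true, false}.
        "pose_ref_use_scene_normals": trial.suggest_categorical(
            "pose_ref_use_scene_normals", ["true", "false"]),
    }
```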
Fig. 5 is a flow diagram of obtaining comparative pose data in some embodiments of the present disclosure.
In some embodiments of the present disclosure, in step S200 described above in the present disclosure, comparing the estimated pose data set and the true pose data set to obtain pose comparison data includes:
s202, obtaining pose errors of each estimated pose data and the corresponding real pose data based on the estimated pose data in the estimated pose data set and the corresponding real pose data in the real pose data set.
S204, acquiring a set of pose errors and using the set of pose errors as pose comparison data.
Wherein the estimated pose data in the estimated pose data set and the corresponding real pose data are obtained based on the same three-dimensional image data in the three-dimensional image data set.
Since each of the sets of model parameters obtains an estimated pose data set, multiple sets of pose comparison data will be obtained.
In some embodiments of the present disclosure, in step S300 described above, calculating the objective function value based on the pose comparison data and the objective function model includes: the pose difference is obtained based on pose errors in the set of pose errors for calculating the objective function value.
In some embodiments of the present disclosure, preferably, in step S202 described above, the pose error of the estimated pose data and its corresponding real pose data is obtained by:
calculating position errors and rotation errors of the estimated pose data and the corresponding real pose data; and obtaining a pose error based on the position error and the rotation error.
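By way of illustration, the following sketch computes such a pose error, assuming the representation Pose = [P, R] with P = [px, py, pz] and R = [rx, ry, rz] used in the formulas below; the per-axis absolute-error form is one plausible realization of the "error sum of position and rotation", an assumption rather than a formula stated verbatim in the disclosure.

```python
def pose_error(est_p, est_r, gt_p, gt_r, symmetric_z=False):
    """Pose error = position error + rotation error (per-axis absolute sums)."""
    position_error = sum(abs(e - g) for e, g in zip(est_p, gt_p))
    # For objects rotationally symmetric about Z, only rx and ry are compared.
    axes = 2 if symmetric_z else 3
    rotation_error = sum(abs(e - g) for e, g in zip(est_r[:axes], gt_r[:axes]))
    return position_error + rotation_error
```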
Fig. 6 is a flow diagram of computing objective function values in some embodiments of the present disclosure.
Referring to fig. 6, in some embodiments of the present disclosure, in step S300 of the present disclosure, calculating an objective function value based on pose comparison data and an objective function model includes:
S302, judging whether the pose error of the estimated pose data and the corresponding real pose data is greater than or equal to a pose error threshold (which may be a preset pose error threshold).
S304, if not, counting the pose error into the pose difference; if so, not counting it into the pose difference and increasing the invalid pose penalty term.
S306, calculating an objective function value based on the pose difference and the invalid pose penalty term.
The goal of the pose estimation model, i.e. the pose estimation algorithm, is to calculate the pose of all object models in the target scene, so that the pose error can be used as a basic evaluation criterion.
Preferably, the above-described objective function model (i.e., objective function) of the present disclosure is constructed as:
O(Poses) = L(Poses, Poses_gt) + P(N_I).
The objective function of the present disclosure comprises a loss function and an accuracy penalty term (the invalid pose penalty term).
Loss function: the loss function is the sum of the errors between all matched pose results (estimated pose data) and the ground-truth values (real pose data). To eliminate serious match errors, the present disclosure sets a position threshold and a rotation threshold: if the error is greater than the threshold, the pose is deemed invalid and is not included in the loss function. The present disclosure expresses the loss function as:
L(Poses, Poses_gt) = Σ_i C(Pose_i, Pose_gt,i)

where Poses is the list of poses (i.e., the set of pose data) of the objects in the scene, C(Pose_i, Pose_gt,i) is the cost function, defined as the sum of the position error and the rotation error, and the subscript i indexes each estimated pose datum and the corresponding real pose datum.
The cost function may be represented by the following equation:

C(Pose, Pose_gt) = Σ_{j=x,y,z} |p_j − p_gt,j| + Σ_{j=x,y,z} |r_j − r_gt,j|

where Pose = [P, R] is the pose of the (target) object, P = [px, py, pz] is the object position, and R = [rx, ry, rz] is the object rotation. Any rotation in three-dimensional space can be represented by rotation angles about the three axes x, y, and z. When the object has symmetry axes, only the errors about the asymmetric axes need to be calculated; errors about the symmetry axes can be ignored. For example, when the object is rotationally symmetric about the Z axis, the rotation error reduces to the error about the x and y axes, i.e., the rotation sum in the formula runs over j = x, y instead of j = x, y, z.
Invalid pose penalty term: the loss function only rewards and penalizes matches whose pose error is smaller than the threshold. To improve the penalty mechanism, an accuracy penalty term is introduced alongside the loss function; it better handles the case where the real pose list has no corresponding match among the estimated poses, indicating that no valid match was achieved in the real scene. The invalid pose penalty term of the present disclosure may be constructed as:
P(N_I) = N_I × (3 × THOLD_p + 3 × THOLD_r)

where N_I is the number of (target) objects with no valid pose in the pose list, and THOLD_p and THOLD_r are the position threshold and the rotation threshold, respectively.
Thus, the objective function of the present disclosure may be constructed as:
O(Poses) = L(Poses, Poses_gt) + P(N_I).
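The objective function can be sketched in code as follows. The sketch assumes estimated poses have already been matched to real poses and reduced to per-pose (position error, rotation error) pairs; reading the thresholds as per-axis bounds (hence the factor 3) is an interpretation consistent with the penalty formula above, not an explicit statement of the disclosure.

```python
def objective_value(matched_pairs, n_gt, thold_p, thold_r):
    """matched_pairs: list of (position_error, rotation_error) per matched pose.
    n_gt: number of real pose data in the scene (N_gt in Fig. 7)."""
    loss, n_valid = 0.0, 0
    for p_err, r_err in matched_pairs:
        if p_err < 3 * thold_p and r_err < 3 * thold_r:  # valid pose
            loss += p_err + r_err
            n_valid += 1
    n_invalid = n_gt - n_valid            # N_I: real poses without a valid match
    penalty = n_invalid * (3 * thold_p + 3 * thold_r)
    return loss + penalty                 # O(Poses) = L + P(N_I)
```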
fig. 7 shows a calculation flow of the objective function value of one embodiment of the present disclosure.
In Fig. 7, N_gt is the number of real pose data in the real pose data set and is known; N_v is the number of valid real pose data, with an initial value of 0; and N_I is the number of invalid pose data.
Based on the method of the present disclosure, an estimated pose data set corresponding to each model parameter set can be obtained, and from it the pose comparison data corresponding to each model parameter set. An objective function value is then calculated for each model parameter set from its pose comparison data, and the model parameter set corresponding to the minimum objective function value is taken as the optimal model parameter set of the pose estimation model, completing the configuration of the pose estimation model.
The minimum objective function value corresponds to the smallest pose error together with a small number of invalid poses, which is why the corresponding model parameter set is taken as the optimal model parameter set.
For the pose estimation model configuration method of the above-described embodiments of the present disclosure, in step S110, model parameter set sampling is performed from the parameter search space (a search space comprising a plurality of parameters) of the pose estimation model to obtain the model parameter sets, as follows:
A model parameter set is first sampled from the parameter search space of the pose estimation model to obtain the 1st model parameter set. The model parameters of the (n+1)th model parameter set are then obtained from the parameter search space based on the model parameters of the 1st through nth model parameter sets, until N model parameter sets are obtained, where n is a natural number greater than or equal to 1 and N is the total number of model parameter sets.
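This sequential scheme maps naturally onto the ask-and-tell interface of Optuna, as sketched below; `sample_parameter_set` is the search-space sketch shown earlier, and `run_pose_estimation` is a hypothetical wrapper that runs the pose estimation model with the sampled parameters and returns the objective function value.

```python
import optuna

N_TOTAL = 100  # N, the total number of model parameter sets (illustrative)

study = optuna.create_study(direction="minimize")
for _ in range(N_TOTAL):
    trial = study.ask()                    # proposes set n+1 from sets 1..n
    params = sample_parameter_set(trial)
    value = run_pose_estimation(params)    # hypothetical evaluation wrapper
    study.tell(trial, value)

best_parameter_set = study.best_params     # optimal model parameter set
```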
Preferably, the method further comprises: performing parameter feature judgment on the obtained (n+1)th model parameter set, and retaining or discarding the (n+1)th model parameter set based on the judgment result.
For the pose estimation model configuration method of each embodiment described above in the present disclosure, in step S120, pose estimation is performed on a target object in a three-dimensional image data set based on a model parameter set, to obtain an estimated pose data set corresponding to the model parameter set, including:
Pose estimation is performed on the target object in the three-dimensional image data set for each model parameter set in turn, in the order in which the model parameter sets were obtained, to obtain an estimated pose data set corresponding to each model parameter set.
In some embodiments of the present disclosure, a sampling algorithm set and a pruning algorithm set may be preconfigured; a sampling algorithm and a pruning algorithm are called from these sets to perform model parameter set sampling and parameter feature judgment, obtaining a plurality of collections of model parameter sets, so that the optimal model parameter set of the pose estimation model can be obtained based on the different collections.
The present disclosure may construct the sampling algorithm set from a plurality of sampling algorithms (samplers) and the pruning algorithm set from a plurality of pruning algorithms (pruners). The sampling algorithm set may include several of the Grid, Random, TPE, CmaEs, PartialFixed, NSGAII, QMC, and other algorithms, and the pruning algorithm set may include several of the Median, Nop, Patient, Percentile, Successive Halving, Hyperband, Threshold, and other algorithms.
In each model parameter optimization trial, the sampling algorithm first samples parameters in the parameter search space; pose estimation is then performed to obtain the estimated pose of the target object in the scene, and the objective function value is calculated together with the real pose. This is repeated until an end condition is reached. All trials constitute one data set (a data set of model parameter sets), and each sampling algorithm generates the values for the next trial based on the existing samples (the parameter sets already tried) in the data set, ensuring that samples are not repeated. After each trial completes, the pruning algorithm determines, based on the existing samples in the data set, whether the current trial is valuable, and prunes it (does not add it to the existing data set) if not.
The end condition may be that (1) the objective function value is smaller than a set value, or (2) the number of trials reaches a set value; the present disclosure does not particularly limit this.
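An end-to-end sketch with an explicit sampler and pruner (each one possible choice from the algorithm sets above) and both end conditions follows; `sample_parameter_set` and `run_pose_estimation` are the hypothetical placeholders introduced earlier, and the 1e-3 target and 200-trial budget are illustrative values.

```python
import optuna

def trial_objective(trial):
    params = sample_parameter_set(trial)  # sample from the parameter search space
    return run_pose_estimation(params)    # objective function value for this trial

def stop_on_target(study, trial):
    # End condition (1): objective value below a set value (1e-3, illustrative).
    done = [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE]
    if done and study.best_value < 1e-3:
        study.stop()

study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(),  # one choice from the sampler set
    pruner=optuna.pruners.MedianPruner(),  # one choice from the pruner set
)
# End condition (2): the number of trials reaches a set value.
study.optimize(trial_objective, n_trials=200, callbacks=[stop_on_target])

print(study.best_params)
```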
The pose estimation model configuration method of the present disclosure solves the parameter optimization problem of pose estimation algorithms: it can determine a set of optimized model parameters whose practical effect meets engineering requirements, and it can replace manual parameter tuning or serve as the initial value for manual tuning. It is applicable to any pose estimation algorithm and any type of target object.
Based on the pose estimation model configured by the pose estimation model configuration method, pose estimation can be performed on three-dimensional image data of a target scene configured with a target object, and the pose of the target object is obtained.
Based on the above pose estimation model configuration method, the present disclosure further provides a pose estimation model configuration apparatus.
Fig. 8 is a block diagram schematically illustrating a pose estimation model configuration apparatus implemented in hardware using a processor, according to an embodiment of the present disclosure.
The pose estimation model configuration apparatus of the present disclosure may include respective modules performing each or several steps in the above-described flowcharts. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware structure of the pose estimation model configuration apparatus of the present disclosure may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via memory and/or a communication interface. One or more of the steps of the methods described above may be performed when a software program is loaded into memory and executed by a processor. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
Referring to fig. 8, in some embodiments of the present disclosure, a pose estimation model configuration apparatus 1000 of the present disclosure includes:
the real pose data set obtaining module 1002, the real pose data set obtaining module 1002 obtains a real pose data set of a target object in a target scene.
The pose estimation model 1004, the pose estimation model 1004 obtains an estimated pose data set of the target object in the target scene based on the model parameter set.
The pose comparison data acquisition module 1006, the pose comparison data acquisition module 1006 compares the estimated pose data set and the real pose data set to obtain pose comparison data.
The optimal model parameter set obtaining module 1008, the optimal model parameter set obtaining module 1008 includes an objective function model, the objective function model calculates an objective function value based on pose comparison data, and the optimal model parameter set obtaining module 1008 obtains a model parameter set corresponding to a minimum value of the objective function value as an optimal model parameter set of the pose estimation model to be used for configuring the pose estimation model.
Correspondingly, the present disclosure further provides a pose estimation device, which is configured to perform pose estimation on three-dimensional image data of a target scene configured with a target object, to obtain a pose of the target object, where the pose estimation device is a pose estimation model configured in any embodiment of the present disclosure.
The present disclosure also provides an electronic device, including: a memory storing execution instructions; and a processor executing the execution instructions stored in the memory, causing the processor to execute the pose estimation model configuration method of any of the embodiments of the present disclosure and/or to execute the pose estimation method of any of the embodiments of the present disclosure.
The present disclosure also provides a readable storage medium having stored therein execution instructions that, when executed by a processor, are to implement the pose estimation model configuration method and/or implement the pose estimation method of any of the embodiments of the present disclosure.
It will be appreciated by those skilled in the art that the above-described embodiments are merely for clarity of illustration of the disclosure, and are not intended to limit the scope of the disclosure. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A pose estimation model configuration method, characterized by comprising the following steps:
acquiring a real pose data set of a target object in a target scene, and acquiring an estimated pose data set of the target object in the target scene based on a model parameter set of a pose estimation model;
comparing the estimated pose data set with the real pose data set to obtain pose comparison data; and
and calculating an objective function value based on the pose comparison data and the objective function model, obtaining a model parameter set corresponding to the minimum value of the objective function value as an optimal model parameter set of the pose estimation model, and configuring the pose estimation model based on the optimal model parameter set.
2. The method according to claim 1, wherein the objective function model includes a pose difference acquisition module that acquires a pose difference based on the pose comparison data and a pose penalty term acquisition module that generates an invalid pose penalty term based on the pose difference, and the objective function model calculates the objective function value based on the pose difference and the invalid pose penalty term.
3. The pose estimation model configuration method according to claim 2, wherein acquiring the real pose data set of the target object in the target scene includes:
acquiring three-dimensional image data of the target scene configured with the target object to obtain a three-dimensional image data set; and
acquiring the real pose data set of the target object based on the three-dimensional image data set.
4. The pose estimation model configuration method according to claim 3, wherein obtaining the estimated pose data set of the target object in the target scene based on the model parameter set of the pose estimation model includes:
sampling model parameters from a parameter search space of the pose estimation model to obtain the model parameter set; and
performing pose estimation on the target object in the three-dimensional image data set based on the model parameter set to obtain an estimated pose data set corresponding to the model parameter set.
5. The pose estimation model configuration method according to any one of claims 1 to 4, wherein comparing the estimated pose data set with the real pose data set to obtain the pose comparison data includes:
obtaining a pose error between each estimated pose data and the corresponding real pose data, based on the estimated pose data in the estimated pose data set and the corresponding real pose data in the real pose data set; and
acquiring a set of the pose errors and using the set of pose errors as the pose comparison data;
wherein the estimated pose data in the estimated pose data set and the corresponding real pose data are obtained based on the same three-dimensional image data in the three-dimensional image data set;
optionally, calculating the objective function value based on the pose comparison data and the objective function model includes:
obtaining the pose difference based on the pose errors in the set of pose errors, for calculating the objective function value;
optionally, the pose error between the estimated pose data and the corresponding real pose data is obtained by:
calculating a position error and a rotation error between the estimated pose data and the corresponding real pose data; and
obtaining the pose error based on the position error and the rotation error;
optionally, calculating the objective function value based on the pose comparison data and the objective function model includes:
judging whether the pose error between the estimated pose data and the corresponding real pose data is greater than or equal to a pose error threshold;
if not, counting the pose error into the pose difference; if so, excluding the pose error from the pose difference and increasing the invalid pose penalty term; and
calculating the objective function value based on the pose difference and the invalid pose penalty term;
optionally, sampling model parameters from the parameter search space of the pose estimation model to obtain the model parameter set includes:
obtaining model parameters of the (n+1)-th model parameter set from the parameter search space based on the model parameters of the 1st through n-th model parameter sets, thereby obtaining N model parameter sets, where n is a natural number greater than or equal to 1 and N is the total number of model parameter sets;
optionally, performing pose estimation on the target object in the three-dimensional image data set based on the model parameter set to obtain an estimated pose data set corresponding to the model parameter set includes:
performing pose estimation on the target object in the three-dimensional image data set sequentially, in the order in which the model parameter sets are acquired, to obtain an estimated pose data set corresponding to each model parameter set;
optionally, the method further includes: performing parameter characteristic judgment on the obtained (n+1)-th model parameter set, and retaining or discarding the (n+1)-th model parameter set based on the judgment result;
optionally, a sampling algorithm set and a pruning algorithm set are pre-configured, and a sampling algorithm and a pruning algorithm are called from the sampling algorithm set and the pruning algorithm set, respectively, to perform the model parameter set sampling and the parameter characteristic judgment, so as to obtain a plurality of collections of model parameter sets, such that the optimal model parameter set of the pose estimation model is obtained based on the different collections of model parameter sets;
optionally, the three-dimensional image data is point cloud data.
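Claim 5's optional features specify the shape of the computation but not its numerics. The sketch below fills the gaps under stated assumptions: a pose is a (translation vector, 3x3 rotation matrix) pair, the position error is Euclidean distance, the rotation error is the geodesic angle of the relative rotation, errors at or above the threshold mark invalid poses, and the weights, threshold, and fixed per-pose penalty are illustrative values; pose_error and objective_value are hypothetical names.

```python
import numpy as np

def pose_error(est, real, w_pos=1.0, w_rot=1.0):
    """Combined pose error from a position error and a rotation error.

    est, real: (t, R) pairs with t a 3-vector and R a 3x3 rotation matrix.
    The weighting of the two terms is an assumption; the claim only states
    that the pose error is obtained from both errors.
    """
    t_e, R_e = est
    t_r, R_r = real
    pos_err = np.linalg.norm(t_e - t_r)  # Euclidean position error
    # Geodesic rotation error: angle of the relative rotation R_e^T @ R_r.
    cos_angle = np.clip((np.trace(R_e.T @ R_r) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.arccos(cos_angle)
    return w_pos * pos_err + w_rot * rot_err

def objective_value(pairs, error_threshold=0.1, penalty=10.0):
    """Objective per claim 5's optional scheme: errors under the threshold
    accumulate into the pose difference; errors at or above it are treated
    as invalid poses and increase the invalid pose penalty term instead."""
    pose_difference, invalid_penalty = 0.0, 0.0
    for est, real in pairs:
        err = pose_error(est, real)
        if err >= error_threshold:
            invalid_penalty += penalty   # invalid pose: penalize
        else:
            pose_difference += err       # valid pose: count into difference
    return pose_difference + invalid_penalty
```

The pre-configured sampling and pruning algorithm sets resemble the sampler/pruner pattern of hyperparameter optimization frameworks; a history-aware sampler could replace the uniform draw in the earlier configuration-loop sketch without altering the rest of the loop.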
6. A pose estimation method, characterized by comprising:
performing pose estimation on three-dimensional image data of a target scene configured with a target object by using the pose estimation model configured by the pose estimation model configuration method according to any one of claims 1 to 5, to obtain a pose of the target object.
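In use, the configured model is applied directly to new three-dimensional image data. Continuing the hypothetical API from the earlier sketches (search_space, samples, and scene_cloud stand for the caller's own data; nothing here is the disclosure's API):

```python
# Configure once against ground-truth pose data, then reuse for new scenes.
best_params = configure_pose_model(search_space, samples,
                                   estimate_pose, objective_value)
# scene_cloud: point cloud of a target scene containing the target object.
target_pose = estimate_pose(scene_cloud, best_params)
```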
7. A pose estimation model configuration apparatus, characterized by comprising:
a real pose data set acquisition module that acquires a real pose data set of a target object in a target scene;
a pose estimation model that acquires an estimated pose data set of the target object in the target scene based on a model parameter set;
a pose comparison data acquisition module that compares the estimated pose data set with the real pose data set to obtain pose comparison data; and
an optimal model parameter set acquisition module that includes an objective function model, wherein the objective function model calculates an objective function value based on the pose comparison data, and the optimal model parameter set acquisition module acquires the model parameter set corresponding to the minimum objective function value as an optimal model parameter set of the pose estimation model for configuring the pose estimation model.
8. A pose estimation device, characterized in that the pose estimation device is configured to perform pose estimation on three-dimensional image data of a target scene configured with a target object to obtain a pose of the target object, wherein the pose estimation device includes the pose estimation model configured by the pose estimation model configuration apparatus according to claim 7.
9. An electronic device, comprising:
a memory storing execution instructions; and
a processor that executes the execution instructions stored in the memory, the execution instructions causing the processor to perform the pose estimation model configuration method according to any one of claims 1 to 5 and/or the pose estimation method according to claim 6.
10. A readable storage medium, characterized in that the readable storage medium stores execution instructions which, when executed by a processor, implement the pose estimation model configuration method according to any one of claims 1 to 5 and/or the pose estimation method according to claim 6.
CN202311698845.8A 2023-12-12 2023-12-12 Pose estimation model configuration method and pose estimation method Pending CN117746381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311698845.8A CN117746381A (en) 2023-12-12 2023-12-12 Pose estimation model configuration method and pose estimation method

Publications (1)

Publication Number Publication Date
CN117746381A true CN117746381A (en) 2024-03-22

Family

ID=90250072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311698845.8A Pending CN117746381A (en) 2023-12-12 2023-12-12 Pose estimation model configuration method and pose estimation method

Country Status (1)

Country Link
CN (1) CN117746381A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020259248A1 (en) * 2019-06-28 2020-12-30 Oppo广东移动通信有限公司 Depth information-based pose determination method and device, medium, and electronic apparatus
CN111612842A (en) * 2020-05-29 2020-09-01 贝壳技术有限公司 Method and device for generating pose estimation model
CN111797740A (en) * 2020-06-24 2020-10-20 北京三快在线科技有限公司 Model training and visual positioning method and device
CN111784772A (en) * 2020-07-02 2020-10-16 清华大学 Attitude estimation model training method and device based on domain randomization
WO2022147736A1 (en) * 2021-01-07 2022-07-14 广州视源电子科技股份有限公司 Virtual image construction method and apparatus, device, and storage medium
CN113034575A (en) * 2021-01-27 2021-06-25 深圳市华汉伟业科技有限公司 Model construction method, pose estimation method and object picking device
CN113065593A (en) * 2021-04-01 2021-07-02 深圳大学 Model training method and device, computer equipment and storage medium
CN113361570A (en) * 2021-05-25 2021-09-07 东南大学 3D human body posture estimation method based on joint data enhancement and network training model
CN116452638A (en) * 2023-06-14 2023-07-18 煤炭科学研究总院有限公司 Pose estimation model training method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
尚洋 (Shang Yang); 孙晓亮 (Sun Xiaoliang); 张跃强 (Zhang Yueqiang); 李由 (Li You); 于起峰 (Yu Qifeng): "Three-dimensional target pose tracking and model correction" (三维目标位姿跟踪与模型修正), Acta Geodaetica et Cartographica Sinica (测绘学报), no. 06, 15 June 2018 (2018-06-15) *

Similar Documents

Publication Title
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
CN110340891B (en) Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
CN107063228B (en) Target attitude calculation method based on binocular vision
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN112233249B (en) B spline surface fitting method and device based on dense point cloud
CN107358629B (en) Indoor mapping and positioning method based on target identification
WO2011115143A1 (en) Geometric feature extracting device, geometric feature extracting method, storage medium, three-dimensional measurement apparatus, and object recognition apparatus
CN110298854B (en) Flight snake-shaped arm cooperative positioning method based on online self-adaption and monocular vision
CN108022262A (en) A kind of point cloud registration method based on neighborhood of a point center of gravity vector characteristics
CN110189257B (en) Point cloud acquisition method, device, system and storage medium
CN108280852B (en) Door and window point cloud shape detection method and system based on laser point cloud data
CN117132630A (en) Point cloud registration method based on second-order spatial compatibility measurement
CN110851978A (en) Camera position optimization method based on visibility
Drwięga Features matching based merging of 3D maps in multi-robot systems
CN114310887A (en) 3D human leg recognition method and device, computer equipment and storage medium
CN117162098B (en) Autonomous planning system and method for robot gesture in narrow space
CN116921932A (en) Welding track recognition method, device, equipment and storage medium
Cupec et al. Fast 2.5 D Mesh Segmentation to Approximately Convex Surfaces.
Cociaş et al. Multiple-superquadrics based object surface estimation for grasping in service robotics
Liu et al. Robust 3-d object recognition via view-specific constraint
Zürn et al. Topology matching of branched deformable linear objects
CN117746381A (en) Pose estimation model configuration method and pose estimation method
Grundmann et al. A gaussian measurement model for local interest point based 6 dof pose estimation
Jin et al. DOPE++: 6D pose estimation algorithm for weakly textured objects based on deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination