CN113128610A - Industrial part pose estimation method and system


Info

Publication number
CN113128610A
CN113128610A
Authority
CN
China
Prior art keywords
point cloud
point
target
pose
industrial
Prior art date
Legal status
Pending
Application number
CN202110455776.2A
Other languages
Chinese (zh)
Inventor
白洪亮
何军
刘红岩
孙琪
蒋思为
何钰霖
Current Assignee
Suzhou Feisou Technology Co ltd
Original Assignee
Suzhou Feisou Technology Co ltd
Application filed by Suzhou Feisou Technology Co ltd
Priority: CN202110455776.2A
Publication: CN113128610A


Classifications

    • G06F18/24: Pattern recognition; classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06T7/11: Image analysis; region-based segmentation
    • G06T7/80: Image analysis; determination of intrinsic or extrinsic camera parameters (camera calibration)
    • G06V10/30: Image preprocessing; noise filtering

Abstract

The invention provides an industrial part pose estimation method and system comprising the following steps: acquiring a part point cloud of a part to be detected; classifying the part point cloud based on a part posture classification model to obtain a part classification result; determining a target part template according to the part classification result; and matching the part point cloud against the target part template to obtain the pose information of the part to be detected. By determining the pose of the part through point cloud classification followed by template matching, the method and system improve the efficiency and accuracy of industrial part pose estimation, provide support for judging the part pose offset (position and angle) in industrial scenes, and solve the pose estimation problem of part grasping by devices such as robotic arms in industrial scenes.

Description

Industrial part pose estimation method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to an industrial part pose estimation method and system.
Background
With the continuous development of computer vision and deep learning, practical applications that exploit three-dimensional information in industrial field scenes have become possible. Unlike traditional two-dimensional image representations, the point cloud data captured by three-dimensional sensors provides richer feature information about real-world three-dimensional objects. Feature information carried by three-dimensional point cloud data can provide a more comprehensive and faithful data basis for industrial applications such as part picking, part assembly, part classification and part pose estimation.
At present, among conventional image methods and deep learning techniques, pose estimation of industrial parts from two-dimensional images is commonly realized by template matching: templates are retrieved and matched directly using hand-crafted local features, and the pose is obtained by computing the transformation between the matched template and the image. However, owing to the current limitations of three-dimensional scanning technology, the point cloud captured by a three-dimensional device is usually a description of only part of the part's surface, so the scanned point cloud structure is incomplete.
Moreover, traditional algorithms compute geometric features directly on the scanned incomplete point cloud surface and match them against a complete part model, which increases the difficulty of computing the similarity of corresponding feature points and makes part pose estimation hard to complete.
Disclosure of Invention
Aiming at the defect of inaccurate pose estimation in the prior art, the embodiments of the invention provide an industrial part pose estimation method and system to improve the efficiency and accuracy of industrial part pose estimation.
The invention provides an industrial part pose estimation method, which comprises the following steps: acquiring a part point cloud of a part to be detected; classifying the part point cloud based on a part posture classification model to obtain a part classification result; determining a target part template according to the part classification result; and matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
According to the industrial part pose estimation method provided by the invention, the part point cloud of the part to be detected is obtained, and the method comprises the following steps: filtering original point clouds collected in an industrial part placing scene to obtain scene point clouds; and carrying out point cloud separation processing on the scene point cloud to obtain the part point cloud only containing the part to be detected.
According to the method for estimating the pose of the industrial part, provided by the invention, the point cloud separation processing is carried out on the scene point cloud of the industrial part placing scene so as to obtain the part point cloud only containing the part to be detected, and the method comprises the following steps: according to the Euclidean distances among all points in the scene point cloud, carrying out segmentation processing on the scene point cloud, and determining all unit point cloud clusters; screening the point cloud clusters according to the point cloud number of each unit point cloud cluster to obtain part point cloud clusters; and carrying out random down-sampling treatment on the part point cloud cluster to obtain the part point cloud.
According to the industrial part pose estimation method provided by the invention, based on a preset distance threshold value, the scene point cloud is segmented according to Euclidean distances among all points in the scene point cloud, and all unit point cloud clusters are determined; the method comprises the following steps:
step 1, randomly determining a target point in the scene point cloud;
step 2, acquiring all neighbor points of the target point based on a neighbor search method to construct an initial neighbor set of the target point; a neighbor point is a point of the scene point cloud whose Euclidean distance to the target point is smaller than a preset threshold;
step 3, determining all neighbor points of any neighbor point in the initial neighbor set based on a neighbor search method to construct a neighbor set of any neighbor point; the neighbor set of any neighbor point includes all points in the initial neighbor set and all neighbors of each point in the initial neighbor set;
step 4, taking the neighbor set of any neighbor point as a new initial neighbor set, and iteratively executing step 3 to obtain a new neighbor set until the points in the new neighbor set are not increased any more;
step 5, determining a target point cloud cluster according to the new neighbor set, wherein the target point cloud cluster comprises the target point and points in the new neighbor set;
and 6, randomly selecting any point which does not belong to the target point cloud cluster from the scene point cloud as a new target point, and iteratively executing the steps 2 to 5 until the segmentation of all point clouds in the scene point cloud is completed, and taking each target point cloud cluster as a unit point cloud cluster.
According to the industrial part pose estimation method provided by the invention, based on a part pose classification model, the part point cloud is classified to obtain a part classification result, and the method comprises the following steps: based on the part posture classification model, performing feature extraction processing on the part point cloud to obtain point cloud neighborhood information; the part posture classification model is constructed based on a dynamic graph convolution neural network; splicing the point cloud neighborhood information, and acquiring the global characteristics of the part point cloud through maximum pooling operation; and classifying the global features to obtain the part classification result.
According to the industrial part pose estimation method provided by the invention, the target part template is determined according to the part point cloud and the part classification result, and the method comprises the following steps: and according to the part classification result, determining CAD model point cloud corresponding to the part point cloud from a CAD part model point cloud library to serve as the target part template.
According to the industrial part pose estimation method provided by the invention, the target part template is used for matching the part point cloud to obtain the pose information of the part to be detected, and the method comprises the following steps: respectively extracting key points of the part point cloud and of the CAD model point cloud corresponding to the target part template; the key points comprise part point cloud key points and CAD model point cloud key points; acquiring local descriptors of the key points; carrying out consistency evaluation on the key points by using the local descriptors to obtain the correspondence between the part point cloud key points and the CAD model point cloud key points, and determining key point pairs according to the correspondence; correcting the key point pairs based on a random sample consensus method, and constructing a preliminary pose transformation matrix according to the correction result; transforming the part point cloud key points by using the preliminary pose transformation matrix to obtain part point cloud key points with initial pose transformation; matching the part point cloud key points subjected to initial pose transformation with the CAD model point cloud key points based on a closest point iteration method to obtain a fine registration pose transformation matrix; and determining a target transformation matrix based on the preliminary pose transformation matrix and the fine registration pose transformation matrix so as to acquire the pose information of the part to be detected.
The invention also provides an industrial part pose estimation system, which comprises:
the acquisition unit is used for acquiring a part point cloud of a part to be detected;
the classification unit is used for classifying the point cloud of the part based on the part posture classification model to obtain a part classification result;
the determining unit is used for determining a target part template according to the part point cloud and the part classification result;
and the matching unit is used for matching the part point cloud by using the target part template so as to acquire the pose information of the part to be detected.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of any one of the industrial part pose estimation methods.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of any one of the industrial part pose estimation methods described above.
According to the industrial part pose estimation method and system, the pose information of the part to be detected is determined through point cloud classification and template matching of the part to be detected; this improves the efficiency and accuracy of industrial part pose estimation, provides support for judging the part pose offset (position and angle) in industrial scenes, and solves the pose estimation problem of part grasping by devices such as robotic arms in industrial scenes.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of an industrial part pose estimation method provided by the invention;
FIG. 2 is a schematic structural diagram of an industrial part pose estimation system provided by the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
At present, pose estimation technology based on three-dimensional point clouds is still immature, and an efficient method that obtains the categories and poses of all parts to be detected directly from a scene point cloud is lacking. Existing methods that rely on deep learning alone can deviate considerably from the actual pose, making it difficult to reach an ideal effect in industrial scenes.
The industrial part pose estimation method and system provided by the embodiment of the invention are described below with reference to fig. 1 to 3.
Fig. 1 is a schematic flow chart of an industrial part pose estimation method provided by the present invention, as shown in fig. 1, which mainly includes, but is not limited to, the following steps:
s1, acquiring a part point cloud of the part to be detected;
s2, classifying the part point cloud based on the part posture classification model to obtain a part classification result;
s3, determining a target part template according to the part classification result;
and S4, matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
First, a part point cloud may be collected in an industrial scene, or a previously collected part point cloud may be loaded. Given the complex environment of industrial scenes, when industrial part data is acquired the point cloud can be captured from directly above the scene or from beside it, and the viewpoint can be set according to actual requirements, so as to obtain the part point cloud of the part to be detected.
The equipment for collecting the part point cloud can be a three-dimensional sensor such as a structured-light camera or a three-dimensional (3D) depth camera.
As an alternative embodiment, in step S1, the camera intrinsics are obtained by depth-camera calibration, and the collected depth map of the part is converted into a three-dimensional point cloud. According to the clustering characteristics of the point cloud, the three-dimensional point cloud is then segmented by the Random Sample Consensus (RANSAC) algorithm to separate the part point cloud from the background point cloud; the background point cloud is removed, yielding the part point cloud of the part to be detected.
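For illustration only, a minimal sketch of this conversion and background removal, assuming the Open3D library and hypothetical calibration values (the patent does not specify an implementation):

    import open3d as o3d

    # Hypothetical intrinsics obtained from depth-camera calibration.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        width=640, height=480, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

    depth = o3d.io.read_image("part_depth.png")      # assumed 16-bit depth map
    pcd = o3d.geometry.PointCloud.create_from_depth_image(
        depth, intrinsic, depth_scale=1000.0)        # mm -> m

    # RANSAC plane fit: the dominant plane is treated as background and removed.
    plane, inliers = pcd.segment_plane(distance_threshold=0.005,
                                       ransac_n=3, num_iterations=1000)
    part_pcd = pcd.select_by_index(inliers, invert=True)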
Further, in step S2, based on the part posture classification model, the part point cloud is classified to obtain a part classification result.
The part posture classification model can be a trained PointNet neural network model: it concatenates local and global features of the part point cloud, fuses them with a multilayer perceptron, and finally performs point-by-point classification with a classifier, yielding the classification result of the part point cloud.
Optionally, before being fed into the part posture classification model, the part point cloud can be zero-mean normalized to ensure a better classification effect.
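A minimal sketch of the zero-mean normalization (unit-sphere scaling is added here as a common companion step; the patent only specifies the zero-mean part):

    import numpy as np

    def normalize(points: np.ndarray) -> np.ndarray:
        """Center the (N, 3) point cloud at the origin and scale it into
        the unit sphere."""
        centered = points - points.mean(axis=0)
        scale = np.linalg.norm(centered, axis=1).max()
        return centered / scale if scale > 0 else centered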
Further, in step S3, a target part template is determined according to the part classification result.
The target part template may be a Computer Aided Design (CAD) template stored in a part template Library, or may be a part Point Cloud template stored in a Point Cloud Library (PCL).
Further, in step S4, key points of the part point cloud and of the target part template are used to match and register the part point cloud against the part template, obtaining the pose information of the part to be detected.
According to the industrial part pose estimation method provided by the invention, the pose information of the part to be detected is determined through point cloud classification and template matching of the part to be detected; this improves the efficiency and accuracy of industrial part pose estimation, provides support for judging the part pose offset (position and angle) in industrial scenes, and solves the pose estimation problem of part grasping by devices such as robotic arms in industrial scenes.
Based on the content of the above embodiment, as an optional embodiment, obtaining a part point cloud of a part to be measured includes:
filtering original point clouds collected in an industrial part placing scene to obtain scene point clouds;
and carrying out point cloud separation processing on the scene point cloud so as to obtain the part point cloud only containing the part to be detected.
In the industrial part placing scene, a three-dimensional sensor such as a structured-light camera or a 3D camera captures the part geometry, acquiring the original point cloud of the parts to be detected.
The original point cloud data can be filtered with a pass-through filter and a statistical filter to remove noise points. The pass-through filter discards points outside a limited range along a specified direction, so when the background lies at a certain distance from the foreground it removes the background; the statistical filter removes outliers and noise points, allowing the industrial part point cloud to be separated more cleanly.
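An illustrative sketch of the two filters, again assuming Open3D; the range and statistics are hypothetical starting values:

    import numpy as np
    import open3d as o3d

    raw = o3d.io.read_point_cloud("scene.pcd")

    # Pass-through filter: keep points whose z coordinate lies in a limited
    # range, discarding the distant background along the viewing direction.
    pts = np.asarray(raw.points)
    keep = np.where((pts[:, 2] > 0.3) & (pts[:, 2] < 1.2))[0]   # range in m
    passed = raw.select_by_index(keep.tolist())

    # Statistical filter: drop points whose mean distance to their neighbors
    # deviates from the global average by more than std_ratio sigmas.
    scene_pcd, _ = passed.remove_statistical_outlier(nb_neighbors=20,
                                                     std_ratio=2.0)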
The scene point cloud is then segmented by Euclidean clustering to obtain the part point cloud containing only the part to be detected. Once separated from the scene point cloud, the part point cloud can be used as the input of the part posture classification model to obtain a better classification effect.
In the embodiment, the collected original point cloud is subjected to filtering pretreatment, and the part point cloud is separated from the scene point cloud so as to obtain the part point cloud only containing the part to be detected, thereby providing a necessary basis for subsequently obtaining a part posture classification model and improving the accuracy of part posture identification.
Based on the content of the above embodiment, as an optional embodiment, performing point cloud separation processing on a scene point cloud of an industrial part placement scene to obtain a part point cloud only including a part to be measured, includes:
according to the Euclidean distance between all points in the scene point cloud, the scene point cloud is segmented, and all unit point cloud clusters are determined;
screening the point cloud clusters according to the point cloud number of each unit point cloud cluster to obtain part point cloud clusters;
and carrying out random down-sampling treatment on the part point cloud cluster to obtain the part point cloud.
As an optional embodiment, first the Euclidean distances between all points in the scene point cloud are calculated; points whose Euclidean distance exceeds the preset distance are split apart, and each set of points whose mutual Euclidean distances are smaller than the preset distance forms a unit point cloud cluster, yielding a plurality of unit point cloud clusters.
Secondly, the unit point cloud clusters are screened to obtain the part point cloud clusters. A point-count threshold interval [n_min, n_max] is set for the clusters. For each unit point cloud cluster: if the number of points in the cluster is smaller than n_min, the cluster is treated as noise and removed from all unit point cloud clusters; if the number of points is larger than n_max, the cluster is taken as the background plane and removed from all unit point cloud clusters; if the number of points lies within [n_min, n_max], the cluster is kept as a part point cloud cluster.
Random downsampling is then applied uniformly to the segmented and screened part point cloud clusters to obtain the part point clouds; after downsampling the number of points in every part point cloud cluster is kept consistent, which facilitates the subsequent classification and matching operations.
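A sketch of the screening and downsampling, assuming each cluster is an (n, 3) NumPy array; the thresholds are hypothetical:

    import numpy as np

    N_MIN, N_MAX, N_SAMPLE = 500, 50000, 1024       # hypothetical thresholds

    def screen_and_downsample(clusters, rng=np.random.default_rng(0)):
        """Keep clusters whose point count lies in [N_MIN, N_MAX], then
        resample each kept cluster to exactly N_SAMPLE points."""
        part_clouds = []
        for pts in clusters:
            n = len(pts)
            if n < N_MIN or n > N_MAX:              # noise or background plane
                continue
            idx = rng.choice(n, size=N_SAMPLE, replace=n < N_SAMPLE)
            part_clouds.append(pts[idx])
        return part_clouds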
According to the method, the part point cloud is segmented from the scene point cloud and normalized, so that the subsequent processing of the part posture classification model is facilitated, the accuracy of part position and posture identification is improved, and the industrial part position and posture estimation method provided by the invention can be suitable for point cloud images with different sizes.
Based on the content of the above embodiment, as an optional embodiment, based on a preset distance threshold, according to the euclidean distances between all points in the scene point cloud, the scene point cloud is segmented, and all unit point cloud clusters are determined; the method comprises the following steps:
step 1, randomly determining a target point in a scene point cloud;
step 2, acquiring all neighbor points of the target point based on a neighbor search method to construct an initial neighbor set of the target point; a neighbor point is a point of the scene point cloud whose Euclidean distance to the target point is smaller than a preset threshold;
step 3, determining all neighbor points of any neighbor point in the initial neighbor set based on a neighbor search method to construct a neighbor set of any neighbor point; the neighbor set of any neighbor point comprises all points in the initial neighbor set and all neighbor points of each point in the initial neighbor set;
step 4, taking the neighbor set of any neighbor point as a new initial neighbor set, and iteratively executing step 3 to obtain a new neighbor set until the points in the new neighbor set are not increased any more;
step 5, determining a target point cloud cluster according to the new neighbor set, wherein the target point cloud cluster comprises a target point and points in the new neighbor set;
and 6, randomly selecting any point which does not belong to the target point cloud cluster from the scene point cloud as a new target point, and iteratively executing the steps 2 to 5 until the segmentation of all point clouds in the scene point cloud is completed, and taking each target point cloud cluster as a unit point cloud cluster.
As an alternative embodiment, a target point is randomly selected in the scene point cloud as a seed, and according to the Euclidean distances between points, all points reachable within the distance threshold are grouped into one cluster. The specific steps are as follows.
Initially, no point in the point cloud belongs to any cluster. If the Euclidean distance between two points is smaller than a given threshold, the two points are considered adjacent, and each is a neighbor point of the other.
First, in step 1, a target point P that does not yet belong to any cluster is randomly selected from the scene point cloud.
Further, in step 2, all neighbor points of the target point P are found by a k-d tree (KD-tree) nearest-neighbor search, and those that do not belong to any other cluster form the initial neighbor set; a neighbor point is a point of the scene point cloud whose Euclidean distance to the target point is smaller than the preset threshold, which can be set flexibly according to the point cloud density or actual requirements.
Further, in step 3, finding out the neighbor point of each point in the neighbor set through a KD tree algorithm, and adding neighbors which do not belong to any other cluster into the neighbor set to construct the neighbor set of any neighbor point; the neighbor set of any neighbor includes all points in the initial neighbor set as well as all neighbors of each point in the initial neighbor set.
Further, in step 4, the neighbor set of any neighbor point is taken as a new initial neighbor set, and step 3 is iteratively executed to obtain a new neighbor set until the points in the new neighbor set are not increased any more.
Further, in step 5, the target point P and all the points in the neighboring set are regarded as a target point cloud cluster.
Further, in step 6, any point which does not belong to the target point cloud cluster is randomly selected from the scene point clouds to serve as a new target point, steps 2 to 5 are iteratively executed until the segmentation of all the point clouds in the scene point cloud is completed, and each target point cloud cluster is taken as a unit point cloud cluster. So far, all the points belong to a certain cluster, and the clustering process is finished.
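Steps 1 to 6 can be sketched as a region-growing loop; SciPy's cKDTree stands in here for the KD-tree search, and the radius value is hypothetical:

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points: np.ndarray, radius: float = 0.01):
        """Cluster an (N, 3) point cloud; two points are neighbors when
        their Euclidean distance is below `radius`."""
        tree = cKDTree(points)
        unassigned = set(range(len(points)))
        clusters = []
        while unassigned:                       # step 6: until all assigned
            seed = unassigned.pop()             # step 1: pick a target point
            frontier, cluster = [seed], [seed]
            while frontier:                     # steps 2-4: grow to a fixpoint
                grown = []
                for i in frontier:
                    for j in tree.query_ball_point(points[i], radius):
                        if j in unassigned:
                            unassigned.discard(j)
                            cluster.append(j)
                            grown.append(j)
                frontier = grown
            clusters.append(points[np.array(cluster)])  # step 5
        return clusters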
The method divides the scene point cloud into unit point cloud clusters by a neighbor search method, facilitates screening and division of the part point cloud, has high calculation precision of a neighbor search algorithm, is not influenced by abnormal values, and is suitable for fine pose estimation of parts in the industrial field.
Based on the content of the foregoing embodiment, as an optional embodiment, based on the part posture classification model, classifying the part point cloud to obtain a part classification result, including:
based on the part posture classification model, performing feature extraction processing on the part point cloud to obtain point cloud neighborhood information; the part posture classification model is constructed based on a dynamic graph convolution neural network;
splicing the point cloud neighborhood information, and acquiring the global characteristics of the part point cloud through maximum pooling operation;
and classifying the global features to obtain the part classification result.
The part classification result includes the part type and the part model.
The part posture classification model may be constructed based on a Dynamic Graph Convolutional Neural Network (DGCNN), or based on a neural network model with three-dimensional point cloud classification capability such as PointNet; the following embodiments of the invention take DGCNN as an example, which should not be construed as limiting the protection scope of the invention. DGCNN is lightweight and fast, can acquire sufficient local information, and is well suited to tasks such as classification and segmentation.
Firstly, categories are defined according to the different placement states of different parts in the camera's field of view (such as face-up, on-side and face-down), and the labeling and construction of a dataset are completed. The prepared dataset is used to train and validate the part posture classification model.
The DGCNN network mainly uses Edge Convolution (EdgeConv) modules to extract features. The EdgeConv module attends to point cloud neighborhood information while guaranteeing invariance to point permutations. Specifically, the EdgeConv module uses the relative relations between points to extract neighborhood features at the semantic level, and completes the extraction of overall feature information by stacking module features layer by layer. This makes it suitable for information acquisition from small part point clouds in industrial scenes.
As an alternative embodiment, the part point cloud is input to the part posture classification model. In the model, 4 EdgeConv modules extract point cloud neighborhood information from the input part point cloud (covering both coordinate space and feature space); the features from the different network layers are concatenated, and the global feature of the point cloud is obtained by a max-pooling operation; the global feature is then classified by 3 fully connected layers, yielding the posture classification result of the part.
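A compact PyTorch sketch of the described pipeline (4 EdgeConv blocks, feature concatenation, global max pooling, 3 fully connected layers). Layer widths and the neighborhood size k are hypothetical, and the published DGCNN additionally uses batch normalization and LeakyReLU:

    import torch
    import torch.nn as nn

    def edge_conv(x, mlp, k=20):
        """x: (B, N, C). Build a k-NN graph in the current feature space,
        apply `mlp` to the edge feature [x_i, x_j - x_i], and take the max
        over the k neighbors."""
        B, N, C = x.shape
        idx = torch.cdist(x, x).topk(k + 1, largest=False).indices[:, :, 1:]
        nb = x[torch.arange(B).view(B, 1, 1), idx]          # (B, N, k, C)
        ctr = x.unsqueeze(2).expand(-1, -1, k, -1)
        return mlp(torch.cat([ctr, nb - ctr], -1)).max(2).values

    class PostureClassifier(nn.Module):
        """4 EdgeConv blocks -> concatenation -> max pool -> 3 FC layers."""
        def __init__(self, n_classes, dims=(64, 64, 128, 256)):
            super().__init__()
            ins = [3] + list(dims[:-1])
            self.mlps = nn.ModuleList(
                nn.Sequential(nn.Linear(2 * i, o), nn.ReLU())
                for i, o in zip(ins, dims))
            self.head = nn.Sequential(
                nn.Linear(sum(dims), 512), nn.ReLU(),
                nn.Linear(512, 256), nn.ReLU(),
                nn.Linear(256, n_classes))

        def forward(self, pts):                 # pts: (B, N, 3)
            feats, x = [], pts
            for mlp in self.mlps:
                x = edge_conv(x, mlp)           # graph is rebuilt per layer
                feats.append(x)
            g = torch.cat(feats, -1).max(1).values   # global max pooling
            return self.head(g)

For example, PostureClassifier(n_classes=12)(torch.randn(2, 1024, 3)) returns a (2, 12) tensor of class logits.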
In the embodiment, the part point cloud is classified through the part posture classification model to obtain a part classification result. By combining the traditional method with the deep learning method, the classification effect of high precision, strong interpretability and strong generalization is achieved, and meanwhile, the robustness of the whole process is enhanced.
Based on the content of the foregoing embodiment, as an optional embodiment, determining a target part template according to the part point cloud and the part classification result includes:
and according to the part classification result, determining the CAD model point cloud corresponding to the part point cloud from the CAD part model point cloud library as a target part template.
Optionally, the CAD part model point cloud library contains model point clouds of all parts in the industrial scene.
As an optional embodiment, the part type and part model corresponding to any target part point cloud are determined according to the part classification result, and the CAD model point cloud of the same type and model in the CAD part model point cloud library is taken as the target part template corresponding to that part point cloud.
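A minimal sketch of the template lookup, assuming the library is indexed by (part type, part model) and stores pre-sampled CAD point clouds; the file layout and names are hypothetical:

    import open3d as o3d

    # Hypothetical index of the CAD part model point cloud library.
    CAD_LIBRARY = {
        ("flange", "F-102"): "cad/flange_F102.pcd",
        ("bracket", "B-07"): "cad/bracket_B07.pcd",
    }

    def target_template(part_type: str, part_model: str):
        return o3d.io.read_point_cloud(CAD_LIBRARY[(part_type, part_model)])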
According to the embodiment, the corresponding CAD model is called as the matching template of the target part according to the part classification result. The CAD model is widely applied in industrial production, is exquisite in design and easy to obtain, and can enhance the accuracy of pose estimation, thereby providing a basis for obtaining pose information of a part to be detected.
Based on the content of the above embodiment, as an optional embodiment, matching the part point cloud by using the target part template to obtain pose information of the part to be detected includes:
respectively extracting key points of the part point cloud and the CAD model point cloud corresponding to the target part template to obtain key points; the key points comprise part point cloud key points and CAD model point cloud key points;
acquiring a local descriptor of the key point;
carrying out consistency evaluation on the key points by using the local descriptors to obtain the corresponding relation between the part point cloud key points and the CAD model point cloud key points, and determining key point pairs according to the corresponding relation;
based on a random sampling consistency method, correcting the key point pairs, and constructing a preliminary pose transformation matrix according to a correction result;
transforming the part point cloud key points by using the preliminary pose transformation matrix to obtain part point cloud key points with initial pose transformation;
matching the part point cloud key points subjected to initial pose transformation with the CAD model point cloud key points based on a closest point iteration method to obtain a fine registration pose transformation matrix;
and determining a target transformation matrix based on the preliminary pose transformation matrix and the fine registration pose transformation matrix so as to acquire pose information of the part to be detected.
The key points may be extracted with the Intrinsic Shape Signatures (ISS) keypoint detector, or by the Scale-Invariant Feature Transform (SIFT).
As an optional embodiment, local descriptors are computed for the key points of the part point cloud and of the CAD model respectively, and consistency estimation of the key points is performed with the local descriptors; that is, the correspondence between part point cloud key points and CAD model key points is determined, and key point pairs are formed from this correspondence. Wrong key point pairs are rejected by the RANSAC algorithm, and from the remaining correct pairs a rotation matrix R and a translation matrix T are solved by least-squares fitting, giving the preliminary pose transformation matrix. This transformation minimizes the sum of distances between the key point pairs of the part point cloud and the CAD point cloud after transformation.
Local descriptors include, but are not limited to, the Signature of Histograms of Orientations (SHOT), the Point Feature Histogram (PFH), the Fast Point Feature Histogram (FPFH), and the 3D Shape Context (3DSC).
The preliminary pose transformation matrix is applied to the key points of the original part point cloud (rotation R and translation T) to obtain the part point cloud key points after the initial pose transformation.
The part point cloud key points after the initial pose transformation are then registered against the CAD model point cloud key points with the Iterative Closest Point (ICP) algorithm, giving the fine-registration pose transformation matrix, which comprises a fine-registration rotation matrix R' and a fine-registration translation matrix T'.
The final target transformation matrix is calculated by combining the fine-registration pose transformation matrix with the preliminary transformation matrix, so as to obtain the pose information of the part to be detected. The target transformation matrix comprises a final rotation matrix Rfinal and a final translation matrix Tfinal, and is the transformation matrix of the target part template based on the CAD model.
The final rotation matrix is calculated as:
Rfinal = R * R';
and the final translation matrix is calculated as:
Tfinal = T + T';
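An end-to-end sketch of this coarse-to-fine registration using Open3D, with FPFH as the local descriptor; the voxel size and derived radii are hypothetical, and the inventors' exact keypoint and descriptor choices may differ. Because ICP is initialized with the coarse estimate, the matrix it returns is already the composed transform in homogeneous form:

    import open3d as o3d
    reg = o3d.pipelines.registration

    def estimate_pose(part, cad, voxel=0.005):
        """Register the part point cloud (source) to the CAD template
        (target) and return the 4x4 homogeneous pose matrix."""
        src, dst = part.voxel_down_sample(voxel), cad.voxel_down_sample(voxel)
        for p in (src, dst):
            p.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(
                radius=2 * voxel, max_nn=30))
        feat = [reg.compute_fpfh_feature(
                    p, o3d.geometry.KDTreeSearchParamHybrid(
                        radius=5 * voxel, max_nn=100))
                for p in (src, dst)]

        # Coarse alignment: RANSAC over FPFH correspondences rejects the
        # wrong key point pairs and fits the preliminary transform.
        coarse = reg.registration_ransac_based_on_feature_matching(
            src, dst, feat[0], feat[1], True, 1.5 * voxel,
            reg.TransformationEstimationPointToPoint(), 3, [],
            reg.RANSACConvergenceCriteria(100000, 0.999))

        # Fine registration: point-to-point ICP started from the coarse
        # estimate; the result composes the coarse and fine transforms.
        fine = reg.registration_icp(
            src, dst, voxel, coarse.transformation,
            reg.TransformationEstimationPointToPoint())
        return fine.transformation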
and outputting the part classification result and the transformation matrix based on the CAD model of each point cloud identified as the part to be detected, and finishing the pose estimation process of the part in the point cloud scene.
In the embodiment, the point cloud of the part and the point cloud of the CAD model are registered through the key point, so that the pose information of the part to be detected is obtained. Because the CAD model is widely applied in industrial production, has exquisite design and is easy to obtain, the pose estimation accuracy can be enhanced, and the part pose estimation precision is effectively improved.
As an optional embodiment, aiming at the problem of part pose estimation in industrial scenes, a complete industrial pipeline is provided: scene point cloud data is obtained with a three-dimensional sensor, and the pose information of the parts in the industrial background environment is finally obtained through part point cloud separation, part category confirmation and registration. The details are as follows.
In the industrial part placing scene, a three-dimensional sensor such as a structured-light camera or a 3D camera captures the part geometry, acquiring the original point cloud of the parts to be detected.
The original point cloud data can be filtered with a pass-through filter and a statistical filter to remove noise points. The pass-through filter discards points outside a limited range along a specified direction, so when the background lies at a certain distance from the foreground it removes the background; the statistical filter removes outliers and noise points, allowing the industrial part point cloud to be separated more cleanly.
Calculating Euclidean distances among all points in the scene point cloud, segmenting the point cloud with the Euclidean distance larger than a preset distance, and taking a set formed by the point clouds with the Euclidean distances smaller than the preset distance as a unit point cloud cluster to obtain a plurality of unit point cloud clusters.
Secondly, the unit point cloud clusters are screened to obtain the part point cloud clusters. A point-count threshold interval [n_min, n_max] is set for the clusters. For each unit point cloud cluster: if the number of points in the cluster is smaller than n_min, the cluster is treated as noise and removed from all unit point cloud clusters; if the number of points is larger than n_max, the cluster is taken as the background plane and removed from all unit point cloud clusters; if the number of points lies within [n_min, n_max], the cluster is kept as a part point cloud cluster.
Random downsampling is then applied uniformly to the segmented and screened part point cloud clusters to obtain the part point clouds; after downsampling the number of points in every part point cloud cluster is kept consistent, which facilitates the subsequent classification and matching operations.
A target point is then randomly selected in the scene point cloud as a seed, and according to the Euclidean distances between points, all points reachable within the distance threshold are grouped into one cluster. The specific steps are as follows.
Initially, no point in the point cloud belongs to any cluster. If the Euclidean distance between two points is smaller than a given threshold, the two points are considered adjacent, and each is a neighbor point of the other.
First, in step 1, a target point P that does not yet belong to any cluster is randomly selected from the scene point cloud.
Further, in step 2, all neighbor points of the target point P are found by the KD-tree algorithm, and those that do not belong to any other cluster form the initial neighbor set; a neighbor point is a point of the scene point cloud whose Euclidean distance to the target point is smaller than the preset threshold, which can be set flexibly according to the point cloud density or actual requirements.
Further, in step 3, finding out the neighbor point of each point in the neighbor set through a KD tree algorithm, and adding neighbors which do not belong to any other cluster into the neighbor set to construct the neighbor set of any neighbor point; the neighbor set of any neighbor includes all points in the initial neighbor set as well as all neighbors of each point in the initial neighbor set.
Further, in step 4, the neighbor set of any neighbor point is taken as a new initial neighbor set, and step 3 is iteratively executed to obtain a new neighbor set until the points in the new neighbor set are not increased any more.
Further, in step 5, the target point P and all the points in the neighboring set are regarded as a target point cloud cluster.
Further, in step 6, any point which does not belong to the target point cloud cluster is randomly selected from the scene point clouds to serve as a new target point, steps 2 to 5 are iteratively executed until the segmentation of all the point clouds in the scene point cloud is completed, and each target point cloud cluster is taken as a unit point cloud cluster. So far, all the points belong to a certain cluster, and the clustering process is finished.
A part posture classification model is constructed based on DGCNN, and the part point cloud is input to it. In the model, point cloud neighborhood information is repeatedly extracted from the input part point cloud (covering both coordinate space and feature space) by 4 EdgeConv modules, yielding features at different network levels; these features are concatenated, and the global feature of the point cloud is obtained by a max-pooling operation; the global feature is then classified by 3 fully connected layers, yielding the posture classification result of the part. Extracting neighborhood information several times enlarges the receptive field, so richer features are obtained in the deeper network layers.
And determining the part type and the part model corresponding to any target part point cloud according to the part classification result, and determining the CAD model point cloud with the consistent type and model in a CAD part model point cloud library as a target part template corresponding to the target part point cloud.
Key points are extracted from the part point cloud and from the CAD model point cloud corresponding to the target part template; the key points comprise part point cloud key points and CAD model point cloud key points, and can be extracted with the Intrinsic Shape Signatures (ISS) detector or the SIFT algorithm.
Local descriptors are then computed for the key points of the part point cloud and of the CAD model respectively, and consistency estimation of the key points is performed with the local descriptors; that is, the correspondence between part point cloud key points and CAD model key points is determined, and key point pairs are formed from this correspondence. Wrong key point pairs are rejected by the RANSAC algorithm, and from the remaining correct pairs a rotation matrix R and a translation matrix T are solved by least-squares fitting to construct the preliminary pose transformation matrix. This transformation minimizes the sum of distances between the key point pairs of the part point cloud and the CAD point cloud after transformation.
The preliminary pose transformation matrix is applied to the key points of the original part point cloud (rotation R and translation T) to obtain the part point cloud key points after the initial pose transformation.
The part point cloud key points after the initial pose transformation are then registered against the CAD model point cloud key points with the Iterative Closest Point (ICP) algorithm, giving the fine-registration pose transformation matrix, which comprises a fine-registration rotation matrix R' and a fine-registration translation matrix T'.
The final target transformation matrix is calculated by combining the fine-registration pose transformation matrix with the preliminary transformation matrix, so as to obtain the pose information of the part to be detected. The target transformation matrix comprises a final rotation matrix Rfinal and a final translation matrix Tfinal, and is the transformation matrix of the target part template based on the CAD model.
The final rotation matrix is calculated as:
Rfinal = R * R';
and the final translation matrix is calculated as:
Tfinal = T + T';
and outputting the part classification result and the transformation matrix based on the CAD model of each point cloud identified as the part to be detected, and finishing the pose estimation process of the part in the point cloud scene.
In this embodiment, the original point cloud obtained by the three-dimensional sensor is processed by combining traditional three-dimensional point cloud algorithms with cutting-edge deep learning; the pose information of the part to be detected is determined through classification and template matching of the part point cloud, which improves the efficiency and accuracy of industrial part pose estimation, provides support for judging the part pose offset (position and angle) in industrial scenes, and solves the pose estimation problem of part grasping by devices such as robotic arms in industrial scenes.
FIG. 2 is a schematic structural diagram of an industrial part pose estimation system provided by the present invention, as shown in FIG. 2, including but not limited to the following units:
an obtaining unit 201, configured to obtain a part point cloud of a part to be detected;
the classification unit 202 is used for classifying the point cloud of the part based on the part posture classification model to obtain a part classification result;
a determining unit 203, configured to determine a target part template according to the part point cloud and the part classification result;
the matching unit 204 is configured to match the part point cloud by using the target part template to obtain pose information of the part to be detected.
In one embodiment, a part point cloud of a part to be measured is first acquired by the acquisition unit 201; in the classification unit 202, based on the part posture classification model, the part point cloud output by the obtaining unit 201 is classified to obtain a part classification result; the determining unit 203 determines a target part template according to the part point cloud output by the acquiring unit 201 and the part classification result output by the classifying unit 202; the matching unit 204 matches the part point cloud output by the acquisition unit 201 by using the target part template determined by the determination unit 203 to acquire pose information of the part to be detected.
First, a part point cloud may be collected in an industrial scene, or a previously collected part point cloud may be loaded. Given the complex environment of industrial scenes, when industrial part data is acquired the point cloud can be captured from directly above the scene or from beside it, and the viewpoint can be set according to actual requirements, so as to obtain the part point cloud of the part to be detected.
The equipment for collecting the part point cloud can be a three-dimensional sensor such as a structured-light camera or a 3D depth camera.
As an optional embodiment, the obtaining unit 201 obtains the camera intrinsics through depth-camera calibration and converts the collected depth map of the part into a three-dimensional point cloud. According to the clustering characteristics of the point cloud, the three-dimensional point cloud is segmented by the RANSAC algorithm to separate the part point cloud from the background point cloud; the background point cloud is removed and the part point cloud of the part to be detected is obtained.
Further, the classification unit 202 classifies the point cloud of the part based on the part posture classification model, and obtains a part classification result.
The part posture classification model can be a trained PointNet neural network model: the model concatenates local and global features of the part point cloud, fuses them with a multilayer perceptron, and finally performs point-by-point classification with a classifier, yielding the classification result of the part point cloud.
Optionally, before being fed into the part posture classification model, the part point cloud can be zero-mean normalized to ensure a good classification effect.
Further, the determination unit 203 determines a target part template according to the part classification result.
The target part template can be a CAD template stored in a part template library, and can also be a part point cloud template stored in PCL.
Further, the matching unit 204 uses key points of the part point cloud and of the target part template to match and register the part point cloud against the part template, obtaining the pose information of the part to be detected.
According to the industrial part pose estimation system provided by the invention, the pose information of the part to be detected is determined through point cloud classification and template matching of the part to be detected; this improves the efficiency and accuracy of industrial part pose estimation, provides support for judging the part pose offset (position and angle) in industrial scenes, and solves the pose estimation problem of part grasping by devices such as robotic arms in industrial scenes.
It should be noted that, when being specifically executed, the industrial part pose estimation system provided in the embodiment of the present invention may be implemented based on the industrial part pose estimation method described in any of the above embodiments, and details of this embodiment are not described herein.
Fig. 3 is a schematic structural diagram of an electronic device provided by the present invention. As shown in Fig. 3, the electronic device may include: a processor 310, a communications interface 320, a memory 330 and a communication bus 340, wherein the processor 310, the communications interface 320 and the memory 330 communicate with each other via the communication bus 340. The processor 310 may invoke logic instructions in the memory 330 to perform an industrial part pose estimation method comprising: acquiring a part point cloud of a part to be detected; classifying the part point cloud based on the part posture classification model to obtain a part classification result; determining a target part template according to the part classification result; and matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, which when executed by a computer, enable the computer to execute the industrial part pose estimation method provided by the above methods, the method including: acquiring a part point cloud of a part to be detected; classifying the part point cloud based on the part posture classification model to obtain a part classification result; determining a target part template according to the part classification result; and matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to execute the industrial part pose estimation method provided by the above embodiments, the method including: acquiring a part point cloud of a part to be detected; classifying the part point cloud based on the part posture classification model to obtain a part classification result; determining a target part template according to the part classification result; and matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
The above-described apparatus embodiments are merely illustrative. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An industrial part pose estimation method, characterized by comprising the following steps:
acquiring a part point cloud of a part to be detected;
classifying the part point cloud based on a part posture classification model to obtain a part classification result;
determining a target part template according to the part classification result;
and matching the part point cloud by using the target part template to acquire the pose information of the part to be detected.
2. The industrial part pose estimation method according to claim 1, wherein acquiring a part point cloud of a part to be detected comprises:
filtering an original point cloud collected in an industrial part placement scene to obtain a scene point cloud;
and performing point cloud separation processing on the scene point cloud to obtain the part point cloud containing only the part to be detected.
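(For illustration only; the following sketch is an editorial aid, not part of the claimed subject matter. It assumes the raw scan is an (N, 3) NumPy array; the function name filter_scene and the parameters k and std_ratio are assumptions of this sketch, one plausible statistical outlier filter among many.)

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_scene(raw: np.ndarray, k: int = 16, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbors is
    abnormally large (a common statistical outlier filter)."""
    dists, _ = cKDTree(raw).query(raw, k=k + 1)   # k+1 because each point finds itself
    mean_d = dists[:, 1:].mean(axis=1)            # ignore the zero self-distance
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return raw[keep]                              # the scene point cloud
```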
3. The industrial part pose estimation method according to claim 2, wherein performing point cloud separation processing on the scene point cloud of the industrial part placement scene to obtain the part point cloud containing only the part to be detected comprises:
segmenting the scene point cloud according to the Euclidean distances between the points in the scene point cloud, and determining all unit point cloud clusters;
screening the unit point cloud clusters according to the number of points in each unit point cloud cluster to obtain a part point cloud cluster;
and performing random down-sampling on the part point cloud cluster to obtain the part point cloud.
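(For illustration only: a minimal sketch of the screening and down-sampling steps above, assuming each unit point cloud cluster is an array of point indices as produced by the clustering sketch that follows claim 4; min_pts, max_pts and n_sample are illustrative parameters, not claimed values.)

```python
import numpy as np

def select_part_cloud(points: np.ndarray, clusters: list,
                      min_pts: int = 500, max_pts: int = 50000,
                      n_sample: int = 1024) -> np.ndarray:
    """Screen unit point cloud clusters by size, then randomly down-sample
    the surviving part cluster to a fixed number of points."""
    candidates = [c for c in clusters if min_pts <= len(c) <= max_pts]
    part = points[max(candidates, key=len)]       # e.g. keep the largest candidate
    rng = np.random.default_rng(seed=0)
    idx = rng.choice(len(part), size=min(n_sample, len(part)), replace=False)
    return part[idx]                              # the part point cloud
```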
4. The industrial part pose estimation method according to claim 3, wherein segmenting the scene point cloud according to the Euclidean distances between the points in the scene point cloud, based on a preset distance threshold, to determine all unit point cloud clusters comprises the following steps:
step 1, randomly determining a target point in the scene point cloud;
step 2, acquiring all neighbor points of the target point based on a neighbor search method to construct an initial neighbor set of the target point, wherein a neighbor point is a point in the scene point cloud whose Euclidean distance to the target point is smaller than the preset distance threshold;
step 3, determining, based on the neighbor search method, all neighbor points of each neighbor point in the initial neighbor set to construct an expanded neighbor set, wherein the expanded neighbor set includes all points in the initial neighbor set and all neighbor points of each point in the initial neighbor set;
step 4, taking the expanded neighbor set as a new initial neighbor set, and iteratively executing step 3 to obtain a new neighbor set until no new points are added to the new neighbor set;
step 5, determining a target point cloud cluster according to the new neighbor set, wherein the target point cloud cluster comprises the target point and the points in the new neighbor set;
and step 6, randomly selecting, from the scene point cloud, a point that does not belong to any target point cloud cluster as a new target point, and iteratively executing steps 2 to 5 until all points in the scene point cloud have been segmented, each target point cloud cluster serving as a unit point cloud cluster.
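(For illustration only: the region-growing segmentation of steps 1 to 6 can be sketched in Python with a k-d tree standing in for the claimed neighbor search method; cluster_scene and dist_thresh are illustrative names.)

```python
# A minimal sketch of the Euclidean clustering in claim 4, assuming the scene
# point cloud is an (N, 3) NumPy array. Names are illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def cluster_scene(points: np.ndarray, dist_thresh: float) -> list:
    """Grow clusters of points whose Euclidean distance chains stay
    below dist_thresh (steps 1 to 6 of claim 4)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:                      # step 6: repeat until all points are assigned
        seed = unvisited.pop()            # step 1: pick an unassigned target point
        cluster = {seed}
        frontier = [seed]
        while frontier:                   # steps 2-4: expand until no new neighbors appear
            idx = frontier.pop()
            for nbr in tree.query_ball_point(points[idx], r=dist_thresh):
                if nbr in unvisited:
                    unvisited.remove(nbr)
                    cluster.add(nbr)
                    frontier.append(nbr)
        clusters.append(np.fromiter(cluster, dtype=int))  # step 5: one unit point cloud cluster
    return clusters
```

One design note: maintaining an explicit frontier makes the iterative expansion of steps 3 and 4 terminate exactly when the neighbor set stops growing, which matches the stopping condition recited in step 4.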
5. The industrial part pose estimation method according to claim 1, wherein classifying the part point cloud based on the part posture classification model to obtain the part classification result comprises:
performing feature extraction on the part point cloud based on the part posture classification model to obtain point cloud neighborhood information, wherein the part posture classification model is constructed based on a dynamic graph convolutional neural network (DGCNN);
concatenating the point cloud neighborhood information, and obtaining the global features of the part point cloud through a max-pooling operation;
and classifying the global features to obtain the part classification result.
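(For illustration only: a minimal PyTorch sketch of the DGCNN-style pipeline in claim 5 — k-nearest-neighbor edge features, concatenation of center and relative coordinates, max pooling to a global feature, then classification. The layer widths, k = 20 and all names are assumptions of this sketch, not the claimed network.)

```python
import torch
import torch.nn as nn

def knn_edge_features(x: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Build (B, N, k, 6) edge features [x_i, x_j - x_i] from the k nearest
    neighbors of every point -- the 'point cloud neighborhood information'."""
    dist = torch.cdist(x, x)                                 # (B, N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[..., 1:]   # drop the point itself
    nbrs = torch.gather(
        x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
        idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
    center = x.unsqueeze(2).expand_as(nbrs)
    return torch.cat([center, nbrs - center], dim=-1)        # concatenation step

class PoseClassifier(nn.Module):
    def __init__(self, n_classes: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                               # x: (B, N, 3) part point cloud
        e = self.mlp(knn_edge_features(x, self.k))      # per-edge features
        local = e.max(dim=2).values                     # max over each point's neighbors
        global_feat = local.max(dim=1).values           # max pooling -> global feature
        return self.head(global_feat)                   # part classification result
```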
6. The industrial part pose estimation method according to claim 1, wherein determining the target part template according to the part classification result comprises:
determining, according to the part classification result, a CAD model point cloud corresponding to the part point cloud from a CAD part model point cloud library to serve as the target part template.
7. The industrial part pose estimation method according to claim 6, wherein matching the part point cloud by using the target part template to acquire the pose information of the part to be detected comprises:
extracting key points from the part point cloud and from the CAD model point cloud corresponding to the target part template, respectively, the key points comprising part point cloud key points and CAD model point cloud key points;
acquiring a local descriptor of each key point;
performing consistency evaluation on the key points by using the local descriptors to obtain the correspondence between the part point cloud key points and the CAD model point cloud key points, and determining key point pairs according to the correspondence;
rejecting outliers among the key point pairs based on a random sample consensus (RANSAC) method, and constructing a preliminary pose transformation matrix from the retained pairs;
transforming the part point cloud key points by using the preliminary pose transformation matrix to obtain initially transformed part point cloud key points;
matching the initially transformed part point cloud key points with the CAD model point cloud key points based on an iterative closest point (ICP) method to obtain a fine-registration pose transformation matrix;
and determining a target transformation matrix based on the preliminary pose transformation matrix and the fine-registration pose transformation matrix, so as to acquire the pose information of the part to be detected.
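(For illustration only: assuming the preliminary pose transformation matrix T_coarse from the RANSAC step is already available, the ICP refinement and the final composition can be sketched in NumPy as follows; icp_refine, best_rigid_transform and n_iter are illustrative names, and a production ICP would add a convergence test and outlier rejection.)

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform mapping point set A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cb - R @ ca
    return T

def icp_refine(src: np.ndarray, dst: np.ndarray,
               T_coarse: np.ndarray, n_iter: int = 30) -> np.ndarray:
    """Apply the preliminary transform, iterate closest-point matching,
    and return the composed target transformation matrix."""
    cur = (T_coarse @ np.c_[src, np.ones(len(src))].T).T[:, :3]
    tree = cKDTree(dst)
    T_fine = np.eye(4)
    for _ in range(n_iter):              # closest-point iteration
        nn = dst[tree.query(cur)[1]]     # nearest CAD model key point for each point
        step = best_rigid_transform(cur, nn)
        cur = (step @ np.c_[cur, np.ones(len(cur))].T).T[:, :3]
        T_fine = step @ T_fine
    return T_fine @ T_coarse             # target transform = fine-registration o coarse
```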
8. An industrial part pose estimation system, comprising:
an acquisition unit, configured to acquire a part point cloud of a part to be detected;
a classification unit, configured to classify the part point cloud based on a part posture classification model to obtain a part classification result;
a determining unit, configured to determine a target part template according to the part classification result;
and a matching unit, configured to match the part point cloud by using the target part template to acquire the pose information of the part to be detected.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the industrial part pose estimation method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the industrial part pose estimation method according to any one of claims 1 to 7.
CN202110455776.2A 2021-04-26 2021-04-26 Industrial part pose estimation method and system Pending CN113128610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110455776.2A CN113128610A (en) 2021-04-26 2021-04-26 Industrial part pose estimation method and system

Publications (1)

Publication Number Publication Date
CN113128610A true CN113128610A (en) 2021-07-16

Family

ID=76780059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110455776.2A Pending CN113128610A (en) 2021-04-26 2021-04-26 Industrial part pose estimation method and system

Country Status (1)

Country Link
CN (1) CN113128610A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018189510A (en) * 2017-05-08 2018-11-29 株式会社マイクロ・テクニカ Method and device for estimating position and posture of three-dimensional object
CN110948492A (en) * 2019-12-23 2020-04-03 浙江大学 Three-dimensional grabbing platform and grabbing method based on deep learning
CN111251295A (en) * 2020-01-16 2020-06-09 清华大学深圳国际研究生院 Visual mechanical arm grabbing method and device applied to parameterized parts
CN111515945A (en) * 2020-04-10 2020-08-11 广州大学 Control method, system and device for mechanical arm visual positioning sorting and grabbing
CN111652085A (en) * 2020-05-14 2020-09-11 东莞理工学院 Object identification method based on combination of 2D and 3D features
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112651944A (en) * 2020-12-28 2021-04-13 哈尔滨工业大学(深圳) 3C component high-precision six-dimensional pose estimation method and system based on CAD model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANG JUN: "Research on Building Extraction in Mining Areas by Fusing LiDAR Point Clouds and Image Data", 31 July 2020, Xi'an Jiaotong University Press, pages 71-73 *
LI DONG: "High-Precision Structured-Light Vision Measurement and Workpiece Pose Recognition", Engineering Science and Technology II, no. 02, pages 030-69 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724329A (en) * 2021-09-01 2021-11-30 中国人民大学 Object attitude estimation method, system and medium fusing plane and stereo information
CN114742789A (en) * 2022-04-01 2022-07-12 中国科学院国家空间科学中心 General part picking method and system based on surface structured light and electronic equipment
CN114742789B (en) * 2022-04-01 2023-04-07 桂林电子科技大学 General part picking method and system based on surface structured light and electronic equipment
CN114453981A (en) * 2022-04-12 2022-05-10 北京精雕科技集团有限公司 Workpiece alignment method and device
CN114453981B (en) * 2022-04-12 2022-07-19 北京精雕科技集团有限公司 Workpiece alignment method and device
CN115049730A (en) * 2022-05-31 2022-09-13 北京有竹居网络技术有限公司 Part assembling method, part assembling device, electronic device and storage medium
CN115049730B (en) * 2022-05-31 2024-04-26 北京有竹居网络技术有限公司 Component mounting method, component mounting device, electronic apparatus, and storage medium
WO2023238451A1 (en) * 2022-06-09 2023-12-14 日産自動車株式会社 Component inspection method and component inspection device
CN115338874A (en) * 2022-10-19 2022-11-15 爱夫迪(沈阳)自动化科技有限公司 Laser radar-based robot real-time control method
CN115338874B (en) * 2022-10-19 2023-01-03 爱夫迪(沈阳)自动化科技有限公司 Real-time robot control method based on laser radar

Similar Documents

Publication Publication Date Title
CN113128610A (en) Industrial part pose estimation method and system
US11144787B2 (en) Object location method, device and storage medium based on image segmentation
JP5677798B2 (en) 3D object recognition and position and orientation determination method in 3D scene
EP2720171B1 (en) Recognition and pose determination of 3D objects in multimodal scenes
JP6395481B2 (en) Image recognition apparatus, method, and program
CN112836734A (en) Heterogeneous data fusion method and device and storage medium
JP6912215B2 (en) Detection method and detection program to detect the posture of an object
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
Patterson et al. Object detection from large-scale 3d datasets using bottom-up and top-down descriptors
CN112712589A (en) Plant 3D modeling method and system based on laser radar and deep learning
CN116229189B (en) Image processing method, device, equipment and storage medium based on fluorescence endoscope
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN111553422A (en) Automatic identification and recovery method and system for surgical instruments
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN111127556A (en) Target object identification and pose estimation method and device based on 3D vision
Srivastava et al. Drought stress classification using 3D plant models
Zhong et al. Copy-move forgery detection using adaptive keypoint filtering and iterative region merging
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN111950556A (en) License plate printing quality detection method based on deep learning
CN108985294B (en) Method, device and equipment for positioning tire mold picture and storage medium
CN115409938A (en) Three-dimensional model construction method, device, equipment and storage medium
CN113111741A (en) Assembly state identification method based on three-dimensional feature points
CN113837106A (en) Face recognition method, face recognition system, electronic equipment and storage medium
CN117495891B (en) Point cloud edge detection method and device and electronic equipment
Rangel et al. Object recognition in noisy rgb-d data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination