CN110942515A - Point cloud-based target object three-dimensional computer modeling method and target identification method


Info

Publication number
CN110942515A
Authority
CN
China
Prior art keywords
point
model
point cloud
scene
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911175716.4A
Other languages
Chinese (zh)
Inventor
鲁荣荣
付兴银
刘骁
李广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201911175716.4A priority Critical patent/CN110942515A/en
Publication of CN110942515A publication Critical patent/CN110942515A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a point cloud-based method for three-dimensional computer modeling of a target object, a point cloud-based three-dimensional target identification method, and a part sorting system. The modeling method comprises the following steps: acquiring point cloud data of a target object; adaptively extracting sample points within a preset number range, and selecting valid sample point combinations as model point pairs according to the relative position relationship of the sample points; calculating an enhanced feature descriptor for each model point pair, calculating the hash value of each model point pair from its feature descriptor, and generating a point cloud model of the target object from the hash values of all model point pairs. The method enables a computing device to automatically generate a digital model representation of an object from three-dimensional target data, forming a novel industrial digital model of three-dimensional objects that is quick to build, unrestricted by the form of the target object, and convenient to identify and compare.

Description

Point cloud-based target object three-dimensional computer modeling method and target identification method
Technical Field
The invention relates to the technical field of image processing and industrial automation, in particular to a point cloud-based target object three-dimensional computer modeling method, a point cloud-based three-dimensional target identification method and a part sorting system.
Background
Currently, with the development of Artificial Intelligence (AI) technology and computer vision technology, intelligent automatic sorting of industrial parts is gradually becoming possible in industrial production, and the maturity of the technology is continuously improving. Intelligent sorting first relies on AI and computer vision to recognize and locate the position and posture of the target; the target is then grabbed by a mechanical arm and placed at a specified position. Intelligent sorting is one of the key technologies enabling factories to improve productivity and save labor costs. Under current technical conditions, identification and positioning of the target are mostly realized by comparing a known 2D/3D model of the target with image and point cloud data acquired by the system, thereby identifying and locating the pose of the target in the acquired scene data.
However, among the technical products currently on the market there is no recognized mature and robust intelligent sorting method for industrial parts. The related art generally identifies and locates objects by establishing a 2D image template and registering it against acquired 2D images of the scene. Methods based on 2D images can typically only handle object recognition and localization in single-background, clutter-free scenes. There are also some 3D-feature-based methods, which typically build local feature descriptors of objects at highly distinctive locations based on 3D models of the objects, and identify and locate the objects by matching specific local features. Methods based on 3D local feature descriptors have difficulty achieving ideal results when the target features are not distinctive or the accuracy of the acquired 3D data is poor.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the related art to a certain extent, and provides a point cloud-based target object three-dimensional computer modeling method, a point cloud-based three-dimensional target identification method and a part sorting system according to the modeling method.
In order to achieve the above object, an embodiment of the first aspect of the present invention provides a point cloud-based target object three-dimensional computer modeling method and a point cloud-based target object three-dimensional computer modeling apparatus.
The modeling method comprises the following steps:
acquiring point cloud data of a target object;
adaptively extracting sample points within a preset number range from the point cloud data of the target object, wherein the sample points are uniformly distributed over the target object in the global space;
selecting effective sample point combinations as model point pairs according to the relative position relation of the sample points in the sample point set;
calculating an enhanced feature descriptor of the model point pair, wherein the enhanced feature descriptor comprises the Euclidean distance between the two sample points of each model point pair and information on the interrelation between the direction of the vector connecting the two sample points and their normal vectors; and
calculating the hash value of the model point pair according to the feature descriptor, and generating a point cloud model of the target object according to the hash values of all the model point pairs.
In some embodiments, adaptively extracting sample points within a preset number range from the point cloud data of the target object includes:
dividing the voxels, namely uniformly dividing the target object into a plurality of voxels with the same shape;
extracting alternative sample points, and selecting a point closest to the geometric center of the voxel in each voxel space from the point cloud data as an alternative sample point;
judging whether the total number of the alternative sample points falls into the preset number range or not;
if the total number of the alternative sample points does not fall into the preset number range, repeating the step of voxel division, changing the volume of the voxel, and executing the step of alternative sample point extraction again;
and if the total number of the alternative sample points falls into the preset number range, taking the currently extracted alternative sample points as the sample points.
In some embodiments, selecting valid sample point combinations as model point pairs according to the relative position relationship of the sample points in the set of sample points includes:
combining the sample points pairwise to obtain a set of sample point pairs;
removing point pairs which do not satisfy a view point visibility constraint from the set of sample point pairs, and/or removing point pairs which do not satisfy a normal vector parallel constraint from the sample point pairs; and
taking the remaining sample point pairs in the set of sample point pairs as the model point pairs.
In some embodiments, for any point pair (m_r, m_i), the vectors n_r and n_i are the normal vectors of points m_r and m_i, respectively.
Culling, from the sample point pairs, the point pairs that do not satisfy the viewpoint visibility constraint comprises: eliminating all point pairs whose normal vector inner product ρ(m_r, m_i) = n_r · n_i is less than a first threshold τ_min; and/or
rejecting, from the sample point pairs, the point pairs that do not satisfy the normal vector parallel constraint comprises: eliminating all point pairs whose normal vector inner product ρ(m_r, m_i) = n_r · n_i is greater than a second threshold τ_max.
In some embodiments, the first threshold τ_min = cos(π/2) and the second threshold τ_max = cos(π/(2N_ang)), where N_ang represents the maximum discrete number of the angular components and is an integer in the range of 10 to 30.
In some embodiments, said computing a feature descriptor for said pair of model points comprises:
for a model point pair (p, q), its feature descriptor F_e(p, q) is calculated as follows:
F_e(p, q) = (||d||_2, ∠(n_1, n_3), ∠(n_2, n_3), δ(n·n_2)∠(n_1, n_2))
wherein d is the vector from point p to point q, ||d||_2 represents the Euclidean distance between point p and point q, the vectors n_1 and n_2 are the normal vectors of point p and point q respectively, n_3 = d/||d||_2, ∠(n_1, n_3) represents the angle between vectors n_1 and n_3, ∠(n_2, n_3) the angle between vectors n_2 and n_3, and ∠(n_1, n_2) the angle between vectors n_1 and n_2; n = n_3 × n_1, and δ(n·n_2) = 1 if and only if n·n_2 ≥ 0, otherwise δ(n·n_2) = -1.
In some embodiments, said computing a hash value of said pair of model points from said feature descriptor comprises:
normalizing and discretizing and rounding each component of the feature descriptors of the model point pairs to obtain integer vector representation of the feature descriptors;
taking the integer vector as a coefficient of a polynomial function, and taking a value of the polynomial function as a hash value;
wherein the normalizing comprises transforming a fourth component of the feature descriptor to [0, π ] by translational biasing.
In some embodiments, the calculating the hash value of the model point pair according to the feature descriptor and generating the point cloud model of the target object according to the hash values of the model point pairs further includes:
obtaining the rotation angle α_m of each of the model point pairs and storing it into the point cloud model, wherein, denoting by T_m→g the coordinate transformation that takes the first point m_r of a model point pair (m_r, m_i) to the origin of the global coordinate system, the rotation angle α_m is the angle through which T_m→g m_i must be rotated about the x-axis to lie in the plane defined by the x-axis and the non-negative y half-axis.
The invention provides a point cloud-based target object three-dimensional computer modeling device, which comprises:
the point cloud acquisition module is used for acquiring point cloud data of a target object;
the sample point acquisition module is used for extracting sample points in a preset number range in the point cloud data of the target object in a self-adaptive manner, wherein the sample points are uniformly distributed on the target object in a global space;
the effective sample point screening module is used for selecting an effective sample point combination as a model point pair according to the relative position relation of the sample points in the sample point set;
the feature descriptor calculation module is used for calculating an enhanced feature descriptor of the model point pair, wherein the enhanced feature descriptor comprises the Euclidean distance between the two sample points of each model point pair and information on the interrelation between the direction of the vector connecting the two sample points and their normal vectors; and
and the hash module is used for calculating the hash value of the model point pair according to the feature descriptor and generating a point cloud model of the target object according to the hash values of all the model point pairs.
The modeling method and apparatus of the invention replace the special-point sampling of the prior art with uniform sampling of model points, screen sample point pairs by their relative position relationship to remove invalid and interfering data, set up enhanced feature descriptors, and combine these with the construction of a sample point pair hash table. A computing device can thereby automatically generate a digital model representation of an object from three-dimensional target data, forming a novel industrial digital model of three-dimensional objects that is quick to build, unrestricted by the form of the target object, and convenient to identify and compare, for industrial automation scenarios such as part sorting and scene object recognition, as well as everyday scenarios. The method eliminates the dependence on local feature recognition of the target object at the sampling stage, and is therefore applicable to objects without distinctive features, which the prior art cannot represent accurately; it has good robustness, and since no feature recognition of the target object is needed and sampling points are selected uniformly, modeling is faster. Further, the rotation angles of the point pairs are calculated and stored in the model, which reduces the amount of calculation when the model is used for object recognition.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides a method and an apparatus for identifying a three-dimensional object based on a point cloud. The identification method comprises the following steps:
acquiring a point cloud model of a target object, wherein the point cloud model of the target object is established according to the method of the first aspect of the invention;
acquiring point cloud data according to a stereo image of a working scene acquired by a three-dimensional information acquisition device, wherein the stereo image of the working scene is subjected to the same adaptive uniform spatial sampling as the point cloud model to obtain scene point cloud data; and
and identifying whether an object to be identified which accords with the characteristics of the target object exists in the working scene or not and the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object.
In some embodiments, the identifying, according to the scene point cloud and the point cloud model of the target object, whether an object to be identified conforming to the characteristics of the target object exists in the working scene, and the pose of the object to be identified, comprises:
noise data are filtered, and the scene point cloud is preprocessed to obtain effective point cloud data;
randomly selecting a preset number N_r of points from the effective point cloud data as reference points, forming a reference point set Q;
For each reference point in the reference point set Q, selecting a point in a preset neighborhood range of the reference point from the effective point cloud data as a neighborhood scene point, and constructing a neighborhood scene point pair in the preset neighborhood range according to the reference point and the neighborhood scene point;
calculating the hash value of the neighborhood scene point pair, and searching a model point pair matched with the neighborhood scene point pair in the point cloud model of the target object according to the hash value of the neighborhood scene point pair; and
identifying a coarse recognition result of the object to be identified and its pose in the working scene according to the positions and numbers of the neighborhood scene point pairs of each reference point in the reference point set Q and of the model point pairs matched with them.
In some embodiments, the preprocessing the scene point cloud and filtering noise data to obtain valid point cloud data includes:
performing pass-through filtering on the point cloud data to remove points located relatively far away; and/or
performing plane segmentation according to the point cloud data and removing invalid points from the point cloud data.
In some embodiments, the identifying the object to be identified in the working scene and calculating its pose according to the positions and numbers of the neighborhood scene point pairs of each reference point in the reference point set Q and of the matched model point pairs comprises:
for each reference point s_r, constructing a corresponding two-dimensional accumulator A_r, wherein the first dimension of the two-dimensional accumulator A_r corresponds to the sample points of the model of the target object, and the second dimension of the two-dimensional accumulator A_r corresponds to a division of the pose space of the object to be identified into a number of pose blocks formed by a plurality of intervals; performing accumulated voting at the corresponding coordinate points in the two-dimensional accumulator A_r according to the number and positions of the model point pairs matched with the neighborhood scene point pairs;
determining, according to the result of the accumulated voting, the model sample point and the pose block corresponding to the reference point s_r, and taking the intermediate value of the corresponding pose block as the hypothesized pose corresponding to the reference point s_r; and
obtaining a candidate pose set T_H from the model sample points and hypothesized poses corresponding to all reference points in the reference point set Q, and determining the real pose of the object to be recognized from the candidate pose set T_H.
In some embodiments, the obtaining a point cloud model of the object to be identified further comprises:
obtaining a hash table composed of the hash values of the model point pairs of the object to be identified, and obtaining the rotation angle α_m of each model point pair, wherein the rotation angle α_m is the angle through which T_m→g m_i must be rotated about the x-axis to lie in the plane defined by the x-axis and the non-negative y half-axis, after the first point m_r of the model point pair (m_r, m_i) has been transformed to the origin of the global coordinate system by T_m→g;
the performing accumulated voting at the corresponding coordinate points in the two-dimensional accumulator A_r according to the number and positions of the model point pairs matched with the neighborhood scene point pairs comprises: determining the coordinate position of a vote according to the rotation angle α_s of the neighborhood scene point pair and the rotation angle α_m of the matched model point pair, wherein the rotation angle α_s of the scene point pair is the angle through which T_s→g s_i must be rotated about the x-axis to lie in the plane defined by the x-axis and the non-negative y half-axis, after the first point s_r of the scene point pair (s_r, s_i) has been transformed to the origin of the global coordinate system by T_s→g.
In some embodiments, the determining the true pose of the object to be recognized from the candidate pose set T_H comprises:
clustering the poses in the candidate pose set T_H to obtain aggregate poses, and eliminating invalid aggregate poses containing relatively few poses to obtain effective aggregate poses as coarse matching poses;
for each coarse matching pose, transforming the target model into the scene according to the coarse matching pose and performing pose optimization;
rejecting, according to the result of the pose optimization and the degree of fit of the target model, the poses that do not meet preset fitting conditions, wherein the preset fitting conditions comprise a pose residual condition and a point cloud overlap rate; and
when a pose meeting the preset fitting conditions remains after the rejection, judging that an object to be recognized conforming to the characteristics of the target object exists in the working scene, and taking the pose meeting the preset fitting conditions as the real pose of the object to be recognized;
when no pose meeting the preset fitting conditions remains after the rejection, judging that no object to be recognized conforming to the characteristics of the target object exists in the working scene.
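As an illustration of the clustering step, the following Python sketch greedily groups candidate poses and discards sparsely supported clusters; reducing each pose to its translation vector and the distance threshold are simplifying assumptions, not details from the patent:

```python
import numpy as np

def cluster_poses(candidate_translations, dist_eps=0.01, min_members=3):
    """Greedy sketch of pose clustering: candidate poses (reduced here to
    their translation vectors) are merged when mutually close, and clusters
    with too few members are rejected as invalid aggregate poses."""
    clusters = []  # each entry: [seed translation, list of member translations]
    for t in candidate_translations:
        for seed, members in clusters:
            if np.linalg.norm(t - seed) < dist_eps:
                members.append(t)
                break
        else:
            clusters.append([t, [t]])
    # Effective aggregate poses: mean of each sufficiently populated cluster.
    return [np.mean(members, axis=0)
            for seed, members in clusters
            if len(members) >= min_members]
```

In a full implementation the rotation part of each pose would also enter the distance test, and each surviving aggregate pose would be refined and verified against the fitting conditions described above.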
In some embodiments, more than two targets may be identified, comprising: acquiring point cloud models of two or more target objects of different shapes; and identifying, according to the matching result of the scene point cloud and the point cloud models of the target objects, whether objects to be identified conforming to the characteristics of the target objects exist in the working scene, together with their poses, which includes the following steps:
identifying and stripping a single target: specifically, identifying whether an object to be identified conforming to the characteristics of the current target object exists in the scene; if such an object exists, recording the recognition result and removing the scene points corresponding to it from the scene point cloud; and
repeating the identify-and-strip steps for each target object until all target objects have been traversed or the number of remaining scene points in the scene point cloud falls below a preset scene point threshold.
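The following Python sketch outlines this identify-and-strip loop; the recognizer is injected as a function, and all names are illustrative assumptions rather than the patent's API:

```python
import numpy as np

def identify_and_strip(scene_points, models, match_model, min_points=100):
    """Sketch of the multi-target loop: for each target model, try to
    recognize one instance, record it, and strip its scene points before
    moving on, stopping early when too few scene points remain.
    `match_model(points, model)` is an assumed recognizer that returns
    (pose, boolean_mask_of_matched_points) or None."""
    results = []
    for model in models:
        if len(scene_points) < min_points:
            break  # preset scene point threshold reached
        hit = match_model(scene_points, model)
        if hit is not None:
            pose, mask = hit
            results.append((model, pose))
            scene_points = scene_points[~mask]  # strip the recognized object
    return results
```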
The invention provides a three-dimensional target recognition device based on point cloud, comprising:
the model acquisition module is used for acquiring a point cloud model of a target object, wherein the point cloud model of the target object is established according to the method of the first aspect of the invention;
the scene point cloud acquisition module is used for acquiring point cloud data according to a stereo image of a working scene acquired by the three-dimensional information acquisition device, wherein the stereo image of the working scene is subjected to the same adaptive uniform spatial sampling as the point cloud model to obtain scene point cloud data; and
and the point cloud matching module is used for identifying whether an object to be identified which accords with the characteristics of the target object exists in the working scene or not and identifying the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object.
With the three-dimensional object identification method and apparatus of the invention, uniform sampling of model points replaces the special-point sampling of the prior art, sample point pairs are screened by their relative position relationship to remove invalid and interfering data, and the construction of a sample point pair hash table is incorporated, so that the model structure is simple and fast to use and is not limited by the form of the target object. The method eliminates the dependence on local feature recognition of the target object at the sampling stage, is applicable to recognizing objects without distinctive features, which the prior art cannot represent accurately, and has good robustness; and since the object recognition process is mainly based on hash table comparison, recognition is fast. Further, using a model in which the rotation angles of the point pairs are stored reduces the amount of calculation in three-dimensional recognition and increases its speed. The method also enables the identification of multiple targets in one scene.
To achieve the above object, an embodiment of the third aspect of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the point cloud-based three-dimensional computer modeling method for a target object according to the first aspect of the present invention or implements the point cloud-based three-dimensional target recognition method according to the second aspect of the present invention.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a computing device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for three-dimensional computer modeling of a target object based on point clouds according to the first aspect of the invention or the method for three-dimensional identification of a target object based on point clouds according to the second aspect of the invention when executing the computer program.
According to the non-transitory computer readable storage medium and the computing device of the present invention, the beneficial effects similar to those of the point cloud based target object three-dimensional computer modeling method according to the first aspect and the point cloud based three-dimensional target identification method according to the second aspect of the present invention are achieved, and the detailed description is omitted here.
An embodiment of a fifth aspect of the present invention provides a component sorting system, including: the system comprises a control subsystem, a part sorting subsystem and a scene shooting subsystem, wherein the scene shooting subsystem is used for acquiring three-dimensional image data of a working scene; the control subsystem is used for controlling the part sorting subsystem to work according to the three-dimensional image data acquired by the scene shooting subsystem; the part sorting subsystem is used for carrying out physical operation on parts in a working scene; wherein the control subsystem comprises a memory, a processor and a computer program stored on the memory and executable on the processor, when executing the program, implementing the method for three-dimensional computer modeling of a target object based on point cloud according to the first aspect of the invention and/or implementing the method for three-dimensional identification of a target object based on point cloud according to the second aspect of the invention.
The part sorting system can accurately identify parts of different shapes and locate their poses, is not limited by the appearance of the parts, works stably with good robustness, and supports online identification of multiple part types.
Drawings
FIG. 1 is a schematic diagram of a work scenario for point cloud based modeling and identification of three-dimensional objects;
FIG. 2 is a schematic flow chart diagram of a method for three-dimensional computer modeling of a target object based on a point cloud in accordance with an embodiment of the invention;
FIG. 3 is a vector diagram of an enhanced feature point pair descriptor in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of the relationship between point pair rotation angle and pose transformation according to an embodiment of the present invention;
FIG. 5 is a block diagram of a three-dimensional computer modeling apparatus for a target object based on point clouds according to an embodiment of the invention;
FIG. 6 is a schematic flow chart of a method for identifying a three-dimensional target object based on a point cloud according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a data processing procedure of a method for identifying a three-dimensional target object based on a point cloud according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart diagram of a method for identifying multiple three-dimensional target objects based on a point cloud in accordance with an embodiment of the invention;
fig. 9 is a block diagram of the structure of a point cloud-based three-dimensional target object recognition apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a computing device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a part to be recognized for three-dimensional object recognition according to an embodiment of the present invention;
fig. 12 is a diagram of part recognition results of three-dimensional object recognition according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will now be described in detail with reference to the drawings, wherein like reference numerals refer to the same or similar elements throughout the different views unless otherwise specified. Note that the exemplary embodiments described below do not represent all embodiments of the present invention; they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the claims, and the scope of the present disclosure is not limited in these respects. Features of the various embodiments of the invention may be combined with each other without departing from the scope of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g. two, three, etc., unless specifically limited otherwise.
A schematic diagram of a typical application scenario of point cloud-based three-dimensional object recognition is shown in fig. 1, comprising an image capturing device 200, a server 100, and target objects 310, 320 in the scene. Three-dimensional information of the target scene is acquired by a three-dimensional image acquisition device such as a depth camera, a laser radar, or the like. Of course, two-dimensional images may also be acquired by a two-dimensional camera, and a three-dimensional stereogram of the scene generated from a plurality of two-dimensional images by three-dimensional modeling. The present invention places no limitation on the manner of acquiring the scene three-dimensional information; any such manner can be applied here.
When solving the technical problem of automatic computer identification of objects, one of the most basic aspects is how to describe the objects, i.e. how to build a computer numerical model. Compared with the era when computing power was scarce, now that large-scale numerical comparison has become feasible, the influence of the form in which the model is established on the recognition result becomes far more significant in recognition based on a scene three-dimensional point cloud.
In the art, influenced by the entrenched idea of feature-based object recognition widely adopted since the pattern recognition revolution, modeling and recognition of a three-dimensional object also commonly begins by recognizing "feature portions" on the target three-dimensional object, such as edges, protrusions, depressions of a plane, curves of a line, inflection points, etc., and describing these feature portions to generate a model carrying "feature" information. When identifying objects in a scene, features of the scene objects are extracted, and the similarity between the scene objects and the target objects is judged by comparing the extracted features.
For the two feature extractions, of the model object and of the target object under different scene conditions, it is difficult to avoid divergent results, or even large errors, caused by varying field conditions and algorithms; and comparing features derived from these two results further amplifies the influence of such errors. The feature-based approach is therefore severely limited by modeling accuracy: disturbances of the object shape can greatly influence the recognition result, and even factors such as scene illumination can seriously interfere with feature extraction, so robustness is poor. This causes great trouble in many industrial application scenarios requiring automatic identification of three-dimensional objects and has become one of the bottlenecks in the advancement of industrial automation.
In order to solve at least one aspect of the above technical problems to a certain extent and to provide accurate and stable identification of three-dimensional objects in a scene, the present invention provides a new, simple and practical modeling method and a three-dimensional object identification method.
In practical applications, the recognition system usually needs to numerically model the target object to be recognized first; this stage is also referred to as the offline learning stage. The established object model is then compared with the scene point cloud to realize identification of the object. The point cloud-based method for three-dimensional computer modeling of a target object according to the present invention is explained first.
Referring to fig. 2, fig. 2 is a schematic flow chart of the point cloud-based three-dimensional computer modeling method for a target object according to the present invention.
The invention discloses a three-dimensional computer modeling method of a target object based on point cloud, comprising the following steps S110 to S150.
In step S110, point cloud data of the target object is acquired. The point cloud data can be obtained from the real shape of the target object, for example by obtaining a three-dimensional perspective view of the object and sampling it. In an industrial scene, since many target objects are designed parts, the point cloud data of the target object can also be generated directly from the numerical information of the design drawing.
In step S120, sample points within a preset number range are adaptively extracted from the point cloud data of the target object, wherein the sample points are uniformly distributed over the target object in the global space. The preset number range depends on the required model resolution; its selection relates to the identification accuracy requirement and the computing capacity of the processor in the later identification process, and can be chosen by a person skilled in the art according to the actual situation. For example, for common industrial parts, a sample point count on the order of 10^3 is a preferred trade-off between overall efficiency and accuracy.
For example, in some embodiments, the sampling process may include sub-steps S121 through S125.
Firstly, in S121, voxel division is performed to uniformly divide the target object into a plurality of voxels having the same shape;
then, in S122, performing candidate sample point extraction, and selecting a point closest to the geometric center of the voxel in each voxel space from the point cloud data as a candidate sample point;
at S123, determining whether the total number of the candidate sample points falls within the preset number range; and
in S124, if the total number of the candidate sample points does not fall within the preset number range, repeating the step S121 of voxel division, changing the volume of the voxel, and performing the step S122 of candidate sample point extraction again;
in S125, if the total number of the candidate sample points falls within the preset number range, the currently extracted candidate sample point is used as the sample point.
For convenience of description, the above sampling method is referred to herein as the adaptive uniform sampling algorithm. In the adaptive uniform sampling process, the voxel space can initially be divided relatively finely, and during sampling the point closest to the center of each voxel is selected as a candidate; if the number of extracted candidate sample points is greater than the set number, the voxel size is increased and sampling is repeated, until the number of extracted candidate sample points falls below the set threshold, and the candidate sample points of the last sampling round are taken as the output.
There are many choices for the voxel shape, but for ease of handling a centrosymmetric, or at least axisymmetric, shape is usually chosen: for example cubic voxels, honeycomb voxels, and the like, or various other regular shapes; the voxel shape may even be set with reference to the crystal forms of crystals. The choice of voxel shape has a certain influence on the description of the edge points of the target object; a suitable choice benefits the modeling of object edge points and fast calculation, and can be made flexibly by a person skilled in the art according to actual operational requirements.
With the adaptive uniform sampling algorithm, the modeling method of the invention can be carried out entirely without human intervention: the whole process is completed automatically by a computer, and models of appropriate scale are obtained for target objects of different scales.
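To make the loop concrete, the following Python sketch shows one possible reading of steps S121 to S125 with cubic voxels; the growth factor and the starting voxel size are assumed parameters, not values from the patent:

```python
import numpy as np

def adaptive_uniform_sample(points, n_min, n_max, voxel=1.0, growth=1.2):
    """Adaptive uniform sampling sketch: one candidate per occupied cubic
    voxel (the point nearest the voxel's geometric center); the voxel
    volume is adjusted until the candidate count falls in [n_min, n_max]."""
    while True:
        keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
        centers = (keys + 0.5) * voxel                     # voxel geometric centers
        dist = np.linalg.norm(points - centers, axis=1)
        best = {}                                          # voxel -> index of closest point
        for i, key in enumerate(map(tuple, keys)):
            if key not in best or dist[i] < dist[best[key]]:
                best[key] = i
        n = len(best)
        if n_min <= n <= n_max:
            return points[sorted(best.values())]
        # Too many candidates: enlarge voxels; too few: shrink them.
        voxel *= growth if n > n_max else 1.0 / growth
```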
In step S130, effective sample point combinations are selected as model point pairs from the set of sample points according to the relative position relationship of the sample points. In theory, the model point pairs could include every permutation and combination of all sample points, but in practical application the number of such sample point pairs is proportional to the square of the number of sample points, giving a large data volume; moreover, since sampling is uniform and the appearance of an actual three-dimensional object is not as simple as a uniformly distributed sphere or cube, not all sample point combinations provide effective information about the shape of the model. Therefore, invalid sample point pairs that provide little information need to be removed.
In some embodiments, the sample points may be combined pairwise to obtain a set of sample point pairs; point pairs that do not satisfy a viewpoint visibility constraint are removed from the set of sample point pairs, and/or point pairs that do not satisfy a normal vector parallel constraint are removed; the remaining sample point pairs in the set are taken as the model point pairs. Intuitively, failing the viewpoint visibility constraint means that, for viewpoints set within a specified range, the two sample points of the combination cannot be imaged simultaneously from the same viewpoint. Failing the normal vector parallel constraint means that the vector pointing from one sample point of the pair to the other is approximately coplanar with the surface of the object, i.e. parallel to the object surface within a certain allowable threshold range.
The mathematical description of both constraints is as follows. For any point pair (m_r, m_i), denote by n_r and n_i the normal vectors of points m_r and m_i, respectively. Removing, from the sample point pairs, the point pairs that do not satisfy the viewpoint visibility constraint comprises eliminating all point pairs whose normal vector inner product ρ(m_r, m_i) = n_r · n_i is less than a first threshold τ_min; rejecting the point pairs that do not satisfy the normal vector parallel constraint comprises eliminating all point pairs whose normal vector inner product ρ(m_r, m_i) = n_r · n_i is greater than a second threshold τ_max. In some embodiments, τ_min = cos(π/2) and τ_max = cos(π/(2N_ang)), where N_ang is the maximum discrete number of the angular components, which will also be used in the hash process in step S150 (see the description of step S150 for details). N_ang is in fact a compromise between the noise immunity and the accuracy of the algorithm, usually an integer in the range of 10 to 30, which can be chosen flexibly by the skilled person according to the aim of the algorithm.
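For illustration, a brute-force Python sketch of this screening (an assumed O(n²) formulation; the threshold values follow the suggestions above):

```python
import numpy as np

def valid_point_pairs(normals, n_ang=15):
    """Keep an ordered pair (m_r, m_i) only if the normal inner product
    rho = n_r . n_i satisfies tau_min <= rho <= tau_max, i.e. the pair
    passes both the viewpoint visibility screen (rho >= tau_min) and the
    normal parallelism screen (rho <= tau_max)."""
    tau_min = np.cos(np.pi / 2)            # = 0, as suggested in the text
    tau_max = np.cos(np.pi / (2 * n_ang))
    pairs = []
    for r in range(len(normals)):
        for i in range(len(normals)):
            if r != i:
                rho = float(np.dot(normals[r], normals[i]))
                if tau_min <= rho <= tau_max:
                    pairs.append((r, i))
    return pairs
```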
In step S140, an enhanced feature descriptor of each model point pair is calculated, wherein the enhanced feature descriptor comprises the Euclidean distance between the two sample points of the model point pair and information on the interrelation between the direction of the vector connecting the two sample points and their normal vectors.
Referring to fig. 3, fig. 3 is a vector diagram of an enhanced feature point pair according to an embodiment of the present invention. For a model point pair (p, q), the feature descriptor F_e(p, q) is calculated as follows:
F_e(p, q) = (||d||_2, ∠(n_1, n_3), ∠(n_2, n_3), δ(n·n_2)∠(n_1, n_2)) (1)
wherein d is the vector from point p to point q, ||d||_2 represents the Euclidean distance between point p and point q, the vectors n_1 and n_2 are the normal vectors of point p and point q respectively, n_3 = d/||d||_2, ∠(n_1, n_3) represents the angle between vectors n_1 and n_3, ∠(n_2, n_3) the angle between vectors n_2 and n_3, and ∠(n_1, n_2) the angle between vectors n_1 and n_2; n = n_3 × n_1 is the normal vector of the plane Π; δ(x) is a binary sign function, with δ(x) = 1 if and only if x ≥ 0 and δ(x) = -1 otherwise. Under this definition, taking the normal of point q as n_2 or as n'_2 leads to different point pair features.
Through the introduction of this variety of vector information, the enhanced feature descriptor describes the relative relationship between a point pair and the object from all directions and carries a larger amount of information; in the identification process, combined with uniform sampling, it serves to describe the target object comprehensively.
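As a concrete reading of formula (1), the following Python function computes the enhanced descriptor of a single point pair (normals are assumed to be unit vectors):

```python
import numpy as np

def enhanced_descriptor(p, q, n1, n2):
    """Enhanced feature descriptor F_e(p, q) of formula (1): Euclidean
    distance plus three angles relating the pair direction d and the
    normals n1, n2 (assumed unit vectors)."""
    d = q - p
    dist = float(np.linalg.norm(d))
    n3 = d / dist                          # unit direction from p to q
    n = np.cross(n3, n1)                   # normal of the plane spanned by n3 and n1
    sign = 1.0 if float(np.dot(n, n2)) >= 0.0 else -1.0   # delta(n . n2)

    def ang(a, b):                         # angle between two unit vectors
        return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    return (dist, ang(n1, n3), ang(n2, n3), sign * ang(n1, n2))
```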
In order to fully utilize the vector characteristics of the feature items and to simplify later calculation, in some embodiments the calculating a hash value of the model point pair according to the feature descriptor and generating a point cloud model of the target object according to the hash values of the model point pairs further includes obtaining further parameters. For example, referring to fig. 4, the rotation angle α_m of each model point pair is obtained and stored into the point cloud model, wherein, denoting by T_m→g the coordinate transformation that takes the first point m_r of the model point pair (m_r, m_i) to the origin of the global coordinate system, the rotation angle α_m is the angle through which T_m→g m_i must be rotated about the x-axis to lie in the plane defined by the x-axis and the non-negative y half-axis.
The meaning and usage of the rotation angle α_m are explained in detail below. Suppose a scene point pair (s_r, s_i) and a model point pair (m_r, m_i) have the same hash value; a candidate pose transformation T from the object model to the scene can then be estimated from these two point pairs. As shown in fig. 4, the transformations T_m→g and T_s→g first take the points m_r and s_r, respectively, to the origin of the global coordinate system and align their normal vectors with the x-axis of the global coordinate system, i.e. T_m→g m_r = T_s→g s_r = 0 and both transformed normal vectors coincide with the x-axis. The other point of each pair, m_i and s_i, is transformed to the global coordinates T_m→g m_i and T_s→g s_i. If the point pairs (s_r, s_i) and (m_r, m_i) are a correct matching pair, then T_m→g m_i and T_s→g s_i should coincide in the global coordinate system after a rotation about the x-axis; that is, a counterclockwise rotation R_x(α) by an angle α about the x-axis aligns T_m→g m_i with T_s→g s_i.
Under this assumption, the pose transformation determined by the two point pairs is
T = T_s→g^(-1) R_x(α) T_m→g (2)
The above process can be seen as (s_r, s_i) casting a vote for (m_r, α), because T is determined by s_r and (m_r, α). In fact, if scene point s_r and target model point m_r are a correct matching point pair, then the point pairs formed with s_r and m_r as reference points will produce multiple correct matches, all corresponding to the same rotation angle α between them.
To accelerate the calculation of the rotation angle α, it is decomposed into two parts, α = α_m - α_s, where the rotation angle α_s of a scene point pair is defined analogously to the rotation angle α_m of a model point pair: α_s is the angle through which T_s→g s_i must be rotated about the x-axis to lie in the plane defined by the x-axis and the non-negative y half-axis, after the first point s_r of the scene point pair (s_r, s_i) has been transformed to the origin of the global coordinate system by T_s→g.
Also, because the rotation of a vector with respect to a coordinate axis can be viewed as relative motion between the two, namely
R_x(α) = R_x(-α_s) R_x(α_m), and thus α = α_m - α_s (3)
Equation (3) shows that the rotation angle α can be decomposed into the difference between the angles through which T_m→g m_i and T_s→g s_i must each be rotated about the x-axis to reach the plane defined by the x-axis and the non-negative y half-axis. Since the rotation angles α_m and α_s of different point pairs are calculated independently of one another, the rotation angles α_m of all point pairs satisfying the viewpoint visibility constraint in the target three-dimensional model can be computed in advance during the offline training stage; during the online recognition stage, only the rotation angle α_s of each scene point pair needs to be calculated, and the corresponding rotation angle α = α_m - α_s is then obtained directly.
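The two rotation angles can be computed with one shared helper. The Python sketch below is illustrative only: the Rodrigues construction of the aligning rotation and the atan2 sign convention are assumed details, since the text fixes only the target half-plane:

```python
import numpy as np

def align_with_x(n):
    """Rotation matrix taking unit vector n onto the +x axis (Rodrigues)."""
    x = np.array([1.0, 0.0, 0.0])
    v, c = np.cross(n, x), float(np.dot(n, x))
    s2 = float(np.dot(v, v))
    if s2 < 1e-12:                        # n already (anti-)parallel to x
        return np.eye(3) if c > 0 else np.diag([-1.0, 1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s2)

def pair_rotation_angle(p_first, p_second, n_first):
    """alpha_m (model pair) or alpha_s (scene pair): after the first point
    is moved to the origin with its normal aligned to +x, this is the angle
    of the transformed second point about the x-axis, measured from the
    half-plane spanned by +x and the non-negative y axis."""
    t = align_with_x(n_first) @ (p_second - p_first)
    return float(np.arctan2(t[2], t[1]))
```

At recognition time, the voting angle for a matched pair is then simply α = α_m - α_s, so only α_s has to be computed online.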
In step S150, the hash values of the model point pairs are calculated according to the feature descriptors, and a point cloud model of the target object is generated according to the hash values of all the model point pairs.
In some embodiments, said computing a hash value of said pair of model points from said feature descriptor comprises: normalizing and discretizing and rounding each component of the feature descriptors of the model point pairs to obtain integer vector representation of the feature descriptors; and taking the integer vector as a coefficient of a polynomial function, and taking a value of the polynomial function as a hash value.
For the polynomial-form hash function, in order to facilitate calculation of the hash value, it is preferable to transform the coefficients of the polynomial into positive integers by setting an appropriate step size and transform interval. Discretization performs the rounding, and normalization converts each coefficient into a non-negative number so that a non-negative integer is obtained after discretization and rounding.
For example, one hash strategy is as follows: the components of the feature descriptor F_e(m_r, m_i) of a model point pair are each discretized with a fixed step size into 4 integers, denoted N_1, N_2, N_3 and N_4; the hash value K(m_r, m_i) of the point pair (m_r, m_i) is then the value of a polynomial whose coefficients are N_1, N_2, N_3 and N_4.
Among the components of F_e(m_r, m_i) = (||d||_2, ∠(n_1, n_3), ∠(n_2, n_3), δ(n·n_2)∠(n_1, n_2)), the last three are angular components. Denote by f_i, i = 1, 2, 3, 4, the normalized values of the 4 components of the enhanced point pair descriptor F_e(m_r, m_i).
When the view visibility threshold τ_min = 0, the fourth component of the enhanced point pair feature has the value range [-π/2, π/2]. To ensure that the discretized integer value is non-negative, the fourth component of F_e(m_r, m_i) must be transformed to [0, π] by applying a bias of π/2. When τ_min takes other values, the fourth component of the feature descriptor can likewise be transformed to [0, π] by a different shift offset. In this embodiment, f_2 and f_3 are non-negative by definition and need no further bias. The discrete number of the angle is N_ang, i.e. the angle range is divided into N_ang intervals; the discretization step of the last three components is then δ_ang = π/N_ang, and N_i = ⌊f_i/δ_ang⌋, i = 2, 3, 4.
The first component of the enhanced point pair feature is the Euclidean distance between the two points. In order to make the discretization of this component adapt to the size variation between different models, its discretization step δ_d can be set to 0.05·D_M, where D_M is the diameter of the three-dimensional model of the object (the distance between the two points farthest apart in the model); then N_1 = ⌊||d||_2/δ_d⌋. Finally, all point pairs (m_r, m_i) satisfying the view visibility constraint are stored into the hash table of the model according to their hash value K(m_r, m_i), completing offline training. For all target models, the construction of the hash table is carried out only once, and the hash table is preloaded as needed before online recognition.
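The following Python sketch illustrates the discretization and hashing described above; the mixed-base expansion in the last line is an assumed concrete choice of the polynomial, since the text only states that the integers serve as polynomial coefficients:

```python
import numpy as np

def hash_point_pair(fe, model_diameter, n_ang=15):
    """Discretize the four descriptor components with the steps of the
    text (delta_d = 0.05 * D_M, delta_ang = pi / N_ang), bias the signed
    fourth component into [0, pi], and fold the integers N1..N4 into one
    key via a polynomial-style base expansion."""
    dist, a1, a2, a3 = fe
    delta_d = 0.05 * model_diameter
    delta_ang = np.pi / n_ang
    n1 = int(dist // delta_d)                # distance bin, 0..20
    n2 = int(a1 // delta_ang)                # angle bins, 0..n_ang
    n3 = int(a2 // delta_ang)
    n4 = int((a3 + np.pi / 2) // delta_ang)  # bias of pi/2 for tau_min = 0
    base_d = int(1 / 0.05) + 1               # number of distance bins
    base_a = n_ang + 1                       # number of angle bins
    # Polynomial-style folding of (n1, n2, n3, n4) into a single key.
    return ((n4 * base_a + n3) * base_a + n2) * base_d + n1
```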
In addition, the present invention also provides a point cloud-based three-dimensional computer modeling apparatus 400 for a target object; referring to fig. 5, the apparatus 400 comprises:
a point cloud obtaining module 410, configured to obtain point cloud data of a target object;
a sample point acquisition module 420, configured to adaptively extract sample points in a preset number range from the point cloud data of the target object, where the sample points are uniformly distributed in a global space on the target object;
an effective sample point screening module 430, configured to select an effective sample point combination as a model point pair according to a relative position relationship of sample points in the sample point set;
a feature descriptor calculation module 440, configured to calculate an enhanced feature descriptor for the model point pairs, wherein the enhanced feature descriptor comprises the Euclidean distance between the two sample points of each model point pair and information on the interrelation between the direction of the vector connecting the two sample points and their normal vectors; and
the hash module 450 is configured to calculate hash values of the model point pairs according to the feature descriptors, and generate a point cloud model of the target object according to the hash values of all the model point pairs.
It should be noted that the above modules may be implemented in the form of hardware or software, and may also be computer program modules. The specific implementation manner of each module can refer to the description of the point cloud-based target object three-dimensional computer modeling method in conjunction with fig. 2, and is not described herein again.
The modeling method and apparatus of the invention replace the special-point sampling of the prior art with uniform sampling of model points, screen sample point pairs by their relative position relationship to remove invalid and interfering data, set up enhanced feature descriptors, and combine these with the construction of a sample point pair hash table. A computing device can thereby automatically generate a digital model representation of an object from three-dimensional target data, forming a novel industrial digital model of three-dimensional objects that is quick to build, unrestricted by the form of the target object, and convenient to identify and compare, for industrial automation scenarios such as part sorting and scene object recognition, as well as everyday scenarios. The method eliminates the dependence on local feature recognition of the target object at the sampling stage, and is therefore applicable to objects without distinctive features, which the prior art cannot represent accurately; it has good robustness, and since no feature recognition of the target object is needed and sampling points are selected uniformly, modeling is faster. Further, the rotation angles of the point pairs are calculated and stored in the model, which reduces the amount of calculation when the model is used for object recognition.
The embodiment of the second aspect of the invention provides a point cloud-based three-dimensional target identification method and an identification device. Referring to fig. 6, the recognition method includes steps S210 to S230.
In step S210, a point cloud model of a target object is obtained, wherein the point cloud model of the target object is the point cloud model established according to the method of the first aspect of the invention. In particular, in order to facilitate the later pose voting calculation, the obtained point cloud model of the object to be identified comprises, besides the hash table formed from the model point pairs of the object to be identified and their hash values, the rotation angle α_m of each model point pair.
In step S220, point cloud data is obtained according to the stereo image of the working scene acquired by the three-dimensional information acquisition device, wherein the stereo image of the working scene is subjected to adaptive uniform spatial sampling identical to the point cloud model to obtain scene point cloud data. It is obvious that the execution order of steps S210 and S220 can be interchanged without affecting the implementation of the method of the present embodiment.
In practical applications, the obtained scene information may sometimes be 2.5D rather than complete 3D, for example when objects occlude one another. Whatever the form of the information, as long as the scene point cloud data can be sampled, the implementation of the method of the invention is not affected; therefore, in this disclosure, both the 3D and the 2.5D cases are referred to uniformly as three-dimensional, without limiting the invention thereto. The removal of invalid point pairs used in preprocessing the model point cloud, adaptive sampling, and similar means can equally be applied to the scene point cloud, and are not repeated here.
In step S230, according to a matching result between the scene point cloud data and the point cloud model of the target object, whether an object to be recognized that meets the characteristics of the target object exists in the working scene or not and a pose of the object to be recognized are recognized.
To detect an object in a scene and estimate its pose, the selected scene reference point (the first point of a constituent point pair) $s_r$ must fall on the corresponding target model. For example, referring to FIG. 7, to identify the cock model shown in the figure, feature voting yields the correct pose transformation only when the reference points are the dark points; the model cannot be identified using the light points as reference points. In fact, for a given 2.5D scene, it is not known in advance in which part of the scene the object model is located. Therefore, to improve the recognition rate of the algorithm while keeping the calculation efficient, a feasible strategy is to randomly select $N_r$ points from the scene as a reference point set, then obtain possible candidate poses through pose voting and hypothesis generation, and finally identify the true poses through pose clustering and verification.
Specifically, in some embodiments, the step S230 may further include sub-steps S231 to S234.
S231, filter out noise data and preprocess the scene point cloud to obtain valid point cloud data. For example, the point cloud data may be pass-through filtered to remove points located relatively far away; and/or plane segmentation may be performed according to the point cloud data to remove invalid points from it. A sketch of this preprocessing step follows.
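By way of illustration only, the following Python sketch shows one possible form of this preprocessing; the pass-through depth threshold and the simple RANSAC plane fit are assumptions, not prescribed by this disclosure:

```python
import numpy as np

def pass_through_filter(points: np.ndarray, z_max: float) -> np.ndarray:
    """Keep only points whose depth (z) does not exceed z_max."""
    return points[points[:, 2] <= z_max]

def remove_dominant_plane(points: np.ndarray, dist_thresh: float,
                          iters: int = 200, seed: int = 0) -> np.ndarray:
    """Remove the largest plane found by a simple RANSAC fit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate triple, resample
        n /= norm
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]  # keep the non-plane points
```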
S232, randomly selecting a preset number $N_r$ of points from the valid point cloud data as reference points, constituting a reference point set $Q = \{s_r \mid r = 1, 2, \dots, N_r\}$.
For each reference point in the reference point set Q, points of the valid point cloud data lying within a preset neighborhood range of the reference point are selected as neighborhood scene points, and neighborhood scene point pairs within the preset neighborhood range are constructed from the reference point and the neighborhood scene points. This step can simultaneously calculate the normal vectors of the scene point cloud in preparation for subsequent computation; since candidate pose generation requires the rotation angle $\alpha_s$ of each scene point pair, that calculation is also done at this stage.
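The disclosure does not prescribe how the scene normals are estimated. A common approach, sketched below under that assumption, fits a local plane to each point's k nearest neighbors via PCA and takes the smallest-eigenvalue eigenvector as the normal:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 20) -> np.ndarray:
    """PCA normal estimation: the eigenvector of the local covariance
    with the smallest eigenvalue approximates the surface normal."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # The last right-singular vector spans the least-variance direction.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        n = vt[-1]
        # Orient toward the sensor at the origin (a common convention).
        normals[i] = -n if np.dot(n, p) > 0 else n
    return normals
```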
S233, calculating the hash value of the neighborhood scene point pair, and searching a model point pair matched with the neighborhood scene point pair in the point cloud model of the target object according to the hash value of the neighborhood scene point pair; and
and S234, obtaining a coarse recognition result of the object to be recognized in the working scene and of its pose, according to the positions and numbers of the neighborhood scene point pairs of each reference point in the reference point set Q and of the model point pairs matched with them.
In particular, a corresponding two-dimensional accumulator $A_r$ may be constructed for each reference point $s_r$, wherein the first dimension of $A_r$ corresponds to the sample points of the model of the target object, and the second dimension corresponds to a division of the pose space of the object to be identified into a number of pose blocks formed from several intervals; accumulated voting is then performed at the corresponding coordinate points of $A_r$ according to the model point pairs matched with the neighborhood scene point pairs and their number and positions.
For example, the two-dimensional accumulator $A_r$ is initialized to 0 and has size $N_m \times N_{ang}$, where $N_m$ is the number of sample points of the object model. The rows of $A_r$ correspond to the indices of the model sample points, and the columns correspond to the interval indices generated by evenly dividing the rotation angle range $[0, 2\pi]$ into $N_{ang}$ intervals. A large value of $A_r(l, j)$ indicates a high probability that the reference point $s_r$ matches model point $m_l$.
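For illustration only, a minimal sketch of such an accumulator in Python, with hypothetical values standing in for $N_m$ and $N_{ang}$:

```python
import numpy as np

n_model, n_ang = 1000, 30  # hypothetical values of N_m and N_ang

# One accumulator per reference point, initialized to zero.
A_r = np.zeros((n_model, n_ang), dtype=np.int32)

def vote(A_r: np.ndarray, model_idx: int, alpha: float) -> None:
    """Cast one vote: row = model point index, column = interval
    index of the rotation angle alpha within [0, 2*pi)."""
    n_cols = A_r.shape[1]
    j = int((alpha % (2 * np.pi)) / (2 * np.pi) * n_cols)
    A_r[model_idx, min(j, n_cols - 1)] += 1
```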
According to the result of the accumulated votes, the model sample point and the pose block corresponding to the reference point $s_r$ are determined, and the middle value of the corresponding pose block is taken as the hypothesized pose of $s_r$. From the model sample points and hypothesized poses corresponding to all reference points in the reference point set Q, a candidate pose set $T_H$ is obtained, and the true pose of the object to be recognized is determined from $T_H$.
In the following, taking $s_r \in Q$ as an example, point pair voting and the generation of the candidate pose $T_r$ are introduced in detail. First, with $s_r$ as the center, all points of the scene point cloud whose distance from $s_r$ does not exceed $D_s$ are found and taken as the points that form pairs with $s_r$, constructing the candidate point set $\mathcal{C}$ of scene point pairs. The reason is that, under a single viewing angle, the target to be recognized in the scene is usually only partially visible because of self-occlusion and possible occlusion by other objects; the Euclidean distance between any valid point pair that can be formed therefore cannot exceed the diameter $D_M$ of the target's three-dimensional model, and under severe occlusion may be much smaller than $D_M$. Hence, by setting a reasonable distance threshold $D_s$, the generation of invalid point pairs can be effectively controlled.
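Gathering this candidate set is a fixed-radius neighbor query. A sketch using SciPy's KD-tree, with illustrative names:

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_pairs(scene: np.ndarray, ref_idx: int, d_s: float):
    """Return indices of scene points within distance d_s of the
    reference point, i.e. the partners of the candidate point pairs."""
    tree = cKDTree(scene)
    idx = tree.query_ball_point(scene[ref_idx], r=d_s)
    idx.remove(ref_idx)  # a point does not pair with itself
    return idx
```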
Then the candidate point set $\mathcal{C}$ is traversed point by point. For each point $c \in \mathcal{C}$, the rotation angle $\alpha_s$ of the point pair $(s_r, c)$ and the hash value $K(s_r, c)$ onto which its enhanced point pair feature is mapped are calculated. With $K(s_r, c)$, the model hash table of the target to be recognized is searched for the model point pairs that may match (possibly several pairs), denoted $\mathcal{M}$. Each point pair $(m_l, m_j) \in \mathcal{M}$ is traversed in turn, its rotation angle $\alpha_m$ (pre-stored in the model) is obtained, and the accumulator cell $A_r(l, j)$, where $j$ is the interval index of the angle difference $\alpha = \alpha_m - \alpha_s$, is incremented by 1. When all point pairs formed with $s_r$ have voted, the model point corresponding to the row of the accumulator's peak is the most probable match; the peak coordinates are recorded as $(h, w)$ and the peak value as $\mathrm{Max}_r$. The candidate pose generated by $s_r$ and $m_h$ is then

$T_r = T_{s \to g}^{-1} \, R_x(\alpha) \, T_{m \to g},$

where $R_x(\alpha)$ denotes rotation about the x-axis by the angle $\alpha$ recovered from column $w$, and $T_{m \to g}$ can be pre-calculated in the offline training phase to reduce the amount of online calculation.
After all reference points in Q have been traversed, a set of $N_r$ candidate poses is finally obtained, denoted $T_H = \{(T_l, \mathrm{Max}_l) \mid l = 1, 2, \dots, N_r\}$.
Having obtained the candidate pose set $T_H$, the true pose transformation (there may be none) must then be extracted from it. It is easy to see that a correct pose can only be generated when the reference point falls on a target in the scene; because of possible interference from other objects, usually only a portion of the $N_r$ randomly selected reference points fall on the target. To extract the correct pose transformations from the candidate pose set, a clustering strategy is adopted. In some embodiments, determining the true pose of the object to be recognized from the candidate pose set $T_H$ comprises:
clustering the poses in the candidate pose set $T_H$ to obtain aggregated poses, eliminating from them the invalid aggregations that contain relatively few poses, and taking the valid aggregated poses as coarse matching poses;
for each coarse matching pose, transforming the target model into a scene according to the coarse matching pose, and performing pose optimization;
according to the result of pose optimization and the fitting degree of the target model, rejecting poses which do not accord with preset fitting conditions, wherein the preset fitting conditions comprise pose residual error conditions and point cloud overlapping rates; and
when the pose which meets the preset fitting condition is obtained after the elimination, judging that the object to be recognized which meets the characteristics of the target object exists in the working scene, and taking the pose which meets the preset fitting condition as the real pose of the object to be recognized;
and when the pose meeting the preset fitting condition does not exist after the elimination, judging that the object to be recognized meeting the characteristics of the target object does not exist in the working scene.
In some embodiments, pose clustering and verification may be performed by the following operations.
First, pose clustering includes sub-steps Step 1 to Step 3.
Step 1: according to the number of votes each pose obtained in the voting stage, rearrange the candidate pose set $T_H$ in descending order. Denote the sorted candidate pose set by $T_h$.
Step 2: by ThThe first pose in (1) establishes a new category with a center of Tc=[R1,t1]Wherein R is1Rotation matrix, t, representing pose1And representing a translation vector of the pose rotation center. The poses belonging to that category are then found from the remaining set of poses. The term "belonging to this category" means the pose transformation [ R ]i,ti]And the category center TcSatisfies the following constraint that
Figure BDA0002289881140000231
Where ε and σ are the angular thresholds of the rotation matrix R, respectivelyAnd distance threshold for translation vector t, e.g., ε and σ may be set to π/6, 0.2D, respectivelyM. Save the category and then go from ThAnd all poses belonging to the category are removed.
Step 3: let ThEqual to the rest pose set, and repeat Step2 until ThIs empty. Counting the number of poses contained in each category, rearranging all the categories in a descending order according to the number of the poses contained in each category, and recording the number of the poses contained in the first category as Nmax. The number of the removed elements is less than NmaxAnd 2, returning the rest categories and finishing the algorithm.
The hypothesis pose clustering algorithm finally returns $N_{valid}$ categories, and a pose center is calculated for each category. The specific method is to take the mean of the translation vectors as the translation vector of the pose center, convert all rotation matrices into quaternions, compute the mean quaternion, and convert that mean back into a rotation matrix serving as the rotation matrix of the pose center. $N_{valid}$ coarse matching poses are finally obtained.
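A sketch of this pose-center computation, using SciPy for the matrix/quaternion conversions; the quaternions are sign-aligned before averaging because $q$ and $-q$ encode the same rotation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_center(Rs, ts):
    """Category center: mean translation plus quaternion-mean rotation.
    Rs: list of 3x3 rotation matrices; ts: list of 3-vectors."""
    t_c = np.mean(ts, axis=0)
    quats = Rotation.from_matrix(np.stack(Rs)).as_quat()
    quats[quats @ quats[0] < 0] *= -1  # align hemispheres
    q_mean = quats.mean(axis=0)
    q_mean /= np.linalg.norm(q_mean)   # renormalize to a unit quaternion
    return Rotation.from_quat(q_mean).as_matrix(), t_c
```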
The model is transformed into the scene with each coarse matching pose, and the pose is refined by the iterative closest point (ICP) algorithm. If the residual after fitting is greater than a set threshold $\tau_{rmse}$, the pose is eliminated; otherwise the overlap rate of the model with the scene point cloud is calculated (namely, the ratio of the number of scene points whose distance to a model point is at most $d_0$ to the total number of model points), and if the overlap rate is below a set threshold $\tau_0$, the pose is likewise eliminated. $\tau_{rmse}$, $\tau_0$ and $d_0$ may, for example, be set to $4\,pr$, $0.15$ and $2\,pr$ respectively ($pr$ denotes the average resolution of the target model) and adjusted according to the actual situation. Finally, the poses that have been optimized by ICP and satisfy the overlap rate requirement are returned as the result of the three-dimensional target recognition algorithm. Of course, pose optimization algorithms other than ICP that are common in the art may also be applied here.
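The overlap check reduces to a nearest-neighbor query. A sketch, under the assumption that the overlap rate is taken as the fraction of transformed model points having a scene point within $d_0$ (one common formulation of the ratio described above); all names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_rate(model_in_scene: np.ndarray, scene: np.ndarray,
                 d0: float) -> float:
    """Fraction of transformed model points with a scene point
    within distance d0, used as the model/scene overlap measure."""
    dists, _ = cKDTree(scene).query(model_in_scene, k=1)
    return float(np.mean(dists <= d0))

def accept_pose(rmse: float, overlap: float,
                tau_rmse: float, tau_0: float) -> bool:
    """Keep the ICP-refined pose only if both criteria hold."""
    return rmse <= tau_rmse and overlap >= tau_0
```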
Further, in some embodiments, more than two targets may be identified, and point cloud models of two or more target objects of different shapes may be obtained. In this case, identifying, from the matching result of the scene point cloud and the point cloud models of the target objects, whether the working scene contains objects to be recognized that meet the characteristics of the target objects, together with their poses, specifically includes:
identifying and stripping a single target, specifically, identifying whether an object to be identified which meets the characteristics of the current target object exists in the scene by using the three-dimensional target identification method; if an object to be recognized which accords with the characteristics of the current target object exists, recording a recognition result, and removing a scene point corresponding to the object to be recognized which accords with the characteristics of the current target object from the scene point cloud; and
and repeating the steps of identifying and stripping the single target for each target object until all the target objects are traversed or the number of the remaining scene points in the scene point cloud is less than a preset scene point threshold.
Referring to FIG. 8, a flow diagram of one embodiment of multi-target identification is shown. First, scene point cloud data is obtained and pass-through filtered; after the noise is filtered out, the normal vectors of the scene point cloud are estimated for subsequent pose voting and invalid scene point elimination. Plane detection is performed according to the normal vectors of the scene point cloud, and possible plane points are removed. Adaptive spatially uniform sampling is applied to the remaining point cloud. At the same time, or before or after this, the hash tables and related data of a number of target models are obtained. Each time, one target model is matched against the current remaining scene point cloud: if recognition fails, the next target model is tried; if recognition succeeds, the result is stored and the corresponding scene points are removed, i.e., the scene points corresponding to the object matching the current target object are removed from the scene point cloud, after which it is judged whether the number of remaining scene points is below the set threshold. If not, the next target model is taken for matching; otherwise all recognition results are output and the procedure ends.
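The identify-and-strip loop of FIG. 8 could be sketched as follows; recognize is a stand-in callable for the single-target recognition described above, and all names are illustrative:

```python
import numpy as np
from typing import Callable, Optional, Tuple

def multi_target_recognition(
        scene: np.ndarray,
        models: list,
        recognize: Callable[[np.ndarray, object],
                            Optional[Tuple[np.ndarray, np.ndarray]]],
        min_points: int) -> list:
    """Match each model against the remaining scene point cloud;
    strip the points of every recognized instance before moving on.
    `recognize` returns (pose, matched_point_indices) or None."""
    results = []
    for model in models:
        hit = recognize(scene, model)
        if hit is None:
            continue  # this model not found; try the next one
        pose, matched_idx = hit
        results.append((model, pose))
        scene = np.delete(scene, matched_idx, axis=0)
        if len(scene) < min_points:
            break  # too few scene points remain for further recognition
    return results
```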
The invention also provides a three-dimensional target recognition device 500 based on point cloud, referring to fig. 9, the device 500 comprises:
a model obtaining module 510, configured to obtain a point cloud model of a target object, where the point cloud model of the target object is a point cloud model established according to the method of the first aspect of the present invention;
a scene point cloud acquisition module 520, configured to obtain point cloud data according to a stereo image of a working scene acquired by a three-dimensional information acquisition device, where the same adaptive uniform spatial sampling as the point cloud model is performed on a stereo image of the working scene to obtain scene point cloud data; and
and the point cloud matching module 530 is configured to identify whether an object to be identified meeting the characteristics of the target object exists in the working scene or not and identify the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object.
By using the three-dimensional object identification method and device, uniform sampling of model points replaces the special-point sampling of the prior art, sample point pairs are screened by their relative position relationship to remove invalid and interfering data, and the sample point pairs are organized into a hash table, so the model is simple and fast to construct and is not limited by the form of the target object. The method removes the dependence on local feature recognition of the target object in the sampling stage, applies to recognizing objects without distinctive features that the prior art cannot accurately represent, and is robust; since the object recognition process is mainly based on hash table comparison, recognition is also fast. Further, using a model in which the rotation angles of the point pairs are stored reduces the amount of calculation of three-dimensional recognition and increases its speed. Multiple targets in a scene can also be identified.
Embodiments of the third aspect of the invention propose a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for three-dimensional computer modeling of a target object based on a point cloud according to the first aspect of the invention and/or implements a method for three-dimensional identification of a target object based on a point cloud according to the second aspect of the invention.
Generally, computer instructions for carrying out the methods of the present invention may be carried using any combination of one or more computer-readable storage media. A non-transitory computer readable storage medium may include any computer readable medium except a transitorily propagating signal itself.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or a combination thereof. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The invention also provides a computer program product, wherein instructions, when executed by a processor, implement the method for three-dimensional computer modeling of a target object based on a point cloud according to the first aspect of the invention and/or implement the method for three-dimensional identification of a target object based on a point cloud according to the second aspect of the invention.
Embodiments of a fourth aspect of the present invention provide a computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, when executing the program, implementing a method for three-dimensional computer modeling of a target object based on a point cloud according to the first aspect of the present invention and/or implementing a method for three-dimensional identification of a target object based on a point cloud according to the second aspect of the present invention.
The non-transitory computer-readable storage medium, the computer program product and the computing device according to the third to fourth aspects of the present invention may refer to the point cloud-based three-dimensional computer modeling method for a target object according to the first aspect of the present invention and/or implement the point cloud-based three-dimensional target identification method according to the second aspect of the present invention, and have similar beneficial effects, which are not described herein again.
FIG. 10 illustrates a block diagram of an exemplary computing device suitable for use to implement embodiments of the present application. The computing device 12 shown in FIG. 10 is only one example and should not be taken to limit the scope of use and functionality of embodiments of the present application.
As shown in FIG. 10, computing device 12 may be implemented in the form of a general purpose computing device. Components of computing device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Computing device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computing device 12 may further include other removable/non-removable, volatile/nonvolatile computer-readable storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown, but commonly referred to as a "hard drive"). Although not shown in FIG. 10, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computing device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computing device 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computing device 12 via bus 18. It is noted that although not shown, other hardware and/or software modules may be used in conjunction with computing device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
An embodiment of a fifth aspect of the present invention provides a component sorting system, including: the system comprises a control subsystem, a part sorting subsystem and a scene shooting subsystem, wherein the scene shooting subsystem is used for acquiring three-dimensional image data of a working scene; the control subsystem is used for controlling the part sorting subsystem to work according to the three-dimensional image data acquired by the scene shooting subsystem; the part sorting subsystem is used for carrying out physical operation on parts in a working scene; wherein the control subsystem comprises a memory, a processor and a computer program stored on the memory and executable on the processor, when executing the program, implementing the method for three-dimensional computer modeling of a target object based on point cloud according to the first aspect of the invention and/or implementing the method for three-dimensional identification of a target object based on point cloud according to the second aspect of the invention.
Referring to fig. 11 and 12, fig. 11 is a schematic diagram of parts to be recognized in three-dimensional target recognition according to an embodiment of the present invention, and fig. 12 is a diagram of the part recognition results of that three-dimensional target recognition. It can be seen that although the two objects Sobj1 and Sobj2 in the scene are quite close in form, the system still successfully identifies the two parts as the first target object Mobj1 and the second target object Mobj2, respectively. It should be noted that the recognition operates on scene point clouds sampled in three dimensions from the stereo image, whereas the schematic of the parts shown here is only a two-dimensional photograph; the parts and the recognition results are therefore displayed from a viewing angle different from that of the schematic.
The part sorting system can accurately identify parts of different shapes and determine their positions and poses, is not limited by the appearance of the parts, works stably, and is robust. It also supports online identification of multiple parts.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are illustrative and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (20)

1. A target object three-dimensional computer modeling method based on point cloud is characterized by comprising the following steps:
acquiring point cloud data of a target object;
sample points in a preset number range are extracted in the point cloud data of the target object in a self-adaptive mode, wherein the sample points are uniformly distributed on the target object in a global space;
selecting effective sample point combinations as model point pairs according to the relative position relation of the sample points in the sample point set;
calculating an enhanced feature descriptor of the model point pair, wherein the enhanced feature descriptor comprises Euclidean distance information between two sample points of each model point pair and information of the correlation between vector direction and normal vector between the two sample points; and
and calculating the hash value of the model point pair according to the feature descriptor, and generating a point cloud model of the target object according to the hash values of all the model point pairs.
2. The point cloud-based three-dimensional computer modeling method for a target object according to claim 1, wherein adaptively extracting a preset number of ranges of sample points in the point cloud data of the target object comprises:
dividing the voxels, namely uniformly dividing the target object into a plurality of voxels with the same shape;
extracting alternative sample points, and selecting a point closest to the geometric center of the voxel in each voxel space from the point cloud data as an alternative sample point;
judging whether the total number of the alternative sample points falls into the preset number range or not; and
if the total number of the alternative sample points does not fall into the preset number range, repeating the step of voxel division, changing the volume of the voxel, and executing the step of alternative sample point extraction again;
and if the total number of the alternative sample points falls into the preset number range, taking the currently extracted alternative sample points as the sample points.
3. The method of claim 1, wherein selecting valid combinations of sample points from the set of sample points as pairs of model points according to their relative positions comprises:
combining the sample points pairwise to obtain a set of sample point pairs;
removing point pairs which do not satisfy a view point visibility constraint from the set of sample point pairs, and/or removing point pairs which do not satisfy a normal vector parallel constraint from the sample point pairs; and
and taking the rest sample point pairs in the set of sample point pairs as the model point pairs.
4. The method of claim 3, wherein for any point pair $(m_r, m_i)$, the vectors $n_r$ and $n_i$ are respectively the normal vectors of point $m_r$ and point $m_i$;
culling, from the sample point pairs, point pairs that do not satisfy a view visibility constraint comprises: eliminating all point pairs whose normal vector inner product $\rho(m_r, m_i) = n_r \cdot n_i$ is less than a first threshold $\tau_{min}$; and/or
rejecting, from the sample point pairs, point pairs that do not satisfy a normal vector parallel constraint comprises: eliminating all point pairs whose normal vector inner product $\rho(m_r, m_i) = n_r \cdot n_i$ is greater than a second threshold $\tau_{max}$.
5. The method of claim 4, wherein the first threshold $\tau_{min} = \cos(\pi/2)$ and the second threshold $\tau_{max} = \cos(\pi/(2 N_{ang}))$, where $N_{ang}$ represents the maximum discrete number of the angular components and is an integer in the range of 10-30.
6. The point cloud-based three-dimensional computer modeling method for a target object as claimed in claim 1, wherein computing an enhanced feature descriptor for the pair of model points comprises:
for model point pairs (p, q), its feature descriptor Fe(p, q) is calculated as follows:
Fe(p,q)=(‖d‖2,∠(n1,n3),∠(n2,n3),δ(n·n2)∠(n1,n2))
wherein d is a vector from point p to point q
Figure FDA0002289881130000021
‖d‖2Euclidean distance, vector n, representing point p and point q1And n2Are respectively a pointNormal vector of p and point q, n3=d/‖d‖2,∠(n1,n3) Representing a vector n1And n3Angle of (d), ∠ (n)2,n3) Representing a vector n2And n3Angle of (d), ∠ (n)1,n2) Representing a vector n1And n2The included angle of (A); n is n3×n1,δ(n·n2) 1, if and only if n · n2Not less than 0, otherwise, delta (n.n)2)=-1。
7. The point cloud-based three-dimensional computer modeling method of a target object of claim 6, wherein said computing a hash value of said pair of model points from said feature descriptors comprises:
normalizing and discretizing and rounding each component of the feature descriptors of the model point pairs to obtain integer vector representation of the feature descriptors;
taking the integer vector as a coefficient of a polynomial function, and taking a value of the polynomial function as a hash value;
wherein the normalizing comprises transforming a fourth component of the feature descriptor to [0, π ] by translational biasing.
8. The point cloud-based three-dimensional computer modeling method for a target object as recited in claim 1, wherein said computing hash values of said pair of model points from said feature descriptors and generating a point cloud model of the target object from said hash values of said pair of model points, further comprises:
obtaining the rotation angle $\alpha_m$ of each of the model point pairs and storing it into the point cloud model, wherein, denoting by $T_{m \to g}$ the coordinate transformation that takes the first point $m_r$ of a model point pair $(m_r, m_i)$ to the origin of the global coordinate system, the rotation angle $\alpha_m$ is the angle through which $T_{m \to g} m_i$ must be rotated about the x-axis to reach the plane defined by the x-axis and the non-negative half-axis of y.
9. A three-dimensional target identification method based on point cloud is characterized by comprising the following steps:
acquiring a point cloud model of a target object, wherein the point cloud model of the target object is established according to the method of any one of claims 1-8;
acquiring point cloud data according to a stereo image of a working scene acquired by a three-dimensional information acquisition device, wherein the stereo image of the working scene is subjected to self-adaptive uniform space sampling which is the same as the point cloud model to obtain scene point cloud data; and
and identifying whether an object to be identified which accords with the characteristics of the target object exists in the working scene or not and the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object.
10. The three-dimensional target identification method according to claim 9, wherein the identifying whether the object to be identified which meets the characteristics of the target object exists in the working scene or not and the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object comprises:
noise data are filtered, and the scene point cloud is preprocessed to obtain effective point cloud data;
randomly selecting a preset number $N_r$ of points from the valid point cloud data as reference points, constituting a reference point set $Q = \{s_r \mid r = 1, 2, \dots, N_r\}$;
for each reference point in the reference point set Q, selecting points of the valid point cloud data within a preset neighborhood range of the reference point as neighborhood scene points, and constructing neighborhood scene point pairs within the preset neighborhood range from the reference point and the neighborhood scene points;
calculating the hash value of the neighborhood scene point pair, and searching a model point pair matched with the neighborhood scene point pair in the point cloud model of the target object according to the hash value of the neighborhood scene point pair; and
and identifying the coarse identification result of the object to be identified and the pose thereof in the working scene according to the positions and the number of the neighborhood scene point pairs of each reference point in the reference point set Q and the matched model point pairs matched with the neighborhood scene point pairs.
11. The method of claim 10, wherein the preprocessing the scene point cloud and filtering out noise data to obtain valid point cloud data comprises:
performing pass-through filtering on the point cloud data to remove points located relatively far away; and/or
And carrying out plane segmentation according to the point cloud data and removing invalid points in the point cloud data.
12. The three-dimensional target recognition method according to claim 10, wherein recognizing the object to be recognized in the working scene and calculating its pose according to the positions and numbers of the neighborhood scene point pairs of each reference point in the reference point set Q and of the model point pairs matched with them comprises:
constructing, for each reference point $s_r$, a corresponding two-dimensional accumulator $A_r$, wherein the first dimension of the two-dimensional accumulator $A_r$ corresponds to the sample points of the model of the target object, and the second dimension corresponds to a division of the pose space of the object to be identified into a plurality of pose blocks formed from a plurality of intervals; performing accumulated voting at the corresponding coordinate points of the two-dimensional accumulator $A_r$ according to the model point pairs matched with the neighborhood scene point pairs and their number and positions;
determining, according to the result of the accumulated voting, the model sample point and the pose block corresponding to the reference point $s_r$, and taking the middle value of the corresponding pose block as the hypothesized pose corresponding to the reference point $s_r$; and
obtaining a candidate pose set $T_H$ from the model sample points and hypothesized poses corresponding to all reference points in the reference point set Q, and determining the true pose of the object to be recognized from the candidate pose set $T_H$.
13. The method according to claim 12, wherein the obtaining a point cloud model of the object to be recognized further comprises:
obtaining a hash table composed of the hash values of the model point pairs of the object to be identified, and obtaining the rotation angle $\alpha_m$ of each model point pair, wherein the rotation angle $\alpha_m$ is the angle through which $T_{m \to g} m_i$ must be rotated about the x-axis to reach the plane defined by the x-axis and the non-negative half-axis of y, after the first point $m_r$ of the model point pair $(m_r, m_i)$ has been transformed to the origin of the global coordinate system by $T_{m \to g}$; and
the performing accumulated voting at the corresponding coordinate points of the two-dimensional accumulator $A_r$ according to the model point pairs matched with the neighborhood scene point pairs and their number and positions comprises: determining the coordinate position of the vote according to the rotation angle $\alpha_s$ of the neighborhood scene point pair and the rotation angle $\alpha_m$ of the matched model point pair, wherein the rotation angle $\alpha_s$ of the neighborhood scene point pair is the angle through which $T_{s \to g} s_i$ must be rotated about the x-axis to reach the plane defined by the x-axis and the non-negative half-axis of y, after the first point $s_r$ of the scene point pair $(s_r, s_i)$ has been transformed to the origin of the global coordinate system by $T_{s \to g}$.
14. The three-dimensional object recognition method of claim 12, wherein determining the true pose of the object to be recognized from the candidate pose set $T_H$ comprises:
clustering the poses in the candidate pose set $T_H$ to obtain aggregated poses, and eliminating from them the invalid aggregations containing relatively few poses to obtain the valid aggregated poses as coarse matching poses;
for each coarse matching pose, transforming the target model into a scene according to the coarse matching pose, and performing pose optimization;
according to the result of pose optimization and the fitting degree of the target model, rejecting poses which do not accord with preset fitting conditions, wherein the preset fitting conditions comprise pose residual error conditions and point cloud overlapping rates; and
when the pose which meets the preset fitting condition is obtained after the elimination, judging that the object to be recognized which meets the characteristics of the target object exists in the working scene, and taking the pose which meets the preset fitting condition as the real pose of the object to be recognized;
and when the pose meeting the preset fitting condition does not exist after the elimination, judging that the object to be recognized meeting the characteristics of the target object does not exist in the working scene.
15. The three-dimensional object recognition method according to any one of claims 9 to 14, comprising: acquiring point cloud models of two or more target objects of different shapes; wherein identifying, according to the matching result of the scene point cloud and the point cloud models of the target objects, whether objects to be identified that meet the characteristics of the target objects exist in the working scene, and their poses, comprises the following steps:
identifying and stripping a single target, specifically comprising identifying whether an object to be identified which accords with the characteristics of the current target object exists in the scene; if an object to be recognized which accords with the characteristics of the current target object exists, recording a recognition result, and removing a scene point corresponding to the object to be recognized which accords with the characteristics of the current target object from the scene point cloud; and
and repeating the steps of identifying and stripping the single target for each target object until all the target objects are traversed or the number of the remaining scene points in the scene point cloud is less than a preset scene point threshold.
16. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for three-dimensional computer modeling of a target object based on a point cloud according to any one of claims 1-8 and/or implements the method for three-dimensional target recognition based on a point cloud according to any one of claims 9-15.
17. A computing device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which when executed performs the method of three-dimensional computer modeling of a target object based on a point cloud according to any of claims 1 to 8 and/or the method of three-dimensional object recognition based on a point cloud according to any of claims 9 to 15.
18. A parts sorting system comprising: the system comprises a control subsystem, a part sorting subsystem and a scene shooting subsystem, wherein the scene shooting subsystem is used for acquiring three-dimensional image data of a working scene; the control subsystem is used for controlling the part sorting subsystem to work according to the three-dimensional image data acquired by the scene shooting subsystem; the part sorting subsystem is used for carrying out physical operation on parts in a working scene, and is characterized in that:
the control subsystem comprises a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor implements the method for three-dimensional computer modeling of a target object based on a point cloud according to any one of claims 1 to 8 and/or implements the method for three-dimensional identification of a target object based on a point cloud according to any one of claims 9 to 15.
19. A point cloud-based three-dimensional computer modeling apparatus for a target object, comprising:
the point cloud acquisition module is used for acquiring point cloud data of a target object;
the sample point acquisition module is used for extracting sample points in a preset number range in the point cloud data of the target object in a self-adaptive manner, wherein the sample points are uniformly distributed on the target object in a global space;
the effective sample point screening module is used for selecting an effective sample point combination as a model point pair according to the relative position relation of the sample points in the sample point set;
the characteristic descriptor calculation module is used for calculating an enhanced characteristic descriptor of the model point pair, wherein the enhanced characteristic descriptor comprises Euclidean distance information between two sample points of each model point pair and information of the mutual relation between vector direction and normal vector between the two sample points; and
and the hash module is used for calculating the hash value of the model point pair according to the feature descriptor and generating a point cloud model of the target object according to the hash values of all the model point pairs.
20. A point cloud-based three-dimensional target recognition device, comprising:
a model obtaining module, configured to obtain a point cloud model of a target object, where the point cloud model of the target object is the point cloud model established according to any one of claims 1 to 8;
the scene point cloud acquisition module is used for acquiring point cloud data according to a stereo image of a working scene acquired by the three-dimensional information acquisition device, wherein the stereo image of the working scene is subjected to self-adaptive uniform space sampling which is the same as the point cloud model to obtain scene point cloud data; and
and the point cloud matching module is used for identifying whether an object to be identified which accords with the characteristics of the target object exists in the working scene or not and identifying the pose of the object to be identified according to the matching result of the scene point cloud data and the point cloud model of the target object.