CN108256529A - Global point cloud description method based on point cloud projected contour signature and distribution matrix - Google Patents

Global point cloud description method based on point cloud projected contour signature and distribution matrix

Info

Publication number
CN108256529A
CN108256529A (application CN201711226667.3A)
Authority
CN
China
Prior art keywords
point cloud
projection
signature
projected
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711226667.3A
Other languages
Chinese (zh)
Inventor
付明亮
冷雨泉
韩小宁
任利学
占志鹏
马维斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shen Shi Science And Technology Co Ltd
Original Assignee
Shenzhen Shen Shi Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shen Shi Science And Technology Co Ltd filed Critical Shenzhen Shen Shi Science And Technology Co Ltd
Priority to CN201711226667.3A priority Critical patent/CN108256529A/en
Publication of CN108256529A publication Critical patent/CN108256529A/en
Pending legal-status Critical Current
Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a global point cloud description method based on projected contour signature and distribution matrix. First, a local reference frame with rotation and translation invariance is constructed from the spatial coordinate information of the target point cloud, and the target point cloud is then transformed into this reference frame. The spatial bounding box of the pose-normalized target point cloud is built, and the target point cloud is projected onto three adjacent faces of the bounding box. To strengthen the descriptor's ability to express the visible portion of the target point cloud, the projected contour signature and the distribution matrix of the point cloud projection on each projection plane are computed, and the relevant concatenation rule is determined from the statistical properties of the point cloud projections. The sub-features of the target point cloud on the three projection planes are concatenated in sequence, in histogram form, to obtain the final CSDM descriptor.

Description

Global point cloud description method based on point cloud projected contour signature and distribution matrix
Technical field
The invention belongs to the field of pattern recognition, and in particular relates to a global point cloud description method based on point cloud projected contour signature and distribution matrix.
Background technology
Target recognition is one of the most challenging tasks in robotics. To ensure physical interaction between a robot and its environment, a suitable object representation method must supply the robot with relevant information about objects of interest in the surrounding environment in real time.
Although many target recognition methods based on 2D and 3D object information have been designed, recognizing 3D objects in the presence of noise and varying point cloud resolution remains an extremely challenging task. Compared with 2D information, 3D information such as a point cloud contains more spatial information about the object, which facilitates scene segmentation of the target. Moreover, depth data is not as sensitive to illumination and shadow as RGB data. 3D point cloud data is therefore well suited to describing the geometric structure of a target's surface.
A 3D target recognition system generally comprises the following modules: object detection, object representation, object recognition, and perceptual memory. Object detection detects all targets in the robot's interaction scene. Object representation uniquely represents each detected target with corresponding features. Object recognition compares the feature descriptors of objects in the scene with the descriptors of known targets stored in perceptual memory, and recognizes scene targets according to the matching degree of the descriptors. The quality of the descriptors produced by the object representation algorithm therefore directly determines the accuracy of the recognition result.
Existing 3D object representation methods fall into two broad classes: global descriptors and local descriptors. A global descriptor encodes the entire 3D object, while a local descriptor represents only the neighborhood of a feature point. Given their ability to express an object as a whole, global descriptors are increasingly applied in fields such as target recognition and interactive manipulation. However, global descriptors based on viewpoint feature histograms, such as VFH [Rusu R B, Bradski G, Thibaux R, et al. Fast 3D recognition and pose using the viewpoint feature histogram [C] // Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010: 2155-2162.], require computing the normal vectors of the point cloud surface. Commonly used point cloud sensors, such as the Kinect and time-of-flight depth cameras, are affected by various error sources, so point cloud data converted from depth images acquired by such devices inevitably contains noise.
Invention content
To solve the above problems, the object of the present invention is to provide a global point cloud description method based on point cloud projected contour signature and distribution matrix.
The technical solution adopted by the present invention to solve the technical problem is a global point cloud description method based on point cloud projected contour signature and distribution matrix, comprising the following steps:
1) Target point cloud preprocessing: preprocess the target point cloud;
2) Target point cloud pose normalization: align the target point cloud with a local reference frame;
3) Computing the point cloud projected contour signature: determine the center point c and the spatial cuboid bounding box of the normalized point cloud; project the target point cloud onto three adjacent faces of the bounding box; on each projection plane, take the projection of the center point c as the start point and radiate rays outward at equal angular intervals; the intersections of this ray cluster with the outer contour of the point cloud projection on the corresponding projection plane form the contour signature of the target point cloud projection on that plane;
4) Computing the point cloud projection distribution matrix: taking the maximum side length l of the cuboid bounding box obtained in step 3) as the side length, obtain a new cube bounding box; take three adjacent faces of the cube bounding box as projection planes, evenly divide each projection plane into an n × n grid, and count the number of point cloud projection points falling into each cell, giving the projection distribution matrix M of the point cloud projection on that plane;
5) Fusion of the point cloud projected contour signature and distribution matrix: normalize the contour signature and the distribution matrix separately, determine the concatenation rule from the statistical properties of the point cloud projections, and concatenate the sub-features of the target point cloud on the three projection planes, in histogram form, to obtain the final CSDM descriptor.
In step 1), preprocessing the target point cloud specifically means filtering the target point cloud with a statistical outlier removal filter; the statistical outlier removal filter uses the standard implementation in the Point Cloud Library (PCL).
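The statistical outlier removal referenced here can be sketched in a few lines. The following illustrative NumPy snippet is not the PCL implementation itself; it applies the same criterion PCL's StatisticalOutlierRemoval documents: compute each point's mean distance to its k nearest neighbors, then discard points whose mean distance exceeds the global mean by more than a multiple of the standard deviation. The parameter values are assumptions chosen for the example.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_mul=1.0):
    """Remove points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than std_mul standard deviations
    (the criterion documented for PCL's StatisticalOutlierRemoval)."""
    diffs = points[:, None, :] - points[None, :, :]   # pairwise differences
    dists = np.linalg.norm(diffs, axis=2)             # pairwise distances
    np.fill_diagonal(dists, np.inf)                   # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]               # k nearest neighbors
    mean_d = knn.mean(axis=1)                         # mean neighbor distance per point
    thresh = mean_d.mean() + std_mul * mean_d.std()
    return points[mean_d <= thresh]

# A dense cluster plus one sparse far-away outlier:
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(200, 3))
cloud = np.vstack([cloud, [5.0, 5.0, 5.0]])           # sparse outlier point
filtered = statistical_outlier_removal(cloud, k=8, std_mul=1.0)
print(len(cloud), len(filtered))                      # the outlier is removed
```

The brute-force pairwise distance matrix stands in for the k-d tree a real implementation would use; the criterion is the same.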
In step 2), the local reference frame (LRF) is established as follows:
2-1) Determining the LRF origin:
Given a target point cloud P = {p1, ..., pm} containing m 3D points, the origin of the local coordinate system is defined as the geometric center of the target point cloud:
c = (1/m) Σi=1..m pi
where pi denotes the coordinates of a 3D point in the target point cloud;
2-2) Determining the LRF coordinate axes:
The three coordinate axes are determined by principal component analysis (PCA).
The normalized covariance matrix C can be expressed as:
C = (1/m) Σi=1..m (pi − c)(pi − c)T
Performing an eigenvalue decomposition of the covariance matrix C yields the corresponding eigenvectors and eigenvalues:
CV = VE
where V = [v1 v2 v3] is the matrix formed by the three eigenvectors and E = diag(λ1, λ2, λ3) is the diagonal matrix of eigenvalues, whose three eigenvalues satisfy λ1 ≥ λ2 ≥ λ3.
The LRF with the central point c of the target point cloud as origin is expressed as:
LP = {x(P), y(P), x(P) × y(P)}
where x(P) and y(P) denote the X axis and Y axis of LP, and × denotes the cross product; the X axis and Y axis are represented by the two eigenvectors with the larger eigenvalues, v1 and v2, and the Z axis by v1 × v2;
2-3) Disambiguation of the LRF axis directions.
The disambiguation of the LRF axis directions proceeds as follows:
2-3-1) Direction disambiguation of the X axis: the sign of v1 is chosen so that the X axis agrees in orientation with the majority of the vectors from the geometric center to the points of the cloud, i.e.
x = v1 · sign( Σi=1..m (pi − c) · v1 )
where pi denotes the i-th point of the target point cloud;
2-3-2) Direction disambiguation of the Y axis: the sign of v2 is chosen in the same way.
Once the X axis and Y axis are uniquely determined, the Z axis is uniquely determined.
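The LRF construction of steps 2-1) to 2-3) can be sketched as follows. This is an illustrative NumPy implementation under the stated definitions (centroid origin, PCA axes, majority-vote sign disambiguation), not the patent's reference code.

```python
import numpy as np

def build_lrf(points):
    """Local reference frame: centroid origin, PCA axes, and a
    majority-vote sign disambiguation (each axis is flipped so it agrees
    with most vectors from the centroid to the points)."""
    c = points.mean(axis=0)                  # geometric center (LRF origin)
    Q = points - c
    C = Q.T @ Q / len(points)                # normalized covariance matrix
    eigval, eigvec = np.linalg.eigh(C)       # eigh returns ascending eigenvalues
    order = np.argsort(eigval)[::-1]         # reorder so lambda1 >= lambda2 >= lambda3
    v1, v2 = eigvec[:, order[0]], eigvec[:, order[1]]
    # sign disambiguation: align each axis with the majority of (p_i - c)
    if np.sum(Q @ v1 >= 0) < np.sum(Q @ v1 < 0):
        v1 = -v1
    if np.sum(Q @ v2 >= 0) < np.sum(Q @ v2 < 0):
        v2 = -v2
    z = np.cross(v1, v2)                     # Z axis from the cross product
    return c, np.column_stack([v1, v2, z])

# Elongated cloud: the X axis should follow the direction of largest spread.
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3)) * np.array([10.0, 2.0, 0.5])
c, R = build_lrf(pts)
aligned = (pts - c) @ R                      # pose-normalized point cloud
print(np.round(np.abs(R[:, 0]), 1))          # first axis ~ [1, 0, 0]
```

Transforming the cloud into this frame, as in `aligned` above, is exactly the pose normalization of step 2).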
In step 3), the center of the spatial cuboid bounding box is the geometric center of the target point cloud, and its length, width, and height are the differences between the maximum and minimum point cloud coordinates along the three coordinate axes, respectively.
In step 3), the point cloud projected contour signature is obtained by the following steps:
3-1) Take three adjacent faces of the bounding box as projection planes and project the target point cloud onto each of them. On each projection plane, take the projection of the bounding box center on that plane as the start point and radiate rays outward at angular interval Δ; the intersections of this ray cluster with the 2D bounding box of each projection give 360/Δ intersection points;
3-2) Centered on each of these intersection points, with r as the search radius, perform a k-d tree search. If the number of neighborhood points found is no less than the threshold Nneighbor, that point is taken as the contour projection point on the ray; otherwise, move the intersection point a distance dΔ along the ray toward the LRF origin and search again with radius r, until the number of neighborhood points found is no less than Nneighbor, at which point the corresponding search point is taken as the contour point on the ray. In this way 360/Δ contour points are finally obtained;
3-3) Store the distances from the contour points to the ray start point in a vector Vcs to obtain the contour projection signature on the corresponding projection plane.
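As a rough illustration of steps 3-1) to 3-3) on a single projection plane, the sketch below walks each ray's bounding-box intersection inward until enough projected neighbors fall within the search radius, then records the distance from that contour point to the ray start point. The parameter values (Δ = 30°, Nneighbor = 4, inward step of 0.2 times the current distance, at most 6 searches per ray) are modeled on the embodiment described later; the radius r and the brute-force neighbor search are assumptions standing in for the k-d tree.

```python
import numpy as np

def contour_signature(proj, delta_deg=30, r=0.15, n_neighbor=4,
                      step_frac=0.2, max_search=6):
    """Ray-based contour signature on one projection plane (illustrative).
    Each ray starts at the 2D bounding-box border and is stepped toward
    the center until >= n_neighbor projected points lie within radius r;
    the distance from that contour point to the ray start is recorded."""
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    center = (lo + hi) / 2.0
    w, h = hi - lo
    start_dist = np.hypot(w, h) / 2.0            # half-diagonal: on/outside the box
    sig = []
    for ang in np.deg2rad(np.arange(0.0, 360.0, delta_deg)):
        d = np.array([np.cos(ang), np.sin(ang)])
        q = center + start_dist * d
        for _ in range(max_search):
            if np.sum(np.linalg.norm(proj - q, axis=1) <= r) >= n_neighbor:
                break                             # enough neighbors: contour point
            dis = np.linalg.norm(q - center)      # distance to the ray start point
            q = q - step_frac * dis * d           # step inward along the ray
        sig.append(np.linalg.norm(q - center))
    return np.array(sig)

# A unit circle of projected points: every ray should stop at ~ the same radius.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
sig = contour_signature(circle)
print(sig.shape)          # 12 contour distances, one per 30-degree ray
```

For the rotationally symmetric test cloud all 12 signature entries coincide, which is the expected behavior of a contour signature.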
In step 4), the center of the spatial cube bounding box is the geometric center of the target point cloud, and the side length is the maximum of the three differences between the maximum and minimum point cloud coordinates along the three coordinate axes.
In step 5), the contour signatures and distribution matrices obtained in steps 3) and 4) are each normalized to give 6 sub-feature histograms, and these 6 sub-feature histograms are fused by concatenation. The concatenation rule is:
5-1) First compute the projection density entropy (PDE) of the three projection planes, and place the sub-feature histograms corresponding to the projection plane with the maximum PDE in the first part of the CSDM descriptor;
5-2) Compute the distribution variance of the remaining two projections. The sub-features corresponding to the projection with the larger distribution variance form the middle section of the CSDM descriptor, and the sub-feature histograms corresponding to the remaining third projection are placed at the tail of the CSDM descriptor. The sub-features corresponding to each projection plane follow the principle that the histogram corresponding to the contour projection signature comes first and the histogram corresponding to the distribution matrix comes second.
In step 5-1), the PDE of a target point cloud projection is computed as:
E = − Σa=1..n Σb=1..n Mab log Mab
where E denotes the PDE, n denotes the number of horizontal and vertical divisions used when gridding the point cloud projection, Mab denotes the corresponding distribution matrix entry, and a and b denote the row and column indices of the grid.
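Under the usual convention that empty cells contribute nothing (0 · log 0 = 0), the PDE can be computed as below; the natural logarithm is an assumption, since the source does not fix the base.

```python
import numpy as np

def projection_density_entropy(M):
    """PDE of a normalized n x n distribution matrix M:
    E = -sum_ab M_ab * log(M_ab), skipping empty cells (M_ab = 0)."""
    p = M[M > 0]
    return float(-np.sum(p * np.log(p)))

uniform = np.full((5, 5), 1 / 25.0)            # projection spread evenly
peaked = np.zeros((5, 5)); peaked[0, 0] = 1.0  # all points in a single cell
print(projection_density_entropy(uniform))     # ln(25) ~ 3.219, the maximum
print(projection_density_entropy(peaked))      # 0.0, the minimum
```

An even spread of projected points thus maximizes PDE, which is why the maximum-PDE projection is placed first in the descriptor.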
In step 5-2), the distribution variance of a point cloud projection on a projection plane is computed as:
D = Σi=1..n² (mi − μm)²
where μm denotes the expected value, μm = (1/n²) Σi=1..n² mi, n denotes the number of horizontal and vertical divisions used when gridding the point cloud projection, and mi denotes the probability corresponding to each grid cell.
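A minimal sketch of the distribution variance, assuming μm is the mean cell probability (equal to 1/n² when the matrix is normalized):

```python
import numpy as np

def distribution_variance(M):
    """Distribution variance of a normalized n x n matrix:
    D = sum_i (m_i - mu)^2 with mu the mean cell probability (1/n^2)."""
    m = M.ravel()
    mu = m.mean()                  # equals 1/n^2 when M sums to 1
    return float(np.sum((m - mu) ** 2))

uniform = np.full((5, 5), 1 / 25.0)
peaked = np.zeros((5, 5)); peaked[0, 0] = 1.0
print(distribution_variance(uniform))   # 0.0: perfectly even spread
print(distribution_variance(peaked))    # 0.96: highly concentrated projection
```

The variance behaves oppositely to the PDE: it is zero for an even spread and large for a concentrated one.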
The present invention has the following benefits and advantages:
1. The proposed descriptor is built on a local reference frame, which gives it pose invariance.
2. The proposed global point cloud descriptor fuses two kinds of feature information, the point cloud projection distribution matrix and the contour signature, and has strong expressive ability.
3. The proposed descriptor has a time overhead comparable to state-of-the-art global descriptors and can be applied in some real-time settings.
Description of the drawings
Fig. 1 is a schematic diagram of the generation process of the descriptor of the present invention.
Fig. 2 compares the recognition precision of the descriptor of the present invention on a public dataset.
Fig. 3 compares the recognition precision of the descriptor of the present invention under different levels of noise.
Fig. 4 compares the recognition precision of the descriptor of the present invention under different point cloud densities.
Fig. 5 compares the average computation time of the descriptor of the present invention.
Fig. 6 is flow chart of the method for the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments.
As shown in Fig. 6, the invention discloses a global point cloud description method based on point cloud projected contour signature and distribution matrix. First, a local reference frame with rotation and translation invariance is constructed from the spatial coordinate information of the target point cloud, and the target point cloud is then transformed into this reference frame. The spatial bounding box of the pose-normalized target point cloud is built, and the target point cloud is projected onto three adjacent faces of the bounding box. To strengthen the descriptor's ability to express the visible portion of the target point cloud, the projected contour signature and the distribution matrix of the point cloud projection on each projection plane are computed, and the relevant concatenation rule is determined from the statistical properties of the point cloud projections. The sub-features of the target point cloud on the three projection planes are concatenated in sequence, in histogram form, to obtain the final CSDM (contour signature and distribution matrix) descriptor.
A global point cloud description method based on point cloud projected contour signature and distribution matrix comprises the following steps:
1) Target point cloud preprocessing:
The target point cloud is preprocessed with a statistical outlier removal filter to remove sparse outlier points from the point cloud;
2) Target point cloud pose normalization:
To give the generated global target feature descriptor invariance to pose, the pose of the target point cloud is first normalized; pose normalization is essentially the process of aligning the point cloud with a local reference frame;
3) Computing the point cloud projected contour signature:
After the target point cloud is pose-normalized, the center point c of the normalized point cloud and the spatial cuboid bounding box are first determined. The target point cloud is projected onto three adjacent faces of the bounding box; on each projection plane, the projection of the center point c is taken as the start point, and rays are radiated outward at equal angular intervals. The intersections of this ray cluster with the outer contour of the point cloud projection on the corresponding projection plane constitute the contour signature of the target point cloud projection on that plane;
4) Computing the point cloud projection distribution matrix:
Taking the maximum side length l of the cuboid bounding box obtained in step 3) as the side length, a new cube bounding box is obtained. Three adjacent faces of the cube bounding box are taken as projection planes, each projection plane is evenly divided into an n × n grid, and the number of point cloud projection points falling into each cell is counted, giving the projection distribution matrix M of the point cloud projection on that plane;
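Step 4) reduces to a 2D histogram over the projection window. The sketch below is illustrative: it grids a square window of the given side length, counts projected points per cell, and normalizes; clamping boundary points to the edge cells is an assumption.

```python
import numpy as np

def projection_distribution_matrix(proj, side, center, n=5):
    """Count projected 2D points in an n x n grid over a square window
    of length `side` centered at `center`, then normalize by the total."""
    lo = np.asarray(center, dtype=float) - side / 2.0
    idx = np.floor((proj - lo) / (side / n)).astype(int)
    idx = np.clip(idx, 0, n - 1)         # boundary points go to the edge cell
    M = np.zeros((n, n))
    for a, b in idx:
        M[a, b] += 1.0
    return M / len(proj)

# 100 projected points spread over a unit square window:
rng = np.random.default_rng(2)
proj = rng.uniform(0.0, 1.0, size=(100, 2))
M = projection_distribution_matrix(proj, side=1.0, center=(0.5, 0.5), n=5)
print(M.sum())    # 1.0: every point lands in exactly one cell
```

With n = 5, as in the embodiment below, each projection contributes a 5 × 5 normalized matrix.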
5) Fusion of the point cloud projected contour signature and distribution matrix:
After the contour signatures and distribution matrices of the target point cloud on the three projection planes have been obtained, they are each normalized, the relevant concatenation rule is determined from the statistical properties of the point cloud projections, and the sub-features of the target point cloud on the three projection planes are concatenated in sequence, in histogram form, to obtain the final CSDM descriptor.
In step 1), the statistical outlier removal filter adopts the standard implementation in the Point Cloud Library (PCL).
In step 2), the local reference frame (LRF) is established as follows:
2-1) Determining the LRF origin:
Given a target point cloud P = {p1, ..., pm} containing m 3D points, the origin of the local coordinate system is defined as the geometric center of the target point cloud:
c = (1/m) Σi=1..m pi
where pi denotes the coordinates of a 3D point in the target point cloud;
2-2) Determining the LRF coordinate axes:
The repeatability of the LRF directly affects the descriptor's ability to express the target; principal component analysis (PCA) is used here to determine the three coordinate axes. The normalized covariance matrix C can be expressed as:
C = (1/m) Σi=1..m (pi − c)(pi − c)T
Performing an eigenvalue decomposition of the covariance matrix C yields the corresponding eigenvectors and eigenvalues:
CV = VE
where V = [v1 v2 v3] is the matrix formed by the three eigenvectors and E = diag(λ1, λ2, λ3) is the diagonal matrix of eigenvalues, whose three eigenvalues satisfy λ1 ≥ λ2 ≥ λ3. Since the covariance matrix is symmetric positive definite, its eigenvalues are positive and its eigenvectors are pairwise orthogonal.
The LRF with the geometric center c of the target point cloud as origin can be expressed as:
LP = {x(P), y(P), x(P) × y(P)}
Here x(P) and y(P) denote the X axis and Y axis of LP, and × denotes the cross product. To guarantee the repeatability of the LRF, the X axis and Y axis are represented by the two eigenvectors with the larger eigenvalues, v1 and v2, and the Z axis by v1 × v2. Since the direction defined by an eigenvector is not unique, a sign disambiguation mechanism is needed to determine unique axis directions;
2-3) Sign disambiguation of the LRF axis directions:
The disambiguation of the axis directions defined by the eigenvectors proceeds as follows:
2-3-1) For the ambiguous coordinate system established from the eigenvectors in step 2-2), four ambiguous forms exist. The axis directions are disambiguated according to the principle that each coordinate axis of the target point cloud should agree in orientation with the majority of the vectors from the geometric center to the surrounding points:
x = v1 · sign( Σi=1..m (pi − c) · v1 )
2-3-2) The direction disambiguation of the Y axis is similar to that of the X axis; once the X axis and Y axis are uniquely determined, the Z axis is also uniquely determined.
In step 3), the center of the spatial cuboid bounding box is the geometric center of the target point cloud, and its length, width, and height are the differences between the maximum and minimum point cloud coordinates along the three coordinate axes, respectively.
In step 3), the computation process of the point cloud projected contour signature is:
3-1) Take three adjacent faces of the bounding box as projection planes and project the target point cloud onto each of them. On each projection plane, take the projection of the bounding box center on that plane as the start point and radiate rays outward at angular interval Δ; the intersections of this ray cluster with the 2D bounding box of each projection give 360/Δ intersection points;
3-2) Centered on each of these intersection points, with r as the search radius, perform a k-d tree search. If the number of neighborhood points found is no less than Nneighbor, that point is taken as the contour projection point on the ray. If the number of neighborhood points found is less than Nneighbor, the intersection point is moved a distance dΔ along the ray toward the origin, and the k-d tree search is repeated with radius r, until the number of neighborhood points found is no less than Nneighbor, at which point the corresponding search point is taken as the contour point on the ray; in this way 360/Δ contour points are finally obtained. To avoid the search falling into an endless loop, if the number of searches on a ray exceeds Nsearch without finding at least Nneighbor neighborhood points, the center point of the last search is taken as the contour projection point on that ray;
3-3) Store the distances from the contour projection points to the ray start point in the vector Vcs, giving the contour projection signature on the corresponding projection plane.
In step 4), the center of the spatial cube bounding box is the geometric center of the target point cloud, and the side length is the maximum of the three differences between the maximum and minimum point cloud coordinates along the coordinate axes.
In step 5), the contour signatures and distribution matrices computed in steps 3) and 4) are each normalized to give 6 sub-feature histograms, and these 6 sub-feature histograms are fused by concatenation. The specific concatenation rule is:
5-1) The contour signature Vcs and distribution matrix M of each point cloud projection correspond to 2 histograms, which form one group. First compute the projection density entropy (PDE) of the three projection planes, and place the sub-feature histograms corresponding to the projection plane with the maximum PDE in the first part of the CSDM descriptor;
5-2) Compute the distribution variance of the remaining two projections. The sub-features corresponding to the projection with the larger distribution variance form the middle section of the CSDM descriptor, and the sub-feature histograms corresponding to the remaining third projection are placed at the tail of the CSDM descriptor. The sub-features corresponding to each projection plane follow the principle that the histogram corresponding to the contour projection signature comes first and the histogram corresponding to the distribution matrix comes second.
In step 5-1), the PDE of a target point cloud projection is computed as:
E = − Σa=1..n Σb=1..n Mab log Mab
In step 5-2), the distribution variance of a point cloud projection on a projection plane is computed as:
D = Σi=1..n² (mi − μm)²
where μm denotes the expected value, μm = (1/n²) Σi=1..n² mi, n denotes the number of rows and columns of the point cloud grid, and mi denotes the probability corresponding to each grid cell.
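The concatenation rule of steps 5-1) and 5-2) can be sketched as a sort over the three (signature, matrix) groups, assuming the PDE and distribution variance of each projection have already been computed as above; the helper below and its toy values are illustrative only.

```python
import numpy as np

def concat_csdm(groups):
    """groups: three (signature, matrix, pde, variance) tuples, one per
    projection plane. Orders them by the rule: max-PDE plane first, then
    larger distribution variance, then the rest; within a group the
    contour signature precedes the flattened distribution matrix."""
    first = max(range(3), key=lambda i: groups[i][2])        # max PDE first
    rest = sorted((i for i in range(3) if i != first),
                  key=lambda i: groups[i][3], reverse=True)  # larger variance next
    order = [first] + rest
    return np.concatenate([np.concatenate([groups[i][0], groups[i][1].ravel()])
                           for i in order])

# Three toy planes: 12-bin signatures and 5x5 normalized matrices.
sigs = [np.full(12, v) for v in (0.1, 0.2, 0.3)]
mats = [np.full((5, 5), 1 / 25.0)] * 3
groups = [(sigs[0], mats[0], 0.5, 0.02),   # low PDE, larger variance -> middle
          (sigs[1], mats[1], 3.2, 0.00),   # maximum PDE             -> first
          (sigs[2], mats[2], 0.7, 0.01)]   # low PDE, smaller var    -> tail
desc = concat_csdm(groups)
print(desc.shape)        # (111,) = 3 * (12 + 25)
```

With a 12-entry signature and a 5 × 5 matrix per plane, as in the embodiment, the final CSDM descriptor has 3 × (12 + 25) = 111 bins.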
Embodiment 1: recognition precision evaluation of the descriptor of the invention on a public dataset
The global point cloud description method based on point cloud projected contour signature and distribution matrix provided by the invention comprises the following steps, performed in order:
1) Preprocessing stage
1-1) Map depth images to 3D point cloud data. This embodiment uses the Washington RGB-D Object Dataset (http://rgbd-dataset.cs.washington.edu/) as the test dataset; it provides depth image sets, captured from different viewpoints, of 300 common household objects in 51 classes.
As shown in Fig. 1, taking plate_1_4_236_crop.png from the dataset as an example, the process of generating the corresponding CSDM descriptor is illustrated. The interface function depthToCloud provided with the dataset first converts the depth image into 3D point cloud data. For any point on the depth image, the 3D coordinates pcloud in the corresponding sensor coordinate system can be expressed as
pcloud(u, v) = [ xgrid(u, v) · depth(u, v) / constant, ygrid(u, v) · depth(u, v) / constant, depth(u, v) ]
where u and v denote the pixel coordinates of the corresponding 3D point on the cropped depth image, xgrid(u, v) and ygrid(u, v) denote the pixel coordinates of the corresponding 3D point on the original depth image, constant is a camera constant with value 570.3, and depth(u, v) denotes the corresponding depth value. The visualized point cloud data obtained with this interface function is shown in Fig. 1(a).
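The back-projection above can be sketched as follows. This is an illustrative NumPy re-implementation, not the dataset's depthToCloud itself; the principal point (320, 240) assumed here corresponds to the center of a standard 640 × 480 depth image, and `topleft` gives the crop offset on the original image.

```python
import numpy as np

def depth_to_cloud(depth, topleft=(1, 1), constant=570.3, center=(320, 240)):
    """Back-project a (cropped) depth image to 3D points: the x and y grids
    are pixel coordinates on the original image relative to the assumed
    optical center, scaled by depth over the camera constant."""
    h, w = depth.shape
    xg, yg = np.meshgrid(np.arange(w) + topleft[0] - center[0],
                         np.arange(h) + topleft[1] - center[1])
    z = depth.astype(float)
    return np.dstack([xg * z / constant, yg * z / constant, z])

# A flat 4x4 depth patch 1000 mm away, cropped near the image center:
depth = np.full((4, 4), 1000.0)
cloud = depth_to_cloud(depth, topleft=(319, 239))
print(cloud.shape)     # (4, 4, 3): one 3D point per depth pixel
print(cloud[0, 0])     # small lateral offsets near the optical axis
```

Points near the principal point map to small x and y values, and z is simply the measured depth, consistent with a pinhole model.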
1-2) Removal of discrete outliers in the target point cloud
Discrete outliers in the target point cloud interfere strongly with the projected contour signature and need to be removed with a statistical outlier removal filter. A standard statistical outlier removal filter (http://pointclouds.org/documentation/tutorials/statistical_outlier.php#statistical-outlier-removal) is used to filter the initial target point cloud; Fig. 1(a) shows the point cloud before and after filtering.
2) Target point cloud pose normalization
2-1) Determining the LRF origin
To guarantee the invariance of the designed descriptor to rotation and translation, a repeatable and unique local reference frame (LRF) must first be determined for the target point cloud. Let the target point cloud corresponding to plate_1_4_236_crop.png in Fig. 1 be expressed as P = {p1, ..., pm}, where m denotes the total number of 3D points in the preprocessed point cloud and pi (i = 1, ..., m) denotes a point in the target point cloud. Then the origin c of the LRF can be defined as:
c = (1/m) Σi=1..m pi
2-2) Determining the LRF coordinate axes
The repeatability of the local reference frame directly affects the descriptor's ability to express the target; principal component analysis is used here to determine the directions of the three coordinate axes. The normalized covariance matrix C can be expressed as:
C = (1/m) Σi=1..m (pi − c)(pi − c)T
Performing an eigenvalue decomposition of the covariance matrix C yields the corresponding eigenvectors and eigenvalues, which satisfy:
CV = VE
where V = [v1 v2 v3] is the matrix formed by the three eigenvectors and E = diag(λ1, λ2, λ3) is the diagonal matrix of eigenvalues, whose three eigenvalues satisfy λ1 ≥ λ2 ≥ λ3. Since the covariance matrix is symmetric positive definite, its eigenvalues are positive and its eigenvectors are pairwise orthogonal.
Once the coordinate origin and eigenvectors of the LRF have been determined, the corresponding reference frame can be expressed as:
Lc = {x(P), y(P), x(P) × y(P)}
Here x(P) and y(P) denote the X axis and Y axis of Lc. To make the computation of the local reference frame repeatable, the X axis and Y axis are represented by the two eigenvectors with the larger eigenvalues, v1 and v2, and the Z axis by v1 × v2. Since the direction defined by an eigenvector is not unique, a sign disambiguation mechanism is needed to determine unique axis directions.
2-3) Sign disambiguation of the LRF axis directions
To complete the sign disambiguation of the LRF, a temporary coordinate system is first defined, whose X axis and Y axis are represented by the two eigenvectors with the largest eigenvalues and whose Z axis is represented by the cross product of these two eigenvectors. At this point four ambiguous possibilities for the coordinate system remain. The final directions are determined according to the principle that each coordinate axis should agree in orientation with the majority of the vectors from the geometric center to the surrounding points:
x = v1 · sign( Σi=1..m (pi − c) · v1 )
The direction disambiguation of the Y axis is similar to that of the X axis; once the X axis and Y axis are uniquely determined, the Z axis is also uniquely determined. This completes the determination of the temporary coordinate system and of the LRF. Fig. 1(b) shows the LRF of the example point cloud.
3) Computing the point cloud projected contour signature
After the target point cloud is pose-normalized, the center point c of the normalized point cloud and the spatial cuboid bounding box Boxsign are first determined. Three adjacent faces of the bounding box are taken as projection planes, and the target point cloud is projected onto each of them. On each projection plane, the projection of the bounding box center on that plane is taken as the start point, and rays are radiated outward at an angular interval of 30°, giving 12 intersection points between this ray cluster and the 2D bounding box on each projection plane. Let w and h denote the width and height of the 2D bounding box on the projection plane. Centered on each of these intersection points, with a search radius r determined from w and h, a k-d tree search is performed. If the number of neighborhood points found is no less than 4, that point is taken as the contour projection point on the ray. If the number of neighborhood points found is less than 4, let dis denote the distance from the current search point to the ray start point; the intersection point is then moved 0.2·dis along the ray toward the origin, and the k-d tree search is repeated with radius r, until the number of neighborhood points found is no less than 4, at which point the corresponding search point is taken as the contour point on the ray. In this way 12 contour points are finally obtained. To avoid the search falling into an endless loop, if the number of searches on a ray exceeds 6 without finding the required number of neighborhood points, the center point of the last search is taken as the contour projection point on that ray. The distances from the contour projection points to the ray start point are stored in the vector Vcs, giving the contour projection signature on the corresponding projection plane.
4) Computing the point cloud projection distribution matrix
Taking the maximal side length of the cuboid bounding box Box_sign as the side length, a new cube bounding box Box_distr is constructed, with the centers of Box_sign and Box_distr coinciding. Once the three-dimensional bounding box is determined, the projections of the point cloud onto its surfaces, projection_i, i ∈ {XY, YZ, XZ}, are obtained. Each projection plane is evenly partitioned into a 5 × 5 grid, as shown in Fig. 1(c). The number of projected points falling into each cell is counted, yielding for each projection plane a distribution matrix M_i of projected-point counts, which is then normalized; the dimension of M_i is 5 × 5.
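The per-plane distribution matrix of step 4) amounts to a normalized 2D histogram over the projection's bounding box; a sketch using `numpy.histogram2d`:

```python
import numpy as np

def distribution_matrix(pts2d, n=5):
    """Normalized n x n distribution matrix of one point cloud projection.

    Counts the projected points per grid cell over the projection's 2D
    bounding box, then normalizes the counts so they sum to 1.
    """
    lo, hi = pts2d.min(axis=0), pts2d.max(axis=0)
    counts, _, _ = np.histogram2d(
        pts2d[:, 0], pts2d[:, 1], bins=n,
        range=[[lo[0], hi[0]], [lo[1], hi[1]]])
    return counts / counts.sum()
```

The three matrices M_XY, M_YZ, M_XZ are obtained by calling this on each of the three projections.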
5) Fusion of the sub-feature histograms
The contour signature and distribution-matrix sub-features are each normalized, yielding 6 sub-feature histograms, and these 6 sub-feature histograms are fused by concatenation to obtain the final CSDM descriptor.
The specific concatenation rule is as follows: the projected density entropy of the three projection planes is computed first, and the sub-feature histogram of the projection plane with the maximal projected density entropy is placed in the first part of the CSDM descriptor. Next, the distribution variances of the remaining two projections are computed; the sub-feature corresponding to the projection with the larger distribution variance forms the second part of the CSDM descriptor, and the sub-feature of the remaining third projection is placed at the tail of the CSDM descriptor. Within the sub-features of each projection, the contour signature is placed first and the distribution matrix after.
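A sketch of the projected density entropy and the concatenation rule above; the plane names and the dictionary layout are illustrative assumptions:

```python
import numpy as np

def projected_density_entropy(M):
    """Projected density entropy of a normalized distribution matrix M."""
    p = M[M > 0]
    return float(-(p * np.log(p)).sum())

def fuse_csdm(signatures, matrices):
    """Concatenate per-plane sub-features in the CSDM ordering.

    signatures, matrices: dicts keyed by plane name ('XY', 'YZ', 'XZ')
    holding the normalized contour signature vector and the normalized
    distribution matrix of each projection plane.
    """
    planes = list(signatures)
    # plane with maximal projected density entropy goes first
    first = max(planes, key=lambda k: projected_density_entropy(matrices[k]))
    rest = [k for k in planes if k != first]
    # of the remaining two, larger distribution variance goes second
    rest.sort(key=lambda k: matrices[k].var(), reverse=True)
    order = [first] + rest
    # within each plane: contour signature first, distribution matrix after
    return np.concatenate([np.r_[signatures[k], matrices[k].ravel()]
                           for k in order])
```

With 12-bin signatures and 5 × 5 matrices, the fused descriptor has 3 × (12 + 25) = 111 dimensions.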
To assess the object-description performance of the proposed global point cloud descriptor, recognition accuracy was evaluated with 10-fold cross validation, comparing against VFH and GOOD, two descriptors with good recognition rates [Kasaei S H, Tomé A M, Lopes L S, et al. GOOD: A global orthographic object descriptor for 3D object recognition and manipulation[J]. Pattern Recognition Letters, 2016, 83: 312-320.]. The parameter comparison of the three descriptors is shown in Table 1.
Table 1 Parameter comparison of the feature descriptors
The CSDM, GOOD and VFH features were computed for all instances of the 51 categories of the Washington RGB-D Object Dataset, and the recognition performance of each descriptor was evaluated for category counts of 51, 45, 40, 35, 30, 25, 20, 15, 10 and 5. For the performance evaluation under a particular category count, all instance features of the corresponding categories were randomly divided into 10 parts of approximately equal size; each part in turn served as the test set while the remaining 9 parts served as the training set. In the training stage, the features of the 9 training parts were stored in a k-d tree; in the test stage, each feature of the held-out part was searched against the trained k-d tree, and the test was counted as a successful recognition if the class of the test feature matched the class of the search result, and as a failure otherwise. If the number of successfully recognized instances in fold i is TP_i (i = 1, 2, …, 10) and the total number of corresponding test samples is T_i (i = 1, 2, …, 10), the recognition accuracy under that category count is computed as
accuracy = (Σ_{i=1…10} TP_i) / (Σ_{i=1…10} T_i).
The corresponding experimental results are shown in Fig. 2.
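The 10-fold nearest-neighbor evaluation protocol above can be sketched as follows; the fold assignment and random seed are illustrative choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def tenfold_accuracy(features, labels, n_folds=10, seed=0):
    """10-fold cross-validated 1-NN recognition accuracy.

    Each fold's test features are matched to their nearest training feature
    via a k-d tree; a match counts as a success when the labels agree.
    Accuracy = sum of per-fold successes / total number of test samples.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    folds = np.array_split(idx, n_folds)
    tp = total = 0
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        tree = cKDTree(features[train])          # training stage: build k-d tree
        _, nn = tree.query(features[test])       # test stage: nearest neighbor
        tp += int(np.sum(labels[train][nn] == labels[test]))
        total += len(test)
    return tp / total
```

The same routine applies unchanged to CSDM, GOOD or VFH feature vectors, since only the feature matrix differs.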
Embodiment 2: robustness evaluation of the inventive descriptor to noise
To assess the robustness of the proposed global point cloud descriptor to noise, 10 groups of noise experiments were designed:
Gaussian noise of different levels was added independently to each of the 3 coordinates of every point of the target point clouds in the dataset; the standard deviations of the corresponding noise were 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10, in units of mm. The corresponding results are shown in Fig. 3.
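The noise experiments amount to perturbing every coordinate independently with zero-mean Gaussian noise; a minimal sketch:

```python
import numpy as np

def add_gaussian_noise(points, sigma_mm, seed=None):
    """Add zero-mean Gaussian noise with standard deviation sigma_mm (in the
    cloud's units, here millimetres) independently to each of the three
    coordinates of every point, as in the noise experiments above."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=sigma_mm, size=points.shape)
```

Running the recognition pipeline on `add_gaussian_noise(cloud, s)` for s = 1 … 10 reproduces the structure of the 10 experiment groups.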
Embodiment 3: robustness evaluation of the inventive descriptor to point cloud density variation
To assess the robustness of the proposed global descriptor to point cloud density variation, 5 groups of experiments were designed: the VoxelGrid filter provided by the Point Cloud Library PCL (http://pointclouds.org/documentation/tutorials/voxel_grid.php) was used to down-sample the target point cloud to different degrees. The VoxelGrid filter achieves different degrees of down-sampling by controlling the voxel size; the 5 groups of experiments used voxel sizes of 1 mm, 5 mm, 10 mm, 15 mm and 20 mm respectively. The corresponding experimental results are shown in Fig. 4.
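As a plain-NumPy stand-in for PCL's VoxelGrid filter (which replaces all points falling into the same cubic voxel by their centroid), the down-sampling step can be sketched as:

```python
import numpy as np

def voxel_grid_downsample(points, voxel_size):
    """Down-sample a point cloud with a voxel grid.

    All points falling into the same cubic voxel of edge length voxel_size
    are replaced by their centroid, mimicking PCL's VoxelGrid behavior.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, points.shape[1]))
    for dim in range(points.shape[1]):
        # per-voxel centroid: sum of coordinates divided by point count
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```

Calling this with voxel_size = 1, 5, 10, 15 and 20 (in mm) reproduces the structure of the 5 experiment groups.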
Embodiment 4: computational efficiency evaluation of the descriptor
To assess the computational efficiency of the proposed global descriptor, 51 targets were randomly drawn from the dataset across different categories, and the average computation times of the three descriptors VFH, GOOD and CSDM were compared. The test computer ran Windows 7 with an i3 processor at a 3.4 GHz main frequency and 6 GB of RAM; the corresponding results are shown in Fig. 5.

Claims (10)

1. A global point cloud description method based on point cloud projected contour signatures and distribution matrices, characterized by comprising the following steps:
1) Target point cloud preprocessing: the target point cloud is preprocessed;
2) Target point cloud pose normalization: the target point cloud is aligned with a local reference frame;
3) Computing the point cloud projected contour signature: after normalization, the central point c of the point cloud and its spatial cuboid bounding box are computed; the target point cloud is projected onto three adjacent planes of the bounding box; on each projection plane, taking the projection of the central point c onto that plane as the origin, rays are cast outward at equal angular intervals; the intersection points of these ray clusters with the outer contour of the point cloud projection on the corresponding projection plane constitute the contour signature of the target point cloud on that projection plane;
4) Computing the point cloud projection distribution matrix: taking the maximal side length l of the cuboid bounding box obtained in step 3) as the side length, a new cube bounding box is obtained; three adjacent faces of the cube bounding box serve as projection planes, and each projection plane is evenly divided into an n × n grid; the number of point cloud projection points falling into each cell is counted, yielding the projection distribution matrix M of the point cloud on that projection plane;
5) Fusion of the point cloud projected contour signature and distribution matrix: the contour signature and the distribution matrix are normalized respectively, a concatenation rule is determined from the statistical features of the point cloud projections, and the sub-features of the target point cloud on the three projection planes are concatenated in histogram form to obtain the final CSDM descriptor.
2. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 1), preprocessing the target point cloud specifically means filtering it with a statistical outlier removal filter; the statistical outlier removal filter uses the standard implementation in the Point Cloud Library PCL.
3. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 2), the local reference frame LRF is established as follows:
2-1) Determining the LRF origin:
Given a target point cloud P = {p_1, …, p_m} containing m 3D points, the origin of the local coordinate frame is defined as the geometric center of the target point cloud:
c = (1/m) · Σ_{i=1…m} p_i
where p_i denotes the coordinates of a three-dimensional point in the target point cloud;
2-2) Determining the LRF coordinate axes:
The three coordinate axes are determined by Principal Component Analysis (PCA):
The normalized covariance matrix C can be expressed as:
C = (1/m) · Σ_{i=1…m} (p_i − c)(p_i − c)^T
By performing an eigenvalue decomposition of the covariance matrix C, the corresponding eigenvectors and eigenvalues are obtained:
CV = VE
where V = [v_1 v_2 v_3] is the matrix formed by the three eigenvectors and E = diag(λ_1, λ_2, λ_3) is the diagonal matrix of eigenvalues, with λ_1 ≥ λ_2 ≥ λ_3;
The LRF with the central point c of the target point cloud as origin is expressed as:
L_P = {x(P), y(P), x(P) × y(P)}
where x(P) and y(P) denote the X-axis and Y-axis of L_P, and × denotes the cross product; the X-axis and Y-axis are represented by the two larger eigenvectors v_1 and v_2 respectively, and the Z-axis by v_1 × v_2;
2-3) Disambiguating the LRF axis directions.
4. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 3, characterized in that the axis-direction disambiguation of the LRF proceeds as follows:
2-3-1) Direction disambiguation of the X-axis: the sign of the X-axis is chosen to agree with the majority of the points, i.e. x(P) = v_1 if the number of points with (p_i − c) · v_1 ≥ 0 is not smaller than the number of points with (p_i − c) · v_1 < 0, and x(P) = −v_1 otherwise;
where p_i denotes the i-th point of the target point cloud;
2-3-2) Direction disambiguation of the Y-axis: performed analogously with v_2;
After the X-axis and Y-axis are uniquely determined, the Z-axis is uniquely determined.
5. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 3), the center of the spatial cuboid bounding box is the geometric center of the target point cloud, and its length, width and height are respectively the differences between the maximal and minimal point cloud coordinates along the three coordinate axes.
6. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 3), the point cloud projected contour signature is obtained by the following steps:
3-1) Three adjacent faces of the bounding box serve as projection planes, and the target point cloud is projected onto each of them; on each projection plane, taking the projection of the bounding-box center onto that plane as the origin, rays are cast outward at angular interval Δ, yielding 360/Δ intersection points between the ray cluster and the two-dimensional bounding box of each projection;
3-2) Centered on each of these intersection points, a k-d tree search is performed with search radius r; if the number of neighborhood points found is no less than a threshold N_neighbor, the point is taken as the contour projection point on that ray; otherwise the intersection point is moved a distance d_Δ along the ray toward the LRF origin and the k-d tree search with radius r is repeated, until the number of neighborhood points found is no less than N_neighbor, at which point the corresponding search point is taken as the contour point on that ray; in this way 360/Δ contour points are finally obtained;
3-3) The distances from the contour points to the ray origin are stored in a vector V_cs, giving the contour projection signature on the corresponding projection plane.
7. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 4), the center of the spatial cube bounding box is the geometric center of the target point cloud, and its side length is the maximum of the three differences between the maximal and minimal point cloud coordinates along the three coordinate axes.
8. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 1, characterized in that: in step 5), the contour signatures and distribution matrices obtained in steps 3) and 4) are normalized respectively to give 6 sub-feature histograms, and these 6 sub-feature histograms are fused by concatenation; the concatenation rule is:
5-1) The projected density entropy PDE of the three projection planes is computed first, and the sub-feature histogram corresponding to the projection plane with the maximal PDE is placed in the first part of the CSDM descriptor;
5-2) The distribution variances of the remaining two projections are computed; the sub-feature corresponding to the projection with the larger distribution variance forms the middle section of the CSDM descriptor, and the sub-feature histogram of the remaining third projection is placed at the tail of the CSDM descriptor; within the sub-features corresponding to each projection plane, the rule is that the histogram corresponding to the contour projection signature comes first and the histogram corresponding to the distribution matrix comes after.
9. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 8, characterized in that: in step 5-1), the PDE corresponding to a target point cloud projection is computed with the following formula:
E = − Σ_{a=1…n} Σ_{b=1…n} M_ab · log(M_ab)
where E denotes the PDE, n denotes the number of horizontal and vertical divisions used when rasterizing the point cloud projection, M denotes the corresponding distribution matrix, and a, b denote the grid row and column indices respectively.
10. The global point cloud description method based on point cloud projected contour signatures and distribution matrices according to claim 8, characterized in that: in step 5-2), the distribution variance of the point cloud projection on a projection plane is computed as follows:
σ² = (1/n²) · Σ_{i=1…n²} (m_i − μ_m)²
where μ_m denotes the mean value, μ_m = (1/n²) · Σ_{i=1…n²} m_i; n denotes the number of horizontal and vertical divisions used when rasterizing the point cloud projection, and m_i denotes the probability value of the i-th grid cell.
CN201711226667.3A 2017-11-29 2017-11-29 Global point cloud description method based on point cloud projected contour signatures and distribution matrices Pending CN108256529A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711226667.3A CN108256529A (en) 2017-11-29 2017-11-29 Global point cloud description method based on point cloud projected contour signatures and distribution matrices


Publications (1)

Publication Number Publication Date
CN108256529A true CN108256529A (en) 2018-07-06

Family

ID=62721481


Country Status (1)

Country Link
CN (1) CN108256529A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871044A (en) * 2012-12-14 2014-06-18 阿里巴巴集团控股有限公司 Image signature generating method and image verifying method and device
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
CN106778768A (en) * 2016-11-22 2017-05-31 广西师范大学 Image scene classification method based on multi-feature fusion
CN106971199A (en) * 2017-03-10 2017-07-21 东华大学 A kind of automatic classification method of fabric three-dimensional draping shape
CN107122780A (en) * 2017-02-28 2017-09-01 青岛科技大学 The Activity recognition method of mutual information and spatial and temporal distributions entropy based on space-time characteristic point


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEYOUN PARK et al.: "Shape and appearance repair for incomplete point surfaces", IEEE *
WANG Peining: "Research on hole repair algorithms for 3D point cloud models", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685069A (en) * 2018-12-27 2019-04-26 乐山师范学院 Image detecting method, device and computer readable storage medium
CN109685069B (en) * 2018-12-27 2020-03-13 乐山师范学院 Image detection method, device and computer readable storage medium
CN111899152A (en) * 2019-05-06 2020-11-06 上海交通大学 Point cloud data compression method and system based on projection and video stitching
CN110569926A (en) * 2019-09-19 2019-12-13 北京工商大学 point cloud classification method based on local edge feature enhancement
CN111127622A (en) * 2019-11-25 2020-05-08 浙江大学 Three-dimensional point cloud outlier rejection method based on image segmentation
CN112001261A (en) * 2020-07-28 2020-11-27 哈尔滨工业大学(深圳) 3D feature extraction method, system and medium based on TOLDI descriptor
CN112381932A (en) * 2020-11-24 2021-02-19 光沦科技(深圳)有限公司 Three-dimensional point cloud model processing method and device based on context awareness
CN113313092A (en) * 2021-07-29 2021-08-27 太平金融科技服务(上海)有限公司深圳分公司 Handwritten signature recognition method, and claims settlement automation processing method, device and equipment
CN113313092B (en) * 2021-07-29 2021-10-29 太平金融科技服务(上海)有限公司深圳分公司 Handwritten signature recognition method, and claims settlement automation processing method, device and equipment
CN114463165A (en) * 2021-09-28 2022-05-10 西安大医集团股份有限公司 Method and device for determining VOI (volume of interest)
CN115830340A (en) * 2022-11-16 2023-03-21 白城师范学院 Point cloud target identification method and device and storage medium
CN115830340B (en) * 2022-11-16 2023-11-21 白城师范学院 Point cloud target identification method and device and storage medium

Similar Documents

Publication Publication Date Title
CN108256529A (en) Global point cloud description method based on point cloud projected contour signatures and distribution matrices
Zhu et al. Single image 3d object detection and pose estimation for grasping
Mundy Object recognition in the geometric era: A retrospective
Tombari et al. Object recognition in 3d scenes with occlusions and clutter by hough voting
Park et al. Fast and automatic object pose estimation for range images on the GPU
Rodrigues et al. 6D pose estimation of textureless shiny objects using random ferns for bin-picking
Costa et al. 3D object recognition and pose with relational indexing
US20140105506A1 (en) Recognition and pose determination of 3d objects in multimodal scenes
Gao et al. A stable and accurate marker-less augmented reality registration method
Rusu et al. Perception for mobile manipulation and grasping using active stereo
do Nascimento et al. On the development of a robust, fast and lightweight keypoint descriptor
Han et al. Line-based initialization method for mobile augmented reality in aircraft assembly
Jiang et al. Multi-class fruit classification using RGB-D data for indoor robots
Cupec et al. Object recognition based on convex hull alignment
US11816857B2 (en) Methods and apparatus for generating point cloud histograms
Andrade-Cetto et al. Object recognition
Marton et al. Part-Based Geometric Categorization and Object Reconstruction in Cluttered Table-Top Scenes: Paper Type: Categories (7) and (5)
Han et al. Sq-slam: Monocular semantic slam based on superquadric object representation
Wittrowski et al. 3d implicit shape models using ray based hough voting for furniture recognition
Yoruk et al. Efficient object localization and pose estimation with 3D wireframe models
Álvarez et al. Junction assisted 3d pose retrieval of untextured 3d models in monocular images
Lugo et al. Semi-supervised learning approach for localization and pose estimation of texture-less objects in cluttered scenes
Liang et al. Decorin: An automatic method for plane-based decorating
Asif et al. Model-free segmentation and grasp selection of unknown stacked objects
Joshi et al. A survey on training free 3D texture-less object recognition techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20220208