CN104715254A - Ordinary object recognizing method based on 2D and 3D SIFT feature fusion - Google Patents
- Publication number: CN104715254A (application CN201510117991.6A)
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention discloses a general object recognition method based on the fusion of 2D and 3D SIFT features, which aims to improve the accuracy of general object recognition. Building on the Scale-Invariant Feature Transform (2D SIFT), a 3D SIFT feature descriptor based on a point cloud model is proposed, and on this basis a general object recognition method fusing 2D and 3D SIFT features is presented. The method comprises the following steps: 1) extract the 2D and 3D feature descriptors of the object's two-dimensional image and three-dimensional point cloud; 2) obtain the object's feature vectors by means of a BoW (Bag of Words) model; 3) fuse the two feature vectors by feature-level fusion to obtain the object representation; 4) perform classification and recognition with a supervised classifier, a support vector machine (SVM), and output the final recognition result.
Description
Technical field
The present invention relates to a general object recognition method based on the fusion of 2D and 3D SIFT features, and belongs to the technical field of recognition methods.
Background technology
General object recognition is a hot topic of current research at home and abroad. It differs from specific object recognition, such as face recognition, which can be trained on massive training samples and handles only one object or one class of objects. General object recognition is much harder: it must use general features shared across object classes rather than features defined for one particular class, and those features must express both inter-class differences and intra-class commonality as far as possible; it must also handle multi-class classification and incremental learning, since massive samples of a given class are not available for training in advance.
The mainstream approach to general object recognition at present is to extract object features to build an object representation, use a machine learning algorithm to learn the object classes, and finally classify objects to realize recognition. General object recognition based on local image features has long been a research focus and is a relatively mature field, but recognition based on two-dimensional images mainly targets digitized grayscale images: it loses the three-dimensional information of the actual object and is easily affected by external conditions such as illumination. A point cloud model is an object model obtained by processing the object's depth image. Because depth information depends only on the geometric shape of the object and is unrelated to properties such as brightness and reflectance, there are no shadow or surface-projection problems as with grayscale images, so recognizing objects from their point cloud models is easier than from grayscale images.
When the target classes show large intra-class differences and high inter-class similarity, a single feature cannot reflect inter-class differences and intra-class commonality well. To address this problem, many researchers have proposed target recognition methods based on multi-feature fusion, which have been widely applied to aircraft target recognition, face recognition and object recognition.
General object recognition in real environments is an important part of artificial intelligence and plays an important role in intelligent monitoring, telemetry and remote sensing, robotics, medical image processing, and so on. Unlike specific object recognition, general objects in real environments come in many kinds, with high inter-class similarity and small intra-class differences, which makes general object recognition especially difficult. The prior art usually adopts two-dimensional features, but these suffer from a missing dimension when describing the local spatial characteristics of objects. Choosing suitable features that best represent inter-class differences and intra-class commonality is critical: only stable and effective features can achieve the best recognition results under limited training samples and improve the recognition rate.
Summary of the invention
Goal of the invention: to overcome the deficiencies in the prior art, the invention provides a general object recognition method based on the fusion of 2D and 3D SIFT features. By combining two-dimensional and three-dimensional features to fuse multiple kinds of object information, it effectively mitigates the low recognition rate of algorithms based on a single feature, and maintains a high recognition accuracy even when inter-class similarity is high and intra-class differences are small.
Technical scheme: to achieve the above goal, the technical solution adopted by the present invention is as follows:
The general object recognition method based on the fusion of 2D and 3D SIFT features comprises the following steps:
1) Feature extraction and representation:
For a sample object, extract its feature description from the object image and the object point cloud: first extract the 2D SIFT features of the object image to complete the image feature representation, then extract the 3D SIFT features of the object point cloud to complete the point-cloud feature representation; this yields the 2D and 3D SIFT feature descriptors of the sample object.
2) Object representation:
Use the KMeans++ clustering method to obtain the sample cluster centers, i.e. the corresponding visual word vocabulary, then use the BoW model to represent the object with a multi-dimensional vector, obtaining the corresponding 2D and 3D SIFT feature vectors of the sample object.
3) Feature fusion:
Use feature-level fusion to fuse the corresponding 2D and 3D SIFT feature vectors of the sample object, obtaining its serial fused feature vector.
4) Classifier design and training:
Use a support vector machine (SVM) to learn the target classes of the sample objects and realize target classification; train the classifiers to build a multi-class classifier.
5) Recognition of the object to be identified:
Input the serial fused feature vector of the object to be identified into the multi-class classifier trained in step 4) to obtain the probability that the object belongs to each class; the sample-object class corresponding to the largest probability is the recognition result for the object to be identified.
Further, in the present invention, the method of extracting the 3D SIFT features of the object point cloud comprises the following steps:
1-1) Keypoint detection:
A point in the point cloud model of the object is written P(x, y, z). To achieve scale invariance, the scale space of the 3D point cloud is defined as L(x, y, z, σ):
L(x, y, z, σ) = G(x, y, z, σ) * P(x, y, z)   (1)
where σ is the scale-space factor and the variable-scale three-dimensional Gaussian kernel is:
G(x, y, z, σ) = (1 / ((2π)^(3/2) σ^3)) e^(-(x^2 + y^2 + z^2) / (2σ^2))   (2)
Different scales are obtained with the multiplication factor k_i; if each octave of the pyramid contains s layers, then k^s = 2. A point-cloud Gaussian pyramid is built, and extrema of the difference-of-Gaussian (DoG) function are detected; the DoG extrema are the keypoints. The DoG operator is computed as:
D(x, y, z, k_i σ) = L(x, y, z, k_(i+1) σ) - L(x, y, z, k_i σ)   (3)
where i ∈ [0, s+2];
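As a small illustration of the scale construction above, the multiplication factor and the per-octave scale list can be sketched as follows (a sketch: the base scale σ0 = 1.6 and s = 3 are assumed values not given in the text; following formula (3), the DoG index i runs over [0, s+2]):

```python
s = 3                      # assumed layers per octave, so that k**s == 2
k = 2.0 ** (1.0 / s)       # multiplication factor: scale i uses k**i * sigma0
sigma0 = 1.6               # assumed base scale (not specified in the text)

# i in [0, s+2] gives s+3 DoG layers, needing s+4 Gaussian layers per octave
sigmas = [sigma0 * k ** i for i in range(s + 4)]
dog_pairs = [(sigmas[i + 1], sigmas[i]) for i in range(s + 3)]
```

Each pair in `dog_pairs` corresponds to one subtraction D = L(k_(i+1)σ) - L(k_iσ) of formula (3).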
1-2) Keypoint orientation assignment:
For each detected keypoint, a vector describing the local features at that keypoint must be computed; this vector is called the descriptor of the keypoint. To make the descriptor rotation invariant, the local features of the point cloud are used to assign a reference orientation to each keypoint, as follows:
1-2-1) Compute the k-neighborhood of keypoint P; the neighborhood points are denoted P_ki, i = {1, 2, ..., n}, where i indexes the neighborhood points and n is their number;
1-2-2) Compute the centroid P_c of the k-neighborhood of keypoint P;
1-2-3) Compute the vectors formed by the keypoint, its neighborhood points and the centroid P_c, and obtain for each the vector magnitude d and two angles (a polar angle over 180° and an azimuth over 360°) by formula (4):
d = sqrt(x^2 + y^2 + z^2), θ = arccos(z / d), φ = arctan(y / x)   (4)
where (x, y, z) are the vector's coordinates;
1-2-4) Using the magnitude d and the angles computed in step 1-2-3), build direction histograms over the k-neighborhood: the two angle ranges are divided into 18 sub-intervals (bins) and 36 sub-intervals respectively, each bin spanning 10°; the magnitude d serves as the weight, and a Gaussian weighting is applied when accumulating the angle statistics, where R_max denotes the maximum radius of the keypoint neighborhood and points beyond this distance are ignored;
1-2-5) The histogram peaks represent the direction of the keypoint neighborhood and are taken as the principal direction of the keypoint; to strengthen matching robustness, only directions whose peak exceeds 80% of the principal peak are retained as auxiliary directions of the keypoint; the principal direction is denoted (α, β).
1-3) Keypoint feature description:
The feature descriptor of a keypoint is generated as follows:
1-3-1) Compute the k-neighborhood of keypoint P; the neighborhood points are denoted P_ki, i = {1, 2, ..., n}, where n is the number of neighborhood points; the neighborhood selection range is the same as in the keypoint orientation assignment;
1-3-2) Rotate the X-axis of the histogram to the principal direction of the keypoint to ensure rotation invariance; the neighborhood-point coordinate transformation is the rotation determined by the principal direction (α, β), where (x, y, z) and (x', y', z') are the coordinates of a neighborhood point before and after rotation;
1-3-3) Compute the normal vector of the k-neighborhood of keypoint P at the point P;
1-3-4) Compute the vector from P to each neighborhood point; use formula (4) to compute its magnitude and two angles, and at the same time compute the angle δ between the normal vector and this vector as the arccosine of their normalized dot product;
1-3-5) The features of the keypoint and its neighborhood are represented by the quadruple (d, α, β, δ); using 45° bins, the three angles are divided into 8, 4 and 4 sub-intervals respectively, and the number of points falling into each sub-interval is counted; the magnitude d serves as the weight and a Gaussian weighting is applied during counting; this yields a feature vector of 8 × 4 × 4 = 128 dimensions, F = {f_1, f_2, ..., f_128};
1-3-6) Normalize the feature vector: after normalization, F = {f_1, f_2, ..., f_128} becomes L = {l_1, l_2, ..., l_128}, where l_j = f_j / sqrt(f_1^2 + f_2^2 + ... + f_128^2);
So far, the 3D SIFT feature descriptor of the keypoint has been generated.
Further, in the present invention, the concrete method of object representation in step 2) is:
Using the KMeans++ clustering method, obtain the visual word vocabulary corresponding to the sample cluster centers, denoted center = {center_l}, l = 1, 2, ..., k, where k is the number of cluster centers and center_l is the l-th visual word in the vocabulary; then use the BoW model to represent the object with a multi-dimensional vector.
Further, in the present invention, in step 4), the method of target classification is: build the multi-class classifier by training several binary classifiers. The concrete training process is as follows: the i-th class of training samples is paired in turn with each of the remaining n-1 classes for pairwise SVM training, yielding multiple one-versus-one (1V1) SVM classifiers; for n classes of training samples there are n(n-1)/2 such 1V1 SVM classifiers in total.
Further, in the present invention, in step 1), the method of obtaining the DoG extrema, i.e. the keypoints, is:
Each point P(x, y, z) in the point cloud model of the object is compared with all of its neighboring points to determine whether it is the maximum or minimum within that neighborhood; a candidate point is compared not only with the 26 points at the same scale, but also with the 27 × 2 corresponding points at the adjacent scales above and below, and the extrema so detected are the keypoints. A threshold τ = 1.0 is set: keypoints below this threshold are low-contrast keypoints and are rejected.
Further, in the present invention, in step 3), for a sample O_ξ ∈ O, where O is the sample space, the corresponding 2D and 3D SIFT feature vectors of O_ξ are Vec_2D and Vec_3D respectively; the serial fused feature vector of O_ξ is Vec_3D2D = (Vec_3D, Vec_2D)^T, and this serial fused feature vector is used to realize the object representation.
Further, in the present invention, in step 5), the concrete method of obtaining the recognition result of the object to be identified is:
5-1) Extract the 2D and 3D SIFT features of the object to be identified, obtaining its 2D and 3D SIFT feature descriptors; use the BoW model to count its feature-vector distributions, expressed as Vec_2D and Vec_3D;
5-2) Perform feature-level fusion of the two feature vectors of the object to be identified, forming the new serial fused feature vector Vec_3D2D = (Vec_3D, Vec_2D)^T to realize the object representation;
5-3) Input the serial fused feature vector into the trained 1V1 SVM multi-class classifier; each discriminant function gives a corresponding decision, and voting yields the probability that the object belongs to the i-th class, denoted P(i), i ∈ [1, n], where n is the total number of object classes;
5-4) The class of the object to be identified is determined by the maximum probability value: class = argmax_{i ∈ [1, n]} P(i).
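The voting of steps 5-3) and 5-4) can be sketched as follows (a minimal illustration; the pairwise decision functions are hypothetical stubs standing in for trained 1V1 SVMs):

```python
def ovo_predict(sample, classifiers, n_classes):
    """Vote among the n(n-1)/2 pairwise 1V1 classifiers; P(i) is the share
    of votes class i receives, and the result is the argmax over P(i)."""
    votes = [0] * n_classes
    for (i, j), decide in classifiers.items():
        winner = i if decide(sample) >= 0 else j   # sign decides the pair
        votes[winner] += 1
    total = sum(votes)
    probs = [v / total for v in votes]
    return max(range(n_classes), key=probs.__getitem__), probs

# toy example: n = 3 classes -> 3 pairwise classifiers with fixed outcomes
classifiers = {
    (0, 1): lambda x: +1.0,   # class 0 beats class 1
    (0, 2): lambda x: -1.0,   # class 2 beats class 0
    (1, 2): lambda x: -1.0,   # class 2 beats class 1
}
label, probs = ovo_predict(None, classifiers, 3)   # class 2 wins 2 of 3 votes
```

Here n(n-1)/2 = 3 classifiers vote, and class 2 is returned with P(2) = 2/3.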
Beneficial effects: the general object recognition method based on the fusion of 2D and 3D SIFT features provided by the invention extracts, for the two-dimensional image and three-dimensional point cloud of an arbitrary object, its local 2D and 3D SIFT descriptors as the object's feature representation; obtains the object feature vectors with the "bag of words" (Bag of Words, BoW) model; then uses feature-level fusion to merge the BoW feature vectors corresponding to the 2D and 3D SIFT features into the object representation; and finally uses a support vector machine (Support Vector Machine, SVM) to realize object recognition. The 3D SIFT feature descriptor proposed by the invention describes the local spatial characteristics of objects well, effectively remedying what two-dimensional features lack in this respect. Fusing 2D and 3D SIFT features compensates for the shortcomings of single-feature recognition algorithms, characterizes object properties more richly, and significantly improves the correct recognition rate of general object recognition.
This method addresses the difficulty of extracting and representing object features for general object recognition from two directions. First, in view of the problems of object recognition based on two-dimensional images, the rapid development of three-dimensional point cloud models, and the good performance of 3D SIFT features in voxel-model-based object recognition, this method extends 2D SIFT to the object's three-dimensional point cloud model and proposes a general object recognition method based on a 3D SIFT descriptor. Second, to overcome the inability of a single feature to represent object properties well, and drawing on the excellent performance of 2D SIFT in image recognition, this method builds on the proposed 3D SIFT algorithm to present a general object recognition method based on the fusion of 2D and 3D SIFT features. In summary, the novelty of the method lies in:
(1) improving the 3D SIFT feature descriptor and applying it to point-cloud-model feature representation: local feature histograms of the point cloud are computed, and the normal vector, which is vital to describing local characteristics, is added to the point cloud model, realizing feature extraction and representation for object point cloud models;
(2) applying the improved 3D SIFT to general object recognition, realizing a general object recognition function;
(3) performing feature-level fusion of 2D and 3D SIFT features, realizing a general object recognition algorithm based on multi-feature fusion and solving the low recognition rate of single features.
Brief description of the drawings
Fig. 1 is a schematic diagram of the framework of the general object recognition method based on the fusion of 2D and 3D SIFT features according to the present invention;
Fig. 2 is a schematic flowchart of the general object recognition method based on the fusion of 2D and 3D SIFT features according to the present invention;
Fig. 3 shows the correct recognition rates of different feature-fusion methods over multiple object classes;
Fig. 4 shows the correct recognition rate for each object class;
Fig. 5 shows the correct recognition rates under multiple viewing angles;
Fig. 6 shows the correct recognition rates under size scaling.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
The framework of the proposed general object recognition method based on the fusion of 2D and 3D SIFT features is shown in Fig. 1: first extract object features and establish the description of the general object, then use machine learning methods to learn the object classes, and finally identify unknown objects by the known object classes. Through earlier sample training and learning, machine vision techniques can detect and segment the observed environment in relatively simple settings and, when observing a new object belonging to a known class, give the corresponding recognition result. The algorithm framework given in Fig. 1 mainly comprises the following four aspects:
1) Feature extraction and representation: extract the 3D and 2D SIFT features corresponding to the object point cloud and image, realizing the object feature representation;
2) Object BoW model: use the classical statistical BoW (Bag of Words) model to obtain the BoW feature vectors corresponding to the object's 3D and 2D SIFT features;
3) Feature fusion: perform feature-level fusion of the BoW feature vectors corresponding to 3D and 2D SIFT, realizing the object representation;
4) Object class learning and classification: for multiple object classes, train pairwise 1V1 SVMs; during recognition, voting gives the probability that the object to be identified belongs to the i-th class, and the final recognition result is given according to this probability distribution.
Embodiment 1: recognition algorithm framework
The general object recognition method based on the fusion of 2D and 3D SIFT features mainly comprises the following steps:
1) Feature extraction and representation:
Feature extraction and representation are the basis of object recognition, and how to extract stable and effective features is the focus and difficulty of feature-extraction research: only well-chosen features can achieve the best recognition results under limited training samples. General objects are too numerous to build a model library for each one, and the shapes and colors of objects within each class also vary greatly, so the extracted object features must satisfy the following conditions: 1) maximize inter-class differences, i.e. characterize what distinguishes each class of objects from the others; 2) minimize intra-class differences, i.e. characterize the features common to each class of objects. This requires abstracting and reasonably expressing each class of objects at a certain semantic level, so that a limited number of training objects can characterize the class. The present invention proposes a 3D SIFT feature based on the point cloud model and uses it, together with the 2D SIFT feature of the image, as the object features to realize object recognition, as follows:
A) 2D SIFT feature extraction
The scale space is generated by convolving the image with Gaussian kernels of different scales, and the local extrema detected in the difference-of-Gaussian (DoG) scale space are taken as keypoints. The DoG operator is computed as follows:
D(x, y, σ) = L(x, y, kσ) - L(x, y, σ)   (1-1)
L(x, y, σ) = G(x, y, σ) * I(x, y)   (1-2)
where L denotes the scale space, I(x, y) is the pixel value of the image at (x, y), and σ is the scale-space factor: the smaller its value, the less the image is smoothed and the smaller the corresponding scale. The two-dimensional Gaussian kernel is:
G(x, y, σ) = (1 / (2πσ^2)) e^(-(x^2 + y^2) / (2σ^2))   (1-3)
Because the DoG operator produces strong edge responses, low-contrast keypoints and unstable edge-response points must be rejected to strengthen the stability of recognition and the resistance to noise. A threshold τ = 0.02 is set, and every keypoint below this threshold is rejected. A 2 × 2 Hessian matrix is then used to reject edge points, since even very small noise can make them produce unstable descriptors.
The gradient-direction distribution of the pixels in the keypoint neighborhood is used to assign each keypoint a principal direction and auxiliary directions; the gradient magnitude and direction are obtained by formula (1-4). The neighborhood of each keypoint is divided into 4 × 4 subregions, the gradients and directions of the sampled points affecting each subregion are computed and assigned to 8 directions, so that each keypoint forms a 128-dimensional feature vector.
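The gradient computation referred to as formula (1-4) is not reproduced in the text; a standard SIFT reading uses central differences, which can be sketched on a toy 3 × 3 image (border pixels are left at zero):

```python
import numpy as np

def grad_mag_ori(L):
    """Gradient magnitude m and orientation theta of a smoothed image L,
    via the central differences of the standard SIFT formulation."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x+1, y) - L(x-1, y)
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)           # orientation in (-pi, pi]
    return m, theta

L = np.array([[0.0, 0.0, 0.0],
              [1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0]])
m, theta = grad_mag_ori(L)               # at the centre: m = 2, theta = pi/2
```

These per-pixel magnitudes and orientations are what get binned into the 4 × 4 × 8 keypoint histogram.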
B) 3D SIFT feature extraction
Because two-dimensional images lose important three-dimensional information and are easily affected by external conditions such as illumination, the present invention extends SIFT to 3D SIFT, inheriting the above properties of 2D SIFT while, thanks to the added depth information, allowing the 3D SIFT descriptor to describe the local spatial relations of an object more accurately. The main steps of the proposed 3D SIFT feature extraction algorithm are keypoint detection, keypoint orientation assignment, and keypoint feature description, as follows:
1-1) Keypoint detection:
A point in the point cloud model of the object is written P(x, y, z). To achieve scale invariance, the scale space of the 3D point cloud is defined as L(x, y, z, σ), obtained by convolving a variable-scale Gaussian kernel G(x, y, z, σ) with the input point cloud P(x, y, z):
L(x, y, z, σ) = G(x, y, z, σ) * P(x, y, z)   (1-5)
where σ is the scale-space factor and the three-dimensional Gaussian kernel is:
G(x, y, z, σ) = (1 / ((2π)^(3/2) σ^3)) e^(-(x^2 + y^2 + z^2) / (2σ^2))   (1-6)
Different scales are obtained with the multiplication factor k_i; if each octave of the pyramid contains s layers, then k^s = 2. A point-cloud Gaussian pyramid is built, and the scale-normalized Laplacian of Gaussian is replaced by the more efficient difference-of-Gaussian (DoG) function for extremum detection; the DoG extrema are the keypoints. The DoG operator is computed as:
D(x, y, z, k_i σ) = L(x, y, z, k_(i+1) σ) - L(x, y, z, k_i σ)   (1-7)
where i ∈ [0, s+2].
The keypoints consist of the local extrema of the DoG space. They are obtained as follows: each point P(x, y, z) in the point cloud model of the object is compared with all of its neighboring points to determine whether it is the maximum or minimum within that neighborhood; a candidate point is compared not only with the 26 points at the same scale, but also with the 27 × 2 corresponding points at the adjacent scales, and the extrema so detected are the keypoints. A threshold τ = 1.0 is set: keypoints below this threshold are low-contrast keypoints and are rejected. The method proposed by Rusu et al. in "Towards 3D object maps for autonomous household robots" (Rusu R B, Blodow N, Marton Z, Soos A, Beetz M. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems. San Diego, CA: IEEE, 2007. 3191-3198) is used to judge whether a keypoint is a boundary point; if so, it is rejected.
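The detection step above can be sketched as follows (a sketch under assumptions: the point cloud is rasterized to a 3-D grid, which the patent does not prescribe, and the Gaussian of formula (1-6) is approximated by a simple separable kernel):

```python
import numpy as np

def gaussian_blur_3d(vol, sigma):
    """Separable 3-D Gaussian smoothing, standing in for G(x,y,z,sigma) * P."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    for ax in range(3):
        vol = np.apply_along_axis(np.convolve, ax, vol, k, mode="same")
    return vol

def is_extremum(dog, i, x, y, z):
    """Compare a voxel with its 26 same-scale neighbours and the 27 voxels
    in each of the two adjacent DoG scales, as described above."""
    patch = dog[i - 1:i + 2, x - 1:x + 2, y - 1:y + 2, z - 1:z + 2]
    v = dog[i, x, y, z]
    return v == patch.max() or v == patch.min()

# one DoG layer of a toy impulse "cloud" (formula 1-7 with two scales)
P = np.zeros((9, 9, 9)); P[4, 4, 4] = 1.0
D = gaussian_blur_3d(P, 2.0 ** (1 / 3)) - gaussian_blur_3d(P, 1.0)

# deterministic DoG stack with a single synthetic peak at scale 1
dog = np.arange(3 * 5 * 5 * 5, dtype=float).reshape(3, 5, 5, 5) * 1e-3
dog[1, 2, 2, 2] = 10.0
```

In a full implementation, voxels passing `is_extremum` would additionally be filtered by the contrast threshold τ = 1.0 and the boundary-point test of Rusu et al.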
1-2) Keypoint orientation assignment:
For each detected keypoint, a vector describing the local features at that keypoint must be computed; this vector is called the descriptor of the keypoint. To make the descriptor rotation invariant, the local features of the point cloud are used to assign a reference orientation to each keypoint, as follows:
1-2-1) Compute the k-neighborhood of keypoint P; the neighborhood points are denoted P_ki, i = {1, 2, ..., n}, where i indexes the neighborhood points and n is their number;
1-2-2) Compute the centroid P_c of the k-neighborhood of keypoint P;
1-2-3) Compute the vectors formed by the keypoint, its neighborhood points and the centroid P_c, and obtain for each the vector magnitude d and two angles (a polar angle over 180° and an azimuth over 360°) by formula (1-8):
d = sqrt(x^2 + y^2 + z^2), θ = arccos(z / d), φ = arctan(y / x)   (1-8)
where (x, y, z) are the vector's coordinates;
1-2-4) Using the magnitude d and the angles computed in step 1-2-3), build direction histograms over the k-neighborhood: the two angle ranges are divided into 18 sub-intervals (bins) and 36 sub-intervals respectively, each bin spanning 10°; the magnitude d serves as the weight, and a Gaussian weighting is applied when accumulating the angle statistics, where R_max denotes the maximum radius of the keypoint neighborhood and points beyond this distance are ignored;
1-2-5) The histogram peaks represent the direction of the keypoint neighborhood and are taken as the principal direction of the keypoint; to strengthen matching robustness, only directions whose peak exceeds 80% of the principal peak are retained as auxiliary directions of the keypoint; the principal direction is denoted (α, β).
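Steps 1-2-1) through 1-2-5) can be sketched as follows (a sketch under stated assumptions: the vectors are taken from the keypoint to each neighbourhood point, the angle conventions are a standard reading of formula (1-8), and the Gaussian weight's spread of R_max/3 is a guess, since the exact weighting formula is not reproduced in the text):

```python
import numpy as np

def spherical_angles(v):
    """Magnitude d and two angles of v: azimuth in [0, 360) degrees and
    polar angle in [0, 180] degrees (assumed reading of formula 1-8)."""
    x, y, z = v
    d = float(np.sqrt(x * x + y * y + z * z))
    azim = float(np.degrees(np.arctan2(y, x))) % 360.0
    polar = float(np.degrees(np.arccos(z / d))) if d > 0 else 0.0
    return d, azim, polar

def principal_direction(P, neighbors, r_max):
    """Accumulate d-weighted, Gaussian-weighted angle histograms over the
    k-neighbourhood (18 polar bins, 36 azimuth bins, 10 degrees each) and
    return the peak directions (alpha, beta)."""
    polar_hist = np.zeros(18)
    azim_hist = np.zeros(36)
    for q in neighbors:
        d, azim, polar = spherical_angles(np.subtract(q, P))
        if d == 0 or d > r_max:
            continue                         # ignore points beyond R_max
        w = d * np.exp(-d ** 2 / (2 * (r_max / 3.0) ** 2))  # assumed weight
        polar_hist[min(int(polar // 10), 17)] += w
        azim_hist[int(azim // 10) % 36] += w
    return 10 * int(polar_hist.argmax()), 10 * int(azim_hist.argmax())

# toy neighbourhood lying along +x: polar angle 90 deg, azimuth 0 deg
alpha, beta = principal_direction((0, 0, 0), [(1, 0, 0), (2, 0, 0)], 3.0)
```

The returned bin-start angles stand in for the principal direction (α, β); a full implementation would also keep auxiliary directions above 80% of the peak.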
So far, the detected keypoints, each carrying a position, scale and direction, are the 3D SIFT feature points of the point cloud.
1-3) Keypoint feature description:
Through the above steps, each keypoint has three pieces of information: position, scale and direction. The next task is to establish a descriptor for each keypoint, describing it with a group of vectors so that it is invariant to various changes, such as illumination changes and viewpoint changes; the descriptor covers not only the keypoint itself but also the surrounding points that contribute to it, and it should be highly distinctive so as to raise the probability of correct keypoint matching.
The 2D SIFT descriptor is a representation of the gradient statistics of the Gaussian image in the keypoint neighborhood; for a three-dimensional point cloud model, the local spatial relations of the keypoint neighborhood are counted instead: the angle histograms within the neighborhood are computed to generate the 3D SIFT feature vector that uniquely expresses the point cloud. The surface normal is an important attribute of a solid surface, and the normal-vector distribution can represent the 3D geometric characteristics of the surface, so when computing the 3D SIFT feature vector the present invention adds the normal vector to the vectors computed in step 1-2), expressing the local spatial features of the object more comprehensively.
The feature descriptor of a keypoint is generated as follows:
1-3-1) Compute the k-neighborhood of keypoint P; the neighborhood points are denoted P_ki, i = {1, 2, ..., n}, where n is the number of neighborhood points; the neighborhood selection range is the same as in the keypoint orientation assignment;
1-3-2) Rotate the X-axis of the histogram to the principal direction of the keypoint to ensure rotation invariance; the neighborhood-point coordinate transformation is the rotation determined by the principal direction (α, β), where (x, y, z) and (x', y', z') are the coordinates of a neighborhood point before and after rotation;
1-3-3) Compute the normal vector of the k-neighborhood of keypoint P at the point P;
1-3-4) Compute the vector from P to each neighborhood point; use formula (1-8) to compute its magnitude and two angles, and at the same time compute the angle δ between the normal vector and this vector as the arccosine of their normalized dot product;
1-3-5) The features of the keypoint and its neighborhood are represented by the quadruple (d, α, β, δ); using 45° bins, the three angles are divided into 8, 4 and 4 sub-intervals respectively, and the number of points falling into each sub-interval is counted; the magnitude d serves as the weight and a Gaussian weighting is applied during counting; this yields a feature vector of 8 × 4 × 4 = 128 dimensions, F = {f_1, f_2, ..., f_128};
1-3-6) Normalize the feature vector: after normalization, F = {f_1, f_2, ..., f_128} becomes L = {l_1, l_2, ..., l_128}, where l_j = f_j / sqrt(f_1^2 + f_2^2 + ... + f_128^2);
So far, the 3D SIFT feature descriptor of the keypoint has been generated.
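Steps 1-3-5) and 1-3-6) can be sketched as follows (a minimal sketch; the assignment of the 360° range to 8 bins and the two 180° ranges to 4 bins each is an assumed reading, and the Gaussian weight is omitted for brevity):

```python
import numpy as np

def descriptor_from_quads(quads):
    """Accumulate (d, alpha, beta, delta) quadruples into an
    8 x 4 x 4 = 128-bin histogram of 45-degree sub-intervals, weighted by
    the magnitude d, then L2-normalize as in step 1-3-6)."""
    H = np.zeros((8, 4, 4))
    for d, alpha, beta, delta in quads:
        ia = int(alpha // 45) % 8          # alpha assumed in [0, 360)
        ib = min(int(beta // 45), 3)       # beta assumed in [0, 180]
        idl = min(int(delta // 45), 3)     # delta assumed in [0, 180]
        H[ia, ib, idl] += d
    F = H.ravel()                          # F = {f_1, ..., f_128}
    n = np.linalg.norm(F)
    return F / n if n > 0 else F           # L = {l_1, ..., l_128}

# two toy quadruples falling into the same bin (ia=0, ib=1, idl=2)
L_vec = descriptor_from_quads([(1.0, 10.0, 50.0, 100.0),
                               (2.0, 10.0, 50.0, 100.0)])
```

The result is a unit-length 128-dimensional vector, matching the dimensionality stated above.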
2) Object representation:
The present invention adopts classical BoW (Bag of Words) modeling statistics object features vector distribution, realizes Object representation with a multi-C vector; Being different from classical BoW model utilizes KMeans to carry out cluster, and the present invention utilizes KMeans++ clustering algorithm to obtain object vision word storehouse.Compared with KMeans clustering algorithm, KMeans++ clustering algorithm improves initial cluster center, makes algorithm in cluster result accuracy or all there is lifting working time.First utilize the method for KMeans++ cluster to obtain namely corresponding vision word storehouse, sample clustering center, recycling BoW model, adopts multi-C vector to carry out Object representation, obtains 2D and the 3D SIFT feature vector of the correspondence of sample object;
The concrete method of object representation is:
Use the KMeans++ clustering method to obtain the sample cluster centers, i.e. the corresponding visual word vocabulary, denoted center = {center_l, l = 1, 2, ..., k}, where k is the number of cluster centers and center_l is the l-th visual word in the vocabulary; then use the BoW model method to represent the object with a multi-dimensional vector.
The multi-dimensional vector is computed as follows: count the number of times each visual word of the vocabulary occurs in the 2D and 3D SIFT feature vectors of the sample object, recorded as (y_0, y_1, ..., y_{k-2}, y_{k-1}), where y_l, the occurrence count of visual word center_l, describes one dimension of the multi-dimensional vector of the object. The occurrence counts are obtained as follows: compute the distances from the 2D and 3D SIFT feature vectors of the sample object to the centers, and for the center center_l at minimum distance, increment the corresponding count y_l by 1.
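The occurrence-count statistic above can be sketched as a nearest-center quantization followed by a histogram. A minimal NumPy version, with toy 2-D "descriptors" and a 3-word vocabulary standing in for the real 128-D descriptors (function and variable names are illustrative):

```python
import numpy as np

def bow_histogram(descriptors, centers):
    """Quantize each descriptor to its nearest visual word (cluster center)
    and count occurrences, giving the (y_0 ... y_{k-1}) object vector."""
    # squared Euclidean distances, shape (num_descriptors, k)
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                 # index of the closest word
    return np.bincount(nearest, minlength=len(centers)).astype(float)

# toy data: four 2-D "descriptors", a vocabulary of k = 3 visual words
desc = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [9.0, 9.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
print(bow_histogram(desc, centers))             # occurrence count per word
```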
The basic idea behind the choice of KMeans++ initial cluster centers is that the initial centers should be as far apart from one another as possible. The initial centers are selected as follows:
2-1) Denote the set of cluster centers by center. From the input vector set X = {x_1, x_2, x_3, ..., x_{n-1}, x_n}, randomly select a vector x_i ∈ X as the first cluster center;
2-2) For every vector in {x_j | x_j ∈ X \ center}, compute the squared distance D(x_j)² between it and the nearest cluster center (i.e., the nearest center already selected);
2-3) Select a vector as the new cluster center: the probability P(x_j) of each vector being chosen is computed by formula (1-12), and the vector with maximal P(x_j) becomes the new cluster center;
2-4) Repeat steps 2-2) and 2-3) until K initial cluster centers have been selected.
After the K initial cluster centers are obtained, the standard KMeans algorithm is run. Different values of K were compared experimentally; the embodiment of the present invention chooses K = 300.
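Steps 2-1) to 2-4) can be sketched in a few lines of Python. The helper name and toy data are illustrative only; since P(x_j) is proportional to D(x_j)², the text's rule of picking the vector with maximal P(x_j) is implemented here as a farthest-point choice (canonical KMeans++ instead samples randomly with that probability), and the "random" first center is fixed for reproducibility:

```python
def kmeanspp_init(X, k):
    """Initial-center selection sketched from steps 2-1) to 2-4): the first
    center is picked from X (randomly in the patent; fixed here so the toy
    run is reproducible); each further center is the vector whose squared
    distance D(x)^2 to its nearest chosen center is largest."""
    centers = [X[0]]
    while len(centers) < k:
        # D(x)^2: squared distance to the nearest already-selected center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(x, c)) for c in centers)
              for x in X]
        centers.append(X[max(range(len(X)), key=d2.__getitem__)])
    return centers

pts = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (10.0, 10.0)]
print(kmeanspp_init(pts, 2))  # the second center is the farthest vector
```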
3) Feature fusion:
Feature fusion approaches mainly include pixel-level fusion, feature-level fusion, and decision-level fusion. Feature-level fusion fuses the extracted feature vectors and enriches the features of the target object; compared with pixel-level fusion, which must process a huge volume of data, its recognition performance drops slightly, but the data volume is reduced greatly, enabling real-time processing. On the other hand, feature-level fusion retains the effective information that characterizes the essence of the object, richer than what decision-level fusion preserves. However, directly fusing different feature descriptors of an object is hard to handle, because the numbers of the different descriptors differ. Therefore, in this method, the feature descriptors are first summarized with the BoW model into a multi-dimensional feature vector, and feature-level fusion is then carried out, which effectively solves this problem.
Feature-level fusion is used to realize multi-feature general object recognition. For a sample O_ξ ∈ O, where O is the sample space, let Vec_2D and Vec_3D be the 2D and 3D SIFT feature vectors corresponding to sample O_ξ; feature-level fusion yields the serial fused feature vector of sample O_ξ, Vec_3D2D = (Vec_3D, Vec_2D)^T, and this serial fused feature vector is used to represent the object.
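Under the assumption that Vec_2D and Vec_3D are the k-dimensional BoW histograms described above, the serial fusion Vec_3D2D = (Vec_3D, Vec_2D)^T is a plain concatenation; a toy sketch with k = 3:

```python
import numpy as np

# Serial feature-level fusion: the two BoW vectors are concatenated into
# one vector whose dimensionality is the sum of the two parts.
Vec_3D = np.array([4.0, 1.0, 0.0])    # hypothetical 3D SIFT BoW vector
Vec_2D = np.array([2.0, 0.0, 3.0])    # hypothetical 2D SIFT BoW vector
Vec_3D2D = np.concatenate([Vec_3D, Vec_2D])
print(Vec_3D2D)                        # 6-D fused representation
```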
4) Classifier design and training:
After the object description is complete, a support vector machine (SVM) is used to learn the target classes of the samples and realize target classification, and the classifier is trained to build a multi-class classifier. SVM is a well-performing supervised, discriminative machine learning method: through early-stage offline training on a limited set of samples, it seeks a compromise between the complexity and the learning ability of the model, and finally obtains a discriminant function.
SVM is a typical binary classifier, whereas what more often needs to be solved is a multi-class classification problem; this method solves the above problem by training multiple binary classifiers to build a multi-class classifier. The target classification method is: build the multi-class classifier by training several binary classifiers, the concrete training process being as follows: carry out pairwise SVM training between the i-th class of training samples and each of the remaining n−1 classes, obtaining multiple 1-vs-1 SVM classifiers; the n classes of training samples then have n(n−1)/2 1-vs-1 SVM classifiers.
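The 1-vs-1 voting scheme can be sketched independently of any SVM library; here a trivial nearest-mean rule stands in for the trained pairwise SVM classifiers (all names and toy numbers are illustrative):

```python
from itertools import combinations

def ovo_vote(pair_predict, classes, x):
    """One-vs-one voting: each of the n(n-1)/2 pairwise classifiers casts a
    vote for one of its two classes; the class with most votes wins."""
    votes = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):
        votes[pair_predict(i, j, x)] += 1      # pair_predict returns i or j
    n = len(classes)
    assert sum(votes.values()) == n * (n - 1) // 2
    return max(classes, key=votes.get), votes

# toy pairwise rule standing in for trained 1-vs-1 SVMs: nearest class mean
means = {0: -1.0, 1: 0.0, 2: 5.0}
predict = lambda i, j, x: i if abs(x - means[i]) <= abs(x - means[j]) else j
label, votes = ovo_vote(predict, [0, 1, 2], 4.0)
print(label, votes)
```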
5) The recognition method for general objects based on multi-feature fusion is as follows:
5-1) Extract the 2D and 3D SIFT features of the object to be recognized, obtaining its 2D and 3D SIFT feature descriptors; use the BoW model to count the feature vector distribution of the object to be recognized, expressed as Vec_2D and Vec_3D;
5-2) Perform feature-level fusion on the two feature vectors of the object to be recognized, forming the new serial fused feature vector Vec_3D2D = (Vec_3D, Vec_2D)^T, to represent the object;
5-3) Input the serial fused feature vector into the trained 1-vs-1 SVM multi-class classifier; the discriminant functions yield the corresponding decisions, and voting gives the probability that the object belongs to the i-th class, denoted P(i), i ∈ [1, n], where n is the total number of object classes;
5-4) Determine the class of the object to be recognized by the maximum probability value: class = argmax_(i ∈ [1, n]) P(i).
Embodiment 2: Algorithm flow
Figure 2 is a schematic flow diagram of the general object recognition algorithm based on 2D and 3D SIFT feature fusion. The general object recognition process proposed by the present invention mainly comprises two stages: offline training and online recognition. The training part and the recognition part of the flowchart are described in detail below.
1. Training algorithm flow:
1.1 Offline training stage:
1.1.1 After offline training starts, for the image p_i corresponding to the i-th class of objects in the object image library and the point cloud pc_i corresponding to the i-th class of objects in the object point cloud library, i = 1, 2, ..., n, where n is the number of training sample classes, first extract the 2D and 3D SIFT features of the n classes of training samples, denoted F_R = {f_i_R, i = 1, 2, ..., n}, R ∈ (2D, 3D), where f_i_2D is an m_i × 128 feature vector set and f_i_3D is an mc_i × 128 feature vector set, m_i and mc_i being the numbers of 2D and 3D SIFT key points of the corresponding object. This completes 2D and 3D SIFT feature extraction and representation.
1.1.2 Use KMeans++ clustering to obtain the sample cluster centers, i.e. the corresponding visual word vocabularies (the image visual word vocabulary and the point cloud visual word vocabulary), denoted center = {center_l, l = 1, 2, ..., k}, where k is the number of cluster centers and center_l is the l-th visual word in the vocabulary; the cluster centers corresponding to the 2D and 3D SIFT feature descriptors are center_2D and center_3D.
1.1.3 Use the BoW model method to obtain the BoW model of the i-th class of objects and describe each object with a multi-dimensional vector. Count the number of times each visual word occurs in the feature vectors of each training sample, denoted (y_0, y_1, ..., y_{k-2}, y_{k-1}), where y_l is the occurrence count of visual word center_l. The statistic is computed as follows: calculate the distances from the training sample's feature vectors to the centers; if the distance to center_l is minimal, increment the corresponding y_l by 1. The BoW model feature vectors corresponding to the 2D and 3D SIFT feature descriptors are Vec_2D and Vec_3D.
1.1.4 Use feature-level fusion to represent the object; the fused object feature vector is Vec_3D2D = (Vec_3D, Vec_2D)^T.
1.1.5 Finally, perform 1-vs-1 SVM training on the training samples to obtain the corresponding discriminant functions. The present invention selects a linear-kernel SVM to realize the multi-class classifier; the concrete training process is as follows: for the i-th class of objects, carry out pairwise SVM training against each of the remaining (n−1) classes, obtaining multiple 1-vs-1 SVM classifiers; the n classes of training samples then have n(n−1)/2 1-vs-1 SVM classifiers.
2. Recognition algorithm flow
In the online recognition stage, for the image and point cloud of the object to be recognized, first complete 2D and 3D SIFT feature extraction and representation to obtain the corresponding object image BoW model and point cloud BoW model; then use feature-level fusion to represent the object; finally use the n(n−1)/2 trained classifiers one by one to predict the recognition result, obtain the probability P(i) that the object to be recognized belongs to the i-th class by voting, and take the class with the maximum probability as the final recognition result.
Embodiment 3: Experimental results
The point cloud models and RGB images used in the experiments of the present invention come from the large-scale point cloud database established by K. Lai et al. (RGB-D dataset, http://rgbd-dataset.cs.washington.edu/dataset.html, 2011-03-05; the corresponding document is K. Lai, L. Bo, X. Ren, D. Fox, "A Large-Scale Hierarchical Multi-View RGB-D Object Dataset", Proc. of IEEE Int. Conf. on Robotics and Automation, pp. 1817-1824, Shanghai, China, 2011). This database contains point cloud models and RGB images of 300 objects in 51 classes; the point clouds and images of each object cover 3 viewing angles. Experimental method: one object in each class is randomly chosen as the test object and the remaining objects serve as training objects; 100 training samples and 60 test samples per class are all randomly drawn from the database. To assess the performance of the algorithm proposed here, multiple experiments were carried out and the correct recognition rate was counted in each case. The correct recognition rate is computed as:
P = n_r / N (expressed as a percentage), where P is the correct recognition rate, n_r is the number of correctly recognized test samples, and N is the total number of test samples.
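As a quick sanity check of the formula, with hypothetical counts (not the paper's actual tallies):

```python
# Correct recognition rate P = n_r / N, expressed as a percentage;
# the counts below are illustrative stand-ins, not measured results.
n_r, N = 335, 360          # correctly recognized / total test samples
P = 100.0 * n_r / N
print(round(P, 2))
```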
3.1 Experiment 1: 3D SIFT correct recognition rate
This experiment chooses 6 classes of objects with obvious intra-class differences and high inter-class similarity for testing: apple, tomato, banana, pitcher, cereal_box, and kleenex. In this experiment, the 6 classes of training samples are trained first and then tested with the test samples. Among the many existing point cloud features, PFHRGB and PFH are features with good discriminability [Alexandre L. A., "3D Descriptors for Object and Category Recognition: a Comparative Evaluation", in Proceedings of the IEEE International Conference on Intelligent Robotic Systems, Workshop on Color-Depth Camera Fusion in Robotics, Vilamoura, Portugal: IEEE, 2012, pp. 1-6]. To verify the advantage of the 3D SIFT feature proposed here in object recognition, a comparison of the 3 feature descriptors was carried out under identical conditions: each feature descriptor uses the SIFTKeypoint module to detect key points, then the feature vectors of the different descriptors are computed at the key points and the correct recognition rates are counted. The experimental results are given in Table 1.
Table 1: Correct recognition rate of each feature descriptor
PFHRGB incorporates color information into the PFH feature descriptor. As Table 1 shows, the introduction of color information enriches the feature information and improves the correct object recognition rate. The 3D SIFT feature descriptor proposed in this method improves the recognition rate by 9.72% and 6.94% over PFH and PFHRGB respectively, demonstrating the effectiveness of the 3D SIFT feature descriptor in general object recognition based on point cloud models.
3.2 Experiment 2: Correct recognition rate based on 2D and 3D SIFT feature fusion
To overcome the problem that a point cloud model cannot adequately express the inter-class differences of similar objects, the general object recognition method based on 2D and 3D SIFT feature-level fusion is proposed. Experiment 2 compares in detail the correct recognition rates of 2D SIFT, 3D SIFT, and their feature-level fusion under identical conditions. The training and test samples are the same as in Experiment 1; the experimental results are given in Table 2.
Table 2: Correct recognition rate of the feature fusion algorithm
For convenience of presentation, 2D+3D SIFT denotes the feature-level fusion of 2D and 3D SIFT. As Table 2 shows, the recognition rate of 3D SIFT is 3.05% higher than that of 2D SIFT, so the introduction of depth information evidently benefits object recognition. Owing to the variability of objects, the information provided by a single feature suffers from imprecision, uncertainty, and incompleteness, which makes the recognition rates of single-feature algorithms lower; the recognition rate of 2D and 3D SIFT after average-weighted fusion is 93.06%, a distinct improvement over the single feature descriptors, showing that the general object recognition algorithm proposed by the present invention has an obvious advantage in recognition rate.
3.3 Experiment 3: Correct recognition rates of various fusion algorithms
This experiment gives the recognition results of multiple fusion algorithms. 10 classes of objects with obvious intra-class differences and high inter-class similarity are chosen, and recognition experiments are run on 2-10 classes: apple, tomato, banana, pitcher, cereal_box, kleenex, camera, coffee_mug, calculator, and cell_phone. Altogether the correct recognition rates of 4 fusion algorithms are compared: average-weighted fusion at the feature level and, at the decision level, average-weighted fusion, DSmT theory, and the Murphy rule. The experimental results are shown in Figure 3.
In Figure 3, "ave" denotes average-weighted fusion, and the abscissa is the number of classes; for example, "6" means the experiment comprises 6 classes of objects in total and the correct recognition rate is counted over those 6 classes. As Figure 3 shows, when the number of object classes increases: (1) the feature fusion algorithms have higher correct recognition rates and stronger robustness than the single-feature algorithms. Among the 4 fusion algorithms, the results of average-weighted fusion and DSmT-theoretic fusion are comparatively lower than those of the other two fusion methods; with 3 evidence sources, the result after Murphy-rule fusion is overall no better than that of the feature-level fusion of 2D and 3D SIFT, so the feature-level fusion method is adopted here to complete the general object recognition task. (2) The 3D SIFT feature descriptor proposed by the present invention achieves better recognition than the PFHRGB and 2D SIFT feature descriptors. (3) The recognition rate of every algorithm declines somewhat, partly because of the classifier design: the multi-class classifier adopted by the present invention is constructed from multiple 1-vs-1 SVM classifiers, and the error of each classifier accumulates in the final voting result. As the number of object classes grows, the number of 1-vs-1 SVM classifiers increases sharply; for example, 10 classes of objects require 45 classifiers, and the judgment errors of those 45 classifiers, added into the final voting result, cause recognition errors to a great extent.
3.4 Experiment 4: Algorithm robustness experiment
To verify that the general object recognition algorithm proposed by the present invention still achieves a high correct recognition rate and good robustness when intra-class differences are large and inter-class similarity is high, this experiment compares the correct recognition rates, under different feature representations, of objects of different classes that are highly similar (e.g., apple and tomato) and of objects of the same class that are highly different (e.g., pitcher). The experimental results are shown in Figure 4.
Three different objects are chosen from the pitcher class: a 345 mm tall round ceramic kettle, a 230 mm tall round stainless steel kettle, and a 130 mm tall round ceramic kettle; the intra-class differences in this class are huge. Using PFHRGB to recognize the pitcher class gives a recognition rate of only 70%, whereas 3D SIFT achieves 96.67%. Alternatively, taking samples of the apple and tomato classes, whose inter-class similarity is high, the recognition rate of the apple class is poor with the other single features, but 3D SIFT achieves 71.67%. Comparing the recognition-rate curves of the various features verifies that, under the conditions of high inter-class similarity and large intra-class differences, the 3D SIFT feature descriptor proposed here achieves a higher recognition rate than the other feature descriptors, and the method based on 2D and 3D SIFT feature-level fusion has better robustness than single features.
3.5 Experiment 5: Multi-view experiment
To verify the robustness of this method to viewing-angle changes, a comparison experiment was carried out over the 3 viewing angles of each class of objects: 30°, 45°, and 60°. The training samples are the same as in Experiment 1; from each viewing angle of each test class, 60 samples were randomly selected as new test samples, so each viewing angle comprises 6 classes with 360 test samples in total. The experimental results are shown in Figure 5.
As Figure 5 shows, compared with the PFHRGB feature descriptor, 3D SIFT recognition is relatively accurate and stable; compared with single features, the proposed feature fusion algorithm maintains a recognition rate above 90% under viewing-angle changes, demonstrating the effectiveness and robustness of this method with respect to viewing-angle change.
3.6 Experiment 6: Size scaling
The purpose of this experiment is to examine the effectiveness of this method under scaling. The training sample database is the same as in Experiment 1; the test sample database is scaled on the basis of Experiment 1, down to 1/2, 1/3, and 1/4 respectively, and the object recognition rates are counted.
The experimental results are shown in Figure 6.
As Figure 6 shows, when objects are scaled, the fusion algorithm proposed by the present invention outperforms the single-feature recognition algorithms. However, the correct recognition rate of every feature descriptor declines somewhat; in particular, at 1/4 scale the correct recognition rate of the 2D SIFT feature descriptor is only 49.54%, mainly because some images are small to begin with (e.g., the original apple image is only 84*82), so after scaling essentially no effective key points can be detected. Even then, the feature-level fusion algorithm proposed here still achieves a correct recognition rate of 63.05%.
3.7 Experiment 7: Time complexity
On an experimental platform with an i7-3770 @ 3.4 GHz CPU and a 64-bit Win7 operating system, this experiment measures the time consumed to complete recognition with the different feature descriptors. With the same test samples as in Experiment 1, the average recognition time per object is computed; the experimental results are given in Table 3.
Table 3: Time comparison of the different feature descriptors
Compared with an image, a point cloud model is richer in information and contains far more data, so its processing time is longer. Analyzing the time complexity of the recognition algorithm proposed by the present invention, feature extraction and representation take the largest share of the whole recognition process. The 3D SIFT feature descriptor comprises two parts: key point detection and key point feature description. Let n be the number of points in the point cloud of the object to be recognized. The time complexity of the key point detection part is O(octaves · scale · k · n); since the number of pyramid octaves, the number of scales per octave, and the key point neighborhood size k are constants, the key point detection part is O(n). Computing the feature description vectors of the m (m < n) detected key points takes O(mn), so the overall complexity of the 3D SIFT feature descriptor algorithm is O(mn + n); ignoring the lower-order term, the time complexity of 3D SIFT is O(mn). As Table 3 shows, compared with PFHRGB, the proposed 3D SIFT recognition algorithm and the recognition algorithm fusing 2D and 3D SIFT reduce the average per-sample recognition time by 34.75% and 22.01% respectively, improving the performance of recognition algorithms based on point cloud models.
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the invention, and these improvements and modifications should also be considered within the scope of protection of the present invention.
Claims (7)
1. A general object recognition method based on 2D and 3D SIFT feature fusion, characterized by comprising the following steps:
1) Feature extraction and representation:
For a sample object, extract the feature description of said sample object, the feature description comprising the object image and the object point cloud; first extract the 2D SIFT features of the object image to complete the object image feature representation; then extract the 3D SIFT features of the object point cloud to complete the object point cloud feature representation; thereby obtaining the 2D and 3D SIFT feature descriptors of the sample object;
2) Object representation:
Use the KMeans++ clustering method to obtain the sample cluster centers, i.e. the corresponding visual word vocabulary, then use the BoW model to represent the object with a multi-dimensional vector, obtaining the corresponding 2D and 3D SIFT feature vectors of the sample object;
3) Feature fusion:
Use the method of feature-level fusion to fuse the corresponding 2D and 3D SIFT feature vectors of the sample object, obtaining the serial fused feature vector of the sample object;
4) Classifier design and training:
Use a support vector machine (SVM) to learn the target classes of said sample objects and realize target classification, training the classifier to build a multi-class classifier;
5) Recognition of the object to be recognized:
Input the serial fused feature vector of the object to be recognized into the multi-class classifier trained in said step 4) to obtain the probabilities that said object to be recognized belongs to each class; the sample object class corresponding to the maximum probability value is the recognition result of said object to be recognized.
2. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 1, characterized in that the method for extracting the object point cloud 3D SIFT features in step 1) comprises the following steps:
1-1) Key point detection:
The coordinates of a point in the point cloud model of the object are expressed as P(x, y, z). To realize scale invariance, the scale space of the 3D point cloud is defined as L(x, y, z, σ):
L(x, y, z, σ) = G(x, y, z, σ) * P(x, y, z)   (1)
where σ is the scale-space factor and the variable-scale three-dimensional Gaussian kernel function is:
G(x, y, z, σ) = (1 / ((2π)^(3/2) σ³)) exp(−(x² + y² + z²) / (2σ²))   (2)
the difference-of-Gaussians (DoG) function is:
D(x, y, z, k_i σ) = L(x, y, z, k_(i+1) σ) − L(x, y, z, k_i σ)   (3)
where i ∈ [0, s+2];
1-2) Key point orientation assignment:
For each detected key point, a vector describing the local features of that key point must be computed; this vector is called the descriptor at the key point. To give the descriptor rotation invariance, the local features of the point cloud are used to assign a reference orientation to the key point. The orientation assignment method of said key point is as follows:
1-2-1) Compute the k-neighborhood of key point P; the neighborhood points are denoted P_ki, where i = {1, 2, ..., n} indexes the neighborhood points and n is the number of neighborhood points;
1-2-2) Compute the centroid P_c of the k-neighborhood of key point P;
1-2-3) Compute the vector from P to P_c and obtain the vector magnitude d and the two angles θ and φ, where (x, y, z) are the coordinates of the vector:
θ = sin^(−1)(z/d)   (4)
1-2-4) Use a histogram to count, over the k-neighborhood, the vector magnitudes d and angles (θ, φ), i.e. the orientations, computed in said step 1-2-3); θ and φ are divided into 18 and 36 sub-intervals respectively, each sub-interval spanning 10°. The magnitude d serves as the weight, and Gaussian weighting is applied while counting the angles, where R_max denotes the maximum radius of the key point neighborhood; points beyond this distance are ignored;
1-2-5) The peaks of the histogram represent the orientations of the key point neighborhood, and the direction of the highest peak is taken as the principal orientation of said key point; to strengthen matching robustness, only directions whose peak is greater than 80% of the principal peak are retained as auxiliary orientations of the key point. The principal orientation so defined is denoted (α, β);
1-3) Key point feature description:
The feature descriptor of a key point is generated as follows:
1-3-1) Compute the k-neighborhood of key point P; the neighborhood points are denoted P_ki, where i = {1, 2, ..., n} indexes the neighborhood points and n is the number of neighborhood points; the neighborhood selection range is the same as in the key point orientation assignment;
1-3-2) Rotate the X-axis of the histogram to the principal orientation of the key point to ensure rotation invariance; the neighborhood point coordinates are transformed by the rotation determined by the principal orientation (α, β), where (x, y, z) and (x', y', z') are the coordinates of a neighborhood point before and after the rotation respectively;
1-3-3) Compute the normal vector of the k-neighborhood of key point P at point P;
1-3-4) Compute the vectors from P to the neighborhood points, and use said formula (4) to compute the vector magnitudes and the two angles; at the same time compute the angle δ between the normal vector and each such vector;
1-3-5) The features obtained from the key point and its neighborhood are represented by the four-tuple (d, θ, φ, δ); in 45° steps, θ, φ, and δ are divided into 8, 4, and 4 sub-intervals respectively, and the number of points falling into each sub-interval is counted; the magnitude d serves as the weight, and Gaussian weighting is applied while counting the points in each interval; a 128-dimensional feature vector F = {f_1, f_2, ..., f_128} is thereby obtained;
1-3-6) Normalize the feature vector: the feature vector F = {f_1, f_2, ..., f_128} becomes L = {l_1, l_2, ..., l_128} after normalization, where l_i = f_i / (Σ_(j=1..128) f_j²)^(1/2); at this point, the 3D SIFT feature descriptor of the key point has been generated.
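Step 1-2-3) above (reused in 1-3-4)) reduces to a Cartesian-to-polar conversion. A minimal sketch follows; θ = asin(z/d) is formula (4), while the azimuth φ = atan2(y, x) is an assumption, since the patent shows the φ formula only as an image:

```python
import math

def vector_to_polar(x, y, z):
    """Magnitude and the two angles used in the orientation histograms.
    theta = asin(z/d) follows formula (4); phi = atan2(y, x) is an
    assumed azimuth convention, not taken from the patent text."""
    d = math.sqrt(x * x + y * y + z * z)
    theta = math.asin(z / d)          # elevation, in [-pi/2, pi/2]
    phi = math.atan2(y, x)            # azimuth, in (-pi, pi]
    return d, theta, phi

d, theta, phi = vector_to_polar(1.0, 1.0, math.sqrt(2.0))
print(d, math.degrees(theta), math.degrees(phi))
```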
3. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 1, characterized in that the concrete method of object representation in said step 2) is:
Use the KMeans++ clustering method to obtain the sample cluster centers, i.e. the corresponding visual word vocabulary, denoted center = {center_l, l = 1, 2, ..., k}, where k is the number of cluster centers and center_l is the l-th visual word in the vocabulary; then use the BoW model method to represent the object with a multi-dimensional vector.
4. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 1, characterized in that, in said step 4), the target classification method is: build the multi-class classifier by training several binary classifiers, the concrete training process being as follows: carry out pairwise SVM training between the i-th class of training samples and each of the remaining n−1 classes, obtaining multiple 1-vs-1 SVM classifiers; the n classes of training samples then have n(n−1)/2 1-vs-1 SVM classifiers.
5. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 2, characterized in that, in said step 1), the method for obtaining the extreme points of said DoG function, i.e. the key points, is:
Each point P(x, y, z) in the point cloud model of said object is compared with all its neighboring points to determine whether it is the maximum or the minimum within that neighborhood; the point under examination is compared not only with its 26 neighboring points at the same scale but also with the corresponding 27 × 2 points at the adjacent scales, and the extreme points thus detected are the key points. A threshold τ = 1.0 is set; key points below this threshold are low-contrast key points and are rejected.
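The neighbor comparison of claim 5 can be sketched on a DoG stack stored as a 4-D array (scales × X × Y × Z). The array layout and function name are illustrative assumptions, and the sketch assumes the candidate sits at the center of a full 3⁴ slice:

```python
import numpy as np

def is_keypoint(dog, s, i, j, k, tau=1.0):
    """Extremum test of claim 5 on a DoG stack shaped (scales, X, Y, Z):
    the candidate must exceed the contrast threshold tau and be the strict
    max or min of its 26 same-scale neighbours plus the 27 voxels in each
    of the two adjacent scales (assumes an interior candidate)."""
    v = dog[s, i, j, k]
    if abs(v) < tau:
        return False                                  # low contrast: reject
    cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2, k - 1:k + 2]
    others = np.delete(cube.ravel(), cube.size // 2)  # drop the candidate
    return bool(v > others.max() or v < others.min())

dog = np.zeros((3, 3, 3, 3))
dog[1, 1, 1, 1] = 2.0        # a clear extremum above the threshold tau = 1.0
print(is_keypoint(dog, 1, 1, 1, 1))
```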
6. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 1, characterized in that, in said step 3), for a sample O_ξ ∈ O, where O is the sample space, the 2D and 3D SIFT feature vectors corresponding to said sample O_ξ are Vec_2D and Vec_3D respectively; the serial fused feature vector of said sample O_ξ is obtained as Vec_3D2D = (Vec_3D, Vec_2D)^T, and said serial fused feature vector is used to represent the object.
7. The general object recognition method based on 2D and 3D SIFT feature fusion according to claim 1, characterized in that, in said step 5), the concrete method of obtaining the recognition result of said object to be recognized is:
5-1) Extract the 2D and 3D SIFT feature vectors of the object to be recognized, obtaining its 2D and 3D SIFT feature descriptors; use the BoW model to count the feature vector distribution of the object to be recognized, expressed as Vec_2D and Vec_3D;
5-2) Perform feature-level fusion on the two feature vectors of said object to be recognized, forming the new serial fused feature vector Vec_3D2D = (Vec_3D, Vec_2D)^T, to represent the object;
5-3) Input said serial fused feature vector into the trained 1-vs-1 SVM multi-class classifier; the discriminant functions yield the corresponding decisions, and voting gives the probability that the object belongs to the i-th class, denoted P(i), i ∈ [1, n], where n is the total number of object classes;
5-4) Determine the class of said object to be recognized by the maximum probability value: class = argmax_(i ∈ [1, n]) P(i).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510117991.6A CN104715254B (en) | 2015-03-17 | 2015-03-17 | A kind of general object identification method merged based on 2D and 3D SIFT features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104715254A true CN104715254A (en) | 2015-06-17 |
CN104715254B CN104715254B (en) | 2017-10-10 |
Family
ID=53414564
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104715254B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708380A (en) * | 2012-05-08 | 2012-10-03 | 东南大学 | Indoor common object identification method based on machine vision |
CN102930302A (en) * | 2012-10-18 | 2013-02-13 | 山东大学 | On-line sequential extreme learning machine-based incremental human behavior recognition method |
CN104298971A (en) * | 2014-09-28 | 2015-01-21 | 北京理工大学 | Method for identifying objects in 3D point cloud data |
2015-03-17: Application CN201510117991.6A filed in China; granted as CN104715254B (status: Active)
Non-Patent Citations (1)
Title |
---|
XINGHUA SUN et al.: "Action Recognition via Local Descriptors and Holistic Features", IEEE * |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778449A (en) * | 2015-11-23 | 2017-05-31 | Object identification method of dynamic image and interactive film establishment method for automatically capturing target image |
CN106778449B (en) * | 2015-11-23 | 2020-09-22 | 创意点子数位股份有限公司 | Object identification method of dynamic image and interactive film establishment method for automatically capturing target image |
CN105551068A (en) * | 2015-12-07 | 2016-05-04 | 中国人民解放军空军装备研究院雷达与电子对抗研究所 | Three-dimensional laser scanning and optical photograph synthetic method |
CN105551068B (en) * | 2015-12-07 | 2018-07-24 | Method for synthesizing three-dimensional laser scanning and optical photographs |
CN105654035A (en) * | 2015-12-21 | 2016-06-08 | 湖南拓视觉信息技术有限公司 | Three-dimensional face recognition method and data processing device applying three-dimensional face recognition method |
CN105654035B (en) * | 2015-12-21 | 2019-08-09 | Three-dimensional face recognition method and data processing device applying it |
CN105654122A (en) * | 2015-12-28 | 2016-06-08 | 江南大学 | Spatial pyramid object identification method based on kernel function matching |
CN106909873A (en) * | 2016-06-21 | 2017-06-30 | Face recognition method and apparatus |
CN106326395A (en) * | 2016-08-18 | 2017-01-11 | 北京大学 | Local visual feature selection method and device |
CN106326395B (en) * | 2016-08-18 | 2019-05-28 | Local visual feature selection method and device |
CN106529394A (en) * | 2016-09-19 | 2017-03-22 | 广东工业大学 | Indoor scene and object simultaneous recognition and modeling method |
CN106529394B (en) * | 2016-09-19 | 2019-07-19 | Indoor scene and object simultaneous recognition and modeling method |
CN106682672A (en) * | 2016-10-24 | 2017-05-17 | 深圳大学 | Method and device for acquiring feature descriptor of hyper-spectral image |
CN106682672B (en) * | 2016-10-24 | 2020-04-24 | 深圳大学 | Method and device for acquiring hyperspectral image feature descriptor |
CN107247917A (en) * | 2017-04-21 | 2017-10-13 | Airplane landing control method based on ELM and DSmT |
CN107423697B (en) * | 2017-07-13 | 2020-09-08 | 西安电子科技大学 | Behavior identification method based on nonlinear fusion depth 3D convolution descriptor |
CN107423697A (en) * | 2017-07-13 | 2017-12-01 | Behavior identification method based on nonlinear fusion depth 3D convolution descriptor |
CN107450577A (en) * | 2017-07-25 | 2017-12-08 | Multi-sensor-based UAV intelligent perception system and method |
CN107895386A (en) * | 2017-11-14 | 2018-04-10 | Multi-platform joint target autonomous recognition method |
CN107886528B (en) * | 2017-11-30 | 2021-09-03 | 南京理工大学 | Distribution line operation scene three-dimensional reconstruction method based on point cloud |
CN107886528A (en) * | 2017-11-30 | 2018-04-06 | Distribution line operation scene three-dimensional reconstruction method based on point cloud |
CN107886101A (en) * | 2017-12-08 | 2018-04-06 | Efficient extraction method of scene three-dimensional feature points based on RGB-D |
CN108197532A (en) * | 2017-12-18 | 2018-06-22 | Face recognition method, apparatus and computer device |
WO2019120115A1 (en) * | 2017-12-18 | 2019-06-27 | 深圳励飞科技有限公司 | Facial recognition method, apparatus, and computer apparatus |
CN108171432A (en) * | 2018-01-04 | 2018-06-15 | Ecological risk evaluation method based on multidimensional cloud model and fuzzy support vector machine |
CN110069968A (en) * | 2018-01-22 | 2019-07-30 | Face identification system and face identification method |
US10885314B2 (en) | 2018-01-22 | 2021-01-05 | Kneron Inc. | Face identification system and face identification method with high security level and low power consumption |
TWI693556B (en) * | 2018-01-22 | 2020-05-11 | 美商耐能有限公司 | Face identification system and face identification method |
CN108470373B (en) * | 2018-02-14 | 2019-06-04 | Infrared-based 3D four-dimensional data acquisition method and device |
CN108470373A (en) * | 2018-02-14 | 2018-08-31 | Infrared-based 3D four-dimensional data acquisition method and device |
CN108491773B (en) * | 2018-03-12 | 2022-11-08 | 中国工商银行股份有限公司 | Identification method and system |
CN108491773A (en) * | 2018-03-12 | 2018-09-04 | Identification method and system |
CN108734087A (en) * | 2018-03-29 | 2018-11-02 | Automatic object recognition method and system, shopping device and storage medium |
US10872227B2 (en) | 2018-03-29 | 2020-12-22 | Boe Technology Group Co., Ltd. | Automatic object recognition method and system thereof, shopping device and storage medium |
CN109902702B (en) * | 2018-07-26 | 2021-08-03 | 华为技术有限公司 | Method and device for detecting target |
CN109902702A (en) * | 2018-07-26 | 2019-06-18 | Target detection method and apparatus |
CN109543557A (en) * | 2018-10-31 | 2019-03-29 | Video frame processing method, apparatus, device and storage medium |
CN109270079A (en) * | 2018-11-15 | 2019-01-25 | Accurate detection method for workpiece surface flaws based on point cloud model |
CN109978885A (en) * | 2019-03-15 | 2019-07-05 | Tree three-dimensional point cloud segmentation method and system |
CN110390671B (en) * | 2019-07-10 | 2021-11-30 | 杭州依图医疗技术有限公司 | Method and device for detecting mammary gland calcification |
CN110390671A (en) * | 2019-07-10 | 2019-10-29 | Method and device for detecting breast calcification |
CN110503148B (en) * | 2019-08-26 | 2022-10-11 | 清华大学 | Point cloud object identification method with scale invariance |
CN110503148A (en) * | 2019-08-26 | 2019-11-26 | Point cloud object recognition method with scale invariance |
CN110913226A (en) * | 2019-09-25 | 2020-03-24 | 西安空间无线电技术研究所 | Image data processing system and method based on cloud detection |
CN110913226B (en) * | 2019-09-25 | 2022-01-04 | 西安空间无线电技术研究所 | Image data processing system and method based on cloud detection |
CN111007565A (en) * | 2019-12-24 | 2020-04-14 | 清华大学 | Three-dimensional frequency domain full-acoustic wave imaging method and device |
CN111582014A (en) * | 2020-02-29 | 2020-08-25 | 佛山市云米电器科技有限公司 | Container identification method, device and computer readable storage medium |
CN111339974B (en) * | 2020-03-03 | 2023-04-07 | 景德镇陶瓷大学 | Method for identifying modern ceramics and ancient ceramics |
CN111339974A (en) * | 2020-03-03 | 2020-06-26 | 景德镇陶瓷大学 | Method for identifying modern ceramics and ancient ceramics |
CN111366084A (en) * | 2020-04-28 | 2020-07-03 | 上海工程技术大学 | Part size detection platform based on information fusion, detection method and fusion method |
CN112179353A (en) * | 2020-09-30 | 2021-01-05 | 深圳市银星智能科技股份有限公司 | Positioning method and device of self-moving robot, robot and readable storage medium |
CN112163557A (en) * | 2020-10-19 | 2021-01-01 | 南宁职业技术学院 | Face recognition method and device based on 3D structured light |
CN114627112A (en) * | 2022-05-12 | 2022-06-14 | 宁波博登智能科技有限公司 | Semi-supervised three-dimensional target labeling system and method |
CN115496931A (en) * | 2022-11-14 | 2022-12-20 | 济南奥普瑞思智能装备有限公司 | Industrial robot health monitoring method and system |
CN115496931B (en) * | 2022-11-14 | 2023-02-10 | 济南奥普瑞思智能装备有限公司 | Industrial robot health monitoring method and system |
CN116844142A (en) * | 2023-08-28 | 2023-10-03 | 四川华腾公路试验检测有限责任公司 | Bridge foundation scouring identification and assessment method |
CN116844142B (en) * | 2023-08-28 | 2023-11-21 | 四川华腾公路试验检测有限责任公司 | Bridge foundation scouring identification and assessment method |
Also Published As
Publication number | Publication date |
---|---|
CN104715254B (en) | 2017-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104715254A (en) | Ordinary object recognizing method based on 2D and 3D SIFT feature fusion | |
Bariya et al. | Scale-hierarchical 3d object recognition in cluttered scenes | |
Redondo-Cabrera et al. | Surfing the point clouds: Selective 3d spatial pyramids for category-level object recognition | |
CN104318219A (en) | Face recognition method based on combination of local features and global features | |
CN104834941A (en) | Offline handwriting recognition method of sparse autoencoder based on computer input | |
CN103530633A (en) | Semantic mapping method of local invariant feature of image and semantic mapping system | |
CN104881671A (en) | High resolution remote sensing image local feature extraction method based on 2D-Gabor | |
CN105930792A (en) | Human action classification method based on video local feature dictionary | |
CN104966090A (en) | Visual word generation and evaluation system and method for realizing image comprehension | |
Moetesum et al. | Segmentation and recognition of electronic components in hand-drawn circuit diagrams | |
Sun et al. | Brushstroke based sparse hybrid convolutional neural networks for author classification of Chinese ink-wash paintings | |
Nasser et al. | Signature recognition by using SIFT and SURF with SVM basic on RBF for voting online | |
Massa et al. | Convolutional neural networks for joint object detection and pose estimation: A comparative study | |
Xu et al. | Robust joint representation of intrinsic mean and kernel function of lie group for remote sensing scene classification | |
Dong et al. | Fusing multilevel deep features for fabric defect detection based NTV-RPCA | |
CN105608443A (en) | Multi-feature description and local decision weighting face identification method | |
CN103942572A (en) | Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction | |
Ahmad et al. | A fusion of labeled-grid shape descriptors with weighted ranking algorithm for shapes recognition | |
Sun et al. | Indoor scene recognition based on deep learning and sparse representation | |
Xu et al. | Object detection using principal contour fragments | |
Chen et al. | Wafer maps defect recognition based on transfer learning of handwritten pre-training network | |
Du et al. | Shape matching and recognition base on genetic algorithm and application to plant species identification | |
Zhang et al. | A hierarchical oil depot detector in high-resolution images with false detection control | |
Alwaely et al. | Graph spectral domain feature representation for in-air drawn number recognition | |
Ramezani et al. | 3D object categorization based on histogram of distance and normal vector angles on surface points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||