CN103440501A - Scene classification method based on nonparametric space judgment hidden Dirichlet model - Google Patents


Publication number
CN103440501A
CN103440501A · application CN2013103928915A / CN201310392891A
Authority
CN
China
Prior art keywords: image, space, nonparametric, hidden, key point
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103928915A
Other languages
Chinese (zh)
Inventor
牛振兴
王斌
高新波
宗汝
郑昱
李洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2013103928915A priority Critical patent/CN103440501A/en
Publication of CN103440501A publication Critical patent/CN103440501A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation (NS-DiscLDA) model, chiefly overcoming the failure of existing classification methods to use scene spatial information. The method is implemented in the following steps: (1) input images; (2) extract image-block features; (3) initialize the NS-DiscLDA model parameters; (4) build the NS-DiscLDA model; (5) classify the image scenes. By using image blocks that carry spatial information, the method describes image scenes more richly and improves the accuracy of image scene classification.

Description

Scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model
Technical field
The invention belongs to the technical field of image processing and further relates to a scene classification method in pattern recognition based on a nonparametric spatial discriminative latent Dirichlet allocation (Nonparametric Spatial Discriminative Latent Dirichlet Allocation, NS-DiscLDA) model. The invention can be used for scene classification of natural images and improves scene classification accuracy.
Background technology
Scene classification is one of the basic tasks of image understanding and plays an important role in scene recognition. Traditional scene classification usually follows one of three approaches: first, scene classification of image collections based on spectral analysis; second, scene classification based on supervised manifold learning; third, scene classification based on objects and their spatial relationship characteristics.
Patent " scene classification method and the device of the image collection based on the spectrum analysis " (application number: 201110221407.3 applyings date: 2011-08-03 application publication number: disclose a kind of scene classification method CN102542285A) of Tsing-Hua University's application.The method is determined degree of membership by interaction time, mainly solves the problem that in existing method, nonlinear data is lost, and then improves classification accuracy.The method embodiment is: the SIFT characteristic set that at first extracts image collection, and obtain K cluster and K code word, set up according to SIFT feature and the code word of arbitrary image the collection of illustrative plates of having the right, define weight graph spectrum K ' the individual node nearest with the Euclidean distance of arbitrary node, obtain weight matrix corresponding to node set, then obtain the Laplace operator matrix according to weight matrix, the Laplace operator matrix is carried out to computing and obtain each SIFT feature of arbitrary image and the interaction time between K code word, determine degree of membership according to interaction time, finally according to degree of membership, determine the code assignment result, according to allocation result, scene is classified.But the weak point that the method for this patented claim exists is: utilize merely sorter to be classified to image scene, lacked semantic information in scene, and then reduced the accuracy rate of scene classification.
The patent application of Tsinghua University, "Scene classification method and device based on supervised manifold learning" (application number 201110202756.0, filing date 2011-07-19, publication number CN102254194A), discloses a scene classification method. The method classifies image scenes with manifold learning, chiefly addressing the failure of existing methods to consider the manifold structure of high-dimensional feature points. Its embodiment: first extract image features and obtain a codebook formed by the feature cluster centres; then obtain the metric of each feature onto the code words on each manifold structure; compute the membership degree of a test image's features to the code words and obtain a histogram vector; finally learn the histogram vectors with a support vector machine to obtain the scene class of the image. The shortcomings of this application are that the classification ability of manifold learning is weak, which lowers scene classification accuracy, and that its computational complexity is too high, which lowers scene classification speed.
Patent " the image scene sorting technique of a kind of based target and spatial relationship characteristic thereof the " (application number: 201110214985.4 applyings date: 2011-07-29 application publication number: disclose a kind of scene classification method CN102902976A) of CAS Electronics Research Institute's application.The method improves the scene classification accuracy rate by the spatial relationship characteristic merged between theme.The method embodiment is: at first define a kind of spatial relationship histogram, characterize the spatial relationship between target, then adopt to merge the implicit semantic analysis model of probability of spatial relationship characteristic between theme, set up iconic model, finally with the support vector machine scene image of classifying.But the weak point that the method for this patented claim exists is: because the method has adopted the method for pLSA topic model modeling, yet pLSA topic model disappearance prior imformation causes detailed information to lose and then reduced the accuracy rate of scene classification.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art and to propose a scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model that describes images more comprehensively and improves scene classification accuracy.
The technical idea of the invention is to evenly partition an image into many 8 × 8 image blocks, extract the SIFT feature of each block, obtain each block's spatial coordinate, and build the nonparametric spatial discriminative LDA model from the block features and spatial coordinates, so that the model contains the blocks' spatial information, describes images more comprehensively, and improves scene classification accuracy.
To achieve the above object, the invention comprises the following key steps:
(1) Input images: input training images whose scene classes have been manually labelled.
(2) Extract image-block features.
Partition each training image into many 8 × 8 image blocks, extract the SIFT feature of each block, and record each block's spatial coordinate.
(3) Initialize the model parameters: manually initialize the nonparametric spatial discriminative LDA model and obtain the scene-element spatial distribution parameters.
(4) Build the nonparametric spatial discriminative LDA model.
Estimate the word-distribution parameter of each topic in the model, statistically model the features and spatial coordinates of the image blocks, and build the nonparametric spatial discriminative LDA model.
(5) Classify image scenes.
Predict the class label of each test image with the nonparametric spatial discriminative LDA model, completing image scene classification.
Compared with conventional methods, the invention has the following advantages:
First, the invention records the spatial coordinate of each image block when extracting block features, overcoming the prior art's omission of spatial information, so the image information of the invention is more complete and image scene classification accuracy is improved.
Second, the invention statistically models the features and spatial coordinates of the image blocks, linking block features to one another through their spatial coordinates; this overcomes the mutual independence of block information in prior image representations and keeps the image information comprehensive.
Third, the nonparametric spatial discriminative LDA model built by the invention models the word distribution of each topic, which makes modelling easier and overcomes the weak image-modelling ability of the prior art, so the invention exhibits better modelling ability.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a diagram of the nonparametric spatial discriminative LDA model of the invention.
Embodiment
The invention is described in further detail below with reference to the drawings.
With reference to Fig. 1, the steps of the invention are as follows:
Step 1: input images.
Input training images whose scene classes have been manually labelled. Manual labelling here means marking every training image with its natural-image class label.
Step 2: extract image-block features.
Partition each training image into many 8 × 8 image blocks, extract the SIFT feature of each block, and record each block's spatial coordinate.
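The block partitioning above can be sketched as follows; the function name and return layout are illustrative, not from the patent. Each image is cut into non-overlapping 8 × 8 blocks and each block's top-left spatial coordinate is recorded.

```python
import numpy as np

def partition_into_blocks(image, block_size=8):
    """Split an image into non-overlapping block_size x block_size patches,
    recording each patch's top-left (x, y) spatial coordinate."""
    h, w = image.shape[:2]
    blocks, coords = [], []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            blocks.append(image[y:y + block_size, x:x + block_size])
            coords.append((x, y))
    return blocks, coords

img = np.arange(32 * 32).reshape(32, 32)   # toy 32 x 32 "image"
blocks, coords = partition_into_blocks(img)  # 4 x 4 grid of 8 x 8 blocks
```

The coordinates kept alongside the blocks are exactly the spatial information the later modelling steps rely on.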
The SIFT feature extraction proceeds as follows:
First step: the image blocks from which SIFT features are to be extracted form the image-block set.
Second step: choose the five scale values 0.5, 0.8, 1.1, 1.4 and 1.7 for the scale σ, and substitute each into the following formula to obtain five Gaussian functions of different scales.
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where G(x, y, σ) denotes the Gaussian function at scale σ, and x and y denote the abscissa and ordinate of an image-block pixel.
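The five-scale Gaussian can be sketched as follows, using the standard 2-D Gaussian G(x, y, σ) = e^(−(x²+y²)/(2σ²)) / (2πσ²); the kernel radius and variable names are illustrative.

```python
import numpy as np

SIGMAS = [0.5, 0.8, 1.1, 1.4, 1.7]  # the five scale values from the patent

def gaussian(x, y, sigma):
    """Standard 2-D Gaussian G(x, y, sigma); works on scalars or arrays."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# One discrete kernel per scale (radius 4 is an arbitrary illustrative choice)
ax = np.arange(-4, 5)
xx, yy = np.meshgrid(ax, ax)
kernels = [gaussian(xx, yy, s) for s in SIGMAS]
```

Convolving a block with each of these kernels gives the five blurred layers used in the next step.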
Third step: convolve each image block in the first-step block set with the Gaussian functions of the five scales to obtain the five-layer image set of the first octave.
Fourth step: down-sample every image of the first-octave five-layer set by taking every other pixel to obtain the five-layer image set of the second octave.
Fifth step: down-sample every image of the second-octave five-layer set in the same way to obtain the five-layer image set of the third octave.
Sixth step: subtract the images of adjacent layers to obtain the difference image set of each octave.
Seventh step: the difference image sets of all images together constitute the difference-of-Gaussians (DoG) scale space.
Eighth step: compare the grey value of each pixel in the DoG scale space with its 8 neighbours in the same layer and the 18 neighbours at the same positions in the adjacent layers of the same octave; if the pixel is an extremum, mark it as a feature point, otherwise do not mark it.
Ninth step: compute the principal-curvature ratio of each feature point in the DoG scale space according to the following formula.
C = (α + β)² / (αβ)
where C denotes the principal-curvature ratio of a feature point in the DoG scale space, and α and β denote the gradient values of the feature point along the abscissa and ordinate directions at the image pixel.
Tenth step: if the principal-curvature ratio of a feature point in the DoG scale space is below the threshold 10, mark the point as a key point; otherwise do not mark it.
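The ninth- and tenth-step test follows directly from the formula C = (α + β)²/(αβ) and the threshold 10; the function names below are illustrative.

```python
def curvature_ratio(alpha, beta):
    """C = (alpha + beta)^2 / (alpha * beta): small when the two principal
    curvatures are similar (well-localised point), large for edge-like points."""
    return (alpha + beta) ** 2 / (alpha * beta)

def is_keypoint(alpha, beta, threshold=10.0):
    """Keep a candidate feature point only when its ratio is below the threshold."""
    return curvature_ratio(alpha, beta) < threshold
```

Equal curvatures give the minimum C = 4, so the threshold 10 rejects strongly elongated (edge-like) responses while keeping corner-like ones.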
Eleventh step: compute the gradient magnitude of each key point of each image in the DoG scale space according to the following formula.
m(x, y) = √( [L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]² )
where m(x, y) denotes the gradient magnitude of a key point, x and y its abscissa and ordinate, and L the scale-space image at the key point's scale.
Twelfth step: compute the gradient orientation of each key point of each image in the DoG scale space according to the following formula.
θ(x, y) = atan2( L(x, y+1) − L(x, y−1), L(x+1, y) − L(x−1, y) )
where θ(x, y) denotes the gradient orientation of a key point, x and y its abscissa and ordinate, and L the scale-space image at the key point's scale.
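The two formulas above reduce to central differences on the blurred image; a minimal sketch (the tiny list-of-lists image `L` and the function name are illustrative):

```python
from math import atan2, hypot

def gradient(L, x, y):
    """Central-difference gradient magnitude and orientation at (x, y):
    m = sqrt(dx^2 + dy^2), theta = atan2(dy, dx)."""
    dx = L[y][x + 1] - L[y][x - 1]   # L(x+1, y) - L(x-1, y)
    dy = L[y + 1][x] - L[y - 1][x]   # L(x, y+1) - L(x, y-1)
    return hypot(dx, dy), atan2(dy, dx)

L = [[0, 0, 0],
     [1, 2, 3],
     [4, 4, 4]]
m, theta = gradient(L, 1, 1)  # dx = 2, dy = 4
```

Using `atan2` (rather than a bare arctangent of the ratio) keeps the orientation well defined in all four quadrants and when the horizontal difference is zero.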
Thirteenth step: accumulate the gradient orientations and magnitudes of the 8 × 8 pixels around each key point in the DoG scale space into a gradient histogram whose horizontal axis is the orientation angle and whose vertical axis is the magnitude accumulated at that angle.
Fourteenth step: rotate the coordinate axes of the DoG scale space to the dominant orientation of the key point, compute the 8-dimensional orientation vector of each sub-region around the key point, and concatenate the 8-dimensional vectors of all sub-regions into the 128-dimensional SIFT feature of the key point.
Step 3: initialize the model parameters.
Manually initialize the nonparametric spatial discriminative LDA model and obtain the scene-element spatial distribution parameters.
Model initialization proceeds as follows:
First step: check whether scene-element spatial layout information exists for the training images; if it does, go to the second step, otherwise go to the third step.
Second step: use the scene-element spatial layout information of the training images as the scene-element spatial distribution parameters.
Third step: evenly partition the training images into many 8 × 8 image blocks and obtain the scene-element spatial distribution parameters by counting the image blocks.
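One possible reading of the third-step fallback, "obtain the scene-element spatial distribution parameters by counting image blocks", is a normalised count of element labels per spatial cell. Everything below (the labels, cell size, and return shape) is a hypothetical illustration, not the patent's definition.

```python
from collections import Counter

def init_spatial_params(block_labels, coords, cell=8):
    """For each spatial cell, count how often each scene-element label occurs
    among the 8 x 8 blocks falling in it, then normalise to a distribution."""
    counts = {}
    for label, (x, y) in zip(block_labels, coords):
        key = (x // cell, y // cell)
        counts.setdefault(key, Counter())[label] += 1
    return {key: {lab: n / sum(c.values()) for lab, n in c.items()}
            for key, c in counts.items()}

params = init_spatial_params(["sky", "sky", "grass"],
                             [(0, 0), (0, 0), (8, 0)])
```

A per-cell distribution of this kind is one simple way to seed the model with a rough spatial prior before any learning.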
Step 4: build the nonparametric spatial discriminative LDA model.
Estimate the word-distribution parameter of each topic in the model, statistically model the features and spatial coordinates of the image blocks, and build the nonparametric spatial discriminative LDA model.
The model-building process of the invention is described in further detail below with reference to Fig. 2.
The steps for building the nonparametric spatial discriminative LDA model are as follows:
First step: estimate the probability distribution R of the image topics from the scene-element spatial distribution parameters.
Second step: for each image d (d = 1, 2, …, D), randomly sample the topic distribution θ_d to obtain the block-topic samples z_dn, where α is the parameter of the distribution of θ_d and k (k = 1, 2, …, K₁) indexes the topics.
Third step: from the block-topic samples z_dn, estimate the word-distribution parameter φ_k of each block topic, where β is the parameter of the distribution of φ_k.
Fourth step: from the word-distribution parameters φ_k of the block topics, build the nonparametric spatial discriminative LDA model, where π denotes the distribution of class labels (uniform in the invention), y_d the class label of image d, T the scene mapping matrix, N_d the number of blocks of image d, u_dn the latent topic of block dn, w_dn the visual-word index of block dn, and l_dn the spatial coordinate of block dn.
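The sampling story above resembles plain LDA; the sketch below draws θ_d, the block topics z_dn, and visual words w_dn. The spatial coordinate l_dn and the mapping matrix T of the full NS-DiscLDA model are deliberately omitted, and all sizes are illustrative.

```python
import numpy as np

def sample_document(alpha, phi, n_blocks, seed=0):
    """Plain-LDA generative sketch: theta_d ~ Dirichlet(alpha); for each block,
    a topic z_dn ~ Categorical(theta_d) and a word w_dn ~ Categorical(phi[z_dn])."""
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(alpha)
    z = rng.choice(len(alpha), size=n_blocks, p=theta)
    w = [int(rng.choice(phi.shape[1], p=phi[k])) for k in z]
    return theta, z, w

alpha = np.ones(3)          # symmetric Dirichlet prior over 3 topics
phi = np.full((3, 5), 0.2)  # each topic: uniform over a 5-word codebook
theta, z, w = sample_document(alpha, phi, n_blocks=10)
```

In the full model, each block's coordinate l_dn would additionally bias which topic it draws, which is exactly the spatial information the invention adds.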
Step 5: classify image scenes.
Predict the class label of each test image with the nonparametric spatial discriminative LDA model, completing image scene classification.
First step: feed the test image into the nonparametric spatial discriminative LDA model to obtain its class probability distribution.
Second step: take the class with the largest probability in that distribution as the class label of the test image.
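The two classification sub-steps amount to an argmax over the class probability distribution; a minimal sketch with illustrative probabilities:

```python
def classify(class_probs):
    """Return the label with the largest posterior probability."""
    return max(class_probs, key=class_probs.get)

# Hypothetical output of the model for one test image
probs = {"coast": 0.12, "forest": 0.55, "street": 0.33}
label = classify(probs)
```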
The effect of the invention is further illustrated by the following simulation experiment.
1. Simulation conditions
The simulation was run with MATLAB on a machine with an Intel(R) Core i3-530 2.93 GHz CPU and 4 GB of memory under the Windows 7 operating system. The LabelMe and UIUC-Sports databases were used.
2. Simulation content
Scene classification experiments were carried out on the image scene databases. The test images come from the LabelMe and UIUC-Sports databases. The LabelMe database contains 8 scene classes: "highway", "inside city", "tall building", "street", "forest", "coast", "mountain" and "open country". The UIUC-Sports database contains 8 scene classes: "badminton", "bocce", "croquet", "polo", "rock climbing", "rowing", "sailing" and "snowboarding".
With classification accuracy as the evaluation index, the accuracies of several scene classification methods were compared: the spatial latent Dirichlet allocation (Spatial Latent Dirichlet Allocation, sLDA) method, the discriminative latent Dirichlet allocation (Discriminative Latent Dirichlet Allocation, DiscLDA) method, the spatial discriminative latent Dirichlet allocation (Spatial Discriminative Latent Dirichlet Allocation, S-DiscLDA) method, and the method of the invention. The comparison results are shown in the following table.
[Table: classification accuracies of the sLDA, DiscLDA, S-DiscLDA and proposed methods on the two databases; original table image not reproduced]
As the table shows, in the scene classification experiments on the two databases the invention attains the highest classification accuracy of the four methods. This is because the invention exploits image blocks that carry spatial information and thus describes image scenes better, yielding higher classification accuracy than the other scene classification methods and further verifying the advance of the invention.

Claims (6)

1. A scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model, characterized by comprising the following steps:
(1) input images: input training images whose scene classes have been manually labelled;
(2) extract image-block features:
partition each training image into many 8 × 8 image blocks, extract the SIFT feature of each block, and record each block's spatial coordinate;
(3) initialize the model parameters:
manually initialize the nonparametric spatial discriminative LDA model and obtain the scene-element spatial distribution parameters;
(4) build the nonparametric spatial discriminative LDA model:
estimate the word-distribution parameter of each topic in the model, statistically model the features and spatial coordinates of the image blocks, and build the nonparametric spatial discriminative LDA model;
(5) classify image scenes:
predict the class label of each test image with the nonparametric spatial discriminative LDA model, completing image scene classification.
2. The scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model according to claim 1, characterized in that the scene-class labelling of step (1) means marking every training image with its natural-image class label.
3. The scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model according to claim 1, characterized in that the SIFT feature extraction of step (2) proceeds as follows:
first step: the image blocks from which SIFT features are to be extracted form the image-block set;
second step: choose the five scale values 0.5, 0.8, 1.1, 1.4 and 1.7 for the scale σ, and substitute each into the following formula to obtain five Gaussian functions of different scales;
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where G(x, y, σ) denotes the Gaussian function at scale σ, and x and y denote the abscissa and ordinate of an image-block pixel;
third step: convolve each image block in the first-step block set with the Gaussian functions of the five scales to obtain the five-layer image set of the first octave;
fourth step: down-sample every image of the first-octave five-layer set by taking every other pixel to obtain the five-layer image set of the second octave;
fifth step: down-sample every image of the second-octave five-layer set in the same way to obtain the five-layer image set of the third octave;
sixth step: subtract the images of adjacent layers to obtain the difference image set of each octave;
seventh step: the difference image sets of all images together constitute the difference-of-Gaussians (DoG) scale space;
eighth step: compare the grey value of each pixel in the DoG scale space with its 8 neighbours in the same layer and the 18 neighbours at the same positions in the adjacent layers of the same octave; if the pixel is an extremum, mark it as a feature point, otherwise do not mark it;
ninth step: compute the principal-curvature ratio of each feature point in the DoG scale space according to the following formula;
C = (α + β)² / (αβ)
where C denotes the principal-curvature ratio of a feature point in the DoG scale space, and α and β denote the gradient values of the feature point along the abscissa and ordinate directions at the image pixel;
tenth step: if the principal-curvature ratio of a feature point in the DoG scale space is below the threshold 10, mark the point as a key point, otherwise do not mark it;
eleventh step: compute the gradient magnitude of each key point of each image in the DoG scale space according to the following formula;
m(x, y) = √( [L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]² )
where m(x, y) denotes the gradient magnitude of a key point, x and y its abscissa and ordinate, and L the scale-space image at the key point's scale;
twelfth step: compute the gradient orientation of each key point of each image in the DoG scale space according to the following formula;
θ(x, y) = atan2( L(x, y+1) − L(x, y−1), L(x+1, y) − L(x−1, y) )
where θ(x, y) denotes the gradient orientation of a key point, x and y its abscissa and ordinate, and L the scale-space image at the key point's scale;
thirteenth step: accumulate the gradient orientations and magnitudes of the 8 × 8 pixels around each key point in the DoG scale space into a gradient histogram whose horizontal axis is the orientation angle and whose vertical axis is the magnitude accumulated at that angle;
fourteenth step: rotate the coordinate axes of the DoG scale space to the dominant orientation of the key point, compute the 8-dimensional orientation vector of each sub-region around the key point, and concatenate the 8-dimensional vectors of all sub-regions into the 128-dimensional SIFT feature of the key point.
4. The scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model according to claim 1, characterized in that the model initialization of step (3) proceeds as follows:
first step: check whether scene-element spatial layout information exists for the training images; if it does, go to the second step, otherwise go to the third step;
second step: use the scene-element spatial layout information of the training images as the scene-element spatial distribution parameters;
third step: evenly partition the training images into many 8 × 8 image blocks and obtain the scene-element spatial distribution parameters by counting the image blocks.
5. The scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model according to claim 1, characterized in that building the nonparametric spatial discriminative LDA model in step (4) proceeds as follows:
first step: estimate the probability distribution of the image topics from the scene-element spatial distribution parameters;
second step: randomly sample the topic distribution of each image to obtain the block-topic samples;
third step: from the block-topic samples, estimate the word-distribution parameter of each block topic;
fourth step: from the word-distribution parameters of the block topics, build the nonparametric spatial discriminative LDA model.
6. The scene classification method based on a nonparametric spatial discriminative latent Dirichlet allocation model according to claim 1, characterized in that predicting the class label of a test image in step (5) proceeds as follows:
first step: feed the test image into the nonparametric spatial discriminative LDA model to obtain its class probability distribution;
second step: take the class with the largest probability in that distribution as the class label of the test image.
CN2013103928915A 2013-09-01 2013-09-01 Scene classification method based on nonparametric space judgment hidden Dirichlet model Pending CN103440501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103928915A CN103440501A (en) 2013-09-01 2013-09-01 Scene classification method based on nonparametric space judgment hidden Dirichlet model


Publications (1)

Publication Number Publication Date
CN103440501A true CN103440501A (en) 2013-12-11

Family

ID=49694194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103928915A Pending CN103440501A (en) 2013-09-01 2013-09-01 Scene classification method based on nonparametric space judgment hidden Dirichlet model

Country Status (1)

Country Link
CN (1) CN103440501A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090089036A1 (en) * 2007-09-28 2009-04-02 Huawei Technologies Co., Ltd. Method And Apparatus for Establishing Network Performance Model
CN102968620A (en) * 2012-11-16 2013-03-13 华中科技大学 Scene recognition method based on layered Gaussian hybrid model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
牛振兴: "Research on topic modeling and content analysis methods for soccer video" (足球视频主题建模及内容分析方法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810500A (en) * 2014-02-25 2014-05-21 北京工业大学 Place image recognition method based on a supervised learning probability topic model
CN103810500B (en) * 2014-02-25 2017-04-05 北京工业大学 Place image recognition method based on a supervised learning probability topic model
CN104616026A (en) * 2015-01-20 2015-05-13 衢州学院 Monitoring scene type identification method for intelligent video surveillance
CN104616026B (en) * 2015-01-20 2017-12-12 衢州学院 Monitoring scene type identification method for intelligent video surveillance
CN106295653A (en) * 2016-07-29 2017-01-04 宁波大学 Water quality image classification method
CN106295653B (en) * 2016-07-29 2020-03-31 宁波大学 Water quality image classification method
CN108898587A (en) * 2018-06-19 2018-11-27 Oppo广东移动通信有限公司 Image processing method, image processing apparatus and terminal device
CN108898169A (en) * 2018-06-19 2018-11-27 Oppo广东移动通信有限公司 Image processing method, image processing apparatus and terminal device
CN110967674A (en) * 2018-09-29 2020-04-07 杭州海康威视数字技术股份有限公司 Failure detection method and device for a vehicle-mounted radar array antenna, and vehicle-mounted radar
CN110967674B (en) * 2018-09-29 2022-03-01 杭州海康威视数字技术股份有限公司 Failure detection method and device for a vehicle-mounted radar array antenna, and vehicle-mounted radar
CN110705360A (en) * 2019-09-05 2020-01-17 上海零眸智能科技有限公司 Method for efficient human-machine collaborative processing of classification data
CN111611919A (en) * 2020-05-20 2020-09-01 西安交通大学苏州研究院 Road scene layout analysis method based on structured learning
CN111709388A (en) * 2020-06-23 2020-09-25 中国科学院空天信息创新研究院 Method and system for extracting emergency water source areas under drought conditions
CN111709388B (en) * 2020-06-23 2023-05-12 中国科学院空天信息创新研究院 Method and system for extracting emergency water source areas under drought conditions

Similar Documents

Publication Publication Date Title
CN103440501A (en) Scene classification method based on nonparametric space judgment hidden Dirichlet model
CN105956560B (en) Vehicle model recognition method based on pooled multi-scale deep convolutional features
CN109034210A (en) Object detection method based on hyper-feature fusion and a multi-scale pyramid network
CN104050247B (en) Method for fast retrieval of massive video
CN105718532B (en) Cross-media ranking method based on multiple deep network structures
CN104966104A (en) Video classification method based on three-dimensional convolutional neural networks
CN103440471B (en) Human action recognition method based on low-rank representation
CN103824079B (en) Image classification method based on multi-level pattern sub-block division
CN104112143A (en) Image classification method based on a weighted hypersphere support vector machine
CN110414600A (en) Small-sample space target recognition method based on transfer learning
CN105868711B (en) Human behavior recognition method based on sparse low-rank representation
CN110532946A (en) Method for identifying axle types of commercial green-channel vehicles based on convolutional neural networks
CN107767416A (en) Method for recognizing pedestrian orientation in low-resolution images
CN105868706A (en) 3D model recognition method based on sparse coding
CN111414958B (en) Multi-feature image classification method and system based on a visual bag-of-words pyramid
CN107133640A (en) Image classification method based on local image patch description and Fisher vectors
CN105913083A (en) SAR image classification method based on dense SAR-SIFT and sparse coding
CN105160290A (en) Action recognition method based on motion boundary sampling with improved dense trajectories
CN104298977A (en) Human behavior recognition method based on low-rank representation with incoherence constraints
Wang et al. Basketball shooting angle calculation and analysis by deeply-learned vision model
CN106156798A (en) Scene image classification method based on a ring spatial pyramid and multiple kernel learning
Wang et al. Spatial weighting for bag-of-features based image retrieval
CN107273824A (en) Face recognition method based on multi-scale multi-orientation local binary patterns
CN107316005A (en) Action recognition method based on dense trajectory kernel covariance descriptors
CN109255339B (en) Gait classification method based on an adaptive deep forest and gait energy images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 2013-12-11