CN108256572B - Indoor visual feature classification method based on improved naive Bayes

Info

Publication number
CN108256572B
Authority
CN
China
Prior art keywords
surf
classification
naive bayes
video database
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810040937.XA
Other languages
Chinese (zh)
Other versions
CN108256572A (en)
Inventor
殷锡亮
郭娜
朱娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Vocational and Technical College
Original Assignee
Harbin Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Vocational and Technical College filed Critical Harbin Vocational and Technical College
Priority to CN201810040937.XA
Publication of CN108256572A
Application granted
Publication of CN108256572B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36 Indoor scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/478 Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An indoor visual feature classification method based on improved naive Bayes belongs to the technical field of indoor positioning image processing and in particular relates to an indoor feature classification method. First, feature extraction is performed on the video database images using the SURF algorithm; an iterative comparison algorithm is then applied to classify the extracted features and generate a SURF feature tree. A dimension selection algorithm next selects those dimensions of the SURF features of the video database images for which the variance of the per-class dimension means exceeds a threshold, and an improved naive Bayes algorithm model is generated from them. SURF features are then extracted from the user positioning picture and input into the improved naive Bayes classification algorithm model to obtain the classification of the user positioning picture's SURF features. The invention solves the problems of poor matching precision and slow matching speed in conventional indoor visual feature classification methods and can be used in indoor visual positioning systems.

Description

Indoor visual feature classification method based on improved naive Bayes
Technical Field
The invention belongs to the technical field of indoor positioning image processing, and particularly relates to an indoor feature classification method.
Background
In the field of visual positioning within image processing, positioning must exploit the rich information contained in images, and every type of visual indoor positioning method involves an accurate positioning stage. Existing accurate positioning procedures generally compare Euclidean distances between feature vectors with respect to reference frames; lacking an effective strategy for selecting those reference frames, this approach suffers large errors and poor matching precision. Feature classification based on a naive Bayes classifier can select a reference frame with smaller error, but its positioning effect is unstable, and its classification time depends on the number of sample classes and sample dimensions: when many features and dimensions must be classified, matching slows down, so indoor visual positioning is slow as well.
Disclosure of Invention
The invention provides an indoor visual feature classification method based on improved naive Bayes, which aims to solve the problems of poor matching precision and low matching speed of the traditional indoor visual feature classification method.
The invention relates to an indoor visual feature classification method based on improved naive Bayes, which is realized by the following technical scheme:
step one: performing feature extraction on the video database images by applying the SURF algorithm;
step two: classifying SURF characteristics of the video database images extracted in the first step by applying an iterative comparison algorithm, and generating a SURF characteristic tree;
step three: applying a dimension selection algorithm to select the dimensions of the SURF features of the video database images for which the variance of the per-class dimension means exceeds a threshold;
step four: generating an improved naive Bayes algorithm model according to the dimension selected in the third step;
step five: extracting SURF characteristics from the user positioning picture by using a SURF algorithm;
step six: inputting the extracted SURF characteristics of the user positioning pictures into an improved naive Bayes classification algorithm model to obtain the classification of the SURF characteristics of the user positioning pictures.
Compared with the prior art, the most prominent characteristics and notable beneficial effects of the invention are: when the method is used for accurate indoor visual positioning, less time is required and accuracy is higher; no requirement is placed on the offline database within the same positioning scene; and positioning is faster under a coarse-to-fine matching scheme. In the offline stage, the SURF algorithm, the iterative comparison algorithm, the feature dimension selection algorithm and the naive Bayes algorithm are used to generate the improved naive Bayes model; in the online stage, the SURF features of the user positioning image are classified and the reference data frames required for subsequent positioning are determined. The algorithm further reduces the time consumed by a naive Bayes classifier and can also serve as a classifier for other data.
When the method is used for accurate positioning, the required time drops to about one fifth of that of the naive Bayes classification algorithm; the accumulated classification error is markedly reduced in the 256-, 128- and 64-frame segment clusters and slightly reduced in the 32-frame segment cluster.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph comparing the time required for a naive Bayes classification algorithm to perform image matching with the method of the present invention in 256-frame clusters;
FIG. 3 is a graph comparing the time required for a naive Bayes classification algorithm to perform image matching with the method of the present invention in a 128-frame cluster;
FIG. 4 is a graph comparing the time required for a naive Bayes classification algorithm to perform image matching with the method of the present invention in 64-frame clusters;
FIG. 5 is a graph comparing the time required for a naive Bayes classification algorithm to perform image matching with the method of the present invention in a 32-frame cluster;
FIG. 6 is a graph of cumulative error for SURF feature classification in a 256-frame cluster using a naive Bayes classification algorithm with the method of the present invention;
FIG. 7 is a graph of cumulative error for SURF feature classification in a 128-frame cluster using a naive Bayes classification algorithm with the method of the present invention;
FIG. 8 is a graph of the cumulative error for SURF feature classification in a 64-frame cluster using a naive Bayes classification algorithm with the method of the present invention.
Detailed Description
The first embodiment: as shown in fig. 1, the indoor visual feature classification method based on improved naive Bayes of this embodiment is specifically implemented according to the following steps:
step one: performing feature extraction on the video database images by applying the SURF algorithm;
step two: classifying SURF characteristics of the video database images extracted in the first step by applying an iterative comparison algorithm, and generating a SURF characteristic tree;
step three: applying a dimension selection algorithm to select the dimensions of the SURF features of the video database images for which the variance of the per-class dimension means exceeds a threshold;
step four: generating an improved naive Bayes algorithm model according to the dimension selected in the third step;
step five: extracting SURF characteristics from the user positioning picture by using a SURF algorithm;
step six: inputting the extracted SURF characteristics of the user positioning pictures into an improved naive Bayes classification algorithm model to obtain the classification of the SURF characteristics of the user positioning pictures;
and each feature classification is then cross-referenced against the database frames, with the frame in which the classified features occur most often selected as the reference frame for the accurate positioning process of indoor visual positioning; one reading of this vote is sketched below.
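The patent gives no code for this vote; the following Python sketch shows one reasonable reading, assuming the SURF feature tree of step two is stored as a mapping from classification label to the frame numbers of its branch (all names are illustrative):

```python
from collections import Counter

def select_reference_frame(feature_tree, predicted_labels):
    """Vote for the database frame in which the predicted features occur most often."""
    votes = Counter()
    for label in predicted_labels:
        votes.update(feature_tree.get(label, []))   # frame numbers of that branch
    frame, _count = votes.most_common(1)[0]         # most frequent frame wins
    return frame
```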
The second embodiment: this embodiment differs from the first embodiment in that: in step one, the process of applying the SURF algorithm to extract features from the video database images comprises:
Step 1-1, feature point detection:
A scale space is constructed by convolving box filters of different scales with the video database images and building a scale-space pyramid, forming the multi-scale functions D_xx, D_yy, D_xy: D_xx is the result of convolving a point on the video database image with the second-order Gaussian partial derivative ∂²g(σ)/∂x², D_yy the result of convolving it with ∂²g(σ)/∂y², and D_xy the result of convolving it with ∂²g(σ)/∂x∂y, where x is the abscissa of a point on the video database image, y its ordinate, and g(σ) the Gaussian kernel function.
After the scale-space pyramid is constructed, the local extremum detH at a given scale is obtained from:
detH = D_xx × D_yy - (0.9 × D_xy)² (1)
Non-maximum suppression is performed on the points of the video database image within a 3 × 3 × 3 stereo neighborhood; the qualifying points are kept as feature points, and their positions and sizes are stored.
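As a minimal numerical sketch of equation (1), assuming the box-filter responses D_xx, D_yy, D_xy have already been computed for one scale (as arrays or scalars):

```python
def hessian_response(Dxx, Dyy, Dxy):
    """Approximate determinant of the Hessian, equation (1); the 0.9 weight
    compensates for the box-filter approximation of the Gaussian."""
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```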
Step 1-2, feature point description:
After the positions of the feature points are determined, the dominant orientation of each feature point is determined with Haar wavelets to ensure rotation and scale invariance: within a circular region around the point, the Haar wavelet responses in the x and y directions are computed over each sector, and the sector direction with the largest modulus is taken as the dominant orientation.
The resulting description information constitutes the SURF features extracted from the video database images.
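In practice this extraction step is provided by standard SURF implementations; a minimal sketch using OpenCV follows, assuming opencv-contrib-python built with the nonfree module enabled (the file path and Hessian threshold are illustrative):

```python
import cv2

def extract_surf_features(image_path, hessian_threshold=400):
    """Detect SURF keypoints and return them with their 64-dim descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Box-filter Hessian detection, 3x3x3 non-maximum suppression and Haar
    # orientation assignment all happen inside the library call.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(img, None)
    return keypoints, descriptors
```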
Other steps and parameters are the same as those in the first embodiment.
The third embodiment: this embodiment differs from the second embodiment in that: in step five, the process of extracting SURF features from the user positioning picture is the same as the SURF feature extraction applied to the video database images in step one.
Other steps and parameters are the same as those in the first or second embodiment.
The fourth embodiment: this embodiment differs from the first embodiment in that: in step two, the specific process of applying the iterative comparison algorithm to classify the SURF features of the video database images extracted in step one and generate the SURF feature tree comprises:
Step 2-1, classify and label the SURF features of the first frame of the video database images;
Step 2-2, iterate from the second frame of the video database image sequence onward:
compare all features in the current frame with the features of all previous frames; if a feature in the current frame matches any feature of the previous frames, label it with the matched classification label; if it matches none of the previous features, label it L + 1, where L is the largest classification label among all previous features;
Step 2-3, after the iteration completes, the frame numbers of the SURF features sharing a classification label form the branches of the SURF feature tree; a sketch follows.
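A minimal Python sketch of steps 2-1 to 2-3 follows; the L2 distance threshold and the exact matching rule are illustrative assumptions, since the text only states that a feature either matches an earlier feature or receives the new label L + 1:

```python
import numpy as np

def build_surf_feature_tree(frames, match_threshold=0.25):
    """frames: list of (N_i x 64) SURF descriptor arrays, one per video frame."""
    tree = {}                   # classification label -> frame numbers (a branch)
    labels_per_frame = []       # classification label of every feature
    bank, bank_labels = [], []  # descriptors/labels of all previous frames
    next_label = 0
    for frame_idx, descs in enumerate(frames):
        prev = np.asarray(bank) if bank else None
        frame_labels = []
        for d in descs:
            matched = False
            if prev is not None:
                dists = np.linalg.norm(prev - d, axis=1)
                j = int(np.argmin(dists))
                matched = dists[j] < match_threshold
            if matched:
                label = bank_labels[j]     # reuse the matched classification label
            else:
                label = next_label         # no match in previous frames: L + 1
                next_label += 1
            frame_labels.append(label)
            tree.setdefault(label, []).append(frame_idx)
        bank.extend(descs)
        bank_labels.extend(frame_labels)
        labels_per_frame.append(frame_labels)
    return tree, labels_per_frame
```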
Other steps and parameters are the same as those in the first, second or third embodiment.
The fifth embodiment: this embodiment differs from the fourth embodiment in that: the specific process of step three comprises:
Step 3-1, multiply the SURF features of the video database images extracted in step one by an amplification factor;
Step 3-2, compute the feature mean of each classification, dimension by dimension;
Step 3-3, compute, for each dimension, the variance of the feature means across the different classifications;
Step 3-4, select the n dimensions for which the result of step 3-3 exceeds the variance threshold as the classification-probability dimensions of the naive Bayes classifier, denoted x_1, x_2, ..., x_n, with n ≤ 64. A sketch of these four sub-steps follows.
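A minimal Python sketch of steps 3-1 to 3-4, using the amplification factor 1000 and variance threshold 0.05 given in the later embodiments:

```python
import numpy as np

def select_dimensions(features, labels, amplification=1000.0, var_threshold=0.05):
    """Return the indices x_1..x_n of the classification-probability dimensions."""
    feats = np.asarray(features, dtype=np.float64) * amplification   # step 3-1
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Step 3-2: per-classification, per-dimension feature means.
    class_means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # Step 3-3: variance of those means across classifications, per dimension.
    dim_variance = class_means.var(axis=0)
    # Step 3-4: keep the n <= 64 dimensions exceeding the threshold.
    return np.where(dim_variance > var_threshold)[0]
```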
Other steps and parameters are the same as those of the first, second, third, or fourth embodiments.
The sixth embodiment: this embodiment differs from the fifth embodiment in that: in step four, the specific steps for generating the improved naive Bayes algorithm model are:
According to the naive Bayes model, equation (2), the probability under each classification is computed:
P(C|F_1 F_2 … F_64) = P(F_1 F_2 … F_64|C) P(C) / P(F_1 F_2 … F_64) (2)
where P(·) is a probability function, C denotes a classification, and F_1, F_2, …, F_64 denote the feature values of the 1st, 2nd, …, 64th dimensions; P(C|F_1 F_2 … F_64) and P(F_1 F_2 … F_64|C) are conditional probabilities, the former being the probability of C given F_1 F_2 … F_64 and the latter the probability of F_1 F_2 … F_64 given C.
P(F_1 F_2 … F_64) is a constant independent of the classification, and the dimensions of the SURF features are assumed conditionally independent given the classification, so equation (2) reduces, via equation (3), to equation (4) (equality up to that constant):
P(F_1 F_2 … F_64|C) = P(F_1|C) P(F_2|C) … P(F_64|C) (3)
P(C|F_1 F_2 … F_64) ∝ P(F_1|C) P(F_2|C) … P(F_64|C) P(C) (4)
The probability density in each dimension is taken to obey a Gaussian distribution, so the probability density value under each classification can be determined from the n dimensions exceeding the variance threshold in step 3-4; equation (4) thus yields equation (5), the improved naive Bayes algorithm model:
P(C|F_x1 F_x2 … F_xn) ∝ P(F_x1|C) P(F_x2|C) … P(F_xn|C) P(C) (5)
where F_x1 denotes the feature value of the x_1-th dimension, F_x2 that of the x_2-th dimension, …, and F_xn that of the x_n-th dimension. A sketch follows.
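A minimal Python sketch of equation (5) follows; logarithms replace the raw product purely for numerical stability, the class priors P(C) are estimated from label frequencies (an assumption the text does not spell out), and the variance floor anticipates the eighth embodiment below:

```python
import numpy as np

class ImprovedNaiveBayes:
    """Gaussian naive Bayes restricted to the selected dimensions x_1..x_n."""

    def fit(self, feats, labels, selected, var_floor=1e-5):
        feats, labels = np.asarray(feats, dtype=np.float64), np.asarray(labels)
        self.selected, self.classes = selected, np.unique(labels)
        self.mean, self.var, self.prior = [], [], []
        for c in self.classes:
            sub = feats[labels == c][:, selected]
            var = sub.var(axis=0)
            # 0.00001 floor for single-feature classes / constant dimensions.
            self.var.append(np.where(var <= 0.0, var_floor, var))
            self.mean.append(sub.mean(axis=0))
            self.prior.append(len(sub) / len(labels))  # assumed P(C): frequency
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=np.float64)[self.selected]
        best, best_lp = None, -np.inf
        for c, m, v, p in zip(self.classes, self.mean, self.var, self.prior):
            # log of P(F_x1|C) ... P(F_xn|C) P(C) with Gaussian densities, eq. (5)
            lp = np.log(p) - 0.5 * np.sum(np.log(2.0 * np.pi * v) + (x - m) ** 2 / v)
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```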
Other steps and parameters are the same as those in the first, second, third, fourth or fifth embodiment.
The seventh embodiment: this embodiment differs from the fifth embodiment in that: in step 3-4, the variance threshold is 0.05.
Other steps and parameters are the same as those in the first, second, third, fourth, fifth or sixth embodiment.
The eighth embodiment: this embodiment differs from the first embodiment in that: in step three, for a classification containing only one feature, the variance of the feature mean is set to 0.00001; and when all features under the same classification take the same value in some dimension, the variance of that dimension is likewise set to 0.00001, which avoids degenerate Gaussian densities in equation (5); the rule is sketched below.
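Expressed as a standalone rule (a minimal sketch; the 0.00001 floor is the value stated in this embodiment):

```python
import numpy as np

def floor_variance(var, floor=1e-5):
    """Apply the 0.00001 variance floor to zero-variance entries: classes
    with a single feature, or dimensions constant within a class."""
    var = np.asarray(var, dtype=np.float64)
    return np.where(var <= 0.0, floor, var)
```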
Other steps and parameters are the same as those of the first, second, third, fourth, fifth, sixth or seventh embodiments.
The ninth embodiment: this embodiment differs from the first embodiment in that: in step three, the amplification factor is 1000.
Other steps and parameters are the same as those of the first, second, third, fourth, fifth, sixth, seventh or eighth embodiments.
Examples
The following examples were used to demonstrate the beneficial effects of the present invention:
1. In area 2A on the 12th floor of the science building of Harbin Institute of Technology, video of the positioning area was captured with video acquisition equipment.
2. Features were extracted from the collected video files with the SURF algorithm; the SURF features were classified with the iterative comparison algorithm and the SURF feature tree was generated; the dimension selection algorithm selected 10 SURF feature dimensions whose variance of the per-class dimension means exceeded the threshold 0.05; and the improved naive Bayes algorithm model of equation (5) was generated.
3. The user inputs a picture to be positioned; its SURF (speeded-up robust) features are extracted, and the improved naive Bayes model outputs the predicted feature classifications and the positioning reference frame, for example as in the end-to-end sketch below.
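Tying the sketches above together, a hedged end-to-end sequence for the offline and online stages might look as follows (database_frame_paths and the query file name are illustrative):

```python
import numpy as np

# Offline stage: build the model from the video database (paths illustrative).
frames = [extract_surf_features(p)[1] for p in database_frame_paths]
tree, labels_per_frame = build_surf_feature_tree(frames)
feats = np.vstack(frames)
labels = np.concatenate([np.asarray(l) for l in labels_per_frame])
selected = select_dimensions(feats, labels)       # 10 dimensions in this example
model = ImprovedNaiveBayes().fit(feats, labels, selected)

# Online stage: classify the query picture's SURF features, then vote.
query_descs = extract_surf_features("user_query.jpg")[1]
predicted = [model.predict(d) for d in query_descs]
reference_frame = select_reference_frame(tree, predicted)
```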
4. As shown in fig. 2, fig. 3, fig. 4 and fig. 5, comparing the time consumption of the standard naive Bayes classification algorithm with that of the proposed algorithm under the different segment clusters shows that the classification time required by the method of the invention falls to about one fifth of that required by the naive Bayes classification algorithm.
5. Comparing the classification results with those of the naive Bayes classification algorithm gives the cumulative-error curves of fig. 6, fig. 7 and fig. 8: the accumulated classification error is markedly reduced in the 256-, 128- and 64-frame segment clusters and only slightly reduced in the 32-frame segment cluster.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (8)

1. An indoor visual feature classification method based on improved naive Bayes, characterized in that the method is specifically carried out according to the following steps:
step one: performing feature extraction on the video database images by applying the SURF algorithm;
step two: classifying SURF characteristics of the video database images extracted in the first step by applying an iterative comparison algorithm, and generating a SURF characteristic tree;
step three: applying a dimension selection algorithm to select the dimensions of the SURF features of the video database images for which the variance of the dimension means exceeds a threshold, the specific process comprising:
step 3-1: multiplying the SURF features of the video database images extracted in step one by an amplification factor;
step 3-2: computing the feature mean of each classification, dimension by dimension;
step 3-3: computing, for each dimension, the variance of the feature means across the different classifications;
step 3-4: selecting the n dimensions exceeding the variance threshold as the classification-probability dimensions of the naive Bayes classifier, denoted x_1, x_2, ..., x_n, with n ≤ 64;
Step four: generating an improved naive Bayes algorithm model according to the dimension selected in the third step;
step five: extracting SURF characteristics from the user positioning picture by using a SURF algorithm;
step six: and inputting the extracted SURF characteristics of the user positioning pictures into an improved naive Bayes classification algorithm model to obtain the classification of the SURF characteristics of the user positioning pictures.
2. The indoor visual feature classification method based on improved naive Bayes according to claim 1, wherein in step one, the process of applying the SURF algorithm to extract features from the video database images comprises:
step 1-1, feature point detection:
a scale space is constructed by convolving box filters of different scales with the video database images and building a scale-space pyramid, forming the multi-scale functions D_xx, D_yy, D_xy: D_xx is the result of convolving a point on the video database image with the second-order Gaussian partial derivative ∂²g(σ)/∂x², D_yy the result of convolving it with ∂²g(σ)/∂y², and D_xy the result of convolving it with ∂²g(σ)/∂x∂y, where x is the abscissa of a point on the video database image, y its ordinate, and g(σ) the Gaussian kernel function;
after the scale-space pyramid is constructed, the local extremum detH at a given scale is obtained from:
detH = D_xx × D_yy - (0.9 × D_xy)² (1)
non-maximum suppression is performed on the points of the video database image within a 3 × 3 × 3 stereo neighborhood, the qualifying points are kept as feature points, and their positions and sizes are stored;
step 1-2, feature point description:
after the positions of the feature points are determined, the dominant orientation of each feature point is determined with Haar wavelets.
3. The indoor visual feature classification method based on improved naive Bayes according to claim 2, wherein in step five, the process of extracting SURF features from the user positioning picture is the same as the SURF feature extraction applied to the video database images in step one.
4. The indoor visual feature classification method based on improved naive Bayes according to claim 1, 2 or 3, wherein in step two, the specific process of applying the iterative comparison algorithm to classify the SURF features of the video database images extracted in step one and generating the SURF feature tree comprises:
step 2-1, classifying and labeling the SURF features of the first frame of the video database images;
step 2-2, iterating from the second frame of the video database image sequence onward:
comparing all features in the current frame with the features of all previous frames; if a feature in the current frame matches any feature of the previous frames, labeling it with the matched classification label; if it matches none of the previous features, labeling it L + 1, where L is the largest classification label among all previous features;
step 2-3, after the iteration completes, the frame numbers of the SURF features sharing a classification label forming the branches of the SURF feature tree.
5. The indoor visual feature classification method based on improved naive Bayes according to claim 4, wherein in step four, the specific steps of generating the improved naive Bayes algorithm model are:
according to the naive Bayes model, equation (2), the probability under each classification is computed:
P(C|F_1 F_2 … F_64) = P(F_1 F_2 … F_64|C) P(C) / P(F_1 F_2 … F_64) (2)
where P(·) is a probability function, C denotes a classification, and F_1, F_2, …, F_64 denote the feature values of the 1st, 2nd, …, 64th dimensions; P(C|F_1 F_2 … F_64) and P(F_1 F_2 … F_64|C) are conditional probabilities, the former being the probability of C given F_1 F_2 … F_64 and the latter the probability of F_1 F_2 … F_64 given C;
P(F_1 F_2 … F_64) is a constant independent of the classification, and the dimensions of the SURF features are assumed conditionally independent given the classification, so equation (2) reduces, via equation (3), to equation (4) (equality up to that constant):
P(F_1 F_2 … F_64|C) = P(F_1|C) P(F_2|C) … P(F_64|C) (3)
P(C|F_1 F_2 … F_64) ∝ P(F_1|C) P(F_2|C) … P(F_64|C) P(C) (4)
the probability density value under each classification is determined from the n dimensions exceeding the variance threshold in step 3-4, so equation (4) yields equation (5), the improved naive Bayes algorithm model:
P(C|F_x1 F_x2 … F_xn) ∝ P(F_x1|C) P(F_x2|C) … P(F_xn|C) P(C) (5)
where F_x1 denotes the feature value of the x_1-th dimension, F_x2 that of the x_2-th dimension, …, and F_xn that of the x_n-th dimension.
6. The indoor visual feature classification method based on improved naive Bayes according to claim 1, wherein in step 3-4, the variance threshold is 0.05.
7. The indoor visual feature classification method based on improved naive Bayes according to claim 1, wherein in step three, for a classification containing only one feature, the variance of the feature mean is set to 0.00001, and when all features under the same classification take the same value in some dimension, the variance of that dimension is set to 0.00001.
8. The indoor visual feature classification method based on improved naive Bayes according to claim 1, wherein in step three, the amplification factor is 1000.
CN201810040937.XA 2018-01-16 2018-01-16 Indoor visual feature classification method based on improved naive Bayes Active CN108256572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810040937.XA CN108256572B (en) 2018-01-16 2018-01-16 Indoor visual feature classification method based on improved naive Bayes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810040937.XA CN108256572B (en) 2018-01-16 2018-01-16 Indoor visual feature classification method based on improved naive Bayes

Publications (2)

Publication Number Publication Date
CN108256572A CN108256572A (en) 2018-07-06
CN108256572B (granted) 2022-04-19

Family

ID=62741285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810040937.XA Active CN108256572B (en) 2018-01-16 2018-01-16 Indoor visual feature classification method based on improved naive Bayes

Country Status (1)

Country Link
CN (1) CN108256572B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739066B (en) * 2020-07-27 2020-12-22 深圳大学 Visual positioning method, system and storage medium based on Gaussian process

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663436A (en) * 2012-05-03 2012-09-12 武汉大学 Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images
CN106157330A (en) * 2016-07-01 2016-11-23 广东技术师范学院 A kind of visual tracking method based on target associating display model
CN104680554B (en) * 2015-01-08 2017-10-31 深圳大学 Compression tracking and system based on SURF

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881646B (en) * 2015-05-26 2018-08-03 重庆金山科技(集团)有限公司 The method that WCE video segmentations extract notable feature information
CN104933733A (en) * 2015-06-12 2015-09-23 西北工业大学 Target tracking method based on sparse feature selection
WO2018013438A1 (en) * 2016-07-09 2018-01-18 Grabango Co. Visually automated interface integration

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663436A (en) * 2012-05-03 2012-09-12 武汉大学 Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images
CN104680554B (en) * 2015-01-08 2017-10-31 深圳大学 Compression tracking and system based on SURF
CN106157330A (en) * 2016-07-01 2016-11-23 广东技术师范学院 A kind of visual tracking method based on target associating display model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Research of vehicle recognition method under SURF feature and Bayesian Model";TANG Zhi-Wei.et al;<2013 Sixth International Symposium on Computational Intelligence and Design>;20140424;全文 *
《基于快速鲁棒特征集合统计特征的图像分类方法》;王澍等;《计算机应用》;20150110;第35卷(第1期);全文 *

Also Published As

Publication number Publication date
CN108256572A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN107515895B (en) Visual target retrieval method and system based on target detection
CN108154118B (en) A kind of target detection system and method based on adaptive combined filter and multistage detection
CN108009559B (en) Hyperspectral data classification method based on space-spectrum combined information
CN113408605B (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN107368807B (en) Monitoring video vehicle type classification method based on visual word bag model
CN108038435B (en) Feature extraction and target tracking method based on convolutional neural network
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
Xia et al. Loop closure detection for visual SLAM using PCANet features
CN111241931A (en) Aerial unmanned aerial vehicle target identification and tracking method based on YOLOv3
CN107480585B (en) Target detection method based on DPM algorithm
CN113592923B (en) Batch image registration method based on depth local feature matching
CN110443279B (en) Unmanned aerial vehicle image vehicle detection method based on lightweight neural network
CN106778768A (en) Image scene classification method based on multi-feature fusion
CN107944459A (en) A kind of RGB D object identification methods
CN110110618B (en) SAR target detection method based on PCA and global contrast
CN106934398B (en) Image de-noising method based on super-pixel cluster and rarefaction representation
CN111652273A (en) Deep learning-based RGB-D image classification method
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN115147632A (en) Image category automatic labeling method and device based on density peak value clustering algorithm
CN109740552A (en) A kind of method for tracking target based on Parallel Signature pyramid neural network
CN116363535A (en) Ship detection method in unmanned aerial vehicle aerial image based on convolutional neural network
CN111882000A (en) Network structure and method applied to small sample fine-grained learning
CN116796248A (en) Forest health environment assessment system and method thereof
CN113313179A (en) Noise image classification method based on l2p norm robust least square method
CN108256572B (en) Indoor visual feature classification method based on improved naive Bayes

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant