CN102663401A - Image characteristic extracting and describing method - Google Patents

Image characteristic extracting and describing method

Info

Publication number
CN102663401A
CN102663401A, CN102663401B (application CN201210114061A)
Authority
CN
China
Prior art keywords
image
describing method
parameter
characteristic
characteristics extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101140611A
Other languages
Chinese (zh)
Other versions
CN102663401B (en)
Inventor
赵春晖
王莹
齐滨
王立国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN2012101140611A priority Critical patent/CN102663401B/en
Publication of CN102663401A publication Critical patent/CN102663401A/en
Application granted granted Critical
Publication of CN102663401B publication Critical patent/CN102663401B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to the fields of image processing and computer vision, and provides an image feature extraction and description method suited to the BoW (Bag of Words) model as used in computer vision. The method comprises the following steps: judging the format of an input image, leaving it unchanged if it is a grayscale image and converting it to the HSV (Hue, Saturation, Value) model otherwise; selecting scale parameters; extracting feature points of the image at equal pixel intervals by uniform sampling according to the selected scale parameters, and computing DF-SIFT (Dense Fast Scale-Invariant Feature Transform) descriptors for the H (Hue), S (Saturation), and V (Value) channels so that color information is applied to the classification task, with the sampling density controlled by a step parameter to obtain the dense features of the image; and describing the dense features. Dense sampling makes the visual dictionary more accurate and reliable, and replacing the convolution of the image with a Gaussian kernel by bilinear interpolation makes the implementation simpler and more efficient.

Description

Image feature extraction and description method
Technical field
The present invention relates to the fields of image processing and computer vision, and specifically provides an image feature extraction and description method suitable for the BoW (Bag of Words) model as applied in computer vision.
Background art
Image classification, as a basic application of image processing, has long received wide attention from experts, scholars, and engineers. The BoW model was first applied to document processing, where a document is represented as an order-independent combination of keywords and matched by counting the frequency with which each keyword occurs. In recent years, researchers in computer vision have successfully transplanted the idea of this model to image processing: features are extracted from an image and described, a large number of features is processed to obtain the "words" that represent the image, and a visual dictionary is built on this basis; an image to be classified is then processed in the same way, and the result is fed into a trained classifier. The most critical steps in this model are feature extraction and description. The classic approach applies the Scale-Invariant Feature Transform (SIFT) to the BoW model; however, the SIFT descriptor extracts and describes only the invariant feature points of an image, so information loss and omission are unavoidable. Applying the SIFT descriptor also requires the image to be in a fairly standard form, for example large enough and with the key object occupying a sufficient proportion of the frame, to guarantee that enough feature points are extracted for subsequent matching. Moreover, the extraction and description of feature points are highly complex and consume a large amount of computing time, a further disadvantage for image recognition and classification tasks. In the BoW model, a clustering method generates the visual words after the feature extraction step; if that step does not provide sufficiently rich information, the representativeness of the generated visual words suffers directly, and so does the subsequent classification accuracy. Researchers have therefore worked on improving the SIFT descriptor or replacing it with new descriptors in the BoW model. For example, the PCA-SIFT descriptor transforms the original data into a new coordinate system through an orthogonal matrix, converting high-dimensional data to low-dimensional data and reducing computational complexity. The Speeded-Up Robust Features (SURF) descriptor likewise improves on SIFT, with higher efficiency and stronger robustness.
Summary of the invention
The object of the present invention is to provide a DF-SIFT (Dense Fast-SIFT) image feature extraction and description method that is more efficient when applied to the BoW model.
This object is achieved as follows:
An image feature extraction and description method of the present invention comprises:
(1) judging the format of the input image: if it is a grayscale image, no processing is applied; if it is not, it is converted to the HSV model;
(2) selecting scale parameters;
(3) extracting feature points of the image at equal pixel intervals by uniform sampling according to the selected scale parameters, and computing the DF-SIFT descriptors of the image's H, S, and V channels, so that color information is applied to the classification task; the sampling density is controlled by a step parameter, yielding the dense features of the image;
(4) describing the dense features.
The model of the DF-SIFT descriptor for the H, S, and V channels of the image is:

h = 0°, if max = min,
h = 60° × (g − b)/(max − min) + 0°, if max = r and g ≥ b,
h = 60° × (g − b)/(max − min) + 360°, if max = r and g < b,
h = 60° × (b − r)/(max − min) + 120°, if max = g,
h = 60° × (r − g)/(max − min) + 240°, if max = b,

s = 0, if max = 0, and s = (max − min)/max = 1 − min/max otherwise,

v = max,

where h represents hue, s represents saturation, and v represents value (brightness); max denotes the maximum and min the minimum of the three components r, g, and b.
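The conversion above can be sketched in code. This is an illustrative implementation of the standard RGB-to-HSV formulas for a single pixel with components in [0, 1], not code taken from the patent; the r-maximum cases are folded into one branch via a modulo.

```python
def rgb_to_hsv(r, g, b):
    """Convert normalized RGB in [0, 1] to HSV.

    Returns h in degrees [0, 360), and s and v in [0, 1].
    """
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    # Hue: piecewise on which channel attains the maximum.
    if d == 0:
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / d) % 360.0   # covers both g >= b and g < b
    elif mx == g:
        h = 60.0 * (b - r) / d + 120.0
    else:
        h = 60.0 * (r - g) / d + 240.0
    # Saturation: s = 1 - min/max, with s = 0 when max = 0.
    s = 0.0 if mx == 0 else 1.0 - mn / mx
    # Value: v = max.
    return h, s, mx
```

For pure red, `rgb_to_hsv(1.0, 0.0, 0.0)` gives hue 0°, saturation 1, and value 1, matching the formulas above.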
Describing the dense features comprises:
(1) adjusting the orientation of each feature to 0°;
(2) constructing a circular region centered on the feature point with the unified scale as its radius, and dividing the pixels falling in this circular region into 4 × 4 non-overlapping subregions;
(3) computing gradient values in each subregion along the eight directions 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°;
(4) weighting each subregion uniformly, applying the mean of a Gaussian function as the weight of each subregion, and completing the accumulation of gradients with bilinear interpolation; each feature is described by a vector of 4 × 4 × 8 = 128 dimensions.
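The four steps above can be sketched as follows. This is a simplified illustration over a square patch from one HSV channel: it bins gradients into 4 × 4 spatial cells and 8 orientation bins with uniform spatial weighting, and omits the orientation adjustment and the Gaussian-mean weighting; it is not the patent's implementation.

```python
import numpy as np

def df_sift_descriptor(patch):
    """Sketch of the 4 x 4 x 8 = 128-dimensional descriptor.

    `patch` is a square grayscale array (one HSV channel) centred on a
    feature point. Each pixel's gradient magnitude votes into one of 8
    orientation bins (45-degree sectors) of its 4 x 4 spatial cell.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 360.0   # orientation in [0, 360)
    obin = (ang // 45).astype(int) % 8             # eight 45-degree bins
    n = patch.shape[0]
    cell = n / 4.0
    desc = np.zeros((4, 4, 8))
    for i in range(n):
        for j in range(n):
            desc[min(int(i / cell), 3), min(int(j / cell), 3), obin[i, j]] += mag[i, j]
    desc = desc.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc       # 128-dim unit vector
```

Applied to a 16 × 16 patch, the function returns a normalized 128-dimensional vector, the same shape as the descriptor in step (4).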
After the dense features are described, the accuracy results under different step parameters are collected to obtain the optimal step parameter, and the dense features of the image are extracted and described again with the optimal step parameter.
The scale parameters are 4, 6, 8, and 10.
The beneficial effects of the present invention are:
The feature extraction and description method of the present invention adopts a simplified feature-point extraction scheme and, through dense sampling, obtains features that richly characterize the image information, so that the visual dictionary generated by clustering is more accurate and reliable; a multi-scale description scheme guarantees the scale invariance of the image features. The introduction of color information makes the use of image information more complete, providing more comprehensive and accurate feature information for the subsequent classification and recognition stages.
A rectangular window replaces the Gaussian window used for smoothing in the traditional SIFT descriptor, and bilinear interpolation replaces the convolution of the image with a Gaussian kernel, simplifying the implementation. A unified multi-scale assignment of the features avoids a complicated scale computation. The choice of the optimal step parameter improves efficiency while preserving accuracy.
Description of drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of the features obtained when the DF-SIFT descriptor is used for feature description;
Fig. 3 is a statistical diagram of the classification accuracy obtained under different step-parameter settings when the DF-SIFT descriptor is used for image classification in the BoW model;
Fig. 4 is a schematic comparison of the per-class image classification accuracy of the SIFT and DF-SIFT descriptors when applied to the BoW model.
Embodiment
The object of the present invention is that, when the BoW model originally used in text processing is applied to image classification, the DF-SIFT descriptor describes the image information accurately and suits the subsequent dictionary construction and SVM classification, thereby overcoming the high complexity and mediocre classification results of conventional image feature extraction and description methods. When the BoW model is used to represent an image, the comparatively critical step is the extraction and description of image features, and a large number of rich features is needed to ensure that the image information is described completely. The DF-SIFT descriptor proposed by the present invention therefore uses uniform sampling, extracting feature points pixel by pixel to obtain dense image features, with the sampling density controlled by a "step" parameter. This does not mean, however, that more feature points are always better, since additional feature points impose a heavy computational burden on the subsequent clustering step; the present invention therefore selects the optimal parameter through a large number of random experiments. Color information is important for characterizing image content, so the DF-SIFT descriptor adopts automatic HSV model selection: the representation model of the image is first judged, images other than grayscale images are converted, and features are computed separately on the H, S, and V channels.
Once feature extraction is complete and the features are to be described, SIFT uses a Gaussian window function for the weighted accumulation of gradients. In DF-SIFT the Gaussian window is replaced by a rectangular window: the neighborhood of each feature point is weighted uniformly rather than with Gaussian weights, so bilinear interpolation alone, instead of convolution with a Gaussian function, completes the accumulation of gradients, after which the mean of a Gaussian function is applied as a uniform weight over each cell. This approximation improves speed without sacrificing performance. Because DF-SIFT extracts key points uniformly, the scale invariance of its features would otherwise be damaged; to preserve scale invariance, multi-scale extraction is adopted, and each key point is extracted and described at several different scales. Large scales correspond to the overall appearance of the image and small scales to its details, so the resulting features likewise guarantee scale invariance.
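The bilinear accumulation of a single gradient sample into an orientation histogram, used here in place of Gaussian-window weighting, can be sketched as follows. `accumulate_orientation` is a hypothetical helper for illustration: the magnitude is split between the two nearest 45° bins in proportion to angular distance.

```python
def accumulate_orientation(hist, angle_deg, magnitude, n_bins=8):
    """Bilinearly accumulate one gradient sample into an orientation histogram.

    The sample's magnitude is shared between the two adjacent bins,
    with the closer bin receiving the larger share.
    """
    bin_width = 360.0 / n_bins
    pos = (angle_deg % 360.0) / bin_width    # fractional bin position
    lo = int(pos) % n_bins
    hi = (lo + 1) % n_bins                   # wraps around at 360 degrees
    frac = pos - int(pos)
    hist[lo] += magnitude * (1.0 - frac)
    hist[hi] += magnitude * frac
    return hist
```

A sample at exactly 45° falls entirely into bin 1, while one at 22.5° splits its magnitude evenly between bins 0 and 1; the total histogram mass always equals the accumulated magnitude, which is what makes this a drop-in replacement for the smoothing that the Gaussian convolution would otherwise provide.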
The present invention is described in more detail below with reference to the accompanying drawings:
1. Ten classes of images are randomly selected from each of the Caltech 101 and Caltech 256 databases, and a corresponding number of images is randomly drawn from each class for training. Each training image is first format-judged: if it is not a grayscale image, it is converted to the HSV model; if it is a grayscale image, this step is skipped and feature extraction proceeds directly;
2. The image is uniformly sampled, extracting feature points at equal pixel intervals, first at scale 4, which yields a series of feature regions of scale 4;
3. Multi-scale extraction is then applied to the image, with the scale set to 6, 8, and 10 in turn, yielding a large number of multi-scale image features;
4. Each feature is described. A circular region is constructed with the feature point as its center and the scale as its radius, and this region is divided into 4 × 4 subregions; in each subregion, the accumulated gradient values along its eight directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°) are computed. A rectangular window smooths each subregion during the computation, so bilinear interpolation completes the accumulation of gradients, and finally the mean of a Gaussian function weights each subregion. Each feature is thus described by a vector of 4 × 4 × 8 = 128 dimensions;
5. After the features are extracted and described, the k-means clustering method clusters the large number of features obtained, and the cluster centers are taken as visual words;
6. The visual words obtained in step 5 are assembled into a visual dictionary, and each class of images is represented by a histogram over the visual dictionary;
7. Steps 1-4 are repeated for the image to be classified, giving its feature extraction and description results. By computing the distance between each feature vector and the visual words in the visual-word store, the visual word to which it belongs is determined, and the image is then represented as a histogram over the visual dictionary;
8. The above visual-dictionary histogram is input into an SVM classifier for classification;
9. To ensure the efficiency of the method, a large number of random experiments is run, the classification accuracies obtained under different step settings are collected, and the optimal step setting is found from the results.
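The dense sampling of steps 1-3 and the dictionary histogram of steps 6-7 can be sketched as follows. `dense_grid` and `bow_histogram` are hypothetical helpers for illustration; the k-means training and SVM stages are omitted, and the step value 8 is the optimum reported below.

```python
import numpy as np

def dense_grid(h, w, step=8):
    """Feature-point grid for uniform sampling at the given pixel step."""
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    return np.stack([ys.ravel(), xs.ravel()], axis=1)   # (n_points, 2) coordinates

def bow_histogram(descriptors, dictionary):
    """Histogram over visual words: each descriptor votes for its nearest
    cluster centre (visual word), and the counts are normalized."""
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                            # nearest visual word
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()                             # normalized word frequencies
```

On a 64 × 64 image with step 8, `dense_grid` yields an 8 × 8 lattice of 64 feature points; the resulting normalized histogram is what would be fed to the SVM classifier in step 8.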
With reference to Fig. 2, which shows the features obtained by the DF-SIFT extraction and description method, it can be seen that the DF-SIFT descriptor yields dense, multi-scale features.
With reference to Fig. 3, which serves to select the optimal parameter for the DF-SIFT descriptor: ten classes are randomly drawn from each of the Caltech 101 and Caltech 256 databases, with 20 images randomly drawn per class for training and another 20 for testing, and the step ranging from 2 to 20. If N of the 20 test images are classified correctly, the classification accuracy is clearly N/20. As the figure shows, for both Caltech 101 and Caltech 256 the classification accuracy remains almost unchanged for steps of 8 or less, at about 91% for Caltech 101 and about 55.5% for Caltech 256. The accuracy on the Caltech 256 database is lower because its images vary more strongly and are therefore harder to classify; this does not negate the performance of DF-SIFT, whose results are still a considerable improvement over the traditional SIFT operator. When the step exceeds 8, classification accuracy begins to fall. The "step" parameter is therefore set to 8, which improves classification accuracy while guaranteeing efficiency.
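The selection rule described above, accuracy flat up to a step of 8 and falling afterwards, amounts to picking the largest step whose accuracy stays within a tolerance of the best. `optimal_step` is a hypothetical helper and the accuracy values below are illustrative, not the patent's measured data.

```python
def optimal_step(acc_by_step, tol=0.005):
    """Pick the largest step whose accuracy is within `tol` of the best,
    trading sampling density for efficiency."""
    best = max(acc_by_step.values())
    return max(s for s, a in acc_by_step.items() if a >= best - tol)
```

With illustrative accuracies {2: 0.91, 4: 0.91, 6: 0.91, 8: 0.91, 10: 0.88, 12: 0.85}, the rule returns 8, matching the choice made above.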
Table 1
Table 1 compares the image classification accuracy of the SIFT and DF-SIFT descriptors when applied to the BoW model, i.e. the classification accuracy obtained with the DF-SIFT descriptor against that obtained with the SIFT descriptor. The Caltech 101 and Caltech 256 databases are again used for verification, to ensure the experimental results are convincing. Ten classes are randomly drawn from each database; the number of training images is set to 5, 10, 15, 20, 25, and 30 in turn, and another 20 images are randomly drawn for testing. The experimental results are the statistical average of 20 experiments. The classification accuracy obtained with the DF-SIFT descriptor is far better than that of the SIFT descriptor, which verifies the validity of the method proposed by the present invention.
With reference to Fig. 4, which further verifies the performance of the algorithm, the per-class classification results of the ten image classes in a single experiment are tracked and tallied. The ten classes are: Faces, Faces_easy, Leopards, Motorbikes, Airplanes, Bonsai, Brain, Buddha, Butterfly, and car_side. The experimental results are the statistical average of 20 experiments. From left to right, Fig. 4 compares the per-class classification accuracy of the SIFT and DF-SIFT descriptors for training set sizes N_train of 5, 10, 15, 20, 25, and 30. The horizontal axis 1-10 indexes the ten classes above, and the vertical axis gives the classification accuracy; in each pair of bars, the left bar is the per-class accuracy of the SIFT descriptor and the right bar that of the DF-SIFT descriptor. The right bar is consistently higher than the left. Combined with the experimental results above, we may safely conclude that, whether in overall average classification accuracy or in per-class classification accuracy, DF-SIFT outperforms SIFT.
Table 2
Table 2 gives the processing-time statistics of the SIFT and DF-SIFT descriptors when applied to image classification in the BoW model, in order to verify the complexity of the method. Considering only the feature extraction stage, DF-SIFT avoids complex computations and so extracts features noticeably faster than the SIFT descriptor; but when it is applied in the BoW model, its dense feature regions inevitably burden the subsequent clustering stage with extra data. To address this, the optimal parameter for DF-SIFT was selected and, according to the experimental results, the "step" parameter was set to 8. In the experiments, the running times of the DF-SIFT and SIFT algorithms with the step set to 8 were recorded, where running time includes both training time and test time. As can be seen from the table, the running time of the parameter-optimized DF-SIFT descriptor is less than that of the SIFT descriptor, which verifies the necessity and significance of parameter selection for the DF-SIFT descriptor.
The above embodiment is given by way of example and does not limit the present invention. The DF-SIFT feature extraction and description method provided by the invention is equally applicable to other fields of image recognition. Minor adjustments and optimizations may be made without departing from the essence and scope of the invention, whose protection scope is defined by the claims.

Claims (9)

1. An image feature extraction and description method, characterized by comprising:
(1) judging the format of the input image: if it is a grayscale image, no processing is applied; if it is not, it is converted to the HSV model;
(2) selecting scale parameters;
(3) extracting feature points of the image at equal pixel intervals by uniform sampling according to the selected scale parameters, and computing the DF-SIFT descriptors of the image's H, S, and V channels, so that color information is applied to the classification task; the sampling density is controlled by a step parameter, yielding the dense features of the image;
(4) describing the dense features.
2. The image feature extraction and description method according to claim 1, characterized in that the model of the DF-SIFT descriptor for the H, S, and V channels of the image is:

h = 0°, if max = min,
h = 60° × (g − b)/(max − min) + 0°, if max = r and g ≥ b,
h = 60° × (g − b)/(max − min) + 360°, if max = r and g < b,
h = 60° × (b − r)/(max − min) + 120°, if max = g,
h = 60° × (r − g)/(max − min) + 240°, if max = b,

s = 0, if max = 0, and s = (max − min)/max = 1 − min/max otherwise,

v = max,

where h represents hue, s represents saturation, and v represents value (brightness); max denotes the maximum and min the minimum of the three components r, g, and b.
3. The image feature extraction and description method according to claim 1 or 2, characterized in that describing the dense features comprises:
(1) adjusting the orientation of each feature to 0°;
(2) constructing a circular region centered on the feature point with the unified scale as its radius, and dividing the pixels falling in this circular region into 4 × 4 non-overlapping subregions;
(3) computing gradient values in each subregion along the eight directions 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°;
(4) weighting each subregion uniformly, applying the mean of a Gaussian function as the weight of each subregion, and completing the accumulation of gradients with bilinear interpolation; each feature is described by a vector of 4 × 4 × 8 = 128 dimensions.
4. The image feature extraction and description method according to claim 1 or 2, characterized in that, after the dense features are described, the accuracy results under different step parameters are collected to obtain the optimal step parameter, and the dense features of the image are extracted and described again with the optimal step parameter.
5. The image feature extraction and description method according to claim 3, characterized in that, after the dense features are described, the accuracy results under different step parameters are collected to obtain the optimal step parameter, and the dense features of the image are extracted and described again with the optimal step parameter.
6. The image feature extraction and description method according to claim 1 or 2, characterized in that the scale parameters are 4, 6, 8, and 10.
7. The image feature extraction and description method according to claim 3, characterized in that the scale parameters are 4, 6, 8, and 10.
8. The image feature extraction and description method according to claim 4, characterized in that the scale parameters are 4, 6, 8, and 10.
9. The image feature extraction and description method according to claim 5, characterized in that the scale parameters are 4, 6, 8, and 10.
CN2012101140611A 2012-04-18 2012-04-18 Image characteristic extracting and describing method Expired - Fee Related CN102663401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101140611A CN102663401B (en) 2012-04-18 2012-04-18 Image characteristic extracting and describing method


Publications (2)

Publication Number Publication Date
CN102663401A true CN102663401A (en) 2012-09-12
CN102663401B CN102663401B (en) 2013-11-20

Family

ID=46772885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101140611A Expired - Fee Related CN102663401B (en) 2012-04-18 2012-04-18 Image characteristic extracting and describing method

Country Status (1)

Country Link
CN (1) CN102663401B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853299A (en) * 2010-05-31 2010-10-06 杭州淘淘搜科技有限公司 Image searching result ordering method based on perceptual cognition
CN102184411A (en) * 2011-05-09 2011-09-14 中国电子科技集团公司第二十八研究所 Color-information-based scale invariant feature point describing and matching method


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093226B (en) * 2012-12-20 2016-01-20 华南理工大学 A kind of building method of the RATMIC descriptor for characteristics of image process
CN103093226A (en) * 2012-12-20 2013-05-08 华南理工大学 Construction method of RATMIC descriptor for image feature processing
CN103577840A (en) * 2013-10-30 2014-02-12 汕头大学 Item identification method
CN103577840B (en) * 2013-10-30 2017-05-31 汕头大学 Item identification method
US9824258B2 (en) 2014-08-14 2017-11-21 Shenzhen GOODIX Technology Co., Ltd. Method and apparatus for fingerprint identification
CN104156707B (en) * 2014-08-14 2017-09-22 深圳市汇顶科技股份有限公司 Fingerprint identification method and its fingerprint identification device
CN104156707A (en) * 2014-08-14 2014-11-19 深圳市汇顶科技股份有限公司 Fingerprint identification method and fingerprint identification device
CN104850859A (en) * 2015-05-25 2015-08-19 电子科技大学 Multi-scale analysis based image feature bag constructing method
CN105631860A (en) * 2015-12-21 2016-06-01 中国资源卫星应用中心 Local sorted orientation histogram descriptor-based image correspondence point extraction method
CN105631860B (en) * 2015-12-21 2018-07-03 中国资源卫星应用中心 Image point extracting method of the same name based on partial ordering's direction histogram description
CN107423739A (en) * 2016-05-23 2017-12-01 北京陌上花科技有限公司 Image characteristic extracting method and device
CN107818341A (en) * 2017-10-25 2018-03-20 天津大学 A kind of color extraction method based on improvement K means algorithms
CN108776802A (en) * 2018-04-18 2018-11-09 中国农业大学 A kind of peanut varieties recognition methods and system
CN111339974A (en) * 2020-03-03 2020-06-26 景德镇陶瓷大学 Method for identifying modern ceramics and ancient ceramics
CN111339974B (en) * 2020-03-03 2023-04-07 景德镇陶瓷大学 Method for identifying modern ceramics and ancient ceramics
CN113538360A (en) * 2021-07-12 2021-10-22 哈尔滨理工大学 Plastic cup surface defect detection system

Also Published As

Publication number Publication date
CN102663401B (en) 2013-11-20

Similar Documents

Publication Publication Date Title
CN102663401B (en) Image characteristic extracting and describing method
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
CN102722712B (en) Multiple-scale high-resolution image object detection method based on continuity
CN110321967B (en) Image classification improvement method based on convolutional neural network
CN101329734B (en) License plate character recognition method based on K-L transform and LS-SVM
CN102254196B (en) Method for identifying handwritten Chinese character by virtue of computer
CN102147858B (en) License plate character identification method
CN101763516B (en) Character recognition method based on fitting functions
CN112016605B (en) Target detection method based on corner alignment and boundary matching of bounding box
CN104680127A (en) Gesture identification method and gesture identification system
CN102982349A (en) Image recognition method and device
CN105389593A (en) Image object recognition method based on SURF
CN103870803A (en) Vehicle license plate recognition method and system based on coarse positioning and fine positioning fusion
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN105718866A (en) Visual target detection and identification method
CN102385592B (en) Image concept detection method and device
CN103020971A (en) Method for automatically segmenting target objects from images
CN105404886A (en) Feature model generating method and feature model generating device
CN104598885A (en) Method for detecting and locating text sign in street view image
CN103839078A (en) Hyperspectral image classifying method based on active learning
Zhang et al. Automatic discrimination of text and non-text natural images
CN103679191A (en) An automatic fake-licensed vehicle detection method based on static state pictures
CN105117740A (en) Font identification method and device
CN103279738A (en) Automatic identification method and system for vehicle logo
CN103455823A (en) English character recognizing method based on fuzzy classification and image segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131120

Termination date: 20190418
