CN102024152B - Method for recognizing traffic signs based on sparse representation and dictionary learning - Google Patents
Method for recognizing traffic signs based on sparse representation and dictionary learning
- Publication number
- CN102024152B CN102024152B CN 201010587536 CN201010587536A CN102024152B CN 102024152 B CN102024152 B CN 102024152B CN 201010587536 CN201010587536 CN 201010587536 CN 201010587536 A CN201010587536 A CN 201010587536A CN 102024152 B CN102024152 B CN 102024152B
- Authority
- CN
- China
- Prior art keywords
- dictionary
- image
- blocks
- sparse expression
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Images
Abstract
The invention discloses a method for recognizing traffic signs based on sparse representation and dictionary learning, comprising the following steps: collecting pictures containing traffic signs and manually dividing them into C image sample classes; extracting image blocks from each image to form C training image block sets corresponding to the sample classes; automatically learning a visual dictionary D from all the training image block sets; computing the sparse representation of each class of training image blocks over the dictionary D and recording the average probability distribution of the sparse coefficients along the dictionary atoms as the feature describing each of the C sample classes; computing, in the same way, the probability distribution P_t of the sparse coefficients of a test picture along the dictionary atoms; calculating the distance between P_t and each class's average distribution; and selecting the class with the shortest distance as the recognition result for the traffic sign in the test image. By combining sparse representation with this probabilistic description, the invention classifies traffic sign pictures and achieves a high traffic sign recognition rate.
Description
Technical field
The invention belongs to the field of computer-controlled automation applications, and specifically relates to a method for traffic sign recognition based on sparse representation and dictionary learning.
Background technology
Traffic sign recognition, which occupies a key position in intelligent transportation systems, has received increasing attention in recent years. The core problem is how to collect the information carried by a traffic sign and represent it for classification. Humans can easily identify a sign from its color and shape, yet recognizing traffic signs with a computer still faces many challenges. First, because of changing illumination, the color of a sign differs between day and night. Second, the shape of a sign in an image may deviate from the standard geometry, such as a circle, triangle or hexagon. Finally, a traffic sign can be partly occluded or obscured, for example by shadow, weather or fog, which reduces its visibility.
Researchers have recently proposed several methods for traffic sign recognition. They divide the task into two steps: a detection stage and a classification stage. In the detection stage, regions that may contain a traffic sign are located using color or shape features. In the classification stage, a classifier assigns each detected sign to its class. A major problem remains in this traditional pipeline: the center of the traffic sign must be located before classification, because a mislocated candidate region leads to an incorrect recognition result. Sparse representation combined with a probabilistic method can address this problem well.
Sparse models are widely used in signal, image and video tasks; they model an input signal as a linear combination of a few atoms from a dictionary. In computer vision research, the question of interest is how to represent an image, and in this model sparsity is the preferred criterion for representing a signal. The key challenge is how to choose the basis, or dictionary, over which the image information is represented. Recent studies show that a dictionary obtained by learning yields better results than a predefined dictionary.
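As an illustration of the sparse model described above, the following short Python sketch (toy sizes; all variable names are assumptions for illustration, not from the patent) builds a dictionary and expresses a signal as a linear combination of only two of its atoms:

```python
import numpy as np

# Toy illustration: a signal as a sparse linear combination of dictionary atoms.
# D holds one atom per column; alpha is the sparse coefficient vector.
rng = np.random.default_rng(0)
dim, k = 5, 8                        # 5-dimensional signals, 8 atoms (toy sizes)
D = rng.standard_normal((dim, k))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms, as is usual in dictionary learning

alpha = np.zeros(k)
alpha[1], alpha[6] = 2.0, -0.5       # only 2 of the 8 coefficients are nonzero
x = D @ alpha                        # x lies in the span of just two atoms

assert np.count_nonzero(alpha) == 2
assert np.allclose(x, 2.0 * D[:, 1] - 0.5 * D[:, 6])
```

With an overcomplete dictionary (more atoms than signal dimensions, as here), many exact representations exist; the sparse model prefers the one with the fewest nonzero coefficients.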
Summary of the invention
The invention provides a method for traffic sign recognition based on sparse representation and dictionary learning. It overcomes the difficulty brought by high-dimensional computation, and improves recognition accuracy with a dictionary that is refined by continual learning.
A method for traffic sign recognition based on sparse representation and dictionary learning, comprising:
(1) collecting natural scene pictures that contain traffic signs, manually dividing them into C image sample classes according to the signs they contain, and then, for every image in each class, extracting a number of equal-sized image blocks, forming C training image block sets corresponding to the sample classes;
(2) automatically learning a visual dictionary D from all C training image block sets;
(3) computing the sparse representation of each class's training image block set over the dictionary D, and recording the average probability distribution {P_1, P_2, ..., P_C} of the sparse coefficients along the dictionary atoms as the feature describing the C sample classes;
(4) for a test picture containing an unknown traffic sign, extracting a number of image blocks from the picture to form a test image block set, computing its sparse representation over the dictionary D, and recording the probability distribution P_t of the sparse coefficients along the dictionary atoms;
(5) computing the similarity between P_t and {P_1, P_2, ..., P_C}, and selecting the most similar sample class as the traffic sign recognition result for the test image.
Step (1) in detail: collect natural scene pictures containing traffic signs and manually divide them into C image sample classes according to the sign they contain (for example, a STOP sign class consisting of STOP sign pictures taken under various natural conditions of shape, illumination and background). Then process every image in each class: centered at each pixel, extract an image block of size n × n (n not exceeding the image size), scanning all pixels row by row and then column by column, so that each sample class yields a training image block set X_j = {x_1, x_2, ..., x_{n_j}}, where each x_i is an image block of the j-th class and n_j is the total number of blocks in that class; the blocks are allowed to overlap. Finally, extract blocks from all C sample classes in this way to form the total image block set X = X_1 ∪ X_2 ∪ ... ∪ X_C.
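The block-extraction step above can be sketched in Python as follows; this is a minimal illustration (function name and shapes are assumptions), and for simplicity it takes only fully interior n × n patches rather than padding the borders so that every pixel is a patch center:

```python
import numpy as np

def extract_patches(img, n):
    """Extract every overlapping n-by-n patch of a grayscale image,
    scanning row by row, then column by column (patches may overlap)."""
    h, w = img.shape
    assert n <= min(h, w), "patch size must not exceed the image size"
    patches = [img[r:r + n, c:c + n].ravel()
               for r in range(h - n + 1)
               for c in range(w - n + 1)]
    return np.stack(patches)         # shape: (num_patches, n*n)

img = np.arange(36, dtype=float).reshape(6, 6)
P = extract_patches(img, 3)
# a 6x6 image yields (6-3+1)^2 = 16 overlapping 3x3 patches
assert P.shape == (16, 9)
```

Each row of the returned array is one flattened block, which is the form in which blocks are fed to the sparse coding step.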
A visual dictionary D is then learned automatically from all the training image block sets, as follows:
(i) initialize the dictionary D with image blocks selected at random from the C training image block sets;
(ii) in the sparse coding stage, for the current dictionary D, find the sparse representation α of each image block x by solving

    min_α ||x − Dα||_2^2 + λ·||α||_p,

where α ∈ R^k is a sparse coefficient vector, λ is a regularization coefficient, and ||·||_p is the l_0 or l_1 norm, a regularizer used to enforce sparsity;
(iii) in the dictionary update stage, update each atom d_l, l = 1, 2, ..., k, of the dictionary D = [d_1, d_2, ..., d_k] with the K-SVD algorithm (Aharon, M., Elad, M. and Bruckstein, A., "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation", IEEE Transactions on Signal Processing, 2006);
(iv) repeat the sparse coding and dictionary update stages until convergence or until the iteration limit is reached. The resulting visual dictionary D, learned from all training image blocks, contains both the visual information unique to each class and the visual information common to all classes.
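The sparse coding stage can be sketched with greedy orthogonal matching pursuit, one common way to approximate the l_0-constrained problem; the patent does not fix a particular solver, so this is an assumed choice, and all names are illustrative:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: approximately solve
    min ||x - D @ a||_2 subject to ||a||_0 <= n_nonzero."""
    k = D.shape[1]
    alpha = np.zeros(k)
    residual = x.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)     # correlation of atoms with the residual
        corr[support] = -np.inf           # never re-select an atom
        support.append(int(np.argmax(corr)))
        # re-fit all selected coefficients on the support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha = np.zeros(k)
        alpha[support] = coef
        residual = x - D @ alpha
    return alpha

# With an orthonormal (identity) dictionary, recovery of a 2-sparse code is exact:
x = np.array([0.0, 3.0, 0.0, -1.0, 0.0])
a = omp(np.eye(5), x, 2)
assert np.allclose(a, x)
```

Any other sparse solver (e.g. an l_1 method such as lasso) could stand in here; the dictionary update stage only needs the resulting coefficient vectors.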
The K-SVD-based dictionary update algorithm is given below.
Algorithm description:
Step 1: initialize the dictionary D with image blocks selected at random from the C training image block sets;
Step 2: repeat steps 3-4 until convergence or until the iteration limit is reached;
Step 3: in the sparse coding stage, for the current dictionary D, find the sparse coefficients α of each image block x by solving min_α ||x − Dα||_2^2 + λ·||α||_p;
Step 4: in the dictionary update stage, for l = 1, ..., k, update each atom d_l in turn:
Step 4.1: for the current d_l, collect the indices of the signals whose coefficient for d_l is nonzero into the set ω_l = {i ∈ 1, ..., N | α_l[i] ≠ 0};
Step 4.2: compute the error matrix E = X − Σ_{j≠l} d_j α_j, the residual when the contribution of atom d_l is removed;
Step 4.3: take the columns of the error matrix E indexed by ω_l to obtain E_l;
Step 4.4: update d_l and its corresponding nonzero coefficients α_l by solving the rank-one approximation problem min ||E_l − d_l α_l||_F^2;
Step 4.5: end;
Step 5: end.
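Steps 4.1-4.4 above can be sketched in numpy as follows; this is an illustrative implementation of a single K-SVD atom update (assuming a dense code matrix A and the variable names shown), not the patent's own code:

```python
import numpy as np

def ksvd_atom_update(D, A, X, l):
    """One K-SVD atom update (steps 4.1-4.4): refresh atom d_l and the
    nonzero coefficients in row l of the code matrix A.
    D: (dim, k) dictionary, A: (k, N) sparse codes, X: (dim, N) signals."""
    omega = np.flatnonzero(A[l, :])            # step 4.1: signals using atom l
    if omega.size == 0:
        return D, A                            # unused atom: nothing to update
    # steps 4.2-4.3: residual with atom l's contribution removed, on omega only
    E_l = X[:, omega] - D @ A[:, omega] + np.outer(D[:, l], A[l, omega])
    # step 4.4: best rank-1 approximation of E_l via SVD
    U, s, Vt = np.linalg.svd(E_l, full_matrices=False)
    D[:, l] = U[:, 0]                          # new unit-norm atom
    A[l, omega] = s[0] * Vt[0]                 # new coefficients on the support
    return D, A

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))
D = rng.standard_normal((4, 3)); D /= np.linalg.norm(D, axis=0)
A = rng.standard_normal((3, 6))
err_before = np.linalg.norm(X - D @ A)
D, A = ksvd_atom_update(D, A, X, 0)
# the rank-1 refit can only reduce the overall reconstruction error
assert np.linalg.norm(X - D @ A) <= err_before + 1e-9
assert np.isclose(np.linalg.norm(D[:, 0]), 1.0)
```

Updating only the columns in ω_l is what keeps the sparsity pattern of A unchanged while still jointly refitting the atom and its coefficients.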
In step (3), the average probability distribution P_i of the sparse coefficients of the i-th class's training image block set along the dictionary atoms is computed as follows:
(i) first compute the sparse representation of each image block of the i-th class over the dictionary D by solving min_α ||x − Dα||_2^2 + λ·||α||_p to obtain the sparse coefficients;
(ii) then obtain the average probability distribution P_i of the class's sparse coefficients along the dictionary atoms by averaging:

    P_i(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the given i-th class training image block set, n_i is the total number of image blocks of class i; the set {w_l} collects the labels of the blocks that use atom d_l, so the number of elements in {w_l} is the number of times atom d_l is used, denoted N_l; and S_i denotes the distribution of dictionary D over the i-th class.
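The atom-usage counting that produces a class distribution such as P_i might be sketched as follows (an illustration under the assumption, stated above, that the distribution is the normalized usage count N_l; the function name is hypothetical):

```python
import numpy as np

def atom_usage_distribution(A, eps=1e-12):
    """P[l] = (number of blocks whose sparse code uses atom d_l) / total uses.
    A: (k, N) matrix of sparse codes, one column per image block."""
    used = np.abs(A) > eps           # which atoms each block uses
    counts = used.sum(axis=1)        # N_l: how many blocks use atom d_l
    return counts / counts.sum()     # normalize into a probability distribution

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [3.0, 1.0, 0.0]])
P = atom_usage_distribution(A)
# atom 0 used by 2 blocks, atom 1 by none, atom 2 by 2 -> [0.5, 0.0, 0.5]
assert np.allclose(P, [0.5, 0.0, 0.5])
```

Applied per class, this yields the {P_1, ..., P_C} feature vectors; applied to a test picture's codes, it yields P_t.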
In step (4), the probability distribution P_t of the sparse coefficients of the test image block set along the dictionary atoms is computed as follows:
(i) centered at each pixel of the test image, extract an image block of size n × n, scanning all pixels row by row and then column by column, to form the test image block set;
(ii) compute the sparse representation of each test image block over the dictionary D;
(iii) finally obtain the probability distribution P_t of the sparse coefficients along the dictionary atoms by the same counting:

    P_t(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the dictionary distribution of the test picture I_t, M is the total number of image blocks, w_l is the index set of blocks that use atom d_l, and the number of elements in {w_l} is the number of times atom d_l is used, denoted N_l.
After the dictionary distributions of every class of training image blocks and of the test image blocks have been obtained, the remaining task is to decide which class the test picture I_t belongs to. The class of the test picture is obtained by minimizing over i the distance between p_Te(I_t | D), the probability distribution of the test picture's blocks, and p_Tr(S_i | D), the probability distribution of the i-th class's training blocks; the class with the smallest distance is taken as the result.
Beneficial effects of the invention:
(1) the invention represents signals with a sparse model, so that a test picture is linearly related to only a few training pictures, overcoming the difficulty brought by high-dimensional computation;
(2) the invention uses a dictionary refined by continual learning, which yields better results than a predefined dictionary;
(3) the invention classifies using the probabilities of small picture blocks rather than the probability of the picture as a whole, achieving close to ideal results;
(4) the invention replaces per-class dictionaries with one large visual dictionary that contains both the information specific to each class and the information common to all classes, making recognition more accurate.
Description of drawings
Fig. 1 shows some STOP traffic sign pictures selected in the embodiment under different conditions of shape, illumination and background;
Fig. 2 is the flow chart of sparse representation and dictionary learning on the training pictures formed from all traffic sign images;
Fig. 3 is the flow chart of the dictionary learning and recognition stage, taking a STOP traffic sign as the test picture;
Fig. 4 is the overall flow chart of the recognition method of the invention.
Embodiment
As shown in Fig. 4, a method for traffic sign recognition based on sparse representation and dictionary learning proceeds as follows.
Collect natural scene pictures of traffic signs and manually divide them into C image sample classes according to the sign they contain, for example a set of STOP sign pictures under different conditions of shape, illumination and background, as shown in Fig. 1. Then, for every image in each class, centered at each pixel, extract an image block of size n × n (n not exceeding the image size), scanning all pixels row by row and then column by column, so that each sample class yields a training image block set, as shown in Fig. 2; here n_j is the total number of blocks of the j-th class, and the blocks are allowed to overlap. Finally, extract blocks from all C sample classes in this way to form the total image block set, as shown in Fig. 2.
Then, from all the training image block sets, automatically learn a visual dictionary D, as shown in Fig. 2, as follows:
(i) initialize the dictionary D with image blocks selected at random from the C training image block sets;
(ii) in the sparse coding stage, for the current dictionary D, find the sparse representation α of each image block x by solving min_α ||x − Dα||_2^2 + λ·||α||_p, where α ∈ R^k is a sparse coefficient vector, λ is a regularization coefficient, and ||·||_p is the l_0 or l_1 norm, a regularizer used to enforce sparsity;
(iii) in the dictionary update stage, update each atom d_l, l = 1, 2, ..., k, of the dictionary with the K-SVD algorithm (Aharon, M., Elad, M. and Bruckstein, A., "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation", IEEE Transactions on Signal Processing, 2006);
(iv) repeat the sparse coding and dictionary update stages until convergence or until the iteration limit is reached. The resulting visual dictionary D, learned from all training image blocks, contains both the visual information unique to each class and the visual information common to all classes.
The K-SVD-based dictionary update algorithm is:
Algorithm description:
Step 1: initialize the dictionary D with image blocks selected at random from the C training image block sets;
Step 2: repeat steps 3-4 until convergence or until the iteration limit is reached;
Step 3: in the sparse coding stage, for the current dictionary D, find the sparse coefficients α of each image block x by solving min_α ||x − Dα||_2^2 + λ·||α||_p;
Step 4: in the dictionary update stage, for l = 1, ..., k, update each atom d_l in turn:
Step 4.1: for the current d_l, collect the indices of the signals whose coefficient for d_l is nonzero into the set ω_l = {i ∈ 1, ..., N | α_l[i] ≠ 0};
Step 4.2: compute the error matrix E = X − Σ_{j≠l} d_j α_j, the residual when the contribution of atom d_l is removed;
Step 4.3: take the columns of the error matrix E indexed by ω_l to obtain E_l;
Step 4.4: update d_l and its corresponding nonzero coefficients α_l by solving the rank-one approximation problem min ||E_l − d_l α_l||_F^2;
Step 4.5: end;
Step 5: end.
Compute the sparse representation of each class's training image block set over the dictionary D, and record the average probability distribution {P_1, P_2, ..., P_C} of the sparse coefficients along the dictionary atoms as the feature describing the C sample classes:
(i) first compute the sparse representation of each image block of the i-th class over the dictionary D by solving min_α ||x − Dα||_2^2 + λ·||α||_p to obtain the sparse coefficients;
(ii) then obtain the average probability distribution P_i of the class's sparse coefficients along the dictionary atoms by averaging:

    P_i(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the given i-th class training image block set, n_i is the total number of image blocks of class i; the set {w_l} collects the labels of the blocks that use atom d_l, so the number of elements in {w_l} is the number of times atom d_l is used, denoted N_l; and S_i denotes the distribution of dictionary D over the i-th class.
As shown in Fig. 3, for a test picture containing an unknown traffic sign, extract a number of image blocks to form the test image block set, compute their sparse representation over the dictionary D, and record the probability distribution P_t of the sparse coefficients along the dictionary atoms:
(i) centered at each pixel of the test image, extract an image block of size n × n, scanning all pixels row by row and then column by column, to form the test image block set;
(ii) compute the sparse representation of each test image block over the dictionary D;
(iii) finally obtain the probability distribution P_t of the sparse coefficients along the dictionary atoms by the same counting:

    P_t(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the dictionary distribution of the test picture I_t, M is the total number of image blocks, w_l is the index set of blocks that use atom d_l, and the number of elements in {w_l} is the number of times atom d_l is used, denoted N_l.
As shown in Fig. 3, compute the similarity between P_t and {P_1, P_2, ..., P_C} and select the most similar sample class as the traffic sign recognition result for the test image. Specifically, after the dictionary distributions of every class of training image blocks and of the test image blocks have been obtained, the remaining task is to decide which class the test picture I_t belongs to; its class is obtained by minimizing over i the distance between p_Te(I_t | D), the probability distribution of the test picture's blocks, and p_Tr(S_i | D), the probability distribution of the i-th class's training blocks.
Claims (2)
1. A method for traffic sign recognition based on sparse representation and dictionary learning, comprising:
(1) collecting natural scene images that contain traffic signs, manually dividing them into C image sample classes according to the signs they contain, and then, for every image in each class, centered at each pixel of the image, extracting an image block of size n × n, scanning all pixels row by row and then column by column, forming C training image block sets corresponding to the sample classes, wherein n does not exceed the image size;
(2) automatically learning a visual dictionary D from all C training image block sets;
(3) computing the sparse representation of each class's training image block set over the dictionary D, and recording the average probability distribution {P_1, P_2, ..., P_C} of the sparse coefficients along the dictionary atoms as the feature describing the C image sample classes;
(4) for a test image containing an unknown traffic sign, centered at each pixel of the test image, extracting an image block of size n × n, scanning all pixels row by row and then column by column, forming the test image block set, wherein n does not exceed the image size and M is the total number of image blocks; then computing the sparse representation of the test image block set over the dictionary D and recording the probability distribution P_t of the sparse coefficients along the dictionary atoms;
(5) computing the similarity between P_t and {P_1, P_2, ..., P_C}, and selecting the most similar image sample class as the traffic sign recognition result for the test image;
wherein in said step (2) the visual dictionary D is learned as follows:
(i) initializing the dictionary D with image blocks selected at random from the C training image block sets;
(ii) in the sparse coding stage, for the current dictionary D, finding the sparse representation α of each image block x by solving min_α ||x − Dα||_2^2 + λ·||α||_p, where α ∈ R^k is a sparse coefficient vector, λ is a regularization coefficient, and ||·||_p is the l_0 or l_1 norm, a regularizer used to enforce sparsity;
(iii) in the dictionary update stage, updating each atom d_l, l = 1, 2, ..., k, of the dictionary with the K-SVD algorithm;
(iv) repeating steps (ii) and (iii) until convergence or until the iteration limit is reached, obtaining a visual dictionary D learned from all training image blocks;
wherein in said step (3) the sparse representation of the i-th class's training image block set and the average probability distribution P_i of its sparse coefficients along the dictionary atoms are computed as follows:
(i) first computing the sparse representation of each image block of the i-th class over the dictionary D by solving min_α ||x − Dα||_2^2 + λ·||α||_p to obtain the sparse coefficients;
(ii) then obtaining the average probability distribution P_i of the class's sparse coefficients along the dictionary atoms by averaging:

    P_i(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the given i-th class training image block set, n_i is the total number of image blocks of class i; the set {w_l} collects the labels of the blocks that use atom d_l, so the number of elements in {w_l} is the number of times atom d_l is used, denoted N_l; and S_i denotes the distribution of dictionary D over the i-th class;
wherein in said step (4) the sparse representation of the test image block set and the probability distribution P_t of its sparse coefficients along the dictionary atoms are computed as follows:
(i) computing the sparse representation of each test image block over the dictionary D;
(ii) finally obtaining the probability distribution P_t of the sparse coefficients along the dictionary atoms by the same averaging, namely

    P_t(l) = N_l / (N_1 + N_2 + ... + N_k),  l = 1, 2, ..., k,

where, for the probability distribution of the test image I_t over the test image block set, M is the total number of test image blocks.
2. The method for traffic sign recognition based on sparse representation and dictionary learning according to claim 1, characterized in that in said step (5) the similarity between P_t and {P_1, P_2, ..., P_C} is computed as the distance between p_Te(I_t | D), the probability distribution of the test image blocks, and p_Tr(S_i | D), the probability distribution of the training image blocks of class i, with the class minimizing this distance selected as the result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010587536 CN102024152B (en) | 2010-12-14 | 2010-12-14 | Method for recognizing traffic signs based on sparse representation and dictionary learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010587536 CN102024152B (en) | 2010-12-14 | 2010-12-14 | Method for recognizing traffic signs based on sparse representation and dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102024152A CN102024152A (en) | 2011-04-20 |
CN102024152B true CN102024152B (en) | 2013-01-30 |
Family
ID=43865432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010587536 Expired - Fee Related CN102024152B (en) | 2010-12-14 | 2010-12-14 | Method for recognizing traffic signs based on sparse representation and dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102024152B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622583A (en) * | 2012-02-23 | 2012-08-01 | 北京师范大学 | Multi-angle type number recognition method and system based on model and sparse representations |
CN102625107A (en) * | 2012-03-29 | 2012-08-01 | 航天科工深圳(集团)有限公司 | Method and device for compressing image |
CN102651072A (en) * | 2012-04-06 | 2012-08-29 | 浙江大学 | Classification method for three-dimensional human motion data |
CN104573738B (en) * | 2013-10-28 | 2018-03-06 | 北京大学 | Signal processing method and its device |
CN104517103A (en) * | 2014-12-26 | 2015-04-15 | 广州中国科学院先进技术研究所 | Traffic sign classification method based on deep neural network |
CN106156775B (en) * | 2015-03-31 | 2020-04-03 | 日本电气株式会社 | Video-based human body feature extraction method, human body identification method and device |
CN105590088A (en) * | 2015-09-17 | 2016-05-18 | 重庆大学 | Traffic sign recognition method based on sparse auto-encoding and sparse representation |
CN105279705A (en) * | 2015-09-30 | 2016-01-27 | 国网智能电网研究院 | Sparse representation method for on-line data collection of power |
CN107122785B (en) * | 2016-02-25 | 2022-09-27 | 中兴通讯股份有限公司 | Text recognition model establishing method and device |
CN106355196A (en) * | 2016-08-23 | 2017-01-25 | 大连理工大学 | Method of identifying synthetic aperture radar image targets based on coupled dictionary learning |
CN107423668B (en) * | 2017-04-14 | 2022-09-27 | 山东建筑大学 | Electroencephalogram signal classification system and method based on wavelet transformation and sparse expression |
CN107392115B (en) * | 2017-06-30 | 2021-01-12 | 中原智慧城市设计研究院有限公司 | Traffic sign identification method based on hierarchical feature extraction |
CN114353819A (en) * | 2022-01-04 | 2022-04-15 | 腾讯科技(深圳)有限公司 | Navigation method, device, equipment, storage medium and program product for vehicle |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101373519A (en) * | 2007-08-20 | 2009-02-25 | 富士通株式会社 | Device and method for recognizing character |
CN101404117A (en) * | 2008-10-21 | 2009-04-08 | 东软集团股份有限公司 | Traffic sign recognition method and device |
CN101556690A (en) * | 2009-05-14 | 2009-10-14 | 复旦大学 | Image super-resolution method based on overcomplete dictionary learning and sparse representation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006106508A2 (en) * | 2005-04-04 | 2006-10-12 | Technion Research & Development Foundation Ltd. | System and method for designing of dictionaries for sparse representation |
US8538200B2 (en) * | 2008-11-19 | 2013-09-17 | Nec Laboratories America, Inc. | Systems and methods for resolution-invariant image representation |
- 2010-12-14 CN CN 201010587536 patent/CN102024152B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101373519A (en) * | 2007-08-20 | 2009-02-25 | 富士通株式会社 | Device and method for recognizing character |
CN101404117A (en) * | 2008-10-21 | 2009-04-08 | 东软集团股份有限公司 | Traffic sign recognition method and device |
CN101556690A (en) * | 2009-05-14 | 2009-10-14 | 复旦大学 | Image super-resolution method based on overcomplete dictionary learning and sparse representation |
Non-Patent Citations (2)
Title |
---|
John Wright et al., "Sparse Representation for Computer Vision and Pattern Recognition", Proceedings of the IEEE, 2010, vol. 98, no. 6, pp. 1031-1044. *
Li Xiangxi et al., "A Survey of Traffic Sign Recognition Research", Highway Traffic Science and Technology (Applied Technology Edition), 2010, no. 6, pp. 253-257. *
Also Published As
Publication number | Publication date |
---|---|
CN102024152A (en) | 2011-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102024152B (en) | Method for recognizing traffic signs based on sparse representation and dictionary learning | |
CN110136170B (en) | Remote sensing image building change detection method based on convolutional neural network | |
CN106778604B (en) | Pedestrian re-identification method based on matching convolutional neural network | |
Zhang et al. | Integrating bottom-up classification and top-down feedback for improving urban land-cover and functional-zone mapping | |
CN107506703A (en) | A kind of pedestrian's recognition methods again for learning and reordering based on unsupervised Local Metric | |
CN106557579B (en) | Vehicle model retrieval system and method based on convolutional neural network | |
Nemoto et al. | Building change detection via a combination of CNNs using only RGB aerial imageries | |
CN103530638B (en) | Method for pedestrian matching under multi-cam | |
CN106951830B (en) | Image scene multi-object marking method based on prior condition constraint | |
CN105718912B (en) | A kind of vehicle characteristics object detecting method based on deep learning | |
CN106408030A (en) | SAR image classification method based on middle lamella semantic attribute and convolution neural network | |
CN103226584B (en) | The construction method of shape description symbols and image search method based on this descriptor | |
CN103853724A (en) | Multimedia data sorting method and device | |
CN112016605A (en) | Target detection method based on corner alignment and boundary matching of bounding box | |
CN103390046A (en) | Multi-scale dictionary natural scene image classification method based on latent Dirichlet model | |
CN111652273B (en) | Deep learning-based RGB-D image classification method | |
Guo et al. | Urban impervious surface extraction based on multi-features and random forest | |
CN107480585A (en) | Object detection method based on DPM algorithms | |
CN104063713A (en) | Semi-autonomous on-line studying method based on random fern classifier | |
CN106650811B (en) | A kind of EO-1 hyperion mixed pixel classification method cooperateing with enhancing based on neighbour | |
CN105654122A (en) | Spatial pyramid object identification method based on kernel function matching | |
CN113239753A (en) | Improved traffic sign detection and identification method based on YOLOv4 | |
CN105260995A (en) | Image repairing and denoising method and system | |
CN106228136A (en) | Panorama streetscape method for secret protection based on converging channels feature | |
CN101526955B (en) | Method for automatically withdrawing draft-based network graphics primitives and system thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130130 Termination date: 20141214 |
EXPY | Termination of patent right or utility model |