CN104463192A - Dark environment video target real-time tracking method based on textural features - Google Patents

Dark environment video target real-time tracking method based on textural features

Info

Publication number
CN104463192A
Authority
CN
China
Prior art keywords
carry out
lambda
target
ulbp
matrix
Prior art date
Legal status
Granted
Application number
CN201410610358.6A
Other languages
Chinese (zh)
Other versions
CN104463192B (en
Inventor
孙继平
杜东璧
Current Assignee
China University of Mining and Technology CUMT
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB filed Critical China University of Mining and Technology Beijing CUMTB
Priority to CN201410610358.6A priority Critical patent/CN104463192B/en
Publication of CN104463192A publication Critical patent/CN104463192A/en
Application granted granted Critical
Publication of CN104463192B publication Critical patent/CN104463192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time tracking method for video targets in dark environments based on texture features. A multi-scale rectangular filter serves as the signal sampling matrix and a sparse random Gaussian matrix serves as the compressive sensing matrix, so that sample features can be extracted quickly with a vector integral image algorithm; a template-trimming step effectively reduces redundant computation during vector integration. The method extracts features with a rotation-invariant uniform LBP (ULBP) operator, making it suitable for tracking targets that may rotate or deform at night or under poor underground lighting; it achieves a high recognition rate and provides reliable tracking results.

Description

Real-time tracking method for video targets in dark environments based on texture features
Technical field
The present invention relates to a real-time tracking method for video targets in dark environments based on texture features, and belongs to the field of image pattern recognition.
Background technology
In computer-vision target tracking, a tracking-by-detection framework is widely used: a classifier is trained online with a small number of positive and negative samples, converting the tracking task into a detection task. Because object detection has made major progress and classifier techniques continue to improve, this framework effectively guarantees the success rate of tracking. Detection requires extracting features from the collected samples so that they can be classified and discriminated, and traditional feature extraction relies on hand-crafted constructions. K. H. Zhang et al. proposed a feature extraction method based on compressed sensing (compressive tracking), which convolves generalized Haar features with a series of multi-scale filters to obtain multi-scale features and then reduces their dimensionality with a sparse random Gaussian matrix to preserve real-time performance. However, generalized Haar features are sensitive to illumination brightness and target rotation. The present invention improves the feature extraction pipeline with a rotation-invariant uniform LBP (ULBP) operator so that, while preserving real-time performance and stability, the tracking algorithm can adapt to scenes that easily cause tracking loss, such as low illumination, target rotation, and illumination change.
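As background context only, the sketch below illustrates the sparse random projection idea mentioned above. The ±√s / 0 entry construction is one common sparse substitute for a dense Gaussian measurement matrix and is an assumption here, not the patent's exact matrix; the function name and parameters are likewise illustrative.

```python
import numpy as np

def sparse_random_matrix(d, n, s=3, rng=None):
    """Sparse random projection matrix of shape (d, n).

    Entries are +sqrt(s) with probability 1/(2s), -sqrt(s) with probability 1/(2s),
    and 0 otherwise, so most entries are zero (a common sparse stand-in for a
    dense Gaussian random matrix).
    """
    rng = np.random.default_rng(rng)
    signs = rng.choice([1.0, -1.0, 0.0], size=(d, n),
                       p=[1 / (2 * s), 1 / (2 * s), 1 - 1 / s])
    return np.sqrt(s) * signs

# Project a high-dimensional feature vector x (length n) down to d dimensions.
Phi = sparse_random_matrix(d=50, n=10_000, rng=0)
x = np.random.default_rng(1).random(10_000)
x_compressed = Phi @ x   # length-50 compressed feature
```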
Summary of the invention
Existing tracking algorithms cannot handle target tracking under extreme illumination. To overcome this, the present invention proposes a real-time target tracking method based on texture features suitable for special environments such as underground mines and night scenes. The method extracts texture features with a rotation-invariant uniform LBP (ULBP) operator, so that the extracted features contain rich texture information about the sample; because texture information is insensitive to illumination, the tracker can reach a higher tracking success rate in dim environments.
The invention discloses a real-time tracking method for video targets in dark environments based on texture features, comprising an initialization phase and a target tracking phase. The initialization phase comprises the following steps:
1) At initialization, compute the sparse sampling matrix Θ:
A) compute the signal sampling matrix Φ;
B) compute the sparse sensing matrix Ψ;
C) compute the sparse sampling matrix Θ, where Θ = ΨΦ;
2) Construct a two-class naive Bayes classifier H(x) as a cascade of 50 Bayes classifiers, where each weak classifier h_c(x_c) is based on two normal distributions representing the positive label y = 1 and the negative label y = 0, and (μ_{y,c}, σ_{y,c}) are the parameters of the normal discrimination curve of the weak classifier corresponding to the c-th feature dimension under label y;
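For concreteness, a minimal sketch of how the cascade's Gaussian parameters could be held and initialized. The dictionary layout, array shapes, and use of NumPy are my assumptions; the initial values μ = 0, σ = 1 follow the embodiment described later.

```python
import numpy as np

def init_classifier(d=50):
    """Per-dimension Gaussian parameters for labels y = 0 (negative) and y = 1 (positive).

    params[y] holds (mu, sigma) arrays of length d, one weak classifier per feature dimension.
    """
    return {
        0: {"mu": np.zeros(d), "sigma": np.ones(d)},  # negative label
        1: {"mu": np.zeros(d), "sigma": np.ones(d)},  # positive label
    }

clf = init_classifier(d=50)
```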
The target tracking phase comprises the following steps:
1) Target detection in the k-th frame of the video
A) Collect candidate samples centered on the target O_{k−1} tracked in frame k−1: in frame k, collect the n_y candidate samples whose Euclidean distance satisfies z_y = { z | 0 ≤ ||z − O_{k−1}||_{l2} ≤ r_y^+ };
B) Compute the minimum rectangular region ∪z (z ∈ z_y) containing all candidate samples; apply grayscale conversion, rotation-invariant uniform ULBP encoding and vector integration to this rectangular image patch in turn, finally obtaining the vector integral image I;
C) With the diagonals of the nonzero elements (rectangles) of the sparse sampling matrix Θ as the scale, extract the compressed feature value of each candidate sample from the vector integral image I by diagonal subtraction, z → x (z ∈ z_y);
D) Feed the compressed feature value x_r of each candidate sample into the classifier trained on frame k−1 and compute its classification score; the sample r whose x_r attains the maximum classification score is the target O_k tracked in frame k;
2) Classifier update for the k-th frame
A) Collect positive and negative samples centered on the target O_k tracked in frame k: in frame k, collect n_1 positive samples whose Euclidean distance satisfies z_1 = { z | 0 ≤ ||z − O_k||_{l2} ≤ r_1^+ } and n_0 negative samples whose Euclidean distance satisfies z_0 = { z | r_0^− ≤ ||z − O_k||_{l2} ≤ r_0^+ };
B) Compute the minimum rectangular region ∪z (z ∈ z_1 ∪ z_0) containing all positive and negative samples; apply grayscale conversion, rotation-invariant uniform ULBP encoding and vector integration to this rectangular image patch in turn, finally obtaining the vector integral image I;
C) With the diagonals of the nonzero elements of the sparse sampling matrix Θ as the scale, extract the compressed feature value of each positive and negative sample from the vector integral image I by diagonal subtraction, z → x (z ∈ z_1 ∪ z_0);
D) Update the classifier:
μ′_{y,c} ← (1 − λ)μ_{y,c} + λEX_{y,c}
σ′_{y,c} ← [(1 − λ)σ²_{y,c} + λDX_{y,c} + λ(1 − λ)(μ_{y,c} − EX_{y,c})²]^{1/2}
where EX_{y,c} and DX_{y,c} are the mean and variance of the c-th feature dimension of the positive samples (y = 1) and negative samples (y = 0), respectively.
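A minimal sketch of the update rule above, assuming NumPy arrays of per-dimension means and standard deviations. The patent does not name λ; treating it as a fixed learning-rate parameter (and the example value 0.85) is an assumption.

```python
import numpy as np

def update_classifier(mu, sigma, ex, dx, lam=0.85):
    """Update per-dimension Gaussian parameters for one label y.

    mu, sigma : current parameters, shape (d,)
    ex, dx    : mean and variance of this frame's sample features, shape (d,)
    lam       : λ, assumed to act as a learning rate (0.85 is an illustrative value)
    """
    new_mu = (1 - lam) * mu + lam * ex
    new_sigma = np.sqrt((1 - lam) * sigma ** 2 + lam * dx
                        + lam * (1 - lam) * (mu - ex) ** 2)
    return new_mu, new_sigma
```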
The invention further discloses the method used in the target tracking phase to encode the grayscale image I_gray into the uniform-pattern ULBP code image I_RULBP, which comprises the following step:
1) Take the pixel to be processed as the central pixel; binarize the differences between its gray value g_0 and the gray values of the p pixels g_i ∈ Γ_p at distance R, and concatenate them into a p-bit binary number; if the number U of 0→1 or 1→0 transitions in this binary number is not greater than 2, take the number of 1 bits as the uniform-pattern ULBP code of this pixel, where U = Σ_{i=1}^{p} (1 − δ[ε(g_{mod(i+1,p)} − g_0) − ε(g_i − g_0)]) (ε denoting the unit step function and δ the Kronecker delta);
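A minimal sketch of this per-pixel encoding for the common case P = 8, R = 1, assuming an interior pixel of a 2-D grayscale array. The mapping of non-uniform patterns to code 0 follows the embodiment described later; the function name and use of NumPy are my own.

```python
import numpy as np

# 8 circular neighbours at radius 1, starting from the right and going counter-clockwise
NEIGHBOURS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def ulbp_code(gray, r, c):
    """Rotation-invariant uniform LBP code (0-8) of interior pixel (r, c) of a gray image."""
    g0 = gray[r, c]
    # binarize neighbour-minus-centre differences (unit step: 1 if difference >= 0)
    bits = [1 if gray[r + dr, c + dc] >= g0 else 0 for dr, dc in NEIGHBOURS]
    # uniformity measure U: number of 0<->1 transitions around the circle
    u = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return sum(bits) if u <= 2 else 0   # non-uniform patterns collapse to 0 per the embodiment
```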
The invention further discloses the method used in the target tracking phase to integrate the uniform-pattern ULBP code image I_RULBP into the vector integral image I, which comprises the following steps:
1) Histogram statistics: construct 9-dimensional statistical histograms counting the occurrences of each code 0-8, giving the histogram image H;
2) Vertical accumulation, whose steps are
A) flatten H by columns to obtain a one-dimensional vector V_C;
B) accumulate V_C to obtain the cumulative vector V_{ΣC};
C) break V_{ΣC} back by columns to obtain an image of the same size as H, denoted H_I;
3) Horizontal accumulation, whose steps are
A) flatten H_I by rows to obtain a one-dimensional row vector V_R;
B) accumulate V_R to obtain the cumulative vector V_{ΣR};
C) break V_{ΣR} back by rows to obtain an image of the same size as H, denoted H_II;
H_II is the processed image I; the statistical histogram of any submatrix of H can be obtained from H_II by diagonal subtraction.
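A minimal sketch of the vector integral image and the diagonal-subtraction lookup, implemented here with per-axis cumulative sums over a one-hot histogram volume. This is the standard integral-histogram construction; reading the column-wise and row-wise accumulation above as per-axis prefix sums is my interpretation, and the function names are my own.

```python
import numpy as np

def vector_integral_image(code_img, n_codes=9):
    """Integral histogram of a ULBP code image (integer codes 0..n_codes-1).

    Returns an array of shape (rows, cols, n_codes) whose entry [r, c] is the
    9-bin histogram of the sub-image code_img[:r+1, :c+1].
    """
    rows, cols = code_img.shape
    one_hot = np.zeros((rows, cols, n_codes), dtype=np.int64)
    one_hot[np.arange(rows)[:, None], np.arange(cols)[None, :], code_img] = 1
    return one_hot.cumsum(axis=0).cumsum(axis=1)

def rect_histogram(ii, top, left, bottom, right):
    """9-bin histogram of rectangle [top..bottom] x [left..right] (inclusive),
    recovered from the integral image ii by corner ('diagonal') subtraction."""
    h = ii[bottom, right].copy()
    if top > 0:
        h -= ii[top - 1, right]
    if left > 0:
        h -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        h += ii[top - 1, left - 1]
    return h
```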
Brief description of the drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the flow chart of real-time tracking of video targets in dark environments based on texture features;
Fig. 2 is a schematic diagram of the convolution of the sparse sampling matrix Θ with a sample;
Fig. 3 is the ROI region after encoding;
Embodiment
The specific embodiments of the invention are described in detail below with reference to the drawings. First, the basic flow of the real-time tracking method for video targets in dark environments based on texture features is described. Referring to Fig. 1, the process is divided into an initialization phase, a target tracking phase and a classifier update phase; the concrete steps are as follows. Initialization phase:
1) Compute the sparse sampling matrix Θ;
A) The product of the signal sampling matrix Φ and the sparse sensing matrix Ψ is computed by Monte Carlo simulation: taking the rectangle of the initial target O_1 as the domain, generate 2-3 rectangular boxes of random size at random positions, all contained within O_1, and use these rectangular boxes as the nonzero elements of one row of Θ (see the sketch after this initialization list);
B) Repeat step 1A) d times to obtain, with reference to Fig. 2, all the nonzero elements of the d rows of Θ;
2) Generate the d-dimensional naive Bayes classifier H(x);
A) Generate a two-class Bayes classifier as a weak classifier, whose positive-label critical curve parameters are μ_{1,c} = 0, σ_{1,c} = 1 and whose negative-label critical curve parameters are μ_{0,c} = 0, σ_{0,c} = 1;
B) Repeat step 2A) d times to obtain the d cascaded weak classifiers h_c(x_c) of H(x), where c = 1, 2, …, d;
3) Create the computation template for encoding gray values into rotation-invariant uniform ULBP codes;
The rotation-invariant uniform ULBP operator of radius 1 is adopted as the RULBP computation template: generate an array of length 256 covering 0-255; convert the index of each array element into a binary number and count the number U of 0→1 or 1→0 transitions between adjacent bits of this binary index; if U ≤ 2, use the number of 1 bits in the binary index as the element value, otherwise use 0 as the element value; after this operation the length-256 array contains the 9 codes 0-8;
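A minimal sketch of these two initialization products: the 256-entry ULBP lookup table and the random rectangles forming the nonzero elements of each row of Θ. The circular transition count, the (x, y, w, h) rectangle representation, and the function names are my assumptions; the filter weights attached to each rectangle are not modelled here.

```python
import numpy as np

def build_ulbp_table():
    """256-entry lookup table: 8-bit pattern -> rotation-invariant uniform code 0-8.

    Uniform patterns (at most 2 circular 0<->1 transitions) map to their number of
    1 bits; all other patterns map to 0, giving 9 distinct codes.
    """
    table = np.zeros(256, dtype=np.uint8)
    for v in range(256):
        bits = [(v >> i) & 1 for i in range(8)]
        u = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        table[v] = sum(bits) if u <= 2 else 0
    return table

def sample_theta_rows(target_w, target_h, d=50, rng=None):
    """For each of the d rows of Θ, draw 2-3 random rectangles inside the initial target box.

    Each rectangle is returned as (x, y, w, h) relative to the target box.
    """
    rng = np.random.default_rng(rng)
    rows = []
    for _ in range(d):
        rects = []
        for _ in range(rng.integers(2, 4)):          # 2 or 3 rectangles per row
            w = int(rng.integers(1, target_w + 1))
            h = int(rng.integers(1, target_h + 1))
            x = int(rng.integers(0, target_w - w + 1))
            y = int(rng.integers(0, target_h - h + 1))
            rects.append((x, y, w, h))
        rows.append(rects)
    return rows
```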
Target tracking phase:
1) Target detection in the k-th frame of the video
A) Centered on the top-left vertex of the target O_{k−1} tracked in frame k−1, find all pixels {p_y} whose distance from it satisfies the candidate radius; taking each pixel in {p_y} as a top-left vertex and the size of O_{k−1} as the size, the resulting rectangles are the candidate samples z_y, i.e. z_y = { z | 0 ≤ ||z − O_{k−1}||_{l2} ≤ r_y^+ };
B) The minimum rectangular region ROI containing all candidate samples, with reference to Fig. 3, is computed as ∪z (z ∈ z_y), where the rectangle union operator ∪ is O(l_1, r_1, t_1, b_1) ∪ O(l_2, r_2, t_2, b_2) = O(max(l_1, l_2), min(r_1, r_2), max(t_1, t_2), min(b_1, b_2));
C) Encode the features of the image patch contained in the ROI: for each pixel p in the ROI image patch, take its 8 adjacent pixels, starting from the pixel to its right and arranged counter-clockwise, and denote the 8 ordered points q_1-q_8; compare the gray value of each q_i with that of p in turn and concatenate the 8 comparison results into an 8-bit binary number, whose rotation-invariant uniform ULBP code is taken as the code of this pixel;
D) Perform vector integration on the count matrix H of the coded ROI: flatten H by columns to obtain a one-dimensional vector V_C; accumulate V_C to obtain the cumulative vector V_{ΣC}; break V_{ΣC} back by columns to obtain an image of the same size as H, denoted H_I; flatten H_I by rows to obtain a one-dimensional row vector V_R; accumulate V_R to obtain the cumulative vector V_{ΣR}; break V_{ΣR} back by rows to obtain an image of the same size as H, denoted H_II;
E) Each of the 2-3 nonzero elements in every row of the sparse sampling matrix Θ generated during initialization is a rectangular filter; the filtering results of these filters on a candidate sample z_r are added to give one feature dimension x_{r,c}; performing the same operation for all d rows of Θ yields the d-dimensional feature of candidate sample z_r, x_r = (x_{r,1}, x_{r,2}, …, x_{r,d});
F) Compute the features of all candidate samples z_r ∈ z_y; use the naive Bayes classifier H(x_r; k−1) trained on frame k−1 to classify the feature of each candidate sample and compute its classification score
H(x_r; k−1) = Σ_{c=1}^{d} h_c(x_{r,c}; k−1) = Σ_{c=1}^{d} log [ p(x_{r,c} | y=1) / p(x_{r,c} | y=0) ],
where p(x_{r,c} | y) = (1 / (√(2π) σ_{y,c})) exp( −(x_{r,c} − μ_{y,c})² / (2σ_{y,c}²) );
the candidate sample with the maximum classification score is taken as the target O_k tracked in frame k;
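A minimal sketch of this scoring step, assuming the parameter layout from the initialization sketch above (length-d arrays of μ and σ per label). The small ε added for numerical stability, and the placeholder names `feature` and `candidates` in the usage comment, are my own.

```python
import numpy as np

def classify_score(x, clf, eps=1e-8):
    """Naive Bayes classification score of a d-dimensional feature vector x.

    Returns sum_c [ log p(x_c | y=1) - log p(x_c | y=0) ] under per-dimension Gaussians.
    """
    def log_gauss(v, mu, sigma):
        s = sigma + eps
        return -0.5 * np.log(2 * np.pi) - np.log(s) - (v - mu) ** 2 / (2 * s ** 2)

    return np.sum(log_gauss(x, clf[1]["mu"], clf[1]["sigma"])
                  - log_gauss(x, clf[0]["mu"], clf[0]["sigma"]))

# The candidate with the highest score is taken as the tracked target O_k, e.g.:
# best = max(candidates, key=lambda z: classify_score(feature(z), clf))
```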
2) Classifier update for the k-th frame
A) Centered on the top-left vertex of the target O_k tracked in frame k, find all pixels {p_1} whose distance from it satisfies the positive-sample radius; taking each pixel in {p_1} as a top-left vertex and the size of O_k as the size, the resulting rectangles are the positive samples z_1, i.e. z_1 = { z | 0 ≤ ||z − O_k||_{l2} ≤ r_1^+ };
B) Centered on the top-left vertex of the target O_k tracked in frame k, find all pixels {p_0} whose distance from it satisfies the negative-sample radii; taking each pixel in {p_0} as a top-left vertex and the size of O_k as the size, the resulting rectangles are the negative samples z_0, i.e. z_0 = { z | r_0^− ≤ ||z − O_k||_{l2} ≤ r_0^+ };
C) The minimum rectangular region ROI containing all positive and negative samples, with reference to Fig. 2, is computed as ∪z (z ∈ z_1 ∪ z_0); the rectangle union operator ∪ is the same as in step 1B) of target detection in the k-th frame of the target tracking phase;
D) Repeat step 1C) of target detection in the k-th frame of the target tracking phase;
E) Repeat step 1D) of target detection in the k-th frame of the target tracking phase;
F) Each of the 2-3 nonzero elements in every row of the sparse sampling matrix Θ generated during initialization is a rectangular filter; the filtering results of these filters on a positive or negative sample z_r are added to give one feature dimension x_{r,c}; performing the same operation for all d rows of Θ yields the d-dimensional feature of the positive or negative sample z_r, x_r = (x_{r,1}, x_{r,2}, …, x_{r,d});
G) Compute the features of all positive samples z_r ∈ z_1, where the feature of each positive sample z_r is the d-dimensional x_r = (x_{r,1}, x_{r,2}, …, x_{r,d}); compute the mean EX_{1,c} = (1/n_1) Σ_{r=1}^{n_1} x_{1,r,c} and the variance DX_{1,c} = (1/n_1) Σ_{r=1}^{n_1} x²_{1,r,c} − (EX_{1,c})² of all positive-sample features, where c = 1, 2, …, d (see the sketch after these steps);
H) Compute the features of all negative samples z_r ∈ z_0, where the feature of each negative sample z_r is the d-dimensional x_r = (x_{r,1}, x_{r,2}, …, x_{r,d}); compute the mean EX_{0,c} = (1/n_0) Σ_{r=1}^{n_0} x_{0,r,c} and the variance DX_{0,c} = (1/n_0) Σ_{r=1}^{n_0} x²_{0,r,c} − (EX_{0,c})² of all negative-sample features, where c = 1, 2, …, d;
I) Update all weak classifiers:
μ′_{y,c} ← (1 − λ)μ_{y,c} + λEX_{y,c}
σ′_{y,c} ← [(1 − λ)σ²_{y,c} + λDX_{y,c} + λ(1 − λ)(μ_{y,c} − EX_{y,c})²]^{1/2}
where c = 1, 2, …, d.
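A minimal sketch of steps G) and H), computing the per-dimension mean EX and variance DX that feed the update rule of step I) (a sketch of that rule appears in the summary section above). Stacking the samples into an (n, d) NumPy array is my assumption.

```python
import numpy as np

def feature_stats(features):
    """Per-dimension mean EX and variance DX of sample features.

    features : array of shape (n_samples, d), one row per positive or negative sample.
    Returns (ex, dx), each of shape (d,), with dx = mean(x^2) - mean(x)^2 as in the text.
    """
    ex = features.mean(axis=0)
    dx = (features ** 2).mean(axis=0) - ex ** 2
    return ex, dx

# Example: update the positive-label parameters of the classifier dict from the earlier sketches.
# ex1, dx1 = feature_stats(pos_features)   # pos_features: (n_1, d) array of positive-sample features
# clf[1]["mu"], clf[1]["sigma"] = update_classifier(clf[1]["mu"], clf[1]["sigma"], ex1, dx1)
```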

Claims (3)

1. A real-time tracking method for video targets in dark environments based on texture features, characterized in that it comprises an initialization phase and a target tracking phase, the initialization phase comprising the following steps:
1) At initialization, compute the sparse sampling matrix Θ:
A) compute the signal sampling matrix Φ;
B) compute the sparse sensing matrix Ψ;
C) compute the sparse sampling matrix Θ, where Θ = ΨΦ;
2) Construct a two-class naive Bayes classifier H(x) as a cascade of 50 Bayes classifiers, where each weak classifier h_c(x_c) is based on two normal distributions representing the positive label y = 1 and the negative label y = 0, and (μ_{y,c}, σ_{y,c}) are the parameters of the normal discrimination curve of the weak classifier corresponding to the c-th feature dimension under label y;
The target tracking phase comprises the following steps:
1) Perform target detection on the k-th frame of the video
A) Collect candidate samples centered on the target O_{k−1} tracked in frame k−1: in frame k, collect the n_y candidate samples whose Euclidean distance satisfies z_y = { z | 0 ≤ ||z − O_{k−1}||_{l2} ≤ r_y^+ };
B) Compute the minimum rectangular region ∪z (z ∈ z_y) containing all candidate samples; apply grayscale conversion, rotation-invariant uniform ULBP encoding and vector integration to this rectangular image patch in turn, finally obtaining the vector integral image I;
C) With the diagonals of the nonzero elements of the sparse sampling matrix Θ as the scale, extract the compressed feature value of each candidate sample from the vector integral image I by diagonal subtraction, z → x (z ∈ z_y);
D) Feed the compressed feature value x_r of each candidate sample into the naive Bayes classifier trained on frame k−1, classify the feature of each candidate sample and compute its classification score; the sample r whose x_r attains the maximum classification score is the target O_k tracked in frame k;
2) Classifier update for the k-th frame
A) Collect positive and negative samples centered on the target O_k tracked in frame k: in frame k, collect n_1 positive samples whose Euclidean distance satisfies z_1 = { z | 0 ≤ ||z − O_k||_{l2} ≤ r_1^+ } and n_0 negative samples whose Euclidean distance satisfies z_0 = { z | r_0^− ≤ ||z − O_k||_{l2} ≤ r_0^+ };
B) Compute the minimum rectangular region ∪z (z ∈ z_1 ∪ z_0) containing all positive and negative samples; apply grayscale conversion, rotation-invariant uniform ULBP encoding and vector integration to this rectangular image patch in turn, finally obtaining the vector integral image I;
C) With the diagonals of the nonzero elements of the sparse sampling matrix Θ as the scale, extract the compressed feature value of each positive and negative sample from the vector integral image I by diagonal subtraction, z → x (z ∈ z_1 ∪ z_0);
D) Update the classifier:
μ′_{y,c} ← (1 − λ)μ_{y,c} + λEX_{y,c}
σ′_{y,c} ← [(1 − λ)σ²_{y,c} + λDX_{y,c} + λ(1 − λ)(μ_{y,c} − EX_{y,c})²]^{1/2}
where EX_{y,c} and DX_{y,c} are the mean and variance of the c-th feature dimension of the positive samples (y = 1) and negative samples (y = 0), respectively.
2. The real-time tracking method for video targets in dark environments based on texture features according to claim 1, characterized in that the method used in the target tracking phase to encode the grayscale image I_gray into the uniform-pattern ULBP code image I_RULBP is:
1) Take the pixel to be processed as the central pixel; binarize the differences between its gray value g_0 and the gray values of the p pixels g_i ∈ Γ_p at distance R, and concatenate them into a p-bit binary number; if the number U of 0→1 or 1→0 transitions in this binary number is not greater than 2, take the number of 1 bits as the uniform-pattern ULBP code of this pixel, where U = Σ_{i=1}^{p} (1 − δ[ε(g_{mod(i+1,p)} − g_0) − ε(g_i − g_0)]) (ε denoting the unit step function and δ the Kronecker delta).
3. The real-time tracking method for video targets in dark environments based on texture features according to claim 1, characterized in that the method used in the target tracking phase to integrate the uniform-pattern ULBP code image I_RULBP into the vector integral image I comprises the following steps:
1) Histogram statistics: construct 9-dimensional statistical histograms counting the occurrences of each code 0-8, giving the histogram image H;
2) Vertical accumulation, whose steps are
A) flatten H by columns to obtain a one-dimensional vector V_C;
B) accumulate V_C to obtain the cumulative vector V_{ΣC};
C) break V_{ΣC} back by columns to obtain an image of the same size as H, denoted H_I;
3) Horizontal accumulation, whose steps are
A) flatten H_I by rows to obtain a one-dimensional row vector V_R;
B) accumulate V_R to obtain the cumulative vector V_{ΣR};
C) break V_{ΣR} back by rows to obtain an image of the same size as H, denoted H_II;
H_II is the processed image I; the statistical histogram of any submatrix of H can be obtained from H_II by diagonal subtraction.
CN201410610358.6A 2014-11-04 2014-11-04 Dark environment video target real-time tracking method based on textural features Active CN104463192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410610358.6A CN104463192B (en) 2014-11-04 2014-11-04 Dark environment video target real-time tracking method based on textural features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410610358.6A CN104463192B (en) 2014-11-04 2014-11-04 Dark environment video target real-time tracking method based on textural features

Publications (2)

Publication Number Publication Date
CN104463192A true CN104463192A (en) 2015-03-25
CN104463192B CN104463192B (en) 2018-01-05

Family

ID=52909206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410610358.6A Active CN104463192B (en) 2014-11-04 2014-11-04 Dark environment video target real-time tracking method based on textural features

Country Status (1)

Country Link
CN (1) CN104463192B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243218A1 (en) * 2002-12-16 2011-10-06 Xiaochun Nie Method of implementing improved rate control for a multimedia compression and encoding system
CN102915562A (en) * 2012-09-27 2013-02-06 天津大学 Compressed sensing-based multi-view target tracking and 3D target reconstruction system and method
CN103310466A (en) * 2013-06-28 2013-09-18 安科智慧城市技术(中国)有限公司 Single target tracking method and achievement device thereof
CN103632382A (en) * 2013-12-19 2014-03-12 中国矿业大学(北京) Compressive sensing-based real-time multi-scale target tracking method

Also Published As

Publication number Publication date
CN104463192B (en) 2018-01-05

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant