CN112270328A - Traffic signal lamp detection method fused with HOG-LBP function - Google Patents
Traffic signal lamp detection method fused with HOG-LBP function
- Publication number
- CN112270328A (application CN202011119655.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- lbp
- hog
- feature
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a traffic signal light detection method with fused HOG-LBP features. The method first inputs training-sample traffic light images and then extracts the HOG features and LBP features of the training images separately. PCA + LDA dimensionality reduction and feature fusion are applied to the extracted HOG and LBP features to obtain the HOG-LBP features. The HOG-LBP features of the images are fed into a support vector machine (SVM) algorithm for training to obtain an SVM traffic signal light classifier, and the trained SVM classifier is then used to detect images. Because the method combines the HOG feature and the LBP feature, its descriptive power is richer than that of any single feature and it compensates for the limitations of a single feature, thereby improving the recognition rate. While the fused features improve the recognition rate, the combined PCA and LDA method reduces the feature dimensionality, which greatly shortens the recognition time and improves the robustness of the system.
Description
Technical Field
The invention belongs to the field of target detection, and particularly relates to a traffic signal light detection method with fused HOG-LBP features.
Background
In real life, the detection of traffic signal lights and of their countdown digits is easily affected by changes in external light and weather; unavoidable factors such as backlighting, occlusion, dirt and background noise increase the difficulty of detection and recognition. The HOG feature is used in the field of target detection and tracking to describe the key-point features of an object, and the LBP feature has been applied successfully in texture analysis; both features have strong expressive power, but neither alone copes with the more complex surroundings of traffic lights. In a complex traffic-signal environment a single feature has limited expressive power, so several different features need to be fused in order to obtain an accurate representation of traffic-signal features.
To address these problems, a traffic signal light recognition method based on the fusion of HOG and LBP features is proposed: the HOG features and LBP features are first extracted separately, the PCA-LDA method is then used for dimensionality reduction, and finally a feature-fusion strategy is used for recognition.
Disclosure of Invention
The invention aims to provide a traffic signal light detection method with fused HOG-LBP features, which improves the robustness and recognition accuracy of the algorithm while maintaining its speed.
The technical scheme adopted by the invention is a traffic signal light detection method with fused HOG-LBP features, implemented according to the following steps:
step 1, inputting a training sample traffic light image;
step 2, respectively extracting the HOG characteristic and the LBP characteristic of the training sample image in the step 1;
step 3, performing PCA + LDA dimension reduction on the HOG characteristic and the LBP characteristic of the image extracted in the step 2;
step 4, performing feature fusion on the HOG feature and the LBP feature after the dimensionality reduction obtained in the step 3 to obtain a HOG-LBP feature;
step 5, putting the HOG-LBP characteristics of the image obtained in the step 4 into a Support Vector Machine (SVM) algorithm for training to obtain an SVM traffic signal lamp classifier;
and 6, detecting the image through the SVM classifier obtained in the step 5.
The invention is also characterized in that:
in step 1, after a training sample traffic light image is input, it is judged whether the sample image is a grayscale image; if not, the image is converted into a grayscale image;
wherein the HOG feature extraction process in the step 2 mainly comprises the following steps:
the method comprises the steps of image normalization, gradient calculation, orientation-weighted voting based on gradient magnitude, and feature vector normalization; the specific calculation process is as follows:
assuming that the size of the candidate region is 80 × 64 and the block size is set to 8 × 8, the candidate region contains 80 non-overlapping blocks in total;
firstly, the gradient direction and magnitude of each block are calculated; the gradient is computed with the simple centrally symmetric operator [-1, 0, 1], as shown in the following formulas:

G_x(x, y) = I(x+1, y) - I(x-1, y), G_y(x, y) = I(x, y+1) - I(x, y-1) (1)

m(x, y) = √(G_x(x, y)^2 + G_y(x, y)^2), θ(x, y) = arctan(G_y(x, y) / G_x(x, y)) (2)

where I(x, y) is the pixel value at image point (x, y), θ(x, y) is the gradient direction at that point, and m(x, y) is the corresponding gradient magnitude;
then the cell size is set to 4 × 4, a gradient histogram is computed within each block on a per-cell basis, and each pixel votes into the histogram with a weight given by its gradient magnitude;
carrying out contrast normalization on the cells in each overlapped block;
finally, combining the histogram vectors in all blocks to obtain a final HOG characteristic vector;
the LBP feature extraction process in the step 2 mainly comprises the following steps:
the LBP operator is usually denoted (P, R), where P is the number of pixels in the neighborhood and R is the neighborhood radius; the basic LBP operator uses the (8, 1) neighborhood;
first, each 3 × 3 neighborhood pixel value p_i (i = 1, 2, ..., 8) is compared with the central pixel value p_0 and thresholded, with the calculation formula:

b_i = 1 if p_i ≥ p_0, b_i = 0 otherwise, i = 1, 2, ..., 8 (3)
arranging b_i (i = 1, 2, ..., 8) in clockwise order yields an 8-bit binary code, and converting this binary code to a decimal number gives the LBP value of the central pixel;
after the LBP operator has been applied to the traffic signal light image, the LBP value f_l(x, y) of every pixel in the image is counted to obtain a histogram feature vector H_i, which can be defined as:

H_i = Σ_{x,y} I(f_l(x, y) = i), i = 0, 1, ..., n-1 (4)
where n is the number of distinct labels produced by the LBP operator; with the 3 × 3 neighborhood operator used here, n = 256; I(x) = 1 when x is true and I(x) = 0 when x is false;
dividing the image into regions R_0, R_1, ..., R_{m-1}, the histogram H_{i,j} of each region can be defined as:

H_{i,j} = Σ_{x,y} I(f_l(x, y) = i) · I((x, y) ∈ R_j), i = 0, 1, ..., n-1; j = 0, 1, ..., m-1 (5)
the PCA + LDA dimension reduction of the HOG characteristic and the LBP characteristic in the step 3 specifically comprises the following steps:
firstly, PCA dimensionality reduction is carried out:
assume that there are N traffic signal samples {x_1, x_2, ..., x_N} belonging to c classes {X_1, X_2, ..., X_c}, and that each sample image has size w × h, so each sample image has dimension n = w × h; all traffic signal samples thus give N n-dimensional column vectors, and the covariance matrix of the training samples is obtained as:

S_T = Σ_{i=1}^{N} (x_i - μ)(x_i - μ)^T (6)

where μ is the mean of the samples;
an optimal projection matrix W_{opt1} is then found such that:

W_{opt1} = arg max_W |W^T S_T W| = [w_1, w_2, ..., w_m] (7)
where W is the matrix whose rows are the unit eigenvectors of the covariance matrix S_T, W^T is the transpose of W, and w_i is the eigenvector of the scatter matrix S_T corresponding to the i-th largest eigenvalue; the eigenvectors corresponding to the first m largest eigenvalues are taken to approximately represent the original data;
then LDA dimensionality reduction is carried out:
the between-class scatter matrix S_b and the within-class scatter matrix S_w are respectively:

S_b = Σ_{i=1}^{c} N_i (μ_i - μ)(μ_i - μ)^T (8)

S_w = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k - μ_i)(x_k - μ_i)^T (9)
where μ_i is the mean of class i and N_i is the number of samples in class X_i; if S_w is nonsingular, an optimal orthogonal matrix can be obtained such that the ratio of the projected between-class scatter to the projected within-class scatter is maximized, namely:

W_{opt2} = arg max_W |W^T S_b W| / |W^T S_w W| (10)
equation (10) can be computed via the generalized eigenvalue problem:

S_b w_i = λ_i S_w w_i (11)

where i = 1, 2, ..., m and w_i is the eigenvector of the matrix S_w^{-1} S_b corresponding to the eigenvalue λ_i, the eigenvalues being arranged from large to small;
the specific process of performing feature fusion on the HOG feature and the LBP feature after dimension reduction in the step 4 is as follows:
feature fusion is performed in a weighted manner, with the fusion formula:

f(c) = Σ_{i=1}^{m} w_i c_i (12)

where m is the number of classifiers, w_i and c_i are respectively the weight and the output score of the i-th classifier, and f(c) is the score output after feature fusion; the weight w_i of each classifier is computed from its equal error rate E_i;
assuming that there are m different classifiers with a traffic light image feature of x, when estimating the true classification discriminant function, there are m different discriminant functions:
g_i(x) = h(x) + ε_i(x), i = 1, 2, ..., m (14)

where h(x) is the true classification discriminant function, g_i(x) is the discriminant function of the i-th classifier, and ε_i(x) is the error function of g_i(x) with respect to the true function;
after feature fusion, the mean square error of the whole feature-fusion system can be expressed in terms of the error functions ε_i(x) of the individual classifiers.
wherein the step 6 specifically comprises: the SVM traffic signal light classifier trained in the preceding steps detects the image; if the classification result is true, the target in the image is considered to be a traffic signal light, otherwise the target in the image is not considered to be a traffic signal light.
The invention has the beneficial effects that:
the signal traffic light detection method fused with the HOG-LBP is used for carrying out traffic light identification based on the HOG-LPB fusion characteristics. The method has constant scale, can perform reliable traffic signal lamp detection in a complex traffic signal lamp scene, and is convenient for next identification. The method combines the HOG characteristic and the LPB characteristic, is richer than a single characteristic in description performance, and can make up the limitation of the single characteristic so as to improve the recognition rate. When the recognition rate is improved by adopting the fusion features, the feature dimensionality reduction is carried out by adopting a method of combining PCA and LDA, so that the recognition time is greatly shortened, and the robustness of the system is improved.
Drawings
FIG. 1 is a diagram of the HOG-LBP fusion framework used in the traffic signal light detection method with fused HOG-LBP features of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a traffic signal light detection method with fused HOG-LBP features which, as shown in FIG. 1, is implemented according to the following steps:
step 1, inputting a training sample traffic light image, judging whether the sample image is a gray image, and if not, converting the image into the gray image;
Step 2, feature extraction, which is divided into two parts: HOG feature extraction and LBP feature extraction.
The HOG features of the image are obtained first. The main steps are image normalization, gradient calculation, orientation-weighted voting based on gradient magnitude, and feature vector normalization; the specific calculation process is as follows:
assuming that the size of the candidate region is 80 × 64 and the block size is set to 8 × 8, the candidate region contains 80 non-overlapping blocks in total;
firstly, the gradient direction and magnitude of each block are calculated; the gradient is computed with the simple centrally symmetric operator [-1, 0, 1], as shown in the following formulas:

G_x(x, y) = I(x+1, y) - I(x-1, y), G_y(x, y) = I(x, y+1) - I(x, y-1) (1)

m(x, y) = √(G_x(x, y)^2 + G_y(x, y)^2), θ(x, y) = arctan(G_y(x, y) / G_x(x, y)) (2)

where I(x, y) is the pixel value at image point (x, y), θ(x, y) is the gradient direction at that point, and m(x, y) is the corresponding gradient magnitude;
then the cell size is set to 4 × 4, a gradient histogram is computed within each block on a per-cell basis, and each pixel votes into the histogram with a weight given by its gradient magnitude;
then, carrying out contrast normalization on the cells in each overlapped block to eliminate the influence of illumination;
and finally, combining the histogram vectors in all blocks to obtain a final HOG characteristic vector.
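By way of illustration, a minimal Python sketch of the HOG step described above is given below, using scikit-image. The cell size (4 × 4) and block size (8 × 8 pixels, i.e. 2 × 2 cells) follow the text; the 9 orientation bins and the library's overlapping-block layout are assumptions, since the text does not fix a bin count and describes non-overlapping blocks.

```python
# Sketch of HOG extraction for an 80x64 grayscale candidate region.
# Assumptions: 9 orientation bins; scikit-image's overlapping-block layout
# (the text itself describes 80 non-overlapping 8x8 blocks).
import numpy as np
from skimage.feature import hog

def extract_hog(gray_patch: np.ndarray) -> np.ndarray:
    """Return the HOG descriptor of an 80x64 grayscale candidate region."""
    assert gray_patch.shape == (80, 64), "candidate region assumed to be 80x64"
    return hog(
        gray_patch,
        orientations=9,           # assumed number of orientation bins
        pixels_per_cell=(4, 4),   # cell size 4x4, as in the text
        cells_per_block=(2, 2),   # 8x8-pixel blocks = 2x2 cells
        block_norm="L2-Hys",      # per-block contrast normalization
        feature_vector=True,      # concatenate all block histograms
    )
```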
Image LBP features are then acquired:
the basic idea of LBP is to describe local texture features by the binary code obtained from comparing the central pixel with the pixels in its circular neighborhood. The LBP operator is usually denoted (P, R), where P is the number of pixels in the neighborhood and R is the neighborhood radius; the basic LBP operator uses the (8, 1) neighborhood;
first, each 3 × 3 neighborhood pixel value p_i (i = 1, 2, ..., 8) is compared with the central pixel value p_0 and thresholded, with the calculation formula:

b_i = 1 if p_i ≥ p_0, b_i = 0 otherwise, i = 1, 2, ..., 8 (3)
arranging b_i (i = 1, 2, ..., 8) in clockwise order yields an 8-bit binary code, and converting this binary code to a decimal number gives the LBP value of the central pixel;
after the LBP operator has been applied to the traffic signal light image, the LBP value f_l(x, y) of every pixel in the image is counted to obtain a histogram feature vector H_i, which can be defined as:

H_i = Σ_{x,y} I(f_l(x, y) = i), i = 0, 1, ..., n-1 (4)
where n is the number of distinct labels produced by the LBP operator; with the 3 × 3 neighborhood operator used here, n = 256; I(x) = 1 when x is true and I(x) = 0 when x is false;
to better characterize the traffic signal, the image is divided into regions R_0, R_1, ..., R_{m-1}, and the histogram H_{i,j} of each region can be defined as:

H_{i,j} = Σ_{x,y} I(f_l(x, y) = i) · I((x, y) ∈ R_j), i = 0, 1, ..., n-1; j = 0, 1, ..., m-1 (5)
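A minimal Python sketch of this LBP step follows, assuming the basic (8, 1) operator with 256 labels and, purely for illustration, a 2 × 2 grid of regions (the text only says the image is divided into regions R_0, ..., R_{m-1}):

```python
# Sketch of LBP histogram extraction: basic (8, 1) operator, 256-bin
# histogram per region, histograms concatenated over an assumed 2x2 grid.
import numpy as np
from skimage.feature import local_binary_pattern

def extract_lbp(gray_patch: np.ndarray, grid=(2, 2)) -> np.ndarray:
    """Return concatenated per-region LBP histograms of a grayscale patch."""
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="default")  # labels 0..255
    hists = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for region in np.array_split(row, grid[1], axis=1):
            h, _ = np.histogram(region, bins=256, range=(0, 256))
            hists.append(h / max(h.sum(), 1))  # normalize each region's histogram
    return np.concatenate(hists)
```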
step 3, performing PCA + LDA dimension reduction on the HOG characteristic and the LBP characteristic:
firstly, PCA dimensionality reduction is carried out:
assume that there are N traffic signal samples {x_1, x_2, ..., x_N} belonging to c classes {X_1, X_2, ..., X_c}, and that each sample image has size w × h, so each sample image has dimension n = w × h. All traffic signal samples thus give N n-dimensional column vectors, and the covariance matrix of the training samples is obtained as:

S_T = Σ_{i=1}^{N} (x_i - μ)(x_i - μ)^T (6)

where μ is the mean of the samples.
An optimal projection matrix W_{opt1} is then found such that:

W_{opt1} = arg max_W |W^T S_T W| = [w_1, w_2, ..., w_m] (7)
where W is the matrix whose rows are the unit eigenvectors of the covariance matrix S_T, W^T is the transpose of W, and w_i is the eigenvector of the scatter matrix S_T corresponding to the i-th largest eigenvalue; the eigenvectors corresponding to the first m largest eigenvalues are taken to approximately represent the original data;
then LDA dimensionality reduction is carried out:
the between-class scatter matrix S_b and the within-class scatter matrix S_w are respectively:

S_b = Σ_{i=1}^{c} N_i (μ_i - μ)(μ_i - μ)^T (8)

S_w = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k - μ_i)(x_k - μ_i)^T (9)
where μ_i is the mean of class i and N_i is the number of samples in class X_i. If S_w is nonsingular, an optimal orthogonal matrix can be obtained such that the ratio of the projected between-class scatter to the projected within-class scatter is maximized, namely:

W_{opt2} = arg max_W |W^T S_b W| / |W^T S_w W| (10)
equation (10) can be computed via the generalized eigenvalue problem:

S_b w_i = λ_i S_w w_i (11)

where i = 1, 2, ..., m and w_i is the eigenvector of the matrix S_w^{-1} S_b corresponding to the eigenvalue λ_i, the eigenvalues being arranged from large to small.
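A minimal Python sketch of the PCA + LDA reduction, applied separately to the HOG and the LBP feature matrices before fusion, is shown below. The 95% retained-variance threshold for PCA is an assumed value; the text only states that the first m principal components are kept, and LDA then projects onto at most c - 1 discriminant directions.

```python
# Sketch of PCA followed by LDA on one feature set (HOG or LBP).
# Assumption: keep the principal components explaining 95% of the variance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pca_lda_reduce(features: np.ndarray, labels: np.ndarray):
    """Fit PCA then LDA on (n_samples, n_features) data; return models and reduced data."""
    pca = PCA(n_components=0.95)        # W_opt1: keep the top principal components
    x_pca = pca.fit_transform(features)
    lda = LinearDiscriminantAnalysis()  # maximizes the between/within-class scatter ratio
    x_lda = lda.fit_transform(x_pca, labels)
    return pca, lda, x_lda
```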
Step 4, HOG-LBP feature fusion:
feature fusion is performed in a weighted manner, with the fusion formula:

f(c) = Σ_{i=1}^{m} w_i c_i (12)

where m is the number of classifiers, w_i and c_i are respectively the weight and the output score of the i-th classifier, and f(c) is the score output after feature fusion; the weight w_i of each classifier is computed from its equal error rate E_i;
assuming that there are m different classifiers with a traffic light image feature of x, when estimating the true classification discriminant function, there are m different discriminant functions:
g_i(x) = h(x) + ε_i(x), i = 1, 2, ..., m (14)

where h(x) is the true classification discriminant function, g_i(x) is the discriminant function of the i-th classifier, and ε_i(x) is the error function of g_i(x) with respect to the true function;
after feature fusion, the mean square error of the whole feature-fusion system can be expressed in terms of the error functions ε_i(x) of the individual classifiers.
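A hedged Python sketch of the weighted fusion of step 4 follows. The exact weight formula of the original text is not reproduced here; the inverse-equal-error-rate normalization below is only an assumption that respects the stated idea that each weight w_i is derived from the classifier's equal error rate E_i and that the fused score is the weighted sum of equation (12).

```python
# Sketch of score-level weighted fusion: f = sum_i w_i * c_i (equation (12)).
# Assumption: weights are inversely proportional to each classifier's equal
# error rate E_i and normalized to sum to 1 (the original weight formula is
# not reproduced in this text).
import numpy as np

def fuse_scores(scores, equal_error_rates) -> float:
    """scores: per-classifier outputs c_i; equal_error_rates: E_i values."""
    w = 1.0 / np.asarray(equal_error_rates, dtype=float)  # smaller E_i -> larger weight
    w = w / w.sum()                                       # normalize the weights
    return float(np.dot(w, np.asarray(scores, dtype=float)))
```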
Step 5, the HOG-LBP fused feature vectors are trained with a linear support vector machine (SVM) algorithm to obtain an SVM traffic signal light classifier;
Step 6, the trained SVM traffic signal light classifier is used to detect the image; if the classification result is true, the target in the image is considered to be a traffic signal light, otherwise the target in the image is not considered to be a traffic signal light.
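Finally, a minimal Python sketch of steps 5 and 6: training a linear SVM on the fused HOG-LBP vectors and classifying a candidate region. The regularization parameter C = 1.0 is an assumed default; the fused vectors are taken as already computed by the pipeline sketched above.

```python
# Sketch of steps 5-6: train a linear SVM on fused HOG-LBP vectors
# (label 1 = traffic light, 0 = background), then classify a candidate region.
import numpy as np
from sklearn.svm import LinearSVC

def train_classifier(fused_vectors: np.ndarray, labels: np.ndarray) -> LinearSVC:
    """Step 5: fit a linear SVM on (n_samples, n_features) fused features."""
    clf = LinearSVC(C=1.0)          # C = 1.0 is an assumed value
    clf.fit(fused_vectors, labels)
    return clf

def is_traffic_light(clf: LinearSVC, fused_vector: np.ndarray) -> bool:
    """Step 6: True if the candidate region is classified as a traffic light."""
    return bool(clf.predict(fused_vector.reshape(1, -1))[0] == 1)
```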
Claims (7)
1. A traffic signal light detection method with fused HOG-LBP features, characterized by comprising the following steps:
step 1, inputting a training sample traffic light image;
step 2, respectively extracting the HOG characteristic and the LBP characteristic of the training sample image in the step 1;
step 3, performing PCA + LDA dimension reduction on the HOG characteristic and the LBP characteristic of the image extracted in the step 2;
step 4, performing feature fusion on the HOG feature and the LBP feature after the dimensionality reduction obtained in the step 3 to obtain a HOG-LBP feature;
step 5, putting the HOG-LBP characteristics of the image obtained in the step 4 into a Support Vector Machine (SVM) algorithm for training to obtain an SVM traffic signal lamp classifier;
and 6, detecting the image through the SVM classifier obtained in the step 5.
2. The method as claimed in claim 1, wherein in step 1, after the training sample traffic light image is input, it is determined whether the sample image is a gray image, and if not, the image is converted into a gray image.
3. The method as claimed in claim 1, wherein the HOG feature extraction process in step 2 mainly comprises:
the method comprises the steps of image normalization, gradient calculation, orientation-weighted voting based on gradient magnitude, and feature vector normalization; the specific calculation process is as follows:
assuming that the size of the candidate region is 80 × 64 and the block size is set to 8 × 8, the candidate region contains 80 non-overlapping blocks in total;
firstly, the gradient direction and magnitude of each block are calculated; the gradient is computed with the simple centrally symmetric operator [-1, 0, 1], as shown in the following formulas:

G_x(x, y) = I(x+1, y) - I(x-1, y), G_y(x, y) = I(x, y+1) - I(x, y-1) (1)

m(x, y) = √(G_x(x, y)^2 + G_y(x, y)^2), θ(x, y) = arctan(G_y(x, y) / G_x(x, y)) (2)

where I(x, y) is the pixel value at image point (x, y), θ(x, y) is the gradient direction at that point, and m(x, y) is the corresponding gradient magnitude;
then the cell size is set to 4 × 4, a gradient histogram is computed within each block on a per-cell basis, and each pixel votes into the histogram with a weight given by its gradient magnitude;
carrying out contrast normalization on the cells in each overlapped block;
and finally, combining the histogram vectors in all blocks to obtain a final HOG characteristic vector.
4. The method as claimed in claim 1, wherein the LBP feature extraction process in step 2 mainly comprises:
the LBP operator is usually denoted (P, R), where P is the number of pixels in the neighborhood and R is the neighborhood radius; the basic LBP operator uses the (8, 1) neighborhood;
first, each 3 × 3 neighborhood pixel value p_i (i = 1, 2, ..., 8) is compared with the central pixel value p_0 and thresholded, with the calculation formula:

b_i = 1 if p_i ≥ p_0, b_i = 0 otherwise, i = 1, 2, ..., 8 (3)
arranging b_i (i = 1, 2, ..., 8) in clockwise order yields an 8-bit binary code, and converting this binary code to a decimal number gives the LBP value of the central pixel;
after the LBP operator has been applied to the traffic signal light image, the LBP value f_l(x, y) of every pixel in the image is counted to obtain a histogram feature vector H_i, which can be defined as:

H_i = Σ_{x,y} I(f_l(x, y) = i), i = 0, 1, ..., n-1 (4)
where n is the number of distinct labels produced by the LBP operator; with the 3 × 3 neighborhood operator used here, n = 256; I(x) = 1 when x is true and I(x) = 0 when x is false;
dividing the image into regions R_0, R_1, ..., R_{m-1}, the histogram H_{i,j} of each region can be defined as:

H_{i,j} = Σ_{x,y} I(f_l(x, y) = i) · I((x, y) ∈ R_j), i = 0, 1, ..., n-1; j = 0, 1, ..., m-1 (5)
5. The method as claimed in claim 1, wherein the PCA + LDA dimension reduction of the HOG feature and the LBP feature in step 3 specifically comprises:
firstly, PCA dimensionality reduction is carried out:
assume that there are N traffic signal samples {x_1, x_2, ..., x_N} belonging to c classes {X_1, X_2, ..., X_c}, and that each sample image has size w × h, so each sample image has dimension n = w × h; all traffic signal samples thus give N n-dimensional column vectors, and the covariance matrix of the training samples is obtained as:

S_T = Σ_{i=1}^{N} (x_i - μ)(x_i - μ)^T (6)

where μ is the mean of the samples;
an optimal projection matrix W_{opt1} is then found such that:

W_{opt1} = arg max_W |W^T S_T W| = [w_1, w_2, ..., w_m] (7)
where W is the matrix whose rows are the unit eigenvectors of the covariance matrix S_T, W^T is the transpose of W, and w_i is the eigenvector of the scatter matrix S_T corresponding to the i-th largest eigenvalue; the eigenvectors corresponding to the first m largest eigenvalues are taken to approximately represent the original data;
performing LDA dimension reduction:
the between-class scatter matrix S_b and the within-class scatter matrix S_w are respectively:

S_b = Σ_{i=1}^{c} N_i (μ_i - μ)(μ_i - μ)^T (8)

S_w = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k - μ_i)(x_k - μ_i)^T (9)
where μ_i is the mean of class i and N_i is the number of samples in class X_i; if S_w is nonsingular, an optimal orthogonal matrix can be obtained such that the ratio of the projected between-class scatter to the projected within-class scatter is maximized, namely:

W_{opt2} = arg max_W |W^T S_b W| / |W^T S_w W| (10)
equation (10) can be computed via the generalized eigenvalue problem S_b w_i = λ_i S_w w_i (11), where i = 1, 2, ..., m and w_i is the eigenvector of the matrix S_w^{-1} S_b corresponding to the eigenvalue λ_i, the eigenvalues being arranged from large to small.
6. The method as claimed in claim 1, wherein the step 4 of feature fusion of the HOG features and LBP features comprises the following specific steps:
feature fusion is performed in a weighted manner, with the fusion formula:

f(c) = Σ_{i=1}^{m} w_i c_i (12)

where m is the number of classifiers, w_i and c_i are respectively the weight and the output score of the i-th classifier, and f(c) is the score output after feature fusion; the weight w_i of each classifier is computed from its equal error rate E_i;
assuming that there are m different classifiers with a traffic light image feature of x, when estimating the true classification discriminant function, there are m different discriminant functions:
g_i(x) = h(x) + ε_i(x), i = 1, 2, ..., m (14)

where h(x) is the true classification discriminant function, g_i(x) is the discriminant function of the i-th classifier, and ε_i(x) is the error function of g_i(x) with respect to the true function;
after feature fusion, the mean square error of the whole feature-fusion system can be expressed in terms of the error functions ε_i(x) of the individual classifiers.
7. The method as claimed in claim 1, wherein the step 6 specifically comprises: the SVM traffic signal light classifier trained through the preceding steps detects the image; if the classification result is true, the target in the image is considered to be a traffic signal light, otherwise the target in the image is not considered to be a traffic signal light.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011119655.2A CN112270328A (en) | 2020-10-19 | 2020-10-19 | Traffic signal lamp detection method fused with HOG-LBP function |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011119655.2A CN112270328A (en) | 2020-10-19 | 2020-10-19 | Traffic signal lamp detection method fused with HOG-LBP function |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112270328A true CN112270328A (en) | 2021-01-26 |
Family
ID=74338709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011119655.2A Pending CN112270328A (en) | 2020-10-19 | 2020-10-19 | Traffic signal lamp detection method fused with HOG-LBP function |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112270328A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113505695A (en) * | 2021-07-09 | 2021-10-15 | 上海工程技术大学 | AEHAL characteristic-based track fastener state detection method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | 苏州大学 | Improved weighting region matching high-altitude video pedestrian recognizing method |
CN104091157A (en) * | 2014-07-09 | 2014-10-08 | 河海大学 | Pedestrian detection method based on feature fusion |
CN106781513A (en) * | 2016-11-28 | 2017-05-31 | 东南大学 | The recognition methods of vehicle behavior in a kind of urban transportation scene of feature based fusion |
CN108805018A (en) * | 2018-04-27 | 2018-11-13 | 淘然视界(杭州)科技有限公司 | Road signs detection recognition method, electronic equipment, storage medium and system |
CN109063619A (en) * | 2018-07-25 | 2018-12-21 | 东北大学 | A kind of traffic lights detection method and system based on adaptive background suppression filter and combinations of directions histogram of gradients |
CN109086687A (en) * | 2018-07-13 | 2018-12-25 | 东北大学 | The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction |
CN109344271A (en) * | 2018-09-30 | 2019-02-15 | 南京物盟信息技术有限公司 | Video portrait records handling method and its system |
- 2020-10-19: CN application CN202011119655.2A filed; publication CN112270328A (en); status: active, Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | 苏州大学 | Improved weighting region matching high-altitude video pedestrian recognizing method |
CN104091157A (en) * | 2014-07-09 | 2014-10-08 | 河海大学 | Pedestrian detection method based on feature fusion |
CN106781513A (en) * | 2016-11-28 | 2017-05-31 | 东南大学 | The recognition methods of vehicle behavior in a kind of urban transportation scene of feature based fusion |
CN108805018A (en) * | 2018-04-27 | 2018-11-13 | 淘然视界(杭州)科技有限公司 | Road signs detection recognition method, electronic equipment, storage medium and system |
CN109086687A (en) * | 2018-07-13 | 2018-12-25 | 东北大学 | The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction |
CN109063619A (en) * | 2018-07-25 | 2018-12-21 | 东北大学 | A kind of traffic lights detection method and system based on adaptive background suppression filter and combinations of directions histogram of gradients |
CN109344271A (en) * | 2018-09-30 | 2019-02-15 | 南京物盟信息技术有限公司 | Video portrait records handling method and its system |
Non-Patent Citations (1)
Title |
---|
Sun Yu et al., "Face Recognition Method Based on HOG and LBP Features", Computer Engineering *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113505695A (en) * | 2021-07-09 | 2021-10-15 | 上海工程技术大学 | AEHAL characteristic-based track fastener state detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Işık | A comparative evaluation of well-known feature detectors and descriptors | |
CN109558823B (en) | Vehicle identification method and system for searching images by images | |
CN107103317A (en) | Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution | |
CN109255289B (en) | Cross-aging face recognition method based on unified generation model | |
CN108520215B (en) | Single-sample face recognition method based on multi-scale joint feature encoder | |
CN114067444A (en) | Face spoofing detection method and system based on meta-pseudo label and illumination invariant feature | |
CN105654054A (en) | Semi-supervised neighbor propagation learning and multi-visual dictionary model-based intelligent video analysis method | |
CN112132117A (en) | Fusion identity authentication system assisting coercion detection | |
CN113255828B (en) | Feature retrieval method, device, equipment and computer storage medium | |
CN118230354A (en) | Sign language recognition method based on improvement YOLOv under complex scene | |
Maharani et al. | Deep features fusion for KCF-based moving object tracking | |
CN112270328A (en) | Traffic signal lamp detection method fused with HOG-LBP function | |
CN117854104A (en) | Feature alignment-based unsupervised pedestrian re-identification method | |
Ying et al. | License plate detection and localization in complex scenes based on deep learning | |
Mai et al. | Vietnam license plate recognition system based on edge detection and neural networks | |
Poostchi et al. | Feature selection for appearance-based vehicle tracking in geospatial video | |
CN113344047A (en) | Platen state identification method based on improved K-means algorithm | |
CN111968154A (en) | HOG-LBP and KCF fused pedestrian tracking method | |
CN117437691A (en) | Real-time multi-person abnormal behavior identification method and system based on lightweight network | |
Sulistyaningrum et al. | Classification of damaged road types using multiclass support vector machine (SVM) | |
Jubair et al. | A simplified method for handwritten character recognition from document image | |
CN112487927B (en) | Method and system for realizing indoor scene recognition based on object associated attention | |
Jena et al. | Elitist TLBO for identification and verification of plant diseases | |
CN110751023B (en) | Series pedestrian detection method for video stream | |
CN110866534B (en) | Far infrared pedestrian training method for gradient amplitude distribution gradient orientation histogram |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210126 |