CN108388920A - ID-card copy detection method fusing HOG and LBPH features - Google Patents

ID-card copy detection method fusing HOG and LBPH features

Info

Publication number
CN108388920A
CN108388920A (application CN201810172048.9A; granted as CN108388920B)
Authority
CN
China
Prior art keywords
indicate
rectangle frame
pixel
features
ordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810172048.9A
Other languages
Chinese (zh)
Other versions
CN108388920B (en)
Inventor
柯逍 (Ke Xiao)
卢安琪 (Lu Anqi)
牛玉贞 (Niu Yuzhen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201810172048.9A priority Critical patent/CN108388920B/en
Publication of CN108388920A publication Critical patent/CN108388920A/en
Application granted granted Critical
Publication of CN108388920B publication Critical patent/CN108388920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/253 — Fusion techniques of extracted features
    • G06V10/40 — Extraction of image or video features
    • G06V10/507 — Summing image-intensity values; Histogram projection analysis
    • G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V2201/07 — Target detection


Abstract

The invention discloses an ID-card copy detection method fusing HOG and LBPH features, including: first choosing a large number of ID-card pictures as positive samples and non-ID-card pictures as negative samples to form a training sample set; extracting HOG features and LBPH features from the training sample set, and training an SVM on each to obtain a first classifier and a second classifier; performing target detection on the test image with the first classifier and computing the LBPH features of the target detection results; judging those LBPH features with the second classifier and retaining the targets whose judging result is true. The present invention first detects with the HOG classifier and then re-checks the HOG detection results with the LBPH classifier; the method is simple, fast and efficient, and its detection accuracy is high.

Description

ID-card copy detection method fusing HOG and LBPH features
Technical field
The present invention relates to the field of pattern recognition, and more particularly to an ID-card copy detection method fusing HOG and LBPH features.
Background technology
With the development of science, technology and the economy, the workload of government enterprises and institutions keeps growing, the scope of office activity keeps widening, and market-oriented requirements on the speed and accuracy of office work keep rising, yet little ID-card copy detection software is available on the market. Most procedures involving ID-card copy checking still rely entirely on manual work, which wastes a great deal of time, manpower and material resources and leaves the result strongly affected by human factors. Traditional ID-card copy inspection software mostly detects with only a single feature, so its detection accuracy is relatively low. Both approaches have clear disadvantages. Government enterprises and institutions urgently need office automation software that can automatically, quickly and accurately check whether procedures involving ID-card copies are correct, so that less manpower, material and time is spent while ID-card copies are still detected quickly and accurately. When handling business related to the financial industry, an ID-card copy must be provided, for example for debit-card business, opening a securities account, and banking and insurance business. With economic development and the growing attention people pay to economic management, the workload of business personnel in many parts of the financial industry has surged, and ID-card copies must be inspected regularly.
Traditional processing methods all detect with a single selected feature; few ID-card copy detection methods are available on the market, and the task still largely relies entirely on manual inspection.
Summary of the invention
Aiming at the problems that traditional manual inspection of ID-card copies, or detection based on only a single feature, yields low detection efficiency and low detection accuracy, the present invention provides an ID-card copy detection method fusing HOG and LBPH features: HOG target detection is performed first, and the results are then re-detected and classified with LBPH and SVM techniques, so that whether an ID-card copy operation is correct can be judged quickly and accurately, improving both detection efficiency and detection accuracy.
To achieve the above object, the technical scheme of the present invention is as follows: an ID-card copy detection method fusing HOG and LBPH features includes the following steps:
Step S1: choose a large number of ID-card pictures as positive samples and non-ID-card pictures as negative samples to form the training sample set, and perform scale normalization on every picture in the training sample set;
Step S2: extract the HOG features of the scale-normalized training samples and train an SVM based on the HOG features to obtain the first classifier;
Step S3: extract the LBPH features of the scale-normalized training samples and train an SVM based on the LBPH features to obtain the second classifier;
Step S4: preprocess the ID-card copy test image;
Step S5: perform target detection on the preprocessed test image with the first classifier;
Step S6: compute the LBP features of the target detection results of step S5 and generate LBPH features from the obtained LBP features;
Step S7: judge the LBPH features generated in step S6 with the second classifier, retain the targets whose judging result is true, and obtain the ID-card copies in the test image.
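As a purely illustrative sketch (not part of the disclosure), the control flow of steps S5 to S7 can be outlined as a two-stage cascade. Both classifier functions below are hypothetical stand-ins for the trained SVMs of steps S2 and S3, and the box coordinates are invented toy values; the real second stage would judge LBPH features rather than an aspect ratio.

```python
# Hypothetical two-stage cascade mirroring steps S5-S7: a first (HOG-based)
# detector proposes candidate boxes, and a second (LBPH-based) classifier
# keeps only candidates it judges to be genuine ID-card copies.

def hog_detector(image):
    # Stand-in for the first classifier: return candidate (x1, y1, x2, y2) boxes.
    return [(10, 10, 110, 70), (200, 40, 260, 140)]

def lbph_classifier(image, box):
    # Stand-in for the second classifier; here it simply accepts boxes with an
    # ID-card-like width/height ratio (an assumption of this sketch).
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return 1.4 <= w / h <= 1.8

def detect_id_copies(image):
    candidates = hog_detector(image)              # step S5
    return [b for b in candidates                 # steps S6-S7
            if lbph_classifier(image, b)]

print(detect_id_copies(None))  # → [(10, 10, 110, 70)]
```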
Further, the method for extracting the HOG features of the scale-normalized training samples in step S2 is specifically:
Standardize the gamma space and color space and compute the gradient of each pixel of the training sample, using the following formulas:
H(s, t) = H(s, t)^gamma
G_s(s, t) = H(s+1, t) - H(s-1, t), G_t(s, t) = H(s, t+1) - H(s, t-1), G(s, t) = sqrt(G_s(s, t)^2 + G_t(s, t)^2), α(s, t) = arctan(G_t(s, t) / G_s(s, t))
wherein s denotes the abscissa in the training sample image, t denotes the ordinate in the training sample image, G_s(s, t) denotes the horizontal gradient at pixel (s, t), G_t(s, t) denotes the vertical gradient at pixel (s, t), G(s, t) denotes the gradient magnitude at pixel (s, t), α(s, t) denotes the gradient direction at pixel (s, t), the parameter gamma = 0.5, and H(s, t) denotes the pixel value at pixel (s, t);
Divide the training sample into cell units, build a histogram of gradient directions for each cell, combine the cells into blocks, normalize the gradient histograms within each block, and finally generate the HOG feature vector of the training sample.
Further, training the SVM based on the HOG features in step S2 to obtain the first classifier specifically includes:
Use a linear kernel function, namely the following formula:
κ(x_ε, x_l) = x_ε^T · x_l
wherein x_ε denotes the HOG feature vector of sample ε, x_l denotes the HOG feature vector of sample l, κ denotes the kernel function, and x_ε^T denotes the transposition of x_ε; the training result is saved into an XML file;
Read the array alpha, the array support vector and the floating-point number rho from the obtained XML file; first multiply alpha by support vector to obtain a row vector, then multiply that vector by -1, and finally append the floating-point number rho at the end of the row vector to obtain the first classifier.
Further, the method for extracting the LBPH features of the scale-normalized training samples in step S3 is specifically:
Compute the LBP features of the training sample: the image is LBP-encoded and divided into multiple coded image regions, and the value of each coded pixel is computed with the following formulas:
dx_n = -radius * sin(2.0 * π * n / neighbors), dy_n = radius * cos(2.0 * π * n / neighbors)
lbp(x, y) = Σ_{n=0}^{neighbors-1} lbp(x, y)_n * 2^n, with lbp(x, y)_n = 1 if gray(x, y)_n ≥ gray(x, y) and 0 otherwise,
wherein x denotes the abscissa on the image, y denotes the ordinate on the image, radius denotes the sampling radius, neighbors denotes the neighborhood size, the parameter n is an integer, dx_n denotes the abscissa offset of the n-th neighborhood sampling point of pixel (x, y), dy_n denotes its ordinate offset, gray(x, y) denotes the gray value at pixel (x, y), gray(x, y)_n denotes the gray value of the n-th neighborhood of pixel (x, y), lbp(x, y) denotes the encoded value at pixel (x, y), lbp(x, y)_n denotes the encoded value of the n-th neighborhood of pixel (x, y), w denotes the width of each region of the LBP coded image, and h denotes the height of each region of the LBP coded image. Generate the LBPH features of the training sample; the width and height of each grid cell are obtained with the following formulas:
grad_w = LBP_i.cols / grid_x, grad_h = LBP_i.rows / grid_y
wherein grid_x denotes the number of grid cells in the width direction, grid_y denotes the number of grid cells in the height direction, LBP_i denotes the i-th coded image region in the LBP coded image, cols denotes the number of columns, rows denotes the number of rows, LBP_i.cols denotes the number of columns of the i-th coded image region, LBP_i.rows denotes the number of rows of the i-th coded image region, grad_w denotes the width of a grid cell, and grad_h denotes its height. Count, in row order, the height of each histogram value in each grid cell and store the results in sequence into each row of the corresponding histogram matrix; then normalize the histogram heights; then, taking columns as the major order, reshape the corresponding histogram matrix into a vector matrix of 1 row and M * 2^neighbors columns, where M denotes the total number of regions; finally, concatenating the local histograms yields the histogram of the entire training sample.
Further, step S4 specifically includes:
Step S41: input the ID-card copy test image;
Step S42: normalize the scale of the test image with a bilinear interpolation algorithm, using the following formula:
f(λ+u, j+v) = (1-u)(1-v) f(λ, j) + (1-u) v f(λ, j+1) + u (1-v) f(λ+1, j) + u v f(λ+1, j+1)
wherein λ denotes the abscissa in the test image, j denotes the ordinate in the test image, λ and j are integers, u and v are decimals greater than or equal to 0 and less than 1, and f(λ, j) denotes the pixel value at pixel (λ, j) in the test image;
Step S43: convert the test image into a gray-scale map, using the following formula:
Gray(λ, j) = [Red(λ, j) + Green(λ, j) + Blue(λ, j)] / 3
wherein Red(λ, j) denotes the red channel value at pixel (λ, j), Green(λ, j) denotes the green channel value at pixel (λ, j), Blue(λ, j) denotes the blue channel value at pixel (λ, j), and Gray(λ, j) denotes the gray value at pixel (λ, j) in the gray-scale map;
Step S44: perform Gaussian filtering for smoothing, using the following formula:
Gauss(λ, j) = Σ_{m,n} f(λ+m, j+n) * (1 / (2π σ^2)) * e^{-(m^2 + n^2) / (2σ^2)}
wherein σ denotes the variance of the Gaussian function and Gauss(λ, j) denotes the pixel value at pixel (λ, j) of the test image after Gaussian filtering.
Further, step S5 specifically includes:
Step S51: read the first classifier and perform target detection on the test image;
Step S52: remove the regions in the detection results that have an inside-outside inclusion relation, using the following formula:
(Rect_e & Rect_ψ) == Rect_e
wherein Rect_e denotes rectangle frame e and Rect_ψ denotes rectangle frame ψ; if the above formula evaluates to true, rectangle frame e and rectangle frame ψ have an inside-outside inclusion relation, and the larger region is retained;
Step S53: judge whether detection results intersect, using the following formulas:
x_c1 = max(x_a1, x_b1), y_c1 = max(y_a1, y_b1), x_c2 = min(x_a2, x_b2), y_c2 = min(y_a2, y_b2), with x_c1 <= x_c2 and y_c1 <= y_c2
wherein x_a1 denotes the abscissa of the upper-left corner of rectangle frame a, y_a1 the ordinate of its upper-left corner, x_a2 the abscissa of its lower-right corner, y_a2 the ordinate of its lower-right corner, x_b1 the abscissa of the upper-left corner of rectangle frame b, y_b1 the ordinate of its upper-left corner, x_b2 the abscissa of its lower-right corner, y_b2 the ordinate of its lower-right corner, x_c1 the maximum of the upper-left abscissas of rectangle frames a and b, y_c1 the maximum of their upper-left ordinates, x_c2 the minimum of their lower-right abscissas, and y_c2 the minimum of their lower-right ordinates; if the above condition holds, rectangles a and b intersect;
Step S54: merge the intersecting regions in the detection results: if two rectangle frames intersect, find their intersecting area, and if the intersecting area exceeds a threshold, merge the two rectangle frames using the following formulas:
x_d1 = min(x_g1, x_r1), y_d1 = min(y_g1, y_r1), x_d2 = max(x_g2, x_r2), y_d2 = max(y_g2, y_r2)
wherein x_d1 denotes the abscissa of the upper-left corner of the merged rectangle frame, y_d1 the ordinate of its upper-left corner, x_d2 the abscissa of its lower-right corner, y_d2 the ordinate of its lower-right corner, x_g1 the abscissa of the upper-left corner of rectangle frame g, y_g1 the ordinate of its upper-left corner, x_g2 the abscissa of its lower-right corner, y_g2 the ordinate of its lower-right corner, x_r1 the abscissa of the upper-left corner of rectangle frame r, y_r1 the ordinate of its upper-left corner, x_r2 the abscissa of its lower-right corner, and y_r2 the ordinate of its lower-right corner.
Compared with the prior art, the present invention has beneficial effects: the present invention first trains an SVM based on HOG features and saves the first classifier; it then trains an SVM based on LBPH features to obtain the second classifier; the ID cards are preliminarily determined by the first classifier, and the preliminarily determined targets are handed to the second classifier for a second round of detection and classification to determine the final targets. The present invention detects automatically and quickly, and the combination of HOG features with LBPH detects ID cards accurately, improving the detection accuracy; no manual inspection is needed, saving time and effort and avoiding the errors of manual operation.
Description of the drawings
Fig. 1 is a flow diagram of the ID-card copy detection method fusing HOG and LBPH features of the present invention.
Detailed description of the embodiments
The present invention will be further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, an ID-card copy detection method fusing HOG and LBPH features includes the following steps:
Step S1: choose a large number of ID-card pictures as positive samples and non-ID-card pictures as negative samples to form the training sample set, and perform scale normalization on every picture in the training sample set;
Step S2: extract the HOG features of the scale-normalized training samples and train an SVM based on the HOG features to obtain the first classifier;
The method for extracting the HOG features of the scale-normalized training samples is specifically:
Standardize the gamma space and color space and compute the gradient of each pixel of the training sample, using the following formulas:
H(s, t) = H(s, t)^gamma
G_s(s, t) = H(s+1, t) - H(s-1, t), G_t(s, t) = H(s, t+1) - H(s, t-1), G(s, t) = sqrt(G_s(s, t)^2 + G_t(s, t)^2), α(s, t) = arctan(G_t(s, t) / G_s(s, t))
wherein s denotes the abscissa in the training sample image, t denotes the ordinate in the training sample image, G_s(s, t) denotes the horizontal gradient at pixel (s, t), G_t(s, t) denotes the vertical gradient at pixel (s, t), G(s, t) denotes the gradient magnitude at pixel (s, t), α(s, t) denotes the gradient direction at pixel (s, t), the parameter gamma = 0.5, and H(s, t) denotes the pixel value at pixel (s, t);
Divide the training sample into cell units, build a histogram of gradient directions for each cell, combine the cells into blocks, normalize the gradient histograms within each block, and finally generate the HOG feature vector of the training sample.
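As an illustration only, the per-pixel quantities above can be computed as follows. The 3×3 toy image H and the use of atan2 for the gradient direction are assumptions of this sketch, not part of the patent.

```python
import math

# Sketch of the per-pixel HOG quantities of step S2 on a tiny grayscale
# image with values in [0, 1]; H is indexed H[s][t] as in the formulas above.
gamma = 0.5

def gradient(H, s, t):
    Gs = H[s + 1][t] - H[s - 1][t]   # horizontal gradient G_s(s, t)
    Gt = H[s][t + 1] - H[s][t - 1]   # vertical gradient G_t(s, t)
    G = math.hypot(Gs, Gt)           # gradient magnitude G(s, t)
    alpha = math.atan2(Gt, Gs)       # gradient direction α(s, t)
    return G, alpha

H = [[0.0, 0.1, 0.2],
     [0.3, 0.4, 0.5],
     [0.6, 0.7, 0.8]]
H = [[v ** gamma for v in row] for row in H]  # gamma correction, gamma = 0.5
print(gradient(H, 1, 1))
```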
Training the SVM based on the HOG features to obtain the first classifier specifically includes:
Use a linear kernel function, namely the following formula:
κ(x_ε, x_l) = x_ε^T · x_l
wherein x_ε denotes the HOG feature vector of sample ε, x_l denotes the HOG feature vector of sample l, κ denotes the kernel function, and x_ε^T denotes the transposition of x_ε; the training result is saved into an XML file;
Read the array alpha, the array support vector and the floating-point number rho from the obtained XML file; first multiply alpha by support vector to obtain a row vector, then multiply that vector by -1, and finally append the floating-point number rho at the end of the row vector to obtain the first classifier.
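A minimal sketch of how the detector vector is assembled from the stored SVM parameters, as described above: multiply alpha by the support vectors, negate, and append rho. The numbers are toy stand-ins for values read from the XML file.

```python
# Toy stand-ins for the alpha array, support vectors and rho read from XML.
alpha = [0.5, 1.5]
support_vectors = [[0.2, -0.4, 0.1],
                   [0.3,  0.6, -0.2]]
rho = 0.25

# Row vector = -(alpha · support_vectors), with rho appended at the end.
dims = len(support_vectors[0])
detector = [-sum(alpha[k] * support_vectors[k][d] for k in range(len(alpha)))
            for d in range(dims)]
detector.append(rho)
print(detector)
```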
Step S3: extract the LBPH features of the scale-normalized training samples and train an SVM based on the LBPH features to obtain the second classifier;
The method for extracting the LBPH features of the scale-normalized training samples is specifically:
Compute the LBP features of the training sample: the image is LBP-encoded and divided into multiple coded image regions, and the value of each coded pixel is computed with the following formulas:
dx_n = -radius * sin(2.0 * π * n / neighbors), dy_n = radius * cos(2.0 * π * n / neighbors)
lbp(x, y) = Σ_{n=0}^{neighbors-1} lbp(x, y)_n * 2^n, with lbp(x, y)_n = 1 if gray(x, y)_n ≥ gray(x, y) and 0 otherwise,
wherein x denotes the abscissa on the image, y denotes the ordinate on the image, radius denotes the sampling radius, neighbors denotes the neighborhood size, the parameter n is an integer, dx_n denotes the abscissa offset of the n-th neighborhood sampling point of pixel (x, y), dy_n denotes its ordinate offset, gray(x, y) denotes the gray value at pixel (x, y), gray(x, y)_n denotes the gray value of the n-th neighborhood of pixel (x, y), lbp(x, y) denotes the encoded value at pixel (x, y), lbp(x, y)_n denotes the encoded value of the n-th neighborhood of pixel (x, y), w denotes the width of each region of the LBP coded image, and h denotes the height of each region of the LBP coded image;
Generate the LBPH features of the training sample; the width and height of each grid cell are obtained with the following formulas:
grad_w = LBP_i.cols / grid_x, grad_h = LBP_i.rows / grid_y
wherein grid_x denotes the number of grid cells in the width direction, grid_y denotes the number of grid cells in the height direction, LBP_i denotes the i-th coded image region in the LBP coded image, cols denotes the number of columns, rows denotes the number of rows, LBP_i.cols denotes the number of columns of the i-th coded image region, LBP_i.rows denotes the number of rows of the i-th coded image region, grad_w denotes the width of a grid cell, and grad_h denotes its height. Count, in row order, the height of each histogram value in each grid cell and store the results in sequence into each row of the corresponding histogram matrix; then normalize the histogram heights, that is, divide the whole histogram height by grad_w * grad_h;
Then, taking columns as the major order, reshape the corresponding histogram matrix into a vector matrix of 1 row and M * 2^neighbors columns, where M denotes the total number of regions; finally, concatenating the local histograms yields the histogram of the entire training sample.
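As an illustrative sketch of the LBP code of one pixel (step S3): sample `neighbors` points on a circle of the given radius, threshold each against the centre grey value, and pack the bits. The offsets follow the dx_n/dy_n formulas above; rounding them to the nearest integer pixel (instead of interpolating) and the 3×3 toy image are assumptions of this sketch.

```python
import math

radius, neighbors = 1, 8

def lbp_code(gray, x, y):
    # Encode pixel (x, y): one bit per neighbourhood sample on the circle.
    code = 0
    for n in range(neighbors):
        dx = -radius * math.sin(2.0 * math.pi * n / neighbors)
        dy = radius * math.cos(2.0 * math.pi * n / neighbors)
        gx, gy = x + round(dx), y + round(dy)      # nearest-pixel sampling
        bit = 1 if gray[gx][gy] >= gray[x][y] else 0
        code += bit << n
    return code

gray = [[90, 80, 70],
        [60, 50, 40],
        [30, 20, 10]]
print(lbp_code(gray, 1, 1))  # → 30
```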
Train the SVM based on the LBPH features, using a linear kernel function, namely the following formula:
κ(x_ε, x_l) = x_ε^T · x_l
wherein x_ε denotes the LBPH feature vector of sample ε, x_l denotes the LBPH feature vector of sample l, κ denotes the kernel function, and x_ε^T denotes the transposition of x_ε; the training yields the second classifier.
Step S4: preprocess the ID-card copy test image;
It specifically includes:
Step S41: input the ID-card copy test image;
Step S42: normalize the scale of the test image with a bilinear interpolation algorithm, using the following formula:
f(λ+u, j+v) = (1-u)(1-v) f(λ, j) + (1-u) v f(λ, j+1) + u (1-v) f(λ+1, j) + u v f(λ+1, j+1)
wherein λ denotes the abscissa in the test image, j denotes the ordinate in the test image, λ and j are integers, u and v are decimals greater than or equal to 0 and less than 1, and f(λ, j) denotes the pixel value at pixel (λ, j) in the test image;
Step S43: convert the test image into a gray-scale map, using the following formula:
Gray(λ, j) = [Red(λ, j) + Green(λ, j) + Blue(λ, j)] / 3
wherein Red(λ, j) denotes the red channel value at pixel (λ, j), Green(λ, j) denotes the green channel value at pixel (λ, j), Blue(λ, j) denotes the blue channel value at pixel (λ, j), and Gray(λ, j) denotes the gray value at pixel (λ, j) in the gray-scale map;
Step S44: perform Gaussian filtering for smoothing, using the following formula:
Gauss(λ, j) = Σ_{m,n} f(λ+m, j+n) * (1 / (2π σ^2)) * e^{-(m^2 + n^2) / (2σ^2)}
wherein σ denotes the variance of the Gaussian function and Gauss(λ, j) denotes the pixel value at pixel (λ, j) of the test image after Gaussian filtering.
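The bilinear interpolation of step S42 and the grey conversion of step S43 can be sketched on toy values as follows; f is indexed f[λ][j] as in the formulas above, and the 2×2 patch is an invented example.

```python
# Bilinear interpolation (step S42): weighted mean of the four surrounding pixels.
def bilinear(f, lam, j, u, v):
    return ((1 - u) * (1 - v) * f[lam][j] + (1 - u) * v * f[lam][j + 1]
            + u * (1 - v) * f[lam + 1][j] + u * v * f[lam + 1][j + 1])

# Grey conversion (step S43): plain average of the three colour channels.
def to_gray(red, green, blue):
    return (red + green + blue) / 3

f = [[10, 20],
     [30, 40]]
print(bilinear(f, 0, 0, 0.5, 0.5))  # centre of the 2x2 patch → 25.0
print(to_gray(90, 60, 30))          # → 60.0
```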
Step S5: perform target detection on the preprocessed test image with the first classifier;
It specifically includes:
Step S51: read the first classifier and perform target detection on the test image;
Step S52: remove the regions in the detection results that have an inside-outside inclusion relation, using the following formula:
(Rect_e & Rect_ψ) == Rect_e
wherein Rect_e denotes rectangle frame e and Rect_ψ denotes rectangle frame ψ; if the above formula evaluates to true, rectangle frame e and rectangle frame ψ have an inside-outside inclusion relation, and the larger region is retained;
Step S53: judge whether detection results intersect, using the following formulas:
x_c1 = max(x_a1, x_b1), y_c1 = max(y_a1, y_b1), x_c2 = min(x_a2, x_b2), y_c2 = min(y_a2, y_b2), with x_c1 <= x_c2 and y_c1 <= y_c2
wherein x_a1 denotes the abscissa of the upper-left corner of rectangle frame a, y_a1 the ordinate of its upper-left corner, x_a2 the abscissa of its lower-right corner, y_a2 the ordinate of its lower-right corner, x_b1 the abscissa of the upper-left corner of rectangle frame b, y_b1 the ordinate of its upper-left corner, x_b2 the abscissa of its lower-right corner, y_b2 the ordinate of its lower-right corner, x_c1 the maximum of the upper-left abscissas of rectangle frames a and b, y_c1 the maximum of their upper-left ordinates, x_c2 the minimum of their lower-right abscissas, and y_c2 the minimum of their lower-right ordinates; if the above condition holds, rectangles a and b intersect;
Step S54: merge the intersecting regions in the detection results: if two rectangle frames intersect, find their intersecting area, and if the intersecting area exceeds a threshold, merge the two rectangle frames using the following formulas:
x_d1 = min(x_g1, x_r1), y_d1 = min(y_g1, y_r1), x_d2 = max(x_g2, x_r2), y_d2 = max(y_g2, y_r2)
wherein x_d1 denotes the abscissa of the upper-left corner of the merged rectangle frame, y_d1 the ordinate of its upper-left corner, x_d2 the abscissa of its lower-right corner, y_d2 the ordinate of its lower-right corner, x_g1 the abscissa of the upper-left corner of rectangle frame g, y_g1 the ordinate of its upper-left corner, x_g2 the abscissa of its lower-right corner, y_g2 the ordinate of its lower-right corner, x_r1 the abscissa of the upper-left corner of rectangle frame r, y_r1 the ordinate of its upper-left corner, x_r2 the abscissa of its lower-right corner, and y_r2 the ordinate of its lower-right corner.
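The box post-processing of steps S52 to S54 can be sketched as follows, following the coordinate formulas above; boxes are (x1, y1, x2, y2) tuples, and the two example boxes are invented. The intersection-area threshold of step S54 is omitted for brevity.

```python
# Step S52: does `outer` fully contain `inner`?
def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

# Step S53: do boxes a and b intersect?
def intersects(a, b):
    xc1, yc1 = max(a[0], b[0]), max(a[1], b[1])
    xc2, yc2 = min(a[2], b[2]), min(a[3], b[3])
    return xc1 <= xc2 and yc1 <= yc2

# Step S54: merge two boxes into their bounding box.
def merge(g, r):
    return (min(g[0], r[0]), min(g[1], r[1]),
            max(g[2], r[2]), max(g[3], r[3]))

a, b = (0, 0, 10, 10), (5, 5, 15, 15)
print(contains(a, b), intersects(a, b), merge(a, b))
# → False True (0, 0, 15, 15)
```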
Step S6: compute the LBP features of the target detection results of step S5 and generate LBPH features from the obtained LBP features;
Step S7: judge the LBPH features generated in step S6 with the second classifier, retain the targets whose judging result is true, and obtain the ID-card copies in the test image.
The above are preferred embodiments of the present invention; any change made according to the technical solution of the present invention whose resulting function and effect do not go beyond the scope of the technical solution of the present invention belongs to the protection scope of the present invention.

Claims (6)

1. An ID-card copy detection method fusing HOG and LBPH features, characterized by including the following steps:
Step S1: choose a large number of ID-card pictures as positive samples and non-ID-card pictures as negative samples to form the training sample set, and perform scale normalization on every picture in the training sample set;
Step S2: extract the HOG features of the scale-normalized training samples and train an SVM based on the HOG features to obtain the first classifier;
Step S3: extract the LBPH features of the scale-normalized training samples and train an SVM based on the LBPH features to obtain the second classifier;
Step S4: preprocess the ID-card copy test image;
Step S5: perform target detection on the preprocessed test image with the first classifier;
Step S6: compute the LBP features of the target detection results of step S5 and generate LBPH features from the obtained LBP features;
Step S7: judge the LBPH features generated in step S6 with the second classifier, retain the targets whose judging result is true, and obtain the ID-card copies in the test image.
2. The ID-card copy detection method according to claim 1, characterized in that the method for extracting the HOG features of the scale-normalized training samples in step S2 is specifically:
Standardize the gamma space and color space and compute the gradient of each pixel of the training sample, using the following formulas:
H(s, t) = H(s, t)^gamma
G_s(s, t) = H(s+1, t) - H(s-1, t), G_t(s, t) = H(s, t+1) - H(s, t-1), G(s, t) = sqrt(G_s(s, t)^2 + G_t(s, t)^2), α(s, t) = arctan(G_t(s, t) / G_s(s, t))
wherein s denotes the abscissa in the training sample image, t denotes the ordinate in the training sample image, G_s(s, t) denotes the horizontal gradient at pixel (s, t), G_t(s, t) denotes the vertical gradient at pixel (s, t), G(s, t) denotes the gradient magnitude at pixel (s, t), α(s, t) denotes the gradient direction at pixel (s, t), the parameter gamma = 0.5, and H(s, t) denotes the pixel value at pixel (s, t);
Divide the training sample into cell units, build a histogram of gradient directions for each cell, combine the cells into blocks, normalize the gradient histograms within each block, and finally generate the HOG feature vector of the training sample.
3. The ID-card copy detection method according to claim 1, characterized in that training the SVM based on HOG features in step S2 to obtain the first classifier specifically includes:
Use a linear kernel function, namely the following formula:
κ(x_ε, x_l) = x_ε^T · x_l
wherein x_ε denotes the HOG feature vector of sample ε, x_l denotes the HOG feature vector of sample l, κ denotes the kernel function, and x_ε^T denotes the transposition of x_ε; the training result is saved into an XML file;
Read the array alpha, the array support vector and the floating-point number rho from the obtained XML file; first multiply alpha by support vector to obtain a row vector, then multiply that vector by -1, and finally append the floating-point number rho at the end of the row vector to obtain the first classifier.
4. The ID card copy detection method according to claim 1, characterized in that the method of extracting the LBPH features of the scale-normalized training samples in step S3 is specifically:
Compute the LBP features of the training sample and divide the LBP image into multiple coded images; the pixel values of each coded image are computed with the following formulas:
dx_n = -radius · sin(2.0 · π · n / neighbors), dy_n = radius · cos(2.0 · π · n / neighbors)
lbp(x,y) = Σ_{n=0}^{neighbors-1} u(gray(x,y)_n - gray(x,y)) · 2^n, where u(z) = 1 if z ≥ 0 and 0 otherwise
Wherein, x denotes the abscissa on the image, y denotes the ordinate on the image, radius denotes the sampling radius, neighbors denotes the neighborhood size, parameter n is an integer, dx_n denotes the abscissa offset of the n-th sampling neighbor of pixel (x,y), dy_n denotes the ordinate offset of the n-th sampling neighbor of pixel (x,y), gray(x,y) denotes the gray value at pixel (x,y), gray(x,y)_n denotes the gray value of the n-th neighbor of pixel (x,y), lbp(x,y) denotes the code value at pixel (x,y), lbp(x,y)_n denotes the code value of the n-th neighbor of pixel (x,y), w denotes the width of each region of the LBP coded image, and h denotes the height of each region of the LBP coded image;
Generate the LBPH features of the training sample; the width and height of each grid cell are obtained with the following formulas:
grad_w = LBP_i.cols / grid_x, grad_h = LBP_i.rows / grid_y
Wherein, grid_x denotes the number of grid cells in the width direction, grid_y denotes the number of grid cells in the height direction, LBP_i denotes the i-th coded image region of the LBP coded image, cols denotes the number of columns and rows the number of rows, LBP_i.cols denotes the number of columns of the i-th coded image region, LBP_i.rows denotes its number of rows, grad_w denotes the grid width and grad_h the grid height. Count the frequency of each histogram value within each grid cell and store the results, in row order, into the corresponding row of the histogram matrix; then normalize the histogram values; next reshape the histogram matrix in column-major order into a 1-row, M·2^neighbors-column vector, where M denotes the total number of regions; finally concatenate the local histograms to obtain the histogram of the entire training sample.
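The circular sampling of claim 4 can be sketched as follows (a simplified sketch: nearest-neighbour sampling is used where the full method would interpolate between pixels; `lbp_code` is a hypothetical name):

```python
import numpy as np

def lbp_code(gray, x, y, radius=1, neighbors=8):
    """Circular LBP code at pixel (x, y): sample `neighbors` points on a
    circle of `radius`, threshold each against the centre, pack the bits."""
    code = 0
    for n in range(neighbors):
        dx = -radius * np.sin(2.0 * np.pi * n / neighbors)
        dy = radius * np.cos(2.0 * np.pi * n / neighbors)
        gx, gy = int(round(x + dx)), int(round(y + dy))  # nearest neighbour
        if gray[gx, gy] >= gray[x, y]:
            code |= 1 << n
    return code
```

Grouping the coded image into a grid and histogramming the codes per cell would then yield the LBPH descriptor.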
5. The ID card copy detection method according to claim 1, characterized in that step S4 specifically includes:
Step S41: Input the ID card copy test image;
Step S42: Normalize the scale of the test image using a bilinear interpolation algorithm, with the following formula:
f(λ+u, j+v) = (1-u)(1-v)·f(λ, j) + (1-u)v·f(λ, j+1) + u(1-v)·f(λ+1, j) + uv·f(λ+1, j+1)
Wherein, λ denotes the abscissa in the test image and j the ordinate; λ and j are integers, u and v are decimals greater than or equal to 0 and less than 1, and f(λ, j) denotes the pixel value at pixel (λ, j) in the test image;
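The interpolation formula of step S42 translates directly into code (`bilinear` is a hypothetical helper; `f` is a 2-D pixel array):

```python
import numpy as np

def bilinear(f, lam, j, u, v):
    """Bilinear interpolation at (lam + u, j + v), per step S42.
    lam, j are integer pixel coordinates; 0 <= u, v < 1."""
    return ((1 - u) * (1 - v) * f[lam, j]
            + (1 - u) * v * f[lam, j + 1]
            + u * (1 - v) * f[lam + 1, j]
            + u * v * f[lam + 1, j + 1])
```
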
Step S43: Convert the test image to a grayscale image, using the following formula:
Gray(λ, j) = [Red(λ, j) + Green(λ, j) + Blue(λ, j)] / 3
Wherein, Red(λ, j) denotes the red channel value at pixel (λ, j), Green(λ, j) denotes the green channel value at pixel (λ, j), Blue(λ, j) denotes the blue channel value at pixel (λ, j), and Gray(λ, j) denotes the gray value at pixel (λ, j) in the grayscale image;
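Step S43's equal-weight channel average is a one-liner in NumPy (`to_gray` is a hypothetical helper; the image is assumed to be an H×W×3 array):

```python
import numpy as np

def to_gray(rgb):
    """Equal-weight average Gray = (R + G + B) / 3, per step S43."""
    return rgb[..., :3].mean(axis=-1)
```
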
Step S44: Apply Gaussian filtering for smoothing, using the following formulas:
g(λ, j) = (1/(2πσ²)) · exp(-(λ² + j²)/(2σ²)), Gauss(λ, j) = (g ⊛ Gray)(λ, j)
Wherein, σ denotes the variance of the Gaussian function, and Gauss(λ, j) denotes the pixel value at pixel (λ, j) in the test image after Gaussian filtering.
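The smoothing in step S44 relies on a normalized 2-D Gaussian kernel; a minimal NumPy sketch (kernel size and the name `gaussian_kernel` are assumptions, and the convolution itself is left to e.g. `scipy.ndimage.convolve`):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel for the smoothing of step S44."""
    half = size // 2
    ax = np.arange(-half, half + 1, dtype=np.float64)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so the kernel preserves overall brightness
```
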
6. The ID card copy detection method according to claim 1, characterized in that step S5 specifically includes:
Step S51: Read the first classifier and perform target detection on the test image;
Step S52: Remove regions in the detection result that have an inside-outside containment relationship, using the following formula:
(Rect_e & Rect_ψ) == Rect_e
Wherein, Rect_e denotes rectangle frame e and Rect_ψ denotes rectangle frame ψ; if the above expression holds, rectangle frame e is contained inside rectangle frame ψ, and the larger region is retained;
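Step S52's containment test, written over plain `(x1, y1, x2, y2)` corner tuples rather than OpenCV `Rect` objects (`contains` is a hypothetical name):

```python
def contains(outer, inner):
    """True if rectangle `inner` lies entirely inside `outer`.
    Equivalent to the (Rect_e & Rect_psi) == Rect_e test of step S52,
    with rectangles given as (x1, y1, x2, y2) corner tuples."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2
```
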
Step S53: Judge whether detection results intersect, using the following formulas:
x_c1 = max(x_a1, x_b1), y_c1 = max(y_a1, y_b1), x_c2 = min(x_a2, x_b2), y_c2 = min(y_a2, y_b2), x_c1 <= x_c2, y_c1 <= y_c2
Wherein, x_a1 denotes the abscissa of the upper-left corner of rectangle frame a, y_a1 the ordinate of the upper-left corner of rectangle frame a, x_a2 the abscissa of the lower-right corner of rectangle frame a, y_a2 the ordinate of the lower-right corner of rectangle frame a; x_b1 denotes the abscissa of the upper-left corner of rectangle frame b, y_b1 the ordinate of the upper-left corner of rectangle frame b, x_b2 the abscissa of the lower-right corner of rectangle frame b, y_b2 the ordinate of the lower-right corner of rectangle frame b; x_c1 denotes the larger of the upper-left abscissas of rectangle frames a and b, y_c1 the larger of the upper-left ordinates, x_c2 the smaller of the lower-right abscissas, and y_c2 the smaller of the lower-right ordinates; if the two inequalities hold, rectangle frames a and b intersect;
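The intersection test of step S53 in code (`intersects` is a hypothetical name; rectangles are `(x1, y1, x2, y2)` corner tuples):

```python
def intersects(a, b):
    """Step S53: two rectangles intersect iff the max of the upper-left
    corners does not exceed the min of the lower-right corners."""
    xc1, yc1 = max(a[0], b[0]), max(a[1], b[1])
    xc2, yc2 = min(a[2], b[2]), min(a[3], b[3])
    return xc1 <= xc2 and yc1 <= yc2
```
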
Step S54: Merge intersecting regions in the detection result. If two rectangle frames intersect, compute the intersection area; if the intersection area exceeds a threshold, merge the two rectangle frames using the following formulas:
x_d1 = min(x_g1, x_r1), y_d1 = min(y_g1, y_r1), x_d2 = max(x_g2, x_r2), y_d2 = max(y_g2, y_r2)
Wherein, x_d1 denotes the abscissa of the upper-left corner of the merged rectangle frame, y_d1 the ordinate of the upper-left corner of the merged rectangle frame, x_d2 the abscissa of the lower-right corner of the merged rectangle frame, y_d2 the ordinate of the lower-right corner of the merged rectangle frame; x_g1 denotes the abscissa of the upper-left corner of rectangle frame g, y_g1 the ordinate of the upper-left corner of rectangle frame g, x_g2 the abscissa of the lower-right corner of rectangle frame g, y_g2 the ordinate of the lower-right corner of rectangle frame g; x_r1 denotes the abscissa of the upper-left corner of rectangle frame r, y_r1 the ordinate of the upper-left corner of rectangle frame r, x_r2 the abscissa of the lower-right corner of rectangle frame r, and y_r2 the ordinate of the lower-right corner of rectangle frame r.
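The merge rule of step S54 can be sketched as follows (the area threshold is left as a caller-supplied parameter, since the claim does not fix its value; `merge_if_overlapping` is a hypothetical name):

```python
def merge_if_overlapping(g, r, area_threshold):
    """Step S54: if rectangles g and r intersect and the intersection
    area exceeds `area_threshold`, return their bounding union as a
    single rectangle; otherwise return both rectangles unchanged."""
    xc1, yc1 = max(g[0], r[0]), max(g[1], r[1])
    xc2, yc2 = min(g[2], r[2]), min(g[3], r[3])
    if xc1 <= xc2 and yc1 <= yc2 and (xc2 - xc1) * (yc2 - yc1) > area_threshold:
        return [(min(g[0], r[0]), min(g[1], r[1]),
                 max(g[2], r[2]), max(g[3], r[3]))]
    return [g, r]
```
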
CN201810172048.9A 2018-03-01 2018-03-01 HOG and LBPH characteristic fused identity card copy detection method Active CN108388920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810172048.9A CN108388920B (en) 2018-03-01 2018-03-01 HOG and LBPH characteristic fused identity card copy detection method


Publications (2)

Publication Number Publication Date
CN108388920A true CN108388920A (en) 2018-08-10
CN108388920B CN108388920B (en) 2022-04-08

Family

ID=63070166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810172048.9A Active CN108388920B (en) 2018-03-01 2018-03-01 HOG and LBPH characteristic fused identity card copy detection method

Country Status (1)

Country Link
CN (1) CN108388920B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109377624A (en) * 2018-11-23 2019-02-22 卢伟涛 A kind of door intelligent opening system based on facial image identification
CN112800968A (en) * 2021-01-29 2021-05-14 江苏大学 Method for identifying identity of pig in drinking area based on feature histogram fusion of HOG blocks

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169544A (en) * 2011-04-18 2011-08-31 苏州市慧视通讯科技有限公司 Face-shielding detecting method based on multi-feature fusion
US20130070997A1 (en) * 2011-09-16 2013-03-21 Arizona Board of Regents, a body Corporate of the State of Arizona, Acting for and on Behalf of Ariz Systems, methods, and media for on-line boosting of a classifier
CN103634519A (en) * 2012-08-28 2014-03-12 北京博威康技术有限公司 Image display method and device based on dual-camera head
CN104063722A (en) * 2014-07-15 2014-09-24 国家电网公司 Safety helmet identification method integrating HOG human body target detection and SVM classifier
CN104268514A (en) * 2014-09-17 2015-01-07 西安交通大学 Gesture detection method based on multi-feature fusion
CN104978567A (en) * 2015-06-11 2015-10-14 武汉大千信息技术有限公司 Vehicle detection method based on scenario classification
CN105447503A (en) * 2015-11-05 2016-03-30 长春工业大学 Sparse-representation-LBP-and-HOG-integration-based pedestrian detection method
CN105825243A (en) * 2015-01-07 2016-08-03 阿里巴巴集团控股有限公司 Method and device for certificate image detection
CN106250936A (en) * 2016-08-16 2016-12-21 广州麦仑信息科技有限公司 Multiple features multithreading safety check contraband automatic identifying method based on machine learning
CN106570523A (en) * 2016-10-25 2017-04-19 浙江工业大学 Multi-feature combined robot football recognition method
CN106682641A (en) * 2017-01-05 2017-05-17 北京细推科技有限公司 Pedestrian identification method based on image with FHOG- LBPH feature
CN106940791A (en) * 2017-03-09 2017-07-11 中南大学 A kind of pedestrian detection method based on low-dimensional histograms of oriented gradients
WO2017197620A1 (en) * 2016-05-19 2017-11-23 Intel Corporation Detection of humans in images using depth information
CN107563377A (en) * 2017-08-30 2018-01-09 江苏实达迪美数据处理有限公司 It is a kind of to detect localization method using the certificate key area of edge and character area

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chunmei Qing et al.: "Automatic Nesting Seabird Detection Based on Boosted HOG-LBP Descriptors", 2011 18th IEEE International Conference on Image Processing *
Md. Atiqur Rahman Ahad et al.: "Action recognition based on binary patterns of action-history and histogram of oriented gradient", J Multimodal User Interfaces *


Also Published As

Publication number Publication date
CN108388920B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN109657665B (en) Invoice batch automatic identification system based on deep learning
CN108596166A (en) A kind of container number identification method based on convolutional neural networks classification
CN101894260B (en) Method for identifying forgery seal based on feature line randomly generated by matching feature points
CN101282461B (en) Image processing methods
CN108647681A (en) A kind of English text detection method with text orientation correction
CN104112128B (en) Digital image processing system and method applied to bill image character recognition
CN102800148B (en) RMB sequence number identification method
CN111401372A (en) Method for extracting and identifying image-text information of scanned document
US20180165552A1 (en) All-weather thermal-image pedestrian detection method
US20230206487A1 (en) Detection and identification of objects in images
CN104636706A (en) Complicated background bar code image automatic partitioning method based on gradient direction consistency
CN104298989A (en) Counterfeit identifying method and counterfeit identifying system based on zebra crossing infrared image characteristics
CN112307919B (en) Improved YOLOv 3-based digital information area identification method in document image
CN108334955A (en) Copy of ID Card detection method based on Faster-RCNN
CN106156777A (en) Textual image detection method and device
CN112052845A (en) Image recognition method, device, equipment and storage medium
CN109446345A (en) Nuclear power file verification processing method and system
CN109635805A (en) Image text location method and device, image text recognition methods and device
CN113591866A (en) Special job certificate detection method and system based on DB and CRNN
CN108388920A (en) A kind of Copy of ID Card detection method of fusion HOG and LBPH features
Edward V Support vector machine based automatic electric meter reading system
CN104616321A (en) Method for describing movement behavior of luggage image based on constant dimension and variable characteristics
Scuderi The fingerprint of linear dunes
CN104408403A (en) Arbitration method and apparatus for inconsistent phenomenon of two pieces of entry information
CN110232306A (en) A kind of present status system based on image detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant