CN106971168A - Multi-directional multi-level dual-cross robust recognition method based on facial structural features - Google Patents

Multi-directional multi-level dual-cross robust recognition method based on facial structural features

Info

Publication number
CN106971168A
CN106971168A (application CN201710207911.5A)
Authority
CN
China
Prior art keywords
dcp
image
face
robustness
dual crossing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710207911.5A
Other languages
Chinese (zh)
Inventor
樊小萌
陈志�
岳文静
黄雅楠
李熠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201710207911.5A
Publication of CN106971168A
Pending legal status (current)

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/32 - Normalisation of the pattern dimensions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 - Local feature extraction by analysis of parts of the pattern by matching or filtering

Abstract

The invention discloses a multi-directional multi-level dual-cross robust recognition method based on facial structural features. The method is a new approach to extracting facial expression features and addresses the two main components of face recognition: extraction of the face representation and face matching. Face representations are extracted with a two-group coding scheme over eight directions on an inner and an outer circle, arranged as two intersecting crosses, which captures the facial characteristics while reducing the amount of computation. Multi-directional gradient filtering is added to obtain a multi-level face representation: the first-order derivative of a Gaussian converts the grayscale face image into multi-directional gradient images, mitigating the influence of illumination, image blur, occlusion, pose and expression, so that the method is more robust to illumination variation. The gradient filter is optimized according to three criteria, namely maximizing the signal-to-noise ratio (SNR), preserving edge-localization accuracy, and producing a single response to a single edge, so that interference during expression feature extraction is reduced, expression features are extracted accurately, and computational cost is saved.

Description

Multi-directional multi-level dual-cross robust recognition method based on facial structural features
Technical field
The present invention relates to the technical field of image processing, and in particular to a multi-directional multi-level dual-cross robust recognition method based on facial structural features.
Background art
Face recognition is a biometric identification technology that identifies a person on the basis of facial feature information. A video camera or webcam captures images or video streams containing faces, faces are automatically detected and tracked in the images, and a series of face-related techniques are then applied to the detected faces; this is also commonly called portrait recognition or facial recognition.
Research on face recognition systems began in the 1960s, advanced with the development of computer technology and optical imaging technology after the 1980s, and entered the initial application stage in the late 1990s, with implementations led mainly by the United States, Germany and Japan. The key to a successful face recognition system is whether it possesses state-of-the-art core algorithms that deliver practical recognition accuracy and speed. A "face recognition system" integrates many specialized technologies such as artificial intelligence, machine recognition, machine learning, model theory, expert systems and computer vision, and must combine the theory and implementation of intermediate-value processing; it is a recent application of biometric recognition, and the realization of its core technology reflects the transition from weak artificial intelligence to strong artificial intelligence. A face recognition system mainly comprises four parts: face image acquisition and detection, face image preprocessing, facial feature extraction, and matching and recognition.
Existing traditional algorithms suffer during feature extraction because the many factors that affect the appearance of face images, such as illumination variation, pose changes, occlusion, image blur and expression changes, produce significant individual differences, which makes it difficult to improve the accuracy of face recognition. In addition, the amount of computation required by existing face recognition is still large, which slows recognition down; this delay often introduces considerable error into the data and makes the system feel unfriendly to use.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a multi-directional multi-level dual-cross robust recognition method based on facial structural features; the present invention can effectively reduce the amount of computation and improve the accuracy and speed of face recognition.
The present invention adopts the following technical solution to solve the above technical problem:
The multi-directional multi-level dual-cross robust recognition method based on facial structural features proposed by the present invention comprises the following steps:
Step 1: input a face image H(x_k, y_k), where H(x_k, y_k) is a grayscale image, and a face image training set P_k: {P_k | k = 1, ..., M}, where M is the total number of training image samples and k is an integer with 1 ≤ k ≤ M.
Let d be the distance between the two eyes in the input face image; taking the midpoint between the two eyes as the coordinate origin O(0, 0), the coordinates of the left and right eyes are (-0.5d, 0) and (0.5d, 0) respectively; proceed to step 2.
Step 2: for the face image of step 1, taking O as the reference, crop the rectangular region that extends a distance d to the left and to the right of O, 0.5d upward and 1.5d downward along the direction perpendicular to the line connecting the two eyes; the resulting rectangular region is the expression sub-region, and the image of the cropped rectangular region is the expression sub-region image.
Step 3: apply a scale transformation to the expression sub-region image of step 2 to unify its size and format, then proceed to step 4.
Step 4: for each pixel of the size-normalized expression sub-region image, draw an inner circle and an outer circle centered on that pixel, and on both circles take sampling points in the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4.
Step 5: quantize the texture information of the eight directions of each pixel of step 4 and assign it a unique decimal number. Define the function S(y) = 1 if y ≥ 0 and S(y) = 0 if y < 0, where y is a variable; let I_O, I_{A_i} and I_{B_i} be the gray values of the central point O and of the points A_i and B_i, where A_i and B_i are the sampling points of the pixel on the inner and outer circles of step 4, respectively. The amount of texture information in the i-th direction is DCP_i = S(I_{A_i} - I_O) × 2 + S(I_{B_i} - I_{A_i}), i = 0, ..., 7, where directions 0 to 7 refer to 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4 respectively.
Step 6: for the DCP_i of step 5, define {DCP_0, DCP_2, DCP_4, DCP_6} as the first subset and {DCP_1, DCP_3, DCP_5, DCP_7} as the second subset; each subset forms the shape of a cross, and together they constitute the dual-cross pattern, which yields the maximum joint entropy.
Step 7: group the two subsets of step 6 into two cross encoders, denoted DCP-1 and DCP-2. The codes of a pixel of the expression sub-region image are expressed as DCP-1 = Σ_{i=0}^{3} DCP_{2i} × 4^i and DCP-2 = Σ_{i=0}^{3} DCP_{2i+1} × 4^i. The two cross encoders constitute the total descriptor DCP = {DCP-1, DCP-2}; from the total descriptor, two code maps are produced, each of which is divided into a grid of non-overlapping regions. All histograms of the regions of this grid are computed and concatenated to form the holistic face representation; proceed to step 8.
Step 8: pass the holistic face representation formed in step 7 through a Gaussian filter, taking the central point O as the origin and using the first derivative of the Gaussian operator, G'_θ(x_k, y_k) = ∂G/∂n = cos θ · ∂G/∂x_k + sin θ · ∂G/∂y_k, where n = (cos θ, sin θ) is the normal vector representing the filtering direction, θ is the angle of the filtering direction, G(x_k, y_k) = exp(-(x_k² + y_k²)/(2σ²)) is the two-dimensional Gaussian multi-directional filter, x_k and y_k are the coordinate values of the face image H(x_k, y_k), and σ is the scale (standard deviation) of the Gaussian.
Step 9: use the filtered holistic face representation of step 8 as training data and initialize the weight distribution of the training samples, i.e. every training sample is initially assigned the same weight: D_1 = (w_11, w_12, ..., w_1j, ..., w_1M) with w_1j = 1/M, where D_1 is the initial weight distribution of the training set and w_1j is the initial weight of the j-th sample.
Step 10: perform multiple rounds of iteration on the weights of the training samples of step 9:
101. Let m be the iteration counter; at initialization m = 1 and the weight distribution is D_1.
102. Learn from the training data set with the weight distribution D_m, where D_m is the sample weight distribution for the m-th iteration; choose the threshold that minimizes the error rate to design the basic classifier G_m(x): x → {-1, 1}, where x is the variable. The classification error rate of G_m(x) on the training data set is e_m = Σ_k w_mk · I(G_m(x_k) ≠ y_k), i.e. e_m is the sum of the weights of the samples misclassified by G_m(x), where w_mj is the weight of the j-th sample at the m-th iteration and I(G_m(x_k) ≠ y_k) is the indicator that equals 1 when G_m(x_k) ≠ y_k and 0 otherwise.
103. If m < M, set m = m + 1 and repeat step 102; otherwise stop the loop.
Step 11: compute the coefficient of G_m(x) of step 10 to obtain the weight α_m of the basic classifier G_m(x) in the final classifier G(x): α_m = ½ ln((1 - e_m)/e_m), with α_m ≥ 0, so as to obtain the final classifier G(x) = sign(Σ_m α_m G_m(x)). A number of facial feature points are finally obtained, thereby forming multi-directional multi-level dual-cross pattern robust face recognition.
As a further optimization of the multi-directional multi-level dual-cross robust recognition method based on facial structural features of the present invention, the dual-cross grouping based on the symmetry of the face used in step 6 yields the maximum joint entropy and forms a robust pattern, which then enters the dual-cross coding of step 7.
As a further optimization of the multi-directional multi-level dual-cross robust recognition method based on facial structural features of the present invention, step 8 uses Gaussian filtering with the two-dimensional Gaussian multi-directional filter G(x_k, y_k) = exp(-(x_k² + y_k²)/(2σ²)), so that the grayscale face image is converted into multi-directional gradient images that are robust to illumination changes, the image signal-to-noise ratio is maximized, and the robustness of the image is increased.
As a further optimization of the multi-directional multi-level dual-cross robust recognition method based on facial structural features of the present invention, in step 8, θ ∈ (0, π).
As a further optimization of the multi-directional multi-level dual-cross robust recognition method based on facial structural features of the present invention, in step 10, the total number of training image samples is taken as M = 200.
Compared with the prior art, the present invention, by adopting the above technical solution, has the following technical effects:
(1) When determining the facial feature points, the present invention uses an iterative algorithm to combine a set of weak classifiers into the strongest classifier, reducing the classification error and "focusing on" the samples that are harder to classify;
(2) The present invention summarizes eight directions with the dual-cross pattern coding method, successfully exploits the symmetric structural characteristics of the face, simplifies the algorithm steps, and reduces the amount of computation;
(3) The present invention uses a Gaussian filter to transform the image, which solves the problem that traditional algorithms, when extracting features, are affected by the many factors that cause significant individual differences in the appearance of face images, such as illumination variation, pose changes, occlusion, image blur and expression changes; the grayscale face image is converted into multi-directional gradient images that are robust to illumination changes, the image signal-to-noise ratio is maximized, edge-localization accuracy is preserved, and a single edge produces a single response.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-directional multi-level dual-cross robust recognition method based on facial structural features of the present invention.
Fig. 2 shows the geometric normalization center of the face.
Fig. 3 shows the local sampling point set.
Fig. 4 shows the dual-cross grouped coding, in which (a) is the cross subset formed by the four directions 0, π/2, π and 3π/2, and (b) is the other cross subset formed by the four directions π/4, 3π/4, 5π/4 and 7π/4.
Fig. 5 shows multiple feature points of the face image.
Embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
In a specific implementation, Fig. 1 shows the flow of the multi-directional multi-level dual-cross robust recognition method based on facial structural features. A face image H(x_k, y_k) is input, where H(x_k, y_k) is a grayscale image, together with a face image training set P_k: {P_k | k = 1, ..., M}, where M is the total number of training image samples and k is an integer with 1 ≤ k ≤ M. Let d be the distance between the two eyes in the input face image; taking the midpoint between the two eyes as the coordinate origin O(0, 0), the coordinates of the left and right eyes are (-0.5d, 0) and (0.5d, 0) respectively.
For the face image marked with the above coordinates, taking O as the reference, the rectangular region extending a distance d to the left and to the right of O, 0.5d upward and 1.5d downward along the direction perpendicular to the line connecting the two eyes is cropped; this rectangular region is the expression sub-region, and the image of the cropped rectangular region is the expression sub-region image, as shown in Fig. 2. This is more conducive to the extraction of expression features.
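As an illustration of the geometric normalization above, the following Python sketch (NumPy only) crops the expression sub-region from the eye coordinates; the helper name crop_expression_region and the eye-coordinate inputs are hypothetical and not part of the patent, and boundary clipping is added for robustness.

    import numpy as np

    def crop_expression_region(gray, left_eye, right_eye):
        # Crop the region spanning d to the left/right of the mid-eye point O,
        # 0.5*d above it and 1.5*d below it, where d is the inter-eye distance.
        (xl, yl), (xr, yr) = left_eye, right_eye
        d = float(np.hypot(xr - xl, yr - yl))        # inter-eye distance d
        ox, oy = (xl + xr) / 2.0, (yl + yr) / 2.0    # origin O between the eyes
        h, w = gray.shape
        x0, x1 = int(round(ox - d)), int(round(ox + d))
        y0, y1 = int(round(oy - 0.5 * d)), int(round(oy + 1.5 * d))
        x0, y0 = max(x0, 0), max(y0, 0)              # clip to the image bounds
        x1, y1 = min(x1, w), min(y1, h)
        return gray[y0:y1, x0:x1]

The cropped region would then be resized to a fixed size so that all expression sub-region images share the same scale, as described next.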
The expression sub-region image then undergoes a scale transformation to unify its size and format. For each pixel of the expression sub-region image, an inner circle and an outer circle are drawn centered on that pixel, and on both circles sampling points are taken in the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4.
The texture information of the eight directions of each pixel is then quantized and assigned a unique decimal number. Define the function S(y) = 1 if y ≥ 0 and S(y) = 0 if y < 0, where y is a variable; I_O, I_{A_i} and I_{B_i} are the gray values of the central point O and of the points A_i and B_i, where A_i and B_i are the sampling points of the pixel on the inner and outer circles, respectively. The amount of texture information in the i-th direction is DCP_i = S(I_{A_i} - I_O) × 2 + S(I_{B_i} - I_{A_i}), where directions 0 to 7 refer to 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4 respectively, as shown in Fig. 3.
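A minimal sketch of this per-pixel encoding is shown below; the inner and outer radii (r_in, r_out) and the nearest-neighbour sampling of A_i and B_i are assumptions made for illustration, and boundary handling is omitted.

    import numpy as np

    def dcp_codes_at(gray, x, y, r_in=2, r_out=4):
        # DCP_i = S(I_Ai - I_O)*2 + S(I_Bi - I_Ai) for the eight directions i*pi/4,
        # where A_i lies on the inner circle and B_i on the outer circle.
        S = lambda v: 1 if v >= 0 else 0
        i_o = float(gray[y, x])
        codes = []
        for i in range(8):
            theta = i * np.pi / 4.0
            ax, ay = int(round(x + r_in * np.cos(theta))), int(round(y - r_in * np.sin(theta)))
            bx, by = int(round(x + r_out * np.cos(theta))), int(round(y - r_out * np.sin(theta)))
            i_a, i_b = float(gray[ay, ax]), float(gray[by, bx])
            codes.append(S(i_a - i_o) * 2 + S(i_b - i_a))
        return codes   # eight values, each in {0, 1, 2, 3}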
For the above DCP_i, {DCP_0, DCP_2, DCP_4, DCP_6} is defined as the first subset and {DCP_1, DCP_3, DCP_5, DCP_7} as the second subset; each subset forms the shape of a cross, and together they constitute the dual-cross pattern, which yields the maximum joint entropy, as shown in Fig. 4(a) and (b).
The two subsets are grouped into two cross encoders, denoted DCP-1 and DCP-2. The codes of a pixel of the expression sub-region image are expressed as DCP-1 = Σ_{i=0}^{3} DCP_{2i} × 4^i and DCP-2 = Σ_{i=0}^{3} DCP_{2i+1} × 4^i. The two cross encoders constitute the total descriptor DCP = {DCP-1, DCP-2}; from the total descriptor, two code maps are produced and each is divided into a grid of non-overlapping regions. All histograms of the regions of this grid are computed and concatenated to form the holistic face representation.
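The grouping into the two cross encoders and the regional histograms might be sketched as follows; the input is assumed to be an (H, W, 8) array of per-pixel DCP_i values, and the 4 x 4 grid size is an illustrative choice rather than a value fixed by the patent.

    import numpy as np

    def dual_cross_maps(codes):
        # codes: (H, W, 8) array of DCP_i values in {0, 1, 2, 3}.
        # DCP-1 uses directions 0, 2, 4, 6 and DCP-2 uses 1, 3, 5, 7;
        # each group of four base-4 digits gives one code in [0, 255].
        weights = 4 ** np.arange(4)                  # 1, 4, 16, 64
        dcp1 = (codes[..., 0::2] * weights).sum(axis=-1)
        dcp2 = (codes[..., 1::2] * weights).sum(axis=-1)
        return dcp1.astype(np.uint8), dcp2.astype(np.uint8)

    def regional_histograms(code_map, grid=(4, 4), n_bins=256):
        # Split a code map into non-overlapping regions and concatenate
        # the per-region histograms into one feature vector.
        h, w = code_map.shape
        gy, gx = grid
        feats = []
        for r in range(gy):
            for c in range(gx):
                block = code_map[r * h // gy:(r + 1) * h // gy,
                                 c * w // gx:(c + 1) * w // gx]
                feats.append(np.histogram(block, bins=n_bins, range=(0, n_bins))[0])
        return np.concatenate(feats)

The holistic face representation is then the concatenation of the regional histograms of both code maps.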
The holistic face representation formed above is passed through a Gaussian filter, taking the central point O as the origin and using the first derivative of the Gaussian operator, G'_θ(x_k, y_k) = ∂G/∂n = cos θ · ∂G/∂x_k + sin θ · ∂G/∂y_k, where n = (cos θ, sin θ) is the normal vector representing the filtering direction, θ is the angle of the filtering direction, G(x_k, y_k) = exp(-(x_k² + y_k²)/(2σ²)) is the two-dimensional Gaussian multi-directional filter, x_k and y_k are the coordinate values of the above face image H(x_k, y_k), and σ is the scale (standard deviation) of the Gaussian. Through this filter the grayscale face image is converted into multi-directional gradient images that are robust to illumination changes, and the image signal-to-noise ratio is maximized.
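A sketch of the directional first-derivative-of-Gaussian filtering is given below; the kernel size, the particular σ and the four sample angles in (0, π) are illustrative assumptions, and SciPy's ndimage.convolve is used only as a convenient convolution routine.

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_derivative_kernel(theta, sigma=1.0, size=7):
        # dG/dn = cos(theta)*dG/dx + sin(theta)*dG/dy for the 2-D Gaussian G.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        gx, gy = -x / sigma ** 2 * g, -y / sigma ** 2 * g
        k = np.cos(theta) * gx + np.sin(theta) * gy
        return k / np.abs(k).sum()                   # simple normalisation

    def multidirectional_gradients(gray, sigma=1.0,
                                   thetas=(np.pi / 8, 3 * np.pi / 8, 5 * np.pi / 8, 7 * np.pi / 8)):
        # Convolve the grayscale image with one directional filter per angle,
        # giving a stack of multi-directional gradient images.
        gray = gray.astype(np.float64)
        return np.stack([convolve(gray, gaussian_derivative_kernel(t, sigma))
                         for t in thetas])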
The filtered holistic face representation is then used as training data, and the weight distribution of the training samples is initialized, i.e. every training sample is initially assigned the same weight: D_1 = (w_11, w_12, ..., w_1j, ..., w_1M) with w_1j = 1/M, where D_1 is the initial weight distribution of the training set and w_1j is the initial weight of the j-th sample.
Multiple rounds of iteration are then performed on the weights of the above training samples:
(1) Let m be the iteration counter; at initialization m = 1, and the weight distribution is the D_1 of step 9, i.e. the value of D_m for m = 1.
(2) Learn from the training data set with the weight distribution D_m, where D_m is the sample weight distribution for the m-th iteration; choose the threshold that minimizes the error rate to design the basic classifier G_m(x): x → {-1, 1}, where x is the variable. The classification error rate of G_m(x) on the training data set is e_m = Σ_k w_mk · I(G_m(x_k) ≠ y_k), i.e. e_m is the sum of the weights of the samples misclassified by G_m(x), where w_mj is the weight of the j-th sample at the m-th iteration and I(G_m(x_k) ≠ y_k) is the indicator that equals 1 when G_m(x_k) ≠ y_k and 0 otherwise.
(3) If m < M, set m = m + 1 and repeat step (2); otherwise stop the loop.
Finally, the coefficient of G_m(x) is computed to obtain the weight α_m of the basic classifier G_m(x) in the final classifier G(x): α_m = ½ ln((1 - e_m)/e_m), with α_m ≥ 0, so as to obtain the final classifier G(x) = sign(Σ_m α_m G_m(x)). A number of facial feature points are finally obtained, thereby forming multi-directional multi-level dual-cross pattern robust face recognition, as shown in Fig. 5.
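The sample re-weighting and the combination of the basic classifiers G_m(x) into the final classifier G(x) follow the usual AdaBoost scheme; a compact sketch with single-feature threshold stumps as the basic classifiers is given below, where the stump form, the feature matrix X (for example, the concatenated DCP histograms), the binary labels y in {-1, +1} and the number of rounds are illustrative assumptions rather than details fixed by the patent.

    import numpy as np

    def train_adaboost(X, y, n_rounds=10):
        # X: (M, F) feature matrix, y: (M,) labels in {-1, +1}.
        M = X.shape[0]
        w = np.full(M, 1.0 / M)                      # D1: identical initial weights
        model = []
        for _ in range(n_rounds):
            best = None
            for f in range(X.shape[1]):              # stump with minimum weighted error
                for t in np.unique(X[:, f]):
                    for s in (1, -1):
                        pred = np.where(X[:, f] >= t, s, -s)
                        err = w[pred != y].sum()     # e_m: weights of misclassified samples
                        if best is None or err < best[0]:
                            best = (err, f, t, s, pred)
            e_m, f, t, s, pred = best
            e_m = min(max(e_m, 1e-12), 1 - 1e-12)
            alpha = 0.5 * np.log((1 - e_m) / e_m)    # weight of G_m in the final classifier
            w *= np.exp(-alpha * y * pred)           # "focus on" misclassified samples
            w /= w.sum()
            model.append((f, t, s, alpha))
        return model

    def predict(model, X):
        # Final classifier G(x) = sign(sum_m alpha_m * G_m(x)).
        score = np.zeros(X.shape[0])
        for f, t, s, alpha in model:
            score += alpha * np.where(X[:, f] >= t, s, -s)
        return np.sign(score)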
The present invention not only solves the problem that traditional algorithms, when extracting features, are affected by the many factors that cause significant individual differences in the appearance of face images, such as illumination variation, pose changes, occlusion, image blur and expression changes, but also reduces the amount of computation and improves the accuracy and speed of face recognition.

Claims (5)

1. A multi-directional multi-level dual-cross robust recognition method based on facial structural features, characterized in that it comprises the following steps:
Step 1: input a face image H(x_k, y_k), where H(x_k, y_k) is a grayscale image, and a face image training set P_k: {P_k | k = 1, ..., M}, where M is the total number of training image samples and k is an integer with 1 ≤ k ≤ M;
let d be the distance between the two eyes in the input face image; taking the midpoint between the two eyes as the coordinate origin O(0, 0), the coordinates of the left and right eyes are (-0.5d, 0) and (0.5d, 0) respectively; proceed to step 2;
Step 2: for the face image of step 1, taking O as the reference, crop the rectangular region that extends a distance d to the left and to the right of O, 0.5d upward and 1.5d downward along the direction perpendicular to the line connecting the two eyes; the resulting rectangular region is the expression sub-region, and the image of the cropped rectangular region is the expression sub-region image;
Step 3: apply a scale transformation to the expression sub-region image of step 2 to unify its size and format, then proceed to step 4;
Step 4: for each pixel of the size-normalized expression sub-region image, draw an inner circle and an outer circle centered on that pixel, and on both circles take sampling points in the eight directions 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4;
Step 5: quantize the texture information of the eight directions of each pixel of step 4 and assign it a unique decimal number; define the function S(y) = 1 if y ≥ 0 and S(y) = 0 if y < 0, where y is a variable, and I_O, I_{A_i}, I_{B_i} are the gray values of the central point O and of the points A_i, B_i, where A_i and B_i are the sampling points of the pixel on the inner and outer circles of step 4 respectively; the amount of texture information in the i-th direction is DCP_i = S(I_{A_i} - I_O) × 2 + S(I_{B_i} - I_{A_i}), where directions 0 to 7 refer to 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4 respectively;
Step 6: for the DCP_i of step 5, define {DCP_0, DCP_2, DCP_4, DCP_6} as the first subset and {DCP_1, DCP_3, DCP_5, DCP_7} as the second subset; each subset forms the shape of a cross, and together they constitute the dual-cross pattern, which yields the maximum joint entropy;
Step 7: group the two subsets of step 6 into two cross encoders, denoted DCP-1 and DCP-2; the codes of a pixel of the expression sub-region image are expressed as DCP-1 = Σ_{i=0}^{3} DCP_{2i} × 4^i and DCP-2 = Σ_{i=0}^{3} DCP_{2i+1} × 4^i; the two cross encoders constitute the total descriptor DCP = {DCP-1, DCP-2}; from the total descriptor, two code maps are produced, each of which is divided into a grid of non-overlapping regions; all histograms of the regions of this grid are computed and concatenated to form the holistic face representation; proceed to step 8;
Step 8: pass the holistic face representation formed in step 7 through a Gaussian filter, taking the central point O as the origin and using the first derivative of the Gaussian operator, G'_θ(x_k, y_k) = ∂G/∂n = cos θ · ∂G/∂x_k + sin θ · ∂G/∂y_k, where n = (cos θ, sin θ) is the normal vector representing the filtering direction, θ is the angle of the filtering direction, G(x_k, y_k) = exp(-(x_k² + y_k²)/(2σ²)) is the two-dimensional Gaussian multi-directional filter, x_k and y_k are the coordinate values of the face image H(x_k, y_k), and σ is the scale (standard deviation) of the Gaussian;
Step 9: use the filtered holistic face representation of step 8 as training data and initialize the weight distribution of the training samples, i.e. every training sample is initially assigned the same weight: D_1 = (w_11, w_12, ..., w_1j, ..., w_1M) with w_1j = 1/M, where D_1 is the initial weight distribution of the training set and w_1j is the initial weight of the j-th sample;
Step 10: perform multiple rounds of iteration on the weights of the training samples of step 9;
101. let m be the iteration counter; at initialization m = 1 and the weight distribution is D_1;
102. learn from the training data set with the weight distribution D_m, where D_m is the sample weight distribution for the m-th iteration; choose the threshold that minimizes the error rate to design the basic classifier G_m(x): x → {-1, 1}, where x is the variable; the classification error rate of G_m(x) on the training data set is e_m = Σ_k w_mk · I(G_m(x_k) ≠ y_k), i.e. e_m is the sum of the weights of the samples misclassified by G_m(x), where w_mj is the weight of the j-th sample at the m-th iteration and I(G_m(x_k) ≠ y_k) is the indicator that equals 1 when G_m(x_k) ≠ y_k and 0 otherwise;
103. if m < M, set m = m + 1 and repeat step 102; otherwise stop the loop;
Step 11: compute the coefficient of G_m(x) of step 10 to obtain the weight α_m of the basic classifier G_m(x) in the final classifier G(x): α_m = ½ ln((1 - e_m)/e_m), with α_m ≥ 0, so as to obtain the final classifier G(x) = sign(Σ_m α_m G_m(x)); a number of facial feature points are finally obtained, thereby forming multi-directional multi-level dual-cross pattern robust face recognition.
2. The multi-directional multi-level dual-cross robust recognition method based on facial structural features according to claim 1, characterized in that the dual-cross grouping based on the symmetry of the face used in step 6 yields the maximum joint entropy and forms a robust pattern, which then enters the dual-cross coding of step 7.
3. The multi-directional multi-level dual-cross robust recognition method based on facial structural features according to claim 1, characterized in that step 8 uses Gaussian filtering with the two-dimensional Gaussian multi-directional filter G(x_k, y_k) = exp(-(x_k² + y_k²)/(2σ²)), so that the grayscale face image is converted into multi-directional gradient images that are robust to illumination changes, the image signal-to-noise ratio is maximized, and the robustness of the image is increased.
4. The multi-directional multi-level dual-cross robust recognition method based on facial structural features according to claim 1, characterized in that in step 8, θ ∈ (0, π).
5. The multi-directional multi-level dual-cross robust recognition method based on facial structural features according to claim 1, characterized in that in step 10, the total number of training image samples is taken as M = 200.
CN201710207911.5A 2017-03-31 2017-03-31 Multi-directional multi-level dual-cross robust recognition method based on facial structural features Pending CN106971168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710207911.5A CN106971168A (en) 2017-03-31 2017-03-31 Multi-directional multi-level dual-cross robust recognition method based on facial structural features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710207911.5A CN106971168A (en) 2017-03-31 2017-03-31 Multi-directional multi-level dual-cross robust recognition method based on facial structural features

Publications (1)

Publication Number Publication Date
CN106971168A true CN106971168A (en) 2017-07-21

Family

ID=59336387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710207911.5A Pending CN106971168A (en) Multi-directional multi-level dual-cross robust recognition method based on facial structural features

Country Status (1)

Country Link
CN (1) CN106971168A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577994A (en) * 2017-08-17 2018-01-12 南京邮电大学 A kind of pedestrian based on deep learning, the identification of vehicle auxiliary product and search method
CN112464901A (en) * 2020-12-16 2021-03-09 杭州电子科技大学 Face feature extraction method based on gradient face local high-order main direction mode

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824052A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Multilevel semantic feature-based face feature extraction method and recognition method
CN105893941A (en) * 2016-03-28 2016-08-24 电子科技大学 Facial expression identifying method based on regional images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824052A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Multilevel semantic feature-based face feature extraction method and recognition method
CN105893941A (en) * 2016-03-28 2016-08-24 电子科技大学 Facial expression identifying method based on regional images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANGXING DING等: "Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
YAN, ZHENG: "Facial Expression Action Unit Recognition System", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577994A (en) * 2017-08-17 2018-01-12 南京邮电大学 A kind of pedestrian based on deep learning, the identification of vehicle auxiliary product and search method
CN112464901A (en) * 2020-12-16 2021-03-09 杭州电子科技大学 Face feature extraction method based on gradient face local high-order main direction mode
CN112464901B (en) * 2020-12-16 2024-02-02 杭州电子科技大学 Face feature extraction method based on gradient face local high-order main direction mode

Similar Documents

Publication Publication Date Title
CN105975931B (en) A kind of convolutional neural networks face identification method based on multiple dimensioned pond
CN108520216B (en) Gait image-based identity recognition method
Zhu et al. Lesion detection of endoscopy images based on convolutional neural network features
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN104778457B (en) Video face identification method based on multi-instance learning
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
Zhang et al. GAN-based image augmentation for finger-vein biometric recognition
CN108304826A (en) Facial expression recognizing method based on convolutional neural networks
CN103927511B (en) image identification method based on difference feature description
CN105718889B (en) Based on GB (2D)2The face personal identification method of PCANet depth convolution model
Herdiyeni et al. Combination of morphological, local binary pattern variance and color moments features for indonesian medicinal plants identification
CN106529504B (en) A kind of bimodal video feeling recognition methods of compound space-time characteristic
CN107122712B (en) Palm print image identification method based on CNN and bidirectional VLAD
CN107992850B (en) Outdoor scene three-dimensional color point cloud classification method
CN103679136B (en) Hand back vein identity recognition method based on combination of local macroscopic features and microscopic features
Liu et al. Finger vein recognition with superpixel-based features
CN106778768A (en) Image scene classification method based on multi-feature fusion
CN108615007B (en) Three-dimensional face identification method, device and storage medium based on characteristic tensor
CN107886558A (en) A kind of human face expression cartoon driving method based on RealSense
CN115830652B (en) Deep palm print recognition device and method
CN109800677A (en) A kind of cross-platform palm grain identification method
Khan et al. Texture representation through overlapped multi-oriented tri-scale local binary pattern
CN106203448A (en) A kind of scene classification method based on Nonlinear Scale Space Theory
CN110188646B (en) Human ear identification method based on fusion of gradient direction histogram and local binary pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170721)