CN108304789A - Face recognition method and device - Google Patents

Face recognition method and device

Info

Publication number
CN108304789A
CN108304789A (application CN201810048643.1A)
Authority
CN
China
Prior art keywords
image
feature
face
obtains
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810048643.1A
Other languages
Chinese (zh)
Inventor
袁培江
史震云
李建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Publication of CN108304789A publication Critical patent/CN108304789A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods

Abstract

This disclosure relates to a face recognition method and device. The method includes: obtaining a first image and a second image of a face to be identified, the first image and the second image differing in imaging method; extracting image features of the first image to obtain a first feature set of the first image, and extracting image features of the second image to obtain a second feature set of the second image; determining a fused feature set according to the first feature set and the second feature set; performing multi-view smooth discriminant analysis on the fused features to obtain a discriminant result for the face to be identified; and inputting the discriminant result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified. The disclosure can improve the accuracy of face recognition.

Description

Face recognition method and device
Technical field
This disclosure relates to the technical field of image recognition, and in particular to a face recognition method and device.
Background art
Face recognition is a biometric technology that performs identity recognition based on a person's facial features. With deepening research into face recognition methods, face recognition algorithms have reached a fairly high level; face recognition has become a mainstream biometric technology and is widely used in practice, for example in network account login, banking system login, access control, and face payment.
The key to all traditional face recognition methods is to extract the essential identity-related features from face data while eliminating the parts influenced by non-identity factors. Non-identity factors generally include: ambient lighting, pose, expression, accessories, and the like. Among these, lighting is the most important in practical applications, and users commonly require a face recognition system to adapt to different lighting environments. Typical face recognition systems perform recognition on ordinary visible-light face images, and such systems are easily affected by changes in ambient light; some preprocessing algorithm is usually needed to handle illumination before recognition. Although illumination preprocessing algorithms can eliminate the influence of lighting to some extent, they also cause the image to lose part of its useful information.
Summary of the invention
In view of this, the present disclosure proposes a face recognition method and device, to solve the problem of low recognition accuracy in traditional face recognition methods.
According to one aspect of the disclosure, a face recognition method is provided, the method including:
obtaining a first image and a second image of a face to be identified, the imaging methods of the first image and the second image being different;
extracting image features of the first image to obtain a first feature set of the first image, and extracting image features of the second image to obtain a second feature set of the second image;
determining a fused feature set according to the first feature set and the second feature set;
performing multi-view smooth discriminant analysis on the fused features to obtain a discriminant result for the face to be identified;
inputting the discriminant result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified.
In one possible implementation, obtaining the first image and the second image of the face to be identified includes:
obtaining a first original image of the face to be identified using an infrared imaging method;
obtaining a second original image of the face to be identified using a visible-light imaging method;
preprocessing the first original image and performing face detection to obtain the first image;
preprocessing the second original image and performing face detection to obtain the second image.
In one possible implementation, extracting the image features of the first image to obtain the first feature set of the first image, and extracting the image features of the second image to obtain the second feature set of the second image, includes:
extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image;
extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
In one possible implementation, determining the fused features according to the first feature set and the second feature set includes:
determining features to be fused in the first feature set and the second feature set according to an identification parameter;
performing feature fusion on the features to be fused to obtain the fused features.
In one possible implementation, performing feature fusion on the features to be fused includes:
performing weighted serial feature fusion on the features to be fused.
According to another aspect of the present disclosure, a face recognition device is provided, including:
an image acquisition module for obtaining a first image and a second image of a face to be identified, the imaging methods of the first image and the second image being different;
a feature extraction module for extracting image features of the first image to obtain a first feature set of the first image, and extracting image features of the second image to obtain a second feature set of the second image;
a fused-feature determination module for determining a fused feature set according to the first feature set and the second feature set;
a discriminant-result acquisition module for performing multi-view smooth discriminant analysis on the fused features to obtain a discriminant result for the face to be identified;
a recognition-result acquisition module for inputting the discriminant result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified.
In one possible implementation, the image acquisition module includes:
a first original image acquisition submodule for obtaining a first original image of the face to be identified using an infrared imaging method;
a second original image acquisition submodule for obtaining a second original image of the face to be identified using a visible-light imaging method;
a first image acquisition submodule for preprocessing the first original image and performing face detection to obtain the first image;
a second image acquisition submodule for preprocessing the second original image and performing face detection to obtain the second image.
In one possible implementation, the feature extraction module includes:
a first feature extraction submodule for extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image;
a second feature extraction submodule for extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
In one possible implementation, the fused-feature determination module includes:
a to-be-fused feature determination submodule for determining features to be fused in the first feature set and the second feature set according to an identification parameter;
a fused-feature determination submodule for performing feature fusion on the features to be fused to obtain the fused features.
In one possible implementation, the fused-feature determination submodule includes:
a first fusion submodule for performing weighted serial feature fusion on the features to be fused.
According to another aspect of the present disclosure, a face recognition device is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the method described in any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored; when the computer program instructions are executed by a processor, the processor is enabled to carry out the method described in any one of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of this disclosure can include the following beneficial effects: two images are obtained using different imaging methods; the features of the two images are extracted separately and then fused; the fused features are processed and input into a convolutional neural network for face recognition. This can improve the accuracy of face recognition.
Other features and aspects of the disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the disclosure together with the specification, and serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a face recognition method according to an exemplary embodiment;
Fig. 2 is a block diagram of a convolutional neural network according to an exemplary embodiment;
Fig. 3 is a flowchart of a face recognition method according to another exemplary embodiment;
Fig. 4 is a flowchart of a face recognition method according to another exemplary embodiment;
Fig. 5 is a flowchart of a face recognition method according to another exemplary embodiment;
Fig. 6 is a flowchart of a face recognition method according to another exemplary embodiment;
Fig. 7 is a flowchart of a face recognition method according to an exemplary embodiment;
Fig. 8 is a block diagram of a face recognition device according to another exemplary embodiment;
Fig. 9 is a block diagram of a face recognition device according to another exemplary embodiment;
Fig. 10 is a block diagram of a face recognition device according to an exemplary embodiment;
Fig. 11 is a block diagram of a face recognition device according to an exemplary embodiment.
Detailed description of embodiments
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The dedicated word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred over or advantageous compared with other embodiments.
In addition, numerous specific details are given in the detailed description below in order to better illustrate the disclosure. Those skilled in the art will appreciate that the disclosure can equally be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Face recognition is the most common modality in the field of biometric recognition and has been widely applied in the public safety field in recent years. The key to all face recognition methods is to extract the essential identity-related features from face data while eliminating the parts influenced by non-identity factors. Non-identity factors generally include: ambient lighting, pose, expression, accessories, and the like. Among these, lighting is the most important in practical applications, and in practice a face recognition system is usually required to adapt to different lighting environments. Typical face recognition systems perform recognition on ordinary visible-light face images, and such systems are easily affected by changes in ambient light; some preprocessing algorithm is usually needed to handle illumination before recognition. Although illumination preprocessing algorithms can eliminate the influence of lighting to some extent, they also cause the image to lose part of its useful information, which ultimately makes the face recognition result inaccurate.
Fig. 1 is a flowchart of a face recognition method according to an exemplary embodiment. As shown in Fig. 1, the method includes:
Step 10: obtain a first image and a second image of a face to be identified, the imaging methods of the first image and the second image being different.
In one possible implementation, the different imaging methods include an infrared imaging method and a visible-light imaging method. In this embodiment, the first image of the face to be identified is obtained using the infrared imaging method, so the first image is an infrared image; the second image of the face to be identified is obtained using the visible-light imaging method, so the second image is a visible-light image.
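As a minimal illustration of this acquisition step, the following sketch captures the two modalities with OpenCV. The device indices, and the assumption that the near-infrared sensor is exposed as an ordinary video device, are hypothetical and depend on the hardware.

```python
import cv2

# Hypothetical device indices: 0 = visible-light camera, 1 = NIR camera.
VIS_CAM, NIR_CAM = 0, 1

def capture_pair():
    """Grab one near-infrared frame and one visible-light frame of the face to be identified."""
    vis_cap = cv2.VideoCapture(VIS_CAM)
    nir_cap = cv2.VideoCapture(NIR_CAM)
    ok_vis, vis_frame = vis_cap.read()   # second original image (VIS)
    ok_nir, nir_frame = nir_cap.read()   # first original image (NIR)
    vis_cap.release()
    nir_cap.release()
    if not (ok_vis and ok_nir):
        raise RuntimeError("failed to capture one of the two modalities")
    return nir_frame, vis_frame
```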
Step 20: extract image features of the first image to obtain a first feature set of the first image, and extract image features of the second image to obtain a second feature set of the second image.
In one possible implementation, extracting image features includes extracting one kind of image feature, or several kinds. For example, the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values are extracted from the first image, and the first feature set is obtained from these three kinds of image features. The same three kinds of image features are extracted from the second image, and the second feature set is obtained from them.
Step 30: determine a fused feature set according to the first feature set and the second feature set.
In one possible implementation, different features represent the same image with different accuracy. For example, some features accurately capture the texture of an image, some accurately capture its shape, some its color, and some the spatial relationships within it. Selecting from the two feature sets the features with better accuracy yields a fused feature set, which characterizes the face to be identified more comprehensively and accurately.
Step 40: perform multi-view smooth discriminant analysis on the fused features to obtain a discriminant result for the face to be identified.
In one possible implementation, after multi-view smooth discriminant analysis is performed on the fused feature set, useless features are discarded and the useful features are used for discrimination.
Step 50: input the discriminant result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified.
A convolutional neural network (CNN) is a feed-forward neural network. Its artificial neurons respond to surrounding units within a limited coverage area, which gives it outstanding performance on large-scale image processing. A convolutional neural network consists of one or more convolutional layers with associated weights and pooling layers, topped by fully connected layers (corresponding to a classical neural network). This structure allows the convolutional neural network to exploit the two-dimensional structure of the input data. Compared with other deep learning architectures, convolutional neural networks can give better results in image and speech recognition. The model can be trained with the back-propagation algorithm. Compared with other deep feed-forward neural networks, a convolutional neural network has fewer parameters to consider. For a medium-sized image library, the training time and recognition time of a convolutional neural network are greatly reduced compared with a traditional neural network.
Fig. 2 is a block diagram of a convolutional neural network according to an exemplary embodiment. As shown in Fig. 2, the network structure has only 8 layers and only two feature extraction modules, which improves recognition efficiency. The convolutional neural network is trained using face samples; the discriminant result is then input into the trained convolutional neural network for recognition, yielding the final recognition result for the face to be identified.
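For illustration, the following sketch shows back-propagation training of a small CNN with two convolutional feature extraction stages; the layer sizes, input resolution, and class count are assumptions and do not reproduce the exact 8-layer architecture of Fig. 2.

```python
import torch
import torch.nn as nn

class SmallFaceCNN(nn.Module):
    """A compact CNN: two conv feature extraction stages, then a classifier."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),  # 64x64 input -> 16x16 maps
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallFaceCNN(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One back-propagation training step on a dummy batch of 64x64 grayscale faces.
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 10, (8,))
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```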
In this embodiment, two images are obtained using different imaging methods, the features extracted from the two images are fused, and the fused features are processed and then input into a convolutional neural network for face recognition. The feature fusion algorithm combines the advantages of the individual algorithms, and recognition by the convolutional neural network improves the recognition accuracy of images, in particular the recognition accuracy of dynamic non-cooperative images.
Fig. 3 is a flowchart of a face recognition method according to another exemplary embodiment. As shown in Fig. 3, the difference from the above embodiment is that step S10 includes:
Step S11: obtain a first original image of the face to be identified using an infrared imaging method.
Step S12: obtain a second original image of the face to be identified using a visible-light imaging method.
Step S13: preprocess the first original image and perform face detection to obtain the first image.
Step S14: preprocess the second original image and perform face detection to obtain the second image.
In practical applications, many face images are captured under natural visible light, for example identity documents, everyday photos, and magazine photos. Visible-light imaging can capture the detailed features of the subject under visible light. When image recognition uses visible-light images alone, it is easily affected by changes in ambient light, which lowers face recognition accuracy. The infrared imaging method uses the difference in infrared radiation between the target and the background to obtain an infrared image of the target, and can sometimes capture facial features that cannot be captured under visible-light conditions. The infrared image can therefore serve as a complement to the visible-light image, and together they achieve more accurate face recognition.
Preprocessing the first original image and the second original image includes related image preprocessing, for example converting color images to grayscale and using denoising methods to reduce the influence of noise on the image. Face detection is then performed using an artificial neural network model for image recognition, and after face processing of the original images the first image and the second image containing the face to be identified are obtained.
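A minimal sketch of this preprocessing and detection step follows, assuming OpenCV; the Haar cascade detector is a stand-in for the artificial neural network face detector mentioned above, and the 128x128 output size is an illustrative choice.

```python
import cv2

# Stand-in face detector; the patent's detector is a neural network model.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_and_detect(original):
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)  # color image -> grayscale
    gray = cv2.GaussianBlur(gray, (3, 3), 0)           # denoise to reduce noise influence
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                    # no face meeting the conditions
    x, y, w, h = faces[0]                              # keep the first detected face
    return cv2.resize(gray[y:y + h, x:x + w], (128, 128))
```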
Fig. 4 is a flowchart of a face recognition method according to another exemplary embodiment. As shown in Fig. 4, the difference from the above embodiment is that step S20 includes:
Step S21: extract the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image.
Step S22: extract the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
In one possible implementation, the facial features in the first image, obtained using the infrared imaging method, vary monotonically with the distance between the person's face and the camera. Using certain specific feature extraction methods on the first image, such as the local binary pattern (LBP) feature, can further eliminate this monotonic variation of the image and yield a fully illumination-independent feature representation.
In this embodiment, three feature extraction methods are used to extract the features of the first image and the second image respectively, and the first feature set and the second feature set are obtained from the extracted features. The three feature extraction methods are: SIFT (scale-invariant feature transform) feature extraction, LBP (local binary patterns) feature extraction, and HOG (histogram of oriented gradients) feature extraction; a sketch combining the three extractors is shown below.
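The following sketch builds one image's feature set from the three descriptors using OpenCV and scikit-image. Averaging the SIFT descriptors into a fixed-length vector is an assumption made here for illustration; the patent does not specify an aggregation.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern, hog

sift = cv2.SIFT_create()

def feature_set(gray_face: np.ndarray) -> dict:
    """Return the SIFT/LBP/HOG feature set of one grayscale face image."""
    _, sift_desc = sift.detectAndCompute(gray_face, None)
    sift_vec = (sift_desc.mean(axis=0) if sift_desc is not None
                else np.zeros(128, dtype=np.float32))   # 128-D per keypoint

    # Uniform LBP with 8 neighbors, radius 1 -> 10 possible codes.
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    hog_vec = hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))

    return {"sift": sift_vec, "lbp": lbp_hist, "hog": hog_vec}
```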
Fig. 5 is a flowchart of a face recognition method according to another exemplary embodiment. As shown in Fig. 5, the difference from the above embodiment is that step S30 includes:
Step S31: determine features to be fused in the first feature set and the second feature set according to an identification parameter.
Step S32: perform feature fusion on the features to be fused to obtain the fused features.
In one possible implementation, the visible-light image and the infrared image, two different modalities, are captured under different spectra; fusing the feature sets of the two images can reduce the difference between the two modalities.
The identification parameter is a parameter value preset according to the purpose of the image recognition. The first feature set and the second feature set contain SIFT features, LBP features, and HOG features; the features to be fused, determined in the feature sets according to the identification parameter, can express the features of the image more accurately and serve subsequent analysis better. Fusing the features to be fused includes operations such as addition and multiplication of the feature values.
Fig. 6 is a flowchart of a face recognition method according to another exemplary embodiment. As shown in Fig. 6, the difference from the above embodiment is that step S32 includes:
Step S321: perform weighted serial feature fusion on the features to be fused.
In one possible implementation, fusion algorithms include serial fusion algorithms and parallel fusion algorithms; fusion algorithms can also be divided into weighted and non-weighted methods. The fusion algorithm used in this embodiment is the weighted serial algorithm.
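A minimal sketch of weighted serial fusion follows: each feature to be fused is scaled by a weight and the scaled vectors are concatenated into one vector. The normalization step and the weight values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def weighted_serial_fusion(features: dict, weights: dict) -> np.ndarray:
    """Serial (concatenation) fusion with per-feature weights."""
    parts = []
    for name, vec in features.items():
        v = np.asarray(vec, dtype=np.float64)
        v = v / (np.linalg.norm(v) + 1e-12)   # normalize so weights are comparable
        parts.append(weights[name] * v)
    return np.concatenate(parts)

fused = weighted_serial_fusion(
    {"sift": np.random.rand(128), "lbp": np.random.rand(10), "hog": np.random.rand(36)},
    weights={"sift": 0.4, "lbp": 0.3, "hog": 0.3})
```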
To better illustrate the disclosed method, the following is an exemplary embodiment of the disclosure. Fig. 7 is a flowchart of a face recognition method according to another exemplary embodiment. As shown in Fig. 7, the method includes:
Step 1: cameras capture an NIR image (near-infrared image) and a VIS image (visible-light image) respectively.
Step 2: the near-infrared image and the visible-light image are each preprocessed, followed by face detection and face processing. Preprocessing includes converting color images to grayscale and using denoising methods to reduce the influence of noise on the image; face detection includes locating the faces in the image using an artificial neural network or image recognition technology. Face processing includes removing faces that do not meet the recognition conditions or the recognition purpose, to improve the accuracy of the final face recognition result.
Step 3: three feature extraction methods are used to extract the SIFT (scale-invariant feature transform), LBP (local binary patterns), and HOG (histogram of oriented gradients) features of the images respectively.
The extraction of SIFT features includes the following steps. (1) Detect extrema in scale space. (2) Localize keypoints: the position and scale at each candidate location are determined by fitting a fine model, and the stability of a keypoint is the selection criterion. (3) Determine keypoint orientations: the orientation of each keypoint is determined using a gradient orientation histogram, and the direction with the highest gradient magnitude is assigned to the keypoint. The scale, orientation, and position of the keypoints are then used to implement operations such as detection, matching, recognition, and tracking on images; determining the orientation of the keypoints in this way achieves rotation invariance. (4) Describe keypoints: measure the local gradients on the scale image in the local region around each keypoint and transform these gradients into a representation.
SIFT is a local feature algorithm; it still has good recognition performance on images that have undergone translation, rotation, or affine transformation, and SIFT descriptors have good robustness and stability. The idea of the SIFT algorithm is to find extrema in scale space and then describe the local region around each extremum, so that SIFT descriptors are invariant to rotation, position, and scale. In addition, SIFT features are also largely invariant to brightness changes.
The extraction of LBP features: LBP is a texture feature, and the algorithm mainly extracts window features following a structured approach, then uses statistical ideas to obtain a global feature. The window of the LBP descriptor was originally defined as 3×3 pixels, and the statistic is computed as follows: the value of the center pixel of the window is taken as a threshold, and the values of its 8 neighboring pixels are compared with it; a neighbor whose value exceeds the center pixel's value is marked 1, otherwise 0. In this way the 8 pixels in the window other than the center yield an 8-bit unsigned binary number; converting this binary number to decimal gives an integer that represents the LBP value of this window, i.e. the texture information of this window region. LBP is an operator for describing the local texture features of an image; it has notable advantages such as rotation invariance and grayscale invariance. A direct implementation of this 3×3 computation is sketched below.
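The following NumPy sketch implements the 3×3 LBP computation just described: each of the 8 neighbors is thresholded against the center pixel and contributes one bit of an 8-bit code. The neighbor ordering (hence bit assignment) is a convention chosen here for illustration.

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Per-pixel 3x3 LBP codes for a grayscale image (border excluded)."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbor offsets in clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]                 # center pixel acts as the threshold
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neighbor >= center).astype(np.uint8) << bit)
    return out
```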
The extraction of HOG features: the histogram of oriented gradients (HOG) feature is a feature descriptor used for object detection in computer vision. HOG features are obtained by computing and accumulating gradient orientation histograms over local regions of an image. The basic principle is that the shape and appearance of a local object in an image can be described well by the density distribution of gradients or edge directions, so HOG statistically describes the gradient information of an image as a feature. Since HOG operates on local cells of an image, it maintains good invariance to geometric and photometric deformations of the image; both kinds of deformation appear only over larger spatial regions. Moreover, under conditions such as coarse spatial sampling, fine orientation sampling, and strong local photometric normalization, as long as a pedestrian maintains a roughly upright posture, small limb movements of the pedestrian can be ignored without affecting the detection result. HOG features are therefore particularly suitable for human detection in images.
Step 4: a new feature is obtained using the feature fusion algorithm.
Among the above feature extraction methods, the SIFT method constructs a 128-dimensional vector for each keypoint and then matches the vectors; this requires the image to have sufficient texture, otherwise the constructed 128-dimensional vectors are not distinctive enough and mismatches occur easily. In limiting cases such as fingerprint image matching or star map recognition, where there is no texture around the image keypoints, the SIFT algorithm tends to fail. The LBP method lacks good robustness under sensor noise. The HOG method has a lengthy descriptor generation process, which makes it slow and poorly suited to real-time use; it has difficulty handling occlusion, and, due to the nature of gradients, the descriptor is quite sensitive to noise. In non-cooperative user applications such as surveillance, the subject is in motion, and the imaging system may refocus, leading to blurred or noisy captures. So far, the methods used for near-infrared face recognition have all been designed and tested for users cooperatively posing for the camera, and built on large databases serving as sample sets for deep neural networks.
To obtain features that represent the image more accurately, the above three kinds of features are fused. Feature combination algorithms fall into two kinds, serial and parallel, and can also be divided into weighted and non-weighted methods; the fusion yields more comprehensive and more accurate feature values, so that the final recognition result is more accurate.
Step 5: multi-view smooth discriminant analysis (MSDA) is used to find the projection matrix from each view to the common subspace.
Multi-view smooth discriminant analysis is a subspace method whose purpose is to find the projection matrix from each view to a common subspace. For example, define the original data as $a = [a_1, a_2, \dots, a_n]$, and denote the data of the $v$-th view by $X^v = \{x^v_{ij}\}$, where $x^v_{ij}$ is the $j$-th instance of class $i$ in the $v$-th view in the $D$-dimensional space, and $n^v_z$ is the number of instances of class $z$ in the $v$-th view. The data projected into the common subspace are defined as $y^v_{ij} = (a^v)^\top x^v_{ij}$, where $(a^v)^\top$ is the transpose of the projection matrix of the $v$-th view, and $C$ is the number of classes. By using a Laplacian smoothing function, the MSDA method smooths the basis vectors coming from the different views. The objective function of MSDA takes the smooth-discriminant form (a representative form, reconstructed to be consistent with the quantities defined here):

$$a^{*} = \arg\max_{a}\ \frac{a^{\top} S_b\, a}{a^{\top} S_w\, a + \lambda\, J(a)}$$

where $\rho$ is the correlation coefficient, $S_w$ denotes the within-class scatter matrix of the $x$-class patterns, $S_b$ denotes the between-class scatter matrix of the $x$-class patterns, $J(a)$ denotes the discrete Laplacian regularization function, and $\lambda$ is the parameter controlling smoothness, with $0 < \lambda < 1$.
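Assuming a quadratic Laplacian penalty $J(a) = a^\top L a$ and precomputed scatter matrices, the maximizers of this quotient can be found as a generalized eigenproblem; the sketch below illustrates the single-view case with toy matrices, only to show the role of the smoothness penalty. In MSDA proper, the projections of all views would be coupled and computed jointly.

```python
import numpy as np
from scipy.linalg import eigh

def smooth_discriminant_projection(Sb, Sw, L, lam=0.1, dims=2):
    """Top directions of a^T Sb a / (a^T Sw a + lam * a^T L a),
    via the generalized symmetric eigenproblem Sb a = mu (Sw + lam L) a."""
    mu, A = eigh(Sb, Sw + lam * L)
    return A[:, np.argsort(mu)[::-1][:dims]]   # columns = projection directions

d = 5
rng = np.random.default_rng(0)
M = rng.standard_normal((d, d))
Sb = M @ M.T                  # toy positive semi-definite between-class scatter
Sw = np.eye(d)                # toy within-class scatter
L = np.eye(d)                 # stand-in for the discrete Laplacian
proj = smooth_discriminant_projection(Sb, Sw, L)
```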
Step 6: the result is input into the convolutional neural network classifier for recognition.
As shown in Fig. 2, in the convolutional neural network classifier the 1×1 convolutional layers play two main roles in feature extraction. First, the 1×1 convolutional layers increase the nonlinearity of the network while retaining the rich information from the previous layer. Second, before the multi-scale convolutions extract features from the previous layer, the 1×1 convolutional layers reduce the amount of computation. The 3×3 pooling layer, with a stride (S) and padding (P) of a single pixel, not only preserves the resolution of the feature maps but also extracts more texture detail. The outputs of the 3×3 convolution filters and of the other related convolutional layers are stacked by a concat operation to serve as the input of the next layer; a sketch of such a block is given below.
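The following PyTorch sketch shows a block of the kind just described: a 1×1 convolution branch, a 3×3 convolution branch preceded by a 1×1 reduction, and a 3×3 pooling branch with stride 1 and padding 1 that keeps the feature-map resolution, with the branch outputs stacked along the channel axis. The channel counts are illustrative assumptions, not values from Fig. 2.

```python
import torch
import torch.nn as nn

class ConcatBlock(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.branch1x1 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU())
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, 1), nn.ReLU(),            # 1x1 reduces computation
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.branch_pool = nn.MaxPool2d(3, stride=1, padding=1)  # keeps resolution

    def forward(self, x):
        # Stack the branch outputs along the channel axis (the "concat").
        return torch.cat(
            [self.branch1x1(x), self.branch3x3(x), self.branch_pool(x)], dim=1)

y = ConcatBlock(8)(torch.randn(1, 8, 32, 32))   # -> shape (1, 16+32+8, 32, 32)
```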
Step 7: the recognition result is output.
Compared with the prior art, the disclosure combines the advantages of the individual algorithms through the feature fusion algorithm, greatly improving recognition accuracy, while the convolutional neural network improves the recognition accuracy of dynamic non-cooperative images; for a medium-sized image library, the training time and recognition time of the neural network are also greatly reduced.
Fig. 8 is a block diagram of a face recognition device according to another exemplary embodiment. As shown in Fig. 8, the face recognition device includes:
an image acquisition module 61 for obtaining a first image and a second image of a face to be identified, the imaging methods of the first image and the second image being different;
a feature extraction module 62 for extracting image features of the first image to obtain a first feature set of the first image, and extracting image features of the second image to obtain a second feature set of the second image;
a fused-feature determination module 63 for determining a fused feature set according to the first feature set and the second feature set;
a discriminant-result acquisition module 64 for performing multi-view smooth discriminant analysis on the fused features to obtain a discriminant result for the face to be identified;
a recognition-result acquisition module 65 for inputting the discriminant result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified.
Fig. 9 is a block diagram of a face recognition device according to another exemplary embodiment. As shown in Fig. 9, the difference from the above embodiment is as follows.
In one possible implementation, the image acquisition module 61 includes:
a first original image acquisition submodule 611 for obtaining a first original image of the face to be identified using an infrared imaging method;
a second original image acquisition submodule 612 for obtaining a second original image of the face to be identified using a visible-light imaging method;
a first image acquisition submodule 613 for preprocessing the first original image and performing face detection to obtain the first image;
a second image acquisition submodule 614 for preprocessing the second original image and performing face detection to obtain the second image.
In one possible implementation, the feature extraction module 62 includes:
a first feature extraction submodule 621 for extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image;
a second feature extraction submodule 622 for extracting the scale-invariant feature transform values, local binary pattern feature values, and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
In one possible implementation, the fused-feature determination module 63 includes:
a to-be-fused feature determination submodule 631 for determining features to be fused in the first feature set and the second feature set according to an identification parameter;
a fused-feature determination submodule 632 for performing feature fusion on the features to be fused to obtain the fused features.
In one possible implementation, the fused-feature determination submodule includes:
a first fusion submodule for performing weighted serial feature fusion on the features to be fused.
Figure 10 is a block diagram of a device 800 for face recognition according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 10, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions, so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, images, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the device 800 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a loudspeaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components, for example the display and the keypad of the device 800; the sensor component 814 can also detect a position change of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a temperature change of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to promote short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions; the above computer program instructions can be executed by the processor 820 of the device 800 to complete the above methods.
Figure 11 is a block diagram of a device 1900 for face recognition according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 11, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions so as to perform the above methods.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or similar.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions; the above computer program instructions can be executed by the processing component 1922 of the device 1900 to complete the above methods.
The disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example — but not limited to — an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the above. A computer-readable storage medium, as used here, is not to be construed as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described here can be downloaded from the computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
The computer program instructions for carrying out operations of the disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by using state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, so as to implement aspects of the disclosure.
Aspects of the disclosure are described here with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are executed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of instructions that contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or can be implemented with a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used here were chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed here.

Claims (12)

1. a kind of recognition algorithms, which is characterized in that the method includes:
Obtain the first image and the second image of face to be identified, the imaging method of described first image and second image is not Together;
The characteristics of image of extraction described first image obtains the fisrt feature set of described first image, and extraction described second The characteristics of image of image obtains the second feature set of second image;
According to the fisrt feature set and the second feature set, fusion feature set is determined;
The fusion feature is subjected to the smooth discriminant analysis of multiple view, obtains the differentiation result of face to be identified;
The differentiation result is inputted trained convolutional neural networks to be identified, obtains the recognition result of face to be identified.
2. according to the method described in claim 1, it is characterized in that, first image and the second figure for obtaining face to be identified Picture, including:
The first original image of face to be identified is obtained using infrared imaging method;
The second original image of face to be identified is obtained using visual light imaging method;
First original image is subjected to pretreatment and face detection, obtains the first image;
Second original image is subjected to pretreatment and face detection, obtains the second image.
3. The method according to claim 1, characterized in that extracting image features of the first image to obtain the first feature set of the first image, and extracting image features of the second image to obtain the second feature set of the second image, comprises:
extracting scale-invariant feature transform values, local binary pattern feature values and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image;
extracting scale-invariant feature transform values, local binary pattern feature values and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
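One possible body for the extract_feature_set helper assumed above, combining the three descriptors named in claim 3 via OpenCV and scikit-image. The crop size, the LBP/HOG parameters and the averaging of SIFT descriptors into a single fixed-length vector are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern

def extract_feature_set(face_image):
    """Return the SIFT / LBP / HOG feature set of one grayscale face crop."""
    img = cv2.resize(face_image, (128, 128))

    # Scale-invariant feature transform: average all keypoint descriptors
    # into one 128-dimensional vector (a simplification for fixed-size output)
    _, sift_desc = cv2.SIFT_create().detectAndCompute(img, None)
    sift_vec = sift_desc.mean(axis=0) if sift_desc is not None else np.zeros(128)

    # Local binary patterns: histogram of the uniform-pattern codes
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Histogram of oriented gradients over the whole crop
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))

    return {"sift": sift_vec, "lbp": lbp_hist, "hog": hog_vec}
```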
4. The method according to claim 1, characterized in that determining the fusion feature set according to the first feature set and the second feature set comprises:
determining features to be fused from the first feature set and the second feature set according to an identification parameter;
performing feature fusion processing on the features to be fused to obtain the fusion feature set.
5. The method according to claim 4, characterized in that performing feature fusion processing on the features to be fused comprises:
performing weighted continuous feature fusion processing on the features to be fused.
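Claim 5 names only a weighted continuous feature fusion. One plausible reading, sketched below as the fuse_features helper assumed earlier, scales each feature type by a weight coefficient and concatenates the two modalities; the weight values are arbitrary examples, not disclosed by the patent.

```python
import numpy as np

# Illustrative weight coefficients, one per feature type
DEFAULT_WEIGHTS = (("sift", 0.5), ("lbp", 0.3), ("hog", 0.2))

def fuse_features(first_features, second_features, weights=DEFAULT_WEIGHTS):
    """Weighted continuous fusion of two per-modality feature sets."""
    parts = [w * np.concatenate([first_features[key], second_features[key]])
             for key, w in weights]
    return np.concatenate(parts)
```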
6. A face recognition device, characterized by comprising:
an image acquisition module, configured to obtain a first image and a second image of a face to be identified, wherein the imaging methods of the first image and the second image are different;
a feature extraction module, configured to extract image features of the first image to obtain a first feature set of the first image, and to extract image features of the second image to obtain a second feature set of the second image;
a fusion feature determination module, configured to determine a fusion feature set according to the first feature set and the second feature set;
a discrimination result acquisition module, configured to perform multi-view smooth discriminant analysis on the fusion feature set to obtain a discrimination result for the face to be identified;
a recognition result acquisition module, configured to input the discrimination result into a trained convolutional neural network for recognition, to obtain a recognition result for the face to be identified.
7. The device according to claim 6, characterized in that the image acquisition module comprises:
a first original image acquisition submodule, configured to obtain a first original image of the face to be identified using an infrared imaging method;
a second original image acquisition submodule, configured to obtain a second original image of the face to be identified using a visible-light imaging method;
a first image acquisition submodule, configured to perform preprocessing and face detection on the first original image to obtain the first image;
a second image acquisition submodule, configured to perform preprocessing and face detection on the second original image to obtain the second image.
8. The device according to claim 6, characterized in that the feature extraction module comprises:
a first feature extraction submodule, configured to extract scale-invariant feature transform values, local binary pattern feature values and histogram of oriented gradients feature values of the first image to obtain the first feature set of the first image;
a second feature extraction submodule, configured to extract scale-invariant feature transform values, local binary pattern feature values and histogram of oriented gradients feature values of the second image to obtain the second feature set of the second image.
9. The device according to claim 6, characterized in that the fusion feature determination module comprises:
a to-be-fused feature determination submodule, configured to determine features to be fused from the first feature set and the second feature set according to an identification parameter;
a fusion feature determination submodule, configured to perform feature fusion processing on the features to be fused to obtain the fusion feature set.
10. The device according to claim 9, characterized in that the fusion feature determination submodule comprises:
a first fusion submodule, configured to perform weighted continuous feature fusion processing on the features to be fused.
11. A face recognition device, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 5.
12. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that, when the computer program instructions are executed by a processor, the processor is enabled to perform the method according to any one of claims 1 to 5.
CN201810048643.1A 2017-12-12 2018-01-18 Recognition algorithms and device Pending CN108304789A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711314868 2017-12-12
CN2017113148689 2017-12-12

Publications (1)

Publication Number Publication Date
CN108304789A true CN108304789A (en) 2018-07-20

Family

ID=62865900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810048643.1A Pending CN108304789A (en) 2017-12-12 2018-01-18 Recognition algorithms and device

Country Status (1)

Country Link
CN (1) CN108304789A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718889A (en) * 2016-01-21 2016-06-29 江南大学 Human face identity recognition method based on GB(2D)2PCANet depth convolution model
CN106874871A (en) * 2017-02-15 2017-06-20 广东光阵光电科技有限公司 A kind of recognition methods of living body faces dual camera and identifying device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUILHEM CHÉRON ET AL: "P-CNN: Pose-based CNN Features for Action Recognition", 2015 IEEE International Conference on Computer Vision (ICCV) *
LI JIE: "Research and Implementation of Visible-Light/Near-Infrared Face Recognition Methods", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI716938B (en) * 2018-08-10 2021-01-21 宏達國際電子股份有限公司 Facial expression modeling method, apparatus and non-transitory computer readable medium of the same
CN110827394A (en) * 2018-08-10 2020-02-21 宏达国际电子股份有限公司 Facial expression construction method and device and non-transitory computer readable recording medium
US10885702B2 (en) 2018-08-10 2021-01-05 Htc Corporation Facial expression modeling method, apparatus and non-transitory computer readable medium of the same
CN110827394B (en) * 2018-08-10 2024-04-02 宏达国际电子股份有限公司 Facial expression construction method, device and non-transitory computer readable recording medium
CN109196518A (en) * 2018-08-23 2019-01-11 合刃科技(深圳)有限公司 A kind of gesture identification method and device based on high light spectrum image-forming
CN109196518B (en) * 2018-08-23 2022-06-07 合刃科技(深圳)有限公司 Gesture recognition method and device based on hyperspectral imaging
WO2020037594A1 (en) * 2018-08-23 2020-02-27 合刃科技(深圳)有限公司 Hyperspectral imaging-based gesture recognition method and apparatus
CN109190633A (en) * 2018-11-06 2019-01-11 西安文理学院 A kind of intelligent object identifying system and control method based on deep learning
CN109977826A (en) * 2019-03-15 2019-07-05 百度在线网络技术(北京)有限公司 The classification recognition methods of object and device
CN109977860A (en) * 2019-03-25 2019-07-05 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110245573A (en) * 2019-05-21 2019-09-17 平安科技(深圳)有限公司 A kind of register method, apparatus and terminal device based on recognition of face
CN110245573B (en) * 2019-05-21 2023-05-26 平安科技(深圳)有限公司 Sign-in method and device based on face recognition and terminal equipment
CN111008670A (en) * 2019-12-20 2020-04-14 云南大学 Fungus image identification method and device, electronic equipment and storage medium
CN113033545B (en) * 2019-12-24 2023-11-03 同方威视技术股份有限公司 Empty tray identification method and device
CN113033545A (en) * 2019-12-24 2021-06-25 同方威视技术股份有限公司 Empty tray identification method and device
CN112115838A (en) * 2020-09-11 2020-12-22 南京华图信息技术有限公司 Thermal infrared image spectrum fusion human face classification method
CN112101186A (en) * 2020-09-11 2020-12-18 广州小鹏自动驾驶科技有限公司 Device and method for identifying a vehicle driver and use thereof
CN112115838B (en) * 2020-09-11 2024-04-05 南京华图信息技术有限公司 Face classification method based on thermal infrared image spectrum fusion
CN112101479B (en) * 2020-09-27 2023-11-03 杭州海康威视数字技术股份有限公司 Hair style identification method and device
CN112101479A (en) * 2020-09-27 2020-12-18 杭州海康威视数字技术股份有限公司 Hair style identification method and device
CN112258564B (en) * 2020-10-20 2022-02-08 推想医疗科技股份有限公司 Method and device for generating fusion feature set
CN112258564A (en) * 2020-10-20 2021-01-22 推想医疗科技股份有限公司 Method and device for generating fusion feature set

Similar Documents

Publication Publication Date Title
CN108304789A (en) Recognition algorithms and device
TWI766201B (en) Methods and devices for biological testing and storage medium thereof
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
Zhu et al. Targeting accurate object extraction from an image: A comprehensive study of natural image matting
CN112052831B (en) Method, device and computer storage medium for face detection
Raghavendra et al. Exploring the usefulness of light field cameras for biometrics: An empirical study on face and iris recognition
CN112052186B (en) Target detection method, device, equipment and storage medium
CN109543714A (en) Acquisition methods, device, electronic equipment and the storage medium of data characteristics
WO2021196389A1 (en) Facial action unit recognition method and apparatus, electronic device, and storage medium
CN108197585A (en) Recognition algorithms and device
CN109376631A (en) A kind of winding detection method and device neural network based
CN112232163B (en) Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN112215180A (en) Living body detection method and device
WO2022247539A1 (en) Living body detection method, estimation network processing method and apparatus, computer device, and computer readable instruction product
CN112232155A (en) Non-contact fingerprint identification method and device, terminal and storage medium
CN112016525A (en) Non-contact fingerprint acquisition method and device
CN112232159B (en) Fingerprint identification method, device, terminal and storage medium
CN113569598A (en) Image processing method and image processing apparatus
Feng et al. Iris R-CNN: Accurate iris segmentation and localization in non-cooperative environment with visible illumination
CN111310531B (en) Image classification method, device, computer equipment and storage medium
Prasad et al. Palmprint for individual’s personality behavior analysis
Jingade et al. DOG-ADTCP: A new feature descriptor for protection of face identification system
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
Wang et al. A comprehensive survey of rgb-based and skeleton-based human action recognition
CN112232157A (en) Fingerprint area detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180720)