CN108664909A - An identity verification method and terminal - Google Patents

An identity verification method and terminal

Info

Publication number
CN108664909A
Authority
CN
China
Prior art keywords
image
neural networks
convolutional neural
pooling
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810402939.9A
Other languages
Chinese (zh)
Inventor
刘小东
王凯丰
蒋杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aiyouwei Software Development Co Ltd
Original Assignee
Shanghai Aiyouwei Software Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aiyouwei Software Development Co Ltd
Priority to CN201810402939.9A
Publication of CN108664909A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the field of intelligent terminal technology, and in particular to an identity verification method and terminal applied to a terminal. The method includes: obtaining a target identity image of a user to be verified; inputting the target identity image directly into a predetermined convolutional neural network model; extracting target image features of the target identity image with the convolutional neural network model; obtaining an image feature database corresponding to the convolutional neural network model; and verifying the target image features based on the image feature database. Because the user's identity image is input directly, the complex image preprocessing that the original user image would otherwise require is avoided and the features of the user image are preserved unchanged; the trained convolutional neural network extracts the feature information of the user image directly, and verification proceeds from the extracted features, making identity verification more accurate and the verification flow simpler and faster, and raising the intelligence level of the terminal.

Description

An identity verification method and terminal
Technical field
This application relates to the field of intelligent terminal technology, and in particular to an identity verification method and terminal.
Background technology
With the continuous development of biometric verification technology, biometric techniques such as fingerprint recognition, face recognition, and iris recognition are used more and more in terminals such as mobile phones and tablet computers. Existing biometric techniques such as fingerprint recognition mostly build a fingerprint recognition model, collect a fingerprint image, extract fingerprint feature information, and verify it. When extracting features, the prior art mostly extracts global features, such as the ridge features of a fingerprint, while ignoring the minutiae in the fingerprint, such as endpoints, bifurcations, isolated points, rings, breaks, short ridges, bridges, crossings, and spurs; this harms the accuracy of recognition and verification. Moreover, before feature extraction, complex image preprocessing such as image segmentation, noise handling, fingerprint enhancement, binarization, and fingerprint image thinning is required, which complicates the processing flow and significantly increases the amount of computation. The prior art also treats compound factors such as model complexity, amount of computation, accuracy, and system robustness rather coarsely; for example, the prior-art AlexNet model has high model complexity and a large amount of computation, and at equal accuracy consumes more training time and computing power. Feature extraction, the core of biometric recognition, remains the core bottleneck of traditional technology.
Summary of the invention
The purpose of this application is to provide an identity verification method and terminal. By building a deep-learning convolutional neural network architecture, a multilayer neural network specifically designed to process two-dimensional data, the user image is input directly; complex image preprocessing of the target identity image, such as image segmentation, noise handling, image enhancement, binarization, and image thinning, is avoided; the features of the user image are preserved unchanged; the trained convolutional neural network extracts the feature information of the user image directly, and verification proceeds from the extracted features, making identity verification more accurate and the verification flow simpler and faster.
According to a first aspect of some embodiments of the present application, embodiments herein provide an identity verification method applied to a terminal, the method including:
Obtaining a target identity image of a user to be verified;
Inputting the target identity image directly into a predetermined convolutional neural network model;
Extracting target image features of the target identity image with the convolutional neural network model;
Obtaining an image feature database corresponding to the convolutional neural network model and containing at least one set of image features;
Verifying the target image features based on the image feature database.
Optionally, before obtaining the target identity image of the user to be verified, the method further includes: pre-training the convolutional neural network model;
The method of pre-training the convolutional neural network model includes:
Obtaining an identity information entry instruction;
Collecting an identity image of the user based on the identity information entry instruction;
Building the convolutional neural network model;
Inputting the identity image into the built convolutional neural network model;
Training the convolutional neural network model based on the identity image;
Saving the trained convolutional neural network model.
Optionally, before building the convolutional neural network model, the method further includes:
Identifying the identity image;
Determining the image type of the identity image;
Determining, according to the image type, the type of convolutional neural network model to be built;
wherein different image types correspond to different types of convolutional neural network models.
Optionally, the method of building the convolutional neural network model includes:
Configuring an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer to form the convolutional neural network model;
wherein the convolutional neural network model contains at least one each of convolutional, pooling, and fully connected layers.
Optionally, the convolutional layers include a first convolutional layer Conv1 and a second convolutional layer Conv2;
the pooling layers include a first pooling layer Pool1 and a second pooling layer Pool2;
the fully connected layers include a first fully connected layer FCN1 and a second fully connected layer FCN2;
the convolutional layers alternate with the pooling layers;
the fully connected layers are placed after the second pooling layer Pool2;
and the output layer is placed after the fully connected layers.
Optionally, the method of training the built convolutional neural network model based on at least one set of identity images includes:
Inputting the identity image into the first convolutional layer Conv1 through the input layer;
Performing a convolution operation on the identity image with the first convolution kernel to obtain a first convolution matrix C1;
Inputting the first convolution matrix C1 into the first pooling layer Pool1;
Performing a pooling operation on the first convolution matrix C1 with the first pooling unit to obtain a first pooled matrix AVG1;
Inputting the first pooled matrix AVG1 into the second convolutional layer Conv2;
Performing a convolution operation on the first pooled matrix AVG1 with the second convolution kernel to obtain a second convolution matrix C2;
Inputting the second convolution matrix C2 into the second pooling layer Pool2;
Performing a pooling operation on the second convolution matrix C2 with the second pooling unit to obtain a second pooled matrix AVG2;
Inputting the second pooled matrix AVG2 into the first fully connected layer FCN1 for a first extraction to obtain first classification feature information;
Inputting the first classification feature information into the second fully connected layer FCN2 for a second extraction to obtain second classification feature information;
Classifying the second classification feature information with a Softmax activation function to obtain the image features corresponding to the identity image, completing the training of the convolutional neural network model.
Optionally, when building the convolutional neural network model, the method further includes:
Building an image feature database corresponding to the convolutional neural network model;
and after the training of the convolutional neural network model is completed, the method further includes:
Entering the image features corresponding to the identity image into the image feature database corresponding to the convolutional neural network model.
Optionally, for each convolutional layer, the method of the convolution operation includes: computing the convolution based on a predetermined formula;
The predetermined formula is:

c(i, j) = sum over u = 0..m-1, v = 0..n-1 of Input(i+u, j+v) * W(u, v)

wherein Input is the data input to the convolutional layer, W is the convolution kernel of the convolutional layer, m and n are the height and width of the convolution kernel, and c(i, j) is the result of the convolution operation at position (i, j); all results together form the convolution matrix of the convolutional layer;
For each pooling layer, the method of the pooling operation includes: determining pooling regions according to the pooling unit of the pooling layer;
Computing the average value of each pooling region separately;
the average values of all pooling regions constitute the pooled matrix of the pooling layer.
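The convolution formula and average-pooling procedure above can be sketched in NumPy as follows; the function names, kernel, and pooling sizes in the demo are illustrative assumptions, not fixed by the patent:

```python
import numpy as np

def conv2d(inp: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Valid convolution per the formula: c(i, j) = sum_{u,v} Input(i+u, j+v) * W(u, v)."""
    m, n = w.shape                              # kernel height m and width n
    rows = inp.shape[0] - m + 1
    cols = inp.shape[1] - n + 1
    c = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            c[i, j] = np.sum(inp[i:i + m, j:j + n] * w)
    return c                                    # the convolution matrix of the layer

def avg_pool(c: np.ndarray, size: int, stride: int) -> np.ndarray:
    """Average pooling: determine pooling regions, then take the mean of each."""
    rows = (c.shape[0] - size) // stride + 1
    cols = (c.shape[1] - size) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = c[i * stride:i * stride + size,
                          j * stride:j * stride + size].mean()
    return out                                  # the pooled matrix of the layer
```

For example, `conv2d(np.eye(3), np.ones((2, 2)))` yields the 2x2 convolution matrix `[[2, 1], [1, 2]]`.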
Optionally, the method of verifying the target image features based on the image feature database includes:
Traversing the image features in the image feature database;
Judging whether an image feature matching the target image features exists in the image feature database;
If it exists, judging that the user passes identity verification;
If it does not exist, judging that the user cannot pass identity verification;
Before the target identity image is input directly into the predetermined convolutional neural network model, the method further includes:
Identifying the target identity image;
Determining the image type of the target identity image;
Inputting the target identity image into the corresponding predetermined convolutional neural network model;
wherein the image types include: fingerprint type, iris type, and face type.
According to another aspect of the application, embodiments herein further provide a terminal device, including a memory configured to store data and instructions;
and a processor in communication with the memory, wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
Obtaining a target identity image of a user to be verified;
Inputting the target identity image directly into a predetermined convolutional neural network model;
Extracting target image features of the target identity image with the convolutional neural network model;
Obtaining an image feature database corresponding to the convolutional neural network model and containing at least one set of image features;
Verifying the target image features based on the image feature database.
In the above technical solution of the application, a target identity image of a user to be verified is obtained; the target identity image is input directly into a predetermined convolutional neural network model; target image features of the target identity image are extracted by the convolutional neural network model; an image feature database corresponding to the convolutional neural network model and containing at least one set of image features is obtained; and the target image features are verified based on the image feature database. By inputting the user's identity image directly, this solution avoids the complex image preprocessing, such as image segmentation, noise handling, image enhancement, binarization, and image thinning, that the target identity image would otherwise require; it preserves the features of the user image unchanged; the trained convolutional neural network extracts the feature information of the user image directly, and verification proceeds from the extracted features, making identity verification more accurate, the verification flow simpler and faster, and raising the intelligence level of the terminal.
Description of the drawings
For a better understanding and illustration of some embodiments of the present application, embodiments are described below with reference to the accompanying drawings, in which the same reference numerals indicate corresponding parts.
Fig. 1 is a schematic flow chart of the identity verification method provided according to some embodiments of the present application;
Fig. 2 is a schematic flow chart of the method of pre-training the convolutional neural network model provided by some embodiments of the present application;
Fig. 3 is a structure diagram of the convolutional neural network model provided by some embodiments of the present application;
Fig. 4 is a schematic diagram of the convolution operation provided by some embodiments of the present application;
Fig. 5 is a schematic diagram of the pooling operation provided by some embodiments of the present application.
Detailed description of the embodiments
To make the purpose, technical solutions, and advantages of the application clearer, the application is further described below with reference to embodiments and the accompanying drawings. It should be understood that these descriptions are merely illustrative and are not intended to limit the scope of the application. In addition, in the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the concepts of the application.
The terms and phrases used in the following description and claims are not limited to their literal meanings but are merely used to understand the application clearly and consistently. Therefore, those skilled in the art will understand that the descriptions of the various embodiments of the application are provided for illustration only and not to limit the application as defined by the appended claims and their equivalents.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in some embodiments of the application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
It should be noted that the terms used in the embodiments of the present application are only for the purpose of describing specific embodiments and are not intended to limit the application. The singular forms "a", "an", "one", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. The expressions "first" and "second" modify the respective elements without regard to order or importance and are used only to distinguish one element from another, without limiting the respective elements. In addition, the technical features involved in the different embodiments of the application described below may be combined with one another as long as they do not conflict.
For clarity of description, the same step may be given different labels in different figures.
A terminal according to some embodiments of the application may be an electronic device, which may include one or a combination of a smartphone, a personal computer (PC, such as a tablet computer, desktop computer, notebook, netbook, or palmtop PDA), a mobile phone, an e-book reader, a portable media player (PMP), an audio/video player (MP3/MP4), a video camera, a virtual reality device (VR), a wearable device, and the like. According to some embodiments of the application, the wearable device may include an accessory type (such as a watch, ring, bracelet, glasses, or head-mounted device (HMD)), an integrated type (such as electronic clothing), a decorative type (such as a skin pad, tattoo, or implanted electronic device), or several combinations thereof. In some embodiments of the application, the electronic device may be flexible, is not limited to the above devices, or may be a combination of one or more of the above devices. In this application, the term "user" may denote a person who uses the electronic device or a device that uses the electronic device (such as an artificial-intelligence electronic device).
The application is described in detail below with reference to the drawings, in figure order.
Please refer to Fig. 1 (i.e., drawing 100), which is a schematic flow chart of the identity verification method provided by embodiments of the present application;
As shown in Fig. 1, embodiments of the present application provide an identity verification method applied to a terminal, the method including:
Step S101: Obtaining a target identity image of a user to be verified;
Step S102: Inputting the target identity image directly into a predetermined convolutional neural network model;
Step S103: Extracting target image features of the target identity image with the convolutional neural network model;
Step S104: Obtaining an image feature database corresponding to the convolutional neural network model and containing at least one set of image features;
Step S105: Verifying the target image features based on the image feature database.
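The five steps above can be sketched as a minimal verification pipeline. `TinyModel`, `extract`, and the tuple-hashing stand-in for CNN feature extraction are illustrative assumptions for the sketch, not the patent's actual model:

```python
from dataclasses import dataclass

@dataclass
class TinyModel:
    """Stand-in for the trained convolutional neural network model."""
    def extract(self, image):
        # A real model would run conv/pool/FC layers; here we just freeze the pixels.
        return tuple(image)

def verify_identity(target_image, model, feature_db):
    # S101: target_image is the captured identity image of the user to verify.
    features = model.extract(target_image)   # S102-S103: direct input, CNN feature extraction
    # S104-S105: look up the model's feature database and search for a match.
    return features in feature_db

model = TinyModel()
db = {model.extract([1, 2, 3])}                  # feature of an enrolled user
print(verify_identity([1, 2, 3], model, db))     # True  -> verification passes
print(verify_identity([9, 9, 9], model, db))     # False -> verification fails
```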
It should be noted that the convolutional neural network (CNN) is a kind of artificial neural network and has become a research hotspot in speech analysis and image recognition. Its weight-sharing network structure makes it more similar to a biological neural network, reducing the complexity of the network model and the number of weights. This advantage is most apparent when the input of the network is a multidimensional image: the image can serve directly as the input of the network, avoiding the complex feature extraction and data reconstruction processes of traditional recognition algorithms. The convolutional neural network is a multilayer perceptron specially designed to recognize two-dimensional shapes; this network structure is highly invariant to translation, tilt, and other forms of deformation. This application designs its own corresponding convolutional neural network models as needed, inputs the target identity image directly into the predetermined convolutional neural network model for training, obtains the feature information of the target identity image directly after the model training is completed, and then performs identity recognition.
As an optional implementation, before obtaining the target identity image of the user to be verified, the method further includes: pre-training the convolutional neural network model;
Fig. 2 (i.e., drawing 200) is a schematic flow chart of the method of pre-training the convolutional neural network model provided by some embodiments of the present application;
As shown in Fig. 2, the method of pre-training the convolutional neural network model includes:
Step S201: Obtaining an identity information entry instruction;
Step S202: Collecting an identity image of the user based on the identity information entry instruction;
Step S203: Building the convolutional neural network model;
Step S204: Inputting the identity image into the built convolutional neural network model;
Step S205: Training the convolutional neural network model based on the identity image;
Step S206: Saving the trained convolutional neural network model.
As an optional implementation, before building the convolutional neural network model, the method further includes:
Identifying the identity image;
Determining the image type of the identity image;
Determining, according to the image type, the type of convolutional neural network model to be built;
wherein different image types correspond to different types of convolutional neural network models.
It should be noted that because the feature points of fingerprint images, face images, and iris images differ, different neural network models need to be established for different identity images.
As an optional implementation, the method of building the convolutional neural network model includes:
Configuring an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer to form the convolutional neural network model;
wherein the convolutional neural network model contains at least one each of convolutional, pooling, and fully connected layers.
As an optional implementation, the convolutional layers include a first convolutional layer Conv1 and a second convolutional layer Conv2;
the pooling layers include a first pooling layer Pool1 and a second pooling layer Pool2;
the fully connected layers include a first fully connected layer FCN1 and a second fully connected layer FCN2;
the convolutional layers alternate with the pooling layers;
the fully connected layers are placed after the second pooling layer Pool2;
and the output layer is placed after the fully connected layers. Please refer to Fig. 3, which is a structure diagram of the convolutional neural network model provided by some embodiments of the present application. At the far left is the input identity image: the collected identity image can serve directly as the input layer of the model, avoiding the series of complex operations, such as image preprocessing, required in conventional models. It is followed by a convolutional layer (Conv1), a pooling layer (Pool1), then a convolutional layer (Conv2), a pooling layer (Pool2), fully connected layers (FCN1, FCN2), and an output layer (Output). The convolutional-layer/pooling-layer combination can occur multiple times in the hidden layers, and it can also be flexibly designed as combinations such as convolutional layer + convolutional layer, or convolutional layer + convolutional layer + pooling layer, according to actual demand; this invention uses an architecture design of two convolutional layers, two pooling layers, and two fully connected layers.
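The Conv1-Pool1-Conv2-Pool2-FCN1-FCN2-Output stack can be sketched as a single forward pass in NumPy. All shapes, kernel sizes, and the random weights below are illustrative assumptions; the patent fixes the layer order but not the dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, w):                      # valid 2-D convolution, single channel
    m, n = w.shape
    return np.array([[np.sum(x[i:i + m, j:j + n] * w)
                      for j in range(x.shape[1] - n + 1)]
                     for i in range(x.shape[0] - m + 1)])

def avg_pool(x, k=2):                # non-overlapping average pooling
    r, c = x.shape[0] // k, x.shape[1] // k
    return x[:r * k, :c * k].reshape(r, k, c, k).mean(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x     = rng.random((12, 12))         # input identity image (illustrative size)
conv1 = rng.random((3, 3))           # first convolution kernel
conv2 = rng.random((3, 3))           # second convolution kernel

c1   = conv(x, conv1)                # Conv1 -> first convolution matrix C1 (10x10)
avg1 = avg_pool(c1)                  # Pool1 -> first pooled matrix AVG1 (5x5)
c2   = conv(avg1, conv2)             # Conv2 -> second convolution matrix C2 (3x3)
avg2 = avg_pool(c2)                  # Pool2 -> second pooled matrix AVG2 (1x1)

flat = avg2.ravel()
fcn1 = rng.random((4, flat.size))    # FCN1: first feature extraction
fcn2 = rng.random((2, 4))            # FCN2: second feature extraction
out  = softmax(fcn2 @ (fcn1 @ flat)) # output layer with a Softmax activation
print(out.shape)                     # (2,)
```

The output is a probability vector over classes, which is what the Softmax classification step in the training method produces.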
As an optional implementation, the method of training the built convolutional neural network model based on at least one set of identity images includes:
Inputting the identity image into the first convolutional layer Conv1 through the input layer;
Performing a convolution operation on the identity image with the first convolution kernel to obtain a first convolution matrix C1;
Inputting the first convolution matrix C1 into the first pooling layer Pool1;
Performing a pooling operation on the first convolution matrix C1 with the first pooling layer Pool1 to obtain a first pooled matrix AVG1;
Inputting the first pooled matrix AVG1 into the second convolutional layer Conv2;
Performing a convolution operation on the first pooled matrix AVG1 with the second convolution kernel to obtain a second convolution matrix C2;
Inputting the second convolution matrix C2 into the second pooling layer Pool2;
Performing a pooling operation on the second convolution matrix C2 with the second pooling unit to obtain a second pooled matrix AVG2;
Inputting the second pooled matrix AVG2 into the first fully connected layer FCN1 for a first extraction to obtain first classification feature information;
Inputting the first classification feature information into the second fully connected layer FCN2 for a second extraction to obtain second classification feature information;
Classifying the second classification feature information with a Softmax activation function to obtain the image features corresponding to the identity image, completing the training of the convolutional neural network model.
The convolution operation of the first convolutional layer Conv1 is illustrated here. The convolution operation is, in fact, the multiplication of the values at corresponding positions of different sub-matrices of the input fingerprint image and the convolution kernel matrix, followed by accumulation and summation;
Please refer to Fig. 4, which is a schematic diagram of the convolution operation provided by some embodiments of the present application. As shown in Fig. 4, Input is the matrix of the input user image; W is the convolution kernel; Output is the output. Suppose the input identity image is a 3x3 matrix, the first convolution kernel of the first convolutional layer Conv1 is a 2x2 matrix, and the moving step of the first convolution kernel is one unit. The convolution operation then proceeds as follows: slide the first convolution kernel over the 2x2 region in the upper-left corner of the fingerprint image and convolve that region with the first convolution kernel, yielding the value of c(0,0) as a*p+b*q+d*r+e*s; then slide the first convolution kernel one unit to the right, so the matrix formed by the four values (b, c, e, f) is convolved with the first convolution kernel, yielding the output matrix entry c(0,1) as b*p+c*q+e*r+f*s; similarly, the values c(1,0) and c(1,1) of the output matrix C are obtained, finally yielding the first convolution matrix C1.
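The symbolic walkthrough above can be checked numerically. The concrete numbers below are arbitrary stand-ins for the symbols a..i and p..s:

```python
import numpy as np

# Image [[a,b,c],[d,e,f],[g,h,i]] and kernel [[p,q],[r,s]], stride 1:
img  = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]], dtype=float)
kern = np.array([[1, 0],
                 [0, 1]], dtype=float)   # p=1, q=0, r=0, s=1

c1 = np.array([[np.sum(img[i:i + 2, j:j + 2] * kern) for j in range(2)]
               for i in range(2)])

# c(0,0) = a*p + b*q + d*r + e*s = 1*1 + 2*0 + 4*0 + 5*1 = 6
# c(0,1) = b*p + c*q + e*r + f*s = 2*1 + 3*0 + 5*0 + 6*1 = 8
# and likewise c(1,0) = 12, c(1,1) = 14, giving C1 = [[6, 8], [12, 14]].
print(c1)
```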
The pooling operation based on the first pooling layer Pool1 is illustrated here. The pooling operation of a convolutional neural network actually compresses the matrix after convolution. There are two standard pooling modes: one takes the maximum value (Max) of a pooling region, the other takes the average (Avg) of a pooling region; this invention uses Avg for the pooling-region values.
Please refer to Fig. 5, which is a schematic diagram of the pooling operation provided by some embodiments of the present application;
As shown in Fig. 5, suppose the first convolution matrix C1 obtained from the user image by the convolution operation of the first convolutional layer Conv1 is a 4x4 matrix, the pooling unit of the first pooling layer Pool1 is 2x2, and the moving step of the pooling unit is 2 units. The pooling operation of the first pooling layer Pool1 is then as follows:
The 2x2 region in the upper-left corner of the first convolution matrix C1 is pooled, yielding the average Avg(A1, A2, A3, A4) as the value of the first element after pooling; the pooling unit is then slid 2 units along the x-axis, so the average Avg(B1, B2, B3, B4) of the four values (B1, B2, B3, B4) is the value of the second element after pooling. Continuing in this way yields all the values of the pooled matrix, i.e., the first pooled matrix.
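The 4x4 pooling example above can be reproduced in NumPy; the concrete values chosen for C1 are illustrative:

```python
import numpy as np

# 4x4 first convolution matrix C1, 2x2 pooling unit, stride 2, average pooling.
C1 = np.array([[ 1,  3,  2,  4],
               [ 5,  7,  6,  8],
               [ 9, 11, 10, 12],
               [13, 15, 14, 16]], dtype=float)

# Reshape into 2x2 blocks and average within each block.
AVG1 = C1.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Avg(A1..A4) = mean(1, 3, 5, 7) = 4; Avg(B1..B4) = mean(2, 4, 6, 8) = 5;
# the remaining two regions average to 12 and 13, so AVG1 = [[4, 5], [12, 13]].
print(AVG1)
```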
Similarly, the convolution operation of the second convolutional layer Conv2 uses a flow similar to the above; the only difference is that the convolution operation is performed on the first pooling matrix AVG1 obtained by the first pooling layer Pool1. The basic principle of the operation is the same as that of the convolution operation of the first convolutional layer Conv1.
Similarly, the pooling operation of the second pooling layer Pool2 uses a flow similar to the above; the only difference is that the pooling operation is performed on the convolution matrix obtained by the convolution of the second convolutional layer Conv2. The basic principle of the operation is the same as that of the pooling operation of the first pooling layer Pool1.
After the convolution and pooling operations of the two-layer convolutional neural network, the image feature information is extracted using the two fully-connected layers FCN1 and FCN2, and the image feature information is updated and saved in the network structure. When performing identity recognition, the trained convolutional neural network model is called, the user image to be recognized is input into the deep-learning convolutional neural network model, and the output layer extracts the image features to be verified using the Softmax excitation function; the image features are matched against the fingerprint information in the fingerprint database for verification. If the feature information is identical, the recognition succeeds; otherwise, the recognition fails.
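The Softmax excitation function named above for the output layer can be sketched as a standard implementation; this is the generic formulation, not code from the application:

```python
import math

def softmax(logits):
    """Softmax over the output layer's logits; subtracting the
    maximum keeps exp() numerically stable."""
    mx = max(logits)
    exps = [math.exp(x - mx) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

scores = softmax([2.0, 1.0, 0.1])
# the scores form a probability distribution: they sum to 1,
# and the largest logit receives the largest share
```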
As an optional implementation manner, when building the convolutional neural network model, the method further includes:
building an image feature database corresponding to the convolutional neural network model;
after completing the training of the convolutional neural network model, the method further includes:
entering the image features corresponding to the identity image into the image feature database corresponding to the convolutional neural network model.
As an optional implementation manner, for each convolutional layer, the method of the convolution operation includes: calculating the convolution operation based on a predetermined formula;
the predetermined formula is: c(i, j) = Σ_m Σ_n Input(i+m, j+n) × W(m, n);
where Input is the data input to the convolutional layer, W is the convolution kernel of the convolutional layer, m and n respectively range over the length and width of the convolution kernel, and c(i, j) is the operation result of the convolution operation; all the operation results together form the convolution matrix of the convolutional layer;
for each pooling layer, the method of the pooling operation includes: determining pooling regions according to the pooling unit of the pooling layer;
calculating the average value of each pooling region separately;
the average values of all pooling regions constitute the pooling matrix of the pooling layer.
As an optional implementation manner, the method of verifying the target image features based on the image feature database includes:
traversing the image features in the image feature database;
judging whether an image feature matching the target image features exists in the image feature database;
if so, judging that the user passes the identity verification;
if not, judging that the user cannot pass the identity verification.
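The traversal-and-match verification above can be sketched as follows. The application does not define what "matching" means, so an element-wise comparison within a tolerance is assumed here, and the feature vectors are illustrative:

```python
def verify(target_feature, feature_db, tol=1e-6):
    """Traverse the image-feature database and report whether any
    stored feature matches the target feature (identity verified)."""
    for stored in feature_db:
        if len(stored) == len(target_feature) and all(
                abs(s - t) <= tol for s, t in zip(stored, target_feature)):
            return True   # user passes identity verification
    return False          # user cannot pass identity verification

# Hypothetical database of two stored feature vectors:
db = [[0.1, 0.9, 0.3], [0.7, 0.2, 0.5]]
```

A matching vector is accepted and any other vector is rejected, mirroring the "if present / if not present" branches of the method.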
Before directly inputting the target identity image into the predetermined convolutional neural network model, the method further includes:
identifying the target identity image;
determining the image type of the target identity image;
inputting the target identity image into the corresponding predetermined convolutional neural network model;
wherein the image type includes: fingerprint type, iris type, and face type.
The present application mainly proposes an identity verification recognition method based on a deep-learning convolutional neural network architecture. A convolutional neural network is a special-case architecture of deep learning, a multilayer neural network specifically designed to process two-dimensional data, with unique advantages in image recognition; it adopts a robust, multi-level hierarchical deep-learning network structure. When processing user images, the invariant features of the user image can be learned well without much preprocessing of the original user image; moreover, the shared weights of the convolution kernels, local receptive fields, and down-sampling greatly reduce the computation of the convolutional neural network, improve the training efficiency of the network, and increase the efficiency of image recognition. In contrast, traditional image recognition technology requires complex image preprocessing of the original image before image feature extraction, such as image segmentation, noise processing, image enhancement, binarization, and image thinning. The present application adopts a deep-learning convolutional neural network architecture; by setting up the convolutional neural network model, the image preprocessing process can be omitted, the image feature information is extracted directly in the trained convolutional neural network, and the last two layers of the model realize classification and recognition of the image features through the fully-connected layers.
According to another aspect of the application, embodiments of the application further provide a terminal device, comprising a memory configured to store data and instructions;
and a processor in communication with the memory, wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
obtaining a target identity image of a user to be verified;
directly inputting the target identity image into a predetermined convolutional neural network model;
extracting target image features of the target identity image through the convolutional neural network model;
obtaining an image feature database corresponding to the convolutional neural network model and comprising at least one set of image features;
verifying the target image features based on the image feature database.
As an alternative embodiment, before obtaining the target identity image of the user to be verified, the processor is configured to perform the following operation:
pre-training the convolutional neural network model;
when pre-training the convolutional neural network model, the processor is configured to perform the following operations:
obtaining an identity information entry instruction;
collecting the identity image of the user based on the identity information entry instruction;
building the convolutional neural network model;
inputting the identity image into the built convolutional neural network model;
training the convolutional neural network model based on the identity image;
saving the trained convolutional neural network model.
As an alternative embodiment, before building the convolutional neural network model, the processor is configured to perform the following operations:
identifying the identity image;
determining the image type of the identity image;
determining, according to the image type, the type of convolutional neural network model to be built;
wherein different image types correspond to different types of convolutional neural network models.
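The type-to-model correspondence above can be sketched as a small dispatch table. The registry keys and model names are hypothetical, since the application only states that each image type (fingerprint, iris, face) corresponds to its own convolutional neural network model:

```python
# Hypothetical registry mapping each supported image type to the name
# of its predetermined CNN model (placeholders, not the patent's code).
MODEL_REGISTRY = {
    "fingerprint": "fingerprint_cnn",
    "iris": "iris_cnn",
    "face": "face_cnn",
}

def select_model(image_type):
    """Pick the predetermined CNN model for the detected image type."""
    try:
        return MODEL_REGISTRY[image_type]
    except KeyError:
        raise ValueError("unsupported image type: %s" % image_type)
```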
As an alternative embodiment, when building the convolutional neural network model, the processor is configured to perform the following operation:
configuring an input layer, convolutional layers, pooling layers, fully-connected layers, and an output layer to form the convolutional neural network model;
wherein the convolutional layers, pooling layers, and fully-connected layers in the convolutional neural network model each number at least one.
As an alternative embodiment, the convolutional layers include a first convolutional layer Conv1 and a second convolutional layer Conv2;
the pooling layers include a first pooling layer Pool1 and a second pooling layer Pool2;
the fully-connected layers include a first fully-connected layer FCN1 and a second fully-connected layer FCN2;
the convolutional layers and the pooling layers are arranged alternately in sequence;
the fully-connected layers are arranged after the second pooling layer Pool2;
the output layer is arranged after the fully-connected layers.
As an alternative embodiment, when training the built convolutional neural network model based on at least one set of identity images, the processor is configured to perform the following operations:
inputting the identity image into the first convolutional layer Conv1 through the input layer;
performing a convolution operation on the identity image based on the first convolution kernel to obtain a first convolution matrix C1;
inputting the first convolution matrix C1 into the first pooling layer Pool1;
performing a pooling operation on the first convolution matrix C1 based on the first pooling unit to obtain a first pooling matrix AVG1;
inputting the first pooling matrix AVG1 into the second convolutional layer Conv2;
performing a convolution operation on the first pooling matrix AVG1 based on the second convolution kernel to obtain a second convolution matrix C2;
inputting the second convolution matrix C2 into the second pooling layer Pool2;
performing a pooling operation on the second convolution matrix C2 based on the second pooling unit to obtain a second pooling matrix AVG2;
inputting the second pooling matrix AVG2 into the first fully-connected layer FCN1 for a first extraction to obtain first classification feature information;
inputting the first classification feature information into the second fully-connected layer FCN2 for a second extraction to obtain second classification feature information;
classifying the extracted second classification feature information using the Softmax activation function, as the image features corresponding to the identity image, and completing the training of the convolutional neural network model.
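The forward pass listed above (Conv1 → Pool1 → Conv2 → Pool2 → FCN1 → FCN2 → Softmax) can be sketched end-to-end. The image, kernels, and fully-connected weights below are toy assumptions chosen only to make the shapes line up; they are not parameters from the application:

```python
import math

def conv2d(x, k):
    """Valid-mode convolution with stride 1 (as in Conv1/Conv2)."""
    kh, kw = len(k), len(k[0])
    return [[sum(x[i + m][j + n] * k[m][n]
                 for m in range(kh) for n in range(kw))
             for j in range(len(x[0]) - kw + 1)]
            for i in range(len(x) - kh + 1)]

def avg_pool(x, s=2):
    """Average pooling with an s x s pooling unit and stride s."""
    return [[sum(x[i * s + m][j * s + n]
                 for m in range(s) for n in range(s)) / (s * s)
             for j in range(len(x[0]) // s)]
            for i in range(len(x) // s)]

def dense(vec, weights):
    """A fully-connected layer: one dot product per output neuron."""
    return [sum(v * w for v, w in zip(vec, row)) for row in weights]

def softmax(z):
    mx = max(z)
    e = [math.exp(v - mx) for v in z]
    total = sum(e)
    return [v / total for v in e]

# Toy 10x10 "identity image" and tiny fixed parameters:
img = [[(i * j) % 5 / 4.0 for j in range(10)] for i in range(10)]
k1 = [[0.5, -0.5], [0.25, 0.25]]    # first convolution kernel
k2 = [[1.0, 0.0], [0.0, -1.0]]      # second convolution kernel

p1 = avg_pool(conv2d(img, k1))      # Conv1 -> Pool1
p2 = avg_pool(conv2d(p1, k2))       # Conv2 -> Pool2
flat = [v for row in p2 for v in row]           # flatten for FC layers
f1 = dense(flat, [[1.0], [-1.0]])               # FCN1 (first extraction)
probs = softmax(dense(f1, [[0.5, 0.5], [-0.5, 0.5]]))  # FCN2 + Softmax
# probs is a 2-class distribution over the output layer
```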
As an alternative embodiment, when building the convolutional neural network model, the processor is configured to perform the following operations:
building an image feature database corresponding to the convolutional neural network model;
after completing the training of the convolutional neural network model, entering the image features corresponding to the identity image into the image feature database corresponding to the convolutional neural network model.
As an alternative embodiment, for each convolutional layer, when performing the convolution operation, the processor is configured to perform the following operation:
calculating the convolution operation based on a predetermined formula;
the predetermined formula is: c(i, j) = Σ_m Σ_n Input(i+m, j+n) × W(m, n);
where Input is the data input to the convolutional layer, W is the convolution kernel of the convolutional layer, m and n respectively range over the length and width of the convolution kernel, and c(i, j) is the operation result of the convolution operation; all the operation results together form the convolution matrix of the convolutional layer;
for each pooling layer, the pooling operation includes: determining pooling regions according to the pooling unit of the pooling layer;
calculating the average value of each pooling region separately;
the average values of all pooling regions constitute the pooling matrix of the pooling layer.
As an alternative embodiment, when verifying the target image features based on the image feature database, the processor is configured to perform the following operations:
traversing the image features in the image feature database;
judging whether an image feature matching the target image features exists in the image feature database;
if so, judging that the user passes the identity verification;
if not, judging that the user cannot pass the identity verification;
before directly inputting the target identity image into the predetermined convolutional neural network model, the processor is further configured to perform the following operations:
identifying the target identity image;
determining the image type of the target identity image;
inputting the target identity image into the corresponding predetermined convolutional neural network model;
wherein the image type includes: fingerprint type, iris type, and face type.
The present application is intended to protect an identity verification method and terminal. The above technical solution of the application obtains a target identity image of a user to be verified; directly inputs the target identity image into a predetermined convolutional neural network model; extracts target image features of the target identity image through the convolutional neural network model; obtains an image feature database corresponding to the convolutional neural network model and comprising at least one set of image features; and verifies the target image features based on the image feature database. In this technical solution, the user image is directly input as the target identity image; complex image preprocessing of the target identity image, such as image segmentation, noise processing, image enhancement, binarization, and image thinning, is avoided; the feature invariance of the user image is preserved, the trained convolutional neural network directly extracts the feature information of the user image, and verification is then performed according to the extracted feature information, making identity verification more accurate and the verification flow simpler and faster, and improving the intelligence level of the terminal.
Experimental verification shows that the present invention achieves good fingerprint recognition precision and can learn the invariant features of the fingerprint image well without much preprocessing, omitting complex processes such as fingerprint image segmentation, binarization, and image enhancement. By building the deep-learning convolutional neural network architecture, the feature information of the fingerprint image is extracted directly; moreover, the shared weights of the convolution kernels, local receptive fields, and down-sampling greatly reduce the computation of the convolutional neural network, improve the training efficiency of the network, and increase the efficiency of fingerprint recognition.
It should be noted that the above embodiments are intended merely as examples; the application is not limited to such examples and may be varied in many ways.
It should be noted that, in this specification, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
Finally, it should be noted that the series of processes above includes not only processes executed chronologically in the order described here, but also processes executed in parallel or individually rather than chronologically.
Those of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium, and when executed, the program may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above discloses only some preferred embodiments of the application, which cannot be used to limit the scope of the rights of the application; those skilled in the art can understand all or part of the processes for realizing the above embodiments, and equivalent variations made according to the claims of the application still fall within the scope covered by the invention.

Claims (10)

1. An identity verification method, applied to a terminal, the method comprising:
obtaining a target identity image of a user to be verified;
directly inputting the target identity image into a predetermined convolutional neural network model;
extracting target image features of the target identity image through the convolutional neural network model;
obtaining an image feature database corresponding to the convolutional neural network model and comprising at least one set of image features;
verifying the target image features based on the image feature database.
2. The method according to claim 1, wherein, before obtaining the target identity image of the user to be verified, the method further comprises: pre-training the convolutional neural network model;
the method of pre-training the convolutional neural network model comprises:
obtaining an identity information entry instruction;
collecting the identity image of the user based on the identity information entry instruction;
building the convolutional neural network model;
inputting the identity image into the built convolutional neural network model;
training the convolutional neural network model based on the identity image;
saving the trained convolutional neural network model.
3. The method according to claim 2, wherein, before building the convolutional neural network model, the method further comprises:
identifying the identity image;
determining the image type of the identity image;
determining, according to the image type, the type of convolutional neural network model to be built;
wherein different image types correspond to different types of convolutional neural network models.
4. The method according to claim 2, wherein the method of building the convolutional neural network model comprises:
configuring an input layer, convolutional layers, pooling layers, fully-connected layers, and an output layer to form the convolutional neural network model;
wherein the convolutional layers, pooling layers, and fully-connected layers in the convolutional neural network model each number at least one.
5. The method according to claim 4, wherein the convolutional layers include a first convolutional layer Conv1 and a second convolutional layer Conv2;
the pooling layers include a first pooling layer Pool1 and a second pooling layer Pool2;
the fully-connected layers include a first fully-connected layer FCN1 and a second fully-connected layer FCN2;
the convolutional layers and the pooling layers are arranged alternately in sequence;
the fully-connected layers are arranged after the second pooling layer Pool2;
the output layer is arranged after the fully-connected layers.
6. The method according to claim 5, wherein the method of training the built convolutional neural network model based on at least one set of identity images comprises:
inputting the identity image into the first convolutional layer Conv1 through the input layer;
performing a convolution operation on the identity image based on the first convolution kernel to obtain a first convolution matrix C1;
inputting the first convolution matrix C1 into the first pooling layer Pool1;
performing a pooling operation on the first convolution matrix C1 based on the first pooling unit to obtain a first pooling matrix AVG1;
inputting the first pooling matrix AVG1 into the second convolutional layer Conv2;
performing a convolution operation on the first pooling matrix AVG1 based on the second convolution kernel to obtain a second convolution matrix C2;
inputting the second convolution matrix C2 into the second pooling layer Pool2;
performing a pooling operation on the second convolution matrix C2 based on the second pooling unit to obtain a second pooling matrix AVG2;
inputting the second pooling matrix AVG2 into the first fully-connected layer FCN1 for a first extraction to obtain first classification feature information;
inputting the first classification feature information into the second fully-connected layer FCN2 for a second extraction to obtain second classification feature information;
classifying the extracted second classification feature information using the Softmax activation function, as the image features corresponding to the identity image, and completing the training of the convolutional neural network model.
7. The method according to claim 6, wherein, when building the convolutional neural network model, the method further comprises:
building an image feature database corresponding to the convolutional neural network model;
after completing the training of the convolutional neural network model, the method further comprises:
entering the image features corresponding to the identity image into the image feature database corresponding to the convolutional neural network model.
8. The method according to claim 7, wherein, for each convolutional layer, the method of the convolution operation comprises: calculating the convolution operation based on a predetermined formula;
the predetermined formula is: c(i, j) = Σ_m Σ_n Input(i+m, j+n) × W(m, n);
where Input is the data input to the convolutional layer, W is the convolution kernel of the convolutional layer, m and n respectively range over the length and width of the convolution kernel, and c(i, j) is the operation result of the convolution operation; all the operation results together form the convolution matrix of the convolutional layer;
for each pooling layer, the method of the pooling operation comprises: determining pooling regions according to the pooling unit of the pooling layer;
calculating the average value of each pooling region separately;
the average values of all pooling regions constitute the pooling matrix of the pooling layer.
9. The method according to claim 3, wherein the method of verifying the target image features based on the image feature database comprises:
traversing the image features in the image feature database;
judging whether an image feature matching the target image features exists in the image feature database;
if so, judging that the user passes the identity verification;
if not, judging that the user cannot pass the identity verification;
before directly inputting the target identity image into the predetermined convolutional neural network model, the method further comprises:
identifying the target identity image;
determining the image type of the target identity image;
inputting the target identity image into the corresponding predetermined convolutional neural network model;
wherein the image type includes: fingerprint type, iris type, and face type.
10. A terminal device, comprising: a memory configured to store data and instructions; and
a processor in communication with the memory, wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
obtaining a target identity image of a user to be verified;
directly inputting the target identity image into a predetermined convolutional neural network model;
extracting target image features of the target identity image through the convolutional neural network model;
obtaining an image feature database corresponding to the convolutional neural network model and comprising at least one set of image features;
verifying the target image features based on the image feature database.
CN201810402939.9A 2018-04-28 2018-04-28 A kind of auth method and terminal Pending CN108664909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810402939.9A CN108664909A (en) 2018-04-28 2018-04-28 A kind of auth method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810402939.9A CN108664909A (en) 2018-04-28 2018-04-28 A kind of auth method and terminal

Publications (1)

Publication Number Publication Date
CN108664909A true CN108664909A (en) 2018-10-16

Family

ID=63781492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810402939.9A Pending CN108664909A (en) 2018-04-28 2018-04-28 A kind of auth method and terminal

Country Status (1)

Country Link
CN (1) CN108664909A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111984884A (en) * 2020-08-18 2020-11-24 深圳市维度统计咨询股份有限公司 Non-contact data acquisition method and device for large database
CN112000819A (en) * 2019-05-27 2020-11-27 北京达佳互联信息技术有限公司 Multimedia resource recommendation method and device, electronic equipment and storage medium
CN112507312A (en) * 2020-12-08 2021-03-16 电子科技大学 Digital fingerprint-based verification and tracking method in deep learning system
CN112883355A (en) * 2021-03-24 2021-06-01 南京邮电大学 Non-contact user identity authentication method based on RFID and convolutional neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778702A (en) * 2015-04-15 2015-07-15 中国科学院自动化研究所 Image stego-detection method on basis of deep learning
CN106022317A (en) * 2016-06-27 2016-10-12 北京小米移动软件有限公司 Face identification method and apparatus
CN106778607A (en) * 2016-12-15 2017-05-31 国政通科技股份有限公司 A kind of people based on recognition of face and identity card homogeneity authentication device and method
CN107295362A (en) * 2017-08-10 2017-10-24 上海六界信息技术有限公司 Live content screening technique, device, equipment and storage medium based on image
CN107545243A (en) * 2017-08-07 2018-01-05 南京信息工程大学 Yellow race's face identification method based on depth convolution model
CN107800572A (en) * 2017-10-27 2018-03-13 福州瑞芯微电子股份有限公司 A kind of method and apparatus based on neutral net updating apparatus
CN107958471A (en) * 2017-10-30 2018-04-24 深圳先进技术研究院 CT imaging methods, device, CT equipment and storage medium based on lack sampling data


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000819A (en) * 2019-05-27 2020-11-27 北京达佳互联信息技术有限公司 Multimedia resource recommendation method and device, electronic equipment and storage medium
CN112000819B (en) * 2019-05-27 2023-07-11 北京达佳互联信息技术有限公司 Multimedia resource recommendation method and device, electronic equipment and storage medium
CN111984884A (en) * 2020-08-18 2020-11-24 深圳市维度统计咨询股份有限公司 Non-contact data acquisition method and device for large database
CN112507312A (en) * 2020-12-08 2021-03-16 电子科技大学 Digital fingerprint-based verification and tracking method in deep learning system
CN112507312B (en) * 2020-12-08 2022-10-14 电子科技大学 Digital fingerprint-based verification and tracking method in deep learning system
CN112883355A (en) * 2021-03-24 2021-06-01 南京邮电大学 Non-contact user identity authentication method based on RFID and convolutional neural network
CN112883355B (en) * 2021-03-24 2023-05-02 南京邮电大学 Non-contact user identity authentication method based on RFID and convolutional neural network


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181016

WD01 Invention patent application deemed withdrawn after publication