CN109711361A - Intelligent cockpit embedded fingerprint feature extracting method based on deep learning - Google Patents
- Publication number: CN109711361A (application CN201811630650.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an intelligent cockpit embedded fingerprint feature extraction method based on deep learning, mainly comprising the following steps: acquire the user's fingerprint; extract the fingerprint feature CNN_Features using a CNN network and obtain the local feature synthesis ELF16 in a hand-crafted manner; combine the features through a fully connected Fusion Layer; perform weighted activation fusion and multilayer network iteration; compute the cross entropy and judge whether it is minimal; and finally obtain the fingerprint fusion feature. The intelligent cockpit proposed by the invention performs fingerprint identity recognition, so that people of different identities have different operating permissions; compared with vehicle operation in a traditional intelligent cockpit, it offers higher safety and higher target discrimination.
Description
Technical field
The present invention relates to the field of automotive technology, and in particular to an intelligent cockpit embedded fingerprint feature extraction method based on deep learning.
Background technique
Current intelligent cockpits lack the ability to distinguish identities by fingerprint recognition. Most offer entertainment systems with multi-screen interaction, virtual instrument panels, and body central control, but none partitions operating permissions among people of different identities, which is critical to the safe driving of a vehicle.
Some of the world's high-end production cars already offer fingerprint door unlocking and one-key start, but their fingerprint feature extraction methods are all based on traditional coded lookup tables or on direct geometric computation, over pixels, of fingerprint ridge endpoints and bifurcation points; none extracts fingerprint features with deep learning.
Summary of the invention
The purpose of the present invention is to provide an intelligent cockpit embedded fingerprint feature extraction method based on deep learning that solves the problems raised in the background art above.
To achieve the above object, the invention provides the following technical scheme:
The invention discloses an intelligent cockpit embedded fingerprint feature extraction method based on deep learning, comprising the following steps:
(1) acquire the user's fingerprint and obtain the pixel matrix of the fingerprint image data;
(2) smooth the fingerprint image data to remove noise;
(3) extract the fingerprint feature CNN_Features from the fingerprint image data using a CNN network;
(4) obtain the local feature synthesis ELF16 from the fingerprint image data in a hand-crafted (Hand-crafted) manner, the local feature synthesis ELF16 comprising gray-level histogram features and texture features;
(5) combine the fingerprint feature CNN_Features and the local feature synthesis ELF16 through a fully connected Fusion Layer; the fingerprint fusion feature after combination is x:
x = [ratio1*ELF16, ratio2*CNN_Features];
where:
ratio1 --- gradient-descent coefficient of the local feature synthesis ELF16, with an initial value of 0.4~0.6;
ratio2 --- gradient-descent coefficient of the fingerprint feature CNN_Features, with an initial value of 0.9~1.2;
(6) weighted activation fusion: Fusion = h(W_Fusion * x + b_Fusion);
where:
h --- activation function;
W_Fusion --- weight of the feature fusion algorithm, with an initial value of 0.8~1.0;
b_Fusion --- bias of the feature fusion algorithm, with an initial value of 0.08~0.1;
(7) multilayer network iterative computation:
W(l)' = W(l) - ΔW(l), with ΔW(l) = α[(1/m)∇W(l)J + λW(l)];
b(l)' = b(l) - Δb(l), with Δb(l) = α(1/m)∇b(l)J;
where:
W(l) --- weight of layer l;
ΔW(l) --- change of the layer-l weight;
W(l)' --- weight of layer l after the new iteration;
b(l) --- bias of layer l;
Δb(l) --- change of the layer-l bias;
b(l)' --- bias of layer l after the new iteration;
α --- a suitably chosen learning rate;
m --- number of samples;
λ --- hyperparameter, with value > 0;
(8) compute the cross entropy J:
J = -Σ(j=1..n) 1{y=j} log p_j, with p_j = P(y=j|x;θ) = exp(θ_j^T x) / Σ(k=1..n) exp(θ_k^T x);
where:
J --- cross entropy;
x --- single input vector of the last layer;
j --- single output node of the last layer;
n --- number of output nodes;
θ_j --- model parameter of output node j;
θ_k --- model parameter of output node k;
p_k --- estimated probability computed from the data;
log p_k --- logarithm of the estimated probability;
y --- middle-network output node;
P(y=j|x;θ) --- the estimated probability, computed from the data, that the middle-network output node is j when the single input of the last layer is x and the output-node model parameter is θ;
(9) if the cross entropy J is not minimal, continue iterating from step (5) until the cross entropy J is minimal;
(10) finish and obtain the fingerprint fusion feature.
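As an illustration of the feature combination in step (5), the weighted concatenation can be sketched as follows. The vector widths (16 for ELF16, 4096 for CNN_Features), the random placeholder vectors, and the variable names are assumptions for the sketch only, not part of the claimed method.

```python
import numpy as np

# Step (5) sketch: fuse the hand-crafted ELF16 vector with the CNN feature
# vector through weighted concatenation x = [ratio1*ELF16, ratio2*CNN_Features].
rng = np.random.default_rng(0)
elf16 = rng.random(16)            # placeholder for the local feature synthesis ELF16
cnn_features = rng.random(4096)   # placeholder for CNN_Features (4096D Feature)

ratio1, ratio2 = 0.5, 1.05        # initial values suggested in the text

# weighted concatenation into one fused feature vector
x = np.concatenate([ratio1 * elf16, ratio2 * cnn_features])
```

The fused vector simply stacks the two scaled feature blocks, so downstream layers see both descriptor families in one feature space.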
Preferably, in step (3), the CNN network uses global feature convolution; a triplet sample <I_i^a, I_i^p, I_i^n> is constructed and the CNN network is trained with a stochastic gradient descent training algorithm, the feature space of the triplet sample being <f(I_i^a; w), f(I_i^p; w), f(I_i^n; w)>. The specific steps are as follows:
3.1 input: the training-sample fingerprint image data {I_i};
3.2 output: the weight network parameters {w};
3.3 while t < T, repeat operations 3.3.1~3.3.6:
3.3.1 t ← t + 1;
3.3.2 compute the feature space of the triplet sample by forward propagation through the network;
3.3.3 compute the partial derivatives of the triplet-sample feature space with respect to the weight network parameters by back-propagation;
3.3.4 compute, from the two distance formulas, the partial derivative of the within-class distance with respect to the weight network parameters and the partial derivative of the within-class/between-class distance difference with respect to the weight network parameters;
3.3.5 compute the partial derivative of the loss function with respect to the output weight network parameters, ∂L/∂w, from the three loss formulas;
3.3.6 update the weight network parameters.
where:
t --- the current moment;
t+1 --- the next moment;
t-1 --- the previous moment;
T --- the iteration end moment;
∂L/∂w --- partial derivative of the loss function with respect to the output weight network parameters;
I_i^a --- the original (anchor) sample;
I_i^p --- the positive sample;
I_i^n --- the negative sample;
i --- the sample index;
N --- the number of samples;
h1(I_i, w) --- hinge value of the activation function for the within-class/between-class distance difference;
h2(I_i, w) --- hinge value of the activation function for the within-class distance;
λ_t --- weight coefficient of the partial derivative of the loss function with respect to the weight network parameters, with value range 0 ≤ λ_t ≤ 1;
T1 --- limiting parameter of the within-class/between-class distance difference, with value range 0.5 ≤ T1 ≤ 5.5;
T2 --- limiting parameter of the within-class distance, with value range 0.5 ≤ T2 ≤ 1.6.
Preferably, the distance functions of the triplet sample construction include the within-class/between-class distance difference and the within-class distance, and satisfy the following constraints:
d_n(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 - ||f(I_i^a; w) - f(I_i^n; w)||^2 < -T1;
d_p(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 < T2;
where:
d_n(I_i, w) --- the within-class/between-class distance difference;
d_p(I_i, w) --- the within-class distance;
w --- the weight network parameters;
f(I_i^a; w) --- the feature space of the original sample;
f(I_i^p; w) --- the feature space of the positive sample;
f(I_i^n; w) --- the feature space of the negative sample.
Preferably, the total loss of the loss function is:
L(I, w) = (1/N) Σ(i=1..N) [max{d_n(I_i, w) + T1, 0} + β·max{d_p(I_i, w) - T2, 0}];
where:
L(I, w) --- the total loss function;
max --- the maximum function;
N --- the number of samples;
β --- the balance parameter.
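The margin-based loss above can be sketched numerically as follows. The hinge form, the batch shapes, and the parameter values (T1 = 3, T2 = 1, β = 0.5) are one plausible reading of the constraints, assumed for illustration since the original formula images are not reproduced in the text.

```python
import numpy as np

# Sketch of the total loss: a within-class/between-class difference term
# limited by T1 plus a within-class term limited by T2, balanced by beta.
def improved_triplet_loss(fa, fp, fn, T1=3.0, T2=1.0, beta=0.5):
    d_within = np.sum((fa - fp) ** 2, axis=1)    # within-class distance
    d_between = np.sum((fa - fn) ** 2, axis=1)   # between-class distance
    diff = d_within - d_between                  # should stay below -T1
    term1 = np.maximum(diff + T1, 0.0)           # difference-constraint hinge
    term2 = np.maximum(d_within - T2, 0.0)       # within-class-constraint hinge
    return np.mean(term1 + beta * term2)

fa = np.zeros((4, 8))             # anchor features
fp = np.zeros((4, 8)) + 0.1       # positives close to the anchor
fn = np.ones((4, 8)) * 3.0        # negatives far from the anchor
loss = improved_triplet_loss(fa, fp, fn)
```

Once both constraints hold, each hinge term clamps to zero, so well-separated triplets contribute nothing to the loss.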
Preferably, the global feature convolution generates a 4096-dimensional feature (4096D Feature).
Preferably, in step 3.3.6, λ_t takes the value 0.3 or 0.8.
Preferably, in step 3.3, T1 takes the value 3 and T2 takes the value 1.
Preferably, in step (5), the initial value of ratio1 is 0.5 and the initial value of ratio2 is 1.05.
Preferably, in step (6), the initial value of W_Fusion is 0.9.
Preferably, in step (6), the initial value of b_Fusion is 0.09.
Compared with the prior art, the beneficial effects of the present invention are: first, the proposed intelligent cockpit performs fingerprint identity recognition, and since people of different identities have different operating permissions, vehicle operation is safer than in a traditional intelligent cockpit; second, the fingerprint feature extraction method based on a Fusion Feature Net (FFN) achieves higher target discrimination than traditional fingerprint feature extraction methods.
Detailed description of the invention
Fig. 1 is a schematic diagram of the intelligent cockpit embedded fingerprint feature extraction method based on deep learning.
Specific embodiment
The invention discloses an intelligent cockpit embedded fingerprint feature extraction method based on deep learning; the fingerprint feature extraction based on a Fusion Feature Net (FFN) proceeds as follows:
(1) acquire the user's fingerprint and obtain the pixel matrix of the fingerprint image data;
(2) smooth the fingerprint image data to remove noise;
(3) extract the fingerprint feature CNN_Features from the fingerprint image data using a CNN network;
In one example of the invention, the CNN network uses global feature convolution and generates a 4096D Feature (a 4096-dimensional feature); a triplet sample <I_i^a, I_i^p, I_i^n> is constructed and the CNN network is trained with a stochastic gradient descent training algorithm, the feature space of the triplet sample being <f(I_i^a; w), f(I_i^p; w), f(I_i^n; w)>. The specific steps are as follows:
3.1 input: the training-sample fingerprint image data {I_i};
3.2 output: the weight network parameters {w};
3.3 while t < T, repeat operations 3.3.1~3.3.6:
3.3.1 t ← t + 1;
3.3.2 compute the feature space of the triplet sample by forward propagation through the network;
3.3.3 compute the partial derivatives of the triplet-sample feature space with respect to the weight network parameters by back-propagation;
3.3.4 compute, from the two distance formulas, the partial derivative of the within-class distance with respect to the weight network parameters and the partial derivative of the within-class/between-class distance difference with respect to the weight network parameters;
3.3.5 compute the partial derivative of the loss function with respect to the output weight network parameters, ∂L/∂w, from the three loss formulas;
3.3.6 update the weight network parameters.
where:
t --- the current moment;
t+1 --- the next moment;
t-1 --- the previous moment;
T --- the iteration end moment;
∂L/∂w --- partial derivative of the loss function with respect to the output weight network parameters;
I_i^a --- the original (anchor) sample;
I_i^p --- the positive sample;
I_i^n --- the negative sample;
i --- the sample index;
N --- the number of samples;
h1(I_i, w) --- hinge value of the activation function for the within-class/between-class distance difference;
h2(I_i, w) --- hinge value of the activation function for the within-class distance;
λ_t --- weight coefficient of the partial derivative of the loss function with respect to the weight network parameters, with value range 0 ≤ λ_t ≤ 1; in one example of the invention, λ_t takes the value 0.3 or 0.8, which better measures the predicted output of the CNN network;
T1 --- limiting parameter of the within-class/between-class distance difference, with value range 0.5 ≤ T1 ≤ 5.5; in one example of the invention, T1 takes the value 3;
T2 --- limiting parameter of the within-class distance, with value range 0.5 ≤ T2 ≤ 1.6; in one example of the invention, T2 takes the value 1.
In one example of the invention, the distance functions of the triplet sample construction include the within-class/between-class distance difference and the within-class distance, and satisfy the following constraints:
d_n(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 - ||f(I_i^a; w) - f(I_i^n; w)||^2 < -T1;
d_p(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 < T2;
where:
d_n(I_i, w) --- the within-class/between-class distance difference;
d_p(I_i, w) --- the within-class distance;
w --- the weight network parameters;
f(I_i^a; w) --- the feature space of the original sample;
f(I_i^p; w) --- the feature space of the positive sample;
f(I_i^n; w) --- the feature space of the negative sample.
In one example of the invention, the total loss of the loss function is:
L(I, w) = (1/N) Σ(i=1..N) [max{d_n(I_i, w) + T1, 0} + β·max{d_p(I_i, w) - T2, 0}];
where:
L(I, w) --- the total loss function;
max --- the maximum function;
N --- the number of samples;
β --- the balance parameter.
When the constraint condition is satisfied (d_n < -T1), the max term is constant, so it contributes nothing to the gradient computation of the stochastic gradient descent algorithm; otherwise, the distance value drives a correction along the negative gradient direction.
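A toy version of the training loop in steps 3.1~3.3.6 can be sketched as follows. To keep the sketch self-contained, a linear map f(I) = I·w stands in for the CNN and the partial derivatives are taken by finite differences instead of back-propagation; the data, λ_t weighting, and loop length are illustrative assumptions.

```python
import numpy as np

# Toy training loop: gradient descent on a hinge triplet loss over a
# stand-in linear feature extractor f(I) = I @ w.
rng = np.random.default_rng(1)
w = rng.random((6, 3))                        # weight network parameters {w}

def loss(w, Ia, Ip, In_, T1=3.0, T2=1.0, lam=0.3):
    fa, fp, fn = Ia @ w, Ip @ w, In_ @ w      # forward propagation (step 3.3.2)
    d_within = np.sum((fa - fp) ** 2)         # within-class distance
    d_between = np.sum((fa - fn) ** 2)        # between-class distance
    h1 = max(d_within - d_between + T1, 0.0)  # distance-difference hinge
    h2 = max(d_within - T2, 0.0)              # within-class hinge
    return lam * h1 + (1.0 - lam) * h2

Ia = rng.random((1, 6))                       # original (anchor) sample
Ip = Ia + 0.01 * rng.random((1, 6))           # positive sample, near the anchor
In_ = rng.random((1, 6))                      # negative sample
alpha, T, eps = 0.05, 50, 1e-6
initial = loss(w, Ia, Ip, In_)
for t in range(T):                            # while t < T (step 3.3)
    grad = np.zeros_like(w)
    base = loss(w, Ia, Ip, In_)
    for idx in np.ndindex(*w.shape):          # numerical partial derivatives
        w_eps = w.copy()
        w_eps[idx] += eps
        grad[idx] = (loss(w_eps, Ia, Ip, In_) - base) / eps
    w = w - alpha * grad                      # update parameters (step 3.3.6)
final = loss(w, Ia, Ip, In_)
```

Once the constraints are satisfied, both hinges clamp to zero, the gradient vanishes, and the parameters stop moving, matching the remark above about the max term contributing nothing.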
(4) obtain the local feature synthesis ELF16 from the fingerprint image data in a hand-crafted (Hand-crafted) manner, the local feature synthesis ELF16 comprising gray-level histogram features and texture features;
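A hand-crafted descriptor in the spirit of ELF16 can be sketched as below. The 16-dimensional layout (8 gray-level histogram bins plus 8 simple texture statistics) is an assumption for illustration; the patent does not spell out the exact composition of ELF16.

```python
import numpy as np

# Step (4) sketch: gray-level histogram features plus texture features
# derived from horizontal/vertical intensity differences.
def elf16_features(img):
    img = np.asarray(img, dtype=float)
    hist, _ = np.histogram(img, bins=8, range=(0, 256))
    hist = hist / img.size                    # normalized gray-level histogram
    gx = np.diff(img, axis=1)                 # horizontal intensity change
    gy = np.diff(img, axis=0)                 # vertical intensity change
    texture = np.array([gx.mean(), gx.std(), np.abs(gx).mean(), gx.max(),
                        gy.mean(), gy.std(), np.abs(gy).mean(), gy.max()])
    return np.concatenate([hist, texture])    # 16-dimensional feature vector

rng = np.random.default_rng(2)
fingerprint = rng.integers(0, 256, size=(32, 32))   # placeholder pixel matrix
feat = elf16_features(fingerprint)
```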
(5) combine the fingerprint feature CNN_Features and the local feature synthesis ELF16 through a fully connected Fusion Layer; the fingerprint fusion feature after combination is x:
x = [ratio1*ELF16, ratio2*CNN_Features];
where:
ratio1 --- gradient-descent coefficient of the local feature synthesis ELF16, with an initial value of 0.4~0.6; in one example of the invention, the initial value of ratio1 is 0.5;
ratio2 --- gradient-descent coefficient of the fingerprint feature CNN_Features, with an initial value of 0.9~1.2; in one example of the invention, the initial value of ratio2 is 1.05;
(6) weighted activation fusion: Fusion = h(W_Fusion * x + b_Fusion);
where:
h --- activation function;
W_Fusion --- weight of the feature fusion algorithm, with an initial value of 0.8~1.0; in one example of the invention, the initial value is 0.9;
b_Fusion --- bias of the feature fusion algorithm, with an initial value of 0.08~0.1; in one example of the invention, the initial value is 0.09;
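The weighted activation fusion of step (6) can be sketched as a fully connected layer followed by an activation. Sigmoid as the choice of h, and the input/output widths, are assumptions for the sketch.

```python
import numpy as np

# Step (6) sketch: Fusion = h(W_Fusion @ x + b_Fusion) with sigmoid as h.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
x = rng.random(32)                          # fused feature vector from step (5)
W_fusion = 0.9 * np.ones((64, 32)) / 32     # weight, scaled by the 0.9 initial value
b_fusion = np.full(64, 0.09)                # bias, 0.09 initial value
fused = sigmoid(W_fusion @ x + b_fusion)    # weighted activation fusion output
```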
(7) multilayer network iterative computation:
W(l)' = W(l) - ΔW(l), with ΔW(l) = α[(1/m)∇W(l)J + λW(l)];
b(l)' = b(l) - Δb(l), with Δb(l) = α(1/m)∇b(l)J;
where:
W(l) --- weight of layer l;
ΔW(l) --- change of the layer-l weight;
W(l)' --- weight of layer l after the new iteration;
b(l) --- bias of layer l;
Δb(l) --- change of the layer-l bias;
b(l)' --- bias of layer l after the new iteration;
α --- a suitably chosen learning rate;
m --- number of samples;
λ --- hyperparameter, with value > 0;
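One iteration of the update in step (7) can be sketched as follows, treating λ as a weight-decay term. This standard update form is an assumption consistent with the variable list above; the gradients are random placeholders.

```python
import numpy as np

# Step (7) sketch: one weight/bias update with learning rate alpha,
# batch size m, and weight-decay hyperparameter lam (lambda > 0).
rng = np.random.default_rng(4)
W = rng.random((4, 4))
b = rng.random(4)
dW = rng.random((4, 4))          # accumulated weight gradient (placeholder)
db = rng.random(4)               # accumulated bias gradient (placeholder)
alpha, m, lam = 0.01, 32, 1e-4

W_new = W - alpha * ((1.0 / m) * dW + lam * W)   # weight after the new iteration
b_new = b - alpha * ((1.0 / m) * db)             # bias after the new iteration
```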
(8) compute the cross entropy J:
J = -Σ(j=1..n) 1{y=j} log p_j, with p_j = P(y=j|x;θ) = exp(θ_j^T x) / Σ(k=1..n) exp(θ_k^T x);
where:
J --- cross entropy;
x --- single input vector of the last layer;
j --- single output node of the last layer;
n --- number of output nodes;
θ_j --- model parameter of output node j;
θ_k --- model parameter of output node k;
p_k --- estimated probability computed from the data;
log p_k --- logarithm of the estimated probability;
y --- middle-network output node;
P(y=j|x;θ) --- the estimated probability, computed from the data, that the middle-network output node is j when the single input of the last layer is x and the output-node model parameter is θ;
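The cross entropy of step (8) can be sketched for a single last-layer input. The softmax probability form is an assumption consistent with the variable list above; the sizes and label are illustrative.

```python
import numpy as np

# Step (8) sketch: softmax probabilities p_k and cross entropy J for one
# input vector x with n output nodes and per-node parameters theta_j.
rng = np.random.default_rng(5)
n, d = 4, 8                     # output nodes, input width (assumed)
theta = rng.random((n, d))      # model parameters theta_j for each node j
x = rng.random(d)               # single input vector of the last layer
y = 2                           # illustrative true output node

scores = theta @ x
p = np.exp(scores - scores.max())    # numerically stabilized softmax
p = p / p.sum()                      # p_k = P(y = k | x; theta)
J = -np.log(p[y])                    # cross entropy for the true node
```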
(9) if the cross entropy J is not minimal, continue iterating from step (5) until the cross entropy J is minimal; during the iterations, the initial parameter values of ratio1, ratio2, W_Fusion, and b_Fusion are adjusted automatically, tuning the feature weights until the loss function is minimal;
(10) finish and obtain the fingerprint fusion feature.
Any variables and parameters not explicitly defined above are those commonly used for hand-crafted (Hand-crafted) features or CNN networks; in the iterative processes, except where explicitly stated, the initial values of the first iteration follow the usual practice for hand-crafted (Hand-crafted) features or CNN networks; all minimum-value judgments use the judgment methods common to neural networks.
By extracting complementary features with hand-crafted (Hand-crafted) descriptors and a CNN (Convolutional Neural Network), and fusing them through the Fusion framework, the CNN features and hand-crafted features are mapped into a unified feature space, achieving higher target discrimination than either CNN features or hand-crafted features alone.
First, the proposed intelligent cockpit performs fingerprint identity recognition; since people of different identities have different operating permissions, vehicle operation is safer than in a traditional intelligent cockpit. Second, the deep-learning-based fingerprint feature extraction method of the invention achieves higher target discrimination than traditional fingerprint feature extraction methods.
Experiments show that the feature extraction of the invention outperforms traditional fingerprint recognition algorithms; the reason is the integration of the Buffer Layer and Fusion Layer, which automatically tunes the initial parameter values and feature weights.
The recognition performance of the method of the invention compared with traditional fingerprint recognition algorithms is as follows:
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that a variety of changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
1. An intelligent cockpit embedded fingerprint feature extraction method based on deep learning, characterized by comprising the following steps:
(1) acquire the user's fingerprint and obtain the pixel matrix of the fingerprint image data;
(2) smooth the fingerprint image data to remove noise;
(3) extract the fingerprint feature CNN_Features from the fingerprint image data using a CNN network;
(4) obtain the local feature synthesis ELF16 from the fingerprint image data in a hand-crafted (Hand-crafted) manner, the local feature synthesis ELF16 comprising gray-level histogram features and texture features;
(5) combine the fingerprint feature CNN_Features and the local feature synthesis ELF16 through a fully connected Fusion Layer, the fingerprint fusion feature after combination being x:
x = [ratio1*ELF16, ratio2*CNN_Features];
where:
ratio1 --- gradient-descent coefficient of the local feature synthesis ELF16, with an initial value of 0.4~0.6;
ratio2 --- gradient-descent coefficient of the fingerprint feature CNN_Features, with an initial value of 0.9~1.2;
(6) weighted activation fusion: Fusion = h(W_Fusion * x + b_Fusion);
where:
h --- activation function;
W_Fusion --- weight of the feature fusion algorithm, with an initial value of 0.8~1.0;
b_Fusion --- bias of the feature fusion algorithm, with an initial value of 0.08~0.1;
(7) multilayer network iterative computation:
W(l)' = W(l) - ΔW(l), with ΔW(l) = α[(1/m)∇W(l)J + λW(l)];
b(l)' = b(l) - Δb(l), with Δb(l) = α(1/m)∇b(l)J;
where:
W(l) --- weight of layer l;
ΔW(l) --- change of the layer-l weight;
W(l)' --- weight of layer l after the new iteration;
b(l) --- bias of layer l;
Δb(l) --- change of the layer-l bias;
b(l)' --- bias of layer l after the new iteration;
α --- a suitably chosen learning rate;
m --- number of samples;
λ --- hyperparameter, with value > 0;
(8) compute the cross entropy J:
J = -Σ(j=1..n) 1{y=j} log p_j, with p_j = P(y=j|x;θ) = exp(θ_j^T x) / Σ(k=1..n) exp(θ_k^T x);
where:
J --- cross entropy;
x --- single input vector of the last layer;
j --- single output node of the last layer;
n --- number of output nodes;
θ_j --- model parameter of output node j;
θ_k --- model parameter of output node k;
p_k --- estimated probability computed from the data;
log p_k --- logarithm of the estimated probability;
y --- middle-network output node;
P(y=j|x;θ) --- the estimated probability, computed from the data, that the middle-network output node is j when the single input of the last layer is x and the output-node model parameter is θ;
(9) if the cross entropy J is not minimal, continue iterating from step (5) until the cross entropy J is minimal;
(10) finish and obtain the fingerprint fusion feature.
2. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 1, characterized in that, in step (3), the CNN network uses global feature convolution; a triplet sample <I_i^a, I_i^p, I_i^n> is constructed and the CNN network is trained with a stochastic gradient descent training algorithm, the feature space of the triplet sample being <f(I_i^a; w), f(I_i^p; w), f(I_i^n; w)>; the specific steps are as follows:
3.1 input: the training-sample fingerprint image data {I_i};
3.2 output: the weight network parameters {w};
3.3 while t < T, repeat operations 3.3.1~3.3.6:
3.3.1 t ← t + 1;
3.3.2 compute the feature space of the triplet sample by forward propagation through the network;
3.3.3 compute the partial derivatives of the triplet-sample feature space with respect to the weight network parameters by back-propagation;
3.3.4 compute, from the two distance formulas, the partial derivative of the within-class distance with respect to the weight network parameters and the partial derivative of the within-class/between-class distance difference with respect to the weight network parameters;
3.3.5 compute the partial derivative of the loss function with respect to the output weight network parameters, ∂L/∂w, from the three loss formulas;
3.3.6 update the weight network parameters;
where:
t --- the current moment;
t+1 --- the next moment;
t-1 --- the previous moment;
T --- the iteration end moment;
∂L/∂w --- partial derivative of the loss function with respect to the output weight network parameters;
I_i^a --- the original (anchor) sample;
I_i^p --- the positive sample;
I_i^n --- the negative sample;
i --- the sample index;
N --- the number of samples;
h1(I_i, w) --- hinge value of the activation function for the within-class/between-class distance difference;
h2(I_i, w) --- hinge value of the activation function for the within-class distance;
λ_t --- weight coefficient of the partial derivative of the loss function with respect to the weight network parameters, with value range 0 ≤ λ_t ≤ 1;
T1 --- limiting parameter of the within-class/between-class distance difference, with value range 0.5 ≤ T1 ≤ 5.5;
T2 --- limiting parameter of the within-class distance, with value range 0.5 ≤ T2 ≤ 1.6.
3. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 2, characterized in that the distance functions of the triplet sample construction include the within-class/between-class distance difference and the within-class distance, and satisfy the following constraints:
d_n(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 - ||f(I_i^a; w) - f(I_i^n; w)||^2 < -T1;
d_p(I_i, w) = ||f(I_i^a; w) - f(I_i^p; w)||^2 < T2;
where:
d_n(I_i, w) --- the within-class/between-class distance difference;
d_p(I_i, w) --- the within-class distance;
w --- the weight network parameters;
f(I_i^a; w) --- the feature space of the original sample;
f(I_i^p; w) --- the feature space of the positive sample;
f(I_i^n; w) --- the feature space of the negative sample.
4. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 3, characterized in that the total loss of the loss function is:
L(I, w) = (1/N) Σ(i=1..N) [max{d_n(I_i, w) + T1, 0} + β·max{d_p(I_i, w) - T2, 0}];
where:
L(I, w) --- the total loss function;
max --- the maximum function;
N --- the number of samples;
β --- the balance parameter.
5. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 3, characterized in that the global feature convolution generates a 4096-dimensional feature (4096D Feature).
6. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 2, characterized in that, in step 3.3.6, λ_t takes the value 0.3 or 0.8.
7. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 1, characterized in that, in step 3.3, T1 takes the value 3 and T2 takes the value 1.
8. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 1, characterized in that, in step (5), the initial value of ratio1 is 0.5 and the initial value of ratio2 is 1.05.
9. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 1, characterized in that, in step (6), the initial value of W_Fusion is 0.9.
10. The intelligent cockpit embedded fingerprint feature extraction method based on deep learning according to claim 1, characterized in that, in step (6), the initial value of b_Fusion is 0.09.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811630650.9A | 2018-12-29 | 2018-12-29 | Intelligent cockpit embedded fingerprint feature extracting method based on deep learning
Publications (1)

Publication Number | Publication Date
---|---
CN109711361A | 2019-05-03
Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112418191A | 2021-01-21 | 2021-02-26 | 深圳阜时科技有限公司 | Fingerprint identification model construction method, storage medium and computer equipment
Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20050188213A1 | 2004-02-23 | 2005-08-25 | Xiaoshu Xu | System for personal identity verification
CN102216941A | 2008-08-19 | 2011-10-12 | Digimarc Corporation (数字标记公司) | Methods and systems for content processing
CN103646457A | 2013-12-27 | 2014-03-19 | Chongqing Jicheng Automotive Electronics Co., Ltd. (重庆集诚汽车电子有限责任公司) | Low frequency calibration method for PEPS (passive entry passive start) system
US20180082304A1 | 2016-09-21 | 2018-03-22 | PINN Technologies | System for user identification and authentication
CN108875907A | 2018-04-23 | 2018-11-23 | North China University of Technology (北方工业大学) | Fingerprint identification method and device based on deep learning
Non-Patent Citations (3)

Title
---
De Cheng et al., "Person Re-identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Shangxuan Wu et al., "An Enhanced Deep Feature Representation for Person Re-identification", 2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
Jing Chenkai et al., "A survey of face recognition based on deep convolutional neural networks", Computer Applications and Software (《计算机应用与软件》)
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-05-03