CN109800677A - Cross-platform palmprint recognition method - Google Patents

Cross-platform palmprint recognition method

Info

Publication number
CN109800677A
CN109800677A
Authority
CN
China
Prior art keywords
source domain
target domain
image
hidden layer
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811642465.1A
Other languages
Chinese (zh)
Other versions
CN109800677B (en)
Inventor
钟德星
邵会凯
杜学峰
姚润昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GRASP TECHNOLOGY WUXI Co.,Ltd.
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201811642465.1A
Publication of CN109800677A
Application granted
Publication of CN109800677B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a cross-platform palmprint recognition method, belonging to the field of biometric identification technology. The invention adopts a deep transfer autoencoder model. By optimizing the reconstruction loss, the hidden layers of the deep autoencoder extract discriminative low-dimensional palmprint features with low information loss and good recoverability. By constraining the Maximum Mean Discrepancy between hidden-layer features, the transfer model reduces the difference between domains, transforming the feature distributions of the source and target domains to be consistent. Finally, a classifier is trained with the source-domain features and label information and applied directly in the target domain, realizing unsupervised palmprint recognition there. The invention overcomes the prior art's inability to recognize across platforms, reduces the model's demand for training data, and improves the convenience of palmprint recognition.

Description

Cross-platform palmprint recognition method
[Technical field]
The invention belongs to the field of biometric identification technology and relates to a cross-platform palmprint recognition method.
[Background art]
With the development of modern information technology and the spread of networks, safeguarding information security has become especially important. In daily life, people need to verify their identity anytime and anywhere. Traditional authentication methods such as passwords, keys, and certificates are easily lost, damaged, or leaked, so there is now an urgent demand for efficient, convenient, and secure identity authentication methods. Research has found that physiological characteristics of the human body such as fingerprints, palmprints, and irises have good stability, uniqueness, and invariance, so biometric identification technology, which authenticates identity using a person's physiological or behavioural characteristics, has attracted wide attention. Since the beginning of the 21st century, biometric technologies represented by fingerprint recognition and face recognition have developed rapidly and are now applied in production and daily life. Compared with other biometric features, the palmprint carries richer texture information, achieves higher recognition accuracy, and is unique and stable over the long term.
Existing palmprint recognition technology, however, has certain shortcomings. First, the prior art is mostly based on a single recognition device, so the data sources and formats are rather uniform; this clearly cannot satisfy the requirement of applying palmprint recognition across multiple devices and platforms. For example, when registration and recognition are performed under different acquisition environments and on different acquisition devices, the accuracy of the prior art drops sharply. In addition, existing palmprint recognition algorithms are mostly supervised learning algorithms whose high accuracy must be guaranteed by large numbers of labelled samples; obtaining many labelled samples in real life costs substantial manpower and material resources and is sometimes simply infeasible. Existing palmprint recognition technology is therefore significantly limited. To solve these problems, new algorithms must be developed to realize efficient and accurate cross-platform palmprint recognition.
[Summary of the invention]
The object of the invention is to overcome the above shortcomings of the prior art by providing a cross-platform palmprint recognition method. It combines the low information loss and recoverability of the deep autoencoder's hidden layers with the ability of the Maximum Mean Discrepancy to reduce the distribution difference between source-domain and target-domain features, realizing cross-platform unsupervised palmprint recognition and remedying the hidden-layer information loss suffered when existing transfer learning is applied to cross-platform palmprint recognition.
To achieve the above objective, the invention is realized by the following scheme:
A cross-platform palmprint recognition method, comprising the following steps:
Step 1: using different devices, obtain a number of labelled palmprint images and a number of unlabelled palmprint images; the labelled palmprint images form the source domain, and the unlabelled palmprint images form the target domain;
Step 2: input the palmprint images of the source domain and the target domain into their respective deep encoding networks, projecting each onto a Reproducing Kernel Hilbert Space to obtain the hidden-layer feature vectors of the two domains and, in turn, the low-dimensional coding features of the two domains;
Step 3: input the hidden-layer feature vectors of the source and target domains into deep decoding networks, where each domain uses the same network structure and transformation matrices as its encoding network; the transposed matrices are multiplied with the low-dimensional feature vectors from the source and target domains to obtain reconstructed hidden-layer features and images, and thus reconstructed palmprint images;
Step 4: compare the reconstructed palmprint images with the palmprint images of step 1 to obtain the reconstruction loss;
Step 5: compute the distribution differences between corresponding hidden-layer features of the source-domain and target-domain deep encoding and decoding networks to obtain the feature-distribution-difference loss;
Step 6: add the reconstruction losses of the source and target domains and the feature-distribution-difference loss into an overall loss function and train the deep autoencoder;
Step 7: input the palmprint images of the source and target domains into the trained deep autoencoder to obtain the low-dimensional feature vector sets of the palmprints;
Step 8: construct a classifier, train it with the source-domain low-dimensional feature vector set and the source-domain labels, and compute the recognition accuracy on the source-domain palmprint images;
Step 9: apply the source-domain classifier constructed in step 8 to the target-domain feature set, realizing accurate palmprint recognition in the target domain.
Further improvements of the invention are as follows:
In step 2, the deep encoding networks of the source and target domains use the same hidden-layer network structure, containing three hidden layers.
The deep encoding network and the deep decoding network each contain multiple hidden layers and can extract multiple hidden-layer feature quantities; the hidden-layer feature quantity at the output of the encoding network (the input of the decoding network) is the palmprint coding feature finally used for recognition.
In step 5, the difference between corresponding hidden-layer feature distributions of the source and target domains is computed via the Maximum Mean Discrepancy; following the structure of the encoding and decoding networks, the MMD between each pair of corresponding hidden layers $j$ is computed as

$$\mathrm{MMD}(S_j, T_j) = \left\| \frac{1}{n_1} \sum_{m=1}^{n_1} f_j(x_m^S) - \frac{1}{n_2} \sum_{n=1}^{n_2} f_j(x_n^T) \right\|^2$$

where $f_j(\cdot)$ denotes the nonlinear transformation from the pixel domain to hidden layer $j$. By the kernel transformation, the above can be written as

$$\mathrm{MMD}(S_j, T_j) = \mathrm{tr}(K_j L_j)$$

where $K_j$ is a symmetric kernel matrix,

$$K_j = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}$$

in which $K_{S,S}$, $K_{T,T}$, and $K_{S,T}$ collect the values of the kernel function $k(x_m, x_n)$ taken over the source domain, the target domain, and across the two domains respectively, and $L_j$ is the MMD matrix with entries

$$(L_j)_{mn} = \begin{cases} \dfrac{1}{n_1^2}, & x_m, x_n \in S_j \\ \dfrac{1}{n_2^2}, & x_m, x_n \in T_j \\ -\dfrac{1}{n_1 n_2}, & \text{otherwise.} \end{cases}$$
The loss function in step 6 is computed as follows:
The overall model loss consists of three parts: the MMD loss function, the source-domain image reconstruction loss function, and the target-domain image reconstruction loss function. The loss function $L(W)$ is defined as

$$L(W) = \sum_j \mathrm{MMD}(S_j, T_j) + \alpha \,\|\hat{X}_S - X_S\|^2 + \beta \,\|\hat{X}_T - X_T\|^2$$

where $\hat{X}_S$ and $X_S$ are the reconstructed and original images in the source domain, $\hat{X}_T$ and $X_T$ are the reconstructed and original images in the target domain, and $\alpha$, $\beta$ are scalar weights balancing the proportions of the source-domain and target-domain reconstruction losses in the overall loss function.
In step 6, the deep autoencoder is trained as follows:
At each training step, two images, one from the source domain and one from the target domain, are input, and the final loss function defined above is optimized by gradient descent.
In step 8, the recognition accuracy of the source domain is computed as follows:
The low-dimensional coding features $P_S$ of the source-domain images are used as the final features; a fully connected layer and a Softmax layer are constructed to classify the source-domain palmprint images, and the predictions are compared with the true labels to obtain the recognition accuracy on the source-domain palmprint images.
In step 9, accurate palmprint recognition in the target domain is realized as follows:
Using the classifier obtained in the source domain, the low-dimensional coding features $P_T$ of the target-domain palmprint images are input into the source-domain classifier to obtain the classification results, realizing target-domain palmprint recognition.
Compared with the prior art, the invention has the following advantages:
The invention is suitable for cross-platform palmprint recognition: using only the palmprint images and labels of the source domain, it realizes unsupervised palmprint recognition in the target domain. The invention adopts a deep autoencoder and a transfer learning model and fuses their advantages into a deep transfer autoencoder. The hidden layers of the deep autoencoder extract discriminative low-dimensional palmprint features with low information loss and good recoverability. The transfer model based on the Maximum Mean Discrepancy reduces the difference between domains, transforming the feature distributions of the source and target domains to be consistent. The proposed cross-platform palmprint recognition method based on the deep transfer autoencoder overcomes the prior art's limitation of working only with a single recognition device. In addition, the invention does not need a large amount of target-domain label information, which reduces the trained recognition model's demand for sample data and lowers the cost of recognition.
[Brief description of the drawings]
Fig. 1 is a schematic diagram of the model training process of the invention.
Fig. 2 is a schematic diagram of the model recognition process of the invention.
In the figures, device ① is the source domain with known sample labels, and devices ②, ③, … are recognition devices in the target domain. One source-domain recognition device can correspond to multiple target-domain recognition devices; only one target-domain recognition device is shown in the figure.
[Detailed description of the embodiments]
To enable those skilled in the art to better understand the solution of the invention, the technical solutions in the embodiments of the invention are described below clearly and completely in conjunction with the accompanying drawings. The described embodiments are evidently only a part of the embodiments of the invention, not all of them, and are not intended to limit the scope of the disclosure. In the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the disclosed concepts. All other embodiments obtained by persons of ordinary skill in the art from the embodiments of the invention without creative effort shall fall within the protection scope of the invention.
Various structural schematic diagrams according to the disclosed embodiments are shown in the drawings. The figures are not drawn to scale; some details are enlarged for clarity of presentation and some may be omitted. The shapes of the various regions and layers shown in the figures, their relative sizes, and the positional relationships between them are merely exemplary; in practice there may be deviations due to manufacturing tolerances or technical limitations, and those skilled in the art may additionally design regions/layers with different shapes, sizes, and relative positions as required.
In the context of this disclosure, when a layer/element is referred to as being "on" another layer/element, it can be directly on the other layer/element, or intervening layers/elements may be present between them. In addition, if a layer/element is "on" another layer/element in one orientation, the layer/element can be "under" the other layer/element when the orientation is reversed.
It should be noted that the terms "first", "second", etc. in the specification, claims, and drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described herein. Moreover, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
The invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the invention realizes cross-platform palmprint recognition based on deep neural networks, autoencoders, and transfer learning, and involves different palmprint acquisition and recognition devices. A device whose palmprint image label information must be known is called the source domain; a device whose label information need not be known is called the target domain. The source domain contains at least one device, and one source domain can correspond to multiple different target domains. Referring to Fig. 1, the specific embodiment is illustrated here with one source domain (device ①) and one target domain (device ②).
During training, after the palmprint images of the two domains are obtained from the testers by the acquisition devices, they are input into the deep encoder to obtain multiple hidden-layer feature vectors in a Reproducing Kernel Hilbert Space; these hidden-layer feature quantities are then input into the deep decoder to obtain reconstructed palmprint images. Computing the difference between the original and reconstructed palmprint images gives the reconstruction loss, which is minimized to reduce the information loss of the low-dimensional features. In addition, computing the Maximum Mean Discrepancy (MMD) between the low-dimensional feature quantities of corresponding hidden layers in the source and target domains allows the feature distributions of the two domains to be transformed to be consistent, so that the same classifier can be applied to both. The reconstruction losses of the two domains and the MMD over all hidden-layer vectors are used together as the loss function to train the deep transfer autoencoder and obtain the low-dimensional feature sets of the source and target domains. Finally, a classifier is obtained in the source domain by supervised training on the extracted low-dimensional feature set and the sample labels; this classifier can be applied directly to palmprint recognition in the target domain, achieving the goal of identification.
The principle of the invention is as follows:
1. The deep encoder extracts palmprint features.
The low-dimensional deep features of the palmprint are obtained with a deep encoder and used to train the subsequent classifier. The recognition device with class-label information is called the source domain, and the recognition device without class-label information is called the target domain. Palmprint image sets $X_S$ and $X_T$ are selected from the source and target domains respectively, where $n_1$ and $n_2$ are the numbers of palmprint images in the two domains. The multilayer hidden layers of the deep encoder process the palmprint images from the source and target domains, projecting each onto a Reproducing Kernel Hilbert Space (RKHS). Let the transformation matrices of the source-domain encoding network be $(W_{S1}, W_{S2}, \ldots, W_{SM})$ and those of the target-domain encoding network be $(W_{T1}, W_{T2}, \ldots, W_{TM})$, where $M > 1$ is the number of hidden layers. The feature representations obtained by the hidden layers of the two domains are $h_{S1}, \ldots, h_{SM}$ and $h_{T1}, \ldots, h_{TM}$, and the low-dimensional feature quantities are $P_S$ and $P_T$, where:
$$h_{S1} = X_S W_{S1}, \quad h_{S2} = h_{S1} W_{S2}, \quad \ldots, \quad h_{SM} = h_{S(M-1)} W_{SM}$$
$$h_{T1} = X_T W_{T1}, \quad h_{T2} = h_{T1} W_{T2}, \quad \ldots, \quad h_{TM} = h_{T(M-1)} W_{TM}$$
$$P_S = h_{SM}, \qquad P_T = h_{TM}$$
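The linear forward pass above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation: the layer sizes (4096 → 1024 → 256 → 128), the random initialisation, and the function names are all hypothetical.

```python
import numpy as np

def encode(X, weights):
    """Forward pass through the linear hidden layers: h_k = h_{k-1} W_k.

    X is an (n, d) matrix of flattened palmprint images; weights is the
    list (W_1, ..., W_M).  Returns every hidden representation; the last
    one is the low-dimensional code P used for classification.
    """
    hiddens = []
    h = X
    for W in weights:
        h = h @ W
        hiddens.append(h)
    return hiddens

# Hypothetical sizes: 64x64 images (d = 4096) encoded down to 128 dims
# through three hidden layers, as in the patent's embodiment.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4096, 1024)) * 0.01,
      rng.standard_normal((1024, 256)) * 0.01,
      rng.standard_normal((256, 128)) * 0.01]
X_s = rng.standard_normal((5, 4096))       # five source-domain images
h_s1, h_s2, P_s = encode(X_s, Ws)          # P_s plays the role of h_S3
print(P_s.shape)                           # (5, 128)
```

The target-domain encoder is identical in structure but has its own matrices $W_{T1}, W_{T2}, W_{T3}$.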
2. The deep decoder reconstructs the palmprint images.
A multilayer deep decoding network is used, set up with the same transformation matrices as the encoding network, $W_{S1}, W_{S2}, \ldots, W_{SM}$ and $W_{T1}, W_{T2}, \ldots, W_{TM}$; their transposes are multiplied with the hidden-layer feature vectors from the source and target domains to obtain the reconstructed hidden-layer features $\hat h_{S1}, \ldots, \hat h_{S(M-1)}, \hat h_{T1}, \ldots, \hat h_{T(M-1)}$ and the reconstructed palmprint images $\hat X_S$, $\hat X_T$. The basic process is:

$$\hat h_{S(M-1)} = P_S W_{SM}^{T}, \quad \ldots, \quad \hat X_S = \hat h_{S1} W_{S1}^{T}$$
$$\hat h_{T(M-1)} = P_T W_{TM}^{T}, \quad \ldots, \quad \hat X_T = \hat h_{T1} W_{T1}^{T}$$
3. Computing the hidden-layer feature distribution difference between the source and target domains.
Via the Maximum Mean Discrepancy (MMD), the difference between the feature distributions of corresponding hidden layer $j$ in the source and target domains is defined as

$$\mathrm{MMD}(S_j, T_j) = \left\| \frac{1}{n_1} \sum_{m=1}^{n_1} f_j(x_m^S) - \frac{1}{n_2} \sum_{n=1}^{n_2} f_j(x_n^T) \right\|^2$$

where $f_j(\cdot)$ denotes the nonlinear transformation from the pixel domain to hidden layer $j$. By the kernel transformation, the above can be written as

$$\mathrm{MMD}(S_j, T_j) = \mathrm{tr}(K_j L_j)$$

where $K_j$ is a symmetric kernel matrix,

$$K_j = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}$$

in which $K_{S,S}$, $K_{T,T}$, and $K_{S,T}$ collect the values of the kernel function $k(x_m, x_n)$ taken over the source domain, the target domain, and across the two domains respectively, and $L_j$ is the MMD matrix with entries

$$(L_j)_{mn} = \begin{cases} \dfrac{1}{n_1^2}, & x_m, x_n \in S_j \\ \dfrac{1}{n_2^2}, & x_m, x_n \in T_j \\ -\dfrac{1}{n_1 n_2}, & \text{otherwise.} \end{cases}$$
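The $\mathrm{tr}(K_j L_j)$ form can be checked numerically. The sketch below uses an RBF kernel with a hypothetical bandwidth gamma = 1.0; the patent does not fix a particular kernel, so that choice, and the toy feature sets, are illustrative only.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd(HS, HT, gamma=1.0):
    """Squared MMD between source and target hidden features as tr(K L),
    with K the block kernel matrix and L the MMD matrix defined above."""
    n1, n2 = len(HS), len(HT)
    H = np.vstack([HS, HT])
    K = rbf_kernel(H, H, gamma)
    L = np.zeros((n1 + n2, n1 + n2))
    L[:n1, :n1] = 1.0 / n1**2
    L[n1:, n1:] = 1.0 / n2**2
    L[:n1, n1:] = L[n1:, :n1] = -1.0 / (n1 * n2)
    return float(np.trace(K @ L))

rng = np.random.default_rng(0)
HS = rng.standard_normal((20, 8))          # source hidden features
HT = rng.standard_normal((30, 8)) + 2.0    # target features, shifted domain
print(mmd(HS, HT) > mmd(HS, HT - 2.0))     # True: aligning the domains shrinks the MMD
```

Minimizing this quantity over the encoder weights is what pulls the two domains' feature distributions together.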
4. Computing the loss function of the deep transfer autoencoding model.
The overall model loss consists of three parts: the MMD loss function, the source-domain image reconstruction loss function, and the target-domain image reconstruction loss function. The loss function $L(W)$ is defined as

$$L(W) = \sum_j \mathrm{MMD}(S_j, T_j) + \alpha \,\|\hat{X}_S - X_S\|^2 + \beta \,\|\hat{X}_T - X_T\|^2$$

where $\alpha$, $\beta$ are scalar weights balancing the proportions of the source-domain and target-domain reconstruction losses in the overall loss function.
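The overall objective can be sketched as one small function, treating the per-layer MMD values as already computed. The weights alpha and beta and the squared-error norms follow the formula; the shapes and values are illustrative.

```python
import numpy as np

def total_loss(mmd_per_layer, XS, XS_hat, XT, XT_hat, alpha=1.0, beta=1.0):
    """L(W) = sum_j MMD(S_j, T_j) + alpha*||X_S_hat - X_S||^2
                                  + beta *||X_T_hat - X_T||^2."""
    rec_s = np.sum((XS_hat - XS) ** 2)
    rec_t = np.sum((XT_hat - XT) ** 2)
    return float(np.sum(mmd_per_layer) + alpha * rec_s + beta * rec_t)

# With perfect reconstructions only the MMD terms remain.
L = total_loss([0.1, 0.2, 0.3],
               np.ones((2, 4)), np.ones((2, 4)),
               np.zeros((2, 4)), np.zeros((2, 4)))
print(round(L, 6))  # 0.6
```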
5. Training the deep transfer autoencoder and the classifier.
At each training step, two images, one from the source domain and one from the target domain, are input; the model loss is computed as in part 4 and the final loss is optimized by gradient descent, training the deep transfer autoencoder and yielding the low-dimensional feature vector sets of the palmprints in the source and target domains.
Next, the low-dimensional feature vector set of the source-domain images is used to train a classifier against the labels of the source-domain images; the classifier's outputs are compared with the true labels to obtain the recognition rate on the source-domain palmprint images. Then the low-dimensional feature vector set obtained from the target-domain images is input into the classifier trained on the source domain; its outputs realize the recognition of the target-domain palmprint images. Note that the labels of the target-domain images are never used during training.
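The "fully connected layer + Softmax" classifier used on the source codes can be sketched as plain softmax regression. The feature dimension, class count, learning rate, and the well-separated toy data are all hypothetical; only the structure (one linear layer, softmax, trained on labelled source codes, applied unchanged to target codes) follows the text.

```python
import numpy as np

def train_softmax(P, y, n_classes, lr=0.5, epochs=200):
    """One fully connected layer + Softmax, fit by cross-entropy gradient descent."""
    W = np.zeros((P.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        Z = P @ W
        Z = Z - Z.max(axis=1, keepdims=True)       # numerical stability
        prob = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)
        W = W - lr * P.T @ (prob - Y) / len(P)     # cross-entropy gradient step
    return W

def predict(P, W):
    return np.argmax(P @ W, axis=1)

# Toy codes: two separable classes in the shared low-dimensional space.
rng = np.random.default_rng(0)
P_src = np.vstack([rng.standard_normal((30, 8)) - 2,
                   rng.standard_normal((30, 8)) + 2])
y_src = np.array([0] * 30 + [1] * 30)
W = train_softmax(P_src, y_src, 2)
src_acc = np.mean(predict(P_src, W) == y_src)

# After MMD alignment the target codes follow the same distribution,
# so the source classifier transfers without target labels.
P_tgt = np.vstack([rng.standard_normal((10, 8)) - 2,
                   rng.standard_normal((10, 8)) + 2])
y_tgt = np.array([0] * 10 + [1] * 10)  # used only for evaluation here
tgt_acc = np.mean(predict(P_tgt, W) == y_tgt)
print(src_acc, tgt_acc)  # both near 1.0 on this toy problem
```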
Thus the unsupervised cross-platform palmprint recognition process is realized.
The specific implementation process of the invention is as follows:
Step 1: obtaining palmprint images. Palmprint images are acquired with device ① and device ② respectively, and the regions of interest are extracted. Palmprints acquired from device ① require recorded identity information and form the source domain; its $n_1$ palmprint images are denoted $X_S$ and their class labels $Y_S$. Palmprints acquired from device ② need no recorded identity information and form the target domain; its $n_2$ collected palmprint images are denoted $X_T$.
Step 2: obtaining the hidden-layer features and low-dimensional coding features of the palmprint images. Palmprint image sets $X_S$ and $X_T$ are selected from the source and target domains respectively, where $n_1$ and $n_2$ are the numbers of palmprint images in the two domains. The source-domain and target-domain palmprint images are input into the deep encoding networks and each projected onto a Reproducing Kernel Hilbert Space (RKHS). The deep encoding networks of the source and target domains use the same hidden-layer network structure with 3 hidden layers; for the specific structure see Fig. 1. Let the transformation matrices of the three-layer encoding networks of devices ① and ② be $W_{S1}, W_{S2}, W_{S3}$ and $W_{T1}, W_{T2}, W_{T3}$. The feature vectors of the different hidden layers of the two domains are

$$h_{S1} = X_S W_{S1}, \quad h_{S2} = h_{S1} W_{S2}, \quad h_{S3} = h_{S2} W_{S3}$$
$$h_{T1} = X_T W_{T1}, \quad h_{T2} = h_{T1} W_{T2}, \quad h_{T3} = h_{T2} W_{T3}$$

and the low-dimensional coding features of the two domains are $P_S = h_{S3}$ and $P_T = h_{T3}$.
Step 3: reconstructing the palmprint images from the hidden-layer feature vectors. The hidden-layer feature vectors are input into the deep decoding networks; the two domains use the same network structures and transformation matrices as their encoding networks, $W_{S1}, W_{S2}, W_{S3}$ and $W_{T1}, W_{T2}, W_{T3}$, whose transposes are multiplied with the low-dimensional feature vectors from the source and target domains to obtain the reconstructed hidden-layer features and images. The basic process is:

$$\hat h_{S2} = P_S W_{S3}^{T}, \quad \hat h_{S1} = \hat h_{S2} W_{S2}^{T}, \quad \hat X_S = \hat h_{S1} W_{S1}^{T}$$
$$\hat h_{T2} = P_T W_{T3}^{T}, \quad \hat h_{T1} = \hat h_{T2} W_{T2}^{T}, \quad \hat X_T = \hat h_{T1} W_{T1}^{T}$$

The reconstructed images are denoted $\hat X_S$ and $\hat X_T$.
Step 4: computing the hidden-layer feature distribution difference between the source and target domains. Via the Maximum Mean Discrepancy (MMD), the difference between corresponding hidden-layer feature distributions of the source and target domains is computed; following the structure of the encoding and decoding networks, the MMD between each pair of corresponding hidden layers $j$ is

$$\mathrm{MMD}(S_j, T_j) = \left\| \frac{1}{n_1} \sum_{m=1}^{n_1} f_j(x_m^S) - \frac{1}{n_2} \sum_{n=1}^{n_2} f_j(x_n^T) \right\|^2$$

where $f_j(\cdot)$ denotes the nonlinear transformation from the pixel domain to hidden layer $j$. By the kernel transformation, the above can be written as

$$\mathrm{MMD}(S_j, T_j) = \mathrm{tr}(K_j L_j)$$

where $K_j$ is a symmetric kernel matrix,

$$K_j = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}$$

in which $K_{S,S}$, $K_{T,T}$, and $K_{S,T}$ collect the values of the kernel function $k(x_m, x_n)$ taken over the source domain, the target domain, and across the two domains respectively, and $L_j$ is the MMD matrix with entries

$$(L_j)_{mn} = \begin{cases} \dfrac{1}{n_1^2}, & x_m, x_n \in S_j \\ \dfrac{1}{n_2^2}, & x_m, x_n \in T_j \\ -\dfrac{1}{n_1 n_2}, & \text{otherwise.} \end{cases}$$
Step 5: computing the loss function of the model. The overall model loss consists of three parts: the MMD loss function, the source-domain image reconstruction loss function, and the target-domain image reconstruction loss function. The loss function $L(W)$ is defined as

$$L(W) = \sum_j \mathrm{MMD}(S_j, T_j) + \alpha \,\|\hat{X}_S - X_S\|^2 + \beta \,\|\hat{X}_T - X_T\|^2$$

where $\alpha$, $\beta$ are scalar weights balancing the proportions of the source-domain and target-domain reconstruction losses in the overall loss function.
Step 6: training the deep autoencoder. At each training step, two images, one from the source domain and one from the target domain, are input; the loss function is computed as in step 5 and the final loss is optimized by gradient descent.
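The training of step 6 can be illustrated end to end at toy scale. This sketch deliberately simplifies the patent's model: one hidden layer per domain instead of three, a linear mean-difference surrogate in place of the kernel MMD, and finite-difference gradients in place of backpropagation; all names and sizes are hypothetical.

```python
import numpy as np

def loss(params, XS, XT, d, k, alpha=1.0, beta=1.0):
    """Single-layer stand-in for L(W): tied-weight reconstruction for each
    domain plus a linear mean-difference surrogate for the kernel MMD."""
    WS = params[:d * k].reshape(d, k)
    WT = params[d * k:].reshape(d, k)
    HS, HT = XS @ WS, XT @ WT
    mmd_term = np.sum((HS.mean(axis=0) - HT.mean(axis=0)) ** 2)
    rec = alpha * np.sum((XS - HS @ WS.T) ** 2) + beta * np.sum((XT - HT @ WT.T) ** 2)
    return mmd_term + rec

def numerical_grad(f, x, eps=1e-5):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
d, k = 6, 2
XS = rng.standard_normal((8, d))          # source batch
XT = rng.standard_normal((8, d)) + 0.5    # shifted target batch
params = rng.standard_normal(2 * d * k) * 0.1
f = lambda p: loss(p, XS, XT, d, k)
before = f(params)
for _ in range(100):                      # plain gradient descent
    params = params - 0.001 * numerical_grad(f, params)
after = f(params)
print(after < before)                     # True: the joint objective decreases
```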
Step 7: recognizing the target-domain palmprint images. First, the low-dimensional coding features $P_S$ of the source-domain images are used as the final features; a simple fully connected layer and a Softmax layer are constructed to classify the source-domain images, and the predictions are compared with the true labels to obtain the recognition accuracy on the source-domain palmprint images. Then the classifier trained on the source domain is used directly: the low-dimensional coding features $P_T$ obtained from the target-domain images are input into this classifier to obtain the classification results, realizing target-domain palmprint recognition. Note that the labels of the target-domain images are never used during training. Thus the unsupervised cross-platform palmprint recognition process is realized.
The above content is merely illustrative of the technical idea of the invention and does not limit its scope of protection; any change made on the basis of the technical scheme in accordance with the technical idea proposed by the invention falls within the protection scope of the claims of the invention.

Claims (8)

1. A cross-platform palmprint recognition method, characterized by comprising the following steps:
Step 1: using different devices, obtain a number of labelled palmprint images and a number of unlabelled palmprint images; the labelled palmprint images form the source domain, and the unlabelled palmprint images form the target domain;
Step 2: input the palmprint images of the source domain and the target domain into their respective deep encoding networks, projecting each onto a Reproducing Kernel Hilbert Space to obtain the hidden-layer feature vectors of the two domains and, in turn, the low-dimensional coding features of the two domains;
Step 3: input the hidden-layer feature vectors of the source and target domains into deep decoding networks, where each domain uses the same network structure and transformation matrices as its encoding network; the transposed matrices are multiplied with the low-dimensional feature vectors from the source and target domains to obtain reconstructed hidden-layer features and images, and thus reconstructed palmprint images;
Step 4: compare the reconstructed palmprint images with the palmprint images of step 1 to obtain the reconstruction loss;
Step 5: compute the distribution differences between corresponding hidden-layer features of the source-domain and target-domain deep encoding and decoding networks to obtain the feature-distribution-difference loss;
Step 6: add the reconstruction losses of the source and target domains and the feature-distribution-difference loss into an overall loss function and train the deep autoencoder;
Step 7: input the palmprint images of the source and target domains into the trained deep autoencoder to obtain the low-dimensional feature vector sets of the palmprints;
Step 8: construct a classifier, train it with the source-domain low-dimensional feature vector set and the source-domain labels, and compute the recognition accuracy on the source-domain palmprint images;
Step 9: apply the source-domain classifier constructed in step 8 to the target-domain feature set, realizing accurate palmprint recognition in the target domain.
2. The cross-platform palmprint recognition method according to claim 1, characterized in that in step 2 the deep encoding networks of the source and target domains use the same hidden-layer network structure, containing three hidden layers.
3. The cross-platform palmprint recognition method according to claim 1 or 2, characterized in that the deep encoding network and the deep decoding network each contain multiple hidden layers and can extract multiple hidden-layer feature quantities; the hidden-layer feature quantity at the output of the encoding network (the input of the decoding network) is the palmprint coding feature finally used for recognition.
4. The cross-platform palmprint recognition method according to claim 1, characterized in that in step 5 the difference between corresponding hidden-layer feature distributions of the source and target domains is computed via the Maximum Mean Discrepancy; following the structure of the encoding and decoding networks, the MMD between each pair of corresponding hidden layers $j$ is computed as

$$\mathrm{MMD}(S_j, T_j) = \left\| \frac{1}{n_1} \sum_{m=1}^{n_1} f_j(x_m^S) - \frac{1}{n_2} \sum_{n=1}^{n_2} f_j(x_n^T) \right\|^2$$

where $f_j(\cdot)$ denotes the nonlinear transformation from the pixel domain to hidden layer $j$. By the kernel transformation, the above can be written as

$$\mathrm{MMD}(S_j, T_j) = \mathrm{tr}(K_j L_j)$$

where $K_j$ is a symmetric kernel matrix,

$$K_j = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}$$

in which $K_{S,S}$, $K_{T,T}$, and $K_{S,T}$ collect the values of the kernel function $k(x_m, x_n)$ taken over the source domain, the target domain, and across the two domains respectively, and $L_j$ is the MMD matrix with entries

$$(L_j)_{mn} = \begin{cases} \dfrac{1}{n_1^2}, & x_m, x_n \in S_j \\ \dfrac{1}{n_2^2}, & x_m, x_n \in T_j \\ -\dfrac{1}{n_1 n_2}, & \text{otherwise.} \end{cases}$$
5. The cross-platform palmprint recognition method according to claim 1, characterized in that the loss function in step 6 is computed as follows:
the overall model loss consists of three parts: the MMD loss function, the source-domain image reconstruction loss function, and the target-domain image reconstruction loss function; the loss function $L(W)$ is defined as

$$L(W) = \sum_j \mathrm{MMD}(S_j, T_j) + \alpha \,\|\hat{X}_S - X_S\|^2 + \beta \,\|\hat{X}_T - X_T\|^2$$

where $\hat{X}_S$ and $X_S$ are the reconstructed and original images in the source domain, $\hat{X}_T$ and $X_T$ are the reconstructed and original images in the target domain, and $\alpha$, $\beta$ are scalar weights balancing the proportions of the source-domain and target-domain reconstruction losses in the overall loss function.
6. The cross-platform palm print identification method according to claim 5, characterized in that in step 6 the depth autoencoder is trained as follows:
at each training iteration, two images, one from the source domain and one from the target domain, are fed into the network, and the final loss function defined above is minimized by gradient descent.
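Claims 5 and 6 together can be illustrated by the toy numpy sketch below: a one-hidden-layer linear autoencoder (a deliberate simplification of the deep networks in the patent), the combined loss L(W) with a linear-kernel MMD term, and plain gradient descent. Numerical gradients are used only to keep the sketch dependency-free; the patent does not prescribe them:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 6, 3                          # toy pixel and hidden dimensions
XS = rng.normal(size=(8, d))         # stand-in flattened source-domain images
XT = rng.normal(size=(8, d)) + 1.0   # target domain with a shifted distribution
alpha, beta = 1.0, 1.0               # weights of the two reconstruction terms

def total_loss(params):
    """L(W) = MMD + alpha * source reconstruction + beta * target reconstruction."""
    We = params[:d * h].reshape(d, h)              # encoder weights
    Wd = params[d * h:].reshape(h, d)              # decoder weights
    HS, HT = XS @ We, XT @ We                      # hidden-layer features per domain
    mmd = np.sum((HS.mean(0) - HT.mean(0)) ** 2)   # linear-kernel MMD term
    rec_s = np.mean((HS @ Wd - XS) ** 2)           # source-domain reconstruction loss
    rec_t = np.mean((HT @ Wd - XT) ** 2)           # target-domain reconstruction loss
    return mmd + alpha * rec_s + beta * rec_t

def num_grad(f, p, eps=1e-5):
    """Central-difference gradient, adequate for this tiny parameter vector."""
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

params = rng.normal(scale=0.1, size=2 * d * h)
loss_before = total_loss(params)
for _ in range(100):                 # plain gradient descent on L(W)
    params -= 0.01 * num_grad(total_loss, params)
loss_after = total_loss(params)
```

Minimizing L(W) jointly shrinks both reconstruction errors and the gap between the hidden-layer feature distributions of the two domains.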
7. The cross-platform palm print identification method according to claim 1, characterized in that in step 8 the recognition accuracy in the source domain is computed as follows:
first, the low-dimensional coding features P_S of the source-domain images are used as the final features, and a fully connected layer and a Softmax layer are constructed to classify the source-domain palmprint images; comparing the predictions with the ground-truth labels yields the recognition accuracy for palmprint images in the source domain.
8. The cross-platform palm print identification method according to claim 7, characterized in that in step 9 accurate palmprint identification in the target domain is realized as follows:
the low-dimensional coding features P_T obtained from the target-domain palmprint images are fed into the classifier trained in the source domain to obtain the corresponding classification results, thereby realizing identification of target-domain palmprints.
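Claims 7 and 8 can be sketched as follows with synthetic stand-in coding features. The variables `PS`, `PT` and the class structure are invented for the example, and the target features are assumed to be already aligned with the source by the transfer autoencoder; a fully connected layer plus Softmax is trained on source features only and then applied unchanged to the target features:

```python
import numpy as np

rng = np.random.default_rng(1)
h, n_cls = 4, 3                        # coding-feature dimension, number of palm classes
centers = rng.normal(scale=3.0, size=(n_cls, h))   # invented class centers
yS = np.repeat(np.arange(n_cls), 30)               # source-domain labels
PS = centers[yS] + rng.normal(scale=0.3, size=(len(yS), h))  # source features P_S
PT = centers[yS] + rng.normal(scale=0.3, size=(len(yS), h))  # target features P_T,
                                                             # assumed domain-aligned

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fully connected layer + Softmax, trained by gradient descent on the
# cross-entropy loss using source-domain features and labels only.
W, b = np.zeros((h, n_cls)), np.zeros(n_cls)
Y = np.eye(n_cls)[yS]                  # one-hot source labels
for _ in range(200):
    P = softmax(PS @ W + b)
    W -= 0.1 * PS.T @ (P - Y) / len(PS)
    b -= 0.1 * (P - Y).mean(axis=0)

source_acc = (softmax(PS @ W + b).argmax(axis=1) == yS).mean()
# Step 9: the unchanged source classifier is applied to target-domain features.
target_acc = (softmax(PT @ W + b).argmax(axis=1) == yS).mean()
```

No target-domain labels are used anywhere, which is what makes the target-domain identification unsupervised.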
CN201811642465.1A 2018-12-29 2018-12-29 Cross-platform palm print identification method Active CN109800677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811642465.1A CN109800677B (en) 2018-12-29 2018-12-29 Cross-platform palm print identification method

Publications (2)

Publication Number Publication Date
CN109800677A true CN109800677A (en) 2019-05-24
CN109800677B CN109800677B (en) 2021-11-02

Family

ID=66558326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811642465.1A Active CN109800677B (en) 2018-12-29 2018-12-29 Cross-platform palm print identification method

Country Status (1)

Country Link
CN (1) CN109800677B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220337A (en) * 2017-05-25 2017-09-29 北京大学 A kind of cross-media retrieval method based on mixing migration network
CN107403153A (en) * 2017-07-20 2017-11-28 大连大学 A kind of palmprint image recognition methods encoded based on convolutional neural networks and Hash
CN107704926A (en) * 2017-11-23 2018-02-16 清华大学 A kind of depth migration learning method of the cross-cutting analysis of big data
CN108416338A (en) * 2018-04-28 2018-08-17 深圳信息职业技术学院 A kind of non-contact palm print identity authentication method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DEXING ZHONG ET AL: "PALMPRINT AND DORSAL HAND VEIN DUALMODAL BIOMETRICS", 《2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW)》 *
FUZHEN ZHUANG ET AL: "Supervised Representation Learning:Transfer Learning with Deep Autoencoders", 《PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI 2015)》 *
LONG WEN ET AL: "A New Deep Transfer Learning Based on Sparse Auto-Encoder for Fault Diagnosis", 《IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS》 *
PAN XIAO ET AL: "TLR: TRANSFER LATENT REPRESENTATION FOR UNSUPERVISED DOMAIN ADAPTATION", 《2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME) 》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473557A (en) * 2019-08-22 2019-11-19 杭州派尼澳电子科技有限公司 A kind of voice signal decoding method based on depth self-encoding encoder
CN111274973A (en) * 2020-01-21 2020-06-12 同济大学 Crowd counting model training method based on automatic domain division and application
CN111444765A (en) * 2020-02-24 2020-07-24 北京市商汤科技开发有限公司 Image re-recognition method, training method of related model, related device and equipment
CN111444765B (en) * 2020-02-24 2023-11-24 北京市商汤科技开发有限公司 Image re-identification method, training method of related model, related device and equipment
CN112001398A (en) * 2020-08-26 2020-11-27 科大讯飞股份有限公司 Domain adaptation method, domain adaptation device, domain adaptation apparatus, image processing method, and storage medium
CN112001398B (en) * 2020-08-26 2024-04-12 科大讯飞股份有限公司 Domain adaptation method, device, apparatus, image processing method, and storage medium

Also Published As

Publication number Publication date
CN109800677B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN109800677A (en) A kind of cross-platform palm grain identification method
CN104866829B (en) A kind of across age face verification method based on feature learning
CN105488536B (en) A kind of agricultural pests image-recognizing method based on multiple features depth learning technology
CN108509854B (en) Pedestrian re-identification method based on projection matrix constraint and discriminative dictionary learning
Zhao et al. Fingerprint image synthesis based on statistical feature models
CN106096557A (en) A kind of semi-supervised learning facial expression recognizing method based on fuzzy training sample
CN109583482A (en) A kind of infrared human body target image identification method based on multiple features fusion Yu multicore transfer learning
Abrahim et al. RETRACTED ARTICLE: Splicing image forgery identification based on artificial neural network approach and texture features
CN106529499A (en) Fourier descriptor and gait energy image fusion feature-based gait identification method
CN106228129A (en) A kind of human face in-vivo detection method based on MATV feature
CN106408037A (en) Image recognition method and apparatus
CN108388862A (en) Face identification method based on LBP features and nearest neighbor classifier
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN106897669A (en) A kind of pedestrian based on consistent iteration various visual angles transfer learning discrimination method again
CN104268552B (en) One kind is based on the polygonal fine classification sorting technique of part
Li et al. Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes
CN110119695A (en) A kind of iris activity test method based on Fusion Features and machine learning
CN109598671A (en) Image generating method, device, equipment and medium
CN109472733A (en) Image latent writing analysis method based on convolutional neural networks
CN103366182A (en) Face recognition method based on all-supervision non-negative matrix factorization
CN103714331A (en) Facial expression feature extraction method based on point distribution model
CN109492570A (en) A kind of SAR image target recognition method based on multiple dimensioned rarefaction representation
CN113762326A (en) Data identification method, device and equipment and readable storage medium
Singh et al. Efficient face identification and authentication tool for biometric attendance system
Jiang et al. Rotation-invariant feature learning in VHR optical remote sensing images via nested Siamese structure with double center loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210714

Address after: 214063 7th floor, building A3, No.777 Jianshe West Road, Binhu District, Wuxi City, Jiangsu Province

Applicant after: GRASP TECHNOLOGY WUXI Co.,Ltd.

Address before: No. 28, Xianning West Road, Beilin District, Xi'an, Shaanxi 710049

Applicant before: XI'AN JIAOTONG University

GR01 Patent grant