CN111652166B - Palm print and face recognition method based on a cellular neural network hetero-associative memory model - Google Patents

Palm print and face recognition method based on a cellular neural network hetero-associative memory model

Info

Publication number
CN111652166B
CN111652166B
Authority
CN
China
Prior art keywords
picture data
palm print
vector
face
binary
Prior art date
Legal status
Active
Application number
CN202010515194.4A
Other languages
Chinese (zh)
Other versions
CN111652166A
Inventor
韩琦
杨恒
叶刚强
解燕
曹瑞
林日煌
翁腾飞
Current Assignee
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Science and Technology
Priority to CN202010515194.4A
Publication of CN111652166A
Application granted
Publication of CN111652166B
Legal status: Active


Classifications

    • G06V 40/1365 Recognition of biometric patterns: fingerprints or palmprints; matching; classification
    • G06V 40/172 Recognition of biometric patterns: human faces; classification, e.g. identification
    • G06F 18/241 Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/049 Neural networks: temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 Neural networks: learning methods

Abstract

The invention relates to a palm print and face recognition method based on a cellular neural network hetero-associative memory model, belonging to the technical field of intelligent recognition. The method comprises a registration stage and an identification stage. By combining hetero-associative memory with a cellular neural network model, the method converts palm print picture data and face picture data into a set of model parameters for storage. Because the identity information consists of both palm print and face picture data, verification is highly reliable; because the pictures are stored only as parameters, the storage is strongly confidential and has a high safety factor, effectively preventing leakage of personal identity information. Converting pictures into model parameters is simple and practical, gives a good picture recognition effect, and protects the face picture data well.

Description

Palm print and face recognition method based on a cellular neural network hetero-associative memory model
Technical Field
The invention belongs to the technical field of intelligent recognition, and relates to a palm print and face recognition method based on a cellular neural network hetero-associative memory model.
Background
Palm print recognition, like fingerprint recognition, is a biometric identification technique for human individuals that has so far proved comparatively fast and reliable, offering universality, uniqueness, stability, acceptability and resistance to counterfeiting. Different people, and even the two hands of the same person, have different palm print characteristics. With palm print recognition, personal identity can be verified without carrying any auxiliary identification article. Face recognition is a technique that obtains a distribution map of personal facial features through a face recognition instrument, stores the feature values, and matches them to identify a person. These identification technologies are widely applicable in fields such as banking and finance, education and transportation, and are more convenient than body-surface identification technologies such as fingerprint and iris recognition. Given the universal applicability of face recognition and the high security of palm print characteristics, combining palm print recognition with face recognition has clear research value, and multi-modal biometric identification will become a popular research topic and application field.
In the Internet environment, whether face recognition or palm print recognition is used, the biometric data produced by biometric authentication is stored on a computer. This biometric information is stored as computer code and faces threats such as interception, replay and reconstruction. The server side stores large feature databases of users, and once such a database is obtained by hackers or criminals, the consequences cannot be undone. Traditional biometric recognition usually relies on a biometric feature database, and even when the biometric data is protected by an encryption algorithm, the possibility of being cracked still exists in theory. The invention therefore converts the knowledge used in biometric recognition into a set of model parameters, namely synapse values and offset values, providing an implicit model embedded in the environment. Because the model is specific, even if a malicious attacker obtains the detailed parameters of the model, the user's biometric data theoretically cannot be recovered from them, which protects the user's security information. Meanwhile, the user's biometric data involved in the recognition process is extracted once and verified once, i.e., seamless access to user data, further protecting data security.
A traditional neural-network-based face recognition system consists of four parts: preprocessing, feature extraction, a neural-network-based classifier, and a database, with feature extraction and the classifier being the key parts of the face recognition problem. However, this does not fundamentally solve the security problem. Associative memory is a mapping system that stores a particular input together with a particular output: the system associates two patterns such that when one is given, the other is reliably recalled. An associative memory is a content addressable memory (CAM), i.e., a method of retrieving information directly by content: a brain-like device that stores standard patterns and allows a probe carrying partial pattern information to retrieve the standard content. Retrieval requires the system to converge to an equilibrium point representing a standard pattern, and the standard pattern should be robustly retrievable by the probe. There are two types of associative memory: auto-associative and hetero-associative. In auto-associative memory, the retrieved standard pattern is similar to the probe in content and form; in hetero-associative memory, the standard pattern differs from the probe in content and form.
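As a concrete illustration of hetero-association, the classical outer-product memory below links bipolar probe patterns to different target patterns; this is only a toy sketch, not the cellular neural network construction of the invention, and the pattern sizes, counts and random seed are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    probes = rng.choice([-1, 1], size=(3, 15))   # stand-ins for palm print codes
    targets = rng.choice([-1, 1], size=(3, 15))  # stand-ins for face codes

    # store the pairs as a sum of outer products (classical hetero-associative rule)
    W = sum(np.outer(t, p) for t, p in zip(targets, probes))

    # presenting a (slightly corrupted) probe retrieves its paired target
    noisy = probes[0].copy()
    noisy[:2] *= -1                              # flip two components
    recalled = np.sign(W @ noisy)
    print(np.mean(recalled == targets[0]))       # close to 1.0 when cross-talk is small

The recall is tolerant of small corruptions of the probe but degrades as more pairs are stored; the invention instead encodes the association in the parameters of a dynamical cellular neural network.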
Through multi-modal recognition with a hetero-associative neurodynamic algorithm, the invention contributes both to the security of biometric recognition and to the further enrichment and refinement of hetero-associative neurodynamic algorithms.
Disclosure of Invention
In view of the above, the present invention provides a palm print and face recognition method based on a cellular neural network hetero-associative memory model.
In order to achieve the purpose, the invention provides the following technical scheme:
A palm print and face recognition method based on a cellular neural network hetero-associative memory model comprises registration and identification;
1) Registration:
S1: collecting palm print picture data and face picture data of a population, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model with six cellular neural networks;
S4: respectively calculating the parameters of the six cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the cellular neural network palm print and face recognition model constructed in step S3, thereby finally determining the six cellular neural network palm print and face recognition models;
2) Identification:
S5: acquiring palm print picture data and face picture data of the visitor, and preprocessing them according to the method of step S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification.
Optionally, in S6, during identity authentication the palm print picture data of the user is associated with face picture data, and the face picture data of the user captured by the camera is checked against the face picture data output by the model.
Optionally, the palm print picture data and the face picture data each comprise r groups of pictures, numbered $P_1, P_2, \ldots, P_r$ for the palm print picture data and $F_1, F_2, \ldots, F_r$ for the faces;
① Gray-scale conversion: all palm print picture data and face picture data obtained in step S1 are processed into gray-scale image matrices of N rows and M columns, $P_1', P_2', \ldots, P_r', F_1', F_2', \ldots, F_r'$, with $N = 3n$, $M = 3m$, $n \in \mathbb{N}^+$, $m \in \mathbb{N}^+$;
② Compression of the gray-scale image matrices: a compression template

$$L = \begin{pmatrix} l_2 & l_1 & l_2 \\ l_1 & l_0 & l_1 \\ l_2 & l_1 & l_2 \end{pmatrix}$$

is designed satisfying $l_0 + 4 l_1 + 4 l_2 = 1$, and the N-row, M-column gray-scale image matrices of step ① are compressed into n-row, m-column gray-scale image matrices $P_1'', P_2'', \ldots, P_r'', F_1'', F_2'', \ldots, F_r''$;
The specific compression process is as follows:
1) the gray-scale image matrix of N rows and M columns is decomposed into 3-row, 3-column small gray-scale image matrices;
2) each 3 × 3 small matrix is multiplied element-wise by the compression template, the elements of the resulting matrix are summed, and the sum is rounded to the nearest integer; the resulting value necessarily lies between 0 and 255;
3) the values from step 2) form a new matrix of n rows and m columns;
③ From each n-row, m-column gray-scale image matrix, the number of occurrences of each gray value is counted and arranged into a 16 × 16 gray-statistics histogram matrix $P_1''', P_2''', \ldots, P_r''', F_1''', F_2''', \ldots, F_r'''$. An element at position $(i, j)$ of such a matrix, with $1 \le i \le 16$ and $1 \le j \le 16$, is indexed by $k = 16(i-1) + j$; then $k - 1$ is the gray value represented by the element, and the value of the element is the number of occurrences of that gray value;
the 16 x 16 gray statistics histogram matrix obtained in S2-c is converted into 1 x 256 vectors in rows, for example,
Figure RE-GDA0002564328400000032
is converted into
Figure RE-GDA0002564328400000033
Is converted into
Figure RE-GDA0002564328400000034
⑤ The gray-statistics histogram vectors obtained in step S2-④ are converted into $k = 6$ layers of 1 × 256 binary vectors; the conversion steps are:
1) binary conversion of the gray-statistics histogram input vectors
Each element of a gray-statistics histogram vector is converted into a 6-bit binary number. Taking the highest bit of each of the 256 binary numbers, in order, forms a new 1 × 256 vector $p_i^{(1)}$, which serves as the first-layer binary input vector of the i-th palm print picture data; in the same way the first-layer binary input vector set $p^{(1)} = \{p_1^{(1)}, p_2^{(1)}, \ldots, p_r^{(1)}\}$ is obtained.
Taking the second-highest bit of each binary number, in order, forms another 1 × 256 vector $p_i^{(2)}$, which serves as the second-layer binary input vector of the i-th palm print picture data; in the same way the second-layer binary input vector set $p^{(2)} = \{p_1^{(2)}, p_2^{(2)}, \ldots, p_r^{(2)}\}$ is obtained.
Continuing in this manner yields the 6 binary input vector sets of the palm print picture data, $p^{(j)} = \{p_1^{(j)}, p_2^{(j)}, \ldots, p_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, where $p_i^{(j)}$ denotes the j-th layer binary input vector of the i-th palm print picture data.
Here k = 6 because, over the 400 face pictures selected from the face database, the peak count of any gray value in the gray-scale image matrices is $63 < 2^6$, so six bits suffice;
2) binary conversion of the gray-statistics histogram output vectors
The binary conversion of the gray-statistics histogram output vectors follows the same steps as in ⑤-1); in the same way the 6 binary output vector sets of the face picture data are obtained, $f^{(j)} = \{f_1^{(j)}, f_2^{(j)}, \ldots, f_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, $i \in \{1, 2, \ldots, r\}$, where $f_i^{(j)}$ denotes the j-th layer binary output vector of the i-th face picture data;
The palm print picture data and the face picture data thus yield 6 × r binary vectors each, and each binary input vector $p_i^{(j)}$ of the palm print picture data corresponds one-to-one with the binary output vector $f_i^{(j)}$ of the face picture data, $i \in \{1, 2, \ldots, r\}$, $j \in \{1, 2, \ldots, 6\}$. A runnable sketch of this preprocessing pipeline is given below.
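The following Python sketch illustrates steps ① through ⑤ for one picture. It is illustrative only: the template weights l0 = 1/3, l1 = l2 = 1/12 (which satisfy $l_0 + 4l_1 + 4l_2 = 1$) and the 48 × 48 test size are assumptions, not values fixed by the invention.

    import numpy as np

    def preprocess(gray, l0=1/3, l1=1/12, l2=1/12):
        # step ②: compress each 3x3 block with the template (l0 centre, l1 edges, l2 corners)
        L = np.array([[l2, l1, l2],
                      [l1, l0, l1],
                      [l2, l1, l2]])
        N, M = gray.shape
        n, m = N // 3, M // 3
        blocks = gray.reshape(n, 3, m, 3).transpose(0, 2, 1, 3)
        small = np.rint((blocks * L).sum(axis=(2, 3))).astype(int)  # values stay in 0..255

        # step ③: 16x16 gray-statistics histogram (entry k-1 counts gray value k-1)
        hist = np.bincount(small.ravel(), minlength=256).reshape(16, 16)

        # step ④: read out row by row into a 1x256 vector
        vec = hist.ravel()

        # step ⑤: six binary layers, most significant bit first; assumes every
        # count fits in 6 bits (the cited database has peak count 63)
        return np.array([(vec >> b) & 1 for b in range(5, -1, -1)])

    gray = np.random.default_rng(1).integers(0, 256, (48, 48))
    print(preprocess(gray).shape)  # (6, 256)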
Optionally, step S3 specifically comprises:
The k-th layer binary input vector set of the palm prints is taken as the input vector set of the associative memory, $p^{(k)} = \{p_1^{(k)}, p_2^{(k)}, \ldots, p_r^{(k)}\}$, where $p_i^{(k)} = (p_{i1}^{(k)}, p_{i2}^{(k)}, \ldots, p_{i256}^{(k)})^T$, $i \in \{1, 2, \ldots, r\}$, $j \in \{1, 2, \ldots, 256\}$; $p_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary input vector of the i-th palm print, and $p_{ij}^{(k)}$ denotes the value of the j-th pixel point of that vector;
The k-th layer binary output vector set of the faces is taken as the output vector set of the associative memory, $f^{(k)} = \{f_1^{(k)}, f_2^{(k)}, \ldots, f_r^{(k)}\}$, where $f_i^{(k)} = (f_{i1}^{(k)}, f_{i2}^{(k)}, \ldots, f_{i256}^{(k)})^T$, $i \in \{1, 2, \ldots, r\}$, $j \in \{1, 2, \ldots, 256\}$; $f_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary output vector of the i-th face, and $f_{ij}^{(k)}$ denotes the value of the j-th pixel point of that vector;
The k-th layer (k ∈ {1, 2, …, 6}) cellular neural network model by which palm print picture data retrieves face picture data is constructed as

$$\dot{x}(t) = -C x(t) + A f(x(t)) + D p^{(k)} + V \qquad (1)$$

where $x = (x_1, x_2, \ldots, x_{256})^T$ is the state vector, $p^{(k)} = (p_1^{(k)}, p_2^{(k)}, \ldots, p_{256}^{(k)})^T$ is the input vector, $V = (v_1, v_2, \ldots, v_{256})^T$ is the offset vector, $C = \mathrm{diag}(c_1, c_2, \ldots, c_{256})$, $i \in \{1, 2, \ldots, 256\}$, and $f(x) = (f(x_1), \ldots, f(x_{256}))^T$ is the activation function.
In formula (1), the matrix $A = (a_{ij})_{256 \times 256}$ is generated from the 3 × 3 feedback template $(a_{-1,-1}, a_{-1,0}, a_{-1,1}, a_{0,-1}, a_{0,0}, a_{0,1}, a_{1,-1}, a_{1,0}, a_{1,1})$ by arranging the 256 cells on a 16 × 16 grid, so that each cell is connected only to the cells in its 3 × 3 neighborhood; the matrix $D = (d_{ij})_{256 \times 256}$ is defined from the control template $(d_{-1,-1}, \ldots, d_{1,1})$ in the same way as A.
Define the bipolar vector set $\{x = (x_1, x_2, \ldots, x_{256})^T \in \mathbb{R}^{256} \mid x_i = 1 \text{ or } x_i = -1,\ i = 1, 2, \ldots, 256\}$ and the saturation region $C(f^{(k)}) = \{x = (x_1, x_2, \ldots, x_{256})^T \in \mathbb{R}^{256} \mid x_i f_i^{(k)} > 1,\ i = 1, 2, \ldots, 256\}$. On $C(f^{(k)})$ the activation output equals $f^{(k)}$, so equation (1) becomes

$$\dot{x}(t) = -C x(t) + A f^{(k)} + D p^{(k)} + V \qquad (2)$$
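A minimal simulation of retrieval with model (1) can be written as follows; the piecewise-linear activation $f(s) = (|s+1| - |s-1|)/2$ is the standard cellular neural network choice and is assumed here, as are the step size and step count.

    import numpy as np

    def recall(A, D, C, V, p, steps=2000, dt=0.01):
        # forward-Euler integration of equation (1): x' = -Cx + A f(x) + D p + V
        x = np.zeros(len(V))                           # x_i(0) = 0, as in the derivation below
        for _ in range(steps):
            fx = 0.5 * (np.abs(x + 1) - np.abs(x - 1)) # assumed piecewise-linear activation
            x = x + dt * (-C @ x + A @ fx + D @ p + V)
        return np.sign(x)                              # saturated states encode the output bits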
Optionally, in step S4, the specific steps for calculating the k-th layer cellular neural network parameters of step S3 are:
S41: equation (2) is written component-wise as

$$\dot{x}_i(t) = -c_i x_i(t) + \sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i \qquad (3)$$

In formula (3), let $x_i(0) = 0$;
(i) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i > c_i$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i < -c_i$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1.
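This can be verified directly: with the stored patterns fixed, (3) is a linear scalar equation. Writing $s_i = \sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i$,

$$\dot{x}_i(t) = -c_i x_i(t) + s_i, \quad x_i(0) = 0 \;\Longrightarrow\; x_i(t) = \frac{s_i}{c_i}\left(1 - e^{-c_i t}\right) \longrightarrow \frac{s_i}{c_i} \ \ (t \to \infty),$$

and since $c_i > 0$, the limit exceeds 1 exactly when $s_i > c_i$ (case (i)) and lies below -1 exactly when $s_i < -c_i$ (case (ii)).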
From the above theorem the following inference is obtained.
Inference 1: let $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i = \lambda_i f_i^{(k)}$ with $\lambda_i > \max\{c_i\}$, $c_i$ constant, $i \in \{1, 2, \ldots, 256\}$. When $f_i^{(k)} = 1$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when $f_i^{(k)} = -1$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1.
The following symbols are introduced: $\lambda_i > 0$,

$LD = (d_{-1,-1}, d_{-1,0}, d_{-1,1}, d_{0,-1}, d_{0,0}, d_{0,1}, d_{1,-1}, d_{1,0}, d_{1,1})^T$,

$LA = (a_{-1,-1}, a_{-1,0}, a_{-1,1}, a_{0,-1}, a_{0,0}, a_{0,1}, a_{1,-1}, a_{1,0}, a_{1,1})^T$,

$l \in \{1, 2, \ldots, r\}$, $q \in \{1, 2, \ldots, 16\}$, together with the auxiliary matrices that collect, for each stored pair l and each row q of the 16 × 16 cell grid, the 3 × 3 neighborhoods of the stored output patterns $f_l^{(k)}$ and input patterns $p_l^{(k)}$.
According to Inference 1, equations (4), (5) and (6) are obtained: equation (4) determines the offset vector V, while equations (5) and (6) express the constraint of Inference 1 as linear equations in the template parameter vectors LA and LD, respectively.
Equation (5) is rearranged into matrix form as equation (7); solving equation (7) by the pseudo-inverse yields LA, as equation (8), where pinv(·) denotes the pseudo-inverse of a matrix.
Equation (6) is likewise rearranged into matrix form as equation (9); solving equation (9) by the pseudo-inverse yields LD, as equation (10).
S42: all vectors of the binary output vector sets $f^{(k)}$ of the face picture data obtained in step S2 are assembled into a matrix Ω, and likewise all vectors of the binary input vector sets $p^{(k)}$ of the palm print picture data are assembled into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD.
S43: from equations (8) and (10) of step S42, the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data are obtained and expanded into the parameters A and D of formula (1); the offset vector V is obtained from formula (4); from A, D, V and C, the k-th layer cellular neural network model by which palm print picture data retrieves face picture data is obtained;
The first-layer to sixth-layer cellular neural network palm-print-to-face recognition models are determined by carrying out the above steps for each layer. A hedged sketch of this design procedure follows.
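Equations (4) through (10) are reproduced only as images in the source record, so the following Python sketch is a hedged reconstruction: it assumes the design goal stated in Inference 1 (for every stored pair l and every cell i, $\sum_j a_{ij} f_{lj}^{(k)} + \sum_j d_{ij} p_{lj}^{(k)} + v_i = \lambda f_{li}^{(k)}$, with shared 3 × 3 templates LA and LD on a 16 × 16 grid) and solves it by pseudo-inverse; the constant λ = 2 and the zero padding at the grid borders are assumptions.

    import numpy as np

    GRID = 16

    def neighborhoods(X):
        # X: (r, 256) bipolar patterns -> (r*256, 9) matrix whose rows are the
        # 3x3 neighborhoods of every cell, zero-padded at the grid borders
        r = X.shape[0]
        imgs = np.pad(X.reshape(r, GRID, GRID), ((0, 0), (1, 1), (1, 1)))
        cols = [imgs[:, 1 + dr:1 + dr + GRID, 1 + dc:1 + dc + GRID].reshape(r, -1)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        return np.stack(cols, axis=-1).reshape(-1, 9)

    def design(P, F, lam=2.0):
        # P: (r, 256) palm print input patterns, F: (r, 256) face output patterns
        M = np.hstack([neighborhoods(F), neighborhoods(P)])  # plays the role of (Omega | Xi)
        sol = np.linalg.pinv(M) @ (lam * F.reshape(-1))      # least-squares templates
        LA, LD = sol[:9], sol[9:]
        # the offset vector V absorbs the per-cell residual, in the spirit of equation (4)
        V = (lam * F - (M @ sol).reshape(F.shape)).mean(axis=0)
        return LA, LD, V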
Optionally, in steps S5 and S6, the specific steps for retrieving the face picture data of the user from the palm print picture data are:
S51: two sets of equipment are prepared. The first acquires the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of that palm print picture data, $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $p^{(j)}$ denotes the j-th layer binary input vector of the palm print picture data;
S52: the second set of equipment is a camera that acquires the face picture data F of the visitor, yielding the 6 binary output vectors of that face picture data, $\tilde{f}^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $\tilde{f}^{(j)}$ denotes the j-th layer binary output vector of the face picture data;
S61: the first-layer to sixth-layer palm print picture data $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, obtained in step S51 are input into the six cellular neural network models respectively, giving the first-layer to sixth-layer output data $f^{(j)}$, $j \in \{1, 2, \ldots, 6\}$;
S62: the output data $f^{(j)}$ are then matched against the face picture data of step S52: the output vectors $\tilde{f}^{(j)}$ obtained in step S52 are compared, layer by layer from the first to the sixth, with the model output vectors $f^{(j)}$ obtained in step S61;
S63: let H denote the matching success rate of the face picture data; identity authentication is judged successful if the matching degree H is greater than the preset matching threshold h, where h lies between 0 and 1; otherwise the matching fails. A sketch of this matching rule follows.
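A minimal sketch of the S61-S63 matching rule; taking H as the fraction of agreeing components across the six layers, and the threshold h = 0.9, are assumptions, since the source leaves the exact definition of the matching degree open.

    import numpy as np

    def authenticate(model_out, observed, h=0.9):
        # model_out, observed: (6, 256) arrays of bipolar (+1/-1) layer vectors
        H = float(np.mean(model_out == observed))  # fraction of matching components
        return H > h, H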
The invention has the following beneficial effects: the method combines hetero-associative memory with a cellular neural network model and converts the palm print picture data and face picture data into a set of parameters for storage. Because the identity information consists of both palm print and face picture data, verification is highly reliable; because the pictures are stored only as parameters, the storage is strongly confidential and has a high safety factor, effectively preventing leakage of personal identity information. Converting pictures into model parameters is simple and practical, gives a good picture recognition effect, and protects the face picture data well.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of a method for image recognition according to the present invention;
FIG. 2 is a schematic diagram of solving the parameters of the cellular neural network face picture data recognition model.
Detailed Description
The following describes embodiments of the present invention through specific examples, and other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure herein. The invention may also be implemented or applied in other different embodiments, and the details in this specification may be modified or changed in various respects without departing from the spirit of the invention. It should be noted that the drawings provided with the following embodiments illustrate the basic idea of the invention only schematically, and the features of the following embodiments may be combined with each other in the absence of conflict.
The drawings are intended only to illustrate the invention, not to limit it, and are not necessarily drawn to scale. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; certain well-known structures and their descriptions may also be omitted, as will be understood by those skilled in the art.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar components. Terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientations shown in the drawings, are used only for convenience and simplification of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they are therefore illustrative only and are not to be construed as limiting the invention. The specific meaning of such terms can be understood by those skilled in the art according to the specific situation.
Referring to FIG. 1 and FIG. 2, the palm print and face recognition method based on the cellular neural network hetero-associative memory model comprises two steps: registration and identification.
1) Registration
S1: collecting palm print picture data and face picture data of a population, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model with six cellular neural networks;
S4: respectively calculating the parameters of the six cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the cellular neural network palm print and face recognition model constructed in step S3, thereby finally determining the six cellular neural network palm print and face recognition models;
2) Identification
S5: acquiring palm print picture data and face picture data of the visitor, and preprocessing them according to the method of step S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification.
During identity authentication, the palm print picture data of the user is associated with face picture data, and the face picture data of the user captured by the camera is checked against the face picture data output by the model.
The palm prints and the faces each comprise r groups of pictures, numbered $P_1, P_2, \ldots, P_r$ for the palm print picture data and $F_1, F_2, \ldots, F_r$ for the faces.
① Gray-scale conversion: all palm print picture data and face picture data obtained in step S1 are processed into gray-scale image matrices of N rows and M columns, $P_1', P_2', \ldots, P_r', F_1', F_2', \ldots, F_r'$, with $N = 3n$, $M = 3m$, $n \in \mathbb{N}^+$, $m \in \mathbb{N}^+$;
② Compression of the gray-scale image matrices: a compression template

$$L = \begin{pmatrix} l_2 & l_1 & l_2 \\ l_1 & l_0 & l_1 \\ l_2 & l_1 & l_2 \end{pmatrix}$$

is designed satisfying $l_0 + 4 l_1 + 4 l_2 = 1$, and the N-row, M-column gray-scale image matrices of step ① are compressed into n-row, m-column gray-scale image matrices $P_1'', P_2'', \ldots, P_r'', F_1'', F_2'', \ldots, F_r''$;
The specific compression process is as follows:
1) the gray-scale image matrix of N rows and M columns is decomposed into 3-row, 3-column small gray-scale image matrices;
2) each 3 × 3 small matrix is multiplied element-wise by the compression template, the elements of the resulting matrix are summed, and the sum is rounded to the nearest integer; the resulting value necessarily lies between 0 and 255;
3) the values from step 2) form a new matrix of n rows and m columns;
③ From each n-row, m-column gray-scale image matrix, the number of occurrences of each gray value is counted and arranged into a 16 × 16 gray-statistics histogram matrix $P_1''', P_2''', \ldots, P_r''', F_1''', F_2''', \ldots, F_r'''$. An element at position $(i, j)$, $1 \le i \le 16$, $1 \le j \le 16$, is indexed by $k = 16(i-1) + j$; then $k - 1$ is the gray value represented by the element, and the value of the element is the number of occurrences of that gray value;
④ Each 16 × 16 gray-statistics histogram matrix obtained in step S2-③ is read out row by row into a 1 × 256 vector; each matrix $P_i'''$ and $F_i'''$ is thus converted into a 1 × 256 gray-statistics histogram vector;
⑤ The gray-statistics histogram vectors obtained in step S2-④ are converted into $k = 6$ layers of 1 × 256 binary vectors; the conversion steps are:
1) binary conversion of the gray-statistics histogram input vectors
Each element of a gray-statistics histogram vector is converted into a 6-bit binary number. Taking the highest bit of each of the 256 binary numbers, in order, forms a new 1 × 256 vector $p_i^{(1)}$, which serves as the first-layer binary input vector of the i-th palm print picture data; in the same way the first-layer binary input vector set $p^{(1)} = \{p_1^{(1)}, p_2^{(1)}, \ldots, p_r^{(1)}\}$ is obtained.
Taking the second-highest bit of each binary number, in order, forms another 1 × 256 vector $p_i^{(2)}$, which serves as the second-layer binary input vector of the i-th palm print picture data; in the same way the second-layer binary input vector set $p^{(2)} = \{p_1^{(2)}, p_2^{(2)}, \ldots, p_r^{(2)}\}$ is obtained.
Continuing in this manner yields the 6 binary input vector sets of the palm print picture data, $p^{(j)} = \{p_1^{(j)}, p_2^{(j)}, \ldots, p_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, where $p_i^{(j)}$ denotes the j-th layer binary input vector of the i-th palm print picture data.
Note: k = 6 because, over the 400 face pictures selected from the face database, the peak count of any gray value in the gray-scale image matrices is $63 < 2^6$.
2) binary conversion of the gray-statistics histogram output vectors
The binary conversion of the gray-statistics histogram output vectors follows the same steps as in ⑤-1); in the same way the 6 binary output vector sets of the face picture data are obtained, $f^{(j)} = \{f_1^{(j)}, f_2^{(j)}, \ldots, f_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, $i \in \{1, 2, \ldots, r\}$, where $f_i^{(j)}$ denotes the j-th layer binary output vector of the i-th face picture data.
Note: the palm print picture data and the face picture data yield 6 × r binary vectors each, and each binary input vector $p_i^{(j)}$ of the palm print picture data corresponds one-to-one with the binary output vector $f_i^{(j)}$ of the face picture data, $i \in \{1, 2, \ldots, r\}$, $j \in \{1, 2, \ldots, 6\}$.
The specific content of step S3 is:
The k-th layer binary input vector set of the palm prints is taken as the input vector set of the associative memory, $p^{(k)} = \{p_1^{(k)}, \ldots, p_r^{(k)}\}$, where $p_i^{(k)} = (p_{i1}^{(k)}, \ldots, p_{i256}^{(k)})^T$, $i \in \{1, \ldots, r\}$, $j \in \{1, \ldots, 256\}$; $p_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary input vector of the i-th palm print, and $p_{ij}^{(k)}$ denotes the value of its j-th pixel point.
The k-th layer binary output vector set of the faces is taken as the output vector set of the associative memory, $f^{(k)} = \{f_1^{(k)}, \ldots, f_r^{(k)}\}$, where $f_i^{(k)} = (f_{i1}^{(k)}, \ldots, f_{i256}^{(k)})^T$, $i \in \{1, \ldots, r\}$, $j \in \{1, \ldots, 256\}$; $f_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary output vector of the i-th face, and $f_{ij}^{(k)}$ denotes the value of its j-th pixel point.
The k-th layer (k ∈ {1, 2, …, 6}) cellular neural network model by which palm print picture data retrieves face picture data is constructed as

$$\dot{x}(t) = -C x(t) + A f(x(t)) + D p^{(k)} + V \qquad (1)$$

where $x = (x_1, \ldots, x_{256})^T$ is the state vector, $p^{(k)} = (p_1^{(k)}, \ldots, p_{256}^{(k)})^T$ is the input vector, $V = (v_1, \ldots, v_{256})^T$ is the offset vector, $C = \mathrm{diag}(c_1, \ldots, c_{256})$, $i \in \{1, \ldots, 256\}$, and $f(x) = (f(x_1), \ldots, f(x_{256}))^T$ is the activation function.
In formula (1), the matrix $A = (a_{ij})_{256 \times 256}$ is generated from the 3 × 3 feedback template $(a_{-1,-1}, \ldots, a_{1,1})$ by arranging the 256 cells on a 16 × 16 grid, so that each cell is connected only to the cells in its 3 × 3 neighborhood; the matrix $D = (d_{ij})_{256 \times 256}$ is defined from the control template $(d_{-1,-1}, \ldots, d_{1,1})$ in the same way as A.
Define the bipolar vector set $\{x = (x_1, \ldots, x_{256})^T \in \mathbb{R}^{256} \mid x_i = 1 \text{ or } x_i = -1,\ i = 1, \ldots, 256\}$ and the saturation region $C(f^{(k)}) = \{x \in \mathbb{R}^{256} \mid x_i f_i^{(k)} > 1,\ i = 1, \ldots, 256\}$. On $C(f^{(k)})$ the activation output equals $f^{(k)}$, so equation (1) becomes

$$\dot{x}(t) = -C x(t) + A f^{(k)} + D p^{(k)} + V \qquad (2)$$
The specific steps of calculating the k-th layer cellular neural network parameters of step S3 in step S4 are as follows:
S41: equation (2) can be written component-wise as

$$\dot{x}_i(t) = -c_i x_i(t) + \sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i \qquad (3)$$

According to the literature (Han Qi, Stability of neural networks and its application in associative memory research [D], Chongqing University, 2012), in equation (3), let $x_i(0) = 0$;
(i) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i > c_i$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i < -c_i$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1.
From the above theorem the following inference is obtained.
Inference 1: let $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i = \lambda_i f_i^{(k)}$ with $\lambda_i > \max\{c_i\}$, $c_i$ constant, $i \in \{1, 2, \ldots, 256\}$. When $f_i^{(k)} = 1$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when $f_i^{(k)} = -1$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1.
The following symbols are introduced: $\lambda_i > 0$,

$LD = (d_{-1,-1}, d_{-1,0}, d_{-1,1}, d_{0,-1}, d_{0,0}, d_{0,1}, d_{1,-1}, d_{1,0}, d_{1,1})^T$,

$LA = (a_{-1,-1}, a_{-1,0}, a_{-1,1}, a_{0,-1}, a_{0,0}, a_{0,1}, a_{1,-1}, a_{1,0}, a_{1,1})^T$,

$l \in \{1, 2, \ldots, r\}$, $q \in \{1, 2, \ldots, 16\}$, together with the auxiliary matrices that collect, for each stored pair l and each row q of the 16 × 16 cell grid, the 3 × 3 neighborhoods of the stored output patterns $f_l^{(k)}$ and input patterns $p_l^{(k)}$.
According to Inference 1, equations (4), (5) and (6) are obtained: equation (4) determines the offset vector V, while equations (5) and (6) express the constraint of Inference 1 as linear equations in the template parameter vectors LA and LD, respectively.
Equation (5) can be rearranged into matrix form as equation (7); solving equation (7) by the pseudo-inverse then yields LA, as equation (8), where pinv(·) denotes the pseudo-inverse of a matrix.
Equation (6) can likewise be rearranged into matrix form as equation (9); solving equation (9) by the pseudo-inverse then yields LD, as equation (10).
S42: all vectors of the binary output vector sets $f^{(k)}$ of the face picture data obtained in step S2 are assembled into a matrix Ω, and likewise all vectors of the binary input vector sets $p^{(k)}$ of the palm print picture data are assembled into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD.
S43: from equations (8) and (10) of step S42, the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data are obtained and expanded into the parameters A and D of formula (1); the offset vector V is obtained from formula (4); from A, D, V and C, the k-th layer cellular neural network model by which palm print picture data retrieves face picture data is obtained.
The first-layer to sixth-layer cellular neural network palm-print-to-face recognition models are therefore determined by carrying out the above steps for each layer.
The specific steps of retrieving the face picture data of the user from the palm print picture data in steps S5 and S6 are as follows:
S51: two sets of equipment are prepared. The first acquires the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of that palm print picture data, $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $p^{(j)}$ denotes the j-th layer binary input vector of the palm print picture data;
S52: the second set of equipment is a camera that acquires the face picture data F of the visitor, yielding the 6 binary output vectors of that face picture data, $\tilde{f}^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $\tilde{f}^{(j)}$ denotes the j-th layer binary output vector of the face picture data;
S61: the first-layer to sixth-layer palm print picture data $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, obtained in step S51 are input into the six cellular neural network models respectively, giving the first-layer to sixth-layer output data $f^{(j)}$, $j \in \{1, 2, \ldots, 6\}$;
S62: the output data $f^{(j)}$ are then matched against the face picture data of step S52: the output vectors $\tilde{f}^{(j)}$ obtained in step S52 are compared, layer by layer from the first to the sixth, with the model output vectors $f^{(j)}$ obtained in step S61;
S63: let H denote the matching success rate of the face picture data; identity authentication is judged successful if the matching degree H is greater than the preset matching threshold h, where h lies between 0 and 1; otherwise the matching fails.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (5)

1. A palm print and face recognition method based on a cellular neural network hetero-associative memory model, characterized by comprising registration and identification;
1) Registration:
S1: collecting palm print picture data and face picture data of a population, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model with six cellular neural networks;
S4: respectively calculating the parameters of the six cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the cellular neural network palm print and face recognition model constructed in step S3, thereby finally determining the six cellular neural network palm print and face recognition models;
2) Identification:
S5: acquiring palm print picture data and face picture data of the visitor, and preprocessing them according to the method of step S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification;
the palm print picture data and the face picture data respectively comprise r groups of pictures, and the number corresponding to the palm print picture data is P 1 ,P 2 ,…,P r And the number corresponding to the face is F 1 ,F 2 ,…,F r
Converting the gray level picture, and converting all the palm print picture data and the human face obtained in the step S1Processing the picture data into a gray-scale image matrix with N rows and M columns: p 1 ′,P 2 ′,…,P r ′,F 1 ′,F 2 ′,…,F r ', such that N ═ 3N, M ═ 3M; n belongs to N +, m belongs to N +;
compressing the gray level image matrix and designing a compression template
Figure FDA0003744260460000011
And satisfy l 0 +4×l 1 +4×l 2 1 is ═ 1; compressing the gray map matrix of N rows and M columns in the first step into a gray map matrix P of N rows and M columns 1 ″,P 2 ″,…,P r ″,F 1 ″,F 2 ″,…,F r ″;
the specific compression process is as follows:
1) the gray-scale image matrix of N rows and M columns is decomposed into 3-row, 3-column small gray-scale image matrices;
2) each 3 × 3 small matrix is multiplied element-wise by the compression template, the elements of the resulting matrix are summed, and the sum is rounded to the nearest integer; the resulting value necessarily lies between 0 and 255;
3) the values from step 2) form a new matrix of n rows and m columns;
③ From each n-row, m-column gray-scale image matrix, the number of occurrences of each gray value is counted and arranged into a 16 × 16 gray-statistics histogram matrix $P_1''', P_2''', \ldots, P_r''', F_1''', F_2''', \ldots, F_r'''$. An element at position $(i, j)$, $1 \le i \le 16$, $1 \le j \le 16$, is indexed by $k = 16(i-1) + j$; then $k - 1$ is the gray value represented by the element, and the value of the element is the number of occurrences of that gray value;
④ Each 16 × 16 gray-statistics histogram matrix obtained in step S2-③ is read out row by row into a 1 × 256 vector; each matrix $P_i'''$ and $F_i'''$ is thus converted into a 1 × 256 gray-statistics histogram vector;
⑤ The gray-statistics histogram vectors obtained in step S2-④ are converted into $k = 6$ layers of 1 × 256 binary vectors; the conversion steps are:
1) binary conversion of the gray-statistics histogram input vectors
Each element of a gray-statistics histogram vector is converted into a 6-bit binary number. Taking the highest bit of each of the 256 binary numbers, in order, forms a new 1 × 256 vector $p_i^{(1)}$, which serves as the first-layer binary input vector of the i-th palm print picture data; in the same way the first-layer binary input vector set $p^{(1)} = \{p_1^{(1)}, p_2^{(1)}, \ldots, p_r^{(1)}\}$ is obtained.
Taking the second-highest bit of each binary number, in order, forms another 1 × 256 vector $p_i^{(2)}$, which serves as the second-layer binary input vector of the i-th palm print picture data; in the same way the second-layer binary input vector set $p^{(2)} = \{p_1^{(2)}, p_2^{(2)}, \ldots, p_r^{(2)}\}$ is obtained.
Continuing in this manner yields the 6 binary input vector sets of the palm print picture data, $p^{(j)} = \{p_1^{(j)}, p_2^{(j)}, \ldots, p_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, where $p_i^{(j)}$ denotes the j-th layer binary input vector of the i-th palm print picture data;
wherein k = 6 because, over the 400 face pictures selected from the face database, the peak count of any gray value in the gray-scale image matrices is $63 < 2^6$;
2) binary conversion of the gray-statistics histogram output vectors
The binary conversion of the gray-statistics histogram output vectors follows the same steps as in 1) binary conversion of the gray-statistics histogram input vectors; in the same way the 6 binary output vector sets of the face picture data are obtained, $f^{(j)} = \{f_1^{(j)}, f_2^{(j)}, \ldots, f_r^{(j)}\}$, $j \in \{1, 2, \ldots, 6\}$, $i \in \{1, 2, \ldots, r\}$, where $f_i^{(j)}$ denotes the j-th layer binary output vector of the i-th face picture data;
wherein the palm print picture data and the face picture data yield 6 × r binary vectors each, and each binary input vector $p_i^{(j)}$ of the palm print picture data corresponds one-to-one with the binary output vector $f_i^{(j)}$ of the face picture data, $i \in \{1, 2, \ldots, r\}$, $j \in \{1, 2, \ldots, 6\}$.
2. The palm print and face recognition method based on the cellular neural network hetero-associative memory model as claimed in claim 1, wherein: in S6, during identity authentication the palm print picture data of the user is associated with face picture data, and the face picture data of the user captured by the camera is checked against the face picture data output by the model.
3. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 1, wherein step S3 specifically comprises:
taking the k-th layer binary input vector set of the palm prints as the input vector set of the associative memory, $p^{(k)} = \{p_1^{(k)}, \ldots, p_r^{(k)}\}$, where $p_i^{(k)} = (p_{i1}^{(k)}, \ldots, p_{i256}^{(k)})^T$, $i \in \{1, \ldots, r\}$, $j \in \{1, \ldots, 256\}$; $p_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary input vector of the i-th palm print, and $p_{ij}^{(k)}$ denotes the value of its j-th pixel point;
taking the k-th layer binary output vector set of the faces as the output vector set of the associative memory, $f^{(k)} = \{f_1^{(k)}, \ldots, f_r^{(k)}\}$, where $f_i^{(k)} = (f_{i1}^{(k)}, \ldots, f_{i256}^{(k)})^T$, $i \in \{1, \ldots, r\}$, $j \in \{1, \ldots, 256\}$; $f_i^{(k)}$ denotes the vector formed by all pixel points of the k-th layer binary output vector of the i-th face, and $f_{ij}^{(k)}$ denotes the value of its j-th pixel point;
constructing the k-th layer (k ∈ {1, 2, …, 6}) cellular neural network model by which palm print picture data retrieves face picture data, specifically:

$$\dot{x}(t) = -C x(t) + A f(x(t)) + D p^{(k)} + V \qquad (1)$$

where $x = (x_1, \ldots, x_{256})^T$ is the state vector, $p^{(k)} = (p_1^{(k)}, \ldots, p_{256}^{(k)})^T$ is the input vector, $V = (v_1, \ldots, v_{256})^T$ is the offset vector, $C = \mathrm{diag}(c_1, \ldots, c_{256})$, $i \in \{1, \ldots, 256\}$, and $f(x) = (f(x_1), \ldots, f(x_{256}))^T$ is the activation function;
in formula (1), the matrix $A = (a_{ij})_{256 \times 256}$ is generated from the 3 × 3 feedback template $(a_{-1,-1}, \ldots, a_{1,1})$ by arranging the 256 cells on a 16 × 16 grid, so that each cell is connected only to the cells in its 3 × 3 neighborhood; the matrix $D = (d_{ij})_{256 \times 256}$ is defined from the control template $(d_{-1,-1}, \ldots, d_{1,1})$ in the same way as A;
defining the bipolar vector set $\{x = (x_1, \ldots, x_{256})^T \in \mathbb{R}^{256} \mid x_i = 1 \text{ or } x_i = -1,\ i = 1, \ldots, 256\}$ and the saturation region $C(f^{(k)}) = \{x \in \mathbb{R}^{256} \mid x_i f_i^{(k)} > 1,\ i = 1, \ldots, 256\}$, on which the activation output equals $f^{(k)}$, equation (1) converts to

$$\dot{x}(t) = -C x(t) + A f^{(k)} + D p^{(k)} + V \qquad (2)$$
4. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 3, wherein in step S4 the specific steps of calculating the k-th layer cellular neural network parameters of step S3 are:
S41: equation (2) is written component-wise as

$$\dot{x}_i(t) = -c_i x_i(t) + \sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i \qquad (3)$$

in formula (3), let $x_i(0) = 0$;
(i) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i > c_i$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i < -c_i$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1;
according to the above theorem, the following inference is obtained:
Inference 1: let $\sum_j a_{ij} f_j^{(k)} + \sum_j d_{ij} p_j^{(k)} + v_i = \lambda_i f_i^{(k)}$, $\lambda_i > \max\{c_i\}$, $c_i$ constant, $i \in \{1, 2, \ldots, 256\}$; when $f_i^{(k)} = 1$, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when $f_i^{(k)} = -1$, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than -1;
the following symbols are introduced: $\lambda_i > 0$,

$LD = (d_{-1,-1}, d_{-1,0}, d_{-1,1}, d_{0,-1}, d_{0,0}, d_{0,1}, d_{1,-1}, d_{1,0}, d_{1,1})^T$,

$LA = (a_{-1,-1}, a_{-1,0}, a_{-1,1}, a_{0,-1}, a_{0,0}, a_{0,1}, a_{1,-1}, a_{1,0}, a_{1,1})^T$,

$l \in \{1, 2, \ldots, r\}$, $q \in \{1, 2, \ldots, 16\}$, together with the auxiliary matrices that collect, for each stored pair l and each row q of the 16 × 16 cell grid, the 3 × 3 neighborhoods of the stored output patterns $f_l^{(k)}$ and input patterns $p_l^{(k)}$;
according to Inference 1, equations (4), (5) and (6) are obtained: equation (4) determines the offset vector V, while equations (5) and (6) express the constraint of Inference 1 as linear equations in the template parameter vectors LA and LD, respectively;
equation (5) is rearranged into matrix form as equation (7); solving equation (7) by the pseudo-inverse yields LA, as equation (8), where pinv(·) denotes the pseudo-inverse of a matrix;
equation (6) is likewise rearranged into matrix form as equation (9); solving equation (9) by the pseudo-inverse yields LD, as equation (10);
S42: all vectors of the binary output vector sets $f^{(k)}$ of the face picture data obtained in step S2 are assembled into a matrix Ω, and likewise all vectors of the binary input vector sets $p^{(k)}$ of the palm print picture data are assembled into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD;
S43: from equations (8) and (10) of step S42, the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data are obtained and expanded into the parameters A and D of formula (1); the offset vector V is obtained from formula (4); from A, D, V and C, the k-th layer cellular neural network model by which palm print picture data retrieves face picture data is obtained;
the first-layer to sixth-layer cellular neural network palm-print-to-face recognition models are determined by carrying out the above steps for each layer.
5. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 4, wherein in steps S5 and S6 the specific steps of retrieving the face picture data of the user from the palm print picture data are:
S51: two sets of equipment are prepared; the first acquires the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of that palm print picture data, $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $p^{(j)}$ denotes the j-th layer binary input vector of the palm print picture data;
S52: the second set of equipment is a camera that acquires the face picture data F of the visitor, yielding the 6 binary output vectors of that face picture data, $\tilde{f}^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, where $\tilde{f}^{(j)}$ denotes the j-th layer binary output vector of the face picture data;
S61: the first-layer to sixth-layer palm print picture data $p^{(j)}$, $j \in \{1, 2, \ldots, 6\}$, obtained in step S51 are input into the six cellular neural network models respectively, giving the first-layer to sixth-layer output data $f^{(j)}$, $j \in \{1, 2, \ldots, 6\}$;
S62: the output data $f^{(j)}$ are then matched against the face picture data of step S52: the output vectors $\tilde{f}^{(j)}$ obtained in step S52 are compared, layer by layer from the first to the sixth, with the model output vectors $f^{(j)}$ obtained in step S61;
S63: let H denote the matching success rate of the face picture data; identity authentication is judged successful if the matching degree H is greater than the preset matching threshold h, where h lies between 0 and 1; otherwise the matching fails.
CN202010515194.4A, filed 2020-06-08 (priority date 2020-06-08): Palm print and face recognition method based on a cellular neural network hetero-associative memory model. Active. Granted as CN111652166B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010515194.4A 2020-06-08 2020-06-08 Palm print and face recognition method based on a cellular neural network hetero-associative memory model (granted as CN111652166B)


Publications (2)

Publication Number Publication Date
CN111652166A 2020-09-11
CN111652166B 2022-08-30

Family

ID=72343503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010515194.4A (Active, CN111652166B) Palm print and face recognition method based on a cellular neural network hetero-associative memory model 2020-06-08 2020-06-08

Country Status (1)

Country Link
CN (1) CN111652166B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298702B * 2021-06-23 2023-08-04 Chongqing University of Science and Technology Reordering and segmentation method based on large-size image pixel points


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2259214A1 (en) * 2009-06-04 2010-12-08 Honda Research Institute Europe GmbH Implementing a neural associative memory based on non-linear learning of discrete synapses
CN105005765A (en) * 2015-06-29 2015-10-28 北京工业大学 Facial expression identification method based on Gabor wavelet and gray-level co-occurrence matrix
CN105809132A (en) * 2016-03-08 2016-07-27 山东师范大学 Improved compressed sensing-based face recognition method
CN106203391A (en) * 2016-07-25 2016-12-07 上海蓝灯数据科技股份有限公司 Face identification method based on intelligent glasses
CN107330404A (en) * 2017-06-30 2017-11-07 重庆科技学院 Personal identification method based on cell neural network autoassociative memories model
EP3553709A1 (en) * 2018-04-12 2019-10-16 Gyrfalcon Technology Inc. Deep learning image processing systems using modularly connected cnn based integrated circuits
CN110348570A (en) * 2019-05-30 2019-10-18 中国地质大学(武汉) A kind of neural network associative memory method based on memristor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
G. Grassi et al., "Hetero-associative memories via globally asymptotically stable discrete-time cellular neural networks", Proceedings of the 2000 6th IEEE International Workshop on Cellular Neural Networks and Their Applications, pp. 141-145 *
陈伟导, "Research on automatic segmentation of macaque brain tissue and brain anatomical structures based on 7T magnetic resonance images", China Master's Theses Full-text Database, Medicine & Health Sciences, No. 10, 2018-10-15, E060-23 *
郭佳鹏, "Research on face recognition based on convolutional neural networks", China Master's Theses Full-text Database, Information Science & Technology, No. 2, 2020-02-15, I138-1954 *
殷粤捷, "Development of a low-power wireless image transmission system based on compressed sensing theory", China Master's Theses Full-text Database, Information Science & Technology, No. 1, 2014-01-15, I136-588 *


Similar Documents

Publication Publication Date Title
Manisha et al. Cancelable biometrics: a comprehensive survey
Galbally et al. Iris image reconstruction from binary templates: An efficient probabilistic approach based on genetic algorithms
CN109344731B (en) Lightweight face recognition method based on neural network
US6963659B2 (en) Fingerprint verification system utilizing a facial image-based heuristic search method
Feng et al. Masquerade attack on transform-based binary-template protection based on perceptron learning
Jindal et al. Securing face templates using deep convolutional neural network and random projection
CN111914919A (en) Open set radiation source individual identification method based on deep learning
Kumar et al. Palmprint recognition using rank level fusion
CN108875907A (en) A kind of fingerprint identification method and device based on deep learning
CN112949468A (en) Face recognition method and device, computer equipment and storage medium
CN111652166B (en) Palm print and face recognition method based on cellular neural network different association memory model
Singh et al. A generic framework for deep incremental cancelable template generation
Qin et al. Label enhancement-based multiscale transformer for palm-vein recognition
Al-Nima Human authentication with earprint for secure telephone system
Soni et al. Face recognition using cloud Hopfield neural network
Arora et al. FKPIndexNet: An efficient learning framework for finger-knuckle-print database indexing to boost identification
Ge et al. Deep and discriminative feature learning for fingerprint classification
Fattahi et al. Damaged fingerprint recognition by convolutional long short-term memory networks for forensic purposes
Srinivas et al. Artificial intelligence based optimal biometric security system using palm veins
Sathiaraj A study on the neural network model for finger print recognition
Kirchgasser et al. Biometric menagerie in time-span separated fingerprint data
JP5279007B2 (en) Verification system, verification method, program, and recording medium
Liu Fingerprint analysis and singular point definition by deep neural network
Guzzi et al. Distillation of a CNN for a high accuracy mobile face recognition system
Kumar et al. Feature extraction using sparse SVD for biometric fusion in multimodal authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant