CN111652166A - Palm print and face recognition method based on cellular neural network hetero-associative memory model - Google Patents

Palm print and face recognition method based on cellular neural network hetero-associative memory model

Info

Publication number
CN111652166A
Authority
CN
China
Prior art keywords
picture data
palm print
face
vector
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010515194.4A
Other languages
Chinese (zh)
Other versions
CN111652166B (en)
Inventor
韩琦
杨恒
叶刚强
解燕
曹瑞
林日煌
翁腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Science and Technology filed Critical Chongqing University of Science and Technology
Priority to CN202010515194.4A priority Critical patent/CN111652166B/en
Publication of CN111652166A publication Critical patent/CN111652166A/en
Application granted granted Critical
Publication of CN111652166B publication Critical patent/CN111652166B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12: Fingerprints or palmprints
    • G06V 40/1365: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Abstract

The invention relates to a palm print and face recognition method based on a cellular neural network hetero-associative memory model, and belongs to the technical field of intelligent recognition. The method comprises a registration stage and an identification stage. The invention combines hetero-associative memory with the cellular neural network model and converts the palm print picture data and the face picture data into a series of parameters for storage. Because the identity information consists of palm print picture data and face picture data, verification is highly reliable; because the pictures are stored only as model parameters, the storage is strongly confidential and has a high safety factor, and the identity information of people is effectively prevented from being leaked. Converting pictures into parameters through the model is simple and convenient, practical, and gives a good picture recognition effect and good protection of the face picture data.

Description

Palm print and face recognition method based on cellular neural network hetero-associative memory model
Technical Field
The invention belongs to the technical field of intelligent recognition, and relates to a palm print and face recognition method based on a cellular neural network hetero-associative memory model.
Background
Palm print recognition, like fingerprint recognition, is a biometric identification technique that has so far been recognized as relatively quick and reliable, offering universality, uniqueness, stability, acceptability and resistance to counterfeiting. Different people, and even the two palms of the same person, have different palm print characteristics. With palm print recognition, personal identity can be verified without carrying any auxiliary identification article. Face recognition is a technology that obtains a distribution map of personal facial features through a face recognition instrument, stores the feature values, and matches them to identify a person. These recognition technologies can be widely applied in fields such as banking, finance, education and transportation, and are more convenient than body-surface feature technologies such as fingerprint recognition and iris recognition. Because face recognition is widely applied and palm print features offer high security, a method combining palm print and face recognition has high research value. Multi-modal biometric identification will become a popular research topic and application field.
In the internet environment, whether face recognition or palm print recognition is adopted, the biometric data produced by biometric authentication is stored on computers. This biometric information is stored as computer code and faces threats such as interception, replay and reconstruction. The server side stores large feature databases of users, and once such a database is obtained by hackers or criminals the consequences cannot be undone. Traditional biometric recognition usually relies on a biometric feature database, and even if the biometric data is encrypted with an encryption algorithm, it can in theory still be cracked. The invention therefore converts the knowledge used in biometric recognition into a series of model parameters, namely synapse values and bias values, providing an implicit model embedded in the environment. Because the model is specific, even if a malicious attacker obtains the detailed parameters of the model, the biometric data of the user cannot in theory be recovered from them, which protects the security information of the user. Meanwhile, the biometric data of the user involved in the recognition process is extracted once and verified once, i.e. seamless access to the user data, which further protects data security.
A traditional face recognition system based on a neural network consists of four parts: preprocessing, feature extraction, a neural-network-based classifier and a database, with feature extraction and the classifier being the key parts of the face recognition problem. However, this does not fundamentally solve the security problem. Associative memory is a mapping system that stores a particular input against a particular output; that is, the system associates two patterns so that one pattern is reliably recalled when the other is given. Associative memory is a content-addressable memory, which is simply a way of retrieving information directly by its content: a brain-like device that stores standard patterns and allows a probe carrying partial pattern information to retrieve the standard content. Retrieval requires the system to converge to an equilibrium point representing a standard pattern, and the standard pattern should be robustly retrievable by the probe. There are two types of associative memory: auto-associative and hetero-associative. In auto-associative memory the retrieved standard pattern is similar in content and form to the probe; in hetero-associative memory the standard pattern differs from the probe in content and form.
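For a concrete feel of hetero-association, consider a classical bidirectional associative memory that stores bipolar pattern pairs in a single weight matrix via outer products; this illustrates the concept only, not the cellular neural network model developed below (Python with NumPy is used for all the illustrative sketches in this document, and the tiny patterns are made up for the example):

    import numpy as np

    # Illustrative hetero-associative recall: store (p -> f) pairs by Hebbian
    # outer products, then retrieve a stored f from a (possibly noisy) probe p.
    p1 = np.array([ 1, -1,  1, -1,  1, -1])   # "palm print" pattern
    f1 = np.array([ 1,  1, -1, -1])           # associated "face" pattern
    p2 = np.array([-1, -1,  1,  1, -1,  1])
    f2 = np.array([-1,  1,  1, -1])

    W = np.outer(f1, p1) + np.outer(f2, p2)   # one matrix stores both pairs

    probe = p1.copy()
    probe[0] = -1                             # corrupt one component of p1
    recalled = np.sign(W @ probe)             # hetero-associative retrieval
    print(np.array_equal(recalled, f1))       # True: the paired f1 is recovered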
Through multi-modal recognition based on a hetero-associative neurodynamic algorithm, the invention contributes to the security of biometric recognition and to the further enrichment and perfection of hetero-associative neurodynamic algorithms.
Disclosure of Invention
In view of the above, the present invention provides a palm print and face recognition method based on a cellular neural network hetero-associative memory model.
In order to achieve the purpose, the invention provides the following technical scheme:
A palm print and face recognition method based on a cellular neural network hetero-associative memory model comprises registration and identification;
1) Registration:
S1: collecting palm print picture data and face picture data of a crowd, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model composed of 6 cellular neural networks;
S4: calculating the parameters of the 6 cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the model constructed in step S3, thereby finally determining the 6 cellular neural network palm print and face recognition models;
2) Identification:
S5: collecting palm print picture data and face picture data of the visitor, and preprocessing them by the method mentioned in S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification.
Optionally, in S6, during identity authentication the palm print picture data of the user is associated to face picture data, and the face picture data of the user shot by the camera is checked against the output face picture data.
Optionally, the palm print picture data and the face picture data each comprise r groups of pictures; the palm print picture data are numbered P_1, P_2, …, P_r and the face picture data are numbered F_1, F_2, …, F_r.
① Gray-picture conversion: all the palm print picture data and face picture data obtained in step S1 are processed into gray matrices of N rows and M columns, P_1′, P_2′, …, P_r′, F_1′, F_2′, …, F_r′, such that N = 3n, M = 3m, n ∈ N+, m ∈ N+;
② Gray matrix compression: a 3×3 compression template

    l2 l1 l2
    l1 l0 l1
    l2 l1 l2

is designed satisfying l0 + 4×l1 + 4×l2 = 1, and the N-row, M-column gray matrices of ① are compressed into n-row, m-column gray matrices P_1″, P_2″, …, P_r″, F_1″, F_2″, …, F_r″;
The specific compression process is as follows (see the sketch after this list):
1) the N-row, M-column gray matrix is decomposed into disjoint small gray matrices of 3 rows and 3 columns;
2) each 3×3 small gray matrix is point-multiplied by the compression template in turn, the elements of the resulting matrix are summed, and the sum is rounded; the resulting value necessarily lies between 0 and 255;
3) the values obtained in step 2) finally form a new matrix with n rows and m columns;
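By way of illustration, the compression of steps ① and ② can be sketched as follows; the template weights l0, l1, l2 are example values assumed to satisfy l0 + 4×l1 + 4×l2 = 1:

    import numpy as np

    def compress(gray, l0=0.2, l1=0.1, l2=0.1):
        # Compress an N x M gray matrix (N = 3n, M = 3m) to n x m: point-multiply
        # each disjoint 3 x 3 block by the template, sum the products, and round.
        # With l0 + 4*l1 + 4*l2 = 1 the result necessarily stays within 0..255.
        template = np.array([[l2, l1, l2],
                             [l1, l0, l1],
                             [l2, l1, l2]])
        N, M = gray.shape
        n, m = N // 3, M // 3
        blocks = gray.reshape(n, 3, m, 3).transpose(0, 2, 1, 3)
        return np.rint((blocks * template).sum(axis=(2, 3))).astype(np.uint8)

    small = compress(np.random.randint(0, 256, (192, 192)))
    print(small.shape)  # (64, 64)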
③ From the n-row, m-column gray matrices obtained in ②, the number of occurrences of each gray value is counted and arranged into 16×16 gray-statistics histogram matrices P_1‴, P_2‴, …, P_i‴, …, P_r‴, F_1‴, F_2‴, …, F_i‴, …, F_r‴, where, within each matrix, the element in row i and column j occupies position k = 16(i−1)+j, 1 ≤ i ≤ 16, 1 ≤ j ≤ 16; k−1 is then the gray value represented by that element, and the value of the element is the count of that gray value;
④ Each 16×16 gray-statistics histogram matrix obtained in S2-③ is converted into a 1×256 vector by concatenating its rows in order; each P_i‴ and each F_i‴ is converted into such a gray-statistics histogram vector;
The gray-statistics histogram vectors obtained in S2-④ are then converted into k = 6 layers of 1×256 binary vectors; the conversion steps are as follows:
1) Binary conversion of the gray-statistics histogram input vectors:
each element of the gray-statistics histogram vector of the i-th palm print picture data is converted into a 6-bit binary number; the highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(1), which serves as the first-layer binary input vector of the i-th palm print picture data, and in the same way the first-layer binary input vector set p^(1) = {p_1^(1), p_2^(1), …, p_r^(1)} is obtained;
then the next-highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(2), which serves as the second-layer binary input vector of the i-th palm print picture data, and in the same way the second-layer binary input vector set p^(2) is obtained;
proceeding by analogy, the 6 binary input vector sets of the palm print picture data p^(j) = {p_1^(j), p_2^(j), …, p_i^(j), …, p_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where p_i^(j) denotes the j-th layer binary input vector of the i-th palm print picture data;
wherein k = 6 follows from 400 face picture data selected from the face database: the peak count of any single gray value over the gray matrices of all the pictures is 63 < 2^6;
2) Binary conversion of the gray-statistics histogram output vectors:
the binary conversion of the gray-statistics histogram output vectors is the same as in step 1) above; in the same way the 6 binary output vector sets of the face picture data f^(j) = {f_1^(j), f_2^(j), …, f_i^(j), …, f_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where f_i^(j) denotes the j-th layer binary output vector of the i-th face picture data;
there are thus 6×r binary vectors for the palm print picture data and 6×r for the face picture data, and the binary input vector p_i^(j) of the palm print picture data corresponds one-to-one to the binary output vector f_i^(j) of the face picture data, i ∈ {1,2,…,r}, j ∈ {1,2,…,6}.
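A minimal sketch of the 6-layer binary decomposition just described, assuming, as stated, that every histogram count is below 2^6 = 64:

    import numpy as np

    def to_bit_layers(hist_vec, k=6):
        # Split a 1 x 256 gray-statistics histogram vector (entries < 2**k) into
        # k binary layers: layer 1 holds the most significant bit of every
        # element, layer 2 the next bit, and so on down to the lowest bit.
        assert hist_vec.max() < 2 ** k
        return np.stack([(hist_vec >> (k - 1 - j)) & 1 for j in range(k)])

    hist = np.random.randint(0, 64, size=256)
    layers = to_bit_layers(hist)              # shape (6, 256), entries in {0, 1}
    # the decomposition is lossless: reassembling the bits recovers the counts
    print(np.array_equal(sum(layers[j] << (5 - j) for j in range(6)), hist))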
Optionally, step S3 specifically includes:
the binary input vector set of the k-th layer of the palm prints is taken as the input vector set of the associative memory p^(k) = {p_1^(k), p_2^(k), …, p_i^(k), …, p_r^(k)}, i ∈ {1,2,…,r}, where p_i^(k) = (p_{i,1}^(k), p_{i,2}^(k), …, p_{i,j}^(k), …, p_{i,256}^(k)), j ∈ {1,2,…,256}; p_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary input vector of the i-th palm print, and p_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary input vector of the i-th palm print;
the binary output vector set of the k-th layer of the faces is taken as the output vector set of the associative memory f^(k) = {f_1^(k), f_2^(k), …, f_i^(k), …, f_r^(k)}, i ∈ {1,2,…,r}, where f_i^(k) = (f_{i,1}^(k), f_{i,2}^(k), …, f_{i,j}^(k), …, f_{i,256}^(k)), j ∈ {1,2,…,256}; f_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary output vector of the i-th face, and f_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary output vector of the i-th face;
a cellular neural network model of the k-th layer (k ∈ {1,2,…,6}) by which palm print picture data recall face picture data is constructed, specifically:

    dx(t)/dt = −Cx(t) + Af(x(t)) + Dp^(k) + V    (1)

where k ∈ {1,2,…,6}; x = (x_1, x_2, …, x_i, …, x_256)^T; the input vector p^(k) = (p_1^(k), p_2^(k), …, p_i^(k), …, p_256^(k))^T; the offset vector V = (v_1, v_2, …, v_i, …, v_256)^T; C = diag(c_1, c_2, …, c_i, …, c_256), i ∈ {1,2,…,256}; and the activation function f(x) = (f(x_1), …, f(x_i), …, f(x_256))^T, with the standard cellular neural network activation f(x_i) = (|x_i + 1| − |x_i − 1|)/2.
In formula (1), the matrix A = (a_ij)_{256×256} is assembled from the 3×3 feedback template entries a_{s,t}, s, t ∈ {−1, 0, 1}: regarding the 256 cells as a 16×16 grid in which each cell is coupled only to its 3×3 neighborhood, A is block-tridiagonal with 16×16 tridiagonal blocks, the sub-diagonal, diagonal and super-diagonal blocks being built from (a_{−1,−1}, a_{−1,0}, a_{−1,1}), (a_{0,−1}, a_{0,0}, a_{0,1}) and (a_{1,−1}, a_{1,0}, a_{1,1}) respectively. The matrix D = (d_ij)_{256×256} is defined similarly to A from the control template entries d_{s,t}.
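For illustration, a 256×256 coupling matrix of this kind can be expanded from the nine template entries as follows; treating cells outside the 16×16 grid as absent (zero coupling) is an assumption here, since the exact boundary handling is given only in the formula images:

    import numpy as np

    def template_to_matrix(T, rows=16, cols=16):
        # Expand a 3 x 3 template T[s, t] (s, t in {-1, 0, 1}) into the
        # (rows*cols) x (rows*cols) coupling matrix of a cellular neural
        # network in which each cell talks only to its 3 x 3 neighborhood.
        n = rows * cols
        A = np.zeros((n, n))
        for r in range(rows):
            for c in range(cols):
                for s in (-1, 0, 1):
                    for t in (-1, 0, 1):
                        r2, c2 = r + s, c + t
                        if 0 <= r2 < rows and 0 <= c2 < cols:
                            A[r * cols + c, r2 * cols + c2] = T[s + 1, t + 1]
        return A

    LA = np.arange(9, dtype=float).reshape(3, 3)   # stand-in template values
    print(template_to_matrix(LA).shape)            # (256, 256)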
Let the bipolar vertex set {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i = 1 or x_i = −1, i = 1,2,…,256} and the saturation region C(f^(k)) = {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i f_i^(k) > 1, i = 1,2,…,256} be introduced. For states in C(f^(k)) the activation output equals f^(k), and therefore equation (1) translates to

    dx(t)/dt = −Cx(t) + Af^(k) + Dp^(k) + V.    (2)
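Recall by the k-th layer network can then be simulated by integrating equation (1) from x(0) = 0 until the state settles; a minimal forward-Euler sketch, assuming A, D, V and the gains c_i have already been designed as in step S4:

    import numpy as np

    def cnn_recall(A, D, V, c, p, steps=2000, dt=0.01):
        # Forward-Euler integration of dx/dt = -c*x + A f(x) + D p + V with the
        # standard CNN activation f(x) = (|x+1| - |x-1|) / 2; c is the diagonal
        # of C. The sign of the settled state is the recalled bipolar pattern.
        f = lambda x: (np.abs(x + 1.0) - np.abs(x - 1.0)) / 2.0
        x = np.zeros_like(V, dtype=float)     # x_i(0) = 0, as in step S41
        for _ in range(steps):
            x = x + dt * (-c * x + A @ f(x) + D @ p + V)
        return np.sign(x)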
Optionally, in step S4, the specific steps of calculating the k-th layer cellular neural network parameters of step S3 are:
S41: equation (2) is written in component form:

    dx_i(t)/dt = −c_i x_i(t) + Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i, i ∈ {1,2,…,256}.    (3)

In formula (3), let x_i(0) = 0;
(i) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i > c_i, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i < −c_i, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
From the above theorem the following inference is obtained:
Inference 1: let Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i = λ_i f_i^(k), where λ_i > max{c_i}, i ∈ {1,2,…,256}; when f_i^(k) = 1, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when f_i^(k) = −1, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
The following symbols are introduced: λ_i > 0, i ∈ {1,2,…,256};
LD = (d_{−1,−1}, d_{−1,0}, d_{−1,1}, d_{0,−1}, d_{0,0}, d_{0,1}, d_{1,−1}, d_{1,0}, d_{1,1})^T,
LA = (a_{−1,−1}, a_{−1,0}, a_{−1,1}, a_{0,−1}, a_{0,0}, a_{0,1}, a_{1,−1}, a_{1,0}, a_{1,1})^T,
l ∈ {1,2,…,r}, q ∈ {1,2,…,16}, together with the auxiliary matrices (given as formula images) that stack, for each stored pattern pair l, the 3×3 neighborhoods of f_l^(k) and p_l^(k) around every cell.
According to Inference 1, equations (4), (5) and (6) are obtained (given as formula images): equation (4) determines the offset vector V, while equations (5) and (6) collect, over all stored pattern pairs, the linear constraints on LA and LD respectively.
Equation (5) is converted to equation (7) (given as a formula image), and from equation (7) the parameter vector LA is obtained as equation (8) through the matrix pseudo-inverse, where pinv(·) denotes the pseudo-inverse of a matrix.
Equation (6) is converted to equation (9) (given as a formula image), and from equation (9) the parameter vector LD is obtained as equation (10).
S42: all vectors of the binary output vector sets f^(k) of the face picture data obtained in step S2 are gathered into a matrix Ω, and similarly all vectors of the binary input vector sets p^(k) of the palm print picture data are gathered into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD.
S43: the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data obtained from formulas (8) and (10) in step S42 are converted into parameter A and parameter D of formula (1); the offset vector V is obtained from formula (4); the k-th layer cellular neural network model by which palm print picture data recall face picture data is then determined by A, D, V and C;
the cellular neural network models of the first layer to the sixth layer are determined respectively according to the above steps.
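Numerically, S42 and S43 reduce to pseudo-inverse solves of stacked linear systems; the sketch below shows only that numerical pattern, with a random constraint matrix and right-hand side standing in for the exact Ω, Ξ and targets of formulas (7) and (9):

    import numpy as np

    def solve_template(constraints, targets):
        # Least-squares solve for the 9 template parameters (LA or LD): each
        # row of `constraints` stacks one 3 x 3 neighborhood of a stored
        # pattern, and `targets` holds the value required by Inference 1.
        return np.linalg.pinv(constraints) @ targets

    # hypothetical sizes: r = 5 stored pairs x 256 cells, 9 unknowns
    omega = np.random.randn(5 * 256, 9)
    LA = solve_template(omega, np.random.randn(5 * 256))
    print(LA.shape)  # (9,)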
Optionally, in steps S5 and S6, the specific steps of recognizing the face picture data of the user through the palm print picture data are as follows:
S51: two sets of equipment are prepared; the first set collects the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of the palm print picture data p^(j), j ∈ {1,2,…,6}, where p^(j) denotes the j-th layer binary input vector of the visitor's palm print picture data;
S52: the second set of equipment is a camera that collects the face picture data F of the visitor, yielding the 6 binary output vectors of the face picture data f′^(j), j ∈ {1,2,…,6}, where f′^(j) denotes the j-th layer binary output vector of the visitor's face picture data;
S61: the palm print vectors p^(j), j ∈ {1,2,…,6}, of the first to sixth layers obtained in step S51 are input into the six cellular neural network models respectively, obtaining the output data f^(j), j ∈ {1,2,…,6}, of the first to sixth layers;
S62: the output data f^(j) are then matched against the face picture data of step S52; that is, the vectors f′^(j) obtained in step S52 and the model output vectors f^(j) obtained in step S61 are matched layer by layer from the first layer to the sixth layer;
S63: the matching success rate of the face picture data is denoted H, and it is judged whether the matching degree H of the identity authentication is greater than the matching set value h, h ∈ [0,1]; if so, the matching succeeds, otherwise the matching fails.
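The acceptance test of S63 amounts to comparing an agreement rate against a threshold; a minimal sketch, with the bitwise agreement over the six layers standing in for the matching degree H and h = 0.95 an assumed example value:

    import numpy as np

    def match_rate(model_layers, camera_layers):
        # Fraction of agreeing entries over all 6 binary layers (each layer a
        # 1 x 256 vector); the result H lies in [0, 1].
        return float((np.asarray(model_layers) == np.asarray(camera_layers)).mean())

    def authenticate(model_layers, camera_layers, h=0.95):
        # Accept the visitor when the matching degree H exceeds the set value h.
        return match_rate(model_layers, camera_layers) > h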
The invention has the following beneficial effects: the method combines hetero-associative memory with the cellular neural network model and converts the palm print picture data and the face picture data into a series of parameters for storage. Because the identity information consists of palm print picture data and face picture data, verification is highly reliable; because the pictures are stored only as model parameters, the storage is strongly confidential and has a high safety factor, and the identity information of people is effectively prevented from being leaked. Converting pictures into parameters through the model is simple and convenient, practical, and gives a good picture recognition effect and good protection of the face picture data.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of the picture recognition method of the present invention;
FIG. 2 is a schematic diagram of solving the parameters of the cellular neural network model by which palm print picture data recall face picture data.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Referring to fig. 1 to 2, a palm print and face recognition method based on a cellular neural network hetero-associative memory model includes two stages: registration and identification.
1) Registration
S1: collecting palm print picture data and face picture data of a crowd, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model composed of 6 cellular neural networks;
S4: calculating the parameters of the 6 cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the model constructed in step S3, thereby finally determining the 6 cellular neural network palm print and face recognition models;
2) Identification
S5: collecting palm print picture data and face picture data of the visitor, and preprocessing them by the method mentioned in S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification.
During identity authentication, the palm print picture data of the user is associated to face picture data, and the face picture data of the user shot by the camera is checked against the output face picture data.
The palm print picture data and the face picture data each comprise r groups of pictures; the palm print picture data are numbered P_1, P_2, …, P_r and the face picture data are numbered F_1, F_2, …, F_r.
① Gray-picture conversion: all the palm print picture data and face picture data obtained in step S1 are processed into gray matrices of N rows and M columns, P_1′, P_2′, …, P_r′, F_1′, F_2′, …, F_r′, such that N = 3n, M = 3m, n ∈ N+, m ∈ N+;
② Gray matrix compression: a 3×3 compression template

    l2 l1 l2
    l1 l0 l1
    l2 l1 l2

is designed satisfying l0 + 4×l1 + 4×l2 = 1, and the N-row, M-column gray matrices of ① are compressed into n-row, m-column gray matrices P_1″, P_2″, …, P_r″, F_1″, F_2″, …, F_r″;
The specific compression process is as follows:
1) the N-row, M-column gray matrix is decomposed into disjoint small gray matrices of 3 rows and 3 columns;
2) each 3×3 small gray matrix is point-multiplied by the compression template in turn, the elements of the resulting matrix are summed, and the sum is rounded; the resulting value necessarily lies between 0 and 255;
3) the values obtained in step 2) finally form a new matrix with n rows and m columns;
③ From the n-row, m-column gray matrices obtained in ②, the number of occurrences of each gray value is counted and arranged into 16×16 gray-statistics histogram matrices P_1‴, P_2‴, …, P_i‴, …, P_r‴, F_1‴, F_2‴, …, F_i‴, …, F_r‴, where, within each matrix, the element in row i and column j occupies position k = 16(i−1)+j, 1 ≤ i ≤ 16, 1 ≤ j ≤ 16; k−1 is then the gray value represented by that element, and the value of the element is the count of that gray value;
④ Each 16×16 gray-statistics histogram matrix obtained in S2-③ is converted into a 1×256 vector by concatenating its rows in order; each P_i‴ and each F_i‴ is converted into such a gray-statistics histogram vector (a sketch of ③ and ④ follows).
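A minimal sketch of ③ and ④ under the conventions above: the count of gray value k−1 sits at row-major position k = 16(i−1)+j of the 16×16 matrix, so flattening the rows gives the 1×256 vector directly:

    import numpy as np

    def gray_histogram_vector(compressed):
        # Count occurrences of each gray value 0..255 in the compressed n x m
        # matrix, arrange them as a 16 x 16 matrix in row-major order, and
        # concatenate the rows into a 1 x 256 vector.
        counts = np.bincount(compressed.ravel(), minlength=256)
        return counts.reshape(16, 16).reshape(-1)

    vec = gray_histogram_vector(np.random.randint(0, 256, (64, 64)))
    print(vec.sum())  # 4096 = 64 * 64: every pixel is counted exactly once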
The gray-statistics histogram vectors obtained in S2-④ are then converted into k = 6 layers of 1×256 binary vectors; the conversion steps are as follows:
1) Binary conversion of the gray-statistics histogram input vectors:
each element of the gray-statistics histogram vector of the i-th palm print picture data is converted into a 6-bit binary number; the highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(1), which serves as the first-layer binary input vector of the i-th palm print picture data, and in the same way the first-layer binary input vector set p^(1) = {p_1^(1), p_2^(1), …, p_r^(1)} is obtained;
then the next-highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(2), which serves as the second-layer binary input vector of the i-th palm print picture data, and in the same way the second-layer binary input vector set p^(2) is obtained;
proceeding by analogy, the 6 binary input vector sets of the palm print picture data p^(j) = {p_1^(j), p_2^(j), …, p_i^(j), …, p_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where p_i^(j) denotes the j-th layer binary input vector of the i-th palm print picture data.
Note: k = 6 follows from 400 face picture data selected from the face database: the peak count of any single gray value over the gray matrices of all the pictures is 63 < 2^6.
2) Binary conversion of the gray-statistics histogram output vectors:
the binary conversion of the gray-statistics histogram output vectors is the same as in step 1) above; in the same way the 6 binary output vector sets of the face picture data f^(j) = {f_1^(j), f_2^(j), …, f_i^(j), …, f_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where f_i^(j) denotes the j-th layer binary output vector of the i-th face picture data.
Note that there are 6×r binary vectors for the palm print picture data and 6×r for the face picture data, and the binary input vector p_i^(j) of the palm print picture data corresponds one-to-one to the binary output vector f_i^(j) of the face picture data, i ∈ {1,2,…,r}, j ∈ {1,2,…,6}.
The specific content of step S3 is:
the binary input vector set of the k-th layer of the palm prints is taken as the input vector set of the associative memory p^(k) = {p_1^(k), p_2^(k), …, p_i^(k), …, p_r^(k)}, i ∈ {1,2,…,r}, where p_i^(k) = (p_{i,1}^(k), p_{i,2}^(k), …, p_{i,j}^(k), …, p_{i,256}^(k)), j ∈ {1,2,…,256}; p_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary input vector of the i-th palm print, and p_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary input vector of the i-th palm print;
the binary output vector set of the k-th layer of the faces is taken as the output vector set of the associative memory f^(k) = {f_1^(k), f_2^(k), …, f_i^(k), …, f_r^(k)}, i ∈ {1,2,…,r}, where f_i^(k) = (f_{i,1}^(k), f_{i,2}^(k), …, f_{i,j}^(k), …, f_{i,256}^(k)), j ∈ {1,2,…,256}; f_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary output vector of the i-th face, and f_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary output vector of the i-th face.
The cellular neural network model of the k-th layer (k ∈ {1,2,…,6}) by which palm print picture data recall face picture data is constructed, specifically:

    dx(t)/dt = −Cx(t) + Af(x(t)) + Dp^(k) + V    (1)

where k ∈ {1,2,…,6}; x = (x_1, x_2, …, x_i, …, x_256)^T; the input vector p^(k) = (p_1^(k), p_2^(k), …, p_i^(k), …, p_256^(k))^T; the offset vector V = (v_1, v_2, …, v_i, …, v_256)^T; C = diag(c_1, c_2, …, c_i, …, c_256), i ∈ {1,2,…,256}; and the activation function f(x) = (f(x_1), …, f(x_i), …, f(x_256))^T, with the standard cellular neural network activation f(x_i) = (|x_i + 1| − |x_i − 1|)/2.
In formula (1), the matrix A = (a_ij)_{256×256} is assembled from the 3×3 feedback template entries a_{s,t}, s, t ∈ {−1, 0, 1}: regarding the 256 cells as a 16×16 grid in which each cell is coupled only to its 3×3 neighborhood, A is block-tridiagonal with 16×16 tridiagonal blocks, the sub-diagonal, diagonal and super-diagonal blocks being built from (a_{−1,−1}, a_{−1,0}, a_{−1,1}), (a_{0,−1}, a_{0,0}, a_{0,1}) and (a_{1,−1}, a_{1,0}, a_{1,1}) respectively. The matrix D = (d_ij)_{256×256} is defined similarly to A from the control template entries d_{s,t}.
Let the bipolar vertex set {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i = 1 or x_i = −1, i = 1,2,…,256} and the saturation region C(f^(k)) = {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i f_i^(k) > 1, i = 1,2,…,256} be introduced. For states in C(f^(k)) the activation output equals f^(k), and accordingly equation (1) translates to

    dx(t)/dt = −Cx(t) + Af^(k) + Dp^(k) + V.    (2)
The specific steps in step S4 of calculating the k-th layer cellular neural network parameters of step S3 are as follows:
S41: equation (2) can be written in component form:

    dx_i(t)/dt = −c_i x_i(t) + Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i, i ∈ {1,2,…,256}.    (3)

According to the literature (Han Qi, Stability of neural networks and its application in associative memory research [D], Chongqing University, 2012), in equation (3), let x_i(0) = 0; then:
(i) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i > c_i, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i < −c_i, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
From the above theorem the following inference is obtained:
Inference 1: let Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i = λ_i f_i^(k), where λ_i > max{c_i}, i ∈ {1,2,…,256}; when f_i^(k) = 1, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when f_i^(k) = −1, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
The following symbols are introduced: λ_i > 0, i ∈ {1,2,…,256};
LD = (d_{−1,−1}, d_{−1,0}, d_{−1,1}, d_{0,−1}, d_{0,0}, d_{0,1}, d_{1,−1}, d_{1,0}, d_{1,1})^T,
LA = (a_{−1,−1}, a_{−1,0}, a_{−1,1}, a_{0,−1}, a_{0,0}, a_{0,1}, a_{1,−1}, a_{1,0}, a_{1,1})^T,
l ∈ {1,2,…,r}, q ∈ {1,2,…,16}, together with the auxiliary matrices (given as formula images) that stack, for each stored pattern pair l, the 3×3 neighborhoods of f_l^(k) and p_l^(k) around every cell.
According to Inference 1, equations (4), (5) and (6) are obtained (given as formula images): equation (4) determines the offset vector V, while equations (5) and (6) collect, over all stored pattern pairs, the linear constraints on LA and LD respectively.
Equation (5) can be converted to equation (7) (given as a formula image), and from equation (7) the parameter vector LA is obtained as equation (8) through the matrix pseudo-inverse, where pinv(·) denotes the pseudo-inverse of a matrix.
Equation (6) can be converted to equation (9) (given as a formula image), and from equation (9) the parameter vector LD is obtained as equation (10).
S42: all vectors of the binary output vector sets f^(k) of the face picture data obtained in step S2 are gathered into a matrix Ω, and similarly all vectors of the binary input vector sets p^(k) of the palm print picture data are gathered into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD.
S43: the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data obtained from formulas (8) and (10) in step S42 are converted into parameter A and parameter D of formula (1); the offset vector V is obtained from formula (4); the k-th layer cellular neural network model by which palm print picture data recall face picture data is then determined by A, D, V and C;
therefore, the cellular neural network models of the first layer to the sixth layer are determined respectively according to the above steps.
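Putting registration together, one parameter set is designed per binary layer; the sketch below demonstrates only the six-layer bookkeeping, with a plain least-squares associator standing in for the template design of S41 to S43:

    import numpy as np

    rng = np.random.default_rng(0)
    r = 5                                        # registered picture pairs
    palm = rng.integers(0, 2, size=(r, 6, 256))  # 6 binary layers per palm print
    face = rng.integers(0, 2, size=(r, 6, 256))  # 6 binary layers per face

    models = []
    for k in range(6):                           # one associator per layer
        P = palm[:, k, :].astype(float)          # layer-k inputs,  r x 256
        F = face[:, k, :].astype(float)          # layer-k outputs, r x 256
        models.append(np.linalg.pinv(P) @ F)     # stand-in for formulas (8)/(10)

    # recall: feed a stored palm layer through its model and threshold
    recalled = (palm[0, 2, :] @ models[2] > 0.5).astype(int)
    print((recalled == face[0, 2, :]).mean())    # 1.0 on a stored pair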
The specific steps in steps S5 and S6 of recognizing the face picture data of the user through the palm print picture data are as follows:
S51: two sets of equipment are prepared; the first set collects the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of the palm print picture data p^(j), j ∈ {1,2,…,6}, where p^(j) denotes the j-th layer binary input vector of the visitor's palm print picture data;
S52: the second set of equipment is a camera that collects the face picture data F of the visitor, yielding the 6 binary output vectors of the face picture data f′^(j), j ∈ {1,2,…,6}, where f′^(j) denotes the j-th layer binary output vector of the visitor's face picture data;
S61: the palm print vectors p^(j), j ∈ {1,2,…,6}, of the first to sixth layers obtained in step S51 are input into the six cellular neural network models respectively, obtaining the output data f^(j), j ∈ {1,2,…,6}, of the first to sixth layers;
S62: the output data f^(j) are then matched against the face picture data of step S52; that is, the vectors f′^(j) obtained in step S52 and the model output vectors f^(j) obtained in step S61 are matched layer by layer from the first layer to the sixth layer;
S63: the matching success rate of the face picture data is denoted H, and it is judged whether the matching degree H of the identity authentication is greater than the matching set value h, h ∈ [0,1]; if so, the matching succeeds, otherwise the matching fails.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (6)

1. A palm print and face recognition method based on a cellular neural network hetero-associative memory model, characterized by comprising registration and identification;
1) Registration:
S1: collecting palm print picture data and face picture data of a crowd, and grouping and numbering the collected palm print picture data and face picture data for each person;
S2: obtaining preprocessed palm print picture data and face picture data;
S3: constructing a palm print and face recognition model composed of 6 cellular neural networks;
S4: calculating the parameters of the 6 cellular neural networks of step S3 from the palm print picture data and face picture data obtained in step S2 and the model constructed in step S3, thereby finally determining the 6 cellular neural network palm print and face recognition models;
2) Identification:
S5: collecting palm print picture data and face picture data of the visitor, and preprocessing them by the method mentioned in S2;
S6: inputting the palm print picture data into the cellular neural network models to obtain output data, and then matching the output data against the face picture data for identification.
2. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 1, characterized in that: in S6, during identity authentication the palm print picture data of the user is associated to face picture data, and the face picture data of the user shot by the camera is checked against the output face picture data.
3. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 1, characterized in that: the palm print picture data and the face picture data each comprise r groups of pictures; the palm print picture data are numbered P_1, P_2, …, P_r and the face picture data are numbered F_1, F_2, …, F_r;
① Gray-picture conversion: all the palm print picture data and face picture data obtained in step S1 are processed into gray matrices of N rows and M columns, P_1′, P_2′, …, P_r′, F_1′, F_2′, …, F_r′, such that N = 3n, M = 3m, n ∈ N+, m ∈ N+;
② Gray matrix compression: a 3×3 compression template

    l2 l1 l2
    l1 l0 l1
    l2 l1 l2

is designed satisfying l0 + 4×l1 + 4×l2 = 1, and the N-row, M-column gray matrices of ① are compressed into n-row, m-column gray matrices P_1″, P_2″, …, P_r″, F_1″, F_2″, …, F_r″;
The specific compression process is as follows:
1) the N-row, M-column gray matrix is decomposed into disjoint small gray matrices of 3 rows and 3 columns;
2) each 3×3 small gray matrix is point-multiplied by the compression template in turn, the elements of the resulting matrix are summed, and the sum is rounded; the resulting value necessarily lies between 0 and 255;
3) the values obtained in step 2) finally form a new matrix with n rows and m columns;
③ From the n-row, m-column gray matrices obtained in ②, the number of occurrences of each gray value is counted and arranged into 16×16 gray-statistics histogram matrices P_1‴, P_2‴, …, P_i‴, …, P_r‴, F_1‴, F_2‴, …, F_i‴, …, F_r‴, where, within each matrix, the element in row i and column j occupies position k = 16(i−1)+j, 1 ≤ i ≤ 16, 1 ≤ j ≤ 16; k−1 is then the gray value represented by that element, and the value of the element is the count of that gray value;
④ Each 16×16 gray-statistics histogram matrix obtained in S2-③ is converted into a 1×256 vector by concatenating its rows in order; each P_i‴ and each F_i‴ is converted into such a gray-statistics histogram vector;
the gray-statistics histogram vectors obtained in S2-④ are then converted into k = 6 layers of 1×256 binary vectors; the conversion steps are as follows:
1) Binary conversion of the gray-statistics histogram input vectors:
each element of the gray-statistics histogram vector of the i-th palm print picture data is converted into a 6-bit binary number; the highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(1), i ∈ {1,2,…,r}, which serves as the first-layer binary input vector of the i-th palm print picture data, and in the same way the first-layer binary input vector set p^(1) = {p_1^(1), p_2^(1), …, p_r^(1)} is obtained;
then the next-highest bit of each binary number in the vector is taken in turn to form a new 1×256 vector p_i^(2), i ∈ {1,2,…,r}, which serves as the second-layer binary input vector of the i-th palm print picture data, and in the same way the second-layer binary input vector set p^(2) is obtained;
proceeding by analogy, the 6 binary input vector sets of the palm print picture data p^(j) = {p_1^(j), p_2^(j), …, p_i^(j), …, p_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where p_i^(j) denotes the j-th layer binary input vector of the i-th palm print picture data;
wherein k = 6 follows from 400 face picture data selected from the face database: the peak count of any single gray value over the gray matrices of all the pictures is 63 < 2^6;
2) Binary conversion of the gray-statistics histogram output vectors:
the binary conversion of the gray-statistics histogram output vectors is the same as in step 1) above; in the same way the 6 binary output vector sets of the face picture data f^(j) = {f_1^(j), f_2^(j), …, f_i^(j), …, f_r^(j)}, j ∈ {1,2,…,6}, i ∈ {1,2,…,r}, are obtained, where f_i^(j) denotes the j-th layer binary output vector of the i-th face picture data;
there are thus 6×r binary vectors for the palm print picture data and 6×r for the face picture data, and the binary input vector p_i^(j) of the palm print picture data corresponds one-to-one to the binary output vector f_i^(j) of the face picture data, i ∈ {1,2,…,r}, j ∈ {1,2,…,6}.
4. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 1, characterized in that step S3 specifically comprises:
taking the binary input vector set of the k-th layer of the palm prints as the input vector set of the associative memory p^(k) = {p_1^(k), p_2^(k), …, p_i^(k), …, p_r^(k)}, i ∈ {1,2,…,r}, where p_i^(k) = (p_{i,1}^(k), p_{i,2}^(k), …, p_{i,j}^(k), …, p_{i,256}^(k)), j ∈ {1,2,…,256}; p_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary input vector of the i-th palm print, and p_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary input vector of the i-th palm print;
taking the binary output vector set of the k-th layer of the faces as the output vector set of the associative memory f^(k) = {f_1^(k), f_2^(k), …, f_i^(k), …, f_r^(k)}, i ∈ {1,2,…,r}, where f_i^(k) = (f_{i,1}^(k), f_{i,2}^(k), …, f_{i,j}^(k), …, f_{i,256}^(k)), j ∈ {1,2,…,256}; f_i^(k) denotes the vector formed by all the pixel points of the k-th layer binary output vector of the i-th face, and f_{i,j}^(k) denotes the value of the j-th pixel point of the k-th layer binary output vector of the i-th face;
constructing the cellular neural network model of the k-th layer (k ∈ {1,2,…,6}) by which palm print picture data recall face picture data, specifically:

    dx(t)/dt = −Cx(t) + Af(x(t)) + Dp^(k) + V    (1)

where k ∈ {1,2,…,6}; x = (x_1, x_2, …, x_i, …, x_256)^T; the input vector p^(k) = (p_1^(k), p_2^(k), …, p_i^(k), …, p_256^(k))^T; the offset vector V = (v_1, v_2, …, v_i, …, v_256)^T; C = diag(c_1, c_2, …, c_i, …, c_256), i ∈ {1,2,…,256}; and the activation function f(x) = (f(x_1), …, f(x_i), …, f(x_256))^T, with the standard cellular neural network activation f(x_i) = (|x_i + 1| − |x_i − 1|)/2.
In formula (1), the matrix A = (a_ij)_{256×256} is assembled from the 3×3 feedback template entries a_{s,t}, s, t ∈ {−1, 0, 1}: regarding the 256 cells as a 16×16 grid in which each cell is coupled only to its 3×3 neighborhood, A is block-tridiagonal with 16×16 tridiagonal blocks, the sub-diagonal, diagonal and super-diagonal blocks being built from (a_{−1,−1}, a_{−1,0}, a_{−1,1}), (a_{0,−1}, a_{0,0}, a_{0,1}) and (a_{1,−1}, a_{1,0}, a_{1,1}) respectively; the matrix D = (d_ij)_{256×256} is defined similarly to A from the control template entries d_{s,t}.
Let the bipolar vertex set {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i = 1 or x_i = −1, i = 1,2,…,256} and the saturation region C(f^(k)) = {x = (x_1, x_2, …, x_i, …, x_256)^T ∈ R^256 | x_i f_i^(k) > 1, i = 1,2,…,256} be introduced; for states in C(f^(k)) the activation output equals f^(k), and thus equation (1) is converted to

    dx(t)/dt = −Cx(t) + Af^(k) + Dp^(k) + V.    (2)
5. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 4, characterized in that in step S4 the specific steps of calculating the k-th layer cellular neural network parameters of step S3 are:
S41: equation (2) is written in component form:

    dx_i(t)/dt = −c_i x_i(t) + Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i, i ∈ {1,2,…,256};    (3)

in formula (3), let x_i(0) = 0; then:
(i) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i > c_i, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1;
(ii) if Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i < −c_i, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
From the above theorem the following inference is obtained:
Inference 1: let Σ_{j=1}^{256} a_ij f_j^(k) + Σ_{j=1}^{256} d_ij p_j^(k) + v_i = λ_i f_i^(k), where λ_i > max{c_i}, i ∈ {1,2,…,256}; when f_i^(k) = 1, equation (3) converges to a positive stable equilibrium point, and the value of this equilibrium point is greater than 1; when f_i^(k) = −1, equation (3) converges to a negative stable equilibrium point, and the value of this equilibrium point is less than −1.
The following symbols are introduced: λ_i > 0, i ∈ {1,2,…,256};
LD = (d_{−1,−1}, d_{−1,0}, d_{−1,1}, d_{0,−1}, d_{0,0}, d_{0,1}, d_{1,−1}, d_{1,0}, d_{1,1})^T,
LA = (a_{−1,−1}, a_{−1,0}, a_{−1,1}, a_{0,−1}, a_{0,0}, a_{0,1}, a_{1,−1}, a_{1,0}, a_{1,1})^T,
l ∈ {1,2,…,r}, q ∈ {1,2,…,16}, together with the auxiliary matrices (given as formula images) that stack, for each stored pattern pair l, the 3×3 neighborhoods of f_l^(k) and p_l^(k) around every cell.
According to Inference 1, equations (4), (5) and (6) are obtained (given as formula images): equation (4) determines the offset vector V, while equations (5) and (6) collect, over all stored pattern pairs, the linear constraints on LA and LD respectively;
equation (5) is converted to equation (7) (given as a formula image), and from equation (7) the parameter vector LA is obtained as equation (8) through the matrix pseudo-inverse, where pinv(·) denotes the pseudo-inverse of a matrix;
equation (6) is converted to equation (9) (given as a formula image), and from equation (9) the parameter vector LD is obtained as equation (10);
S42: all vectors of the binary output vector sets f^(k) of the face picture data obtained in step S2 are gathered into a matrix Ω, and similarly all vectors of the binary input vector sets p^(k) of the palm print picture data are gathered into a matrix Ξ; substituting Ω and Ξ into equations (8) and (10) yields LA and LD;
S43: the output parameter LA of the associative memory of the face picture data and the input parameter LD of the associative memory of the palm print picture data obtained from formulas (8) and (10) in step S42 are converted into parameter A and parameter D of formula (1); the offset vector V is obtained from formula (4); the k-th layer cellular neural network model by which palm print picture data recall face picture data is then determined by A, D, V and C;
the cellular neural network models of the first layer to the sixth layer are determined respectively according to the above steps.
6. The palm print and face recognition method based on the cellular neural network hetero-associative memory model according to claim 5, characterized in that in steps S5 and S6 the specific steps of recognizing the face picture data of the user through the palm print picture data are:
S51: two sets of equipment are prepared; the first set collects the palm print picture data P of the visitor, and the preprocessing of step S2 yields the 6 binary input vectors of the palm print picture data p^(j), j ∈ {1,2,…,6}, where p^(j) denotes the j-th layer binary input vector of the visitor's palm print picture data;
S52: the second set of equipment is a camera that collects the face picture data F of the visitor, yielding the 6 binary output vectors of the face picture data f′^(j), j ∈ {1,2,…,6}, where f′^(j) denotes the j-th layer binary output vector of the visitor's face picture data;
S61: the palm print vectors p^(j), j ∈ {1,2,…,6}, of the first to sixth layers obtained in step S51 are input into the six cellular neural network models respectively, obtaining the output data f^(j), j ∈ {1,2,…,6}, of the first to sixth layers;
S62: the output data f^(j) are then matched against the face picture data of step S52; that is, the vectors f′^(j) obtained in step S52 and the model output vectors f^(j) obtained in step S61 are matched layer by layer from the first layer to the sixth layer;
S63: the matching success rate of the face picture data is denoted H, and it is judged whether the matching degree H of the identity authentication is greater than the matching set value h, h ∈ [0,1]; if so, the matching succeeds, otherwise the matching fails.
CN202010515194.4A 2020-06-08 2020-06-08 Palm print and face recognition method based on cellular neural network hetero-associative memory model Active CN111652166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010515194.4A CN111652166B (en) 2020-06-08 2020-06-08 Palm print and face recognition method based on cellular neural network hetero-associative memory model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010515194.4A CN111652166B (en) 2020-06-08 2020-06-08 Palm print and face recognition method based on cellular neural network hetero-associative memory model

Publications (2)

Publication Number Publication Date
CN111652166A true CN111652166A (en) 2020-09-11
CN111652166B CN111652166B (en) 2022-08-30

Family

ID=72343503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010515194.4A Active CN111652166B (en) Palm print and face recognition method based on cellular neural network hetero-associative memory model

Country Status (1)

Country Link
CN (1) CN111652166B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298702A (en) * 2021-06-23 2021-08-24 重庆科技学院 Reordering and dividing method based on large-size image pixel points

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2259214A1 (en) * 2009-06-04 2010-12-08 Honda Research Institute Europe GmbH Implementing a neural associative memory based on non-linear learning of discrete synapses
CN105005765A (en) * 2015-06-29 2015-10-28 北京工业大学 Facial expression identification method based on Gabor wavelet and gray-level co-occurrence matrix
CN105809132A (en) * 2016-03-08 2016-07-27 山东师范大学 Improved compressed sensing-based face recognition method
CN106203391A (en) * 2016-07-25 2016-12-07 上海蓝灯数据科技股份有限公司 Face identification method based on intelligent glasses
CN107330404A (en) * 2017-06-30 2017-11-07 重庆科技学院 Personal identification method based on cell neural network autoassociative memories model
EP3553709A1 (en) * 2018-04-12 2019-10-16 Gyrfalcon Technology Inc. Deep learning image processing systems using modularly connected cnn based integrated circuits
CN110348570A (en) * 2019-05-30 2019-10-18 中国地质大学(武汉) A kind of neural network associative memory method based on memristor

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2259214A1 (en) * 2009-06-04 2010-12-08 Honda Research Institute Europe GmbH Implementing a neural associative memory based on non-linear learning of discrete synapses
CN105005765A (en) * 2015-06-29 2015-10-28 北京工业大学 Facial expression identification method based on Gabor wavelet and gray-level co-occurrence matrix
CN105809132A (en) * 2016-03-08 2016-07-27 山东师范大学 Improved compressed sensing-based face recognition method
CN106203391A (en) * 2016-07-25 2016-12-07 上海蓝灯数据科技股份有限公司 Face identification method based on intelligent glasses
CN107330404A (en) * 2017-06-30 2017-11-07 重庆科技学院 Personal identification method based on cell neural network autoassociative memories model
EP3553709A1 (en) * 2018-04-12 2019-10-16 Gyrfalcon Technology Inc. Deep learning image processing systems using modularly connected cnn based integrated circuits
CN110348570A (en) * 2019-05-30 2019-10-18 中国地质大学(武汉) A kind of neural network associative memory method based on memristor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
G. Grassi et al.: "Hetero-associative memories via globally asymptotically stable discrete-time cellular neural networks", Proceedings of the 2000 6th IEEE International Workshop on Cellular Neural Networks and Their Applications *
殷粤捷: "Development of a low-power wireless image transmission system based on compressed sensing theory", China Master's Theses Full-text Database, Information Science and Technology series *
郭佳鹏: "Research on face recognition based on convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology series *
陈伟导: "Research on automatic segmentation of macaque brain tissue and brain anatomical structures based on 7T magnetic resonance images", China Master's Theses Full-text Database, Medicine and Health Sciences series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298702A (en) * 2021-06-23 2021-08-24 重庆科技学院 Reordering and dividing method based on large-size image pixel points
CN113298702B (en) * 2021-06-23 2023-08-04 重庆科技学院 Reordering and segmentation method based on large-size image pixel points

Also Published As

Publication number Publication date
CN111652166B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
Galbally et al. Iris image reconstruction from binary templates: An efficient probabilistic approach based on genetic algorithms
CN109344731B (en) Lightweight face recognition method based on neural network
Jindal et al. Securing face templates using deep convolutional neural network and random projection
CN107122725B (en) Face recognition method and system based on joint sparse discriminant analysis
CN108875907A (en) A kind of fingerprint identification method and device based on deep learning
CN107944356A (en) The identity identifying method of the hierarchical subject model palmprint image identification of comprehensive polymorphic type feature
CN112949468A (en) Face recognition method and device, computer equipment and storage medium
CN111652166B (en) Palm print and face recognition method based on cellular neural network different association memory model
Al-Nima Human authentication with earprint for secure telephone system
Pham et al. Personal identification based on deep learning technique using facial images for intelligent surveillance systems
Soni et al. Face recognition using cloud Hopfield neural network
Ge et al. Deep and discriminative feature learning for fingerprint classification
Srinivas et al. Artificial intelligence based optimal biometric security system using palm veins
CN114187644A (en) Mask face living body detection method based on support vector machine
Sathiaraj A study on the neural network model for finger print recognition
Liu Fingerprint analysis and singular point definition by deep neural network
Kumar et al. Feature extraction using sparse SVD for biometric fusion in multimodal authentication
Santosh et al. Recent Trends in Image Processing and Pattern Recognition: Third International Conference, RTIP2R 2020, Aurangabad, India, January 3–4, 2020, Revised Selected Papers, Part I
Zhang et al. Human Face Recognition Based on improved CNN Model with Multi-layers
Abdel-Kader et al. Rotation invariant face recognition based on hybrid LPT/DCT features
CN112926041B (en) Remote identity authentication system based on biological characteristics
Delipersad et al. Face recognition using neural networks
Abraham et al. An AFIS candidate list centric fingerprint likelihood ratio model based on morphometric and spatial analyses (MSA)
CHELAOUA et al. The Random Forest Classifier Applied In Biometric Recognition
Singh A Neural Network based Attendance Monitoring and Database Management System using Fingerprint Recognition and Matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant