CN112396005A - Biological characteristic image recognition method and device, electronic equipment and readable storage medium - Google Patents

Biological characteristic image recognition method and device, electronic equipment and readable storage medium

Info

Publication number
CN112396005A
Authority
CN
China
Prior art keywords
image
image set
sample image
training
target sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011322604.XA
Other languages
Chinese (zh)
Inventor
李佳琳 (Li Jialin)
李昌昊 (Li Changhao)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011322604.XA priority Critical patent/CN112396005A/en
Publication of CN112396005A publication Critical patent/CN112396005A/en
Priority to PCT/CN2021/097072 priority patent/WO2022105179A1/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction

Abstract

The invention relates to artificial intelligence, and discloses a biometric image recognition method, which comprises the following steps: performing data enhancement on a first sample image set by using a trained data enhancement model to obtain a second sample image set; summarizing the first sample image set and the second sample image set to obtain a first target sample image set; performing data amplification on the first target sample image set to obtain a second target sample image set; performing feature extraction on the biometric image to be recognized by using an image recognition model trained on the second target sample image set to obtain a feature vector; and comparing the feature vector against a preset image feature vector library to obtain a recognition result. The invention also relates to blockchain technology: the second target sample image set may be stored in a blockchain. The invention further provides a biometric image recognition apparatus, an electronic device and a computer-readable storage medium. The invention can improve the accuracy of biometric image recognition.

Description

Biological characteristic image recognition method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a biological characteristic image identification method and device, electronic equipment and a readable storage medium.
Background
With the rapid development of communication and transportation technologies, biometric recognition is widely used as an effective means of identity verification in fields such as railway stations, airports, finance and network security.
At present, biometric recognition collects an image of the person to be recognized and extracts image features from it with a trained deep learning model, matching them against image features in a database to achieve recognition.
However, training the deep learning model requires a large number of biometric images. Because biometric images are private and hard to acquire, the deep learning model often lacks sufficient training samples and therefore has poor feature extraction capability, which reduces the accuracy of biometric image recognition.
Disclosure of Invention
The invention provides a biological characteristic image identification method, a biological characteristic image identification device, electronic equipment and a computer readable storage medium, and mainly aims to improve the accuracy of biological characteristic image identification.
In order to achieve the above object, the present invention provides a biometric image recognition method, including:
acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model;
acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set;
summarizing the first sample image set and the second sample image set to obtain a first target sample image set;
performing data amplification on the first target sample image set to obtain a second target sample image set;
training a pre-constructed deep learning network model by using the second target sample image set to obtain an image recognition model;
when receiving a biological characteristic image to be identified, performing characteristic extraction on the biological characteristic image to be identified by using the image identification model to obtain a characteristic vector;
and comparing and identifying the characteristic vectors in a preset image characteristic vector library to obtain an identification result.
Optionally, the training of the pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model includes:
constructing a first loss function;
performing alternating iterative training of the generator and the discriminator of the generative adversarial network model with the first training image set based on the first loss function;
and when the value of the first loss function reaches a first preset threshold value, stopping training to obtain the data enhancement model.
Optionally, the performing data amplification on the first target sample image set to obtain a second target sample image set includes:
performing translation, flipping and color adjustment operations on all images in the first target sample image set to obtain an amplified image set;
summarizing the augmented image set and the first target sample image set;
and marking the label area of the images in the summarized image set to obtain the second target sample image set.
Optionally, the training the pre-constructed deep learning network model by using the second target sample image set to obtain an image recognition model includes:
a characteristic extraction step: performing convolution pooling operation on the second target sample image set according to preset convolution pooling times to obtain a feature set;
and a loss calculation step: calculating the feature set by using a preset activation function to obtain a predicted value, obtaining a label value of the label area corresponding to each image in the second target sample image set, and calculating by using a pre-constructed second loss function according to the predicted value and the label value to obtain a loss value;
training and judging: comparing the loss value with a second preset threshold value, and returning to the feature extraction step when the loss value is greater than or equal to the second preset threshold value; or when the loss value is smaller than the second preset threshold value, stopping training to obtain the image recognition model.
Optionally, the performing feature extraction on the to-be-recognized biometric image by using the image recognition model to obtain a feature vector includes:
carrying out image recognition on the image to be recognized by utilizing the image recognition model;
and extracting the output value of the fully connected layer in the image recognition model after the image recognition is finished to obtain the feature vector.
Optionally, the comparing and identifying in a preset image feature vector library by using the feature vector to obtain an identification result includes:
calculating the similarity between the feature vector and each image feature vector in the image feature vector library to obtain a corresponding similarity value;
summarizing all the similarity values to obtain a similarity value set;
and screening and comparing according to the similarity value set to obtain the identification result.
Optionally, the screening and comparing according to the similarity value set to obtain the identification result includes:
if a similarity value in the similarity value set is greater than or equal to a third preset threshold value, the identification result is that the identification is passed; or
if no similarity value in the similarity value set is greater than or equal to the third preset threshold value, the identification result is that the identification is failed.
In order to solve the above problem, the present invention also provides a biometric image recognition apparatus, comprising:
the training sample generation module is used for acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model; acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set; summarizing the first sample image set and the second sample image set to obtain a first target sample image set; and performing data amplification on the first target sample image set to obtain a second target sample image set;
the model training module is used for training a pre-constructed deep learning network model by utilizing the second target sample image set to obtain an image recognition model;
the image recognition module is used for performing feature extraction on the biological feature image to be recognized by using the image recognition model when the biological feature image to be recognized is received to obtain a feature vector; and comparing and identifying the characteristic vectors in a preset image characteristic vector library to obtain an identification result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor configured to execute the computer program stored in the memory to implement the biometric image recognition method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the biometric image recognition method described above.
In the embodiment of the invention, a pre-constructed generative adversarial network model is trained by using the first training image set to obtain a data enhancement model; data enhancement is performed on the first sample image set by using the data enhancement model to obtain a second sample image set; the first sample image set and the second sample image set are summarized to obtain a first target sample image set, the samples being amplified through the model to improve the robustness of the subsequent model; data amplification is performed on the first target sample image set to obtain a second target sample image set, further amplifying the samples to improve the feature extraction capability of the subsequent model; a pre-constructed deep learning network model is trained by using the second target sample image set to obtain an image recognition model; when a biometric image to be recognized is received, feature extraction is performed on it by using the image recognition model to obtain a feature vector; and the feature vector is compared against a preset image feature vector library to obtain a recognition result. By enlarging the number of training samples through amplification, the feature extraction capability of the model is improved, and thus the accuracy of biometric image recognition is improved.
Drawings
Fig. 1 is a schematic flow chart of a biometric image recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a recognition result obtained in the biometric image recognition method according to an embodiment of the present invention;
fig. 3 is a block diagram of a biometric image recognition apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an internal structure of an electronic device implementing a biometric image recognition method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a biological characteristic image identification method. The execution subject of the biometric image recognition method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the biometric image recognition method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a flow diagram of a biometric image recognition method according to an embodiment of the present invention is shown, in the embodiment of the present invention, the biometric image recognition method includes:
S1, acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model;
in an embodiment of the present invention, the first training image set is an iris image set, and the iris image is an eye image including an iris region.
Those skilled in the art will appreciate that a generative adversarial network model consists of two parts, a generator and a discriminator. During training, the generator generates images based on the first training image set, and the discriminator judges whether a given image is a false image produced by the generator or a corresponding real image from the first training image set. As training proceeds, the images produced by the generator approach the real images in the first training image set; the generator network at that point yields a generative model, which is used to produce new images and thus solves the problem of having too few image samples.
In detail, in the embodiment of the present invention, the training of the pre-constructed generative adversarial network model by using the first training image set includes: constructing a first loss function; performing alternating iterative training of the generator and the discriminator of the generative adversarial network model with the first training image set based on the first loss function; and when the value of the first loss function reaches a first preset threshold value, stopping training to obtain the data enhancement model;
wherein the first loss function is:

L_GAN = E_{x∼P(x)}[log D(x)] + E_{z∼P(z)}[log(1 − D(G(z)))]

wherein L_GAN is the pre-constructed adversarial loss function, z represents a preset random parameter variable, G is the generator in the generative adversarial network model, D is the discriminator in the generative adversarial network model, x is a real image from the first training image set, P(x) is the probability distribution of the real images, P(z) is the probability distribution of the random variable z from which the false images are generated, and E is the expectation operator.
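As a rough illustration of this objective (not part of the patent), the adversarial loss for one batch can be evaluated in plain Python. The discriminator outputs below are hypothetical values; a real implementation would obtain them from trained generator and discriminator networks:

```python
import math

def gan_loss(d_real, d_fake):
    """Batch value of the adversarial objective:
    E_x[log D(x)] + E_z[log(1 - D(G(z)))].
    d_real: discriminator outputs on real images; d_fake: on generated images."""
    e_real = sum(math.log(p) for p in d_real) / len(d_real)
    e_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return e_real + e_fake

# A confident discriminator (real near 1, fake near 0) gives a value near 0;
# a fooled discriminator (everything near 0.5) gives a lower value.
confident = gan_loss([0.9, 0.95], [0.05, 0.1])
fooled = gan_loss([0.5, 0.5], [0.5, 0.5])
```

During the alternating iterative training, the discriminator is updated to increase this value while the generator is updated to decrease it; training stops when the first loss function reaches the first preset threshold.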
S2, acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set;
in an embodiment of the present invention, the first set of sample images is a different set of iris images than the first set of training images.
Since the first sample image set is small and its images are not easy to obtain, the embodiment of the present invention processes all images in the first sample image set with the data enhancement model to obtain the second sample image set. For example: the first sample image set comprises images a and b; inputting image a into the data enhancement model yields image A, inputting image b yields image B, and summarizing images A and B yields the second sample image set.
S3, summarizing the first sample image set and the second sample image set to obtain a first target sample image set;
S4, performing data amplification on the first target sample image set to obtain a second target sample image set;
in the embodiment of the invention, in order to improve the robustness of a subsequent model, data amplification is performed on the first target sample image set.
In detail, in an embodiment of the present invention, the performing data amplification on the first target sample image set includes: performing translation, flipping and color adjustment operations on all images in the first target sample image set to obtain an amplified image set; summarizing the amplified image set and the first target sample image set; and marking the label area of the images in the summarized image set to obtain the second target sample image set. In the embodiment of the present invention, the label region is the iris region; preferably, label region marking may be performed manually with the LabelMe image annotation tool.
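A minimal sketch of the data amplification step (translation, flipping and color adjustment), using nested lists as stand-in grayscale images; the operations and pixel values are illustrative only:

```python
def flip_horizontal(img):
    # Mirror each row of a 2-D pixel grid.
    return [row[::-1] for row in img]

def translate_right(img, shift, fill=0):
    # Shift pixels to the right, filling vacated columns with `fill`.
    if shift == 0:
        return [row[:] for row in img]
    return [[fill] * shift + row[:-shift] for row in img]

def adjust_brightness(img, delta, lo=0, hi=255):
    # Crude colour adjustment: add `delta` to every pixel and clamp.
    return [[min(hi, max(lo, p + delta)) for p in row] for row in img]

def amplify(images):
    # Each source image yields three amplified variants, as in step S4.
    out = []
    for img in images:
        out.append(flip_horizontal(img))
        out.append(translate_right(img, 1))
        out.append(adjust_brightness(img, 10))
    return out

base = [[[10, 20], [30, 40]]]   # one hypothetical 2x2 grayscale "image"
amplified = amplify(base)       # three variants of that image
```

The amplified set is then summarized with the original set, so each source image contributes itself plus its variants to the second target sample image set.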
In another embodiment of the present invention, in order to ensure data privacy, the second target sample image set may be stored in blockchain nodes.
S5, training a pre-constructed deep learning network model by using the second target sample image set to obtain an image recognition model;
preferably, in an embodiment of the present invention, the deep learning network model may include a convolutional neural network model and the like.
Further, the training of the pre-constructed deep learning network model by using the second target sample image set in the embodiment of the present invention includes:
step A: performing convolution pooling operation on the second target sample image set according to preset convolution pooling times to obtain a feature set;
Step B: calculating the feature set by using a preset activation function to obtain a predicted value, obtaining the label value of the label area corresponding to each image in the second target sample image set, and calculating a loss value with a pre-constructed second loss function according to the predicted value and the label value;
in the embodiment of the present invention, the tag values and the tag areas are in one-to-one correspondence, for example: the label value corresponding to the label area is 1, and the label value corresponding to the non-label area is 0.
Step C: comparing the loss value with a second preset threshold value, and returning to Step A when the loss value is greater than or equal to the second preset threshold value; or, when the loss value is smaller than the second preset threshold value, stopping training to obtain the image recognition model.
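Steps A to C describe a loop that repeats feature extraction and loss calculation until the loss falls below the second preset threshold. A minimal sketch of that control flow, with a hypothetical loss curve standing in for the actual convolution, activation and loss computation:

```python
def train_until_threshold(step_fn, threshold, max_iters=1000):
    """Repeat steps A and B (step_fn computes one round of feature extraction
    and loss calculation) until step C's stopping condition loss < threshold."""
    for i in range(max_iters):
        loss = step_fn(i)
        if loss < threshold:
            return i, loss      # training stops; the model is ready
    raise RuntimeError("loss never fell below the second preset threshold")

# Hypothetical, monotonically decreasing loss curve standing in for training.
iterations, final_loss = train_until_threshold(lambda i: 1.0 / (i + 1), threshold=0.1)
```

The `max_iters` guard is an addition for the sketch; the patent itself only specifies the threshold comparison of step C.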
In detail, in the embodiment of the present invention, the performing of the convolution pooling operation on the second target sample image set to obtain the feature set includes: performing a convolution operation on the second target sample image set to obtain a first convolution data set; and performing a maximum pooling operation on the first convolution data set to obtain the feature set.
Further, the convolution operation satisfies:

ω′ = (ω − k + 2p)/f + 1

wherein ω′ represents the size of the first convolution data set, ω represents the size of the images in the second target sample image set, k is the size of a preset convolution kernel, f is the stride of the preset convolution operation, and p is the size of the preset zero padding.
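Assuming the standard convolution output-size relation ω′ = (ω − k + 2p)/f + 1, the sizes can be checked numerically (the 224-pixel input and 3×3 kernel below are illustrative values, not taken from the patent):

```python
def conv_output_size(w, k, f, p):
    # omega' = (omega - k + 2p) / f + 1 (integer division for odd remainders)
    return (w - k + 2 * p) // f + 1

# A 3x3 kernel with stride 1 and padding 1 preserves the input size;
# the same kernel with stride 2 roughly halves it.
same = conv_output_size(224, k=3, f=1, p=1)
half = conv_output_size(224, k=3, f=2, p=1)
```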
Further, in a preferred embodiment of the present invention, the preset activation function is a softmax function:

μ_t = e^{s_t} / Σ_j e^{s_j}

wherein μ_t represents the predicted value and s_t represents the t-th element of the feature set.
In detail, the second loss function according to the preferred embodiment of the present invention is a cross-entropy loss:

L_ce = −(1/N) Σ_{i=1}^{N} [y_i log(p_i) + (1 − y_i) log(1 − p_i)]

wherein L_ce represents the loss value, N is the number of images in the second target sample image set, i is a positive integer, y_i is the label value, and p_i is the predicted value.
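The cross-entropy loss above can be evaluated directly; a small sketch with hypothetical label and prediction values (iris-region pixels labelled 1, others 0, as in the label values described earlier):

```python
import math

def cross_entropy(labels, preds):
    """L_ce = -(1/N) * sum_i [ y_i*log(p_i) + (1-y_i)*log(1-p_i) ]."""
    total = sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, preds))
    return -total / len(labels)

# Iris-region samples carry label 1, non-iris samples label 0.
good = cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])   # close predictions, low loss
poor = cross_entropy([1, 0, 1], [0.5, 0.5, 0.5])   # uninformative predictions
```

Accurate predictions drive the loss toward 0, which is what lets step C's threshold comparison serve as the stopping condition.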
S6, when receiving a biological feature image to be recognized, performing feature extraction on the biological feature image to be recognized by using the image recognition model to obtain a feature vector;
in the embodiment of the invention, the biological characteristic image to be identified is an iris image needing to be identified.
In detail, in the embodiment of the present invention, the performing feature extraction on the biometric image to be recognized by using the image recognition model to obtain a feature vector includes: performing image recognition on the image to be recognized by using the image recognition model, and extracting the output value of the fully connected layer in the image recognition model after the image recognition is completed to obtain the feature vector.
Further, in the embodiment of the present invention, the extracting of the output value of the fully connected layer in the image recognition model after image recognition is completed to obtain the feature vector includes: extracting, in node order, the output values of all nodes of the fully connected layer in the image recognition model after image recognition is completed, and combining them vertically to obtain the feature vector. For example: the fully connected layer has three nodes, in order a first node, a second node and a third node; after image recognition, the output value of the first node is 1, that of the second node is 3, and that of the third node is 5; combining the three values 1, 3 and 5 vertically in node order yields the feature vector (1, 3, 5)^T.
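The vertical combination of fully-connected-layer outputs amounts to stacking the node outputs, in node order, into a column vector; a trivial sketch using the 1, 3, 5 example from the text:

```python
def stack_fc_outputs(node_outputs):
    # Combine fully-connected-layer node outputs vertically, in node order,
    # into a column vector (one single-element row per node).
    return [[v] for v in node_outputs]

feature_vector = stack_fc_outputs([1, 3, 5])   # the 1, 3, 5 example from the text
```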
S7, comparing and identifying the feature vector in a preset image feature vector library to obtain a recognition result.
In detail, in the embodiment of the present invention, referring to fig. 2, the comparing and identifying the feature vector in a preset image feature vector library to obtain an identification result includes:
s71, calculating the similarity between the feature vector and each image feature vector in the image feature vector library to obtain a corresponding similarity value;
preferably, the embodiment of the present invention calculates the similarity between the feature vector and each image feature vector in the image feature vector library by using a cosine similarity algorithm.
S72, summarizing all the similarity values to obtain a similarity value set;
S73, screening and comparing according to the similarity value set to obtain the recognition result.
In the embodiment of the invention, if a similarity value in the similarity value set is greater than or equal to a third preset threshold, the recognition result is that recognition has passed; if no similarity value in the similarity value set is greater than or equal to the third preset threshold, the recognition result is that recognition has failed. For example: the similarity value set contains three values, 0.6, 0.7 and 0.9. When the third preset threshold is 0.85, the similarity value 0.9 is greater than 0.85, and the recognition result is that recognition has passed; when the third preset threshold is 0.95, no similarity value is greater than or equal to 0.95, and the recognition result is that recognition has failed.
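The screening rule of step S73 reduces to a threshold test over the similarity value set; a sketch reproducing the worked example above:

```python
def recognition_result(similarity_values, threshold):
    # Recognition passes if any similarity value reaches the third preset threshold.
    return "pass" if any(s >= threshold for s in similarity_values) else "fail"

# The worked example: similarity values 0.6, 0.7 and 0.9.
r_085 = recognition_result([0.6, 0.7, 0.9], 0.85)   # 0.9 >= 0.85
r_095 = recognition_result([0.6, 0.7, 0.9], 0.95)   # nothing reaches 0.95
```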
Fig. 3 is a functional block diagram of the biometric image recognition apparatus according to the present invention.
The biometric image recognition apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the biometric image recognition apparatus may include a training sample generation module 101, a model training module 102, and an image recognition module 103, which may also be referred to as a unit, and refer to a series of computer program segments that can be executed by a processor of an electronic device and can perform fixed functions, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the training sample generation module 101 is configured to acquire a first training image set, train a pre-constructed generative adversarial network model by using the first training image set, and obtain a data enhancement model; acquire a first sample image set, and perform data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set; summarize the first sample image set and the second sample image set to obtain a first target sample image set; and perform data amplification on the first target sample image set to obtain a second target sample image set.
In an embodiment of the present invention, the first training image set is an iris image set, and the iris image is an eye image including an iris region.
Those skilled in the art will appreciate that a generative adversarial network model consists of two parts, a generator and a discriminator. During training, the generator generates images based on the first training image set, and the discriminator judges whether a given image is a false image produced by the generator or a corresponding real image from the first training image set. As training proceeds, the images produced by the generator approach the real images in the first training image set; the generator network at that point yields a generative model, which is used to produce new images and thus solves the problem of having too few image samples.
In detail, in the embodiment of the present invention, the training, by the training sample generation module 101, of the pre-constructed generative adversarial network model by using the first training image set includes: constructing a first loss function; performing alternating iterative training of the generator and the discriminator of the generative adversarial network model with the first training image set based on the first loss function; and when the value of the first loss function reaches a first preset threshold value, stopping training to obtain the data enhancement model;
wherein the first loss function is:
L_GAN = E_{x∼P(x)}[log D(x)] + E_{z∼P(z)}[log(1 − D(G(z)))]

wherein L_GAN is the pre-constructed adversarial loss function, z represents a preset random parameter variable, G is the generator in the generative adversarial network model, D is the discriminator in the generative adversarial network model, x is a real image from the first training image set, P(x) is the probability distribution of the real images, P(z) is the probability distribution of the random variable z from which the false images are generated, and E is the expectation operator.
In an embodiment of the present invention, the first set of sample images is a different set of iris images than the first set of training images.
Since the first sample image set is small and its images are not easy to obtain, the training sample generation module 101 in the embodiment of the present invention processes all images in the first sample image set by using the data enhancement model to obtain the second sample image set. For example: the first sample image set comprises images a and b; inputting image a into the data enhancement model yields image A, inputting image b yields image B, and summarizing images A and B yields the second sample image set.
The training sample generation module 101 summarizes the first sample image set and the second sample image set to obtain a first target sample image set;
in the embodiment of the present invention, in order to improve the robustness of a subsequent model, the training sample generation module 101 performs data amplification on the first target sample image set.
In detail, in an embodiment of the present invention, the performing, by the training sample generation module 101, of data amplification on the first target sample image set includes: performing translation, flipping and color adjustment operations on all images in the first target sample image set to obtain an amplified image set; summarizing the amplified image set and the first target sample image set; and marking the label area of the images in the summarized image set to obtain the second target sample image set. In the embodiment of the present invention, the label region is the iris region; preferably, label region marking may be performed manually with the LabelMe image annotation tool.
In another embodiment of the present invention, in order to ensure data privacy, the second target sample image set may be stored in blockchain nodes.
The model training module 102 is configured to train a pre-constructed deep learning network model by using the second target sample image set, so as to obtain an image recognition model.
Preferably, in an embodiment of the present invention, the deep learning network model may include a convolutional neural network model and the like.
Further, the model training module 102 according to the embodiment of the present invention trains the pre-constructed deep learning network model as follows:
step A: performing convolution pooling operation on the second target sample image set according to preset convolution pooling times to obtain a feature set;
Step B: calculating the feature set by using a preset activation function to obtain a predicted value, obtaining a label value of the label area corresponding to each image in the second target sample image set, and calculating a loss value by using a pre-constructed second loss function according to the predicted value and the label value;
in the embodiment of the present invention, the label values and the label areas are in one-to-one correspondence, for example: the label value corresponding to the label area is 1, and the label value corresponding to a non-label area is 0.
Step C: comparing the loss value with a second preset threshold, and returning to step A when the loss value is greater than or equal to the second preset threshold; or stopping training when the loss value is smaller than the second preset threshold, to obtain the image recognition model.
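Steps A to C amount to a loop that repeats feature extraction and loss calculation until the loss falls below the second preset threshold. A schematic sketch only: the halving "loss" is synthetic, standing in for the actual convolution pooling and second loss function, and the `max_epochs` safety cap is not part of the description:

```python
def train_until_converged(step_fn, loss_fn, threshold, max_epochs=100):
    """Repeat step A (step_fn: convolution pooling) and step B (loss_fn:
    activation + second loss function) until step C's comparison with
    the second preset threshold stops training."""
    loss, epochs = float("inf"), 0
    while loss >= threshold and epochs < max_epochs:
        step_fn()         # step A: feature extraction
        loss = loss_fn()  # step B: loss calculation
        epochs += 1
    return loss, epochs   # step C satisfied (or cap reached)

# Synthetic stand-ins: each epoch halves the loss.
state = {"loss": 1.0}
final_loss, epochs = train_until_converged(
    step_fn=lambda: state.update(loss=state["loss"] / 2),
    loss_fn=lambda: state["loss"],
    threshold=0.1,
)
print(final_loss, epochs)  # 0.0625 4
```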
In detail, in the embodiment of the present invention, the model training module 102 performs the convolution pooling operation on the second target sample image set to obtain the feature set, which includes: performing a convolution operation on the second target sample image set to obtain a first convolution data set; and performing a maximum pooling operation on the first convolution data set to obtain the feature set.
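The maximum pooling operation can be illustrated on a toy 4x4 feature map. A 2x2 window with stride 2 is a common choice, though the description does not fix the window size:

```python
def max_pool_2x2(feature_map):
    """2x2 maximum pooling with stride 2 over a feature map given as a
    list of rows: each output cell is the maximum of one 2x2 tile."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [max(feature_map[i][j], feature_map[i][j + 1],
             feature_map[i + 1][j], feature_map[i + 1][j + 1])
         for j in range(0, w - 1, 2)]
        for i in range(0, h - 1, 2)
    ]

pooled = max_pool_2x2([[1, 2, 5, 6],
                       [3, 4, 7, 8],
                       [9, 10, 13, 14],
                       [11, 12, 15, 16]])
print(pooled)  # [[4, 8], [12, 16]]
```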
Further, the convolution operation is:
ω' = (ω - k + 2p) / f + 1
where ω' represents the number of channels of the first convolution data set, ω represents the number of channels of the second target sample image set, k is the size of the preset convolution kernel, f is the stride of the preset convolution operation, and p is the preset zero padding.
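Assuming the convolution relation is the standard output-size formula ω' = (ω - k + 2p)/f + 1 (an inference from the symbol definitions above, since the original equation image is not reproduced here), it can be checked numerically:

```python
def conv_output_size(w, k, f, p):
    """w' = (w - k + 2p) / f + 1, with w the input size, k the kernel
    size, f the stride, and p the zero padding. Integer division here
    assumes the parameters divide evenly."""
    return (w - k + 2 * p) // f + 1

print(conv_output_size(28, k=3, f=1, p=1))  # 28: "same" padding keeps size
print(conv_output_size(32, k=5, f=1, p=0))  # 28
```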
Further, in a preferred embodiment of the present invention, the preset activation function is:
μt = 1 / (1 + e^(-s))
where μt represents the predicted value, and s represents data in the feature set.
In detail, the second loss function according to the preferred embodiment of the present invention is:
Lce = -(1/N) Σᵢ [yᵢ log(pᵢ) + (1 - yᵢ) log(1 - pᵢ)], i = 1 … N
where Lce represents the loss value, N is the number of images in the second target sample image set, i is a positive integer, yᵢ is the label value, and pᵢ is the predicted value.
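Assuming the second loss function is the standard binary cross-entropy (consistent with the 0/1 label values defined above), a direct sketch in pure Python — frameworks such as PyTorch provide the same computation as `BCELoss`:

```python
import math

def binary_cross_entropy(label_values, predicted_values):
    """Second loss function Lce: mean binary cross-entropy over N
    samples, with y_i the label value and p_i the predicted value."""
    n = len(label_values)
    return -sum(
        y * math.log(p) + (1 - y) * math.log(1 - p)
        for y, p in zip(label_values, predicted_values)
    ) / n

# Label region (1) vs. non-label region (0) with confident predictions:
loss = binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])
print(round(loss, 4))  # 0.1446
```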
The image recognition module 103 is configured to, when a biometric image to be recognized is received, perform feature extraction on the biometric image to be recognized by using the image recognition model to obtain a feature vector; and compare and identify the feature vector in a preset image feature vector library to obtain an identification result.
In the embodiment of the invention, the biological characteristic image to be identified is an iris image needing to be identified.
In detail, in the embodiment of the present invention, the feature extraction performed by the image recognition module 103 on the biometric image to be recognized by using the image recognition model to obtain the feature vector includes: performing image recognition on the image to be recognized by using the image recognition model, and, after the image recognition is completed, extracting the output values of the fully connected layer in the image recognition model to obtain the feature vector.
Further, in this embodiment of the present invention, the image recognition module 103 obtains the feature vector as follows: after the image recognition is completed, the output values of all nodes of the fully connected layer in the image recognition model are extracted in node order and combined longitudinally to obtain the feature vector. For example: the fully connected layer has 3 nodes, which are a first node, a second node, and a third node in sequence; after image recognition, the output value of the first node is 1, the output value of the second node is 3, and the output value of the third node is 5; the three feature values 1, 3, and 5 are combined longitudinally in node order to obtain the feature vector (1, 3, 5)ᵀ.
In detail, the image recognition module 103 in the embodiment of the present invention obtains the identification result as follows:
calculating the similarity between the feature vector and each image feature vector in the image feature vector library to obtain a corresponding similarity value;
preferably, the embodiment of the present invention calculates the similarity between the feature vector and each image feature vector in the image feature vector library by using a cosine similarity algorithm.
Summarizing all the similarity values to obtain a similarity value set;
and screening and comparing according to the similarity value set to obtain the identification result.
In the embodiment of the invention, if any similarity value in the similarity value set is greater than or equal to a third preset threshold, the identification result is that identification succeeds; if no similarity value in the similarity value set is greater than or equal to the third preset threshold, the identification result is that identification fails. For example: the similarity value set contains three similarity values, 0.6, 0.7, and 0.9. When the third preset threshold is 0.85, the similarity value 0.9 is greater than 0.85, and the identification result is that identification succeeds; when the third preset threshold is 0.95, no similarity value is greater than or equal to 0.95, and the identification result is that identification fails.
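The comparison against the image feature vector library and the screening against the third preset threshold can be sketched with a pure-Python cosine similarity (libraries such as scikit-learn offer an equivalent `cosine_similarity`):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recognize(query, vector_library, threshold):
    """Screen the similarity value set against the third preset
    threshold: identification succeeds if any similarity meets it."""
    similarity_values = [cosine_similarity(query, v) for v in vector_library]
    return max(similarity_values) >= threshold, similarity_values

passed, sims = recognize(
    [1.0, 3.0, 5.0],
    [[1.0, 3.0, 5.0], [5.0, 3.0, 1.0]],
    threshold=0.85,
)
print(passed)  # True: the identical library vector scores 1.0
```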
Fig. 4 is a schematic structural diagram of an electronic device implementing the biometric image recognition method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a biometric image recognition program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including a flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the biometric image recognition program, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., biometric image recognition programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 4 only shows an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The biometric image recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of computer programs that, when executed by the processor 10, can implement:
acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model;
acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set;
summarizing the first sample image set and the second sample image set to obtain a first target sample image set;
performing data amplification on the first target sample image set to obtain a second target sample image set;
training a pre-constructed deep learning network model by using the second target sample image set to obtain an image recognition model;
when receiving a biological characteristic image to be identified, performing characteristic extraction on the biological characteristic image to be identified by using the image identification model to obtain a characteristic vector;
and comparing and identifying the characteristic vectors in a preset image characteristic vector library to obtain an identification result.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), and the like.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A biometric image recognition method, the method comprising:
acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model;
acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set;
summarizing the first sample image set and the second sample image set to obtain a first target sample image set;
performing data amplification on the first target sample image set to obtain a second target sample image set;
training a pre-constructed deep learning network model by using the second target sample image set to obtain an image recognition model;
when receiving a biological characteristic image to be identified, performing characteristic extraction on the biological characteristic image to be identified by using the image identification model to obtain a characteristic vector;
and comparing and identifying the characteristic vectors in a preset image characteristic vector library to obtain an identification result.
2. The biometric image recognition method of claim 1, wherein training a pre-constructed generative adversarial network model with the first training image set to obtain a data enhancement model comprises:
constructing a first loss function;
performing alternating iterative training of a generator and a discriminator on the generative adversarial network model with the first training image set based on the first loss function;
and when the value of the first loss function reaches a first preset threshold value, stopping training to obtain the data enhancement model.
3. The biometric image recognition method of claim 1, wherein the data amplification of the first target sample image set to obtain a second target sample image set comprises:
performing translation, flipping, and color adjustment operations on all images in the first target sample image set to obtain an amplified image set;
summarizing the augmented image set and the first target sample image set;
and marking the label area of the images in the summarized image set to obtain the second target sample image set.
4. The biometric image recognition method of claim 3, wherein the training of the pre-constructed deep learning network model with the second target sample image set to obtain the image recognition model comprises:
a characteristic extraction step: performing convolution pooling operation on the second target sample image set according to preset convolution pooling times to obtain a feature set;
and a loss calculation step: calculating the feature set by using a preset activation function to obtain a predicted value, obtaining a label value of the label area corresponding to each image in the second target sample image set, and calculating by using a pre-constructed second loss function according to the predicted value and the label value to obtain a loss value;
training and judging: comparing the loss value with a second preset threshold value, and returning to the feature extraction step when the loss value is greater than or equal to the second preset threshold value; or when the loss value is smaller than the second preset threshold value, stopping training to obtain the image recognition model.
5. The method for recognizing the biometric image according to claim 1, wherein the extracting the features of the biometric image to be recognized by using the image recognition model to obtain the feature vector comprises:
carrying out image recognition on the image to be recognized by utilizing the image recognition model;
and extracting the output value of the full-connection layer in the image recognition model after the image recognition is finished to obtain the characteristic vector.
6. The method for recognizing a biometric image according to claim 1, wherein the comparing and recognizing the feature vector in a preset image feature vector library to obtain a recognition result comprises:
calculating the similarity between the feature vector and each image feature vector in the image feature vector library to obtain a corresponding similarity value;
summarizing all the similarity values to obtain a similarity value set;
and screening and comparing according to the similarity value set to obtain the identification result.
7. The method of claim 6, wherein the screening and comparing according to the similarity value set to obtain the recognition result comprises:
if the similarity value in the similarity value set is greater than or equal to a third preset threshold value, the identification result is that the identification is passed; or
and if no similarity value in the similarity value set is greater than or equal to the third preset threshold, the identification result is that the identification fails.
8. A biometric image recognition apparatus, comprising:
the training sample generation module is used for acquiring a first training image set, and training a pre-constructed generative adversarial network model by using the first training image set to obtain a data enhancement model; acquiring a first sample image set, and performing data enhancement on the first sample image set by using the data enhancement model to obtain a second sample image set; summarizing the first sample image set and the second sample image set to obtain a first target sample image set; and performing data amplification on the first target sample image set to obtain a second target sample image set;
the model training module is used for training a pre-constructed deep learning network model by utilizing the second target sample image set to obtain an image recognition model;
the image recognition module is used for performing feature extraction on the biological feature image to be recognized by using the image recognition model when the biological feature image to be recognized is received to obtain a feature vector; and comparing and identifying the characteristic vectors in a preset image characteristic vector library to obtain an identification result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the biometric image recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the biometric image recognition method according to any one of claims 1 to 7.
CN202011322604.XA 2020-11-23 2020-11-23 Biological characteristic image recognition method and device, electronic equipment and readable storage medium Pending CN112396005A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011322604.XA CN112396005A (en) 2020-11-23 2020-11-23 Biological characteristic image recognition method and device, electronic equipment and readable storage medium
PCT/CN2021/097072 WO2022105179A1 (en) 2020-11-23 2021-05-30 Biological feature image recognition method and apparatus, and electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011322604.XA CN112396005A (en) 2020-11-23 2020-11-23 Biological characteristic image recognition method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112396005A true CN112396005A (en) 2021-02-23

Family

ID=74606952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011322604.XA Pending CN112396005A (en) 2020-11-23 2020-11-23 Biological characteristic image recognition method and device, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN112396005A (en)
WO (1) WO2022105179A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022105179A1 (en) * 2020-11-23 2022-05-27 平安科技(深圳)有限公司 Biological feature image recognition method and apparatus, and electronic device and readable storage medium
WO2023155299A1 (en) * 2022-02-21 2023-08-24 平安科技(深圳)有限公司 Image enhancement processing method and apparatus, computer device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035463B (en) * 2022-08-09 2023-01-17 阿里巴巴(中国)有限公司 Behavior recognition method, behavior recognition device, behavior recognition equipment and storage medium
CN116052141B (en) * 2023-03-30 2023-06-27 北京市农林科学院智能装备技术研究中心 Crop growth period identification method, device, equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921123A (en) * 2018-07-17 2018-11-30 重庆科技学院 A kind of face identification method based on double data enhancing
US20200335086A1 (en) * 2019-04-19 2020-10-22 Behavioral Signal Technologies, Inc. Speech data augmentation
CN110889457B (en) * 2019-12-03 2022-08-19 深圳奇迹智慧网络有限公司 Sample image classification training method and device, computer equipment and storage medium
CN111666994A (en) * 2020-05-28 2020-09-15 平安科技(深圳)有限公司 Sample image data enhancement method and device, electronic equipment and storage medium
CN112396005A (en) * 2020-11-23 2021-02-23 平安科技(深圳)有限公司 Biological characteristic image recognition method and device, electronic equipment and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022105179A1 (en) * 2020-11-23 2022-05-27 平安科技(深圳)有限公司 Biological feature image recognition method and apparatus, and electronic device and readable storage medium
WO2023155299A1 (en) * 2022-02-21 2023-08-24 平安科技(深圳)有限公司 Image enhancement processing method and apparatus, computer device and storage medium

Also Published As

Publication number Publication date
WO2022105179A1 (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN112396005A (en) Biological characteristic image recognition method and device, electronic equipment and readable storage medium
CN112446025A (en) Federal learning defense method and device, electronic equipment and storage medium
CN111932562B (en) Image identification method and device based on CT sequence, electronic equipment and medium
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN112581227A (en) Product recommendation method and device, electronic equipment and storage medium
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN113705462B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN113704614A (en) Page generation method, device, equipment and medium based on user portrait
CN113961473A (en) Data testing method and device, electronic equipment and computer readable storage medium
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN111476225B (en) In-vehicle human face identification method, device, equipment and medium based on artificial intelligence
CN112668575A (en) Key information extraction method and device, electronic equipment and storage medium
CN113157739A (en) Cross-modal retrieval method and device, electronic equipment and storage medium
CN112329666A (en) Face recognition method and device, electronic equipment and storage medium
CN111814743A (en) Handwriting recognition method and device and computer readable storage medium
CN112580505B (en) Method and device for identifying network point switch door state, electronic equipment and storage medium
CN113255456B (en) Inactive living body detection method, inactive living body detection device, electronic equipment and storage medium
CN114708461A (en) Multi-modal learning model-based classification method, device, equipment and storage medium
CN114996386A (en) Business role identification method, device, equipment and storage medium
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN112561893A (en) Picture matching method and device, electronic equipment and storage medium
CN114187476A (en) Vehicle insurance information checking method, device, equipment and medium based on image analysis
CN113627394A (en) Face extraction method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination