CN113298060A - Privacy-protecting biometric feature recognition method and device - Google Patents


Info

Publication number
CN113298060A
Authority
CN
China
Prior art keywords
image
feature
face
mask plate
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110849620.2A
Other languages
Chinese (zh)
Other versions
CN113298060B (en)
Inventor
胡永恒
马晨光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202111407775.7A priority Critical patent/CN114140823A/en
Priority to CN202110849620.2A priority patent/CN113298060B/en
Publication of CN113298060A publication Critical patent/CN113298060A/en
Application granted granted Critical
Publication of CN113298060B publication Critical patent/CN113298060B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The embodiments of this specification provide a privacy-preserving biometric recognition method and apparatus. One embodiment of the method comprises: first, acquiring an original image captured by a lensless imaging device for a biometric feature, where the lensless imaging device comprises a mask plate and an image sensor, and the original image is collected by the image sensor after the light has been modulated by the mask plate; second, reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image; next, performing feature extraction on the reconstructed image using a pre-trained feature extraction network to obtain a feature representation of the reconstructed image; and finally, performing biometric recognition on the reconstructed image based on the feature representation.

Description

Privacy-protecting biometric feature recognition method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for biometric identification for privacy protection.
Background
As biometric recognition technology has matured and been deployed at scale, it has brought great convenience to daily life. Common applications such as fingerprint payment, fingerprint attendance, face-scan payment, face-scan attendance, and face-recognition access control make life more convenient. At the same time, however, people are increasingly concerned about their privacy. Cameras are everywhere and can collect biometric information, and once that information is leaked, privacy can be seriously compromised. Privacy protection is therefore an essential part of biometric recognition technology.
Disclosure of Invention
One or more embodiments of this specification describe a privacy-preserving biometric recognition method and apparatus. A lensless imaging device captures an image of a biometric feature, producing an original image that is not recognizable to the naked eye; the biometric feature is then recognized based on this original image. This enables privacy-preserving biometric recognition and improves the security of biometric information.
According to a first aspect, there is provided a privacy preserving biometric method comprising: acquiring an original image acquired by a lens-free imaging device aiming at biological characteristics, wherein the lens-free imaging device comprises a mask plate and an image sensor, and the original image is an image acquired by the image sensor after being modulated by the mask plate; reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image; extracting the characteristics of the reconstructed image by using a pre-trained characteristic extraction network to obtain the characteristic representation of the reconstructed image; and performing biometric recognition on the reconstructed image based on the feature representation.
In one embodiment, the above method is performed in a trusted execution environment.
In one embodiment, the pixels of the mask plate are aligned with the pixels of the image sensor, and modulation parameters are correspondingly set at the positions of the pixels of the mask plate; and reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image, including: determining a reconstruction matrix for reconstructing an image according to the modulation parameters of each pixel position of the mask plate; and reconstructing the original image according to the pixel value of the original image and the reconstruction matrix to obtain a reconstructed image.
In one embodiment, the mask plate is provided with a pattern, and the pattern is randomly generated and includes a light-transmitting portion and a light-proof portion, wherein the modulation parameter of the pixel position of the light-transmitting portion is 1, and the modulation parameter of the pixel position of the light-proof portion is 0.
In one embodiment, the biometric feature is a human face; and the biometric recognition of the reconstructed image based on the feature representation includes: acquiring a face feature template for verification, wherein the face feature template is obtained by performing feature extraction on a face image for verification by using the feature extraction network; and determining whether the reconstructed image and the verification face image indicate the same person or not according to the feature representation and the face feature template.
In one embodiment, the biometric feature is a human face; and the biometric recognition of the reconstructed image based on the feature representation includes: acquiring each feature template corresponding to each face image in a face image set, wherein each feature template is obtained by performing feature extraction on each face image in the face image set by using the feature extraction network; and identifying a face image indicated as the same person as the reconstructed image from the face image set according to the feature representation and the feature templates.
In one embodiment, the feature extraction network is trained by: acquiring a sample set, wherein samples of the sample set comprise sample images and class identifications corresponding to the sample images, and the sample images are images obtained by reconstructing images acquired by a lens-free imaging device; taking the sample image as input, taking a class identifier corresponding to the input sample image as expected output, and training a classifier to be trained, wherein the classifier to be trained comprises a neural network for extracting image features; and taking the neural network in the trained classifier as the feature extraction network.
In one embodiment, the biometric characteristic comprises one of: face, fingerprint, iris, and palm print.
According to a second aspect, there is provided a privacy-preserving biometric apparatus comprising: the device comprises an acquisition unit, a display unit and a processing unit, wherein the acquisition unit is configured to acquire an original image acquired by a lens-free imaging device for biological characteristics, the lens-free imaging device comprises a mask plate and an image sensor, and the original image is an image acquired by the image sensor after being modulated by the mask plate; the reconstruction unit is configured to reconstruct the original image based on the modulation parameters of the mask plate to obtain a reconstructed image; the feature extraction unit is configured to perform feature extraction on the reconstructed image by using a pre-trained feature extraction network to obtain feature representation of the reconstructed image; and the identification unit is configured to perform biological feature identification on the reconstructed image based on the feature representation.
According to a third aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method as described in any one of the implementations of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory stores executable code and the processor, when executing the executable code, implements the method as described in any implementation of the first aspect.
According to the privacy-preserving biometric recognition method and apparatus provided in the embodiments of this specification, an original image that is not recognizable to the naked eye is first acquired from the lensless imaging device, the original image is then reconstructed to obtain a reconstructed image, and finally biometric recognition is performed based on the feature representation of the reconstructed image.
Drawings
FIG. 1 illustrates a flow diagram of a privacy preserving biometric method according to one embodiment;
FIG. 2 shows a schematic diagram of imaging by a lensless imaging device;
FIG. 3 shows a schematic view of a mask plate in one example;
FIG. 4 shows a schematic diagram of a real face, an original face image, and a reconstructed face image in one example;
FIG. 5 shows a schematic diagram of an object point being modulated by a mask plate to form an image on an image sensor;
fig. 6 shows a schematic block diagram of a privacy preserving biometric identification apparatus according to one embodiment.
Detailed Description
The technical solutions provided in the present specification are described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. It should be noted that the embodiments and features of the embodiments in the present specification may be combined with each other without conflict.
FIG. 1 shows a flow diagram of a privacy-preserving biometric recognition method according to one embodiment. It is to be appreciated that the method can be performed by any apparatus, device, platform, or device cluster having computing and processing capabilities. As shown in fig. 1, the privacy-preserving biometric recognition method includes the following steps:
step 101, acquiring an original image acquired by a lens-free imaging device for biological characteristics.
In the present embodiment, the execution subject performing the privacy-preserving biometric recognition method may acquire, from a lensless imaging device, an original image captured by that device for a biometric feature. Here, the lensless imaging device may include a mask plate and an image sensor, and the original image, collected by the image sensor after modulation by the mask plate, is not recognizable to the naked eye.
Lensless imaging is a technique that replaces the conventional lens with a special template to achieve indirect imaging. Its principle is as follows: a mask plate is placed in front of the photosensitive surface of the image sensor, and the two are used in combination. As an example, fig. 2 shows a schematic diagram of imaging by a lensless imaging device. In fig. 2, the lensless imaging device includes a mask plate 201 and an image sensor 202. Light emitted (or reflected) by a subject 203 is modulated by the mask plate 201 and then projected onto the image sensor 202, which captures an original image 204. The original image 204 is not recognizable to the naked eye. After a computer acquires the original image data from the image sensor, it decodes (reconstructs) the data using digital image processing to restore an image of the subject. Because the original image captured by the lensless imaging device is not recognizable to the naked eye, and the modulation parameters of the mask plate are required to reconstruct a subject image from it, even if an unauthorized person obtains the original image from the device, it is difficult to find the corresponding reconstruction method; the subject's information therefore cannot be recovered, achieving the goal of privacy protection.
In practice, the mask plate in the lensless imaging device may be fabricated by plating a metal film onto a glass substrate and then etching a pattern into it. The mask plate may take a variety of forms, including but not limited to Fresnel zone plates, URA (Uniform Redundant Array) patterns, and the like.
In some optional implementations of the embodiment, the biometric features involved in biometric recognition in the embodiment of the present application may include one of: face, fingerprint, iris, palm print, etc.
In some optional implementations of this embodiment, the mask plate in the lensless imaging device carries a pattern that may be randomly generated and that includes a light-transmitting portion and an opaque portion. The modulation parameter at each pixel position of the light-transmitting portion may be set to 1, and at each pixel position of the opaque portion to 0. For example, a mask plate with a randomly generated pattern may look like fig. 3, which shows a white light-transmitting portion and a black opaque portion. It should be understood that fig. 3 only illustrates that the mask plate contains light-transmitting and opaque portions and does not limit the pattern; in practice, the pattern on the mask plate may vary widely. Because the pattern is randomly generated, different lensless imaging devices may correspond to different mask plates, so the modulation parameters used to reconstruct the original images captured by different devices differ. This increases the difficulty for an unauthorized person to reconstruct the original image and further improves the security of the biometric information.
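As a minimal sketch, a randomly generated binary pattern of this kind can be produced as follows. The 640×480 size, the 50/50 transmit ratio, and the seed are illustrative assumptions, not values from the patent.

```python
import numpy as np

def generate_mask_pattern(height, width, transmit_ratio=0.5, seed=None):
    """Random binary mask pattern: 1 = light-transmitting, 0 = opaque.

    `transmit_ratio` and `seed` are illustrative (hypothetical) parameters.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < transmit_ratio).astype(np.uint8)

# A different seed per device yields a different mask plate, so the
# modulation parameters needed for reconstruction differ per device.
mask = generate_mask_pattern(480, 640, seed=42)
```

Each entry of `mask` plays the role of the modulation parameter at the corresponding pixel position of the mask plate.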
And 102, reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image.
In this embodiment, the execution subject may reconstruct the original image based on the modulation parameters of the mask plate of the lensless imaging device to obtain a reconstructed image, which is recognizable to the naked eye. Referring to fig. 4 as an example: when the biometric feature is a human face, the lensless imaging device may capture an image of a real human face 401 to obtain an original face image 402; as shown in the figure, the original face image 402 is not recognizable to the naked eye. Reconstructing the original face image 402 yields a reconstructed face image 403, which is recognizable to the naked eye. Here, the modulation parameters of the mask plate refer to the modulation weights applied to the light at each pixel position of the mask plate.
In some alternative implementations of the present embodiment, the mask plate of the lensless imaging device is pixel-aligned with the image sensor. For example, the mask plate may have the same size and pixel layout as the image sensor: if the image sensor produces a 640×480-pixel image, the mask plate may likewise be 640×480 pixels. Pixel alignment is achieved by adjusting the position of the mask plate relative to the image sensor, for example so that the top-left pixel of the mask plate corresponds to the top-left pixel of the image sensor. In addition, a modulation parameter is set at each pixel position of the mask plate.
The above step 102 may also be performed as follows:
step S1 is to determine a reconstruction matrix for reconstructing the image according to the modulation parameters at each pixel position of the mask.
In this implementation, the execution subject may determine a reconstruction matrix for reconstructing the image from the modulation parameters at each pixel position of the mask plate. When the pixels of the mask plate and the image sensor are aligned, lensless imaging based on mask-plate modulation can be expressed with a mathematical model: the imaging value A' of an object point A after modulation by the mask plate is the linear sum of the products of the light from the object points and the corresponding modulation parameters on the mask plate. As shown in fig. 5, the same holds for multiple object points, such as object points A, B, and C. The following system of equations can be derived:
Let xA, xB, and xC denote the light intensities of object points A, B, and C, let φ1, φ2, φ3, φ4, and φ5 denote the modulation parameters corresponding to pixel points 1, 2, 3, 4, and 5 on the mask plate, and let b1, b2, and b3 denote the pixel values of the original image collected after modulation by the mask plate. Each sensor pixel receives light from every object point, weighted by the modulation parameter of the mask-plate pixel that the light passes through; one such system of equations (the exact pairing of parameters and object points depends on the geometry shown in fig. 5) is:

b1 = φ1·xA + φ2·xB + φ3·xC
b2 = φ2·xA + φ3·xB + φ4·xC
b3 = φ3·xA + φ4·xB + φ5·xC

This system can be written in matrix form as

b = Φ·x

where b = (b1, b2, b3)ᵀ is the vector of original-image pixel values, Φ is the matrix of modulation parameters, and x = (xA, xB, xC)ᵀ is the vector of object-point intensities. The inverse Φ⁻¹ is the reconstruction matrix: it is determined from the modulation parameters at each pixel position of the mask plate and is used to reconstruct the image. Since b and Φ are known, solving x = Φ⁻¹·b is the reconstruction process.
It should be understood that the above reconstruction process only illustrates the principle of reconstructing the original image captured by the lensless imaging device and does not limit the reconstruction process. Since an actual shooting scene may contain many more object points, reconstruction in a real application may be more complicated than described above; the details are not repeated here.
And step S2, reconstructing the original image according to the pixel value of the original image and the reconstruction matrix to obtain a reconstructed image.
In this implementation, the execution subject may reconstruct the original image according to the pixel values of the original image and the reconstruction matrix, so as to obtain a reconstructed image. The reconstructed image is an image recognizable to the naked eye.
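Steps S1 and S2 can be sketched in a few lines of NumPy. This is a minimal illustration under assumed dimensions, not the patent's implementation; the scene size, the random 0/1 modulation matrix, and the use of a pseudo-inverse are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                  # illustrative number of object points
scene = rng.random(n)                   # object-point intensities x (ground truth)

# Phi: modulation matrix built from the mask plate's 0/1 modulation parameters;
# entry (i, j) weights the light from object point j arriving at sensor pixel i.
phi = rng.integers(0, 2, size=(n, n)).astype(float)

# Forward model: the raw sensor image b = Phi @ x is not naked-eye recognizable.
raw = phi @ scene

# Step S1: derive the reconstruction matrix from the modulation parameters.
recon_matrix = np.linalg.pinv(phi)      # pinv also tolerates an ill-conditioned Phi

# Step S2: reconstruct the image from the original-image pixel values.
reconstructed = recon_matrix @ raw
```

When Φ is invertible, `reconstructed` recovers `scene`; an attacker who holds only `raw` but not `phi` cannot perform step S2, which is the privacy argument made above.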
And 103, performing feature extraction on the reconstructed image by using a pre-trained feature extraction network to obtain feature representation of the reconstructed image.
In this embodiment, a feature extraction network trained in advance for extracting image features may be stored on the execution subject. For example, the feature extraction network may be any neural network with an image feature extraction capability; such a network may include convolutional layers, pooling layers, nonlinear activation layers, fully connected layers, and other layer structures. The execution subject may then use this feature extraction network to perform feature extraction on the reconstructed image obtained in step 102, yielding a feature representation of the reconstructed image.
In some optional implementations of this embodiment, the feature extraction network may be obtained by training the execution subject or other execution subjects for training the feature extraction network by:
1) a sample set is obtained.
In this implementation, the execution subject for training the feature extraction network may obtain a sample set. Each sample in the set may include a sample image and a corresponding class identifier. Here, a sample image may be an image obtained by reconstructing an image captured by a lensless imaging device, and its class identifier indicates the class of the object it shows. To make the features extracted by the trained network more accurate and effective, the sample images may be images of biometric features. For example, if the biometric feature involved in the privacy-preserving recognition method is a human face, the sample images may be face images obtained by reconstructing images captured by a lensless imaging device, and the class identifier of a face image may be the user ID of the person it shows.
2) And taking the sample image as input, taking the class identification corresponding to the input sample image as expected output, and training the classifier to be trained.
In this implementation, the execution subject for training the feature extraction network may input the sample images into the classifier to be trained to obtain class prediction results, take the class identifiers in the samples as the expected outputs, and train the classifier using machine learning. The classifier to be trained includes a neural network for extracting image features. Concretely, the training process may first compute the difference between the predicted class and the class identifier in the sample using a preset loss function, then adjust the network parameters of the classifier based on that difference, ending training once a preset stopping condition is met. The stopping condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed difference falls below a preset threshold.
Here, various approaches may be used to adjust the network parameters of the classifier to be trained based on the difference between the class prediction result and the class identifier in the sample; for example, the BP (Back Propagation) algorithm or the SGD (Stochastic Gradient Descent) algorithm may be used.
3) And taking the neural network in the trained classifier as a feature extraction network.
In this implementation, the executing entity for training the feature extraction network may use the neural network in the trained classifier as the feature extraction network for performing feature extraction on the reconstructed image, which is used in the privacy-preserving biometric identification method.
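The three training steps above can be sketched with a tiny NumPy stand-in: a two-layer classifier (a hidden feature layer plus a softmax classification layer) is trained on synthetic "reconstructed images", and the hidden layer is then kept as the feature extraction network. All sizes, the learning rate, and the synthetic data are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the sample set: 100 flattened "reconstructed images"
# from 4 classes (e.g. 4 user IDs), clustered around per-class centers.
n, d, h, c = 100, 64, 16, 4
labels = rng.integers(0, c, n)
centers = rng.normal(size=(c, d))
images = centers[labels] + 0.1 * rng.normal(size=(n, d))

# Classifier to be trained = feature-extraction layer + softmax layer.
w1 = 0.1 * rng.normal(size=(d, h))       # feature-extraction weights
w2 = 0.1 * rng.normal(size=(h, c))       # classification weights

def extract_features(x):
    return np.tanh(x @ w1)               # hidden activations = feature representation

def forward():
    feats = extract_features(images)
    logits = feats @ w2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    nll = -np.mean(np.log(p[np.arange(n), labels]))  # cross-entropy loss
    return nll, p, feats

initial_loss, _, _ = forward()
lr = 0.1
for _ in range(500):                     # full-batch gradient descent for brevity
    _, p, feats = forward()
    g_logits = (p - np.eye(c)[labels]) / n           # d(loss)/d(logits)
    g_hidden = (g_logits @ w2.T) * (1.0 - feats ** 2)  # backprop through tanh
    w2 -= lr * feats.T @ g_logits
    w1 -= lr * images.T @ g_hidden

final_loss, p, _ = forward()
accuracy = float(np.mean(p.argmax(axis=1) == labels))
# After training, w2 is discarded: extract_features (i.e. the hidden layer)
# is kept as the feature extraction network.
```

The stopping condition here is a fixed iteration count, one of the options listed above; in practice any of the listed conditions may be used.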
In this implementation, the sample images used to train the classifier are images obtained by reconstructing images captured by a lensless imaging device; that is, they are generated in the same way as the reconstructed image to be recognized. Using a feature extraction network trained on such sample images to extract features from the reconstructed image therefore yields a more accurate feature representation, and in turn a more accurate biometric recognition result.
Returning to fig. 1, after the feature representation has been extracted by the trained feature extraction network, step 104 performs biometric recognition on the reconstructed image based on that feature representation.
In this embodiment, the executing subject may perform biometric recognition on the reconstructed image based on the feature representation of the reconstructed image obtained in step 103, so as to obtain a biometric recognition result.
In some optional implementations of the present embodiment, the biometric recognition involves a 1:1 comparison against a particular subject. Taking a human face as the biometric feature, step 104 may be performed as follows:
first, a face feature template for verification is acquired.
In this implementation, the executing agent may obtain the face feature template for verification. The face feature template for verification is obtained by extracting the features of a preset face image for verification by using the feature extraction network.
Then, it is determined whether the reconstructed image and the face image for verification indicate the same person based on the feature representation and the face feature template.
In this implementation, the execution subject may determine whether the reconstructed image and the verification face image indicate the same person based on the feature representation of the reconstructed image and the verification face feature template. For example, the execution subject may compute the similarity between the feature representation and the face feature template; when the similarity exceeds a preset threshold, the reconstructed image and the verification face image may be determined to indicate the same person. As an example, this implementation may be applied to face-scan payment: if the reconstructed image and the verification face image are determined to indicate the same person, the face-scan verification succeeds and the subsequent payment process can proceed.
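The 1:1 comparison above can be sketched as follows. Cosine similarity and the threshold value 0.8 are illustrative assumptions; the patent only specifies a similarity measure and "a preset threshold".

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feature, template, threshold=0.8):
    """1:1 comparison: same person iff similarity exceeds the threshold.

    The 0.8 threshold is illustrative, not a value from the patent.
    """
    return cosine_similarity(feature, template) > threshold

# Toy feature vectors standing in for network outputs.
template = np.array([0.2, 0.9, 0.1, 0.4])   # verification face feature template
same = template + 0.01                      # near-identical probe feature
diff = np.array([0.9, -0.1, 0.8, -0.3])     # unrelated probe feature
```

`verify(same, template)` accepts the probe, while `verify(diff, template)` rejects it.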
In some optional implementations of the present embodiment, the biometric recognition may also involve identifying a target object from an unspecified set of objects. Still taking a human face as the biometric feature, step 104 may be performed as follows:
firstly, obtaining each feature template corresponding to each face image in a face image set.
In this implementation manner, the execution subject may obtain each feature template corresponding to each face image in the preset face image set. Here, each of the acquired feature templates may be obtained by performing feature extraction on each of the face images in the face image set using the feature extraction network.
Then, according to the feature representation and each feature template, a face image which is indicated as the same person as the reconstructed image is determined from the face image set.
In this implementation, the execution subject may determine, from the face image set, the face image indicated as the same person as the reconstructed image according to the feature representation of the reconstructed image and each feature template. For example, the execution subject may first compute the similarity between the feature representation and each feature template, and then determine the face image whose feature template has the highest similarity, provided that similarity exceeds a preset threshold, as the face image indicating the same person as the reconstructed image.
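The 1:N identification above can be sketched similarly; again, cosine similarity and the threshold are illustrative assumptions rather than the patent's choices.

```python
import numpy as np

def identify(feature, templates, threshold=0.8):
    """1:N search: return the index of the most similar template if its
    similarity exceeds the threshold, otherwise None (no match in the set).

    The threshold value is illustrative.
    """
    feature = feature / np.linalg.norm(feature)
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    sims = t @ feature                   # cosine similarity to each template
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None

# Toy templates standing in for the face image set's feature templates.
templates = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.7, 0.7, 0.1]])
probe = np.array([0.72, 0.69, 0.08])     # close to the third template
```

`identify(probe, templates)` returns the index of the matching template, and a probe dissimilar to every template returns `None`.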
It will be appreciated that the two recognition scenarios above may equally be applied to other biometric features, such as fingerprints.
In some optional implementations of this embodiment, the steps of the above-described privacy-preserving biometric method may be performed in a Trusted Execution Environment (TEE).
A TEE is a secure area within a CPU (Central Processing Unit). It runs in an isolated environment, in parallel with the operating system, and the CPU ensures that the confidentiality and integrity of the code and data inside the TEE are protected. Because it protects data and code with both hardware and software, a TEE is more secure than the operating system alone. By executing steps such as reconstructing the original image and performing biometric recognition on the reconstructed image inside the TEE, this implementation effectively prevents leakage of the reconstructed image, the biometric recognition result, and other information, providing stronger privacy protection and higher security.
According to an embodiment of another aspect, a biometric identification apparatus that protects privacy is provided. The privacy preserving biometric apparatus may be deployed in any device, platform, or cluster of devices having computing and processing capabilities.
Fig. 6 shows a schematic block diagram of a privacy preserving biometric identification apparatus according to one embodiment. As shown in fig. 6, the biometric recognition apparatus 600 includes: an acquisition unit 601 configured to acquire an original image acquired by a lens-less imaging device for a biometric feature, wherein the lens-less imaging device includes a mask and an image sensor, and the original image is an image acquired by the image sensor after being modulated by the mask; a reconstructing unit 602 configured to reconstruct the original image based on the modulation parameter of the mask plate to obtain a reconstructed image; a feature extraction unit 603 configured to perform feature extraction on the reconstructed image by using a pre-trained feature extraction network to obtain a feature representation of the reconstructed image; an identification unit 604 configured to perform biometric identification on the reconstructed image based on the feature representation.
In some optional implementations of the present embodiment, the biometric recognition apparatus 600 is provided in a trusted execution environment.
In some optional implementations of this embodiment, the pixels of the mask plate are aligned with those of the image sensor, and a modulation parameter is set for each pixel position of the mask plate; and the reconstruction unit 602 is further configured to: determine a reconstruction matrix for reconstructing the image according to the modulation parameters at the pixel positions of the mask plate; and reconstruct the original image according to the pixel values of the original image and the reconstruction matrix to obtain the reconstructed image.
In some optional implementations of this embodiment, a pattern is disposed on the mask plate. The pattern is randomly generated and includes a light-transmitting portion and an opaque portion, where the modulation parameter at a pixel position of the light-transmitting portion is 1 and the modulation parameter at a pixel position of the opaque portion is 0.
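A minimal sketch of generating such a random binary pattern; the mask size, transmit ratio, and seed are arbitrary choices for illustration:

```python
import numpy as np

def random_mask(height: int, width: int, transmit_ratio: float = 0.5,
                seed: int = 0) -> np.ndarray:
    """Random binary mask pattern: 1 = light-transmitting, 0 = opaque.

    Each entry doubles as the modulation parameter of that pixel position.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < transmit_ratio).astype(np.uint8)

mask = random_mask(8, 8, seed=42)
print(mask)
```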
In some optional implementations of this embodiment, the biometric feature is a human face; and the identification unit 604 is further configured to: acquire a face feature template for verification, wherein the face feature template is obtained by performing feature extraction on a verification face image by using the feature extraction network; and determine, according to the feature representation and the face feature template, whether the reconstructed image and the verification face image indicate the same person.
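For illustration, the 1:1 comparison between the feature representation and the enrolled face feature template can be sketched with cosine similarity; the similarity measure, the threshold value, and the toy feature vectors are assumptions of this sketch, not mandated by the embodiment:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feature: np.ndarray, template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """1:1 verification: same person iff the similarity between the probe's
    feature representation and the enrolled template clears the threshold."""
    return cosine_similarity(feature, template) >= threshold

template = np.array([0.6, 0.8, 0.0])        # enrolled face feature template
probe_same = np.array([0.59, 0.81, 0.02])   # feature of a genuine probe
probe_other = np.array([-0.7, 0.1, 0.7])    # feature of a different person
print(verify(probe_same, template), verify(probe_other, template))  # True False
```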
In some optional implementations of this embodiment, the biometric feature is a human face; and the identification unit 604 is further configured to: acquire the feature templates corresponding to the face images in a face image set, wherein each feature template is obtained by performing feature extraction on the corresponding face image in the set by using the feature extraction network; and identify, according to the feature representation and the feature templates, a face image from the face image set that indicates the same person as the reconstructed image.
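The 1:N case extends the same idea: compare the probe's feature representation against every enrolled template and keep the best match above a threshold. Cosine similarity, the threshold, and the enrolled identities below are again illustrative assumptions:

```python
import numpy as np

def identify(feature: np.ndarray, templates: np.ndarray, names: list,
             threshold: float = 0.8):
    """1:N identification: return the enrolled identity whose template best
    matches the probe feature, or None if no match clears the threshold."""
    feature = feature / np.linalg.norm(feature)
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    sims = t @ feature                   # cosine similarity to each template
    best = int(np.argmax(sims))
    return names[best] if sims[best] >= threshold else None

templates = np.array([[1.0, 0.0],        # one template per enrolled face
                      [0.0, 1.0],
                      [0.6, 0.8]])
names = ["alice", "bob", "carol"]
print(identify(np.array([0.58, 0.82]), templates, names))  # carol
```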
In some optional implementations of this embodiment, the feature extraction network is trained as follows: acquiring a sample set, wherein each sample includes a sample image and a class identifier corresponding to the sample image, the sample image being an image obtained by reconstructing an image collected by a lens-free imaging device; training a classifier by taking the sample images as input and the class identifiers corresponding to the input sample images as expected output, wherein the classifier includes a neural network for extracting image features; and taking the neural network in the trained classifier as the feature extraction network.
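A toy numpy sketch of this training scheme: a small classifier (backbone plus softmax head) is fit on labeled samples, then the head is discarded and the backbone alone serves as the feature extraction network. The data, network sizes, and optimizer are purely illustrative stand-ins for the sample images and class identifiers described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the sample set: each "reconstructed image" is a flat
# 16-dim vector; each sample carries a class identifier.
X = rng.normal(size=(200, 16))
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # two synthetic classes

# Tiny classifier: hidden layer = feature-extraction backbone,
# softmax output layer = classification head.
W1 = rng.normal(scale=0.3, size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.3, size=(8, 2));  b2 = np.zeros(2)

def backbone(x):
    return np.tanh(x @ W1 + b1)              # the feature representation

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):                         # plain full-batch gradient descent
    h = backbone(X)
    p = softmax(h @ W2 + b2)
    p[np.arange(len(y)), y] -= 1.0           # d(cross-entropy)/d(logits)
    p /= len(y)
    dW2 = h.T @ p; db2 = p.sum(axis=0)
    dh = (p @ W2.T) * (1.0 - h ** 2)         # back through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

# Discard the head: the backbone alone becomes the feature extraction network.
features = backbone(X)
print(features.shape)                        # (200, 8)
```

Training with a classification objective and then keeping only the embedding network is a standard way to obtain a feature extractor whose representations can be compared across images.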
In some optional implementations of this embodiment, the biometric characteristic includes one of: face, fingerprint, iris, and palm print.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 1.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory and a processor, wherein the memory stores executable code, and the processor, when executing the executable code, implements the method described in connection with fig. 1.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments further illustrate the objects, technical solutions, and advantages of the present invention. It should be understood that they are merely exemplary embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A privacy preserving biometric method, performed in a trusted execution environment, TEE, comprising:
acquiring an original image which is output by a lens-free imaging device and is collected aiming at biological characteristics, wherein the lens-free imaging device comprises a mask plate and an image sensor, and the original image is an image which is collected by the image sensor after being modulated by the mask plate;
reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image;
performing feature extraction on the reconstructed image by using a pre-trained feature extraction network to obtain feature representation of the reconstructed image;
performing biometric recognition on the reconstructed image based on the feature representation.
2. The method of claim 1, wherein the pixels of the mask plate are aligned with those of the image sensor, and a modulation parameter is set for each pixel position of the mask plate; and
reconstructing the original image based on the modulation parameters of the mask plate to obtain a reconstructed image, wherein the reconstructing comprises:
determining a reconstruction matrix for reconstructing an image according to the modulation parameters of each pixel position of the mask plate;
and reconstructing the original image according to the pixel value of the original image and the reconstruction matrix to obtain a reconstructed image.
3. The method according to claim 1 or 2, wherein a pattern is disposed on the mask plate, the pattern being randomly generated and comprising a light-transmitting portion and an opaque portion, wherein the modulation parameter at a pixel position of the light-transmitting portion is 1 and the modulation parameter at a pixel position of the opaque portion is 0.
4. The method of claim 1, wherein the biometric feature is a human face; and
the performing biometric identification on the reconstructed image based on the feature representation includes:
acquiring a face feature template for verification, wherein the face feature template is obtained by performing feature extraction on a face image for verification by using the feature extraction network;
and determining whether the reconstructed image and the face image for verification indicate the same person or not according to the feature representation and the face feature template.
5. The method of claim 1, wherein the biometric feature is a human face; and
the performing biometric identification on the reconstructed image based on the feature representation includes:
acquiring each feature template corresponding to each face image in a face image set, wherein each feature template is obtained by performing feature extraction on each face image in the face image set by using the feature extraction network;
and according to the feature representation and the feature templates, determining a face image which is indicated as the same person as the reconstructed image from the face image set.
6. The method of claim 1, wherein the feature extraction network is trained by:
acquiring a sample set, wherein samples of the sample set comprise sample images and class identifications corresponding to the sample images, and the sample images are images obtained by reconstructing images acquired by a lens-free imaging device;
taking the sample image as input, taking a class identifier corresponding to the input sample image as expected output, and training a classifier to be trained, wherein the classifier to be trained comprises a neural network for extracting image features;
and taking the neural network in the trained classifier as the feature extraction network.
7. The method of claim 1, wherein the biometric characteristic comprises one of: face, fingerprint, iris, and palm print.
8. A privacy preserving biometric apparatus deployed in a trusted execution environment, TEE, comprising:
the device comprises an acquisition unit, a display unit and a processing unit, wherein the acquisition unit is configured to acquire an original image acquired by a lens-free imaging device for biological characteristics, the lens-free imaging device comprises a mask plate and an image sensor, and the original image is an image acquired by the image sensor after being modulated by the mask plate;
the reconstruction unit is configured to reconstruct the original image based on the modulation parameters of the mask plate to obtain a reconstructed image;
the feature extraction unit is configured to perform feature extraction on the reconstructed image by using a pre-trained feature extraction network to obtain feature representation of the reconstructed image;
an identification unit configured to perform biometric identification on the reconstructed image based on the feature representation.
9. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-7.
10. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, implements the method of any of claims 1-7.
CN202110849620.2A 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device Active CN113298060B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111407775.7A CN114140823A (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device
CN202110849620.2A CN113298060B (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110849620.2A CN113298060B (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111407775.7A Division CN114140823A (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device

Publications (2)

Publication Number Publication Date
CN113298060A true CN113298060A (en) 2021-08-24
CN113298060B CN113298060B (en) 2021-10-15

Family

ID=77331168

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111407775.7A Pending CN114140823A (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device
CN202110849620.2A Active CN113298060B (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111407775.7A Pending CN114140823A (en) 2021-07-27 2021-07-27 Privacy-protecting biometric feature recognition method and device

Country Status (1)

Country Link
CN (2) CN114140823A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677766A (en) * 2022-05-26 2022-06-28 中国科学院西安光学精密机械研究所 Non-lens imaging technology-based sign language recognition method and system and interaction equipment
CN114724288A (en) * 2022-02-24 2022-07-08 中国科学院西安光学精密机械研究所 Access control system based on coded mask imaging technology and control method
WO2023138629A1 (en) * 2022-01-21 2023-07-27 清华大学 Encrypted image information obtaining device and method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024018537A1 (en) * 2022-07-19 2024-01-25 日本電信電話株式会社 Image acquisition device, method, and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231049A (en) * 2011-06-29 2011-11-02 广东工业大学 Large-area projection lithography system and alignment method thereof
CN103345618A (en) * 2013-06-21 2013-10-09 银江股份有限公司 Traffic violation detection method based on video technology
CN107942338A (en) * 2017-09-28 2018-04-20 北京华航无线电测量研究所 A kind of multi-wavelength relevance imaging system based on Digital Micromirror Device
CN110119746A (en) * 2019-05-08 2019-08-13 北京市商汤科技开发有限公司 A kind of characteristic recognition method and device, computer readable storage medium
CN111553235A (en) * 2020-04-22 2020-08-18 支付宝(杭州)信息技术有限公司 Network training method for protecting privacy, identity recognition method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG DI: "Joint motion estimation and pattern-based super-resolution image reconstruction", ACTA ELECTRONICA SINICA *
XIE XUN: "Research on image reconstruction techniques for computational optical imaging systems", CHINA MASTER'S THESES FULL-TEXT DATABASE *


Also Published As

Publication number Publication date
CN113298060B (en) 2021-10-15
CN114140823A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN113298060B (en) Privacy-protecting biometric feature recognition method and device
Ferrara et al. Face demorphing
Chan et al. Face liveness detection using a flash against 2D spoofing attack
Dua et al. Biometric iris recognition using radial basis function neural network
Raval et al. Protecting visual secrets using adversarial nets
Marasco et al. A survey on antispoofing schemes for fingerprint recognition systems
Ferrara et al. On the effects of image alterations on face recognition accuracy
RU2431190C2 (en) Facial prominence recognition method and device
Chen et al. A finger vein image-based personal identification system with self-adaptive illuminance control
JP2018508888A (en) System and method for performing fingerprint-based user authentication using an image captured using a mobile device
CN106133752A (en) Eye gaze is followed the tracks of
Vega et al. Biometric personal identification system based on patterns created by finger veins
WO2020065851A1 (en) Iris recognition device, iris recognition method and storage medium
Galbally et al. An introduction to fingerprint presentation attack detection
RU2608001C2 (en) System and method for biometric behavior context-based human recognition
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
Kolberg et al. Colfispoof: A new database for contactless fingerprint presentation attack detection research
CN112600886B (en) Privacy protection method, device and equipment with combination of end cloud and device
Ajakwe et al. Real-time monitoring of COVID-19 vaccination compliance: a ubiquitous IT convergence approach
Galbally et al. Fingerprint anti-spoofing in biometric systems
Chugh et al. Fingerprint spoof detection: Temporal analysis of image sequence
KR102151851B1 (en) Face recognition method based on infrared image and learning method for the same
Lubna et al. Detecting Fake Image: A Review for Stopping Image Manipulation
Schuiki et al. Extensive threat analysis of vein attack databases and attack detection by fusion of comparison scores
SulaimanAlshebli et al. The Cyber Security Biometric Authentication based on Liveness Face-Iris Images and Deep Learning Classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056987

Country of ref document: HK