CN112001262B - Method for generating accessory capable of influencing face authentication - Google Patents
- Publication number: CN112001262B (application CN202010738081.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- accessory
- image
- user
- face image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V40/161: Human faces, e.g. facial parts, sketches or expressions; Detection; Localisation; Normalisation
- G06F18/211: Pattern recognition; Selection of the most significant subset of features
- G06F18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T3/04: Geometric image transformations in the plane of the image; Context-preserving transformations, e.g. by using an importance map
- G06V40/171: Human faces; Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
With the development of science and technology, biometric identification has become the mainstream of identity authentication. From fingerprint and iris recognition to face recognition, currently the most widely applied, practical applications of biometric authentication have grown explosively. The invention provides a method that locates the key regions of a face and generates various accessories for those regions, such as a hair band or a mask, so that whether the face is correctly recognized at face authentication depends on whether the accessory was worn during face information acquisition and on the position of the accessory. Specifically, the discrete entropy of each image block of the face image is first calculated; the product of the mean discrete entropy over all blocks and a parameter is taken as a threshold, and blocks whose discrete entropy exceeds the threshold are taken as the more critical regions of the face. After the key regions are located, various accessories are produced with a generation algorithm. Finally, a case study is conducted.
Description
Technical Field
The invention relates to a method for generating various face accessories, in particular to a method that detects the key regions of a face and then generates user-personalized face accessories, so that whether the face is correctly recognized at face authentication depends on whether the accessory was worn during face information acquisition and on the position of the accessory.
Background
Biological information is increasingly used in the field of identity authentication, and with the development of science and technology its applications have grown explosively. Among biometric traits, face information is the most common identity feature because it is contactless and easy to acquire. The purpose of face recognition is to extract a person's distinctive features from a face image in order to establish identity. A simple automatic face recognition system runs through face detection, face alignment, face comparison (against the face information already enrolled in the system), and return of the recognition result. Since 2015, face recognition technology has moved rapidly from first deployments to applications in many fields, and most users are asked to enroll face information when handling business: face-scanning payment, face-scanning check-in, even public toilets with "face recognition" paper dispensers. "Scanning the face" has woven itself into everyday life and is increasingly applied in finance, transportation, education, security, social insurance and other fields.
With the development of face recognition technology, new application requirements have appeared: sometimes the face must not only be recognized correctly, but the result of face recognition must also be steered. For example, depending on user grade in a face recognition system, most people should be recognized while a small set of high-grade users is not; or only a small set of high-grade users should be recognized while everyone else is not. This creates the need for a strategy that guides the result of face recognition. Some existing studies already influence the face recognition result. Song et al. propose A³GN, which generates a false face image that the face recognition network identifies as a target identity. Sharif et al. propose AGNs, which generate various face accessories to attack the face recognition network so that it outputs wrong recognition results. Pautov et al. propose generating an adversarial image patch to attack the ArcFace face recognition system: pasted on a face, the patch prevents the user from being correctly identified. Most existing methods that influence the face recognition result are attacks on the face recognition system that make it output a wrong result or a target identity. However, these methods discuss neither the selection of attack positions on the face nor the influence of different attack positions on the recognition result.
Disclosure of Invention
The invention provides a method that detects the key regions of a face with discrete entropy and generates user-personalized accessories in those regions with a generation algorithm, so that whether the face is correctly recognized at face authentication depends on whether the accessory was worn during face information acquisition and on the position of the accessory. For the user's face image, discrete entropy expresses the amount of information; it is calculated for each image block, and blocks whose entropy exceeds a threshold carry clearly more information than other regions and are taken as the key regions of the face. After the key regions are detected, the method generates a user-personalized accessory with a generation algorithm for a given face recognition system and face position; attached to the face, the accessory forms new facial biometric data. A case study with accessories worn on the nose and on the mouth shows that whether the user wore the accessory during face information acquisition, and where, determines whether the user is correctly recognized at face authentication.
The specific technical scheme of the invention is as follows:
A method for generating an accessory capable of influencing face authentication, wherein whether the face is correctly authenticated or recognized depends on whether the generated accessory was worn during face information acquisition and on the position of the accessory, the method mainly comprising the following steps:
Step 1: calculate the discrete entropy of each image block and judge the key regions of the face image by the size of the entropy. Specifically: image blocks are taken from the face image with a fixed sliding step; the discrete entropy of each block is then calculated; blocks whose discrete entropy exceeds a threshold contain more information than the other blocks and are judged to be key regions of the face. The calculation is as follows:
For a gray-scale face image block with pixel level N and size W × H, the discrete entropy is calculated as

$$f_j = s_j/(W\times H),\qquad j=0,1,2,\dots,N-1$$

$$E(e_i) = -\sum_{j=0}^{N-1} f_j \log_2 f_j$$

where $s_j$ is the number of pixels with pixel value j, $e_i$ is the i-th image block of the face image, and $E(e_i)$ is the discrete entropy of block $e_i$;
when processing a color face image, the average of the R, G, B three-channel discrete entropies of each image block is used as the discrete entropy of that block, and the threshold is set to $T=\alpha\cdot avr$. The detailed calculation is:

$$\bar{E}(e_i) = \tfrac{1}{3}\big(E_r(e_i)+E_g(e_i)+E_b(e_i)\big)$$

$$T = \alpha\cdot avr,\qquad avr = \tfrac{1}{M}\sum_{i=1}^{M}\bar{E}(e_i)$$

$$L_x = L_i = \psi(e_i)\quad\text{if }\ \bar{E}(e_i) > T$$

where $\bar{E}(e_i)$ is the average of the discrete entropies of image block $e_i$ over the R, G, B channels; $E_r(e_i)$, $E_g(e_i)$ and $E_b(e_i)$ are the discrete entropies of the block in the R, G and B channels respectively; $\psi$ is the function returning the position of block $e_i$ on the face; T is the discrete-entropy threshold, used to select the blocks with more information; $L_i$ is the position of the i-th image block; avr is the mean of the discrete entropies of all M image blocks of the face image; and $\alpha$ is a parameter controlling the size of the threshold;
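Under the definitions above, the per-block discrete entropy can be sketched as follows (a minimal NumPy illustration; function and variable names are ours, not the patent's):

```python
import numpy as np

def block_entropy(block, levels=256):
    """Discrete entropy of a grayscale image block.

    f_j = s_j / (W*H) is the frequency of pixel value j;
    E = -sum_j f_j * log2(f_j), skipping empty bins.
    """
    counts = np.bincount(block.ravel(), minlength=levels)
    f = counts / block.size
    nz = f > 0
    return float(-np.sum(f[nz] * np.log2(f[nz])))

# A constant block carries no information: entropy 0.
flat = np.zeros((16, 16), dtype=np.uint8)
# A block split evenly between two pixel values carries 1 bit.
half = np.zeros((16, 16), dtype=np.uint8)
half[:, 8:] = 255
print(block_entropy(flat))  # 0.0
print(block_entropy(half))  # 1.0
```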
step 2: selecting the size of the accessory, initializing the accessory by adopting different forms of noises to obtain an initialized rectangular accessory x init ;
And step 3: transforming the initialized rectangular accessory into an arc shape through parabolic transformation, and mapping the arc shape onto a face image, wherein the parabolic transformation formula is as follows:
wherein x is init Is an initialized rectangular accessory, x t Is an accessory after parabolic transformation, and a, b and c are parameters for controlling the parabolic transformation;
step 4: generate the accessory with a generation algorithm; the accessory for user p when enrolling into face recognition system F is:

$$p(x|F) = x_{init} + \Delta(x_{init}|F)$$

where $\Delta(x_{init}|F)$ is the perturbation added to the initialized accessory for face recognition system F;
the accessory produced by the generation algorithm must satisfy the following conditions: when a specific user enrolls face information into a specific face authentication system, conditions (1) and (2) hold simultaneously. Writing F(e) for the identity output by system F on face image e, condition (1) is: when the original face image is enrolled, only the original face image is correctly recognized at authentication, and a face image wearing the generated accessory at any position on the face is not correctly recognized by the system, the generated accessory being specific to both the user and the system:

$$F(e_p) = p,\qquad F(e_p^{F,l_i}) \neq p\quad \forall\, l_i$$

Condition (2) is: when the user enrolls a face image wearing the personalized accessory at some position on the face, only a face image wearing the personalized accessory at the same position is correctly recognized at authentication, while the original face image, and face images wearing the accessory at other positions, are not:

$$F(e_p^{F,l_i}) = p,\qquad F(e_p) \neq p,\qquad F(e_p^{F,l_j}) \neq p\quad (j\neq i)$$

where $e_p^{F,l_i}$ is the face image of user p wearing, at face position $l_i$, the accessory specific to user p and system F; $e_p$ is the original face image of user p; and $e_p^{F,l_j}$ ($j\neq i$) is a face image of user p wearing the system-F accessory at a face position other than $l_i$;
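Conditions (1) and (2) together amount to an exact-match acceptance rule between the enrolled template and the probe. A toy sketch, with a hypothetical `ToySystem` standing in for system F (all names are illustrative, not from the patent):

```python
# Hypothetical stand-in for system F: a probe is accepted only when it
# matches the enrolled template exactly.
POSITIONS = ["forehead", "nose", "mouth"]

def make_images(user):
    # "original" plus one accessory image per face position l_i
    imgs = {"original": (user, None)}
    for pos in POSITIONS:
        imgs[pos] = (user, pos)
    return imgs

class ToySystem:
    def enroll(self, template):
        self.template = template
    def authenticate(self, probe):
        return probe == self.template

imgs = make_images("p")
system_f = ToySystem()

# Condition (1): enroll the original image -> only the original passes.
system_f.enroll(imgs["original"])
assert system_f.authenticate(imgs["original"])
assert all(not system_f.authenticate(imgs[p]) for p in POSITIONS)

# Condition (2): enroll the nose-accessory image -> only that one passes.
system_f.enroll(imgs["nose"])
assert system_f.authenticate(imgs["nose"])
assert not system_f.authenticate(imgs["original"])
assert all(not system_f.authenticate(imgs[p]) for p in POSITIONS if p != "nose")
```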
step 5: the user wears the personalized accessory generated in step 4; whether the user is correctly recognized at face authentication then depends on whether the accessory was worn during face information acquisition and on the position of the accessory.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of a sample generated by the method of the present invention.
FIG. 3 is a sample chart of experimental results in the case study of the invention. The left-most column shows the face images enrolled into the face recognition model: from top to bottom, the original face image and the face images wearing the accessory at the forehead, nose and mouth positions. The four right-hand columns are test images: from left to right, the original face image and the face images wearing the accessory at the forehead, nose and mouth positions.
Detailed Description
In this embodiment, experiments are performed on ArcFace, currently one of the best-performing face recognition systems. The experiment has three parts: locating the key regions of the face; generating accessories at different positions for different users according to this localization; and finally testing the recognition results when a face image wearing an accessory and the original face image are each input to the face recognition system.
1. Data set processing:
The CASIA-WebFace dataset contains 494,414 images of 10,575 individuals. In this embodiment it is used to pre-train the ArcFace face recognition system: because the dataset is large and covers many identities, the trained model generalizes better.
The MMSys_Face and MMSys_Attack datasets were collected and constructed for this embodiment, with the subjects' consent for non-commercial scientific use. Self-collected data are used mainly because users must wear the accessories in the physical world during face recognition verification, so a dataset built from images of people around the authors makes the experiments more convenient. The MMSys_Face dataset contains 952 pictures of 25 people, all original face images without accessories. The MMSys_Attack dataset contains 7,533 pictures of 11 people, all face images wearing accessories generated against the ArcFace face recognition system.
All images used for training the ArcFace model and for generating accessories are preprocessed in advance by an MTCNN network (face detection, key-point localization and face alignment) to obtain face images of size 128 × 128.
2. Specific implementation steps:
Step 1: calculate the discrete entropy of each image block to determine the key regions of the face image. Specifically: image blocks are taken from the face image with a fixed sliding step; the discrete entropy of each block is then calculated; blocks whose discrete entropy exceeds a threshold contain more information than the other blocks and are judged to be key regions of the face. The images used in this embodiment are all color face images, the block size is 16 × 16, and the sliding-window step when taking blocks is 1. The calculation is as follows:
For a gray-scale face image block with pixel level N and size W × H, the discrete entropy is calculated as

$$f_j = s_j/(W\times H),\qquad j=0,1,2,\dots,N-1$$

$$E(e_i) = -\sum_{j=0}^{N-1} f_j \log_2 f_j$$

where $s_j$ is the number of pixels with pixel value j, $e_i$ is the i-th image block of the face image, and $E(e_i)$ is the discrete entropy of block $e_i$.
When processing a color face image, the invention uses the average of the discrete entropies of the R, G and B channels of each image block as the discrete entropy of that block, and sets the threshold to $T=\alpha\cdot avr$. The detailed calculation is:

$$\bar{E}(e_i) = \tfrac{1}{3}\big(E_r(e_i)+E_g(e_i)+E_b(e_i)\big)$$

$$T = \alpha\cdot avr,\qquad avr = \tfrac{1}{M}\sum_{i=1}^{M}\bar{E}(e_i)$$

$$L_x = L_i = \psi(e_i)\quad\text{if }\ \bar{E}(e_i) > T$$

where $\bar{E}(e_i)$ is the average of the discrete entropies of image block $e_i$ over the R, G, B channels; $E_r(e_i)$, $E_g(e_i)$ and $E_b(e_i)$ are the discrete entropies of the block in the R, G and B channels respectively; $\psi$ is the function returning the position of block $e_i$; T is the discrete-entropy threshold; $L_i$ is the position of the i-th image block; avr is the mean of the discrete entropies of all M image blocks of the face image; and $\alpha$ is a parameter controlling the size of the threshold, set to 1.13 in this embodiment;
step 2: select the accessory size and initialize the accessory with noise of various forms (e.g. Gaussian noise, random noise, or a solid color) to obtain an initialized rectangular accessory $x_{init}$; in this embodiment the initialized accessory is 900 × 400 and its initial state is pure white;
step 3: transform the initialized rectangular accessory into an arc through a parabolic transformation and map it onto the face image, where $x_{init}$ is the initialized rectangular accessory, $x_t$ is the accessory after the parabolic transformation, and a, b and c are the parameters controlling the parabolic transformation; the initialized accessory is mapped onto the face by this transformation, and a new accessory is then produced by the generation algorithm;
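The parabolic formula itself appears only as an image in the source, so its exact form is not recoverable; the sketch below assumes one plausible reading in which each column u of the rectangle is shifted vertically by a·u² + b·u + c, bending the strip into an arc. Parameter values and names are illustrative:

```python
import numpy as np

def parabolic_warp(x_init, a, b, c):
    """Bend a rectangular accessory (H x W array) into an arc by shifting
    each column down by a*u^2 + b*u + c pixels (one assumed form of the
    patent's parabolic transformation; u is the offset from the center)."""
    h, w = x_init.shape[:2]
    u = np.arange(w) - w // 2
    shift = np.round(a * u**2 + b * u + c).astype(int)
    base = min(shift.min(), 0)
    out_h = h + int(shift.max() - base)
    x_t = np.zeros((out_h, w) + x_init.shape[2:], dtype=x_init.dtype)
    for col in range(w):
        s = shift[col] - base
        x_t[s:s + h, col] = x_init[:, col]
    return x_t

band = np.full((40, 90), 255, dtype=np.uint8)   # white 90 x 40 strip
arc = parabolic_warp(band, a=0.01, b=0.0, c=0.0)
print(arc.shape)
```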
step 4: generate the accessory with a generation algorithm; a generative adversarial network (GAN), which currently gives good generation results, may be chosen, or a traditional method of adding perturbation to the image. The accessory for user p when enrolling into face recognition system F is:

$$p(x|F) = x_{init} + \Delta(x_{init}|F)$$

where $\Delta(x_{init}|F)$ is the perturbation added to the initialized accessory for face recognition system F.
In this embodiment the accessory is generated with FGSM (Fast Gradient Sign Method), implemented as:

$$\Delta(x_{init}|F) = \epsilon\cdot \mathrm{sign}\big(\nabla_{x_{init}} J(F, x_{init})\big)$$

where $\nabla_{x_{init}} J(F, x_{init})$ is the gradient of the loss of face recognition model F with respect to the input initialized accessory $x_{init}$; the gradient direction is obtained through the sign function and multiplied by the step size $\epsilon$ to give the perturbation added to the initialized accessory, producing the new accessory.
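The FGSM step can be sketched as follows. Since the ArcFace model itself is not reproduced here, a linear toy loss J(x) = w · x stands in for the recognition objective (its gradient with respect to x is simply w); a real implementation would backpropagate through the face recognition network instead. All names and values are illustrative:

```python
import numpy as np

def fgsm_perturbation(grad, eps):
    """FGSM step: Delta = eps * sign(dJ/dx), as in the embodiment."""
    return eps * np.sign(grad)

# Fake gradient field standing in for the network's backpropagated
# gradient w.r.t. the 900 x 400 initialized accessory.
rng = np.random.default_rng(0)
w = rng.normal(size=(400, 900))
x_init = np.full((400, 900), 255.0)        # pure-white accessory
eps = 8.0                                  # step size epsilon

delta = fgsm_perturbation(w, eps)          # entries are +eps or -eps
x_new = np.clip(x_init + delta, 0, 255)    # keep a valid pixel range
assert set(np.unique(np.abs(delta))) <= {0.0, eps}
```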
The accessory produced by the generation algorithm must satisfy the following conditions when a specific user enrolls face information into a specific face authentication system. Writing F(e) for the identity output by system F on face image e, condition (1) is: when the original face image is enrolled, only the original face image is correctly recognized at authentication, and a face image wearing the generated accessory at any position on the face is not correctly recognized by the system, the generated accessory being specific to both the user and the system:

$$F(e_p) = p,\qquad F(e_p^{F,l_i}) \neq p\quad \forall\, l_i$$

Condition (2) is: when the user enrolls a face image wearing the personalized accessory at some position on the face, only a face image wearing the personalized accessory at the same position is correctly recognized at authentication, while the original face image, and face images wearing the accessory at other positions, are not:

$$F(e_p^{F,l_i}) = p,\qquad F(e_p) \neq p,\qquad F(e_p^{F,l_j}) \neq p\quad (j\neq i)$$

where $e_p^{F,l_i}$ is the face image of user p wearing, at face position $l_i$, the accessory specific to user p and system F; $e_p$ is the original face image of user p; and $e_p^{F,l_j}$ ($j\neq i$) is a face image of user p wearing the system-F accessory at a different face position.
Step 5: the user wears the personalized accessory generated in step 4; whether the user is correctly recognized at face authentication then depends on whether the accessory was worn during face information acquisition and on the position of the accessory.
3. Algorithm implementation
(1) Positioning key areas of the human face:
Experimental data: 23 images were randomly selected from the MMSys_Face dataset and 38 from the CASIA-WebFace dataset, with a male-to-female ratio of approximately 1:1.
data preprocessing: the size of the image block used in the example research is 16 multiplied by 16, the sliding step length on the face image when the image block is taken is 1, and one face image is divided into a plurality of image blocks with the same size.
Calculating the discrete entropy of each image block: for each block, compute the discrete entropies of the three channels, $E_r(e_i)$, $E_g(e_i)$ and $E_b(e_i)$, then average them to obtain the block's final discrete entropy $\bar{E}(e_i)$.
Determining the discrete-entropy threshold: average the discrete entropies of all blocks of a face image and take 1.13 times that average as the threshold, i.e. $T = 1.13\cdot avr$.
Determining the key regions of the face: blocks whose discrete entropy exceeds the threshold contain more information, and these blocks are taken as the key regions of the face.
Experiments on the key regions of faces in the selected datasets are reported in Table 1, where N is the number of face images used for testing and μ is the fraction of images whose judged key regions fall at each position. As Table 1 shows, the key positions of most faces lie near the forehead, eyes, nose and mouth. Since a user cannot cover the eyes when actually wearing an accessory, the invention generates no accessory for the eye position. Because the position where each user wears an accessory in the physical world is not fixed, and face sizes differ from person to person, the key position is determined roughly rather than specified exactly. This embodiment therefore places the accessory on the forehead, nose or mouth to fit most users.
TABLE 1 statistical table of key positions of face images
(2) Generating an accessory:
and after the key area of the face is selected, generating a corresponding accessory according to the selected area.
Initializing the accessory state: the invention uses a pure-white accessory of size 900 × 400 for initialization; after initialization, the initial accessory is mapped onto the relevant face region by the parabolic transformation.
Generating an accessory: the present invention uses the FGSM algorithm to generate personalized accessories for the forehead, nose or mouth of the user, an example of which is shown in fig. 2. The personalized ornament is generated by the face and the mouth, wherein (a) is the personalized ornament generated aiming at the nose part of the face, and (b) is the personalized ornament generated aiming at the mouth part of the face.
4. Result verification
After accessories for different regions of the user's face are generated, experimental verification investigates whether the face recognition result can be steered before and after the user wears an accessory. After the ArcFace face recognition model is trained, the user's original face image and face images with an accessory on the forehead, nose or mouth are each enrolled, and the face recognition output is tested in each case. The results are shown in fig. 3, where correctly recognized face images are framed and labeled with the user ID, and unframed images were not correctly recognized by the system. If the enrolled image is the original face image, a face image wearing the accessory at any position is not correctly recognized. Conversely, if the enrolled image wears an accessory in some region, only the personalized face image with the same accessory in the same region is correctly recognized, and all other cases are not. This embodiment then tests the accuracy of the ArcFace model on the public LFW dataset commonly used in face recognition, the MMSys_Face dataset collected here, and the accessory-wearing MMSys_Attack face dataset, using both the model pre-trained by ArcFace on CASIA-WebFace (ArcFace_Pre) and the model trained on our own dataset (ArcFace_Our). The results, shown in Table 2, indicate that face image data with accessories do not reduce face recognition accuracy.
Experiments verify that, without reducing the performance of the face recognition model, the proposed method of generating personalized face accessories after detecting the key regions of the face makes correct recognition at face authentication depend on whether the accessory was worn during face information acquisition and on the position of the accessory.
TABLE 2 face recognition accuracy on different datasets
Claims (1)
1. A method for generating an accessory capable of influencing face authentication, wherein whether the face is correctly authenticated or recognized depends on whether the generated accessory was worn during face information acquisition and on the position of the accessory, the method mainly comprising the following steps:
step 1: calculate the discrete entropy of each image block and judge the key regions of the face image by the size of the entropy. Specifically: image blocks are taken from the face image with a fixed sliding step; the discrete entropy of each block is then calculated; blocks whose discrete entropy exceeds a threshold contain more information than the other blocks and are judged to be key regions of the face. The calculation is as follows:
for a gray-scale face image block with pixel level N and size W × H, the discrete entropy is calculated as

$$f_j = s_j/(W\times H),\qquad j=0,1,2,\dots,N-1$$

$$E(e_i) = -\sum_{j=0}^{N-1} f_j \log_2 f_j$$

where $s_j$ is the number of pixels with pixel value j, $e_i$ is the i-th image block of the face image, and $E(e_i)$ is the discrete entropy of block $e_i$;
when processing a color face image, the average of the R, G, B three-channel discrete entropies of each image block is used as the discrete entropy of that block, and the threshold is set to $T=\alpha\cdot avr$. The detailed calculation is:

$$\bar{E}(e_i) = \tfrac{1}{3}\big(E_r(e_i)+E_g(e_i)+E_b(e_i)\big)$$

$$T = \alpha\cdot avr,\qquad avr = \tfrac{1}{M}\sum_{i=1}^{M}\bar{E}(e_i)$$

$$L_x = L_i = \psi(e_i)\quad\text{if }\ \bar{E}(e_i) > T$$

where $\bar{E}(e_i)$ is the average of the discrete entropies of image block $e_i$ over the R, G, B channels; $E_r(e_i)$, $E_g(e_i)$ and $E_b(e_i)$ are the discrete entropies of the block in the R, G and B channels respectively; $\psi$ is the function returning the position of block $e_i$ on the face; T is the discrete-entropy threshold, used to select the blocks with more information; $L_i$ is the position of the i-th image block; avr is the mean of the discrete entropies of all M image blocks of the face image; and $\alpha$ is a parameter controlling the size of the threshold;
step 2: select the size of the accessory and initialize it with different forms of noise, obtaining an initialized rectangular accessory x_init;
step 3: transform the initialized rectangular accessory x_init into an arc-shaped accessory x_t through a parabolic transformation and map it onto the face image; the rectangle is bent along the parabola

y = a · x² + b · x + c

where x_init is the initialized rectangular accessory, x_t is the accessory after the parabolic transformation, and a, b and c are the parameters controlling the parabolic transformation;
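The parabolic bend of step 3 can be illustrated as follows. The exact mapping is not spelled out in the extracted text, so this sketch assumes each column x of the rectangular patch is shifted vertically by a·x² + b·x + c:

```python
import numpy as np

def parabolic_bend(x_init, a, b, c):
    """Bend a rectangular accessory into an arc: column j of the patch is
    shifted vertically by round(a*j**2 + b*j + c) pixels (an assumed form
    of the parabolic transform; a, b, c control the curvature).
    Returns a taller canvas holding the bent patch; empty cells stay 0."""
    h, w = x_init.shape[:2]
    shifts = [int(round(a * j * j + b * j + c)) for j in range(w)]
    lo, hi = min(shifts + [0]), max(shifts + [0])
    out = np.zeros((h + hi - lo,) + x_init.shape[1:], dtype=x_init.dtype)
    for j in range(w):
        s = shifts[j] - lo
        out[s:s + h, j] = x_init[:, j]   # drop column j at its parabola offset
    return out

# A 4x8 all-ones patch bent along an upward-opening parabola:
patch = np.ones((4, 8), dtype=np.uint8)
bent = parabolic_bend(patch, a=0.1, b=-0.7, c=1.2)
```

The outer columns end up lower than the middle ones, producing the arc shape that lets the accessory follow a curved facial contour (e.g. the forehead) when mapped onto the face image.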
step 4: generate the accessory with a generating algorithm; the accessory of user p for face recognition system F is:

p(x|F) = x_init + Δ(x_init | F)

where Δ(x_init | F) denotes the perturbation added to the initialized accessory with respect to face recognition system F;
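Since the claim leaves the generating algorithm itself unspecified, the x_init + Δ(x_init|F) structure can be illustrated with a toy random search against a stub scorer. All names, the scoring rule, and the search strategy here are assumptions, not the patent's method:

```python
import random

# Toy enrolled template and 'recognition score' (negative L1 distance);
# this stands in for a real face recognition system F.
TEMPLATE = [1.0, 1.0, 1.0, 1.0]

def score(x):
    return -sum(abs(a - b) for a, b in zip(x, TEMPLATE))

def generate_delta(x_init, score_fn, reject_below, steps=200, eps=0.5, seed=0):
    """Random-search sketch of Δ(x_init | F): greedily keep coordinate
    perturbations that lower the recognition score, until the worn
    accessory x_init + Δ falls below the rejection threshold."""
    rng = random.Random(seed)
    delta = [0.0] * len(x_init)
    best = score_fn([a + d for a, d in zip(x_init, delta)])
    for _ in range(steps):
        i = rng.randrange(len(delta))
        trial = list(delta)
        trial[i] += rng.choice((-eps, eps))
        s = score_fn([a + d for a, d in zip(x_init, trial)])
        if s < best:                      # keep only score-lowering moves
            delta, best = trial, s
        if best < reject_below:           # accessory now defeats the matcher
            break
    return delta

delta = generate_delta(TEMPLATE, score, reject_below=-2.0)
worn = [a + d for a, d in zip(TEMPLATE, delta)]
```

A real system would use a gradient-based or black-box attack against the actual recognizer; the point here is only the decomposition into a fixed x_init plus a system-specific perturbation Δ.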
The accessory produced by the generating algorithm must satisfy the following conditions: when a specific user enrolls face information in a specific face authentication system, conditions (1) and (2) hold simultaneously. Condition (1): when the original face image is enrolled, only the original face image can be correctly recognized during authentication, and a face image wearing the generated accessory at any position on the face cannot be correctly recognized by the system (the generated accessory is specific to both the user and the system). The specific expression is:

F(e_p) = p  and  F(e_p^{l_i}) ≠ p  for every face position l_i;
Condition (2): when the user enrolls a face image wearing the personalized accessory at a certain position l_i of the face, only a face image wearing the personalized accessory at that same position can be correctly recognized during authentication; the original face image, or a face image wearing the personalized accessory at a different position, cannot be correctly recognized. The specific expression is:

F(e_p^{l_i}) = p,  F(e_p) ≠ p,  and  F(e_p^{l_j}) ≠ p  for every position l_j ≠ l_i;
where e_p^{l_i} denotes the face image of user p wearing, at face position l_i, the accessory of system F unique to user p; e_p denotes the original face image of user p; and e_p^{l_j} (l_j ≠ l_i) denotes the face image of user p wearing the accessory of system F unique to user p at a face position other than l_i;
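Conditions (1) and (2) can be expressed as simple predicate checks against a stub recognizer. Here a toy nearest-template matcher stands in for F, and the feature tuples are purely illustrative:

```python
def make_matcher(enrolled, tol=1.0):
    """Toy stand-in for a face recognition system F: accepts a probe iff
    its L1 distance to the enrolled template is within tol."""
    def F(probe):
        return sum(abs(a - b) for a, b in zip(probe, enrolled)) <= tol
    return F

# Images stubbed as feature tuples; wearing the accessory at position
# l1 or l2 strongly perturbs a different coordinate.
e_p = (0.0, 0.0, 0.0)                  # original face image of user p
worn = {"l1": (5.0, 0.0, 0.0),         # accessory worn at position l1
        "l2": (0.0, 5.0, 0.0)}         # accessory worn at position l2

# Condition (1): system enrolled with the original image -> only e_p passes.
F1 = make_matcher(e_p)
cond1 = F1(e_p) and not any(F1(img) for img in worn.values())

# Condition (2): system enrolled wearing the accessory at l1 -> only that
# same image passes; e_p and the l2 image are rejected.
F2 = make_matcher(worn["l1"])
cond2 = F2(worn["l1"]) and not F2(e_p) and not F2(worn["l2"])
print(cond1, cond2)
```

Both conditions hold for this toy setup, mirroring the claim's requirement that acceptance depends jointly on whether the accessory is worn and on where it is worn.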
step 5: the user wears the personalized accessory generated in step 4; whether the user can be correctly identified during face authentication then depends on whether the accessory is worn during face information acquisition and on the position at which it is worn.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010738081.0A CN112001262B (en) | 2020-07-28 | 2020-07-28 | Method for generating accessory capable of influencing face authentication |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112001262A CN112001262A (en) | 2020-11-27 |
CN112001262B true CN112001262B (en) | 2022-07-29 |
Family
ID=73467726
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010738081.0A Active CN112001262B (en) | 2020-07-28 | 2020-07-28 | Method for generating accessory capable of influencing face authentication |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001262B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013131407A1 (en) * | 2012-03-08 | 2013-09-12 | 无锡中科奥森科技有限公司 | Double verification face anti-counterfeiting method and device |
CN103914686A (en) * | 2014-03-11 | 2014-07-09 | 辰通智能设备(深圳)有限公司 | Face comparison authentication method and system based on identification photo and collected photo |
CN104268843A (en) * | 2014-10-16 | 2015-01-07 | 桂林电子科技大学 | Image self-adaptation enhancing method based on histogram modification |
CN104994364A (en) * | 2015-04-30 | 2015-10-21 | 西安电子科技大学 | Image processing method and apparatus |
WO2017070923A1 (en) * | 2015-10-30 | 2017-05-04 | 厦门中控生物识别信息技术有限公司 | Human face recognition method and apparatus |
WO2019128508A1 (en) * | 2017-12-28 | 2019-07-04 | Oppo广东移动通信有限公司 | Method and apparatus for processing image, storage medium, and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060133651A1 (en) * | 2002-12-31 | 2006-06-22 | Polcha Andrew J | Recoverable biometric identity system and method |
US8724902B2 (en) * | 2010-06-01 | 2014-05-13 | Hewlett-Packard Development Company, L.P. | Processing image data |
Non-Patent Citations (3)
Title |
---|
A novel method of determining parameters of CLAHE based on image entropy; Min B S; ResearchGate; 20130930; full text *
On Frame Selection for Video Face Recognition; T. I. Dhamecha; Springer; 20160402; full text *
Design and Implementation of an Access Control System Based on a Face Recognition Algorithm; Pan Lei; China Master's Theses Full-text Database (Master), Information Science and Technology; 20170315; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||