CN117831089A - Face image processing method, related device and storage medium - Google Patents

Info

Publication number
CN117831089A
CN117831089A (application CN202211191524.4A)
Authority
CN
China
Prior art keywords
face
image
face image
countermeasure
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211191524.4A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Real AI Technology Co Ltd
Original Assignee
Beijing Real AI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Real AI Technology Co Ltd filed Critical Beijing Real AI Technology Co Ltd
Priority to CN202211191524.4A priority Critical patent/CN117831089A/en
Publication of CN117831089A publication Critical patent/CN117831089A/en
Pending legal-status Critical Current

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of computer vision, and provides a face image processing method, a related device and a storage medium. The method comprises the following steps: determining an initial face image; acquiring a candidate countermeasure image; determining, from a preset face image set, a target face image that meets a first preset condition according to the initial face image and the candidate countermeasure image; and generating a face countermeasure sample based on the target face image and the initial face image. Because the face countermeasure sample in the embodiment of the application is optimized against a plurality of different target faces, it achieves a high attack success rate and strong attack stability in the physical world.

Description

Face image processing method, related device and storage medium
Technical Field
The embodiment of the application relates to the field of computer vision, in particular to a face image processing method, a related device and a storage medium.
Background
The misuse of face recognition systems based on artificial intelligence technology is becoming increasingly serious. For example, a real estate company may gauge customers' willingness to buy a house through a face recognition system and quote higher prices to highly willing customers; likewise, a shopping mall may profile consumers' buying habits through a face recognition system and push products to people with fixed buying habits. Such abuse has raised unprecedented concerns about personal privacy.
Currently, there are a number of solutions to the face recognition abuse problem, for example, wearing countermeasure glasses or countermeasure caps carrying a countermeasure disturbance, or attaching a countermeasure sticker to the face. These physical entities carrying countermeasure disturbances can mislead or confuse a face recognition system. For example, when user A wears such an entity, the face recognition system may incorrectly identify user A as user B; since the system can no longer accurately identify the user, face privacy is protected.
However, existing methods for generating countermeasure glasses, countermeasure stickers and countermeasure caps tend to simply optimize the countermeasure image containing the countermeasure disturbance toward maximum similarity with a single target face image, so the optimization target is single. As a result, all users wearing the same countermeasure entity are likely to be identified as the same target user. For example, the face recognition system of a mall these users frequent would repeatedly report that the same target user is coming and going, a pattern that is easily detected by the system's defense strategy. Consequently, both the attack success rate of the countermeasure disturbance and its attack stability in the physical world are low.
Disclosure of Invention
The embodiment of the application provides a face image processing method, a related device and a storage medium, which iteratively optimize a candidate countermeasure disturbance toward greater similarity with a plurality of different target faces, thereby obtaining a target countermeasure disturbance with a high attack success rate and strong attack stability in the physical world.
In a first aspect, an embodiment of the present application provides a face image processing method, including:
determining an initial face image;
acquiring candidate countermeasure images, wherein the candidate countermeasure images are updated based on historical candidate countermeasure images;
determining a target face image conforming to a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image;
generating a face countermeasure sample based on the target face image and the initial face image.
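As a purely illustrative sketch (not part of the claimed subject matter), the four steps above can be expressed as a toy loop in Python; the flattened-pixel feature extractor, the similarity threshold standing in for the "first preset condition", and the perturbation update rule below are all assumptions for demonstration, not the patented method:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def generate_face_countermeasure_sample(initial_face, face_image_set, extract_features,
                                        first_preset_condition=0.95, max_iters=50, step=0.1):
    # Step 1: the initial face image is given; start from a zero perturbation.
    perturbation = np.zeros_like(initial_face)
    for _ in range(max_iters):
        # Step 2: the candidate countermeasure image is the perturbed initial face.
        candidate = np.clip(initial_face + perturbation, 0.0, 1.0)
        feats = extract_features(candidate)
        # Step 3: pick from the preset set the target face most similar to the candidate.
        sims = [cosine_similarity(feats, extract_features(t)) for t in face_image_set]
        best = int(np.argmax(sims))
        if sims[best] >= first_preset_condition:
            # Step 4: a candidate meeting the condition is the face countermeasure sample.
            return candidate
        # Toy update rule: nudge the perturbation toward the current target face.
        perturbation = perturbation + step * (face_image_set[best] - candidate)
    return np.clip(initial_face + perturbation, 0.0, 1.0)
```

In the actual embodiments the update would be driven by gradients of a face recognition model's features rather than this pixel-space nudge.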
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the input/output module is used for determining an initial face image;
the processing module is used for acquiring candidate countermeasure images, wherein the candidate countermeasure images are updated based on historical candidate countermeasure images;
the processing module is further configured to determine, from a preset face image set, a target face image that meets a first preset condition according to the initial face image and the candidate countermeasure image; and
generate a face countermeasure sample based on the target face image and the initial face image.
In a third aspect, embodiments of the present application provide a processing apparatus, including:
at least one processor, a memory, and an input/output unit;
wherein the memory is for storing a computer program and the processor is for invoking the computer program stored in the memory to perform the method described in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method described in the first aspect.
Compared with the prior art, in the face image processing method, related device and storage medium of the embodiments of the application, a target face image meeting a first preset condition is determined from a preset face image set according to the initial face image and the candidate countermeasure image, and a face countermeasure sample is then generated according to the target face image and the initial face image. That is, unlike the prior art, the face countermeasure sample is not optimized directly toward increasing similarity with a single attacker or decreasing similarity with a single protected person; instead, by means of the preset face image set, it is optimized multiple times toward maximum similarity with a plurality of target faces based on different face images in the set. The resulting face countermeasure sample is therefore not merely similar or dissimilar to one single image; it acquires more countermeasure features for achieving a countermeasure attack based on multiple target images, rather than a single countermeasure feature optimized on a single image. Accordingly, the face countermeasure sample can resemble multiple target images; when facing different face recognition systems, it can present different effective countermeasure features yet produce the same attack result. In other words, the face countermeasure sample has strong attack robustness and strong migration aggressiveness, and produces a stable attack effect across multiple different face recognition models.
Therefore, the face countermeasure sample has a high attack success rate and strong attack stability in the physical world, can effectively protect face privacy, and can be migrated to test the attack resistance of additional face recognition models.
Drawings
The objects, features and advantages of the embodiments of the present application will become readily apparent from the detailed description of the embodiments of the present application read with reference to the accompanying drawings. Wherein:
FIG. 1 is a schematic view of prior-art countermeasure glasses;
FIG. 2 is a schematic view of prior-art countermeasure stickers;
FIG. 3 is a schematic flow chart of generating a countermeasure cap in the prior art;
fig. 4 is a schematic diagram of a face image processing system according to an embodiment of the present application;
fig. 5 is a step diagram of a face image processing method provided in an embodiment of the present application;
fig. 6 is a flow chart of a face image processing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of wearing a mask made from a countermeasure disturbance obtained by the face image processing method provided in an embodiment of the present application;
fig. 8 is a scene diagram of a target user whose face, wearing a mask made from a countermeasure disturbance obtained by the face image processing method provided in an embodiment of the present application, is recognized by a face recognition system;
FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic diagram of a part of a structure of a mobile phone related to a terminal device according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The terms "first", "second" and the like in the description, the claims and the above figures of the embodiments of the application are used to distinguish similar objects (e.g., a first similarity and a second similarity simply denote different similarities) and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising" and any variations thereof are intended to cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article or apparatus. The division of modules in the embodiments of the application is only one logical division; in actual implementation, multiple modules may be combined or integrated into another system, some features may be omitted or not implemented, and the couplings, direct couplings or communication connections between modules may be indirect couplings or communication connections through interfaces, which may be electrical or take other forms; none of these variations limits the embodiments of the application. The modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
The embodiment of the application provides a face image processing method, a related device and a storage medium, which can be applied to an image processing system comprising an image processing apparatus and an image recognition apparatus. The image processing apparatus is at least used for determining a target face image meeting a first preset condition from a preset face image set according to an initial face image and a candidate countermeasure image, and generating a face countermeasure sample based on the target face image and the initial face image. The image processing apparatus may be an application program that determines the target face image and generates the face countermeasure sample, or a server on which such an application program is installed. The image recognition apparatus may be a recognition program that obtains the face features of a face image, for example a face recognition model, or a terminal device on which a face recognition model is deployed.
The solution provided in the embodiments of the present application relates to techniques such as artificial intelligence (Artificial Intelligence, AI), natural language processing (Natural Language Processing, NLP), and machine learning (Machine Learning, ML), and is specifically described by the following embodiments:
Among these, artificial intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of sensing, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is a science that studies how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as recognition, tracking and measurement on a target, and further performs graphic processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-domain interdiscipline involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning (DL) generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
The inventor has found through research that, in order to generate a countermeasure disturbance, the prior art often optimizes the countermeasure image containing the disturbance toward maximum similarity with a single target face image, so the optimization target is single. Consequently, different face recognition systems may extract different recognition features from the same countermeasure disturbance, leading to different attack results. That is, the countermeasure disturbance generated by the prior art has weak attack robustness and weak migration attack capability, and cannot produce a stable attack effect on multiple different face recognition models.
As shown in fig. 1, the countermeasure glasses use a generative model to generate a countermeasure disturbance in the shape of the lenses, and the generated disturbance is printed on the glasses; after the initial face wears the glasses carrying the disturbance, the recognition result of the face recognition system is made erroneous. During training, the generative model is optimized through a corresponding objective function, so that the countermeasure image obtained after the initial person wears the printed glasses is optimized toward maximum similarity with a single target face. This method simply optimizes the countermeasure image toward a single target face image, so the optimization target is single, and both the attack success rate of the resulting disturbance and its attack stability in the physical world are low. In addition, because the glasses cover only a small area, they cannot hide the key information of the face, which further reduces the countermeasure attack performance.
As shown in fig. 2, the inventor has also found that countermeasure stickers are obtained by a position search method: the positions and angles at which a sticker, when attached to the initial face, makes the resulting image most similar to a real face other than the initial face are searched; the position and angle with the highest similarity are then selected, and the sticker is attached to the initial face accordingly, so that when the face recognition system captures the protected face, it misjudges it as someone else. For example, in fig. 2a, the initial face is James, which, after a sticker is attached, can be erroneously recognized by the face recognition system as John gold; in fig. 2b, the initial face is Josie Bissett, which, after a sticker is attached, can be erroneously recognized as Patricia Arquette; in fig. 2c, the initial face is Harrison Ford, which, after a sticker is attached, can be erroneously recognized as Tom Daschle. For such countermeasure stickers, the sticker itself is a natural pattern; only its position and angle are changed during optimization to make the face recognition system err, and the pattern itself contributes nothing to the attack, so the sticker has no algorithmic countermeasure capability and a low attack success rate. Moreover, the sticker covers only a small area and cannot hide the key information of the initial face.
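The position search described above can be sketched as an exhaustive search over candidate placements. The `paste` helper, the restriction to right-angle rotations, and the scoring function below are illustrative assumptions, not the prior-art implementation itself:

```python
import numpy as np

def paste(face, sticker, pos, angle):
    # Paste a square sticker patch at (row, col); this toy sketch only
    # supports right-angle rotations via np.rot90.
    out = face.copy()
    patch = np.rot90(sticker, k=angle // 90)
    r, c = pos
    h, w = patch.shape
    out[r:r + h, c:c + w] = patch
    return out

def search_sticker_placement(face, sticker, positions, angles, similarity_fn):
    # Try every position/angle pair and keep the one that maximizes the
    # similarity score to faces other than the wearer.
    best_pos, best_angle, best_score = None, None, -np.inf
    for pos in positions:
        for angle in angles:
            score = similarity_fn(paste(face, sticker, pos, angle))
            if score > best_score:
                best_pos, best_angle, best_score = pos, angle, score
    return best_pos, best_angle, best_score
```

As the passage notes, only the placement is optimized here; the sticker pattern itself never changes, which is why this approach has no algorithmic countermeasure capability.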
As shown in fig. 3, the inventor has further found that a countermeasure cap is printed with a countermeasure disturbance, so that the face recognition system errs after the initial face wears the cap. First, a rectangular countermeasure disturbance is initialized, and its two-dimensional projection (GT) is pasted onto an initial face image wearing a cap to obtain a countermeasure image; the facial information features (ArcFace) of the countermeasure image are extracted, the cosine similarity loss between these features and those of the initial face is computed, a total-variation loss (TV loss) is added, and the objective function is minimized, thereby maximizing the cosine distance between the facial information features of the countermeasure image and those of the initial face. As can be seen, the cap's countermeasure algorithm mostly relies on the facial information features of the countermeasure sample, and the portion involved in the countermeasure disturbance is small, so the algorithmic countermeasure capability is poor and the attack success rate is low. In addition, although the cap has a large area, the cap body lies outside the face region and is easily removed by the cropping operation of the face recognition system, so it cannot exert its countermeasure effect.
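The cap pipeline's objective (cosine similarity loss plus TV loss) can be sketched as follows; the feature vectors are assumed to come from a face recognition model such as ArcFace, and the TV weighting is an arbitrary illustrative choice rather than the value used in the prior art:

```python
import numpy as np

def tv_loss(p):
    # Total-variation loss: penalizes abrupt pixel changes so the printed
    # disturbance stays smooth and physically realizable.
    return float(np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum())

def cap_objective(adv_feats, face_feats, perturbation, tv_weight=1e-3):
    # Minimizing cosine similarity (plus TV loss) maximizes the cosine
    # distance between the countermeasure image's facial features and the
    # initial face's facial features.
    cos = float(np.dot(adv_feats, face_feats) /
                (np.linalg.norm(adv_feats) * np.linalg.norm(face_feats) + 1e-8))
    return cos + tv_weight * tv_loss(perturbation)
```

In an actual attack loop this objective would be minimized by gradient descent over the perturbation pixels.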
Compared with the prior art, in the embodiments of the application, a target face image meeting a first preset condition is obtained from a preset face image set according to the initial face image and the countermeasure image, and a face countermeasure sample is then generated based on the target face image. The face countermeasure sample is therefore not optimized directly toward increasing similarity with the attacker or decreasing similarity with the protected person; instead, by means of the preset face image set, it is optimized based on a plurality of face images. The optimization target is not a single target face as in the prior art, but a plurality of target faces, so the attack success rate of the finally obtained face countermeasure sample and its attack stability in the physical world are both high. Different face recognition systems can extract different effective disturbance features from the face countermeasure sample generated by the application and still reach the same attack result; that is, the sample has strong attack robustness and strong migration aggressiveness, and produces a stable attack effect across multiple different face recognition models. In addition, in some embodiments of the present application, the face countermeasure sample may be materialized into a preset object covering a large face area, which can then be used to perform countermeasure attack tests on a face recognition model or to help a user protect face privacy.
Compared with the countermeasure glasses and countermeasure stickers, the preset object of the embodiment of the application has a larger area and more countermeasure attack features, so it achieves a better countermeasure attack test effect and a better face privacy protection effect. Compared with the countermeasure cap, the preset object covers the face rather than the head, so it is not easily removed by the cropping operation of the face recognition system and can better exert the countermeasure attack effect. The countermeasure image or the face countermeasure sample may be generated by an image processing system comprising an image processing apparatus and an image recognition apparatus in the embodiments of the present application.
In some embodiments, the image processing apparatus and the image recognition apparatus are disposed separately, as shown in fig. 4, and the face image processing method provided in the embodiments of the present application may be implemented based on one image processing system shown in fig. 4. The image processing system may include a server 01 and a terminal device 02.
The server 01 may be an image processing apparatus in which an image processing program, such as a face image processing program, may be deployed.
The terminal device 02 may be an image recognition apparatus in which a recognition model, for example, an image recognition model trained based on a machine learning method, may be deployed. Wherein the image recognition model may be a face recognition model or the like.
The server 01 may receive a preset face image set and an initial face image from the outside, acquire a candidate countermeasure image based on the initial face image, and transmit these face images to the terminal device 02. The terminal device 02 may process each face image with a face recognition model to obtain its face features, and feed the features back to the server 01. The server 01 may receive the initial features of the initial face image, the countermeasure features of the candidate countermeasure image, and the face features of the images in the preset face image set, and then acquire a target face image from the set based on the countermeasure features and the initial features. The server 01 determines a first similarity between the candidate countermeasure image and the target face image; if the first similarity is smaller than a third preset value, the candidate countermeasure disturbance is updated, and a new target face image is obtained from the updated candidate countermeasure disturbance and the initial face image. This repeats until the first similarity is not smaller than the third preset value, at which point the candidate countermeasure image is taken as the face countermeasure sample, and its corresponding countermeasure disturbance may be taken as the target countermeasure disturbance.
It should be noted that, the server according to the embodiments of the present application may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, a cloud database, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and an artificial intelligence platform.
The terminal device according to the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a mobile telephone (or "cellular" telephone) or a computer with a mobile terminal. It may also be a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device that exchanges voice and/or data with a radio access network, for example a Personal Communication Service (PCS) telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, or a Personal Digital Assistant (PDA).
The following describes the technical scheme of the application in detail with reference to several embodiments.
The face image processing method according to the embodiment of the present application is described with reference to fig. 5 and 6. The method may be applied to the image processing system shown in fig. 4 and executed by the server to update a candidate countermeasure disturbance into a target countermeasure disturbance; the target countermeasure disturbance may be used to generate an object that covers the user's face and protects the user's face privacy once worn. The method comprises the following steps:
step S100: an initial face image is determined.
In the embodiment of the present application, the initial face image may be an initial face image that meets a preset privacy protection condition, or a face image used for any other purpose (for example, a countermeasure attack test of a face recognition model). The preset privacy protection condition may be a scene in which the user needs privacy protection; for example, if the user needs to protect personal face privacy in a mall or a real estate sales center to avoid illegal collection by merchants, the user's privacy requirement meets the preset privacy protection condition. The preset privacy protection condition may also concern the identity of the source user of the initial face image; for example, if the user's identity is legitimate and no illegal crime is involved, the condition may be considered met.
For example, a real estate company collects customers' willingness to buy houses through a face recognition system and quotes higher prices to highly willing customers. When the customer's face privacy is protected, the company's face recognition system cannot determine the customer's identity from the face information, and thus cannot arbitrarily adjust prices according to the customer's purchase intention.
In another example, a shopping mall profiles consumers' buying habits through a face recognition system and pushes products to people with fixed buying habits. When the consumer's face privacy is protected, the mall's face recognition system cannot determine the consumer's identity from the face information, and thus cannot maliciously target the consumer with promotions.
It should be noted that a countermeasure attack may be a targeted attack or a non-targeted attack, whose attack objectives differ. Accordingly, the bases on which the target countermeasure images are generated for the two also differ. In a targeted attack, the initial face image may be the face image of an attacker; that is, after the protected person's face is captured by the face recognition system, it may be incorrectly recognized as the attacker (the source user of the initial face image).
In a non-targeted attack, the initial face image may be the face image of the protected person; that is, after the protected person's face (the initial face image) is captured by the face recognition system, it may be erroneously recognized as someone other than the protected person.
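The two attack objectives can be summarized in a small hedged sketch; the sign convention below (minimize a loss in both cases) is one common formulation assumed here for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two face feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def attack_loss(adv_feats, reference_feats, targeted):
    # Targeted attack: minimize the loss to INCREASE similarity to the
    # attacker's features (reference = attacker).
    # Non-targeted attack: minimize the loss to DECREASE similarity to the
    # protected person's own features (reference = protected person).
    s = cosine_similarity(adv_feats, reference_feats)
    return -s if targeted else s
```

Either loss would then be minimized over the countermeasure disturbance during the iterative update described below.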
Step S200: candidate countermeasure images are acquired.
In general, the challenge image may be obtained by directly superimposing the countermeasure disturbance on the protected face image. Therefore, since the protected face image itself is not modified, the process of iteratively updating the candidate countermeasure image to obtain the target countermeasure image may be regarded as the process of iteratively updating the candidate countermeasure disturbance to obtain the target countermeasure disturbance.
Thus, in the embodiment of the present application, an initial countermeasure disturbance may be initialized directly and then iteratively updated; in the iterative update process, a plurality of historical candidate countermeasure disturbances may be obtained and used as the basis for updating the candidate countermeasure disturbance at the next time step, until the target countermeasure disturbance is obtained. That is, the target countermeasure disturbance is updated from the candidate countermeasure disturbance obtained at the last time step. For example, assume that the target countermeasure disturbance C is obtained through 3 updates based on the initial countermeasure disturbance C1: the candidate countermeasure disturbance C2 is obtained by the first update based on C1, the candidate countermeasure disturbance C3 is obtained by the second update based on C2, and the target countermeasure disturbance C is obtained by the third update based on C3.
In an embodiment of the present application, the initial countermeasure disturbance may be generated by a preset pattern generation model based on a preset hidden vector. The pattern generation model comprises an encoder and a decoder: the encoder encodes the preset hidden vector, and the decoder decodes the encoded hidden vector to generate the initial countermeasure disturbance. It should be noted that the initial countermeasure disturbance needs to be smaller in size than the initial face image.
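As a minimal sketch of this encode-decode generation (the latent size, code size, patch shape, and random linear maps are all illustrative assumptions, not details from this application), an initial countermeasure disturbance smaller than the face image can be produced from a preset hidden vector as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16                   # size of the preset hidden vector (assumed)
CODE_DIM = 32                     # size of the encoder output (assumed)
PATCH_SHAPE = (8, 8, 3)           # disturbance patch, smaller than the face image
face_image_shape = (112, 112, 3)  # typical face-crop size (assumed)

# Toy "pattern generation model": encoder and decoder as fixed random linear maps.
W_enc = rng.standard_normal((CODE_DIM, LATENT_DIM)) * 0.1
W_dec = rng.standard_normal((int(np.prod(PATCH_SHAPE)), CODE_DIM)) * 0.1

def generate_disturbance(z):
    """Encode the hidden vector z, then decode the code into a disturbance patch."""
    code = np.tanh(W_enc @ z)      # encoder
    patch = np.tanh(W_dec @ code)  # decoder; values stay in (-1, 1)
    return patch.reshape(PATCH_SHAPE)

z0 = rng.standard_normal(LATENT_DIM)  # preset hidden vector
initial_disturbance = generate_disturbance(z0)
```

Here the disturbance occupies only a small patch, respecting the requirement that it be smaller than the initial face image.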
In the embodiment of the present application, candidate countermeasure images may be obtained by the following methods (1) and (2):
Method (1): Perform a weighted calculation on the candidate countermeasure disturbance and the protected face image to obtain the candidate countermeasure image.
In this embodiment of the present application, a first weight vector and a second weight vector may be preset, so that the protected face image and the candidate countermeasure disturbance can be combined using the preset first weight vector and second weight vector to obtain the candidate countermeasure image.
For example, the first weight vector may include a plurality of weight vector elements in one-to-one correspondence with the pixels of the candidate countermeasure disturbance, and the second weight vector may include a plurality of weight vector elements in one-to-one correspondence with the pixels of the protected face image. Since the candidate countermeasure disturbance is smaller than the protected face image, the first weight vector has fewer elements than the second weight vector, and at the positions of the protected face image covered by the candidate countermeasure disturbance the first weight vector and the second weight vector are complementary. For example, if the candidate countermeasure disturbance is a pattern for the nose region of the face, then within the nose region the sum of each weight vector element of the first weight vector and the corresponding weight vector element of the second weight vector is 1, and outside the nose region the weight vector elements of the second weight vector are all 1.
Thus, the candidate countermeasure image is obtained by adding the product of each weight vector element of the first weight vector with the corresponding pixel of the candidate countermeasure disturbance to the product of each weight vector element of the second weight vector with the corresponding pixel of the protected face image.
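Method (1) can be sketched as follows, assuming a square face crop, a nose-region patch, and complementary per-pixel weights (the region coordinates and the blending weight alpha are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

face = rng.random((112, 112, 3))  # protected face image, values in [0, 1]
patch = rng.random((24, 24, 3))   # candidate countermeasure disturbance
top, left, size = 60, 44, 24      # assumed nose-region position and size
alpha = 0.6                       # first-weight value inside the region

# Second weight vector: 1 everywhere except the disturbance region, where it is
# 1 - alpha so that the two weights are complementary (they sum to 1).
w2 = np.ones_like(face)
w2[top:top + size, left:left + size, :] = 1.0 - alpha

# First weight vector: covers only the disturbance region.
w1 = np.full_like(patch, alpha)

candidate = w2 * face
candidate[top:top + size, left:left + size, :] += w1 * patch
```

Outside the nose region the candidate image equals the protected face image; inside it is the weighted blend of disturbance and face.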
Method (2): and replacing the preset area of the protected face image with the candidate countermeasure disturbance to obtain the candidate countermeasure image.
In the embodiment of the present application, clipping may be performed on a preset area of the protected face image. For example, if the candidate countermeasure disturbance is an image of a nose area, then the nose area of the protected face image is the preset area; the nose area in the protected face image may be clipped out and the candidate countermeasure disturbance placed into the clipped area, thereby obtaining the candidate countermeasure image.
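Method (2) reduces to a direct region replacement; a minimal sketch (region coordinates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

face = rng.random((112, 112, 3))  # protected face image
patch = rng.random((24, 24, 3))   # candidate countermeasure disturbance
top, left, size = 60, 44, 24      # assumed nose-region position and size

candidate = face.copy()
# Clip out the preset (nose) region and insert the disturbance in its place.
candidate[top:top + size, left:left + size, :] = patch
```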
The candidate countermeasure image can thus be obtained by the above method (1) or method (2), based on the protected face image and the candidate countermeasure disturbance.
Step S300: and determining a target face image which accords with a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image.
In this embodiment of the present application, the preset face image set may be an open-source face image set, or a face image set obtained by temporarily photographing a plurality of faces. The preset face image set contains face data of a plurality of different faces, such as face images and photos. Since the target face image is selected from a preset face image set rather than being the single target (initial) face image of the prior art, the face challenge sample generated based on the target face image is similar not to a single face image but to a plurality of target face images. Because the appearance of the face challenge sample resembles a plurality of target face images, the face it visually represents is more generic, and the true identity of the user cannot be discovered by the naked eye. In addition, it should be noted that in the no-target attack scenario the preset face image set may or may not include the face image of the initial face, while in the target attack scenario the preset face image set may exclude the initial face image (the face image of the attacked person), so as to prevent optimization only toward the single target face of the attacked person.
In this embodiment of the present application, the target face image may serve as the optimization direction of the candidate countermeasure image; that is, in each iteration round of iteratively updating the candidate countermeasure image, a target face image is determined, and the candidate countermeasure image is optimized in a direction similar to the target face image. The target face image is the face image in the preset face image set that satisfies the first preset condition, i.e. a face image that can help achieve the purpose of the countermeasure attack.
In the embodiment of the present application, the target face image is acquired differently under the two conditions of target attack and no-target attack; the acquisition of the target face image in each case is explained separately below.
When the face challenge sample is for a no-target attack:
The target face image is determined from the preset face image set through the following steps a-d.
a: Acquire the initial features of the initial face image.
In the embodiment of the present application, the face challenge sample is used for a no-target attack and the initial face image is the protected face image, so the initial features can be obtained by extracting image features from the protected face image. The extraction of the image features of the protected face image may be achieved through a preset face recognition model; for example, the protected face image is input into the face recognition model, which performs feature extraction on the input image to obtain the initial features (the face features of the protected face image).
b: and acquiring the countermeasure characteristics of the countermeasure image.
After the countermeasure images are obtained by the methods (1) and (2), the method for obtaining the countermeasure features may be the same as the method for obtaining the initial features, for example, the candidate countermeasure images are input into a face recognition model, and the face recognition model may perform feature extraction on the input candidate countermeasure images to obtain the countermeasure features.
c: and acquiring the face characteristics of each face image in the preset face image set.
In this embodiment of the present application, face features corresponding to each face image in the preset face image set may be obtained based on the preset face image set. The face features of each face image in the preset face image set can be obtained by adopting the same extraction method as that of the countermeasure features and the initial features. For example, each face image in the preset face image set is input into a face recognition model, and the face recognition model can perform feature extraction on each input face image to obtain face features corresponding to each face image. It should be noted that, after extracting the face features corresponding to each face image in the preset face image set once, the face features can be stored in a local or cloud end, and can be directly used in the next use without carrying out face feature extraction actions each time.
d: and selecting the target face image according to a first preset condition and the similarity between each face feature and the initial feature and the countermeasure feature respectively.
In the embodiment of the present application, the third similarity between each face feature and the countermeasure feature may be used to represent the first similarity between each face image and the candidate countermeasure image, and the fourth similarity between each face feature and the initial feature (the protected face feature) may be used to represent the second similarity between each face image and the initial face image (the protected face image). The third similarity and the fourth similarity may be obtained based on distance or similarity measures (e.g. Euclidean distance, Chebyshev distance, or cosine similarity) between the respective face feature and the countermeasure feature and the initial feature.
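The measures named above can be sketched on feature vectors as follows (the vectors are illustrative; in practice they would be face features produced by a recognition model):

```python
import numpy as np

def euclidean(a, b):
    """Euclidean (L2) distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

def chebyshev(a, b):
    """Chebyshev (L-infinity) distance: largest per-element difference."""
    return float(np.max(np.abs(a - b)))

def cosine_similarity(a, b):
    """Cosine of the angle between the two vectors (a similarity, not a distance)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([1.0, 0.0, 0.0])  # a face feature (illustrative)
o = np.array([0.0, 1.0, 0.0])  # the initial feature (illustrative)

d_e = euclidean(x, o)          # sqrt(2)
d_c = chebyshev(x, o)          # 1.0
s = cosine_similarity(x, o)    # 0.0 (orthogonal vectors)
```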
In the embodiment of the present application, the higher the third similarity, the higher the first similarity it represents, and vice versa; likewise, the higher the fourth similarity, the higher the second similarity, and vice versa. The difference between the third similarity and the fourth similarity therefore represents the difference between the first similarity and the second similarity: when the former difference is the largest, the latter difference is also the largest and can be considered to be greater than the first preset value. At that point the first similarity is the largest while the second similarity is the smallest, so the obtained target face image has the highest similarity with the candidate countermeasure image and the lowest similarity with the initial face image (the protected face image).
Therefore, the target face image can be determined from the preset face image set according to the third similarity and the fourth similarity.
In this embodiment of the present application, specifically, the target face image may be obtained from the preset face image set according to the third similarity and the fourth similarity based on the following formula (1):
P = argmax_{x∈T} ( ‖x−O‖₂ − ‖x−adv‖₂ )    (1)
where T is the face feature set composed of the face features corresponding to each face image in the preset face image set, x is any face feature in the face feature set, O is the initial feature (the protected face feature), and adv is the countermeasure feature. ‖x−O‖₂ is the first distance from the face feature x to the initial feature (the protected face feature) and represents the fourth similarity; ‖x−adv‖₂ is the second distance from the face feature x to the countermeasure feature and represents the third similarity. P is the face feature in the face feature set T for which the first distance exceeds the second distance by the largest margin, namely the target face feature, and the face image corresponding to the target face feature is the target face image.
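Formula (1) can be sketched as follows (the feature set, initial feature, and countermeasure feature are random stand-ins for features produced by a face recognition model):

```python
import numpy as np

rng = np.random.default_rng(3)

T = rng.standard_normal((50, 128))  # 50 face features from the preset set
O = rng.standard_normal(128)        # initial (protected) face feature
adv = rng.standard_normal(128)      # countermeasure feature

first = np.linalg.norm(T - O, axis=1)      # ||x - O||_2, per feature x
second = np.linalg.norm(T - adv, axis=1)   # ||x - adv||_2, per feature x

# No-target attack: pick the feature far from O and close to adv.
idx = int(np.argmax(first - second))
P = T[idx]  # target face feature
```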
The difference between the third similarity and the fourth similarity corresponding to the target face feature is the largest; that is, the similarity between the target face feature and the countermeasure feature is the highest (the second distance is the smallest), and the similarity between the target face feature and the initial feature (the protected face feature) is the lowest (the first distance is the largest). The face image corresponding to the target face feature is the target face image, so the target face image has the lowest similarity with the initial face image (the protected face image) and the highest similarity with the candidate countermeasure image. Therefore, when the candidate countermeasure image obtained by fusing the candidate countermeasure disturbance with the initial face image (the protected face image) is recognized by the face recognition system, it is more likely to be erroneously recognized as the target face image than as the initial face image (the protected face image), thereby protecting the privacy of the initial face image (the protected face image).
When the face challenge sample is for a target attack:
and e, determining a target face image from the preset face image set through the following steps of e-h.
e. And acquiring initial characteristics of the initial face image.
In the embodiment of the application, the face countermeasure sample is used for target attack, and the initial face image is an attacked face image, so that the initial characteristics can be obtained by extracting image characteristics from the attacked face image. The method for extracting the image features of the face image of the attacked person may be the same as the method for extracting the image features in the protected face image, which is not described in detail herein.
f: and acquiring the countermeasure characteristics of the countermeasure image.
In the embodiment of the present application, the detailed steps of step f are the same as those of step b, and are not described in detail herein.
g: and acquiring the face characteristics of each face image in the preset face image set.
In the embodiment of the present application, the detailed steps of step g are the same as step c, and are not described in detail herein.
h: and selecting the target face image according to a first preset condition and the similarity between each face feature and the initial feature and the countermeasure feature respectively.
In the embodiment of the present application, a third similarity between each face feature and the countermeasure feature may be used to represent the first similarity between each face image and the candidate countermeasure image, where the higher the third similarity is, the higher the first similarity is, and vice versa the lower the third similarity is; the second similarity between each face image and the initial face image (the attacked face image) is represented by a fourth similarity between each face feature and the initial feature (the attacked face feature), and the higher the fourth similarity is, the higher the second similarity is, and vice versa is, the lower the second similarity is. The calculation method of each similarity refers to step d, and is not described in detail herein.
And then the sum of the third similarity and the fourth similarity represents the sum of the first similarity and the second similarity, and when the sum of the third similarity and the fourth similarity is maximum, the sum of the first similarity and the second similarity is maximum, and then the sum of the first similarity and the second similarity can be considered to be larger than a second preset value. When the sum of the third similarity and the fourth similarity is maximum, the sum of the first similarity and the second similarity is maximum, that is, the first similarity is maximum and the second similarity is also maximum, at this time, the obtained target face image has the highest similarity with the candidate countermeasure image, and simultaneously has the highest similarity with the initial face image (the face image of the attacked person), that is, the candidate countermeasure face image has higher similarity with the initial face image (the face image of the attacked person).
In this embodiment of the present application, specifically, the target face image may be obtained from the preset face image set according to the third similarity and the fourth similarity based on the following formula (2):
P = argmin_{x∈T} ( ‖x−O‖₂ + ‖x−adv‖₂ )    (2)
where T is the face feature set composed of the face features corresponding to each face image in the preset face image set, x is any face feature in the face feature set, O is the initial feature (the face feature of the attacked person), and adv is the countermeasure feature. ‖x−O‖₂ is the first distance from the face feature x to the initial feature (the face feature of the attacked person) and represents the fourth similarity; ‖x−adv‖₂ is the second distance from the face feature x to the countermeasure feature and represents the third similarity. P is the face feature in the face feature set T with the smallest sum of the first distance and the second distance, namely the target face feature, and the face image corresponding to the target face feature is the target face image.
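Formula (2) can be sketched as follows (the feature set, initial feature, and countermeasure feature are random stand-ins for features produced by a face recognition model):

```python
import numpy as np

rng = np.random.default_rng(4)

T = rng.standard_normal((50, 128))  # face features from the preset set
O = rng.standard_normal(128)        # face feature of the attacked person
adv = rng.standard_normal(128)      # countermeasure feature

first = np.linalg.norm(T - O, axis=1)      # ||x - O||_2, per feature x
second = np.linalg.norm(T - adv, axis=1)   # ||x - adv||_2, per feature x

# Target attack: pick the feature close to both O and adv.
idx = int(np.argmin(first + second))
P = T[idx]  # target face feature
```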
The sum of the fourth similarity and the third similarity corresponding to the target face feature is the largest; that is, the similarity between the target face feature and the countermeasure feature is the highest (the second distance is the smallest), and the similarity between the target face feature and the initial feature (the face feature of the attacked person) is also the highest (the first distance is the smallest). Therefore, when the candidate countermeasure image obtained by fusing the candidate countermeasure disturbance with the protected face image is recognized by a face recognition system, it is more easily and incorrectly recognized as the face of the attacked person than as the protected face image, thereby protecting the privacy of the protected face image.
The above steps a-d and e-h describe the target face image acquisition methods for the no-target attack and target attack cases, respectively. After the target face image is obtained, step S400 may be performed: a face challenge sample is generated based on the target face image and the initial face image.
In the embodiment of the present application, whether for a target attack or a no-target attack, steps a-d or e-h ensure that the obtained target face image has a high similarity with the candidate challenge image, together with a low similarity with the protected face image (in a no-target attack) or a high similarity with the face of the attacked person (in a target attack). However, this alone cannot guarantee that the face recognition system will erroneously recognize the candidate challenge image as the target face image or as the face image of the attacked person. Therefore, a third preset value may be set: after the target face image is obtained, whether the candidate challenge image can be erroneously recognized is judged by checking whether the first similarity between the target face image and the candidate challenge image reaches the third preset value; if it does not, the candidate countermeasure disturbance needs to be updated further.
When the face recognition system recognizes an image, it distinguishes which face the image belongs to according to the feature information of the image. Accordingly, the first similarity between the candidate countermeasure image and the target face feature of the target face image can be calculated based on the countermeasure feature. The smaller the feature loss between the target face image and the candidate challenge image, the more likely the candidate challenge image, when used to attack the face recognition system, will be erroneously determined to be the target face image (in a no-target attack) or the face image of the attacked person (in a target attack).
In the embodiment of the present application, the formula for calculating the feature loss based on the countermeasure feature of the candidate countermeasure image and the target face feature of the target face image is as follows:
Loss = (1/N) · Σ_{i=1}^{N} (adv_i − best_i)²    (3)
where adv is the countermeasure feature of the candidate countermeasure image, best is the target face feature corresponding to the target face image, N is the total number of vector elements in the feature vector, and Loss is the feature loss between the countermeasure feature and the target face feature, which represents the first similarity between the candidate countermeasure image and the target face image. The smaller the feature Loss, the higher the first similarity between the candidate countermeasure image and the target face image and the higher the attack success rate; conversely, the larger the Loss, the lower the attack success rate.
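A minimal sketch of this feature loss, reconstructed here as a mean of per-element squared differences over the N vector elements (one plausible reading of the description, not a formula confirmed by this application):

```python
import numpy as np

def feature_loss(adv, best):
    """Mean of squared per-element differences between the two feature vectors."""
    n = adv.size  # N: total number of vector elements
    return float(np.sum((adv - best) ** 2) / n)

best = np.array([0.2, 0.4, 0.6, 0.8])  # target face feature (illustrative)
adv_equal = best.copy()                # countermeasure feature identical to target
adv_shifted = best + 0.1               # countermeasure feature offset by 0.1
```

Identical features give zero loss (maximum first similarity); the farther the countermeasure feature drifts from the target face feature, the larger the loss.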
According to the above formula (3), a first similarity between the candidate countermeasure image obtained from the candidate countermeasure disturbance and the target face image obtained from the candidate countermeasure image can be calculated.
In the embodiment of the present application, a feature loss preset value may be set in advance to judge whether the feature loss between the candidate countermeasure image and the target face image reaches it; when the feature loss reaches the preset value, the first similarity may be considered to have reached the third preset value. Alternatively, when the feature loss between the candidate countermeasure image and the target face image is minimized (at which point the first similarity reaches its maximum), the first similarity may be considered to have reached the third preset value.
In the embodiment of the present application, when the first similarity between the candidate countermeasure image obtained from the candidate countermeasure disturbance and the target face image obtained from the candidate countermeasure image does not reach the third preset value, the candidate countermeasure disturbance may be updated by the following steps i-k:
i: and acquiring gradient change information of the first similarity relative to the hidden vector.
j: updating the hidden vector based on the gradient change information.
k: and updating the candidate countermeasure disturbance based on the updated hidden vector.
In the embodiment of the present application, the direction in which the first similarity between the candidate countermeasure image and the target face image is maximized is the direction in which the feature loss between the candidate countermeasure image and the target face image is minimized. The candidate countermeasure disturbance is generated by a preset pattern generation model based on the hidden vector, so that the hidden vector of the pattern generation model can be optimized by using optimizers (such as a Momentum optimizer, an AdaGrad optimizer, a RMSProp optimizer and an Adam optimizer) towards the direction of reducing the characteristic Loss.
Considering that directly adjusting (e.g. superimposing) the parameters of the countermeasure disturbance linearly modifies the disturbance itself, the finally generated countermeasure image may only have a countermeasure attack effect on the limited number of face recognition models or image recognition models used during generation, with a poor migration attack effect on other face recognition models. In an embodiment of the present application, the countermeasure disturbance is updated by means of indirect optimization adjustment to generate a countermeasure image with stronger migration aggressiveness; specifically, the countermeasure disturbance is updated by updating the hidden vector of the preset pattern generation model.
In this embodiment, the hidden vector may be adjusted by a gradient iterative optimization method: specifically, calculate the gradient of the first similarity expectation with respect to the hidden vector; calculate the optimization parameters according to a preset step length and the direction of the gradient; then adjust the hidden vector according to the optimization parameters; and finally generate the updated countermeasure disturbance based on the hidden vector.
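The steps above can be sketched with a toy differentiable generator and an analytic gradient (the linear generator, step length, iteration count, and target feature are illustrative assumptions; a real implementation would backpropagate through the pattern generation model and the face recognition model):

```python
import numpy as np

rng = np.random.default_rng(5)

N, LATENT_DIM = 32, 8
W = rng.standard_normal((N, LATENT_DIM)) * 0.3  # toy generator: feature = W @ z
best = rng.standard_normal(N) * 0.1             # target face feature (illustrative)

def loss(z):
    """Feature loss of the disturbance generated from hidden vector z."""
    r = W @ z - best
    return float(r @ r) / N

z = rng.standard_normal(LATENT_DIM)  # initial hidden vector
step = 0.1                           # preset step length

losses = [loss(z)]
for _ in range(200):
    grad = 2.0 * W.T @ (W @ z - best) / N  # gradient of the loss w.r.t. z
    z = z - step * grad                    # adjust z against the gradient direction
    losses.append(loss(z))

updated_disturbance_feature = W @ z  # feature of the updated disturbance
```

For a sufficiently small step length the loss decreases monotonically, mirroring the optimization of the hidden vector toward lower feature Loss.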
In this embodiment, the countermeasure disturbance is updated by means of indirect optimization adjustment: the direct optimization of the countermeasure disturbance is converted into optimization of the input hidden vector from which the disturbance is generated. A change in the hidden vector results in a change in the generated countermeasure disturbance, which in turn results in a change in the countermeasure image. That is, the generation process of the countermeasure image is controlled and coordinated through the generation model, so that the countermeasure disturbance in the countermeasure image is not directly and linearly superimposed on the initial face image but is generated at a semantic level. The resulting countermeasure disturbance is more natural, fits the initial face image better, is less easily perceived by a face recognition model, and has stronger migration attack performance.
In the embodiment of the present application, the updated hidden vector is encoded by the encoder of the pattern generation model and decoded by the decoder to obtain the updated candidate countermeasure disturbance, namely the candidate countermeasure disturbance of the new iteration.
After obtaining the new candidate countermeasure disturbance, a new candidate countermeasure image is obtained based on the new candidate countermeasure disturbance and the protected face image using the method of step S200, and the target face image under the new candidate countermeasure image is acquired from the preset face image set. It should be noted that the target face image at this time may be the same face image as that of the previous round, or a different one. The feature loss between the new candidate countermeasure image and the new target face image is then calculated, and whether the first similarity reaches the third preset value is judged according to the feature loss. If it does not, steps S200-S400 are repeated until the first similarity between the candidate countermeasure image corresponding to the last updated candidate countermeasure disturbance and the target face image reaches the third preset value or reaches its maximum. The last updated candidate countermeasure disturbance is then taken as the target countermeasure disturbance, and the countermeasure image obtained from the target countermeasure disturbance is taken as the target countermeasure image; the target countermeasure image at this point is the face challenge sample.
In addition, in another embodiment of the present application, the iteration may instead be terminated by setting a fixed number of updates, rather than by checking whether the first similarity between the candidate countermeasure face image and the target face image reaches a preset value. For example, if the number of updates of the candidate disturbance is set to 100, then after the initial candidate disturbance has been updated for the 100th time, the disturbance obtained by the 100th update may be taken as the target disturbance.
Compared with the prior art, the face image processing method of the embodiment of the present application first determines, from the preset face image set, a target face image that meets the first preset condition according to the initial face image and the candidate countermeasure image, and then generates the face challenge sample according to the target face image and the initial face image. That is, the face challenge sample generated in the embodiment of the present application is not, as in the prior art, optimized directly in the direction of increasing similarity with a single attacked person or decreasing similarity with a single protected person; instead, by means of the preset face image set, it is optimized multiple times in the directions most similar to a plurality of target faces based on different face images in the set. The resulting face challenge sample is therefore not merely similar or dissimilar to a single image, but similar to a plurality of target images; that is, the face challenge sample can acquire, from multiple target images, more countermeasure features for achieving countermeasure attacks, rather than a single countermeasure feature optimized based on only a single image. Facing different face recognition systems, the face challenge sample can draw on different effective countermeasure features and produce the same attack result; that is, it has strong attack robustness and strong migration aggressiveness, and can produce stable attack effects on a plurality of different face recognition models.
Therefore, the face challenge sample has a high countermeasure attack success rate and strong attack stability in the physical world, can well protect face privacy, and can be migrated to countermeasure attack tests of more face recognition models.
Through steps S100-S400, a face challenge sample that the face recognition system will misrecognize is obtained, together with the target countermeasure disturbance corresponding to the face challenge sample. After the target countermeasure disturbance is materialized, a preset object for covering the protected face can be obtained, and the user can wear the preset object to protect the privacy of the face.
In order to protect the privacy of the user's face in the physical world, in the embodiment of the present application, after the target countermeasure disturbance is determined, the method further includes: materializing the target countermeasure disturbance into a preset object, where the preset object is used for covering a target area of the protected face, and the ratio of the area of the target area to the area of the protected face is larger than a preset ratio.
As shown in fig. 7, in the embodiment of the present application the preset object is a countermeasure mask. Fig. 7a shows a target countermeasure disturbance obtained by the image processing method of the above embodiments; fig. 7b shows the target countermeasure disturbance m1 of fig. 7a converted into a UV image in the shape of a mask; fig. 7c is a schematic view of the countermeasure mask obtained by printing the UV image of fig. 7b onto a mask, worn on the face of the target user (the protected face). Compared with the prior art, the target countermeasure disturbance is updated and optimized multiple times, and the target face toward which each update is optimized may be the same or different; that is, the target countermeasure disturbance is optimized toward greater similarity with a plurality of different target faces. The finally obtained target countermeasure disturbance therefore has strong attack robustness and strong migration aggressiveness, and produces a stable attack effect against multiple different recognition models. After the target countermeasure disturbance is converted into the mask-shaped UV image and printed on a mask, the target user who wears the mask obtains good privacy protection against different face recognition systems. In addition, a mask has a large area: its proportion of the target user's face is higher than that of countermeasure glasses, countermeasure stickers, or countermeasure hats, so it can shield most of the key information of the target user's face and protect the face in the physical world. It should be noted that although the materialized preset object in the embodiment of the present application is a mask, in other embodiments other materialized objects, such as a face towel or a face mask, may also be used.
As shown in fig. 8, after the target countermeasure disturbance is materialized as a preset object, the user may wear the preset object to protect the privacy of his or her face. Assuming that the preset object is a mask, when the user wears the mask and is identified by the face recognition system, the identity information of the user is determined based on the following steps:
Acquire a face privacy protection image, where the face privacy protection image at least includes an image of the preset object. In some embodiments, the face privacy protection image may be an image collected after the preset object is covered on the face of the target user (the protected face), or an image of the preset object itself. As shown in fig. 8, the target user wears the countermeasure mask; the camera of the face recognition system captures a face image of the target user wearing the countermeasure mask, namely the face privacy protection image, and then transmits it to the face recognition model to recognize the identity of the target user.
And extracting the image characteristics of the face privacy protection image.
And determining the identification identity of the target user based on the similarity between the image features of the face privacy protection image and each feature in a preset face feature library.
The identification identity of the target user is a label of a target face in the preset face image set, the similarity between the image characteristics of the target face and the image characteristics of the face privacy protection image is larger than the third preset value, and the label of the target face is different from the true identity of the target user.
In this embodiment, after the target countermeasure disturbance is materialized into a preset object (e.g., a mask), the target user (e.g., the source user of the protected face) may cover the preset object on the target area of the face (e.g., the lower half of the face). When the target user wearing the preset object is captured by the face recognition system, the captured image is a face privacy protection image in which the preset object covers the target user's face. Because the face privacy protection image includes the preset object, the face recognition system has in effect collected a countermeasure image. The face recognition system then extracts the image features of this countermeasure image, calculates the similarity between those features and each feature in a preset face feature library, and selects the identity label with the largest similarity as the recognition result; this is the identity recognition result for the target user wearing the preset object. The similarity between the features of the target face in the face feature library and the countermeasure image (the face privacy protection image) is larger than the third preset value; that is, the face recognition system judges the features of the countermeasure image to be most similar to the features of the target face in the library, and the identity of the target face is not the true identity of the target user. In other words, facing the target user wearing the preset object, the face recognition system erroneously judges the user's identity to be that of the target face, so that the face privacy of the target user is protected.
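The library-matching step above can be sketched as follows. This is an illustrative toy, not the patent's implementation: cosine similarity as the similarity measure, and the names `identify`, `feature_library`, and `third_preset_value`, are assumptions for the example.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_feat, feature_library, third_preset_value):
    """Return (label, similarity) of the library entry most similar to the
    query feature; the label counts as a recognized identity only when the
    similarity exceeds the preset threshold."""
    best_label, best_sim = None, -1.0
    for label, feat in feature_library.items():
        s = cosine_sim(query_feat, feat)
        if s > best_sim:
            best_label, best_sim = label, s
    if best_sim > third_preset_value:
        return best_label, best_sim
    return None, best_sim
```

When the query feature comes from the face privacy protection image, the label returned is that of the target face rather than the wearer's true identity, which is exactly the misrecognition the method relies on.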
A face image processing method in the embodiment of the present application is described above, and a face image processing apparatus (e.g., a server) that executes the face image processing method is described below.
Referring to fig. 9, the image processing apparatus 60 shown in fig. 9 is applicable to a server and is used for determining an initial face image, acquiring candidate countermeasure images, determining a target face image meeting a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure images, and generating a face countermeasure sample based on the target face image and the initial face image. The image processing apparatus 60 in the embodiment of the present application can implement the steps of the face image processing method performed in the embodiment corresponding to fig. 5 described above. The functions performed by the image processing apparatus 60 may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above functions. The image processing apparatus 60 may include a processing module 620 and an input/output module 610; for the functional implementation of these modules, reference may be made to the operations performed in the embodiment corresponding to fig. 5, which are not repeated here.
In the embodiment of the present application, the image processing apparatus 60 includes:
an input-output module 610, configured to determine an initial face image;
a processing module 620 for obtaining candidate countermeasure images, the candidate countermeasure images updated based on historical candidate countermeasure images;
the processing module is further configured to determine, from a preset face image set, a target face image that meets a first preset condition according to the initial face image and the candidate countermeasure image; and
a face challenge sample is generated based on the target face image and the initial face image.
In an embodiment of the present application, the processing module 620 is configured to determine the target face image based on the following first preset conditions:
when the face countermeasure sample is used for an untargeted attack,
the first preset condition includes: the difference between the first similarity and the second similarity is larger than a first preset value;
when the face countermeasure sample is used for a targeted attack,
the first preset condition includes: the sum of the first similarity and the second similarity is larger than a second preset value;
the first similarity is the similarity between the target face image and the candidate countermeasure image, and the second similarity is the similarity between the target face image and the initial face image.
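A minimal sketch of this first preset condition, assuming cosine similarity as the similarity measure; the function name and the preset values are illustrative, not from the patent:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def meets_first_condition(target_feat, adv_feat, init_feat, targeted,
                          first_preset=0.0, second_preset=1.0):
    s1 = cosine_sim(target_feat, adv_feat)   # first similarity: target vs. candidate
    s2 = cosine_sim(target_feat, init_feat)  # second similarity: target vs. initial
    if targeted:
        # Targeted attack: the sum must exceed the second preset value.
        return s1 + s2 > second_preset
    # Untargeted attack: the difference must exceed the first preset value.
    return s1 - s2 > first_preset
```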
In an embodiment of the present application, the processing module 620 is further configured to:
acquiring candidate countermeasure disturbance, wherein the candidate countermeasure disturbance is updated based on historical candidate countermeasure disturbance;
obtaining the candidate countermeasure face image based on the candidate countermeasure disturbance and the initial face image;
determining target face images meeting the first preset conditions from a preset face image set;
and if the first similarity is smaller than a third preset value, updating the candidate countermeasure disturbance until the first similarity is not smaller than the third preset value, and taking the candidate countermeasure face image when the first similarity is not smaller than the third preset value as a face countermeasure sample.
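The update loop described above might be sketched as follows. The callback names (`select_target`, `similarity`, `update_disturbance`) and the additive form of the disturbance are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def optimize_countermeasure(init_face, select_target, similarity,
                            update_disturbance, third_preset_value,
                            max_iters=50):
    """Iteratively update the candidate countermeasure disturbance until the
    first similarity (candidate vs. the selected target face) reaches the
    preset threshold; the candidate at that point is the face countermeasure
    sample."""
    disturbance = np.zeros_like(init_face)
    candidate = init_face.copy()
    for _ in range(max_iters):
        candidate = np.clip(init_face + disturbance, 0.0, 1.0)
        target = select_target(candidate)       # re-pick per first preset condition
        if similarity(candidate, target) >= third_preset_value:
            return candidate                    # face countermeasure sample
        disturbance = update_disturbance(disturbance, candidate, target)
    return candidate
```

Note that the target is re-selected inside the loop, so successive updates may chase different target faces, which is what gives the sample its multi-target character.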
In an embodiment of the present application, the processing module 620 is further configured to:
acquiring candidate countermeasure disturbance, wherein the candidate countermeasure disturbance is updated based on historical candidate countermeasure disturbance;
weighting and calculating the candidate countermeasure disturbance and the initial face image to obtain the candidate countermeasure image; or
replacing a preset area of the initial face image with the candidate countermeasure disturbance to obtain the candidate countermeasure image;
when the face countermeasure sample is used for a targeted attack, the initial face image is an attacked face image;
when the face countermeasure sample is used for an untargeted attack, the initial face image is a protected face image.
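The two ways of composing a candidate countermeasure image can be sketched as below; the blend weight `alpha` and the patch coordinates are illustrative assumptions:

```python
import numpy as np

def blend_candidate(init_img, disturbance, alpha=0.5):
    # Weighted combination of the candidate countermeasure disturbance and
    # the initial face image (pixel values assumed in [0, 1]).
    return np.clip((1.0 - alpha) * init_img + alpha * disturbance, 0.0, 1.0)

def patch_candidate(init_img, disturbance_patch, top, left):
    # Replace a preset area of the initial face image (e.g. the lower half,
    # where a mask would sit) with the candidate countermeasure disturbance.
    out = init_img.copy()
    h, w = disturbance_patch.shape[:2]
    out[top:top + h, left:left + w] = disturbance_patch
    return out
```

The patch form matches the materialization step later in the document: only the replaced region needs to be printed onto the preset object.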
In an embodiment of the present application, the processing module 620 is further configured to:
acquiring initial characteristics of the initial face image;
acquiring the countermeasure features of the candidate countermeasure images;
acquiring face features of each face image in the preset face image set;
and selecting the target face image according to a first preset condition and the similarity between each face feature and the initial feature and the countermeasure feature respectively.
In an embodiment of the present application, the processing module 620 is further configured to:
respectively acquiring a third similarity between each face feature and the countermeasure feature, and respectively acquiring a fourth similarity between each face feature and the initial feature;
based on the third similarity and the fourth similarity, obtaining the difference between the third similarity and the fourth similarity related to each face feature and the sum of the third similarity and the fourth similarity related to each face feature;
when the face countermeasure sample is used for a targeted attack, selecting the face image corresponding to the face feature with the largest sum of the third similarity and the fourth similarity as the target face image;
when the face countermeasure sample is used for an untargeted attack, selecting the face image corresponding to the face feature with the largest difference between the third similarity and the fourth similarity as the target face image.
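A sketch of this selection rule, assuming cosine similarity over precomputed features (the function and variable names are illustrative):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target_face(face_feats, adv_feat, init_feat, targeted):
    """Pick, from the preset face image set, the index maximizing the sum
    (targeted attack) or the difference (untargeted attack) of the third
    and fourth similarities."""
    best_idx, best_score = -1, -np.inf
    for i, feat in enumerate(face_feats):
        s3 = cosine_sim(feat, adv_feat)   # third similarity: vs. candidate
        s4 = cosine_sim(feat, init_feat)  # fourth similarity: vs. initial
        score = s3 + s4 if targeted else s3 - s4
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```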
In this embodiment, if the first similarity is smaller than a third preset value, the processing module 620 is further configured to:
acquiring gradient change information of the first similarity with respect to the hidden vector of the pattern generation model used to generate the candidate countermeasure disturbance;
updating the hidden vector based on the gradient change information;
and updating the candidate countermeasure disturbance based on the updated hidden vector.
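Since the patent does not specify the generator or optimizer, the sketch below estimates the gradient of the first similarity with respect to the hidden vector by finite differences and takes one ascent step; in practice the gradient change information would come from backpropagation through the generation model and the recognition model:

```python
import numpy as np

def update_latent_vector(z, first_similarity, lr=0.5, eps=1e-4):
    """One ascent step on the generator's hidden (latent) vector, using a
    finite-difference estimate of the gradient of the first similarity
    with respect to z."""
    grad = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp.flat[i] += eps
        zm.flat[i] -= eps
        grad.flat[i] = (first_similarity(zp) - first_similarity(zm)) / (2.0 * eps)
    return z + lr * grad  # move the hidden vector toward higher similarity
```

The updated hidden vector is then fed back through the generation model to produce the updated candidate countermeasure disturbance.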
In an embodiment of the present application, the processing module 620 is further configured to:
the corresponding countermeasure disturbance of the face countermeasure sample is materialized into a preset object, wherein the preset object is used for covering a target area of the protected face, and the proportion of the area of the target area to the area of the protected face is larger than a preset proportion; the preset object comprises one of the following:
mask, face towel, face mask.
In an embodiment of the present application, the processing module 620 is further configured to:
acquiring a face privacy protection image, wherein the face privacy protection image is an image acquired after the preset object covers the face of the target user or an image of the preset object;
Extracting image features of the face privacy protection image;
determining the identification identity of the target user based on the similarity of the image features of the face privacy protection image and each feature in a preset face feature library;
the identification identity of the target user is a label of a target face in the preset face image set, the similarity between the image characteristics of the target face and the image characteristics of the face privacy protection image is larger than the third preset value, and the label of the target face is different from the true identity of the target user.
According to the image processing apparatus of the embodiment of the present application, after the target countermeasure disturbance is materialized into a preset object (e.g., a mask), the target user (e.g., the source user of the protected face) may cover the preset object on the target area of the face (e.g., the lower half of the face). When the target user wearing the preset object is captured by the face recognition system, the captured image is a face privacy protection image in which the preset object covers the target user's face. Because the face privacy protection image includes the preset object, the face recognition system has in effect collected a countermeasure image. The face recognition system then extracts the image features of this countermeasure image, calculates the similarity between those features and each feature in a preset face feature library, and selects the identity label with the largest similarity as the recognition result; this is the identity recognition result for the target user wearing the preset object.
The similarity between the features of the target face in the face feature library and the countermeasure image (the face privacy protection image) is larger than the third preset value; that is, the face recognition system judges the features of the countermeasure image to be most similar to the features of the target face in the library, and the identity of the target face is not the true identity of the target user. In other words, facing the target user wearing the preset object, the face recognition system erroneously judges the user's identity to be that of the target face, so that the face privacy of the target user is protected.
The specific implementation method in each embodiment of the image processing apparatus is referred to each embodiment of the face image processing method, and is not described herein in detail.
According to the image processing device of the embodiment of the present application, a target face image meeting a first preset condition is first determined from a preset face image set according to an initial face image and a candidate countermeasure image, and a face countermeasure sample is then generated according to the target face image and the initial face image. That is, the face countermeasure sample generated in the embodiment of the present application is not, as in the prior art, optimized directly toward increasing similarity with a single attacker or toward decreasing similarity with a single protected person; instead, with the help of the preset face image set, it is optimized over multiple iterations toward the direction most similar to a plurality of target faces drawn from different face images in that set. The resulting face countermeasure sample is therefore not merely similar or dissimilar to one single image; it is similar to multiple target images at once. In other words, the face countermeasure sample acquires, from multiple target images, more countermeasure features with which to mount an attack, rather than a single countermeasure feature optimized against only one image. Because the sample can resemble multiple target images and carries more countermeasure features, it can present different effective countermeasure features to different face recognition systems while producing the same attack result: the face countermeasure sample has strong attack robustness and strong migration aggressiveness, and produces a stable attack effect against multiple different face recognition models.
Therefore, the face countermeasure sample has a high attack success rate and high attack stability in the physical world, can protect face privacy well, and the countermeasure attack testing of more face recognition models can be realized through migration.
Having described the methods and apparatus of the embodiments of the present application, a computer-readable storage medium of the embodiments of the present application is now described. The computer-readable storage medium is an optical disc on which a computer program (i.e., a program product or instructions) is stored; when executed by a computer, the program implements the steps described in the method embodiments above, for example: determining an initial face image; acquiring candidate countermeasure images; determining a target face image conforming to a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image; and generating a face countermeasure sample based on the target face image and the initial face image. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
The image processing apparatus in the embodiment of the present application is described above from the viewpoint of a modularized functional entity, and the server and the terminal device for executing the image processing method in the embodiment of the present application are described below from the viewpoint of hardware processing, respectively.
It should be noted that, in the embodiment of the image processing apparatus of the present application, the physical device corresponding to the input/output module 610 shown in fig. 9 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an input/output (I/O) interface, etc., and the physical device corresponding to the processing module 620 may be a processor. The image processing apparatus shown in fig. 9 may have the structure shown in fig. 10; in that case, the processor and the transceiver in fig. 10 implement functions the same as or similar to those of the processing module 620 and the input/output module 610 provided in the foregoing apparatus embodiment, and the memory in fig. 10 stores a computer program to be called by the processor when executing the above image processing method.
The embodiment of the present application further provides a terminal device. As shown in fig. 11, for convenience of explanation, only the portion relevant to the embodiment of the present application is shown; for specific technical details not disclosed, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a point-of-sale (Point of Sales, POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
Fig. 11 is a block diagram showing a part of the structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 11, the mobile phone includes: radio Frequency (RF) circuitry 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuitry 1060, wireless fidelity (wireless fidelity, wiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 11 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or may be arranged in a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 11:
The RF circuit 1010 may be used for receiving and transmitting signals during a message or a call; in particular, after receiving downlink information from a base station, the RF circuit passes it to the processor 1080 for processing, and sends uplink data to the base station. Generally, the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 1010 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communication (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The memory 1020 may be used to store software programs and modules; the processor 1080 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile phone. In addition, the memory 1020 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1030 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. In particular, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1080; it can also receive commands from the processor 1080 and execute them. The touch panel 1031 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1031, the input unit 1030 may include other input devices 1032, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 1040 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 1040 may include a display panel 1041; optionally, the display panel 1041 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, it transmits the operation to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in fig. 11 the touch panel 1031 and the display panel 1041 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions.
The handset may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
The audio circuit 1060, the speaker 1061, and the microphone 1062 may provide an audio interface between the user and the mobile phone. On one hand, the audio circuit 1060 may transmit the electrical signal converted from received audio data to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data; the audio data is then processed by the processor 1080 and, for example, sent to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 1070, the mobile phone can help the user send and receive emails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 11 shows the WiFi module 1070, it is understood that it is not an essential component of the mobile phone and may be omitted as required without changing the essence of the invention.
Processor 1080 is the control center of the handset, connects the various parts of the entire handset using various interfaces and lines, and performs various functions and processes of the handset by running or executing software programs and/or modules stored in memory 1020, and invoking data stored in memory 1020, thereby performing overall monitoring of the handset. Optionally, processor 1080 may include one or more processing units; alternatively, processor 1080 may integrate an application processor primarily handling operating systems, user interfaces, applications, etc., with a modem processor primarily handling wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1080.
The mobile phone further includes a power source 1090 (e.g., a battery) for powering the various components. Optionally, the power source may be logically connected to the processor 1080 via a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present application, the processor 1080 included in the mobile phone also has control over executing the above steps executed by the image processing apparatus, such as:
determining an initial face image;
acquiring candidate countermeasure images;
determining a target face image conforming to a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image;
a face challenge sample is generated based on the target face image and the initial face image.
The embodiment of the present application further provides a server. Referring to fig. 12, fig. 12 is a schematic diagram of a server structure provided in the embodiment of the present application. The server 1100 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1122 (for example, one or more processors), a memory 1132, and one or more storage media 1130 (for example, one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may be transitory or persistent storage. The program stored in the storage medium 1130 may include one or more modules (not shown in fig. 12), each of which may include a series of instruction operations on the server. Furthermore, the central processor 1122 may be configured to communicate with the storage medium 1130 and execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1120, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments may be based on the structure of the server 1100 shown in fig. 12. For example, the steps performed by the image processing apparatus shown in fig. 9 in the above embodiments may be based on the server structure shown in fig. 12. Specifically, the CPU 1122 may perform the following operations by calling instructions in the memory 1132:
determining an initial face image through the input-output interface 1158, and acquiring candidate countermeasure images;
the central processor 1122 determines a target face image from the set of preset face images;
determining a target face image conforming to a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image;
generating a face countermeasure sample based on the target face image and the initial face image.
The target countermeasure disturbance corresponding to the face countermeasure sample can be output through the input/output interface 1158, so that the target countermeasure disturbance can be physically realized and superimposed on the actual initial face, thereby providing privacy protection for the protected face.
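The server-side flow enumerated above (determine the initial face image, form a candidate countermeasure image, pick the target face from the preset set, and output the countermeasure sample) can be sketched in a few lines. This is purely an illustrative sketch, not part of the patent: the function names are invented, features are toy vectors, cosine similarity stands in for the unspecified similarity measure, and the untargeted selection rule of claim 6 is assumed.

```python
import numpy as np

def run_pipeline(init_img, disturbance, extract, gallery_feats):
    # Step 1: candidate countermeasure image = initial image overlaid with the disturbance.
    candidate = np.clip(init_img + disturbance, 0.0, 1.0)

    # Step 2: extract features of the candidate and of the initial image
    # (extract is a hypothetical stand-in for a face feature extractor).
    f_cand, f_init = extract(candidate), extract(init_img)

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Step 3 (untargeted case): the target face is the gallery face whose
    # similarity to the candidate most exceeds its similarity to the initial face.
    diffs = [cos(g, f_cand) - cos(g, f_init) for g in gallery_feats]
    target_idx = int(np.argmax(diffs))

    # Step 4: the face countermeasure sample pairs the candidate with that target.
    return candidate, target_idx
```

In a real system the features would come from a face recognition network rather than from the raw pixels used here for brevity.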
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the modules is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or modules, and may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The foregoing describes in detail the technical solutions provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the embodiments of the present application, and the above description of the embodiments is only intended to help understand the methods and core ideas of the embodiments. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the ideas of the embodiments of the present application; in view of the above, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (12)

1. A face image processing method, the method comprising:
determining an initial face image;
acquiring candidate countermeasure images, wherein the candidate countermeasure images are updated based on historical candidate countermeasure images;
determining a target face image conforming to a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image;
generating a face countermeasure sample based on the target face image and the initial face image.
2. The face image processing method of claim 1, wherein the first preset condition includes:
when the face countermeasure sample is used for an untargeted attack, the difference between a first similarity and a second similarity is larger than a first preset value;
or, when the face countermeasure sample is used for a targeted attack, the sum of the first similarity and the second similarity is larger than a second preset value;
wherein the first similarity is the similarity between the target face image and the candidate countermeasure face image, and the second similarity is the similarity between the target face image and the initial face image.
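The first preset condition of claim 2 can be illustrated with a small sketch. This is not part of the claims: the function names and thresholds are invented for illustration, and cosine similarity is assumed as the similarity measure, which the claim leaves unspecified.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def meets_first_condition(target_feat, adv_feat, init_feat,
                          targeted, thresh_diff=0.2, thresh_sum=1.2):
    # First similarity: target face vs. candidate countermeasure face.
    s1 = cosine_similarity(target_feat, adv_feat)
    # Second similarity: target face vs. initial face.
    s2 = cosine_similarity(target_feat, init_feat)
    if targeted:
        # Targeted attack: the sum of the two similarities must exceed a threshold.
        return s1 + s2 > thresh_sum
    # Untargeted attack: the difference must exceed a threshold.
    return s1 - s2 > thresh_diff
```

Intuitively, an untargeted attack wants a target face that looks like the perturbed face but not like the original, while a targeted attack wants a target face close to both.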
3. The face image processing method of claim 1, wherein the acquiring candidate countermeasure images, the determining a target face image meeting a first preset condition from a preset face image set according to the initial face image and the candidate countermeasure image, and the generating a face countermeasure sample based on the target face image and the initial face image comprise:
acquiring candidate countermeasure disturbance, wherein the candidate countermeasure disturbance is updated based on historical candidate countermeasure disturbance;
obtaining the candidate countermeasure face image based on the candidate countermeasure disturbance and the initial face image;
determining target face images meeting the first preset conditions from a preset face image set;
and if the first similarity is smaller than a third preset value, updating the candidate countermeasure disturbance until the first similarity is not smaller than the third preset value, and taking the candidate countermeasure face image when the first similarity is not smaller than the third preset value as a face countermeasure sample.
4. The face image processing method according to claim 1 or 2, wherein the acquiring candidate countermeasure images comprises:
acquiring candidate countermeasure disturbance, wherein the candidate countermeasure disturbance is updated based on historical candidate countermeasure disturbance;
weighting the candidate countermeasure disturbance and the initial face image to obtain the candidate countermeasure image; or
replacing a preset area of the initial face image with the candidate countermeasure disturbance to obtain the candidate countermeasure image;
wherein, when the face countermeasure sample is used for a targeted attack, the initial face image is an attacked face image;
and when the face countermeasure sample is used for an untargeted attack, the initial face image is a protected face image.
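The two alternative ways of forming the candidate countermeasure image in claim 4 — global weighting versus replacing a preset region — can be sketched as follows. This is an illustrative sketch only; the function names, the blending weight, and the rectangular region are assumptions, and images are represented as float arrays in [0, 1].

```python
import numpy as np

def blend_candidate(init_img, disturbance, alpha=0.9):
    # Alternative 1: weighted combination of the initial face image
    # and the countermeasure disturbance (alpha is a hypothetical weight).
    return np.clip(alpha * init_img + (1.0 - alpha) * disturbance, 0.0, 1.0)

def patch_candidate(init_img, disturbance, top, left):
    # Alternative 2: replace a preset rectangular area of the initial
    # image with the disturbance (e.g., the region a mask would cover).
    out = init_img.copy()
    h, w = disturbance.shape[:2]
    out[top:top + h, left:left + w] = disturbance
    return out
```

The patch variant corresponds naturally to claim 8, where the disturbance is later materialized as an object covering part of the face.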
5. The face image processing method according to claim 1 or 2, wherein the determining, from a preset face image set according to the initial face image and the candidate countermeasure image, a target face image that meets a first preset condition comprises:
acquiring initial characteristics of the initial face image;
acquiring the countermeasure features of the candidate countermeasure images;
acquiring face features of each face image in the preset face image set;
and selecting the target face image according to the first preset condition and the similarity between each face feature and each of the initial feature and the countermeasure feature.
6. The face image processing method according to claim 5, wherein the selecting the target face image according to the first preset condition and the similarity between each face feature and each of the initial feature and the countermeasure feature comprises:
respectively acquiring a third similarity between each face feature and the countermeasure feature, and a fourth similarity between each face feature and the initial feature;
obtaining, based on the third similarity and the fourth similarity, the difference between the third similarity and the fourth similarity for each face feature and the sum of the third similarity and the fourth similarity for each face feature;
when the face countermeasure sample is used for a targeted attack, selecting the face image corresponding to the face feature with the largest sum of the third similarity and the fourth similarity as the target face image; and
when the face countermeasure sample is used for an untargeted attack, selecting the face image corresponding to the face feature with the largest difference between the third similarity and the fourth similarity as the target face image.
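The selection rule of claim 6 reduces to an argmax over the preset face set. The sketch below is illustrative only: the function name is invented and cosine similarity is assumed for the third and fourth similarities.

```python
import numpy as np

def select_target_face(face_feats, adv_feat, init_feat, targeted):
    # face_feats: (N, D) array of features of the preset face image set.
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Third similarity: each gallery face vs. the countermeasure feature.
    s3 = np.array([cos(f, adv_feat) for f in face_feats])
    # Fourth similarity: each gallery face vs. the initial feature.
    s4 = np.array([cos(f, init_feat) for f in face_feats])

    if targeted:
        # Targeted attack: maximize the sum of the two similarities.
        return int(np.argmax(s3 + s4))
    # Untargeted attack: maximize the difference.
    return int(np.argmax(s3 - s4))
```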
7. The face image processing method according to claim 3, wherein the updating the candidate countermeasure disturbance comprises:
acquiring gradient change information of the first similarity with respect to a hidden vector of a generation model used to generate the candidate countermeasure disturbance;
updating the hidden vector based on the gradient change information;
and updating the candidate countermeasure disturbance based on the updated hidden vector.
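The hidden-vector update of claim 7 is a gradient step on the latent code of a generative model. The following minimal sketch is not the patented method: the function names are invented, a finite-difference gradient stands in for automatic differentiation through a real generative model, and the learning rate is an arbitrary illustrative choice.

```python
import numpy as np

def update_latent(z, similarity_of, lr=0.1, eps=1e-4):
    # One gradient-ascent step on the hidden (latent) vector z.
    # similarity_of(z) is a hypothetical callable returning the first
    # similarity for the countermeasure face generated from z.
    grad = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        # Central finite difference approximates d(similarity)/dz_i.
        grad[i] = (similarity_of(z + dz) - similarity_of(z - dz)) / (2 * eps)
    # Move z in the direction that increases the first similarity.
    return z + lr * grad
```

In practice one would backpropagate through the generator and the face recognition model instead of using finite differences, iterating until the first similarity reaches the third preset value of claim 3.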
8. The face image processing method according to claim 3, further comprising, after the face countermeasure sample is determined:
materializing the countermeasure disturbance corresponding to the face countermeasure sample into a preset object, wherein the preset object is used for covering a target area of a face of a target user, and the ratio of the area of the target area to the area of the face of the target user is larger than a preset ratio; the preset object comprises one of the following:
a mask, a face towel, or a face mask.
9. The face image processing method of claim 8, further comprising, after the countermeasure disturbance is materialized as the preset object:
acquiring a face privacy protection image, wherein the face privacy protection image is an image acquired after the preset object covers the face of the target user or an image of the preset object;
extracting image features of the face privacy protection image;
determining the identification identity of the target user based on the similarity of the image features of the face privacy protection image and each feature in a preset face feature library;
the identification identity of the target user is a label of a target face in the preset face image set, the similarity between the image characteristics of the target face and the image characteristics of the face privacy protection image is larger than the third preset value, and the label of the target face is different from the true identity of the target user.
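The recognition step of claim 9 is a nearest-neighbor lookup in a preset feature library: the protected face is matched to a gallery entry whose label differs from the user's true identity. The sketch below is illustrative only; the function name, labels, and threshold are assumptions, and cosine similarity is assumed as the matching measure.

```python
import numpy as np

def recognize(probe_feat, gallery_feats, labels, thresh=0.5):
    # Compare the feature of the face privacy protection image against a
    # preset face feature library; return the best-matching label if its
    # similarity exceeds the threshold, else no identification.
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = np.array([cos(probe_feat, g) for g in gallery_feats])
    best = int(np.argmax(sims))
    return labels[best] if sims[best] > thresh else None
```

Under claim 9, a well-constructed countermeasure sample makes this lookup return the target face's label rather than the protected user's real identity.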
10. An image processing apparatus comprising:
the input/output module is used for determining an initial face image;
the processing module is used for acquiring candidate countermeasure images, wherein the candidate countermeasure images are updated based on historical candidate countermeasure images;
the processing module is further configured to determine, from a preset face image set, a target face image that meets a first preset condition according to the initial face image and the candidate countermeasure image; and
generate a face countermeasure sample based on the target face image and the initial face image.
11. A processing apparatus, the processing apparatus comprising:
at least one processor, a memory, and an input/output unit;
wherein the memory is for storing a computer program and the processor is for invoking the computer program stored in the memory to perform the method of any of claims 1-9.
12. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1 to 9.
CN202211191524.4A 2022-09-28 2022-09-28 Face image processing method, related device and storage medium Pending CN117831089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211191524.4A CN117831089A (en) 2022-09-28 2022-09-28 Face image processing method, related device and storage medium


Publications (1)

Publication Number Publication Date
CN117831089A true CN117831089A (en) 2024-04-05

Family

ID=90521477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211191524.4A Pending CN117831089A (en) 2022-09-28 2022-09-28 Face image processing method, related device and storage medium

Country Status (1)

Country Link
CN (1) CN117831089A (en)

Similar Documents

Publication Publication Date Title
CN111461089B (en) Face detection method, and training method and device of face detection model
US20230360357A1 (en) Target detection method and apparatus, model training method and apparatus, device, and storage medium
CN111985265B (en) Image processing method and device
CN114387647B (en) Anti-disturbance generation method, device and storage medium
CN111709398A (en) Image recognition method, and training method and device of image recognition model
CN116310745B (en) Image processing method, data processing method, related device and storage medium
CN115859220A (en) Data processing method, related device and storage medium
CN116486463B (en) Image processing method, related device and storage medium
CN114281936A (en) Classification method and device, computer equipment and storage medium
CN114282035A (en) Training and searching method, device, equipment and medium of image searching model
CN115171196B (en) Face image processing method, related device and storage medium
CN116958715A (en) Method and device for detecting hand key points and storage medium
CN115239941A (en) Confrontation image generation method, related device and storage medium
CN113569822B (en) Image segmentation method and device, computer equipment and storage medium
CN115392405A (en) Model training method, related device and storage medium
CN117831089A (en) Face image processing method, related device and storage medium
CN111597823B (en) Method, device, equipment and storage medium for extracting center word
CN116308978B (en) Video processing method, related device and storage medium
CN114943639B (en) Image acquisition method, related device and storage medium
CN114499903B (en) Data transmission method and related device in face recognition scene
CN117079356A (en) Object fake identification model construction method, false object detection method and false object detection device
CN117315395A (en) Face countermeasure sample generation method, related device, equipment and storage medium
CN117132851A (en) Anti-patch processing method, related device and storage medium
CN116704567A (en) Face picture processing method, related equipment and storage medium
CN117218506A (en) Model training method for image recognition, image recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination