CN115410257A - Image protection method and related equipment

Image protection method and related equipment

Info

Publication number
CN115410257A
Authority
CN
China
Prior art keywords
image
protection
source image
distance
source
Prior art date
Legal status
Pending
Application number
CN202211057401.1A
Other languages
Chinese (zh)
Inventor
温东超
梁玲燕
崔星辰
史宏志
赵雅倩
Current Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN202211057401.1A priority Critical patent/CN115410257A/en
Publication of CN115410257A publication Critical patent/CN115410257A/en
Priority to PCT/CN2022/139692 priority patent/WO2024045421A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/247Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The application discloses an image protection method, which comprises: obtaining a source image and a random image, wherein the source image contains a target protection object and the random image does not; initializing an initial protection image from the source image; calculating a first feature distance between the initial protection image and the random image, and a second feature distance and an apparent distance between the initial protection image and the source image; computing a loss function from the first feature distance, the second feature distance, and the apparent distance; and, based on the loss function, iteratively updating the initial protection image with a back propagation algorithm to obtain a protection image for the target protection object. By applying the technical scheme provided by the application, the target object in an image can be protected, preventing the image from being forged or stolen for illegal attacks and ensuring information security. The application also discloses an image protection apparatus, a device, and a computer-readable storage medium, which have the same beneficial effects.

Description

Image protection method and related equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image protection method and a related device.
Background
With the development of deep learning technology, artificial intelligence applications based on it (such as face recognition, voice recognition, natural language processing, and automatic driving) have penetrated every aspect of social life and are profoundly changing how human society produces and lives. At the same time, artificial intelligence technology carries risks and challenges that threaten human life and property, for example: an autonomous vehicle fails to detect a pedestrian or the vehicle ahead in time, causing a traffic accident; a malicious attacker spoofs a face recognition system with forged or stolen photos or videos; an organization uses forged videos for false publicity. Preventing the various risks posed by artificial intelligence applications has become an important concern across society.
Among these risks, unauthorized and illegal access to and use of facial information poses a serious threat to personal and property safety. In view of this, the owner of a face image (usually the person pictured) wishes to protect it from unauthorized use: the owner does not want the image to be obtained by a commercial company for marketing or for training a face recognition model, and certainly does not want the image to be stolen and used to attack their bank account.
Therefore, how to protect a target object in an image, prevent the image from being forged or stolen for illegal attacks, and ensure information security is a problem to be urgently solved by those skilled in the art.
Disclosure of Invention
An object of the present application is to provide an image protection method that can protect a target object in an image, prevent the image from being forged or stolen for illegal attacks, and ensure information security; another object of the present application is to provide an image protection apparatus, a device, and a computer-readable storage medium, all having the above-mentioned advantageous effects.
In a first aspect, the present application provides an image protection method, including:
acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object;
initializing and generating an initial protection image according to the source image;
calculating a first feature distance between the initial protection image and the random image, and a second feature distance and an apparent distance between the initial protection image and the source image;
calculating to obtain a loss function according to the first characteristic distance, the second characteristic distance and the apparent distance;
iteratively updating the initial protection image using a back propagation algorithm based on the loss function to obtain a protection image of the target protection object.
Optionally, calculating a first feature distance between the initial protection image and the random image comprises:
respectively processing the initial protection image and the random image by using a face recognition model to obtain a first feature vector and a second feature vector;
and calculating the cosine distance between the first characteristic vector and the second characteristic vector to obtain the first characteristic distance.
Optionally, calculating a second feature distance between the initial protection image and the source image comprises:
processing the source image by using the face recognition model to obtain a third feature vector;
and calculating the cosine distance between the first feature vector and the third feature vector to obtain the second feature distance.
Optionally, the face recognition model is a neural network model based on ResNet50.
Optionally, calculating an apparent distance between the initial protection image and the source image comprises:
performing an F-norm calculation on the initial protection image and the source image to obtain the apparent distance.
Optionally, calculating an apparent distance between the initial protection image and the source image comprises:
and inputting the initial protection image and the source image into an image classification network for processing to obtain the apparent distance.
Optionally, the image classification network is a neural network model based on VGG-16.
Optionally, before initializing and generating the initial protection image according to the source image, the method further comprises:
respectively standardizing the source image and the random image according to the input specification of the face recognition model to obtain a standardized source image and a standardized random image;
wherein the image formats of the standardized source image and the standardized random image are the image formats specified by the face recognition model.
Optionally, normalizing the source image to obtain the normalized source image includes:
performing type recognition on the source image to determine the image type;
and standardizing the source image by using a processing strategy corresponding to the image type to obtain the standardized source image.
Optionally, normalizing the source image to obtain the normalized source image includes:
constructing an affine transformation matrix by using the source image and the image sample;
converting the source image into the standardized source image using the affine transformation matrix.
Optionally, the constructing an affine transformation matrix by using the source image and the image sample includes:
acquiring first coordinate information of a preset feature point of the target protection object in the source image;
acquiring second coordinate information of preset feature points of the sample object in each image sample;
and calculating to obtain the affine transformation matrix by using the first coordinate information and each second coordinate information.
Optionally, calculating the affine transformation matrix by using the first coordinate information and each piece of second coordinate information comprises:
and calculating the first coordinate information and each second coordinate information by using a least square estimation algorithm to obtain the affine transformation matrix.
In a second aspect, the present application further discloses an image protection apparatus, the apparatus comprising:
an acquisition module for acquiring a source image and a random image, wherein the source image comprises a target protection object and the random image does not comprise the target protection object;
a processing module for initializing and generating an initial protection image according to the source image;
a first calculation module for calculating a first feature distance between the initial guard image and the random image, a second feature distance and an apparent distance between the initial guard image and the source image;
a second calculation module, configured to calculate a loss function according to the first characteristic distance, the second characteristic distance, and the apparent distance;
and the updating module is used for carrying out iterative updating on the initial protection image by utilizing a back propagation algorithm based on the loss function to obtain a protection image related to the target protection object.
In a third aspect, the present application also discloses an image protection apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of any of the image protection methods as described above when executing the computer program.
In a fourth aspect, the present application further discloses a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of any of the image protection methods described above.
The image protection method provided by the application comprises the following steps: acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object; initializing and generating an initial protection image according to the source image; calculating a first feature distance between the initial guard image and the random image, a second feature distance and an apparent distance between the initial guard image and the source image; calculating to obtain a loss function according to the first characteristic distance, the second characteristic distance and the apparent distance; and based on the loss function, carrying out iterative update on the initial protection image by using a back propagation algorithm to obtain a protection image about the target protection object.
By applying the technical scheme provided by the application, the feature distance between the protection image and the source image and the feature distance between the protection image and the random image are considered simultaneously. Based on the face recognition model and the back propagation algorithm, the feature distance between the protection image and the random image is compressed while the feature distance between the protection image and the source image is enlarged, so that the finally generated protection image is far from the source image and close to the random image in feature space. In other words, the protection image containing the target object is closer to the random image that does not contain the target object than to the original image that does. A target object recognition system can then hardly identify the protection image, which realizes privacy protection of the target object, effectively prevents the image from being forged or stolen for illegal attacks, and ensures information security.
The image protection device, the apparatus and the computer-readable storage medium provided by the present application also have the above technical effects, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the prior art and the embodiments of the present application, the drawings used in the description of the prior art and the embodiments of the present application will be briefly described below. Of course, the following description of the drawings related to the embodiments of the present application is only a part of the embodiments of the present application, and it will be obvious to those skilled in the art that other drawings can be obtained from the provided drawings without any creative effort, and the obtained other drawings also belong to the protection scope of the present application.
Fig. 1 is a schematic flowchart of an image protection method provided in the present application;
fig. 2 is a schematic structural diagram of an image protection apparatus provided in the present application;
fig. 3 is a schematic structural diagram of an image protection device provided in the present application.
Detailed Description
The core of the application is to provide an image protection method that can protect a target object in an image, prevent the image from being forged or stolen for illegal attacks, and ensure information security; another core of the present application is to provide an image protection apparatus, a device, and a computer-readable storage medium, all having the above-mentioned advantages.
In order to more clearly and completely describe the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image protection method.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image protection method provided in the present application, which may include the following steps S101 to S105.
S101: acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object;
the method comprises the steps of obtaining a source image and a random image, wherein the source image is an image containing a target object, the random image is an image not containing the target object, and the target object is a target needing information protection in the image and is generally face information.
In practice, when an image of a certain target object needs to be protected, a source image containing the target object and a random image not containing it are first obtained. The source of these images does not affect the technical scheme: they may be captured by an image acquisition device, input directly by a user at the device front end, or retrieved from an image database, which is not limited in the present application. In addition, the image types of the source image and the random image are not unique; for example, they may be three-channel color images or single-channel gray images, which is also not limited.
The random image is any image that does not contain the target object. It may therefore contain a different target object (such as another face), contain no target object at all (such as an image without any face information), or be an automatically generated random image.
S102: initializing according to a source image to generate an initial protection image;
the step is to initialize and generate an initial protection image based on a source image, where the initial protection image is an image which is obtained for the first time and contains a target object and protection information about the target object, and update the initial protection image to obtain a final protection image (S105), where the protection image is an image in which protection of the target object in the image is implemented. In terms of visual effect, the protection image finally generated by the image protection method provided by the application is similar to the source image, so that the initial protection image is generated by using the source image for initialization, the initial protection image can be initialized to be close to the target value, and the convergence of the algorithm is accelerated, while the generation of the initial protection image by using other initialization methods (such as random initialization) may cause the divergence of the algorithm.
S103: calculating a first feature distance between the initial protection image and the random image, and a second feature distance and an apparent distance between the initial protection image and the source image;
This step calculates the feature distances and the apparent distance: the first feature distance is measured between the initial protection image and the random image, while the second feature distance and the apparent distance are both measured between the initial protection image and the source image. After the source image, the random image, and the initial protection image are obtained, the first feature distance, the second feature distance, and the apparent distance can each be calculated from the feature information of the images.
S104: calculating according to the first characteristic distance, the second characteristic distance and the apparent distance to obtain a loss function;
This step computes the loss function. After the first feature distance between the initial protection image and the random image, and the second feature distance and the apparent distance between the initial protection image and the source image are obtained, these three quantities can be substituted into a preset loss function formula to obtain the overall loss function.
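For illustration only, one plausible way to combine the three distances into a single loss is sketched below. The patent does not disclose the exact formula, so the weights `alpha`, `beta`, `gamma` and the sign convention are assumptions:

```python
def protection_loss(d_first: float, d_second: float, d_apparent: float,
                    alpha: float = 1.0, beta: float = 1.0, gamma: float = 0.01) -> float:
    """Combine the three distances into one scalar loss.

    Minimizing this loss drives the first feature distance (to the random
    image) down, the second feature distance (to the source image) up, and
    keeps the apparent distance (visual difference from the source) small.
    """
    return alpha * d_first - beta * d_second + gamma * d_apparent
```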
S105: and based on the loss function, carrying out iterative update on the initial protection image by using a back propagation algorithm to obtain a protection image about the target protection object.
This step updates the initial protection image to obtain the final protection image of the target object. After the overall loss function is computed, the initial protection image can be iteratively updated based on the loss function in combination with a back propagation algorithm, yielding the protection image: image data that contains the target object together with strong protection information about it.
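The iterative update can be sketched as a descent loop. This toy example is not the patent's implementation: a fixed random linear map stands in for the face recognition model, and a numeric gradient stands in for back propagation; it only illustrates pulling the protected image toward the random image's features while pushing it away from the source's and staying visually close to the source:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # 1 - cosine similarity between two feature vectors
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 12))               # toy stand-in for the face recognition model
extract = lambda img: W @ img.ravel()      # "feature vector" of a flattened 2x2x3 image

source = rng.uniform(0.0, 1.0, (2, 2, 3))
random_img = rng.uniform(0.0, 1.0, (2, 2, 3))
feat_random, feat_source = extract(random_img), extract(source)

def loss(p: np.ndarray) -> float:
    return (cosine_distance(extract(p), feat_random)    # pull toward random image
            - cosine_distance(extract(p), feat_source)  # push away from source
            + 0.05 * np.linalg.norm(p - source))        # stay visually close

protected = source.copy()                  # initialize from the source image (S102)
eps, lr = 1e-5, 0.05
for _ in range(300):
    # central-difference numeric gradient stands in for backpropagation
    flat = protected.ravel()               # view: writes go through to `protected`
    grad = np.zeros(flat.size)
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + eps; up = loss(protected)
        flat[i] = orig - eps; down = loss(protected)
        flat[i] = orig
        grad[i] = (up - down) / (2 * eps)
    protected -= lr * grad.reshape(protected.shape)
```

After the loop, `protected` achieves a lower loss than the source image it started from, i.e. its features have moved toward the random image and away from the source while the pixels stay nearby.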
Therefore, the image protection method provided by the embodiment of the application considers the feature distance between the protection image and the source image and the feature distance between the protection image and the random image simultaneously. Based on the face recognition model and the back propagation algorithm, the feature distance between the protection image and the random image is compressed while the feature distance between the protection image and the source image is enlarged, so that the finally generated protection image is far from the source image and close to the random image in feature space. In other words, the protection image containing the target object is closer to the random image that does not contain the target object than to the original image that does. A target object recognition system can then hardly identify the protection image, which realizes privacy protection of the target object, effectively prevents the image from being forged or stolen for illegal attacks, and ensures information security.
In one embodiment of the present application, calculating the first feature distance between the initial protection image and the random image may include the following steps:
respectively processing the initial protection image and the random image by using a face recognition model to obtain a first feature vector and a second feature vector;
and calculating the cosine distance between the first characteristic vector and the second characteristic vector to obtain a first characteristic distance.
This embodiment provides an implementation for calculating the first feature distance between the initial protection image and the random image: the cosine distance between them is taken as the first feature distance. In practice, the initial protection image and the random image are each input into the face recognition model; forward propagation through the neural network yields their feature vectors, namely the first feature vector and the second feature vector, and the cosine distance between these two vectors gives the first feature distance.
The face recognition model is a pre-created neural network learning model, stored in a corresponding storage space and called directly when needed. Note that both feature distances are measured in the feature space of this face recognition model, and the goal is to make the distance between the features of the initial protection image and those of the random image smaller than the distance between the features of the initial protection image and those of the source image. The specific network type of the face recognition model does not affect the technical scheme and may be chosen by a technician according to actual requirements, which is not limited in the present application; in one possible implementation, the face recognition model is a neural network model based on ResNet50, i.e., with ResNet50 as the backbone network.
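The cosine-distance step itself can be sketched as follows (an illustrative helper, not the patent's code; the feature vectors are assumed to come from the face recognition model):

```python
import numpy as np

def cosine_feature_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Cosine distance 1 - cos(theta) between two feature vectors.

    0 for identical directions, 1 for orthogonal vectors, 2 for opposite ones.
    """
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return 1.0 - float(a @ b)
```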
In one embodiment of the present application, calculating the second feature distance between the initial protection image and the source image may include the following steps:
processing a source image by using a face recognition model to obtain a third feature vector;
and calculating the cosine distance between the first characteristic vector and the third characteristic vector to obtain a second characteristic distance.
This embodiment provides an implementation for calculating the second feature distance between the initial protection image and the source image, namely taking the cosine distance between them as the second feature distance; for the specific procedure, see the previous embodiment, which is not repeated here.
In one embodiment of the present application, calculating the apparent distance between the initial protection image and the source image may comprise the following step:
performing an F-norm calculation on the initial protection image and the source image to obtain the apparent distance.
This embodiment calculates the apparent distance between the initial protection image and the source image by evaluating the F-norm (Frobenius norm) of their difference.
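A minimal sketch of the F-norm apparent distance (an illustrative helper; the function name is an assumption):

```python
import numpy as np

def apparent_distance(protected: np.ndarray, source: np.ndarray) -> float:
    """F-norm (Frobenius norm) of the pixel-wise difference of the two images."""
    diff = protected.astype(np.float64) - source.astype(np.float64)
    return float(np.linalg.norm(diff))
```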
In one embodiment of the present application, calculating the apparent distance between the initial protection image and the source image may comprise the following step:
and inputting the initial protection image and the source image into an image classification network for processing to obtain an apparent distance.
This embodiment provides another way to calculate the apparent distance between the initial protection image and the source image, based on a pre-created image classification network: after the initial protection image and the source image are obtained, both are input into the image classification network, and the distance between the network's outputs for the two images is taken as the apparent distance.
Like the face recognition model, the image classification network is a pre-created neural network, stored in a corresponding storage space and called directly when needed. Its network type is not unique and can be set by a technician according to actual requirements; in one possible implementation, the image classification network is a neural network model based on VGG-16.
In an embodiment of the present application, before the initial protection image is initialized and generated from the source image, the method may further include the following steps:
respectively standardizing a source image and a random image according to the input specification of the face recognition model to obtain a standardized source image and a standardized random image;
wherein the image format of the standardized source image and the standardized random image is an image format specified by the face recognition model.
In the image protection method provided by this embodiment, the source image and the random image are normalized before being input into the face recognition model. The normalization converts their image formats into the format specified by the face recognition model, which facilitates subsequent model-based processing and ensures the accuracy of the results. The normalization can therefore be performed with reference to the input specification of the face recognition model, which states the model's format requirements for input images.
The specific implementation of the normalization is not unique and may be set by a technician according to actual requirements; for example, it may involve image transformations such as rotation, scaling, and affine transformation, which is not limited in this application.
In an embodiment of the present application, normalizing the source image to obtain a normalized source image may include the following steps:
performing type recognition on a source image to determine the type of the image;
and carrying out standardization processing on the source image by using a processing strategy corresponding to the image type to obtain a standardized source image.
As mentioned above, the image types of the source image and the random image are not unique, and may be, for example, a three-channel color image or a single-channel gray image. On the basis, different standardization processing methods can be adopted for different types of source images so as to adapt to different types of images. Therefore, preprocessing strategies aiming at different image types can be created in advance, after the source image is obtained, image type recognition can be carried out on the source image to determine the image type of the source image, and then the source image is subjected to standardization processing by utilizing the processing strategy corresponding to the image type to obtain the standardized source image.
Of course, the normalization processing process of the random image may also adopt the above implementation manner to obtain the normalized random image, which is not described herein again.
In an embodiment of the present application, normalizing the source image to obtain a normalized source image may include the following steps:
constructing an affine transformation matrix by utilizing a source image and an image sample;
the source image is converted to a normalized source image using an affine transformation matrix.
The embodiment of the application provides an implementation manner of standardization processing, namely, image standardization processing based on an affine transformation matrix. In the implementation process, firstly, an affine transformation matrix is constructed by utilizing a source image and an image sample, wherein the affine transformation matrix is an affine transformation matrix specific to the source image, and different images correspond to different affine transformation matrices; further, the affine transformation matrix is directly utilized to convert the source image into a standardized source image.
The image sample refers to sample data containing a different target object (distinct from the target object in the source image). Taking a face image as an example, the source image contains the target face information to be protected, and the sample image also contains face information, but of a completely different person (not belonging to the same face). It will be appreciated that the number of image samples is not unique; the larger the number of image samples, the higher the accuracy of the constructed affine transformation matrix.
Of course, the above implementation manner may also be adopted in the normalization processing process of the random image to obtain a normalized random image, which is not described herein again.
In an embodiment of the present application, the above-mentioned constructing an affine transformation matrix by using a source image and an image sample may include the following steps:
acquiring first coordinate information of preset feature points of a target protection object in a source image;
acquiring second coordinate information of preset feature points of sample objects in each image sample;
and obtaining an affine transformation matrix by calculation by using the first coordinate information and the second coordinate information.
The embodiment of the application provides an implementation manner for constructing an affine transformation matrix, namely constructing the affine transformation matrix based on the coordinate information of the source image and the coordinate information of the image samples. Firstly, the target protection object in the source image is determined, and the coordinate information of its preset feature points is acquired, i.e. the first coordinate information; further, the sample object in each sample image is determined, and the coordinate information of the preset feature points of each sample object is acquired, i.e. the second coordinate information; finally, the first coordinate information and each piece of second coordinate information are combined to construct the final affine transformation matrix.
The preset feature points are feature points defined in advance on the image object. Taking a face image as an example, the preset feature points may be the left eyeball center point, the right eyeball center point, the nose tip point, the left mouth corner point, and the right mouth corner point. Of course, the more preset feature points are used, the higher the accuracy of the constructed affine transformation matrix. Experiments show that at least three feature points should be used when constructing the affine transformation matrix.
In an embodiment of the application, the obtaining of the affine transformation matrix by using the first coordinate information and the second coordinate information through calculation may include: and calculating the first coordinate information and each second coordinate information by using a least square estimation algorithm to obtain an affine transformation matrix.
The embodiment of the application provides an implementation method for constructing an affine transformation matrix based on coordinate information, namely the implementation method can be implemented based on a least squares estimation algorithm. The implementation principle of the least squares estimation algorithm may refer to the prior art, and is not described herein again.
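As a hedged illustration of this least-squares construction, the sketch below estimates a 2×3 affine matrix from corresponding 2D feature points with NumPy; the function name and point layout are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def estimate_affine_lstsq(src_pts, dst_pts):
    """Least-squares estimate of the 2x3 affine matrix M mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding 2D feature points, N >= 3.
    Each correspondence contributes two linear equations in the six affine
    parameters (a, b, tx, c, d, ty); the stacked system is solved with lstsq.
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # rows producing the target x' coordinates
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # rows producing the target y' coordinates
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved (x'_1, y'_1, x'_2, y'_2, ...)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```

With five labeled feature points per face the system is overdetermined, which is exactly where a least-squares estimate is preferable to an exact three-point solution.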
The embodiment of the application provides another image protection method.
The embodiment of the application takes a face image as an example, introduces an image protection method, and the implementation flow of the method can comprise the following steps:
step one, inputting a face source image, a target image (namely the random image) and a face recognition model:
(1) Face source image:
the face source image is generally a three-channel color face image (R channel, G channel, and B channel) or a single-channel gray-scale face image or a face image in other format, and the three-channel color face image is described as an example below. Of course, the single-channel gray-scale face image or the face image in other formats can be adapted by simple expansion.
The face source image can be a face image shot by any shooting device, and the shooting device can be a camera, a mobile phone or a monitoring camera. Typically, the face images taken by these devices are high-resolution, three-channel color face images, such as: the face image is contained on a 1920 x 1080 pixel color image.
Further, a normalized face source image (i.e. a face image conforming to the input-image specification of the face recognition model, for example, an upright face image of 112 × 112 pixels) is obtained from the face source image. The common implementation is to convert the face region on the face source image into the normalized face source image using image transformation methods such as rotation, scaling, or affine transformation.
In a preferred implementation:
s1: two-dimensional coordinates of five feature points (left eyeball center point, right eyeball center point, nose tip point, left mouth corner point and right mouth corner point respectively) of an upright 'average' human face of 112 pixels by 112 pixels are obtained. Firstly, collecting a large number of face images, and manually marking two-dimensional coordinates of feature points; then, two-dimensional coordinates of five feature points of an orthostatic average human face with the size of 112 x 112 pixels are calculated by a mathematical statistics method and are defined as { x } i ,y i I =1,2,3,4,5. Of these, 112 × 112 pixels are only given as examples.
S2: and acquiring two-dimensional coordinates of five characteristic points of the face to be protected on the face source image. Marking two-dimensional coordinates of five feature points on a human face source image, and defining the coordinates as { x' i ,y′ i And i =1,2,3,4,5, and the labeling method can be manual labeling of coordinates, or can be automatic acquisition of coordinates of five feature points by adopting a high-precision human face feature point positioning algorithm.
S3: using the two sets of coordinates, i.e. { x i ,y i I =1,2,3,4,5 and { x' i ,y′ i I =1,2,3,4,5, { x' i ,y′ i I =1,2,3,4,5 to { x } i ,y i An affine transformation matrix M of i =1,2,3,4,5, which can be implemented using a least squares estimation algorithm. Then, the affine transformation matrix M is used to transform the face region on the face source image into a face image of 112 × 112 pixels, i.e. a normalized face source image.
When calculating the affine transformation matrix M, at least three feature points should be included, for example: the center point of the left eyeball, the center point of the right eyeball and the nose tip point. In general, the more feature points are used, the higher the accuracy of the affine transformation matrix M is, and the above five feature points are a preferable combination.
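To sketch how the matrix M is then applied, the following nearest-neighbour warp is a minimal NumPy stand-in for a library routine such as OpenCV's warpAffine; the function name and inverse-mapping layout are illustrative assumptions.

```python
import numpy as np

def warp_affine_nn(img, M, out_h, out_w):
    """Nearest-neighbour affine warp via inverse mapping.

    img: source image, shape (H, W) or (H, W, C)
    M:   2x3 affine matrix mapping source (x, y) to output (x, y)
    Each output pixel samples the source at the inverse-mapped location.
    """
    # Extend M to 3x3, invert it, and keep the top two rows.
    Minv = np.linalg.inv(np.vstack([M, [0.0, 0.0, 1.0]]))[:2]
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    homog = np.stack([xs.ravel(), ys.ravel(), np.ones(out_h * out_w)])
    src_xy = np.rint(Minv @ homog).astype(int)
    sx = np.clip(src_xy[0], 0, img.shape[1] - 1)
    sy = np.clip(src_xy[1], 0, img.shape[0] - 1)
    return img[sy, sx].reshape((out_h, out_w) + img.shape[2:])
```

In practice a production pipeline would use bilinear interpolation; nearest-neighbour keeps the sketch short while showing the inverse-mapping idea.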
(2) Target image:
the technical scheme aims to generate a protected face image (after the protected face image is illegally obtained by an attacker, the attacker cannot use the protected face image to attack the face recognition system based on deep learning). From the perspective of human vision, the protected face image and the standardized face source image are the same person, that is, a human observer considers that the protected face image and the standardized face source image belong to the face image of the same person; in addition, in the feature space of the face recognition model, the distance between the features of the protected face image and the features of the target image is smaller than the distance between the features of the protected face image and the features of the standardized face source image.
Based on the above theoretical requirements, the target image should be a face image of another person or an image without a face or an automatically generated random image or the like. In this embodiment, the target image is assumed to be a face image of another person, and a normalized face source image generation method may be applied, where the target image is also a face image of 112 × 112 pixels (a face image that meets the specification of the face recognition model input image).
(3) A face recognition model:
the face recognition model adopts ResNet50 as its backbone network; the model is trained on a large-scale face recognition training set using stochastic gradient descent, and the loss function used during training is the ArcFace loss.
The detailed configuration of the face recognition model is as follows: ResNet50 comprises five groups of convolutions (conv1, conv2_x, conv3_x, conv4_x, conv5_x), followed by a BN (batch normalization) layer, a dropout layer, and an FC (fully-connected) layer. The conv1 layer is a convolution layer with a stride of 1; the input of the FC layer is the output feature map of the conv5_x layer passed sequentially through the BN layer and the dropout layer, and the output of the FC layer is a 512-dimensional feature (in this technical solution, the 512-dimensional feature output by the face recognition model is also referred to as the face feature, face feature vector, or feature vector).
Step two, initializing a protected face image:
the protected face image (denoted as f) is a face image generated by an iterative optimization algorithm using the standardized face source image (denoted as s), the target image (denoted as t), and the face recognition model (denoted as Φ(·)). The protected face image is initialized with the standardized face source image: f = s.
In terms of visual effect, the finally generated protected face image is similar to the standardized face source image; this initialization method therefore places the protected face image near the target solution, which accelerates the convergence of the algorithm, whereas initializing the protected face image in other ways (such as random initialization) may cause the algorithm to diverge.
Step three, calculating the characteristic distance between the protected face image (f) and the target image (t):
s1: inputting the protected face image into the face recognition model phi (x), and obtaining a feature vector V of the protected face image (f) through forward propagation of a neural network f In which V is f = Φ (f), the feature vector is a 512-dimensional vector according to the definition of the face recognition model.
S2: inputting the target image (t) into the face recognition model phi (x), and obtaining a feature vector V of the target image (t) through forward propagation of a neural network t In which V is t = Φ (t), the feature vector is a 512-dimensional feature vector according to the definition of the face recognition model.
S3: feature vector V f And V t The distance between them is expressed by cosine distance, and the feature vector V is calculated according to the following formula f And V t Cosine distance between:
Figure BDA0003825692780000131
wherein, represents the dot product operation, | × | non-calculation 2 Representing the L2 norm.
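As a hedged sketch (the function name is an assumption), this cosine computation — a dot product over the L2 norms, so that larger values indicate closer features — can be written as:

```python
import numpy as np

def cosine_distance(v1, v2):
    """Cosine distance as used here: dot(v1, v2) / (||v1||_2 * ||v2||_2).

    For L2-normalized face feature vectors this reduces to their dot product;
    larger values mean the two features are closer in the feature space.
    """
    v1 = np.asarray(v1, dtype=np.float64)
    v2 = np.asarray(v2, dtype=np.float64)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```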
Step four, calculating the characteristic distance between the protected face image (f) and the standardized face source image(s):
s1: inputting the standardized face source image(s) into the face recognition model phi (x), and obtaining a feature vector V of the standardized face source image(s) through forward propagation of a neural network s In which V is s = Φ(s), the feature vector is a 512-dimensional feature vector according to the definition of the face recognition model.
S2: feature vector V f And V s The distance between them is expressed by cosine distance, and the eigenvector V is calculated according to the following formula f And V s Cosine distance between:
Figure BDA0003825692780000132
wherein, represents the dot product operation, | × | non-calculation 2 Representing the L2 norm.
Step five, calculating the apparent distance between the protected face image (f) and the standardized face source image (s):
In order to make the finally generated protected face image (f) as visually similar as possible to the standardized face source image (s), the apparent distance can be measured using the output of an intermediate layer (the relu2_2 layer) of the VGG-16 image classification network as a representation of image appearance, the VGG-16 network being an image classification network pre-trained on the ImageNet dataset:

D_vis = ||F(f) − F(s)||_2

wherein F(·) denotes the output feature vector of the intermediate layer (relu2_2) extracted by the VGG-16 image classification network.
In addition, the apparent distance can also be calculated using the F-norm of the difference between the protected face image (f) and the standardized face source image (s): D_vis = ||f − s||_F, wherein ||·||_F denotes the F-norm (Frobenius norm).
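A minimal sketch of this F-norm variant (the function name is illustrative):

```python
import numpy as np

def apparent_distance_fro(f_img, s_img):
    """Apparent distance D_vis = ||f - s||_F over the raw image tensors.

    np.linalg.norm with default ord flattens the array and returns the
    2-norm of the element-wise difference, i.e. the Frobenius norm.
    """
    diff = np.asarray(f_img, dtype=np.float64) - np.asarray(s_img, dtype=np.float64)
    return float(np.linalg.norm(diff))
```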
Step six, calculating an integral loss function:
the loss function is calculated as:
L(f, s, t) = max(T_f→t − D_f→t, 0) + λ_1 × max(D_f→s − T_f→s, 0) + λ_2 × D_vis
l (f, s, t) represents a loss value;
D_f→t represents the distance between the feature vector of the protected face image (f) and the feature vector of the target image (t); D_f→s represents the distance between the feature vector of the protected face image (f) and the feature vector of the standardized face source image (s); D_vis represents the apparent distance between the protected face image (f) and the standardized face source image (s);
T_f→t is a threshold (set T_f→t ≥ 0.5) used to constrain D_f→t: when D_f→t ≥ T_f→t, then T_f→t − D_f→t ≤ 0 and max(T_f→t − D_f→t, 0) = 0, meaning that when D_f→t ≥ T_f→t, the images (f) and (t) are sufficiently close in the feature space and this loss term is zero;
T_f→s is a threshold (set T_f→s ≤ 0.5) used to constrain D_f→s: when D_f→s ≤ T_f→s, then D_f→s − T_f→s ≤ 0 and max(D_f→s − T_f→s, 0) = 0, meaning that when D_f→s ≤ T_f→s, the images (f) and (s) are sufficiently far apart in the feature space and this loss term is zero;
max(·, ·) returns the larger of its two inputs;
λ_1 is a coefficient used to balance the importance of the first two terms; take λ_1 = 1.0;
λ_2 is a coefficient used to weight the apparent distance D_vis; take λ_2 = 1/L, where L is the length of the feature vector output by the intermediate layer (relu2_2) of the VGG-16 image classification network. If the apparent distance is calculated using the F-norm between the image matrices, take λ_2 = 1/(3 × w × h), where w is the image width, h is the image height, and "3" is the number of image channels.
Wherein, the first term of the loss function calculation formula is used for pushing the protected face image (f) to approach the target image (t) on the feature space; the second term of the loss function calculation formula is used for pushing the protected face image (f) to be far away from the standardized face source image(s) in the feature space; the third term of the loss function calculation formula is used to ensure that the protected face image (f) is visually close to the face source image(s).
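The three-term loss described above can be sketched as follows; this is a hedged illustration, and the argument names and defaults are assumptions based on the thresholds stated in this embodiment.

```python
def protection_loss(d_ft, d_fs, d_vis, t_ft=0.5, t_fs=0.5, lam1=1.0, lam2=1.0):
    """L(f, s, t) = max(T_ft - D_ft, 0) + lam1 * max(D_fs - T_fs, 0) + lam2 * D_vis.

    d_ft:  cosine distance between protected-image and target-image features
    d_fs:  cosine distance between protected-image and source-image features
    d_vis: apparent (visual) distance between protected image and source image
    """
    pull_to_target = max(t_ft - d_ft, 0.0)    # pushes f toward t in feature space
    push_from_source = max(d_fs - t_fs, 0.0)  # pushes f away from s in feature space
    return pull_to_target + lam1 * push_from_source + lam2 * d_vis
```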
And seventhly, updating the protected face image (f) by using a back propagation algorithm:
assume that the initialized protected face image is f_0 and the standardized face source image is s; then f_0 = s.
The rule for updating the protected face image (f) is:
f_j = f_{j-1} − lr × ∂L(f_{j-1}, s, t)/∂f_{j-1}

wherein j = 1, 2, ..., j_max_iters denotes the j-th iterative update; j_max_iters represents the maximum number of iterations (j_max_iters = 1000); lr represents the learning rate (lr = 0.01); and ∂L(f_{j-1}, s, t)/∂f_{j-1} denotes the derivative of L(f_{j-1}, s, t) with respect to f_{j-1}.
When the iterations are complete, f_{j_max_iters} is the finally obtained protected face image. Throughout this process, the parameters of the face recognition model are fixed, and the parameters of the VGG-16 image classification network used to extract visual features are also fixed.
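The update rule in step seven can be sketched as a generic gradient-descent loop; here the gradient callable stands in for back-propagation through the fixed networks, and all names are illustrative assumptions.

```python
import numpy as np

def optimize_protected_image(f0, grad_fn, lr=0.01, max_iters=1000):
    """Iterate f_j = f_{j-1} - lr * dL/df_{j-1} with all model parameters fixed.

    f0:      initial protected image, set to the standardized source image s
    grad_fn: callable returning dL/df evaluated at the current image; in the
             full method this gradient comes from back-propagating the loss
             through the frozen face recognition and VGG-16 networks.
    """
    f = np.asarray(f0, dtype=np.float64).copy()
    for _ in range(max_iters):
        f = f - lr * grad_fn(f)
    return f
```

With a simple quadratic objective the loop converges to the minimizer, illustrating the role of lr and the iteration count.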
Therefore, the image protection method provided by the embodiment of the application simultaneously considers the characteristic distance between the protection image and the source image and the characteristic distance between the protection image and the random image, realizes the compression of the characteristic distance between the protection image and the random image based on the face recognition model and the back propagation algorithm, and enlarges the characteristic distance between the protection image and the source image, so that the finally generated protection image is far away from the source image and is close to the random image, namely, the finally generated protection image containing the target object is closer to the random image not containing the target object and is far away from the original image containing the target object, at the moment, the target object recognition system can hardly recognize the identity of the protection image, thereby realizing the privacy protection of the target object, effectively avoiding the image from being forged or stolen for illegal attack approaches, and ensuring the information security.
The embodiment of the application provides an image protection device.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image protection apparatus provided in the present application, where the image protection apparatus may include:
the acquisition module 1 is used for acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object;
the processing module 2 is used for generating an initial protection image according to the initialization of the source image;
the first calculation module 3 is used for calculating a first characteristic distance between the initial protection image and the random image, a second characteristic distance between the initial protection image and the source image and an apparent distance;
the second calculation module 4 is used for calculating and obtaining a loss function according to the first characteristic distance, the second characteristic distance and the apparent distance;
and the updating module 5 is used for performing iterative updating on the initial protection image by using a back propagation algorithm based on the loss function to obtain a protection image related to the target protection object.
Therefore, the image protection device provided in the embodiment of the application takes the feature distance between the protection image and the source image and the feature distance between the protection image and the random image into consideration, compresses the feature distance between the protection image and the random image based on the face recognition model and the back propagation algorithm, and expands the feature distance between the protection image and the source image, so that the finally generated protection image is far away from the source image and is close to the random image, that is, the finally generated protection image containing the target object is closer to the random image not containing the target object and is far away from the original image containing the target object, at this time, the target object recognition system cannot identify the protection image, thereby realizing privacy protection of the target object, effectively avoiding the image from being forged or stolen for illegal attack approaches, and ensuring information security.
In an embodiment of the present application, the first calculating module 3 may be specifically configured to utilize a face recognition model to respectively process an initial protection image and a random image, so as to obtain a first feature vector and a second feature vector; and calculating the cosine distance between the first characteristic vector and the second characteristic vector to obtain a first characteristic distance.
In an embodiment of the present application, the first computing module 3 may be specifically configured to process a source image by using a face recognition model, so as to obtain a third feature vector; and calculating the cosine distance between the first characteristic vector and the third characteristic vector to obtain a second characteristic distance.
In an embodiment of the present application, the face recognition model may be a neural network model based on ResNet 50.
In an embodiment of the present application, the first calculation module 3 may be specifically configured to perform F-norm calculation on the initial guard image and the source image to obtain the apparent distance.
In an embodiment of the present application, the first calculating module 3 may be specifically configured to input the initial guard image and the source image into an image classification network for processing, so as to obtain the apparent distance.
In one embodiment of the present application, the image classification network may be a VGG-16 based neural network model.
In an embodiment of the application, the image protection apparatus may further include a normalization module, configured to, before the initial protection image is generated based on the source image, perform normalization processing on the source image and the random image respectively according to the input specification of the face recognition model, so as to obtain a normalized source image and a normalized random image; wherein the image format of the standardized source image and the standardized random image is an image format specified by the face recognition model.
In an embodiment of the present application, the aforementioned standardization module may include:
the identification unit is used for performing type recognition on the source image to determine its image type;
and the processing unit is used for carrying out standardization processing on the source image by using a processing strategy corresponding to the image type to obtain the standardized source image.
In an embodiment of the present application, the normalization module may include:
the construction unit is used for constructing an affine transformation matrix by utilizing the source image and the image sample;
and the conversion unit is used for converting the source image into a standardized source image by utilizing an affine transformation matrix.
In an embodiment of the present application, the above-mentioned construction unit may include:
the first obtaining subunit is used for obtaining first coordinate information of a preset feature point of a target protection object in a source image;
the second acquiring subunit is used for acquiring second coordinate information of the preset feature points of the sample object in each image sample;
and the calculation subunit is used for calculating to obtain an affine transformation matrix by using the first coordinate information and each second coordinate information.
In an embodiment of the application, the calculating subunit may be specifically configured to calculate the first coordinate information and each of the second coordinate information by using a least squares estimation algorithm, so as to obtain an affine transformation matrix.
For the introduction of the apparatus provided in the embodiment of the present application, please refer to the method embodiment described above, which is not described herein again.
The embodiment of the application provides an image protection device.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an image protection apparatus provided in the present application, where the image protection apparatus may include:
a memory for storing a computer program;
a processor for implementing the steps of any of the image protection methods as described above when executing the computer program.
As shown in fig. 3, which is a schematic diagram of a composition structure of the image protection apparatus, the image protection apparatus may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all communicate with each other through a communication bus 13.
In the embodiment of the present application, the processor 10 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array or other programmable logic device, etc.
The processor 10 may call a program stored in the memory 11, and in particular, the processor 10 may perform operations in an embodiment of the image protection method.
The memory 11 is used for storing one or more programs, the program may include program codes, the program codes include computer operation instructions, in this embodiment, the memory 11 stores at least the program for implementing the following functions:
acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object;
initializing and generating an initial protection image according to a source image;
calculating a first characteristic distance between the initial protection image and the random image, a second characteristic distance between the initial protection image and the source image and an apparent distance;
calculating according to the first characteristic distance, the second characteristic distance and the apparent distance to obtain a loss function;
and based on the loss function, carrying out iterative update on the initial protection image by using a back propagation algorithm to obtain a protection image about the target protection object.
In one possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created during use.
Further, the memory 11 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other volatile solid state storage device.
The communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
Of course, it should be noted that the structure shown in fig. 3 does not constitute a limitation of the image protection apparatus in the embodiment of the present application, and the image protection apparatus may include more or less components than those shown in fig. 3 or some components in combination in practical applications.
The embodiment of the application provides a computer readable storage medium.
The computer-readable storage medium provided in the embodiments of the present application stores a computer program, and when the computer program is executed by a processor, the steps of any one of the image protection methods described above may be implemented.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For introduction of the computer-readable storage medium provided in the embodiment of the present application, please refer to the method embodiment described above, which is not described herein again.
The embodiments are described in a progressive mode in the specification, the emphasis of each embodiment is on the difference from the other embodiments, and the same and similar parts among the embodiments can be referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The technical solutions provided by the present application are described in detail above. The principles and embodiments of the present application are described herein using specific examples, which are only used to help understand the method and its core idea of the present application. It should be noted that, for those skilled in the art, without departing from the principle of the present application, the present application can also make several improvements and modifications, and those improvements and modifications also fall into the protection scope of the present application.

Claims (15)

1. An image protection method, characterized in that the method comprises:
acquiring a source image and a random image, wherein the source image comprises a target protection object, and the random image does not comprise the target protection object;
initializing and generating an initial protection image according to the source image;
calculating a first feature distance between the initial protection image and the random image, and a second feature distance and an apparent distance between the initial protection image and the source image;
calculating a loss function according to the first feature distance, the second feature distance and the apparent distance;
and iteratively updating the initial protection image by using a back propagation algorithm based on the loss function, to obtain a protection image of the target protection object.
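The optimization loop of claim 1 can be sketched as follows. Everything here is a toy illustration, not the claimed implementation: a fixed random linear map stands in for the face recognition model, the loss weighting (0.1) is an assumed hyperparameter, and a finite-difference gradient stands in for true back-propagation through a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition feature extractor (a fixed linear map).
W = rng.normal(size=(8, 16))
def features(img):
    return W @ img.ravel()

def cos_dist(a, b):
    # 1 - cosine similarity between two feature vectors.
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

source = rng.normal(size=(4, 4))      # toy "source image" (contains the target)
random_img = rng.normal(size=(4, 4))  # toy "random image" (does not contain it)
f_rand = features(random_img)
f_src = features(source)

def loss(p):
    d1 = cos_dist(features(p), f_rand)  # first feature distance (drive small)
    d2 = cos_dist(features(p), f_src)   # second feature distance (drive large)
    app = np.linalg.norm(p - source)    # apparent distance, F-norm (keep small)
    return d1 - d2 + 0.1 * app

# Initialize the protection image from the source image, then iterate.
p = source.copy()
lr, eps = 0.05, 1e-5
loss0 = loss(p)
for _ in range(100):
    # Finite-difference gradient as a stand-in for back-propagation.
    g = np.zeros_like(p)
    for i in np.ndindex(p.shape):
        q = p.copy()
        q[i] += eps
        g[i] = (loss(q) - loss(p)) / eps
    p -= lr * g
```

After the loop, `p` looks close to the source image (small apparent distance) while its features have moved toward the random image and away from the source.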
2. The method of claim 1, wherein calculating a first feature distance between the initial protection image and the random image comprises:
processing the initial protection image and the random image respectively by using a face recognition model to obtain a first feature vector and a second feature vector;
and calculating the cosine distance between the first feature vector and the second feature vector to obtain the first feature distance.
3. The method of claim 2, wherein calculating a second feature distance between the initial protection image and the source image comprises:
processing the source image by using the face recognition model to obtain a third feature vector;
and calculating the cosine distance between the first feature vector and the third feature vector to obtain the second feature distance.
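The cosine distance of claims 2 and 3 can be sketched as follows; the vectors here are hand-made stand-ins for the feature vectors that a face recognition model would produce:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity: 0 for identical directions, up to 2 for opposite.
    return 1.0 - float(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 0.0, 1.0])  # stand-in "first feature vector"
b = np.array([0.0, 1.0, 0.0])  # stand-in vector orthogonal to a
print(cosine_distance(a, a))   # identical directions -> distance ~0
print(cosine_distance(a, b))   # orthogonal directions -> distance 1
```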
4. The method of claim 3, wherein the face recognition model is a ResNet50-based neural network model.
5. The method of claim 1, wherein calculating an apparent distance between the initial protection image and the source image comprises:
performing F-norm (Frobenius norm) calculation on the initial protection image and the source image to obtain the apparent distance.
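The F-norm apparent distance of claim 5 can be sketched on toy 2×2 "images"; real inputs would be normalized face images:

```python
import numpy as np

def apparent_distance(protection_img, source_img):
    # Frobenius norm of the pixel-wise difference between the two images.
    return float(np.linalg.norm(protection_img - source_img, ord="fro"))

source = np.zeros((2, 2))
protection = np.ones((2, 2))
print(apparent_distance(protection, source))  # sqrt(4 * 1^2) = 2.0
```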
6. The method of claim 1, wherein calculating an apparent distance between the initial protection image and the source image comprises:
and inputting the initial protection image and the source image into an image classification network for processing to obtain the apparent distance.
7. The method of claim 6, wherein the image classification network is a VGG-16 based neural network model.
8. The method of any one of claims 2 to 7, wherein before the initializing and generating an initial protection image according to the source image, the method further comprises:
normalizing the source image and the random image respectively according to the input specification of the face recognition model to obtain a normalized source image and a normalized random image;
wherein the image formats of the normalized source image and the normalized random image are the image format specified by the face recognition model.
9. The method according to claim 8, wherein normalizing the source image to obtain the normalized source image comprises:
performing type recognition on the source image to determine an image type;
and normalizing the source image by using a processing strategy corresponding to the image type to obtain the normalized source image.
10. The method according to claim 8, wherein normalizing the source image to obtain the normalized source image comprises:
constructing an affine transformation matrix by using the source image and an image sample;
converting the source image into the normalized source image using the affine transformation matrix.
11. The method of claim 10, wherein constructing an affine transformation matrix using the source image and the image sample comprises:
acquiring first coordinate information of preset feature points of the target protection object in the source image;
acquiring second coordinate information of preset feature points of a sample object in the image sample;
and calculating the affine transformation matrix by using the first coordinate information and each piece of second coordinate information.
12. The method according to claim 11, wherein calculating the affine transformation matrix by using the first coordinate information and each piece of second coordinate information comprises:
performing least-squares estimation on the first coordinate information and each piece of second coordinate information to obtain the affine transformation matrix.
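Claims 10–12 describe fitting an affine transform from corresponding landmark coordinates by least squares. A minimal sketch, assuming the "preset feature points" are 2-D landmark coordinates (the toy landmarks and the pure-translation example are illustrative only):

```python
import numpy as np

def affine_from_landmarks(src_pts, dst_pts):
    """Least-squares affine matrix mapping source landmarks to sample landmarks."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])  # (n, 3) rows of [x, y, 1]
    # Solve A @ M ~= dst_pts for the (3, 2) matrix M in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M  # apply as: np.hstack([pts, ones]) @ M

# Toy landmarks: four points displaced by a pure translation of (10, 20).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([10.0, 20.0])
M = affine_from_landmarks(src, dst)
mapped = np.hstack([src, np.ones((4, 1))]) @ M  # recovers dst exactly here
```

With at least three non-collinear point pairs the system is determined; with more, least squares averages out landmark noise.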
13. An image protection apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a source image and a random image, wherein the source image comprises a target protection object and the random image does not comprise the target protection object;
a processing module, configured to initialize and generate an initial protection image according to the source image;
a first calculation module, configured to calculate a first feature distance between the initial protection image and the random image, and a second feature distance and an apparent distance between the initial protection image and the source image;
a second calculation module, configured to calculate a loss function according to the first feature distance, the second feature distance, and the apparent distance;
and an updating module, configured to iteratively update the initial protection image by using a back propagation algorithm based on the loss function, to obtain a protection image of the target protection object.
14. An image protection apparatus, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image protection method according to any one of claims 1 to 12 when executing said computer program.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the image protection method according to any one of claims 1 to 12.
CN202211057401.1A 2022-08-30 2022-08-30 Image protection method and related equipment Pending CN115410257A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211057401.1A CN115410257A (en) 2022-08-30 2022-08-30 Image protection method and related equipment
PCT/CN2022/139692 WO2024045421A1 (en) 2022-08-30 2022-12-16 Image protection method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211057401.1A CN115410257A (en) 2022-08-30 2022-08-30 Image protection method and related equipment

Publications (1)

Publication Number Publication Date
CN115410257A true CN115410257A (en) 2022-11-29

Family

ID=84163374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211057401.1A Pending CN115410257A (en) 2022-08-30 2022-08-30 Image protection method and related equipment

Country Status (2)

Country Link
CN (1) CN115410257A (en)
WO (1) WO2024045421A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024045421A1 (en) * 2022-08-30 2024-03-07 浪潮(北京)电子信息产业有限公司 Image protection method and related device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070071323A1 (en) * 2005-09-26 2007-03-29 Cognisign Llc Apparatus and method for processing user-specified search image points
CN112418332B (en) * 2020-11-26 2022-09-23 北京市商汤科技开发有限公司 Image processing method and device and image generation method and device
CN113657350A (en) * 2021-05-12 2021-11-16 支付宝(杭州)信息技术有限公司 Face image processing method and device
CN115410257A (en) * 2022-08-30 2022-11-29 浪潮(北京)电子信息产业有限公司 Image protection method and related equipment


Also Published As

Publication number Publication date
WO2024045421A1 (en) 2024-03-07

Similar Documents

Publication Publication Date Title
US11487995B2 (en) Method and apparatus for determining image quality
JP6754619B2 (en) Face recognition method and device
US10713532B2 (en) Image recognition method and apparatus
CN108921782B (en) Image processing method, device and storage medium
CN111444881A (en) Fake face video detection method and device
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
KR20170000748A (en) Method and apparatus for face recognition
KR102476016B1 (en) Apparatus and method for determining position of eyes
CN109271930B (en) Micro-expression recognition method, device and storage medium
CN112052831A (en) Face detection method, device and computer storage medium
JP2005327076A (en) Parameter estimation method, parameter estimation device and collation method
CN110852310A (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
CN108229375B (en) Method and device for detecting face image
CN110705353A (en) Method and device for identifying face to be shielded based on attention mechanism
WO2022188697A1 (en) Biological feature extraction method and apparatus, device, medium, and program product
WO2023124040A1 (en) Facial recognition method and apparatus
CN111680544B (en) Face recognition method, device, system, equipment and medium
CN110599187A (en) Payment method and device based on face recognition, computer equipment and storage medium
KR20220076398A (en) Object recognition processing apparatus and method for ar device
CN112052832A (en) Face detection method, device and computer storage medium
CN111985454A (en) Face recognition method, device, equipment and computer readable storage medium
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN115410257A (en) Image protection method and related equipment
KR20170057118A (en) Method and apparatus for recognizing object, and method and apparatus for training recognition model
CN113255512B (en) Method, apparatus, device and storage medium for living body identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination