CN111539382A - Image recognition model privacy risk assessment method and device and electronic equipment - Google Patents


Info

Publication number
CN111539382A
CN111539382A (application CN202010442718.1A)
Authority
CN
China
Prior art keywords
recognition model
image
target object
reverse
image recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010442718.1A
Other languages
Chinese (zh)
Inventor
翁海琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010442718.1A priority Critical patent/CN111539382A/en
Publication of CN111539382A publication Critical patent/CN111539382A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Abstract

An embodiment of this specification provides a method for assessing the privacy risk of an image recognition model. The method comprises the following steps: performing feature extraction on an original image corresponding to a target object based on an input vector in the image recognition model to obtain feature data of the target object; inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model and is trained by taking the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input; and evaluating the privacy risk of the image recognition model based on the similarity between the original image and the reverse image corresponding to the target object.

Description

Image recognition model privacy risk assessment method and device and electronic equipment
Technical Field
This document relates to the technical field of data security, and in particular to a method for evaluating the privacy risk of an image recognition model.
Background
Deep learning models have become increasingly popular by virtue of their powerful information-processing capability. The image recognition model (e.g., the face recognition model) is a common form of such models in business applications. An image recognition model works by approximately matching the image features of an object to be recognized against the image features of sample objects, thereby determining the identity of the object to be recognized.
At the present stage, privacy is rarely considered when an image recognition model is constructed; once the model parameters are disclosed, it becomes possible to reverse-engineer the model and restore images from image features. In many business scenarios, these images are private information. Therefore, an evaluation scheme for the privacy risk of an image recognition model is urgently needed, as it can provide a useful reference for the model's release strategy.
Disclosure of Invention
The embodiment of the specification aims to provide an evaluation method for privacy risks of an image recognition model, which can evaluate whether the image recognition model has privacy risks.
In order to achieve the above object, the embodiments of the present specification are implemented as follows:
in a first aspect, a method for evaluating privacy risks of an image recognition model is provided, which includes:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and evaluating the privacy risk of the image recognition model based on the similarity of the original image and the reverse image corresponding to the target object.
In a second aspect, an apparatus for evaluating privacy risk of an image recognition model is provided, including:
the characteristic acquisition module is used for extracting the characteristics of an original image corresponding to a target object based on an input vector in the image recognition model to obtain the characteristic data of the target object;
the image restoration module is used for inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and the privacy evaluation module is used for evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object.
In a third aspect, an electronic device is provided that includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, performing the following operations:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and evaluating the privacy risk of the image recognition model based on the similarity of the original image and the reverse image corresponding to the target object.
In a fourth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and evaluating the privacy risk of the image recognition model based on the similarity of the original image and the reverse image corresponding to the target object.
In the solution of the embodiments of this specification, a reverse model whose input vector is expressed inversely to the input vector in the image recognition model is constructed, and the reverse model is trained with the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input, so that the reverse model acquires the capability of reversely restoring images. The feature data that the image recognition model extracts from the original image of the target object is then fed into the reverse model, which outputs the reverse image corresponding to the target object. The privacy risk of the image recognition model is evaluated according to the similarity between the original image and the reverse image corresponding to the target object, which provides a useful reference for the model's release strategy and helps avoid privacy disclosure.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic flowchart of a method for evaluating privacy risks of an image recognition model according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of a second method for evaluating privacy risks of an image recognition model according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of an apparatus for evaluating privacy risk of an image recognition model according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step shall fall within the scope of protection of the present specification.
At the present stage, privacy is rarely considered when an image recognition model is constructed; once the model parameters are disclosed, it becomes possible to reverse-engineer the model and restore images from image features. In many business scenarios, these images are private information. Therefore, after an image recognition model is released, a risk of privacy disclosure exists. Particularly, against the broad background of federated modeling, model sharing is becoming a future trend and model parameters will no longer be treated as sensitive information. Image recognition models released in the future will therefore pose a very serious challenge to privacy protection.
Therefore, this document aims to provide an evaluation scheme for the privacy risk of an image recognition model, which can accurately quantify the model's privacy performance and provide a useful reference for its release strategy.
Fig. 1 is a flow chart of a method for evaluating privacy risks of an image recognition model according to an embodiment of the present disclosure. The method shown in fig. 1 may be performed by a corresponding apparatus, comprising:
and S102, extracting the characteristics of the original image corresponding to the target object based on the input vector in the image recognition model to obtain the characteristic data of the target object.
In an embodiment of the present specification, an original image of a target object is used to verify the privacy performance of an image recognition model.
The input vector in the image recognition model is an expression rule for variables. For an image recognition model with strong privacy protection, the input vector has an irreversible or nearly irreversible encrypted representation capability. At present, however, the privacy of a given image recognition model is unknown, so no requirement is placed on the input vector here. Many image recognition models are constructed without considering the encrypted representation capability of the input vector.
In this step, the original image of the target object may be input into the image recognition model, and the feature data that the image recognition model recognizes from the original image may be extracted from the input vector in at least one functional layer (e.g., the embedding layer) of the image recognition model.
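As an illustrative sketch (not part of the patent), the feature-extraction step can be pictured with a toy model in which the embedding layer is a single linear projection; the matrix `W_embed`, the sizes, and the function name below are all hypothetical:

```python
import numpy as np

# Toy stand-in for an image recognition model: the "embedding layer" is a
# single linear projection W_embed. All names and sizes are illustrative.
rng = np.random.default_rng(0)
D_IMG, D_FEAT = 64, 16                      # flattened image size, feature size
W_embed = rng.normal(size=(D_FEAT, D_IMG))  # plays the role of the embedding layer

def extract_features(image_flat):
    """Step S102: project the original image into the model's feature space."""
    return W_embed @ image_flat

original_image = rng.random(D_IMG)          # original image of the target object
feature_data = extract_features(original_image)
assert feature_data.shape == (D_FEAT,)
```

In a real evaluation, `extract_features` would be replaced by a forward pass through the actual model up to the chosen functional layer.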
Step S104, inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking the original image corresponding to the sample object as output and the feature data of the sample object in the corresponding original image as input.
For example, if the image recognition model is a convolutional neural network model, the feature data can be restored back to an image by applying a convolution operation that inversely expresses the input vector of the convolutional layer (i.e., a deconvolution).
Thus, the reverse model may be constructed from an input vector that is expressed inversely to the input vector in the image recognition model. After the reverse model is constructed, it is trained based on the original images corresponding to sample objects and the feature data of the sample objects in the corresponding original images.
In the training process of the reverse model, the feature data of an existing sample object in its corresponding original image is input into the reverse model, yielding a training result: the reverse image corresponding to the sample object. In the embodiments of this specification, an error value between the original image and the reverse image of the sample object may be calculated based on a loss function derived from maximum likelihood estimation, and the parameters of the reverse model (for example, the weight coefficients of the input vector) may be adjusted to reduce the error value, thereby achieving the training effect.
After training is completed, the reverse model has the capability of restoring an image from the feature data extracted from it. Inputting the feature data of the target object obtained in step S102 into the reverse model yields the reverse image corresponding to the target object.
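Continuing the toy setup, a minimal sketch of training a reverse model is shown below. Here the "training" is a least-squares fit of a linear decoder, standing in for the iterative, loss-driven training described above; the patent does not prescribe this method, and all sizes and names are invented for illustration:

```python
import numpy as np

# Hypothetical sketch: fit a linear reverse model that maps feature data back
# to images, with sample-object images as output and their features as input.
rng = np.random.default_rng(1)
D_IMG, D_FEAT, N = 64, 16, 500

W_embed = rng.normal(size=(D_FEAT, D_IMG))   # the model under evaluation
sample_images = rng.random((N, D_IMG))       # original images of sample objects (output)
sample_features = sample_images @ W_embed.T  # their feature data (input)

# Minimize the reconstruction error ||features @ W_inv - images||^2, i.e. the
# error value between original and reverse images that training reduces.
W_inv, *_ = np.linalg.lstsq(sample_features, sample_images, rcond=None)

def reverse_model(features):
    """Restore a reverse image from feature data."""
    return features @ W_inv

target_image = rng.random(D_IMG)
reverse_image = reverse_model(target_image @ W_embed.T)
mse = float(np.mean((target_image - reverse_image) ** 2))  # reconstruction error
```

Because the 16-dimensional features discard information about the 64-dimensional image, the reconstruction is lossy; how lossy it is is exactly what the privacy evaluation measures.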
Step S106, evaluating the privacy risk of the image recognition model based on the similarity between the original image and the reverse image corresponding to the target object.
It should be understood that the method of calculating the similarity between images is not exclusive and is not particularly limited herein.
By way of exemplary introduction, this step may calculate a mathematical distance (e.g., at least one of the Euclidean distance, Manhattan distance, Chebyshev distance, and Hamming distance) between the original image and the reverse image corresponding to the target object, and determine the privacy risk of the image recognition model based on the mathematical distance.
Obviously, the value of the mathematical distance is negatively correlated with the degree of privacy risk of the image recognition model: the smaller the distance, the higher the similarity between the original image and the reverse image corresponding to the target object, and the weaker the privacy protection of the image recognition model.
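The distance computation can be sketched as follows; the pixel values, the tolerance-based Hamming variant for real-valued pixels, and the risk threshold are invented for illustration and are not taken from the patent:

```python
import numpy as np

# Mathematical distances between a flattened original image and its reverse image.
def euclidean(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))

def chebyshev(a, b):
    return float(np.max(np.abs(a - b)))

def hamming(a, b, tol=0.1):
    return int(np.sum(np.abs(a - b) > tol))  # positions differing by more than tol

original = np.array([0.2, 0.8, 0.5, 0.1])
reverse = np.array([0.25, 0.75, 0.5, 0.3])

# Smaller distance -> higher similarity -> higher privacy risk
# (the negative correlation described above).
HIGH_RISK_THRESHOLD = 0.5  # hypothetical threshold
is_high_risk = euclidean(original, reverse) < HIGH_RISK_THRESHOLD
```

Any one of these distances, or a combination of them, could serve as the similarity measure in step S106.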
The evaluation method of the embodiments of this specification constructs a reverse model whose input vector is expressed inversely to the input vector in the image recognition model, and trains the reverse model with the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input, so that the reverse model acquires the capability of reversely restoring images. The feature data that the image recognition model extracts from the original image of the target object is then fed into the reverse model, which outputs the reverse image corresponding to the target object. The privacy risk of the image recognition model is evaluated according to the similarity between the original image and the reverse image corresponding to the target object, which provides a useful reference for the model's release strategy and helps avoid privacy disclosure.
The method of the embodiments of the present disclosure is described below with reference to practical application scenarios.
This application scenario evaluates the privacy risk of a face recognition model. A face image is private user information; once a face recognition model can be reversely restored, there is a serious risk of personal information leakage.
Therefore, in this application scenario, a corresponding reverse model is set in advance for the face recognition model. Specifically, a variational autoencoder model and/or a generative adversarial network model can be used as the reverse model, and the decoder in the autoencoder model and/or the adversarial network model can be used as the reverse framework, which contains an input vector expressed inversely to the input vector in the face recognition model. It should be noted here that the face recognition model and the reverse model may be of different types.
After the reverse model is obtained, the privacy performance of the face recognition model can be evaluated, and the corresponding process mainly comprises the following steps:
step S201, inputting an original face image of a target user into a face recognition model, and acquiring characteristic data representing the original face image by an input vector in the face recognition model.
It should be understood that the face recognition model has different functional layers, and different functional layers may correspond to different input vectors. The input vector in the face recognition model used in this step may be any input vector used for characterizing the original face image in the face recognition model.
Step S202, inputting the characteristic data of the target user in the original face image into a reverse model set for the face recognition model, and obtaining the reverse face image of the target user output by the reverse model.
Step S203, calculating the similarity between the original face image of the target user and the reverse face image of the target user.
In this step, the similarity between the original face image of the target user and the reverse face image of the target user can be calculated directly using a pre-set user recognition model. The user recognition model is trained with supervision on the facial image features of the target user and the recognition classification labels of the target user; after training is completed, it has the capability of recognizing the target user. The reverse face image of the target user is then input into the user recognition model, which provides a recognition result.
The recognition result can directly indicate whether the target user is recognized based on the reverse face image of the target user. For example, if the user recognition model outputs "1", it indicates that the target user can be recognized based on the reverse face image, and the original face image of the target user is similar to the reverse face image; conversely, if the user recognition model outputs "0", it indicates that the target user cannot be recognized based on the reverse face image, and the original face image of the target user is not similar to the reverse face image.
Alternatively, the recognition result can indicate, through a value in a preset interval, the confidence that the reverse face image belongs to the face image of the target user. For example, if the value interval of the confidence is set to [0, 100] and the user recognition model outputs "90", the probability of recognizing the target user based on the reverse face image is high, and correspondingly, the similarity between the original face image and the reverse face image of the target user is high; conversely, if the user recognition model outputs "20", the target user is unlikely to be recognized based on the reverse face image, and correspondingly, the similarity between the original face image and the reverse face image of the target user is low.
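A minimal sketch of interpreting the two kinds of recognition results just described (binary and confidence-based); the linear mapping from confidence to similarity is an assumption for illustration, not specified by the patent:

```python
# Hypothetical interpretation of the user recognition model's output.
def similarity_from_binary(result):
    """'1' -> target user recognized (similar); '0' -> not recognized."""
    return 1.0 if result == "1" else 0.0

def similarity_from_confidence(confidence):
    """Confidence in [0, 100] -> similarity in [0, 1] (assumed linear map)."""
    confidence = max(0.0, min(100.0, float(confidence)))
    return confidence / 100.0

assert similarity_from_binary("1") == 1.0
assert similarity_from_confidence(90) == 0.9   # target user likely recognized
assert similarity_from_confidence(20) == 0.2   # target user unlikely recognized
```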
Step S204, evaluating the privacy performance of the face recognition model based on the similarity between the original face image of the target user and the reverse face image of the target user.
Specifically, the privacy performance of the face recognition model can be comprehensively quantized based on the similarity between the original face image of the target user and the reverse face image of the target user and other image indexes (such as the definition of the reverse face image) of the reverse face image.
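One possible way to comprehensively quantize the privacy performance is a weighted combination of the similarity and another image index of the reverse face image, such as its sharpness; the gradient-based sharpness measure and the 0.7/0.3 weights below are assumptions for illustration, not specified by the patent:

```python
import numpy as np

# Hypothetical composite score: higher score = weaker privacy protection.
def sharpness(img2d):
    """Simple gradient-energy sharpness measure of a 2-D image."""
    gy, gx = np.gradient(img2d.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def privacy_risk_score(similarity, reverse_img2d, max_sharpness=1.0):
    s = min(sharpness(reverse_img2d) / max_sharpness, 1.0)
    return 0.7 * similarity + 0.3 * s  # assumed weighting

flat = np.full((8, 8), 0.5)  # a featureless reverse image has zero sharpness
assert sharpness(flat) == 0.0
assert privacy_risk_score(0.9, flat) > privacy_risk_score(0.1, flat)
```

In practice, the weights and the choice of additional image indexes would be tuned to the business scenario.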
It should be understood that evaluating the privacy performance of the face recognition model based on the similarity between the original face image and the reverse face image of the target user is a mathematical quantization step; the method of doing so is not unique, and further examples are omitted here.
It should be noted that the obtained evaluation result of the privacy risk of the face recognition model is used to determine the release policy of the face recognition model.
For example, for a face recognition model that has not yet been released, if a privacy risk exists, its release is rejected and the model is adjusted (for example, the input vectors are redesigned to achieve irreversible encryption capability), or the model is deployed and used only in a sufficiently secure environment.
For a face recognition model that has already been released, it can be taken offline for adjustment; alternatively, it can be deployed in an environment with stronger security measures.
The above is a description of the method of the embodiments of the present specification. It will be appreciated that appropriate modifications may be made without departing from the principles outlined herein, and such modifications are intended to be included within the scope of the embodiments herein.
Correspondingly, the embodiment of the specification further provides an evaluation device for the privacy risk of the image recognition model. Fig. 3 is a block diagram of an evaluation apparatus 300 according to an embodiment of the present disclosure, including:
the feature obtaining module 310 performs feature extraction on an original image corresponding to a target object based on an input vector in the image recognition model to obtain feature data of the target object.
The image restoring module 320 is configured to input the feature data of the target object to the inverse model corresponding to the image recognition model to obtain an inverse image corresponding to the target object, where the inverse model has an input vector expressed in an opposite manner to the input vector in the image recognition model, and performs training by using the original image corresponding to the sample object as an output and the feature data of the sample object in the corresponding original image as an input.
And the privacy evaluation module 330 is used for evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object.
The evaluation device of the embodiments of this specification constructs a reverse model whose input vector is expressed inversely to the input vector in the image recognition model, and trains the reverse model with the original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input, so that the reverse model acquires the capability of reversely restoring images. The feature data that the image recognition model extracts from the original image of the target object is then fed into the reverse model, which outputs the reverse image corresponding to the target object. The privacy risk of the image recognition model is evaluated according to the similarity between the original image and the reverse image corresponding to the target object, which provides a useful reference for the model's release strategy and helps avoid privacy disclosure.
Optionally, at least part of feature data of the sample object in the corresponding original image is obtained by performing feature extraction on the original image corresponding to the sample object based on the input vector in the image recognition model.
Optionally, the evaluation result of the privacy risk of the image recognition model is used for determining a release strategy of the image recognition model.
Optionally, the feature obtaining module 310 performs feature extraction on the original image corresponding to the target object based on the input vector in the embedding layer of the image recognition model.
Optionally, the privacy evaluation module 330 specifically calculates a mathematical distance between the original image and the reverse image corresponding to the target object when executing; and then, based on the mathematical distance, determining the privacy risk of the image recognition model, wherein the value of the mathematical distance is in negative correlation with the privacy risk degree of the image recognition model. The mathematical distance comprises at least one of an euclidean distance, a manhattan distance, a chebyshev distance, and a hamming distance.
Optionally, the reverse model comprises at least one of a variational autoencoder model and a generative adversarial network model.
Optionally, the image recognition model is a face recognition model, and the original images corresponding to the target object and the sample object both belong to face images.
Obviously, the evaluation device of the embodiments of this specification can serve as the execution subject of the evaluation method shown in fig. 1, and can therefore realize the functions of the evaluation method in fig. 1 and fig. 2. Since the principle is the same, details are not repeated here.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring to fig. 4, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 4, but this does not indicate only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer operating instructions. The memory may include both an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming, at the logical level, the apparatus for evaluating the privacy risk of the image recognition model. The processor is configured to execute the program stored in the memory, and is specifically configured to perform the following operations:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking an original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and evaluating the privacy risk of the image recognition model based on the similarity between the original image and the reverse image corresponding to the target object.
The electronic device of the embodiment of the present specification can construct a reverse model whose input vector is expressed inversely to the input vector in the image recognition model, and train the reverse model by taking the original image corresponding to the sample object as output and the feature data of the sample object in the corresponding original image as input, so that the reverse model acquires the ability to restore images in reverse. Then, the feature data that the image recognition model extracts from the original image of the target object is input into the reverse model, which outputs the reverse image corresponding to the target object. The privacy risk of the image recognition model is evaluated according to the similarity between the original image corresponding to the target object and the reverse image, which provides a reference for the release strategy of the image recognition model and helps avoid privacy disclosure.
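The three-step flow just described (extract feature data, reverse it into an image, compare with the original) might be sketched as follows; `extract_features`, `reverse_model`, and the risk threshold are hypothetical placeholders, not part of the patent.

```python
import numpy as np

def evaluate_privacy_risk(extract_features, reverse_model, target_images):
    """Sketch of the evaluation flow: extract, reverse, compare.

    `extract_features` stands in for the recognition model's embedding
    layer; `reverse_model` for the trained reverse model. Both are
    assumed callables supplied by the caller.
    """
    distances = []
    for img in target_images:
        feats = extract_features(img)        # step 1: feature data
        restored = reverse_model(feats)      # step 2: reverse image
        distances.append(np.linalg.norm(np.asarray(img) - np.asarray(restored)))
    mean_dist = float(np.mean(distances))
    # Smaller distance -> more faithful restoration -> higher privacy risk.
    # The 1.0 cut-off is an arbitrary placeholder for a release policy.
    return {"mean_distance": mean_dist, "high_risk": mean_dist < 1.0}
```

A release strategy could then gate deployment of the recognition model on the resulting risk flag.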
The method for evaluating the privacy risk of an image recognition model disclosed in the embodiment of fig. 1 of the present specification may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present specification. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present specification may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It should be understood that the electronic device according to the embodiment of the present disclosure may implement the functions of the above-described apparatus for evaluating privacy risk of an image recognition model in the embodiments shown in fig. 1 and fig. 2, which are not described herein again.
Of course, in addition to a software implementation, the electronic device of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the above processing flow is not limited to individual logic units, and may also be hardware or a logic device.
Furthermore, the embodiments of the present specification also provide a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device comprising a plurality of application programs, cause the portable electronic device to perform the method of the embodiment shown in fig. 1, and in particular to perform the following method:
Performing feature extraction on an original image corresponding to the target object based on the input vector in the image recognition model to obtain feature data of the target object.
Inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking an original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input.
Evaluating the privacy risk of the image recognition model based on the similarity between the original image and the reverse image corresponding to the target object.
It should be understood that the above instructions, when executed by a portable electronic device comprising a plurality of application programs, enable the above evaluation apparatus to implement the functions of the embodiments shown in fig. 1 and fig. 2, which are not described in detail here.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification. Moreover, all other embodiments obtained by a person skilled in the art without making any inventive step shall fall within the scope of protection of this document.

Claims (11)

1. A method for evaluating privacy risks of an image recognition model comprises the following steps:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking an original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and evaluating the privacy risk of the image recognition model based on the similarity between the original image and the reverse image corresponding to the target object.
2. The method according to claim 1, wherein
at least part of feature data of the sample object in the corresponding original image is obtained by performing feature extraction on the original image corresponding to the sample object based on the input vector in the image recognition model.
3. The method according to claim 1, wherein
and the evaluation result of the privacy risk of the image recognition model is used for determining the release strategy of the image recognition model.
4. The method of any one of claims 1-3,
the performing feature extraction on an original image corresponding to a target object based on an input vector in an image recognition model to obtain feature data of the target object comprises:
performing feature extraction on the original image corresponding to the target object based on the input vector in an embedding layer of the image recognition model.
5. The method of any one of claims 1-3,
the evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object comprises:
calculating a mathematical distance between an original image and a reverse image corresponding to the target object;
and determining the privacy risk of the image recognition model based on the mathematical distance, wherein the value of the mathematical distance is in negative correlation with the privacy risk degree of the image recognition model.
6. The method according to claim 5, wherein
the mathematical distance comprises at least one of an euclidean distance, a manhattan distance, a chebyshev distance, and a hamming distance.
7. The method of any one of claims 1-3,
the reverse model comprises at least one of a variational autoencoder model and a generative adversarial network model.
8. The method of any one of claims 1-3,
the image recognition model is a face recognition model, and the original images corresponding to the target object and the sample object are both face images.
9. An apparatus for evaluating privacy risk of an image recognition model, comprising:
the characteristic acquisition module is used for extracting the characteristics of an original image corresponding to a target object based on an input vector in the image recognition model to obtain the characteristic data of the target object;
the image restoration module is used for inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is trained by taking an original image corresponding to a sample object as output and the feature data of the sample object in the corresponding original image as input;
and the privacy evaluation module is used for evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object.
10. An electronic device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is obtained by training with an original image corresponding to a sample object as an output and the feature data of the sample object in the corresponding original image as an input;
and evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object.
11. A computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
based on an input vector in an image recognition model, performing feature extraction on an original image corresponding to a target object to obtain feature data of the target object;
inputting the feature data of the target object into a reverse model corresponding to the image recognition model to obtain a reverse image corresponding to the target object, wherein the reverse model has an input vector expressed inversely to the input vector in the image recognition model, and is obtained by training with an original image corresponding to a sample object as an output and the feature data of the sample object in the corresponding original image as an input;
and evaluating the privacy risk of the image recognition model based on the original image and the reverse image corresponding to the target object.
CN202010442718.1A 2020-05-22 2020-05-22 Image recognition model privacy risk assessment method and device and electronic equipment Pending CN111539382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442718.1A CN111539382A (en) 2020-05-22 2020-05-22 Image recognition model privacy risk assessment method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111539382A true CN111539382A (en) 2020-08-14

Family

ID=71980820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442718.1A Pending CN111539382A (en) 2020-05-22 2020-05-22 Image recognition model privacy risk assessment method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111539382A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170220817A1 (en) * 2016-01-29 2017-08-03 Samsung Electronics Co., Ltd. System and method to enable privacy-preserving real time services against inference attacks
CN109670342A (en) * 2018-12-30 2019-04-23 北京工业大学 The method and apparatus of information leakage risk measurement
CN110084108A (en) * 2019-03-19 2019-08-02 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Pedestrian re-identification system and method based on GAN neural network
US20190392305A1 (en) * 2018-06-25 2019-12-26 International Business Machines Corporation Privacy Enhancing Deep Learning Cloud Service Using a Trusted Execution Environment
CN111126623A (en) * 2019-12-17 2020-05-08 支付宝(杭州)信息技术有限公司 Model updating method, device and equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084948A (en) * 2020-09-10 2020-12-15 深圳市迈航信息技术有限公司 Method and system for improving security check safety by adopting human face data mode
CN112084962A (en) * 2020-09-11 2020-12-15 贵州大学 Face privacy protection method based on generation type countermeasure network
CN112084962B (en) * 2020-09-11 2021-05-25 贵州大学 Face privacy protection method based on generation type countermeasure network

Similar Documents

Publication Publication Date Title
CN111291740B (en) Training method of face recognition model, face recognition method and hardware
CN109711358B (en) Neural network training method, face recognition system and storage medium
CN112734045B (en) Exception handling method and device for federated learning and electronic equipment
US10984225B1 (en) Masked face recognition
CN110378301B (en) Pedestrian re-identification method and system
CN111191568A (en) Method, device, equipment and medium for identifying copied image
US20230076017A1 (en) Method for training neural network by using de-identified image and server providing same
CN111539382A (en) Image recognition model privacy risk assessment method and device and electronic equipment
CN111553320B (en) Feature extraction method for protecting personal data privacy, model training method and hardware
CN108416343A (en) A kind of facial image recognition method and device
CN110674800A (en) Face living body detection method and device, electronic equipment and storage medium
CN110765843B (en) Face verification method, device, computer equipment and storage medium
CN111680181A (en) Abnormal object identification method and terminal equipment
CN111476269A (en) Method, device, equipment and medium for constructing balanced sample set and identifying copied image
CN113609900A (en) Local generation face positioning method and device, computer equipment and storage medium
CN112257689A (en) Training and recognition method of face recognition model, storage medium and related equipment
CN110795993A (en) Method and device for constructing model, terminal equipment and medium
CN112395448A (en) Face retrieval method and device
CN112418189B (en) Face recognition method, device and equipment for wearing mask and storage medium
CN111783742A (en) Image classification method for defending against attack, service decision method and device
CN111400764B (en) Personal information protection wind control model training method, risk identification method and hardware
JP2014067352A (en) Apparatus, method, and program for biometric authentication
CN111931148A (en) Image processing method and device and electronic equipment
CN114596638A (en) Face living body detection method, device and storage medium
CN110084147B (en) Gender privacy protection method and system for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035487

Country of ref document: HK

RJ01 Rejection of invention patent application after publication

Application publication date: 20200814