CN117635404A - Image encryption method and device - Google Patents

Image encryption method and device Download PDF

Info

Publication number
CN117635404A
Authority
CN
China
Prior art keywords
image
original
result
encryption
encrypted
Prior art date
Legal status
Pending
Application number
CN202210974386.0A
Other languages
Chinese (zh)
Inventor
颜聪泉 (Yan Congquan)
杨彭举 (Yang Pengju)
谢迪 (Xie Di)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202210974386.0A
Publication of CN117635404A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

An embodiment of the application provides an image encryption method and device. The method comprises: acquiring an original image; and inputting the original image into a trained target encryption model to obtain an encrypted image output by the target encryption model. The target encryption model is determined by: encrypting an original training image with an original encryption model to obtain an encrypted training image; applying a preset processing to the original training image and to the encrypted training image to obtain respective processing results; determining an encryption loss from the difference between the processing results; and updating the original encryption model according to the encryption loss to obtain the target encryption model. The applicability of the encrypted image is thereby improved.

Description

Image encryption method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to the field of image encryption processing technologies.
Background
Deep learning-based biometric algorithms require a large amount of biometric data (e.g., biometric images or videos) for model training. Because the original biometric images must be acquired and retained during training, this poses a significant information-security risk to the providers of those images (e.g., of face information). To protect private information, biometric images therefore need to be desensitized and encrypted before use, so that the human eye cannot recognize the semantics of the desensitized and encrypted images. In the related art, however, the inventors found that once a biometric image has been desensitized and encrypted, it can no longer be applied to image processing tasks such as model training and image recognition; its applicability is low.
Disclosure of Invention
An embodiment of the application aims to provide an image encryption method and device so as to improve applicability of encrypted images.
The specific technical scheme is as follows:
in a first aspect of the present application, there is provided an image encryption method, the method including:
acquiring an original image;
inputting the original image into a trained target encryption model to obtain an encrypted image output by the target encryption model;
wherein the target encryption model is determined by:
encrypting the original training image through the original encryption model to obtain an encrypted training image;
respectively carrying out preset processing on the original training image and the encrypted training image to obtain respective processing results;
determining encryption loss according to the difference between the processing results;
and updating the original encryption model according to the encryption loss to obtain a target encryption model.
In a second aspect of the present application, there is provided an image encryption apparatus, comprising:
the image acquisition module is used for acquiring an original image;
the encryption module is used for inputting the original image into the trained target encryption model to obtain an encrypted image output by the target encryption model;
wherein the target encryption model is determined by:
encrypting the original training image through the original encryption model to obtain an encrypted training image;
respectively carrying out preset processing on the original training image and the encrypted training image to obtain respective processing results;
determining encryption loss according to the difference between the processing results;
and updating the original encryption model according to the encryption loss to obtain a target encryption model.
In a third aspect of the present application, an electronic device is provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the above first aspects when executing a program stored on a memory.
In a fourth aspect of the present application, a computer-readable storage medium is provided, wherein a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, implements the method steps of any of the first aspects.
The beneficial effects of the embodiment of the application are that:
according to the image encryption method provided by the embodiment of the application, the original image is input into the trained target encryption model to obtain the encrypted image, the target encryption model is obtained by updating parameters of the original encryption model according to the encryption loss, the encryption loss is determined by determining the difference of the processing result after the preset processing of the encrypted training image relative to the processing result after the preset processing of the original training image, it can be understood that the parameters of the original encryption model are updated by determining the encryption loss of the encrypted training model and the encrypted training model after the encryption of the original encryption model, the gap between the original training image and the encrypted training image is restrained, the target encryption model obtained by training can restrain the gap between the original image and the encrypted image, the difference of the encrypted image after the preset processing relative to the original image is reduced, the encrypted image can replace the original image to carry out subsequent processing, and the applicability of the encrypted image is improved.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art may derive other embodiments from these drawings.
Fig. 1 is a schematic flow chart of an image encryption method according to an embodiment of the present application;
fig. 2 is a flow chart of an encryption loss determining method according to an embodiment of the present application;
fig. 3 is a flow chart of another encryption loss determining method according to an embodiment of the present application;
fig. 4 is a flow chart of another encryption loss determining method according to an embodiment of the present application;
fig. 5 is a flow chart of another encryption loss determining method according to an embodiment of the present application;
fig. 6 is a flow chart of another encryption loss determining method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image encryption device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the scope of protection of this application.
Image encryption transforms an image so that the visual difference between the encrypted image and the original image is large. The human eye therefore cannot recognize the information in the original image, and that information is protected.
In some application scenarios, model training requires images containing specific content; for example, training a model that can accurately identify vehicles in severe weather requires images of vehicles in severe-weather environments. Because encryption changes the image, an image containing specific content may no longer contain that content after encryption, so the encrypted image cannot be used directly for model training, and its applicability is low. As shown in fig. 1, the image encryption method provided by the embodiment of the application includes the following steps.
S101, acquiring an original image.
S102, inputting the original image into the trained target encryption model to obtain an encrypted image output by the target encryption model.
By adopting this embodiment, the original image is input into the trained target encryption model to obtain the encrypted image. The target encryption model is obtained by updating the parameters of the original encryption model according to the encryption loss, and the encryption loss is determined from the difference between the processing result of the encrypted training image after the preset processing and that of the original training image. By determining this encryption loss and updating the parameters of the original encryption model accordingly, the gap between the original training image and the encrypted training image is constrained; the trained target encryption model can therefore constrain the gap between the original image and the encrypted image, reducing the difference between their processing results after the preset processing. The encrypted image can thus replace the original image in subsequent processing, improving the applicability of the encrypted image.
In S101, the original image may be a biometric image containing a biometric feature, such as a face image, body image, iris image, or fingerprint image; an image not containing a biometric feature, such as a vehicle image or building image; or a video frame containing a biometric feature selected from a video. The original image can be obtained from a gallery requiring desensitization encryption, such as a face gallery to be used for training a face recognition model, or can be supplied as needed by a person skilled in the art.
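The flow of S101 and S102 can be sketched as follows. This is a minimal illustration, not the patent's actual model: the "trained target encryption model" is stood in for by a hypothetical fixed pixel-value substitution, and `encrypt_image` is an assumed helper name.

```python
import numpy as np

def encrypt_image(model_fn, image):
    """S101-S102: feed the acquired original image to the trained target
    encryption model and return the encrypted image it outputs."""
    return model_fn(image)

# Stand-in "trained model": a fixed keyed substitution over the 256 pixel
# values (a cyclic shift).  Illustrative only -- the patent's model is a
# trained neural network, not a lookup table.
key = np.roll(np.arange(256), 97)          # key[v] = (v - 97) % 256
original = np.arange(64).reshape(8, 8)     # toy 8x8 "original image"
encrypted = encrypt_image(lambda img: key[img], original)
```

The encrypted image has the same shape as the original but different pixel values, so the human eye cannot read the original content from it.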
In S102, the target encryption model is determined by: encrypting the original training image through the original encryption model to obtain an encrypted training image; respectively carrying out preset processing on the original training image and the encrypted training image to obtain processing results; determining the encryption loss according to the difference between the processing results; and updating the original encryption model according to the encryption loss to obtain the target encryption model.
The original encryption model may be a model with an image encryption function built from a convolutional neural network, or from another type of neural network. The original training image is an unencrypted image used to train the original encryption model; it is input into the original encryption model for encryption to obtain the encrypted training image, which is the encrypted image corresponding to the original training image. It can be understood that the visual difference between the encrypted training image and the original training image is larger than a preset threshold; that is, the human eye cannot recognize from the encrypted training image the semantic information expressed by the original training image. For example, if the original training image is an image of person A, person A should not be recognizable from the encrypted training image by the human eye.
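As a concrete, deliberately tiny illustration of "a model with an image encryption function formed by a convolutional neural network", the sketch below stacks two random 3x3 convolutions with a tanh nonlinearity. The names (`conv2d`, `ToyEncryptionModel`) and the architecture are assumptions for illustration only; the kernels play the role of the trainable parameters that the later loss-driven updates would adjust.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-padded 3x3 convolution -- one layer of the toy model below."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

class ToyEncryptionModel:
    """Stack of random 3x3 convolutions with tanh; the kernels stand in for
    the trainable parameters of the original encryption model."""
    def __init__(self, n_layers=2, seed=0):
        rng = np.random.default_rng(seed)
        self.kernels = [rng.normal(size=(3, 3)) for _ in range(n_layers)]

    def __call__(self, img):
        x = img.astype(float)
        for k in self.kernels:
            x = np.tanh(conv2d(x, k))
        return x

model = ToyEncryptionModel()
original = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy "original training image"
encrypted = model(original)                          # toy "encrypted training image"
```

Untrained, this transform already scrambles pixel values beyond visual recognition; training would additionally constrain the machine-vision gap described below.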
It can be understood that, for the encrypted training image to have the same effect as the original training image when later used for model training, ideally the two should differ only in the semantics perceived by human vision, while being identical at other levels such as the semantics perceived by machine vision. For example, performing face recognition on the encrypted training image and on the original training image with the same model should yield identical recognition results. During training of the original encryption model, therefore, the gap in machine recognition effect between the encrypted training image produced by the original encryption model and the original training image needs to be reduced. Hereinafter, for convenience of description, the semantics of the human visual effect are referred to as human-vision semantics, and the semantics of the machine visual effect are referred to as machine-vision semantics. It should be understood that the machine in this application is not limited to a physical machine; it may be an electronic device, a model network, or the like.
The encryption loss is determined from the difference between the processing results obtained by applying the preset processing to the original training image and to the encrypted training image. Specifically, the preset processing may be one or more of image segmentation, feature extraction, local detection, and the like. The original training image and the encrypted training image are each subjected to the preset processing to obtain corresponding processing results, and the difference between the results is taken as the encryption loss. When the preset processing combines multiple modes, the encryption loss may be the superposition of the differences between the processing results of the encrypted training image and those of the original training image under each mode. It can be understood that the preset processing extracts the behavior of the original and encrypted training images in terms of machine-vision semantics; different processing results correspond to different machine-vision behaviors, and the encryption loss obtained from their differences can represent the machine-vision-semantic difference between the original training image and the encrypted training image.
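The "superposition of differences" described above can be sketched as a weighted sum over processing modes. The metric (mean absolute difference) and the function names are illustrative assumptions; the patent does not fix a particular distance.

```python
import numpy as np

def processing_difference(result_a, result_b):
    """Mean absolute difference between two processing results of one mode."""
    return float(np.mean(np.abs(np.asarray(result_a) - np.asarray(result_b))))

def encryption_loss(original_results, encrypted_results, weights=None):
    """Superpose the per-mode differences (e.g. segmentation, feature
    extraction, local detection) into a single encryption loss."""
    diffs = [processing_difference(a, b)
             for a, b in zip(original_results, encrypted_results)]
    if weights is None:
        weights = [1.0] * len(diffs)
    return sum(w * d for w, d in zip(weights, diffs))

# two preset-processing modes, one result per mode (toy numbers)
orig_results = [np.array([1.0, 2.0]), np.array([0.0])]
enc_results = [np.array([1.5, 2.5]), np.array([1.0])]
loss = encryption_loss(orig_results, enc_results)
```

The loss is zero exactly when every processing mode gives identical results on the two images, which is the ideal case described above.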
And updating model parameters of the original encryption model in a direction capable of reducing the encryption loss until the encryption loss of the encrypted training image obtained by encrypting the original encryption model relative to the original training image is smaller than a preset threshold, wherein the preset threshold can be set by a person skilled in the art according to experience or according to requirements.
As an example, when updating the model parameters of the original encryption model in a direction that reduces the encryption loss, a loss function for the encryption loss may be established; the encryption loss is driven below the preset threshold by gradient descent, the model parameters at that point are determined, and the original encryption model is updated with those parameters to obtain the target encryption model.
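The update rule just described (establish a loss function, drive the encryption loss below a threshold by gradient descent, keep the parameters found) can be sketched generically. Finite-difference gradients and the toy quadratic loss below are stand-ins for backpropagation through a real encryption network; all names are assumptions.

```python
import numpy as np

def train_encryption_model(loss_fn, params, lr=0.1, threshold=1e-3, max_steps=500):
    """Update model parameters in the direction that reduces the encryption
    loss (finite-difference gradient descent) until it falls below a preset
    threshold, then return the parameters found."""
    params = np.asarray(params, dtype=float).copy()
    eps = 1e-6
    for _ in range(max_steps):
        loss = loss_fn(params)
        if loss < threshold:
            break
        grad = np.zeros_like(params)
        for i in range(params.size):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (loss_fn(bumped) - loss) / eps
        params -= lr * grad
    return params, loss_fn(params)

# toy quadratic "encryption loss" with a known minimum at `target`
target = np.array([0.3, -0.2])
def toy_loss(p):
    return float(np.sum((p - target) ** 2))

params, final_loss = train_encryption_model(toy_loss, np.zeros(2))
```

On this toy loss the iteration converges below the threshold in a handful of steps; a real model would use automatic differentiation rather than finite differences.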
Therefore, the target encryption model obtained in this way reduces, to a certain extent, the encryption loss between the encrypted image it produces and the original image, so that the difference between their processing results after the preset processing is small. The encrypted image thus protects the private information in the original image while replacing the original image in subsequent model algorithms.
If the machine-vision semantics of the encrypted image produced by the target encryption model are consistent with those of the original image, the machine-vision semantics of corresponding partial images of the two should also be consistent. The machine-vision-semantic information of the partial images of the original training image and the encrypted training image therefore needs to be considered when determining the encryption loss, after which the original encryption model is updated with that loss to obtain the target encryption model. On this basis, the application provides a method for determining the encryption loss in which the preset processing includes image segmentation. As shown in fig. 2, the method for determining the encryption loss includes:
S201, respectively carrying out image segmentation on the original training image and the encrypted training image to obtain a first segmentation result and a second segmentation result.
S202, determining encryption loss according to the difference of the second segmentation result relative to the first segmentation result.
In S201, the first segmentation result is an image segmentation result of the original training image, and the second segmentation result is an image segmentation result of the encrypted training image.
When the preset processing includes image segmentation, the original training image and the encrypted training image can each be segmented into several partial images according to a segmentation rule. Taking a face image as an example, image segmentation may separate the face part and the non-face part of the original training image and of the corresponding encrypted training image into partial images; accordingly, the first segmentation result may be the face partial image and non-face partial image of the original training image, and the second segmentation result may be the face partial image and non-face partial image of the encrypted training image. Specifically, the image segmentation may be implemented by an image segmentation tool or algorithm in the related art, or by a trained image segmentation model; this application is not limited in this respect.
In S202, the difference of the second segmentation result relative to the first segmentation result can represent the difference, in machine-vision effect, between the image regions obtained by segmenting the encrypted training image and those obtained by segmenting the original training image. This difference is therefore taken as the encryption loss, representing the machine-vision-semantic difference between the original and encrypted training images at the level of partial images. The original encryption model is then updated according to this loss, adjusting its parameters with respect to partial images so as to reduce the encryption loss between the original and encrypted training images; the original image and the encrypted image of the resulting target encryption model thus remain consistent in the machine-vision semantics of their partial images.
By adopting this embodiment, the first and second segmentation results are obtained by segmenting the original and encrypted training images, and the encryption loss is determined from them. The target encryption model obtained by updating with this loss can therefore better encrypt local regions of the image, producing an encrypted image whose machine-vision difference from the original image after segmentation is small. The encrypted image can then better replace the original image in subsequent operations such as model training, further improving its applicability.
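A minimal sketch of S201 and S202, assuming segmentation results take the form of per-pixel label masks (1 = face region, 0 = non-face region); the pixelwise-mismatch metric is an illustrative choice, not specified by the patent.

```python
import numpy as np

def segmentation_loss(first_result, second_result):
    """Fraction of pixels whose segmentation label differs between the
    original image's result and the encrypted image's result."""
    first = np.asarray(first_result)
    second = np.asarray(second_result)
    return float(np.mean(first != second))

# toy per-pixel labels: 1 = face region, 0 = non-face region
orig_mask = np.array([[1, 1, 0],
                      [1, 1, 0],
                      [0, 0, 0]])
enc_mask = np.array([[1, 1, 0],
                     [1, 0, 0],
                     [0, 0, 0]])
loss = segmentation_loss(orig_mask, enc_mask)   # 1 differing pixel out of 9
```

Driving this loss toward zero pushes the encryption model to keep the segmented face/non-face regions of the encrypted image aligned with those of the original.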
It will be appreciated that if the encrypted image produced by the target encryption model is consistent with the original image in machine-vision semantics, the machine-recognized image content of the two should also be consistent; for example, if the original image contains a face, an image detection machine or algorithm should detect a face in the encrypted image as well. The detection results of the original training image and the encrypted training image therefore need to be considered when determining the encryption loss, after which the original encryption model is updated with that loss to obtain the target encryption model. On this basis, the application provides a method for determining the encryption loss in which the preset processing includes image detection. As shown in fig. 3, the method for determining the encryption loss includes:
S301, respectively performing image detection on the original training image and the encrypted training image to obtain a first detection result and a second detection result.
S302, determining encryption loss according to the difference of the second detection result relative to the first detection result.
In S301, the first detection result is an image detection result of the original training image, and the second detection result is an image detection result of the encrypted training image.
When the preset processing includes image detection, the image content of the original training image and of the encrypted training image can be identified by image detection. Taking a face image as an example, image detection on the original training image can identify the face orientation, the illumination direction, the face definition, and whether the face is occluded; this identified content serves as the first detection result. Image detection on the encrypted training image likewise identifies its content, which serves as the second detection result. Specifically, the image detection may be implemented by an image detection tool or algorithm in the related art, or by a trained image detection model; this application is not limited in this respect.
In S302, if the first detection result and the second detection result are inconsistent, the machine-vision semantics of the original training image and the encrypted training image are inconsistent. A detection result may include the objects detected in the image and their positions; inconsistency may mean that the objects in the first detection result differ from those in the second, or that the same object occupies different positions in the two results. Taking a face image as an example, if the original and encrypted training images are each detected, and the first detection result shows that the image contains a face and a vehicle while the second shows only a vehicle, the difference between the two results represents a difference in the machine-recognized image content of the encrypted training image relative to the original training image. The encryption loss obtained from this difference represents that content difference, and updating the original encryption model according to it reduces the machine-vision-semantic gap between the original and encrypted training images.
By adopting this embodiment, the first and second detection results are obtained by detecting the original and encrypted training images, and the encryption loss is determined from them. The target encryption model updated with this loss can therefore better encrypt the content information of the image, producing an encrypted image whose gap from the original image after detection is small. The encrypted image can then better replace the original image in subsequent operations such as model training, improving its applicability.
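Following the face/vehicle example above, S301 and S302 can be sketched with detection results represented as dicts mapping detected object classes to positions. The unit penalty for a missing object and the L1 position term are illustrative assumptions, not the patent's formulation.

```python
def detection_loss(first_result, second_result, box_weight=1.0):
    """Penalize class mismatches and position shifts between the detection
    results of the original and encrypted images."""
    loss = 0.0
    for obj in set(first_result) | set(second_result):
        if obj not in first_result or obj not in second_result:
            loss += 1.0   # object detected in only one of the two images
            continue
        (x1, y1), (x2, y2) = first_result[obj], second_result[obj]
        loss += box_weight * (abs(x1 - x2) + abs(y1 - y2))   # same object moved
    return loss

# toy results: object class -> detected position
first = {"face": (10, 20), "vehicle": (40, 5)}   # original training image
second = {"vehicle": (42, 5)}                    # face lost after encryption
loss = detection_loss(first, second)             # 1.0 (missing face) + 2.0 (shift)
```

A zero loss means the machine detects the same objects at the same positions in both images, which is the consistency condition the section describes.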
The keypoints of an image, also called interest points, represent important or distinctive content and are generally distributed at corners, edges, and similar regions of the image content. The degree to which the keypoints of two images match represents the degree to which the images themselves match. For the encrypted image produced by the trained target encryption model to match the keypoints of the original image, the encryption loss used to update the model parameters of the original encryption model should reflect the keypoint matching degree between the original training image and the encrypted training image, so that the original encryption model updates the parameters related to image keypoints according to this loss. On this basis, the application further provides a method for determining the encryption loss in which the preset processing includes keypoint detection. As shown in fig. 4, the method for determining the encryption loss includes:
S401, performing key point detection on the original training image and the encrypted training image respectively to obtain a first key point result and a second key point result.
S402, determining encryption loss according to the difference of the second key point result relative to the first key point result.
In S401, the first key point result is a key point detection result of the original training image, and the second key point result is a key point detection result of the encrypted training image.
When the preset processing includes keypoint detection, the keypoints of the original training image and of the encrypted training image can be identified. Taking an original training image containing a fingerprint as an example, keypoint detection on the original training image can identify edge points of the fingerprint, such as turning points of the fingerprint contour or the points of maximum curvature on the contour; the identified edge points serve as the first keypoint result. Keypoint detection on the encrypted training image identifies the keypoints of its image content, which serve as the second keypoint result. Specifically, the keypoint detection may be implemented by a keypoint detection tool or algorithm in the related art, or by a trained keypoint detection model; this application is not limited in this respect.
In one possible embodiment, before the keypoint detection, the preset processing further includes image cropping. The part of the image to be cropped is determined on the original training image, and the corresponding part is cropped from the encrypted training image, so that the cropped images corresponding to the original and encrypted training images are obtained for keypoint detection. Taking an original training image containing a fingerprint as an example, the region where the fingerprint is located can be selected as the part to be cropped; the corresponding region of the encrypted training image, containing the encrypted partial image of that fingerprint, is cropped from the encrypted training image, and keypoint detection is then performed on each cropped image. Specifically, the cropping may be implemented by an image cropping tool or algorithm in the related art, or by a trained image cropping model; this application is not limited in this respect.
It can be understood that by cropping the targeted parts of the original and encrypted training images before keypoint detection, only the cropped images containing important content are detected, which makes the keypoint results more accurate and improves the efficiency of keypoint detection.
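The cropping step can be sketched as follows: the crop box is determined once on the original training image, and the same box is applied to the encrypted training image, so that keypoint detection later runs on matching crops. The `(top, left, height, width)` box format is an assumption for illustration.

```python
import numpy as np

def crop_region(image, box):
    """Crop `box` = (top, left, height, width); the region is chosen on the
    original training image and the same box is applied to the encrypted one."""
    top, left, height, width = box
    return image[top:top + height, left:left + width]

original = np.arange(100).reshape(10, 10)   # toy original training image
encrypted = original[::-1]                  # stand-in encrypted counterpart
box = (2, 3, 4, 4)                          # e.g. the region containing a fingerprint
orig_crop = crop_region(original, box)
enc_crop = crop_region(encrypted, box)      # keypoint detection then runs on both crops
```

Because both crops come from the same box, their keypoint results are directly comparable in the loss of S402.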
In S402, when the first key point result and the second key point result are inconsistent, the machine vision semantics of the original training image and the encrypted training image are inconsistent. Taking an original training image containing a fingerprint as an example, if the first key point result contains 5 key points distributed at the edge of the original training image while the second key point result contains 4 key points distributed at the center of the encrypted training image, the key points in the two results do not match, that is, there is a difference between the first key point result and the second key point result. The difference of the second key point result relative to the first key point result reflects the degree to which the encrypted training image matches the original training image, and so does the encryption loss obtained from that difference. The target encryption model obtained by updating the original encryption model according to this encryption loss therefore yields a higher degree of matching between the encrypted image and the original image.
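As a hypothetical sketch of turning the mismatch between the two key point results into a numeric encryption-loss term (the symmetric nearest-neighbour distance used here is an assumption; the application does not fix a particular formula):

```python
def keypoint_loss(kps_first, kps_second):
    """Symmetric mean nearest-neighbour distance between the first and
    second key point results (lists of (x, y) tuples): 0.0 when the
    detected key points coincide, growing as they drift apart."""
    def one_way(src, dst):
        if not src:
            return 0.0
        if not dst:
            return float("inf")  # one image yielded no key points at all
        total = 0.0
        for x, y in src:
            total += min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for u, v in dst)
        return total / len(src)
    return 0.5 * (one_way(kps_first, kps_second) + one_way(kps_second, kps_first))
```

A loss of zero then corresponds to matching key point results, and the model update of S402 would drive this value down.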
With this embodiment, the encryption loss can be determined from the key point results of the original training image and the encrypted training image, so that the encryption loss reflects the degree of matching between the two. The target encryption model obtained by updating the model parameters of the original encryption model according to this encryption loss then outputs encrypted images that match the original images more closely, so that the encrypted images can better replace the original images in subsequent model training and other operations, improving the applicability of the encrypted images.
In determining whether the encrypted image and the original image are consistent in machine vision semantics, those semantics may be embodied by determining different attributes of the image and by scoring the image quality, where the score represents the amount of information the image contains about a particular object. For convenience of description, the attributes and the score are illustrated below with an original image that is a portrait. In this example, the attributes may be the person's sex, age, clothing, and so on; specifically, if the person in the original image is an elderly man wearing a black short-sleeved shirt, the attributes obtained from the encrypted image should likewise be an elderly man wearing a black short-sleeved shirt. Further, if the face in the original image is blocked and turned to the left, then even though the image resolution is high, the face-recognizability score of the original image should be low, because the image contains little information about the face; scoring the corresponding encrypted image for face recognizability should likewise give a low score.
It can be understood that if the encrypted image and the original image are consistent in machine vision semantics, the machine should identify the content of the encrypted image as having the same attributes as the content of the original image, and the scores of the two images should also be consistent. The encryption loss should therefore reflect the attributes and scores of the encrypted training image and the original training image, so that the original encryption model can subsequently be updated into a target encryption model under which the attribute-scoring result of the original image and that of the encrypted image are consistent. Based on this, the application further provides a method for determining the encryption loss, as shown in fig. 5, where the preset processing includes image alignment and image scoring, and the method includes the following steps:
S501, performing image alignment on the original training image and the key points of the original training image, and performing image alignment on the encrypted training image and the key points of the original training image to obtain a first alignment image and a second alignment image.
S502, scoring the first alignment image and the second alignment image respectively to obtain a first scoring result and a second scoring result.
S503, determining encryption loss according to the difference of the second scoring result relative to the first scoring result.
In S501, the first alignment image is an image obtained by aligning the original training image with the key points of the original training image, and the second alignment image is an image obtained by aligning the encrypted training image with the key points of the original training image.
It can be understood that the key points of the original training image may be obtained when the preset process includes key point detection, or may be obtained separately, specifically, the obtaining of the key points may be implemented by a key point detection tool or a key point detection algorithm in the related art, or may be implemented by a trained key point detection model, which is not limited in this application.
Image alignment between the key points of the original training image and the original training image yields the specific positions of those key points in the original training image; image alignment between the key points of the original training image and the encrypted training image yields the specific positions of those key points in the encrypted training image. Labeling the key point positions on the original training image and on the encrypted training image, respectively, yields the first aligned image and the second aligned image.
In S502, the first scoring result is a score for the attribute of the first aligned image, and the second scoring result is a score for the attribute of the second aligned image.
Specifically, when attribute identification is performed, taking again the example in which the person in the original training image is an elderly man wearing a black short-sleeved shirt, the attributes of the person in the first aligned image can be identified from features of the person (such as a beard), the clothing, and the sex; the image quality of the first aligned image, such as the blocking of the face, the illumination, the face orientation, and the image resolution, is then scored, giving a first scoring result that embodies both the attribute identification and the quality score. Correspondingly, for the second aligned image, the attributes of the encrypted portrait, its clothing, and features such as a beard are determined, and the image quality of the second aligned image is scored in the same way, giving a second scoring result that embodies both the attribute identification and the quality score. The first scoring result and the second scoring result are thus obtained.
Specifically, the image scoring may be implemented by an image scoring tool or an image scoring algorithm in the related art, or may be implemented by a trained image scoring model, which is not limited in this application.
In S503, taking as an example an original training image containing an elderly man wearing a black short-sleeved shirt: if the first scoring result identifies the person in the image as an elderly man wearing a black short-sleeved shirt, while the second scoring result identifies the person as a young woman wearing a black short-sleeved shirt, then the attribute identification results of the first aligned image and the second aligned image are inconsistent, that is, the first scoring result and the second scoring result are inconsistent. Alternatively, if the first scoring result shows that the elderly person's face is blocked, turned to the left, and of low image resolution, so that the quality score of the first aligned image is low, while the second scoring result shows that the face is unblocked, facing front, and of high image resolution, so that the quality score of the second aligned image is high, then the quality scores of the two aligned images are inconsistent, that is, the first scoring result and the second scoring result are inconsistent.
As described above, the scoring result of an image can represent the attribute information and image quality of the image content, and these in turn can represent the machine vision semantics of the image. The encryption loss obtained from the difference between the first scoring result and the second scoring result can therefore also represent the inconsistency of the original training image and the encrypted training image in machine vision semantics. As the original encryption model continuously updates its model parameters according to this encryption loss, the updated model reduces the difference between the first scoring result and the second scoring result, yielding a target encryption model under which the original image and the encrypted image are consistent in machine vision semantics.
According to this embodiment, the encryption loss is determined from the scoring results, so that it reflects the difference in machine vision semantics between the original training image and the encrypted training image. The target encryption model obtained by updating the model parameters of the original encryption model according to this encryption loss then outputs encrypted images consistent with the original images in machine vision semantics, so that the encrypted images can better replace the original images in subsequent model training and other operations, improving the applicability of the encrypted images. Meanwhile, because the image attribute scoring is performed on the basis of the obtained key point information, the result of the key point detection in the preset processing is correlated with the results of the image alignment and the image scoring, and the differences at several different levels of information of the correlated images are obtained. The encryption loss can thus better reflect the machine vision semantics of the original training image and the encrypted training image, the target encryption model obtained by updating according to it outputs encrypted images that substitute more strongly for the original images, and the applicability of the encrypted images is improved.
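A minimal sketch of a scoring-based loss term consistent with this embodiment; the dict layout, the `quality` key, and the uniform weighting of attribute mismatches are assumptions made for illustration:

```python
def scoring_loss(result_orig, result_enc, quality_weight=1.0):
    """Encryption-loss term from the gap between two scoring results.

    Each result is assumed to be a dict of attribute labels plus a
    numeric 'quality' score, e.g.
    {'age': 'elderly', 'sex': 'male', 'top': 'black short sleeve',
     'quality': 0.3}. Mismatched attributes and diverging quality
    scores both increase the loss; identical results give 0."""
    attr_keys = [k for k in result_orig if k != "quality"]
    attr_mismatch = sum(result_orig[k] != result_enc.get(k) for k in attr_keys)
    quality_gap = abs(result_orig["quality"] - result_enc["quality"])
    return attr_mismatch + quality_weight * quality_gap
```

Driving this term to zero corresponds to the first and second scoring results becoming consistent, as S503 requires.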
Image features are also an important part of describing the semantic information of an image; generally, they mainly comprise the color features, texture features, shape features, and spatial relationship features of the image. If the encrypted image and the original image are to be consistent in machine vision semantics, their features must be consistent, so an encryption loss that embodies the features of the encrypted training image and the original training image is needed. Based on this, the application further provides a method for determining the encryption loss, as shown in fig. 6, where the preset processing includes image alignment and feature extraction, and the method includes the following steps:
S601, performing image alignment on the original training image and the key points of the original training image, and performing image alignment on the encrypted training image and the key points of the original training image to obtain a first alignment image and a second alignment image.
In S601, the first alignment image is an image obtained by aligning the original training image with the key points of the original training image, and the second alignment image is an image obtained by aligning the encrypted training image with the key points of the original training image.
This step is the same as S501, and reference may be made to the description of S501, which is not repeated here.
S602, respectively extracting features of the first alignment image and the second alignment image to obtain a first feature result and a second feature result.
S603, determining encryption loss according to the difference of the second characteristic result relative to the first characteristic result.
In S602, the first feature result is a result of extracting an image feature of the first aligned image, and the second feature result is a result of extracting an image feature of the second aligned image.
Specifically, the same feature extraction mode is used on the first aligned image and the second aligned image, respectively, to obtain the first feature result and the second feature result. Taking as an example determining whether the original training image contains a human face, the first feature result may be a feature vector representing the human face in the original training image, and the face contained in the original training image can then be identified according to the extracted feature vector.
Specifically, the feature extraction may be implemented by a feature extraction tool or a feature extraction algorithm in the related art, or may be implemented by a trained feature extraction model, which is not limited in this application.
In S603, as described above, the features of an image can represent its machine vision semantics, so the encryption loss obtained from the first feature result and the second feature result can also represent the inconsistency of the original training image and the encrypted training image in machine vision semantics. As the original encryption model subsequently updates its model parameters according to this encryption loss, the updated model reduces the difference between the first feature result and the second feature result, yielding a target encryption model under which the original image and the encrypted image are consistent in machine vision semantics at this further level.
According to this embodiment, the encryption loss is determined from the feature results, so that it reflects the difference in image features between the original training image and the encrypted training image. The target encryption model obtained by updating the model parameters of the original encryption model according to this encryption loss then outputs encrypted images consistent with the original images in machine vision semantics, so that the encrypted images can better replace the original images in subsequent model training and other operations, improving the applicability of the encrypted images. Meanwhile, because the feature extraction is performed on the basis of the obtained key point information, the result of the key point detection in the preset processing is correlated with the results of the image alignment and the feature extraction, and the differences at several different levels of information of the correlated images are obtained. The encryption loss can thus better reflect the machine vision semantics of the original training image and the encrypted training image, the target encryption model obtained by updating according to it outputs encrypted images that substitute more strongly for the original images, and the applicability of the encrypted images is improved.
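A hypothetical feature-based loss term; using one minus the cosine similarity between the two feature vectors is an illustrative choice, not something mandated by the text:

```python
import numpy as np

def feature_loss(feat_orig, feat_enc):
    """Encryption-loss term from the gap between the first and second
    feature results: 1 - cosine similarity, so identically-directed
    feature vectors give 0 and orthogonal ones give 1."""
    a = np.asarray(feat_orig, dtype=float)
    b = np.asarray(feat_enc, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 1.0  # degenerate all-zero vector: treat as maximally different
    return float(1.0 - np.dot(a, b) / denom)
```

Minimizing this term corresponds to the feature results of the original and encrypted training images becoming consistent, as S603 requires.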
As mentioned above, the encrypted image is generally obtained for use in the subsequent training of models or algorithms. Thus, in one possible embodiment, the encrypted image is input into an image feature algorithm model to obtain a feature calibration result; the model parameters of the image feature algorithm model are adjusted according to the difference between the feature calibration result and the pre-calibrated truth value of the original image; and the adjusted image feature algorithm model is taken as the trained image feature algorithm model.
Specifically, the image feature algorithm model may be a model that performs corresponding processing on images, such as face recognition or fingerprint recognition; before such a model is used, it must be trained on a certain number of images so that it can implement its preset function accurately and efficiently. Because the encrypted image is consistent with the original image in machine vision semantics, inputting the encrypted image into the image feature algorithm model yields a feature calibration result consistent with that of the original image; that is, since the machine vision semantics of the two are consistent, the machine's recognition results for the encrypted image and the original image are identical, and the feature calibration result of the image feature algorithm model on the encrypted image should theoretically be consistent with the pre-calibrated truth value of the original image. The difference between the feature calibration result of the encrypted image and the pre-calibrated truth value of the original image can therefore represent the model's deficiency in recognizing images, and the model parameters of the image feature algorithm model can be adjusted based on this difference so as to train the model. In this way, by using the encrypted image instead of the original image for model training, the training process can proceed normally while the security of the image information in the original image at the level visible to the human eye is ensured. It will be appreciated that the image feature algorithm model in this embodiment may be any model that processes images; the present application does not limit the model's function or type.
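The downstream adjustment described here — feeding encrypted images to an image feature algorithm model and nudging its parameters toward the pre-calibrated truth values of the originals — might look like the following toy step; the linear model form, the mean-squared error, and the learning rate are all assumptions for illustration:

```python
import numpy as np

def finetune_step(weights, encrypted_batch, truth_labels, lr=0.1):
    """One illustrative update of a downstream 'image feature algorithm
    model' (here a linear scorer) trained on encrypted images against
    the pre-calibrated truth values of the originals."""
    preds = encrypted_batch @ weights           # feature calibration results
    error = preds - truth_labels                # gap to the pre-calibrated truth
    grad = encrypted_batch.T @ error / len(truth_labels)
    return weights - lr * grad                  # adjusted model parameters
```

Repeating this step over the training set yields the adjusted model that the embodiment takes as the trained image feature algorithm model.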
In addition to the foregoing manners, in a possible embodiment, the preset processing may include all of: image segmentation, image detection, key point detection, image alignment, image scoring, and feature extraction.
Correspondingly, the encryption loss in this case is determined from the first segmentation result, second segmentation result, first detection result, second detection result, first key point result, second key point result, first scoring result, second scoring result, first feature result, and second feature result of the original training image and the encrypted training image. These results can be superimposed on one another to obtain an encryption loss that reflects the differences between the original training image and the encrypted training image at several levels of information, with the multiple levels of semantic information correlated with one another, so that the encryption loss better reflects the machine vision semantics of the two images. The target encryption model obtained by updating the model parameters of the original encryption model according to this encryption loss then outputs encrypted images consistent with the original images in machine vision semantics, so that the encrypted images can better replace the original images in subsequent model training and other operations, comprehensively improving the applicability of the encrypted images, which can subsequently be applied to training processes of models of various types and functions.
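Superimposing the per-processing loss terms can be sketched as a weighted sum; the uniform default weights are an assumption, since the text does not specify how the results are combined:

```python
def total_encryption_loss(terms, weights=None):
    """Combine the per-processing-type loss terms (segmentation,
    detection, key points, scoring, features, ...) into one encryption
    loss by a weighted sum; uniform weights by default."""
    if weights is None:
        weights = [1.0] * len(terms)
    return sum(w * t for w, t in zip(weights, terms))
```

The weights would let an implementation emphasize whichever level of semantic information matters most for the downstream task.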
Corresponding to the image encryption method provided in the embodiment of the present application, the embodiment of the present application further provides an image encryption device, as shown in fig. 7, including:
an image acquisition module 701, configured to acquire an original image;
the encryption module 702 is configured to input the original image into a trained target encryption model, so as to obtain an encrypted image output by the target encryption model;
wherein the target encryption model is determined by:
encrypting the original training image through the original encryption model to obtain an encrypted training image;
respectively carrying out preset processing on the original training image and the encrypted training image to obtain a processing result;
determining encryption loss according to the difference between the processing results;
and updating the original encryption model according to the encryption loss to obtain a target encryption model.
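The four determination steps listed above can be illustrated by a toy training loop; the single-gain "encryption model", the scalar preset processing, the squared-difference loss, and the finite-difference update are all assumptions made to keep the sketch self-contained (a real model would also carry an obfuscation objective that this toy omits):

```python
def train_target_model(images, process, lr=0.01, epochs=100):
    """Toy version of the training steps above: the 'original encryption
    model' is a single gain g (encrypting img as g * img), the preset
    processing is one scalar measurement, the encryption loss is the
    squared difference of the measurements on the original and encrypted
    images, and g is updated by finite-difference gradient descent."""
    g, eps = 0.0, 1e-4

    def encryption_loss(gain):
        return sum((process(img) - process(gain * img)) ** 2 for img in images)

    for _ in range(epochs):
        grad = (encryption_loss(g + eps) - encryption_loss(g - eps)) / (2 * eps)
        g -= lr * grad
    return g  # the 'target encryption model' parameter
```

The loop merely demonstrates the encrypt, process, compare, and update cycle; with an identity measurement the gain converges toward reproducing the measurement of the original image.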
In a possible embodiment, the preset process includes: image segmentation, the apparatus further comprising:
a segmentation loss determination module for determining an encryption loss by:
respectively carrying out image segmentation on the original training image and the encrypted training image to obtain a first segmentation result and a second segmentation result, wherein the first segmentation result is the image segmentation result of the original training image, and the second segmentation result is the image segmentation result of the encrypted training image;
and determining encryption loss according to the difference of the second segmentation result relative to the first segmentation result.
In a possible embodiment, the preset process includes: key point detection, the apparatus further comprising:
the key point loss determining module is used for respectively carrying out key point detection on the original training image and the encrypted training image to obtain a first key point result and a second key point result, wherein the first key point result is a key point detection result of the original training image, and the second key point result is a key point detection result of the encrypted training image;
and determining encryption loss according to the difference of the second key point result relative to the first key point result.
In a possible embodiment, the preset process includes: image alignment and image scoring, the apparatus further comprising:
the scoring loss determining module is used for performing image alignment on the original training image and the key points of the original training image, performing image alignment on the encrypted training image and the key points of the original training image to obtain a first aligned image and a second aligned image, wherein the first aligned image is an image obtained after the original training image and the key points of the original training image are aligned, and the second aligned image is an image obtained after the encrypted training image and the key points of the original training image are aligned;
scoring the first aligned image and the second aligned image respectively to obtain a first scoring result and a second scoring result, wherein the first scoring result is the attribute scoring of the first aligned image, and the second scoring result is the attribute scoring of the second aligned image;
and determining encryption loss according to the difference of the second scoring result relative to the first scoring result.
In a possible embodiment, the preset process includes: image alignment and feature extraction, the apparatus further comprising:
the feature loss determining module is used for performing image alignment on the original training image and the key points of the original training image, performing image alignment on the encrypted training image and the key points of the original training image to obtain a first aligned image and a second aligned image, wherein the first aligned image is an image obtained by aligning the original training image and the key points of the original training image, and the second aligned image is an image obtained by aligning the encrypted training image and the key points of the original training image;
respectively carrying out feature extraction on the first alignment image and the second alignment image to obtain a first feature result and a second feature result, wherein the first feature result is a result of extracting the image features of the first alignment image, and the second feature result is a result of extracting the image features of the second alignment image;
and determining encryption loss according to the difference of the second characteristic result relative to the first characteristic result.
In one possible embodiment, the apparatus further comprises:
the calibration result determining module is used for inputting the encrypted image into an image feature algorithm model to obtain a feature calibration result;
the model parameter determining module is used for adjusting the model parameters of the image characteristic algorithm model according to the difference between the characteristic calibration result and the pre-calibration true value of the original image;
and the model training module is used for taking the adjusted image characteristic algorithm model as a trained image characteristic algorithm model.
The embodiment of the present application further provides an electronic device, as shown in fig. 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 complete communication with each other through the communication bus 804,
a memory 803 for storing a computer program;
the processor 801, when executing the program stored in the memory 803, implements the following steps:
acquiring an original image;
inputting the original image into a trained target encryption model to obtain an encrypted image output by the target encryption model;
The target encryption model is obtained by updating an original encryption model in advance according to encryption loss, the encryption loss is used for representing differences between processing results obtained by respectively carrying out preset processing on an original training image and an encrypted training image, and the encrypted training image is obtained by encrypting the original training image through the original encryption model.
The communication bus mentioned for the above electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment provided herein, there is also provided a computer readable storage medium having stored therein a computer program which when executed by a processor implements the steps of any of the image encryption methods described above.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the image encryption methods of the above embodiments.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the others. In particular, for the embodiments of the apparatus, the electronic device, the computer-readable storage medium, and the computer program product, the description is relatively brief, since they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. An image encryption method, the method comprising:
acquiring an original image;
inputting the original image into a trained target encryption model to obtain an encrypted image output by the target encryption model;
wherein the target encryption model is determined by:
encrypting an original training image through an original encryption model to obtain an encrypted training image;
performing preset processing on the original training image and on the encrypted training image, respectively, to obtain processing results;
determining an encryption loss according to the difference between the processing results; and
updating the original encryption model according to the encryption loss to obtain the target encryption model.
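The training scheme recited in claim 1 can be sketched as follows. This is only an illustrative toy, not the patented implementation: the per-pixel multiplicative "encryption model", the gradient-based preset processing, and the accept-if-better random-search update are all assumptions introduced for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(image, weights):
    """Toy stand-in for the encryption model: a per-pixel transform."""
    return np.clip(image * weights, 0.0, 1.0)

def preset_processing(image):
    """Stand-in for the preset processing (segmentation, detection, ...):
    here, a simple horizontal-gradient response."""
    return np.abs(np.diff(image, axis=1))

def encryption_loss(original, encrypted):
    """Encryption loss from the difference between the two processing results."""
    return float(np.mean((preset_processing(original)
                          - preset_processing(encrypted)) ** 2))

# "Updating the original encryption model according to the encryption loss":
# here a gradient-free search that keeps a perturbation only when it lowers
# the loss, so the accepted loss never increases.
original_train = rng.random((8, 8))
weights = np.full((8, 8), 0.5)
initial_loss = encryption_loss(original_train, encrypt(original_train, weights))

for _ in range(100):
    candidate = weights + rng.normal(scale=0.05, size=weights.shape)
    if encryption_loss(original_train, encrypt(original_train, candidate)) \
            < encryption_loss(original_train, encrypt(original_train, weights)):
        weights = candidate

final_loss = encryption_loss(original_train, encrypt(original_train, weights))
```

In a real system the encryption model would be a neural network updated by backpropagating the encryption loss; the accept-if-better loop above merely makes the "update according to the encryption loss" step concrete without gradients.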
2. The method according to claim 1, wherein the preset processing comprises image segmentation;
the encryption loss is determined by:
performing image segmentation on the original training image and on the encrypted training image, respectively, to obtain a first segmentation result and a second segmentation result, wherein the first segmentation result is the image segmentation result of the original training image, and the second segmentation result is the image segmentation result of the encrypted training image; and
determining the encryption loss according to the difference of the second segmentation result relative to the first segmentation result.
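A segmentation-based encryption loss in the sense of claim 2 might look like the following sketch; the threshold segmenter is a hypothetical stand-in for whatever trained segmentation model a real system would use.

```python
import numpy as np

def segment(image, threshold=0.5):
    """Stand-in segmenter: threshold to a binary foreground mask.
    A real system would use a trained segmentation network."""
    return (image > threshold).astype(np.float32)

def segmentation_loss(original, encrypted):
    """Encryption loss from the difference of the second (encrypted)
    segmentation result relative to the first (original) one."""
    first = segment(original)    # first segmentation result
    second = segment(encrypted)  # second segmentation result
    return float(np.mean(np.abs(second - first)))

original = np.linspace(0.0, 1.0, 16).reshape(4, 4)
identical = segmentation_loss(original, original)      # masks agree
flipped = segmentation_loss(original, 1.0 - original)  # masks disagree everywhere
```

Minimizing this loss during training pushes the encryption model toward outputs whose segmentation matches that of the original image, which is what keeps the encrypted image usable for the downstream task.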
3. The method according to claim 1, wherein the preset processing comprises image detection;
the encryption loss is determined by:
performing image detection on the original training image and on the encrypted training image, respectively, to obtain a first detection result and a second detection result, wherein the first detection result is the image detection result of the original training image, and the second detection result is the image detection result of the encrypted training image; and
determining the encryption loss according to the difference of the second detection result relative to the first detection result.
4. The method according to claim 1, wherein the preset processing comprises key point detection;
the encryption loss is determined by:
performing key point detection on the original training image and on the encrypted training image, respectively, to obtain a first key point result and a second key point result, wherein the first key point result is the key point detection result of the original training image, and the second key point result is the key point detection result of the encrypted training image; and
determining the encryption loss according to the difference of the second key point result relative to the first key point result.
5. The method according to claim 1, wherein the preset processing comprises image alignment and image scoring;
the encryption loss is determined by:
aligning the original training image and the encrypted training image, respectively, according to the key points of the original training image, to obtain a first aligned image and a second aligned image, wherein the first aligned image is obtained by aligning the original training image and the second aligned image is obtained by aligning the encrypted training image;
scoring the first aligned image and the second aligned image, respectively, to obtain a first scoring result and a second scoring result, wherein the first scoring result is the attribute score of the first aligned image, and the second scoring result is the attribute score of the second aligned image; and
determining the encryption loss according to the difference of the second scoring result relative to the first scoring result.
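The alignment-and-scoring variant of claim 5 can be illustrated as below. The centroid-shift alignment and the central-region intensity "attribute score" are hypothetical placeholders; a real system would fit a similarity transform to facial landmarks and score attributes with a trained network.

```python
import numpy as np

def align(image, keypoints):
    """Stand-in alignment: translate so the key point centroid moves to the
    image centre (a real system would fit a similarity/affine transform)."""
    h, w = image.shape
    cy, cx = keypoints.mean(axis=0)
    dy, dx = int(round(h / 2 - cy)), int(round(w / 2 - cx))
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def score(image):
    """Stand-in attribute score: mean intensity of the central region."""
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def scoring_loss(original, encrypted, keypoints):
    # Both images are aligned with the key points of the ORIGINAL image.
    first = score(align(original, keypoints))    # first scoring result
    second = score(align(encrypted, keypoints))  # second scoring result
    return abs(second - first)

img = np.zeros((8, 8))
img[2:4, 2:4] = 1.0                              # a bright "face" patch
kps = np.array([[2.0, 2.0], [3.0, 3.0]])         # its key points
loss_same = scoring_loss(img, img, kps)          # identical images
loss_blank = scoring_loss(img, np.zeros_like(img), kps)  # content destroyed
```

The important detail the claim fixes is that the encrypted image reuses the original image's key points for alignment, so the loss isolates scoring differences rather than alignment differences.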
6. The method according to claim 1, wherein the preset processing comprises image alignment and feature extraction;
the encryption loss is determined by:
aligning the original training image and the encrypted training image, respectively, according to the key points of the original training image, to obtain a first aligned image and a second aligned image, wherein the first aligned image is obtained by aligning the original training image and the second aligned image is obtained by aligning the encrypted training image;
performing feature extraction on the first aligned image and the second aligned image, respectively, to obtain a first feature result and a second feature result, wherein the first feature result is the extracted image feature of the first aligned image, and the second feature result is the extracted image feature of the second aligned image; and
determining the encryption loss according to the difference of the second feature result relative to the first feature result.
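For the feature-extraction variant of claim 6, a common way to measure the difference between two feature results is cosine distance between embeddings. The fixed random projection below is a hypothetical stand-in for a trained feature (e.g., face-recognition) network; only the loss shape is meant to be illustrative.

```python
import numpy as np

def extract_features(aligned_image):
    """Stand-in feature extractor: a fixed random projection to a
    16-dimensional embedding (a real system would use a trained CNN)."""
    rng = np.random.default_rng(42)  # fixed weights, shared by every call
    projection = rng.normal(size=(16, aligned_image.size))
    return projection @ aligned_image.ravel()

def feature_loss(first_aligned, second_aligned):
    """Encryption loss as cosine distance between the first and second
    feature results, so identity-bearing features survive encryption."""
    f1 = extract_features(first_aligned)   # first feature result
    f2 = extract_features(second_aligned)  # second feature result
    cosine = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return float(1.0 - cosine)

a = np.random.default_rng(1).random((8, 8))
same_distance = feature_loss(a, a)  # identical aligned images
```

Training against this loss encourages the encrypted image to produce the same embedding as the original, which is what would let, say, face matching keep working on encrypted images.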
7. The method according to any one of claims 1-6, further comprising:
inputting the encrypted image into an image feature algorithm model to obtain a feature calibration result;
adjusting model parameters of the image feature algorithm model according to the difference between the feature calibration result and a pre-calibrated true value of the original image; and
taking the adjusted image feature algorithm model as a trained image feature algorithm model.
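Claim 7 adapts the downstream image feature algorithm model to encrypted inputs. The sketch below assumes, purely for illustration, a linear probe as the model, a random vector as the encrypted image, and a scalar pre-calibrated true value; the parameter adjustment is plain gradient descent on the squared difference.

```python
import numpy as np

rng = np.random.default_rng(0)

encrypted_image = rng.random(8)  # stands in for the encrypted image
true_value = 3.0                 # pre-calibrated true value of the original image

# Image feature "algorithm model": a linear probe with trainable weights.
weights = np.zeros(8)

def calibrate(image, w):
    """Feature calibration result produced by the model."""
    return float(image @ w)

# Adjust the model parameters according to the difference between the
# calibration result and the pre-calibrated true value (gradient descent
# on the squared difference).
for _ in range(200):
    diff = calibrate(encrypted_image, weights) - true_value
    weights -= 0.1 * diff * encrypted_image  # gradient of diff**2, up to a factor

residual = abs(calibrate(encrypted_image, weights) - true_value)
```

After the adjustment the model's output on the encrypted image approaches the truth calibrated on the original image, which is the point of retraining the downstream model alongside the encryption model.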
8. The method of claim 1, wherein the acquiring the original image comprises:
acquiring a biometric image containing a biological feature as the original image;
and/or,
acquiring, as the original image, a video frame in a video that contains a biological feature.
9. An image encryption apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an original image;
the encryption module is used for inputting the original image into the trained target encryption model to obtain an encrypted image output by the target encryption model;
wherein the target encryption model is determined by:
encrypting an original training image through an original encryption model to obtain an encrypted training image;
performing preset processing on the original training image and on the encrypted training image, respectively, to obtain processing results;
determining an encryption loss according to the difference between the processing results; and
updating the original encryption model according to the encryption loss to obtain the target encryption model.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method steps of any one of claims 1-8.
CN202210974386.0A 2022-08-15 2022-08-15 Image encryption method and device Pending CN117635404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210974386.0A CN117635404A (en) 2022-08-15 2022-08-15 Image encryption method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210974386.0A CN117635404A (en) 2022-08-15 2022-08-15 Image encryption method and device

Publications (1)

Publication Number Publication Date
CN117635404A true CN117635404A (en) 2024-03-01

Family

ID=90015129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210974386.0A Pending CN117635404A (en) 2022-08-15 2022-08-15 Image encryption method and device

Country Status (1)

Country Link
CN (1) CN117635404A (en)

Similar Documents

Publication Publication Date Title
KR102299847B1 (en) Face verifying method and apparatus
WO2020207189A1 (en) Method and device for identity authentication, storage medium, and computer device
US10726244B2 (en) Method and apparatus detecting a target
US11288504B2 (en) Iris liveness detection for mobile devices
US11861937B2 (en) Facial verification method and apparatus
US10579872B2 (en) Method and apparatus with iris region extraction
US10095927B2 (en) Quality metrics for biometric authentication
KR101309889B1 (en) Texture features for biometric authentication
US11138455B2 (en) Liveness test method and apparatus
EP2883189B1 (en) Spoof detection for biometric authentication
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
US20180034852A1 (en) Anti-spoofing system and methods useful in conjunction therewith
US11869272B2 (en) Liveness test method and apparatus and biometric authentication method and apparatus
WO2022033220A1 (en) Face liveness detection method, system and apparatus, computer device, and storage medium
US11625954B2 (en) Method and apparatus with liveness testing
EP3642756B1 (en) Detecting artificial facial images using facial landmarks
EP2370932B1 (en) Method, apparatus and computer program product for providing face pose estimation
US20220327189A1 (en) Personalized biometric anti-spoofing protection using machine learning and enrollment data
Ilankumaran et al. Multi-biometric authentication system using finger vein and iris in cloud computing
CN113283377B (en) Face privacy protection method, system, medium and electronic terminal
CN108288023B (en) Face recognition method and device
CN117635404A (en) Image encryption method and device
US20230259600A1 (en) Adaptive personalization for anti-spoofing protection in biometric authentication systems
WO2023286251A1 (en) Adversarial image generation apparatus, control method, and computer-readable storage medium
KR20210050649A (en) Face verifying method of mobile device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination