CN114676396B - Protection method and device for deep neural network model, electronic equipment and medium - Google Patents

Protection method and device for deep neural network model, electronic equipment and medium Download PDF

Info

Publication number
CN114676396B
Authority
CN
China
Prior art keywords
value
channel
channel value
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210595796.4A
Other languages
Chinese (zh)
Other versions
CN114676396A (en)
Inventor
邓富城
罗韵
陈振杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jijian Technology Co.,Ltd.
Original Assignee
Shandong Jivisual Angle Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jivisual Angle Technology Co ltd filed Critical Shandong Jivisual Angle Technology Co ltd
Priority to CN202210595796.4A
Publication of CN114676396A
Application granted
Publication of CN114676396B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/10: Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F 21/12: Protecting executable software
    • G06F 21/121: Restricting unauthorised execution of programs
    • G06F 21/107: License processing; Key processing
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a protection method and device for a deep neural network model, an electronic device and a medium, relates to the field of neural network models, and is used for reducing the risk of the deep neural network model being stolen. The method comprises: acquiring a deep neural network model; acquiring an original training sample set, wherein the original training sample set comprises at least two original training samples; acquiring an encryption transformation function according to the image type of the original training sample set; performing encryption conversion processing on the original training samples in the original training sample set according to the encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set have different pixel point parameters from the original training samples in the original training sample set; and inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.

Description

Protection method and device for deep neural network model, electronic equipment and medium
Technical Field
The present application relates to the field of neural network models, and in particular, to a method, an apparatus, an electronic device, and a medium for protecting a deep neural network model.
Background
In recent years, artificial intelligence techniques represented by deep neural network models have been widely used in many fields such as security, retail, industry, medical care, finance, and automatic driving. Computer vision is one of the most important technical fields of artificial intelligence; it mainly comprises visual task directions such as image recognition, target detection, semantic segmentation, and text detection and recognition, has a wide range of application scenarios, and mainly depends on deep neural network models for support. A deep neural network model applied to an actual scene is obtained by consuming a large amount of computing resources to train on a large amount of training sample data; it is the most valuable and core part of an artificial intelligence visual algorithm, and is often installed and deployed in the user's operating environment as part of artificial intelligence algorithm software.
However, the deep neural network model may be stolen, which harms the developers who have invested huge manpower and material resources, so how to protect the deep neural network model has become a major research topic. Existing protection schemes for deep neural network models fall into two categories:
the deep neural network model encryption method comprises the steps of firstly, encrypting a deep neural network model by using an encryption algorithm, then loading the encrypted deep neural network model when a program runs, decrypting the deep neural network model to a memory, loading the decrypted deep neural network model from the memory for use, and obtaining the encrypted data of the deep neural network model outside the memory.
Second, model watermarking: a digital watermark is implanted into the deep neural network model with a specific strategy during the development and training stages; digital watermark information is later extracted and recovered from the model to be verified, and the extracted watermark is compared with the implanted watermark to judge whether piracy exists.
However, besides requiring complex encryption and decryption processing, the model-encryption approach still leaves the deep neural network model decrypted in memory exposed to theft, and not all inference engines support loading a deep neural network model from memory. Model watermarking requires an adapted development and training process, which increases the training difficulty and complexity of the deep neural network model, and since it is after-the-fact authentication it can hardly prevent the deep neural network model from being stolen in the first place.
In summary, the deep neural network model in existing protection schemes still runs the risk of being stolen and then used.
Disclosure of Invention
The first aspect of the present application provides a protection method for a deep neural network model, which is characterized by comprising:
acquiring a deep neural network model;
acquiring an original training sample set, wherein the original training sample set at least comprises two original training samples;
acquiring an encryption transformation function according to the image type of the original training sample set;
carrying out encryption conversion processing on original training samples in an original training sample set according to an encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set have different pixel point parameters from the original training samples in the original training sample set;
and inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.
Optionally, performing encryption transformation processing on the original training samples in the original training sample set according to the encryption transformation function to generate an encrypted training sample set, including:
acquiring RGB channel parameters of a target pixel point in an original training sample, wherein the RGB channel parameters comprise an R channel value, a G channel value and a B channel value;
carrying out reduction processing on the R channel value, the G channel value and the B channel value;
calculating and generating a first channel value of the target pixel point according to the R channel value, the G channel value and the B channel value after the reduction processing and the encryption parameter in the encryption transformation function;
judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
if the channel value is 0, setting the second channel value of the target pixel point to be 255;
if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
calculating a third channel value of the target pixel point according to the maximum value, the minimum value, the R channel value, the G channel value and the B channel value of the RGB channel parameters;
integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
and processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set.
Optionally, the R channel value, the G channel value and the B channel value are reduced according to a reduction formula (the formula is given as an image in the original publication and is not reproduced here), where R, G and B denote the R channel value, the G channel value and the B channel value of the target pixel point, and R′, G′ and B′ denote the reduced R channel value, G channel value and B channel value of the target pixel point.
The first channel value of the target pixel point is calculated and generated from the reduced R channel value, G channel value and B channel value and the encryption parameters in the encryption transformation function (formula given as an image in the original publication), where a, b and c are the encryption parameters in the encryption transformation function, a + b + c = 1 with a, b and c all greater than 0, and an intermediate value of the first channel value is computed before the first channel value of the target pixel point itself.
Whether the maximum value of the RGB channel parameters of the target pixel point is 0 is judged; if it is 0, the second channel value of the target pixel point is set to 255; if not, the second channel value of the target pixel point is calculated from the maximum value and the minimum value of the RGB channel parameters (formula given as an image in the original publication), where min denotes the minimum of the reduced R, G and B channel values, max denotes the maximum of the reduced R, G and B channel values, and an intermediate value of the second channel value is computed before the second channel value of the target pixel point itself.
The third channel value of the target pixel point is calculated from the maximum value and the minimum value of the RGB channel parameters together with the R channel value, the G channel value and the B channel value (formula given as an image in the original publication); the formula involves a first intermediate value and a second intermediate value of the third channel value.
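The concrete formulas above exist only as images in the source text, so the NumPy sketch below is a hedged reconstruction rather than the patent's literal transformation: it assumes the reduction divides each channel by 255, takes the first channel as a weighted combination with the secret parameters a, b and c, and fills in HSV-style saturation- and hue-like computations for the second and third channels. The function name encrypt_pixelwise and every numeric constant other than 255 and 360 are illustrative assumptions.

import numpy as np

def encrypt_pixelwise(img_rgb, a, b, c):
    # Hypothetical per-pixel encryption transform; a + b + c == 1 and a, b, c > 0.
    # img_rgb: H x W x 3 uint8 array in RGB order.
    assert abs(a + b + c - 1.0) < 1e-6 and min(a, b, c) > 0
    rgb = img_rgb.astype(np.float64)
    r, g, bl = rgb[..., 0] / 255.0, rgb[..., 1] / 255.0, rgb[..., 2] / 255.0  # "reduction"

    # First channel: weighted combination with the secret encryption parameters.
    first = 255.0 * (a * r + b * g + c * bl)

    cmax = np.maximum(np.maximum(r, g), bl)
    cmin = np.minimum(np.minimum(r, g), bl)

    # Second channel: 255 where max(R, G, B) == 0, otherwise a saturation-like value.
    second = np.where(cmax == 0, 255.0,
                      255.0 * (cmax - cmin) / np.where(cmax == 0, 1, cmax))

    # Third channel: hue-like angle from max, min and the individual channels;
    # add 360 when the intermediate value is negative.
    delta = np.where(cmax - cmin == 0, 1, cmax - cmin)
    hue = np.select(
        [cmax == cmin, cmax == r, cmax == g],
        [np.zeros_like(r), 60 * (g - bl) / delta, 60 * (bl - r) / delta + 120],
        default=60 * (r - g) / delta + 240,
    )
    third = np.where(hue < 0, hue + 360, hue)

    return np.stack([first, second, third], axis=-1)

The "add 360 if negative" rule matches the hue computation above; the concrete numerators and scale factors, however, are only visible in the original formula images.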
Optionally, inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model training is completed, including:
selecting a first encrypted training sample from the encrypted training sample set, inputting the first encrypted training sample into a deep neural network model, marking the first encrypted training sample with a training expected value, and setting a loss function in the deep neural network model;
extracting the characteristics of the first encrypted training sample through the weight in the deep neural network model;
calculating a measured value of a first encrypted training sample for the feature;
calculating a loss value according to the measured value, the training expected value and the loss function to generate loss value change data and iteration turns, wherein the loss value change data is statistical data of the loss value generated by each training;
judging whether the loss value change data and/or the iteration turns meet the training conditions;
and if the loss value change data and/or the iteration turns meet the training conditions, determining that the deep neural network model is trained completely.
Optionally, the loss value is calculated according to the measured value, the training expected value and the loss function to generate loss value change data, specifically:
loss = −Σ yᵢ·log(ŷᵢ), summed over the n classes, where loss is the loss value, ŷᵢ is the measured value, and yᵢ is the i-th type of training expected value.
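As a sanity check, a minimal Python sketch of this cross-entropy computation is given below; the use of the natural logarithm and the small epsilon guard are assumptions, since the exact formula appears only as an image in the source.

import numpy as np

def cross_entropy(measured, expected):
    # loss = -sum_i expected_i * log(measured_i), summed over the n classes
    measured = np.asarray(measured, dtype=float)
    expected = np.asarray(expected, dtype=float)
    eps = 1e-12  # guard against log(0)
    return float(-np.sum(expected * np.log(measured + eps)))

# Example from the digit-recognition illustration later in the description:
# the true class is "5" and the network assigns it probability 0.7,
# so the loss reduces to -log(0.7), approximately 0.357.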
Optionally, after determining whether the loss value change data and/or the iteration turns satisfy the training condition, the protection method further includes:
if the loss value change data and/or the iteration turns do not meet the training conditions, selecting a second encrypted training sample from the encrypted training sample set to input into the deep neural network model after the weight of the deep neural network model is updated according to the small batch gradient descent method, or inputting the first encrypted training sample into the deep neural network model again after the weight of the deep neural network model is updated according to the small batch gradient descent method.
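A minimal sketch of the small-batch (mini-batch) gradient descent weight update referred to above; the learning rate value and the shape of the gradients are illustrative assumptions.

import numpy as np

def minibatch_gradient_descent_step(weights, per_sample_grads, learning_rate=0.01):
    # Average the gradients over the mini-batch and take one descent step.
    batch_gradient = np.mean(per_sample_grads, axis=0)
    return weights - learning_rate * batch_gradient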
Optionally, after inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained, the protection method further includes:
acquiring an image to be detected;
carrying out encryption conversion processing on an image to be detected according to an encryption transformation function;
inputting the image to be detected after the encryption conversion processing into a deep neural network model;
and generating a detection result of the image to be detected through the deep neural network model.
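The four steps above describe the inference path: the image to be detected is passed through the same encryption transformation before it reaches the model. A short sketch follows, reusing the hypothetical encrypt_pixelwise function from the earlier example; model.predict stands in for whichever inference engine is actually deployed.

import numpy as np

def detect(image_rgb, model, a, b, c):
    # Encrypt the incoming image with the same secret parameters used during training,
    # then run the trained deep neural network model on the encrypted image.
    encrypted = encrypt_pixelwise(image_rgb, a, b, c)
    return model.predict(encrypted[np.newaxis, ...])

A model trained only on encrypted samples gives meaningful results only for images preprocessed this way, which is the protective effect the method relies on.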
The present application provides, in a second aspect, a protection device for a deep neural network model, including:
the first acquisition unit is used for acquiring a deep neural network model;
the second acquisition unit is used for acquiring an original training sample set, and the original training sample set at least comprises two original training samples;
a third obtaining unit, configured to obtain an encryption transformation function according to an image type of an original training sample set;
the first encryption unit is used for performing encryption conversion processing on the original training samples in the original training sample set according to the encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set have different pixel point parameters from the original training samples in the original training sample set;
and the training unit is used for inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.
Optionally, the first encryption unit includes:
acquiring RGB channel parameters of a target pixel point in an original training sample, wherein the RGB channel parameters comprise an R channel value, a G channel value and a B channel value;
carrying out reduction processing on the R channel value, the G channel value and the B channel value;
calculating and generating a first channel value of the target pixel point according to the R channel value, the G channel value and the B channel value after the reduction processing and the encryption parameter in the encryption transformation function;
judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
if the channel value is 0, setting the second channel value of the target pixel point to be 255;
if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
calculating a third channel value of the target pixel point according to the maximum value, the minimum value, the R channel value, the G channel value and the B channel value of the RGB channel parameters;
integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
and processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set.
Optionally, the reduction processing is performed on the R channel value, the G channel value and the B channel value according to a reduction formula (the formula is given as an image in the original publication), where R, G and B denote the R channel value, the G channel value and the B channel value of the target pixel point, and R′, G′ and B′ denote the reduced R channel value, G channel value and B channel value of the target pixel point.
The first channel value of the target pixel point is calculated and generated from the reduced R channel value, G channel value and B channel value and the encryption parameters in the encryption transformation function (formula given as an image in the original publication), where a, b and c are the encryption parameters in the encryption transformation function, a + b + c = 1 with a, b and c all greater than 0, and an intermediate value of the first channel value is computed before the first channel value itself.
Whether the maximum value of the RGB channel parameters of the target pixel point is 0 is judged; if it is 0, the second channel value of the target pixel point is set to 255; if not, the second channel value of the target pixel point is calculated from the maximum value and the minimum value of the RGB channel parameters (formula given as an image in the original publication), where min denotes the minimum of the reduced R, G and B channel values and max denotes the maximum of the reduced R, G and B channel values, an intermediate value of the second channel value being computed before the second channel value itself.
The third channel value of the target pixel point is calculated from the maximum value and the minimum value of the RGB channel parameters together with the R channel value, the G channel value and the B channel value (formula given as an image in the original publication); the formula involves a first intermediate value and a second intermediate value of the third channel value.
Optionally, the training unit includes:
selecting a first encrypted training sample from the encrypted training sample set, inputting the first encrypted training sample into a deep neural network model, marking the first encrypted training sample with a training expected value, and setting a loss function in the deep neural network model;
extracting the characteristics of the first encrypted training sample through the weight in the deep neural network model;
calculating a measured value of a first encrypted training sample for the feature;
calculating a loss value according to the measured value, the training expected value and the loss function to generate loss value change data and iteration turns, wherein the loss value change data is statistical data of the loss value generated by each training;
judging whether the loss value change data and/or the iteration turns meet the training conditions;
and if the loss value change data and/or the iteration turns meet the training conditions, determining that the deep neural network model is trained completely.
Optionally, the loss value is calculated according to the measured value, the training expected value and the loss function to generate loss value change data, specifically:
loss = −Σ yᵢ·log(ŷᵢ), summed over the n classes, where loss is the loss value, ŷᵢ is the measured value, and yᵢ is the i-th type of training expected value.
Optionally, the training unit further comprises:
if the loss value change data and/or the iteration turns do not meet the training conditions, selecting a second encrypted training sample from the encrypted training sample set to input into the deep neural network model after the weight of the deep neural network model is updated according to the small batch gradient descent method, or inputting the first encrypted training sample into the deep neural network model again after the weight of the deep neural network model is updated according to the small batch gradient descent method.
Optionally, the protection device further includes:
the fourth acquisition unit is used for acquiring an image to be detected;
the second encryption unit is used for carrying out encryption conversion processing on the image to be detected according to the encryption transformation function;
the input unit is used for inputting the image to be detected after the encryption conversion processing into the deep neural network model;
and the generating unit is used for generating a detection result of the image to be detected through the deep neural network model.
A third aspect of the present application provides an electronic device, comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the protection method of the first aspect and any optional protection method of the first aspect.
A fourth aspect of the present application provides a computer readable storage medium having a program stored thereon, the program, when executed on a computer, performing the method of the first aspect and any optional protection method of the first aspect.
According to the technical scheme, the method has the following advantages:
in the application, a deep neural network model is obtained at first, and the deep neural network model is a neural network model with set basic parameters. And then, acquiring an original training sample set according to the training task, wherein the original training sample set at least comprises two original training samples and meets the requirement of the training task. And then, acquiring an encryption transformation function according to the image type of the original training sample set, and performing encryption conversion processing on the original training samples in the original training sample set according to the encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set are different from the original training sample pixel point parameters of the original training sample set. And finally, inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained. And transforming the pixel point parameters of the original training sample through an encryption transformation function, so that the generated encrypted training sample is different from the original training sample in color, and then inputting the encrypted training sample into the deep neural network model for training, wherein the obtained deep neural network model is only suitable for the type of the encrypted training sample, but not suitable for the original training sample. Even if the deep neural network model is stolen, as the deep neural network model is only suitable for the type of the encrypted training sample, a stealer cannot use the deep neural network model to detect the conventional type of image, and the risk of stealing the deep neural network model is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a schematic diagram of an embodiment of a protection method for a deep neural network model of the present application;
FIGS. 2-1, 2-2 and 2-3 are schematic diagrams of another embodiment of the protection method of the deep neural network model of the present application;
FIG. 3 is a flow chart illustrating an embodiment of a deep neural network model layer according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another embodiment of a deep neural network model network layer in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another embodiment of a deep neural network model network layer in an embodiment of the present application;
FIG. 6 is a schematic diagram of another embodiment of a protection device of the deep neural network model of the present application;
fig. 7 is a schematic diagram of an embodiment of an electronic device of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the prior art, the existing model encryption libraries and model watermarking technology serve as protection schemes for the deep neural network model, and each has its own defects. Besides requiring complex encryption and decryption processing, a model encryption library still leaves the deep neural network model decrypted in memory exposed to theft, and not all inference engines support loading a deep neural network model from memory. Model watermarking requires an adapted deep neural network model development and training process, which increases the training difficulty and complexity, and since it is after-the-fact authentication it can hardly prevent the deep neural network model from being stolen. In summary, the deep neural network model in existing protection schemes still runs the risk of being stolen and then used.
Based on the above, the application discloses a protection method, apparatus, electronic device and medium for a deep neural network model, which are used to reduce the risk of the deep neural network model being stolen.
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method of the present application may be applied to a server, a device, a terminal, or other devices with logic processing capability, which is not limited in the present application. For convenience of description, the following description takes a terminal as the execution body.
Referring to fig. 1, the present application provides an embodiment of a method for protecting a deep neural network model, including:
101. acquiring a deep neural network model;
in this embodiment, before using the deep neural network model, certain parameters need to be set, for example the loss function and the initial weights. After initial training on a sample set reaches a certain level, the model can be used for the subsequent training.
The deep neural network model used in this embodiment needs to be determined according to the requirements of a developer, and may be a VGG-16 deep neural network model, a VGG-19 deep neural network model, or a deep neural network model designed by the developer, which is not limited herein.
In the present embodiment, a VGG-16 deep neural network model is taken as an example, where the VGG-16 deep neural network model includes conv3-64, conv3-64, maxpool, conv3-128, conv3-128, maxpool, conv3-256, conv3-256, conv3-256, maxpool, conv3-512, conv3-512, conv3-512, maxpool, conv3-512, conv3-512, conv3-512, maxpool, FC-4096, FC-1000, softmax.
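The layer sequence above can be written compactly as a configuration list; the sketch below merely restates the text ("conv3-N" meaning a 3×3 convolution with N output channels, "M" a max-pooling layer) and is not code taken from the patent.

# "conv3-N" = 3x3 convolution with N output channels, "M" = max pooling
VGG16_CONFIG = [
    64, 64, "M",
    128, 128, "M",
    256, 256, 256, "M",
    512, 512, 512, "M",
    512, 512, 512, "M",
]
# followed by the fully connected head (FC-4096, FC-1000) and a softmax layer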
102. Acquiring an original training sample set, wherein the original training sample set at least comprises two original training samples;
the terminal obtains an original training sample set according to the requirements of a developer, wherein the original training sample set comprises a plurality of original training samples. The original training samples may be vehicle images, face images, cat images, etc., and are not limited herein.
After the original training sample set is obtained, the terminal may perform sample expansion preprocessing on the original training samples in the original training sample set, where the sample expansion preprocessing includes scaling, cropping, rotation, and unification of the DC component of the photo background gray level. For an original training sample set obtained by photographing, sample expansion preprocessing needs to be performed on the photos before they are sent to the deep neural network model for training.
A large number of training samples are needed when training a deep neural network model, which is built from the data features learned from those samples. In some cases the original sample set is insufficient, and samples need to be artificially added through data augmentation. Data augmentation includes image operations such as rotation, offset, mirroring, cropping and stretching of a picture, so that the new picture differs from the original picture in appearance; in a certain sense a new picture is generated and the data set is expanded.
Cropping: the original training sample set obtained by shooting with a camera includes some surrounding background in addition to the core feature portion. When the proportion of background is high, training and detection of the deep neural network model may be affected, and the extra image area increases the time cost and GPU memory consumption during training and testing, so the extra background needs to be removed by cropping while more of the core feature portion is retained.
In addition to cropping, rotation, offset, mirroring and stretching may also be applied to the images in the original training sample set to expand the samples, as sketched below.
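A small Pillow-based sketch of these sample-expansion operations; the specific angle, offset, crop margins and stretch factor are illustrative values only, not parameters from the patent.

from PIL import ImageChops, ImageOps

def expand_sample(img):
    # Yield extra training images derived from one original picture.
    w, h = img.size
    yield img.rotate(15, expand=True)                              # rotation
    yield ImageOps.mirror(img)                                     # mirroring
    yield ImageChops.offset(img, w // 20, 0)                       # offset (translation)
    yield img.crop((w // 10, h // 10, w - w // 10, h - h // 10))   # cropping away background
    yield img.resize((int(w * 1.2), h))                            # stretching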
Unifying the DC component of the photo background gray level: because the training samples are shot by several photographing teams whose shooting conditions differ, the background gray levels of different original training samples differ, which is inconvenient for training and detection of the deep neural network model, and the differing background gray levels of the images in the original training sample set may affect the final detection result. Each original training sample contains a gray-level DC component of the background and a gray-level AC component of the core features; by retaining the gray-level AC component of the core features in the image and unifying the gray-level DC component of the background in all original training samples, the deep neural network model can adapt to all original training samples with different background gray levels in the original training sample set.
For the RGB three-channel original training samples in the original training sample set, the processing steps differ slightly between channels: the average gray level of the original training sample is calculated, the average is subtracted from the current pixel value and a preset gray value is added, and finally overflow and underflow of the gray value are handled. The green G channel is processed exactly as above; for the red R channel and the blue B channel, after the average pixel gray level of each channel is subtracted, the uniform gray value that is added differs from the green channel's: a uniform gray value scaled by the ratio of that channel's average gray level to the green channel's average gray level is added.
For example, if the average pixel gray levels of the green, red and blue channels are 50, 75 and 100 respectively, and the green channel is given a uniform gray level of 128, then all pixel values of the green channel have 50 subtracted and 128 added, all pixel values of the red channel have 75 subtracted and 128 × 75/50 added, and all pixel values of the blue channel have 100 subtracted and 128 × 100/50 added.
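A NumPy sketch of this unification step, following the worked example (green channel as the reference, a uniform level of 128, and clipping to handle overflow and underflow); the function name and the clipping choice are assumptions.

import numpy as np

def unify_background_dc(img_rgb, uniform_gray=128.0):
    # Remove each channel's average (DC) gray level and add back a uniform level,
    # scaling the level for the red and blue channels by the ratio of their own
    # mean to the green channel's mean, then clip to handle over-/underflow.
    img = img_rgb.astype(np.float64)
    mean_r, mean_g, mean_b = (img[..., i].mean() for i in range(3))
    out = np.empty_like(img)
    out[..., 0] = img[..., 0] - mean_r + uniform_gray * mean_r / mean_g
    out[..., 1] = img[..., 1] - mean_g + uniform_gray
    out[..., 2] = img[..., 2] - mean_b + uniform_gray * mean_b / mean_g
    return np.clip(out, 0, 255).astype(np.uint8)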
103. Acquiring an encryption transformation function according to the image type of the original training sample set;
the terminal obtains different encryption transformation functions according to the types of the original training sample set, and the encryption transformation functions are mainly used for carrying out pixel transformation on the original training samples in the original training sample set.
The encryption transformation function is determined according to the type of the original training sample set; for example, different encryption transformation functions are used for vehicle images and face images, because if the same encryption transformation function were used, the feature pixel points might merge into the image's surroundings. For instance, when an encryption transformation function processes a vehicle image, the pixel points of the image change, and the pixel points of both the vehicle region and the environment region change, but the two regions can still be distinguished. When the same encryption transformation function processes a face image, however, the effect may deteriorate: the pixel points of the face region may, after transformation, merge into the environment region, so that the image can no longer be used for training. The encryption transformation function therefore needs to be obtained according to the image type of the original training sample set.
104. Carrying out encryption conversion processing on original training samples in an original training sample set according to an encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set have different pixel point parameters from the original training samples in the original training sample set;
and the terminal carries out pixel point transformation on the original training samples in the original training sample set according to the encryption transformation function, and converts the pixel point parameters of the original training sample set into new pixel point parameters through a preset formula so as to generate the encryption training sample set.
The pixel parameter may be an RGB channel parameter or a YUV channel parameter, which is not limited herein, and mainly refers to a transformation operation related to a pixel, and all of which belong to the protection scope of this embodiment.
105. And inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.
And after the original training samples are subjected to encryption transformation of the encryption transformation function, the terminal inputs the encrypted training samples in the encrypted training sample set into the deep neural network model for training, so that the deep neural network model updates the weight until the deep neural network model is trained. The condition for judging the completion of the deep neural network model training may be that the loss value reaches below a preset threshold, the number of encrypted training samples for model training reaches a preset value, or the loss value belongs to a convergence state in approximately 10000 times of training, and is not limited herein.
In the application, firstly, a terminal obtains a deep neural network model, and the deep neural network model is a neural network model with set basic parameters. And then the terminal acquires an original training sample set according to the training task, wherein the original training sample set at least comprises two original training samples and meets the requirement of the training task. And next, the terminal acquires an encryption transformation function according to the original training sample set, and performs encryption conversion processing on the original training samples in the original training sample set according to the encryption transformation function to generate an encrypted training sample set, wherein the encrypted training samples in the encrypted training sample set have different pixel point parameters from the original training samples in the original training sample set. And finally, inputting the encrypted training samples in the encrypted training sample set into the deep neural network model by the terminal for training until the deep neural network model is trained. The pixel point parameters of the original training sample are transformed through an encryption transformation function, so that the generated encrypted training sample is different from the original training sample in color, the encrypted training sample is input into the deep neural network model for training, and the obtained deep neural network model is only suitable for the type of the encrypted training sample and is not suitable for the original training sample. Even if the deep neural network model is stolen, as the deep neural network model is only suitable for the type of the encrypted training sample, a stealer cannot use the deep neural network model to detect the conventional type of image, and the risk of stealing the deep neural network model is reduced.
Referring to fig. 2-1, 2-2, and 2-3, the present application provides another embodiment of a protection method for a deep neural network model, including:
201. acquiring a deep neural network model;
202. acquiring an original training sample set, wherein the original training sample set at least comprises two original training samples;
steps 201 to 202 in this embodiment are similar to steps 101 to 102 in the previous embodiment, and are not described again here.
203. Acquiring an encryption transformation function according to an original training sample set;
step 203 in this embodiment is similar to step 103 in the previous embodiment, and is not described herein again.
204. Acquiring RGB channel parameters of a target pixel point in an original training sample, wherein the RGB channel parameters comprise an R channel value, a G channel value and a B channel value;
205. carrying out reduction processing on the R channel value, the G channel value and the B channel value;
206. calculating and generating a first channel value of the target pixel point according to the R channel value, the G channel value and the B channel value after the reduction processing and the encryption parameter in the encryption transformation function;
207. judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
208. if the channel value is 0, setting the second channel value of the target pixel point to be 255;
209. if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
210. calculating a third channel value of the target pixel point according to the maximum value, the minimum value, the R channel value, the G channel value and the B channel value of the RGB channel parameters;
211. integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
212. processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set;
Firstly, the terminal obtains the RGB channel parameters of a target pixel point in an original training sample, including the R channel value, the G channel value and the B channel value, each taking a value between 0 and 255.
The terminal performs reduction processing on the R channel value, the G channel value and the B channel value (the reduction formula is given as an image in the original publication).
The terminal then calculates and generates the first channel value of the target pixel point from the reduced R channel value, G channel value and B channel value and the encryption parameters in the encryption transformation function; the encryption parameters are set manually. In this embodiment, the encryption parameters in the encryption transformation function are constants a, b and c, where a + b + c = 1 and a > 0, b > 0, c > 0.
The terminal judges whether the maximum value of the RGB channel parameters of the target pixel point is 0. If it is 0, the terminal sets the second channel value of the target pixel point to 255. If not, the terminal calculates the second channel value of the target pixel point from the maximum value and the minimum value of the RGB channel parameters (the formula is given as an image in the original publication).
The terminal calculates the third channel value of the target pixel point from the maximum value and the minimum value of the RGB channel parameters together with the R channel value, the G channel value and the B channel value; when the second intermediate value of the third channel value is less than 0, 360 is added before the subsequent calculation.
The terminal integrates the first channel value, the second channel value and the third channel value to generate a new target pixel point, and processes the original training samples in the original training sample set in this way to generate the encrypted training sample set.
213. Selecting a first encrypted training sample from the encrypted training sample set, inputting the first encrypted training sample into a deep neural network model, marking the first encrypted training sample with a training expected value, and setting a loss function in the deep neural network model;
the terminal can randomly extract a certain number of samples from the encrypted training sample set, and train at the same time, or only select one. For example: and (3) a small batch of 32 training deep neural network models are adopted, and the training effect is achieved through multiple iterations. In this embodiment, the number of iterations is about 25000.
The terminal selects a first encrypted training sample from the encrypted training sample set, and inputs the first encrypted training sample into the deep neural network model, so that the deep neural network model performs learning analysis on the first encrypted training sample. The first encrypted training sample is marked with a training expected value, and the deep neural network model is provided with a corresponding loss function.
214. Extracting the characteristics of the first encrypted training sample through the weight in the deep neural network model;
215. calculating a measured value of a first encrypted training sample for the feature;
and the terminal acquires the characteristics of the first encrypted training sample through the deep neural network model and generates a measured value of the target type to which the encrypted training sample belongs.
The fully connected layer in the deep neural network model represents the importance of a feature through the feature weights obtained from learning analysis of the encrypted training samples. The convolution layers mainly filter the features and screen out the features with strong class-distinguishing ability. The pooling layers perform max-pooling operations and improve the generalization of the deep neural network model. Finally, the features are combined, and the measured value of the target type to which the encrypted training sample belongs is calculated through the softmax layer.
In this embodiment, the calculation formula of the softmax layer is Sj = e^(aj) / Σk e^(ak), with the sum running over k = 1, …, T.
Assume the input of the softmax layer is WX, the input sample of the deep neural network model is I, a 3-class problem (class 1, class 2, class 3) is being discussed, and the true class of sample I is class 2. Then sample I passes through all the network layers before the softmax layer and yields WX, a vector with 3 elements. In the formula above, aj denotes the j-th of those 3 values (finally resulting in S1, S2 and S3), and ak in the denominator denotes each of the 3 values, hence the summation sign (the summation runs over k from 1 to T, where T is the number of classes, and j also ranges from 1 to T). Since e^x is always greater than 0, the numerator is always positive, and the denominator, being a sum of positive numbers, is also positive, so Sj is positive and lies in the range (0, 1). If the deep neural network model is being tested rather than trained, then when a sample passes through the softmax layer and a vector of T × 1 elements is output, the largest element value in the vector is taken as the predicted value of the encrypted training sample.
This is illustrated with an example: suppose WX = [1, 2, 3]; after the softmax layer this becomes [0.09, 0.24, 0.67], and these three numbers indicate that the probabilities of the sample belonging to classes 1, 2 and 3 are 0.09, 0.24 and 0.67 respectively. The maximum probability 0.67 is taken, so the predicted value here is the third class. As another example, for y = [2.0, 1.0, 0.1], passing through the softmax function (softmax layer) gives the corresponding probability values s(y) = [0.7, 0.2, 0.1], and the maximum probability value is 0.7.
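A short Python check of the two softmax examples above; the max-subtraction inside the function is a standard numerical-stability trick and an addition of this sketch, not part of the patent text.

import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

print(np.round(softmax([1.0, 2.0, 3.0]), 2))   # [0.09 0.24 0.67] -> class 3 predicted
print(np.round(softmax([2.0, 1.0, 0.1]), 1))   # [0.7 0.2 0.1]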
216. Calculating a loss value according to the measured value, the training expected value and the loss function to generate loss value change data and iteration turns, wherein the loss value change data is statistical data of the loss value generated by each training;
and the terminal calculates a loss value according to the measured value of the target class to which the first encrypted training sample belongs, the training expected value of the target class to which the first encrypted training sample belongs and a loss function of the deep neural network model, and records the iteration turn of the current deep neural network model. And generating loss value change data according to the statistical data of the loss value generated by each training.
In this embodiment, the loss function may take many forms, such as regression loss and classification loss, each of which is further divided into a number of specific loss functions; this is not limited here, and any function used to calculate the loss value falls within the protection scope of the present application.
In this embodiment, the loss function of the deep neural network model takes a cross entropy loss function as an example, and the calculation method of the cross entropy function is as follows:
Figure 187229DEST_PATH_IMAGE021
the cross entropy mainly characterizes the distance between the actual output (probability) and the expected output (probability); that is, the smaller the value calculated by the cross entropy function, the closer the two probability distributions are. Assume the true distribution (training expected value) is y, the output distribution (measured value) of the deep neural network model is ŷ, and the total number of categories is n. The following is a simple illustration:
in a digit recognition task, if the sample is the digit "5", the true distribution should be: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]. If the distribution output by the network is: [0.1, 0.1, 0, 0, 0, 0.7, 0, 0.1, 0, 0], with the total number of categories n being 10, then the loss function is calculated as:
L1 = −(1 × log 0.7) = −log 0.7 ≈ 0.36
if the distribution of the output of the deep neural network model is as follows: [ 0.2, 0.3, 0.1, 0,0, 0.3, 0.1, 0,0, 0 ], then the loss function is calculated as:
L2 = −(1 × log 0.3) = −log 0.3 ≈ 1.20
Comparing the two cases, the loss value calculated by the cross entropy function for the first distribution is clearly lower than that for the second distribution, so the first distribution is closer to the real distribution.
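These two loss values can be checked with a short sketch of the cross entropy calculation (our own illustration, assuming the natural logarithm; only the nonzero term of the one-hot true distribution contributes):

    import numpy as np

    def cross_entropy(y_true, y_pred, eps=1e-12):
        # L = -sum_i y_i * log(y_hat_i); eps avoids log(0).
        y_pred = np.clip(y_pred, eps, 1.0)
        return -np.sum(y_true * np.log(y_pred))

    y_true = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0], dtype=float)
    out1 = np.array([0.1, 0.1, 0, 0, 0, 0.7, 0, 0.1, 0, 0])
    out2 = np.array([0.2, 0.3, 0.1, 0, 0, 0.3, 0.1, 0, 0, 0])
    print(cross_entropy(y_true, out1))   # -log(0.7) ~ 0.36
    print(cross_entropy(y_true, out2))   # -log(0.3) ~ 1.20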
After the loss value between the model probability distribution and the real probability distribution is calculated in the above manner, all loss values produced since the deep neural network model started training are counted to generate the loss value change data.
217. Judging whether the loss value change data and/or the iteration turns meet the training conditions;
the terminal judges whether the loss values in the loss value change data converge to 0 within a preset interval; when the loss value change data within the preset interval show that the magnitudes and trends of all loss values converge towards 0, it can be determined that the deep neural network model training is complete; otherwise step 219 is executed.
218. If the loss value change data and/or the iteration turns meet the training conditions, determining that the deep neural network model is trained;
when the loss value change data are within the preset interval and all loss values are stable and no longer rising, it can be determined that the deep neural network model has been trained and can be put into use. Alternatively, it is judged whether the number of iterations of the deep neural network model has reached a preset number; when the preset number is reached, the deep neural network model has been trained and can be put into use.
The loss value change data are illustrated below: if all loss values produced in the most recent 10000 training iterations are less than 0.001, and each loss value has an absolute value no larger than that of the previous one, i.e. the loss values no longer increase, it can be determined that the deep neural network model training is complete.
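A minimal sketch of such a stopping check follows (the window size, threshold and function name are illustrative choices, not values mandated by the patent):

    def training_finished(loss_history, iteration, max_iters, window=10000, tol=1e-3):
        # Stop when the iteration budget is reached, or when every loss in the most
        # recent window is below tol and the losses are no longer increasing.
        if iteration >= max_iters:
            return True
        recent = loss_history[-window:]
        if len(recent) < window:
            return False
        small = all(v < tol for v in recent)
        non_increasing = all(later <= earlier for earlier, later in zip(recent, recent[1:]))
        return small and non_increasing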
219. If the loss value change data and/or the iteration turns do not meet the training conditions, selecting a second encrypted training sample from the encrypted training sample set to input into the deep neural network model after the weight of the deep neural network model is updated according to a small batch gradient descent method, or inputting the first encrypted training sample into the deep neural network model again after the weight of the deep neural network model is updated according to the small batch gradient descent method;
when the loss value change data do not meet the training condition, i.e. within the preset interval the magnitudes or trends of the loss values are still rising, or the number of iterations of the deep neural network model has not reached the preset number, it is determined that training of the deep neural network model is not finished. At this time it is also necessary to judge whether the training count of the current training sample has reached the standard, i.e. whether the current training sample has completed the preset number of training passes.
The weight updating of the deep neural network model may be in various manners, in this embodiment, a small batch gradient descent method is taken as an example to update the deep neural network model, and a formula of a gradient updating manner of batch training is as follows:
w ← w − (η / n) · Σ_{i=1}^{n} ∇w L_i(w)

where n is the batch size and η is the learning rate.
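A sketch of the mini-batch update described by this formula (our own NumPy illustration; weights, batch and grad_fn are placeholders for the model parameters, a mini-batch of samples and a per-sample gradient function):

    import numpy as np

    def minibatch_update(weights, batch, grad_fn, lr=0.01):
        # w <- w - (lr / n) * sum of per-sample gradients over the mini-batch.
        n = len(batch)
        grad_sum = sum(grad_fn(weights, x, y) for x, y in batch)
        return weights - lr * grad_sum / n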
The derivation uses backward gradient propagation; please refer to fig. 3, which is a schematic diagram of the deep neural network model layers.
The left side is the first layer, the input layer, which contains two neurons a and b. In the middle is the second layer, the hidden layer, which contains two neurons c and d. On the right is the third layer, the output layer, which contains e and f. Marked on each connection line is w_jk^l, the weight of the connection between layers: w_jk^l denotes the weight from the k-th neuron of layer (l−1) to the j-th neuron of layer l. a_j^l denotes the output of the j-th neuron in layer l, z_j^l denotes the input of the j-th neuron in layer l, and b_j^l denotes the bias of the j-th neuron in layer l. W denotes a weight matrix, Z an input matrix, A an output matrix, and Y the standard answer. L denotes the number of layers of the convolutional neural network model, and the input and output of each neuron are related by a_j^l = σ(z_j^l), where σ(·) denotes the nonlinear activation function.
In forward propagation the signal of the input layer is transmitted to the hidden layer. Taking hidden-layer node c as an example and looking backward from node c (in the direction of the input layer), it can be seen that two arrows point to node c, so the information of nodes a and b is transmitted to node c, and each arrow carries a certain weight. For node c the input signal is therefore:
z_c = w_ca · a + w_cb · b + b_c
similarly, the input signal of the node d is:
z_d = w_da · a + w_db · b + b_d
since computers are good at performing such repetitive (looped) calculations, this can be represented by a matrix multiplication:

[z_c, z_d]^T = [[w_ca, w_cb], [w_da, w_db]] · [a, b]^T + [b_c, b_d]^T
therefore, the output of the hidden layer node after the nonlinear transformation is represented as follows:
[a_c, a_d]^T = σ([z_c, z_d]^T)
similarly, the input signal of the output layer is represented as the weight matrix multiplied by the output of the above layer:
[z_e, z_f]^T = [[w_ec, w_ed], [w_fc, w_fd]] · [a_c, a_d]^T + [b_e, b_f]^T
similarly, the final output of the output layer node after nonlinear mapping is represented as:
[a_e, a_f]^T = σ([z_e, z_f]^T)
With the help of the weight matrices, the input signal produces the output of each layer and finally reaches the output layer. During forward signal propagation the weight matrix therefore acts as the carrier of the signal, linking each layer to the next.
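For illustration only, the forward pass of this 2-2-2 network can be sketched as follows (the concrete weights are made up, and the sigmoid function is assumed as the nonlinear transformation):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x  = np.array([0.5, 0.8])           # outputs of input nodes a, b
    W1 = np.array([[0.1, 0.4],          # weights into hidden nodes c, d
                   [0.3, 0.2]])
    b1 = np.array([0.1, 0.1])
    W2 = np.array([[0.5, 0.6],          # weights into output nodes e, f
                   [0.7, 0.8]])
    b2 = np.array([0.2, 0.2])

    z_hidden = W1 @ x + b1              # input signals of nodes c, d
    a_hidden = sigmoid(z_hidden)        # outputs of nodes c, d
    z_out = W2 @ a_hidden + b2          # input signals of nodes e, f
    a_out = sigmoid(z_out)              # final outputs of nodes e, f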
Referring to fig. 4, fig. 4 is a schematic diagram of a deep neural network model network layer. Now consider backward propagation: since gradient descent requires an explicit error for each layer in order to update the parameters, the next focus is how to propagate the error of the output layer backward to the hidden layer.
The errors of the output-layer and hidden-layer nodes are shown in the figure. The error of the output layer is known, so error analysis is performed for the first hidden-layer node c. Standing on node c again, but this time looking forward (in the direction of the output layer), it can be seen that the two thick arrows pointing out of node c lead to node e and node f, so the error of node c must be related to nodes e and f of the output layer. Output-layer node e has arrows pointing to both hidden nodes c and d, so its error cannot be attributed to node c alone; instead it is distributed in proportion to the connection weights, and the error of node f follows the same principle. The error of hidden-layer node c is therefore:
e_c = (w_ec / (w_ec + w_ed)) · e_e + (w_fc / (w_fc + w_fd)) · e_f
similarly, the error for the hidden layer node d is:
e_d = (w_ed / (w_ec + w_ed)) · e_e + (w_fd / (w_fc + w_fd)) · e_f
to reduce the workload, we can write in the form of matrix multiplication:
[e_c, e_d]^T = [[w_ec/(w_ec+w_ed), w_fc/(w_fc+w_fd)], [w_ed/(w_ec+w_ed), w_fd/(w_fc+w_fd)]] · [e_e, e_f]^T
This matrix is rather cumbersome. It can be simplified to a form similar to forward propagation without destroying its proportions, so the denominator parts can be omitted, and the matrix becomes:
[e_c, e_d]^T = [[w_ec, w_fc], [w_ed, w_fd]] · [e_e, e_f]^T
the weight matrix is actually the transpose of the weight matrix w in forward propagation, so the form is abbreviated as follows:
E_hidden = W^T · E_out
the output layer errors are passed to the hidden layer with the help of the transposed weight matrix, so that we can update the weight matrix connected to the hidden layer with indirect errors. It can be seen that the weight matrix also acts as a transportation engineer during back propagation, but this time the output error of the transport, not the input signal.
Referring to fig. 5, fig. 5 is a schematic diagram of a deep neural network model layer. Next the chain rule is applied. The forward propagation of the input information and the backward propagation of the output error have been introduced above; the parameters are now updated according to the obtained error.

First the hidden-layer weight w11 (the weight from hidden node c to output node e) is updated. Before updating, the derivation proceeds from back to front until w11 is reached:

z_e = w11 · a_c + w12 · a_d + b_e, a_e = σ(z_e)

The partial derivative of the error with respect to w11 is therefore:

∂E/∂w11 = (∂E/∂a_e) · (∂a_e/∂z_e) · (∂z_e/∂w11)

from which the following formula is obtained (all values in it are known):

∂E/∂w11 = δ_e · a_c, where δ_e = (∂E/∂a_e) · σ′(z_e)

Similarly, the partial derivative of the error with respect to w12 is:

∂E/∂w12 = (∂E/∂a_e) · (∂a_e/∂z_e) · (∂z_e/∂w12)

and likewise its evaluated form is:

∂E/∂w12 = δ_e · a_d

Similarly, the partial derivatives of the error with respect to the output-layer biases are:

∂E/∂b_e = δ_e

and, for the other output node,

∂E/∂b_f = δ_f (with δ_f defined analogously for output node f)

Next the input-layer weight w11 (the weight from input node a to hidden node c) is updated. Before updating, the derivation again proceeds from back to front until this first-layer w11 is reached:

z_c = w11 · a + w12 · b + b_c, a_c = σ(z_c)

∂E/∂a_c = δ_e · w_ec + δ_f · w_fc

The partial derivative of the error with respect to this w11 of the input layer is therefore:

∂E/∂w11 = (∂E/∂a_c) · (∂a_c/∂z_c) · (∂z_c/∂w11)

which is derived as:

∂E/∂w11 = (δ_e · w_ec + δ_f · w_fc) · σ′(z_c) · a

Similarly, the respective partial derivatives of the other three parameters of the input layer can be calculated by the same method, which is not repeated here. With the partial derivative of each parameter determined, it is substituted into the gradient descent formula:

w ← w − η · ∂E/∂w

The task of updating the parameters of each layer using the chain rule has thus been completed.
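Putting the chain rule together, one full parameter update for the small illustrative network can be sketched as follows (continuing the earlier sketches; the sigmoid activation and a squared-error term are assumed here purely for illustration, whereas the embodiment above trains with the cross entropy loss):

    lr = 0.5                                               # learning rate (assumed)
    delta_out = -(y - a_out) * a_out * (1 - a_out)         # dE/dz for output nodes e, f
    grad_W2 = np.outer(delta_out, a_hidden)                # dE/dW2
    grad_b2 = delta_out
    delta_hidden = (W2.T @ delta_out) * a_hidden * (1 - a_hidden)
    grad_W1 = np.outer(delta_hidden, x)                    # dE/dW1
    grad_b1 = delta_hidden

    W2 -= lr * grad_W2; b2 -= lr * grad_b2                 # gradient descent step
    W1 -= lr * grad_W1; b1 -= lr * grad_b1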
After the weights of the deep neural network model are updated, the features and measured values of the encrypted training samples acquired during training need to be stored together with the model. The purpose is to retain a copy of the deep neural network model each time the training count of a group of training samples reaches the standard, so that if problems such as poor generalization or overfitting occur later in training, the originally stored deep neural network model can still be used.
After the deep neural network model is updated, the first encrypted training sample can be selected to be input into the deep neural network model again for training, or the second encrypted training sample can be selected from the encrypted training sample set to be input into the deep neural network model for training.
220. Acquiring an image to be detected;
221. carrying out encryption conversion processing on an image to be detected according to an encryption transformation function;
222. inputting the image to be detected after the encryption conversion processing into a deep neural network model;
223. and generating a detection result of the image to be detected through the deep neural network model.
After the deep neural network model is trained, the terminal acquires an image to be detected, the terminal conducts encryption conversion processing on the image to be detected according to an encryption transformation function, the terminal inputs the image to be detected after the encryption conversion processing into the deep neural network model, and the terminal generates a detection result of the image to be detected through the deep neural network model.
In the application, firstly, a terminal obtains a deep neural network model, and the deep neural network model is a neural network model with set basic parameters. And then the terminal acquires an original training sample set according to the training task, wherein the original training sample set at least comprises two original training samples and meets the requirement of the training task. And the terminal performs sample expansion pretreatment on the original training samples in the original training sample set, wherein the sample expansion pretreatment comprises scaling treatment, cutting treatment, rotation treatment and photo background gray level direct current component unification treatment.
Then the terminal acquires an encryption transformation function according to the image type of the original training sample set, and acquires the RGB channel parameters of the target pixel points in the original training sample, the RGB channel parameters comprising an R channel value, a G channel value and a B channel value. The terminal performs reduction processing on the R channel value, the G channel value and the B channel value, and calculates and generates a first channel value of the target pixel point according to the reduced R channel value, G channel value and B channel value and the encryption parameter in the encryption transformation function. The terminal judges whether the maximum value of the RGB channel parameters of the target pixel point is 0; if so, the terminal sets the second channel value of the target pixel point to 255, and if not, the terminal calculates the second channel value of the target pixel point according to the maximum value and the minimum value of the RGB channel parameters. The terminal then calculates a third channel value of the target pixel point according to the maximum value and minimum value of the RGB channel parameters and the R channel value, G channel value and B channel value, and integrates the first channel value, the second channel value and the third channel value to generate a new target pixel point. The terminal processes the original training samples in the original training sample set according to these steps to generate an encrypted training sample set.
And finally, the terminal selects a first encrypted training sample from the encrypted training sample set, inputs the first encrypted training sample into the deep neural network model, marks the first encrypted training sample with a training expected value, and sets a loss function in the deep neural network model. The terminal extracts the characteristics of the first encrypted training sample through the deep neural network model, the terminal calculates the measured values of the first encrypted training sample according to the characteristics, the terminal calculates the loss values according to the measured values, the training expected values and the loss functions to generate loss value change data and iteration turns, and the loss value change data are statistical data of the loss values generated by each training. The terminal judges whether the loss value change data and/or the iteration turns meet the training conditions or not, if the loss value change data and/or the iteration turns meet the training conditions, the terminal determines that the deep neural network model is trained completely, if the loss value change data and/or the iteration turns do not meet the training conditions, the terminal updates the weight of the deep neural network model according to a small batch gradient descent method, then selects a second encrypted training sample from the encrypted training sample set to be input into the deep neural network model, or after updating the weight of the deep neural network model according to the small batch gradient descent method, the terminal inputs the first encrypted training sample into the deep neural network model again.
The terminal obtains an image to be detected, the terminal conducts encryption conversion processing on the image to be detected according to an encryption transformation function, the terminal inputs the image to be detected after the encryption conversion processing into the deep neural network model, and the terminal generates a detection result of the image to be detected through the deep neural network model.
In this embodiment, the pixel point parameters of the original training sample are transformed through the encryption transformation function, so that the generated encrypted training sample is different from the original training sample in color, and then the encrypted training sample is input into the deep neural network model for training, and the obtained deep neural network model is only applicable to the type of the encrypted training sample, but not applicable to the original training sample. Even if the deep neural network model is stolen, as the deep neural network model is only suitable for the type of the encrypted training sample, a stealer cannot use the deep neural network model to detect the conventional type of image, and the risk of stealing the deep neural network model is reduced.
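To make the per-pixel transformation concrete, here is a rough sketch of one possible form. The exact channel formulas of the patent are given in its equation images and are not reproduced here; the arithmetic below (division by 255 as the "reduction", a weighted sum with secret parameters for the first channel, a saturation-like second channel with the 255 special case, and a hue-like third channel) is only an assumed, HSV-style stand-in that follows the structure described above:

    def encrypt_pixel(r, g, b, wa=0.5, wb=0.3, wc=0.2):
        # wa, wb, wc stand for the secret encryption parameters the patent calls a, b, c:
        # they sum to 1 and are all greater than 0.
        r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0      # assumed "reduction" step
        c1 = 255.0 * (wa * r_ + wb * g_ + wc * b_)        # first channel (assumed form)
        cmax, cmin = max(r_, g_, b_), min(r_, g_, b_)
        if cmax == 0:
            c2 = 255.0                                    # special case from the patent
        else:
            c2 = 255.0 * (cmax - cmin) / cmax             # second channel (assumed form)
        if cmax == cmin:                                  # third channel (assumed, hue-like)
            c3 = 0.0
        elif cmax == r_:
            c3 = 42.5 * ((g_ - b_) / (cmax - cmin) % 6)
        elif cmax == g_:
            c3 = 42.5 * ((b_ - r_) / (cmax - cmin) + 2)
        else:
            c3 = 42.5 * ((r_ - g_) / (cmax - cmin) + 4)
        return c1, c2, c3

Only a party holding the same parameters a, b and c can reproduce the transformation, which is what ties the trained model to the encrypted image domain.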
Secondly, in the deep neural network model deployment reasoning stage, the original RGB image (image to be detected) is not directly learned and analyzed, but the detection image is firstly converted into an encrypted image by adopting an encryption transformation function which is the same as that in the development training stage, and then the encrypted deep neural network model obtained by the training is directly loaded in the model reasoning engine to carry out reasoning analysis on the image to be detected, so that an analysis result is obtained.
In addition, the encrypted training samples and the encrypted images to be detected do not affect normal neural network model training or normal model inference.
Meanwhile, when the encrypted deep neural network model is deployed at the user side there is no need to worry about the model being stolen, because the deep neural network model can only obtain correct inference results on encrypted images; its accuracy drops greatly when analysing ordinary RGB images, making it an unusable model and thereby achieving the purpose of protecting the model.
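A sketch of this deployment stage, assuming the hypothetical encrypt_pixel helper from the earlier sketch and a generic callable standing in for the model inference engine:

    import numpy as np

    def encrypt_image(img, wa=0.5, wb=0.3, wc=0.2):
        # img: H x W x 3 RGB array; returns the encrypted 3-channel image.
        out = np.empty(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = encrypt_pixel(*img[i, j], wa, wb, wc)
        return out

    def detect(model, image_to_detect):
        # The model only gives correct results on images encrypted with the same
        # transformation (and parameters) that were used during training.
        encrypted = encrypt_image(image_to_detect)
        return model(encrypted)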
Referring to fig. 6, the present application provides another embodiment of a protection device for a deep neural network model, including:
a first obtaining unit 601, configured to obtain a deep neural network model;
a second obtaining unit 602, configured to obtain an original training sample set, where the original training sample set includes at least two original training samples;
a third obtaining unit 603, configured to obtain an encryption transformation function according to an image type of an original training sample set;
a first encryption unit 604, configured to perform encryption conversion processing on original training samples in an original training sample set according to an encryption transformation function to generate an encrypted training sample set, where pixel point parameters of the encrypted training samples in the encrypted training sample set are different from pixel point parameters of the original training samples in the original training sample set;
optionally, the first encrypting unit 604 includes:
acquiring RGB channel parameters of target pixel points in an original training sample, wherein the RGB channel parameters comprise R channel values, G channel values and B channel values;
carrying out reduction processing on the R channel value, the G channel value and the B channel value;
calculating and generating a first channel value of a target pixel point according to the R channel value, the G channel value and the B channel value after the reduction processing and an encryption parameter in an encryption transformation function;
judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
if the channel value is 0, setting the second channel value of the target pixel point to be 255;
if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
calculating a third channel value of the target pixel point according to the maximum value, the minimum value, the R channel value, the G channel value and the B channel value of the RGB channel parameters;
integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
and processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set.
Optionally, the reduction processing is performed on the R channel value, the G channel value and the B channel value according to the reduction formula given in the corresponding equation image, wherein R, G and B are the R channel value, the G channel value and the B channel value of the target pixel point, and R′, G′ and B′ are the reduced R channel value, the reduced G channel value and the reduced B channel value of the target pixel point.

The first channel value of the target pixel point is calculated and generated according to the reduced R channel value, G channel value and B channel value and the encryption parameters in the encryption transformation function, using the first-channel formula given in the corresponding equation image, wherein P1 is the first channel value of the target pixel point, a, b and c are the encryption parameters in the encryption transformation function, P1′ is the intermediate value of the first channel value, a, b and c add up to 1, and a, b and c are all greater than 0.

Whether the maximum value of the RGB channel parameters of the target pixel point is 0 is judged; if it is 0, the second channel value of the target pixel point is set to 255, and if not, the second channel value of the target pixel point is calculated according to the maximum value and the minimum value of the RGB channel parameters, using the second-channel formulas given in the corresponding equation images, wherein Cmin is the minimum and Cmax is the maximum of the reduced R channel value R′, the reduced G channel value G′ and the reduced B channel value B′, P2 is the second channel value of the target pixel point, and P2′ is the intermediate value of the second channel value.

The third channel value of the target pixel point is calculated according to the maximum value and minimum value of the RGB channel parameters and the R channel value, G channel value and B channel value, using the third-channel formula given in the corresponding equation image, wherein P3 is the third channel value of the target pixel point, P3′ is the first intermediate value of the third channel value, and P3″ is the second intermediate value of the third channel value.
A training unit 605, configured to input the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the training of the deep neural network model is completed;
optionally, the training unit 605 includes:
selecting a first encrypted training sample from the encrypted training sample set, inputting the first encrypted training sample into a deep neural network model, marking the first encrypted training sample with a training expected value, and setting a loss function in the deep neural network model;
extracting the characteristics of the first encrypted training sample through the weight in the deep neural network model;
calculating a measured value of a first encrypted training sample for the feature;
calculating a loss value according to the measured value, the training expected value and the loss function to generate loss value change data and iteration turns, wherein the loss value change data is statistical data of the loss value generated by each training;
judging whether the loss value change data and/or the iteration turns meet the training conditions;
and if the loss value change data and/or the iteration turns meet the training conditions, determining that the deep neural network model is trained completely.
Optionally, the loss value is calculated from the measured value, the training expected value and the loss function to generate the loss value change data, specifically by the cross entropy loss function

L = − Σ_{i=1}^{n} y_i · log(ŷ_i)

where L is the loss value, ŷ_i is the measured value, and y_i is the training expected value of the i-th class.
Optionally, the training unit 605 further includes:
if the loss value change data and/or the iteration turns do not meet the training conditions, selecting a second encrypted training sample from the encrypted training sample set to be input into the deep neural network model after the weight of the deep neural network model is updated according to a small batch gradient descent method, or inputting the first encrypted training sample into the deep neural network model again after the weight of the deep neural network model is updated according to the small batch gradient descent method.
A fourth acquiring unit 606 for acquiring an image to be detected;
a second encryption unit 607, configured to perform encryption conversion processing on the image to be detected according to the encryption transformation function;
an input unit 608, configured to input the encrypted and converted image to be detected into the deep neural network model;
and the generating unit 609 is used for generating a detection result of the image to be detected through the deep neural network model.
Referring to fig. 7, the present application provides an electronic device, including:
a processor 701, a memory 702, an input-output unit 703, and a bus 704.
The processor 701 is connected to a memory 702, an input-output unit 703 and a bus 704.
The memory 702 holds a program that the processor 701 calls to perform the protection method as in fig. 1, 2-2, and 2-3.
The present application provides a computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing the protection method as in fig. 1, 2-2 and 2-3.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.

Claims (8)

1. A protection method of a deep neural network model is characterized by comprising the following steps:
acquiring a deep neural network model;
obtaining an original training sample set, wherein the original training sample set at least comprises two original training samples;
acquiring an encryption transformation function according to the image type of the original training sample set;
acquiring RGB channel parameters of a target pixel point in the original training sample, wherein the RGB channel parameters comprise an R channel value, a G channel value and a B channel value;
wherein R is the R channel value of the target pixel point, G is the G channel value of the target pixel point, and B is the B channel value of the target pixel point, and R′, G′ and B′ are the reduced R channel value, the reduced G channel value and the reduced B channel value of the target pixel point, obtained according to the reduction formula given in the corresponding equation image;
calculating and generating a first channel value of the target pixel point according to the R channel value, the G channel value, the B channel value and the encryption parameter in the encryption transformation function after the reduction processing;
wherein the first channel value is calculated according to the first-channel formula given in the corresponding equation image, P1 is the first channel value of the target pixel point, a, b and c are the encryption parameters in the encryption transformation function, P1′ is the intermediate value of the first channel value, a, b and c add up to 1, and a, b and c are all greater than 0;
judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
if the channel value is 0, setting the second channel value of the target pixel point to be 255;
if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
wherein the second channel value is calculated according to the second-channel formulas given in the corresponding equation images, Cmin is the minimum and Cmax is the maximum of the reduced R channel value R′, the reduced G channel value G′ and the reduced B channel value B′, P2 is the second channel value of the target pixel point, and P2′ is the intermediate value of the second channel value;
calculating a third channel value of the target pixel point according to the maximum value and the minimum value of the RGB channel parameters, the R channel value, the G channel value and the B channel value;
wherein the third channel value is calculated according to the third-channel formula given in the corresponding equation image, P3 is the third channel value of the target pixel point, P3′ is the first intermediate value of the third channel value, and P3″ is the second intermediate value of the third channel value;
integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set;
inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.
2. The protection method according to claim 1, wherein the inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model training is completed comprises:
selecting a first encrypted training sample from the encrypted training sample set, and inputting the first encrypted training sample into the deep neural network model, wherein the first encrypted training sample is marked with a training expected value, and the deep neural network model is provided with a loss function;
extracting features of the first encrypted training sample through weights in the deep neural network model;
calculating a measurement of the first encrypted training sample for the feature;
calculating a loss value according to the measured value, the training expected value and the loss function to generate loss value change data and iteration turns, wherein the loss value change data is statistical data of the loss value generated by each training;
judging whether the loss value change data and/or the iteration turns meet training conditions;
and if the loss value change data and/or the iteration turns meet the training conditions, determining that the deep neural network model is trained completely.
3. The protection method according to claim 2, wherein the calculating of the loss value according to the measured value, the training expected value, and the loss function to generate the loss value variation data includes:
L = − Σ_{i=1}^{n} y_i · log(ŷ_i)

wherein L is the loss value, ŷ_i is the measured value, and y_i is the training expected value of the i-th class.
4. The protection method according to claim 2, wherein after the determining whether the loss value change data and/or the iteration turns satisfy a training condition, the protection method further comprises:
and if the loss value change data and/or the iteration turns do not meet the training conditions, selecting a second encrypted training sample from the encrypted training sample set to be input into the deep neural network model after updating the weight of the deep neural network model according to a small batch gradient descent method, or inputting the first encrypted training sample into the deep neural network model again after updating the weight of the deep neural network model according to the small batch gradient descent method.
5. The protection method according to any one of claims 1 to 3, wherein after the inputting of the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model training is completed, the protection method further comprises:
acquiring an image to be detected;
carrying out encryption conversion processing on the image to be detected according to the encryption transformation function;
inputting the image to be detected after the encryption conversion processing into the deep neural network model;
and generating a detection result of the image to be detected through the deep neural network model.
6. A protection device for a deep neural network model is characterized by comprising:
the first acquisition unit is used for acquiring a deep neural network model;
the second acquisition unit is used for acquiring an original training sample set, and the original training sample set at least comprises two original training samples;
a third obtaining unit, configured to obtain an encryption transformation function according to an image type of the original training sample set;
the first encryption unit is used for carrying out encryption conversion processing on original training samples in the original training sample set according to an encryption transformation function to generate an encrypted training sample set, and the encrypted training samples in the encrypted training sample set are different from the original training sample pixel point parameters of the original training sample set;
the first encryption unit specifically includes:
acquiring RGB channel parameters of target pixel points in the original training sample, wherein the RGB channel parameters comprise R channel values, G channel values and B channel values;
wherein R is the R channel value of the target pixel point, G is the G channel value of the target pixel point, and B is the B channel value of the target pixel point, and R′, G′ and B′ are the reduced R channel value, the reduced G channel value and the reduced B channel value of the target pixel point, obtained according to the reduction formula given in the corresponding equation image;
calculating and generating a first channel value of the target pixel point according to the R channel value, the G channel value, the B channel value and the encryption parameter in the encryption transformation function after the reduction processing;
specifically, the first channel value of the target pixel point is calculated according to the first-channel formula given in the corresponding equation image, wherein P1 is the first channel value of the target pixel point, a, b and c are the encryption parameters in the encryption transformation function, P1′ is the intermediate value of the first channel value, a, b and c add up to 1, and a, b and c are all greater than 0;
judging whether the maximum value of the RGB channel parameters of the target pixel point is 0 or not;
if the channel value is 0, setting the second channel value of the target pixel point to be 255;
if not, calculating a second channel value of the target pixel point according to the maximum value of the RGB channel parameters and the minimum value of the RGB channel parameters;
specifically, whether the maximum value of the RGB channel parameters of the target pixel point is 0 is judged; if it is 0, the second channel value of the target pixel point is set to 255, and if not, the second channel value of the target pixel point is calculated according to the maximum value and the minimum value of the RGB channel parameters, using the second-channel formulas given in the corresponding equation images, wherein Cmin is the minimum and Cmax is the maximum of the reduced R channel value R′, the reduced G channel value G′ and the reduced B channel value B′, P2 is the second channel value of the target pixel point, and P2′ is the intermediate value of the second channel value;
calculating a third channel value of the target pixel point according to the maximum value and the minimum value of the RGB channel parameters, the R channel value, the G channel value and the B channel value;
specifically, the third channel value of the target pixel point is calculated according to the maximum value and minimum value of the RGB channel parameters and the R channel value, G channel value and B channel value, using the third-channel formula given in the corresponding equation image, wherein P3 is the third channel value of the target pixel point, P3′ is the first intermediate value of the third channel value, and P3″ is the second intermediate value of the third channel value;
integrating the first channel value, the second channel value and the third channel value to generate a new target pixel point;
processing the original training samples in the original training sample set according to the steps to generate an encrypted training sample set;
and the training unit is used for inputting the encrypted training samples in the encrypted training sample set into the deep neural network model for training until the deep neural network model is trained.
7. An electronic device, comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to execute the protection method according to any one of claims 1 to 5.
8. A computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing the protection method of any one of claims 1 to 5.
CN202210595796.4A 2022-05-30 2022-05-30 Protection method and device for deep neural network model, electronic equipment and medium Active CN114676396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210595796.4A CN114676396B (en) 2022-05-30 2022-05-30 Protection method and device for deep neural network model, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210595796.4A CN114676396B (en) 2022-05-30 2022-05-30 Protection method and device for deep neural network model, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114676396A CN114676396A (en) 2022-06-28
CN114676396B true CN114676396B (en) 2022-08-30

Family

ID=82080073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210595796.4A Active CN114676396B (en) 2022-05-30 2022-05-30 Protection method and device for deep neural network model, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114676396B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159147A (en) * 2021-04-08 2021-07-23 平安科技(深圳)有限公司 Image identification method and device based on neural network and electronic equipment
CN114037596A (en) * 2022-01-07 2022-02-11 湖南菠萝互娱网络信息有限公司 End-to-end image steganography method capable of resisting physical transmission deformation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108122236B (en) * 2017-12-18 2020-07-31 上海交通大学 Iterative fundus image blood vessel segmentation method based on distance modulation loss
WO2019227009A1 (en) * 2018-05-24 2019-11-28 Wade Mamadou Ibra Distributed homomorphic image ecryption and decryption
CN114419712A (en) * 2020-05-14 2022-04-29 支付宝(杭州)信息技术有限公司 Feature extraction method for protecting personal data privacy, model training method and hardware
CN112991217A (en) * 2021-03-24 2021-06-18 吴统明 Medical image acquisition method, device and equipment
CN113592696A (en) * 2021-08-12 2021-11-02 支付宝(杭州)信息技术有限公司 Encryption model training, image encryption and encrypted face image recognition method and device
CN114298202A (en) * 2021-12-23 2022-04-08 上海高德威智能交通系统有限公司 Image encryption method and device, electronic equipment and storage medium
CN114553499B (en) * 2022-01-28 2024-02-13 中国银联股份有限公司 Image encryption and image processing method, device, equipment and medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159147A (en) * 2021-04-08 2021-07-23 平安科技(深圳)有限公司 Image identification method and device based on neural network and electronic equipment
CN114037596A (en) * 2022-01-07 2022-02-11 湖南菠萝互娱网络信息有限公司 End-to-end image steganography method capable of resisting physical transmission deformation

Also Published As

Publication number Publication date
CN114676396A (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN107529650B (en) Closed loop detection method and device and computer equipment
CN110728330A (en) Object identification method, device, equipment and storage medium based on artificial intelligence
CN110879982B (en) Crowd counting system and method
CN113111979B (en) Model training method, image detection method and detection device
US11392800B2 (en) Computer vision systems and methods for blind localization of image forgery
CN112149500B (en) Face recognition small sample learning method with partial shielding
CN112001983B (en) Method and device for generating occlusion image, computer equipment and storage medium
CN115526891B (en) Training method and related device for defect data set generation model
CN113674190A (en) Image fusion method and device for generating countermeasure network based on dense connection
CN116994044A (en) Construction method of image anomaly detection model based on mask multi-mode generation countermeasure network
CN113642717A (en) Convolutional neural network training method based on differential privacy
CN114021704A (en) AI neural network model training method and related device
CN116644439B (en) Model safety assessment method based on denoising diffusion model
CN111126566B (en) Abnormal furniture layout data detection method based on GAN model
CN114676396B (en) Protection method and device for deep neural network model, electronic equipment and medium
CN112348808A (en) Screen perspective detection method and device
CN116543433A (en) Mask wearing detection method and device based on improved YOLOv7 model
CN116391193A (en) Method and apparatus for energy-based latent variable model based neural networks
CN113111957B (en) Anti-counterfeiting method, device, equipment, product and medium based on feature denoising
CN113506272B (en) False video detection method and system
KR102421289B1 (en) Learning method and learning device for image-based detection of visibility according to parallel decision voting algorithm and testing method and testing device using the same
CN116563615B (en) Bad picture classification method based on improved multi-scale attention mechanism
Shindo et al. Recognition of Object Hardness from Images Using a Capsule Network
Singhal Comparative Analysis of Passive Image Forgery Detection between CNN and CNN-LSTM Models
CN115393898A (en) Multi-task pedestrian attribute classification training method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266000 F3, Jingkong building, No. 57 Lushan Road, Huangdao District, Qingdao, Shandong

Patentee after: Shandong Jijian Technology Co.,Ltd.

Address before: 266000 F3, Jingkong building, No. 57 Lushan Road, Huangdao District, Qingdao, Shandong

Patentee before: Shandong jivisual angle Technology Co.,Ltd.

CP01 Change in the name or title of a patent holder