CN116824647B - Image forgery identification method, network training method, device, equipment and medium


Info

Publication number
CN116824647B
Authority
CN
China
Prior art keywords
fingerprint
image
training
information
decoding
Prior art date
Legal status
Active
Application number
CN202311099611.1A
Other languages
Chinese (zh)
Other versions
CN116824647A (en)
Inventor
崔星辰
史宏志
温东超
赵健
葛沅
张英杰
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202311099611.1A
Publication of CN116824647A
Application granted
Publication of CN116824647B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an image forgery identification method, a network training method, a device, equipment and a medium, which relate to the technical field of computer vision and are used for solving the problem that a depth counterfeit image is difficult to identify accurately. The identification method comprises the following steps: acquiring an image to be detected; decoding the image to be detected by using an image decoder to obtain an image decoding result, wherein the image decoding result includes fingerprint decoding information; correcting the fingerprint decoding information by using a fingerprint error corrector to obtain fingerprint correction information; and identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set. The invention takes fingerprint information as part of the input of the generation countermeasure network, so that the depth counterfeit image generated by the generation countermeasure network contains the content corresponding to the fingerprint information; thus, by decoding and correcting the image to be detected, the obtained fingerprint information can be compared with the preset fingerprint set to identify the true or false condition of the image to be detected, realizing accurate identification of the depth counterfeit image.

Description

Image forgery identification method, network training method, device, equipment and medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to an image forgery identification method, an image forgery identification network training method, an image forgery identification device, an electronic device, and a computer readable storage medium.
Background
With the development of computer technology, it has become easier to tamper with or synthesize images by using a generation countermeasure network (Generative Adversarial Network, GAN); images obtained in this way are called depth counterfeit images.
Currently, depth counterfeit images have reached a level of realism at which it is difficult for people to visually distinguish them from real images. Therefore, how to accurately identify depth counterfeit images is an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide an image forgery identification method, an image forgery identification network training method, corresponding devices, electronic equipment and a computer readable storage medium, so as to accurately identify the depth counterfeit image.
In order to solve the above technical problems, the present invention provides an image forgery identification method, including:
acquiring an image to be detected;
decoding the image to be detected by using an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information;
correcting the fingerprint decoding information by using a fingerprint error corrector to obtain fingerprint correction information;
identifying the authenticity of the image to be detected according to the fingerprint correction information and a preset fingerprint set; wherein the preset fingerprint set comprises preset fingerprint information, and the true or false condition is a real image or a depth counterfeit image; when the true or false condition is the depth counterfeit image, the image to be detected is an image generated by the generation countermeasure network from image noise information and the fingerprint feature corresponding to a certain piece of preset fingerprint information.
In some embodiments, the image decoder includes a decoding convolution layer, a decoding activation function, a decoding full-connection layer and a decoding normalization layer, and decoding the image to be detected by using the image decoder to obtain an image decoding result includes:
performing feature extraction on the image to be detected by utilizing the decoding convolution layer and the decoding activation function to obtain decoding extraction features;
mapping the decoding extracted features by using the decoding full-connection layer to obtain decoding output features; the decoding output features comprise decoding fingerprint features and decoding noise features, and the dimension of the decoding extraction features is the sum of the dimension of the decoding fingerprint features and the dimension of the decoding noise features;
Normalizing the decoded fingerprint features by using the decoding normalization layer to obtain the fingerprint decoding information; wherein the fingerprint decoding information is a digital sequence consisting of 0 and 1.
In some embodiments, the preset fingerprint set further includes tracing information corresponding to each preset fingerprint information, and after identifying the true or false condition of the image to be detected according to the fingerprint correction information and the preset fingerprint set, the method further includes:
and if the true or false condition is the depth counterfeit image, determining the tracing information corresponding to the fingerprint decoding information according to the preset fingerprint set.
In some embodiments, the identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set includes:
judging whether the preset fingerprint set contains preset fingerprint information identical to the fingerprint correction information;
if yes, determining the true and false condition as the depth counterfeit image;
if not, determining the true and false condition as the true image.
In some embodiments, the fingerprint feature corresponding to a certain piece of preset fingerprint information is a feature obtained by performing feature extraction on that preset fingerprint information with the newly added linear layer of the generation countermeasure network.
In some embodiments, the fingerprint decoding information, the fingerprint correcting information, and the preset fingerprint information are each a digital sequence of a preset fingerprint length.
In some embodiments, the fingerprint corrector includes an error correction convolution layer, an error correction activation function, an error correction full connection layer, and a mapping function.
In some embodiments, the correcting the fingerprint decoding information by using a fingerprint corrector to obtain fingerprint correction information includes:
converting the fingerprint decoding information into a decoding matrix with a preset matrix specification; the preset matrix specification is u multiplied by v, u is the number of rows, v is the number of columns, and u and v are positive integers greater than or equal to 1;
convolutionally filling the decoding matrix by using convolution filling modules to obtain a final output vector with a preset dimension; wherein each convolution filling module comprises a corresponding error correction convolution layer and error correction activation function, the preset dimension is u multiplied by v multiplied by w, w is the number of layers, and w is a positive integer greater than or equal to 2;
mapping the final output vector by using the error correction full-connection layer to obtain a mapping matrix of the preset matrix specification;
obtaining the mapping probability corresponding to each matrix element in the mapping matrix by using the mapping function;
Acquiring an error correction matrix corresponding to the mapping matrix according to the mapping probability; wherein each matrix element in the error correction matrix is 1 or 0 respectively;
converting the error correction matrix into the fingerprint correction information; wherein the fingerprint correction information is a digital sequence consisting of 0 and 1.
The invention also provides an image forgery identification device, which comprises:
the image acquisition module is used for acquiring an image to be detected;
the image decoding module is used for decoding the image to be detected by utilizing an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information;
the fingerprint error correction module is used for correcting the fingerprint decoding information by utilizing a fingerprint error corrector to obtain fingerprint correction information;
the fingerprint identification module is used for identifying the authenticity of the image to be detected according to the fingerprint correction information and a preset fingerprint set; the preset fingerprint set comprises preset fingerprint information, and the true and false situation is a real image or a deep fake image.
The invention also provides a network training method for image forgery identification, which comprises the following steps:
acquiring recognition training data; wherein the recognition training data comprises training forgery data, or training forgery data and training real data; the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and a training depth counterfeit image is an image generated by the generation countermeasure network according to the training image noise information and the fingerprint feature corresponding to the training fingerprint information;
performing network training on the image decoder by using a preset recognition loss function according to the recognition training data to obtain the trained image decoder, so as to identify the true or false condition of the image to be detected by using the trained image decoder in cooperation with the fingerprint error corrector.
In some embodiments, the performing network training on the image decoder according to the recognition training data by using a preset recognition loss function to obtain a trained image decoder includes:
according to the recognition training data, performing joint training on the image decoder and the generation countermeasure network by using a preset recognition loss function to obtain the trained image decoder and the trained generation countermeasure network, so as to perform image generation with the trained generation countermeasure network from the image noise information and the fingerprint features corresponding to the fingerprint information.
In some embodiments, the performing joint training on the image decoder and the generating countermeasure network by using a preset recognition loss function according to the recognition training data, obtaining the trained image decoder and generating countermeasure network includes:
and extracting the characteristics of the training fingerprint information by utilizing the newly added linear layer of the generated countermeasure network to obtain the fingerprint characteristics corresponding to the training fingerprint information.
In some embodiments, the preset recognition loss function includes a generator loss function of the generation countermeasure network, a discriminator loss function of the generation countermeasure network, and a loss function of the image decoder.
In some embodiments, the generator loss function includes a first loss function for characterizing the gap between an image generated by the generator of the generation countermeasure network and a real image, and a second loss function for characterizing the gap between the images generated when different training fingerprint information is used as input together with the same image noise information.
In some embodiments, the discriminator loss function includes a third loss function for characterizing the gap, as judged by the discriminator of the generation countermeasure network, between the real image and the generated image, and a fourth loss function for characterizing the gap, as judged by the discriminator, between the images generated for different training fingerprint information under the same image noise information.
In some embodiments, the loss function of the image decoder includes a fifth loss function for characterizing the gap between the fingerprint decoding information output by the image decoder during training and the corresponding training fingerprint information, and a sixth loss function for characterizing the gap between the image noise decoding information output by the image decoder during training and the corresponding training image noise information.
In some embodiments, the preset recognition loss function L1 is a combination of the generator loss function, the discriminator loss function and the image decoder loss function described above, weighted by preset hyperparameters.
Wherein, L1 is the preset recognition loss function; G(·) and D(·) are the outputs of the generator and the discriminator of the generation countermeasure network, respectively; F(·) is the output of the image decoder; z is the training image noise information of the current training batch; c is the training fingerprint information corresponding to the training image noise information z of the current training batch; x is the real image in the training real data corresponding to the training image noise information z of the current training batch; ĉq is the q-th fingerprint decoding information of the current training batch output by the image decoder; ẑ is the image noise decoding information output by the image decoder; c1 and c2 are any two pieces of training fingerprint information of the current training batch; cq is the q-th training fingerprint information of the current training batch; and the remaining symbols are the preset hyperparameters weighting the individual loss terms.
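As a concrete illustration of the image decoder portion of this loss, the sketch below implements the fifth and sixth loss terms described above. The use of binary cross-entropy for the fingerprint term, mean squared error for the noise term, and the weight names are assumptions made for illustration only; the patent does not fix these particular functions.

```python
import torch.nn.functional as F_nn

def decoder_loss(fp_probs, train_fingerprint, noise_decoded, train_noise,
                 w_fp=1.0, w_noise=1.0):
    """Sketch of the image decoder loss: fifth term = gap between the decoded
    fingerprint and the training fingerprint, sixth term = gap between the decoded
    noise and the training image noise (loss types and weights are assumed).
    fp_probs are the per-bit fingerprint probabilities before thresholding."""
    loss_fp = F_nn.binary_cross_entropy(fp_probs, train_fingerprint)   # fifth loss term
    loss_noise = F_nn.mse_loss(noise_decoded, train_noise)             # sixth loss term
    return w_fp * loss_fp + w_noise * loss_noise
```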
In some embodiments, the network training method further comprises:
acquiring error correction training data; the error correction training data comprises error correction training fingerprint information and labels corresponding to each digit in the error correction training fingerprint information;
and according to the error correction training data, performing network training on the fingerprint error corrector by using a preset error correction loss function, and obtaining the trained fingerprint error corrector so as to perform error correction on fingerprint decoding information output by the image decoder by using the trained fingerprint error corrector.
In some embodiments, the preset error correction loss function combines a term characterizing the gap between the labels of the error correction training fingerprint information and the fingerprint correction information output by the fingerprint error corrector with terms computed, through the mapping function, from the features of each convolution filling module.
Wherein, N is the number of convolution filling modules in the fingerprint error corrector, each convolution filling module comprising an error correction convolution layer and an error correction activation function; y is the label corresponding to any digit of the error correction training fingerprint information; ŷ is the fingerprint correction information, corresponding to the error correction training fingerprint information, output by the fingerprint error corrector; M is the number of features in any convolution filling module; fij is the value of the j-th feature of the i-th convolution filling module; and S(·) is the output of the mapping function in the fingerprint error corrector.
In some embodiments, the network training method further comprises:
acquiring a generation countermeasure network download request sent by a client device; wherein the generation countermeasure network download request includes requester information;
generating preset fingerprint information corresponding to the requester information according to the generation countermeasure network downloading request, taking the requester information as tracing information, and storing the preset fingerprint information and the requester information into a preset fingerprint set;
and transmitting the preset fingerprint information and the generated countermeasure network to the client device.
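A minimal sketch of this issuing-and-registration flow is shown below. The in-memory dictionary used as the preset fingerprint set, the 128-bit fingerprint length and the helper name are assumptions for illustration; in practice the preset fingerprint set would be kept in persistent server-side storage.

```python
import random

preset_fingerprint_set = {}   # preset fingerprint information -> tracing information

def handle_download_request(requester_info, fingerprint_len=128):
    """Generate preset fingerprint information for the requester, register it together
    with the requester information (used as tracing information), and return it."""
    fingerprint = "".join(random.choice("01") for _ in range(fingerprint_len))
    preset_fingerprint_set[fingerprint] = requester_info   # store into the preset fingerprint set
    # the fingerprint and the generation countermeasure network are then sent to the client device
    return fingerprint

# usage sketch
fp = handle_download_request("requester A (registered identity or IP address)")
```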
The invention also provides a network training device for image forgery identification, which comprises:
the data acquisition module is used for acquiring recognition training data; wherein the recognition training data comprises training forgery data, or training forgery data and training real data; the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and a training depth counterfeit image is an image generated by the generation countermeasure network according to the training image noise information and the fingerprint feature corresponding to the training fingerprint information;
and the identification training module is used for carrying out network training on the image decoder by utilizing a preset identification loss function according to the identification training data to obtain the trained image decoder so as to identify the true or false condition of the image to be detected by utilizing the trained image decoder in combination with the fingerprint error corrector.
The invention also provides an electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image forgery identification method as described above and/or the network training method of image forgery identification as described above when executing the computer program.
Furthermore, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image falsification recognition method as described above and/or the network training method of image falsification recognition as described above.
The invention provides an image counterfeiting identification method, which comprises the following steps: acquiring an image to be detected; decoding the image to be detected by using an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information; correcting the fingerprint decoding information by using a fingerprint corrector to obtain fingerprint correction information; identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set; the preset fingerprint set comprises preset fingerprint information, and the true or false condition is a real image or a deep fake image; when the true or false condition is a depth counterfeit image, the image to be detected is an image generated by generating an countermeasure network by the image noise information and fingerprint characteristics corresponding to certain preset fingerprint information;
the fingerprint information is used as part of the input of the generation countermeasure network, so that the depth counterfeit image generated by the generation countermeasure network contains the content corresponding to the fingerprint information; therefore, the fingerprint information obtained after decoding and correcting the image to be detected can be compared with the preset fingerprint set to identify the true or false condition of the image to be detected, realizing accurate identification of the depth counterfeit image generated by the generation countermeasure network. Moreover, by correcting the fingerprint decoding information with the fingerprint error corrector to obtain the fingerprint correction information, the fingerprint information decoded from the image to be detected can be corrected to the correct fingerprint information, which improves the identification accuracy of the depth counterfeit image. In addition, the invention also provides a network training method and device for image forgery identification, electronic equipment and a computer readable storage medium, which have the same beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image forgery identification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a process for generating a deep forgery image according to an embodiment of the present invention;
FIG. 3 is a flowchart of a network training method for image forgery identification according to an embodiment of the present invention;
fig. 4 is a block diagram of an image falsification recognition device according to an embodiment of the present invention;
fig. 5 is a block diagram of a network training device for image forgery recognition according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a specific structure of an electronic device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer readable storage medium according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an image forgery identification method according to an embodiment of the invention. The method may include:
step 101: and acquiring an image to be detected.
The image to be detected in this step may be an image whose true or false condition needs to be detected; that is, the processor in this embodiment may detect and identify the true or false condition of the image to be detected, so as to identify whether the image to be detected is an image forged by a generation countermeasure network (GAN), i.e., a depth counterfeit image.
Correspondingly, the specific manner in which the processor obtains the image to be detected in this step can be set by a designer according to the practical scene and user requirements. For example, the processor may receive the image to be detected, e.g., the processor of a server may receive the image to be detected sent by a client device; or the processor may read the image to be detected, e.g., the processor of a computer may read a stored image to be detected for authenticity detection. The present embodiment does not impose any limitation on this.
Step 102: decoding the image to be detected by using an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information.
It will be appreciated that the image decoder in this step may be a software device that decodes the input image, i.e. the image to be detected. The image decoder can be used for decoding the image to be detected to obtain a decoding result (namely fingerprint decoding information) of the fingerprint information corresponding to the image to be detected; that is, the image decoder may decode the image to be detected as a depth counterfeit image to obtain a decoding result corresponding to fingerprint information (or corresponding fingerprint features) that needs to be input when the depth counterfeit image is generated using GAN.
Correspondingly, the input of the image decoder in this embodiment may be an image (i.e. an image to be detected), such as a real image or a depth counterfeit image; the output of the image decoder may include the decoded fingerprint information (i.e., fingerprint decoding information); for example, the image decoding result output by the image decoder may include only fingerprint decoding information, or may include decoding results (i.e., image noise decoding information) corresponding to fingerprint decoding information and noise information (i.e., image noise information) when the GAN is used to generate the depth counterfeit image, i.e., the image decoder may be used to restore fingerprint information embedded (or fingerprint information corresponding to embedded fingerprint features) and noise information used when the GAN is used to generate the depth counterfeit image. The present embodiment does not impose any limitation on this.
It should be noted that the fingerprint decoding information in the image decoding result in this embodiment may be a digital sequence, for example a digital sequence of a preset fingerprint length (e.g., a 0/1 sequence). The specific manner in which the processor decodes the image to be detected with the image decoder to obtain the image decoding result, that is, the specific structure of the image decoder, can be set by a designer according to the usage scene and user requirements; for example, the image decoder may comprise a series of convolution layers, activation functions and full-connection layers. For example, the image decoder may include a decoding convolution layer, a decoding activation function, a decoding full-connection layer and a decoding normalization layer, and in this step the processor may perform feature extraction on the image to be detected by using the decoding convolution layer and the decoding activation function to obtain decoding extraction features; map the decoding extraction features by using the decoding full-connection layer to obtain decoding output features, wherein the decoding output features comprise decoding fingerprint features and decoding noise features, and the dimension of the decoding extraction features is the sum of the dimension of the decoding fingerprint features and the dimension of the decoding noise features; and normalize the decoding fingerprint features by using the decoding normalization layer to obtain the fingerprint decoding information, wherein the fingerprint decoding information is a digital sequence consisting of 0 and 1, i.e., the preset fingerprint information is likewise a digital sequence consisting of 0 and 1.
For example, when the resolution of the image to be detected is 128×128×3, the dimension of the noise feature is 512 and the dimension of the fingerprint feature is 50, the processor in this step may first down-sample the input image to be detected step by step: feature extraction is performed on the image to be detected through a convolution layer (i.e., a decoding convolution layer) whose number of channels may be 32, whose convolution kernel size may be 3×3 and whose stride is 1, with an activation function (i.e., a decoding activation function, such as a leaky rectified linear unit, Leaky ReLU); the extracted features are then down-sampled through a down-sampling layer using a 2×2 maximum pooling operation, and these operations are repeated until an 8×8 feature map is obtained; feature extraction is then performed on the feature map through several further convolution layers (i.e., decoding convolution layers) to obtain the decoding extraction features. Next, features with dimension 562 (i.e., the decoding output features) are output through one full-connection layer (i.e., the decoding full-connection layer), of which 512 dimensions correspond to the input noise of the GAN (i.e., the image noise decoding information) and 50 dimensions are the fingerprint features (i.e., the decoding fingerprint features). Finally, the fingerprint features pass through the decoding normalization layer (e.g., a normalized exponential function, softmax) to obtain the fingerprint decoding information in 0/1 form.
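As a concrete illustration, the PyTorch sketch below realizes a decoder of the kind just described: 3×3 convolutions with stride 1, Leaky ReLU activations and 2×2 max pooling down to an 8×8 feature map, followed by a full-connection layer producing a 562-dimensional output split into 512 noise dimensions and 50 fingerprint dimensions. The channel widths after the first layer and the per-bit sigmoid with thresholding that stands in for the normalization step are assumptions made for illustration, not the patent's exact design.

```python
import torch
import torch.nn as nn

class ImageDecoder(nn.Module):
    """Illustrative image decoder: 128x128x3 image -> (512-d noise, 50-bit fingerprint)."""

    def __init__(self, noise_dim=512, fingerprint_len=50):
        super().__init__()
        layers, ch_in = [], 3
        for ch_out in (32, 64, 128, 256):            # channel widths beyond 32 are assumed
            layers += [nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1),
                       nn.LeakyReLU(0.2),            # decoding activation function
                       nn.MaxPool2d(2)]              # 2x2 max pooling: 128 -> 64 -> 32 -> 16 -> 8
            ch_in = ch_out
        layers += [nn.Conv2d(ch_in, 256, kernel_size=3, padding=1), nn.LeakyReLU(0.2)]
        self.features = nn.Sequential(*layers)       # decoding convolution layers
        self.fc = nn.Linear(256 * 8 * 8, noise_dim + fingerprint_len)  # 562-d decoding output
        self.noise_dim = noise_dim

    def forward(self, x):
        h = self.features(x).flatten(1)
        out = self.fc(h)                                     # decoding full-connection layer
        noise_part = out[:, :self.noise_dim]                 # image noise decoding information
        fp_probs = torch.sigmoid(out[:, self.noise_dim:])    # stands in for the normalization layer
        fp_bits = (fp_probs > 0.5).float()                   # fingerprint decoding information (0/1)
        return noise_part, fp_bits
```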
Step 103: and correcting the fingerprint decoding information by using a fingerprint corrector to obtain fingerprint correction information.
It can be appreciated that the processor in this embodiment may utilize the fingerprint error corrector to detect and correct the error of the fingerprint decoding information output by the image decoder, so as to obtain corrected fingerprint correction information, so as to improve the accuracy of the fingerprint information obtained from the image to be detected.
Correspondingly, the input of the fingerprint error corrector in this embodiment may be fingerprint decoding information, such as a digital sequence (i.e. 0/1 sequence) composed of 0 and 1 of the preset fingerprint length; the output of the fingerprint error corrector can be fingerprint decoding information (namely fingerprint correction information) after correcting the numerical value of the corresponding position; the fingerprint correction information may be a sequence of numbers, such as a sequence of numbers of a predetermined fingerprint length (e.g., a 0/1 sequence).
It should be noted that the specific manner in which the processor corrects the fingerprint decoding information with the fingerprint error corrector to obtain the fingerprint correction information, that is, the specific structure of the fingerprint error corrector, can be set by a designer according to the usage scene and user requirements. For example, the fingerprint error corrector may include an error correction convolution layer, an error correction activation function, an error correction full-connection layer and a mapping function. For example, the fingerprint error corrector may adopt a neural network based on deep learning (DL) and may include a plurality of convolution filling modules, each of which may include a convolution layer (i.e., an error correction convolution layer) and an activation function (i.e., an error correction activation function, such as a rectified linear unit, ReLU). Through this deep-learning-based neural network arrangement, the fingerprint error corrector can process sequence data of arbitrary length, unlike a conventional error correction code that can only process data blocks of fixed length; by learning from a large amount of data, the robustness and generalization capability of the fingerprint error corrector model can be improved, so that more complex error conditions can be handled; and the output of each convolution filling module can be used as part of the loss function during training, so that the fingerprint error corrector can better learn the characteristics of the input data, improving its error detection and correction capability.
For example, the fingerprint error corrector may first adjust the dimension of the input data (i.e., the fingerprint decoding information), changing the 1-dimensional digital sequence of the preset fingerprint length (e.g., a 0/1 sequence) into a u×v matrix; for instance, the fingerprint error corrector may change the input data from 1×128 to 8×16. Each error correction convolution layer adopts a 3×3 convolution kernel with padding of 1; the vector output by the first convolution filling module has dimension 8×16×2, the output of the second convolution filling module is 8×16×4, the output of the third is 8×16×8, the output of the fourth is 8×16×4, and the output of the fifth is 8×16×2; finally a matrix with dimension 8×16 is output through the error correction full-connection layer. A mapping function (such as an S-shaped, sigmoid, mapping function) then maps the result at each position to between 0 and 1, the result is output as 0 or 1 according to the probability, and the dimension is changed back to 1×128 to obtain the fingerprint correction information.
That is, when the fingerprint correction information is a digital sequence consisting of 0 and 1 of the preset fingerprint length, the processor in this step may convert the fingerprint decoding information into a decoding matrix of the preset matrix specification by using the fingerprint error corrector; the preset matrix specification is u multiplied by v, u is the number of rows, v is the number of columns, and u and v are positive integers greater than or equal to 1; convolutionally filling the decoding matrix by utilizing a convolutionally filling module to obtain a final output vector with a preset dimension; each convolution filling module comprises an error correction convolution layer and an error correction activation function which are respectively corresponding, wherein the preset dimension is u multiplied by v multiplied by w, w is the number of layers, and w is a positive integer which is more than or equal to 2; mapping the final output vector by using an error correction full-connection layer to obtain a mapping matrix with a preset matrix specification; obtaining the mapping probability corresponding to each matrix element in the mapping matrix by using the mapping function; acquiring an error correction matrix corresponding to the mapping matrix according to the mapping probability; wherein each matrix element in the error correction matrix is 1 or 0 respectively; converting the error correction matrix into fingerprint correction information; wherein the fingerprint correction information is a digital sequence consisting of 0 and 1.
Correspondingly, when w is 2, the process of convolutionally filling the decoding matrix by using the convolutionally filling module to obtain the final output vector with the preset dimension may include: convolutionally filling the decoding matrix by using a first convolutionally filling module to obtain a first output vector; the dimension of the first output vector is u multiplied by v multiplied by 2; performing convolution filling on the decoding matrix by using a second convolution filling module to obtain a second output vector; wherein the dimension of the second output vector is u×v×4; performing convolution filling on the decoding matrix by using a third convolution filling module to obtain a third output vector; the dimension of the third output vector is u multiplied by v multiplied by 8; convolving the decoding matrix by using a fourth convolution filling module to obtain a fourth output vector; the dimension of the fourth output vector is u multiplied by v multiplied by 4; and convolving the decoding matrix by using a fifth convolution filling module to obtain a final output vector.
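A minimal PyTorch sketch of such a fingerprint error corrector for the 128-bit case follows (8×16 decoding matrix, 3×3 convolutions with padding 1, module widths 2-4-8-4-2, an error correction full-connection layer and a sigmoid mapping function). It is an illustrative reading of the description rather than the patent's exact implementation; the intermediate module outputs are returned so that they can contribute to the training loss mentioned earlier.

```python
import torch
import torch.nn as nn

class FingerprintCorrector(nn.Module):
    """Illustrative corrector: 128-bit sequence in, corrected 128-bit sequence out."""

    def __init__(self, rows=8, cols=16, widths=(2, 4, 8, 4, 2)):
        super().__init__()
        blocks, ch_in = [], 1
        for w in widths:                                        # convolution filling modules
            blocks.append(nn.Sequential(
                nn.Conv2d(ch_in, w, kernel_size=3, padding=1),  # error correction convolution layer
                nn.ReLU()))                                     # error correction activation function
            ch_in = w
        self.blocks = nn.ModuleList(blocks)
        self.fc = nn.Linear(rows * cols * widths[-1], rows * cols)  # error correction full-connection layer
        self.rows, self.cols = rows, cols

    def forward(self, bits):                               # bits: (batch, 128) tensor of 0/1 values
        h = bits.view(-1, 1, self.rows, self.cols)         # 1x128 sequence -> 8x16 decoding matrix
        intermediates = []                                 # module outputs, usable in the training loss
        for block in self.blocks:
            h = block(h)
            intermediates.append(h)
        probs = torch.sigmoid(self.fc(h.flatten(1)))       # mapping function: probabilities in (0, 1)
        corrected = (probs > 0.5).float()                  # error correction matrix as 0/1 values
        return corrected.view(-1, self.rows * self.cols), intermediates
```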
Step 104: identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set; the preset fingerprint set comprises preset fingerprint information, and the true or false condition is a real image or a deep fake image; when the true or false condition is a depth counterfeit image, the image to be detected is an image generated by generating an countermeasure network by the image noise information and fingerprint characteristics corresponding to certain preset fingerprint information.
It is understood that the preset fingerprint set in this embodiment may be a preset set of preset fingerprint information. The preset fingerprint information may be preset fingerprint information used when generating the deep forgery image by the countermeasure network (GAN), that is, fingerprint information input by each countermeasure network when generating the deep forgery image or fingerprint information corresponding to the input fingerprint features; the predetermined fingerprint information may be a sequence of numbers, such as a sequence of numbers (e.g., 0/1 sequence) of predetermined fingerprint lengths.
That is, in this step, the processor may determine whether the image to be detected is a deep counterfeit image generated by GAN using the corresponding preset fingerprint information by comparing the fingerprint correction information with each preset fingerprint information in the preset fingerprint set, so as to implement the identification of the authenticity of the image to be detected.
Correspondingly, in the case that the true or false condition of the image to be detected is identified as a depth counterfeit image in the embodiment, the image to be detected can be an image generated by generating an countermeasure network for the fingerprint feature corresponding to the image noise information and some preset fingerprint information (such as the preset fingerprint information matched by the fingerprint correction information) in the preset fingerprint set, that is, the generating process of the image to be detected can be that the image to be detected is generated by utilizing the input image noise information corresponding to the image to be detected and some preset fingerprint information (or corresponding fingerprint feature) through the GAN; for example, in the case that the true or false condition of the image to be detected is a deep counterfeit image, the image to be detected may be an image generated by generating an antagonism network by using image noise information and a certain preset fingerprint information, that is, a fingerprint feature corresponding to the certain preset fingerprint information may be a feature obtained by extracting features of the preset fingerprint information by using the antagonism network; if the existing generation countermeasure network is improved, a linear layer (i.e. a newly added linear layer) is newly added in the generation countermeasure network, so that the linear layer is utilized to perform feature extraction on fingerprint information (such as preset fingerprint information) input by the generation countermeasure network, i.e. the fingerprint feature corresponding to a certain preset fingerprint information can be used to generate the feature obtained by performing feature extraction on a certain preset fingerprint information by the newly added linear layer of the generation countermeasure network.
It should be noted that the specific manner in which the processor identifies the true or false condition of the image to be detected according to the fingerprint correction information and the preset fingerprint set in this step, that is, the specific way of comparing and matching the fingerprint correction information with the preset fingerprint information in the preset fingerprint set, can be set by a designer according to the usage scene and user requirements. For example, the processor may identify the true or false condition of the image to be detected by directly comparing the fingerprint correction information with each piece of preset fingerprint information in the preset fingerprint set: preset fingerprint information identical to the fingerprint correction information is determined as the preset fingerprint information matched by the fingerprint correction information, i.e., the preset fingerprint information corresponding to the fingerprint correction information. In this case, the processor may judge whether the preset fingerprint set contains preset fingerprint information identical to the fingerprint correction information; if yes, the true or false condition is determined to be the depth counterfeit image; if not, the true or false condition is determined to be the real image. Alternatively, the processor may determine preset fingerprint information whose similarity with the fingerprint correction information (i.e., the number of positions holding the same digit divided by the preset fingerprint length) reaches a similarity threshold as the preset fingerprint information matched by the fingerprint correction information. In this case, the processor may judge whether the preset fingerprint set contains preset fingerprint information whose similarity with the fingerprint correction information reaches the similarity threshold; if yes, the true or false condition is determined to be the depth counterfeit image; if not, the true or false condition is determined to be the real image.
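A simple sketch of this comparison logic is given below; the dictionary that maps each piece of preset fingerprint information to its tracing information and the handling of the similarity threshold are assumptions made for illustration.

```python
def identify_authenticity(fingerprint_correction, preset_fingerprint_set, similarity_threshold=None):
    """Compare the corrected fingerprint against the preset fingerprint set.
    Fingerprints are 0/1 strings of equal length; returns (verdict, tracing information)."""
    for preset_fp, tracing_info in preset_fingerprint_set.items():
        if similarity_threshold is None:
            matched = fingerprint_correction == preset_fp                 # exact comparison
        else:
            same = sum(a == b for a, b in zip(fingerprint_correction, preset_fp))
            matched = same / len(preset_fp) >= similarity_threshold       # positional similarity
        if matched:
            return "depth counterfeit image", tracing_info
    return "real image", None

# usage sketch
preset_set = {"01101001": "requester A"}
print(identify_authenticity("01101001", preset_set))        # ('depth counterfeit image', 'requester A')
print(identify_authenticity("01100001", preset_set, 0.8))   # 7/8 = 0.875 >= 0.8 -> matched
```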
In this embodiment, the comparison between the fingerprint information (i.e. the fingerprint correction information) obtained by decoding and correcting the image to be detected and each preset fingerprint information in the preset fingerprint set is taken as an example to identify the true and false condition of the image to be detected; in some embodiments, the authenticity of the image to be detected can be identified directly by comparing the fingerprint information (i.e. fingerprint decoding information) obtained by decoding the image to be detected with each preset fingerprint information in the preset fingerprint set.
Further, the method provided by this embodiment may also include a tracing process for the depth counterfeit image. For example, the preset fingerprint set may further include the tracing information corresponding to each piece of preset fingerprint information, such as the requester information (e.g., registered identity information or an IP address) of the party that requested the generation countermeasure network; when the processor in this embodiment determines that the true or false condition of the image to be detected is the depth counterfeit image, the tracing information corresponding to the fingerprint correction information may be determined according to the preset fingerprint set. That is, the processor can use the tracing information corresponding to the piece of preset fingerprint information matched by the fingerprint correction information in the preset fingerprint set as the tracing information of the image to be detected, so as to trace the source of the depth counterfeit image.
In the embodiment of the invention, fingerprint information is used as a part of input of the generation countermeasure network, so that the depth counterfeit image generated by the generation countermeasure network can contain the content corresponding to the fingerprint information, thus obtaining the fingerprint information and a preset fingerprint set after decoding and correcting the image to be detected, and identifying the authenticity of the image to be detected, thereby realizing the accurate identification of the depth counterfeit image generated by the generation countermeasure network; and by utilizing the fingerprint error corrector to correct the fingerprint decoding information to obtain fingerprint correction information, the fingerprint information obtained after the image to be detected is decoded can be corrected to obtain correct fingerprint information, so that the identification accuracy of the depth counterfeit image is improved.
Based on the above embodiment, the image forgery identification method provided by the embodiment of the present invention may further include a process of generating a depth forgery image, that is, the electronic device (such as a server) serving as a recognition party of the depth forgery image may further provide a function of generating the depth forgery image, and fingerprint information is input as part of the GAN, so as to ensure that the depth forgery image generated by the GAN can include content corresponding to the fingerprint information, thereby ensuring accuracy of recognition of the depth forgery image. Accordingly, referring to fig. 2, fig. 2 is a flowchart of a process for generating a deep forgery image according to an embodiment of the present invention. The method may include:
Step 201: image noise information is acquired.
The image noise information in this embodiment may be noise information that is used to generate an image for the generation of the reactance network (GAN). The specific content of the image noise information in this embodiment may be set by a designer, for example, may be implemented in the same or similar manner as the configuration method of the noise information input by the GAN in the prior art, which is not limited in this embodiment.
Step 202: and acquiring fingerprint characteristics corresponding to the fingerprint information.
The fingerprint information in this embodiment may be a digital sequence of a preset fingerprint length, such as a digital sequence consisting of 0 and 1 (i.e. a 0/1 sequence). The fingerprint information in this embodiment may be any one of the preset fingerprint information in the preset fingerprint set in the foregoing embodiment, so as to ensure the accuracy of subsequent identification of the generated deep counterfeit image.
Correspondingly, the fingerprint features in the embodiment may be features extracted from the fingerprint information, and the fingerprint features may be the same as the input dimensions of the mapping network in the generation of the countermeasure network, so as to ensure that the fingerprint features may be used as part of the input of the mapping network in the generation of the countermeasure network and participate in the subsequent image generation of the countermeasure network, so that the deep forgery image corresponding to the image noise information generated by the generation of the countermeasure network can contain the content corresponding to the fingerprint information.
It should be noted that, for the specific manner of acquiring the fingerprint feature corresponding to the fingerprint information by the processor in this step, the designer may set itself according to the practical scenario and the user's requirement, for example, the processor may directly read the pre-stored fingerprint feature corresponding to the generation countermeasure network, for example, when the issuer of the generation countermeasure network issues the generation countermeasure network to the requester requesting the generation countermeasure network, the fingerprint feature corresponding to the requester and the generation countermeasure network may be directly transmitted to the requester, or the fingerprint feature may be directly stored in the generation countermeasure network transmitted to the requester, so that the requester may directly use the fingerprint feature as a partial input when generating the deep forgery image through the generation countermeasure network, so that the deep forgery image generated by the requester through the generation countermeasure network includes the content of the fingerprint information. The processor can also acquire fingerprint information; and extracting the characteristics of the fingerprint information to obtain fingerprint characteristics. The present embodiment does not impose any limitation on this.
Correspondingly, the specific mode of acquiring the fingerprint information by the processor can be set by a designer, for example, the processor can read the pre-stored fingerprint information corresponding to the generated countermeasure network. The processor may also generate fingerprint information of a preset fingerprint length, for example, the processor may randomly generate a digital sequence (i.e., fingerprint information) of 0 and 1 of the preset fingerprint length; correspondingly, the processor can also add the generated fingerprint information to a preset fingerprint set. The present embodiment does not impose any limitation on this.
Correspondingly, the specific manner in which the processor performs feature extraction on the fingerprint information to obtain the fingerprint features can be set by a designer. For example, the processor may extract fingerprint features with a preset dimension from the fingerprint information by using a linear layer or other feature extraction methods, where the preset dimension may be the input dimension of the mapping network in the generation countermeasure network. For example, in this embodiment an existing generation countermeasure network is improved by adding a linear layer (i.e., the newly added linear layer), so that this linear layer performs feature extraction on the fingerprint information input to the generation countermeasure network and obtains fingerprint features of the preset dimension; that is, the processor may perform feature extraction on the fingerprint information through the newly added linear layer of the generation countermeasure network to obtain the fingerprint features. The present embodiment does not impose any limitation on this.
Step 203: generating a depth counterfeit image by utilizing a generation countermeasure network according to the image noise information and the fingerprint characteristics; wherein the dimensions of the fingerprint feature are the same as the input dimensions of the mapping network in the generation of the countermeasure network.
It can be understood that in this embodiment, by taking the fingerprint feature corresponding to the fingerprint information as a part of input of the mapping network in the generation countermeasure network, the depth counterfeit image generated by the generation countermeasure network includes the content corresponding to the fingerprint information, so that in the identification process of the subsequent depth counterfeit image, the fingerprint information can be decoded from the depth counterfeit image, and accurate identification of the depth counterfeit image is realized; in addition, in the embodiment, fingerprint information or corresponding fingerprint features are directly input as part of the generation countermeasure network, so that the aim that the content corresponding to the fingerprint information is directly displayed in the deep counterfeit image can be fulfilled, artificial information is not required to be added in the training data set for generating the countermeasure network through a steganography technology, the training difficulty for generating the countermeasure network is reduced, and the recognition accuracy and recognition efficiency of the subsequent deep counterfeit image are ensured.
Correspondingly, the specific mode of generating the depth counterfeit image by the processor according to the image noise information and the fingerprint characteristics in the step and by utilizing the generation countermeasure network can be set by a designer, for example, the depth counterfeit image can be realized in a mode similar to the image generation method for generating the countermeasure network in the prior art, and the depth counterfeit image generated by the generation countermeasure network only needs to be added with the fingerprint characteristics in the mapping network in the generation countermeasure network as part of input, so that the depth counterfeit image generated by the generation countermeasure network contains the content of the fingerprint information.
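The sketch below illustrates how a newly added linear layer can turn the 0/1 fingerprint sequence into a fingerprint feature with the mapping network's input dimension and how that feature can enter the generator together with the image noise. The 128-bit fingerprint length, the 512-dimensional mapping-network input and the element-wise addition used to merge the two inputs are assumptions for illustration; the patent only requires that the fingerprint feature match the mapping network's input dimension and serve as part of its input.

```python
import torch
import torch.nn as nn

class FingerprintedGenerator(nn.Module):
    """Wraps an existing generator (a mapping network plus synthesis network, passed in
    by the caller) with a newly added linear layer that injects the fingerprint."""

    def __init__(self, mapping_net, synthesis_net, fingerprint_len=128, latent_dim=512):
        super().__init__()
        self.fingerprint_embed = nn.Linear(fingerprint_len, latent_dim)  # newly added linear layer
        self.mapping_net = mapping_net
        self.synthesis_net = synthesis_net

    def forward(self, noise, fingerprint_bits):
        fp_feature = self.fingerprint_embed(fingerprint_bits)    # fingerprint feature, same dim as noise
        w = self.mapping_net(noise + fp_feature)                  # merged mapping-network input (assumed)
        return self.synthesis_net(w)                              # depth counterfeit image carrying the fingerprint
```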
In this embodiment, the embodiment of the present invention generates the depth counterfeit image by using the generation countermeasure network according to the image noise information and the fingerprint feature, and may add the fingerprint feature as a partial input into the mapping network in the generation countermeasure network, so that the depth counterfeit image generated by the generation countermeasure network includes the content of the fingerprint information, thereby ensuring the accuracy of the subsequent recognition of the generated depth counterfeit image.
Based on the above embodiment, the embodiment of the invention also provides a network training method for image falsification identification, so as to realize the network training of the image decoder in the above embodiment and ensure the accuracy of the depth falsification image identification. Accordingly, referring to fig. 3, fig. 3 is a flowchart of a network training method for image forgery identification according to an embodiment of the present invention. The method may include:
Step 301: acquiring identification training data; the identification training data comprises training forgery data, or training forgery data and training real data; the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and the training depth counterfeit images are images generated by the generation countermeasure network from the training image noise information and the fingerprint features corresponding to the training fingerprint information.
The identification training data in this embodiment may be data required for training the image decoder.
Correspondingly, the specific content of the identification training data in this embodiment can be set by a designer; for example, the identification training data can include training forgery data corresponding to depth counterfeit images used for training, or training forgery data together with training real data corresponding to real images. The training forgery data may include fingerprint information corresponding to each training depth counterfeit image (i.e., training fingerprint information, such as the preset fingerprint information in the above embodiment), image noise information (i.e., training image noise information), and/or the training depth counterfeit images. For example, when the identification training data is used only for training the image decoder, the training forgery data may include training fingerprint information, training image noise information and training depth counterfeit images; when the identification training data is used for jointly training the image decoder and the generation countermeasure network, the training forgery data may include training fingerprint information and training image noise information, or training fingerprint information, training image noise information and training depth counterfeit images. The training forgery data may also include other data, such as the fingerprint features corresponding to the training fingerprint information, which is not limited in this embodiment.
Step 302: according to the identification training data, carrying out network training on the image decoder by utilizing a preset identification loss function, and obtaining the trained image decoder so as to identify the true or false condition of the image to be detected by utilizing the trained image decoder in combination with the fingerprint error corrector.
It is to be understood that the image decoder in the present embodiment may be a software device for decoding an input image, which is set in advance; in this embodiment, the processor may perform network training on the image decoder according to the identification training data by using a preset identification loss function, so as to obtain a trained image decoder, so that the trained image decoder may cooperate with the fingerprint error corrector to implement the process of determining the true or false condition of the image to be detected in the foregoing embodiment.
Correspondingly, the specific mode in which the processor performs network training on the image decoder by using the preset recognition loss function according to the recognition training data to obtain the trained image decoder can be set by a designer according to the practical scene and user requirements; for example, the processor can perform network training on the image decoder alone by using the preset recognition loss function according to the recognition training data to obtain the trained image decoder. In order to save training cost, the processor in this embodiment may instead perform joint training on the generation countermeasure network (i.e., the image generator) and the image decoder, that is, jointly train the image decoder and the generation countermeasure network by using the preset recognition loss function according to the recognition training data, and acquire the trained image decoder and generation countermeasure network, so that image generation can subsequently be performed by the trained generation countermeasure network from image noise information and the fingerprint features corresponding to fingerprint information. The present embodiment does not impose any limitation on this.
Correspondingly, the specific process in which the processor performs network training on the image decoder by using the preset recognition loss function according to the recognition training data to obtain the trained image decoder can be set by a designer, for example in a manner similar to a network training method in the prior art. For example, in the case where the image decoder and the generation countermeasure network are jointly trained by using the preset recognition loss function according to the recognition training data, and a linear layer (i.e., the newly added linear layer) for extracting features from the fingerprint information input to the generation countermeasure network is provided in the generation countermeasure network, the processor in this step may, during the joint training, perform feature extraction on the training fingerprint information by using the newly added linear layer of the generation countermeasure network to obtain the fingerprint features corresponding to the training fingerprint information; that is, during the joint training, the newly added linear layer of the generation countermeasure network extracts features from the input training fingerprint information, and the extracted fingerprint features are used for subsequent depth counterfeit image generation, so as to complete the joint training of the image decoder and the generation countermeasure network. A single joint-training iteration is sketched below.
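The following Python sketch outlines one possible joint-training iteration, assuming that the generator, discriminator, image decoder, newly added linear layer and their optimizers already exist and that a single combined loss is optimized; in practice the generator and discriminator terms are usually updated in alternating steps, so this is a simplification rather than the method itself.

```python
import torch

def joint_training_step(generator, discriminator, decoder, fingerprint_embed,
                        opt_g, opt_d, opt_dec, real_images, z, fingerprints, loss_fn):
    """One assumed joint-training iteration; all module and function names are illustrative."""
    c_feat = fingerprint_embed(fingerprints)        # feature extraction by the newly added linear layer
    fake_images = generator(z, c_feat)              # training depth counterfeit images
    dec_fp, dec_noise = decoder(fake_images)        # fingerprint / noise decoding information

    # loss_fn is assumed to combine the generator, discriminator and decoder terms
    loss = loss_fn(real_images, fake_images, discriminator, fingerprints, dec_fp, z, dec_noise)

    opt_g.zero_grad(); opt_d.zero_grad(); opt_dec.zero_grad()
    loss.backward()
    opt_g.step(); opt_d.step(); opt_dec.step()
    return loss.item()
```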
It should be noted that the preset recognition loss function in this embodiment may be the loss function used in the training process of the image decoder. The specific content of the preset recognition loss function in this embodiment may be set by a designer; for example, in the case of jointly training the image decoder and the generation countermeasure network by using the preset recognition loss function according to the recognition training data, the preset recognition loss function may include a loss function of the image decoder and a loss function corresponding to the generator of the generation countermeasure network (i.e., a generator loss function) and/or a loss function corresponding to the discriminator of the generation countermeasure network (i.e., a discriminator loss function); for example, the preset recognition loss function may include the loss function of the image decoder, the generator loss function and the discriminator loss function; or the preset recognition loss function may include the loss function of the image decoder and the generator loss function, or the loss function of the image decoder and the discriminator loss function. The present embodiment does not impose any limitation on this.
Correspondingly, the specific number and types of the loss function of the image decoder, the generator loss function and the discriminator loss function can be set by a designer according to the use scene and user requirements. For example, the generator loss function can comprise a first loss function and/or a second loss function, where the first loss function is used for characterizing the difference between an image generated by the generator of the generation countermeasure network and a real image, and the second loss function is used for characterizing the difference between images generated when different training fingerprint information is used as input under the same image noise information. The discriminator loss function may include a third loss function and/or a fourth loss function, where the third loss function is used for characterizing the difference between the discrimination of the real image and the generated image by the discriminator of the generation countermeasure network and the actual situation, and the fourth loss function is used for characterizing the difference between the discriminations of the discriminator for images generated from different training fingerprint information under the same image noise information. The loss function of the image decoder may include a fifth loss function and/or a sixth loss function, where the fifth loss function is used for characterizing the gap between the fingerprint decoding information output by the image decoder during training and the corresponding training fingerprint information, and the sixth loss function is used for characterizing the gap between the noise decoding information output by the image decoder during training and the corresponding training image noise information. The present embodiment does not impose any limitation on this.
For example, in the case where the preset recognition loss function includes the loss function of the image decoder, the generator loss function and the discriminator loss function, the preset recognition loss function L1 may be a weighted sum of six terms with weights λ1, λ2, λ3, λ4, λ5 and λ6; wherein L1 is the preset recognition loss function, G(·) and D(·) are respectively the outputs of the generator and the discriminator of the generation countermeasure network, F(·) is the output of the image decoder, z is the training image noise information of the current training batch, c is the training fingerprint information corresponding to the training image noise information z of the current training batch, x is the real image in the training real data corresponding to the training image noise information z of the current training batch, ĉ_q is the q-th fingerprint decoding information of the current training batch output by the image decoder, ẑ is the image noise decoding information output by the image decoder, c1 and c2 are any two pieces of training fingerprint information of the current training batch, c_q is the q-th training fingerprint information of the current training batch, and λ1, λ2, λ3, λ4, λ5 and λ6 are respectively preset hyperparameters.
Correspondingly, the first term of the preset recognition loss function L1 may be the first loss function, which represents the loss of the generator of the generation countermeasure network and measures the difference between the image generated by the generator and the real image; the second term may be the second loss function, which also represents the loss of the generator and limits the distance between the images generated by the generator when the same noise is combined with different fingerprints as input, so that the generated images remain consistent under the same noise and different fingerprints; the third term may be the third loss function, which represents the loss of the discriminator of the generation countermeasure network and enables the discriminator to correctly distinguish between the real image and the generated image, so as to constrain the classification of real data and false data by the discriminator; the fourth term may be the fourth loss function, which also represents the loss of the discriminator and constrains the discriminator so that it has the same degree of discrimination for images generated from different fingerprints under the same noise; the fifth term may be the fifth loss function, which represents the loss of the image decoder and constrains the fingerprint decoding network so that the GAN fingerprint output by the image decoder remains consistent with the GAN fingerprint embedded in the image; the sixth term may be the sixth loss function, which also represents the loss of the image decoder and corresponds to the Euclidean distance between the input noise information and the noise obtained by the image decoder after decoding the generated image: the closer the two are, the higher the accuracy of the image decoder, so the accuracy of the fingerprint decoding network can be constrained from another dimension. λ1, λ2, λ3, λ4, λ5 and λ6 may be preset hyperparameters; this embodiment does not limit their specific values. For example, in some embodiments several of the hyperparameters may take the value 1 and the others the value 1.5; in other embodiments λ1 to λ6 may take other values. One possible combination of these six terms is sketched below.
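Because only the role of each of the six terms is described above, the following PyTorch sketch assembles one plausible instantiation; the concrete distances (mean-squared error, binary cross-entropy), the assumption that D outputs probabilities, and the default weights are all assumptions of the sketch and are not fixed by this embodiment.

```python
import torch
import torch.nn.functional as F

def recognition_loss(x, z, c1, c2, G, D, decoder,
                     lambdas=(1.0, 1.0, 1.0, 1.0, 1.5, 1.5)):
    """One assumed instantiation of the six weighted terms described above.
    D is taken to output probabilities in (0, 1); all distance choices are assumptions."""
    l1, l2, l3, l4, l5, l6 = lambdas
    fake1, fake2 = G(z, c1), G(z, c2)           # same noise, two different fingerprints
    d_real, d_fake1, d_fake2 = D(x), D(fake1), D(fake2)
    dec_fp, dec_noise = decoder(fake1)          # fingerprint / noise decoding information

    term1 = F.mse_loss(fake1, x)                                     # generated vs real image
    term2 = F.mse_loss(fake1, fake2)                                 # consistency across fingerprints
    term3 = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
          + F.binary_cross_entropy(d_fake1, torch.zeros_like(d_fake1))  # real/fake classification
    term4 = F.mse_loss(d_fake1, d_fake2)                             # equal discrimination across fingerprints
    term5 = F.mse_loss(dec_fp, c1)                                   # decoded vs embedded fingerprint
    term6 = F.mse_loss(dec_noise, z)                                 # decoded vs input noise
    return l1*term1 + l2*term2 + l3*term3 + l4*term4 + l5*term5 + l6*term6
```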
Furthermore, the network training method provided in this embodiment may further include a training process of the fingerprint error corrector. For example, the processor may acquire error correction training data, where the error correction training data comprises error correction training fingerprint information and a label corresponding to each digit in the error correction training fingerprint information; and, according to the error correction training data, perform network training on the fingerprint error corrector by using a preset error correction loss function to obtain the trained fingerprint error corrector, so that the fingerprint decoding information output by the image decoder can be corrected by using the trained fingerprint error corrector, which cooperates with the trained image decoder to identify the true or false condition of the image to be detected.
Correspondingly, the preset error correction loss function may be the loss function used in the training process of the fingerprint error corrector. The specific content of the preset error correction loss function in this embodiment may be set by a designer; for example, the preset error correction loss function may be defined over the N convolution filling modules in the fingerprint error corrector, where N is the number of convolution filling modules in the fingerprint error corrector, each convolution filling module comprises one error correction convolution layer and one error correction activation function, y is the label corresponding to any piece of error correction training fingerprint information, ŷ is the fingerprint correction information output by the fingerprint error corrector for that error correction training fingerprint information, M is the number of features in any convolution filling module, f_ij is the feature value of the j-th feature in the i-th convolution filling module, and S(·) is the output of a mapping function, such as a sigmoid function, in the fingerprint error corrector. S(·) may indicate that the features in a certain convolution filling module are averaged and each position of the result is then output as 0 or 1 by the sigmoid function.
Accordingly, in order to enable the trained fingerprint error corrector to have error detection and correction capabilities, the error correction training data used as the training set of the fingerprint error corrector in this embodiment may include fingerprint information for error correction training (i.e., error correction training fingerprint information), such as fingerprint information in which all or only part of the positions are correct; and may also include a label corresponding to each digit in the error correction training fingerprint information, the label being used to mark the fully correct fingerprint information. A possible construction of such training samples and of the corresponding loss is sketched below.
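For illustration, the following Python sketch shows one assumed way of constructing error correction training samples (by random bit flips) and one assumed form of the loss that averages the M features of each of the N convolution filling modules, applies a sigmoid, and compares the result with the label; neither the sample construction nor the exact loss form is fixed by this embodiment.

```python
import torch
import torch.nn.functional as F

def make_error_correction_sample(fingerprint: torch.Tensor, flip_prob: float = 0.1):
    """Corrupt a 0/1 fingerprint by random bit flips; the uncorrupted fingerprint
    serves as the per-digit label (this data construction is an assumption)."""
    flips = (torch.rand_like(fingerprint) < flip_prob).float()
    corrupted = (fingerprint + flips) % 2
    return corrupted, fingerprint             # (error correction training fingerprint, label y)

def correction_loss(module_features, y):
    """Average the M features of each of the N convolution filling modules, map the
    result through a sigmoid, and penalise its distance to the label y (form assumed)."""
    loss = 0.0
    for f_i in module_features:               # f_i has shape (M, fingerprint_length)
        s_i = torch.sigmoid(f_i.mean(dim=0))  # average the M features, then sigmoid
        loss = loss + F.binary_cross_entropy(s_i, y)
    return loss / len(module_features)
```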
Further, the network training method provided in this embodiment may further include a release process of the trained generation countermeasure network. For example, the processor may acquire a generation countermeasure network download request sent by a client device, wherein the generation countermeasure network download request includes requester information; generate preset fingerprint information corresponding to the requester information according to the generation countermeasure network download request, take the requester information as traceability information, and store the preset fingerprint information and the requester information into the preset fingerprint set; and send the preset fingerprint information and the generation countermeasure network to the client device, so that a user of the client device can generate depth counterfeit images containing the content corresponding to the preset fingerprint information by utilizing the generation countermeasure network.
Correspondingly, the processor may send the fingerprint feature corresponding to the preset fingerprint information and the generation countermeasure network to the client device, or directly send the generation countermeasure network embedded with the preset fingerprint information or the corresponding fingerprint feature to the client device, which is not limited in this embodiment.
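A minimal Python sketch of the release flow described above is given below; the dictionary preset_fingerprint_set and the function name handle_download_request are illustrative only.

```python
import secrets

preset_fingerprint_set = {}   # fingerprint information -> traceability (requester) information

def handle_download_request(requester_info: str, length: int = 128) -> str:
    """On a generation countermeasure network download request, mint preset fingerprint
    information for the requester, record it for traceability, and return it so that it
    can be sent to the client together with the network."""
    fingerprint = "".join(str(secrets.randbelow(2)) for _ in range(length))
    preset_fingerprint_set[fingerprint] = requester_info
    return fingerprint

fp = handle_download_request("requester-0001")
```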
Further, the network training method provided in this embodiment may further include a generation process of the depth counterfeit image; that is, the electronic equipment (such as a server) on the training side can also provide the function of generating depth counterfeit images, so that the generation countermeasure network can be used directly after training is finished. For example, the generation process of the depth counterfeit image may include: acquiring image noise information; acquiring fingerprint characteristics corresponding to fingerprint information; and generating a depth counterfeit image by utilizing the generation countermeasure network according to the image noise information and the fingerprint characteristics; wherein the dimension of the fingerprint characteristics is the same as the input dimension of the mapping network in the generation countermeasure network.
In the embodiment of the invention, the image decoder is subjected to network training by utilizing the preset recognition loss function according to the recognition training data to obtain the trained image decoder, so that the network training of the image decoder is realized, the true and false conditions of the image to be detected can be recognized by utilizing the trained image decoder in combination with the fingerprint error corrector, and the accuracy of the recognition of the deep fake image is ensured.
Corresponding to the above method embodiments, the present invention also provides an image forgery identification apparatus, and an image forgery identification apparatus described below and an image forgery identification method described above may be referred to in correspondence with each other.
Referring to fig. 4, fig. 4 is a block diagram illustrating an image falsification recognition apparatus according to an embodiment of the present invention. The apparatus may include:
an image acquisition module 10 for acquiring an image to be detected;
the image decoding module 20 is configured to decode an image to be detected by using an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information;
the fingerprint error correction module 30 is configured to correct the fingerprint decoding information by using a fingerprint error corrector to obtain fingerprint correction information;
the fingerprint identification module 40 is configured to identify the authenticity of the image to be detected according to the fingerprint correction information and a preset fingerprint set; the preset fingerprint set comprises preset fingerprint information, and the true and false situation is a real image or a deep fake image.
In some embodiments, the image decoder includes a decoding convolutional layer, a decoding activation function, a decoding full-join layer, and a decoding normalization layer, and the image decoding module 20 may include:
The feature extraction submodule is used for extracting features of the image to be detected by utilizing the decoding convolution layer and the decoding activation function to obtain decoding extraction features;
the feature mapping sub-module is used for mapping the decoding extracted features by using the decoding full-connection layer to obtain decoding output features; the decoding output features comprise decoding fingerprint features and decoding noise features, and the dimension of the decoding extraction features is the sum of the dimension of the decoding fingerprint features and the dimension of the decoding noise features;
the normalization sub-module is used for normalizing the decoded fingerprint characteristics by utilizing the decoding normalization layer to obtain fingerprint decoding information; wherein the fingerprint decoding information is a digital sequence consisting of 0 and 1.
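The sub-module structure enumerated above can be sketched as follows in PyTorch; the channel counts, kernel sizes and the 128/512 split between decoded fingerprint features and decoded noise features are assumptions of the sketch rather than values fixed by the embodiment.

```python
import torch
import torch.nn as nn

class ImageDecoder(nn.Module):
    """Sketch of the decoder sub-modules listed above (sizes are assumptions)."""
    def __init__(self, fp_dim: int = 128, noise_dim: int = 512):
        super().__init__()
        dim = fp_dim + noise_dim
        self.features = nn.Sequential(                  # decoding convolution layers + activation
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # decoding extraction features (dim = fp_dim + noise_dim)
        )
        self.fc = nn.Linear(dim, dim)                   # decoding full-connection layer
        self.fp_dim = fp_dim

    def forward(self, image: torch.Tensor):
        out = self.fc(self.features(image))             # decoding output features
        fp_feat, noise_feat = out[:, :self.fp_dim], out[:, self.fp_dim:]
        fp_bits = torch.sigmoid(fp_feat).round()        # decoding normalisation to a 0/1 sequence
        return fp_bits, noise_feat                      # fingerprint decoding info, noise decoding info

decoder = ImageDecoder()
fingerprint_decoding_info, noise_decoding_info = decoder(torch.randn(1, 3, 64, 64))
```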
In some embodiments, the preset fingerprint set further includes trace source information corresponding to each preset fingerprint information, and the apparatus may further include:
and the tracing module is used for determining tracing information corresponding to the fingerprint decoding information according to the preset fingerprint set if the true or false condition is the depth counterfeit image.
In some embodiments, the fingerprint identification module 40 may be specifically configured to determine whether the preset countermeasure network fingerprint information that is the same as the fingerprint correction information exists in the preset fingerprint set; if yes, determining the true and false condition as a depth counterfeit image; if not, determining the true and false condition as the true image.
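For illustration, a minimal Python sketch of this membership check, together with the traceability lookup of the preceding tracing module, might look as follows; the dictionary layout of the preset fingerprint set is an assumption.

```python
from typing import Optional, Tuple

def identify(fingerprint_correction_info: str,
             preset_fingerprint_set: dict) -> Tuple[str, Optional[str]]:
    """Decide real image vs depth counterfeit image by membership of the corrected
    fingerprint in the preset fingerprint set, and return traceability information
    (e.g. requester information) when the image is a forgery."""
    if fingerprint_correction_info in preset_fingerprint_set:
        return "depth counterfeit image", preset_fingerprint_set[fingerprint_correction_info]
    return "real image", None

verdict, source = identify("0101" * 32, {"0101" * 32: "requester-0001"})
```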
In some embodiments, the fingerprint features corresponding to a certain piece of preset fingerprint information are features obtained by performing feature extraction on that preset fingerprint information by using the newly added linear layer of the generation countermeasure network.
In some embodiments, the fingerprint decoding information, the fingerprint correcting information, and the preset fingerprint information are each a sequence of numbers of a preset fingerprint length.
In some embodiments, the fingerprint error corrector includes an error correction convolution layer, an error correction activation function, an error correction full connection layer, and a mapping function.
In some embodiments, the fingerprint error correction module 30 may include:
the first conversion sub-module is used for converting the fingerprint decoding information into a decoding matrix with a preset matrix specification; the preset matrix specification is u multiplied by v, u is the number of rows, v is the number of columns, and u and v are positive integers greater than or equal to 1;
the convolution filling sub-module is used for carrying out convolution filling on the decoding matrix by utilizing the convolution filling module to obtain a final output vector with a preset dimension; each convolution filling module comprises an error correction convolution layer and an error correction activation function which are respectively corresponding, wherein the preset dimension is u multiplied by v multiplied by w, w is the number of layers, and w is a positive integer which is more than or equal to 2;
the full-connection sub-module is used for mapping the final output vector by utilizing the error correction full-connection layer to obtain a mapping matrix with a preset matrix specification;
The mapping submodule is used for obtaining the mapping probability corresponding to each matrix element in the mapping matrix by using the mapping function; acquiring an error correction matrix corresponding to the mapping matrix according to the mapping probability; wherein each matrix element in the error correction matrix is 1 or 0 respectively;
the second conversion sub-module is used for converting the error correction matrix into fingerprint correction information; wherein the fingerprint correction information is a digital sequence consisting of 0 and 1.
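The error-correction pipeline enumerated above can be sketched as follows in PyTorch; the values u=8, v=16 and w=4 and the channel progression of the convolution filling modules are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class FingerprintCorrector(nn.Module):
    """Sketch of the correction pipeline listed above; u=8, v=16, w=4 are assumed,
    so the decoded sequence length is u*v = 128."""
    def __init__(self, u: int = 8, v: int = 16, w: int = 4):
        super().__init__()
        self.u, self.v = u, v
        self.fill = nn.ModuleList([                      # w convolution filling modules
            nn.Sequential(nn.Conv2d(max(i, 1), i + 1, 3, padding=1),  # error correction convolution layer
                          nn.ReLU())                                   # error correction activation function
            for i in range(w)
        ])
        self.fc = nn.Linear(u * v * w, u * v)            # error correction full-connection layer

    def forward(self, fingerprint_decoding_info: torch.Tensor) -> torch.Tensor:
        x = fingerprint_decoding_info.view(1, 1, self.u, self.v)      # decoding matrix (u x v)
        for block in self.fill:
            x = block(x)                                               # final output vector (u x v x w)
        probs = torch.sigmoid(self.fc(x.flatten(1)))                   # mapping matrix -> mapping probabilities
        return (probs > 0.5).float().view(-1)                          # error correction matrix as 0/1 sequence

corrector = FingerprintCorrector()
fingerprint_correction_info = corrector(torch.randint(0, 2, (128,)).float())
```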
In this embodiment, by taking the fingerprint information as a part of the input of the generation countermeasure network, the embodiment of the invention enables the depth counterfeit image generated by the generation countermeasure network to contain the content corresponding to the fingerprint information, so that after the image to be detected is decoded and corrected, the fingerprint identification module 40 can identify the true or false condition of the image to be detected according to the fingerprint correction information and the preset fingerprint set, thereby realizing accurate identification of depth counterfeit images generated by the generation countermeasure network; and the fingerprint error correction module 30 corrects the fingerprint decoding information by using the fingerprint error corrector to obtain the fingerprint correction information, so that the fingerprint information obtained by decoding the image to be detected can be corrected into correct fingerprint information, thereby improving the identification accuracy of deep counterfeit images.
Corresponding to the above method embodiment, the embodiment of the present invention further provides a network training device for image forgery identification, and the network training device for image forgery identification described below and the network training method for image forgery identification described above may be referred to correspondingly.
Referring to fig. 5, fig. 5 is a block diagram of a network training device for image forgery identification according to an embodiment of the present invention. The apparatus may include:
a data acquisition module 50 for acquiring identification training data; the identification training data comprises training forgery data, or training forgery data and training real data, the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and the training depth counterfeit images are images generated by the generation countermeasure network from the training image noise information and the fingerprint features corresponding to the training fingerprint information;
the recognition training module 60 is configured to perform network training on the image decoder according to the recognition training data by using a preset recognition loss function, and obtain a trained image decoder, so as to recognize the authenticity of the image to be detected by using the trained image decoder in combination with the fingerprint error corrector.
In some embodiments, the recognition training module 60 may be specifically configured to perform joint training on the image decoder and the generating countermeasure network according to the recognition training data by using a preset recognition loss function, and acquire the trained image decoder and generating countermeasure network, so as to perform image generation on fingerprint features corresponding to the image noise information and the fingerprint information by using the trained generating countermeasure network.
In some embodiments, the recognition training module 60 may include:
and the feature extraction sub-module is used for carrying out feature extraction on the training fingerprint information by utilizing the newly added linear layer for generating the countermeasure network to obtain fingerprint features corresponding to the training fingerprint information.
In some embodiments, the preset recognition loss function includes a generator loss function of the generation countermeasure network, a discriminator loss function of the generation countermeasure network, and a loss function of the image decoder.
In some embodiments, the generator loss function includes a first loss function for characterizing the gap between an image generated by the generator of the generation countermeasure network and the real image, and a second loss function for characterizing the gap between images generated when different training fingerprint information is used as input under the same image noise information.
In some embodiments, the discriminator loss function includes a third loss function for characterizing the difference between the discrimination of the real image and the generated image by the discriminator of the generation countermeasure network and the actual situation, and a fourth loss function for characterizing the difference between the discriminations of the discriminator for images generated from different training fingerprint information under the same image noise information.
In some embodiments, the loss function of the image decoder includes a fifth loss function for characterizing the gap between the fingerprint decoding information output by the image decoder during training and the corresponding training fingerprint information, and a sixth loss function for characterizing the gap between the noise decoding information output by the image decoder during training and the corresponding training image noise information.
In some embodiments, the preset recognition loss function L1 is a weighted sum of six terms with preset hyperparameters λ1, λ2, λ3, λ4, λ5 and λ6; wherein L1 is the preset recognition loss function, G(·) and D(·) are respectively the outputs of the generator and the discriminator of the generation countermeasure network, F(·) is the output of the image decoder, z is the training image noise information of the current training batch, c is the training fingerprint information corresponding to the training image noise information z of the current training batch, x is the real image in the training real data corresponding to the training image noise information z of the current training batch, ĉ_q is the q-th fingerprint decoding information of the current training batch output by the image decoder, ẑ is the image noise decoding information output by the image decoder, c1 and c2 are any two pieces of training fingerprint information of the current training batch, c_q is the q-th training fingerprint information of the current training batch, and λ1, λ2, λ3, λ4, λ5 and λ6 are respectively preset hyperparameters.
In some embodiments, the apparatus may further comprise:
the error correction data acquisition module is used for acquiring error correction training data; the error correction training data comprises error correction training fingerprint information and labels corresponding to each digit in the error correction training fingerprint information;
and the error correction training module is used for carrying out network training on the fingerprint error corrector by utilizing a preset error correction loss function according to the error correction training data to obtain the trained fingerprint error corrector so as to carry out error correction on fingerprint decoding information output by the image decoder by utilizing the trained fingerprint error corrector.
In some embodiments, the preset error correction loss function is defined over the N convolution filling modules in the fingerprint error corrector; wherein N is the number of convolution filling modules in the fingerprint error corrector, each convolution filling module comprises an error correction convolution layer and an error correction activation function, y is the label corresponding to any piece of error correction training fingerprint information, ŷ is the fingerprint correction information output by the fingerprint error corrector for that error correction training fingerprint information, M is the number of features in any convolution filling module, f_ij is the feature value of the j-th feature in the i-th convolution filling module, and S(·) is the output of the mapping function in the fingerprint error corrector.
In some embodiments, the apparatus may further comprise:
the request acquisition module is used for acquiring a generation countermeasure network downloading request sent by the client device; wherein generating the challenge network download request includes requesting person information;
the fingerprint generation module is used for generating preset fingerprint information corresponding to the requester information according to the generated antagonism network downloading request, taking the requester information as tracing information and storing the preset fingerprint information and the requester information into a preset fingerprint set;
and the network issuing module is used for sending the preset fingerprint information and the generated countermeasure network to the client device.
In this embodiment, the recognition training module 60 performs network training on the image decoder according to the recognition training data by using the preset recognition loss function to obtain the trained image decoder, so that the true or false condition of the image to be detected can be recognized by using the trained image decoder in combination with the fingerprint error corrector, and the accuracy of recognition of the deep counterfeit image is ensured.
Corresponding to the above method embodiment, the embodiment of the present invention further provides an electronic device, and an electronic device described below and an image forgery identification method and a network training method for image forgery identification described above may be referred to correspondingly.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device may include:
a memory D1 for storing a computer program;
and the processor D2 is used for implementing the image forgery identification method and/or the network training method for image forgery identification provided by the method embodiment when executing the computer program.
Specifically, referring to fig. 7, fig. 7 is a schematic diagram of a specific structure of an electronic device according to an embodiment of the present invention, where the electronic device 410 may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 422 (e.g., one or more processors) and a memory 432, and one or more storage media 430 (e.g., one or more mass storage devices) storing application programs 442 or data 444. Wherein memory 432 and storage medium 430 may be transitory or persistent storage. The program stored on the storage medium 430 may include one or more units (not shown), each of which may include a series of instruction operations on a host. Still further, the central processor 422 may be configured to communicate with the storage medium 430 and execute a series of instruction operations in the storage medium 430 on the electronic device 410.
The electronic device 410 may also include one or more power supplies 426, one or more wired or wireless network interfaces 450, one or more input/output interfaces 458, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The electronic device 410 may be embodied as a server or as a computer device.
The steps in the image forgery identification method and/or the network training method of image forgery identification described above may be implemented by the structure of the electronic device.
Corresponding to the above method embodiments, the present invention further provides a computer readable storage medium, where a computer readable storage medium described below and an image forgery identification method and a network training method for image forgery identification described above may be referred to correspondingly to each other.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer readable storage medium according to an embodiment of the invention. The computer readable storage medium 70 has stored thereon a computer program 71, which when executed by a processor, implements the steps of the image forgery identification method and/or the network training method of image forgery identification as provided by the above-described method embodiments.
The computer readable storage medium 70 may be a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or another medium capable of storing program codes.
In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. The apparatus, the electronic device and the computer readable storage medium disclosed in the embodiments have a relatively simple description, and the relevant points refer to the description of the method section since the apparatus, the electronic device and the computer readable storage medium correspond to the method disclosed in the embodiments.
The image forgery identification method, the depth counterfeit image generation process, the network training method, the device, the electronic equipment and the computer readable storage medium provided by the invention are described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, and the description of the above embodiments is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention, and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (20)

1. An image forgery identification method, characterized by comprising:
acquiring an image to be detected;
decoding the image to be detected by using an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information;
correcting the fingerprint decoding information by using a fingerprint corrector to obtain fingerprint correction information;
identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set; wherein the preset fingerprint set comprises preset fingerprint information, and the true or false condition is a real image or a depth counterfeit image; when the true or false condition is the depth counterfeit image, the image to be detected is an image generated by a generation countermeasure network from image noise information and fingerprint characteristics corresponding to a certain piece of preset fingerprint information; and the fingerprint characteristics corresponding to a certain piece of preset fingerprint information are characteristics obtained by performing feature extraction on that preset fingerprint information by the newly added linear layer of the generation countermeasure network.
2. The image falsification recognition method of claim 1, wherein the image decoder comprises a decoding convolution layer, a decoding activation function, a decoding full-connection layer and a decoding normalization layer, wherein the decoding the image to be detected by the image decoder to obtain an image decoding result comprises:
Performing feature extraction on the image to be detected by utilizing the decoding convolution layer and the decoding activation function to obtain decoding extraction features;
mapping the decoding extracted features by using the decoding full-connection layer to obtain decoding output features; the decoding output features comprise decoding fingerprint features and decoding noise features, and the dimension of the decoding extraction features is the sum of the dimension of the decoding fingerprint features and the dimension of the decoding noise features;
normalizing the decoded fingerprint features by using the decoding normalization layer to obtain the fingerprint decoding information; wherein the fingerprint decoding information is a digital sequence consisting of 0 and 1.
3. The image forgery identification method according to claim 1, wherein the preset fingerprint set further includes traceability information corresponding to each of the preset fingerprint information, and after the identifying the true or false condition of the image to be detected according to the fingerprint correction information and the preset fingerprint set, the method further includes:
and if the true or false condition is the depth counterfeit image, determining the tracing information corresponding to the fingerprint decoding information according to the preset fingerprint set.
4. The method for recognizing counterfeit image according to claim 1, wherein said recognizing the authenticity of the image to be detected based on the fingerprint correction information and a preset fingerprint set comprises:
Judging whether the preset fingerprint set has preset countermeasure network fingerprint information which is the same as the fingerprint correction information;
if yes, determining the true and false condition as the depth counterfeit image;
if not, determining the true and false condition as the true image.
5. The image forgery identification method of claim 1, wherein the fingerprint decoding information, the fingerprint correcting information, and the preset fingerprint information are each a digital sequence of a preset fingerprint length.
6. The image forgery identification method of any of claims 1 to 5, characterized in that the fingerprint corrector comprises an error correction convolution layer, an error correction activation function, an error correction full connection layer and a mapping function.
7. The image forgery identification method of claim 6, wherein the correcting the fingerprint decoding information by the fingerprint corrector to obtain fingerprint correction information comprises:
converting the fingerprint decoding information into a decoding matrix with a preset matrix specification; the preset matrix specification is u multiplied by v, u is the number of rows, v is the number of columns, and u and v are positive integers greater than or equal to 1;
convolutionally filling the decoding matrix by using a convolutionally filling module to obtain a final output vector with a preset dimension; each convolution filling module comprises an error correction convolution layer and an error correction activation function which are respectively corresponding to each other, wherein the preset dimension is u multiplied by v multiplied by w, w is the number of layers, and w is a positive integer which is more than or equal to 2;
Mapping the final output vector by using the error correction full-connection layer to obtain a mapping matrix of the preset matrix specification;
obtaining the mapping probability corresponding to each matrix element in the mapping matrix by using the mapping function;
acquiring an error correction matrix corresponding to the mapping matrix according to the mapping probability; wherein each matrix element in the error correction matrix is 1 or 0 respectively;
converting the error correction matrix into the fingerprint correction information; wherein the fingerprint correction information is a digital sequence consisting of 0 and 1.
8. An image falsification recognition apparatus, comprising:
the image acquisition module is used for acquiring an image to be detected;
the image decoding module is used for decoding the image to be detected by utilizing an image decoder to obtain an image decoding result; wherein the image decoding result includes fingerprint decoding information;
the fingerprint error correction module is used for correcting the fingerprint decoding information by utilizing a fingerprint error corrector to obtain fingerprint correction information;
the fingerprint identification module is used for identifying the true or false condition of the image to be detected according to the fingerprint correction information and a preset fingerprint set; wherein the preset fingerprint set comprises preset fingerprint information, and the true or false condition is a real image or a depth counterfeit image; and the fingerprint characteristics corresponding to a certain piece of preset fingerprint information are characteristics obtained by performing feature extraction on that preset fingerprint information by the newly added linear layer of the generation countermeasure network.
9. A network training method for image forgery identification, comprising:
acquiring identification training data; wherein the identification training data comprises training forgery data, or training forgery data and training real data, the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and the training depth counterfeit images are images generated by a generation countermeasure network according to the training image noise information and fingerprint characteristics corresponding to the training fingerprint information;
according to the identification training data, performing joint training on the image decoder and the generation countermeasure network by using a preset identification loss function, obtaining the trained image decoder and the generated countermeasure network, identifying the true or false condition of the image to be detected by using the trained image decoder and the fingerprint error corrector, and performing image generation on fingerprint features corresponding to image noise information and fingerprint information by using the trained generated countermeasure network;
correspondingly, the performing joint training on the image decoder and the generating countermeasure network by using a preset recognition loss function according to the recognition training data to obtain the trained image decoder and generating countermeasure network comprises the following steps:
And extracting the characteristics of the training fingerprint information by utilizing the newly added linear layer of the generated countermeasure network to obtain the fingerprint characteristics corresponding to the training fingerprint information.
10. The network training method of image forgery identification of claim 9, wherein the preset identification loss function includes a generator loss function of the generation countermeasure network, a discriminator loss function of the generation countermeasure network, and a loss function of the image decoder.
11. The network training method of image forgery identification of claim 10, wherein the generator loss function includes a first loss function for characterizing a gap between an image generated by the generator generating the countermeasure network and a real image, and a second loss function for characterizing a gap of the generated image when different training fingerprint information is used as input under the same image noise information.
12. The network training method of image forgery identification of claim 10, wherein the discriminator loss function includes a third loss function for characterizing a difference between discrimination of the discriminator for generating the countermeasure network to a real image and a generated image and an actual difference, and a fourth loss function for characterizing a difference between discrimination of the discriminator to images generated by different training fingerprint information under the same image noise information.
13. The network training method of image forgery identification according to claim 10, wherein the loss function of the image decoder includes a fifth loss function for characterizing a gap between fingerprint decoding information output by the image decoder and corresponding training fingerprint information during training, and a sixth loss function for characterizing a gap between noise decoding information output by the image decoder and corresponding training image noise information during training.
14. The network training method of image forgery identification of claim 10, wherein the preset identification loss function L1 is a weighted sum of six terms with preset hyperparameters λ1, λ2, λ3, λ4, λ5 and λ6; wherein L1 is the preset identification loss function, G(·) and D(·) are respectively the outputs of the generator and the discriminator of the generation countermeasure network, F(·) is the output of the image decoder, z is the training image noise information of the current training batch, c is the training fingerprint information corresponding to the training image noise information z of the current training batch, x is the real image in the training real data corresponding to the training image noise information z of the current training batch, ĉ_q is the q-th fingerprint decoding information of the current training batch output by the image decoder, ẑ is the image noise decoding information output by the image decoder, c1 and c2 are any two pieces of training fingerprint information of the current training batch, c_q is the q-th training fingerprint information of the current training batch, and λ1, λ2, λ3, λ4, λ5 and λ6 are respectively preset hyperparameters.
15. The network training method of image forgery identification of claim 9, further comprising:
acquiring error correction training data; the error correction training data comprises error correction training fingerprint information and labels corresponding to each digit in the error correction training fingerprint information;
and according to the error correction training data, performing network training on the fingerprint error corrector by using a preset error correction loss function, and obtaining the trained fingerprint error corrector so as to perform error correction on fingerprint decoding information output by the image decoder by using the trained fingerprint error corrector.
16. The network training method of image forgery identification of claim 15, wherein the preset error correction loss function is defined over the N convolution filling modules in the fingerprint error corrector; wherein N is the number of convolution filling modules in the fingerprint error corrector, each convolution filling module comprises an error correction convolution layer and an error correction activation function, y is the label corresponding to any piece of error correction training fingerprint information, ŷ is the fingerprint correction information output by the fingerprint error corrector for that error correction training fingerprint information, M is the number of features in any convolution filling module, f_ij is the feature value of the j-th feature in the i-th convolution filling module, and S(·) is the output of the mapping function in the fingerprint error corrector.
17. A network training method for image forgery identification as claimed in any one of claims 9 to 16 further comprising:
acquiring a generation countermeasure network download request sent by a client device; wherein the generation countermeasure network download request includes requester information;
generating preset fingerprint information corresponding to the requester information according to the generation countermeasure network downloading request, taking the requester information as tracing information, and storing the preset fingerprint information and the requester information into a preset fingerprint set;
and transmitting the preset fingerprint information and the generated countermeasure network to the client device.
18. A network training device for image forgery identification, comprising:
the data acquisition module is used for acquiring identification training data; wherein the identification training data comprises training forgery data, or training forgery data and training real data, the training forgery data comprises training fingerprint information, training image noise information and/or training depth counterfeit images, and the training depth counterfeit images are images generated by a generation countermeasure network according to the training image noise information and fingerprint characteristics corresponding to the training fingerprint information;
The recognition training module is used for carrying out network training on the image decoder by utilizing a preset recognition loss function according to the recognition training data to obtain a trained image decoder so as to recognize the true or false condition of the image to be detected by utilizing the trained image decoder in combination with the fingerprint error corrector;
the recognition training module is specifically configured to perform joint training on the image decoder and the generated countermeasure network by using a preset recognition loss function according to the recognition training data, and acquire the trained image decoder and the trained generated countermeasure network, so as to perform image generation on fingerprint features corresponding to image noise information and fingerprint information by using the trained generated countermeasure network;
correspondingly, the recognition training module comprises:
and the characteristic extraction sub-module is used for extracting the characteristics of the training fingerprint information by utilizing the newly added linear layer of the generated countermeasure network to obtain the fingerprint characteristics corresponding to the training fingerprint information.
19. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image forgery identification method as claimed in any one of claims 1 to 7 and/or the network training method of image forgery identification as claimed in any one of claims 9 to 17 when executing the computer program.
20. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the image forgery identification method according to any one of claims 1 to 7 and/or the network training method of image forgery identification according to any one of claims 9 to 17.
CN202311099611.1A 2023-08-29 2023-08-29 Image forgery identification method, network training method, device, equipment and medium Active CN116824647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311099611.1A CN116824647B (en) 2023-08-29 2023-08-29 Image forgery identification method, network training method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN116824647A CN116824647A (en) 2023-09-29
CN116824647B true CN116824647B (en) 2024-01-23

Family

ID=88114892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311099611.1A Active CN116824647B (en) 2023-08-29 2023-08-29 Image forgery identification method, network training method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116824647B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809139B (en) * 2024-02-29 2024-05-03 苏州元脑智能科技有限公司 Network training method for composite image recognition and composite image recognition method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627503A (en) * 2021-07-30 2021-11-09 中国科学院计算技术研究所 Tracing method and device for generating image, model training method and device, electronic equipment and storage medium
CN113988180A (en) * 2021-10-28 2022-01-28 杭州中科睿鉴科技有限公司 Model fingerprint-based generated image tracing method
CN114663924A (en) * 2022-03-29 2022-06-24 浙江工业大学 Fake fingerprint automatic detection method based on optical coherence tomography
CN115830723A (en) * 2023-02-23 2023-03-21 苏州浪潮智能科技有限公司 Correlation method and correlation device for training set images
CN116630727A (en) * 2023-07-26 2023-08-22 苏州浪潮智能科技有限公司 Model training method, deep pseudo image detection method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints; Yu Ning et al.; arXiv; pp. 1-8 *

Also Published As

Publication number Publication date
CN116824647A (en) 2023-09-29

Similar Documents

Publication Publication Date Title
Xiao et al. Image splicing forgery detection combining coarse to refined convolutional neural network and adaptive clustering
CN102799850B (en) A kind of barcode recognition method and device
CN113435556B (en) Code generation and decoding method and anti-counterfeiting method of dot matrix code
CN116824647B (en) Image forgery identification method, network training method, device, equipment and medium
CN110766594B (en) Information hiding method and device, detection method and device and anti-counterfeiting tracing method
EP4085369A1 (en) Forgery detection of face image
CN110765795B (en) Two-dimensional code identification method and device and electronic equipment
CN106169064A (en) The image-recognizing method of a kind of reality enhancing system and system
CN110956080A (en) Image processing method and device, electronic equipment and storage medium
Krish et al. Pre‐registration of latent fingerprints based on orientation field
Mantecón et al. Visual face recognition using bag of dense derivative depth patterns
CN110401488B (en) Demodulation method and device
CN114897000A (en) Annular two-dimensional code positioning position detection generation and identification method
Liu Anti-counterfeit system based on mobile phone QR code and fingerprint
CN116822623B (en) Method, device, equipment and storage medium for generating countermeasures network joint training
CN116630727B (en) Model training method, deep pseudo image detection method, device, equipment and medium
CN113283466A (en) Instrument reading identification method and device and readable storage medium
CN110766708B (en) Image comparison method based on contour similarity
CN116798041A (en) Image recognition method and device and electronic equipment
CN115311664A (en) Method, device, medium and equipment for identifying text type in image
CN114998347A (en) Semiconductor panel corner positioning method and device
CN116129484A (en) Method, device, electronic equipment and storage medium for model training and living body detection
CN111738248B (en) Character recognition method, training method of character decoding model and electronic equipment
CN116266259A (en) Image and text structured output method and device, electronic equipment and storage medium
CN113723294A (en) Data processing method and device and object identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant