CN112364824A - Copying detection method of multi-resolution network structure - Google Patents

Copying detection method of multi-resolution network structure

Info

Publication number
CN112364824A
Authority
CN
China
Prior art keywords
module
image
sending
group
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011374565.8A
Other languages
Chinese (zh)
Inventor
徐华建
袁顺杰
施炎
汤敏伟
李�真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN202011374565.8A priority Critical patent/CN112364824A/en
Publication of CN112364824A publication Critical patent/CN112364824A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/32 Normalisation of the pattern dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a reproduction (recapture) detection method based on a multi-resolution network structure, which comprises the following steps: S1: storing the qualification material license images to be archived by using a data storage module; S2: using a data preprocessing module to perform boundary filling, Resize, filtering and Normalize operations on the image to be detected and output the preprocessed image; S3: scaling the input image through a convolution structure by using a down-sampling module; S4: extracting features from the input feature maps by using a backbone network module. In addition, because the collected data set is small, edge extraction is performed on the input pictures before model training in order to reduce noise interference within the samples; this highlights the unique moiré texture and the screen-frame features and prevents the model from being disturbed by other irrelevant information.

Description

Copying detection method of multi-resolution network structure
Technical Field
The invention relates to the technical field of electronic information, and in particular to a reproduction detection method of a multi-resolution network structure.
Background
In scenarios where a user's qualification materials must be collected on site, a fraudulent user often re-photographs a picture uploaded by someone else and stored on a mobile phone, computer or other electronic device in order to forge his or her own qualification materials. During such re-photographing, the pixel grid of the display and the sensor grid of the capturing device overlap, so the resulting picture usually carries a distinctive texture (moiré pattern). In addition, the light reflected by the screen and the screen's peripheral frame make a normal picture distinguishable from a recaptured, moiré-bearing picture.
To counter the problem of fraudulent users re-photographing pictures uploaded by others and stored on mobile phones, computers or other electronic devices in order to forge their own qualification materials, the invention adopts a deep-learning approach and builds a data set from relevant material for model training. In addition, because the collected data set is small, edge extraction is applied to the input pictures before model training in order to reduce noise interference within the samples; this highlights the distinctive moiré texture and the screen-frame features and keeps the model from being disturbed by other irrelevant information.
Since a recaptured picture differs from a normal picture mostly in small local textures, the high-level semantic features extracted by a deep network alone cannot capture these local differences. The model used in the invention therefore classifies using features extracted by convolutional networks at different resolutions and depths, and experiments show that a model with this architecture performs far better than mainstream neural networks such as ResNet, VGG, Inception, Xception and SENet.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a reproduction detection method of a multi-resolution network structure.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention discloses a reproduction detection method of a multi-resolution network structure, which comprises the following steps:
s1: storing the qualification material license images to be archived by using a data storage module;
s2: using a data preprocessing module to perform boundary filling, Resize, filtering and Normalize operations on the image to be detected and output the preprocessed image;
s3: scaling the input image through a convolution structure by using a down-sampling module;
s4: using a backbone network module to extract features from the input feature maps;
s5: splicing the multiple groups of feature vectors by using a vector splicing module;
s6: using a fully connected layer module to classify the input feature vector and output the probability that the image is a recaptured image;
s7: using a decision module to compare the input recapture probability with a preset threshold and output the final decision on whether the image is recaptured.
As a preferred embodiment of the present invention, the step S1 includes:
s1.1: storing the qualification material licenses by using a data storage module and sequentially outputting the images to be detected.
As a preferred embodiment of the present invention, the step S2 includes:
s2.1: carrying out boundary filling on the image obtained in step S1.1 to obtain an image with equal width and height;
s2.2: performing Resize scaling (to 224X224) on the image obtained in step S2.1 to obtain a resized image;
s2.3: filtering the image obtained in step S2.2 to obtain a filtered image;
s2.4: performing Normalize processing on the image obtained in step S2.3 to obtain a normalized image.
As a preferred embodiment of the present invention, the step S3 includes:
s3.1: sending the normalized image obtained in step S2.4 into the 1st layer of the down-sampling module to obtain a group of feature maps;
s3.2: sending the group of feature maps obtained in step S3.1 into the 2nd layer of the down-sampling module to obtain a group of feature maps;
s3.3: sending the group of feature maps obtained in step S3.2 into the 3rd layer of the down-sampling module to obtain a group of feature maps;
s3.4: sending the group of feature maps obtained in step S3.3 into the 4th layer of the down-sampling module to obtain a group of feature maps;
s3.5: sending the group of feature maps obtained in step S3.4 into the 5th layer of the down-sampling module to obtain a group of feature maps.
As a preferred embodiment of the present invention, the step S4 includes:
s4.1: sending the group of feature maps obtained in the step S3.1 to a 1st layer of a backbone network module to obtain a feature vector;
s4.2: sending the group of feature maps obtained in the step S3.2 into a 2nd layer of a backbone network module to obtain a feature vector;
s4.3: sending the group of feature maps obtained in the step S3.3 into a 3rd layer of a backbone network module to obtain a feature vector;
s4.4: sending the group of feature maps obtained in the step S3.4 into a 4th layer of a backbone network module to obtain a feature vector;
s4.5: sending the group of feature maps obtained in the step S3.5 into a 5th layer of a backbone network module to obtain a feature vector.
As a preferred embodiment of the present invention, the step S5 includes:
s5.1: sending the feature vectors obtained in steps S4.1, S4.2, S4.3, S4.4 and S4.5 to a vector splicing module and splicing them into one feature vector.
As a preferred embodiment of the present invention, the step S6 includes:
s6.1: sending the feature vector obtained in step S5.1 to the fully connected layer module to obtain the probability that the picture is a recaptured picture.
As a preferred embodiment of the present invention, the step S7 includes:
s7.1: sending the recapture probability obtained in step S6.1 to the decision module to obtain the final decision on whether the picture is recaptured.
Compared with the prior art, the invention has the following beneficial effects:
1. Most importantly, the invention provides a complete technical scheme for solving the reproduction detection problem.
2. The data preprocessing module of the invention consists of four different preprocessing operations. Extensive experiments showed that applying the boundary filling and filtering operations in this order improves the accuracy of the model, which is a fundamental reason the detection performance of the invention is superior to that of similar inventions.
3. By acquiring features from the image at different resolutions, the down-sampling module of the invention makes the image information richer; this is the core difference from similar inventions and a root of the invention's superior detection performance.
4. By performing deep feature extraction on images at different resolutions, the backbone network module of the invention has stronger feature extraction capability; it works best together with the down-sampling module and is likewise a root of the invention's superior detection performance.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic view of the overall scheme of the present invention;
FIG. 2 is a schematic diagram of a data storage module of the present invention;
FIG. 3 is a schematic diagram of a data pre-processing module of the present invention;
FIG. 4 is a schematic diagram of a down-sampling module of the present invention;
FIG. 5 is a DSB (Down Sample Block) diagram of the present invention;
FIG. 6 is a schematic diagram of a backbone network module of the present invention;
FIG. 7 is a schematic representation of RNB (RiskNet Block) of the present invention;
FIG. 8 is a schematic representation of CBRB (CBR Block) according to the invention;
FIG. 9 is a schematic diagram of a vector stitching module of the present invention;
FIG. 10 is a schematic diagram of a fully connected layer module of the present invention;
FIG. 11 is a schematic view of FC (Fully Connected layer) of the present invention;
FIG. 12 is a schematic diagram of a decision module of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1
The embodiment of the invention provides a reproduction detection method based on a multi-resolution network structure, which treats judging whether a qualification material license has been recaptured as an image classification problem and provides a multi-resolution network structure to detect it. In specific implementation, because a recaptured photo carries information such as moiré patterns, reflections and screen frames, image features can be obtained through the multi-resolution network structure for prediction, and whether the photo is recaptured is finally judged.
As shown in fig. 1-12, the present invention provides a reproduction detection method of a multi-resolution network structure. Fig. 1 is a schematic diagram of the entire scheme, which includes a data storage module, a data preprocessing module, a down-sampling module, a backbone network module, a vector splicing module, a fully connected layer module and a decision module. The specific process of the scheme comprises the following steps:
S1: store the data using the data storage module and output the data to be detected.
Specifically, as shown in fig. 2, in an exemplary embodiment the data storage module is used to store the images to be detected and output them to the next stage in sequence.
S2: perform several kinds of preprocessing on the image output by S1 using the data preprocessing module to obtain a preprocessed image.
Specifically, data preprocessing of the image output at S1 is a very important process. Without preprocessing, detection performed directly on the image generally cannot obtain strong recapture feature information. Preprocessing therefore highlights the relevant features in advance so that they can be extracted effectively during the subsequent feature extraction.
Fig. 3 is a schematic diagram illustrating data preprocessing of the image output at S1 using a data preprocessing module according to an exemplary embodiment, and referring to fig. 3, it includes the following steps:
s2.1: sending the image to be detected obtained in step S1 to boundary filling preprocessing to obtain a boundary-filled image;
Specifically, step S2.1 compares the width and height of the image. When the width is larger than the height, the height is expanded until the two are equal and the expanded area is filled with 0; likewise, when the width is smaller than the height, the width is expanded until the two are equal and the expanded area is filled with 0. When the width and height are already equal, the image is not expanded and keeps its original size.
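The padding rule above can be sketched as follows; this is a minimal illustration assuming the image is an H x W x C NumPy array, and the function name `pad_to_square` and the bottom/right placement of the padding are assumptions of this sketch rather than details given in the patent:

```python
import numpy as np

def pad_to_square(image: np.ndarray) -> np.ndarray:
    """Step S2.1: zero-pad the shorter side so that width and height become equal."""
    h, w = image.shape[:2]
    if h == w:
        return image  # width and height already equal: keep the original size
    size = max(h, w)
    padded = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    padded[:h, :w] = image  # the expanded area stays filled with 0
    return padded
```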
S2.2: sending the image obtained in step S2.1 to Resize (scaling) preprocessing to obtain a resized image;
Specifically, step S2.2 scales the image to a size of 224X224 in width and height.
S2.3: sending the image obtained in step S2.2 to filtering preprocessing to obtain a filtered image;
Specifically, step S2.3 filters the image with the following 3X3 kernel, whose matrix is:
[3X3 filter kernel matrix, shown only as an image in the original publication]
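The actual kernel values appear only as an image in the original publication and are not reproduced here. The sketch below only illustrates how such a fixed 3X3 filter could be applied; cv2.filter2D and the Laplacian-style placeholder kernel are assumptions of this sketch, not the patent's matrix:

```python
import cv2
import numpy as np

# Placeholder kernel: the patent's actual 3X3 matrix is shown only as an image,
# so a generic Laplacian-style edge kernel is used here purely for illustration.
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=np.float32)

def apply_filter(image: np.ndarray) -> np.ndarray:
    """Step S2.3: convolve the image with a fixed 3X3 kernel."""
    return cv2.filter2D(image, -1, kernel)
```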
s2.4: sending the image obtained in step S2.3 to Normalize (normalization) preprocessing to obtain a normalized image;
Specifically, step S2.4 first maps the pixel values of the image from the range 0-255 into 0-1 and then standardizes them into the range -1 to 1: the first step computes (image - image.min()) * (1 / (image.max() - image.min())), and the second step computes (image - mean) / std, where mean and std are preset to (0.5, 0.5, 0.5) and (0.5, 0.5, 0.5), respectively.
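A minimal sketch of this two-step normalization, assuming a NumPy array input; the helper name and the use of float32 are assumptions:

```python
import numpy as np

def normalize(image: np.ndarray,
              mean=(0.5, 0.5, 0.5),
              std=(0.5, 0.5, 0.5)) -> np.ndarray:
    """Step S2.4: min-max scale to [0, 1], then standardize with the preset mean/std."""
    image = image.astype(np.float32)
    # first step: map the pixel values into the range [0, 1]
    image = (image - image.min()) * (1.0 / (image.max() - image.min()))
    # second step: (image - mean) / std with mean = std = 0.5 maps [0, 1] into [-1, 1]
    return (image - np.asarray(mean, dtype=np.float32)) / np.asarray(std, dtype=np.float32)
```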
S3: and (4) performing down-sampling on the S2.4 output image by using a down-sampling module to obtain a down-sampled image.
Specifically, it is a very important process to perform down-sampling processing on the image output from S2.4. Without the down-sampling module, there is usually no way to obtain very comprehensive reproduction characteristic information. The down-sampling module can therefore obtain relevant features from the feature map at different scales.
Fig. 4 is a schematic diagram illustrating a down-sampling process performed on an image output from S2.4 by using a down-sampling module according to an exemplary embodiment, and referring to fig. 4, the down-sampling process includes the following steps:
s3.1: sending the image obtained in step S2.4 into the DSB of the 1st layer in the down-sampling module for processing to obtain the DSB-processed feature maps;
Specifically, step S3.1 applies the DSB of the 1st layer in the down-sampling module to the image. Fig. 5 is a schematic diagram of a DSB; this one consists of a convolution layer (kernel size: 3X3, stride: 1, output channels: 32), a ReLU, a convolution layer (kernel size: 3X3, stride: 1, output channels: 32) and a ReLU.
S3.2: sending the feature maps obtained in step S3.1 into the DSB of the 2nd layer in the down-sampling module for processing to obtain the DSB-processed feature maps;
Specifically, step S3.2 applies the DSB of the 2nd layer in the down-sampling module, which consists of a convolution layer (kernel size: 3X3, stride: 2, output channels: 32), a ReLU, a convolution layer (kernel size: 3X3, stride: 1, output channels: 64) and a ReLU.
S3.3: sending the feature maps obtained in step S3.2 into the DSB of the 3rd layer in the down-sampling module for processing to obtain the DSB-processed feature maps;
Specifically, step S3.3 applies the DSB of the 3rd layer in the down-sampling module, which consists of a convolution layer (kernel size: 3X3, stride: 2, output channels: 64), a ReLU, a convolution layer (kernel size: 3X3, stride: 1, output channels: 64) and a ReLU.
S3.4: sending the feature maps obtained in step S3.3 into the DSB of the 4th layer in the down-sampling module for processing to obtain the DSB-processed feature maps;
Specifically, step S3.4 applies the DSB of the 4th layer in the down-sampling module, which consists of a convolution layer (kernel size: 3X3, stride: 2, output channels: 64), a ReLU, a convolution layer (kernel size: 3X3, stride: 1, output channels: 64) and a ReLU.
S3.5: sending the feature maps obtained in step S3.4 into the DSB of the 5th layer in the down-sampling module for processing to obtain the DSB-processed feature maps;
Specifically, step S3.5 applies the DSB of the 5th layer in the down-sampling module, which consists of a convolution layer (kernel size: 3X3, stride: 2, output channels: 64), a ReLU, a convolution layer (kernel size: 3X3, stride: 1, output channels: 64) and a ReLU.
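Taken together, steps S3.1 to S3.5 describe five DSB layers that differ only in stride and channel counts. The PyTorch sketch below is one possible reconstruction of that structure; PyTorch itself, the class names, and the use of padding=1 (which the patent does not specify) are assumptions of this sketch:

```python
import torch
import torch.nn as nn

class DSB(nn.Module):
    """Down Sample Block (FIG. 5): conv + ReLU + conv + ReLU."""
    def __init__(self, in_ch, mid_ch, out_ch, first_stride):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=3, stride=first_stride, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class DownSamplingModule(nn.Module):
    """Five DSB layers (steps S3.1-S3.5); every layer's output is kept for the backbone."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            DSB(3,  32, 32, first_stride=1),  # 1st layer (S3.1)
            DSB(32, 32, 64, first_stride=2),  # 2nd layer (S3.2)
            DSB(64, 64, 64, first_stride=2),  # 3rd layer (S3.3)
            DSB(64, 64, 64, first_stride=2),  # 4th layer (S3.4)
            DSB(64, 64, 64, first_stride=2),  # 5th layer (S3.5)
        ])

    def forward(self, x):
        feature_maps = []
        for layer in self.layers:
            x = layer(x)
            feature_maps.append(x)  # one group of feature maps per resolution
        return feature_maps
```

Under these assumptions, a 224X224 input yields five groups of feature maps with spatial sizes of roughly 224, 112, 56, 28 and 14.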
S4: and (4) performing convolution on the output image of each stage of S3 by using a backbone network module to obtain high-dimensional characteristic map information.
Specifically, RNB processing is performed on the feature map output at each stage of S3, and high-dimensional feature semantic information is obtained by increasing the network depth. Fig. 6 is a flowchart illustrating RNB processing performed on an image output from each stage of S3 by using a backbone network module according to an exemplary embodiment, and referring to fig. 6, the RNB processing includes the following steps:
s4.1: sending the image obtained in the step S3.1 into an RNB of a 1st layer in a backbone network module for processing to obtain an image processed by the RNB;
specifically, the schematic structure of the RNB is shown in fig. 7, and is composed of four CBRB modules and an avgplog layer, where the CBRB module is shown in fig. 8 and is composed of a convolutional layer (where the core size: 3X3, step size: 1, number of output channels: 64), a BN layer and a ReLU.
S4.2: sending the image obtained in the step S3.2 into an RNB of a 2st layer in a backbone network module for processing to obtain an image processed by the RNB;
specifically, the schematic structure of the RNB is shown in fig. 7, and is composed of four CBRB modules and an avgplog layer, where the CBRB module is shown in fig. 8 and is composed of a convolutional layer (where the core size: 3X3, step size: 1, number of output channels: 64), a BN layer and a ReLU.
S4.3: sending the image obtained in the step S3.3 into an RNB of a 3st layer in a backbone network module for processing to obtain an image processed by the RNB;
specifically, the schematic structure of the RNB is shown in fig. 7, and is composed of four CBRB modules and an avgplog layer, where the CBRB module is shown in fig. 8 and is composed of a convolutional layer (where the core size: 3X3, step size: 1, number of output channels: 64), a BN layer and a ReLU.
S4.4: sending the image obtained in the step S3.4 into an RNB of a 4st layer in a backbone network module for processing to obtain an image processed by the RNB;
specifically, the schematic structure of the RNB is shown in fig. 7, and is composed of four CBRB modules and an avgplog layer, where the CBRB module is shown in fig. 8 and is composed of a convolutional layer (where the core size: 3X3, step size: 1, number of output channels: 64), a BN layer and a ReLU.
S4.5: sending the image obtained in the step S3.5 into an RNB of a 5st layer in a backbone network module for processing to obtain an image processed by the RNB;
specifically, the schematic structure of the RNB is shown in fig. 7, and is composed of four CBRB modules and an avgplog layer, where the CBRB module is shown in fig. 8 and is composed of a convolutional layer (where the core size: 3X3, step size: 1, number of output channels: 64), a BN layer and a ReLU.
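Steps S4.1 to S4.5 apply the same RNB structure to each group of feature maps. The sketch below continues the PyTorch reconstruction above; treating the AvgPool layer as a global average pool that flattens each branch into a 64-dimensional vector is an assumption (the patent does not state the pooling window), as are the class names:

```python
import torch
import torch.nn as nn

class CBRB(nn.Module):
    """CBR Block (FIG. 8): 3x3 conv (stride 1, 64 output channels) + BN + ReLU."""
    def __init__(self, in_ch, out_ch=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class RNB(nn.Module):
    """RiskNet Block (FIG. 7): four CBRB modules followed by an AvgPool layer."""
    def __init__(self, in_ch):
        super().__init__()
        self.cbrbs = nn.Sequential(CBRB(in_ch), CBRB(64), CBRB(64), CBRB(64))
        self.pool = nn.AdaptiveAvgPool2d(1)  # assumed global average pooling

    def forward(self, x):
        x = self.cbrbs(x)
        return torch.flatten(self.pool(x), 1)  # one feature vector per branch

class BackboneModule(nn.Module):
    """One RNB per down-sampling output (steps S4.1-S4.5)."""
    def __init__(self, in_channels=(32, 64, 64, 64, 64)):
        super().__init__()
        self.rnbs = nn.ModuleList([RNB(c) for c in in_channels])

    def forward(self, feature_maps):
        return [rnb(f) for rnb, f in zip(self.rnbs, feature_maps)]
```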
S5: and splicing the characteristics output by the S4 stages.
Specifically, as shown in fig. 9, the feature vector output by the 1st layer, the feature vector output by the 2st layer, the feature vector output by the 3st layer, the feature vector output by the 4st layer, and the feature vector output by the 5st layer of S4 are vector-spliced and output.
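Assuming each backbone branch yields a 64-dimensional vector as in the sketch above, the splicing of step S5 is a plain concatenation along the feature dimension; the shapes below are illustrative:

```python
import torch

# five per-branch feature vectors, e.g. each of shape (batch, 64)
vectors = [torch.randn(8, 64) for _ in range(5)]
fused = torch.cat(vectors, dim=1)  # spliced feature vector, shape (batch, 320)
```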
S6: the characteristics output at S5 are input to the fully connected module.
Specifically, referring to fig. 10, the spliced features of S5 are input into a fully-connected layer module to perform probability prediction, where the fully-connected layer module includes a drop layer, a Liner layer, and a SoftMax layer, but since the process of the embodiment, that is, the model prediction process, is described, the drop layer will not participate in the calculation at this time, and a specific flow is shown in fig. 11, which includes the following steps:
s6.1: the feature vector output in S5 is input to the Linear layer.
Specifically, the vector from S5 is input to the Linear layer, which performs a linear transformation and outputs a score.
s6.2: The score output by S6.1 is input to the SoftMax layer.
Specifically, the SoftMax layer normalizes the score output by S6.1 to a value between 0 and 1.
S7: and (6.2) inputting the output prediction probability of S6.2 into a decision module for final judgment.
Specifically, referring to fig. 11, a threshold is preset, and when the prediction probability is greater than or equal to the threshold, it is determined that the photo is turned over, otherwise, it is not.
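The classification head and decision step (S6-S7) can be sketched as follows, continuing the PyTorch reconstruction; the 320-dimensional input follows from the concatenation example above, while the Dropout probability and the 0.5 threshold are illustrative assumptions (the patent only says the threshold is preset):

```python
import torch
import torch.nn as nn

class FullyConnectedModule(nn.Module):
    """Fully connected layer module (FIG. 10/11): Dropout + Linear + SoftMax."""
    def __init__(self, in_dim=320, p_drop=0.5):
        super().__init__()
        self.dropout = nn.Dropout(p_drop)   # inactive in eval mode, as during prediction
        self.linear = nn.Linear(in_dim, 2)  # scores for "normal" vs "recaptured"
        self.softmax = nn.Softmax(dim=1)    # normalizes the scores to the range 0-1

    def forward(self, x):
        return self.softmax(self.linear(self.dropout(x)))

def decide(recapture_prob: float, threshold: float = 0.5) -> bool:
    """Decision module (S7): recaptured if and only if the probability >= threshold."""
    return recapture_prob >= threshold
```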
The invention mainly uses a multi-resolution network structure to detect recaptured pictures, and has the following technical key points:
1. Most importantly, the invention provides a complete technical scheme for solving the reproduction detection problem.
2. The data preprocessing module of the invention consists of four different preprocessing operations. Extensive experiments showed that applying the boundary filling and filtering operations in this order improves the accuracy of the model, which is a fundamental reason the detection performance of the invention is superior to that of similar inventions.
3. By acquiring features from the image at different resolutions, the down-sampling module of the invention makes the image information richer; this is the core difference from similar inventions and a root of the invention's superior detection performance.
4. By performing deep feature extraction on images at different resolutions, the backbone network module of the invention has stronger feature extraction capability; it works best together with the down-sampling module and is likewise a root of the invention's superior detection performance.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A reproduction detection method of a multi-resolution network structure is characterized by comprising the following steps:
s1: storing the qualification material license images to be archived by using a data storage module;
s2: using a data preprocessing module to perform boundary filling, Resize, filtering and Normalize operations on the image to be detected and output the preprocessed image;
s3: scaling the input image through a convolution structure by using a down-sampling module;
s4: using a backbone network module to extract features from the input feature maps;
s5: splicing the multiple groups of feature vectors by using a vector splicing module;
s6: using a fully connected layer module to classify the input feature vector and output the probability that the image is a recaptured image;
s7: using a decision module to compare the input recapture probability with a preset threshold and output the final decision on whether the image is recaptured.
2. The reproduction detection method of a multi-resolution network structure according to claim 1, wherein the step S1 includes:
s1.1: storing the qualification material licenses by using a data storage module and sequentially outputting the images to be detected.
3. The reproduction detection method of a multi-resolution network structure according to claim 2, wherein the step S2 includes:
s2.1: carrying out boundary filling on the image obtained in step S1.1 to obtain an image with equal width and height;
s2.2: carrying out Resize scaling on the image obtained in step S2.1 to obtain a resized image;
s2.3: filtering the image obtained in step S2.2 to obtain a filtered image;
s2.4: normalizing the image obtained in step S2.3 to obtain a normalized image.
4. The reproduction detection method of a multi-resolution network structure according to claim 3, wherein the step S3 includes:
s3.1: sending the normalized image obtained in step S2.4 into the 1st layer of the down-sampling module to obtain a group of feature maps;
s3.2: sending the group of feature maps obtained in step S3.1 into the 2nd layer of the down-sampling module to obtain a group of feature maps;
s3.3: sending the group of feature maps obtained in step S3.2 into the 3rd layer of the down-sampling module to obtain a group of feature maps;
s3.4: sending the group of feature maps obtained in step S3.3 into the 4th layer of the down-sampling module to obtain a group of feature maps;
s3.5: sending the group of feature maps obtained in step S3.4 into the 5th layer of the down-sampling module to obtain a group of feature maps.
5. The reproduction detection method of a multi-resolution network structure according to claim 4, wherein the step S4 includes:
s4.1: sending the group of feature maps obtained in step S3.1 into the 1st layer of the backbone network module to obtain a feature vector;
s4.2: sending the group of feature maps obtained in step S3.2 into the 2nd layer of the backbone network module to obtain a feature vector;
s4.3: sending the group of feature maps obtained in step S3.3 into the 3rd layer of the backbone network module to obtain a feature vector;
s4.4: sending the group of feature maps obtained in step S3.4 into the 4th layer of the backbone network module to obtain a feature vector;
s4.5: sending the group of feature maps obtained in step S3.5 into the 5th layer of the backbone network module to obtain a feature vector.
6. The reproduction detection method of a multi-resolution network structure according to claim 5, wherein the step S5 includes:
s5.1: sending the feature vectors obtained in steps S4.1, S4.2, S4.3, S4.4 and S4.5 to a vector splicing module and splicing them into one feature vector.
7. The reproduction detection method of a multi-resolution network structure according to claim 6, wherein the step S6 includes:
s6.1: sending the feature vector obtained in step S5.1 to the fully connected layer module to obtain the probability that the picture is a recaptured picture.
8. The reproduction detection method of a multi-resolution network structure according to claim 7, wherein the step S7 includes:
s7.1: sending the recapture probability obtained in step S6.1 to the decision module to obtain the final decision on whether the picture is recaptured.
CN202011374565.8A 2020-11-30 2020-11-30 Copying detection method of multi-resolution network structure Pending CN112364824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011374565.8A CN112364824A (en) 2020-11-30 2020-11-30 Copying detection method of multi-resolution network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011374565.8A CN112364824A (en) 2020-11-30 2020-11-30 Copying detection method of multi-resolution network structure

Publications (1)

Publication Number Publication Date
CN112364824A (en) 2021-02-12

Family

ID=74535629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011374565.8A Pending CN112364824A (en) 2020-11-30 2020-11-30 Copying detection method of multi-resolution network structure

Country Status (1)

Country Link
CN (1) CN112364824A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066894A (en) * 2022-01-17 2022-02-18 深圳爱莫科技有限公司 Detection method for display image reproduction, storage medium and processing equipment
CN114553280A (en) * 2022-02-21 2022-05-27 重庆邮电大学 CSI feedback method based on deep learning large-scale MIMO system

Similar Documents

Publication Publication Date Title
Yang et al. Constrained R-CNN: A general image manipulation detection model
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110852316B (en) Image tampering detection and positioning method adopting convolution network with dense structure
CN1330203C (en) Apparatus and method for recognizing a character image from an image screen
CN111445459B (en) Image defect detection method and system based on depth twin network
US8547438B2 (en) Apparatus, method and program for recognizing an object in an image
CN109376631A (en) A kind of winding detection method and device neural network based
CN112364824A (en) Copying detection method of multi-resolution network structure
CN115131880A (en) Multi-scale attention fusion double-supervision human face in-vivo detection method
CN116453232A (en) Face living body detection method, training method and device of face living body detection model
CN114998261A (en) Double-current U-Net image tampering detection network system and image tampering detection method thereof
CN112560734B (en) Deep learning-based reacquired video detection method, system, equipment and medium
CN113609944A (en) Silent in-vivo detection method
CN112232221A (en) Method, system and program carrier for processing human image
CN111881803A (en) Livestock face recognition method based on improved YOLOv3
CN111127327B (en) Picture inclination detection method and device
JP2004199200A (en) Pattern recognition device, imaging apparatus, information processing system, pattern recognition method, recording medium and program
CN114332955B (en) Pedestrian re-identification method and device and computer readable storage medium
WO2024025134A1 (en) A system and method for real time optical illusion photography
CN117133059B (en) Face living body detection method and device based on local attention mechanism
CN114612798B (en) Satellite image tampering detection method based on Flow model
WO2024065701A1 (en) Image inpainting method and apparatus, device, and non-transitory computer storage medium
Berthet Deep learning methods and advancements in digital image forensics
SUNITHA et al. IMAGE QUALITY ASSESSMENT FOR MAGNETIC RESONANCE IMAGING USING CNN
CN115100730A (en) Iris living body detection model training method, iris living body detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210212