CN117474903B - Image infringement detection method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN117474903B
Authority
CN
China
Prior art keywords
distortion
image
color
sampling point
loss value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311800569.1A
Other languages
Chinese (zh)
Other versions
CN117474903A (en)
Inventor
范宝余
张润泽
李仁刚
赵雅倩
郭振华
刘璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority to CN202311800569.1A priority Critical patent/CN117474903B/en
Publication of CN117474903A publication Critical patent/CN117474903A/en
Application granted granted Critical
Publication of CN117474903B publication Critical patent/CN117474903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, and particularly discloses an image infringement detection method, apparatus, device and readable storage medium. Compared with directly adding a watermark, color distortion is difficult for human eyes to recognize and difficult to disturb by common preprocessing enhancement, so the situation in which the watermark on an unauthorized image is erased is effectively avoided, while the trained binary classification probe detection model can still detect the probe, thereby detecting whether the images used in training a text-to-image model infringe copyright.

Description

Image infringement detection method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for detecting image infringement.
Background
Generative artificial intelligence (Artificial Intelligence Generated Content, AIGC) is a technology that, based on generative methods such as generative adversarial networks and large-scale pre-trained models, learns and recognizes existing data to generate related content with appropriate generalization ability. The core idea of generative artificial intelligence is to use artificial intelligence algorithms to generate content with a certain degree of originality and quality. By training a model on a large amount of data, content can be generated according to input conditions or instructions. For example, by inputting keywords, descriptions, or samples, articles, images, audio, etc. that match them can be generated.
Text-to-image generation is a major branch of generative artificial intelligence. By inputting a piece of text description, text modality information can be converted into the image modality for display; the display effect is striking and the application prospects are broad. However, because the technology is still at an early stage of development, training text-to-image models on unauthorized images is currently a widespread problem.
How to prevent unauthorized images from being used for training is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The object of the present invention is to provide an image infringement detection method, apparatus, device and readable storage medium for preventing unauthorized images from being used for training and reducing image infringement events in artificial intelligence drawing technology.
In order to solve the above technical problems, the present invention provides an image infringement detection method, including:
taking color distortion parameters as probes, calling a probe injection script to perform color distortion processing on part of the first sample images in a first sample image dataset, and training with the processed first sample image dataset to obtain a binary classification probe detection model;
taking the color distortion parameters as probes, calling the probe injection script to perform color distortion processing on an unauthorized image, and publishing the processed unauthorized image in place of the unauthorized image;
acquiring a text-to-image training image dataset corresponding to a text-to-image model training task;
identifying a probe detection result in the text-to-image training image dataset by using the binary classification probe detection model;
and obtaining a sample infringement event detection result of the text-to-image training image dataset according to the probe detection result.
In some implementations, invoking the probe injection script to color warp the input image includes:
extracting a sampling point set from the input image;
generating a color distortion offset vector for each sampling point in the set of sampling points;
calculating according to the color distortion offset vector to obtain a distortion loss value of each sampling point;
comparing the distortion loss value of the sampling point with a set loss threshold value of the input image;
if the distortion loss value of each sampling point does not exceed the set loss threshold, performing color distortion processing on each sampling point by using the current color distortion offset vector of each sampling point;
if there is a sampling point whose distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vectors and then returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector;
wherein the input image is the first sample image or the unauthorized image.
In some implementations, the extracting a set of sampling points from the input image includes:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
And taking the texture feature coordinate point set as the sampling point set.
In some implementations, the extracting a set of sampling points from the input image includes:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and sampling from the texture feature coordinate point set to obtain the sampling point set.
In some implementations, the calculating the distortion loss value of each sampling point according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
and taking the color distortion loss value as the distortion loss value.
In some implementations, the calculating the distortion loss value of each sampling point according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
and taking the natural penalty loss value as the distortion loss value.
In some implementations, the calculating the distortion loss value of each sampling point according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
Calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
and calculating the distortion loss value of the sampling point according to the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point.
In some implementations, the color distortion loss value of each sampling point is calculated according to the color distortion offset vector by the following formula:

L_color = Σ_{x ∈ P} CE(G(x + t), y_warp)

wherein L_color is the color distortion loss value, P is the set of sampling points, x is a sampling point, t is the color distortion offset vector, G is a color distortion recognition classification model trained in advance on the second sample image dataset, CE() is a cross entropy loss function, and y_warp is the class label denoting that color warping has been applied; the second sample image dataset includes second sample images subjected to color warping processing and second sample images not subjected to color warping processing.
In some implementations, the calculating the natural penalty loss value for each of the sampling points according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a peak signal-to-noise ratio index of the sampling point and a learning perception image block similarity index of the sampling point;
And calculating the natural penalty loss value according to the peak signal-to-noise ratio index and the learning perception image block similarity index.
In some implementations, the peak signal-to-noise ratio index of the sampling point and the learning perception image block similarity index of the sampling point are calculated according to the color distortion offset vector, and the natural penalty loss value is calculated by the following formula:

L_nature = λ_psnr · max(0, ε_psnr − PSNR(x + t)) + λ_lpips · max(0, LPIPS(x + t) − ε_lpips)

wherein L_nature is the natural penalty loss value, λ_psnr is the normalized coefficient of the peak signal-to-noise ratio index, λ_lpips is the normalized coefficient of the learning perception image block similarity index, ε_psnr is the threshold constant for the peak signal-to-noise ratio index, ε_lpips is the threshold constant for the learning perception image block similarity index, PSNR(x + t) is the peak signal-to-noise ratio index of the sampling point after the color distortion offset vector is added, and LPIPS(x + t) is the learning perception image block similarity index of the sampling point after the color distortion offset vector is added.
In some implementations, the normalized coefficient of the peak signal-to-noise ratio index is calculated by:

λ_psnr = 1 / ((1/(k·k)) · Σ_{i=1}^{k·k} PSNR_i)

and the normalized coefficient of the learning perception image block similarity index is calculated by:

λ_lpips = 1 / ((1/(k·k)) · Σ_{i=1}^{k·k} LPIPS_i)

wherein k·k is the total number of sampling points in the set of sampling points, PSNR_i is the peak signal-to-noise ratio index of the i-th sampling point after its corresponding color distortion offset vector is added, and LPIPS_i is the learning perception image block similarity index of the i-th sampling point after its corresponding color distortion offset vector is added.
In some implementations, the distortion loss value of the sampling point is calculated from the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point by:

L = L_color + L_nature

wherein L is the distortion loss value, L_color is the color distortion loss value, and L_nature is the natural penalty loss value.
In some implementations, the comparing the distortion loss value for the sample point to a set loss threshold for the input image includes:
determining the sampling point with the maximum distortion loss value;
if the distortion loss value of the sampling point with the largest distortion loss value does not exceed the set loss threshold, determining that the distortion loss value of each sampling point does not exceed the set loss threshold;
if the distortion loss value of the sampling point with the largest distortion loss value exceeds the set loss threshold, directly entering the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
In some implementations, the comparing the distortion loss value for the sample point to a set loss threshold for the input image includes:
determining the sampling point with the minimum distortion loss value;
if the distortion loss value of the sampling point with the minimum distortion loss value does not exceed the set loss threshold, continuously comparing the distortion loss values of the rest sampling points with the set loss threshold;
if the distortion loss value of the sampling point with the minimum distortion loss value exceeds the set loss threshold, directly entering the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
In some implementations, the generating a color warp offset vector for each sample point in the set of sample points includes:
forming a sampling point particle swarm by the sampling point set;
randomly generating the color distortion offset vector and a velocity vector for each particle in the sampling point particle swarm;
and the step of, if there is a sampling point whose distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vector and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector, includes:
if there are particles whose distortion loss value exceeds the set loss threshold, updating the color distortion offset vector of each particle according to the following formulas, and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector:

x̂_i = (r_i + t_i^r, g_i + t_i^g, b_i + t_i^b)
v_i' = ω · v_i + c_1 · (t_min − t_i) − c_2 · (t_max − t_i)
t_i' = t_i + v_i'

wherein x̂_i is the i-th particle after the color distortion processing, r_i is the red channel value of the i-th particle, g_i is the green channel value of the i-th particle, b_i is the blue channel value of the i-th particle, t_i^r is the red channel color distortion offset of the i-th particle, t_i^g is the green channel color distortion offset of the i-th particle, t_i^b is the blue channel color distortion offset of the i-th particle, ω is the velocity momentum coefficient of the sampling point particle swarm, v_i is the velocity vector of the i-th particle, c_1 and c_2 are velocity vector setting parameters, t_max is the color distortion offset vector of the particle with the largest distortion loss value in the current sampling point particle swarm, t_min is the color distortion offset vector of the particle with the smallest distortion loss value in the current sampling point particle swarm, v_i' is the updated velocity vector of the i-th particle, t_i is the current color distortion offset vector of the i-th particle, and t_i' is the updated color distortion offset vector of the i-th particle.
In some implementations, identifying the probe detection result of an input image using the binary classification probe detection model includes:
inputting the input image into a color channel for analysis to obtain the red, green and blue three-channel color values of the input image;
inputting the input image into a texture channel for analysis to obtain the texture channel value of the input image;
combining the red, green and blue color channel values with the texture channel value to obtain the color texture feature value of the input image;
inputting the color texture feature value into parallel convolutional neural networks of multiple scales, and outputting a corresponding plurality of feature maps;
adding the feature maps, and then outputting a probability value as the probe detection result through a global average pooling module, an array flattening function and a multi-layer perceptron;
wherein the input image is a first sample image in the processed first sample image dataset or an image to be detected in the text-to-image training image dataset.
In some implementations, the loss function of the binary classification probe detection model is:

L = Σ_{x ∈ D̃} L_ce(F(S(x)), 1) + Σ_{x̄ ∈ D − D̃} L_ce(F(x̄), 0)

wherein L is the loss function, x̄ is a point in the input image that has not been color warped, x is a point in the input image at which color warping is performed, S() is the color warping function, L_ce() is the cross entropy function, F() is the array flattening function, D is the set of points in the input image, and D̃ is the set of points in the input image at which color warping is performed.
In some implementations, identifying the probe detection result in the text-to-image training image dataset using the binary classification probe detection model includes:
sampling from the text-to-image training image dataset to obtain an image set to be detected;
detecting, by using the binary classification probe detection model, the probability that a sample infringement event exists for each image to be detected in the image set to be detected;
calculating the expected value of the sample infringement event for the image set to be detected according to the probability that the sample infringement event exists for each image to be detected;
obtaining the detection error rate of the binary classification probe detection model;
and the obtaining the sample infringement event detection result according to the probe detection result includes:
performing sample infringement event judgment on the text-to-image training image dataset according to the expected value and the detection error rate by using a hypothesis testing method to obtain the sample infringement event detection result.
In some implementations, the performing sample infringement event judgment on the text-to-image training image dataset according to the expected value and the detection error rate by using a hypothesis testing method to obtain the sample infringement event detection result includes:
calculating the standard deviation s of W − β − τ by:

s = sqrt((1/(m − 1)) · Σ_{i=1}^{m} (p_i − W)^2)

calculating the sample infringement event detection value t* by:

t* = (W − β − τ) · sqrt(m) / s

letting γ = 0.05, querying the t distribution value t_{1−γ}(m−1) with degree of freedom (m−1) at the 1−γ quantile;
if t* > t_{1−γ}(m−1), determining that the sample infringement event exists in the text-to-image training image dataset;
if t* ≤ t_{1−γ}(m−1), determining that the sample infringement event does not exist in the text-to-image training image dataset;
wherein s is an intermediate variable for calculating the sample infringement event detection value, W is the expected value, t* is the sample infringement event detection value, p_i is the probability that the sample infringement event exists for the i-th image to be detected, m is the number of samples of images to be detected, β is the detection error rate, τ is a threshold constant, and t_{1−γ}(m−1) is the 1−γ quantile of the t distribution with (m−1) degrees of freedom.
In order to solve the above technical problem, the present invention further provides an image infringement detection apparatus, including:
a training unit, configured to take color distortion parameters as probes, call a probe injection script to perform color distortion processing on part of the first sample images in a first sample image dataset, and train with the processed first sample image dataset to obtain a binary classification probe detection model;
a processing unit, configured to take the color distortion parameters as probes, call the probe injection script to perform color distortion processing on the unauthorized image, and publish the processed unauthorized image in place of the unauthorized image;
an acquisition unit, configured to acquire a text-to-image training image dataset corresponding to a text-to-image model training task;
a detection unit, configured to identify a probe detection result in the text-to-image training image dataset by using the binary classification probe detection model;
and a judging unit, configured to obtain a sample infringement event detection result of the text-to-image training image dataset according to the probe detection result.
In order to solve the above technical problem, the present invention further provides an image infringement detection apparatus, including:
a memory for storing a computer program;
a processor for executing the computer program, which when executed by the processor implements the steps of the image infringement detection method as described in any of the above.
To solve the above technical problem, the present invention also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image infringement detection method according to any one of the above.
According to the image infringement detection method provided by the present invention, the probe injection script is called to perform color distortion processing on part of the first sample images in the first sample image dataset, and the binary classification probe detection model is then trained; the probe injection script is also called to perform color distortion processing on the unauthorized image, and the processed unauthorized image is published in place of the unauthorized image. The binary classification probe detection model can therefore identify the probe detection result in the text-to-image training image dataset corresponding to a text-to-image model training task, so as to detect sample infringement events. Compared with directly adding a watermark, color distortion is difficult for human eyes to recognize and difficult to disturb by common preprocessing enhancement; the situation in which the watermark on an unauthorized image is erased is thus effectively avoided, while the trained binary classification probe detection model can still detect the probe. Whether the images used in training a text-to-image model infringe copyright can thereby be detected, unauthorized images are prevented from being used for training, and image infringement events in artificial intelligence drawing technology are reduced.
The present invention further provides an image infringement detection apparatus, an image infringement detection device and a readable storage medium, which have the above beneficial effects and are not described in detail here.
Drawings
For a clearer description of the embodiments of the present invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention, and that a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an image infringement detection method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the model architecture of the binary classification probe detection model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image infringement detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image infringement detection apparatus according to an embodiment of the present invention.
Detailed Description
The core of the present invention is to provide an image infringement detection method, apparatus, device and readable storage medium for preventing unauthorized images from being used for training and reducing image infringement events in artificial intelligence drawing technology.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the protection scope of the present invention.
The following describes an embodiment of the present invention.
For ease of understanding, a system architecture to which the present invention is applicable will first be described. The image infringement detection method provided by the embodiment of the invention can be applied to any computing equipment, and can also be applied to accelerator clusters and heterogeneous accelerator clusters. The accelerators employed may include, but are not limited to, graphics processors (Graphics Processing Unit, GPUs), field programmable gate arrays (Field Programmable Gate Array, FPGAs), and the like.
On the basis of the above architecture, the image infringement detection method provided by the embodiment of the invention is described below with reference to the accompanying drawings.
The second embodiment of the present invention will be described below.
Fig. 1 is a flowchart of an image infringement detection method according to an embodiment of the present invention.
As shown in fig. 1, the image infringement detection method provided by the embodiment of the invention includes:
s101: and (3) taking the color distortion parameters as probes, calling probe injection scripts to perform color distortion processing on part of the first sample images in the first sample image data set, and training by using the processed first sample image data set to obtain a two-class probe detection model.
S102: and (3) taking the color distortion parameter as a probe, calling a probe injection script to perform color distortion processing on the unauthorized image, and releasing the processed unauthorized image by replacing the unauthorized image.
S103: and acquiring a text-to-graphic training image data set corresponding to the text-to-graphic model training task.
S104: and identifying by using the two-classification probe detection model to obtain a probe detection result in the training image dataset of the literature graph.
S105: and obtaining a sample infringement event detection result of the culture chart training image data set according to the probe detection result.
In a specific implementation, there is no sequential relationship between S101 and S102, and there is no sequential relationship between S102 and S103-S105.
The existing unauthorized-image protection method is usually to add a watermark, but a watermark that is easily recognized by human eyes is also easily erased by a pirate, and thus cannot prevent unauthorized images from being stolen. In the image infringement detection method provided by the embodiment of the present invention, a probe is injected into the unauthorized image before it is published, and a binary classification probe detection model is trained to detect the probe in a text-to-image training image dataset. The method can therefore be used to check whether a sample infringement event exists in a text-to-image training image dataset before a text-to-image model is trained, and to check whether a trained text-to-image model was trained with such a dataset.
In the embodiment of the present invention, the injected probes cannot be perceived by the naked eye, can be identified by the binary classification probe detection model, and are not disturbed by common preprocessing data enhancement, thereby achieving the effectiveness, naturalness and robustness of the unauthorized data protection scheme. Considering that humans recognize objects in images mainly by shape and pay less attention to color and texture, the injected probe provided by the image infringement detection method of the embodiment of the present invention is based on a warped color mapping; this mapping differs from current mainstream preprocessing data enhancement and therefore has stronger robustness.
The probe injection script provided by the embodiment of the present invention implements a function S on an unauthorized dataset D: for any image x in D, a new image S(x) is generated by S. S(x) needs to possess three properties: S(x) is difficult for humans to perceive; S(x) easily misleads the model into producing erroneous conclusions; and S(x) is difficult to disturb by common preprocessing data enhancement.
When training the binary classification probe detection model, color distortion processing can be performed on part of the first sample images in the first sample image dataset; specifically, color distortion processing is performed on some sampling points in those first sample images, so that the processed first sample images are not easily recognized by the naked eye. Similarly, when processing unauthorized images, the probe injection script should minimize the impact of the injected probe on the quality of the original image.
Invoking the probe injection script to color warp the input image may include:
extracting a sampling point set from an input image;
generating a color distortion offset vector for each sampling point in the set of sampling points;
calculating according to the color distortion offset vector to obtain a distortion loss value of each sampling point;
Comparing the distortion loss value of the sampling point with a set loss threshold value of the input image;
if the distortion loss value of each sampling point does not exceed the set loss threshold value, performing color distortion processing on each sampling point by using the color distortion offset vector of each current sampling point;
if there is a sampling point whose distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vectors and then returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector;
wherein the input image is a first sample image or an unauthorized image.
That is, the probe injection script provided by the embodiment of the present invention should ensure that, after the probe is injected into the input image, the distortion loss value of the input image does not exceed the set loss threshold, while the color distortion between the sampling points subjected to color distortion processing and the other points of the input image can still be detected by the binary classification probe detection model.
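For illustration only, the following is a minimal Python sketch of the accept/retry injection loop described above, assuming the image is an H×W×3 uint8 array; the sample_points, gen_offsets and loss_fn arguments are hypothetical placeholders for the steps detailed in the later embodiments, not names from the patent.

```python
import numpy as np

def inject_probe(image, sample_points, gen_offsets, loss_fn, loss_threshold,
                 max_iters=100):
    """Sketch of the accept/retry probe injection loop.
    sample_points: (n, 2) integer pixel coordinates of the sampling points;
    gen_offsets:   callable(losses or None) -> (n, 3) RGB offset vectors;
    loss_fn:       callable(image, points, offsets) -> (n,) loss values."""
    offsets = gen_offsets(None)                 # initial color warp offsets
    for _ in range(max_iters):
        losses = loss_fn(image, sample_points, offsets)
        if np.all(losses <= loss_threshold):
            break                               # every sampling point acceptable
        offsets = gen_offsets(losses)           # regenerate, e.g. by a swarm update
    warped = image.astype(np.int16).copy()
    rows, cols = sample_points[:, 0], sample_points[:, 1]
    warped[rows, cols] += offsets.astype(np.int16)   # per-channel color shift
    return np.clip(warped, 0, 255).astype(np.uint8)
```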
According to the image infringement detection method provided by the embodiment of the present invention, the probe injection script is called to perform color distortion processing on part of the first sample images in the first sample image dataset, and the binary classification probe detection model is then trained; the probe injection script is also called to perform color distortion processing on the unauthorized image, and the processed unauthorized image is published in place of the unauthorized image. The binary classification probe detection model can therefore identify the probe detection result in the text-to-image training image dataset corresponding to a text-to-image model training task, so as to detect sample infringement events. Compared with directly adding a watermark, color distortion is difficult for human eyes to recognize and difficult to disturb by common preprocessing enhancement; the situation in which the watermark on an unauthorized image is erased is thus effectively avoided, while the trained binary classification probe detection model can still detect the probe. Whether the images used in training a text-to-image model infringe copyright can thereby be detected, unauthorized images are prevented from being used for training, and image infringement events in artificial intelligence drawing technology are reduced.
The following describes a third embodiment of the present invention.
Based on the above embodiments, the embodiment of the present invention further provides a color-texture adaptive probe injection strategy based on particle swarm optimization.
In the image infringement detection method provided by the embodiment of the present invention, the sampling point set may be extracted from the input image by uniform sampling, or by sampling according to the texture features of the input image. Extracting the set of sampling points from the input image may include:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and taking the texture feature coordinate point set as a sampling point set.
Alternatively, the sample may be further sampled based on texture features to obtain a set of sample points for performing color warping processing. Extracting the set of sampling points from the input image may include:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and sampling from the texture feature coordinate point set to obtain a sampling point set.
The texture feature extractor may employ a scale-invariant feature transform (SIFT) texture feature extractor. The texture feature extractor can be used to extract a set of texture feature coordinate points from the input image, and k·k coordinate points are then randomly sampled from this set to obtain the sampling point set. In practical application, k may be 10.
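As a hedged sketch of this sampling step, the snippet below uses OpenCV's SIFT detector to collect texture feature coordinates and randomly draws k·k of them; the function name and the BGR input convention are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_sampling_points(image_bgr, k=10, seed=0):
    """Extract SIFT texture-feature coordinates and randomly sample k*k of
    them as the sampling point set (k = 10 gives 100 points, as in the text)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.SIFT_create().detect(gray, None)
    coords = np.array([kp.pt for kp in keypoints])     # (x, y) coordinates
    rng = np.random.default_rng(seed)
    n = min(k * k, len(coords))
    idx = rng.choice(len(coords), size=n, replace=False)
    return coords[idx].round().astype(int)
```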
The embodiment of the present invention provides a particle swarm optimization color-texture adaptive probe injection strategy, which can adaptively inject a suitable probe into the input image based on the particle swarm algorithm. Taking the sampling points as particles and the sampling point set as the particle swarm, let x_i = (r_i, g_i, b_i) denote the red, green, blue (RGB) three-channel value of the i-th particle in the particle swarm, with corresponding warp offset vector t_i = (t_i^r, t_i^g, t_i^b); the color warp is mapped as x̂_i = x_i + t_i = (r_i + t_i^r, g_i + t_i^g, b_i + t_i^b).
In an embodiment of the present invention, generating a color warp offset vector for each sample point in the set of sample points includes:
forming a sampling point particle swarm by the sampling point set;
randomly generating color distortion offset vectors and velocity vectors for each particle in the sampling point particle swarm.
That is, for each particle i in the particle swarm, a color distortion offset vector t_i and a velocity vector v_i are randomly generated.
The distortion loss value of the input image is calculated from the color distortion offset vector, for which one of three schemes may be adopted:
Only the color distortion loss value of the sampling point may be used as the distortion loss value. In this case, calculating the distortion loss value of each sampling point according to the color distortion offset vector includes: calculating the color distortion loss value of each sampling point according to the color distortion offset vector; and taking the color distortion loss value as the distortion loss value.
Alternatively, the natural penalty loss value of the sampling point may be used as the distortion loss value. In this case, calculating the distortion loss value of each sampling point according to the color distortion offset vector may include: calculating the natural penalty loss value of each sampling point according to the color distortion offset vector; and taking the natural penalty loss value as the distortion loss value.
Alternatively, the distortion loss value may be calculated by combining the color distortion loss value and the natural penalty loss value. In this case, calculating the distortion loss value of each sampling point according to the color distortion offset vector may include: calculating the natural penalty loss value of each sampling point according to the color distortion offset vector; calculating the color distortion loss value of each sampling point according to the color distortion offset vector; and calculating the distortion loss value of the sampling point from the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point.
The color distortion loss value of each sampling point is calculated according to the color distortion offset vector by the following formula:

L_color = Σ_{x ∈ P} CE(G(x + t), y_warp)

wherein L_color is the color distortion loss value, P is the set of sampling points, x is a sampling point, t is the color distortion offset vector, G is a color distortion recognition classification model trained in advance on the second sample image dataset, CE() is a cross entropy loss function, and y_warp is the class label denoting that color warping has been applied; the second sample image dataset includes second sample images subjected to color warping processing and second sample images not subjected to color warping processing.
The color distortion recognition classification model may be any classification model, for example a ResNet50 classification model, with model weights obtained by training on the second sample image dataset, wherein E' is the set of second sample images subjected to the color warping process and E − E' is the set of second sample images that have not been color warped.
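The following sketch illustrates one plausible reading of this loss, assuming G is a two-class PyTorch classifier and that label 1 denotes the color-distorted class (an assumption; the patent's formula image is not reproduced here).

```python
import torch
import torch.nn.functional as F

def color_distortion_loss(G, patches, offsets, warped_label=1):
    """Sketch of the color distortion loss: cross-entropy of the pre-trained
    distortion-recognition classifier G on the warped sampling points.
    patches: (n, 3, h, w) image patches around the sampling points, in [0, 1];
    offsets: (n, 3, 1, 1) per-point RGB offset vectors t."""
    warped = torch.clamp(patches + offsets, 0.0, 1.0)   # x + t per channel
    logits = G(warped)                                  # (n, 2) class scores
    target = torch.full((len(warped),), warped_label, dtype=torch.long)
    return F.cross_entropy(logits, target, reduction='none')  # per-point losses
```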
Calculating the natural penalty loss value of each sampling point according to the color distortion offset vector may include: calculating the peak signal-to-noise ratio (PSNR) index of the sampling point and the learning perception image block similarity (LPIPS) index of the sampling point according to the color distortion offset vector; and calculating the natural penalty loss value according to the peak signal-to-noise ratio index and the learning perception image block similarity index.
It can be understood that, besides the peak signal-to-noise ratio index and the learning perception image block similarity index, other image quality evaluation indexes can be used to calculate the natural penalty loss value.
The peak signal-to-noise ratio index of the sampling point and the learning perception image block similarity index of the sampling point are calculated according to the color distortion offset vector, and the natural penalty loss value is calculated by the following formula:

L_nature = λ_psnr · max(0, ε_psnr − PSNR(x + t)) + λ_lpips · max(0, LPIPS(x + t) − ε_lpips)

wherein L_nature is the natural penalty loss value, λ_psnr is the normalized coefficient of the peak signal-to-noise ratio index, λ_lpips is the normalized coefficient of the learning perception image block similarity index, ε_psnr is the threshold constant for the peak signal-to-noise ratio index, ε_lpips is the threshold constant for the learning perception image block similarity index, PSNR(x + t) is the peak signal-to-noise ratio index of the sampling point after the color distortion offset vector is added, and LPIPS(x + t) is the learning perception image block similarity index of the sampling point after the color distortion offset vector is added.
The normalized coefficient of the peak signal-to-noise ratio index can be calculated by:

λ_psnr = 1 / ((1/(k·k)) · Σ_{i=1}^{k·k} PSNR_i)

The normalized coefficient of the learning perception image block similarity index can be calculated by:

λ_lpips = 1 / ((1/(k·k)) · Σ_{i=1}^{k·k} LPIPS_i)

wherein k·k is the total number of sampling points in the set of sampling points, PSNR_i is the peak signal-to-noise ratio index of the i-th sampling point after its corresponding color distortion offset vector is added, and LPIPS_i is the learning perception image block similarity index of the i-th sampling point after its corresponding color distortion offset vector is added.
If the scheme of calculating the distortion loss value by combining the color distortion loss value and the natural penalty loss value is adopted, a summation mode can be used; that is, the distortion loss value of the sampling point is calculated from the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point by:

L = L_color + L_nature

wherein L is the distortion loss value, L_color is the color distortion loss value, and L_nature is the natural penalty loss value.
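A minimal sketch of this natural penalty follows, assuming per-point PSNR and LPIPS scores have already been computed (e.g. with skimage.metrics.peak_signal_noise_ratio and the lpips package); the hinge form around the threshold constants and the reciprocal-mean normalization are reconstructions, not verbatim patent formulas.

```python
import numpy as np

def natural_penalty(psnr_vals, lpips_vals, eps_psnr=35.0, eps_lpips=0.1):
    """Sketch of the natural penalty built from per-point PSNR and LPIPS
    scores; the example thresholds are illustrative assumptions."""
    psnr_vals = np.asarray(psnr_vals, dtype=float)
    lpips_vals = np.asarray(lpips_vals, dtype=float)
    lam_psnr = 1.0 / psnr_vals.mean()        # assumed normalization coefficients
    lam_lpips = 1.0 / lpips_vals.mean()
    return (lam_psnr * np.maximum(0.0, eps_psnr - psnr_vals)
            + lam_lpips * np.maximum(0.0, lpips_vals - eps_lpips))
    # combined per-point distortion loss: L = L_color + L_nature
```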
A set loss threshold corresponding to the adopted calculation mode of the distortion loss value is set. When comparing the distortion loss values of the particles in the particle swarm with the set loss threshold, the distortion loss value of every particle can be compared with the set loss threshold individually, or the particle with the maximum or minimum distortion loss value can be screened out first for comparison. Comparing the distortion loss value of the sampling point with the set loss threshold for the input image may include:
determining a sampling point with the maximum distortion loss value;
if the distortion loss value of the sampling point with the maximum distortion loss value does not exceed the set loss threshold value, determining that the distortion loss value of each sampling point does not exceed the set loss threshold value;
if the distortion loss value of the sampling point with the maximum distortion loss value exceeds the set loss threshold value, the method directly enters the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
By comparing only the maximum distortion loss value with the set loss threshold, the amount of comparison computation is reduced.
Alternatively, comparing the distortion loss value of the sampling point with the set loss threshold value for the input image may include:
Determining a sampling point with the minimum distortion loss value;
if the distortion loss value of the sampling point with the minimum distortion loss value does not exceed the set loss threshold value, continuously comparing the distortion loss values of the rest sampling points with the set loss threshold value;
if the distortion loss value of the sampling point with the minimum distortion loss value exceeds the set loss threshold value, the method directly enters the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
By comparing only the minimum distortion loss value with the set loss threshold, the amount of comparison computation is reduced.
If there is a sampling point whose distortion loss value exceeds the set loss threshold, after regenerating the color distortion offset vector, returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector, including:
if there are particles whose distortion loss value exceeds the set loss threshold, the color distortion offset vector of each particle is updated according to the following formulas, and the process returns to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector:

x̂_i = (r_i + t_i^r, g_i + t_i^g, b_i + t_i^b)
v_i' = ω · v_i + c_1 · (t_min − t_i) − c_2 · (t_max − t_i)
t_i' = t_i + v_i'

wherein x̂_i is the i-th particle after the color distortion processing, r_i, g_i and b_i are the red, green and blue channel values of the i-th particle, t_i^r, t_i^g and t_i^b are the red, green and blue channel color distortion offsets of the i-th particle, ω is the velocity momentum coefficient of the sampling point particle swarm, v_i is the velocity vector of the i-th particle, c_1 and c_2 are velocity vector setting parameters, t_max is the color distortion offset vector of the particle with the largest distortion loss value in the current sampling point particle swarm, t_min is the color distortion offset vector of the particle with the smallest distortion loss value in the current sampling point particle swarm, v_i' is the updated velocity vector of the i-th particle, t_i is the current color distortion offset vector of the i-th particle, and t_i' is the updated color distortion offset vector of the i-th particle.
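The sketch below shows a particle swarm update consistent with the variable glossary above; attracting each offset toward the lowest-loss particle and repelling it from the highest-loss one is an assumption about the exact combination, which the garbled formula does not fully determine.

```python
import numpy as np

def pso_update(offsets, velocities, losses, omega=0.5, c1=0.5, c2=0.5, seed=None):
    """Sketch of the particle swarm update of the per-point offset vectors.
    offsets, velocities: (n, 3) arrays of RGB offsets t_i and velocities v_i;
    losses: (n,) distortion loss values per particle."""
    rng = np.random.default_rng(seed)
    t_best = offsets[np.argmin(losses)]       # smallest distortion loss
    t_worst = offsets[np.argmax(losses)]      # largest distortion loss
    r1, r2 = rng.random(2)
    new_v = (omega * velocities
             + c1 * r1 * (t_best - offsets)
             - c2 * r2 * (t_worst - offsets))
    return offsets + new_v, new_v             # (t_i', v_i')
```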
The above probe injection strategy based on particle swarm optimization can be adopted to inject probes when the probe injection script is called in S101 to perform color distortion processing on part of the first sample images in the first sample image dataset, and when the probe injection script is called in S102 to perform color distortion processing on the unauthorized image; the probes in the injected images are not easily recognized by the naked eye.
The fourth embodiment of the present invention will be described below.
Fig. 2 is a schematic diagram of the model architecture of the binary classification probe detection model according to an embodiment of the present invention.
Based on the above embodiments, this embodiment further describes the training process of the binary classification probe detection model.
As shown in fig. 2, the model architecture of the binary classification probe detection model provided by the embodiment of the present invention may include a color channel, a texture channel, a channel merging module, multi-scale parallel convolutional neural networks (R1, R2, R3, R4), a bilinear interpolation (ROI Align) module, a global average pooling (Global AVG Pooling, GAP) module, an array flattening function and a multi-layer perceptron.
In the image infringement detection method provided by the embodiment of the present invention, identifying the probe detection result of an input image using the binary classification probe detection model may include:
inputting the input image into the color channel for analysis to obtain the red, green and blue three-channel values of the input image;
inputting the input image into the texture channel for analysis to obtain the texture channel value of the input image;
combining the red, green and blue channel values with the texture channel value to obtain the color texture feature value of the input image;
inputting the color texture feature value into parallel convolutional neural networks of multiple scales, and outputting a corresponding plurality of feature maps;
adding the feature maps, and then outputting a probability value as the probe detection result through the global average pooling module, the array flattening function and the multi-layer perceptron;
wherein the input image is a first sample image in the processed first sample image dataset or an image to be detected in the text-to-image training image dataset.
The input image is input into the color channel for analysis to obtain its RGB values. The input image is input into the texture channel, where the texture channel value is calculated; the RGB channel values and the texture channel value are then combined, and four scales (224×224, 192×192, 160×160 and 128×128) are set for parallel input into the convolutional neural networks R1-R4. The network type of R is not limited; ResNet50 may be chosen. The feature map of each R network becomes 1/16 of the original size and is subsequently fixed to a size of 7×7 by the bilinear interpolation (ROI Align) module; the multi-scale feature maps are then added, and the final probability value is output through the global average pooling module, the array flattening function and the multi-layer perceptron.
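For concreteness, a minimal PyTorch sketch of this architecture follows; make_branch, the feature dimension and the use of adaptive pooling in place of the bilinear-interpolation (ROI Align) module are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbeDetector(nn.Module):
    """Minimal sketch: a texture channel is concatenated to RGB, the
    4-channel input is fed at four scales to parallel CNN branches
    (e.g. ResNet50 trunks accepting 4 channels), the feature maps are
    fixed at 7x7 and summed, then GAP + flatten + MLP emit a probability."""
    def __init__(self, make_branch, feat_dim=2048, scales=(224, 192, 160, 128)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(make_branch() for _ in scales)
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, rgb, texture):
        x = torch.cat([rgb, texture], dim=1)     # merge color + texture channels
        fused = 0.0
        for scale, branch in zip(self.scales, self.branches):
            xi = F.interpolate(x, size=(scale, scale), mode='bilinear',
                               align_corners=False)
            fused = fused + F.adaptive_avg_pool2d(branch(xi), (7, 7))
        pooled = torch.flatten(fused.mean(dim=(2, 3)), 1)   # GAP + flatten
        return torch.sigmoid(self.mlp(pooled))              # probe probability
```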
The binary classification probe detection model provided by the embodiment of the present invention is a two-class model, and its loss function may be:

L = Σ_{x ∈ D̃} L_ce(F(S(x)), 1) + Σ_{x̄ ∈ D − D̃} L_ce(F(x̄), 0)

wherein L is the loss function, x̄ is a point in the input image that has not been color warped, x is a point in the input image at which color warping is performed, S() is the color warping function, L_ce() is the cross entropy function, F() is the array flattening function, D is the set of points in the input image, and D̃ is the set of points in the input image at which color warping is performed.
The fifth embodiment of the present invention will be described below.
Through the steps of the above embodiments, a training-optimized binary classification probe detection model F can be obtained. After the error rate β of the binary classification probe detection model F is obtained by testing on a verification set, the binary classification probe detection model F can be put into practical use.
The embodiment of the present invention provides a scheme for judging the sample infringement event by means of hypothesis testing.
For a text-to-image training image dataset Q to be detected, assume that it contains unauthorized images from a dataset subjected to the probe injection process according to the above embodiments of the present invention. The detection process is as follows:
randomly sample m images {q_1, ..., q_m} from the text-to-image training image dataset Q;
for {q_1, ..., q_m}, obtain the injection probes according to the description of the above embodiments;
for each sample in {q_1, ..., q_m}, use the binary classification probe detection model F to calculate the probability value that the sample label is 1 (i.e., that the current sample meets the infringement condition): p_i = F(q_i);
calculate the expectation W of the sample infringement event over the samples, W = (1/m) · Σ_{i=1}^{m} p_i, obtaining the probe expectation of the text-to-image training image dataset.
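A minimal sketch of this sampling-and-averaging step, assuming detector is a callable returning the label-1 probability for one image; all names are illustrative.

```python
import numpy as np

def estimate_probe_expectation(detector, dataset, m=100, seed=0):
    """Draw m images from the suspect text-to-image training dataset,
    score each with the trained probe detector F (p_i = F(q_i)), and
    average the probabilities to obtain the expectation W."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(dataset), size=m, replace=False)
    probs = np.array([float(detector(dataset[i])) for i in idx])
    return probs.mean(), probs                 # W and the per-sample p_i
```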
Based on this, identifying the probe detection result in the text-to-image training image dataset using the binary classification probe detection model may include:
sampling from the text-to-image training image dataset to obtain an image set to be detected;
detecting, by using the binary classification probe detection model, the probability that a sample infringement event exists for each image to be detected in the image set to be detected;
calculating the expected value of the sample infringement event for the image set to be detected according to the probability that a sample infringement event exists for each image to be detected;
obtaining the detection error rate of the binary classification probe detection model;
and obtaining the sample infringement event detection result according to the probe detection result includes:
performing sample infringement event judgment on the text-to-image training image dataset according to the expected value and the detection error rate by using a hypothesis testing method to obtain the sample infringement event detection result.
The sample infringement event corresponds to the binomial distribution B(1, p), so by the central limit theorem the expectation W of the sample infringement event approximately fits a Gaussian distribution.
Setting the null hypothesis: H0: W ≤ β + τ.
Setting the alternative hypothesis: H1: W > β + τ.
then, a hypothesis testing method is adopted to judge the sample infringement event of the text-generated graph training image data set according to the expected value and the detection error rate, and the detection result of the sample infringement event can be obtained, which comprises the following steps:
by means ofCalculated->
By means ofCalculated->
Order the=0.05, inquiry is done with a degree of freedom of%m-1) a quantile of 1-/for each of the two groups>T distribution value>
If it is>/>Determining that the text-generated graph training image data set has a sample infringement event;
if it is≤/>Determining that the text-generated graph training image dataset does not have a sample infringement event;
wherein,to calculate the intermediate variables of the sample infringement event detection value,Wfor the desired value->For the sample infringement event detection value,mfor the number of samples of the image to be detected,βfor detecting error rate +_>For +.>The number of the sub-digits is calculated,τmay be 0.2.
In the embodiment of the present invention, after the probe is injected into the dataset, hypothesis testing in statistics is combined to reasonably judge the sample infringement event of the text-to-image training image dataset. The proof of the above hypothesis testing step is as follows:
The sample infringement event W conforms to a Gaussian distribution, so W − β − τ also conforms to a Gaussian distribution, and the t statistic is constructed:

t* = (W − β − τ) · sqrt(m) / s

wherein s is the standard deviation of W − β − τ, and t* is the t statistic value.
To reject the null hypothesis, t* > t_{1−γ}(m−1) must be satisfied, where t_{1−γ}(m−1) is the 1−γ quantile of the t distribution with degree of freedom (m−1).
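The decision rule can be sketched as follows, using SciPy's Student-t quantile; the per-sample probabilities p_i stand in for the observations whose mean is W, an assumption consistent with the construction above.

```python
import numpy as np
from scipy.stats import t as student_t

def infringement_test(probs, beta, tau=0.2, gamma=0.05):
    """One-sided test of H0: W <= beta + tau against H1: W > beta + tau.
    probs: per-sample probe probabilities p_i; beta: detector error rate;
    tau = 0.2 and gamma = 0.05 follow the values given in the text."""
    w = np.asarray(probs, dtype=float)
    m = len(w)
    s = w.std(ddof=1)                       # standard deviation of W - beta - tau
    t_star = (w.mean() - beta - tau) * np.sqrt(m) / s
    t_crit = student_t.ppf(1.0 - gamma, df=m - 1)   # t_{1-gamma}(m-1)
    return t_star > t_crit                  # True => sample infringement event
```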
Corresponding to the above method, the present invention further discloses an image infringement detection apparatus, an image infringement detection device and a readable storage medium.
The sixth embodiment of the present invention will be described.
Fig. 3 is a schematic structural diagram of an image infringement detection apparatus according to an embodiment of the present invention.
As shown in fig. 3, the image infringement detection apparatus provided in the embodiment of the present invention includes:
the training unit 301, configured to take color distortion parameters as probes, call a probe injection script to perform color distortion processing on part of the first sample images in a first sample image dataset, and train with the processed first sample image dataset to obtain a binary classification probe detection model;
the processing unit 302, configured to take the color distortion parameters as probes, call the probe injection script to perform color distortion processing on the unauthorized image, and publish the processed unauthorized image in place of the unauthorized image;
the obtaining unit 303, configured to obtain a text-to-image training image dataset corresponding to a text-to-image model training task;
the detection unit 304, configured to identify a probe detection result in the text-to-image training image dataset by using the binary classification probe detection model;
and the judging unit 305, configured to obtain a sample infringement event detection result of the text-to-image training image dataset according to the probe detection result.
In some implementations, invoking the probe injection script to color warp the input image includes:
extracting a sampling point set from an input image;
generating a color distortion offset vector for each sampling point in the set of sampling points;
calculating according to the color distortion offset vector to obtain a distortion loss value of each sampling point;
comparing the distortion loss value of the sampling point with a set loss threshold value of the input image;
if the distortion loss value of every sampling point does not exceed the set loss threshold, performing color distortion processing on each sampling point by using the current color distortion offset vector of each sampling point;
if any sampling point's distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vectors and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector;
Wherein the input image is a first sample image or an unauthorized image.
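To make the control flow concrete, the following is a minimal sketch of the injection loop, assuming hypothetical callables extract_sampling_points, gen_offsets and distortion_loss that stand in for the texture feature extractor, the offset generator and the loss computation described above; the 8-bit HWC image layout is also an assumption.

```python
import numpy as np

def inject_probe(image, extract_sampling_points, gen_offsets, distortion_loss,
                 loss_threshold, max_iters=100):
    """Sketch of the probe injection loop (all helper callables are assumed)."""
    points = extract_sampling_points(image)          # sampling point set (y, x)
    offsets = gen_offsets(points)                    # one RGB offset per point
    for _ in range(max_iters):
        losses = np.array([distortion_loss(image, p, t)
                           for p, t in zip(points, offsets)])
        if (losses <= loss_threshold).all():         # every point within budget
            break
        offsets = gen_offsets(points)                # regenerate and re-check
    warped = image.astype(np.float64).copy()
    for (y, x), t in zip(points, offsets):           # apply per-point RGB offset
        warped[y, x, :3] = np.clip(warped[y, x, :3] + t, 0, 255)
    return warped.astype(image.dtype)
```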
In some implementations, extracting a set of sampling points from an input image includes:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and taking the texture feature coordinate point set as a sampling point set.
In some implementations, extracting a set of sampling points from an input image includes:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and sampling from the texture feature coordinate point set to obtain a sampling point set.
In some implementations, the calculating the distortion loss value for each sample point based on the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
the color distortion loss value is taken as the distortion loss value.
In some implementations, the calculating the distortion loss value for each sample point based on the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
the natural penalty loss value is taken as a distortion loss value.
In some implementations, the calculating the distortion loss value for each sample point based on the color distortion offset vector includes:
Calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
and calculating the distortion loss value of the sampling point according to the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point.
In some implementations, the color distortion loss value of each sampling point is calculated from the color distortion offset vector according to the following formula:

$L_{cd} = \sum_{x \in \mathcal{X}} CE\big(G(x + t),\ 1\big)$

wherein $L_{cd}$ is the color distortion loss value, $\mathcal{X}$ is the set of sampling points, $x$ is a sampling point, $t$ is the color distortion offset vector, $G$ is a color distortion recognition classification model trained in advance on the second sample image dataset, $CE()$ is the cross entropy loss function, and label 1 denotes the color-warped class; the second sample image dataset includes second sample images subjected to color warping processing and second sample images not subjected to color warping processing.
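A short PyTorch sketch of this loss term follows, assuming G is a two-class network whose label 1 denotes "color-warped"; the batch layout, target label and mean reduction are assumptions rather than the patent's exact definition.

```python
import torch
import torch.nn.functional as F

def color_distortion_loss(G, patches, offsets, warped_label=1):
    # patches: (N, ...) sampling-point patches; offsets: matching offsets t
    warped = patches + offsets                     # x + t for each sampling point
    logits = G(warped)                             # (N, 2) class scores from G
    target = torch.full((warped.shape[0],), warped_label,
                        dtype=torch.long, device=warped.device)
    return F.cross_entropy(logits, target)         # CE over the sampling set
```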
In some implementations, the natural penalty loss value for each sample point is calculated from the color warp offset vector, including:
calculating according to the color distortion offset vector to obtain a peak signal-to-noise ratio index of the sampling point and a learning perception image block similarity index of the sampling point;
and calculating to obtain a natural penalty loss value according to the peak signal-to-noise ratio index and the learning perception image block similarity index.
In some implementations, the peak signal-to-noise ratio index of the sampling point and the learning perception image block similarity (LPIPS) index of the sampling point are calculated from the color distortion offset vector, and the natural penalty loss value is calculated according to the following formula:

$L_{nat} = \lambda_{psnr}\,\max\big(0,\ \epsilon_{psnr} - PSNR(x+t)\big) + \lambda_{lpips}\,\max\big(0,\ LPIPS(x+t) - \epsilon_{lpips}\big)$

wherein $L_{nat}$ is the natural penalty loss value, $\lambda_{psnr}$ is the normalization coefficient of the peak signal-to-noise ratio index, $\lambda_{lpips}$ is the normalization coefficient of the learning perception image block similarity index, $\epsilon_{psnr}$ is the threshold constant of the peak signal-to-noise ratio index, $\epsilon_{lpips}$ is the threshold constant of the learning perception image block similarity index, $PSNR(x+t)$ is the peak signal-to-noise ratio index after the color distortion offset vector is added to the sampling point, and $LPIPS(x+t)$ is the learning perception image block similarity index after the color distortion offset vector is added to the sampling point.
In some implementations, the normalization coefficient of the peak signal-to-noise ratio index is calculated by the following formula:

$\lambda_{psnr} = \Big(\dfrac{1}{k \cdot k}\sum_{i=1}^{k \cdot k} PSNR_i\Big)^{-1}$

and the normalization coefficient of the learning perception image block similarity index is calculated by the following formula:

$\lambda_{lpips} = \Big(\dfrac{1}{k \cdot k}\sum_{i=1}^{k \cdot k} LPIPS_i\Big)^{-1}$

wherein $k \cdot k$ is the total number of sampling points in the sampling point set, $PSNR_i$ is the peak signal-to-noise ratio index after the corresponding color distortion offset vector is added to the $i$-th sampling point, and $LPIPS_i$ is the learning perception image block similarity index after the corresponding color distortion offset vector is added to the $i$-th sampling point.
In some implementations, the distortion loss value of the sampling point is calculated from the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point by the following formula:

$L = L_{cd} + L_{nat}$

wherein $L$ is the distortion loss value, $L_{cd}$ is the color distortion loss value, and $L_{nat}$ is the natural penalty loss value.
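The sketch below combines the two terms, assuming hinge-style penalties around the PSNR and LPIPS thresholds and mean-based normalization coefficients; the default threshold values are illustrative only.

```python
import numpy as np

def natural_penalty(psnr_vals, lpips_vals, eps_psnr=30.0, eps_lpips=0.1):
    """psnr_vals / lpips_vals: per-point indices after adding the offset."""
    psnr_vals = np.asarray(psnr_vals, dtype=float)
    lpips_vals = np.asarray(lpips_vals, dtype=float)
    lam_psnr = 1.0 / psnr_vals.mean()      # normalisation over the k*k points (assumed)
    lam_lpips = 1.0 / lpips_vals.mean()
    # penalise PSNR below its threshold and LPIPS above its threshold
    return (lam_psnr * np.maximum(0.0, eps_psnr - psnr_vals)
            + lam_lpips * np.maximum(0.0, lpips_vals - eps_lpips))

def total_distortion_loss(color_loss, nat_penalty):
    return color_loss + nat_penalty        # assumed unweighted combination
```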
In some implementations, comparing the distortion loss value of the sample point to a set loss threshold for the input image includes:
determining a sampling point with the maximum distortion loss value;
if the distortion loss value of the sampling point with the maximum distortion loss value does not exceed the set loss threshold value, determining that the distortion loss value of each sampling point does not exceed the set loss threshold value;
if the distortion loss value of the sampling point with the maximum distortion loss value exceeds the set loss threshold value, the method directly enters the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
In some implementations, comparing the distortion loss value of the sample point to a set loss threshold for the input image includes:
determining a sampling point with the minimum distortion loss value;
if the distortion loss value of the sampling point with the minimum distortion loss value does not exceed the set loss threshold, continue comparing the distortion loss values of the remaining sampling points with the set loss threshold;
If the distortion loss value of the sampling point with the minimum distortion loss value exceeds the set loss threshold value, the method directly enters the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
In some implementations, generating a color warp offset vector for each sample point in a set of sample points includes:
forming a sampling point particle swarm by the sampling point set;
according to $\hat{x}_i = (r_i + \delta_{r,i},\ g_i + \delta_{g,i},\ b_i + \delta_{b,i})$, generating a color distortion offset vector and a velocity vector for each particle in the sampling point particle swarm;
if there is a sampling point whose distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vector and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector includes:
if there are particles whose distortion loss value exceeds the set loss threshold, updating the color distortion offset vector of each particle according to the following formulas, and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector:

$v_i' = \omega\, v_i + c\,(t_{\min} - t_i)$

$t_i' = t_i + v_i'$

wherein $\hat{x}_i$ is the $i$-th particle after color distortion processing, $r_i$, $g_i$ and $b_i$ are the red, green and blue channel values of the $i$-th particle, $\delta_{r,i}$, $\delta_{g,i}$ and $\delta_{b,i}$ are the red, green and blue channel color distortion offsets of the $i$-th particle, $\omega$ is the velocity momentum coefficient of the sampling point particle swarm, $v_i$ is the velocity vector of the $i$-th particle, $c$ is a velocity vector setting parameter, $t_{\max}$ is the color distortion offset vector of the particle with the largest distortion loss value in the current sampling point particle swarm, $t_{\min}$ is the color distortion offset vector of the particle with the smallest distortion loss value in the current sampling point particle swarm, $v_i'$ is the updated velocity vector of the $i$-th particle, $t_i$ is the current color distortion offset vector of the $i$-th particle, and $t_i'$ is the updated color distortion offset vector of the $i$-th particle.
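A numpy sketch of one swarm update step is given below; attraction toward the lowest-loss particle and the coefficient values are an assumed reading of the update rule, not a verbatim transcription.

```python
import numpy as np

def pso_update(offsets, velocities, losses, omega=0.7, c=1.5):
    """offsets, velocities: (N, 3) RGB arrays; losses: (N,) distortion losses."""
    t_min = offsets[np.argmin(losses)]       # offset of the lowest-loss particle
    new_v = omega * velocities + c * (t_min - offsets)   # momentum + attraction
    new_t = offsets + new_v                  # move each offset along its velocity
    return new_t, new_v
```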
In some implementations, identifying the probe detection result of an input image using the two-class probe detection model includes:
analyzing the input image over its color channels to obtain the red, green and blue channel values of the input image;
analyzing the input image over a texture channel to obtain the texture channel value of the input image;
combining the red, green and blue channel values with the texture channel value to obtain the color texture characteristic values of the input image;
inputting the color texture characteristic values into parallel convolutional neural networks of a plurality of scales, and outputting a corresponding plurality of feature maps;
after adding the feature maps, outputting a probability value as the probe detection result through a global average pooling module, an array flattening function and a multi-layer perceptron;
the input image is a first sample image in the processed first sample image data set or an image to be detected in the text-to-image training image data set.
In some implementations, the loss function of the two-class probe detection model is:

$L = \sum_{x' \in D \setminus \tilde{D}} L_{ce}\big(F(x'),\ 0\big) + \sum_{x \in \tilde{D}} L_{ce}\big(F(\delta(x)),\ 1\big)$

wherein $L$ is the loss function, $x'$ is a point in the input image that has not been color-warped, $x$ is a point in the input image that is color-warped, $\delta()$ is the color distortion function, $L_{ce}()$ is the cross entropy function, $F()$ is the probe detection network with array flattening, $D$ is the set of points in the input image, and $\tilde{D}$ is the set of color-warped points in the input image.
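A compact PyTorch sketch of such a detector follows; only the overall pipeline (parallel multi-scale convolutions, feature-map addition, global average pooling, flattening, MLP head) tracks the description, while the channel counts, kernel sizes and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class ProbeDetector(nn.Module):
    """Two-class probe detector over a 4-channel colour+texture input."""
    def __init__(self, in_ch=4, feat=32, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, feat, k, padding=k // 2) for k in scales)
        self.gap = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.head = nn.Sequential(nn.Flatten(),       # array flattening
                                  nn.Linear(feat, feat), nn.ReLU(),
                                  nn.Linear(feat, 2)) # two-class output
    def forward(self, x):                             # x: (B, 4, H, W)
        feats = sum(b(x) for b in self.branches)      # add the parallel feature maps
        return self.head(self.gap(feats))             # probe probability logits
```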
In some implementations, the detection unit 304 identifies probe detection results in the text-to-image training image dataset using the two-class probe detection model, including:
sampling from the text-to-image training image dataset to obtain an image set to be detected;
detecting, by using the two-class probe detection model, the probability that each image to be detected in the image set to be detected involves a sample infringement event;
calculating an expected value of the sample infringement event over the image set to be detected according to the probability that each image to be detected involves a sample infringement event;
obtaining the detection error rate of the two-class probe detection model;
the determination unit 305 obtains a sample infringement event detection result according to the probe detection results, including:
performing sample infringement event determination on the text-to-image training image dataset by a hypothesis testing method according to the expected value and the detection error rate, to obtain the sample infringement event detection result.
In some implementations, the determination unit 305 performs sample infringement event determination on the text-to-image training image dataset by a hypothesis testing method according to the expected value and the detection error rate, to obtain the sample infringement event detection result, including:
by means of $\xi = W - \beta - \tau$, calculating $\xi$;
by means of $\hat{t} = \dfrac{\xi\sqrt{m}}{s}$, calculating $\hat{t}$, where $s$ is the sample standard deviation;
letting $\gamma = 0.05$, and querying the t-distribution value $t_{1-\gamma}(m-1)$ at the $1-\gamma$ quantile with $(m-1)$ degrees of freedom;
if $\hat{t} > t_{1-\gamma}(m-1)$, determining that the text-to-image training image dataset has a sample infringement event;
if $\hat{t} \le t_{1-\gamma}(m-1)$, determining that the text-to-image training image dataset does not have a sample infringement event;
wherein $\xi$ is an intermediate variable for calculating the sample infringement event detection value, $W$ is the expected value, $\hat{t}$ is the sample infringement event detection value, $m$ is the number of sampled images to be detected, $\beta$ is the detection error rate, and $t_{1-\gamma}(m-1)$ is the t-distribution value at the $1-\gamma$ quantile.
Since the embodiments of the apparatus portion correspond to the embodiments of the method portion, reference is made to the description of the method embodiments for the apparatus embodiments, which is not repeated herein.
The seventh embodiment of the present invention will be described.
Fig. 4 is a schematic structural diagram of an image infringement detection apparatus according to an embodiment of the present invention.
As shown in fig. 4, an image infringement detection apparatus provided by an embodiment of the present invention includes:
a memory 410 for storing a computer program 411;
a processor 420 for executing a computer program 411, which computer program 411 when executed by the processor 420 implements the steps of the image infringement detection method according to any of the embodiments described above.
The processor 420 may include one or more processing cores, such as a 3-core processor or an 8-core processor. The processor 420 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA). The processor 420 may also include a main processor and a coprocessor: the main processor, also referred to as a central processing unit (CPU), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 420 may be integrated with a graphics processing unit (GPU), which is responsible for rendering the content to be displayed by the display screen. In some embodiments, the processor 420 may also include an artificial intelligence (AI) processor for processing computing operations related to machine learning.
The memory 410 may include one or more readable storage media, which may be non-transitory. The memory 410 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 410 at least stores a computer program 411 which, after being loaded and executed by the processor 420, can implement the relevant steps of the image infringement detection method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 410 may further include an operating system 412 and data 413, among others, which may be stored transiently or permanently. The operating system 412 may be Windows. The data 413 may include, but is not limited to, data related to the above-described method.
In some embodiments, the image infringement detection device may also include a display 430, a power source 440, a communication interface 450, an input-output interface 460, a sensor 470, and a communication bus 480.
It will be appreciated by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the image infringement detection device, which may include more or fewer components than those illustrated.
The image infringement detection device provided by the embodiment of the invention includes the memory and the processor described above; when the processor executes the program stored in the memory, it can implement the image infringement detection method, with the same effects as described above.
The eighth embodiment of the present invention will be described.
It should be noted that the apparatus and device embodiments described above are merely exemplary, and for example, the division of modules is merely a logic function division, and there may be other division manners in actual implementation, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms. The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium for performing all or part of the steps of the method according to the embodiments of the present invention.
To this end, an embodiment of the present invention also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image infringement detection method described above.
The readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The computer program included in the readable storage medium provided in this embodiment can implement the steps of the image infringement detection method described above when executed by a processor, and the same effects are achieved.
The image infringement detection method, apparatus, device and readable storage medium provided by the invention have been described in detail above. In this description, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. The apparatus, device and readable storage medium disclosed in the embodiments are described relatively briefly since they correspond to the methods disclosed in the embodiments, and the relevant points can be found in the description of the method section. It should be noted that it will be apparent to those skilled in the art that the present invention may be modified and practiced without departing from the spirit of the invention.
It should also be noted that in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Claims (21)

1. An image infringement detection method, comprising:
taking the color distortion parameters as probes, calling probe injection scripts to perform color distortion processing on part of first sample images in the first sample image data set, and training by using the processed first sample image data set to obtain a two-class probe detection model;
taking the color distortion parameter as a probe, calling the probe injection script to perform color distortion processing on an unauthorized image, and replacing the unauthorized image with the processed unauthorized image for distribution;
acquiring a text-to-image training image dataset corresponding to a text-to-image model training task;
identifying probe detection results in the text-to-image training image dataset by using the two-class probe detection model;
obtaining a sample infringement event detection result of the text-to-image training image dataset according to the probe detection results;
the method for performing color distortion processing on the input image by calling the probe injection script comprises the following steps:
extracting a sampling point set from the input image;
generating a color distortion offset vector for each sampling point in the set of sampling points;
calculating according to the color distortion offset vector to obtain a distortion loss value of each sampling point;
Comparing the distortion loss value of the sampling point with a set loss threshold value of the input image;
if the distortion loss value of every sampling point does not exceed the set loss threshold, performing color distortion processing on each sampling point by using the current color distortion offset vector of each sampling point;
if any sampling point's distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vectors and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector;
wherein the input image is the first sample image or the unauthorized image;
the generating a color warp offset vector for each sample point in the set of sample points includes:
forming a sampling point particle swarm by the sampling point set;
according to $\hat{x}_i = (r_i + \delta_{r,i},\ g_i + \delta_{g,i},\ b_i + \delta_{b,i})$, generating the color distortion offset vector and a velocity vector for each particle in the sampling point particle swarm;
wherein $\hat{x}_i$ is the $i$-th particle after color distortion processing, $r_i$, $g_i$ and $b_i$ are the red, green and blue channel values of the $i$-th particle, and $\delta_{r,i}$, $\delta_{g,i}$ and $\delta_{b,i}$ are the red, green and blue channel color distortion offsets of the $i$-th particle.
2. The method of claim 1, wherein the extracting the set of sampling points from the input image comprises:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and taking the texture feature coordinate point set as the sampling point set.
3. The method of claim 1, wherein the extracting the set of sampling points from the input image comprises:
extracting a texture feature coordinate point set in the input image by using a texture feature extractor;
and sampling from the texture feature coordinate point set to obtain the sampling point set.
4. The method according to claim 1, wherein the calculating a distortion loss value for each of the sampling points according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
And taking the color distortion loss value as the distortion loss value.
5. The method according to claim 1, wherein the calculating a distortion loss value for each of the sampling points according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
and taking the natural penalty loss value as the distortion loss value.
6. The method according to claim 1, wherein the calculating a distortion loss value for each of the sampling points according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a natural penalty loss value of each sampling point;
calculating according to the color distortion offset vector to obtain a color distortion loss value of each sampling point;
and calculating the distortion loss value of the sampling point according to the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point.
7. The method according to claim 4 or 6, wherein the color distortion loss value of each sampling point is calculated according to the color distortion offset vector by the following formula:

$L_{cd} = \sum_{x \in \mathcal{X}} CE\big(G(x + t),\ 1\big)$

wherein $L_{cd}$ is the color distortion loss value, $\mathcal{X}$ is the set of sampling points, $x$ is a sampling point, $t$ is the color distortion offset vector, $G$ is a color distortion recognition classification model trained in advance on the second sample image dataset, $CE()$ is the cross entropy loss function, and label 1 denotes the color-warped class; the second sample image dataset includes second sample images subjected to color warping processing and second sample images not subjected to color warping processing.
8. The method according to claim 5 or 6, wherein the calculating the natural penalty loss value of each sampling point according to the color distortion offset vector includes:
calculating according to the color distortion offset vector to obtain a peak signal-to-noise ratio index of the sampling point and a learning perception image block similarity index of the sampling point;
and calculating the natural penalty loss value according to the peak signal-to-noise ratio index and the learning perception image block similarity index.
9. The method for detecting image infringement according to claim 8, wherein the peak signal-to-noise ratio index of the sampling point and the learning perception image block similarity index of the sampling point are calculated according to the color distortion offset vector, and the natural penalty loss value is calculated according to the following formula:

$L_{nat} = \lambda_{psnr}\,\max\big(0,\ \epsilon_{psnr} - PSNR(x+t)\big) + \lambda_{lpips}\,\max\big(0,\ LPIPS(x+t) - \epsilon_{lpips}\big)$

wherein $L_{nat}$ is the natural penalty loss value, $\lambda_{psnr}$ is the normalization coefficient of the peak signal-to-noise ratio index, $\lambda_{lpips}$ is the normalization coefficient of the learning perception image block similarity index, $\epsilon_{psnr}$ is the threshold constant of the peak signal-to-noise ratio index, $\epsilon_{lpips}$ is the threshold constant of the learning perception image block similarity index, $PSNR(x+t)$ is the peak signal-to-noise ratio index after the color distortion offset vector is added to the sampling point, and $LPIPS(x+t)$ is the learning perception image block similarity index after the color distortion offset vector is added to the sampling point.
10. The method of claim 9, wherein the normalization coefficient of the peak signal-to-noise ratio index is calculated by the following formula:

$\lambda_{psnr} = \Big(\dfrac{1}{k \cdot k}\sum_{i=1}^{k \cdot k} PSNR_i\Big)^{-1}$

and the normalization coefficient of the learning perception image block similarity index is calculated by the following formula:

$\lambda_{lpips} = \Big(\dfrac{1}{k \cdot k}\sum_{i=1}^{k \cdot k} LPIPS_i\Big)^{-1}$

wherein $k \cdot k$ is the total number of sampling points in the sampling point set, $PSNR_i$ is the peak signal-to-noise ratio index after the corresponding color distortion offset vector is added to the $i$-th sampling point, $LPIPS_i$ is the learning perception image block similarity index after the corresponding color distortion offset vector is added to the $i$-th sampling point, $\epsilon_{psnr}$ is the threshold constant of the peak signal-to-noise ratio index, and $\epsilon_{lpips}$ is the threshold constant of the learning perception image block similarity index.
11. The method according to claim 6, wherein the distortion loss value of the sampling point is calculated from the color distortion loss value of the sampling point and the natural penalty loss value of the sampling point by the following formula:

$L = L_{cd} + L_{nat}$

wherein $L$ is the distortion loss value, $L_{cd}$ is the color distortion loss value, and $L_{nat}$ is the natural penalty loss value.
12. The image infringement detection method of claim 1, wherein the comparing the distortion loss value of the sampling point with a set loss threshold for the input image includes:
determining the sampling point with the maximum distortion loss value;
if the distortion loss value of the sampling point with the largest distortion loss value does not exceed the set loss threshold, determining that the distortion loss value of each sampling point does not exceed the set loss threshold;
if the distortion loss value of the sampling point with the largest distortion loss value exceeds the set loss threshold, directly entering the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
13. The image infringement detection method of claim 1, wherein the comparing the distortion loss value of the sampling point with a set loss threshold for the input image includes:
determining the sampling point with the minimum distortion loss value;
if the distortion loss value of the sampling point with the minimum distortion loss value does not exceed the set loss threshold, continuing to compare the distortion loss values of the remaining sampling points with the set loss threshold;
if the distortion loss value of the sampling point with the minimum distortion loss value exceeds the set loss threshold, directly entering the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector.
14. The method according to claim 1, wherein, if there is a sampling point whose distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vector and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector comprises:

if there are particles whose distortion loss value exceeds the set loss threshold, updating the color distortion offset vector of each particle according to the following formulas, and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector:

$v_i' = \omega\, v_i + c\,(t_{\min} - t_i)$

$t_i' = t_i + v_i'$

wherein $\omega$ is the velocity momentum coefficient of the sampling point particle swarm, $v_i$ is the velocity vector of the $i$-th particle, $c$ is a velocity vector setting parameter, $t_{\max}$ is the color distortion offset vector of the particle with the largest distortion loss value in the current sampling point particle swarm, $t_{\min}$ is the color distortion offset vector of the particle with the smallest distortion loss value in the current sampling point particle swarm, $v_i'$ is the updated velocity vector of the $i$-th particle, $t_i$ is the current color distortion offset vector of the $i$-th particle, and $t_i'$ is the updated color distortion offset vector of the $i$-th particle.
15. The image infringement detection method of claim 1, wherein identifying the probe detection result of an input image using the two-class probe detection model comprises:

analyzing the input image over its color channels to obtain the red, green and blue channel values of the input image;

analyzing the input image over a texture channel to obtain the texture channel value of the input image;

combining the red, green and blue channel values with the texture channel value to obtain color texture characteristic values of the input image;

inputting the color texture characteristic values into parallel convolutional neural networks of a plurality of scales, and outputting a corresponding plurality of feature maps;

after adding the feature maps, outputting a probability value as the probe detection result through a global average pooling module, an array flattening function and a multi-layer perceptron;
the input image is the first sample image in the processed first sample image data set or an image to be detected in the text-to-image training image data set.
16. The method of claim 15, wherein the loss function of the two-class probe detection model is:

$L = \sum_{x' \in D \setminus \tilde{D}} L_{ce}\big(F(x'),\ 0\big) + \sum_{x \in \tilde{D}} L_{ce}\big(F(\delta(x)),\ 1\big)$

wherein $L$ is the loss function, $x'$ is a point in the input image that has not been color-warped, $x$ is a point in the input image that is color-warped, $\delta()$ is the color distortion function, $L_{ce}()$ is the cross entropy function, $F()$ is the probe detection network with array flattening, $D$ is the set of points in the input image, and $\tilde{D}$ is the set of color-warped points in the input image.
17. The method for detecting image infringement according to claim 1, wherein identifying probe detection results in the text-to-image training image dataset by using the two-class probe detection model comprises:

sampling from the text-to-image training image dataset to obtain an image set to be detected;

detecting, by using the two-class probe detection model, the probability that each image to be detected in the image set to be detected involves a sample infringement event;

calculating an expected value of the sample infringement event over the image set to be detected according to the probability that each image to be detected involves a sample infringement event;

obtaining the detection error rate of the two-class probe detection model;

the obtaining the sample infringement event detection result according to the probe detection result comprises:

performing sample infringement event determination on the text-to-image training image dataset by a hypothesis testing method according to the expected value and the detection error rate, to obtain the detection result of the sample infringement event.
18. The method for detecting image infringement as in claim 17, wherein the performing sample infringement event determination on the text-to-image training image dataset by a hypothesis testing method according to the expected value and the detection error rate, to obtain the detection result of the sample infringement event, comprises:

by means of $\xi = W - \beta - \tau$, calculating $\xi$;

by means of $\hat{t} = \dfrac{\xi\sqrt{m}}{s}$, calculating $\hat{t}$, wherein $s$ is the sample standard deviation;

letting $\gamma = 0.05$, and querying the t-distribution value $t_{1-\gamma}(m-1)$ at the $1-\gamma$ quantile with $(m-1)$ degrees of freedom;

if $\hat{t} > t_{1-\gamma}(m-1)$, determining that the sample infringement event exists in the text-to-image training image dataset;

if $\hat{t} \le t_{1-\gamma}(m-1)$, determining that the sample infringement event does not exist in the text-to-image training image dataset;

wherein $\xi$ is an intermediate variable for calculating the sample infringement event detection value, $W$ is the expected value, $\hat{t}$ is the sample infringement event detection value, $m$ is the number of sampled images to be detected, $\beta$ is the detection error rate, $t_{1-\gamma}(m-1)$ is the t-distribution value at the $1-\gamma$ quantile, and $\tau$ is a margin constant.
19. An image infringement detection apparatus, comprising:
the training unit is used for taking a color distortion parameter as a probe, calling a probe injection script to perform color distortion processing on part of the first sample images in the first sample image dataset, and training a two-class probe detection model using the processed first sample image dataset;
the processing unit is used for taking the color distortion parameter as a probe, calling the probe injection script to perform color distortion processing on the unauthorized image, and replacing the unauthorized image with the processed unauthorized image for distribution;
the acquisition unit is used for acquiring a text-to-image training image dataset corresponding to a text-to-image model training task;
the detection unit is used for identifying probe detection results in the text-to-image training image dataset by using the two-class probe detection model;
the determination unit is used for obtaining a sample infringement event detection result of the text-to-image training image dataset according to the probe detection results;
the method for performing color distortion processing on the input image by calling the probe injection script comprises the following steps:
extracting a sampling point set from the input image;
generating a color distortion offset vector for each sampling point in the set of sampling points;
calculating according to the color distortion offset vector to obtain a distortion loss value of each sampling point;
comparing the distortion loss value of the sampling point with a set loss threshold value of the input image;
if the distortion loss value of every sampling point does not exceed the set loss threshold, performing color distortion processing on each sampling point by using the current color distortion offset vector of each sampling point;
if any sampling point's distortion loss value exceeds the set loss threshold, regenerating the color distortion offset vectors and returning to the step of calculating the distortion loss value of each sampling point according to the color distortion offset vector;
wherein the input image is the first sample image or the unauthorized image;
The generating a color warp offset vector for each sample point in the set of sample points includes:
forming a sampling point particle swarm by the sampling point set;
according to $\hat{x}_i = (r_i + \delta_{r,i},\ g_i + \delta_{g,i},\ b_i + \delta_{b,i})$, generating the color distortion offset vector and a velocity vector for each particle in the sampling point particle swarm;
wherein $\hat{x}_i$ is the $i$-th particle after color distortion processing, $r_i$, $g_i$ and $b_i$ are the red, green and blue channel values of the $i$-th particle, and $\delta_{r,i}$, $\delta_{g,i}$ and $\delta_{b,i}$ are the red, green and blue channel color distortion offsets of the $i$-th particle.
20. An image infringement detection apparatus, characterized by comprising:
a memory for storing a computer program;
a processor for executing the computer program, which when executed by the processor implements the steps of the image infringement detection method as claimed in any one of claims 1 to 18.
21. A readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image infringement detection method according to any of claims 1 to 18.
CN202311800569.1A 2023-12-26 2023-12-26 Image infringement detection method, device, equipment and readable storage medium Active CN117474903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311800569.1A CN117474903B (en) 2023-12-26 2023-12-26 Image infringement detection method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311800569.1A CN117474903B (en) 2023-12-26 2023-12-26 Image infringement detection method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN117474903A CN117474903A (en) 2024-01-30
CN117474903B true CN117474903B (en) 2024-03-22

Family

ID=89627790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311800569.1A Active CN117474903B (en) 2023-12-26 2023-12-26 Image infringement detection method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117474903B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1277410A (en) * 1999-06-09 2000-12-20 施乐公司 Digital imaging method and apparatus for detecting documents safety mark
CN103914687A (en) * 2014-03-14 2014-07-09 常州大学 Rectangular-target identification method based on multiple channels and multiple threshold values
US9396509B1 (en) * 2011-10-30 2016-07-19 Digimarc Corporation Closed form non-iterative watermark embedding
KR20180104995A (en) * 2017-03-14 2018-09-27 한국과학기술원 Method and apparatus for inserting and detecting wartermark
CN110765890A (en) * 2019-09-30 2020-02-07 河海大学常州校区 Lane and lane mark detection method based on capsule network deep learning architecture
CN110968721A (en) * 2019-11-28 2020-04-07 上海冠勇信息科技有限公司 Method and system for searching infringement of mass images and computer readable storage medium thereof
CN111476268A (en) * 2020-03-04 2020-07-31 中国平安人寿保险股份有限公司 Method, device, equipment and medium for training reproduction recognition model and image recognition
CN114373097A (en) * 2021-12-15 2022-04-19 厦门市美亚柏科信息股份有限公司 Unsupervised image classification method, terminal equipment and storage medium
CN114648681A (en) * 2022-05-20 2022-06-21 浪潮电子信息产业股份有限公司 Image generation method, device, equipment and medium
CN115393920A (en) * 2021-05-21 2022-11-25 福特全球技术公司 Counterfeit image detection
CN115620022A (en) * 2022-09-15 2023-01-17 中汽创智科技有限公司 Object detection method, device, equipment and storage medium
CN116385368A (en) * 2023-03-09 2023-07-04 大连海事大学 Photovoltaic cell defect detection data set augmentation method based on generation countermeasure network
CN116935169A (en) * 2023-09-13 2023-10-24 腾讯科技(深圳)有限公司 Training method for draft graph model and draft graph method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IR-SETNet: sparsity aware ensemble network for infrared imaging based drone localization and tracking in distorted surveillance videos; Raghavan, D. et al.; 2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW); 2023-11-29; pp. 1-4 *
SVM remote sensing detection based on a color feature model of landslide regions; Chen Shanjing, Kang Qing, Shen Zhiqiang, Zhou Ruochong; Spacecraft Recovery & Remote Sensing; 2019-12-15 (06); pp. 93-102 *

Also Published As

Publication number Publication date
CN117474903A (en) 2024-01-30

Similar Documents

Publication Publication Date Title
Liu et al. Attribute-aware face aging with wavelet-based generative adversarial networks
Chen et al. Median filtering forensics based on convolutional neural networks
CN112818862B (en) Face tampering detection method and system based on multi-source clues and mixed attention
WO2019214557A1 (en) Method and system for detecting face image generated by deep network
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
CN115661144B (en) Adaptive medical image segmentation method based on deformable U-Net
CN111915437B (en) Training method, device, equipment and medium of money backwashing model based on RNN
CN109302410A (en) A kind of internal user anomaly detection method, system and computer storage medium
CN109309675A (en) A kind of network inbreak detection method based on convolutional neural networks
CN110059607B (en) Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium
CN112088378A (en) Image hidden information detector
CN113987429A (en) Copyright verification method of neural network model based on watermark embedding
CN113159045A (en) Verification code identification method combining image preprocessing and convolutional neural network
CN113099066B (en) Large-capacity image steganography method based on multi-scale fusion cavity convolution residual error network
CN114581646A (en) Text recognition method and device, electronic equipment and storage medium
CN116994044A (en) Construction method of image anomaly detection model based on mask multi-mode generation countermeasure network
CN111814881A (en) Marine fish image identification method based on deep learning
WO2022156214A1 (en) Liveness detection method and apparatus
CN112365551A (en) Image quality processing system, method, device and medium
CN117474903B (en) Image infringement detection method, device, equipment and readable storage medium
CN109344779A (en) A kind of method for detecting human face under ring road scene based on convolutional neural networks
CN117375896A (en) Intrusion detection method and system based on multi-scale space-time feature residual fusion
CN111126566A (en) Abnormal furniture layout data detection method based on GAN model
CN113177599B (en) Reinforced sample generation method based on GAN
Yuan Identification of global histogram equalization by modeling gray-level cumulative distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant