CN113674195A - Image detection method, device, equipment and storage medium - Google Patents

Image detection method, device, equipment and storage medium

Info

Publication number
CN113674195A
Authority
CN
China
Prior art keywords
image
model
sub
detection
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010402493.7A
Other languages
Chinese (zh)
Inventor
张承辉
任志强
丁隆乾
章婷婷
沙源
赵普明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010402493.7A priority Critical patent/CN113674195A/en
Publication of CN113674195A publication Critical patent/CN113674195A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40Business processes related to the transportation industry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/49Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image detection method, an image detection device, image detection equipment and a storage medium. The method comprises the following steps: acquiring an image to be detected; inputting an image to be detected into a first sub-model in a pre-trained image detection model, and detecting the image to be detected to obtain a first target area in the image to be detected; determining a first target image corresponding to the first target area; inputting the first target image into a second sub-model in the image detection model, and determining first result information; the first sub-model is obtained by training according to the sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image; the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image, and the efficiency of detecting whether the image is tampered can be improved.

Description

Image detection method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image detection method, an image detection device, image detection equipment and a storage medium.
Background
With the wide application of image retouching tools, image tampering such as image pixel modification and image splicing has become increasingly serious, damaging the authenticity and integrity of images.
At present, there are various methods for detecting image tampering based on deep learning. However, in the conventional method for detecting whether an image is falsified based on deep learning, the entire image needs to be detected and analyzed, which reduces the efficiency of detecting whether an image is falsified.
Disclosure of Invention
The embodiment of the invention provides an image detection method, an image detection device, image detection equipment and a storage medium, and can solve the problem of low efficiency of detecting whether an image is tampered.
In a first aspect, an image detection method is provided, which includes:
acquiring an image to be detected;
inputting an image to be detected into a first sub-model in a pre-trained image detection model, and detecting the image to be detected to obtain a first target area in the image to be detected;
determining a first target image corresponding to the first target area;
inputting the first target image into a second sub-model in the image detection model, and determining first result information;
the first sub-model is obtained by training according to the sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image;
the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image.
In a second aspect, there is provided an image detection apparatus, comprising:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for inputting the image to be detected into a first sub-model in a pre-trained image detection model, detecting the image to be detected and obtaining a first target area in the image to be detected;
the image determining module is used for determining a first target image corresponding to the first target area;
the information determining module is used for inputting the first target image into a second sub-model in the image detection model and determining first result information;
the first sub-model is obtained by training according to the sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image;
the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image.
In a third aspect, an electronic device is provided, the device comprising: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, performs the method as in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, there is provided a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect or any possible implementation of the first aspect.
Based on the provided image detection method, device, equipment and storage medium, the first target area in the image to be detected is detected through the first sub-model in the pre-trained image detection model, and the first target image to be detected can be determined in a targeted manner. And then detecting whether the first target image is tampered by using a second sub-model in the image detection model. Because the first sub-model and the second sub-model in the image detection model are obtained by training the sample detection image and the labeling information of the sample detection image, and the labeling information comprises the second target area and the information of whether the second target image corresponding to the second target area is tampered, the image detection model can detect the image to be detected more specifically, so that the efficiency of detecting whether the image to be detected is tampered by using the image detection model is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments of the present invention are briefly described below; other drawings can be obtained from these drawings by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of an image detection model training method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an image detection method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
With the development of the information age, technological progress drives the growth of the telecommunications industry, which in turn is an important support for economic development.
Currently, detection methods in the telecommunications industry include expert scoring and machine learning algorithms. The expert scoring method gives a subjective evaluation of user stability. The image detection method based on the machine learning algorithm mainly identifies the important factors influencing user stability by using data of historically stable users and of users who have left the current operator's network as the machine-learning training sample set, and outputs factor weights or rule sets for the stability metric, from which the stability of each user is obtained.
However, the inventors found through research that, when user stability is determined with a machine learning algorithm, the stability metric factors evaluate each user against the whole network. In practice, the stability of group virtual-network users is mainly reflected in the stability of the virtual-network users' working circle. Because the machine learning algorithm evaluates stability on a per-user basis and does not consider mutual influences, such as call behaviour between virtual-network users within the working circle, the user stability it determines is relatively one-sided and inaccurate.
Based on this, the embodiment of the invention provides an image detection method, an image detection device, an image detection apparatus and a storage medium, wherein a first target area in an image to be detected is detected through a first sub-model in a pre-trained image detection model, and the first target image to be detected can be determined in a targeted manner. And then detecting whether the first target image is tampered by using a second sub-model in the image detection model. Because the first sub-model and the second sub-model in the image detection model are obtained by training the sample detection image and the labeling information of the sample detection image, and the labeling information comprises the second target area and the information of whether the second target image corresponding to the second target area is tampered, the image detection model can detect the image to be detected more specifically, so that the efficiency of detecting whether the image to be detected is tampered by using the image detection model is improved.
In the image detection method provided by the embodiment of the present invention, the image detection model trained in advance needs to be used to detect the image, so the image detection model needs to be trained before the image detection model is used to detect the image. The following describes a specific implementation of the training method of the image detection model according to the embodiment of the present invention with reference to the drawings.
Fig. 1 is a schematic flow chart of an image detection model training method according to an embodiment of the present invention.
As shown in fig. 1, the method for training an image detection model according to an embodiment of the present invention may include:
s101: acquiring a training sample set; the training sample set comprises a plurality of training samples; each training sample comprises a sample detection image and annotation information of the sample detection image.
The training samples are the data basis for training and evaluating the image detection model, and for this purpose the training samples need to be acquired first. The training samples are related to the input data of the actual application scene in which the trained image detection model will be used, and each training sample includes a sample detection image and the labeling information of the sample detection image. The labeling information of the sample detection image includes the second target area in the sample detection image and information on whether the image in the second target area has been tampered with.
Taking broadband maintenance work as an example, in order to perform quality inspection and assessment of maintenance personnel in broadband maintenance work, a photo of the optical splitter needs to be taken after broadband maintenance to obtain an optical splitter image, and quality inspection of the installation and maintenance personnel is performed through the optical splitter image. The optical splitter image is taken as the sample detection image, and the labeling information is the region where the tail fiber label is located in the optical splitter image and information on whether the characters in the tail fiber label have been tampered with.
S102: and inputting the sample detection image into a first sub-model in a pre-constructed image detection model, and detecting the sample detection image to obtain a second target area in the sample detection image.
The image detection model comprises a first sub-model and a second sub-model. The first sub-model is used for detecting the area needing to be identified in the image, and the second sub-model is used for determining whether the characters in the area are tampered.
In some embodiments, the first sub-model may be a model constructed using a region-based convolutional neural network (R-CNN).
In other embodiments, the first sub-model may also be a Faster R-CNN (faster region-based convolutional neural network), which allows more accurate feature extraction and localization of regions in the image.
The sample detection image is input into the first sub-model, which is capable of determining a second target region in the sample detection image. The second target area may be a rectangular box; for example, it may be represented by the normalized coordinates of the two points corresponding to its upper left and lower right corners: (x1, y1) and (x2, y2). Here, the sample detection image may contain a plurality of second target regions.
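As an illustrative sketch only (the embodiment does not prescribe a particular implementation), a first sub-model of this kind could be exercised as follows. The COCO-pretrained torchvision Faster R-CNN, the 0.5 score threshold and the normalization step are assumptions made for the example; in the embodiment the detector would be trained on the sample detection images and their labeled second target areas.

```python
# Illustrative sketch: a Faster R-CNN style first sub-model returning the normalized
# corner coordinates (x1, y1), (x2, y2) of each detected region.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_target_regions(image_tensor, score_threshold=0.5):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    _, height, width = image_tensor.shape
    with torch.no_grad():
        output = detector([image_tensor])[0]   # dict with 'boxes', 'labels', 'scores'
    regions = []
    for box, score in zip(output["boxes"], output["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.tolist()
        # Normalize the upper-left and lower-right corners by image width and height.
        regions.append(((x1 / width, y1 / height), (x2 / width, y2 / height)))
    return regions
```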
S103: and determining a second target image corresponding to the second target area.
After the second target area is determined, a second target image corresponding to the second target area may be intercepted according to the coordinates of the second target area.
In some embodiments, the second target images obtained from different second target regions differ in size. In order to guarantee the precision of the trained image detection model, the second target image can be scaled in equal proportion.
As an example, let the long side of the second target image be w0 and the short side be h0. The long side w0 is scaled to a fixed dimension w1, and the short side is scaled in the same proportion, so that the scaled short side h1 satisfies the following formula (1):

h1 = (w1 / w0) × h0    (1)

Where h1 < w1, black pixels are simultaneously filled on both sides of the short side h1, and the width k of the black pixels filled on each side satisfies the following formula (2):

k = (w1 − h1) / 2    (2)

After the black pixels are filled, the short side length (h1 plus the 2k of filled pixels) equals the long side w1.
The second target image is thus scaled in equal proportion into a square shape. Label images of any input size are converted into square images of uniform size, with width and height scaled equally, which reduces the damage to effective features and allows label images of any size to be used for training. Then, data enhancement methods are applied to the scaled image: random cropping (for example, randomly selecting a position within the w1 × w1 image and cutting out a patch of size w2 × w2), random horizontal flipping, random vertical flipping, and random rotation (for example, randomly selecting among 0, 90, 180 and 270 degrees), which further increases sample diversity and improves the generalization performance of the model.
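A minimal sketch of this preprocessing is given below, assuming Pillow and purely illustrative sizes w1 = 224 and w2 = 200; the embodiment does not fix these concrete values.

```python
# Minimal sketch of the equal-proportion scaling, black-pixel padding and data
# enhancement described above. The sizes (w1 = 224, w2 = 200) and the use of Pillow
# are illustrative assumptions.
import random
from PIL import Image

def scale_and_pad_to_square(img: Image.Image, w1: int = 224) -> Image.Image:
    img = img.convert("RGB")
    w0, h0 = max(img.size), min(img.size)
    h1 = max(1, round(h0 * w1 / w0))            # formula (1): proportional short side
    if img.size[0] >= img.size[1]:
        resized = img.resize((w1, h1))          # landscape: width is the long side
    else:
        resized = img.resize((h1, w1))          # portrait: height is the long side
    square = Image.new("RGB", (w1, w1), (0, 0, 0))   # black background
    k = (w1 - h1) // 2                          # formula (2): padding per side
    offset = (0, k) if img.size[0] >= img.size[1] else (k, 0)
    square.paste(resized, offset)
    return square

def augment(img: Image.Image, w2: int = 200) -> Image.Image:
    w1 = img.size[0]
    left, top = random.randint(0, w1 - w2), random.randint(0, w1 - w2)
    out = img.crop((left, top, left + w2, top + w2))          # random w2 x w2 crop
    if random.random() < 0.5:
        out = out.transpose(Image.FLIP_LEFT_RIGHT)            # random horizontal flip
    if random.random() < 0.5:
        out = out.transpose(Image.FLIP_TOP_BOTTOM)            # random vertical flip
    return out.rotate(random.choice([0, 90, 180, 270]))       # random 90-degree rotation
```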
S104: and inputting the second target image into a second sub-model in the pre-constructed image detection model, and determining second result information.
After the second target image is input into the second sub-model, the second sub-model can identify whether the characters, images and the like in the second target image are tampered. The second result information refers to result information of whether or not the second target image is falsified. For example, if the text in the second target image is tampered with, the second result information may be "yes" or "1", etc.
S105: and determining a first loss function value of a second sub-model in the pre-constructed image detection model according to the second result information and the labeling information of the sample detection image.
The second result information is detected information and may deviate to some extent from the actual situation of the second target image; that is, the detection accuracy of the second sub-model has not yet reached the required level. Therefore, the first loss function value of the second sub-model needs to be determined from the second result information and the labeling information of the sample detection image. The labeling information of the sample detection image includes the real position of the second target area in the sample detection image and the real result of whether the image in the second target area has been tampered with. The first loss function value can be obtained from the second result information and the information, in the labeling information, on whether the image in the second target area has been tampered with.
S106: and training a pre-constructed image detection model according to the first loss function value to obtain a trained image detection model.
Based on the first loss function value, parameters of the image detection model can be adjusted to improve the accuracy of the image detection model. Here, the accuracy of the image detection model can be improved by adjusting parameters in the second sub-model in the image detection model through the first loss function value.
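The following sketch illustrates steps S104 to S106 under stated assumptions: PyTorch is used, binary cross-entropy stands in for the first loss function, and a trivial placeholder classifier stands in for the second sub-model with only its parameters being adjusted; none of these specifics are mandated by the embodiment.

```python
# Illustrative sketch of S104-S106: compute the first loss function value from the
# second result information and the tampering label, then adjust only the second
# sub-model's parameters. The placeholder classifier and loss are assumptions.
import torch
import torch.nn as nn

second_sub_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 200 * 200, 1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(second_sub_model.parameters(), lr=1e-4)

def training_step(second_target_image: torch.Tensor, tampered: bool) -> float:
    """second_target_image: tensor of shape (1, 3, 200, 200); tampered: labeling info."""
    logits = second_sub_model(second_target_image)    # S104: second result information
    target = torch.tensor([[1.0 if tampered else 0.0]])
    loss = criterion(logits, target)                  # S105: first loss function value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                  # S106: adjust second sub-model parameters
    return loss.item()
```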
In other embodiments of the present invention, in order to improve the accuracy of the image detection model, before S104, the following steps may be further included:
determining a second loss function value of the first sub-model according to the second target area and the marking information;
training a first sub-model in a pre-constructed image detection model according to the second loss function value to obtain a trained first sub-model;
and inputting the sample detection image in the training sample into the trained first sub-model, and determining a second target area in the sample detection image in the training sample.
The labeling information comprises the real position of the second target region in the sample detection image. And determining a second loss function value based on the second target region determined by the first sub-model and the real position of the second target region in the labeling information in the sample detection image. Based on the second loss function value, the first submodel is trained, e.g., parameters of the first submodel are adjusted. Therefore, the detection precision of the first sub-model is improved, and the precision of the image detection model is further improved. And then inputting the sample detection image into the trained first sub-model to obtain a second target area in the sample detection image. And the precision of the second target area obtained by the trained first sub-model detection is higher.
After the parameters of the image detection model are adjusted, the model needs to be trained further, so for the training sample set in S101, a new training sample is selected from the training sample set, and steps S102 to S106 are executed until the final training of the image detection model is completed.
The above is a specific implementation of the image detection model training method provided by the embodiment of the present invention. The image detection model obtained through this training can be applied to the image detection method provided in the following embodiments.
The following describes in detail a specific implementation of the image detection method provided in the present application with reference to fig. 2. It should be noted that, in the following specific implementation, the image to be detected is described as an example of a splitter image.
Fig. 2 is a schematic flowchart of an image detection method according to an embodiment of the present invention.
As shown in fig. 2, an image detection method provided in an embodiment of the present invention may include:
s201: and acquiring an image to be detected.
The image to be detected may be obtained from a database. Taking the image to be detected being an optical splitter image as an example, because the broadband maintenance workload is large, the number of optical splitter images is relatively large, so the optical splitter images need to be stored in a corresponding database; the optical splitter images in the database can then be detected periodically, so that the working quality of broadband maintenance workers can be supervised.
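A hypothetical sketch of such periodic supervision is shown below; the SQLite storage, the table and column names, and the 24-hour period are illustrative assumptions rather than part of the embodiment.

```python
# Hypothetical sketch of periodic supervision: fetch optical splitter images that have
# not yet been inspected from a database and run the detection pipeline on each.
import sqlite3
import time

def run_periodic_inspection(db_path, detect_fn, period_seconds=24 * 3600):
    while True:
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT id, image_path FROM splitter_images WHERE inspected = 0"
        ).fetchall()
        for image_id, image_path in rows:
            tampered = detect_fn(image_path)          # first + second sub-model pipeline
            conn.execute(
                "UPDATE splitter_images SET inspected = 1, tampered = ? WHERE id = ?",
                (int(tampered), image_id),
            )
        conn.commit()
        conn.close()
        time.sleep(period_seconds)
```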
S202: and inputting the image to be detected into a first sub-model in a pre-trained image detection model, and detecting the image to be detected to obtain a first target area in the image to be detected.
And inputting the image to be detected into a pre-trained first sub-model, wherein the first sub-model can detect a first target area in the image to be detected. For example, after the splitter image is input into the trained image detection model, the first sub-model in the image detection model can detect and obtain an image area where the tail fiber label in the splitter image is located.
S203: and determining a first target image corresponding to the first target area.
The first target image belongs to a part of the image to be detected. When the image to be detected is an optical splitter image, the first target image is the image corresponding to the tail fiber label in the optical splitter image. After the position of the tail fiber label in the optical splitter image is determined, the image corresponding to the tail fiber label can be cropped, cut out, or the like, to obtain a first target image containing only the tail fiber label. For example, the first target area can be marked in the optical splitter image so as to determine the position of the tail fiber label, and the parts of the image other than the tail fiber label are then removed, so that the image of the tail fiber label is obtained.
Here, the first sub-model is trained from the second target region of the sample detection image in the sample detection image and the labeling information of the sample detection image.
S204: and inputting the first target image into a second sub-model in the image detection model, and determining first result information.
The size of the first target image may be the same as the size of the first target region, or may be larger than the size of the first target region. After the image of the tail fiber label is input into the image detection model, a second sub-model in the image detection model can detect whether the tail fiber label is tampered. If the pigtail label is tampered with, the first result information may be "yes" or "1". If the pigtail label is not tampered, the first result information may be "no" or "0".
Here, the second sub-model is trained according to information on whether the second target image is falsified in the second target image in the second target region and the labeling information of the sample detection image.
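Putting S201 to S204 together, a sketch of the inference pipeline might look as follows. The detector output format follows torchvision's Faster R-CNN, the second sub-model is assumed to emit a single tamper logit, and the 0.5 threshold and 200 × 200 crop size are illustrative assumptions.

```python
# Illustrative end-to-end sketch of S201-S204: detect the label region, crop the
# first target image, and classify whether it has been tampered with.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

def detect_image(image_path, first_sub_model, second_sub_model, score_threshold=0.5):
    image = Image.open(image_path).convert("RGB")               # S201: image to be detected
    with torch.no_grad():
        detection = first_sub_model([to_tensor(image)])[0]      # S202: first target areas
    results = []
    for box, score in zip(detection["boxes"], detection["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = (int(v) for v in box.tolist())
        crop = image.crop((x1, y1, x2, y2)).resize((200, 200))  # S203: first target image
        with torch.no_grad():
            logit = second_sub_model(to_tensor(crop).unsqueeze(0))
        results.append("1" if torch.sigmoid(logit).item() > 0.5 else "0")  # S204: first result
    return results
```

In a deployment of this kind, detect_image could serve as the detect_fn handed to the periodic inspection sketch above.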
According to the image detection method provided by the embodiment of the invention, the first target area in the image to be detected is detected through the first sub-model in the pre-trained image detection model, so that the first target image to be detected can be determined in a targeted manner. And then detecting whether the first target image is tampered by using a second sub-model in the image detection model. Because the first sub-model and the second sub-model in the image detection model are obtained by training the sample detection image and the labeling information of the sample detection image, and the labeling information comprises the second target area and the information of whether the second target image corresponding to the second target area is tampered, the image detection model can detect the image to be detected more specifically, so that the efficiency of detecting whether the image to be detected is tampered by using the image detection model is improved.
Based on the image detection method provided by the embodiment of the invention, correspondingly, the invention provides an image detection device.
Fig. 3 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present invention.
As shown in fig. 3, an image detecting apparatus provided in an embodiment of the present invention may include: the system comprises an acquisition module 301, a detection module 302, an image determination module 303 and an information determination module 304.
An obtaining module 301, configured to obtain an image to be detected;
the detection module 302 is configured to input the image to be detected into a first sub-model in a pre-trained image detection model, and detect the image to be detected to obtain a first target region in the image to be detected;
an image determining module 303, configured to determine a first target image corresponding to the first target area;
an information determining module 304, configured to input the first target image into a second sub-model in the image detection model, and determine first result information;
the first sub-model is obtained by training according to a sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image;
and the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image.
Optionally, in some embodiments of the present invention, the apparatus further includes:
the obtaining module 301 is further configured to obtain a training sample set; the training sample set comprises a plurality of training samples; each training sample comprises a sample detection image and labeling information of the sample detection image;
the training module is used for respectively executing the following steps for each training sample:
inputting the sample detection image into a first sub-model in a pre-constructed image detection model, detecting the sample detection image, and obtaining a second target area in the sample detection image;
determining a second target image corresponding to the second target area;
inputting a second target image into a second sub-model in a pre-constructed image detection model, and determining second result information;
determining a first loss function value of a second sub-model in the pre-constructed image detection model according to the second result information and the labeling information of the sample detection image;
and training a pre-constructed image detection model according to the first loss function value to obtain a trained image detection model.
Optionally, in some embodiments of the present invention, the apparatus further includes:
the function value determining module is used for determining a second loss function value of the first sub-model according to the second target area and the marking information;
the training module is further used for training a first sub-model in the pre-constructed image detection model according to the second loss function value to obtain a trained first sub-model;
and the region determining module is used for inputting the sample detection image in the training sample into the trained first sub-model and determining a second target region in the sample detection image in the training sample.
Optionally, in some embodiments of the present invention, the apparatus further includes:
the processing module is used for carrying out at least one item of processing on a second target image corresponding to the second target area:
the method comprises the steps of scaling the second target image in equal proportion, randomly cutting the second target image, horizontally turning the second target image, vertically turning the second target image and rotating the second target image.
Optionally, in some embodiments of the present invention, the training module 305 is specifically configured to:
and adjusting parameters of a second sub-model in the pre-constructed image detection model according to the first loss function value to obtain the trained image detection model.
Optionally, in some embodiments of the present invention, the first sub-model is composed of a faster region-based convolutional neural network (Faster R-CNN); the second sub-model is composed of an Inception residual neural network (Inception-ResNet).
Optionally, in some embodiments of the present invention, the image to be detected is a splitter image.
The image detection device provided by the embodiment of the invention detects the first target area in the image to be detected through the first sub-model in the pre-trained image detection model, and can pertinently determine the first target image to be detected. And then detecting whether the first target image is tampered by using a second sub-model in the image detection model. Because the first sub-model and the second sub-model in the image detection model are obtained by training the sample detection image and the labeling information of the sample detection image, and the labeling information comprises the second target area and the information of whether the second target image corresponding to the second target area is tampered, the image detection model can detect the image to be detected more specifically, so that the efficiency of detecting whether the image to be detected is tampered by using the image detection model is improved.
The image detection apparatus provided in the embodiment of the present invention executes each step in the methods shown in fig. 1 and fig. 2, and can achieve the technical effect of improving the efficiency of detecting whether an image is tampered, which is not described in detail herein for brevity.
Fig. 4 shows a hardware structure diagram of an electronic device according to an embodiment of the present invention.
The electronic device may include a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 402 is a non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 401 may implement any of the image detection methods in the embodiments shown in fig. 1 and 2 by reading and executing computer program instructions stored in the memory 402.
In one example, the electronic device can also include a communication interface 404 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 404 are connected via a bus 410 to complete communication therebetween.
The communication interface 404 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present invention.
Bus 410 includes hardware, software, or both to couple the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The electronic device may execute the image detection method in the embodiment of the present invention, thereby implementing the image detection method described in conjunction with fig. 1 and 2.
In addition, in combination with the image detection method in the above embodiments, the embodiments of the present invention may be implemented by providing a computer storage medium. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the image detection methods in the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (10)

1. An image detection method, characterized in that the method comprises:
acquiring an image to be detected;
inputting the image to be detected into a first sub-model in a pre-trained image detection model, and detecting the image to be detected to obtain a first target area in the image to be detected;
determining a first target image corresponding to the first target area;
inputting the first target image into a second sub-model in the image detection model, and determining first result information;
the first sub-model is obtained by training according to a sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image;
and the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image.
2. The method of claim 1, further comprising:
acquiring a training sample set; the training sample set comprises a plurality of training samples; each training sample comprises a sample detection image and labeling information of the sample detection image;
for each training sample, the following steps are respectively performed:
inputting the sample detection image into a first sub-model in a pre-constructed image detection model, and detecting the sample detection image to obtain a second target area in the sample detection image;
determining a second target image corresponding to the second target area;
inputting the second target image into a second sub-model in the pre-constructed image detection model, and determining second result information;
determining a first loss function value of a second sub-model in the pre-constructed image detection model according to the second result information and the labeling information of the sample detection image;
and training the pre-constructed image detection model according to the first loss function value to obtain a trained image detection model.
3. The method of claim 2, wherein prior to inputting the second target image into a second sub-model in the pre-constructed image detection model, determining second result information, the method further comprises:
determining a second loss function value of the first sub-model according to the second target area and the marking information;
training a first sub-model in the pre-constructed image detection model according to the second loss function value to obtain a trained first sub-model;
and inputting the sample detection image in the training sample into the trained first sub-model, and determining a second target area in the sample detection image in the training sample.
4. The method of claim 3, wherein prior to inputting the sample detection images in the training sample into the trained first sub-model, determining the second target region in the sample detection images in the training sample, the method further comprises:
performing at least one item of processing in each item on a second target image corresponding to the second target area:
scaling the second target image in equal proportion, randomly cutting the second target image, horizontally turning the second target image, vertically turning the second target image and rotating the second target image.
5. The method of claim 2, wherein training the pre-constructed image detection model according to the first loss function value to obtain a trained image detection model comprises:
and adjusting parameters of a second sub-model in the pre-constructed image detection model according to the first loss function value to obtain the trained image detection model.
6. The method of any of claims 1-5, wherein the first sub-model is composed of a faster region-based convolutional neural network (Faster R-CNN); and the second sub-model is composed of an Inception residual neural network (Inception-ResNet).
7. The method according to any one of claims 1 to 5, wherein the image to be detected is an optical splitter image.
8. An image detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be detected;
the detection module is used for inputting the image to be detected into a first sub-model in a pre-trained image detection model, detecting the image to be detected and obtaining a first target area in the image to be detected;
the image determining module is used for determining a first target image corresponding to the first target area;
the information determining module is used for inputting the first target image into a second sub-model in the image detection model and determining first result information;
the first sub-model is obtained by training according to a sample detection image and a second target area of the sample detection image in the labeling information of the sample detection image;
and the second sub-model is obtained by training according to the second target image in the second target area and the information of whether the second target image is tampered in the labeling information of the sample detection image.
9. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the method of any of claims 1-7.
10. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
CN202010402493.7A 2020-05-13 2020-05-13 Image detection method, device, equipment and storage medium Pending CN113674195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010402493.7A CN113674195A (en) 2020-05-13 2020-05-13 Image detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010402493.7A CN113674195A (en) 2020-05-13 2020-05-13 Image detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113674195A 2021-11-19

Family

ID=78536836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010402493.7A Pending CN113674195A (en) 2020-05-13 2020-05-13 Image detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113674195A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452568A (en) * 2008-12-04 2009-06-10 上海大学 Blind detection method for tampered image based on deconvolution
CN109543432A (en) * 2018-11-23 2019-03-29 济南中维世纪科技有限公司 Facial image encrypts anti-tamper and retrieval method in a kind of video
WO2019061661A1 (en) * 2017-09-30 2019-04-04 平安科技(深圳)有限公司 Image tamper detecting method, electronic device and readable storage medium
CN110414437A (en) * 2019-07-30 2019-11-05 上海交通大学 Face datection analysis method and system are distorted based on convolutional neural networks Model Fusion
CN111104892A (en) * 2019-12-16 2020-05-05 武汉大千信息技术有限公司 Human face tampering identification method based on target detection, model and identification method thereof
CN111144314A (en) * 2019-12-27 2020-05-12 北京中科研究院 Method for detecting tampered face video

Similar Documents

Publication Publication Date Title
CN111814850A (en) Defect detection model training method, defect detection method and related device
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN109740609B (en) Track gauge detection method and device
CN111680750B (en) Image recognition method, device and equipment
CN111797821A (en) Text detection method and device, electronic equipment and computer storage medium
CN111639653A (en) False detection image determining method, device, equipment and medium
CN113781391A (en) Image defect detection method and related equipment
CN112258507B (en) Target object detection method and device of internet data center and electronic equipment
CN111814776B (en) Image processing method, device, server and storage medium
CN111310826A (en) Method and device for detecting labeling abnormity of sample set and electronic equipment
CN116862910A (en) Visual detection method based on automatic cutting production
CN111091068A (en) Density estimation model training method and device, storage medium and electronic equipment
CN111986103A (en) Image processing method, image processing device, electronic equipment and computer storage medium
CN113674195A (en) Image detection method, device, equipment and storage medium
CN115063739B (en) Abnormal behavior detection method, device, equipment and computer storage medium
CN114612889A (en) Instrument information acquisition method and system, electronic equipment and storage medium
CN110705633B (en) Target object detection method and device and target object detection model establishing method and device
CN111256609B (en) Method and device for detecting USB interface depth
CN117523636B (en) Face detection method and device, electronic equipment and storage medium
CN116541713B (en) Bearing fault diagnosis model training method based on local time-frequency characteristic transfer learning
TWI762417B (en) Method for identifying wafer
CN117788394A (en) State detection method, device, equipment and storage medium of power equipment
CN117372428A (en) Wafer defect detection method and device, electronic equipment and storage medium
CN115081500A (en) Training method and device for object recognition model and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination