CN116109597A - Image falsification area detection method and device, electronic equipment and storage medium - Google Patents

Image falsification area detection method and device, electronic equipment and storage medium

Info

Publication number
CN116109597A
Authority
CN
China
Prior art keywords
image
candidate
area
detected
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310125121.8A
Other languages
Chinese (zh)
Inventor
徐有正
韩茂琨
陈远旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310125121.8A priority Critical patent/CN116109597A/en
Publication of CN116109597A publication Critical patent/CN116109597A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/52 Scale-space analysis, e.g. wavelet analysis
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of artificial intelligence and provides an image tampering area detection method and device, an electronic device, and a storage medium. The method comprises the following steps: extracting spatial domain feature information and frequency domain feature information of an image to be detected, and inputting them into a first image tampering detection network to obtain a first candidate tampered region; extracting edge feature information of the image to be detected, calculating the probability value that each pixel in the image to be detected belongs to a tampered region, and determining the region whose probability value is greater than the corresponding probability threshold as a second candidate tampered region; inputting the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to obtain a third candidate tampered region; and determining the tampered region of the image to be detected according to at least two of the first, second and third candidate tampered regions. The method and device improve the detection precision of image tampered regions and avoid false detection.

Description

Image falsification area detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to an image tampering area detection method and device, an electronic device, and a storage medium.
Background
With the development of image processing technology, image tampering techniques have become more sophisticated and the tampering is better concealed, so the difficulty of image tampering detection keeps increasing. Most existing methods rely on a single feature for image tampering detection. When the image content is complex and variable, single-feature tampering detection can hardly guarantee an accurate result and easily causes false detections or missed detections. For example, the signature area of an insurance policy may be tampered with by both alteration and covering at the same time, and detection using a single feature can hardly guarantee accuracy in such a case.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an image tampering area detection method and device, an electronic device, and a storage medium that solve the problems of low detection accuracy and false detection of image tampered regions.
A first aspect of the present application provides an image falsification area detection method, the method including:
acquiring an image to be detected;
extracting spatial domain feature information and frequency domain feature information of the image to be detected;
inputting the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
extracting edge feature information of the image to be detected;
calculating, according to the edge feature information, a probability value that each pixel in the image to be detected belongs to a tampered region, and determining a region in the image to be detected whose probability value is greater than a corresponding probability threshold as a second candidate tampered region;
inputting the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampered region in the image to be detected;
and determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
In some optional embodiments, the extracting spatial domain feature information of the image to be detected includes:
extracting high-frequency components of a plurality of channels of the image to be detected respectively to obtain high-frequency component data of the channels;
performing noise intensity analysis on the high-frequency component data of the channels respectively to obtain noise intensity data of the channels;
filtering the high-frequency component data of the channels respectively based on the noise intensity data to obtain initial noise information of the channels;
reconstructing the initial noise information of the channels to obtain initial spatial domain feature information;
and performing noise enhancement processing on the initial spatial domain feature information to obtain the spatial domain feature information.
In some alternative embodiments, the frequency domain characteristic information is obtained by transforming the spatial domain characteristic information into the frequency domain.
In some optional embodiments, the first image tampering detection network comprises a first feature fusion layer and a first image tampering detection layer, wherein:
the first feature fusion layer superposes and fuses the spatial domain feature information and the frequency domain feature information along the image-channel dimension to obtain first feature fusion information;
and the first image tampering detection layer performs image tampering detection according to the first feature fusion information to obtain the first candidate tampered region.
In some optional embodiments, the second image tampering detection network comprises a second feature fusion layer and a second image tampering detection layer, wherein:
the second feature fusion layer transversely splices the spatial domain feature information, the frequency domain feature information and the edge feature information and/or longitudinally splices them according to a preset dimension to obtain second feature fusion information;
and the second image tampering detection layer performs image tampering detection according to the second feature fusion information to obtain the third candidate tampered region.
In some optional embodiments, before comparing the probability values in the image to be detected with the corresponding probability thresholds, the method further comprises:
calculating a gray value corresponding to each pixel in the image to be detected;
and determining the probability threshold according to the gray value.
In some optional embodiments, the determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region includes:
determining a region where the first candidate tampered region and the second candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the first candidate tampered region and the third candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the second candidate tampered region and the third candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the first candidate tampered region, the second candidate tampered region and the third candidate tampered region all overlap as the tampered region of the image to be detected.
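The overlap-based determination described above amounts to intersecting binary masks. A minimal sketch (the mask values and the helper name are illustrative, not from the patent):

```python
import numpy as np

def fuse_candidate_regions(mask1, mask2, mask3=None):
    """Determine the tampered region as the area where two (or three)
    candidate tampered-region masks overlap (logical intersection)."""
    fused = np.logical_and(mask1, mask2)
    if mask3 is not None:
        fused = np.logical_and(fused, mask3)
    return fused

# Two 4x4 candidate masks that overlap only in the top-left 2x2 block
m1 = np.zeros((4, 4), dtype=bool); m1[:2, :2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[:3, :3] = True
overlap = fuse_candidate_regions(m1, m2)
```

Requiring agreement between at least two detectors is what suppresses the false detections that a single-feature detector would produce on its own.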
A second aspect of the present application provides an image tampering area detection device, the device including an acquisition module, a first extraction module, a first detection module, a second extraction module, a second detection module, a third detection module, and a determination module:
the acquisition module is used for acquiring an image to be detected;
the first extraction module is used for extracting spatial domain feature information and frequency domain feature information of the image to be detected;
the first detection module is used for inputting the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
the second extraction module is used for extracting edge feature information of the image to be detected;
the second detection module is configured to calculate, according to the edge feature information, a probability value that each pixel in the image to be detected belongs to a tampered region, and determine a region in the image to be detected whose probability value is greater than a corresponding probability threshold as a second candidate tampered region;
the third detection module is configured to input the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampered region in the image to be detected;
the determining module is configured to determine the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
A third aspect of the present application provides an electronic device comprising a processor and a memory, the processor being configured to implement the image tampering area detection method when executing a computer program stored in the memory.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image falsification area detection method.
According to the image tampering area detection method and device, electronic device and storage medium described above, spatial domain feature information and frequency domain feature information of the image to be detected are extracted and input into the first image tampering detection network to obtain the first candidate tampered region, which improves the accuracy of identifying tampered regions from spatial domain and frequency domain features. Edge feature information of the image to be detected is extracted, the probability value that each pixel belongs to a tampered region is calculated, and the region whose probability value is greater than the corresponding probability threshold is determined as the second candidate tampered region, which improves the accuracy of identifying tampered regions from edge features. The spatial domain feature information, the frequency domain feature information and the edge feature information are input into the second image tampering detection network to obtain the third candidate tampered region, so that the tampered region is identified from multiple features and false detection is avoided. The tampered region of the image to be detected is then determined according to at least two of the first, second and third candidate tampered regions. The method and device thereby improve the detection precision of image tampered regions and avoid false detection.
Drawings
Fig. 1 is a flowchart of an image falsification area detection method provided in an embodiment of the present application.
Fig. 2 is a flowchart of generating a first candidate tampered region.
Fig. 3 is a flowchart of generating a third candidate tampered region.
Fig. 4 is a block diagram of an image falsification area detection apparatus provided in an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present application.
The image tampering area detection method provided by the embodiment of the application is executed by the electronic equipment, and accordingly, the image tampering area detection device provided by the embodiment of the application is operated in the electronic equipment.
Example 1
The embodiment of the application provides an image tampering area detection method. Fig. 1 is a flowchart of an image falsification area detection method provided in an embodiment of the present application. The image falsification area detection method can reduce the false detection rate of the image falsification area. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
S11, acquiring an image to be detected.
The image to be detected may be any image on which tampering detection needs to be performed.
In an optional embodiment, the image to be detected may be an image whose image information has been changed by software editing or other technical means. For example, an image element may be cut out from another image and pasted at a designated position of a target image so as to cover the original image information at that position; the target image with the pasted image element is then the image to be detected. For another example, when a designated image element is removed from a target image, the target image after removal is the image to be detected.
In an optional embodiment, the image to be detected may be an image obtained by modifying a physical object by physical means and photographing the modified object. For example, the image to be detected may be an image obtained by altering license plate information and photographing the altered license plate; it may also be an image obtained by altering insurance policy information and photographing the altered policy, or an image obtained by altering the information on a card, such as a bank card, and photographing the altered card.
In practical applications, the sizes of images to be detected may differ, and to ensure the accuracy of the image tampering detection result, the image to be detected may be adjusted to a preset size. Because scaling an image may destroy its inter-pixel characteristics, in an embodiment of the present application the image to be detected is cropped (keeping the image centre fixed) rather than scaled in order to adjust it to the preset size.
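The centre-fixed cropping described above can be sketched as follows (the function name and target size are illustrative):

```python
import numpy as np

def center_crop(image, target_h, target_w):
    """Crop an image to a preset size while keeping the image centre
    fixed, so inter-pixel characteristics are preserved (unlike scaling)."""
    h, w = image.shape[:2]
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return image[top:top + target_h, left:left + target_w]

img = np.arange(10 * 12).reshape(10, 12)  # a 10x12 "image" of pixel indices
patch = center_crop(img, 6, 8)            # crop to a preset 6x8 size
```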
S12, extracting spatial domain characteristic information and frequency domain characteristic information of the image to be detected.
Image feature extraction from the image to be detected means processing pixels or regions of the digital image through digital transformations or specific operations to obtain image feature information. The image feature information can represent a certain characteristic of the image and produces a stronger signal response in a specific mode.
In this embodiment, spatial domain feature information and frequency domain feature information are extracted from an image to be detected, and noise of the image is reflected by using the spatial domain feature information and the frequency domain feature information.
In an optional embodiment, the extracting spatial domain feature information of the image to be detected includes:
extracting high-frequency components of a plurality of channels of the image to be detected respectively to obtain high-frequency component data of the channels;
performing noise intensity analysis on the high-frequency component data of the channels respectively to obtain noise intensity data of the channels;
filtering the high-frequency component data of the channels respectively based on the noise intensity data to obtain initial noise information of the channels;
reconstructing the initial noise information of the channels to obtain initial spatial domain feature information;
and performing noise enhancement processing on the initial spatial domain feature information to obtain the spatial domain feature information.
In an alternative embodiment, the frequency domain feature information is obtained by transforming the spatial domain feature information into the frequency domain.
The spatial domain feature information and the frequency domain feature information may be spatial domain feature information corresponding to image noise and frequency domain feature information corresponding to image noise.
High-frequency component extraction may be performed on each channel of the image to be detected to obtain the high-frequency component data of the channels. Multi-scale wavelet transform processing may be performed on the channels to obtain multi-scale wavelet-domain high-frequency component data for each channel. The multi-scale wavelet transform may be a four-level wavelet transform, and the wavelet basis may be a fourth-order Daubechies wavelet. The high-frequency component data comprises high-frequency components in different directions, which may be the horizontal, vertical and diagonal directions.
Local noise variance analysis may be performed on the high-frequency components of each channel in the different directions to obtain target variance data for each direction, and the target variance data in the different directions are used as the noise intensity data. The noise intensity data may thus be the target variance data of the horizontal-direction, vertical-direction and diagonal-direction high-frequency components.
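The local noise-variance analysis can be sketched with a sliding-window estimate, Var[x] = E[x²] − (E[x])², where the sliding means are computed with `scipy.ndimage.uniform_filter` (the window size is an assumption, not specified by the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(subband, window=5):
    """Per-pixel local variance of a wavelet high-frequency sub-band,
    estimated as E[x^2] - (E[x])^2 over a sliding window."""
    x = subband.astype(float)
    mean = uniform_filter(x, size=window)
    mean_sq = uniform_filter(x ** 2, size=window)
    return np.maximum(mean_sq - mean ** 2, 0.0)  # clamp negative round-off

# A synthetic high-frequency sub-band: zero-mean noise with sigma = 2
band = np.random.default_rng(0).normal(0.0, 2.0, (64, 64))
var_map = local_variance(band)
```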
The high-frequency components of each channel may be filtered using the target variance data of that channel's high-frequency components, to obtain the initial noise information of each channel. The filter used in the filtering process may be a Kalman filter.
The initial noise information may be wavelet-domain noise information. The initial noise information of the plurality of channels may be reconstructed, where the reconstruction may be an inverse wavelet transform of the wavelet-domain noise information of the channels, to obtain the initial spatial domain feature information.
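The decompose-then-reconstruct idea can be illustrated with a single-level 2-D Haar transform written directly in NumPy; the patent describes a four-level Daubechies transform, for which a library such as PyWavelets (`pywt.wavedec2`/`pywt.waverec2` with wavelet `'db4'`) would be the practical choice. Zeroing the low-frequency approximation before the inverse transform leaves the high-frequency, noise-like residual:

```python
import numpy as np

def haar2_forward(x):
    """One level of the 2-D Haar transform on an even-sized array."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    cA = (a + b + c + d) / 2  # approximation (low frequency)
    cH = (a - b + c - d) / 2  # horizontal detail
    cV = (a + b - c - d) / 2  # vertical detail
    cD = (a - b - c + d) / 2  # diagonal detail
    return cA, cH, cV, cD

def haar2_inverse(cA, cH, cV, cD):
    """Exact inverse of haar2_forward."""
    h, w = cA.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2
    out[0::2, 1::2] = (cA - cH + cV - cD) / 2
    out[1::2, 0::2] = (cA + cH - cV - cD) / 2
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return out

channel = np.random.default_rng(1).normal(128.0, 10.0, (32, 32))
cA, cH, cV, cD = haar2_forward(channel)
# Zero the low-frequency approximation: the inverse transform then
# yields only the high-frequency (noise-like) residual of the channel.
residual = haar2_inverse(np.zeros_like(cA), cH, cV, cD)
```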
The noise enhancement processing performed on the initial spatial domain feature information may be a filtering process, where the filtering may remove interference information from the initial spatial domain feature information through zero-mean filtering; Fourier peak suppression is then performed on the filtered initial spatial domain feature information to obtain the noise-enhanced spatial domain feature information.
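A minimal sketch of this noise-enhancement step, assuming that zero-mean filtering is a global mean subtraction and that Fourier peak suppression clips spectral magnitudes above a percentile (both rules are assumptions, not specified by the patent):

```python
import numpy as np

def enhance_noise(noise_map, peak_percentile=99.0):
    """Zero-mean the noise map, then suppress dominant Fourier peaks by
    clipping spectral magnitudes above a percentile (assumed rule)."""
    zero_mean = noise_map - noise_map.mean()
    spectrum = np.fft.fft2(zero_mean)
    mag = np.abs(spectrum)
    cap = np.percentile(mag, peak_percentile)
    # Scale down only the bins whose magnitude exceeds the cap
    scale = np.where(mag > cap, cap / np.maximum(mag, 1e-12), 1.0)
    return np.real(np.fft.ifft2(spectrum * scale))

noise = np.random.default_rng(2).normal(3.0, 1.0, (32, 32))
enhanced = enhance_noise(noise)
```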
Transforming the spatial domain feature information to obtain the frequency domain feature information may include: performing discrete Fourier transform processing on the spatial domain feature information to obtain initial frequency domain feature information. When the initial frequency domain feature information is an initial frequency domain feature image, the direct-current component usually lies in the four corner regions of the image; spectrum centring is therefore performed on the initial frequency domain feature information to move the direct-current component to the centre of the image, yielding the frequency domain feature information.
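The transform-and-centre step maps directly onto `numpy.fft`: `fftshift` moves the DC component from the corner bins to the image centre. A small sketch:

```python
import numpy as np

def centered_spectrum(spatial_features):
    """Discrete Fourier transform followed by spectrum centring:
    fftshift moves the DC component from the corners to the centre."""
    spectrum = np.fft.fft2(spatial_features)
    return np.fft.fftshift(spectrum)

x = np.ones((8, 8))          # constant input: all energy in the DC bin
mag = np.abs(centered_spectrum(x))
```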
In the above embodiment, the high-frequency component data of the image to be detected is filtered to obtain the initial noise information, the wavelet-domain noise information of the plurality of channels is inverse-wavelet-transformed to obtain the initial spatial domain feature information, and noise-enhancement filtering is then applied to obtain the spatial domain feature information, which improves the accuracy of spatial domain feature extraction. The spatial domain feature information is processed by discrete Fourier transform to obtain the initial frequency domain feature information, and spectrum centring is applied to obtain the frequency domain feature information, which improves the accuracy of frequency domain feature extraction.
S13, inputting the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected.
The first image tampering detection network may be obtained by performing image tampering detection training on a preset image tampering detection network based on sample spatial domain feature information and sample frequency domain feature information. The first image tampering detection network may include a first feature fusion layer and a first image tampering detection layer.
In an optional embodiment, the first image tampering detection network includes a first feature fusion layer and a first image tampering detection layer, wherein:
the first feature fusion layer superposes and fuses the spatial domain feature information and the frequency domain feature information along the image-channel dimension to obtain first feature fusion information;
and the first image tampering detection layer performs image tampering detection according to the first feature fusion information to obtain the first candidate tampered region.
Fig. 2 is a flowchart of generating the first candidate tampered region. As shown in Fig. 2, the process may include the following steps:
S21, inputting the spatial domain feature information and the frequency domain feature information into the first feature fusion layer.
In order to avoid single-feature recognition, the spatial domain feature information and the frequency domain feature information are input into the first feature fusion layer simultaneously for feature fusion, providing a detection basis for improving the detection accuracy of the image.
S22, processing the spatial domain feature information and the frequency domain feature information in the first feature fusion layer, and outputting the first feature fusion information.
In this embodiment, fusion processing may be performed on the spatial domain feature information and the frequency domain feature information, where the fusion may be superposition along the image-channel dimension. For example, if the spatial domain feature information corresponds to a feature map of size A×B×512 and the frequency domain feature information corresponds to a feature map of size A×B×512, where A×B is the spatial resolution and 512 is the number of image channels, then superposing the image channels yields a fused feature map of size A×B×1024, which represents the first feature fusion information.
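The channel-wise superposition fusion is a concatenation along the channel axis. A minimal sketch (channel-last layout is assumed here; deep-learning frameworks often use channel-first):

```python
import numpy as np

# Feature maps with spatial resolution A x B and 512 channels each
A, B = 16, 16
spatial_feat = np.random.default_rng(3).random((A, B, 512))
freq_feat = np.random.default_rng(4).random((A, B, 512))

# Superposition fusion along the image-channel axis: 512 + 512 = 1024
fused = np.concatenate([spatial_feat, freq_feat], axis=-1)
```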
S23, inputting the first characteristic fusion information into a first image tampering detection layer to perform image tampering detection, and obtaining a first candidate tampering area.
The first image tampering detection layer may include a first convolution layer, a first global average pooling layer, a first fully connected layer and a first output layer. Inputting the first feature fusion information into the first image tampering detection layer for image tampering detection proceeds as follows: the first feature fusion information is input into the first convolution layer for convolution processing, which extracts features from the first feature fusion information; the convolved first feature fusion information is input into the first global average pooling layer, where a downsampling operation is performed that returns the maximum value in each sampling window as the downsampled output. The downsampling operation reduces computational complexity and compresses the features, thereby extracting the main features. The first fully connected layer may serve as a connection layer between layers; it may perform feature compression on the downsampled first feature fusion information to obtain the feature information to be detected. The output layer outputs a detection label for the image tampering detection of the feature information to be detected; the image tampering detection may be performed on the feature information to be detected through an activation function, and the output detection label is the first candidate tampered region.
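The pooling, fully connected and activation stages of the detection layer can be sketched in NumPy (the convolution stage is omitted for brevity, global average pooling is shown as the downsampling step, and the weights are random placeholders, not trained parameters):

```python
import numpy as np

def detection_head(feature_map, weights, bias):
    """Sketch of a tamper-detection head after convolution:
    global average pooling compresses each channel to one value,
    a fully connected layer maps the pooled vector to a logit,
    and a sigmoid activation yields a tampering probability."""
    pooled = feature_map.mean(axis=(0, 1))   # global average pooling
    logit = pooled @ weights + bias          # fully connected layer
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid activation

rng = np.random.default_rng(5)
feat = rng.random((8, 8, 1024))              # fused feature map (A x B x 1024)
w = rng.normal(0.0, 0.01, 1024)              # placeholder FC weights
prob = detection_head(feat, w, 0.0)
```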
In this embodiment, image tampering detection training is performed on a preset image tampering detection network based on sample spatial domain feature information and sample frequency domain feature information to obtain the first image tampering detection network, which may include a first feature fusion layer and a first image tampering detection layer. The first feature fusion layer superimposes and fuses the spatial domain feature information and the frequency domain feature information according to the image channel to obtain first feature fusion information, and the first feature fusion information is input into the first image tampering detection layer for image tampering detection to determine the first candidate tampered region. This avoids the limitation of detecting with a single feature and improves the accuracy of detecting image tampered regions using image noise.
S14, extracting edge characteristic information of the image to be detected.
Edge features are important image features: they mark where characteristics of the image (e.g., pixel gray level, texture) change discontinuously. Most of an image's information is concentrated at its edges, and the edge structure and edge characteristics of an image are often decisive for its overall characteristics. Edge features are therefore important cues for identifying image tampering, and edge feature information facilitates the detection of image tampering.
Extracting the edge feature information of the image to be detected may proceed as follows: the image to be detected is preprocessed by bilateral filtering to obtain a preprocessed image, and image edge features are detected and extracted from the preprocessed image by a multi-scale Gabor imaginary-part filter bank, yielding edge feature information at different scales and in different orientations.
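A minimal sketch of such a multi-scale, multi-orientation Gabor imaginary-part filter bank, built directly from the standard Gabor formula in NumPy (the kernel size, scales, and orientations below are illustrative choices, not values from the text):

```python
import numpy as np

def gabor_imag_kernel(ksize, sigma, theta, lambd, gamma=0.5):
    """Imaginary (odd) part of a Gabor filter: a Gaussian envelope
    modulating a sine carrier, sensitive to edges at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * x_r / lambd)

# A small bank covering two scales and four orientations, as the text's
# "different scales and different directions" suggests.
bank = [gabor_imag_kernel(21, sigma, theta, lambd=sigma * 2)
        for sigma in (2.0, 4.0)
        for theta in np.arange(0, np.pi, np.pi / 4)]
```

Convolving the bilaterally filtered image with each kernel in `bank` would give one edge response map per scale and orientation.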
And S15, calculating the probability value of each pixel belonging to the tampered area in the image to be detected according to the edge characteristic information, and determining the area, in which the probability value is larger than the corresponding probability threshold value, in the image to be detected as a second candidate tampered area.
In an alternative embodiment, before comparing the probability value in the image to be detected with the corresponding probability threshold value, the method further comprises:
calculating a gray value corresponding to each pixel in the image to be detected;
and determining the probability threshold according to the gray value.
A gray image may be obtained by gray-scale processing of the image to be detected, and the gray value corresponding to each pixel in the gray image is calculated. The gray values in the gray image are then divided into regions: a gray-value range is preset for each region, the gray values in the gray image are partitioned according to the preset gray-value ranges, and a plurality of regions divided by gray-value range are obtained. An area threshold is calculated for each region according to the gray values corresponding to that region, and the probability threshold corresponding to each pixel is determined according to the area threshold of its region.
Comparing the probability values in the image to be detected with the corresponding probability thresholds may proceed as follows: the pixel value corresponding to each pixel in the image to be detected is calculated; the probability that each pixel belongs to a tampered region is calculated according to its pixel value; a corresponding probability threshold is set for each pixel in the image to be detected; the calculated probability value is compared with the corresponding probability threshold; and if a probability value in the image to be detected is greater than the corresponding probability threshold, the corresponding region is taken as the second candidate tampered region.
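The per-pixel comparison against a gray-level-dependent threshold can be sketched as follows. The gray-value ranges and per-region thresholds here are hypothetical (the text does not give concrete values); the random arrays stand in for a real gray image and a real probability map:

```python
import numpy as np

gray = np.random.randint(0, 256, size=(64, 64))  # stand-in gray image
prob = np.random.rand(64, 64)                    # per-pixel tamper probability

# Hypothetical rule: three gray-value ranges, each with its own threshold.
bins = [0, 85, 170, 256]             # region boundaries (assumed)
region_thresholds = [0.7, 0.6, 0.5]  # one probability threshold per region (assumed)

region_idx = np.digitize(gray, bins[1:-1])          # region index 0, 1, or 2 per pixel
threshold_map = np.take(region_thresholds, region_idx)

# Pixels whose probability exceeds their threshold form the second candidate region.
second_candidate = prob > threshold_map
```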
S16, inputting the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, and obtaining a third candidate tampered region in the image to be detected.
There are various ways of tampering with an image, and a detection method that uses a single feature has limitations; by fusing features of different sizes, false detections caused by recognition based on a single feature can be avoided. The second image tampering detection network may be obtained by performing image tampering detection training on a preset image tampering detection network based on sample spatial domain feature information, sample frequency domain feature information, and sample edge feature information, where the second image tampering detection network includes a second feature fusion layer and a second image tampering detection layer.
In an alternative embodiment, the second image tamper detection network includes: a second feature fusion layer and a second image tamper detection layer, wherein:
the second feature fusion layer concatenates the spatial domain feature information, the frequency domain feature information and the edge feature information horizontally, and/or concatenates them vertically according to a preset dimension, to obtain second feature fusion information;
and the second image tampering detection layer performs image tampering detection according to the second feature fusion information to obtain the third candidate tampered region. Fusing the spatial domain feature information, the frequency domain feature information and the edge feature information together enhances detection of the edges of the image tampered region and avoids false detection and missed detection.
FIG. 3 is a flow chart of generating a third candidate tampered region. As shown in fig. 3, the following steps may be included:
S31, processing the spatial domain feature information and the frequency domain feature information to obtain first information to be fused.
The spatial domain feature information and the frequency domain feature information may be spatial domain feature information of the image noise and frequency domain feature information of the image noise.
The processing may be performed on a noise map corresponding to the image noise, where the noise map contains the spatial domain feature information of the image noise and the frequency domain feature information of the image noise. Specifically, the noise map is downsampled to obtain a noise map of size 128×128×256, which is used as the first information to be fused.
S32, processing the edge characteristic information to obtain second information to be fused.
The edge feature image corresponding to the edge feature information may be downsampled. The edge feature image comprises a probability map and a threshold map. Specifically, the edge feature image has a size of 128×128×1024; the probability map and the threshold map of the edge feature image are extracted by an edge detection model, each also of size 128×128×1024; the probability map is adjusted by the threshold map, and the probability map and the threshold map may be processed in the same way. The threshold map and the probability map are compressed along the image channel from 128×128×1024 to 128×128×256. The compressed threshold map and probability map are each convolved with a preset convolution kernel size and then deconvolved to restore a 512×512×256 map; the convolved threshold map and probability map are processed with an activation function to obtain a 512×512×1 threshold map and probability map, and the processed probability map is adjusted by the processed threshold map.
The edge detection model may be obtained by training a preset edge detection model with sample edge features. During training, a training probability map is formed by shrinking inward by a preset distance according to a shrinking algorithm, and a training threshold map is formed by expanding outward by the preset distance. To make better use of the edge detection model for extracting the probability map and threshold map of the edge feature map, training completion is determined from the model's loss function value: specifically, the loss function value of the edge detection model is calculated until it falls within a preset range, at which point training is complete. The probability map output by the trained edge detection model may have a size of 512×512×1 and is used as the second information to be fused.
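The text does not spell out the exact rule by which the threshold map adjusts the probability map. One common choice in threshold-map-based edge/text detectors is a differentiable step function, sketched here as an assumption rather than the patent's method:

```python
import numpy as np

def adjust_probability(prob_map, thresh_map, k=50.0):
    """Assumed adjustment rule: a steep sigmoid of (P - T), i.e. a
    differentiable binarization, so pixels well above their local
    threshold approach 1 and pixels well below approach 0."""
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

prob = np.random.rand(512, 512, 1)    # processed probability map
thresh = np.random.rand(512, 512, 1)  # processed threshold map
adjusted = adjust_probability(prob, thresh)
```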
And S33, fusing the first information to be fused and the second information to be fused in a second feature fusion layer to obtain second feature fusion information.
The noise map and the probability map may be fused by a fusion algorithm, where the noise map has a size of 128×128×256 and the probability map has a size of 512×512×1. Because their sizes are inconsistent, a bilinear interpolation algorithm is used to upsample the noise map so that the noise map and the probability map have the same spatial resolution, giving the noise map a size of 512×512×256. The noise map and the probability map are then concatenated horizontally, and/or concatenated vertically according to a preset dimension, to obtain the second feature fusion information.
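The bilinear upsampling and concatenation step can be sketched in plain NumPy. This is a minimal, align-corners-style bilinear resize written for illustration; the channel count is reduced from 256 to 8 to keep the sketch light:

```python
import numpy as np

def bilinear_resize(x, out_h, out_w):
    """Minimal bilinear up-sampling of an H×W×C array."""
    in_h, in_w, _ = x.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

noise = np.random.rand(128, 128, 8)   # 256 channels in the text; 8 here
prob = np.random.rand(512, 512, 1)    # probability map

noise_up = bilinear_resize(noise, 512, 512)        # match spatial resolution
fused = np.concatenate([noise_up, prob], axis=-1)  # channel-wise concatenation
```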
And S34, inputting the second characteristic fusion information into a second image tampering detection layer to obtain a third candidate tampering area.
The second image tampering detection layer may include a second convolution layer, a second global average pooling layer, a second fully connected layer, and a second output layer. Inputting the second feature fusion information into the second image tampering detection layer for image tampering detection proceeds as follows. The second feature fusion information is input into the second convolution layer for convolution processing, which extracts features from the second feature fusion information. The convolved second feature fusion information is then input into the second global average pooling layer, where a downsampling operation is performed; the downsampling operation returns the maximum value in each sampling window as the downsampled output, which reduces computational complexity and compresses the features so that the main features are extracted. The second fully connected layer may serve as a connection layer between layers: it performs feature compression on the second feature fusion information downsampled by the second global average pooling layer to obtain feature information to be detected. The output layer outputs a detection label from performing image tampering detection on the feature information to be detected; image tampering detection may be performed on the feature information to be detected through an activation function, and the output detection label is the third candidate tampered region.
In a specific embodiment, weight information of the noise map and the probability map may be calculated; the weight information is convolved, and an activation function is applied to the convolved weight information of the noise map and probability map to obtain a weight matrix. The larger the tampered area, the larger the corresponding weight at that location; by setting a threshold corresponding to the weights of the tampered region and comparing each weight with the corresponding threshold, the third candidate tampered region can be obtained.
S17, determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
Detection is performed using at least two of the detection modes, and the tampered region is judged by extracting multiple features, which improves detection effectiveness and avoids false detection and missed detection.
In an optional embodiment, the determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region, and the third candidate tampered region includes:
determining an area where the first candidate tampered area and the second candidate tampered area overlap as a tampered area of the image to be detected;
or determining the overlapping area of the first candidate tampered area and the third candidate tampered area as a tampered area of the image to be detected;
or determining the overlapping area of the second candidate tampered area and the third candidate tampered area as a tampered area of the image to be detected;
or determining the area where the first candidate tampered region, the second candidate tampered region and the third candidate tampered region all overlap as the tampered region of the image to be detected.
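Each of the overlap rules above reduces to a logical AND of boolean candidate masks. A toy sketch with two hypothetical rectangular candidate regions:

```python
import numpy as np

# Two hypothetical candidate tampered regions as boolean masks.
first = np.zeros((64, 64), dtype=bool)
first[10:40, 10:40] = True   # first candidate region
second = np.zeros((64, 64), dtype=bool)
second[25:55, 25:55] = True  # second candidate region

# The final tampered region is their overlap: rows/cols 25..39.
tampered = first & second
```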
In this application, the spatial domain feature information and the frequency domain feature information are input into the first image tampering detection network to obtain the first candidate tampered region; the probability value of each pixel in the image to be detected belonging to a tampered region is calculated according to the edge feature information, and the region in the image to be detected whose probability value is greater than the corresponding probability threshold is determined as the second candidate tampered region; the spatial domain feature information, the frequency domain feature information, and the edge feature information are input into the second image tampering detection network to obtain the third candidate tampered region; and the tampered region of the image to be detected is determined according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region. This improves the accuracy of image tampering detection and avoids false detection.
Example two
Fig. 4 is a block diagram of an image falsification area detection apparatus provided in a second embodiment of the present application.
In some embodiments, the image tampering area detection apparatus 400 may include a plurality of functional modules composed of computer program segments. The computer program of each program segment in the image tampering area detection apparatus 400 may be stored in a memory of an electronic device and executed by at least one processor to perform the functions of image tampering area detection (see fig. 1 for details).
In this embodiment, the image falsification area detection apparatus 400 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: an acquisition module 401, a first extraction module 402, a first detection module 403, a second extraction module 404, a second detection module 405, a third detection module 406, and a determination module 407. A module, as referred to in this application, is a series of computer program segments stored in a memory, executable by at least one processor, and performing a fixed function. In this embodiment, for the definition of the image falsification area detection apparatus 400, reference may be made to the definition of the image falsification area detection method, which is not described in detail here.
The acquiring module 401 is configured to acquire an image to be detected;
the first extraction module 402 is configured to extract spatial domain feature information and frequency domain feature information of the image to be detected;
the first detection module 403 is configured to input the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
the second extracting module 404 is configured to extract edge feature information of the image to be detected;
the second detection module 405 is configured to calculate, according to the edge feature information, a probability value of each pixel in the image to be detected belonging to a tampered area, and determine an area in the image to be detected, where the probability value is greater than a corresponding probability threshold, as a second candidate tampered area;
the third detection module 406 is configured to input the spatial domain feature information, the frequency domain feature information, and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampering region in the image to be detected;
the determining module 407 is configured to determine a tampered area of the image to be detected according to at least two of the first candidate tampered area, the second candidate tampered area, and the third candidate tampered area.
In some alternative embodiments, the first extraction module 402 is further configured to:
respectively extracting high-frequency components of a plurality of channels of the image to be detected to obtain high-frequency component data of the channels;
respectively carrying out noise intensity analysis on the high-frequency component data of the channels to obtain noise intensity data of the channels;
respectively filtering the high-frequency component data of the channels based on the noise intensity data to obtain initial noise information of the channels;
reconstructing the initial noise information of the channels to obtain initial spatial domain feature information;

and carrying out noise enhancement processing on the initial spatial domain feature information to obtain the spatial domain feature information.
In some alternative embodiments, the first extraction module 402 is further configured to: obtain the frequency domain feature information by transforming the spatial domain feature information into the frequency domain.
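The text does not name the transform used to move spatial domain information into the frequency domain. A minimal sketch under the assumption that a per-channel 2-D discrete Fourier transform magnitude is used:

```python
import numpy as np

# Stand-in for per-channel spatial domain feature information.
spatial = np.random.rand(64, 64, 3)

# Assumed transform: 2-D DFT over the spatial axes of each channel,
# keeping the magnitude spectrum as frequency domain feature information.
freq = np.abs(np.fft.fft2(spatial, axes=(0, 1)))
```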
In some alternative embodiments, the first image tampering detection network used by the first detection module 403 includes: a first feature fusion layer and a first image tamper detection layer, wherein:
the first feature fusion layer superimposes and fuses the spatial domain feature information and the frequency domain feature information according to an image channel to obtain first feature fusion information;
and the first image tampering detection layer performs image tampering detection according to the first feature fusion information to obtain the first candidate tampered region.
In some alternative embodiments, the second image tampering detection network used by the third detection module 406 includes: a second feature fusion layer and a second image tamper detection layer, wherein:
the second feature fusion layer concatenates the spatial domain feature information, the frequency domain feature information and the edge feature information horizontally, and/or concatenates them vertically according to a preset dimension, to obtain second feature fusion information;
and the second image tampering detection layer performs image tampering detection according to the second characteristic fusion information to obtain the third candidate tampering area.
In some alternative embodiments, the second detection module 405 is further configured to:
calculating a gray value corresponding to each pixel in the image to be detected;
and determining the probability threshold according to the gray value.
In some alternative embodiments, the determining module 407 is further configured to:
determining an area where the first candidate tampered area and the second candidate tampered area overlap as a tampered area of the image to be detected;
or determining the overlapping area of the first candidate tampered area and the third candidate tampered area as a tampered area of the image to be detected;
or determining the overlapping area of the second candidate tampered area and the third candidate tampered area as a tampered area of the image to be detected;
or determining the area where the first candidate tampered region, the second candidate tampered region and the third candidate tampered region all overlap as the tampered region of the image to be detected.
Example III
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the above-described image falsification area detection method embodiment, such as S11 to S17 shown in fig. 1:
S11, acquiring an image to be detected.
S12, extracting spatial domain characteristic information and frequency domain characteristic information of the image to be detected.
S13, inputting the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, and obtaining a first candidate tampered region in the image to be detected.
S14, extracting edge characteristic information of the image to be detected.
And S15, calculating the probability value of each pixel belonging to the tampered area in the image to be detected according to the edge characteristic information, and determining the area, in which the probability value is larger than the corresponding probability threshold value, in the image to be detected as a second candidate tampered area.
S16, inputting the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, and obtaining a third candidate tampered region in the image to be detected.
S17, determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
Alternatively, the computer program may be executed by a processor to perform the functions of the modules/units in the above-described apparatus embodiments, e.g. the modules 401-407 in fig. 4:
the acquiring module 401 is configured to acquire an image to be detected;
the first extraction module 402 is configured to extract spatial domain feature information and frequency domain feature information of the image to be detected;
the first detection module 403 is configured to input the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
the second extracting module 404 is configured to extract edge feature information of the image to be detected;
the second detection module 405 is configured to calculate, according to the edge feature information, a probability value of each pixel in the image to be detected belonging to a tampered area, and determine an area in the image to be detected, where the probability value is greater than a corresponding probability threshold, as a second candidate tampered area;
The third detection module 406 is configured to input the spatial domain feature information, the frequency domain feature information, and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampering region in the image to be detected;
the determining module 407 is configured to determine a tampered area of the image to be detected according to at least two of the first candidate tampered area, the second candidate tampered area, and the third candidate tampered area.
Example IV
Fig. 5 is a schematic structural diagram of an electronic device according to a third embodiment of the present application. In the preferred embodiment of the present application, the electronic device 5 includes a memory 51, at least one processor 52, at least one communication bus 53, and a transceiver 54.
It will be appreciated by those skilled in the art that the configuration of the electronic device shown in fig. 5 does not limit the embodiments of the present application; either a bus-type or a star-type configuration may be used, and the electronic device 5 may include more or fewer hardware or software components than shown, or a different arrangement of components.
In some embodiments, the electronic device 5 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 5 may also include a client device, which includes, but is not limited to, any electronic product that can interact with a client by way of a keyboard, mouse, remote control, touch pad, or voice control device, such as a personal computer, tablet, smart phone, digital camera, etc.
It should be noted that the electronic device 5 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application and are incorporated herein by reference.
In some embodiments, the memory 51 stores a computer program which, when executed by the at least one processor 52, performs all or part of the steps of the image tampering area detection method described above. The memory 51 includes read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc memory, magnetic tape memory, or any other computer-readable medium that can carry or store data.
In some embodiments, the at least one processor 52 is a Control Unit (Control Unit) of the electronic device 5, connects the various components of the entire electronic device 5 using various interfaces and lines, and performs various functions of the electronic device 5 and processes data by running or executing programs or modules stored in the memory 51, and invoking data stored in the memory 51. For example, the at least one processor 52, when executing the computer program stored in the memory, implements all or part of the steps of the image falsification area detection method described in the embodiments of the present application; or to implement all or part of the functions of the image falsification area detection apparatus. The at least one processor 52 may be comprised of integrated circuits, such as a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functionality, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
In some embodiments, the at least one communication bus 53 is arranged to enable connected communication between the memory 51 and the at least one processor 52 or the like.
Although not shown, the electronic device 5 may further include a power source (such as a battery) for powering the various components, and preferably the power source may be logically connected to the at least one processor 52 via a power management device, such that functions of managing charging, discharging, and power consumption are performed by the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 5 may further include various sensors, bluetooth modules, wi-Fi modules, camera devices, etc., which will not be described herein.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device, etc.) or processor (processor) to perform portions of the methods described in various embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the term "comprising" does not exclude other elements, and the singular does not exclude the plural. Several of the elements or devices recited in the specification may be embodied by one and the same item of software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. A method of detecting an image tampering area, the method comprising:
acquiring an image to be detected;
extracting spatial domain feature information and frequency domain feature information of the image to be detected;
inputting the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
extracting edge feature information of the image to be detected;
calculating, according to the edge feature information, a probability value of each pixel in the image to be detected belonging to a tampered region, and determining a region in the image to be detected whose probability value is greater than a corresponding probability threshold as a second candidate tampered region;
inputting the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampered region in the image to be detected; and
determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
2. The image tampering area detection method as claimed in claim 1, wherein said extracting spatial domain feature information of the image to be detected comprises:
extracting high-frequency components of a plurality of channels of the image to be detected, respectively, to obtain high-frequency component data of the channels;
performing noise intensity analysis on the high-frequency component data of the channels, respectively, to obtain noise intensity data of the channels;
filtering the high-frequency component data of the channels based on the noise intensity data, respectively, to obtain initial noise information of the channels;
reconstructing the initial noise information of the channels to obtain initial spatial domain feature information; and
performing noise enhancement processing on the initial spatial domain feature information to obtain the spatial domain feature information.
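As an illustration only, the claim-2 pipeline — per-channel high-frequency extraction, noise-intensity analysis, intensity-based filtering, reconstruction, and enhancement — can be sketched in plain NumPy. The patent names no concrete filters, so every operator below (box-filter residual for the high-frequency component, local variance for noise intensity, global normalisation for enhancement) is a hypothetical stand-in, not the claimed implementation:

```python
import numpy as np

def _box_blur(x, k=3):
    """Mean filter built from summed shifted views (NumPy only)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def spatial_noise_features(image):
    """Hypothetical sketch of the claim-2 steps.

    image: H x W x C float array in [0, 1].
    Returns an H x W x C spatial-domain feature array.
    """
    feats = []
    for c in range(image.shape[2]):
        chan = image[:, :, c].astype(float)
        # 1. High-frequency component: residual of a low-pass (box) filter.
        high = chan - _box_blur(chan)
        # 2. Noise intensity: local standard deviation of the residual.
        sigma = np.sqrt(np.maximum(_box_blur(high ** 2, k=7), 1e-12))
        # 3. Filter the residual by its estimated intensity.
        feats.append(high / (sigma + 1e-6))
    # 4. Reconstruct the per-channel noise maps into one feature tensor.
    feat = np.stack(feats, axis=-1)
    # 5. "Noise enhancement": a simple global normalisation as a stand-in.
    return feat / (np.abs(feat).max() + 1e-6)
```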
3. The image tampering area detection method as claimed in claim 2, wherein the frequency domain feature information is obtained by transforming the spatial domain feature information into the frequency domain.
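Claim 3 does not name the transform; a per-channel 2-D FFT magnitude spectrum is one plausible realisation, sketched here purely for illustration:

```python
import numpy as np

def to_frequency_domain(spatial_feat):
    """Illustrative spatial-to-frequency transform for an H x W x C array.

    Takes a 2-D FFT over the spatial axes of each channel and returns a
    log-scaled, centred magnitude spectrum; the actual transform used by
    the patent is unspecified.
    """
    spec = np.fft.fft2(spatial_feat, axes=(0, 1))
    return np.log1p(np.abs(np.fft.fftshift(spec, axes=(0, 1))))
```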
4. The image tampering area detection method as claimed in claim 1, wherein the first image tampering detection network comprises a first feature fusion layer and a first image tampering detection layer, wherein:
the first feature fusion layer superimposes and fuses the spatial domain feature information and the frequency domain feature information by image channel to obtain first feature fusion information; and
the first image tampering detection layer performs image tampering detection according to the first feature fusion information to obtain the first candidate tampered region.
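Read literally, the channel-wise superposition fusion of claim 4 is concatenation along the image-channel axis. A minimal sketch, assuming a channels-last layout:

```python
import numpy as np

def fuse_by_channel(spatial_feat, freq_feat):
    """Stack two H x W x C feature maps along the channel axis, producing
    the first feature-fusion information (channels-last assumed)."""
    assert spatial_feat.shape[:2] == freq_feat.shape[:2]
    return np.concatenate([spatial_feat, freq_feat], axis=-1)
```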
5. The image tampering area detection method as claimed in claim 1, wherein the second image tampering detection network comprises a second feature fusion layer and a second image tampering detection layer, wherein:
the second feature fusion layer splices the spatial domain feature information, the frequency domain feature information and the edge feature information transversely and/or longitudinally according to a preset dimension to obtain second feature fusion information; and
the second image tampering detection layer performs image tampering detection according to the second feature fusion information to obtain the third candidate tampered region.
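The transverse/longitudinal splicing of claim 5 can likewise be read as concatenation along the width or height axis. A minimal sketch; the mapping of "transverse" to the width axis is an assumption:

```python
import numpy as np

def fuse_by_splicing(spatial_feat, freq_feat, edge_feat, axis=1):
    """Splice three same-shaped feature maps side by side: axis=1 gives
    transverse (width-wise) splicing, axis=0 longitudinal (height-wise)."""
    return np.concatenate([spatial_feat, freq_feat, edge_feat], axis=axis)
```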
6. The image tampering area detection method as claimed in claim 1, wherein before comparing the probability value in the image to be detected with the corresponding probability threshold, the method further comprises:
calculating a gray value corresponding to each pixel in the image to be detected; and
determining the probability threshold according to the gray value.
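Claim 6 leaves the gray-value-to-threshold mapping unspecified; the sketch below uses BT.601 luma and a linear mapping to the range [0.4, 0.6] purely as hypothetical placeholders:

```python
import numpy as np

def pixelwise_threshold(image_rgb):
    """Derive a per-pixel probability threshold from the pixel's gray value
    (ITU-R BT.601 luma); image_rgb is H x W x 3 in [0, 1]."""
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])
    # Hypothetical mapping: brighter pixels get a slightly higher threshold.
    return 0.4 + 0.2 * gray

def second_candidate_mask(prob_map, image_rgb):
    """Pixels whose tamper probability exceeds their own gray-value-derived
    threshold form the second candidate tampered region."""
    return prob_map > pixelwise_threshold(image_rgb)
```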
7. The image tampering area detection method as claimed in claim 1, wherein the determining the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region comprises:
determining a region where the first candidate tampered region and the second candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the first candidate tampered region and the third candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the second candidate tampered region and the third candidate tampered region overlap as the tampered region of the image to be detected; or
determining a region where the first candidate tampered region, the second candidate tampered region and the third candidate tampered region all overlap as the tampered region of the image to be detected.
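Each branch of claim 7 reduces to a logical AND over the chosen candidate masks; a minimal sketch, with candidate regions represented as boolean arrays:

```python
import numpy as np

def overlap_region(*candidate_masks):
    """Return the region where all supplied candidate tampered-region masks
    overlap (pass two masks for the pairwise branches, three for the last)."""
    out = candidate_masks[0].astype(bool)
    for m in candidate_masks[1:]:
        out &= m.astype(bool)
    return out
```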
8. An image tampering area detection device, characterized by comprising an acquisition module, a first extraction module, a first detection module, a second extraction module, a second detection module, a third detection module and a determination module, wherein:
the acquisition module is configured to acquire an image to be detected;
the first extraction module is configured to extract spatial domain feature information and frequency domain feature information of the image to be detected;
the first detection module is configured to input the spatial domain feature information and the frequency domain feature information into a first image tampering detection network to perform image tampering detection, so as to obtain a first candidate tampered region in the image to be detected;
the second extraction module is configured to extract edge feature information of the image to be detected;
the second detection module is configured to calculate, according to the edge feature information, a probability value of each pixel in the image to be detected belonging to a tampered region, and to determine a region in the image to be detected whose probability value is greater than a corresponding probability threshold as a second candidate tampered region;
the third detection module is configured to input the spatial domain feature information, the frequency domain feature information and the edge feature information into a second image tampering detection network to perform image tampering detection, so as to obtain a third candidate tampered region in the image to be detected; and
the determination module is configured to determine the tampered region of the image to be detected according to at least two of the first candidate tampered region, the second candidate tampered region and the third candidate tampered region.
9. An electronic device comprising a processor and a memory, wherein the processor is configured to implement the image tampering area detection method of any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image tampering area detection method of any one of claims 1 to 7.
CN202310125121.8A 2023-02-10 2023-02-10 Image falsification area detection method and device, electronic equipment and storage medium Pending CN116109597A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310125121.8A CN116109597A (en) 2023-02-10 2023-02-10 Image falsification area detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116109597A true CN116109597A (en) 2023-05-12

Family

ID=86267097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310125121.8A Pending CN116109597A (en) 2023-02-10 2023-02-10 Image falsification area detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116109597A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117407562A (en) * 2023-12-13 2024-01-16 杭州海康威视数字技术股份有限公司 Image recognition method, system and electronic equipment
CN117407562B (en) * 2023-12-13 2024-04-05 杭州海康威视数字技术股份有限公司 Image recognition method, system and electronic equipment

Similar Documents

Publication Publication Date Title
CN110097564B (en) Image labeling method and device based on multi-model fusion, computer equipment and storage medium
US20200111203A1 (en) Method and apparatus for generating vehicle damage information
CN112446378B (en) Target detection method and device, storage medium and terminal
Chang et al. A forgery detection algorithm for exemplar-based inpainting images using multi-region relation
KR20190069457A (en) IMAGE BASED VEHICLES LOSS EVALUATION METHOD, DEVICE AND SYSTEM,
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
JP2021157847A (en) Method, apparatus, device, and readable storage medium for recognizing abnormal license plate
CN116109597A (en) Image falsification area detection method and device, electronic equipment and storage medium
CN112132216B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CA3136990A1 (en) A human body key point detection method, apparatus, computer device and storage medium
CN114067431A (en) Image processing method, image processing device, computer equipment and storage medium
CN116052026A (en) Unmanned aerial vehicle aerial image target detection method, system and storage medium
CN112991349B (en) Image processing method, device, equipment and storage medium
CN112750162A (en) Target identification positioning method and device
Singh et al. A two-step deep convolution neural network for road extraction from aerial images
CN111539341A (en) Target positioning method, device, electronic equipment and medium
CN113428177B (en) Vehicle control method, device, equipment and storage medium
CN114240816A (en) Road environment sensing method and device, storage medium, electronic equipment and vehicle
EP3044734B1 (en) Isotropic feature matching
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
CN110210314B (en) Face detection method, device, computer equipment and storage medium
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN114998282B (en) Image detection method, device, electronic equipment and storage medium
CN115861791B (en) Method and device for generating litigation clues and storage medium
CN115952531A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination