CN111507931A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN111507931A
CN111507931A (application CN201910032941.6A)
Authority
CN
China
Prior art keywords
target image
window
gray level
occurrence matrix
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910032941.6A
Other languages
Chinese (zh)
Other versions
CN111507931B (en)
Inventor
卢伟
刘永亮
黄继武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910032941.6A priority Critical patent/CN111507931B/en
Publication of CN111507931A publication Critical patent/CN111507931A/en
Application granted granted Critical
Publication of CN111507931B publication Critical patent/CN111507931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/45Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a data processing method and apparatus. The method comprises the following steps: sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified; and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image. The present disclosure can accurately determine a modified region in a target image.

Description

Data processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a data processing method and apparatus.
Background
With the rapid development of digital image processing technology, it has become easy to deceive the human eye by tampering with a digital image. In addition, as professional image-editing software (e.g., Photoshop) becomes ever more widespread, tampering with pictures is no longer a skill limited to professionals, and networks are flooded with large numbers of tampered images. Tampered images convey false information, which can have a very harmful influence on society.
At present, one form of image tampering is deblurring tampering: a blurred region is extracted from an original image, a deblurring tampering operation is applied to the extracted region, and the deblurred region is then spliced back into the original image. Because the deblurred tampered region and the untampered region come from the same image, there is currently no effective method for determining whether a deblurred tampered region exists in a target image; that is, whether the target image has been tampered with by deblurring cannot be determined. Therefore, an effective data processing method is needed to determine modified regions (deblurred tampered regions) in a target image.
Disclosure of Invention
In view of the above, the present disclosure provides a data processing method and apparatus, so that a modified region in a target image can be accurately determined.
According to a first aspect of the present disclosure, there is provided a data processing method, including: sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified; and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image.
In a possible implementation manner, determining a detection probability map of the target image at any window scale according to an N-order derivative gray level co-occurrence matrix of the target image at any window scale includes: and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In a possible implementation manner, obtaining, for any window scale, a decision model at that window scale trained from image samples includes: performing blurring processing on an image sample to obtain a blurred image; performing deblurring processing on the blurred image to obtain a deblurred image; sequentially traversing the deblurred image through a plurality of sliding windows with different window scales to determine an N-order derivative gray level co-occurrence matrix of the deblurred image at any window scale; and, for any window scale, training on the N-order derivative gray level co-occurrence matrix of the deblurred image at that window scale using LIBSVM and a radial basis function kernel to obtain the decision model at that window scale.
In a possible implementation manner, sequentially fusing the detection probability maps of the target image at different window scales, from large window scale to small, to determine a modified region in the target image includes: clustering the detection probability map of the target image at any window scale through a clustering algorithm to determine a first detection result map corresponding to the target image at that window scale, where the first detection result map includes a modified region and an unmodified region; adjusting the detection probability map of the target image at that window scale through connectivity detection and a filtering operation according to the first detection result map; clustering the adjusted detection probability map again through the clustering algorithm to determine a second detection result map corresponding to the target image at that window scale, where the second detection result map includes a modified region and an unmodified region; and sequentially fusing the second detection result maps corresponding to the target image at different window scales, from large window scale to small, to determine the modified region in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is Gaussian-weighted filtering.
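A minimal sketch of the two-centroid K-means step described above, assuming the detection probability map is a 2-D array of per-pixel modification probabilities; the connectivity detection and Gaussian-weighted filtering between the two clustering passes are omitted, and all names are illustrative, not the patent's implementation:

```python
import numpy as np

# Two-centroid K-means on a probability map: pixels closer to the
# high-probability centroid are labeled 1 ("modified"), the rest 0.
def kmeans_two(probs, iters=20):
    c = np.array([probs.min(), probs.max()], dtype=np.float64)  # initial centroids
    labels = np.zeros(probs.shape, dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = (np.abs(probs - c[0]) > np.abs(probs - c[1])).astype(int)
        # move each centroid to the mean of its assigned pixels
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = probs[labels == k].mean()
    return labels

p = np.array([[0.10, 0.20, 0.90],
              [0.15, 0.85, 0.95]])
mask = kmeans_two(p)
assert mask.tolist() == [[0, 0, 1], [0, 1, 1]]
```

The returned mask plays the role of a detection result map: a second pass of the same clustering is run after the probability map has been adjusted by connectivity detection and filtering.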
In one possible implementation, the plurality of sliding windows with different window sizes includes a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window.
According to a second aspect of the present disclosure, there is provided a data processing apparatus comprising: the first determining module is used for sequentially traversing a target image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; a second determining module, configured to determine a detection probability map of the target image in any window scale according to an N-order derivative gray level co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified; and the third determining module is used for sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small so as to determine the modified area in the target image.
In one possible implementation manner, the method further includes: the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In a possible implementation manner, the acquisition module includes: a blurring submodule configured to perform blurring processing on an image sample to obtain a blurred image; a deblurring submodule configured to perform deblurring processing on the blurred image to obtain a deblurred image; a first determining submodule configured to sequentially traverse the deblurred image through a plurality of sliding windows with different window scales to determine an N-order derivative gray level co-occurrence matrix of the deblurred image at any window scale; and a model training submodule configured to, for any window scale, train on the N-order derivative gray level co-occurrence matrix of the deblurred image at that window scale using LIBSVM and a radial basis function kernel to obtain a decision model at that window scale.
In one possible implementation manner, the third determining module includes: a clustering submodule configured to cluster the detection probability map of the target image at any window scale through a clustering algorithm and determine a first detection result map corresponding to the target image at that window scale, where the first detection result map includes a modified region and an unmodified region; an adjusting submodule configured to adjust the detection probability map of the target image at that window scale through connectivity detection and a filtering operation according to the first detection result map; the clustering submodule being further configured to cluster the adjusted detection probability map again through the clustering algorithm and determine a second detection result map corresponding to the target image at that window scale, where the second detection result map includes a modified region and an unmodified region; and a second determining submodule configured to sequentially fuse the second detection result maps corresponding to the target image at different window scales, from large window scale to small, to determine the modified region in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is Gaussian-weighted filtering.
In one possible implementation, the N-order derivative gray level co-occurrence matrix at least includes: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
In one possible implementation, the plurality of sliding windows with different window sizes includes a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window.
According to a third aspect of the present disclosure, there is provided a data processing apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the data processing method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the data processing method of the first aspect described above.
In the present disclosure, a target image is sequentially traversed through a plurality of sliding windows with different window scales, and an N-order derivative gray level co-occurrence matrix of the target image at any window scale is determined, where the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image. A detection probability map of the target image at any window scale is determined according to the N-order derivative gray level co-occurrence matrix at that window scale, where the detection probability map includes the probability that any pixel point in the target image has been modified. The detection probability maps at different window scales are then sequentially fused, from large window scale to small, so that a modified region in the target image can be accurately determined.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a data processing method of an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a method for deblurring a tampered area according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a second detection result graph corresponding to a target image at different window scales according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a data processing apparatus according to an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. As will be appreciated by those skilled in the art, and/or represents at least one of the connected objects.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In practical applications, images captured by a camera, a monitor, or another imaging device may contain a blurred region. When a tamperer wants to use such an image to convey false information, the tamperer performs deblurring tampering on the blurred region in order to keep the image visually uniform and coordinated.
For example, in an image of a moving automobile captured by a camera, the automobile may be blurred while the surrounding environment is clear, and a tamperer may wish to alter the image so that the license plate number is clearly visible. If the tamperer alters only the license plate region, its sharpness will be inconsistent with that of the surrounding area, making the tampered picture easy to identify. At present, a tamperer therefore usually performs deblurring tampering on the entire blurred region of the image and then further alters the license plate region, so that the sharpness of the tampered image is consistent overall, which increases the difficulty of identifying the tampered image.
At present, with the development of image processing technology, deblurring tampering is applied in more and more scenarios, not limited to the license plate tampering described above: whenever a tamperer wants to convey false information using an image with a blurred region, a deblurring tampering operation may be involved. It is therefore increasingly important to determine the deblurred tampered region in an image in order to authenticate the image. The data processing method provided by the present disclosure can accurately determine the modified region in a target image, that is, the deblurred tampered region.
Fig. 1 shows a schematic flow chart of a data processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step S11, sequentially traversing the target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image.
Step S12, determining a detection probability map of the target image under any window scale according to the N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified.
And step S13, sequentially fusing the detection probability maps of the target images under different window scales according to the window scales from large to small, and determining the modified area in the target image.
In one possible implementation, the target image may be modified in a manner of deblurring tampering, and the modified region in the target image may be determined as a deblurring tampered region in the target image.
In practical applications, if the target image includes a deblurred tampered region, that region is a blurred region that a tamperer extracted from the target image, subjected to a non-uniform deblurring tampering operation, and spliced back into the target image. Because the deblurred tampered region and the untampered region come from the same image, the splicing boundary of the deblurred tampered region is inconspicuous, and the anomaly is hard to perceive visually. To prevent the tampered target image from conveying false information, the deblurred tampered region in the target image needs to be determined effectively.
In one possible implementation, the plurality of sliding windows with different window dimensions includes a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window.
The plurality of sliding windows with different window dimensions may include sliding windows with other window dimensions in addition to the six different window dimensions, which is not specifically limited in this disclosure.
The target image is converted into a grayscale image and sequentially traversed through a plurality of sliding windows with different window scales, and the N-order derivative gray level co-occurrence matrix of the target image at any window scale is determined according to the N-order derivative of the target image.
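As a rough illustration of the multi-scale traversal, the sketch below slides windows of several scales over a grayscale image and yields each patch; the half-window stride is an assumption, since the disclosure does not state the stride:

```python
import numpy as np

# Slide windows of several scales over a grayscale image; the scale set
# follows the disclosure (4x4 up to 128x128). The stride (half the
# window side) is an illustrative assumption.
def sliding_patches(gray, scales=(4, 8, 16, 32, 64, 128)):
    h, w = gray.shape
    for s in scales:
        step = max(s // 2, 1)  # overlap amount: assumed, not specified
        for y in range(0, h - s + 1, step):
            for x in range(0, w - s + 1, step):
                yield s, y, x, gray[y:y + s, x:x + s]

img = np.arange(64, dtype=np.float64).reshape(8, 8)
patches = list(sliding_patches(img, scales=(4, 8)))
# 3x3 positions at scale 4 plus one position at scale 8
assert len(patches) == 10
```

Each yielded patch would then be reduced to its derivative gray level co-occurrence features at that scale.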
In one possible implementation, the N-order derivative gray level co-occurrence matrix includes at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
Fig. 2 shows a schematic diagram of a method for determining a deblurred tampered region according to an embodiment of the present disclosure, as shown in fig. 2, a target image is traversed through a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window, respectively, so as to obtain a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix of the target image at six different window scales.
At least determining a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix of the target image under any window scale, and determining a detection probability map of the target image under the window scale according to the first derivative gray level co-occurrence matrix and the second derivative gray level co-occurrence matrix of the target image under the window scale, namely determining the probability that any pixel point in the target image determined under the window scale is deblurred and tampered.
For any window scale, before determining the first derivative gray level co-occurrence matrix and the second derivative gray level co-occurrence matrix of the target image at the window scale, the first derivative and the second derivative of the target image at the window scale may be determined.
In one possible implementation, the first derivative D'(u, v) and the second derivative D''(u, v) of the target image are determined by the formulas

D'(u, v) = f(x, y) * q,    D''(u, v) = f(x, y) * q',

where f(x, y) is the target image, * denotes convolution, and q and q' are the first-derivative and second-derivative convolution kernels (the kernel matrices appear only as figures in the source publication and are not reproduced here). That is, the first derivative D'(u, v) of the target image f(x, y) is determined by convolving f(x, y) with the matrix q, and the second derivative D''(u, v) is determined by convolving f(x, y) with the matrix q'.
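As an illustration only, the convolution step can be sketched as follows; the kernels q and q' below are assumed simple difference kernels, since the patent's actual kernel matrices are not reproduced in this text:

```python
import numpy as np

# "Valid" 2-D convolution (kernel flipped, no padding), written out
# explicitly to stay self-contained.
def conv2_valid(f, k):
    kh, kw = k.shape
    out = np.zeros((f.shape[0] - kh + 1, f.shape[1] - kw + 1))
    kf = k[::-1, ::-1]  # flip for true convolution
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(f[i:i + kh, j:j + kw] * kf)
    return out

q  = np.array([[1.0, -1.0]])         # assumed first-derivative kernel
qp = np.array([[1.0, -2.0, 1.0]])    # assumed second-derivative kernel

f = np.tile(np.arange(5.0), (3, 1))  # horizontal ramp: 0 1 2 3 4 per row
d1 = conv2_valid(f, q)               # D'(u, v): slope of the ramp
d2 = conv2_valid(f, qp)              # D''(u, v): zero for a linear ramp
assert np.allclose(d1, 1.0) and np.allclose(d2, 0.0)
```

On a linear ramp the assumed first-derivative kernel recovers the constant slope and the second-derivative kernel returns zero, which is the sanity check the assertions encode.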
The first and second derivatives of the target image may be determined in other ways than those described above, and this disclosure is not limited thereto.
In one possible implementation, determining an N-order derivative gray level co-occurrence matrix of the target image at any window scale includes the following. From the first derivative D'(u, v) of the target image, a first derivative gray level co-occurrence matrix M1(m, n) is determined by the formula

M1(m, n) = Σ_(u, v) [D'(u, v) = m, D'(u + du, v + dv) = n],

where the indicator [D'(u, v) = m, D'(u + du, v + dv) = n] equals 1 when D'(u, v) = m and D'(u + du, v + dv) = n, and equals 0 otherwise. From the second derivative D''(u, v) of the target image, a second derivative gray level co-occurrence matrix M2(m, n) is determined by the formula

M2(m, n) = Σ_(u, v) [D''(u, v) = m, D''(u + du, v + dv) = n],

where the indicator [D''(u, v) = m, D''(u + du, v + dv) = n] equals 1 when D''(u, v) = m and D''(u + du, v + dv) = n, and equals 0 otherwise, with du ∈ {-1, 0, 1} and dv ∈ {-1, 0, 1}.
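The indicator-sum above can be sketched directly; the function name, the boundary handling, and the `levels` bound are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# Co-occurrence count M(m, n) = sum over (u, v) of the indicator
# [D(u, v) = m, D(u+du, v+dv) = n] for one offset (du, dv). Derivative
# values are assumed to be small non-negative integers (after any
# truncation and shifting) so they can index the matrix directly.
def cooccurrence(D, du, dv, levels):
    M = np.zeros((levels, levels), dtype=np.int64)
    h, w = D.shape
    for u in range(h):
        for v in range(w):
            uu, vv = u + du, v + dv
            if 0 <= uu < h and 0 <= vv < w:  # skip out-of-image neighbors
                M[D[u, v], D[uu, vv]] += 1
    return M

D = np.array([[0, 1],
              [1, 0]])
M = cooccurrence(D, 0, 1, levels=2)  # horizontal neighbor (du=0, dv=1)
assert M[0, 1] == 1 and M[1, 0] == 1
```

Summing the matrices produced for the eight offsets with du, dv ∈ {-1, 0, 1} (excluding du = dv = 0) would yield the full co-occurrence statistics for one window.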
In a possible implementation manner, determining a detection probability map of a target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale includes: and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability chart of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
Before determining a deblurring tampered region in a target image, performing model training on an image sample to determine a decision model under any window scale as prior knowledge, and further after determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, determining the deblurring tampered probability of any pixel point in the target image under the window scale according to the decision model under the window scale to obtain a detection probability graph of the target image under the window scale.
In a possible implementation manner, obtaining, for any window scale, a decision model at that window scale trained from image samples includes: performing blurring processing on an image sample to obtain a blurred image; performing deblurring processing on the blurred image to obtain a deblurred image; sequentially traversing the deblurred image through a plurality of sliding windows with different window scales to determine an N-order derivative gray level co-occurrence matrix of the deblurred image at any window scale; and training on the N-order derivative gray level co-occurrence matrix of the deblurred image at that window scale using a Library for Support Vector Machines (LIBSVM) and a radial basis function kernel to obtain the decision model at that window scale.
The image samples may be derived from a Dresden image database, or may be derived from other image databases, which are not specifically limited by this disclosure. The sample volume of the image sample may be determined according to actual conditions (e.g., 1400), which is not specifically limited by the present disclosure.
The image sample is converted into a grayscale image and blurred to obtain a blurred image; the blurred image is then deblurred to obtain a deblurred image that matches the deblurring-tampering scenario and has a maximized Peak Signal-to-Noise Ratio (PSNR).
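The PSNR-based selection of a deblurred candidate can be sketched as follows; the candidate set and the 8-bit peak value of 255 are assumptions for illustration:

```python
import numpy as np

# PSNR = 10 * log10(MAX^2 / MSE); MAX = 255 for 8-bit images.
def psnr(ref, est, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)                       # original sample
candidates = [np.full((4, 4), 90.0),               # worse deblur result
              np.full((4, 4), 99.0)]               # better deblur result
best = max(candidates, key=lambda c: psnr(ref, c)) # keep max-PSNR output
assert np.allclose(best, 99.0)
```

Running several deblurring methods and keeping the max-PSNR output is one way to realize the "maximized PSNR" selection the text describes.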
The ways to deblur the blurred image include, but are not limited to, three deblurring methods: the edge-likelihood optimization method used in blind deconvolution, a deblurring method with L0-regularized intensity and gradient priors, and a blind image motion deblurring method with an L0 sparse prior.
The method for deblurring the blurred image may include other deblurring methods besides the above three deblurring methods, and the disclosure does not specifically limit this method.
The deblurred image is sequentially traversed through a plurality of sliding windows with different window scales to determine the N-order derivative gray level co-occurrence matrix of the deblurred image at any window scale; then, for any window scale, model training is performed on the N-order derivative gray level co-occurrence matrix of the deblurred image at that window scale using LIBSVM and a radial basis function kernel to obtain the decision model at that window scale.
In the process of determining the N-order derivative gray level co-occurrence matrix of the deblurred image at any window scale and performing model training on it, a truncation threshold T and a dimension-reduction parameter n are set in order to balance computational performance and complexity; for example, the truncation threshold T is set to 10 and the dimension-reduction parameter n is set to 50. The truncation threshold T and the dimension-reduction parameter n may also be set to other values, which is not specifically limited in the present disclosure.
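The truncation by threshold T can be sketched as a simple clipping step; the exact truncation rule used in the patent is an assumption here:

```python
import numpy as np

# Clip derivative values to [-T, T] before counting co-occurrences,
# which bounds the co-occurrence matrix at (2T+1) x (2T+1) entries.
# T = 10 follows the example value given in the text.
T = 10
d = np.array([-25, -10, -3, 0, 7, 18])
clipped = np.clip(d, -T, T)
assert clipped.tolist() == [-10, -10, -3, 0, 7, 10]
```

Bounding the value range this way keeps the feature dimension fixed regardless of image content, which is what makes the subsequent dimension reduction to n components tractable.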
After the decision model at any window scale is obtained through LIBSVM and radial basis function kernel training, the detection probability map at that window scale can be determined according to the decision model at that window scale and the N-order derivative gray level co-occurrence matrix of the target image at that window scale.
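For reference, the radial basis function kernel used with LIBSVM-style training has the form K(x, z) = exp(-gamma * ||x - z||^2); the sketch below is illustrative only, and the gamma value is an assumption:

```python
import numpy as np

# RBF kernel between two feature vectors (e.g., vectorized co-occurrence
# features of a window). gamma is an assumed hyper-parameter; the patent
# does not state its value.
def rbf_kernel(x, z, gamma=0.5):
    diff = np.asarray(x, float) - np.asarray(z, float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

assert rbf_kernel([1.0, 2.0], [1.0, 2.0]) == 1.0       # identical inputs
assert 0.0 < rbf_kernel([0.0, 0.0], [3.0, 4.0]) < 1.0  # distinct inputs
```

The decision value of the trained SVM is a weighted sum of such kernel evaluations against the support vectors, which LIBSVM can convert into the per-window probabilities that form the detection probability map.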
Still taking the above FIG. 2 as an example, as shown in FIG. 2, a detection probability map of the target image under each sliding window is determined according to the decision model under that sliding window and the first derivative gray level co-occurrence matrix and the second derivative gray level co-occurrence matrix of the target image under that sliding window: the detection probability map C1 under the 4 × 4 sliding window, the detection probability map C2 under the 8 × 8 sliding window, the detection probability map C3 under the 16 × 16 sliding window, the detection probability map C4 under the 32 × 32 sliding window, the detection probability map C5 under the 64 × 64 sliding window, and the detection probability map C6 under the 128 × 128 sliding window.
The detection probability map of the target image under a large window scale has high accuracy, while the detection probability map under a small window scale gives a more precise boundary of the deblurring-tampered region. Therefore, in order to determine the deblurring-tampered region in the target image more accurately, the detection probability maps of the target image under different window scales can be fused sequentially from large to small according to the window scales.
In a possible implementation manner, sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine the modified region in the target image includes: clustering the detection probability map of the target image under any window scale through a clustering algorithm, and determining a first detection result map corresponding to the target image under the window scale, where the first detection result map includes a modified region and an unmodified region; adjusting the detection probability map of the target image under the window scale through connectivity detection and a filtering operation according to the first detection result map corresponding to the target image under the window scale; clustering the adjusted detection probability map of the target image under the window scale again through the clustering algorithm, and determining a second detection result map corresponding to the target image under the window scale, where the second detection result map includes a modified region and an unmodified region; and sequentially fusing the second detection result maps corresponding to the target image under different window scales according to the window scales from large to small to determine the modified region in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
For any window scale, the detection probability map of the target image under the window scale is clustered through two-centroid K-means clustering to obtain a first detection result map (a binary image) corresponding to the target image under the window scale, where the first detection result map comprises a deblurring-tampered region (the modified region) and an untampered region (the unmodified region); that is, an initial boundary of the deblurring-tampered region in the target image under the window scale is determined.
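A minimal two-centroid K-means pass over a detection probability map might look as follows; the initialization at the extreme probabilities and the fixed iteration count are implementation choices, not taken from the disclosure:

```python
import numpy as np

def two_means_binarize(prob_map, iters=20):
    """Split a detection probability map into tampered / untampered pixels
    with two-centroid K-means, yielding the binary first detection map."""
    p = prob_map.ravel()
    c = np.array([p.min(), p.max()], dtype=float)  # initial centroids
    for _ in range(iters):
        # assign each pixel to the nearer centroid, then update centroids
        labels = (np.abs(p - c[0]) > np.abs(p - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = p[labels == k].mean()
    if c[0] > c[1]:                    # label 1 = higher-probability cluster
        labels = 1 - labels            #         = modified region
    return labels.reshape(prob_map.shape)

prob = np.array([[0.10, 0.20, 0.90],
                 [0.15, 0.85, 0.95]])  # toy probability map
mask = two_means_binarize(prob)
```

On the toy map the three high-probability pixels are labeled 1 (tampered) and the rest 0, which is the binary first detection result map.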
In one possible implementation, the filtering operation is gaussian weighted filtering.
Still taking the above FIG. 2 as an example, as shown in FIG. 2, in order to reduce the influence of noise on the determination result of the deblurring-tampered region, for any window scale, the detection probability map of the target image under the window scale is adjusted through connectivity detection and Gaussian weighted filtering according to the first detection result map corresponding to the target image under the window scale, and the adjusted detection probability map is clustered again through two-centroid K-means clustering to determine a second detection result map (a binary image) corresponding to the target image under the window scale, where the second detection result map comprises the deblurring-tampered region and the untampered region; that is, a detection boundary of the deblurring-tampered region in the target image under the window scale is determined.
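The adjustment step can be sketched with SciPy's connected-component labeling and Gaussian filter. The minimum component size and the down-weighting factor for removed detections are hypothetical choices:

```python
import numpy as np
from scipy import ndimage

def adjust_probability_map(prob_map, first_result, min_size=2, sigma=1.0):
    """Suppress tiny connected components of the first detection map
    (connectivity detection) and smooth the probability map with a
    Gaussian filter: the denoising step before re-clustering."""
    labeled, n = ndimage.label(first_result)
    keep = np.zeros_like(first_result, dtype=bool)
    for i in range(1, n + 1):
        comp = labeled == i
        if comp.sum() >= min_size:     # drop isolated, noise-like detections
            keep |= comp
    adjusted = prob_map * np.where(keep, 1.0, 0.5)  # down-weight removed spots
    return ndimage.gaussian_filter(adjusted, sigma=sigma)

prob = np.full((5, 5), 0.8)            # toy probability map
first = np.zeros((5, 5), dtype=int)
first[0, 0] = 1                        # isolated single-pixel detection (noise)
first[2:4, 2:4] = 1                    # 2x2 connected tampered block
out = adjust_probability_map(prob, first)
```

The isolated pixel is discarded while the connected block survives, so the re-clustering in the next step sees a cleaner probability map.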
FIG. 3 is a schematic diagram of second detection result maps corresponding to the target image under different window scales according to an embodiment of the disclosure. FIG. 3 includes six second detection result maps (binary images): the second detection result map C1' corresponding to the target image under the 4 × 4 window scale, the second detection result map C2' under the 8 × 8 window scale, the second detection result map C3' under the 16 × 16 window scale, the second detection result map C4' under the 32 × 32 window scale, the second detection result map C5' under the 64 × 64 window scale, and the second detection result map C6' under the 128 × 128 window scale. White pixel points in a second detection result map are the pixel points that have been deblurring-tampered in the target image; that is, the white area in a second detection result map is the deblurring-tampered region in the target image.
In order to determine the deblurring-tampered region in the target image more accurately, the second detection result maps of the target image under different window scales can be fused sequentially from large to small according to the window scales.
The six second detection result maps C1'-C6' in FIG. 3 are fused as follows: first, taking the second detection result map C6' as a reference, the second detection result map C6' and the second detection result map C5' are fused to obtain a third detection result map C1''; next, taking the third detection result map C1'' as a reference, the third detection result map C1'' and the second detection result map C4' are fused to obtain a third detection result map C2''; next, taking the third detection result map C2'' as a reference, the third detection result map C2'' and the second detection result map C3' are fused to obtain a third detection result map C3''; next, taking the third detection result map C3'' as a reference, the third detection result map C3'' and the second detection result map C2' are fused to obtain a third detection result map C4''; finally, taking the third detection result map C4'' as a reference, the third detection result map C4'' and the second detection result map C1' are fused to obtain a third detection result map C5''. According to the third detection result map C5'' obtained after fusion, the deblurring-tampered region in the target image can be accurately determined.
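The disclosure does not spell out the fusion operator itself. The sketch below uses a logical AND, so each finer-scale map can only sharpen (never enlarge) the region inherited from the coarser fused result; this is one plausible reading, not the patented rule:

```python
import numpy as np

def fuse_maps(result_maps):
    """Fuse binary detection maps ordered from the largest window scale
    to the smallest. Here the fusion keeps a finer-scale detection only
    where the coarser fused map already marks tampering."""
    fused = result_maps[0]                       # coarsest map as reference
    for finer in result_maps[1:]:
        fused = np.logical_and(fused, finer)     # finer map sharpens boundary
    return fused.astype(np.uint8)

maps = [np.array([[1, 1], [1, 0]]),   # coarsest second detection result map
        np.array([[1, 0], [1, 1]]),
        np.array([[1, 1], [0, 1]])]   # finest second detection result map
fused = fuse_maps(maps)
```

Only the pixel marked tampered at every scale survives, mirroring how the pairwise fusions C6' → C1'' → … → C5'' progressively refine the region.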
The method comprises the steps of sequentially traversing a target image through a plurality of sliding windows with different window scales, determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is determined according to the N-order derivative of the target image, determining a detection probability map of the target image under the window scale according to the N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified, and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small, so that a modified area in the target image can be accurately determined.
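The traversal described above can be sketched end to end as follows. `predict_prob` stands in for the trained decision model, and the non-overlapping stride is a simplification (an overlapping sliding window changes only the loop steps):

```python
import numpy as np

def window_probability_map(image, win, predict_prob):
    """Slide a win x win window over the image and fill a per-pixel
    detection probability map from a per-window classifier."""
    h, w = image.shape
    prob = np.zeros((h, w))
    count = np.zeros((h, w))
    for i in range(0, h - win + 1, win):         # non-overlapping traversal
        for j in range(0, w - win + 1, win):
            p = predict_prob(image[i:i + win, j:j + win])
            prob[i:i + win, j:j + win] += p      # spread window score to pixels
            count[i:i + win, j:j + win] += 1
    return np.divide(prob, count, out=np.zeros_like(prob), where=count > 0)

img = np.zeros((8, 8))
img[:, 4:] = 255                                 # toy image, right half "tampered"
pm = window_probability_map(img, 4, lambda w: float(w.mean() > 0))
```

Running this once per window scale (4 × 4 through 128 × 128) produces the six probability maps C1-C6 that the fusion stage consumes.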
Fig. 4 shows a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure. The apparatus 40 shown in fig. 4 may be used to implement the steps of the method embodiment shown in fig. 1 described above, and the apparatus 40 includes:
the first determining module 41 is configured to sequentially traverse the target image through a plurality of sliding windows with different window scales, and determine an N-order derivative gray level co-occurrence matrix of the target image at any window scale, where the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to an N-order derivative of the target image;
a second determining module 42, configured to determine a detection probability map of the target image in any window scale according to an N-order derivative gray level co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified;
and a third determining module 43, configured to sequentially fuse the detection probability maps of the target images at different window scales according to the window scales from large to small, and determine a modified region in the target image.
In one possible implementation, the apparatus 40 further includes:
the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of a target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In one possible implementation, the obtaining module includes:
the fuzzy processing submodule is used for carrying out fuzzy processing on the image sample to obtain a fuzzy image;
the deblurring processing submodule is used for deblurring the blurred image to obtain a deblurred image;
the first determining submodule is used for sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and the model training submodule is used for training the N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by adopting LIBSVM and a radial basis function kernel, for any window scale, to obtain a decision model under the window scale.
In one possible implementation, the third determining module 43 includes:
the clustering submodule is used for clustering the detection probability map of the target image under any window scale through a clustering algorithm and determining a first detection result map corresponding to the target image under the window scale, where the first detection result map includes a modified region and an unmodified region;
the adjusting submodule is used for adjusting the detection probability map of the target image under the window scale through connectivity detection and a filtering operation according to the first detection result map corresponding to the target image under the window scale;
the clustering submodule is further used for clustering the adjusted detection probability map of the target image under the window scale again through the clustering algorithm and determining a second detection result map corresponding to the target image under the window scale, where the second detection result map includes a modified region and an unmodified region;
and the second determining submodule is used for sequentially fusing the second detection result maps corresponding to the target image under different window scales according to the window scales from large to small to determine the modified region in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is gaussian weighted filtering.
In one possible implementation, the nth derivative gray level co-occurrence matrix includes at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
In one possible implementation, the plurality of sliding windows with different window sizes includes:
4 × 4 sliding window, 8 × 8 sliding window, 16 × 16 sliding window, 32 × 32 sliding window, 64 × 64 sliding window, and 128 × 128 sliding window.
The apparatus 40 provided in the present disclosure can implement each step in the method embodiment shown in fig. 1, and implement the same technical effect, and is not described herein again to avoid repetition.
Fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
And a memory for storing the program. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads a corresponding computer program from the non-volatile memory into the memory and then runs the computer program, thereby forming the data processing device on a logic level. And a processor executing the program stored in the memory and specifically executing the steps of the embodiment of the method shown in fig. 1.
The method described above with reference to fig. 1 may be applied in or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present specification may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may execute the method executed in the method embodiment shown in fig. 1, and implement the functions of the method embodiment shown in fig. 1, which are not described herein again in this specification.
Embodiments of the present specification also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the data processing method in the embodiment shown in fig. 1, and specifically perform the steps of the embodiment of the method shown in fig. 1.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A data processing method, comprising:
sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image;
determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified;
and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image.
2. The method according to claim 1, wherein determining a detection probability map of the target image at any window scale according to an N-order derivative gray level co-occurrence matrix of the target image at the window scale comprises:
and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
3. The method of claim 2, wherein obtaining a decision model at any window scale trained using image samples for the window scale comprises:
carrying out fuzzy processing on the image sample to obtain a fuzzy image;
deblurring processing is carried out on the blurred image to obtain a deblurred image;
sequentially traversing the deblurred image through a plurality of sliding windows with different window scales to determine an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and aiming at any window scale, training an N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by adopting a support vector machine library (LIBSVM) and a radial basis function kernel to obtain a decision model under the window scale.
4. The method according to claim 1, wherein the step of sequentially fusing the detection probability maps of the target image at different window scales according to the window scales from large to small to determine the modified region in the target image comprises:
clustering the detection probability graph of the target image under any window scale through a clustering algorithm, and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area;
according to a first detection result graph corresponding to the target image under the window scale, adjusting a detection probability graph of the target image under the window scale through connectivity detection and filtering operation;
clustering the detection probability graph of the target image under the adjusted window scale again through the clustering algorithm, and determining a second detection result graph corresponding to the target image under the window scale, wherein the second detection result graph comprises a modified area and an unmodified area;
and sequentially fusing second detection result graphs corresponding to the target image under different window scales according to the window scales from large to small, and determining a modified area in the target image.
5. The method of claim 4, wherein the clustering algorithm is two-centroid K-means clustering.
6. The method of claim 4, wherein the filtering operation is Gaussian weighted filtering.
7. The method according to any of claims 1-6, wherein the Nth derivative gray level co-occurrence matrix comprises at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
8. The method of claim 1, wherein the plurality of sliding windows having different window dimensions comprises:
4 × 4 sliding window, 8 × 8 sliding window, 16 × 16 sliding window, 32 × 32 sliding window, 64 × 64 sliding window, and 128 × 128 sliding window.
9. A data processing apparatus, comprising:
the first determining module is used for sequentially traversing a target image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image;
a second determining module, configured to determine a detection probability map of the target image in any window scale according to an N-order derivative gray level co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified;
and the third determining module is used for sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small so as to determine the modified area in the target image.
10. The apparatus of claim 9, further comprising:
the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
11. The apparatus of claim 10, wherein the obtaining module comprises:
the fuzzy processing submodule is used for carrying out fuzzy processing on the image sample to obtain a fuzzy image;
the deblurring processing submodule is used for deblurring the blurred image to obtain a deblurred image;
the first determining submodule is used for sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and the model training submodule is used for training the N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by adopting LIBSVM and a radial basis function kernel, for any window scale, to obtain a decision model under the window scale.
12. The apparatus of claim 9, wherein the third determining module comprises:
the clustering submodule is used for clustering, through a clustering algorithm, the detection probability map of the target image under any window scale, and determining a first detection result map corresponding to the target image under the window scale, wherein the first detection result map comprises a modified area and an unmodified area;
the adjusting submodule is used for adjusting the detection probability map of the target image under the window scale through connectivity detection and a filtering operation according to the first detection result map corresponding to the target image under the window scale;
the clustering submodule is further configured to perform clustering again on the adjusted detection probability map of the target image under the window scale through the clustering algorithm, and determine a second detection result map corresponding to the target image under the window scale, wherein the second detection result map comprises a modified region and an unmodified region;
and the second determining submodule is used for sequentially fusing the second detection result maps corresponding to the target image under different window scales, in order of window scale from large to small, so as to determine a modified area in the target image.
13. The apparatus of claim 12, wherein the clustering algorithm is two-centroid K-means clustering.
14. The apparatus of claim 12, wherein the filtering operation is gaussian weighted filtering.
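A minimal sketch of the post-processing in claims 12–14: two-centroid K-means turns the detection probability map into a binary result map, a Gaussian-weighted filtering pass adjusts the probability map, and a second clustering produces the refined result map. The hand-rolled two-means routine, scipy's `gaussian_filter`, and the synthetic probability map are assumptions; the claims' connectivity-detection step is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_means(prob_map, iters=20):
    # Two-centroid K-means on pixel probabilities: returns a binary map,
    # 1 = the cluster with the higher centroid (the "modified" region).
    lo, hi = prob_map.min(), prob_map.max()
    for _ in range(iters):
        assign = np.abs(prob_map - hi) < np.abs(prob_map - lo)
        new_hi = prob_map[assign].mean() if assign.any() else hi
        new_lo = prob_map[~assign].mean() if (~assign).any() else lo
        if new_hi == hi and new_lo == lo:
            break
        hi, lo = new_hi, new_lo
    return assign.astype(int)

rng = np.random.default_rng(1)
prob = rng.random((64, 64)) * 0.3
prob[16:48, 16:48] += 0.6                    # block of high "modified" probability

first_pass = two_means(prob)                 # first detection result map
smoothed = gaussian_filter(prob, sigma=2.0)  # Gaussian-weighted filtering step
second_pass = two_means(smoothed)            # re-clustered second result map
```

The filtering pass suppresses isolated false positives before the second clustering, which is why the claims re-cluster the adjusted map rather than the raw one.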
15. The apparatus according to any one of claims 9-14, wherein the N-order derivative gray level co-occurrence matrix comprises at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
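Claim 15 names first and second derivative gray level co-occurrence matrices. A minimal numpy sketch, assuming horizontal differencing for the derivative images and a single right-neighbor offset for the co-occurrence counts:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    # Gray level co-occurrence matrix for a single pixel offset
    # (the default (0, 1) pairs each pixel with its right-hand neighbor).
    span = max(np.ptp(img), 1e-9)
    q = ((img - img.min()) / span * (levels - 1)).round().astype(int)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()   # normalized co-occurrence matrix

img = np.random.default_rng(2).integers(0, 256, (16, 16)).astype(float)
d1 = np.diff(img, axis=1)        # first-order horizontal derivative image
d2 = np.diff(img, n=2, axis=1)   # second-order horizontal derivative image
glcm_1 = glcm(d1)                # first derivative gray level co-occurrence matrix
glcm_2 = glcm(d2)                # second derivative gray level co-occurrence matrix
```

Differencing before computing the co-occurrence statistics emphasizes the high-frequency residue that blurring and deblurring alter, which is presumably why the claims use derivative rather than raw GLCMs.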
16. The apparatus of claim 9, wherein the plurality of sliding windows having different window dimensions comprises:
4 × 4 sliding window, 8 × 8 sliding window, 16 × 16 sliding window, 32 × 32 sliding window, 64 × 64 sliding window, and 128 × 128 sliding window.
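Claim 12's second determining submodule fuses the per-scale result maps from the largest window scale of claim 16 (128 × 128) down to the smallest (4 × 4). The claims fix the order but not the fusion operator, so the intersection used below is purely illustrative:

```python
import numpy as np

SCALES = [128, 64, 32, 16, 8, 4]   # the window scales of claim 16, largest first

def fuse(result_maps):
    # Hypothetical large-to-small fusion: start from the largest-scale result
    # map and let each smaller scale refine (intersect) the detected region.
    fused = result_maps[SCALES[0]].astype(bool)
    for s in SCALES[1:]:
        fused &= result_maps[s].astype(bool)
    return fused.astype(int)

# Toy second detection result maps: every scale flags the same 32 x 32 block.
maps = {s: np.zeros((64, 64), dtype=int) for s in SCALES}
for s in SCALES:
    maps[s][16:48, 16:48] = 1
fused = fuse(maps)   # final modified region in the target image
```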
17. A data processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the data processing method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the data processing method of any one of claims 1 to 8.
CN201910032941.6A 2019-01-14 2019-01-14 Data processing method and device Active CN111507931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910032941.6A CN111507931B (en) 2019-01-14 2019-01-14 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910032941.6A CN111507931B (en) 2019-01-14 2019-01-14 Data processing method and device

Publications (2)

Publication Number Publication Date
CN111507931A true CN111507931A (en) 2020-08-07
CN111507931B CN111507931B (en) 2023-04-18

Family

ID=71863772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910032941.6A Active CN111507931B (en) 2019-01-14 2019-01-14 Data processing method and device

Country Status (1)

Country Link
CN (1) CN111507931B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749668A (en) * 2021-01-18 2021-05-04 上海明略人工智能(集团)有限公司 Target image clustering method and device, electronic equipment and computer readable medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011049565A1 (en) * 2009-10-21 2011-04-28 Hewlett-Packard Development Company, L.P. Real-time video deblurring
CN104766095A (en) * 2015-04-16 2015-07-08 Chengdu Huizhi Yuanjing Technology Co., Ltd. Mobile terminal image identification method
CN105046265A (en) * 2015-03-03 2015-11-11 Shenyang University of Technology Iris image intestinal loop area detection method based on texture difference
CN108269221A (en) * 2018-01-23 2018-07-10 Sun Yat-sen University JPEG recompressed image tampering localization method
WO2018152643A1 (en) * 2017-02-24 2018-08-30 Sunnybrook Research Institute Systems and methods for noise reduction in imaging
CN108629743A (en) * 2018-04-04 2018-10-09 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, storage medium and electronic device
CN109190456A (en) * 2018-07-19 2019-01-11 PLA Strategic Support Force Information Engineering University Overhead-view pedestrian detection method based on multi-feature fusion of aggregated channel features and gray level co-occurrence matrices


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIANSHENG CHEN ET AL: "Median Filtering Forensics Based on Convolutional Neural Networks", IEEE Signal Processing Letters *
HE Ping et al.: "Image copy tampering detection algorithm fusing LWT texture features", Computer Engineering *
OU Jiajia et al.: "Image region copy tampering detection based on gray level co-occurrence matrix", Journal of Computer Applications *
ZHAO Haitao et al.: "Adaptive image edge detection based on gray level co-occurrence matrix", Microcomputer Information *
GAO Hao: "Image copy-paste forgery detection algorithm based on GLCM and GGM", China Master's Theses Full-text Database *


Also Published As

Publication number Publication date
CN111507931B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US9760800B2 (en) Method and system to detect objects using block based histogram of oriented gradients
CN108337505B (en) Information acquisition method and device
CN110909712B (en) Moving object detection method and device, electronic equipment and storage medium
CN110796649B (en) Target detection method and device, electronic equipment and storage medium
CN110135301B (en) Traffic sign recognition method, device, equipment and computer readable medium
CN111126108A (en) Training method and device of image detection model and image detection method and device
CN112801888A (en) Image processing method, image processing device, computer equipment and storage medium
WO2018133101A1 (en) Image foreground detection apparatus and method, and electronic device
CN112330576A (en) Distortion correction method, device and equipment for vehicle-mounted fisheye camera and storage medium
CN115393815A (en) Road information generation method and device, electronic equipment and computer readable medium
CN112633066A (en) Aerial small target detection method, device, equipment and storage medium
CN111507931B (en) Data processing method and device
CN113326766B (en) Training method and device of text detection model, text detection method and device
CN114202457A (en) Method for processing low-resolution image, electronic device and computer program product
Zeng et al. Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model
CN111415371A (en) Sparse optical flow determination method and device
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN110852250A (en) Vehicle weight removing method and device based on maximum area method and storage medium
CN116543222A (en) Knee joint lesion detection method, device, equipment and computer readable storage medium
AU2011265379A1 (en) Single shot image based depth mapping
CN115393763A (en) Pedestrian intrusion identification method, system, medium and device based on image frequency domain
CN115205553A (en) Image data cleaning method and device, electronic equipment and storage medium
CN115019126A (en) Image sample screening method and device, electronic equipment and storage medium
CN112528970A (en) Guideboard detection method, device, equipment and computer readable medium
JP6623419B2 (en) Display control device, imaging device, smartphone, display control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant