CN113095380A - Image hash processing method based on adjacent gradient and structural features - Google Patents



Publication number
CN113095380A
Authority
CN
China
Prior art keywords
image
matrix
hash
gradient
adjacent
Prior art date
Legal status: Granted
Application number
CN202110327145.2A
Other languages
Chinese (zh)
Other versions
CN113095380B (en)
Inventor
赵琰 (Zhao Yan)
马林生 (Ma Linsheng)
赵倩 (Zhao Qian)
Current Assignee
Shanghai Electric Power University
Original Assignee
Shanghai Electric Power University
Priority date
Filing date
Publication date
Application filed by Shanghai Electric Power University filed Critical Shanghai Electric Power University
Priority to CN202110327145.2A
Publication of CN113095380A
Application granted
Publication of CN113095380B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image hash processing method based on adjacent gradient and structural features. The method comprises: reading an image from an image library and preprocessing it; extracting three color components from the preprocessed image and obtaining statistical features of the image using adjacent gradients and a binarization-quantization compression method; converting the preprocessed image into a color space, extracting its luminance component, and converting the luminance component into a three-dimensional representation; extracting structural features of the image from the luminance component image and the three-dimensional representation; and combining the statistical features and the structural features into an intermediate hash, which is position-scrambled with a random generator to obtain the final hash sequence. The method is robust to most content-preserving image processing; because the image is processed in blocks, it is not robust to large-angle rotation, and it has a very low collision rate. For image authentication, the image hash is formed from features of the images in the library, which gives high security.

Description

Image hash processing method based on adjacent gradient and structural features
Technical Field
The invention relates to the technical field of image processing, and in particular to an image hash processing method based on adjacent gradient and structural features.
Background
In recent years, the content security of digital media has received a great deal of attention. With the rapid development of science and technology, networks and intelligent multimedia, image editing software has become widely available, and more and more images and videos are generated by users and uploaded to Internet communities. People can easily obtain large amounts of image information and can manipulate images with software such as Photoshop, for example by adding text to an image, changing its brightness and contrast, or compositing a new image. One image may therefore have many copies, so distinguishing between images is important. Image hashing techniques have consequently been developed to detect and classify images.
Many researchers have proposed notable algorithms in the image hashing field. For example, Lei et al. combine the Radon transform and the discrete Fourier transform (DFT) to construct a hash, extracting invariant features after the image transform and quantizing the coefficients of a one-dimensional DFT to form the hash. Other work [2] increases resistance to rotation while preserving distinctiveness by extracting the maximum inscribed circle of the image to reconstruct an image block, converting the secondary image to the frequency domain with the DFT, and extracting robust features from the magnitude matrix of the Fourier coefficients by non-uniform sampling to form the hash.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned problems of conventional image hash processing.
Therefore, the technical problem solved by the invention is as follows: some traditional image hashing algorithms cannot detect color changes and are not robust to noise, while others are robust to large-angle rotation but run too slowly and have a high collision rate.
In order to solve the above technical problems, the invention provides the following technical scheme: reading an image from an image library and preprocessing it; extracting three color components from the preprocessed image and obtaining statistical features of the image using adjacent gradients and a binarization-quantization compression method; converting the preprocessed image into a color space, extracting its luminance component, and converting the luminance component into a three-dimensional representation; extracting structural features of the image from the luminance component image and the three-dimensional representation; and combining the statistical features and the structural features into an intermediate hash, which is position-scrambled with a random generator to obtain the final hash sequence.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the preprocessing of the image comprises normalizing the images to the same size and, after resizing, performing a Gaussian low-pass filtering operation to reduce noise.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the Gaussian low-pass filtering operation comprises filtering the image with a Gaussian low-pass filter having a 3 × 3 template and a standard deviation σ = 1; the calculation formulas of the filtering process (the filter template and the filtered output) are reproduced as images in the original publication, wherein M_G(i, j) is the value of the element in the i-th row and j-th column of the template.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the obtaining of the statistical features of the image comprises extracting the three components R, G and B of the preprocessed image as I_R, I_G and I_B, and partitioning each component into non-overlapping sub-blocks of size b × b; calculating the mean of each block of I_R to form the mean matrix M_R of the component; calculating the two adjacent gradients of the mean matrix of I_R, namely the row adjacent gradient and the column adjacent gradient; and finally describing, by binarization-quantization processing with the mean μ, variance δ, skewness s and kurtosis ω, the feature Z_H of each row of the row adjacent gradient and the feature Z_L of each column of the column adjacent gradient, thereby obtaining the adjacent-gradient statistical feature S_R = [Z_H, Z_L] of I_R. The corresponding features of I_G and I_B are S_G and S_B, respectively, and together they give the joint statistical feature of the RGB color image H_1 = [S_R, S_G, S_B], of length L_1 = 6 × N/b.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the calculation of the two adjacent gradients of the mean matrix comprises letting the row adjacent gradient and the column adjacent gradient of the mean matrix be g_H and g_L, respectively; the i-th row vector g_H(i) of the row adjacent gradient matrix and the j-th column vector g_L(j) of the column adjacent gradient matrix are then calculated as follows:
g_H(i) = M_R(i+1, :) - M_R(i, :)
g_L(j) = M_R(:, j+1) - M_R(:, j)
wherein: mR(i,: is the row vector of the ith row of the mean matrix, MR(j) is the column vector of the jth column of the mean matrix, and when i equals N/b, M isR(i + 1:) is MR(1,:); when j is N/b, MR(j +1) is MR(:,1)。
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the binarization-quantization compression comprises computing the L2 norm between the whole matrix C and the mean matrix Q_m to obtain the row adjacent-gradient statistic D_H = [d_1, d_2, d_3, ..., d_K], and binarizing D_H to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, ..., z_K]. The binarization formula is reproduced as an image in the original publication; it compares d_H(i) with d_H(i+1), wherein z_H(i) is the i-th element of Z_H and, when i = K, d_H(i+1) = d_H(1).
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the extraction of the luminance component comprises letting the luminance component of the image be Y and converting Y into a three-dimensional space for structural feature extraction, with the horizontal position of the Y-component coordinates as the x axis, the vertical position as the y axis and the pixel value at the corresponding coordinates as the z axis, so as to construct a three-dimensional curve feature map; the peak-top and peak-valley curves of the Y component projected onto the xoz and yoz planes are drawn under the two viewing angles, and at the same time the three-dimensional curve of the Y component is segmented with equidistant slice planes parallel to the yoz plane to obtain a segmentation map, from which the structural features of the image are extracted.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the extraction of the structural features of the image comprises dividing the Y-component image into a series of non-overlapping small blocks of size b × b and averaging the pixel values of each block to obtain a feature matrix M; obtaining the peak-top and peak-valley curves of the feature matrix M under the xoz and yoz projections; computing the concave-convex point sets of the peak-top and peak-valley curves under the different projections and taking their unions; obtaining position information on the xoy plane from the unions of the concave-convex point sets; and jointly binarizing to obtain the local structural feature Z_1. To extract the position information of the concave-convex point sets under the different projections and construct the position feature, as in the extraction of the structural feature points, the intersection of the position information of the different viewing angles on the xoy plane is computed to obtain an intersection matrix, and row-wise operations and transposition are applied to the intersection matrix to obtain the position matrix Z_2, giving the local structural feature Z = [Z_1, Z_2]. The three-dimensional image is further divided equally into N/b section matrices by planes parallel to the xoz plane; by counting the number of pixel points contained in each section and the variance of the pixel-point set of each section matrix and binarizing, the overall feature S is obtained, yielding the structural feature H_2 = [Z_1, Z_2, S] of length L_2 = 6 × N/b - 2.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the overall feature comprises a vector S_1 obtained by counting the number of pixel points contained in each section and a vector S_2 obtained by counting the variance of each section matrix (both formulas are reproduced as images in the original publication); S_1 and S_2 are binarized separately to obtain S_3 and S_4 (binarization formulas likewise shown as images), which are combined to obtain the overall feature S.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the final hash sequence is obtained by combining the adjacent-gradient statistical feature H_1 and the structural feature H_2 into the intermediate hash H_m, expressed as H_m = [H_1, H_2], of length L = L_1 + L_2 = 12 × N/b - 2 bits; a random generator is then used to generate a key K of length L, and the intermediate hash H_m is scrambled with the key K by position indexing according to the following formula, taking the i-th bit value:
h(i) = H_m(K[i])
wherein K[i] is the i-th number in the pseudo-random number sequence K; the bit of H_m at the indexed position is assigned to the i-th position of the new hash sequence h, and the position scrambling yields the final hash sequence.
The invention has the following beneficial effects: the method is robust to most content-preserving image processing, and because the image is processed in blocks it is not robust to large-angle rotation and has a very low collision rate; the invention can also be used for image authentication, forming the image hash from features of the images in the library, which gives high security.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic basic flowchart of an image hash processing method based on adjacent gradients and structural features according to a first embodiment of the present invention;
fig. 2 is an image hash block diagram of an image hash processing method based on adjacent gradients and structural features according to a first embodiment of the present invention;
FIG. 3 is a three-dimensional image of the Y component of the image hashing method based on the adjacent gradient and the structural feature according to the first embodiment of the invention;
FIG. 4 is a projection graph of the Y component of the image hashing processing method based on the adjacent gradient and the structural feature according to the first embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of conventional image processing on Hash based on the image Hash processing method of neighboring gradients and structural features according to a second embodiment of the present invention;
FIG. 6 is a diagram illustrating the result of uniqueness analysis of an image hashing method based on neighboring gradients and structural features according to a second embodiment of the present invention;
FIG. 7 is a graph comparing ROC curves of different algorithms for an image hash processing method based on neighboring gradients and structural features according to a second embodiment of the present invention;
fig. 8 shows the Lena image and the images obtained by applying 10 attacks to it, for the image hash processing method based on adjacent gradients and structural features according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1 to 4, an embodiment of the present invention provides an image hash processing method based on adjacent gradient and structural features, including:
s1: reading the images in the image library and preprocessing the images. In which it is to be noted that,
preprocessing the image includes normalizing the image to the same size and performing a gaussian low pass filtering operation after the image is resized to reduce noise pollution.
The Gaussian low-pass filtering operation includes filtering the image with a Gaussian low-pass filter having a 3 × 3 template and a standard deviation σ = 1. The calculation formulas of the filtering process (the filter template and the filtered output) are reproduced as images in the original publication, wherein M_G(i, j) is the value of the element in the i-th row and j-th column of the template.
S2: and extracting three components from the preprocessed image, and obtaining the statistical characteristics of the image by using an adjacent gradient and a binarization quantization compression method. In which it is to be noted that,
the statistical characteristics of the image are obtained by extracting three components I of R, G and B from the preprocessed imageR、IG、IBAnd partitioning each component into blocks with sub-block size b × b, calculating IRThe mean value of each block is obtained to obtain a mean value matrix M of each componentRMean matrix MRIs represented as follows:
Figure RE-GDA0003090506170000063
wherein: m isi,jTaking the pixel value of the j column of the ith row as the mean value matrix, and then comparing IRThe mean matrix of (a) calculates the row proximity gradient gHColumn-sum neighboring gradient gL
g_H(i) = M_R(i+1, :) - M_R(i, :)
g_L(j) = M_R(:, j+1) - M_R(:, j)
wherein g_H(i) and g_L(j) are the i-th row vector of the row adjacent gradient matrix and the j-th column vector of the column adjacent gradient matrix, M_R(i, :) is the row vector of the i-th row of the mean matrix and M_R(:, j) is the column vector of the j-th column of the mean matrix; when i = N/b, M_R(i+1, :) is M_R(1, :), and when j = N/b, M_R(:, j+1) is M_R(:, 1).
Finally, the mean μ_H, variance δ_H, skewness s_H and kurtosis ω_H of each row of the row adjacent gradient are used to obtain the feature vector V_H(i) = [μ_H(i), δ_H(i), s_H(i), ω_H(i)]^T describing the i-th row of the row adjacent gradient; these feature vectors are normalized and arranged to obtain a feature matrix C = [c_1, c_2, c_3, ..., c_K] of size 4 × K, where K = N/b (the formulas of the four moments are reproduced as images in the original publication). The matrix C is averaged to obtain the mean vector Q_m = [q_1, q_2, q_3, q_4]^T (formula shown as an image), wherein q_m(j) is the j-th element of Q_m and c_i(j) is the j-th element of the i-th column of C. The L2 norm between the whole matrix C and the mean matrix Q_m is then computed to obtain the row adjacent-gradient statistic D_H = [d_1, d_2, d_3, ..., d_K], where d_H(i) is the i-th element of D_H (formula shown as an image). D_H is binarized to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, ..., z_K] (the binarization formula is shown as an image), wherein z_H(i) is the i-th element of Z_H and, when i = K, d_H(i+1) = d_H(1). In the same way, the statistical feature Z_L of the column adjacent gradient of I_R is obtained, giving the adjacent-gradient statistical feature S_R = [Z_H, Z_L] of I_R. The corresponding features of I_G and I_B are S_G and S_B, respectively, and together they give the joint statistical feature of the RGB color image H_1 = [S_R, S_G, S_B], of length L_1 = 6 × N/b.
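A compact Python sketch of this statistical-feature step is given below. Because the moment, L2-norm and binarization formulas are reproduced only as images, the standard definitions of mean, variance, skewness and kurtosis, the column-wise L2 distance to the mean vector, and a comparison of d(i) with its circular successor d(i+1) are assumptions here, and the normalization of the moment vectors is omitted.

import numpy as np

def block_means(channel, b=8):
    # Mean of each non-overlapping b x b block -> (N/b) x (N/b) matrix.
    n = channel.shape[0] // b
    return channel[:n * b, :n * b].reshape(n, b, n, b).mean(axis=(1, 3))

def adjacent_gradients(M):
    # g_H(i) = M(i+1, :) - M(i, :) and g_L(j) = M(:, j+1) - M(:, j), circular.
    return np.roll(M, -1, axis=0) - M, np.roll(M, -1, axis=1) - M

def row_moments(rows):
    # Mean, variance, skewness, kurtosis of every row -> 4 x K matrix C (assumed definitions).
    mu = rows.mean(axis=1)
    var = rows.var(axis=1)
    sd = np.sqrt(var) + 1e-12
    z = (rows - mu[:, None]) / sd[:, None]
    return np.vstack([mu, var, (z ** 3).mean(axis=1), (z ** 4).mean(axis=1)])

def binarized_statistic(C):
    # L2 distance of every column of C to the column mean Q_m, then compare d(i) with d(i+1).
    Qm = C.mean(axis=1, keepdims=True)
    D = np.linalg.norm(C - Qm, axis=0)
    return (D > np.roll(D, -1)).astype(np.uint8)

def channel_statistic(channel, b=8):
    M = block_means(channel, b)
    gH, gL = adjacent_gradients(M)
    ZH = binarized_statistic(row_moments(gH))      # row adjacent gradient feature Z_H
    ZL = binarized_statistic(row_moments(gL.T))    # column adjacent gradient feature Z_L
    return np.concatenate([ZH, ZL])                # S = [Z_H, Z_L], 2 * N/b bits

def statistical_feature(rgb, b=8):
    # H_1 = [S_R, S_G, S_B], length 6 * N/b.
    return np.concatenate([channel_statistic(rgb[:, :, c], b) for c in range(3)])

With N = 256 and b = 8 this yields a 192-bit feature H_1 (6 × 256/8), consistent with the lengths used later.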
S3: and converting the preprocessed image into a color space, extracting a brightness component of the color space, and converting the brightness component into a three-dimensional image. In which it is to be noted that,
extracting the luminance component comprises letting the luminance component of the image be Y; referring to FIG. 3, Y is converted into a three-dimensional space for structural feature extraction, with the horizontal position of the Y-component coordinates as the x axis, the vertical position as the y axis and the pixel value at the corresponding coordinates as the z axis, so as to construct a three-dimensional curve feature map; referring to FIG. 4, the peak-top and peak-valley curves of the Y component projected onto the xoz and yoz planes are drawn under the two viewing angles, and at the same time the three-dimensional curve of the Y component is segmented with equidistant slice planes parallel to the yoz plane to obtain a segmentation map, from which the structural features of the image are extracted.
S4: and extracting the structural features of the image according to the brightness component image and the three-dimensional image. In which it is to be noted that,
the image structure feature extraction method comprises the steps of dividing a preprocessed Y component image into a series of non-overlapping small blocks, wherein the block size is b multiplied by b, carrying out mean calculation on the pixel value of each small block to obtain a feature matrix M and a feature matrix, and obtaining a peak top curve and a peak bottom curve of M under xoz and yoz projection, wherein the calculation formula is as follows:
Figure RE-GDA0003090506170000081
wherein:
Figure RE-GDA0003090506170000082
and
Figure RE-GDA0003090506170000083
respectively a peak-top curve and a peak-bottom curve projected at xoz,
Figure RE-GDA0003090506170000084
and
Figure RE-GDA0003090506170000085
respectively a peak top curve and a peak valley curve under the yoz projection, wherein max (·, 1) and min (·, 1) are respectively operations for taking the maximum value and the minimum value of the matrix M according to rows, and max (·, 2) and min (·, 2) are respectively operations for taking the maximum value and the minimum value of the matrix M according to columns; then the peak-top curve and the peak-valley curve under different projections are subjected to concave-convex point set calculation, and the peak-top curve under xoz projection is firstly subjected to
Figure RE-GDA0003090506170000086
And (3) solving a concave-convex point set and directly carrying out binarization, wherein the calculation formula is as follows:
Figure RE-GDA0003090506170000087
wherein:
Figure RE-GDA0003090506170000088
is a pair of
Figure RE-GDA0003090506170000089
Binarization after obtaining a concave-convex point set is carried out, i is
Figure RE-GDA00030905061700000810
When i-1 is 0 or i +1 is more than N/b, namely the pixel point is positioned on the boundary, the size of the pixel point is only required to be compared with that of an internal adjacent pixel point to judge whether the pixel point is a concave-convex point; the peak-valley curve of xoz planes, the peak-top curve and the concave-convex point set of the peak-valley curve of yoz plane are sequentially obtained
Figure RE-GDA00030905061700000811
Figure RE-GDA00030905061700000812
And
Figure RE-GDA00030905061700000813
because the concave-convex points of the image under different projections are different, in order to compress the hash length of the algorithm and simultaneously keep the good classification performance of the image, the concave-convex points of the peak top curve and the concave-convex points of the peak bottom curve under different projections are subjected to union processing to obtain the characteristics of the local characteristic points
Figure RE-GDA0003090506170000091
Figure RE-GDA0003090506170000092
And
Figure RE-GDA0003090506170000093
combined to obtain local feature point features Z1=[A,B]。
In order to improve the distinctiveness of the algorithm, the position information of the concave-convex point sets under the different projections is extracted and the position feature is constructed. Specifically: from the concave-convex point set of the peak-top curve under the xoz projection, the position information matrix P = [P_1, P_2, ..., P_{N/b}] on the xoy plane is extracted, where P_i = [p_1, p_2, ..., p_{N/b}]^T (i = 1, 2, ..., N/b); from the concave-convex point set of the peak-top curve under the yoz projection, the position information matrix U = [U_1, U_2, ..., U_{N/b}] on the xoy plane is extracted, where U_i = [u_1, u_2, ..., u_{N/b}]^T (i = 1, 2, ..., N/b); similarly, the concave-convex point set of the peak-valley curve under the xoz projection gives the position information matrix Q = [Q_1, Q_2, ..., Q_{N/b}] on the xoy plane, and the concave-convex point set of the peak-valley curve under the yoz projection gives the position information matrix V = [V_1, V_2, ..., V_{N/b}]. Then, as in the extraction of the structural feature points, the intersections of the position information of the different viewing angles on the xoy plane are computed: D_1 = P ∩ U is the intersection matrix of P and U, and D_2 is the intersection matrix of Q and V. Row-wise operations and transposition are applied to D_1 and D_2 to obtain D_3 and D_4 (the formulas are reproduced as images in the original publication), and D_3 and D_4 are combined into the position matrix Z_2, giving the local structural feature Z = [Z_1, Z_2].
In order to enrich the extracted features of the image, the global feature of the whole image is also extracted. First, the three-dimensional image is divided equally into i = N/b section matrices by planes parallel to the xoz plane; the number of pixel points contained in each section is counted to obtain S_1, and the variance of each section matrix is counted to obtain S_2 (the formulas are reproduced as images in the original publication). S_1 and S_2 are binarized separately to obtain S_3 and S_4 (binarization formulas likewise shown as images), which are combined into the overall feature S. By combining the local structural feature Z = [Z_1, Z_2], comprising the feature points Z_1 and the position feature Z_2, with the global feature S, which contains the pixel-point set and variance of each section, the structural feature H_2 = [Z_1, Z_2, S] is obtained, of length L_2 = 6 × N/b - 2.
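The sketch below assembles the structural hash H_2 under loudly flagged assumptions: the translated text leaves the construction of the N/b section matrices ambiguous, so the value range of the Y component is sliced into N/b equal bands here, and the per-slice point count S_1 and variance S_2 are binarized by comparing adjacent slices, a choice that reproduces the stated length L_2 = 6 × N/b - 2 when Z_1 and Z_2 each carry 2 × N/b bits. The construction of Z_2 itself is not sketched because its description is ambiguous in the translation.

import numpy as np

def slice_statistics(Y, n_slices):
    # Assumed slicing: n_slices equal bands of the value range of Y.
    lo, hi = float(Y.min()), float(Y.max()) + 1e-9
    edges = np.linspace(lo, hi, n_slices + 1)
    counts, variances = [], []
    for k in range(n_slices):
        pts = Y[(Y >= edges[k]) & (Y < edges[k + 1])]
        counts.append(pts.size)
        variances.append(float(pts.var()) if pts.size else 0.0)
    return np.asarray(counts, dtype=float), np.asarray(variances)

def binarize_adjacent(v):
    # Assumed rule: 1 where v[i] > v[i+1]; yields len(v) - 1 bits.
    return (v[:-1] > v[1:]).astype(np.uint8)

def structural_feature(Y, Z1, Z2, b=8):
    n = Y.shape[0] // b
    S1, S2 = slice_statistics(Y, n)
    S = np.concatenate([binarize_adjacent(S1), binarize_adjacent(S2)])  # 2n - 2 bits
    return np.concatenate([Z1, Z2, S])  # H_2 = [Z_1, Z_2, S]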
S5: and combining the statistical characteristics with the structural characteristics to obtain intermediate hash, and performing position scrambling on the intermediate hash by using a random generator to obtain a final hash sequence. In which it is to be noted that,
the final hash sequence includes, in combination, the adjacent gradient statistic H1And structural feature H2Get intermediate Hash HmIs shown as Hm=[H1,H2]The length of which is L ═ L1+L2Generating a key K of length L by using a random generator, and performing intermediate hash H according to the following formulamAnd (3) calling the ith bit value after scrambling the secret key K to perform position indexing:
h(i)=Hm(K[i])
wherein: k [ i ]]H representing the ith number in the pseudo-random number sequence K to be indexedmAnd assigning the value to the ith position of the new hash sequence H for position scrambling to obtain the final hash sequence.
Example 2
Referring to fig. 5 to 8, another embodiment of the present invention is shown, which is to verify the technical effects adopted in the method, and verify the actual effects of the method by means of scientific demonstration.
In carrying out the experiment, the parameters were first set as follows: image normalization size N = 256, a 3 × 3 Gaussian low-pass filter with standard deviation 1, and image sub-block size b = 8, so that the hash length is L = L_1 + L_2 = 12 × N/b - 2 = 382 bits.
First, the robustness of the image hash is analyzed. Five 512 × 512 test images, Airplane, House, Lena, Baboon and Peppers, are selected and subjected to various conventional processing operations; each standard image is attacked according to Table 1 to obtain 66 similar images, the standard image and its 66 similar images form similar image pairs, and the hash Hamming distance of each similar image pair is calculated.
Table 1: parameters used in various conventional image processing in robust performance analysis.
Image processing | Parameter | Parameter values | Number
Brightness adjustment | Level | -20, -10, 10, 20 | 4
Contrast adjustment | Level | -20, -10, 10, 20 | 4
Gamma correction | Gamma value | 0.75, 0.9, 1.1, 1.25 | 4
JPEG compression | Quality factor | 30, 40, ..., 90, 100 | 8
Image scaling | Ratio | 0.6, 0.8, 1.2, 1.4, 1.6, 1.8 | 6
Salt-and-pepper noise | Level | 0.002, 0.004, ..., 0.01 | 5
Multiplicative noise | Variance | 0.002, 0.004, ..., 0.01 | 5
Gaussian noise | Mean | 0.002, 0.004, ..., 0.01 | 5
Gaussian low-pass filtering | Standard deviation | 0.1, 0.2, 0.3, ..., 0.9, 1 | 10
Mean filtering | Template | 3×3, 5×5, 7×7, 9×9 | 4
Watermark embedding | Transparency | 0.3, 0.4, ..., 0.7, 0.8 | 6
Rotation | Angle | 0.2, 0.4, 0.8, 1, 2 | 5
The hash of the original image and the hash of each processed image are computed and their distance is obtained. Referring to FIG. 5, the serial numbers on the horizontal axis correspond to the processing operations listed in Table 1 and the vertical axis is the hash distance. It can be seen that for all processing except image rotation the hash changes little, with distances of about 80, while, because of the block-based scheme, the hash distance rises sharply when the rotation angle is larger than 1, since rotation significantly changes the content of the image blocks. This means that the algorithm is robust against all attack operations except large-angle rotation.
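The hash distance used throughout these experiments is the Hamming distance between binary sequences; a minimal helper, with no assumptions beyond the hashes being 0/1 vectors of equal length:

import numpy as np

def hamming_distance(h1, h2):
    # Number of positions at which the two hash sequences differ.
    return int(np.count_nonzero(np.asarray(h1) != np.asarray(h2)))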
The uniqueness of the image hash, also called collision, is analyzed next: two images with different content should have completely different hashes. Referring to FIG. 6, the Hamming distances of the C(1000, 2) = 499500 different image pairs generated from 1000 different images have a maximum of 253, a minimum of 112, and a mean and standard deviation of 182.57 and 15.34, respectively; the distances are substantially greater than 80.
The threshold of the image hash is analyzed next. A hash-distance data set is first built, containing 499500 different image pairs and 210000 similar image pairs; the similar image pairs are produced by attacks such as JPEG compression, gamma correction and image scaling, with the attack parameters shown in Table 2 below. The minimum distance of the similar image pairs is 0 and the maximum is 135, so the robustness and uniqueness experiments give a threshold range T from 112 to 135. To obtain the optimal threshold for distinguishing similar image pairs from different image pairs, the collision rate and the error detection rate are introduced to analyze the performance of the algorithm; their calculation formulas are reproduced as images in the original publication.
the data set was analyzed using the formula, the analysis results of which are shown in table 3 below,
table 2: attack operations and parameter settings.
Figure RE-GDA0003090506170000113
Figure RE-GDA0003090506170000121
Table 3: collision and error detection rates of different thresholds.
Threshold value Collision rate PError detection rate Error detection rate PRate of collision
112 0 2.05×10-4
118 1.80×10-6 1.1×10-4
122 4.00×10-6 6.19×10-3
130 2.44×10-5 1.43×10-5
135 8.11×10-3 0
As can be seen from table 3, the resulting optimal threshold is 122.
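The sketch below shows how such a table can be produced from the two distance populations. Since the formulas are reproduced only as images, equating the collision rate with the fraction of different-image pairs at distance not greater than T, and the error detection rate with the fraction of similar pairs at distance greater than T, is an assumption consistent with the boundary rows of Table 3 (each rate vanishes at the opposite end of the threshold range).

import numpy as np

def collision_rate(different_pair_distances, T):
    # Fraction of different-image pairs wrongly judged similar at threshold T.
    d = np.asarray(different_pair_distances)
    return np.count_nonzero(d <= T) / d.size

def error_detection_rate(similar_pair_distances, T):
    # Fraction of similar image pairs wrongly judged different at threshold T.
    d = np.asarray(similar_pair_distances)
    return np.count_nonzero(d > T) / d.size

Sweeping T over 112 to 135 on the 499500 different-pair distances and the 210000 similar-pair distances reproduces a table of the same form as Table 3.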
To further demonstrate the beneficial effects of the invention, the traditional CS-LBP, Itti-Hu, TD and SG algorithms are selected for comparison. To ensure that the experiment is reasonable and fair, all algorithms are run on the same experimental platform with the same settings; 499500 different image pairs and 210000 similar image pairs are used to verify the classification performance, evaluated with the false acceptance rate P_FPR and the correct acceptance rate P_TPR, whose calculation formulas are reproduced as images in the original publication. The ROC curves of the different algorithms are computed and compared in FIG. 7. The closer an ROC curve is to the upper-left corner, the better the classification performance, i.e. the higher the correct acceptance rate and the lower the false acceptance rate. Compared with the other algorithms, the curve of the proposed method is closer to the upper-left corner of the ROC plot, i.e. it achieves better classification: the algorithm extracts the adjacent-gradient features of the color image, extracts rich statistical features to enhance robustness, and extracts local features to increase distinctiveness, so that robustness and distinctiveness are both taken into account.
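A sketch of the ROC computation, assuming the standard definitions of correct acceptance rate (P_TPR, similar pairs accepted) and false acceptance rate (P_FPR, different pairs accepted), since the formulas appear only as images:

import numpy as np

def roc_points(similar_distances, different_distances, thresholds):
    # One (P_FPR, P_TPR) point per threshold; plotting these gives the ROC curve.
    sim = np.asarray(similar_distances)
    diff = np.asarray(different_distances)
    points = []
    for T in thresholds:
        tpr = np.count_nonzero(sim <= T) / sim.size    # correct acceptance rate
        fpr = np.count_nonzero(diff <= T) / diff.size  # false acceptance rate
        points.append((fpr, tpr))
    return points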
In order to test the image retrieval performance of the method, a database of 1010 images is used, comprising 1000 different images and 10 processed versions of the original Lena image: Lena-JPEG 5, Lena-gamma correction 0.75, Lena-contrast +20, Lena-Gaussian low-pass filtering 0.5, Lena-Gaussian noise 0.002, Lena-salt-and-pepper noise 0.002, Lena-brightness +20, Lena-multiplicative noise 0.002, Lena-watermark 5 and Lena-scaling 0.8. Referring to FIG. 8, image retrieval is performed with the original Lena image as the query image; Table 4 shows some of the retrieval results:
table 4: the original image is compared to the 1010 test image and the Hash distance.
Sequence of Image of a person Distance between two adjacent plates
1 Original drawing 0
2 Watermark 5 11
3 Luminance +20 18
4 Contrast +20 22
5 Gaussian noise 0.002 22
6 JPEG5 24
7 Gaussian low pass filtering 0.5 26
8 Scaling 0.8 26
9 Multiplicative noise 0.002 30
10 Gamma correction of 0.75 37
11 Noise of spiced salt is 0.002 49
12 Other images >122
It can be seen that the distances of all image pairs, except those produced by the attack operations on the query image, are greater than the determined threshold T = 122.
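The retrieval experiment can be sketched as ranking the database by Hamming distance to the query hash and keeping everything within T = 122; the hamming_distance helper sketched earlier is reused, and the list-of-pairs database layout is an assumption.

def retrieve(query_hash, database, T=122):
    # database: iterable of (name, hash) pairs; returns (name, distance) sorted by distance.
    scored = sorted(((name, hamming_distance(query_hash, h)) for name, h in database),
                    key=lambda item: item[1])
    return [(name, d) for name, d in scored if d <= T]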
Therefore, the method has better robustness, a better balance between robustness and distinctiveness, a shorter hash length and a higher running speed, can detect copied images, and can be widely applied in the fields of image authentication and image retrieval.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (10)

1. An image hash processing method based on adjacent gradient and structural features, comprising:
reading images in an image library, and preprocessing the images;
extracting three components from the preprocessed image, and obtaining the statistical characteristics of the image by using an adjacent gradient and a binarization quantization compression method;
converting the preprocessed image into a color space, extracting a brightness component of the color space, and converting the brightness component into a three-dimensional image;
extracting structural features of the image according to the brightness component image and the three-dimensional image;
and combining the statistical characteristics and the structural characteristics to obtain intermediate hash, and performing position scrambling on the intermediate hash by using a random generator to obtain a final hash sequence.
2. The image hash processing method based on adjacent gradient and structural features according to claim 1, wherein the preprocessing of the image comprises:
normalizing the images to the same size, and performing a Gaussian low-pass filtering operation after the image is resized to reduce noise.
3. The image hash processing method based on adjacent gradient and structural features according to claim 2, wherein the Gaussian low-pass filtering operation comprises:
filtering the image with a Gaussian low-pass filter having a 3 × 3 template and a standard deviation σ = 1, the calculation formulas of the filtering process being reproduced as images in the original publication, wherein M_G(i, j) is the value of the element in the i-th row and j-th column of the template.
4. The image hash processing method based on adjacent gradient and structural features according to any one of claims 1 to 3, wherein the obtaining of the statistical features of the image comprises:
extracting the three components R, G and B of the preprocessed image as I_R, I_G and I_B, and partitioning each component into non-overlapping sub-blocks of size b × b; calculating the mean of each block of I_R to form the mean matrix M_R of the component; calculating the two adjacent gradients of the mean matrix of I_R, namely the row adjacent gradient and the column adjacent gradient; and describing, by binarization-quantization processing with the mean μ, variance δ, skewness s and kurtosis ω, the feature Z_H of each row of the row adjacent gradient and the feature Z_L of each column of the column adjacent gradient, thereby obtaining the adjacent-gradient statistical feature S_R = [Z_H, Z_L] of I_R; the corresponding features of I_G and I_B are S_G and S_B, respectively, and together they give the joint statistical feature of the RGB color image H_1 = [S_R, S_G, S_B], of length L_1 = 6 × N/b.
5. The image hash processing method based on adjacent gradient and structural features according to claim 4, wherein the calculation of the two adjacent gradients of the mean matrix comprises:
letting the row adjacent gradient and the column adjacent gradient of the mean matrix be g_H and g_L, respectively, the i-th row vector g_H(i) of the row adjacent gradient matrix and the j-th column vector g_L(j) of the column adjacent gradient matrix are calculated as follows:
g_H(i) = M_R(i+1, :) - M_R(i, :)
g_L(j) = M_R(:, j+1) - M_R(:, j)
wherein M_R(i, :) is the row vector of the i-th row of the mean matrix and M_R(:, j) is the column vector of the j-th column of the mean matrix; when i = N/b, M_R(i+1, :) is M_R(1, :); when j = N/b, M_R(:, j+1) is M_R(:, 1).
6. The image hash processing method based on adjacent gradient and structural features according to claim 5, wherein the binarization-quantization compression comprises:
computing the L2 norm between the whole matrix C and the mean matrix Q_m to obtain the row adjacent-gradient statistic D_H = [d_1, d_2, d_3, ..., d_K]; and binarizing D_H to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, ..., z_K], the binarization formula being reproduced as an image in the original publication, wherein z_H(i) is the i-th element of Z_H and, when i = K, d_H(i+1) = d_H(1).
7. The image hash processing method based on adjacent gradient and structural features according to any one of claims 1 to 3 and 5 to 6, wherein the extraction of the luminance component comprises:
letting the luminance component of the image be Y and converting Y into a three-dimensional space for structural feature extraction, with the horizontal position of the Y-component coordinates as the x axis, the vertical position as the y axis and the pixel value at the corresponding coordinates as the z axis, so as to construct a three-dimensional curve feature map; drawing the peak-top and peak-valley curves of the Y component projected onto the xoz and yoz planes under the two viewing angles; and at the same time segmenting the three-dimensional curve of the Y component with equidistant slice planes parallel to the yoz plane to obtain a segmentation map, from which the structural features of the image are extracted.
8. The image hash processing method based on adjacent gradient and structural features according to claim 7, wherein the extraction of the structural features of the image comprises:
dividing the Y-component image into a series of non-overlapping small blocks of size b × b and averaging the pixel values of each block to obtain a feature matrix M; obtaining the peak-top and peak-valley curves of the feature matrix M under the xoz and yoz projections; computing the concave-convex point sets of the peak-top and peak-valley curves under the different projections and taking their unions; obtaining position information on the xoy plane from the unions of the concave-convex point sets and jointly binarizing to obtain the local structural feature Z_1; extracting the position information of the concave-convex point sets under the different projections to construct the position feature, computing, as in the extraction of the structural feature points, the intersection of the position information of the different viewing angles on the xoy plane to obtain an intersection matrix, and applying row-wise operations and transposition to the intersection matrix to obtain the position matrix Z_2, so as to obtain the local structural feature Z = [Z_1, Z_2]; and dividing the three-dimensional image equally into N/b section matrices by planes parallel to the xoz plane, counting the number of pixel points contained in each section and the variance of the pixel-point set of each section matrix, and binarizing to obtain the overall feature S, so as to obtain the structural feature H_2 = [Z_1, Z_2, S] of length L_2 = 6 × N/b - 2.
9. The image hash processing method based on adjacent gradient and structural features according to claim 8, wherein the overall feature comprises:
counting the number of pixel points contained in each section to obtain S_1, and counting the variance of each section matrix to obtain S_2 (the formulas are reproduced as images in the original publication); and binarizing S_1 and S_2 separately to obtain S_3 and S_4, which are combined to obtain the overall feature S.
10. The image hash processing method based on adjacent gradient and structural features according to claim 9, wherein the final hash sequence is obtained by:
combining the adjacent-gradient statistical feature H_1 and the structural feature H_2 into the intermediate hash H_m = [H_1, H_2], of length L = L_1 + L_2; generating a key K of length L with a random generator; and scrambling the intermediate hash H_m with the key K by position indexing according to the following formula, taking the i-th bit value:
h(i) = H_m(K[i])
wherein K[i] is the i-th number in the pseudo-random number sequence K; the bit of H_m at the indexed position is assigned to the i-th position of the new hash sequence h, and the position scrambling yields the final hash sequence.
CN202110327145.2A 2021-03-26 2021-03-26 Image hash processing method based on adjacent gradient and structural features Active CN113095380B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110327145.2A CN113095380B (en) 2021-03-26 2021-03-26 Image hash processing method based on adjacent gradient and structural features


Publications (2)

Publication Number Publication Date
CN113095380A true CN113095380A (en) 2021-07-09
CN113095380B CN113095380B (en) 2023-03-31

Family

ID=76670187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110327145.2A Active CN113095380B (en) 2021-03-26 2021-03-26 Image hash processing method based on adjacent gradient and structural features

Country Status (1)

Country Link
CN (1) CN113095380B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734728A (en) * 2018-04-25 2018-11-02 西北工业大学 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image
CN110490789A (en) * 2019-07-15 2019-11-22 上海电力学院 A kind of image hashing acquisition methods based on color and structure feature
CN111177432A (en) * 2019-12-23 2020-05-19 北京航空航天大学 Large-scale image retrieval method based on hierarchical depth hash
CN111429337A (en) * 2020-02-28 2020-07-17 上海电力大学 Image hash acquisition method based on transform domain and shape characteristics
CN111787179A (en) * 2020-05-30 2020-10-16 上海电力大学 Image hash acquisition method, image security authentication method and device
CN112232428A (en) * 2020-10-23 2021-01-15 上海电力大学 Image hash acquisition method based on three-dimensional characteristics and energy change characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵琰 (Zhao Yan) et al.: "基于图像能量的稳健图像哈希算法" (Robust image hashing algorithm based on image energy), 《计算机应用研究》 (Application Research of Computers) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034367A (en) * 2023-10-09 2023-11-10 北京点聚信息技术有限公司 Electronic seal key management method
CN117034367B (en) * 2023-10-09 2024-01-26 北京点聚信息技术有限公司 Electronic seal key management method

Also Published As

Publication number Publication date
CN113095380B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
Li et al. Robust image hashing based on random Gabor filtering and dithered lattice vector quantization
JP2694101B2 (en) Method and apparatus for pattern recognition and validation
CN107622489B (en) Image tampering detection method and device
Wang et al. Perceptual hashing‐based image copy‐move forgery detection
Huang et al. Perceptual image hashing with texture and invariant vector distance for copy detection
Mohamed et al. An improved LBP algorithm for avatar face recognition
CN112232428B (en) Image hash acquisition method based on three-dimensional characteristics and energy change characteristics
CN106548445A (en) Spatial domain picture general steganalysis method based on content
Tang et al. Robust Image Hashing via Random Gabor Filtering and DWT.
Samanta et al. Analysis of perceptual hashing algorithms in image manipulation detection
Liu An improved approach to exposing JPEG seam carving under recompression
Tang et al. Robust image hashing via visual attention model and ring partition
Yuan et al. Perceptual image hashing based on three-dimensional global features and image energy
CN113095380B (en) Image hash processing method based on adjacent gradient and structural features
CN107977964A (en) Slit cropping evidence collecting method based on LBP and extension Markov feature
Liang et al. Robust hashing with local tangent space alignment for image copy detection
Vatsa et al. Signature verification using static and dynamic features
Lu et al. Detection of image seam carving using a novel pattern
Doegar et al. Image forgery detection based on fusion of lightweight deep learning models
Shang et al. Double JPEG detection using high order statistic features
CN106952211A (en) The compact image hash method of feature based spot projection
Gupta et al. Energy deviation measure: a technique for digital image forensics
Tang et al. Robust image hashing with low-rank representation and ring partition
Cui et al. A novel DIBR 3D image hashing scheme based on pixel grouping and NMF
Abdullahi et al. Fourier-Mellin transform and fractal coding for secure and robust fingerprint image hashing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant