CN108399593B - Non-intrusive watermark extraction and embedding method for IoT - Google Patents

Info

Publication number
CN108399593B
CN108399593B (application CN201810246835.3A)
Authority
CN
China
Prior art keywords
image
features
vector
information
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810246835.3A
Other languages
Chinese (zh)
Other versions
CN108399593A (en)
Inventor
李嘉 (Li Jia)
黄程韦 (Huang Chengwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810246835.3A
Publication of CN108399593A
Application granted
Publication of CN108399593B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/49 Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G06T7/90 Determination of colour characteristics

Abstract

The invention discloses a non-intrusive watermark extraction and embedding method for IoT, which is characterized by comprising the following steps: s1, carrying out illumination equalization processing on the original image I (x, y); s2, extracting texture and color features of the image subjected to the illumination equalization processing in the S1; s3, modeling the features extracted in the step S2 in a high-dimensional vector space, increasing the information of coordinate position distribution of the extracted features in the original image, and constructing geometric shape features; s4, sending the features modeled in the high-dimensional vector space in the step S3 into a pre-trained encoder for encoding to obtain unique watermark information corresponding to the image; and S5, storing the unique watermark information corresponding to the image in the step S4 in a storage space of a network data center, and associating the expanded data information, wherein the data information comprises time, place and additional user comment information.

Description

Non-intrusive watermark extraction and embedding method for IoT
Technical Field
The present invention relates to the technical field of IoT feature extraction methods, and more particularly, to a non-intrusive watermark extraction and embedding method for IoT.
Background
The rapid development of the Internet of Things (IoT) provides a new technical means for tracing the origin of articles. By authenticating each link of production, processing, and transportation with timestamps and electronic signatures, the historical information of an article can be recorded, forming a trusted IoT time data stream, so that a consumer can trace the source information by scanning an attached RFID tag or two-dimensional code, or by photographing the identified object.
The rapid development of artificial intelligence has made recognition of the visual characteristics of objects increasingly efficient. Image recognition based on deep learning can approach the level of human visual cognition and can match image features quickly and accurately; it is widely applied in the IoT, financial authentication, biometric identification, and other fields.
Five prior patents [1-5] are similar to the present invention. Patent [1] (application no. CN201710496927.2) provides a method for monitoring agricultural products during production and uploading the information to a network, but its traceability system omits the authentication of trusted time information. Patent [2] (application no. CN201611042032.3) proposes a system that records attendance information by check-in; it can authenticate time information, but it is not bound to articles, cannot associate time information with an arbitrary article, and lacks mobility, so it cannot supervise commodity circulation in an IoT network. Patent [3] (application no. CN201710120913.0) provides a secure timestamp authentication method based on key encryption that can associate watermark information with an article's label, but the association relies on traditional label printing rather than image watermarking and image recognition, making it an intrusive, contact-based mode of time authentication. Patent [4] (application no. CN201611054170.3) proposes a hardware-based way of embedding time information into messages in the field of network communication; it belongs to time authentication for network data communication and is not suitable for monitoring the production and circulation of goods in the IoT. Patent [5] (application no. CN201710253044.9) proposes a watermark embedding method based on image color features; the pixel information it uses differs greatly from the feature construction in the present invention, and as an intrusive method it is suitable only for copyright protection of digital media images and cannot be used to identify physical objects.
Disclosure of Invention
Based on this, there is a need to provide a non-intrusive watermark extraction and embedding method for IoT to solve the above technical problem.
The technical scheme of the invention is as follows:
a non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps:
s1, carrying out illumination equalization processing on the original image I (x, y);
s2, extracting texture and color features of the image subjected to the illumination equalization processing in the S1;
s3, modeling the features extracted in the step S2 in a high-dimensional vector space, increasing the information of coordinate position distribution of the extracted features in the original image, and constructing geometric shape features;
s4, sending the features modeled in the high-dimensional vector space in the step S3 into a pre-trained encoder for encoding to obtain unique watermark information corresponding to the image;
and S5, storing the unique watermark information corresponding to the image in the step S4 in a storage space of a network data center, and associating the expanded data information, wherein the data information comprises time, place and additional user comment information.
In one embodiment, the equalization processing of the illumination in step S1 adopts the following algorithm:
R=w1[log(I(x,y))-log(F1(x,y)*I(x,y))]+w2[log(I(x,y))-log(F2(x,y)*I(x,y))],
wherein w1 and w2 are the weight coefficients of the respective scales, F1 and F2 are Gaussian filters at different scales, * denotes convolution, and x and y are the image pixel coordinates.
In one embodiment, the texture features include: DSP-SIFT characteristics, GMS characteristics, image edge characteristics and image corner characteristics; the color characteristics include: a color pixel distribution probability density function in HSV color space, RGB color space, TSL color space, YCrCb color space;
the DSP-SIFT feature extraction method comprises the following steps:
h(θ|I)[x]=∫p(θ|I,s)[x]e(s)ds
wherein h is the feature extraction function, θ is the angular direction of the feature, taking values in [0, 2π], x denotes a position in the image, p is the SIFT feature extraction function, s is the image scale, and e(s) is an exponential weighting function of scale.
In one embodiment, the modeling the features extracted in step S2 in the high-dimensional vector space includes the following steps:
S41, extracting the coordinate positions (xi, yi) of n feature points to form a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling, wherein a line segment is formed between every two feature points to represent the geometric position relationship between them;
S42, fitting a multivariate Gaussian model to the shape vector S by probability density estimation to obtain the mean vector U, wherein the multivariate Gaussian distribution of S is:
p(S)=1/((2π)^(d/2)|Σ|^(1/2))exp[-0.5(S-U)'Σ^(-1)(S-U)]
wherein p represents the probability density, d the feature dimension, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
S43, calculating the relative distance vector D between the shape vector S and the mean vector U;
and S44, merging the extracted feature values of the n feature points with the relative distance vector to expand the dimension of the feature vector.
In one embodiment, the features modeled in the high-dimensional vector space in step S4 are sent to a pre-trained encoder, and the encoding includes the following steps:
s51, collecting sample images of similar articles, and extracting feature vectors of the images according to the step S3;
S52, training the deep neural network through the self-encoder structure to obtain the parameters of each node of the data-compressing neural network, wherein each node computes: h(x) = f(w1*x1 + w2*x2 + w3*x3 + ... + b), where w1, w2, ... are the neuron weights, b is the bias, x1, x2, ... are the inputs of the neuron, and f: R → R is a mapping function on the real number domain, called the activation function, which takes the form:
f(z)=1/(1+e^(-z))
using s to denote the class label, x the input feature vector, and y the output of the neural network, the input vector of layer l (l = 1, ..., L) is recorded as v_l and the hidden vector as h_l, whose m-th unit is h_l^m; the total number of hidden nodes in layer l is N_l, and the posterior probability of each hidden unit is:
p(h_l^m = 1 | v_l) = 1/(1 + e^(-z_l^m(v_l))), m = 1, ..., N_l
wherein z_l(v_l) = (W_l)'v_l + a_l, W_l is the weight matrix and a_l the bias vector of layer l; the output layer of the neural network computes the final posterior probability over the class labels:
p(s | v_L) = e^(z_L^s(v_L)) / Σ_s' e^(z_L^s'(v_L))
when the nodes of the neural network are all binary random variables, the energy function is:
E(v, h; θ) = -v'Wh - b'v - a'h
wherein the parameter set is θ = {W, a, b};
and S53, extracting the watermark information of the input image from the network node of the middle layer of the encoder.
In one embodiment, associating the expanded data information in step S5 includes the following steps:
S61, performing the mapping calculation from the unique watermark information of the image to the data-center storage address, map: wmark → addr,
wherein wmark represents the watermark data and addr the data address; the mapping adopts a linear function:
map(x)={y1,y2,y3...}
yi=x1*mi1+x2*mi2+x3*mi3+...
the mapping function matrix is:
M = [m11 m12 m13 ...; m21 m22 m23 ...; m31 m32 m33 ...; ...]
s62, carrying out chaotic scrambling encryption on data information associated with the article;
S63, encrypting by XOR-ing the chaotic sequence with the original data:
the chaotic sequence is recorded as {xi} and the original data as a sequence {si}; the real-valued sequence {si} is converted into a binary sequence {bi}, and the output sequence {yi} after chaotic processing is obtained by the XOR operation, for i = 1, 2, ...:
yi = bi ⊕ xi
The invention has the beneficial effects that:
the invention provides a non-invasive watermark extraction and embedding method, which relates watermark information to an actual scene or an article in the real world under the conditions of not destroying an original image, not modifying the original image and only utilizing the image data characteristics of the original image. The method is non-invasive and non-destructive, so that the application deployment measures are extremely simple and convenient, and the method naturally has compatibility with various image acquisition devices and network transmission facilities. Compared with the two-dimensional code scanning of the prior common technology, the method has the advantages that the supported data storage capacity is larger, the shooting distance and the shooting angle are more convenient and flexible, in addition, the image data characteristic matching and identifying method provided by the invention has the advantages that the accuracy is obviously improved compared with the similar technology, and the reliability is obviously enhanced. The time information of the goods or the nodes in the IoT is carried, so that the safety of the goods such as food, medicinal materials and the like can be improved, and the key information of the production process of the goods is obtained through tracing of the time information. The comment information of commodities in the IoT network can be stored in the network storage space pointed by the watermark, and the watermark extraction technology can share the comment information of different users, so that the safety and circulation process of food or medicinal materials can be supervised.
Drawings
Fig. 1 is a flow chart of a method for non-intrusive watermark extraction and embedding for IoT of the present invention;
fig. 2 is a flow chart of texture feature extraction in a non-intrusive watermark extraction and embedding method for IoT of the present invention;
fig. 3 is a flow chart of the construction of geometric features in a method for non-intrusive watermark extraction and embedding for IoT of the present invention;
FIG. 4 is a training flow diagram of a watermark feature deep neural network in a non-intrusive watermark extraction and embedding method for IoT of the present invention;
fig. 5 is an expanded flow chart of watermark information in a non-intrusive watermark extraction and embedding method for IoT of the present invention;
fig. 6 is a chaotic sequence encryption flowchart of watermark information in a non-intrusive watermark extraction and embedding method for IoT of the present invention;
fig. 7 is a flowchart of the extraction process of time, place, and comment information in the IoT non-intrusive watermark extraction and embedding method of the present invention.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. However, the present invention may be embodied in many different forms and is not limited to the embodiments described below. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The invention provides a non-intrusive watermark extraction and embedding method for IoT, which is characterized by comprising the following steps:
S1, carrying out illumination equalization processing on the original image I(x, y), so that features of the original image become easier to extract; the illumination equalization adopts the following algorithm:
R=w1[log(I(x,y))-log(F1(x,y)*I(x,y))]+w2[log(I(x,y))-log(F2(x,y)*I(x,y))],
wherein w1 and w2 are the weight coefficients of the respective scales, F1 and F2 are Gaussian filters at different scales, * denotes convolution, and x and y are the image pixel coordinates;
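What follows is a minimal Python sketch of this multi-scale equalization, assuming two Gaussian scales; the sigma values, the weights, and the small epsilon guarding the logarithm are illustrative choices rather than values fixed by the invention:

import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_equalize(image, sigmas=(15, 80), weights=(0.5, 0.5), eps=1e-6):
    """R = sum_i wi * [log(I(x,y)) - log(Fi(x,y) * I(x,y))], * = convolution."""
    image = image.astype(np.float64) + eps             # guard against log(0)
    result = np.zeros_like(image)
    for sigma, w in zip(sigmas, weights):
        blurred = gaussian_filter(image, sigma) + eps  # Fi(x,y) * I(x,y)
        result += w * (np.log(image) - np.log(blurred))
    return result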
s2, extracting texture and color features of the image subjected to the illumination equalization processing in the S1; wherein, the texture feature includes: DSP-SIFT characteristics, GMS characteristics, image edge characteristics and image corner characteristics; the color characteristics include: a color pixel distribution probability density function in HSV color space, RGB color space, TSL color space, YCrCb color space;
the DSP-SIFT feature extraction method comprises the following steps:
h(θ|I)[x]=∫p(θ|I,s)[x]e(s)ds
wherein h is the feature extraction function, θ is the angular direction of the feature, taking values in [0, 2π], x denotes a position in the image, p is the SIFT feature extraction function, s is the image scale, and e(s) is an exponential weighting function of scale;
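As a hedged illustration of this pooling, the Python sketch below computes a gradient-orientation histogram, standing in for p(θ|I, s), at several image scales and averages the histograms with an exponential weight e(s); the scale set, the weight decay, and the use of a single global histogram rather than one per position x are simplifying assumptions:

import numpy as np
from scipy.ndimage import zoom

def pooled_orientation_hist(gray, scales=(0.5, 1.0, 2.0), bins=36):
    weights = np.exp(-np.asarray(scales, dtype=np.float64))  # e(s)
    weights /= weights.sum()
    hist = np.zeros(bins)
    for s, w in zip(scales, weights):
        img = zoom(gray.astype(np.float64), s)         # resample domain size s
        gy, gx = np.gradient(img)
        theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # angles in [0, 2*pi)
        mag = np.hypot(gx, gy)                         # gradient magnitude
        h, _ = np.histogram(theta, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hist += w * h
    return hist / (hist.sum() + 1e-12)                 # normalized descriptor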
s3, modeling the features extracted in the step S2 in a high-dimensional vector space, increasing the information of coordinate position distribution of the extracted features in the original image, and constructing geometric shape features; the extracted features are modeled in a high-dimensional vector space, and the method comprises the following steps:
T1, extracting the coordinate positions (xi, yi) of n feature points to form a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling, wherein a line segment is formed between every two feature points to represent the geometric position relationship between them;
and T2, fitting a multivariate Gaussian model to the shape vector S by probability density estimation to obtain the mean vector U, wherein the multivariate Gaussian distribution of S is:
p(S)=1/((2π)^(d/2)|Σ|^(1/2))exp[-0.5(S-U)'Σ^(-1)(S-U)]
wherein p represents the probability density, d the feature dimension, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
T3, calculating the relative distance vector D between the shape vector S and the mean vector U;
and T4, merging the extracted feature values of the n feature points with the relative distance vector to expand the dimension of the feature vector (a sketch of steps T1 to T4 follows this list).
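Below is a brief Python sketch of steps T1 to T4, assuming shape vectors from several sample images of the same kind of article are available; the function names are illustrative:

import numpy as np

def shape_vector(points):
    """T1: (n, 2) array of (xi, yi) -> 1-D shape vector {x1, y1, ..., xn, yn}."""
    return np.asarray(points, dtype=np.float64).reshape(-1)

def fit_shape_gaussian(sample_shapes):
    """T2: fit a multivariate Gaussian to sample shape vectors.

    Returns the mean vector U and the covariance matrix of p(S).
    """
    S = np.asarray(sample_shapes, dtype=np.float64)    # shape (num_samples, 2n)
    return S.mean(axis=0), np.cov(S, rowvar=False)

def extended_feature(feature_values, shape, U):
    """T3 + T4: append the relative distance vector D = S - U to the features."""
    D = shape - U
    return np.concatenate([np.asarray(feature_values, dtype=np.float64), D])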
S4, sending the features modeled in the high-dimensional vector space in the step S3 into a pre-trained encoder for encoding to obtain unique watermark information corresponding to the image; the encoding comprises the following steps:
m1, collecting sample images of similar articles, and extracting feature vectors of the images according to the step S3;
M2, training a deep neural network through a self-encoder structure to obtain the parameters of each node of the data-compressing neural network, wherein each node computes: h(x) = f(w1*x1 + w2*x2 + w3*x3 + ... + b), where w1, w2, ... are the neuron weights, b is the bias, x1, x2, ... are the inputs of the neuron, and f: R → R is a mapping function on the real number domain, called the activation function, which takes the form:
f(z)=1/(1+e^(-z))
using s to denote the class label, x the input feature vector, and y the output of the neural network, the input vector of layer l (l = 1, ..., L) is recorded as v_l and the hidden vector as h_l, whose m-th unit is h_l^m; the total number of hidden nodes in layer l is N_l, and the posterior probability of each hidden unit is:
p(h_l^m = 1 | v_l) = 1/(1 + e^(-z_l^m(v_l))), m = 1, ..., N_l
wherein z_l(v_l) = (W_l)'v_l + a_l, W_l is the weight matrix and a_l the bias vector of layer l; the output layer of the neural network computes the final posterior probability over the class labels:
p(s | v_L) = e^(z_L^s(v_L)) / Σ_s' e^(z_L^s'(v_L))
when the nodes of the neural network are all binary random variables, the energy function is:
E(v, h; θ) = -v'Wh - b'v - a'h
wherein the parameter set is θ = {W, a, b};
and M3, taking the features extracted at the middle-layer network nodes of the encoder as the watermark information of the input image (a sketch of steps M1 to M3 follows this list).
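The following is a minimal Python sketch of steps M1 to M3 with a single hidden layer standing in for the deep self-encoder; the code dimension, learning rate, and epoch count are illustrative assumptions, and the input feature vectors are assumed scaled to [0, 1] so that the sigmoid reconstruction is meaningful:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))        # f(z) = 1 / (1 + e^(-z))

def train_autoencoder(X, code_dim=32, lr=0.1, epochs=500, seed=0):
    """M2: train the encoder so the middle layer compresses the vectors X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, code_dim)); b1 = np.zeros(code_dim)
    W2 = rng.normal(0.0, 0.1, (code_dim, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)           # middle layer, h(x) = f(Wx + b)
        Y = sigmoid(H @ W2 + b2)           # reconstruction of the input
        dY = (Y - X) * Y * (1 - Y)         # gradient of the squared-error cost
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ dY) / n; b2 -= lr * dY.mean(axis=0)
        W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(axis=0)
    return W1, b1

def watermark(x, W1, b1):
    """M3: the middle-layer activations serve as the image's watermark."""
    return sigmoid(x @ W1 + b1)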
And S5, storing the unique watermark information corresponding to the image in the step S4 in a storage space of a network data center, and associating the expanded data information, wherein the data information comprises time, place and additional user comment information. Associating the extended data information comprises the steps of:
N1, performing the mapping calculation from the unique watermark information of the image to the data-center storage address, map: wmark → addr (a sketch of this mapping appears after this list),
wherein wmark represents the watermark data and addr the data address; the mapping adopts a linear function:
map(x)={y1,y2,y3...}
yi=x1*mi1+x2*mi2+x3*mi3+...
the mapping function matrix is:
M = [m11 m12 m13 ...; m21 m22 m23 ...; m31 m32 m33 ...; ...]
n2, carrying out chaotic scrambling encryption on data information associated with the article;
N3, encrypting by XOR-ing the chaotic sequence with the original data:
the chaotic sequence is recorded as {xi} and the original data as a sequence {si}; the real-valued sequence {si} is converted into a binary sequence {bi}, and the output sequence {yi} after chaotic processing is obtained by the XOR operation, for i = 1, 2, ...:
yi = bi ⊕ xi
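A short Python sketch of the linear address mapping of step N1; the mapping matrix entries, the assumption that the watermark has been quantized to integers, and the modular fold into a fixed-size address space are illustrative choices:

import numpy as np

def map_address(wmark, M, table_size=2**20):
    """yi = x1*mi1 + x2*mi2 + ..., folded into the data center's address space."""
    y = np.asarray(M, dtype=np.int64) @ np.asarray(wmark, dtype=np.int64)
    return tuple(int(v) % table_size for v in y)       # assumed modular fold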
As shown in fig. 1, in an IoT mobile network, an image of an article or a scene is captured by a camera device of a terminal device, and texture features of the image, such as SIFT, DSP-SIFT, GMS, are extracted to form data for identifying an image target. And performing data compression of a deep self-coding neural network on the basis of original image data, extracting efficient feature expression and forming the watermark of the image. The image watermark of the article or scene is analyzed through the network data center, and the article information at the corresponding data storage address position is read, wherein the article information generally comprises time information, place information, comment information and the like. The authentication information of the goods is encrypted and decrypted by the chaotic secret key, so that the security of key data is protected, and the extraction of the trusted information of the goods on site in the IoT is realized.
As shown in fig. 2, feature extraction from a digital image can be implemented in various ways, for example ORB, SIFT, SURF, DSP-SIFT, and GMS features; the latter two are among the most effective features in the image recognition field to date and offer more efficient discriminative capability than a deep convolutional network. Traditional image edge and interest point (corner) detection extracts further characteristics beneficial to image identification, and features of different dimensionalities are combined into a high-dimensional feature for recognition. These original features reflect the visual characteristics of the objects or scenes in the image; further feature compression and optimization are then required to achieve a more efficient data expression.
As shown in fig. 3, the image features are augmented with the coordinates of the feature-point positions, which are expanded into a one-dimensional feature vector in which X-axis and Y-axis values alternate; a multivariate Gaussian model is then fitted, and the shapes of the feature coordinates are normalized against the mean over multiple images to form geometric features that reflect the relative positional relationships of the feature points in the image.
As shown in fig. 4, the feature information extracted from the digital image is input to the deep neural network as the original watermark information, and the features are compressed. A self-encoder neural network structure is used, whose input and output layers are symmetric about the middle network nodes; the output layer restores the compressed data. When training the network, the cost function is defined by the difference between the output and the input, and training minimizes this error. The middle-most neuron nodes of the trained network give the best compressed expression of the original data features, which amounts to a feature compression process, and the resulting feature data is taken as the watermark extraction result of the image.
As shown in fig. 5, the network data center performs a regular mapping from the image watermark to the storage address at which the data information is kept; that storage address can hold the registered information of the item, such as the time, the place, and additional user comments. This information is protected by chaotic-sequence encryption.
As shown in fig. 6, after the mobile terminal photographs an object in the IoT network, feature extraction and watermark extraction are performed on the image, the corresponding address is resolved through the network data center, the stored time, place, and other information is retrieved and decrypted with the pre-agreed chaotic key, and the decrypted data is returned to the mobile terminal, where the user sees trusted traceability information.
As shown in fig. 7, a chaotic sequence generated by Tent mapping closely resembles a random sequence and serves as the key: it is XOR-ed with the information to be encrypted to produce the ciphertext. During decryption, the pre-agreed key is applied again to recover the plaintext.
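A minimal Python sketch of this Tent-map encryption (steps N2 and N3 above); the map parameter mu, the 0.5 bit threshold, and the byte-level serialization of {si} are illustrative assumptions:

def tent_sequence(x0, n, mu=1.99):
    """Generate n values of the Tent-map chaotic sequence {xi}, 0 < x0 < 1."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)      # Tent mapping
        seq.append(x)
    return seq

def chaos_xor(data: bytes, x0: float) -> bytes:
    """yi = bi XOR ki, with key bits ki thresholded from the chaotic sequence."""
    xs = tent_sequence(x0, len(data) * 8)
    out = bytearray()
    for i, byte in enumerate(data):
        k = 0
        for j in range(8):
            k = (k << 1) | (1 if xs[i * 8 + j] > 0.5 else 0)
        out.append(byte ^ k)
    return bytes(out)

# XOR is self-inverse, so decryption reuses the agreed key x0:
# chaos_xor(chaos_xor(plaintext, x0), x0) == plaintext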
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail. In the above description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Moreover, the technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (3)

1. A non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps:
s1, carrying out illumination equalization processing on the original image I (x, y);
s2, extracting texture and color features of the image subjected to the illumination equalization processing in the S1;
s3, modeling the features extracted in the step S2 in a high-dimensional vector space, increasing the information of coordinate position distribution of the extracted features in the original image, and constructing geometric shape features; wherein the modeling the features extracted in step S2 in a high-dimensional vector space includes the following steps:
S41, extracting the coordinate positions (xi, yi) of n feature points to form a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling, wherein a line segment is formed between every two feature points to represent the geometric position relationship between them;
S42, fitting a multivariate Gaussian model to the shape vector S by probability density estimation to obtain the mean vector U, wherein the multivariate Gaussian distribution of S is:
p(S)=1/((2π)^(d/2)|Σ|^(1/2))exp[-0.5(S-U)'Σ^(-1)(S-U)]
wherein p represents the probability density, d the feature dimension, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
S43, calculating the relative distance vector D between the shape vector S and the mean vector U;
S44, merging the extracted feature values of the n feature points with the relative distance vector to expand the dimension of the feature vector;
s4, sending the features modeled in the high-dimensional vector space in the step S3 into a pre-trained encoder for encoding to obtain unique watermark information corresponding to the image;
and S5, storing the unique watermark information corresponding to the image in the step S4 in a storage space of a network data center, and associating the expanded data information, wherein the data information comprises time, place and additional user comment information.
2. The method of claim 1, wherein the equalization of illumination in step S1 adopts the following algorithm:
R=w1[log(I(x,y))-log(F1(x,y)*I(x,y))]+w2[log(I(x,y))-log(F2(x,y)*I(x,y))],
wherein w1 and w2 are the weight coefficients of the respective scales, F1 and F2 are Gaussian filters at different scales, * denotes convolution, and x and y are the image pixel coordinates.
3. The IoT watermark extraction and embedding method as claimed in claim 1, wherein the features modeled in the high-dimensional vector space in step S4 are fed into a pre-trained encoder, and the encoding comprises the following steps:
s51, collecting sample images of similar articles, and extracting feature vectors of the images according to the step S3;
S52, training the deep neural network through the self-encoder structure to obtain the parameters of each node of the data-compressing neural network, wherein each node computes: h(x) = f(w1*x1 + w2*x2 + w3*x3 + ... + b), where w1, w2, ... are the neuron weights, b is the bias, x1, x2, ... are the inputs of the neuron, and f: R → R is a mapping function on the real number domain, called the activation function, which takes the form:
f(z)=1/(1+e^(-z))
wherein z represents an argument of the function, which is an input value of the activation function;
using s to denote the class label, x the input feature vector, and y the output of the neural network, the input vector of layer l (l = 1, ..., L) is recorded as v_l and the hidden vector as h_l, whose m-th unit is h_l^m; the total number of hidden nodes in layer l is N_l, and the posterior probability of each hidden unit is:
p(h_l^m = 1 | v_l) = 1/(1 + e^(-z_l^m(v_l))), m = 1, ..., N_l
wherein z_l(v_l) = (W_l)'v_l + a_l, W_l is the weight matrix and a_l the bias vector of layer l; the output layer of the neural network computes the final posterior probability over the class labels:
p(s | v_L) = e^(z_L^s(v_L)) / Σ_s' e^(z_L^s'(v_L))
when the nodes of the neural network are all binary random variables, the energy function is:
E(v, h; θ) = -v'Wh - b'v - a'h
wherein the parameter set is θ = {W, a, b};
and S53, extracting the watermark information of the input image from the network node of the middle layer of the encoder.
CN201810246835.3A 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT Active CN108399593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810246835.3A CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810246835.3A CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Publications (2)

Publication Number Publication Date
CN108399593A CN108399593A (en) 2018-08-14
CN108399593B 2021-12-10

Family

ID=63093095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810246835.3A Active CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Country Status (1)

Country Link
CN (1) CN108399593B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161209A (en) * 2019-11-22 2020-05-15 京东数字科技控股有限公司 Method, device and equipment for detecting certificate watermark and storage medium
CN113709497A (en) * 2021-09-10 2021-11-26 广东博华超高清创新中心有限公司 Method for embedding artificial intelligence characteristic information based on AVS3 coding framework


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8005254B2 (en) * 1996-11-12 2011-08-23 Digimarc Corporation Background watermark processing
CN102208097B (en) * 2011-05-26 2013-06-19 浙江工商大学 Network image copyright real-time distinguishing method
CN104217387A (en) * 2014-01-22 2014-12-17 河南师范大学 Image watermark embedding and extracting method and device based on quantization embedding
CN105389770A (en) * 2015-11-09 2016-03-09 河南师范大学 Method and apparatus for embedding and extracting image watermarking based on BP and RBF neural networks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Adaptive Digital Watermarking Using Neural Network Technique; Der-Chyuan Lou et al.; IEEE 37th Annual 2003 International Carnahan Conference on Security Technology; 2004-05-18; pp. 325-332 *
A blind watermarking algorithm based on neural networks and wavelet transform; Yan Dandan et al.; Modern Computer; 2009-05-25; No. 5; pp. 72-74 *
A digital watermarking algorithm based on texture blocking and color features; Yang Jingjing; China Master's Theses Full-text Database (Information Science and Technology); 2008-05-15; No. 5; p. I138-725 *
A new spatial histogram similarity measure and its application to object tracking; Yao Zhijun et al.; Journal of Electronics & Information Technology; 2013-07; Vol. 35, No. 7; pp. 1644-1649 *
A blind watermarking algorithm in the DWT-DFT and SVD domains; Huang Fuying et al.; Video Engineering; 2015-10-17; Vol. 39, No. 20; pp. 11-14 *
Research on key technologies of IoT-based poultry quality and safety traceability; Gu Ximing; China Master's Theses Full-text Database (Information Science and Technology); 2015-04-15; No. 4; p. I138-898 *

Also Published As

Publication number Publication date
CN108399593A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
Yan et al. Multi-scale image hashing using adaptive local feature extraction for robust tampering detection
Wang et al. A robust blind color image watermarking in quaternion Fourier transform domain
Feng et al. Secure binary image steganography based on minimizing the distortion on the texture
Khelifi et al. Perceptual image hashing based on virtual watermark detection
US20220092336A1 (en) Adversarial image generation method, computer device, and computer-readable storage medium
Verlekar et al. View‐invariant gait recognition system using a gait energy image decomposition method
Deeba et al. Digital watermarking using deep neural network
Swathi et al. Deepfake creation and detection: A survey
Ramadhani et al. A comparative study of deepfake video detection method
CN108399593B (en) Non-intrusive watermark extraction and embedding method for IoT
Agarwal et al. Damad: Database, attack, and model agnostic adversarial perturbation detector
Manjunatha et al. Deep learning-based technique for image tamper detection
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
Jiang et al. PointFace: Point set based feature learning for 3D face recognition
Gui et al. Steganalysis of LSB matching based on local binary patterns
Diwan et al. Unveiling Copy-Move Forgeries: Enhancing Detection With SuperPoint Keypoint Architecture
Sharma et al. Zero distortion technique: an approach to image steganography using strength of indexed based chaotic sequence
Kimura et al. Quality-dependent score-level fusion of face, gait, and the height biometrics
Kich et al. Image steganography by deep CNN auto-encoder networks
Patil et al. Image hashing by SDQ-CSLBP
Anitha et al. Edge detection based salient region detection for accurate image forgery detection
Sun et al. Presentation attacks in palmprint recognition systems
Khan et al. Security issues in face recognition
Singh et al. Robust and efficient hashing framework for industrial surveillance
Lakshminarasimha et al. Deep Learning Base Face Anti Spoofing-Convolutional Restricted Basis Neural Network Technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant