CN108399593A - A non-intrusive watermark extraction and embedding method for IoT - Google Patents

A non-intrusive watermark extraction and embedding method for IoT Download PDF

Info

Publication number
CN108399593A
CN108399593A (application CN201810246835.3A)
Authority
CN
China
Prior art keywords
feature
vector
image
information
watermark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810246835.3A
Other languages
Chinese (zh)
Other versions
CN108399593B (en)
Inventor
李嘉
黄程韦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201810246835.3A priority Critical patent/CN108399593B/en
Publication of CN108399593A publication Critical patent/CN108399593A/en
Application granted granted Critical
Publication of CN108399593B publication Critical patent/CN108399593B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/49 Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Abstract

The invention discloses a non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps: S1, performing illumination equalization on an original image I(x, y); S2, extracting texture and color features from the illumination-equalized image of S1; S3, modeling the features extracted in step S2 in a high-dimensional vector space and adding information on the coordinate distribution of the extracted features in the original image to construct geometric structure features; S4, feeding the features modeled in the high-dimensional vector space in step S3 into a pre-trained encoder for encoding to obtain the unique watermark information corresponding to the image; S5, storing the unique watermark information of step S4 in the storage space of a network data center and associating it with extended data information, wherein the data information includes time, location, and additional user review information.

Description

A non-intrusive watermark extraction and embedding method for IoT
Technical field
The present invention relates to the technical field of IoT feature extraction methods, and more particularly to a non-intrusive watermark extraction and embedding method for IoT.
Background technology
The rapid development of the Internet of Things provides new technical means for tracing the origin of articles. By applying timestamp authentication and electronic signatures at each link of production, processing, and transport, the historical information of an article can be recorded, forming a trusted IoT time data stream, so that consumers can trace the source of information by scanning an attached RFID tag or QR code, or by photographing and recognizing the object itself.
The rapid development of artificial intelligence has made recognition of the visual features of objects increasingly efficient. Deep-learning-based image recognition can already match image features quickly and accurately, approaching human visual cognition, and has been widely applied in fields such as IoT, financial authentication, and biometric recognition.
Five existing patents are related to the present invention [1-5]. Patent [1] (application No. CN201710496927.2) provides a method for monitoring agricultural product information during production and uploading it to a network, but its traceability system ignores the authentication of trusted time information. Patent [2] (application No. CN201611042032.3) proposes a system that records attendance information by check-in; although it can authenticate time information, it lacks adhesion to articles: the time information cannot be associated with arbitrary items, and the attendance device cannot be moved conveniently, so it cannot supervise the circulation of goods in IoT networks. Patent [3] (application No. CN201710120913.0) provides a secure timestamp authentication method based on key encryption that can associate an article's label with watermark information, but its association method is traditional label printing rather than image watermarking and image recognition, and is therefore an intrusive, contact-based time authentication method. Patent [4] (application No. CN201611054170.3) proposes a hardware-based way of embedding time information into messages in the field of network communication; it is a time authentication method for network data communication and is not suitable for supervising the production and circulation of articles in IoT. Patent [5] (application No. CN201710253044.9) proposes a watermark embedding method based on image color features, but the pixel information it uses differs greatly from the feature construction method of the present invention, and its watermark embedding is an intrusive approach suitable only for the copyright protection of digital media images; it cannot be used to identify physical objects.
Summary of the invention
Based on this, it is necessary to provide a non-intrusive watermark extraction and embedding method for IoT to solve the above technical problems.
A technical solution of the present invention is as follows:
A non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps:
S1, performing illumination equalization on an original image I(x, y);
S2, extracting texture and color features from the illumination-equalized image of S1;
S3, modeling the features extracted in step S2 in a high-dimensional vector space and adding information on the coordinate distribution of the extracted features in the original image to construct geometric structure features;
S4, feeding the features modeled in the high-dimensional vector space in step S3 into a pre-trained encoder for encoding to obtain the unique watermark information corresponding to the image;
S5, storing the unique watermark information of step S4 in the storage space of a network data center and associating it with extended data information, wherein the data information includes time, location, and additional user review information.
In one embodiment, the illumination equalization in step S1 uses the following algorithm:
R = w1[log I(x, y) - log(F1(x, y) * I(x, y))] + w2[log I(x, y) - log(F2(x, y) * I(x, y))],
where w1 and w2 are the weight coefficients of the two scales, F1 and F2 are Gaussian filters at the corresponding scales, * denotes convolution, and x and y are the image pixel coordinates.
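For illustration only, the sketch below implements this two-scale illumination equalization in Python with OpenCV and NumPy; the Gaussian scales and the weights w1, w2 are assumed values, since the patent does not specify them, and the function name is hypothetical.

import cv2
import numpy as np

def illumination_equalize(image, sigmas=(15, 80), weights=(0.5, 0.5)):
    # R = sum_i w_i [log I - log(F_i * I)], with F_i Gaussian filters
    img = image.astype(np.float64) + 1.0                 # avoid log(0)
    result = np.zeros_like(img)
    for sigma, w in zip(sigmas, weights):
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)   # F_i(x, y) * I(x, y)
        result += w * (np.log(img) - np.log(blurred))    # w_i [log I - log(F_i * I)]
    # rescale to 0-255 so the result can feed the feature extraction of step S2
    result = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX)
    return result.astype(np.uint8)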
In one embodiment, the texture features include DSP-SIFT features, GMS features, image edge features, and image corner features; the color features include color pixel distribution probability density functions in the HSV, RGB, TSL, and YCrCb color spaces;
wherein the DSP-SIFT features are extracted as follows:
h(θ | I)[x] = ∫ p(θ | I, s)[x] e(s) ds
where h is the feature extraction function, θ is the angular direction of the feature taking values between 0 and 2π, x denotes a position in the image, p is the SIFT feature extraction function, s is the image scale, and e is an exponential function.
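As a rough illustration of this domain-size pooling, the Python sketch below computes OpenCV SIFT descriptors at several domain sizes s around the same keypoints and pools them with an exponential weight e(s); the scale set, the decay constant, and the function name are illustrative assumptions rather than values given in the patent.

import cv2
import numpy as np

def dsp_sift(gray, scales=(0.5, 0.75, 1.0, 1.5, 2.0), decay=1.0):
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)        # detect once, then pool over domain sizes
    pooled, weight_sum = None, 0.0
    for s in scales:
        # copy the keypoints with the patch (domain) size rescaled by s
        kps = [cv2.KeyPoint(kp.pt[0], kp.pt[1], kp.size * s, kp.angle) for kp in keypoints]
        _, desc = sift.compute(gray, kps)      # p(theta | I, s)[x]
        w = np.exp(-decay * abs(s - 1.0))      # e(s): exponential weight
        pooled = w * desc if pooled is None else pooled + w * desc
        weight_sum += w
    pooled /= weight_sum
    # re-normalize each pooled descriptor, as plain SIFT does
    pooled /= (np.linalg.norm(pooled, axis=1, keepdims=True) + 1e-12)
    return keypoints, pooled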
In one embodiment, modeling the features extracted in step S2 in a high-dimensional vector space comprises the following steps:
S41, extracting the coordinate positions (xi, yi) of n feature points and forming a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling; every two feature points define a line segment representing the geometric positional relationship between them;
S42, performing probability density estimation of S with a multivariate Gaussian distribution model to obtain the mean vector U; the multivariate Gaussian distribution of the shape vector S is:
p(S) = 1 / ((2π)^(d/2) |Σ|^(1/2)) exp[-0.5 (S - U)' Σ^(-1) (S - U)]
where p denotes the probability density, d the feature dimensionality, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
S43, computing the relative distance vector D between the shape vector S and the mean vector U;
S44, merging the feature values of the n extracted feature points with the relative distance vector to extend the feature vector dimensionality.
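A minimal NumPy sketch of steps S41 to S44, assuming a stack of training shape vectors collected from sample images; the function names and the pseudo-inverse used for the covariance are illustrative choices, not part of the patent text.

import numpy as np

def shape_vector(points):
    # points: (n, 2) array of feature-point coordinates (xi, yi)
    return np.asarray(points, dtype=np.float64).reshape(-1)   # [x1, y1, x2, y2, ...]

def fit_shape_gaussian(training_shapes):
    # training_shapes: (m, 2n) stack of shape vectors from sample images
    S = np.asarray(training_shapes, dtype=np.float64)
    U = S.mean(axis=0)                         # mean vector U
    Sigma = np.cov(S, rowvar=False)            # covariance matrix Sigma
    return U, Sigma

def shape_log_density(S, U, Sigma):
    # log p(S) of the multivariate Gaussian in step S42
    d = S.size
    diff = S - U
    inv = np.linalg.pinv(Sigma)                # pseudo-inverse for robustness
    _, logdet = np.linalg.slogdet(Sigma + 1e-9 * np.eye(d))
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ inv @ diff)

def extended_feature(points, point_features, U):
    # S43/S44: relative distance vector D = S - U merged with the feature values
    D = shape_vector(points) - U
    return np.concatenate([np.asarray(point_features).reshape(-1), D])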
In one embodiment, in step S4, feeding the features modeled in the high-dimensional vector space into the pre-trained encoder for encoding comprises the following steps:
S51, collecting sample images of the article and extracting their image feature vectors according to step S3;
S52, training a deep neural network with an autoencoder structure to obtain the parameters of each network node for data compression; the neural network is computed as h(x) = w1*x1 + w2*x2 + w3*x3 + ... + b, where w1, w2, ... are the neuron weights, b is the bias, and x1, x2, ... are the inputs of each neuron node;
defining f: R → R as a mapping function over the real numbers, called the activation function, the network uses the following activation function:
f(z) = 1 / (1 + e^(-z))
Let s denote the category label, x the input feature vector, and y the network output; the input vector of layer l is denoted v^l, where l indexes the network layers, the hidden-layer vector is denoted h^l, and m indexes the hidden units, whose total number is N_l. The posterior probability of a hidden node is computed as p(h^l_m = 1 | v^l) = f(z^l_m(v^l)),
where z^l(v^l) = (W^l)'v^l + a^l, W is the weight vector, and a is the bias vector; the output layer of the neural network performs the final computation on these posterior probabilities.
When the nodes of the neural network are all binary random variables, the energy function is
E(v, h; θ) = -v'Wh - b'v - a'h
where the parameters are θ = {W, a, b};
S53, extracting the output of the network nodes in the middle layer of the autoencoder as the watermark information of the input image.
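The following sketch illustrates steps S51 to S53 with a sigmoid autoencoder, assuming PyTorch as the training framework; the class and function names, layer sizes, learning rate, and epoch count are illustrative choices, not values from the patent.

import torch
import torch.nn as nn

class WatermarkAutoencoder(nn.Module):
    def __init__(self, in_dim, code_dim=64):
        super().__init__()
        # sigmoid activations, matching f(z) = 1 / (1 + e^(-z)) in the text
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Sigmoid(),
            nn.Linear(256, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.Sigmoid(),
            nn.Linear(256, in_dim))

    def forward(self, x):
        code = self.encoder(x)          # middle-layer output = candidate watermark
        return self.decoder(code), code

def train_and_extract(features, epochs=200, lr=1e-3):
    # features: (num_samples, in_dim) tensor of feature vectors from step S3
    model = WatermarkAutoencoder(features.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()              # cost = difference between output and input
    for _ in range(epochs):
        recon, _ = model(features)
        loss = loss_fn(recon, features)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, watermark = model(features)
    return watermark                    # S53: bottleneck output as watermark

At query time the decoder is discarded and only the encoder (middle-layer) output is kept, which is what step S53 describes as the image watermark.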
In one embodiment, associating the extended data information in step S5 comprises the following steps:
S61, computing the mapping from the unique watermark information of the image to a data-center storage address, map: wmark → addr,
where wmark denotes the watermark data and addr the data address; the mapping uses the linear function
Map(x) = {y1, y2, y3, ...}
yi = x1*mi1 + x2*mi2 + x3*mi3 + ...
where the mapping function matrix is M = [mij];
S62, applying chaotic scrambling encryption to the data information associated with the article;
S63, encrypting by an XOR operation between a chaotic sequence and the original data:
let the chaotic sequence be {xi} and the original data be the sequence {si}; the real-valued sequence {si} is converted into a binary sequence {bi}, and the output sequence {yi}, i = 1, 2, ..., N×N, is obtained by XOR-ing it with the processed chaotic sequence.
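A small NumPy sketch of the linear address mapping of step S61, assuming a random integer mapping matrix M = [mij] and a modulo fold of the mapped vector into a fixed address space; both assumptions, and the function names, are illustrative, since the patent does not specify how the matrix is chosen or how the mapped vector becomes a single address.

import numpy as np

def make_mapping_matrix(out_dim, in_dim, seed=0):
    # a fixed, reproducible matrix M = [m_ij] shared by the data center
    rng = np.random.default_rng(seed)
    return rng.integers(1, 1000, size=(out_dim, in_dim))

def watermark_to_address(wmark, M, address_space=2**32):
    y = M @ np.asarray(wmark, dtype=np.float64)   # y_i = sum_j x_j * m_ij
    # fold the mapped vector into a single storage address (illustrative)
    return int(np.abs(y).sum()) % address_space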
Beneficial effects of the present invention:
The watermark extraction and embedding method proposed by the present invention is non-intrusive: it neither destroys nor modifies the original object, and it associates watermark information with real-world scenes or articles using only the image data features of the original. Being non-intrusive and non-destructive, it is extremely easy to deploy and is naturally compatible with a wide range of image-capture devices and network transmission facilities. Compared with common QR-code scanning, it supports a larger data storage capacity and more flexible shooting distances and angles; moreover, the image-feature matching and discrimination method proposed by the invention achieves significantly higher accuracy and reliability than similar techniques. Carrying the time information of articles or nodes in IoT can improve the safety of commodities such as food and medicinal materials, since key information about their production can be obtained by tracing the time information. Review information about commodities in the IoT network can be stored in the network storage space pointed to by the watermark, and the watermark extraction technique allows the reviews of different users to be shared, which likewise helps supervise the safety and circulation of food or medicinal materials.
Description of the drawings
Fig. 1 is a flowchart of the non-intrusive watermark extraction and embedding method for IoT according to the present invention;
Fig. 2 is a flowchart of texture feature extraction in the method;
Fig. 3 is a flowchart of geometric feature construction in the method;
Fig. 4 is a training flowchart of the deep neural network for watermark features in the method;
Fig. 5 is a flowchart of watermark information extension in the method;
Fig. 6 is a flowchart of chaotic-sequence encryption of watermark information in the method;
Fig. 7 is a flowchart of the extraction process of time, location, and review information in the method.
Detailed description of the embodiments
To facilitate understanding, the present invention is described more fully below with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described below; rather, these embodiments are provided so that the disclosure will be thorough and complete.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in this description are for the purpose of describing specific embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The present invention provides a non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps:
S1, performing illumination equalization on the original image I(x, y) so that the image values are easier to extract; the illumination equalization uses the following algorithm:
R = w1[log I(x, y) - log(F1(x, y) * I(x, y))] + w2[log I(x, y) - log(F2(x, y) * I(x, y))],
where w1 and w2 are the weight coefficients of the two scales, F1 and F2 are Gaussian filters at the corresponding scales, * denotes convolution, and x and y are the image pixel coordinates;
S2, extracting texture and color features from the illumination-equalized image of S1, where the texture features include DSP-SIFT features, GMS features, image edge features, and image corner features, and the color features include color pixel distribution probability density functions in the HSV, RGB, TSL, and YCrCb color spaces (illustrated in the sketch following the DSP-SIFT description below);
wherein the DSP-SIFT features are extracted as follows:
h(θ | I)[x] = ∫ p(θ | I, s)[x] e(s) ds
where h is the feature extraction function, θ is the angular direction of the feature taking values between 0 and 2π, x denotes a position in the image, p is the SIFT feature extraction function, s is the image scale, and e is an exponential function;
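To illustrate the color features listed for step S2, the sketch below computes normalized color-pixel histograms (empirical probability density functions) in several color spaces with OpenCV; the TSL space is omitted because OpenCV has no built-in conversion for it, and the bin count and function name are assumed values.

import cv2
import numpy as np

def color_pdf(image_bgr, bins=32):
    # normalized 3-D color histograms as empirical pixel-distribution PDFs
    pdfs = {}
    for name, code in (("HSV", cv2.COLOR_BGR2HSV),
                       ("RGB", cv2.COLOR_BGR2RGB),
                       ("YCrCb", cv2.COLOR_BGR2YCrCb)):
        converted = cv2.cvtColor(image_bgr, code)
        hist = cv2.calcHist([converted], [0, 1, 2], None,
                            [bins] * 3, [0, 256] * 3)
        pdfs[name] = hist / hist.sum()    # normalize so the bins sum to 1
    return pdfs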
S3, modeling the features extracted in step S2 in a high-dimensional vector space and adding information on the coordinate distribution of the extracted features in the original image to construct geometric structure features; modeling the extracted features in the high-dimensional vector space comprises the following steps:
T1, extracting the coordinate positions (xi, yi) of n feature points and forming a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling; every two feature points define a line segment representing the geometric positional relationship between them;
T2, estimating the probability distribution of the shape vector S with a multivariate Gaussian distribution model to obtain the mean vector U; the multivariate Gaussian distribution of S is:
p(S) = 1 / ((2π)^(d/2) |Σ|^(1/2)) exp[-0.5 (S - U)' Σ^(-1) (S - U)]
where p denotes the probability density, d the feature dimensionality, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
T3, computing the relative distance vector D between the shape vector S and the mean vector U;
T4, merging the feature values of the n extracted feature points with the relative distance vector to extend the feature vector dimensionality.
S4, feeding the features modeled in the high-dimensional vector space in step S3 into a pre-trained encoder for encoding to obtain the unique watermark information corresponding to the image; the encoding comprises the following steps:
M1, collecting sample images of the article and extracting their image feature vectors according to step S3;
M2, training a deep neural network with an autoencoder structure to obtain the parameters of each network node for data compression; the neural network is computed as h(x) = w1*x1 + w2*x2 + w3*x3 + ... + b, where w1, w2, ... are the neuron weights, b is the bias, and x1, x2, ... are the inputs of each neuron node;
defining f: R → R as a mapping function over the real numbers, called the activation function, the network uses the following activation function:
f(z) = 1 / (1 + e^(-z))
Let s denote the category label, x the input feature vector, and y the network output; the input vector of layer l is denoted v^l, where l indexes the network layers, the hidden-layer vector is denoted h^l, and m indexes the hidden units, whose total number is N_l. The posterior probability of a hidden node is computed as p(h^l_m = 1 | v^l) = f(z^l_m(v^l)),
where z^l(v^l) = (W^l)'v^l + a^l, W is the weight vector, and a is the bias vector; the output layer of the neural network performs the final computation on these posterior probabilities.
When the nodes of the neural network are all binary random variables, the energy function is
E(v, h; θ) = -v'Wh - b'v - a'h
where the parameters are θ = {W, a, b};
M3, extracting the output of the network nodes in the middle layer of the autoencoder as the watermark information of the input image.
S5, storing the unique watermark information of the image described in step S4 in the storage space of a network data center and associating it with extended data information, wherein the data information includes time, location, and additional user review information; associating the extended data information comprises the following steps:
N1, computing the mapping from the unique watermark information of the image to a data-center storage address, map: wmark → addr,
where wmark denotes the watermark data and addr the data address; the mapping uses the linear function
Map(x) = {y1, y2, y3, ...}
yi = x1*mi1 + x2*mi2 + x3*mi3 + ...
where the mapping function matrix is M = [mij];
N2, applying chaotic scrambling encryption to the data information associated with the article;
N3, encrypting by an XOR operation between a chaotic sequence and the original data:
let the chaotic sequence be {xi} and the original data be the sequence {si}; the real-valued sequence {si} is converted into a binary sequence {bi}, and the output sequence {yi}, i = 1, 2, ..., N×N, is obtained by XOR-ing (exclusive or) it with the processed chaotic sequence.
As shown in Fig. 1, in an IoT mobile network the camera of a terminal device captures an image of an article or scene, and texture features such as SIFT, DSP-SIFT, and GMS features are extracted to form the data for identifying the imaged object. A deep autoencoder neural network then compresses the raw image data and extracts an efficient feature representation, which forms the watermark of the image. The network data center parses the image watermark of the article or scene and reads the article information stored at the corresponding data address, which may generally include time information, location information, and review information. The authentication information of the article is encrypted and decrypted with a chaotic key to protect the security of critical data, thereby realizing reliable information extraction for physical articles in IoT.
As shown in Fig. 2, there are many ways to extract features from a digital image, for example ORB, SIFT, SURF, DSP-SIFT, and GMS features; the latter two are so far the most effective features in the field of image recognition and have stronger discriminative power than deep convolutional networks. Features useful for image recognition can also be extracted by traditional edge detection and interest-point (corner) detection. These features of different dimensions are fused into a high-dimensional discriminative feature that reflects the visual characteristics of the article or scene in the image; further feature compression and optimization are then needed to obtain a more efficient data representation.
As shown in Fig. 3, the image features are processed by adding the coordinates of the feature-point positions; these coordinates are expanded into a one-dimensional feature vector in which the X and Y coordinates alternate, so that a multivariate Gaussian model can be built. Based on the mean over many images, the shape of the feature coordinates is normalized to form geometric structure features, which reflect the relative positional relationships of the feature points in the image.
As shown in Fig. 4, the feature information extracted from the digital image serves as the original watermark information and is fed into a deep neural network for feature compression. An autoencoder network structure is used, in which the input and output layers are symmetric about the middle network nodes and the output layer reconstructs the compressed data. During training, the cost function is defined by the difference between the output and the input, and training minimizes this error. In the trained network, the middle neuron nodes give the optimal representation of the original data features, which amounts to a feature compression process, and the resulting feature data is the watermark extraction result of the image.
As shown in Fig. 5, the network data center applies a rule-based mapping from the image watermark to obtain the storage address of the data. The information of the registered article, such as time, location, and additional user reviews, can be saved at the corresponding storage address of the network data center, and this information is protected by chaotic-sequence encryption.
As shown in Fig. 6, after a mobile terminal in the IoT network photographs an object, feature extraction and watermark extraction are performed, the address is resolved at the network data center, and the stored information such as time and location is obtained and decrypted with a previously agreed chaotic key; the decrypted data is returned to the mobile terminal, and the user sees trustworthy traceability information.
As shown in Fig. 7, a Tent map is used to generate a chaotic sequence, which is highly similar to a random sequence; using the chaotic sequence as a key, an XOR operation with the information to be encrypted yields the ciphertext. During decryption, the previously agreed key is used to decrypt and recover the plaintext.
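A minimal sketch of the Tent-map keystream encryption described for Figs. 6 and 7: a chaotic sequence is generated by the Tent map, quantized to bytes, and XOR-ed with the data, so applying the same key a second time decrypts. The map parameter, the initial value (the shared chaotic key), and the function names are illustrative assumptions.

import numpy as np

def tent_keystream(length, x0=0.37, mu=1.9999):
    # generate `length` keystream bytes from the Tent map x -> mu*x (x<0.5) or mu*(1-x)
    x = x0
    key = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # Tent map iteration
        key[i] = int(x * 256) % 256                 # quantize the state to a byte
    return key

def chaos_xor(data: bytes, x0=0.37, mu=1.9999) -> bytes:
    # encrypt or decrypt (XOR is its own inverse) with the Tent-map keystream
    key = tent_keystream(len(data), x0, mu)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ key)

# usage: ciphertext = chaos_xor(b"example time/location record")
#        plaintext  = chaos_xor(ciphertext)   # same key parameters recover the data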
In order to make the above objects, features, and advantages of the present invention clearer and easier to understand, specific embodiments of the invention have been described in detail above with reference to the accompanying drawings. Many specific details are set forth in the above description to provide a thorough understanding of the invention; the invention can, however, be implemented in many ways other than those described, and those skilled in the art can make similar improvements without departing from its spirit, so the invention is not limited by the specific embodiments disclosed above. Moreover, the technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations have been described, but as long as such combinations are not contradictory, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these fall within the scope of protection of the invention. The protection scope of this patent is therefore defined by the appended claims.

Claims (6)

1. A non-intrusive watermark extraction and embedding method for IoT, characterized in that it comprises the following steps:
S1, performing illumination equalization on an original image I(x, y);
S2, extracting texture and color features from the illumination-equalized image of S1;
S3, modeling the features extracted in step S2 in a high-dimensional vector space and adding information on the coordinate distribution of the extracted features in the original image to construct geometric structure features;
S4, feeding the features modeled in the high-dimensional vector space in step S3 into a pre-trained encoder for encoding to obtain the unique watermark information corresponding to the image;
S5, storing the unique watermark information of step S4 in the storage space of a network data center and associating it with extended data information, wherein the data information includes time, location, and additional user review information.
2. The non-intrusive watermark extraction and embedding method for IoT according to claim 1, characterized in that the illumination equalization in step S1 uses the following algorithm:
R = w1[log I(x, y) - log(F1(x, y) * I(x, y))] + w2[log I(x, y) - log(F2(x, y) * I(x, y))],
where w1 and w2 are the weight coefficients of the two scales, F1 and F2 are Gaussian filters at the corresponding scales, * denotes convolution, and x and y are the image pixel coordinates.
3. The non-intrusive watermark extraction and embedding method for IoT according to claim 1, characterized in that the texture features include DSP-SIFT features, GMS features, image edge features, and image corner features, and the color features include color pixel distribution probability density functions in the HSV, RGB, TSL, and YCrCb color spaces;
wherein the DSP-SIFT features are extracted as follows:
h(θ | I)[x] = ∫ p(θ | I, s)[x] e(s) ds
where h is the feature extraction function, θ is the angular direction of the feature taking values between 0 and 2π, x denotes a position in the image, p is the SIFT feature extraction function, s is the image scale, and e is an exponential function.
4. The non-intrusive watermark extraction and embedding method for IoT according to claim 1, characterized in that modeling the features extracted in step S2 in a high-dimensional vector space comprises the following steps:
S41, extracting the coordinate positions (xi, yi) of n feature points and forming a one-dimensional shape vector S = {x1, y1, x2, y2, ..., xn, yn} for statistical modeling; every two feature points define a line segment representing the geometric positional relationship between them;
S42, performing probability density estimation of S with a multivariate Gaussian distribution model to obtain the mean vector U, the multivariate Gaussian distribution of the shape vector S being
p(S) = 1 / ((2π)^(d/2) |Σ|^(1/2)) exp[-0.5 (S - U)' Σ^(-1) (S - U)]
where p denotes the probability density, d the feature dimensionality, Σ the covariance matrix, ^ the power operator, and ' the matrix transpose operator;
S43, computing the relative distance vector D between the shape vector S and the mean vector U;
S44, merging the feature values of the n extracted feature points with the relative distance vector to extend the feature vector dimensionality.
5. The non-intrusive watermark extraction and embedding method for IoT according to claim 1, characterized in that, in step S4, feeding the features modeled in the high-dimensional vector space into the pre-trained encoder for encoding comprises the following steps:
S51, collecting sample images of the article and extracting their image feature vectors according to step S3;
S52, training a deep neural network with an autoencoder structure to obtain the parameters of each network node for data compression, the neural network being computed as h(x) = w1*x1 + w2*x2 + w3*x3 + ... + b, where w1, w2, ... are the neuron weights, b is the bias, and x1, x2, ... are the inputs of each neuron node;
defining f: R → R as a mapping function over the real numbers, called the activation function, the network uses the activation function
f(z) = 1 / (1 + e^(-z))
letting s denote the category label, x the input feature vector, and y the network output, the input vector of layer l being denoted v^l, where l indexes the network layers, the hidden-layer vector being denoted h^l, and m indexing the hidden units, whose total number is N_l, the posterior probability of a hidden node is computed as p(h^l_m = 1 | v^l) = f(z^l_m(v^l)),
where z^l(v^l) = (W^l)'v^l + a^l, W is the weight vector, and a is the bias vector, and the output layer of the neural network performs the final computation on these posterior probabilities;
when the nodes of the neural network are all binary random variables, the energy function is
E(v, h; θ) = -v'Wh - b'v - a'h
with parameters θ = {W, a, b};
S53, extracting the output of the network nodes in the middle layer of the autoencoder as the watermark information of the input image.
6. The non-intrusive watermark extraction and embedding method for IoT according to claim 1, characterized in that associating the extended data information in step S5 comprises the following steps:
S61, computing the mapping from the unique watermark information of the image to a data-center storage address, map: wmark → addr,
where wmark denotes the watermark data and addr the data address, the mapping using the linear function
Map(x) = {y1, y2, y3, ...}
yi = x1*mi1 + x2*mi2 + x3*mi3 + ...
where the mapping function matrix is M = [mij];
S62, applying chaotic scrambling encryption to the data information associated with the article;
S63, encrypting by an XOR operation between a chaotic sequence and the original data:
letting the chaotic sequence be {xi} and the original data be the sequence {si}, the real-valued sequence {si} is converted into a binary sequence {bi}, and the output sequence {yi}, i = 1, 2, ..., N×N, is obtained by XOR-ing (exclusive or) it with the processed chaotic sequence.
CN201810246835.3A 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT Active CN108399593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810246835.3A CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810246835.3A CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Publications (2)

Publication Number Publication Date
CN108399593A true CN108399593A (en) 2018-08-14
CN108399593B CN108399593B (en) 2021-12-10

Family

ID=63093095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810246835.3A Active CN108399593B (en) 2018-03-23 2018-03-23 Non-intrusive watermark extraction and embedding method for IoT

Country Status (1)

Country Link
CN (1) CN108399593B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8005254B2 (en) * 1996-11-12 2011-08-23 Digimarc Corporation Background watermark processing
CN102208097B (en) * 2011-05-26 2013-06-19 浙江工商大学 Network image copyright real-time distinguishing method
CN104217387A (en) * 2014-01-22 2014-12-17 河南师范大学 Image watermark embedding and extracting method and device based on quantization embedding
CN105389770A (en) * 2015-11-09 2016-03-09 河南师范大学 Method and apparatus for embedding and extracting image watermarking based on BP and RBF neural networks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DER-CHYUAN LOU et al.: "Adaptive Digital Watermarking Using Neural Network Technique", IEEE 37th Annual 2003 International Carnahan Conference on Security Technology *
姚志均 (Yao Zhijun) et al.: "A New Spatial Histogram Similarity Measure and Its Application in Object Tracking", Journal of Electronics & Information Technology *
杨静静 (Yang Jingjing): "A Digital Watermarking Algorithm Based on Texture Blocking and Color Features", China Master's Theses Full-text Database (Information Science and Technology) *
燕丹丹 (Yan Dandan) et al.: "A Blind Watermarking Algorithm Based on Neural Network and Wavelet Transform", Modern Computer *
顾细明 (Gu Ximing): "Research on Key Technologies of Poultry Quality and Safety Traceability Based on the Internet of Things", China Master's Theses Full-text Database (Information Science and Technology) *
黄福莹 (Huang Fuying) et al.: "A Blind Watermarking Algorithm Based on DWT-DFT and SVD Domains", Video Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161209A (en) * 2019-11-22 2020-05-15 京东数字科技控股有限公司 Method, device and equipment for detecting certificate watermark and storage medium
CN113709497A (en) * 2021-09-10 2021-11-26 广东博华超高清创新中心有限公司 Method for embedding artificial intelligence characteristic information based on AVS3 coding framework

Also Published As

Publication number Publication date
CN108399593B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
Ferreira et al. A review of digital image forensics
Ijjina et al. Human action recognition in RGB-D videos using motion sequence information and deep learning
Agrawal et al. A secured tag for implementation of traceability in textile and clothing supply chain
JP5822411B2 (en) Image information code conversion apparatus, image information code conversion method, image related information providing system using image code, image information code conversion program, and recording medium recording the program
CN116664961B (en) Intelligent identification method and system for anti-counterfeit label based on signal code
Zenati et al. SSDIS-BEM: A new signature steganography document image system based on beta elliptic modeling
CN108399593A (en) It is a kind of for IoT without intrusion watermark extracting and embedding grammar
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
Chi et al. Consistency penalized graph matching for image-based identification of dendritic patterns
Sarmah et al. Optimization models in steganography using metaheuristics
Meng et al. High-capacity steganography using object addition-based cover enhancement for secure communication in networks
Sun et al. Minimum noticeable difference-based adversarial privacy preserving image generation
CN113034332B (en) Invisible watermark image and back door attack model construction and classification method and system
CN104376523B (en) A kind of image code false-proof system composing method towards Google glass
CN104376280A (en) Image code generating method for Google project glass
CN104376314B (en) A kind of constructive method towards Google glass Internet of Things web station system
Yongqiang et al. A DWT domain image watermarking scheme using genetic algorithm and synergetic neural network
Inoue et al. Amplitude based keyless optical encryption system using deep neural network
Bhattacharyya et al. DCT difference modulation (DCTDM) image steganography
El Bakrawy et al. A rough k-means fragile watermarking approach for image authentication
Li et al. Measurement-driven security analysis of imperceptible impersonation attacks
ElSayed et al. Process Control-Based Embedding and Computer Vision-Based Retrieval of 2D Codes in Fused Deposition Modeling
El Bakrawy et al. Intelligent Machine Learning in Image Authentication
Khandelwal et al. Reversible Image Steganography Using Deep Learning Method: A Review
Ibrahim A Method to Encode the Fingerprint Minutiae Using QR Code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant