CN112040241B - Video image transparent watermark embedding and extracting method based on deep learning - Google Patents
- Publication number
- CN112040241B CN201910480250.2A
- Authority
- CN
- China
- Prior art keywords
- watermark
- coding
- graph
- information
- encoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/467—Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
Abstract
The invention provides a video image transparent watermark embedding and extracting method based on deep learning, which comprises the following steps: acquiring video images of different scenes and different time periods; adding a transparent watermark to the video images and generating the corresponding labels; producing a video image transparent watermark data set using data enhancement techniques; training a deep network YOLO v3 model and saving the training parameters; identifying the position, category information and category confidence of the watermark coding patterns to be extracted using the trained deep network YOLO v3 model; and integrating the watermark coding patterns to generate the complete watermark information. The invention increases the redundant information of the video watermark, so that the complete watermark can still be extracted from videos maliciously captured and transmitted under different conditions, such as partial capture or varying illumination; the embedded semi-transparent watermark is highly resistant to compression, so the watermark information survives harsh transmission channels such as re-capture with a mobile phone; and extracting the watermark information with a deep learning model is fast and more robust than traditional algorithms.
Description
Technical Field
The invention relates to the technical field of video image transparent watermark embedding and extraction, in particular to a video image transparent watermark embedding and extraction method based on deep learning.
Background
With the rapid development of computer networks and embedded devices and the growth of public safety awareness, monitoring devices are now visible everywhere, and surveillance videos are easy to store, copy and spread, playing a great role in scene protection and event reconstruction. However, the malicious spread of surveillance videos and video images is receiving more and more attention.
Adding extractable watermarks to video images has become an important means of tracing the source of video image leaks. Video watermarking algorithms generally fall into three major categories: the first embeds the watermark in DCT coefficients, the second in motion vectors, and the third in entropy-coded codewords. These approaches have the following disadvantages: their compression resistance is weak, so after a watermarked video is re-encoded the watermark is damaged, which hinders subsequent verification; and the watermark carries insufficient redundant information, so after the watermarked video is transmitted by, for example, mobile phone capture, watermark information is lost and cannot be extracted.
Disclosure of Invention
In order to solve the problems, the invention provides a video image transparent watermark embedding and extracting method based on deep learning.
To achieve this purpose, the invention adopts the following technical scheme:
a video image transparent watermark embedding and extracting method based on deep learning comprises the following steps:
step one: acquiring video images of different scenes and different time periods;
step two: adding a transparent watermark to the video image and generating a corresponding label;
step three: using a data enhancement technology to manufacture a video image transparent watermark data set;
step four: training a deep network YOLO v3 model, and storing training parameters;
step five: identifying the position, category information and category confidence of the watermark coding patterns to be extracted by using the trained deep network YOLO v3 model;
step six: integrating the watermark coding patterns to generate the complete watermark information.
Preferably, in the second step, the specific method for adding the transparent watermark to the video image and generating the corresponding label includes:
step 11, randomly generating watermark information, wherein the watermark information is a six-digit integer;
step 12, encoding the watermark information, wherein each digit corresponds to one watermark coding pattern and every watermark information encoding begins with the same shared initial watermark coding pattern;
step 13, embedding the watermark information code into the video image, wherein the watermark coding patterns are tiled horizontally at equal intervals and the vertical spacing is generated randomly within the width range of the video image;
and step 14, generating a label file corresponding to the video image, wherein each line contains the information of one watermark coding pattern: category, starting coordinates, and width and height.
Further, in step 12, the watermark coding patterns have equal width and equal height, different watermark coding patterns are clearly distinguishable from one another, and there are 11 categories in total.
Further, in step 13, the algorithm for embedding watermark information into the video image is as follows:
i(x,y)=α*i(x,y)+(1-α)*(255-i(x,y))
where i(x, y) is the pixel value at coordinates (x, y) under the watermark coding pattern mask, and α is a transparency coefficient by which the degree of transparency can be adjusted.
Furthermore, the horizontal tiling interval of the watermark coding patterns is half the width of a watermark coding pattern.
Preferably, in the third step, the specific method for making the video image transparent watermark data set is as follows:
step 21, forming a data set from the video images to which transparent watermarks were added in step two;
step 22, sequentially applying horizontal flipping, random cropping, translation, affine transformation, color transformation, illumination transformation and rotation to each image in the data set, saving each resulting video image into the data set, and generating the corresponding label files;
and step 23, randomly selecting 80% of all the images in the data set to form the training set, with the remaining 20% forming the test set.
Preferably, in the sixth step, the specific method for integrating the watermark coding pattern to generate complete watermark information includes:
step 31, filtering out the watermark coding patterns identified in step five whose category confidence is less than 0.55;
step 32, sorting the watermark coding patterns of step 31 by the coordinates of their upper-left corners, from left to right and from top to bottom;
step 33, calculating the average height of the watermark coding patterns of step 32;
step 34, two watermark coding patterns are adjacent if the difference between their vertical positions is less than 0.25 times the average pattern height and their horizontal distance is less than 1.5 times the average pattern height. Six arrays are defined, corresponding to the six digits of the watermark information. An initial watermark coding pattern is taken from the patterns of step 32; the pattern adjacent to its right is taken from the remaining patterns and placed in the first array, the pattern adjacent to the right of that one is taken from the remaining patterns and placed in the second array, and so on. Likewise, the pattern adjacent to the left of the initial pattern is taken from the remaining patterns and placed in the sixth array, the pattern adjacent to the left of that one is placed in the fifth array, and so on. The categories of the watermark coding patterns in each array, the number of patterns of each category, and the maximum confidence of each category are then counted;
and step 35, the category of the watermark coding pattern for each array is confirmed from the statistics of step 34: the category with the largest count in an array is taken as the category of the watermark coding pattern corresponding to that array, and if two categories have the same count, the one with the higher category confidence is selected. The digit corresponding to each array's watermark coding pattern is then extracted.
The invention increases the redundant information of the video watermark, so that the complete watermark can still be extracted from videos maliciously captured and transmitted under different conditions, such as partial capture or varying illumination; the embedded semi-transparent watermark is highly resistant to compression, so the watermark information survives harsh transmission channels such as re-capture with a mobile phone; and extracting the watermark information with a deep learning model is fast and more robust than traditional algorithms.
Drawings
The accompanying drawings are included to provide a further understanding of the invention.
In the drawings:
fig. 1 is a work flow diagram of a video image transparent watermark embedding and extracting method based on deep learning according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Abbreviations and key term definitions:
YOLO (You Only Look Once): an advanced real-time object detection system.
GPU: an acronym for Graphics Processing Unit, a microprocessor dedicated to image and graphics operations on personal computers, workstations, game consoles and some mobile devices.
As shown in fig. 1, a method for embedding and extracting a transparent watermark in a video image based on deep learning includes the following steps:
Step one: acquiring video images of different scenes and different time periods;
video images of the inside and outside of the room are collected over a period of time. The video images of different scenes such as a parking lot, a park, a district and an intersection, and different time periods such as early morning, noon, afternoon, dusk and evening are arranged outdoors; the indoor video images comprise video images of different scenes such as factory buildings, offices, indoor parking lots, indoor playgrounds and the like under different light effects. The more cases the collected video image covers the better.
Step two: adding a transparent watermark to the video image and generating a corresponding label;
the specific method comprises the following steps:
Step 11, randomly generating watermark information, wherein the watermark information is a six-digit integer M;
Step 12, encoding the watermark information, wherein each digit corresponds to one watermark coding pattern and every watermark information encoding begins with the same shared initial watermark coding pattern. The watermark coding patterns have equal width and equal height, different patterns are clearly distinguishable, and there are 11 categories in total, denoted a through k; for example, the number 9302 is encoded as akebd, where a is the category of the initial watermark coding pattern.
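By way of illustration, this digit-to-pattern mapping can be sketched in Python as follows (a minimal sketch: the function names are illustrative, and the assumption that digits 0-9 map to classes b-k in order is inferred from the example 9302 → akebd):

```python
# Minimal sketch of the watermark encoding described above.
# Assumption: 'a' is the shared initial marker and digits 0-9 map to
# classes 'b'-'k' in order, consistent with the example 9302 -> "akebd".

def encode_watermark(number: int, n_digits: int = 6) -> str:
    """Encode an n-digit integer as a sequence of pattern classes."""
    digits = str(number).zfill(n_digits)
    return "a" + "".join(chr(ord("b") + int(d)) for d in digits)

def decode_watermark(classes: str) -> str:
    """Recover the digit string from a class sequence (inverse mapping)."""
    assert classes[0] == "a", "sequence must start with the initial marker"
    return "".join(str(ord(c) - ord("b")) for c in classes[1:])

print(encode_watermark(9302, n_digits=4))  # -> "akebd"
print(decode_watermark("akebd"))           # -> "9302"
```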
Step 13, embedding the watermark information code into the video image, wherein the watermark coding patterns are tiled horizontally at equal intervals, the horizontal tiling interval is half the width of a watermark coding pattern, and the vertical spacing is generated randomly within the width range of the video image; the embedding algorithm is as follows:
i(x,y)=α*i(x,y)+(1-α)*(255-i(x,y))
where i(x, y) is the pixel value at coordinates (x, y) under the watermark coding pattern mask, and α is a transparency coefficient by which the degree of transparency can be adjusted.
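A minimal sketch of this embedding step, assuming a grayscale image array and a binary mask marking the pixels covered by the watermark coding patterns (the value of α here is illustrative; the patent does not fix it):

```python
import numpy as np

def embed_watermark(image: np.ndarray, mask: np.ndarray, alpha: float = 0.85) -> np.ndarray:
    """Apply i(x,y) = alpha*i(x,y) + (1-alpha)*(255-i(x,y)) at every
    pixel covered by the watermark coding pattern mask."""
    out = image.astype(np.float32)
    blended = alpha * out + (1.0 - alpha) * (255.0 - out)
    out[mask > 0] = blended[mask > 0]
    return np.clip(out, 0, 255).astype(np.uint8)
```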
And step 14, generating a label file corresponding to the video image, wherein each line contains the information of one watermark coding pattern: category, starting coordinates x and y, and height h and width w.
Step three: using a data enhancement technology to manufacture a video image transparent watermark data set;
the specific method comprises the following steps:
step 21, forming a data set from the video images to which transparent watermarks were added in step two;
step 22, sequentially applying horizontal flipping, random cropping, translation, affine transformation, color transformation, illumination transformation and rotation to each image in the data set, saving each resulting video image into the data set, and generating the corresponding label files;
and step 23, randomly selecting 80% of all the images in the data set to form the training set, with the remaining 20% forming the test set.
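A minimal sketch of the random 80/20 split of step 23 (the fixed seed is our addition, for reproducibility):

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=0):
    """Shuffle the augmented data set and split it into the 80% training
    set and 20% test set described in step 23."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]
```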
Step four: training a deep network YOLO v3 model, and storing training parameters;
Define the algorithm objective. The algorithm uses the sum of squared errors as the loss function and iterates until the loss converges to a minimum (generally about 0.6 is sufficient). The loss function consists of 3 parts, coordinate error, IoU error and classification error, and with the symbols defined below it takes the standard YOLO form:

$$\begin{aligned}Loss=\ &\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2+(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]\\&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2\\&+\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c}(p_i(c)-\hat{p}_i(c))^2\end{aligned}$$
the mean square error in mathematical statistics refers to the expected value of the square of the difference between the estimated value of the parameter and the true value of the parameter, and is denoted as MSE. MSE is a convenient method for measuring average error, the MSE can evaluate the change degree of data, and the smaller the value of MSE is, the better accuracy of the prediction model for describing experimental data is shown. Generally, at a certain sample size, the criterion for evaluating the quality of a point estimate is always a function of the distance between the point estimate and the true value of the parameter, the most common function is the square of the distance, and the function can be expected due to the randomness of the estimate. Mean square and variance formula:
wherein y is i Is the real data that is to be presented,is fitting data, w i And m is the number of samples and is more than 0. It can be seen here that the closer the SSE is to 0, the better the model selection and fitting, the more successful the data prediction is, and the mean value of the SSE is MSE.
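For concreteness, the weighted SSE and its mean can be computed as follows (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def weighted_sse_mse(y_true, y_pred, weights):
    """Weighted sum-of-squares error SSE = sum_i w_i*(y_i - yhat_i)^2
    and its mean MSE = SSE / m."""
    y_true, y_pred, w = map(np.asarray, (y_true, y_pred, weights))
    sse = float(np.sum(w * (y_true - y_pred) ** 2))
    return sse, sse / y_true.size
```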
The IoU (Intersection over Union) is a criterion for measuring the accuracy of detecting the corresponding objects in a particular data set: it is the overlap ratio between the generated candidate box and the original ground-truth box. The optimal situation is complete overlap, i.e. a ratio of 1. The calculation formula is

$$IoU=\frac{area(C\cap G)}{area(C\cup G)}$$

where C is the candidate box and G is the ground-truth box. The model divides the input image into an S × S grid; if the center of a coding pattern falls into a grid cell, that grid cell is responsible for detecting the coding pattern, and each grid cell predicts B bounding boxes.
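A minimal sketch of this IoU computation for two axis-aligned boxes in the (x, y, w, h) convention used here (upper-left corner plus width and height):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```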
$x_i$, $y_i$, $w_i$ and $h_i$ are the coordinate values of the model training labels: $x_i$ is the abscissa of the bounding-box center, $y_i$ the ordinate of the bounding-box center, $w_i$ the width of the bounding box, and $h_i$ its height; $\hat{x}_i$, $\hat{y}_i$, $\hat{w}_i$ and $\hat{h}_i$ are the corresponding predicted coordinate values. All coordinate values in the formula are normalized relative values in the range [0, 1]. $\mathbb{1}_{ij}^{obj}$ takes the value 1 when the j-th bounding box prediction of the i-th grid cell is valid (responsible for an object) and 0 when it is not; $\mathbb{1}_{ij}^{noobj}$ is the opposite, taking 1 when invalid and 0 when valid. $C_i$ and $\hat{C}_i$ are the confidences of the labeled and predicted bounding boxes. $p_i(c)$ and $\hat{p}_i(c)$ represent the label and predicted category probabilities, respectively.
Initialize the training parameters: the number of classes is 11, the number of filters in each YOLO layer is 48 (i.e. B × (5 + C) = 3 × (5 + 11)), the initial learning rate base_lr is 0.001, the optimizer is stochastic gradient descent (SGD), and the number of iterations is 20000.
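These initial parameters can be summarized as follows (a sketch in plain Python; the key names are illustrative and are not darknet configuration keys). Note that the filter count follows directly from the output layout: B × (5 + C) = 3 × (5 + 11) = 48:

```python
# Hypothetical summary of the initial training parameters listed above.
train_config = {
    "classes": 11,                        # watermark coding pattern categories a-k
    "yolo_layer_filters": 3 * (5 + 11),   # = 48: B boxes x (4 coords + 1 confidence + C classes)
    "base_lr": 0.001,                     # initial learning rate
    "optimizer": "SGD",                   # stochastic gradient descent
    "iterations": 20000,
}
```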
Train the model. The model uses a fully convolutional network structure with cross-layer skip connections similar to a residual network to extract features from the input picture, obtaining a feature map of a certain size. The input image is divided into grid cells; the grid cell into which the center of a coded pattern falls is responsible for predicting that object. The IoU is calculated for the 3 bounding boxes predicted by that grid cell, and only the bounding box with the largest IoU is used to predict the object.
The model uses a multi-label, multi-class logistic regression layer for category prediction. This layer mainly uses a sigmoid function, which constrains each of the class scores predicted for a grid cell to the range 0 to 1; if a value is greater than 0.5, the target belongs to that class.
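A minimal sketch of this multi-label prediction rule (the 0.5 threshold is the one stated above):

```python
import numpy as np

def predict_classes(class_logits, threshold=0.5):
    """Squash each of the 11 class scores to (0, 1) with a sigmoid and
    mark as positive every class whose score exceeds the threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(class_logits, dtype=np.float64)))
    return probs, probs > threshold
```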
The model predicts box positions by fusing multiple scales and detects on feature maps of multiple scales. The predicted output feature map has two dimensions equal to those of the extracted features, plus one depth dimension of size B × (5 + C), where B is the number of bounding boxes predicted per cell (3 in the invention), C is the number of bounding-box categories (11), and 5 corresponds to the 4 coordinate values plus one object confidence.
Load the initialized training parameters and train the model on the GPU server according to the above approach. Evaluate the trained model on the test set, and save the optimal model weight parameters for extracting the position information and category information of the video image transparent watermark coding patterns.
Step five: identifying the position (upper-left coordinates x and y, and height h and width w), category information and category confidence of the watermark coding patterns to be extracted, using the trained deep network YOLO v3 model;
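For the integration in step six, each pattern detected in step five can be represented by a record like the following (a sketch; the field names are ours, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One watermark coding pattern found by the trained YOLO v3 model."""
    x: float           # upper-left x coordinate
    y: float           # upper-left y coordinate
    w: float           # width
    h: float           # height
    category: str      # one of the 11 classes 'a'-'k'
    confidence: float  # category confidence
```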
step six: and integrating the watermark coding graph to generate complete watermark information.
The specific method comprises the following steps:
step 31, filtering out the watermark coding patterns identified in step five whose category confidence is less than 0.55;
step 32, sorting the watermark coding patterns of step 31 by the coordinates of their upper-left corners, from left to right and from top to bottom;
step 33, calculating the average height of the watermark coding patterns of step 32;
step 34, two watermark coding patterns are adjacent if the difference between their vertical positions is less than 0.25 times the average pattern height and their horizontal distance is less than 1.5 times the average pattern height. Six arrays are defined, corresponding to the six digits of the watermark information. An initial watermark coding pattern is taken from the patterns of step 32; the pattern adjacent to its right is taken from the remaining patterns and placed in the first array, the pattern adjacent to the right of that one is taken from the remaining patterns and placed in the second array, and so on. Likewise, the pattern adjacent to the left of the initial pattern is taken from the remaining patterns and placed in the sixth array, the pattern adjacent to the left of that one is placed in the fifth array, and so on. The categories of the watermark coding patterns in each array, the number of patterns of each category, and the maximum confidence of each category are then counted;
and step 35, the category of the watermark coding pattern for each array is confirmed from the statistics of step 34: the category with the largest count in an array is taken as the category of the watermark coding pattern corresponding to that array, and if two categories have the same count, the one with the higher category confidence is selected. The digit corresponding to each array's watermark coding pattern is then extracted.
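Putting steps 31-35 together, a minimal sketch of the integration (assuming the Detection record and digit mapping sketched earlier; reading the "horizontal distance" of step 34 as the gap between upper-left x coordinates is our assumption):

```python
from collections import defaultdict

def integrate_watermark(detections, conf_thresh=0.55):
    """Recover the six-digit watermark string from pattern detections."""
    dets = [d for d in detections if d.confidence >= conf_thresh]  # step 31
    dets.sort(key=lambda d: (d.y, d.x))                            # step 32
    avg_h = sum(d.h for d in dets) / len(dets)                     # step 33

    def neighbor(cur, pool, sign):
        # Step 34 adjacency: nearly the same row, close horizontally;
        # sign=+1 searches to the right, sign=-1 to the left.
        cand = [d for d in pool
                if abs(d.y - cur.y) < 0.25 * avg_h
                and 0 < sign * (d.x - cur.x) < 1.5 * avg_h]
        return min(cand, key=lambda d: sign * (d.x - cur.x), default=None)

    # Step 34: six vote tables, one per digit; each maps category -> [count, max confidence].
    arrays = [defaultdict(lambda: [0, 0.0]) for _ in range(6)]
    for init in (d for d in dets if d.category == "a"):
        pool = [d for d in dets if d is not init]
        # Walk right filling arrays 1..6, then left filling arrays 6..1.
        for slots, sign in (((0, 1, 2, 3, 4, 5), +1), ((5, 4, 3, 2, 1, 0), -1)):
            cur = init
            for slot in slots:
                nxt = neighbor(cur, pool, sign)
                if nxt is None:
                    break
                entry = arrays[slot][nxt.category]
                entry[0] += 1
                entry[1] = max(entry[1], nxt.confidence)
                pool.remove(nxt)
                cur = nxt

    # Step 35: most frequent category wins; ties broken by max confidence.
    digits = []
    for votes in arrays:
        if not votes:
            raise ValueError("a digit position received no detections")
        cat = max(votes.items(), key=lambda kv: (kv[1][0], kv[1][1]))[0]
        digits.append(str(ord(cat) - ord("b")))  # inverse of the b-k digit mapping
    return "".join(digits)
```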
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the embodiments may still be modified, and some of their features may be replaced by equivalents, without departing from the spirit and scope of the invention.
Claims (4)
1. A video image transparent watermark embedding and extracting method based on deep learning is characterized in that: the method comprises the following steps:
step one: acquiring video images of different scenes and different time periods;
step two: adding a transparent watermark to the video image and generating a corresponding label;
step three: using a data enhancement technology to manufacture a video image transparent watermark data set;
step four: training a deep network YOLO v3 model, and storing training parameters;
step five: identifying the position, category information and category confidence of the watermark coding patterns to be extracted by using the trained deep network YOLO v3 model;
step six: integrating the watermark coding patterns to generate the complete watermark information;
in the second step, the specific method for adding the transparent watermark to the video image and generating the corresponding label comprises the following steps:
step 11, randomly generating watermark information, wherein the watermark information is a six-digit integer;
step 12, encoding the watermark information, wherein each digit corresponds to one watermark coding pattern and every watermark information encoding begins with the same shared initial watermark coding pattern;
step 13, embedding the watermark information code into the video image, wherein the watermark coding patterns are tiled horizontally at equal intervals and the vertical spacing is generated randomly within the width range of the video image;
step 14, generating a label file corresponding to the video image, wherein each line contains the information of one watermark coding pattern: category, starting coordinates, and width and height;
in the third step, the specific method for making the video image transparent watermark data set comprises the following steps:
step 21, forming a data set from the video images to which transparent watermarks were added in step two;
step 22, sequentially applying horizontal flipping, random cropping, translation, affine transformation, color transformation, illumination transformation and rotation to each image in the data set, saving each resulting video image into the data set, and generating the corresponding label files;
step 23, randomly selecting 80% of all the images in the data set to form the training set, with the remaining 20% forming the test set;
in the sixth step, the specific method for integrating the watermark coding pattern to generate complete watermark information is as follows:
step 31, filtering out the watermark coding patterns identified in step five whose category confidence is less than 0.55;
step 32, sorting the watermark coding patterns of step 31 by the coordinates of their upper-left corners, from left to right and from top to bottom;
step 33, calculating the average height of the watermark coding patterns of step 32;
step 34, two watermark coding patterns are adjacent if the difference between their vertical positions is less than 0.25 times the average pattern height and their horizontal distance is less than 1.5 times the average pattern height; six arrays are defined, corresponding to the six digits of the watermark information; an initial watermark coding pattern is taken from the patterns of step 32; the pattern adjacent to its right is taken from the remaining patterns and placed in the first array, the pattern adjacent to the right of that one is taken from the remaining patterns and placed in the second array, and so on; likewise, the pattern adjacent to the left of the initial pattern is taken from the remaining patterns and placed in the sixth array, the pattern adjacent to the left of that one is placed in the fifth array, and so on; the categories of the watermark coding patterns in each array, the number of patterns of each category, and the maximum confidence of each category are then counted;
and step 35, the category of the watermark coding pattern for each array is confirmed from the statistics of step 34: the category with the largest count in an array is taken as the category of the watermark coding pattern corresponding to that array, and if two categories have the same count, the one with the higher category confidence is selected; the digit corresponding to each array's watermark coding pattern is then extracted.
2. The method for embedding and extracting a transparent watermark in a video image based on deep learning according to claim 1, wherein: in step 12, the watermark coding patterns have equal width and equal height, different watermark coding patterns are clearly distinguishable from one another, and there are 11 categories in total.
3. The method for embedding and extracting a transparent watermark in a video image based on deep learning according to claim 1, wherein: in step 13, the algorithm for embedding watermark information into the video image is as follows:
i(x,y)=α*i(x,y)+(1-α)*(255-i(x,y))
where i(x, y) is the pixel value at coordinates (x, y) under the watermark coding pattern mask, and α is a transparency coefficient by which the degree of transparency can be adjusted.
4. The method for embedding and extracting a transparent watermark in a video image based on deep learning according to claim 1, wherein: the horizontal tiling interval of the watermark coding patterns is half the width of a watermark coding pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910480250.2A CN112040241B (en) | 2019-06-04 | 2019-06-04 | Video image transparent watermark embedding and extracting method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910480250.2A CN112040241B (en) | 2019-06-04 | 2019-06-04 | Video image transparent watermark embedding and extracting method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112040241A CN112040241A (en) | 2020-12-04 |
CN112040241B true CN112040241B (en) | 2022-08-05 |
Family
ID=73575847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910480250.2A Active CN112040241B (en) | 2019-06-04 | 2019-06-04 | Video image transparent watermark embedding and extracting method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112040241B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111932431B (en) * | 2020-07-07 | 2023-07-18 | 华中科技大学 | Visible watermark removing method based on watermark decomposition model and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040258243A1 (en) * | 2003-04-25 | 2004-12-23 | Dong-Hwan Shin | Method for embedding watermark into an image and digital video recorder using said method |
CN103391482A (en) * | 2013-07-15 | 2013-11-13 | 浙江大学 | Blind digital watermarking coding and decoding method capable of resisting geometric attack |
CN109635875A (en) * | 2018-12-19 | 2019-04-16 | 浙江大学滨海产业技术研究院 | A kind of end-to-end network interface detection method based on deep learning |
CN109816024A (en) * | 2019-01-29 | 2019-05-28 | 电子科技大学 | A kind of real-time automobile logo detection method based on multi-scale feature fusion and DCNN |
Also Published As
Publication number | Publication date |
---|---|
CN112040241A (en) | 2020-12-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PP01 | Preservation of patent right | Effective date of registration: 20231113; Granted publication date: 20220805 |
PP01 | Preservation of patent right |