CN105263025B - Video zero-watermarking method based on the spatio-temporal domain - Google Patents

Video zero-watermarking method based on the spatio-temporal domain

Info

Publication number
CN105263025B
Authority
CN
China
Prior art keywords
frame
key frame
time
watermark
time domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510745110.5A
Other languages
Chinese (zh)
Other versions
CN105263025A (en)
Inventor
彭德中
罗帆
罗一帆
项勇
张利君
银大伟
蒋瑞
桑永胜
彭玺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Ruibei Yingte Information Technology Co ltd
Original Assignee
Chengdu Ruibei Yingte Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Ruibei Yingte Information Technology Co Ltd
Priority to CN201510745110.5A
Publication of CN105263025A
Application granted
Publication of CN105263025B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video zero-watermarking method based on the spatio-temporal domain, comprising the steps of video scene segmentation, synchronization-mechanism design and key-frame extraction using a finite state machine, construction of a 3D-Harris corner set, construction of a temporal gray-mean data set, construction of a temporal extremum data set, and construction of a dual zero watermark. The method forms the watermark data from spatio-temporal features of the video content and overcomes the shortcomings of the prior art: besides resisting geometric attacks, it simultaneously resists color attacks and synchronization attacks, and it mitigates the ambiguity problem of zero watermarking. Used for the copyright protection of video content, it exhibits high robustness.

Description

Video zero-watermarking method based on the spatio-temporal domain
Technical field
The present invention relates to the field of digital watermarking technology, and in particular to a video zero-watermarking method based on the spatio-temporal domain.
Background technology
With the development of Internet and multimedia technology, audio, images and video have entered daily life. Video in particular plays an important role as a medium, so its copyright protection becomes increasingly important. Against this background, digital watermarking, which represents copyright information and realizes copyright protection conveniently and effectively, has been widely studied. After years of development, video watermarking, as the application of digital watermarking to video copyright protection, has matured; it offers strong robustness and can resist various attacks, including frame attacks, compression attacks, filtering attacks, and color and contrast attacks. Existing video watermarking techniques can be divided into two broad classes according to how frame images are processed: key-frame techniques and spatio-temporal techniques.
Key-frame techniques: key frames are first selected according to some criterion, feature data are then extracted from each key frame, and watermark information is finally embedded by modifying the feature data of each key frame. These techniques can be further subdivided into frequency-domain and spatial-domain processing. Typical frequency-domain methods are DFT, DCT, DWT, PCA and SVD, which extract feature information and embed the watermark in the frequency domain. Typical spatial-domain methods are the SIFT and SURF algorithms, which generate and embed watermark information using spatial features of video frames. Key-frame techniques alone fail to fully exploit the temporal characteristics of video; the feature data of isolated key frames do not form a feature set covering the whole video, which weakens the watermark's resistance to attacks, especially attacks on the video file as a whole such as global color transformation attacks and synchronization attacks.
Spatio-temporal techniques: the temporal characteristics of video are fully exploited and combined with the spatial and frequency domains; video feature data are extracted in the three-dimensional spatio-temporal space to form the watermark information. Features are first extracted from single frames, for example with the spatial-domain SIFT algorithm or frequency-domain algorithms such as DCT. Then, from the perspective of the whole video, feature vectors are generated along the temporal domain to form a vector data set. Finally, watermark data are generated from the data set according to corresponding rules. Although spatio-temporal techniques take full advantage of the temporal continuity of video, applying SIFT, SURF and the like to all video frames without combining them with key-frame techniques easily leads to a heavy computational load and fails to meet the real-time requirements of video watermarking.
Zero-watermarking: the robustness and invisibility of a watermark have always been contradictory; strengthening invisibility inevitably weakens robustness, and vice versa. Another problem is watermark capacity: embedding a large-capacity watermark causes greater damage to the original media file. Zero-watermarking solves these problems well: it uses features of the media file itself to form the watermark information and registers that information with a third-party institution as the copyright information of the media file, thereby achieving copyright protection without modifying the original media file. However, some zero-watermarking techniques suffer from the ambiguity problem, i.e. different media files may produce the same watermark information.
Robustness: robustness reflects the ability of a digital watermark to resist attacks. Since video extends images in the time domain, an attacker can operate not only on individual frames but also on series of consecutive frames, so watermarking techniques need to resist a variety of attacks. Typical video watermarking attacks include: geometric attacks, comprising rotation, scaling, translation and affine transformation of frame images; frame attacks or synchronization attacks, including frame deletion, frame insertion, frame swapping and frame averaging; color attacks, including changes of color, luminance and contrast; and, in addition, compression attacks and filtering attacks. Both key-frame techniques and spatio-temporal techniques have their own robustness characteristics, performing well in one or two respects, but neither can resist all kinds of attacks simultaneously, especially attack types that operate on the video as a whole, such as color and contrast change attacks and synchronization attacks.
Summary of the invention
The problem to be solved by the invention is how to provide a video zero-watermarking method based on the spatio-temporal domain that overcomes the defects of the prior art, effectively improves the robustness of the video watermark, makes full use of the overall characteristics of the video, and simultaneously resists geometric attacks, frame attacks, synchronization attacks, and color and contrast attacks.
The invention solves this technical problem as follows. A video zero-watermarking method based on the spatio-temporal domain is provided, characterized in that it comprises the following steps:
Step 1: scene segmentation
Using a gray-value-based method, the gray mean of each frame image is calculated; scene segmentation is performed with a change of the mean crossing a threshold as the criterion, and the scenes are grouped and marked;
Step 2: synchronization mechanism design and key-frame extraction
Based on a study of the temporal characteristics of video, a video synchronization mechanism is designed with a finite state machine. For each video scene, the current and next states of the finite state machine determine the region in which the key frame used to form watermark data lies and its specific position, and the state of the finite state machine is determined by the relationship between the gray means of the frames in the region;
Step 3: construction of the 3D-Harris corner set
Harris corners are computed for each key frame and stable points are chosen; the Harris points are extended into the time domain to form a Harris corner set in three-dimensional space, and the temporal coordinates are recorded;
Step 4: construction of the gray-mean data set
Taking the time domain as the unit, gray means of the corner data are computed over the 3D-Harris corner set to form the gray-mean data set;
Step 5: construction of the extremum data set
The gray-mean data set is segmented in the time domain with a predetermined spacing, extrema are computed within each segment, and the maximum and minimum data sets in the temporal domain are formed respectively;
Step 6: zero-watermark data generation
(1) Watermark data are generated from the gray-mean data set: a numerical threshold is set, the numbers of gray means larger and smaller than the threshold are determined, the two counts are compared, and watermark data are generated according to the comparison result;
(2) Watermark data are generated from the extremum data set: extrema are compared within a given temporal distance, and the comparison result serves as the criterion for generating watermark data;
The watermark data generated by the above two rules are then subjected to a bitwise random permutation to produce the final watermark data.
According to the video zero-watermarking method based on the spatio-temporal domain provided by the invention, the specific method in step 2 is:
For each scene, zero or one key frame is generated for every four frames according to the state; the window then moves forward and the operation is repeated. The initial state of the finite state machine is a designated value; the next state is determined by the relative order of the gray means of the three frames in the four-frame segment other than the key frame (six orderings exist in total). Each ordering represents one state, and the state determines which frame of the next four-frame segment is selected as the key frame; the temporal position of the key frame is recorded.
The whole finite-state-machine operation is repeated until all scenes have been traversed, and all key frames are finally generated.
According to the video zero-watermarking method based on the spatio-temporal domain provided by the invention, the specific method in steps 3, 4 and 5 is:
1. Extracting Harris stable points in key frames: a Harris corner operation is applied to every key frame image in the scene, the corners are sorted in descending order of their response R, and the top N corners are chosen as stable points. Let H and W be the horizontal and vertical coordinate ranges of the frame image; then the Harris stable point set can be expressed as
Hpoint(x, y) = { I(x, y) | x = 1, …, H; y = 1, …, W }
2. Construction of the spatio-temporal 3D-Harris corner gray-mean data set: let MGV denote the gray mean of the Harris stable points of a key frame image, diffused along the time axis over the key frames; let N denote the number of Harris stable points in the t-th key frame and F the number of key frames in the scene; then the gray-mean data set in the time domain can be expressed as MGVT:
MGVT = { MGV(t) | t = 1, …, F }
Let LEV denote an extremum point (maximum or minimum) over a span of K key frames; its extension in the time domain can be expressed as LEVT. Letting E be the number of extrema generated within the span K, then
LEV(t2) = extreme{ MGV(j) | j = 1, …, K }, t2 = 1, …, E/2
LEVT = { LEV(t2) | t2 = 1, …, E/2 }.
According to the video zero-watermarking method based on the spatio-temporal domain provided by the invention, the specific method in step 6 is:
Taking the scene as the basic unit, watermark data are generated from the temporal gray-mean data set and the temporal extremum data set respectively.
1. Generating watermark information from the gray-mean data set. Within the scene, a frame distance D satisfying the T criterion is the generation range of one watermark bit. The average of the Harris corner gray means (MGV) of all key frames within this range is computed and denoted Em. Two thresholds, a high threshold and a low threshold, denoted Thigh and Tlow respectively, are set as
Thigh = Em + α × 255
Tlow = Em + β × 255
where the parameters α and β lie between 0 and 1 and control the size of the thresholds.
The MGV value of each key frame is compared with the thresholds. Let H be the number of key frames above Thigh and L the number of key frames below Tlow. If H is greater than L, watermark bit "1" is generated; otherwise watermark bit "0" is generated.
2. Generating watermark information from the extremum data set. According to the T criterion, within the scene, a frame distance D satisfying the T criterion is the generation range of one watermark bit. Let Nmax and Nmin denote the numbers of maxima LEVmax and minima LEVmin in each segment. If Nmax is greater than Nmin, watermark bit "1" is generated; otherwise watermark bit "0" is generated. Finally, the two groups of watermark information are subjected to a bitwise random permutation to produce the final watermark data.
Beneficial effects of the present invention:
1. The spatio-temporal characteristics of video are fully exploited, and feature data representing the video as a whole are extracted. Harris corners are extended into the temporal domain to form the 3D-Harris corner set. In contemporary digital-watermarking research, Harris corners are widely used because of their stability and strong resistance to geometric attacks, but practical applications mostly concern image watermarks or isolated single frames of video watermarks. The invention exploits the temporal continuity of video: Harris corners are extended over the whole video, stable points are chosen from the Harris corners of single frames, and a 3D-Harris corner data set is formed in the time domain, providing a basis for extracting feature data that represent the whole video and for generating watermark information based on the spatio-temporal domain.
2. Feature data are extracted from multiple angles to form dual watermark information. Common zero-watermarking techniques face the ambiguity problem: different media files may yield identical watermark information. The invention extracts video feature data from two angles, forms two independent pieces of watermark information, and randomly fuses the two groups, which effectively reduces the probability of ambiguity.
3. The ability to resist synchronization attacks is improved. The invention uses a finite state machine to detect key frames in a segment-by-segment, progressive manner: the key-frame position within the current segment is determined by the current state of the finite state machine, and the next state is determined by the order relationship of the gray means of the remaining frames of the current segment. Detecting key frames in this way effectively resists synchronization attacks, including frame deletion, frame insertion and frame swapping, while also reducing the amount of Harris corner computation and improving the efficiency of watermark generation.
4. The ability to resist color attacks is improved. Research on video watermarking usually concentrates on resisting geometric changes, frame attacks, filtering attacks and compression attacks. In the invention, feature data are extracted from the gray values of the 3D-Harris corner set, which provides strong robustness against color attacks. Whether the attacker adjusts single or multiple color channels within the tolerance of human vision, changes the contrast, or adjusts the white balance, the integrity of the watermark information is preserved.
Description of the drawings
Fig. 1 is a schematic diagram of scene segmentation and intra-scene fragmentation;
Fig. 2 is a schematic diagram of the finite state machine of the synchronization mechanism;
Fig. 3 is a flow chart of 3D-Harris data set generation;
Fig. 4 is a flow chart of watermark information generation.
Detailed description of embodiments
The invention is further described below in conjunction with the accompanying drawings.
According to the temporal characteristics of video, the video is first segmented into scenes and each scene is then sliced into fragments; a finite state machine is used to traverse each video scene and determine the key-frame positions. Harris corners are computed from the obtained key-frame data, and the corners of highest stability are selected and extended into the time domain to form the 3D-Harris corner data set. The corner data set is then processed further: based on the corner gray values, two different criteria, gray mean and extremum, are applied to generate two different groups of watermark information. The two groups of watermark data are fused randomly to produce the final watermark data.
1. Scene segmentation and intra-scene fragmentation
The overall flow of scene segmentation and intra-scene fragmentation is shown in Fig. 1. The gray mean of every frame of the video is computed, and abrupt changes of the gray value are taken as scene boundaries. After scene segmentation is complete, each scene is fragmented: each fragment contains N frames, and the fragment positions are marked. In subsequent processing, one frame of certain fragments will be determined to be a key frame.
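The two operations above can be sketched as follows. This is a minimal illustration, assuming OpenCV (cv2) for frame decoding; the mean-change threshold `thresh`, the fragment length `n`, and the function names are illustrative choices rather than values fixed by the patent.

```python
import cv2

def scene_boundaries(video_path, thresh=20.0):
    """Split a video into scenes by abrupt changes of the per-frame gray mean.

    Returns a list of (start_frame, end_frame) index pairs, one per scene.
    """
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        means.append(float(gray.mean()))
    cap.release()

    cuts = [0]
    for i in range(1, len(means)):
        if abs(means[i] - means[i - 1]) > thresh:   # abrupt gray-mean change -> scene boundary
            cuts.append(i)
    cuts.append(len(means))
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

def fragments(scene, n=4):
    """Slice a scene given as (start, end) into consecutive fragments of n frames each."""
    start, end = scene
    return [(s, min(s + n, end)) for s in range(start, end, n)]
```

In practice the threshold would be tuned so that gradual lighting changes inside a scene do not create false boundaries.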
2. Synchronization mechanism design and key-frame extraction
The synchronization mechanism is realized with a finite state machine, whose design is shown schematically in Fig. 2. Here the fragment length within a scene is set to 4, and all fragments in each scene are traversed to generate the key-frame positions.
The initial state of each scene is first set to S0, and the first fragment in the scene is traversed. State S0 decides which frame X of the current frames 1-4 is the key frame; the gray means of the three remaining frames are computed and ordered, giving six possible orderings in total, which determine the next state (S0-S5). Each state value determines either the key-frame position within the next fragment or that no key frame is chosen in that fragment. After this operation, key-frame extraction proceeds to the next fragment. When all fragments of the scene have been traversed, the next scene is traversed, until the operation has been completed for the whole video.
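A minimal sketch of this traversal is given below, assuming 4-frame fragments. The two lookup tables are illustrative assumptions: the patent defines the ordering-to-state and state-to-key-frame mappings only through Fig. 2, so `STATE_TO_KEY` here is a placeholder; as in claim 2, a fragment that yields no key frame returns the machine to the initial state.

```python
from itertools import permutations

# The six possible orderings of the three non-key-frame gray means map to states S0..S5.
ORDER_TO_STATE = {p: s for s, p in enumerate(permutations(range(3)))}

# Illustrative placeholder: which frame (0-3) of the next 4-frame fragment is the key frame,
# or None for "no key frame in this fragment" (the patent fixes this mapping in Fig. 2).
STATE_TO_KEY = {0: 0, 1: 1, 2: 2, 3: 3, 4: 1, 5: None}

def extract_key_frames(frame_means, frags):
    """frame_means: per-frame gray means; frags: list of (start, end) 4-frame fragments."""
    state, key_positions = 0, []
    for start, end in frags:
        if end - start < 4:                      # skip incomplete trailing fragments
            continue
        key = STATE_TO_KEY[state]
        if key is None:                          # no key frame: return to the initial state
            state = 0
            continue
        key_positions.append(start + key)        # record the temporal position of the key frame
        others = [i for i in range(4) if i != key]
        means = [frame_means[start + i] for i in others]
        order = tuple(sorted(range(3), key=lambda k: means[k]))
        state = ORDER_TO_STATE[order]            # the ordering of the three means selects the next state
    return key_positions
```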
3. Construction of the 3D-Harris corner set
For the key frames generated in step 2, Harris corners are first computed for each frame. A frame may yield multiple corners or none at all, for example a black transition frame. The corners are sorted in descending order of their response R, and the top N are chosen as stable points. Let H and W be the horizontal and vertical coordinate ranges of the frame image; then the Harris stable point set can be expressed as
Hpoint(x, y) = { I(x, y) | x = 1, …, H; y = 1, …, W }
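A minimal sketch of the stable-point selection, assuming OpenCV's cv2.cornerHarris for the response map R; the block size, Sobel aperture, Harris parameter k and the number of points n are illustrative, not specified by the patent.

```python
import cv2
import numpy as np

def harris_stable_points(gray, n=50, block_size=2, ksize=3, k=0.04):
    """Return the (x, y) coordinates of the n corners with the largest Harris response R."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    top = np.argsort(response.ravel())[::-1][:n]        # indices of the n strongest responses
    ys, xs = np.unravel_index(top, response.shape)
    return list(zip(xs.tolist(), ys.tolist()))
```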
After the Harris stable corners of the key frames have been obtained, they are extended along the temporal scale. Let MGV denote the gray mean of the Harris stable points of a key frame image, let N denote the number of Harris stable points in the t-th key frame, and let F denote the number of key frames involved in the scene; then the gray-mean data set in the time domain can be expressed as MGVT:
MGVT = { MGV(t) | t = 1, …, F }
Let LEV denote an extremum point (maximum or minimum) over a span of K key frames; its extension in the time domain can be expressed as LEVT. Letting E be the number of extrema generated within the span K, then
LEV(t2) = extreme{ MGV(j) | j = 1, …, K }, t2 = 1, …, E/2
LEVT = { LEV(t2) | t2 = 1, …, E/2 }.
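The construction of MGVT and LEVT can be sketched as follows, under the assumption that an extremum is a local maximum or minimum of MGV inside each span of K consecutive key frames; the helper names are illustrative.

```python
import numpy as np

def mgv_series(key_frames_gray, stable_points):
    """MGVT: gray mean of the Harris stable points of each key frame.
    key_frames_gray[t] is the t-th key frame (grayscale array); stable_points[t] its (x, y) corners."""
    mgvt = []
    for gray, pts in zip(key_frames_gray, stable_points):
        values = [float(gray[y, x]) for x, y in pts]
        mgvt.append(float(np.mean(values)) if values else 0.0)
    return mgvt

def local_extrema(mgvt, k):
    """LEVT: local maxima and minima of MGV inside each span of k consecutive key frames."""
    maxima, minima = [], []
    for s in range(0, len(mgvt), k):
        seg = mgvt[s:s + k]
        for i in range(1, len(seg) - 1):
            if seg[i] > seg[i - 1] and seg[i] > seg[i + 1]:
                maxima.append(seg[i])
            elif seg[i] < seg[i - 1] and seg[i] < seg[i + 1]:
                minima.append(seg[i])
    return maxima, minima
```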
4. Watermark data generation
The overall flow of watermark data generation is shown in Fig. 4. Dual watermark generation is carried out from the two spatio-temporal data sets produced in step 3, the gray-mean set MGVT and the extremum set LEVT. A random number D is first taken as the frame distance for generating one watermark bit; to obtain watermark information of sufficient length, let AllFrames denote the total number of frames in the video, and D is chosen subject to a constraint on AllFrames.
Generation of the first group of watermark data: all MGVT values within the range D are compared. To keep the MGVT values stable under color, contrast and white-balance changes, two thresholds are introduced: one keeps the selected MGVT values away from the upper boundary value 255 and is denoted Thigh; the other keeps them away from the lower boundary value, within 256×20%, and is denoted Tlow.
Thigh = Em + α × 255
Tlow = Em + β × 255
where the parameters α and β lie between 0 and 1 and control the size of the thresholds.
The MGV value of each key frame is compared with the thresholds. Let H be the number of key frames above Thigh and L the number of key frames below Tlow. If H is greater than L, watermark bit "1" is generated; otherwise watermark bit "0" is generated.
Generation of the second group of watermark data: all LEVT values within the range D are compared. Let Nmax and Nmin denote the numbers of maxima LEVmax and minima LEVmin in each segment. If Nmax is greater than Nmin, watermark bit "1" is generated; otherwise watermark bit "0" is generated.
Finally, the first and second groups of watermark information are fused by a random permutation to obtain the final watermark information, which is registered with a third-party information bank to realize the copyright protection of the video file.
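The two bit-generation rules and the random fusion can be sketched as follows. This is an illustration only: α, β and the span are example values, and a fixed RNG seed is assumed to stand in for the shared key of the bitwise permutation, which the patent describes simply as a bitwise random permutation.

```python
import numpy as np

def bits_from_mgvt(mgvt, span, alpha=0.2, beta=0.05):
    """Rule 1: one bit per span of MGV values, comparing the counts above Thigh and below Tlow."""
    bits = []
    for s in range(0, len(mgvt) - span + 1, span):
        seg = mgvt[s:s + span]
        em = float(np.mean(seg))
        thigh, tlow = em + alpha * 255, em + beta * 255   # thresholds as defined in the text
        h = sum(v > thigh for v in seg)
        l = sum(v < tlow for v in seg)
        bits.append(1 if h > l else 0)
    return bits

def bits_from_extrema(maxima_counts, minima_counts):
    """Rule 2: one bit per segment, comparing the numbers of maxima (Nmax) and minima (Nmin)."""
    return [1 if nmax > nmin else 0 for nmax, nmin in zip(maxima_counts, minima_counts)]

def fuse(bits1, bits2, seed=0):
    """Bitwise random permutation of the two concatenated bit groups; `seed` stands in for the key."""
    rng = np.random.default_rng(seed)
    merged = np.array(list(bits1) + list(bits2), dtype=np.uint8)
    return merged[rng.permutation(len(merged))]
```

At verification time the same seed and parameters would reproduce the permutation, so the recomputed bits can be compared with the sequence registered at the third party.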

Claims (3)

1. A video zero-watermarking method based on the spatio-temporal domain, characterized in that it comprises the following steps:
Step 1: using a gray-value-based method, the gray mean of each frame image is calculated; scene segmentation is performed with an abrupt change of the mean exceeding a threshold as the criterion, and the scenes are grouped and marked;
Step 2: for each scene, key frames are chosen by means of a synchronization mechanism and the temporal positions of the key frames are recorded;
Step 3: Harris corners are computed for each key frame and stable points are chosen; the stable points are extended into the time domain to form a Harris stable point set in three-dimensional space, and the temporal coordinates are recorded;
Step 4: construction of the gray-mean data set: taking the time domain as the unit, gray means of the corner data are computed over the 3D-Harris corner set to form the gray-mean data set;
Step 5: construction of the extremum data set: the gray-mean data set is segmented in the time domain with a predetermined spacing, extrema are computed within each segment, and the maximum and minimum data sets in the temporal domain are formed respectively;
Step 6: zero-watermark data generation
(1) watermark data are generated from the temporal gray-mean data set: a numerical threshold is set, the numbers of gray means larger and smaller than the threshold are determined, the two counts are compared, and watermark data are generated according to the comparison result;
(2) watermark data are generated from the extremum data set: extrema are compared within a given temporal distance, and the comparison result serves as the criterion for generating watermark data;
the watermark data generated by the above two rules are then subjected to a bitwise random permutation to produce the final watermark data; the specific method is:
taking the scene as the basic unit, watermark data are generated from the temporal gray-mean data set and the temporal extremum data set respectively;
1. generating watermark information from the gray-mean data set: the average of the Harris corner gray means MGV of all key frames in the scene is computed and denoted Em; two thresholds, a high threshold and a low threshold, denoted Thigh and Tlow respectively, are set as
Thigh = Em + α × 255
Tlow = Em + β × 255
where the parameters α and β lie between 0 and 1 and control the size of the thresholds;
the MGV value of each key frame is compared with the thresholds; let H be the number of key frames above Thigh and L the number of key frames below Tlow; if H is greater than L, watermark bit "1" is generated, otherwise watermark bit "0" is generated;
2. generating watermark information from the extremum data set: according to the T criterion, within the scene, a frame distance D satisfying the T criterion is the generation range of one watermark bit; let Nmax and Nmin denote the numbers of maxima LEVmax and minima LEVmin in each segment; if Nmax is greater than Nmin, watermark bit "1" is generated, otherwise watermark bit "0" is generated;
finally, the two groups of watermark information are subjected to a bitwise random permutation to produce the final watermark data.
2. The video zero-watermarking method based on the spatio-temporal domain according to claim 1, characterized in that the specific method in step 2 is:
Step 2.1: for each scene, zero or one key frame is generated for every four frames according to the state; the window then moves forward and the operation is repeated; the initial state of the finite state machine is a designated value, and the next state is determined by the order relationship of the gray means of the three frames in the four-frame segment other than the key frame, each ordering representing one state; the state determines which frame of the next four-frame segment is selected as the key frame, and the temporal position of the key frame is recorded; if zero key frames are generated, the state at the next moment returns to the initial state and key-frame extraction continues;
Step 2.2: step 2.1 is repeated until all scenes have been traversed, and all key frames are finally generated.
3. The video zero-watermarking method based on the spatio-temporal domain according to claim 1, characterized in that the specific method in steps 3, 4 and 5 is:
1. extracting Harris stable points in key frames: a Harris corner operation is applied to every key frame image in the scene, the corners are sorted in descending order of their response R, and the top N corners are chosen as stable points; let H and W be the horizontal and vertical coordinate ranges of the frame image; then the Harris stable point set can be expressed as
Hpoint(x, y) = { I(x, y) | x = 1, …, H; y = 1, …, W }
2. construction of the spatio-temporal 3D-Harris corner gray-mean data set: let MGV denote the gray mean of the Harris stable points of each key frame image, diffused along the time axis over the key frames; let N denote the number of Harris stable points in the t-th key frame and F the number of key frames in the region; then the gray-mean data set in the time domain can be expressed as MGVT:
MGVT = { MGV(t) | t = 1, …, F }
I denotes a Harris stable corner selected from each key frame image, with value range 0 to N; let LEV denote an extremum point (maximum or minimum) over a span of K key frames; its extension in the time domain can be expressed as LEVT; letting E be the number of extrema generated within the span K, then
LEV(t2) = extreme{ MGV(j) | j = 1, …, K }, t2 = 1, …, E/2
LEVT = { LEV(t2) | t2 = 1, …, E/2 }.
CN201510745110.5A 2015-11-05 2015-11-05 Video zero-watermarking method based on the spatio-temporal domain Expired - Fee Related CN105263025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510745110.5A CN105263025B (en) 2015-11-05 2015-11-05 Video zero-watermarking method based on the spatio-temporal domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510745110.5A CN105263025B (en) 2015-11-05 2015-11-05 Video zero-watermarking method based on the spatio-temporal domain

Publications (2)

Publication Number Publication Date
CN105263025A CN105263025A (en) 2016-01-20
CN105263025B true CN105263025B (en) 2018-11-02

Family

ID=55102506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510745110.5A Expired - Fee Related CN105263025B (en) 2015-11-05 2015-11-05 Video zero-watermarking method based on the spatio-temporal domain

Country Status (1)

Country Link
CN (1) CN105263025B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527010B (en) * 2017-07-13 2020-07-10 央视国际网络无锡有限公司 Method for extracting video gene according to local feature and motion vector

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176208A (en) * 2011-02-28 2011-09-07 西安电子科技大学 Robust video fingerprint method based on three-dimensional space-time characteristics
CN102646259A (en) * 2012-02-16 2012-08-22 南京邮电大学 Anti-attack robustness multiple zero watermark method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8214741B2 (en) * 2002-03-19 2012-07-03 Sharp Laboratories Of America, Inc. Synchronization of video and data
CN103177413B (en) * 2011-12-20 2016-04-13 深圳市腾讯计算机系统有限公司 The method that localization blind watermatking generates, detect and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176208A (en) * 2011-02-28 2011-09-07 西安电子科技大学 Robust video fingerprint method based on three-dimensional space-time characteristics
CN102646259A (en) * 2012-02-16 2012-08-22 南京邮电大学 Anti-attack robustness multiple zero watermark method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Minimum mean squared error time series classification using an echo state network prediction model; M. D. Skowronski et al.; Proceedings of the 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006); 2006-05-24; pp. 3153-3156 *

Also Published As

Publication number Publication date
CN105263025A (en) 2016-01-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Peng Dezhong

Inventor after: Luo Yifan

Inventor after: Xiang Yong

Inventor after: Zhang Lijun

Inventor after: Yin Dawei

Inventor after: Jiang Rui

Inventor after: Sang Yongsheng

Inventor after: Peng Xi

Inventor before: Peng Dezhong

Inventor before: Luo Yifan

Inventor before: Xiang Yong

TA01 Transfer of patent application right

Effective date of registration: 20171221

Address after: 610000, Floor 16, Jingrong Mansion, No. 1609, south section of Tianfu Road, Tianfu New District, Chengdu, Sichuan, China (Sichuan) Pilot Free Trade Zone

Applicant after: Sichuan Zhi Qian science and Technology Co.,Ltd.

Address before: 610065, Building 1, No. 11, Xiaojiahe Street, High-tech Zone, Sichuan Province

Applicant before: CHENGDU RUIBEI YINGTE INFORMATION TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20180913

Address after: 610000 1 building, 2 Xiaojiahe Zheng street, Chengdu High-tech Zone, Sichuan, China, 11

Applicant after: CHENGDU RUIBEI YINGTE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 610000 Sichuan Chengdu China (Sichuan) Free Trade Experimental Zone

Applicant before: Sichuan Zhi Qian science and Technology Co.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181102