CN114245053A - Video watermark generation method and device - Google Patents
Video watermark generation method and device
- Publication number
- CN114245053A (application number CN202111518217.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- frequency domain
- watermark
- space
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/913—Television signal processing therefor for scrambling ; for copy protection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2347—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption
- H04N21/23473—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption by pre-encrypting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/913—Television signal processing therefor for scrambling ; for copy protection
- H04N2005/91307—Television signal processing therefor for scrambling ; for copy protection by adding a copy protection signal to the video signal
- H04N2005/91335—Television signal processing therefor for scrambling ; for copy protection by adding a copy protection signal to the video signal the copy protection signal being a watermark
Abstract
The application provides a video watermark generation method and device. In the method, a graphics processor, in response to an encryption instruction triggered in the terminal device, parses a two-dimensional map corresponding to each video frame from a to-be-processed video stored in the video memory space; converts the watermark map to be added from the time domain space to the frequency domain space in the video memory space to obtain a first frequency domain map, and converts the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame; and generates, in the video memory space, an anti-counterfeiting image corresponding to each video frame according to the first frequency domain map and the second frequency domain map, the anti-counterfeiting image carrying a video watermark corresponding to the watermark map. Watermark-based anti-counterfeiting encryption of the to-be-processed video is thus performed in the video memory space by the graphics processor, which avoids the drop in video generation efficiency caused by secondary processing of the video at the video generation end, improves video encryption efficiency and video generation efficiency, and improves video security and reliability.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for generating a video watermark.
Background
With the development of Internet technology, video has become an important carrier of content in all kinds of application software, for example video lessons in educational software. How to resist the various attacks aimed at video, prevent video from being tampered with or even pirated, and protect the security and authenticity of video content has therefore become an urgent technical problem.
At present, video watermarks are generally used for video anti-counterfeiting. In the related art, a video watermark is embedded into the Least Significant Bit (LSB) spatial domain by modifying pixels of a single video frame, producing an image difference that human eyes can hardly perceive and thereby achieving video anti-counterfeiting. Alternatively, a circular salient region that is invariant to translation, scaling and rotation is detected in the geometric invariant domain of the image and the watermark is embedded there, so as to resist geometric attacks and improve the robustness of the video watermark. However, each of these methods can defend against only some attacks: the former has difficulty defending against geometric attacks, while the latter has difficulty defending against attacks on its geometric transformation parameter template. Moreover, both approaches require secondary processing of the video at the video generation end, which greatly reduces video generation efficiency.
In view of the above, a solution is needed to overcome the above technical problems in the related art.
Disclosure of Invention
The application provides a video watermark generation method and device, which improve video generation efficiency as well as video security and reliability.
In a first aspect, the present application provides a video watermark generating method, including:
the graphics processor responds to an encryption instruction triggered in the terminal device and parses a two-dimensional map corresponding to each video frame from a video to be processed stored in a video memory space;
converting a watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
and generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map, wherein the anti-counterfeiting image carries a video watermark corresponding to the watermark map.
In a second aspect, an embodiment of the present application provides a video watermark generating apparatus, where the apparatus is disposed in a graphics processor, and the apparatus includes:
the parsing module is used for responding to an encryption instruction triggered in the terminal device and parsing a two-dimensional map corresponding to each video frame from the video to be processed stored in the video memory space;
the conversion module is used for converting the watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
and the calling module is used for generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map, wherein the anti-counterfeiting image carries a video watermark corresponding to the watermark map.
In a third aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the video watermark generation method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to execute the video watermark generation method according to any one of the first aspect.
In the technical solution provided by the embodiments of the present application, for a to-be-processed video to which a video watermark needs to be added, the graphics processor, in response to an encryption instruction triggered in the terminal device, parses the two-dimensional map corresponding to each video frame from the to-be-processed video stored in the video memory space. The watermark map to be added is then converted in the video memory space from the time domain space to the frequency domain space to obtain a first frequency domain map, and the two-dimensional map corresponding to each video frame is converted from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame. Finally, an anti-counterfeiting image corresponding to each video frame is generated in the video memory space according to the first frequency domain map and the second frequency domain map, the anti-counterfeiting image carrying the video watermark corresponding to the watermark map.
In this solution, the two-dimensional map corresponding to each video frame of the video to be processed is extracted by the graphics processor and the video watermark is added to it, so that anti-counterfeiting encryption of the video is achieved without the drop in video generation efficiency caused by secondary processing of the video at the video generation end. This effectively improves video encryption efficiency and video generation efficiency, and improves video security and reliability.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a video watermark generation method according to an embodiment of the present application;
FIG. 2 is a schematic view of a video playback interface according to an embodiment of the present application;
FIG. 3 is a schematic view of another video playback interface according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video watermark according to an embodiment of the present application;
fig. 5 is a schematic diagram of a video watermark generation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
At present, in video sharing applications and other applications involving video content, how to resist the various attacks aimed at video, prevent video from being tampered with or even pirated, and protect the security and authenticity of video content has become an urgent technical problem. Video watermarks are commonly used for video anti-counterfeiting.
On the one hand, in the related art, a video watermark can be embedded into the LSB spatial domain by modifying pixels of a single video frame, producing an image difference that human eyes can hardly perceive and thereby realizing video anti-counterfeiting. However, this method has difficulty resisting geometric attacks and the effects of image post-processing such as video compression, overlay masks and filters; the video watermark is often destroyed for these reasons, and the anti-counterfeiting fails.
On the other hand, in the related art, a circular salient region that is invariant to translation, scaling and rotation can be detected in the geometric invariant domain of the image and the watermark embedded there, so as to resist geometric attacks and improve the robustness of the video watermark. However, the geometric transformation parameter template used by this method is easily attacked and removed, which likewise destroys the video watermark and defeats the anti-counterfeiting.
In addition, whichever anti-counterfeiting approach is adopted, the video has to undergo secondary processing at the video generation end, which reduces video generation efficiency.
To solve at least one of the above technical problems, embodiments of the present application provide a video watermark generation scheme.
The video watermark generation scheme provided by the embodiment of the application can be executed by an electronic device, and the electronic device can be a terminal device such as a mobile phone, a tablet computer, a computer device, an intelligent wearable device and the like. In an alternative embodiment, the electronic device may have installed thereon a service program for executing the video watermark generation scheme.
The embodiments of the present application may be executed by a terminal device running online learning software, through which a user can study a language online, for example in a Chinese, English, German or other word learning scenario, where the target word may be any word to be learned.
The embodiments of the present application can be applied to all kinds of video watermark loading scenarios. For example, while an original video is being played on a mobile phone, a video watermark is loaded into the original video synchronously for anti-counterfeiting. The video watermark can be loaded into the video image to indicate information such as the source and copyright of the video image. The videos involved in the present application are, for example, the teaching videos, exercise videos and test videos of various courses in the word learning scenario.
Fig. 1 is a flowchart of a video watermark generating method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
101. the graphics processor responds to an encryption instruction triggered in the terminal device and parses a two-dimensional map corresponding to each video frame from a video to be processed stored in a video memory space;
102. converting a watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
103. and generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map.
In the embodiment of the present application, the video to be processed refers to a video to which a video watermark needs to be added. Here, the video may be a video stream, or may be a video segment with a fixed duration, and the present application is not limited thereto. For example, the video to be processed may be a teaching video, a practice video, a test video, etc. in an online language learning scene. It is worth noting that to avoid affecting the video viewing effect, the video watermark may be invisible to the user. For example, assuming that a frame of to-be-processed image in the to-be-processed video is as shown in fig. 2, the anti-counterfeit image after the video watermark is added by the above steps is as shown in fig. 3.
In practical applications, the above steps 101 to 103 may be implemented when the terminal device is used to play a video. Therefore, the video watermark can be directly loaded when the video is played, and the watermark encryption process of the video is completed. Specifically, the steps 101 to 103 can be implemented in a video memory space by a Graphics Processing Unit (GPU) of the terminal device during video parsing, so that secondary processing such as encryption work does not need to be additionally performed at a video generation end, the generation efficiency of the video watermark is effectively improved, and the robustness of the video watermark is improved.
In this embodiment, the watermark generation process for the video to be processed may be initiated by a user or triggered automatically according to a preset policy. In some scenarios, after the user records or imports several videos, the terminal device may present a video list from which the user selects one as the video to be processed, thereby initiating the watermark generation process for it. In other scenarios, the terminal device may set a trigger condition for the watermark generation process according to a certain rule, and start the process for the video to be processed when the trigger condition is detected to be satisfied, for example when a video-recording-completed instruction (the trigger condition) is detected.
In this embodiment, in 101, in response to the encryption instruction triggered in the terminal device, a two-dimensional map (Texture2D) corresponding to each video frame is parsed from the to-be-processed video stored in the video memory space. A Texture2D is in fact a designated region of the video memory space to which video data can be drawn directly; this reduces CPU usage and avoids copying video data from video memory back to main memory for re-rendering, thereby improving video processing efficiency.
The event triggering the encryption instruction includes, but is not limited to, any one of a video playing event, a video downloading event and a video editing event for the video to be processed. A video playing event may, for example, be the user selecting a video to play on the terminal device, or an application playing a video automatically, such as an online education application automatically playing a lesson video after the user enters learning mode, or previewing a video the user has just shot; once the video playing event is triggered, the encryption instruction can be triggered to start the video watermark generation process and realize video anti-counterfeiting. Likewise, a video downloading event can trigger the encryption instruction so that the downloaded video is protected against theft, and a video clipping operation can trigger the encryption instruction synchronously so that anti-counterfeiting is applied during editing. Beyond these examples, the encryption instruction may be triggered in other ways, and the present application is not limited in this respect.
Specifically, in 101, in response to the encryption instruction triggered in the terminal device, a pixel image corresponding to each video frame is extracted from the to-be-processed video stored in the video memory space. The gray (luma) value and chroma value corresponding to each video frame are then acquired from the pixel image in a preset extraction manner. In practice, depending on the technical architecture of the terminal device (for example, an Android system or an iOS system), the preset extraction manner includes the CVMetalTextureCacheCreateTextureFromImage method or the CVOpenGLESTextureCacheCreateTextureFromImage method. Based on the gray value and chroma value corresponding to each video frame, the two-dimensional map corresponding to that video frame can then be generated.
It should be noted that the two-dimensional map is parsed in different ways on different types of terminal device. On a terminal device using the Android system, optionally, software decoding with a video decoding tool (such as FFmpeg, Fast Forward MPEG) may be used to parse the two-dimensional map corresponding to each video frame from the video to be processed. Specifically, the video to be processed is decoded by FFmpeg into a pixel image (denoted pixels on the Android system) for each video frame, and the gray value (Y) data and chroma value (UV) data are acquired from the pixels of each video frame through the CVOpenGLESTextureCacheCreateTextureFromImage method. The Y data and UV data can then be bound into a texture via glBindTexture, and the texture is rendered as the two-dimensional map by an OpenGL shader.
On a terminal device using the iOS system, optionally, hardware decoding with FFmpeg may be used to parse the two-dimensional map corresponding to each video frame from the video to be processed. Specifically, the video to be processed is decoded by FFmpeg into a pixel image (a CVPixelBufferRef on the iOS system) for each video frame. The gray value (Y) data and chroma value (UV) data are then acquired from the CVPixelBufferRef of each video frame through Metal's CVMetalTextureCacheCreateTextureFromImage method, and the two-dimensional map is created from the Y data and UV data.
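As a rough, platform-neutral illustration of what the Y/UV extraction and the shader step produce (not the patent's Metal/OpenGL pipeline), the following NumPy sketch turns a decoded NV12 frame into an RGB two-dimensional map; the NV12 layout, the BT.601 coefficients and the nearest-neighbour chroma upsampling are assumptions made for the example.

```python
import numpy as np

def nv12_to_rgb_map(nv12: np.ndarray, width: int, height: int) -> np.ndarray:
    """Illustrative CPU-side stand-in for the GPU shader that builds the two-dimensional map."""
    y = nv12[: width * height].reshape(height, width).astype(np.float32)
    uv = nv12[width * height:].reshape(height // 2, width // 2, 2).astype(np.float32)
    # Upsample the half-resolution chroma plane to full resolution (nearest neighbour).
    u = np.repeat(np.repeat(uv[:, :, 0], 2, axis=0), 2, axis=1) - 128.0
    v = np.repeat(np.repeat(uv[:, :, 1], 2, axis=0), 2, axis=1) - 128.0
    # Standard BT.601 YUV -> RGB conversion (full range assumed).
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

In the actual scheme this work happens on the GPU over textures held in video memory; the sketch only makes the data flow from the Y/UV planes to a renderable map explicit.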
Further, in 102, the watermark map to be added in the video memory space is converted from the time domain space to the frequency domain space to obtain the first frequency domain map, and the two-dimensional map corresponding to each video frame is converted from the time domain space to the frequency domain space to obtain the second frequency domain map corresponding to that video frame. The information carried by the watermark map to be added includes, but is not limited to, copyright information, a video identifier and the time corresponding to the video to be processed. The two frequency-domain conversions of step 102 are described below with reference to concrete implementations.
For the watermark map, converting the watermark map to be added in the video memory space from the time domain space to the frequency domain space to obtain the first frequency domain map may be implemented as follows: performing out-of-order (scrambling) encryption on the pixel positions of the watermark map in the video memory space according to a random key to obtain a scrambled watermark map; and converting the scrambled watermark map from the time domain space to the frequency domain space through a fast Fourier transform (FFT_FORWARD) to obtain the first frequency domain map.
Specifically, taking a terminal device using the iOS system as an example, the pixel positions of the watermark map are rearranged out of order according to the key rule corresponding to a random key, yielding the scrambled watermark map. The out-of-order arrangement corresponds to the key rule and, to ensure that the original watermark map can later be recovered, the scrambling is reversible. The FFT_FORWARD routine of the Accelerate framework may then be used to convert the scrambled watermark map from the time domain space to the frequency domain space, giving the corresponding frequency domain map, i.e., the first frequency domain map.
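A minimal sketch of this step, with NumPy standing in for the Accelerate FFT_FORWARD call: the key-seeded permutation is one possible reversible scrambling rule, and padding the scrambled watermark up to the frame's power-of-two size is an assumption made here so that the two spectra can later be superimposed element-wise; the patent fixes neither detail.

```python
import numpy as np

def scramble_and_fft(watermark: np.ndarray, key: int, padded_shape: tuple) -> np.ndarray:
    """Scramble a single-channel watermark map with a random key, then FFT it (first frequency domain map)."""
    rng = np.random.default_rng(key)            # the random key drives the pixel shuffle
    perm = rng.permutation(watermark.size)      # reversible out-of-order arrangement
    scrambled = watermark.flatten()[perm].reshape(watermark.shape)
    canvas = np.zeros(padded_shape, dtype=np.float64)
    canvas[: watermark.shape[0], : watermark.shape[1]] = scrambled
    return np.fft.fft2(canvas)                  # time (spatial) domain -> frequency domain
```

Keeping the key (and therefore the permutation it regenerates) is what later allows the original watermark map to be recovered.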
For the two-dimensional map, converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain the second frequency domain map may be implemented as follows: filling pixels around the two-dimensional map corresponding to each video frame so that the width and height of the two-dimensional map become a preset power of two; and converting the filled two-dimensional map from the time domain space to the frequency domain space through a fast Fourier transform to obtain the second frequency domain map.
Specifically, again taking a terminal device using the iOS system as an example, pixels with a value of 0 are filled around the two-dimensional map generated as a Texture2D so that the width and height of the filled map become powers of two. The filled two-dimensional map is then converted from the time domain space to the frequency domain space through the Accelerate FFT_FORWARD routine to obtain the corresponding frequency domain map, i.e., the second frequency domain map.
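A corresponding sketch of the frame-side transform, again using NumPy instead of Accelerate: pad a single-channel frame map with zeros until its width and height are powers of two, then FFT it. Padding only to the right and bottom is a simplification of "filling pixels around" the map.

```python
import numpy as np

def pad_and_fft(frame_map: np.ndarray) -> np.ndarray:
    """Zero-pad a single-channel frame map to power-of-two size and FFT it (second frequency domain map)."""
    h, w = frame_map.shape
    H = 1 << (h - 1).bit_length()               # next power of two >= h
    W = 1 << (w - 1).bit_length()               # next power of two >= w
    padded = np.zeros((H, W), dtype=np.float64)
    padded[:h, :w] = frame_map                  # pixels outside the frame stay 0
    return np.fft.fft2(padded)                  # time (spatial) domain -> frequency domain
```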
Finally, in 103, the anti-counterfeiting image corresponding to each video frame is generated in the video memory space according to the first frequency domain map and the second frequency domain map, which completes the loading of the video watermark. The anti-counterfeiting image containing the video watermark protects the video against attacks and improves the security of the video content.
Specifically, in an optional embodiment, the first frequency domain map and the second frequency domain map are superimposed in the frequency domain space to obtain an encrypted frequency domain map corresponding to each video frame, and the encrypted frequency domain map is converted from the frequency domain space back to the time domain space through an inverse fast Fourier transform to obtain the anti-counterfeiting image. Taking a terminal device using the iOS system as an example, the encrypted frequency domain map is converted from the frequency domain space to the time domain space by means of the Accelerate inverse fast Fourier transform (FFT_INVERSE), yielding the anti-counterfeiting image.
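A sketch of this embedding step under the same assumptions as the two sketches above; the strength factor alpha is an assumption introduced here to keep the watermark imperceptible, since the patent only specifies that the two maps are superimposed.

```python
import numpy as np

def embed(first_freq_map: np.ndarray, second_freq_map: np.ndarray,
          alpha: float = 0.05) -> np.ndarray:
    """Superimpose the watermark spectrum on the frame spectrum, then inverse-FFT back to the spatial domain."""
    encrypted_freq_map = second_freq_map + alpha * first_freq_map  # superposition in frequency space
    # Both spectra come from real-valued maps of the same padded size, so the inverse FFT is
    # (numerically) real; .real merely drops floating-point round-off.
    return np.fft.ifft2(encrypted_freq_map).real                   # anti-counterfeiting image
```

In this sketch the resulting image equals the original frame plus a faint, scrambled copy of the watermark, which is why the watermark remains invisible to the viewer (compare fig. 2 and fig. 3).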
After 103, optionally, the anti-counterfeiting image in the video memory space can be rendered and displayed in the video playing interface by the graphics processor (GPU). In practice, because the anti-counterfeiting image containing the video watermark is stored in the video memory space of the terminal device, it can be rendered quickly by the GPU of the terminal device and pushed to the video playing interface for playback.
In this embodiment, the two-dimensional map corresponding to each video frame of the video to be processed is extracted by the graphics processor and the video watermark is added to it, so that anti-counterfeiting encryption of the video is achieved without the drop in video generation efficiency caused by secondary processing of the video at the video generation end. This effectively improves video encryption efficiency and video generation efficiency, and improves video security and reliability.
In the above or the following embodiments, optionally, the original watermark map may be recovered in reverse from the anti-counterfeiting image containing the video watermark, so as to determine whether an image frame to be detected has been attacked.
Specifically, in practical applications, for a video to be detected, an image frame to be detected may be selected at random from the video, or a specific image frame may be selected according to a preset detection rule, for example one image frame every second.
After the image frame to be detected has been selected, it can be checked as follows. The anti-counterfeiting image corresponding to the image frame to be detected in the video memory space is converted from the time domain space to the frequency domain space through a fast Fourier transform to obtain a third frequency domain map containing the video watermark, and the original two-dimensional map corresponding to the image frame to be detected is converted from the time domain space to the frequency domain space through a fast Fourier transform to obtain a fourth frequency domain map. Once the third and fourth frequency domain maps have been obtained, the encrypted watermark map is extracted from the third frequency domain map according to the fourth frequency domain map; optionally, the difference between the third frequency domain map and the fourth frequency domain map may be calculated and the encrypted watermark map generated from that difference. The encrypted watermark map is then parsed into the original watermark map using the random key corresponding to the image frame to be detected. It will be appreciated that this parsing is simply the reverse of the out-of-order encryption described above and is not expanded upon here. Finally, the original watermark map is used to check the image frame to be detected and determine whether it has been attacked. For example, the original watermark map shown in fig. 4 carries the copyright information "AAABBB inc". Optionally, the above steps may be implemented through the vDSP/Accelerate framework of the iOS system.
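A sketch of this reverse path that mirrors the embedding sketches above; it assumes the suspect frame and the original map share the same power-of-two padded size, and that alpha, the key and the permutation rule match those used at embedding time. Real frames are also quantized and often re-compressed, so in practice the recovery is approximate rather than exact.

```python
import numpy as np

def extract(anti_counterfeit: np.ndarray, original_padded_map: np.ndarray,
            key: int, wm_shape: tuple, alpha: float = 0.05) -> np.ndarray:
    """Recover the original watermark map from a frame carrying the watermark."""
    third_freq_map = np.fft.fft2(anti_counterfeit)      # spectrum of the frame with the watermark
    fourth_freq_map = np.fft.fft2(original_padded_map)  # spectrum of the original, watermark-free map
    diff = (third_freq_map - fourth_freq_map) / alpha   # encrypted (scrambled) watermark spectrum
    scrambled = np.fft.ifft2(diff).real[: wm_shape[0], : wm_shape[1]]
    rng = np.random.default_rng(key)                    # regenerate the same permutation
    perm = rng.permutation(scrambled.size)
    original_wm = np.empty(scrambled.size)
    original_wm[perm] = scrambled.ravel()               # invert the out-of-order arrangement
    return original_wm.reshape(wm_shape)                # to be compared with the stored watermark map
```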
Optionally, whether the image frame to be detected has been attacked can be determined by comparing the recovered original watermark map with the pre-stored watermark map. If the two are inconsistent, the image frame to be detected has been attacked, for example tampered with or stolen; in that case the user may be alerted and a preset policy may be applied to carry out the corresponding defense processing.
In this embodiment, whether the image frame to be detected has been attacked is determined by recovering the original watermark map, which provides a basis for video security defense, further improves video security, and protects the copyright of the video content.
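One possible way to make the "consistent or not" decision concrete (an assumption for illustration; the patent does not prescribe a comparison metric) is a normalized correlation between the recovered and the stored watermark maps, flagging the frame as attacked when the score drops below a threshold.

```python
import numpy as np

def is_attacked(recovered: np.ndarray, stored: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare the recovered watermark map with the pre-stored one via normalized correlation."""
    a = recovered.astype(np.float64).ravel() - recovered.mean()
    b = stored.astype(np.float64).ravel() - stored.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    score = float(a @ b) / denom if denom > 0 else 0.0
    return score < threshold        # low similarity -> watermark damaged, frame likely attacked
```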
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 101 to 103 may be device a; for another example, the execution subject of steps 101 and 102 may be device a, and the execution subject of step 103 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, 103, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The video watermark generation apparatus of one or more embodiments of the present application will be described in detail below. Those skilled in the art will appreciate that these video watermark generation devices can each be configured using commercially available hardware components through the steps taught by the present scheme.
In still another embodiment of the present application, there is also provided a video watermark generation apparatus, as shown in fig. 5, provided in a graphics processor, the apparatus including:
the parsing module 501 is configured to parse, in response to an encryption instruction triggered in the terminal device, a two-dimensional map corresponding to each video frame from a to-be-processed video stored in a video memory space;
a converting module 502, configured to convert the watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and convert the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
a calling module 503, configured to generate, in the video memory space, an anti-counterfeiting image corresponding to each video frame according to the first frequency domain map and the second frequency domain map, where the anti-counterfeiting image carries a video watermark corresponding to the watermark map.
Optionally, the parsing module 501 is specifically configured to: responding to an encryption instruction triggered in the terminal equipment, and extracting pixel images corresponding to all video frames from the video to be processed; acquiring a gray value and a chromatic value corresponding to each video frame from the pixel image in a preset extraction mode; and generating a two-dimensional map corresponding to each video frame based on the gray value and the chromatic value corresponding to each video frame.
Optionally, the preset extraction manner includes the CVMetalTextureCacheCreateTextureFromImage method or the CVOpenGLESTextureCacheCreateTextureFromImage method.
Optionally, the event triggering the encryption instruction includes any one of a video playing event, a video downloading event, and a video editing event for the video to be processed.
Optionally, in converting the watermark map to be added from the time domain space to the frequency domain space to obtain the first frequency domain map, the conversion module 502 is specifically configured to: perform out-of-order encryption on the pixel positions of the watermark map in the video memory space according to a random key to obtain the scrambled watermark map; and convert the scrambled watermark map from the time domain space to the frequency domain space through a fast Fourier transform to obtain the first frequency domain map.
Optionally, in converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain the second frequency domain map corresponding to each video frame, the conversion module 502 is specifically configured to: fill pixels around the two-dimensional map corresponding to each video frame so that the width and height of the two-dimensional map become a preset power of two; and convert the filled two-dimensional map from the time domain space to the frequency domain space through a fast Fourier transform to obtain the second frequency domain map.
Optionally, the calling module 503 is specifically configured to: superimpose the first frequency domain map and the second frequency domain map in the frequency domain space to obtain the encrypted frequency domain map corresponding to each video frame; and convert the encrypted frequency domain map from the frequency domain space to the time domain space through an inverse fast Fourier transform to obtain the anti-counterfeiting image.
Optionally, the apparatus further includes a rendering module configured to render and display the anti-counterfeiting image in a video playing interface through the graphics processor.
Optionally, the system further comprises a detection module, configured to convert, through fast fourier transform, an anti-counterfeit image corresponding to the image frame to be detected in the video memory space from a time domain space to a frequency domain space, so as to obtain a third frequency domain map including the video watermark; converting the original two-dimensional map corresponding to the image frame to be detected from a time domain space to a frequency domain space through fast Fourier transform to obtain a fourth frequency domain map; extracting an encrypted watermark map from the third frequency domain map according to the fourth frequency domain map; analyzing the encrypted watermark mapping into an original watermark mapping by adopting a random key corresponding to the image frame to be detected; and detecting the image frame to be detected by adopting the original watermark mapping so as to determine whether the image frame to be detected is attacked or not.
In yet another embodiment of the present application, there is also provided an electronic device including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the video watermark generation method in the embodiment of the method when executing the program stored in the memory.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like.
For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The Memory 1130 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor 1110 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A method for video watermark generation, the method comprising:
the graphics processor responds to an encryption instruction triggered in the terminal equipment and analyzes a two-dimensional map corresponding to each video frame from a video to be processed stored in a video memory space;
converting the watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
and generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map, wherein the anti-counterfeiting image carries a video watermark corresponding to the watermark map.
2. The method according to claim 1, wherein the analyzing the two-dimensional map corresponding to each video frame from the video to be processed stored in the video memory space in response to the encryption command triggered in the terminal device comprises:
responding to an encryption instruction triggered in terminal equipment, and extracting pixel images corresponding to all video frames from the video to be processed;
acquiring a gray value and a chromatic value corresponding to each video frame from the pixel image in a preset extraction mode;
and generating a two-dimensional map corresponding to each video frame based on the gray value and the chromatic value corresponding to each video frame.
3. The method according to claim 1, wherein the event triggering the encryption instruction comprises any one of a video playing event, a video downloading event and a video editing event for the video to be processed.
4. The method of claim 1, wherein converting the to-be-added watermark map in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map comprises:
carrying out out-of-order encryption processing on pixel positions in the watermark map in the video memory space according to a random key to obtain the watermark map after the out-of-order encryption processing;
and converting the watermark map subjected to the out-of-order encryption processing from a time domain space to a frequency domain space by using fast Fourier transform to obtain the first frequency domain map.
5. The method of claim 1, wherein converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame comprises:
filling pixels around the two-dimensional map corresponding to each video frame so as to change the width and the height of the two-dimensional map into a preset power of two;
and converting the filled two-dimensional map from a time domain space to a frequency domain space by fast Fourier transform to obtain the second frequency domain map.
6. The method according to claim 1, wherein the generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map comprises:
superposing the first frequency domain graph and the second frequency domain graph in a frequency domain space to obtain an encrypted frequency domain graph corresponding to each video frame;
and converting the encrypted frequency domain graph from a frequency domain space to a time domain space through inverse fast Fourier transform to obtain the anti-counterfeiting image.
7. The method of claim 1, further comprising: and rendering and displaying the anti-counterfeiting image to a video playing interface through a graphic processor.
8. The method of claim 1, further comprising:
converting the anti-counterfeiting image corresponding to the image frame to be detected in the video memory space from a time domain space to a frequency domain space through fast Fourier transform to obtain a third frequency domain image containing the video watermark;
converting the original two-dimensional map corresponding to the image frame to be detected from a time domain space to a frequency domain space through fast Fourier transform to obtain a fourth frequency domain map;
extracting an encrypted watermark map from the third frequency domain map according to the fourth frequency domain map;
analyzing the encrypted watermark mapping into an original watermark mapping by adopting a random key corresponding to the image frame to be detected;
and detecting the image frame to be detected by adopting the original watermark mapping so as to determine whether the image frame to be detected is attacked or not.
9. A video watermark generation apparatus, the apparatus being provided in a graphics processor, the apparatus comprising:
the analysis module is used for responding to an encryption instruction triggered in the terminal equipment and analyzing a two-dimensional map corresponding to each video frame from the video to be processed stored in the video memory space;
the conversion module is used for converting the watermark map to be added in the video memory space from a time domain space to a frequency domain space to obtain a first frequency domain map, and converting the two-dimensional map corresponding to each video frame from the time domain space to the frequency domain space to obtain a second frequency domain map corresponding to each video frame;
and the calling module is used for generating an anti-counterfeiting image corresponding to each video frame in the video memory space according to the first frequency domain map and the second frequency domain map, and the anti-counterfeiting image carries a video watermark corresponding to the watermark map.
10. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform a video watermark generation method as claimed in any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111518217.8A | 2021-12-10 | 2021-12-10 | Video watermark generation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111518217.8A | 2021-12-10 | 2021-12-10 | Video watermark generation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114245053A | 2022-03-25 |
Family
ID=80755266
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111518217.8A | 2021-12-10 | 2021-12-10 | Video watermark generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114245053A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040091050A1 (en) * | 2000-09-09 | 2004-05-13 | Jong-Uk Choi | Digital image watermarking apparatus and method |
CN108257073A (en) * | 2018-01-17 | 2018-07-06 | 北京新东方迅程网络科技股份有限公司 | A kind of invisible watermark embedding grammar and Blind extracting method |
Non-Patent Citations (1)
Title |
---|
ZHANG Yu, "Research on a Video Protection System Based on the Android Platform" (基于Android平台的视频保护系统的研究), Master's Theses Full-text Electronic Database, pages 3-6 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116883223A (en) * | 2023-05-29 | 2023-10-13 | 广州欢聚马克网络信息有限公司 | Image watermark synthesis method and device, equipment, medium and product thereof |
CN116883223B (en) * | 2023-05-29 | 2024-04-16 | 广州欢聚马克网络信息有限公司 | Image watermark synthesis method and device, equipment, medium and product thereof |
Similar Documents
Publication | Title |
---|---|
US9996891B2 (en) | System and method for digital watermarking | |
CN101273367B (en) | Covert and robust mark for media identification | |
CN110896484B (en) | Video watermark adding and extracting method and device, video playing end and storage medium | |
CN111145069B (en) | Image watermarking processing method and device based on block chain | |
CN111223032A (en) | Watermark embedding method, watermark extracting method, watermark embedding device, watermark extracting equipment and data processing method | |
CN111310134B (en) | Screen watermark generation method, device and equipment | |
US10936791B2 (en) | Dynamically changing text wherein if text is altered unusual shapes appear | |
CN108921266B (en) | Static two-dimensional code encryption display method and device based on image segmentation | |
CN110489943B (en) | Page watermark tamper-proofing method and system | |
CN110969563B (en) | Watermark embedding method, watermark extracting method, watermark embedding device, watermark extracting equipment and data processing method | |
CN114245053A (en) | Video watermark generation method and device | |
CN110874456A (en) | Watermark embedding method, watermark extracting method, watermark embedding device, watermark extracting device and data processing method | |
WO2012164361A1 (en) | Frequency-modulated watermarking | |
US20140049802A1 (en) | System and method for digital analysis of authenticity and integrity of documents using portable devices | |
CN115774866A (en) | Blind watermark adding method and device, processor and electronic equipment | |
CN106557529A (en) | Jump page methods of exhibiting and device in page jump | |
CN114004726A (en) | Watermark display method, watermark display device, computer equipment and storage medium | |
CN111640051B (en) | Image processing method and device | |
CN114331841A (en) | Content picture processing method, system, terminal and storage medium | |
WO2015079790A1 (en) | Character string recognition device and character string recognition method | |
CN114070950B (en) | Image processing method, related device and equipment | |
CN113139163B (en) | Method and device for generating watermark information on application program interface | |
CN112949800B (en) | Method, apparatus and storage medium for generating, playing and processing graphic code video | |
JP6172745B2 (en) | Digital watermark detection method | |
Cichowski et al. | Visual data encryption for privacy enhancement in surveillance systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |