CN117061768B - Video watermark processing method, video watermark processing device, electronic equipment and storage medium


Info

Publication number: CN117061768B
Application number: CN202311320719.9A (priority application)
Authority: CN (China)
Other versions: CN117061768A
Prior art keywords: watermark, blocks, video, region, block
Inventor: 刘华罗
Original and current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Active (granted)
Classifications

    • H04N 19/467: Embedding additional information in the video signal during the compression process, characterised by the embedded information being invisible, e.g. watermarking
    • H04N 19/96: Tree coding, e.g. quad-tree coding
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/8358: Generation of protective data, e.g. certificates, involving watermark


Abstract

The embodiment of the application provides a video watermark processing method and device, an electronic device, and a storage medium, applicable at least to the security and digital watermarking fields. The method comprises: iteratively performing quadtree partitioning multiple times on each video frame of an original video to which watermark information is to be added, obtaining a plurality of region blocks; computing a region energy value for each region block; selecting a specific number of target region blocks from the region blocks according to their energy values; for each video frame, embedding each bit of the bit stream corresponding to the watermark information into a target region block of the frame in sequence, obtaining a watermarked video frame; and video-encoding all watermarked video frames to obtain the watermarked video. The method improves the stability and concealment of the watermark, so that the embedded watermark has high robustness.

Description

Video watermark processing method, video watermark processing device, electronic equipment and storage medium
Technical Field
The embodiments of the present application relate to the Internet field, and in particular to a video watermark processing method, a video watermark processing device, an electronic device, and a storage medium.
Background
In recent years, with the development of computer network technology, people can conveniently watch, download, and record video works, and large numbers of video works are spread widely across networks. Digital watermarking technology was developed to protect the copyright of digital works; digital watermarking refers to techniques for adding a watermark to an image.
At present, a common digital watermarking technique is the Least Significant Bit (LSB) algorithm, which modifies the least significant bits of image data in the spatial domain in order to embed multi-bit watermark information in an image. The LSB algorithm typically uses modifications to multiple pixel values to represent one bit of information. Since a video file consists of a large number of video frames and can be regarded as a stack of images, the LSB algorithm can be applied to each frame individually to embed a digital watermark.
However, a watermark embedded in an image with the LSB algorithm is fragile: video encoding and decoding compress away a large amount of information, so much of the watermark information is lost and the information embedded by the LSB algorithm cannot be effectively retained. Compression is a common disturbance for images, and the LSB algorithm cannot effectively resist it. How to add watermarks to images so as to improve watermark robustness has therefore become a research hotspot.
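The spatial-domain LSB embedding described in this background can be sketched as follows. This is a minimal illustration of the background technique only, not the method of the present application, and the function names are invented for the example:

```python
import numpy as np

def lsb_embed(gray, bits):
    """Write one watermark bit into the least significant bit of each of the
    first len(bits) pixels (minimal spatial-domain LSB sketch)."""
    flat = gray.flatten()  # flatten() returns a copy, so the input is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set it to the bit
    return flat.reshape(gray.shape)

def lsb_extract(gray, n_bits):
    """Read the watermark back out of the least significant bits."""
    return [int(p) & 1 for p in gray.flatten()[:n_bits]]
```

Because re-encoding the video perturbs exactly these low-order bits, such a watermark rarely survives compression, which is the fragility the paragraph above describes.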
Disclosure of Invention
The embodiments of the present application provide a video watermark processing method and device, an electronic device, and a storage medium, applicable at least to the security and digital watermarking fields, which can improve the stability and concealment of the watermark so that the embedded watermark has high robustness.
The technical scheme of the embodiment of the application is realized as follows:
An embodiment of the present application provides a video watermarking method, comprising: extracting frames from an original video to which watermark information is to be added, obtaining a plurality of video frames; iteratively performing quadtree partitioning multiple times on each video frame, obtaining a plurality of region blocks corresponding to the frame, the region blocks having at least two grid size parameters; computing a region energy value for each region block based on the pixel values of the pixels it contains; selecting a specific number of target region blocks from the region blocks according to their energy values; for each video frame, embedding each bit of the bit stream corresponding to the watermark information into a target region block of the frame in sequence, obtaining a watermarked video frame; and video-encoding all watermarked video frames to obtain the watermarked video.
An embodiment of the present application provides a video watermarking method, comprising: extracting frames from the watermarked video, obtaining a plurality of watermark video frames; iteratively performing quadtree partitioning multiple times on each watermark video frame, obtaining a plurality of watermark region blocks corresponding to the frame, the watermark region blocks having at least two grid size parameters; computing a region energy value for each watermark region block based on the pixel values of the pixels it contains; selecting a specific number of target watermark region blocks from the watermark region blocks according to their energy values; for each watermark video frame, extracting the bit information of the bit stream corresponding to the watermark information from each target watermark region block in sequence; and determining the watermark information embedded in the watermarked video based on the bits extracted from all watermark video frames.
An embodiment of the present application provides a video watermarking apparatus, comprising: a first frame extraction module, configured to extract frames from an original video to which watermark information is to be added, obtaining a plurality of video frames; a first partitioning module, configured to iteratively perform quadtree partitioning multiple times on each video frame, obtaining a plurality of region blocks corresponding to the frame, the region blocks having at least two grid size parameters; a first energy calculation module, configured to compute a region energy value for each region block based on the pixel values of the pixels it contains; a first screening module, configured to select a specific number of target region blocks from the region blocks according to their energy values; an information embedding module, configured to embed, for each video frame, each bit of the bit stream corresponding to the watermark information into a target region block of the frame in sequence, obtaining a watermarked video frame; and a video encoding module, configured to video-encode all watermarked video frames to obtain the watermarked video.
In some embodiments, the first frame extraction module is further configured to: perform frame rate unification on the original video, obtaining a frame-rate-unified video with a specific frame rate; and extract frames from the unified video at equal time intervals, or at non-equal time intervals, to obtain the plurality of video frames.
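Equal-time-interval frame extraction reduces to choosing frame indices from the frame rate. A small sketch; the helper name and parameters are illustrative, not from the patent:

```python
def sample_frame_indices(total_frames, fps, interval_seconds):
    """Return the indices of the frames to extract when sampling one frame
    every `interval_seconds` from a video running at `fps` frames/second."""
    step = max(1, round(fps * interval_seconds))  # never step by less than 1 frame
    return list(range(0, total_frames, step))
```

For example, a 4-second clip at 25 fps sampled once per second yields indices 0, 25, 50, 75.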
In some embodiments, the apparatus further comprises: a watermark encoding module, configured to encode the watermark information to obtain the watermark code corresponding to it; a conversion module, configured to convert the bit representation of the watermark code into bit stream form, obtaining the bit stream corresponding to the watermark information; a bit length determination module, configured to determine the bit length of that bit stream; and a number determination module, configured to determine, from the bit length, the specific number of target region blocks.
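Converting the encoded watermark into a bit stream, as the conversion module does, can look like the sketch below. UTF-8 bytes and most-significant-bit-first order are assumptions for the example; the patent does not fix a particular encoding:

```python
def watermark_to_bitstream(watermark: str) -> list:
    """Encode the watermark text and unpack each byte into 8 bits, MSB first."""
    data = watermark.encode("utf-8")
    return [(byte >> shift) & 1 for byte in data for shift in range(7, -1, -1)]
```

The length of this list is the bit length from which the number determination module derives the specific number of target region blocks needed per frame.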
In some embodiments, the first partitioning module is further configured to: on the first quadtree partition of a video frame, divide the image of the video frame into four region blocks with the same grid size parameter; on the nth quadtree partition (n an integer greater than 1), take each region block obtained from the (n-1)th partition and divide its image again into four region blocks with the same grid size parameter. The plurality of region blocks comprises the region blocks obtained after every quadtree partition.
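The iterated quadtree partitioning just described can be sketched as follows, with blocks represented as (x, y, width, height) tuples. This is a simplified illustration that always splits a block at its midpoint:

```python
def quadtree_partition(width, height, iterations):
    """Collect the region blocks from every round of quadtree splitting:
    round 1 yields 4 blocks, round 2 yields 16 more, and so on."""
    all_blocks = []
    current = [(0, 0, width, height)]
    for _ in range(iterations):
        next_level = []
        for (x, y, w, h) in current:
            hw, hh = w // 2, h // 2
            next_level += [
                (x, y, hw, hh),                     # top-left
                (x + hw, y, w - hw, hh),            # top-right
                (x, y + hh, hw, h - hh),            # bottom-left
                (x + hw, y + hh, w - hw, h - hh),   # bottom-right
            ]
        all_blocks.extend(next_level)
        current = next_level
    return all_blocks
```

Two rounds on a 256x256 frame give 4 + 16 = 20 blocks at two distinct grid sizes, which matches the requirement that the region blocks have at least two grid size parameters.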
In some embodiments, the image of the video frame is a grayscale image, and the first energy calculation module is further configured to: for each region block in the grayscale image, obtain the gray value of each pixel in the region block, and determine the energy value of the region block based on the gray values and the total number of pixels in the block.
In some embodiments, the first energy calculation module is further configured to: determine the square of the gray value of each pixel; sum the squared gray values of all pixels in the region block, obtaining a sum of squared gray values; and determine the ratio of that sum to the total number of pixels in the region block as the energy value of the region block.
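The energy value defined above (sum of squared gray values divided by the pixel count, i.e. the mean squared intensity of the block) translates directly to code:

```python
import numpy as np

def region_energy(block):
    """Energy of a region block: sum of squared gray values divided by the
    total number of pixels in the block."""
    b = block.astype(np.float64)  # avoid uint8 overflow when squaring
    return float((b ** 2).sum() / b.size)
```

High-energy blocks tend to be bright or strongly textured regions, where frequency-domain modifications are less perceptible.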
In some embodiments, the first screening module is further configured to: sort all region blocks by energy value in descending order, obtaining a region block sequence; and select the specific number of target region blocks from the sequence in order, such that no two of the selected target region blocks overlap.
In some embodiments, the first screening module is further configured to: when the specific number is N, take the first N region blocks of the sequence as initially selected target region blocks; determine whether any of the N initially selected blocks overlap; if at least two blocks overlap, keep, among them, the block with the largest grid size parameter and delete the others; after deleting, continue down the region block sequence, selecting as many additional initially selected target region blocks as were deleted; and when none of the N initially selected target region blocks overlap, take them as the finally selected target region blocks.
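The overlap-free selection can be sketched as a single greedy pass over the energy-sorted sequence. This is a simplified variant of the delete-and-refill procedure above (one pass gives the same result when ties in grid size are ignored); blocks are (x, y, w, h) tuples:

```python
def overlaps(a, b):
    """True if two (x, y, w, h) blocks share any area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def select_target_blocks(blocks, energies, n):
    """Walk the blocks in descending energy order, keeping a block only if it
    overlaps none already kept, until n target blocks are selected."""
    order = sorted(range(len(blocks)), key=lambda i: -energies[i])
    chosen = []
    for i in order:
        if all(not overlaps(blocks[i], blocks[c]) for c in chosen):
            chosen.append(i)
            if len(chosen) == n:
                break
    return [blocks[i] for i in chosen]
```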
In some embodiments, the information embedding module is further configured to: for each video frame, sort the specific number of target region blocks in the frame by energy value in descending order, forming a target region block sequence; divide each target region block in the sequence into four quadrant sub-regions; take the sub-region at a specific position among the four as the information embedding sub-region of the block; and embed each bit of the bit stream corresponding to the watermark information, in order, into the information embedding sub-region of the corresponding target region block in the sequence, obtaining the watermarked video frame.
In some embodiments, the information embedding module is further configured to: convert the pixels of each target region block in the target region block sequence from the spatial domain to the frequency domain, obtaining a plurality of frequency domain coefficients for the block; determine the region center position and the sub-region side length of the block's information embedding sub-region; if the bit to be embedded from the bit stream corresponding to the watermark information is of the first type, construct a first-type circle in the information embedding sub-region, centered at the sub-region's center position, with radius equal to a first proportion of the sub-region side length, and increment the frequency domain coefficient at each position on the circumference of the first-type circle by a preset frequency domain coefficient increment value, obtaining first-processed frequency domain coefficients, where the target region block with the first-processed coefficients constitutes a target region block in which the first type of bit is embedded; if the bit to be embedded is of the second type, construct a second-type circle centered at the sub-region's center position, with radius equal to a second proportion of the sub-region side length, the second proportion differing from the first, and increment the frequency domain coefficient at each position on the circumference of the second-type circle by the same increment value, obtaining second-processed frequency domain coefficients,
where the target region block with the second-processed coefficients constitutes a target region block in which the second type of bit is embedded.
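A sketch of this frequency-domain embedding: one bit selects one of two circle radii, and the coefficient values along that circle are raised. The ratios and the increment value below are illustrative placeholders, not the patent's parameters, and for simplicity the circle is drawn over the whole FFT-shifted block rather than inside one quadrant sub-region:

```python
import numpy as np

def embed_bit_in_block(block, bit, ratio0=0.25, ratio1=0.40, delta=60.0):
    """Raise the DFT coefficients on a circle whose radius encodes `bit`,
    then convert the block back to the spatial domain."""
    side = block.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(block.astype(np.float64)))
    center = side // 2
    radius = side * (ratio1 if bit else ratio0)
    for theta in np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False):
        x = int(round(center + radius * np.cos(theta)))
        y = int(round(center + radius * np.sin(theta)))
        if 0 <= x < side and 0 <= y < side:
            spectrum[y, x] += delta  # increment the coefficient on the circle
    spatial = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(spatial, 0, 255)
```

Because the change is spread over many mid-frequency coefficients rather than individual pixel bits, the perturbation per pixel is small, which is why this style of embedding tolerates compression better than LSB modification.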
In some embodiments, the apparatus further comprises: a domain conversion module, configured to convert the target region blocks with the first-processed frequency domain coefficients and the target region blocks with the second-processed frequency domain coefficients back from the frequency domain to the spatial domain by inverse Fourier transform, obtaining a plurality of target region blocks in which the watermark information is embedded; and a region overlaying module, configured to place each watermark-embedded target region block back at its original position in the video frame, obtaining the watermarked video frame.
An embodiment of the present application provides a video watermarking apparatus, comprising: a second frame extraction module, configured to extract frames from the watermarked video, obtaining a plurality of watermark video frames; a second partitioning module, configured to iteratively perform quadtree partitioning multiple times on each watermark video frame, obtaining a plurality of watermark region blocks corresponding to the frame, the watermark region blocks having at least two grid size parameters; a second energy calculation module, configured to compute a region energy value for each watermark region block based on the pixel values of the pixels it contains; a second screening module, configured to select a specific number of target watermark region blocks from the watermark region blocks according to their energy values; an information extraction module, configured to extract, for each watermark video frame, the bit information of the bit stream corresponding to the watermark information from each target watermark region block in sequence; and a watermark information determination module, configured to determine the watermark information embedded in the watermarked video based on the bits extracted from all watermark video frames.
An embodiment of the present application provides an electronic device, comprising: a memory for storing executable instructions; and a processor that implements the video watermarking method when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer program product comprising executable instructions stored in a computer-readable storage medium; when a processor of an electronic device reads the executable instructions from the storage medium and executes them, the video watermarking method is implemented.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the video watermarking method.
The embodiment of the application has the following beneficial effects:
When embedding a watermark, the video frames of the original video to which watermark information is to be added are partitioned by multiple rounds of quadtree partitioning into a plurality of region blocks of different sizes, and the energy value of each region block is then computed to determine which regions are suitable for embedding watermark information, so that the most suitable region blocks are found and modified to carry the watermark. This ensures that the embedded watermark information is stably retained in the video frame, improving the stability and concealment of the watermark and giving the embedded watermark high robustness. In addition, by selecting only a specific number of target region blocks, from among the many blocks produced by the quadtree partitions, for the subsequent embedding operation, modification of the pixels of the whole image is avoided; this reduces the perceptual impact of the watermark on the image, further improves its concealment and stability, raises embedding efficiency, and saves processing resources.
Drawings
Fig. 1 is a schematic diagram of an alternative architecture of a video watermarking system according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of an alternative video watermarking method according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of another alternative video watermarking method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the implementation process for determining the energy values of region blocks according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the implementation process for selecting a specific number of target region blocks according to an embodiment of the present application;
Fig. 7 is a schematic diagram of region blocks with overlapping regions provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of the implementation process for obtaining a watermarked video frame according to an embodiment of the present application;
Fig. 9 is a schematic diagram of incrementing frequency domain coefficients according to an embodiment of the present application;
Fig. 10 is a schematic flow chart of the watermark extraction process of the video watermarking method according to an embodiment of the present application;
Fig. 11 is a schematic diagram of quadtree partitioning provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments; it is to be understood that "some embodiments" may refer to the same subset or to different subsets of all possible embodiments, and that these may be combined with one another where no conflict arises. Unless defined otherwise, all technical and scientific terms used in the embodiments of the present application have the meanings commonly understood by those of ordinary skill in the art to which the embodiments belong. The terminology used herein serves only to describe the embodiments of the present application and is not intended to limit the application.
In view of the problem in the related art that watermark information embedded into video frames with the LSB algorithm is fragile, since video encoding and decoding compress away a large amount of information so that the embedded watermark cannot be effectively retained, the embodiment of the present application provides a video watermark processing method. The method adaptively searches, according to the image content, for target region blocks in which to embed the watermark information; this reduces the perceptual impact of the watermark and improves its robustness. After the video is encoded and compressed, or the image undergoes other processing, the watermark information is still effectively retained, so the watermark can resist the various disturbances an image or video encounters during transmission.
Specifically, in the video watermarking method provided by the embodiment of the application, firstly, frame extraction is performed on an original video to be added with watermark information to obtain a plurality of video frames; performing multiple quadtree division on each video frame iteration to obtain multiple region blocks corresponding to the video frames; the plurality of region blocks having at least two grid size parameters; then, based on the pixel values of the pixel points contained in each regional block, regional energy calculation is carried out on the corresponding regional block, and the energy value of each regional block is obtained; screening a specific number of target area blocks from the plurality of area blocks according to the energy value of each area block; then, for each video frame, embedding each bit information in the bit stream corresponding to the watermark information into a target area block in the video frame in sequence to obtain the video frame with the embedded watermark; and finally, carrying out video coding on all the video frames with the embedded watermarks to obtain the video with the embedded watermarks.
Here, an exemplary application of the video watermarking apparatus of the embodiment of the present application is first described; the apparatus is the electronic device that implements the video watermarking method. The video watermarking apparatus (i.e. the electronic device) provided in the embodiments of the present application may be implemented as a terminal or as a server. In one implementation, it may be any terminal with image processing and video watermarking functions, such as a notebook computer, tablet computer, desktop computer, mobile phone, portable music player, personal digital assistant, dedicated messaging device, portable game device, intelligent robot, smart home appliance, or smart in-vehicle device. In another implementation, it may be a server: an independent physical server, a server cluster or distributed system formed from multiple physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal and the server may be connected directly or indirectly, by wired or wireless communication, which the embodiments of the present application do not limit. In the following, an exemplary application in which the video watermarking apparatus is implemented as a server is described.
Referring to Fig. 1, Fig. 1 is an optional architecture diagram of a video watermarking system according to an embodiment of the present application. To embed a watermark into an original video and extract the embedded watermark from a watermarked video, a video watermarking application may be provided that has at least a watermark embedding function and a watermark extraction function, or one that has at least a watermark embedding function.
The embodiment of the present application is described taking a video watermarking application as an example. The video watermarking system 10 comprises at least a terminal 100, a network 200, and a server 300; the video watermarking application runs on the terminal 100, and the server 300 is the server of that application. The server 300 may constitute the video watermarking apparatus of the embodiments of the present application, that is, the video watermarking method of the embodiments is implemented by the server 300. The terminal 100 is connected to the server 300 through the network 200, which may be a wide area network, a local area network, or a combination of the two.
When video watermarking is performed, a user can perform input operation on a client of a video watermarking application, namely, input an original video to be added with watermark information through the client of the video watermarking application, input the watermark information to be added, and then add the watermark information into the original video by adopting the video watermarking method provided by the embodiment of the application to generate a video with embedded watermark. In some embodiments, a video with embedded watermark may also be input through a client of the video watermarking application, and watermark information in the video may be extracted by using the video watermarking method provided in the embodiments of the present application to obtain watermark information embedded in the video.
When video watermark embedding is performed, the terminal 100 may, in response to an input operation of a user, encapsulate the original video to be watermarked and the watermark information into a video watermark processing request, which may be a video watermark embedding request. Referring to fig. 1, the terminal 100 transmits the video watermarking request to the server 300 through the network 200. After receiving the video watermarking request, the server 300 responds to it and performs frame extraction on the original video to which the watermark information is to be added, obtaining a plurality of video frames; iteratively performs quadtree division multiple times on each video frame to obtain a plurality of region blocks corresponding to the video frame; then, based on the pixel values of the pixel points contained in each region block, performs region energy calculation on the corresponding region block to obtain the energy value of each region block; screens a specific number of target region blocks from the plurality of region blocks according to the energy value of each region block; then, for each video frame, embeds each piece of bit information in the bit stream corresponding to the watermark information into one target region block in the video frame in sequence, obtaining the video frame with the watermark embedded; and finally, performs video encoding on all the watermark-embedded video frames to obtain the video with the watermark embedded. After obtaining the watermark-embedded video, the server 300 may transmit it to the terminal 100, and the terminal 100 may display the watermark-embedded video on the current interface.
When video watermark extraction is performed, the terminal 100 may, in response to an input operation of a user, encapsulate the watermark-embedded video into a video watermark processing request, which may be a video watermark extraction request. The terminal 100 transmits the video watermarking request to the server 300 through the network. After receiving the video watermarking request, the server 300 responds to it and performs frame extraction on the watermark-embedded video to obtain a plurality of watermark video frames; iteratively performs quadtree division multiple times on each watermark video frame to obtain a plurality of watermark region blocks corresponding to the watermark video frame; then, based on the pixel values of the pixel points contained in each watermark region block, performs region energy calculation on the corresponding watermark region block to obtain the energy value of each watermark region block; screens a specific number of target watermark region blocks from the plurality of watermark region blocks according to the energy value of each watermark region block; then, for each watermark video frame, extracts the bit information of the bit stream corresponding to the watermark information from each target watermark region block in sequence; and finally, determines the watermark information embedded in the watermark-embedded video based on the bit information extracted from all the watermark video frames. After extracting the watermark information, the server 300 may transmit the extracted watermark information to the terminal 100, and the terminal 100 may display it on the current interface.
In some embodiments, the video watermarking method of the embodiments of the present application may also be executed by the terminal 100; that is, after the video watermarking application running on the terminal 100 receives, through the client, the original video and the watermark information input by the user, the terminal 100 executes the video watermarking method provided by the embodiments of the present application in response to the user's input operation and embeds the watermark information into the original video, obtaining the video with the watermark embedded. Alternatively, after the video watermarking application running on the terminal 100 receives the watermark-embedded video input by the user through the client, the terminal 100 executes the video watermarking method provided in the embodiments of the present application in response to the user's input operation and extracts the embedded watermark information from the watermark-embedded video.
The video watermarking method provided in the embodiments of the present application may also be implemented through cloud technology based on a cloud platform; for example, the server 300 may be a cloud server. The cloud server may perform frame extraction on the original video to which watermark information is to be added, iteratively perform quadtree division multiple times on each video frame, perform region energy calculation on the region blocks, screen a specific number of target region blocks from the plurality of region blocks, embed each piece of bit information in the bit stream corresponding to the watermark information into one target region block in the video frame in sequence, or perform video encoding on all the watermark-embedded video frames, and so on.
In some embodiments, the system may further have a cloud storage, where the original video and the watermark information may be stored, and the watermark-embedded video may also be stored in the cloud storage. Thus, when a watermark extraction operation is received, the watermark-embedded video can be obtained directly from the cloud storage for watermark extraction. Alternatively, the extracted watermark information may be verified against the watermark information stored in the cloud storage, so as to determine whether the watermark-embedded video has been cut or modified.
Here, cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to implement the computation, storage, processing, and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like that are applied based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support for the background services of technical network systems, which require a large amount of computing and storage resources, such as video websites, picture websites, and other portal websites. With the rapid development and application of the internet industry, each item may in the future have its own identification mark, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong backend system support, which can be realized through cloud computing.
Fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device shown in fig. 2 may be a video watermarking device, and the video watermarking device includes: at least one processor 310, a memory 350, at least one network interface 320, and a user interface 330. The various components in the video watermarking device are coupled together by a bus system 340. It is understood that the bus system 340 is used to enable connected communication between these components. In addition to the data bus, the bus system 340 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled in fig. 2 as the bus system 340.
The processor 310 may be an integrated circuit chip with signal processing capabilities, such as a general purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 330 includes one or more output devices 331 that enable presentation of media content, and one or more input devices 332.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. The memory 350 optionally includes one or more storage devices physically located remote from the processor 310. The memory 350 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 350 described in the embodiments of the present application is intended to comprise any suitable type of memory. In some embodiments, the memory 350 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
The operating system 351 includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and handling hardware-based tasks. The network communication module 352 is used for reaching other computing devices via one or more (wired or wireless) network interfaces 320; exemplary network interfaces 320 include Bluetooth, Wireless Fidelity (WiFi), universal serial bus (USB, Universal Serial Bus), and the like. The input processing module 353 is used for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in a software manner, fig. 2 shows a video watermarking apparatus 354 stored in a memory 350, where the video watermarking apparatus 354 may be a video watermarking apparatus for implementing video watermarking embedding in an electronic device, and may be software in the form of a program and a plug-in, and the like, including the following software modules: the first video extraction module 3541a, the first partitioning module 3542a, the first energy calculation module 3543a, the first screening module 3544a, the information embedding module 3545a, and the video encoding module 3546a are logical, and thus can be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be described hereinafter. In other embodiments, the video watermarking device 354 may also be a video watermarking device for implementing video watermark extraction in an electronic device, which may be software in the form of a program and a plug-in, including the following software modules: a second video extraction module 3541b, a second division module 3542b, a second energy calculation module 3543b, a second screening module 3544b, an information extraction module 3545b, and a watermark information determination module 3546b.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in hardware. By way of example, the apparatus provided by the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the video watermarking method provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
The video watermarking method provided by the embodiments of the present application may be performed by an electronic device, where the electronic device may be a server or a terminal, that is, the video watermarking method of the embodiments of the present application may be performed by the server or the terminal, or may be performed by interaction between the server and the terminal.
Fig. 3 is a schematic flowchart of an alternative video watermarking method according to an embodiment of the present application, and the steps shown in fig. 3 will be described below. As shown in fig. 3, taking the case where the execution body of the video watermarking method is a server, the method includes the following steps S101 to S106:
Step S101, frame extraction is carried out on an original video to be added with watermark information, and a plurality of video frames are obtained.
Here, the watermark information may be data to be added to the original video to prevent others from stealing the video content; the type of the watermark information is not limited in the embodiments of the present application, and may be, for example, 01 bit stream data (i.e., bit stream data composed of the values 0 and 1), picture data, text data, and the like. Each video frame (also referred to as an image) of the original video to which the watermark information is to be added is located in the spatial domain, and a video frame contains a plurality of pixel points. The spatial domain may also be referred to here as the space domain or the pixel domain, where the processing of an image is pixel-level processing.
In practical application, the video watermark processing method provided by the embodiment of the application can be applied to a scene for embedding the watermark into video data, and can also be applied to a scene for embedding the watermark into a single picture. That is, in addition to watermarking information for the original video as mentioned in the embodiments of the present application, watermarking may also be performed for a picture.
The embodiments of the present application are described taking the scenario of embedding a watermark into video data as an example. Because the original video is composed of multiple video frames, processing all of the video frames would consume a great deal of computing resources. Therefore, in the embodiments of the present application, before embedding the watermark, frame extraction needs to be performed on the original video first: a frame extraction operation is performed on the original video to extract video frames from it, thereby obtaining a plurality of frame images.
Step S102, iteratively performing quadtree division multiple times on each video frame to obtain a plurality of region blocks corresponding to the video frame.
Here, the plurality of region blocks have at least two grid size parameters. That is, every time quadtree division is performed on a video frame, at least four region blocks having the same grid size parameter will be obtained. When quadtree division is performed, region blocks with the same grid size parameter can be divided simultaneously; each round of quadtree division can be performed by the same thread, so that the simultaneous division of a plurality of region blocks with the same grid size parameter can be realized through the same thread.
For example, if the grid size parameter of the video frame is a, four region blocks B1, B2, B3, and B4 with grid size parameter a/2 can be obtained in the first quadtree division. In the second quadtree division, the region blocks B1, B2, B3, and B4, which have the same grid size parameter, may be quadtree-divided at the same time: the region block B1 is divided into four region blocks B11, B12, B13, and B14 with grid size parameter a/4; the region block B2 is divided into four region blocks B21, B22, B23, and B24 with grid size parameter a/4; the region block B3 is divided into four region blocks B31, B32, B33, and B34 with grid size parameter a/4; and the region block B4 is divided into four region blocks B41, B42, B43, and B44 with grid size parameter a/4. By analogy, the video frame is quadtree-divided multiple times until the grid size parameter of the divided region blocks is smaller than a preset size parameter threshold, that is, until the region blocks can no longer be divided.
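As an illustrative sketch only (the function name and the stopping rule via `min_size` are assumptions, not the patent's reference implementation; here division stops before producing blocks smaller than `min_size`), the iterative quadtree division described above can be expressed as:

```python
def quadtree_divide(x, y, w, h, min_size):
    """Iteratively divide a (w x h) frame region into quadtree blocks.

    Returns every block produced at every division level: each round
    splits every block of the current level into four blocks with half
    the grid size, and stops once children would fall below min_size.
    """
    blocks = []
    current = [(x, y, w, h)]
    while current:
        next_level = []
        for (bx, by, bw, bh) in current:
            half_w, half_h = bw // 2, bh // 2
            if half_w < min_size or half_h < min_size:
                continue  # this block can no longer be divided
            children = [
                (bx, by, half_w, half_h),
                (bx + half_w, by, half_w, half_h),
                (bx, by + half_h, half_w, half_h),
                (bx + half_w, by + half_h, half_w, half_h),
            ]
            blocks.extend(children)       # keep blocks from every level
            next_level.extend(children)   # divide them again next round
        current = next_level
    return blocks

# A 512x512 frame with min_size=128: 4 blocks of 256 plus 16 blocks of 128.
all_blocks = quadtree_divide(0, 0, 512, 512, 128)
```

Note that, as in the description above, the result mixes at least two grid size parameters (256 and 128 in this example).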
Step S103, based on the pixel values of the pixel points contained in each regional block, regional energy calculation is carried out on the corresponding regional block, and the energy value of each regional block is obtained.
In the embodiments of the present application, after a video frame has been divided by the quadtree into region blocks of various sizes, it is necessary to further determine whether each region block is suitable for embedding watermark information; that is, some robust region blocks need to be found for embedding watermark information, so that the anti-interference capability of the watermark can be improved. In order to determine whether each region block is suitable for embedding watermark information, the embodiments of the present application calculate a corresponding energy value for each region block, and then judge suitability through the energy value. The energy value is used to represent the distribution of the pixel values of the corresponding region block in the corresponding image region of the video frame. The higher the energy value, the more dispersed the distribution of pixel values in the image region and the greater the contrast of the image region; the lower the energy value, the more concentrated the distribution of pixel values in the image region and the lower the contrast of the image region. Thus, the energy value may be used to describe features such as the sharpness and contrast of an image. In image processing, energy values are often used as important parameters for algorithms such as image segmentation, edge detection, and feature extraction; the embodiments of the present application use energy values to determine the anti-interference capability of the region blocks of an image.
In some embodiments, the energy value may be calculated from pixel values of a plurality of pixel points of a region block in a corresponding image region in a video frame, and the calculation process of the energy value will be described in detail below.
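The patent defers the exact energy formula to a later section; as a purely hypothetical stand-in with the same qualitative behaviour described above (a higher value for more dispersed pixel values and higher contrast), one could use the variance of the block's pixel values:

```python
import numpy as np

def region_energy(block):
    """Hypothetical region-energy measure: the variance of the block's
    pixel values. A uniform block yields 0; a high-contrast block with
    widely dispersed pixel values yields a large energy value."""
    block = np.asarray(block, dtype=np.float64)
    return float(np.var(block))

flat = [[10, 10], [10, 10]]        # uniform block: minimal energy
contrasty = [[0, 255], [255, 0]]   # high-contrast block: large energy
```

Under this stand-in, `contrasty` would be preferred over `flat` as a watermark carrier, matching the robustness argument above.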
Step S104, selecting a specific number of target area blocks from the plurality of area blocks according to the energy value of each area block.
In the embodiments of the present application, the plurality of region blocks may be ordered from the largest energy value to the smallest to form a region block sequence, and then a specific number of target region blocks are selected from the region block sequence in order. The specific number of target region blocks may be determined according to the watermark information to be embedded: the bit length of the bit stream corresponding to the watermark information may be determined, and the specific number of target region blocks may be determined based on that bit length.
For example, the bit length of the bit stream may be the number of pieces of bit information in the bit stream, where the bit information includes the bits 0 and the bits 1 in the bit stream; the total number of bits 0 and bits 1 in the bit stream may be counted, and this total number may then be used as the specific number of target region blocks. By selecting a specific number of target region blocks equal to the bit length of the bit stream, it can be ensured that each piece of bit information in the bit stream is effectively embedded into exactly one target region block. Alternatively, a preset adjustment value may be provided, and the sum of the total number of bits 0 and bits 1 and the adjustment value is used as the specific number of target region blocks. In this way, a specific number of target region blocks greater than the bit length of the bit stream can be selected, which ensures that, if some target region blocks turn out to be invalid during watermark embedding, new target region blocks can be chosen without returning to the screening step, so that all the watermark information is effectively embedded into target region blocks in one pass.
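The selection step above can be sketched as follows (the function and variable names are illustrative; the adjustment value defaults to zero, i.e. exactly one target block per bit):

```python
def select_target_blocks(blocks_with_energy, bitstream, adjustment=0):
    """Sort region blocks by energy value (descending) and take a
    specific number of target blocks: the bit length of the watermark
    bitstream, optionally plus a preset adjustment value."""
    count = len(bitstream) + adjustment
    ordered = sorted(blocks_with_energy, key=lambda be: be[1], reverse=True)
    return [block for block, _energy in ordered[:count]]

# Five candidate blocks with their energy values, and a 3-bit stream.
blocks = [("B1", 4.0), ("B2", 9.5), ("B3", 1.2), ("B4", 7.7), ("B5", 3.3)]
targets = select_target_blocks(blocks, bitstream=[1, 0, 1])
spares = select_target_blocks(blocks, bitstream=[1, 0, 1], adjustment=1)
```

With `adjustment=1`, one extra high-energy block (`B5` here) is kept in reserve in case a selected block proves invalid during embedding.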
Step S105, for each video frame, embedding each bit information in the bit stream corresponding to the watermark information into a target area block in the video frame in turn, and obtaining the video frame with the embedded watermark.
In this embodiment of the present application, for each video frame, the image corresponding to a target region block in the spatial domain of the video frame may be converted to the frequency domain by a domain conversion method, so as to obtain corresponding frequency domain data. Here, domain conversion refers to the process of converting the original information into a new domain, which is generally called the frequency domain; the result of the domain conversion can be understood as another representation of the original information, just as a person can be represented by a name or by an ID (identification number), which essentially refer to the same person but in different representations. The frequency domain data obtained by converting an image to the frequency domain is a representation of the image in another dimension, and can be converted back to the original image through the inverse domain conversion process. The frequency domain mentioned here has the characteristic of strong anti-interference capability. In the frequency domain, the features of an image are described with frequency as the independent variable; the frequency domain coefficients at different positions in the frequency domain represent different information. For any frequency domain coefficient, the closer it is to the center point of the frequency domain, the lower the frequency of the information it represents, namely the information possessed by the original pixel content itself; the further it is from the center point, the higher the frequency of the information it represents, namely the detail information of the original pixel content.
Considering that the spatial domain is unstable and information is easily lost when the image is subjected to interference, while the frequency domain has the characteristic of strong anti-interference capability, the image can be uniformly converted from the spatial domain to the frequency domain by a domain conversion method, and watermark embedding is then performed in the frequency domain.
In the embodiments of the present application, when performing domain conversion, the image corresponding to the target region block in the video frame may be converted into the frequency domain by discrete Fourier transform (DFT, Discrete Fourier Transform), discrete cosine transform, Walsh transform, or wavelet transform, so as to obtain corresponding frequency domain data. The frequency domain data comprises the frequency domain coefficients at different positions in the frequency domain: the frequency domain corresponding to the target region block comprises a plurality of frequency domain coefficients, and these frequency domain coefficients may form a coefficient matrix of N rows and M columns, where M is an integer greater than 1. That is, the plurality of frequency domain coefficients are essentially N rows of one-dimensional frequency domain coefficients, each row including M frequency domain coefficients, thereby forming the N-row, M-column frequency domain coefficient matrix of the square area.
In some embodiments, embedding each piece of bit information in the bit stream corresponding to the watermark information into one target region block in the video frame means embedding the bit information into the target region block by modifying part of the frequency domain coefficients in the frequency domain coefficient matrix of the square area according to the type of each piece of bit information. Before modifying part of the frequency domain coefficients in the frequency domain coefficient matrix of the square area, the watermark information can be uniformly encoded into bit stream form to obtain the bit stream corresponding to the watermark information, so that the bit stream is embedded into part of the frequency domain coefficients in the frequency domain coefficient matrix to obtain the watermark embedding result. Embedding each piece of bit information into one target region block may mean that one piece of bit information in the bit stream is embedded into each target region block. For example, the bit information may be bit 1 or bit 0, and the bit stream may contain multiple pieces of bit information; then each bit 1 or bit 0 in the bit stream may be embedded, in the order of the bit stream, into one target region block, with one bit 1 or one bit 0 embedded per block.
In the implementation process, the number of target area blocks is a specific number, and the total number of bit information (i.e., bit 1 and bit 0) in the bit stream corresponding to the watermark information may be less than or equal to the specific number. When the total number of bit information is smaller than the specific number, after each bit information is embedded into the target area blocks, not all the target area blocks are embedded with the bit information, and only the target area blocks with the same number as the total number are embedded with the bit information; when the total number of bit information is equal to a specific number, after each bit information is embedded into the target area blocks, all the target area blocks are embedded with the bit information, and all the target area blocks just completely embed all the bit information in the bit stream.
It should be noted that, when embedding the bit stream corresponding to the watermark information, the bit stream may be embedded in a direct embedding manner, where direct embedding is understood as directly superimposing each piece of bit information in the bit stream on the corresponding frequency domain coefficient. Alternatively, the embedding of the bit stream may be implemented in an indirect representation manner, which may be understood as representing individual pieces of bit information in the bit stream by means of one or more frequency domain coefficients. That is, the modification of the frequency domain coefficients includes a direct superposition manner and an indirect representation manner; a specific implementation of the modification of the frequency domain coefficients will be described below.
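As one hedged illustration of the indirect representation manner mentioned above (the coefficient position, the strength value, and the magnitude-comparison rule are all assumptions for this sketch, not the patent's concrete scheme), a single bit can be embedded into a region block by enforcing a magnitude relation between two mid-frequency DFT coefficients:

```python
import numpy as np

def embed_bit(block, bit, strength=50.0, pos=(1, 2)):
    """Illustrative indirect embedding: bit 1 forces the coefficient at
    `pos` to have a larger magnitude than its row neighbour; bit 0
    forces the opposite. The conjugate-symmetric position is updated so
    the inverse DFT remains real-valued."""
    f = np.fft.fft2(np.asarray(block, dtype=np.float64))
    r, c = pos
    ref = abs(f[r, c + 1])  # reference magnitude of the neighbour
    mag = ref + strength if bit == 1 else max(ref - strength, 0.0)
    phase = np.angle(f[r, c])
    f[r, c] = mag * np.exp(1j * phase)
    f[-r, -c] = np.conj(f[r, c])  # keep the spectrum conjugate-symmetric
    return np.real(np.fft.ifft2(f))

def extract_bit(block, pos=(1, 2)):
    """Recover the bit by re-testing the enforced magnitude relation."""
    f = np.fft.fft2(np.asarray(block, dtype=np.float64))
    r, c = pos
    return 1 if abs(f[r, c]) > abs(f[r, c + 1]) else 0

rng = np.random.default_rng(0)
demo_block = rng.uniform(0.0, 255.0, (8, 8))  # a stand-in region block
```

Because the bit is carried by a relation between coefficients rather than an absolute value, this style of embedding tolerates uniform disturbances of the block better than direct superposition would.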
And step S106, video encoding is carried out on all the video frames with the embedded watermarks, and the video with the embedded watermarks is obtained.
Here, video encoding refers to combining and encoding all the watermark-embedded video frames into a complete video according to the order of the video frames in the original video, thereby obtaining the watermark-embedded video, which is a video carrying watermark information invisible to the naked eye.
In the video watermarking method provided by the embodiments of the present application, when watermark embedding is performed, the video frames of the original video to which watermark information is to be added are divided by multiple rounds of quadtree division into a plurality of region blocks of different sizes, and the energy value of each region block is then calculated to determine which regions are suitable for embedding watermark information, so that more suitable region blocks are found and modified to carry the watermark information. In this way, the embedded watermark information can be stably retained in the video frame, which improves the stability and concealment of the watermark and gives the embedded watermark higher robustness. In addition, by selecting a specific number of target region blocks from the plurality of region blocks obtained by the multiple quadtree divisions for the subsequent watermark embedding operation, modification of the pixels of the entire image can be avoided, so that the perceptual impact of the watermark on the image is reduced, the concealment and stability of the watermark are further improved, the embedding efficiency of the watermark is improved, and processing resources are saved.
In some embodiments, the video watermarking system at least comprises a terminal and a server, wherein the terminal is provided with a video watermarking application, and the video watermarking application can have a watermark embedding function and a watermark extracting function respectively. The server forms a background server of the video watermarking application, and the video watermarking method of the embodiment of the application is realized through interaction between the terminal and the server.
Fig. 4 is another optional flowchart of a video watermarking method according to an embodiment of the present application, as shown in fig. 4, the method includes the following steps S201 to S218:
in step S201, the terminal acquires a video watermark embedding operation.
Here, the video watermark embedding operation includes an information input operation for inputting the original video to be watermarked with watermark information and a watermark embedding operation; the watermark embedding operation is used to confirm watermark embedding of the original video.
In step S202, the terminal generates a video watermark embedding request in response to the video watermark embedding operation.
In the embodiment of the application, the original video and watermark information can be encapsulated into the video watermark embedding request.
In step S203, the terminal sends a video watermark embedding request to the server.
In step S204, the server responds to the video watermark embedding request, and performs frame rate unification processing on the original video to obtain a frame rate unification video with a specific frame rate.
Here, the frame rate unification processing refers to processing the original video into a frame-rate-unified video with a fixed, unified specific frame rate. Because the frame rates of different original videos may differ, the frame rates can be unified to simplify processing, so that videos with a fixed, unified specific frame rate are used for the subsequent frame extraction processing and uniform frame extraction can be achieved for different videos. For example, the original video may be unified into a frame-rate-unified video with a frame rate of 24 frames per second (FPS).
Step S205, the server performs equal time interval frame extraction on the unified video with the frame rate to obtain a plurality of video frames; or, non-equidistant frame extraction is carried out on the unified video with the frame rate, so as to obtain a plurality of video frames.
In the embodiments of the present application, when video frames are extracted, frames can be extracted at equal time intervals or at unequal time intervals. One video frame can be extracted at each fixed time interval, or multiple video frames can be extracted at random. Through frame extraction, a plurality of video frames are obtained, and each video frame is one frame image of the frame-rate-unified video.
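For the equal-time-interval case, a unified frame rate lets the interval map directly to a frame stride. A minimal sketch (function and parameter names assumed for illustration):

```python
def frame_indices(total_frames, fps, interval_seconds):
    """Pick one frame index per fixed time interval (equal-interval
    frame extraction). With a unified frame rate, an interval of t
    seconds corresponds to a stride of fps * t frames."""
    stride = int(fps * interval_seconds)
    return list(range(0, total_frames, stride))

# 10 seconds of 24 FPS video, one frame every 2 seconds -> 5 frames.
indices = frame_indices(total_frames=240, fps=24, interval_seconds=2)
```

This also shows why the frame rate unification of step S204 simplifies frame extraction: the same `interval_seconds` yields the same stride for every input video.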
Step S206, the server iterates quadtree division multiple times on each video frame to obtain a plurality of region blocks corresponding to the video frame; the plurality of region blocks have at least two grid size parameters.
In this embodiment, when the first quadtree division is performed on a video frame, the image of the video frame may be divided into four region blocks with the same grid size parameter; when the N-th quadtree division is performed (N being an integer greater than 1), each region block obtained from the (N-1)-th division is divided again into four region blocks with the same grid size parameter. The plurality of region blocks comprises the region blocks obtained after every quadtree division.
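The iterative quadtree division described above can be sketched as follows; a minimal illustration assuming a square frame whose side is divisible by 2^depth (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def quadtree_blocks(frame, depth):
    """Iteratively split a frame into quadtree region blocks.

    Level 1 splits the frame into 4 equal blocks; level n splits each
    block from level n-1 into 4 again. Blocks from every level are
    returned, so at least two grid sizes are present when depth >= 2.
    """
    h, w = frame.shape[:2]
    blocks = []
    for level in range(1, depth + 1):
        bh, bw = h // (2 ** level), w // (2 ** level)
        for row in range(2 ** level):
            for col in range(2 ** level):
                blocks.append((col * bw, row * bh, bw, bh))
    return blocks

blocks = quadtree_blocks(np.zeros((64, 64)), depth=2)
# level 1 contributes 4 blocks, level 2 contributes 16
print(len(blocks))  # 20
```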
Quadtree division is used in the embodiment of the present application because different video frames contain different content, and the region blocks containing that content differ in size. A quadtree handles content blocks of different sizes well, capturing region blocks from large to small for watermark embedding.
In step S207, the server encodes the watermark information to obtain a watermark code corresponding to the watermark information.
In step S208, the server converts the bit representation in the watermark encoding into a bit stream form, and obtains a bit stream corresponding to the watermark information.
The bit stream corresponding to the watermark information is the bit stream obtained by encoding the watermark information; it is a data stream composed of a plurality of binary bits. In the embodiment of the present application, the bit stream may include H bits, where H is a positive integer, and the value of each of the H bits is either the first type of bit information or the second type of bit information. For example, the first type of bit information may be a first value and the second type a second value, where the two values may be set according to actual requirements: the first value may be 0 and the second value 1, or the first value may be 1 and the second value 0. For convenience of explanation, the first value is set to 0 and the second value to 1.
In the implementation process, the server may first encode the watermark into a bit representation to obtain a watermark code. For example, the watermark may be encoded into a series of ASCII codes using base64, and the ASCII codes then converted into bit stream form to obtain the bit stream corresponding to the watermark information. Base64 is an encoding mode for transmitting 8-bit byte codes; ASCII represents 128 or 256 possible characters using specified 7-bit or 8-bit binary combinations. Of course, the server may also use encoding modes other than base64 that are capable of transmitting 8-bit byte codes to obtain the ASCII codes and thus the bit stream; alternatively, the server may directly encode the watermark information into a bit representation by other means (such as calling a coding model) to obtain the bit stream corresponding to the watermark information.
After obtaining the bit stream corresponding to the watermark information, the server may directly use it as the target bit stream corresponding to the watermark information. Alternatively, considering that the video data may be disturbed during transmission and the watermark disturbed with it, the server may re-encode the bit stream with an error correction check code to obtain the target bit stream, in order to resist such disturbance and reduce watermark extraction errors during subsequent extraction. The error correction code mentioned here may be a Hamming code, a cyclic redundancy check code, or another code with error correction capability, which is not limited. Encoding the bit stream with an error correction check code is to be understood as re-encoding the bit stream corresponding to the watermark information according to the encoding mode of the error correction check code.
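The base64-to-bitstream flow described above can be sketched as follows; a minimal illustration that omits the optional error-correction re-encoding (names are illustrative, not from the patent):

```python
import base64

def watermark_to_bitstream(watermark):
    """Encode watermark text as base64 ASCII bytes, then expand each
    8-bit byte into its individual bits, most significant bit first."""
    ascii_bytes = base64.b64encode(watermark.encode("utf-8"))
    bits = []
    for byte in ascii_bytes:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

bits = watermark_to_bitstream("id42")
# 4 input bytes -> 8 base64 characters -> 8 * 8 bits
print(len(bits))  # 64
```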
In step S209, the server determines the bit length of the bit stream corresponding to the watermark information.
The bit length of the bit stream is the number of bits of information it contains; since the bit information consists of bits 0 and 1 (i.e., the first and second values), the total number of 0s and 1s in the bit stream may be counted to obtain the bit length.
In step S210, the server determines, according to the bit length, a specific number corresponding to the target area block to be selected.
In this embodiment of the present application, the total number of 0 and 1 bits in the bit stream may be determined directly as the specific number of target region blocks; alternatively, a preset adjustment value may be added to that total, and the sum determined as the specific number of target region blocks.
In some embodiments, the image of the video frame is a grayscale image. That is, if the image corresponding to the video frame is not a grayscale image, the video frame may be converted into a grayscale image in advance. A grayscale image is an image in which each pixel has only one sampled color. In the computer field, grayscale images are typically displayed as gray levels from darkest black to brightest white, although in theory the samples could represent different shades of any one color, or even different colors at different brightnesses. A grayscale image differs from a black-and-white image: in the computer imaging field, a black-and-white image has only the two colors black and white, whereas a grayscale image also has many levels of color depth between black and white.
In converting a video frame into a grayscale image, the image corresponding to the video frame may be converted into a grayscale image in any of the following ways:
Mode one, floating point algorithm: gray value Gray = 0.30R + 0.59G + 0.11B;
Mode two, integer method: gray value Gray = (30R + 59G + 11B)/100;
Mode three, shift method: gray value Gray = (76R + 151G + 28B) >> 8;
Mode four, average method: gray value Gray = (R + G + B)/3;
Mode five, only green: gray value Gray = G.
Wherein R, G, B respectively represent the color values of the red, green and blue channels of a video frame in RGB image format. After the gray value Gray is obtained by any of the above methods, the R, G, B values in the original RGB-format video frame can each be replaced with Gray, so that the original color RGB(R, G, B) becomes the new color RGB(Gray, Gray, Gray).
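The conversion above can be sketched as follows; a minimal illustration of the integer, shift, and average methods, assuming the commonly used luma weights (30/59/11 and 76/151/28), with all three channels replaced by Gray to form RGB(Gray, Gray, Gray):

```python
import numpy as np

def to_gray(rgb, mode="integer"):
    """Convert an H x W x 3 RGB frame to a gray frame where every
    channel holds the gray value; coefficients are the common weights."""
    r = rgb[..., 0].astype(np.int64)
    g = rgb[..., 1].astype(np.int64)
    b = rgb[..., 2].astype(np.int64)
    if mode == "integer":
        gray = (30 * r + 59 * g + 11 * b) // 100
    elif mode == "shift":
        gray = (76 * r + 151 * g + 28 * b) >> 8
    else:  # average method
        gray = (r + g + b) // 3
    # replace R, G, B with Gray: RGB(Gray, Gray, Gray)
    return np.stack([gray, gray, gray], axis=-1).astype(np.uint8)

pixel = np.array([[[200, 100, 50]]], dtype=np.uint8)
print(to_gray(pixel, "integer")[0, 0, 0])  # (30*200+59*100+11*50)//100 = 124
```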
In step S211, the server acquires, for each region block in the grayscale image, a grayscale value of each pixel point in the region block.
Here, since the RGB(R, G, B) of each region block in the grayscale image is actually the color RGB(Gray, Gray, Gray) represented by a gray value, the gray value Gray can be read directly from each pixel point.
In step S212, the server determines an energy value of the region block based on the gray value of each pixel and the total number of pixels in the region block.
In some embodiments, referring to fig. 5, fig. 5 shows that in step S212, determining the energy value of the region block may be achieved by the following steps S2121 to S2123:
in step S2121, a square value of the gray value of each pixel is determined.
In step S2122, the square values of the gray values of all the pixel points in the area block are summed to obtain a gray value sum of squares.
In step S2123, the ratio of the sum of squares of gray values to the total number of pixel points in the region block is determined as the energy value of the region block.
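Steps S2121 to S2123 can be sketched as:

```python
import numpy as np

def block_energy(gray_block):
    """Energy value of a region block: sum of squared gray values
    divided by the total number of pixel points in the block."""
    g = gray_block.astype(np.float64)
    return np.sum(g ** 2) / g.size

block = np.array([[3, 4], [0, 0]])
print(block_energy(block))  # (9 + 16) / 4 = 6.25
```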
In step S213, the server orders all the region blocks according to the order of the energy values of the region blocks from large to small, and obtains a region block sequence.
In some embodiments, when all the region blocks are sorted in descending order of energy value, if several region blocks have the same energy value, those blocks may be further sorted in descending order of their grid size parameters; if several region blocks have both the same energy value and the same grid size parameter, they may be arranged randomly, yielding the final region block sequence.
In the embodiment of the present application, the grid size parameter may be the area of the region block or the side length of the square block.
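The ordering rule above, with its tie-breaking, can be sketched as follows; representing each block as a dict with `energy` and `size` keys is an illustrative assumption:

```python
import random

def order_blocks(blocks):
    """Sort region blocks by energy descending, break ties by grid
    size descending; blocks equal on both keys end up in random
    relative order because the input is shuffled before the stable sort."""
    shuffled = random.sample(blocks, len(blocks))
    return sorted(shuffled, key=lambda b: (-b["energy"], -b["size"]))

blocks = [{"energy": 5, "size": 16},
          {"energy": 9, "size": 8},
          {"energy": 5, "size": 32}]
print([b["size"] for b in order_blocks(blocks)])  # [8, 32, 16]
```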
Step S214, the server sequentially selects a specific number of target region blocks from the region block sequence; wherein there is no region overlap between the specific number of target region blocks.
In some embodiments, referring to fig. 6, fig. 6 shows that in step S214, selecting a specific number of target region blocks may be achieved by the following steps S2141 to S2146:
in step S2141, in response to the specific number being N, the first N region blocks located in the region block sequence are determined as the primary selection target region blocks.
In this embodiment of the present application, if the specific number is N, the first N region blocks in the region block sequence may be initially screened out and determined as the initially selected target region blocks. It is then determined whether any non-conforming region blocks among these N blocks should be deleted, after which new region blocks are selected from the region block sequence as needed.
Step S2142, it is determined whether there are at least two region blocks whose regions overlap among the N initially selected target region blocks.
Step S2143, if there are at least two region blocks with overlapping regions, determining a region block to be reserved from the region blocks with the largest grid size parameter among the at least two region blocks with overlapping regions.
And step S2144, deleting other area blocks except the area block to be reserved in at least two area blocks with overlapped areas.
In the embodiment of the present application, if there are a plurality of region blocks with overlapping regions in the currently selected N initially selected target region blocks, the region block with the larger grid size parameter is reserved.
For example, if N is equal to 20, the first 20 region blocks may be selected from the sequence of region blocks, and the positions of the 20 region blocks in the video frame may be determined, and if there are two region blocks B1 and B2 that overlap, and the area of B2 is greater than the area of B1, then region block B1 is deleted from the 20 region blocks, leaving 19 region blocks.
In this embodiment of the present application, since each division in the quadtree division process is performed on the result of the previous division, overlapping region blocks have an inclusion relationship: for any two region blocks whose regions overlap, the block with the smaller grid size parameter lies entirely within the block with the larger grid size parameter.
Step S2145, after deleting the other region blocks, continue selecting, in the order of the region block sequence, as many new region blocks as were deleted, as initially selected target region blocks.
Since region blocks with overlapping regions are deleted, new region blocks must be selected from the region block sequence as initially selected target region blocks. In the implementation process, after deleting several region blocks, the same number of region blocks is selected again, so that the number of initially selected target region blocks returns to the specific number N.
In step S2146, when it is determined that there is no region block with region overlapping among the N primarily selected target region blocks, the N primarily selected target region blocks are determined as the finally selected target region blocks.
In this embodiment of the present application, the filtering is repeated in a loop until no region blocks with overlapping regions remain among the N initially selected target region blocks; the N initially selected target region blocks at that point are determined as the finally selected target region blocks.
In the above scheme for selecting target region blocks, a specific number of initially selected target region blocks are picked first; then the overlap check is performed, overlapping region blocks are deleted, and new target region blocks are selected from the region block sequence until none of the N initially selected target region blocks overlap. Because the N initially selected target region blocks are picked first and the overlapping blocks are then identified and deleted together, all N blocks can be handled in a single judgment pass, which greatly reduces the computation of the judgment process, saves server computing resources, and improves screening efficiency.
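The select-then-filter scheme of steps S2141 to S2146 can be sketched as follows; representing blocks as (x, y, size) tuples and testing overlap geometrically are illustrative assumptions:

```python
def select_targets(sequence, n):
    """Pick the first n blocks from the sorted sequence, then loop:
    drop the smaller block of every overlapping pair and refill from
    the remaining sequence, until the n chosen blocks are disjoint."""
    def overlaps(a, b):
        ax, ay, asz = a
        bx, by, bsz = b
        return not (ax + asz <= bx or bx + bsz <= ax or
                    ay + asz <= by or by + bsz <= ay)

    chosen, rest = list(sequence[:n]), list(sequence[n:])
    while True:
        drop = set()
        for i in range(len(chosen)):
            for j in range(i + 1, len(chosen)):
                if overlaps(chosen[i], chosen[j]):
                    # keep the block with the larger grid size parameter
                    drop.add(i if chosen[i][2] < chosen[j][2] else j)
        if not drop:
            return chosen
        chosen = [b for k, b in enumerate(chosen) if k not in drop]
        while len(chosen) < n and rest:
            chosen.append(rest.pop(0))

seq = [(0, 0, 32), (0, 0, 16), (32, 32, 32), (32, 0, 16)]
print(select_targets(seq, 3))  # [(0, 0, 32), (32, 32, 32), (32, 0, 16)]
```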
In other embodiments, the overlap determination and deletion may instead be performed while the initially selected target region blocks are being picked. Starting from the second pick, each time a new initially selected target region block is added, it is checked against the already selected blocks for region overlap. If an overlap exists, the grid size parameters of the newly added block and the overlapping existing block are compared: if the new block's grid size parameter is larger, the overlapping existing block is deleted; if it is smaller, the newly added block is deleted. (Since the same region block cannot be selected twice, the two grid size parameters cannot be equal.) After the deletion, selection continues from the region block sequence until the specific number of target region blocks has been selected.
In the implementation process of selecting the target area blocks with the specific number, the judgment process is performed once when each new area block is selected, so that accurate selection of all the target area blocks can be ensured.
Fig. 7 is a schematic diagram of an area block with overlapping areas provided in the embodiment of the present application, and as shown in fig. 7, an overlapping area exists between an area block 71 and an area block 711, no overlapping area exists between an area block 71 and an area block 72, and no overlapping area exists between an area block 71 and an area block 721.
In step S215, for each video frame, the server sequentially embeds each bit information in the bit stream corresponding to the watermark information into the target region block in the video frame, so as to obtain the video frame with the embedded watermark.
In some embodiments, referring to fig. 8, fig. 8 shows that in step S215, obtaining the video frame after watermark embedding may be achieved by the following steps S2151 to S2154:
step S2151, for each video frame, orders a specific number of target region blocks in the video frame according to the order of the energy values of the target region blocks from large to small, forming a target region block sequence.
In step S2152, four-quadrant division is performed on each target region block in the target region block sequence, so as to obtain four sub-regions.
Step S2153, determining the sub-area located at the specific position in the four sub-areas as the information embedded sub-area in the target area block.
Step S2154, according to the order of the target region blocks in the target region block sequence, sequentially embedding each bit information in the bit stream corresponding to the watermark information into the information embedding sub-region of the target region block in the target region block sequence, to obtain the video frame after watermark embedding.
In some embodiments, embedding each bit of information in the bit stream corresponding to the watermark information into the information embedding sub-region of a target region block in the target region block sequence may be achieved as follows. First, the pixels of each target region block in the target region block sequence are converted from the spatial domain to the frequency domain, yielding a plurality of frequency domain coefficients for the block; then the region center position and side length of the block's information embedding sub-region are determined. At this point, if the bit to be embedded is the first type of bit information, a first-type circle is constructed inside the information embedding sub-region, centered at the sub-region's region center, with radius equal to a first proportion of the sub-region side length; the frequency domain coefficient at each position on the circumference of the first-type circle is then increased by a preset frequency domain coefficient increase value, producing the first processed frequency domain coefficients. The target region block carrying the first processed frequency domain coefficients constitutes a target region block embedded with the first type of bit information.
In the embodiment of the present application, the first type of bit information may be the first value, for example the value 0. That is, if the bit to be embedded is bit 0, a first-type circle is constructed inside the information embedding sub-region, centered at the sub-region's region center, with radius equal to the first proportion of the sub-region side length. After the first-type circle is obtained, the frequency domain coefficient at each position on its circumference is increased by the preset frequency domain coefficient increase value, yielding the first processed frequency domain coefficient at each circumference position. Within the information embedding sub-region, these amplified coefficients make the circle at those positions stand out prominently. The first type of bit information corresponds to a fixed first proportion, so during subsequent watermark extraction the size of the circle in the information embedding sub-region reveals that the embedded bit information is the first value.
Fig. 9 is a schematic diagram of the increase processing of frequency domain coefficients according to an embodiment of the present application. As shown in fig. 9, the entire area in fig. 9 is a sub-region, and the triangles represent individual frequency domain coefficients in a target region block. If the bit to be embedded is the first type of bit information, the dotted circle in fig. 9 is a first-type circle constructed with the region center of the information embedding sub-region as the center and the first proportion of the sub-region side length as the radius. The frequency domain coefficients at each position on the circumference of the first-type circle are increased, as shown by the black triangles in fig. 9, which represent the first processed frequency domain coefficients; the whole target region block carrying the first processed frequency domain coefficients constitutes the target region block embedded with the first type of bit information.
If the bit to be embedded is the second type of bit information, a second-type circle is constructed inside the information embedding sub-region, centered at the sub-region's region center, with radius equal to a second proportion of the sub-region side length, the second proportion being different from the first proportion. The frequency domain coefficient at each position on the circumference of the second-type circle is increased by the frequency domain coefficient increase value, yielding the second processed frequency domain coefficients; the target region block carrying the second processed frequency domain coefficients constitutes a target region block embedded with the second type of bit information.
In this embodiment of the present application, the second type of bit information may be the second value, for example the value 1. That is, when the bit to be embedded is bit 1, a second-type circle is constructed inside the information embedding sub-region, centered at the sub-region's region center, with radius equal to the second proportion of the sub-region side length. After the second-type circle is obtained, the frequency domain coefficient at each position on its circumference is increased by the preset frequency domain coefficient increase value, yielding the second processed frequency domain coefficient at each circumference position. Within the information embedding sub-region, these amplified coefficients make the circle at those positions stand out prominently. The second type of bit information corresponds to a fixed second proportion, so during subsequent watermark extraction the size of the circle in the information embedding sub-region reveals that the embedded bit information is the second value.
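The circle-based embedding of a single bit can be sketched as follows; the concrete proportions (0.25 and 0.4), the coefficient increase value, and the use of the upper-left quadrant as the information embedding sub-region are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def embed_bit(block, bit, ratio0=0.25, ratio1=0.4, delta=50.0):
    """Embed one bit into a square grayscale block: transform to the
    frequency domain, then increase the coefficients lying on a circle
    centred in the upper-left-quadrant information embedding sub-region.
    The radius is the sub-region side times ratio0 for bit 0 and times
    ratio1 for bit 1."""
    coeffs = np.fft.fft2(block.astype(np.float64))
    side = block.shape[0] // 2           # sub-region side length
    cx = cy = side // 2                  # sub-region centre position
    radius = side * (ratio0 if bit == 0 else ratio1)
    for theta in np.linspace(0.0, 2 * np.pi, 64, endpoint=False):
        x = int(round(cx + radius * np.cos(theta)))
        y = int(round(cy + radius * np.sin(theta)))
        coeffs[y, x] += delta            # preset coefficient increase value
    # inverse transform back to the spatial domain
    return np.real(np.fft.ifft2(coeffs))

out = embed_bit(np.zeros((32, 32)), bit=1)
print(out.shape)  # (32, 32)
```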
In some embodiments, after embedding each bit of information in the bit stream corresponding to the watermark information into the information embedding sub-region of a target region block in the target region block sequence, the method may further include the following steps. First, the target region blocks with first processed frequency domain coefficients and those with second processed frequency domain coefficients are converted from the frequency domain back to the spatial domain by inverse Fourier transform, yielding a plurality of target region blocks embedded with watermark information. Then, according to each target region block's original position in the video frame, the watermarked target region blocks are superimposed back onto their original positions, producing the watermark-embedded video frame.
In step S216, the server performs video encoding on all the video frames with embedded watermarks, to obtain video with embedded watermarks.
In step S217, the server transmits the video with the watermark embedded to the terminal.
In step S218, the terminal displays the video with the watermark embedded on the current interface.
The video watermark processing method provided by the embodiment of the application ensures that the embedded watermark information is stably retained in the video frames, improving the stability and concealment of the watermark and giving the embedded watermark high robustness. In addition, by selecting a specific number of target region blocks from the region blocks produced by multiple quadtree divisions for the subsequent watermark embedding operation, modification of the pixels of the whole image is avoided; this reduces the perceptual impact of the watermark on the image, further improves the concealment and stability of the watermark, improves embedding efficiency, and saves processing resources.
In some embodiments, after the watermark-embedded video is obtained by the above video watermarking method, the embodiments of the present application also support extracting the watermark from that video according to the corresponding logic. Referring to fig. 10, fig. 10 is a schematic flow chart of a video watermark processing method implementing the watermark extraction process according to an embodiment of the present application. The method may be implemented by an electronic device, which may be the same electronic device used for the watermark embedding process or a different one; that is, it may be a server or a terminal, so the method may be performed by the server, by the terminal, or through interaction between them. Here, the description takes the server as the execution subject of the watermark extraction process, which may include the following steps S301 to S306:
Step S301, frame extraction is carried out on the video with the embedded watermark, and a plurality of watermark video frames are obtained.
Step S302, iterating quadtree division multiple times on each watermark video frame to obtain a plurality of watermark region blocks corresponding to the watermark video frame; the plurality of watermark region blocks have at least two grid size parameters.
Step S303, based on the pixel values of the pixel points contained in each watermark region block, performing region energy calculation on the corresponding watermark region block to obtain an energy value of each watermark region block.
Step S304, selecting a specific number of target watermark region blocks from a plurality of watermark region blocks according to the energy value of each watermark region block.
In the implementation process, steps S301 to S304 may be implemented similarly to the related steps (e.g., steps S101 to S104) in the foregoing method embodiments, which are not repeated here. It should be understood that the iterated quadtree division applied to each video frame in the foregoing embodiment is performed in the same manner as the iterated quadtree division applied to each watermark video frame in step S302; the target watermark region blocks obtained in step S302 are therefore essentially the target region blocks obtained in the foregoing embodiment. The only difference is that the foregoing embodiment divides the video frames of the original video to which watermark information is to be added, whereas the present embodiment divides the video frames of the watermark-embedded video.
Step S305, for each watermark video frame, extracts bit information in the bit stream corresponding to the watermark information from each target watermark region block in turn.
In this embodiment of the present application, for each of the target watermark region blocks selected from a watermark video frame, Fourier transform may be performed to convert the block from the spatial domain to the frequency domain; in the frequency domain the block is divided into 4 regions by quadrant, and the upper-left region is taken as the information extraction sub-region (corresponding to the information embedding sub-region in the watermark embedding process) for information extraction. Since the information extraction sub-region is the sub-region in which the bit information was embedded, its frequency domain coefficients can be examined to determine whether the circle of prominently increased coefficients is a first-type or a second-type circle, i.e., whether the ratio of the circle's radius to the sub-region side length is the first proportion or the second proportion. If it is the first proportion, the first type of bit information (the first value) is embedded in the sub-region; if it is the second proportion, the second type of bit information (the second value) is embedded.
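The radius-ratio test described above can be sketched as follows; the proportions mirror the (assumed) embedding proportions of 0.25 and 0.4 and are illustrative:

```python
import numpy as np

def extract_bit(block, ratio0=0.25, ratio1=0.4):
    """Decode one bit from a watermarked square block: transform to the
    frequency domain and compare coefficient magnitudes along the bit-0
    and bit-1 circles inside the upper-left-quadrant information
    extraction sub-region; the circle carrying more energy wins."""
    mags = np.abs(np.fft.fft2(block.astype(np.float64)))
    side = block.shape[0] // 2
    cx = cy = side // 2

    def circle_energy(ratio):
        radius = side * ratio
        total = 0.0
        for theta in np.linspace(0.0, 2 * np.pi, 64, endpoint=False):
            x = int(round(cx + radius * np.cos(theta)))
            y = int(round(cy + radius * np.sin(theta)))
            total += mags[y, x]
        return total

    return 0 if circle_energy(ratio0) > circle_energy(ratio1) else 1
```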
Step S306, determining watermark information embedded in the watermark embedded video based on the bit information extracted from all watermark video frames.
In this embodiment of the present application, the specific number of target watermark region blocks may be sorted in descending order of energy value to form a target watermark region block sequence; similarly, the bit information extracted from each target watermark region block is sorted in the same order as the target watermark region block sequence to form a bit stream.
Since there are multiple watermark video frames, multiple bit streams can be extracted from them. To ensure the accuracy of the finally extracted watermark information, the multiple bit streams extracted from the multiple watermark video frames can be synthesized to obtain the final watermark information. During synthesis, statistics can be performed on the bit information at the same sequence position across the bit streams: it is determined whether the number of first values is larger than the number of second values, or vice versa, and the value with the larger count is used as the final bit information at that sequence position.
For example, if the number of watermark video frames is 10 and the specific number of target watermark region blocks is 20, then 10 bit streams are extracted from the entire watermark-embedded video, each containing 20 bits of information. During synthesis, it may be determined whether the bit at the first bit position (i.e., the first sequence position) of each of the 10 bit streams is 0 or 1, and the numbers of 0s and 1s at the first bit position are counted; if the number of 0s is less than the number of 1s, the bit at the first bit position in the final bit stream is determined to be 1. The same method is then applied in turn to the 2nd, 3rd, ..., and 20th bit positions to obtain the final bit stream. After the final bit stream is obtained, the watermark information embedded in the watermarked video may be determined based on the final bit stream.
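The per-position majority vote described above can be sketched as follows. This is a minimal illustration, not code from the patent; the function name `majority_vote` and the toy streams are assumptions for demonstration.

```python
def majority_vote(bit_streams):
    """Combine equal-length 0/1 streams by per-position majority:
    at each sequence position, keep whichever bit value occurs in
    more of the extracted streams."""
    length = len(bit_streams[0])
    final = []
    for pos in range(length):
        ones = sum(stream[pos] for stream in bit_streams)
        zeros = len(bit_streams) - ones
        # the value with the larger count becomes the final bit
        final.append(1 if ones > zeros else 0)
    return final

# three toy streams: position 0 votes 1,1,0 -> 1; position 1 votes 0,0,1 -> 0
streams = [[1, 0], [1, 0], [0, 1]]
final_stream = majority_vote(streams)  # [1, 0]
```

With 10 streams of 20 bits, as in the example above, the same function would simply be called on the longer lists.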
According to the watermark extraction process provided by the embodiment of the application, when the electronic device implements watermark embedding through the steps in the watermark embedding method embodiments, the watermark information embedded in the watermarked video can be extracted without using the original, un-watermarked image to assist the extraction process. Therefore, the processing resources and processing time required to process the original, un-watermarked image can be saved, and the watermark extraction efficiency is improved.
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
According to the method, during embedding, an image is divided into a plurality of square areas of different sizes using a quadtree, and whether each area is suitable for embedding watermark information is determined by calculating the energy value of the image within each square area, so that suitable square areas are found for modification. For an area suitable for embedding the watermark, the watermark is embedded by performing a Fourier transform on the area and embedding a segment of identifiable information in a stable frequency domain. The embedded watermark information can be stably retained through image transmission or video encoding and decoding, forming a set of algorithms for protecting image and video content.
The video watermark processing method comprises a video watermark embedding process and a video watermark extracting process.
The following describes a video watermark embedding process, wherein the watermark embedding process comprises:
(1) Preprocessing.
For video data, it is impractical to process all video frames directly, so the video (i.e., the original video to which watermark information is to be added) first needs to be frame-decimated. A video file contains a large number of video frames, and processing all of them would consume a large amount of computing resources. Therefore, the embodiment of the application performs frame extraction on the original video. Since original videos may have different frame rates, the original video may first be processed to unify the frame rate to 24 FPS in order to simplify the processing. Frames can then be extracted from the original video, so that the images or video frames obtained after frame extraction are processed uniformly for watermark embedding. The subsequent operations described for images apply equally to video frames.
(2) Watermark information encoding.
The watermark information may be different types of information, such as 01 bit stream data, picture data, text data, etc., without limitation. In order to embed watermark information into the frequency domain coefficients of video frames, the different types of watermark information are first encoded uniformly into a bit stream. The embodiment of the application uses the base64 method to encode all watermark information into a series of ASCII codes, and then converts it into bit stream form according to the bit representation of the ASCII codes.
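This encoding step can be sketched as below: base64 output is turned into a 0/1 stream via the 8-bit representation of each ASCII code, and the inverse operation restores the watermark. The helper names are illustrative, not from the patent.

```python
import base64

def watermark_to_bits(watermark: str) -> list:
    """Encode watermark text as base64 ASCII, then to a 0/1 stream."""
    ascii_codes = base64.b64encode(watermark.encode("utf-8"))
    bits = []
    for byte in ascii_codes:
        # 8-bit big-endian representation of each ASCII code
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def bits_to_watermark(bits: list) -> str:
    """Inverse operation: bits -> ASCII bytes -> base64 decode."""
    data = bytes(
        int("".join(str(b) for b in bits[i:i + 8]), 2)
        for i in range(0, len(bits), 8)
    )
    return base64.b64decode(data).decode("utf-8")

bits = watermark_to_bits("wm")
restored = bits_to_watermark(bits)  # "wm"
```

The round trip is lossless, which is what allows the extraction side to reverse the encoding without any side information.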
(3) Quadtree division of the image.
Different images carry different content, and different content retains information to different degrees during compression and transmission. In general, the flatter a region is, the more easily a digital watermark added to it is compressed and smoothed away during transmission, causing loss of watermark information. In order to determine suitable watermark-embedding regions from the image content, embodiments of the present application use a quadtree to divide the image into regions.
Fig. 11 is a schematic diagram of quadtree division provided in the embodiment of the present application. As shown in fig. 11, the quadtree divides an image into 4 grid areas of uniform size, forming the first layer of the quadtree structure. Each of the 4 equally sized grid areas can then be divided again in the same way to form the second layer of the quadtree. This process may continue until the grid areas reach a size at which they are no longer divided. The reason for dividing the image with a quadtree is that different images contain different content, and the regions containing that content come in different sizes. A quadtree handles content areas of different sizes well: areas from large to small can be captured for watermark embedding.
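A minimal sketch of this iterative division, collecting the blocks from every layer until the grid can no longer be halved. The `min_size` stopping threshold is an assumption for illustration; the patent does not fix a concrete minimum size.

```python
def quadtree_blocks(width, height, min_size=32):
    """Iteratively split a frame into 4 equal blocks per layer,
    collecting the blocks of every layer, until further halving
    would drop below min_size."""
    blocks = []
    current = [(0, 0, width, height)]  # (x, y, w, h)
    while current:
        nxt = []
        for (x, y, w, h) in current:
            half_w, half_h = w // 2, h // 2
            if half_w < min_size or half_h < min_size:
                continue  # grid too small to split further
            quads = [(x, y, half_w, half_h),
                     (x + half_w, y, half_w, half_h),
                     (x, y + half_h, half_w, half_h),
                     (x + half_w, y + half_h, half_w, half_h)]
            blocks.extend(quads)   # keep blocks of this layer
            nxt.extend(quads)      # and split them again next round
        current = nxt
    return blocks

# a 128x128 frame yields 4 blocks of side 64 plus 16 blocks of side 32
blocks = quadtree_blocks(128, 128)
```

The returned list mixes grid sizes from all layers, matching the "at least two grid size parameters" wording of the claims.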
(4) Region energy calculation.
After the image is divided into region blocks of various sizes using the quadtree method, it is necessary to determine whether each region block is suitable for embedding watermark information. The embodiment of the application requires a number of robust region blocks in which to embed the watermark, which improves the anti-interference capability of the watermark. To determine whether each region block is suitable for embedding watermark information, a corresponding energy value may be calculated for each region block. The energy of an image reflects the distribution of pixel values in the image: the higher the energy value, the more dispersed the distribution of pixel values and the greater the contrast of the image; the lower the energy value, the more concentrated the distribution of pixel values and the lower the contrast. Thus, the energy value can be used to describe features such as sharpness, contrast, and clarity of an image. In image processing, energy values are often used as important parameters in algorithms such as image segmentation, edge detection, and feature extraction. Here, the energy value is used to judge the tamper resistance of an image region block. The energy value of an image region block is calculated as follows:
First, the image is converted to a grayscale image (if it is not one already). Then, for each pixel, the square of its gray value is computed; the squared gray values of all pixels are summed; and the sum is divided by the total number of pixels of the image to obtain the energy of the image. See the following formula (1):

E = (Σ I(x, y)²) / N  (1)

where E represents the energy value of the image, I(x, y) represents the gray value of the pixel point at coordinates (x, y), and N represents the total number of pixels of the image.
In the above manner, the energy value of each region block can be calculated. The energy values of all region blocks can be calculated in turn, in order from the largest block area to the smallest, and the first 20 region blocks with the highest energy values are taken for watermark embedding.
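Formula (1) can be sketched directly (a toy illustration on small lists; a real implementation would operate on grayscale frame arrays):

```python
def block_energy(gray_block):
    """E = (sum of squared gray values) / (total number of pixels),
    per formula (1) above."""
    pixels = [p for row in gray_block for p in row]
    return sum(p * p for p in pixels) / len(pixels)

# a flat low-intensity block versus a high-contrast block
flat = [[10, 10], [10, 10]]
contrast = [[0, 255], [255, 0]]
e_flat = block_energy(flat)          # 100.0
e_contrast = block_energy(contrast)  # 32512.5
```

Note that, as written, formula (1) measures mean squared intensity rather than variance, so a uniformly bright block would also score high; the document's own interpretation (higher energy meaning more dispersed pixel values) holds for blocks of comparable brightness.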
(5) Watermark information embedding.
For the selected robust target region blocks, a Fourier transform may be performed on each target region block to convert it from the spatial domain to the frequency domain. In the frequency domain, the target region block is divided into 4 sub-regions according to the four quadrants, and the upper-left sub-region is taken for modification. Each target region block embeds one bit of watermark information.
For the sub-region taken from each target region block, consider the watermark bit that needs to be embedded:

If the bit to be embedded is 0, a circle with a radius of 2/3 of the sub-region side length is embedded in the frequency domain of the sub-region; the circle is embedded by modifying the frequency domain coefficients at the circumference positions, i.e., increasing the magnitude of the frequency domain coefficients lying on the circle.

If the bit to be embedded is 1, a circle with a radius of 1/3 of the sub-region side length is embedded in the frequency domain of the sub-region, in the same manner: the circle is embedded by increasing the magnitude of the frequency domain coefficients at the circumference positions.
After embedding the bit information, the robust target region block can be converted back by inverse Fourier transform, restoring the target region block to spatial-domain pixels, and the target region block is replaced at its original picture position to obtain an image embedded with watermark information.
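The circle-embedding step can be sketched as below. This is a simplified stand-in: it operates on a plain grid of magnitudes rather than an actual FFT output, and the boost value and ±0.5 circumference band are assumed parameters; a real implementation would apply the boost to the upper-left quadrant coefficients after a 2-D Fourier transform and then inverse-transform the block.

```python
import math

def embed_bit(coeffs, bit, boost=5.0):
    """Boost the magnitudes lying on a circle centred in the sub-region:
    radius = 2/3 of the side length for bit 0, 1/3 for bit 1."""
    side = len(coeffs)
    ratio = 2.0 / 3.0 if bit == 0 else 1.0 / 3.0
    radius = ratio * side
    cx = cy = side / 2.0
    for y in range(side):
        for x in range(side):
            # modify only coefficients (approximately) on the circumference
            if abs(math.hypot(x - cx, y - cy) - radius) < 0.5:
                coeffs[y][x] += boost
    return coeffs

grid = [[1.0] * 12 for _ in range(12)]
embedded = embed_bit([row[:] for row in grid], bit=1)
```

Because the two bit values use different radii, the same position is touched by at most one of the two circles, which is what makes the embedded bit separable at extraction time.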
(6) Post-processing.
After all extracted video frames are processed as images in this way, a series of adjusted video frames is obtained. By combining and encoding the adjusted video frames into a complete video, a video carrying watermark information invisible to the naked eye is obtained.
The video watermark extraction process is described below. Since the watermark embedding process is a blind watermark method, the extraction of the watermark does not need the original video to assist in extraction, and only the video file to be extracted, namely the video after watermark embedding, is provided. The specific extraction steps are as follows:
First, the frame rate of the video after watermark embedding is unified. Since the frame rates of transmitted or processed videos may differ, the video may first be processed to unify the frame rate to 24 FPS. Then, frames are extracted from the video, and each extracted video frame is processed as an image.
For an image, according to the step (3) in the watermark embedding process, the image is subjected to quadtree division to divide the area blocks with different sizes. Then, for each region block, according to the method of calculating the region energy in the step (4) in the watermark embedding process, calculating the energy value of each region block, arranging the energy values in order from large to small, and extracting watermark information from the first 20 regions with the highest energy values.
For each selected robust target region block, the target region block can be domain-converted by Fourier transform into the frequency domain, divided into 4 regions according to the four quadrants, and the upper-left sub-region is taken for information extraction. For that sub-region, it can be determined whether the radius of the detected circle is 1/3 or 2/3 of the sub-region side length. If the radius is 1/3, bit 1 is extracted; if the radius is 2/3, bit 0 is extracted.
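Detection can be sketched by comparing the summed magnitude along each candidate circumference. As with the embedding sketch, this is a simplified stand-in operating on a plain magnitude grid; the function name and the ±0.5 band are assumptions.

```python
import math

def detect_bit(coeffs):
    """Decide whether the boosted circle in a square magnitude grid sits
    at radius 1/3 (bit 1) or 2/3 (bit 0) of the side length, by summing
    magnitudes along each candidate circumference."""
    side = len(coeffs)
    cx = cy = side / 2.0

    def ring_sum(ratio):
        radius = ratio * side
        return sum(coeffs[y][x]
                   for y in range(side) for x in range(side)
                   if abs(math.hypot(x - cx, y - cy) - radius) < 0.5)

    return 1 if ring_sum(1.0 / 3.0) > ring_sum(2.0 / 3.0) else 0

# synthetic quadrant: baseline 1.0 with the radius-1/3 ring boosted
side = 12
grid = [[1.0] * side for _ in range(side)]
cx = cy = side / 2.0
for y in range(side):
    for x in range(side):
        if abs(math.hypot(x - cx, y - cy) - side / 3.0) < 0.5:
            grid[y][x] += 5.0
bit = detect_bit(grid)  # 1
```

Comparing the two candidate rings, rather than thresholding a single one, keeps the decision relative and therefore more tolerant of overall coefficient scaling.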
The watermark bits extracted from all video frames are then combined by voting across the video frames to obtain a bit sequence. Finally, the bit sequence is converted back into the characters represented by the ASCII codes, and the ASCII codes are restored to the original watermark information form using the inverse base64 decoding operation, which completes the entire watermark extraction process.
Through the steps, a complete watermark embedding and extracting system can be obtained.
In the embodiment of the application, the method of finding robust region blocks on the image using the quadtree and energy values, performing a Fourier transform, and embedding a piece of information in the frequency domain enables the image or video to resist stronger interference during propagation, while having little perceptual impact on the image and video and providing a better experience for users.
It may be appreciated that in the embodiments of the present application, the content of the user information, for example, watermark information, video after watermark embedding, etc., if data related to the user information or enterprise information is involved, when the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, or blurring processing is performed on the information, so as to eliminate the correspondence between the information and the user; and the related data collection and processing should be strictly according to the requirements of relevant national laws and regulations when the example is applied, obtain the informed consent or independent consent of the personal information body, and develop the subsequent data use and processing behaviors within the authorized scope of laws and regulations and personal information body.
Continuing with the description below, the video watermark processing apparatus 354 provided in the embodiments of the present application is implemented as an exemplary structure of software modules. In some embodiments, as shown in fig. 2, the video watermark processing apparatus 354 includes: a first video frame extraction module 3541a, configured to extract frames of an original video to which watermark information is to be added, so as to obtain a plurality of video frames; a first dividing module 3542a, configured to perform multiple iterations of quadtree division on each video frame, so as to obtain a plurality of region blocks corresponding to the video frame, the plurality of region blocks having at least two grid size parameters; a first energy calculating module 3543a, configured to calculate the region energy of each region block based on the pixel values of the pixel points it contains, to obtain an energy value of each region block; a first screening module 3544a, configured to screen a specific number of target region blocks from the plurality of region blocks according to the energy value of each region block; an information embedding module 3545a, configured to, for each video frame, embed each bit of the bit stream corresponding to the watermark information into a target region block in the video frame in sequence, so as to obtain a watermark-embedded video frame; and a video coding module 3546a, configured to perform video coding on all the watermark-embedded video frames, so as to obtain the watermark-embedded video.
In some embodiments, the first video frame extraction module is further to: performing frame rate unified processing on the original video to obtain a frame rate unified video with a specific frame rate; performing equal time interval frame extraction on the unified video to obtain the plurality of video frames; or, performing non-equal time interval frame extraction on the unified video to obtain the plurality of video frames.
In some embodiments, the apparatus further comprises: the watermark coding module is used for coding the watermark information to obtain a watermark code corresponding to the watermark information; the conversion module is used for converting bit representation in the watermark coding into a bit stream form to obtain a bit stream corresponding to the watermark information; a bit length determining module, configured to determine a bit length of a bit stream corresponding to the watermark information; and the quantity determining module is used for determining the specific quantity corresponding to the target area block according to the bit length.
In some embodiments, the first partitioning module is further to: dividing an image of the video frame into four region blocks with the same grid size parameters when performing the first quadtree division on the video frame; when the video frame is divided into the nth quadtree, each region block obtained by dividing the nth-1 quadtree is obtained, and the image of each region block obtained by dividing the nth-1 quadtree is divided into four region blocks with the same grid size parameters again; n is an integer greater than 1; the plurality of region blocks comprise region blocks obtained after each quadtree division.
In some embodiments, the image of the video frame is a grayscale image; the first energy calculation module is further configured to: acquiring a gray value of each pixel point in the region block aiming at each region block in the gray image; an energy value of the region block is determined based on the gray value of each pixel and the total number of pixels in the region block.
In some embodiments, the first energy calculation module is further to: determining the square value of the gray value of each pixel point; summing the square values of the gray values of all pixel points in the area block to obtain a gray value square sum; and determining the ratio of the gray value square sum to the total number of pixel points in the area block as the energy value of the area block.
In some embodiments, the first screening module is further to: sequencing all the regional blocks according to the order of the energy values of the regional blocks from big to small to obtain a regional block sequence; sequentially selecting the specific number of target region blocks from the region block sequence; wherein there is no region overlap between the specific number of target region blocks.
In some embodiments, the first screening module is further to: in response to the specific number being N, determining the first N region blocks in the region block sequence as primarily selected target region blocks; determining whether at least two area blocks with overlapped areas exist in the N initially selected target area blocks; if at least two area blocks with overlapped areas exist, determining an area block to be reserved from the area blocks with the largest grid size parameter in the at least two area blocks with overlapped areas; deleting other area blocks except the area block to be reserved in at least two area blocks with overlapped areas; after deleting the other area blocks, continuing to select area blocks with the same number as the deleted other area blocks as the initial selection target area blocks according to the sequence of the area blocks in the area block sequence; and when determining that the area blocks with the overlapped areas do not exist in the N primarily selected target area blocks, determining the N primarily selected target area blocks as the finally selected target area blocks.
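The screening-with-overlap logic can be illustrated with a simplified greedy variant. Note the assumption: on conflict this sketch keeps the earlier (higher-energy) block, whereas the procedure above keeps the block with the largest grid size parameter among the overlapping blocks, so it is an approximation of the outcome, not the exact procedure.

```python
def select_non_overlapping(blocks_sorted, n):
    """Walk an energy-sorted sequence of (x, y, w, h) blocks and keep
    the first n blocks that do not overlap any already-kept block."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        # axis-aligned rectangle intersection test
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    kept = []
    for blk in blocks_sorted:
        if len(kept) == n:
            break
        if all(not overlaps(blk, k) for k in kept):
            kept.append(blk)
    return kept

# second candidate overlaps the first, so the third is pulled in instead
candidates = [(0, 0, 64, 64), (32, 32, 64, 64), (200, 200, 32, 32)]
selected = select_non_overlapping(candidates, 2)
```

Like the refill step in the text, the walk simply continues down the sequence whenever a candidate is rejected, so the final selection always contains n mutually non-overlapping blocks when enough candidates exist.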
In some embodiments, the information embedding module is further to: for each video frame, sorting the specific number of target area blocks in the video frame according to the order of the energy values of the target area blocks from large to small to form a target area block sequence; four-quadrant division is carried out on each target area block in the target area block sequence to obtain four sub-areas; determining the subregions positioned at specific positions in the four subregions as information embedded subregions in the target region block; and embedding each bit information in the bit stream corresponding to the watermark information into an information embedding subarea of a target area block in the target area block sequence in sequence according to the sequence of the target area blocks in the target area block sequence, so as to obtain the video frame after watermark embedding.
In some embodiments, the information embedding module is further to: converting pixels in each target region block in the target region block sequence from a space domain to a frequency domain to obtain a plurality of frequency domain coefficients in the target region block; determining the region center position and the side length of the sub-region of the information embedding sub-region of the target region block; if bit information to be embedded in a bit stream corresponding to the watermark information is first-class bit information, constructing a first-class circle in the information embedding subarea by taking the central position of the area of the information embedding subarea as a circle center and taking the length of the side length of the subarea as a radius in a first proportion; adopting a preset frequency domain coefficient increment value to carry out increment processing on the frequency domain coefficient at each position where the circumference of the first class circle is positioned, so as to obtain a frequency domain coefficient after the first processing; wherein the target region block having the first processed frequency domain coefficient constitutes a target region block in which the first type of bit information is embedded; if bit information to be embedded in the bit stream corresponding to the watermark information is second-class bit information, constructing a second-class circle in the information embedding subarea by taking the central position of the area of the information embedding subarea as a circle center and taking the length of the side length of the subarea as a radius in a second proportion; the second ratio is different from the first ratio; adopting the frequency domain coefficient increment value to carry out increment processing on the frequency domain coefficient at each position where the circumference of the second class circle is positioned, so as to obtain a frequency domain coefficient after second processing; 
wherein the target region block having the second processed frequency domain coefficients constitutes a target region block embedding the second type of bit information.
In some embodiments, the apparatus further comprises: a domain conversion module, configured to convert, by fourier transform processing, a target region block having the first processed frequency domain coefficient and a target region block having the second processed frequency domain coefficient from the frequency domain to the spatial domain, to obtain a plurality of target region blocks in which the watermark information is embedded; and the region overlapping module is used for overlapping a plurality of target region blocks embedded with the watermark information to the original position of each target region block according to the original position of each target region block in the video frame, so as to obtain the video frame with the embedded watermark.
In other embodiments, as shown in fig. 2, the video watermark processing apparatus may further include: a second video frame extraction module 3541b, configured to extract frames of the watermark-embedded video to obtain a plurality of watermark video frames; a second dividing module 3542b, configured to perform multiple iterations of quadtree division on each watermark video frame, so as to obtain a plurality of watermark region blocks corresponding to the watermark video frame, the plurality of watermark region blocks having at least two grid size parameters; a second energy calculating module 3543b, configured to perform region energy calculation on each watermark region block based on the pixel values of the pixel points it contains, so as to obtain an energy value of each watermark region block; a second screening module 3544b, configured to screen a specific number of target watermark region blocks from the plurality of watermark region blocks according to the energy value of each watermark region block; an information extraction module 3545b, configured to extract, for each watermark video frame, bit information of the bit stream corresponding to the watermark information from each target watermark region block in sequence; and a watermark information determination module 3546b, configured to determine the watermark information embedded in the watermark-embedded video based on the bit information extracted from all watermark video frames.
It should be noted that, the description of the apparatus in the embodiment of the present application is similar to the description of the embodiment of the method described above, and has similar beneficial effects as the embodiment of the method, so that a detailed description is omitted. For technical details not disclosed in the embodiments of the present apparatus, please refer to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer program product comprising executable instructions, i.e., computer instructions, stored in a computer readable storage medium. When a processor of an electronic device reads the executable instructions from the computer readable storage medium and executes them, the electronic device performs the methods described in the embodiments of the present application.
The present embodiments provide a storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method provided by the embodiments of the present application, for example, as shown in fig. 3.
In some embodiments, the storage medium may be a computer-readable storage medium, such as Ferroelectric Random Access Memory (FRAM), Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, magnetic surface memory, optical disk, or Compact Disk Read Only Memory (CD-ROM); or various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (13)

1. A method of video watermarking, the method comprising:
extracting frames of an original video to be added with watermark information to obtain a plurality of video frames;
performing multiple iterations of quadtree division on each video frame to obtain a plurality of region blocks corresponding to the video frame; the plurality of region blocks having at least two grid size parameters; the image of the video frame is a gray image;
acquiring a gray value of each pixel point in the region block aiming at each region block in the gray image;
determining the square value of the gray value of each pixel point; summing the square values of the gray values of all pixel points in the area block to obtain a gray value square sum; determining the ratio of the gray value square sum to the total number of pixel points in the area block as an energy value of the area block;
sequencing all the regional blocks according to the order of the energy values of the regional blocks from big to small to obtain a regional block sequence;
In response to the specific number being N, determining the first N area blocks in the area block sequence as primarily selected target area blocks; determining whether at least two area blocks with overlapped areas exist in the N primarily selected target area blocks;
if at least two area blocks with overlapped areas exist, determining the area block with the maximum grid size parameter in the at least two area blocks with overlapped areas as an area block to be reserved, and deleting other area blocks except the area block to be reserved in the at least two area blocks with overlapped areas;
after deleting the other area blocks, continuing to select area blocks with the same number as the deleted other area blocks as the initial selection target area blocks according to the sequence of the area blocks in the area block sequence;
when determining that no region block with overlapped regions exists in the N initially selected target region blocks, determining the N initially selected target region blocks as the finally selected target region blocks;
for each video frame, embedding each bit information in a bit stream corresponding to the watermark information into a target area block in the video frame in sequence to obtain a video frame with embedded watermark;
and carrying out video coding on all the video frames with the embedded watermarks to obtain the video with the embedded watermarks.
2. The method according to claim 1, wherein the extracting frames of the original video to which the watermark information is to be added to obtain a plurality of video frames includes:
performing frame rate unification on the original video to obtain a unified video with a specific frame rate;
performing frame extraction at equal time intervals on the unified video to obtain the plurality of video frames; or performing frame extraction at non-equal time intervals on the unified video to obtain the plurality of video frames.
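The equal-time-interval branch of claim 2 can be sketched as an index computation once the video has been normalised to a fixed frame rate; the one-second interval in the test is an assumed parameter, not a value from the claim.

```python
def frame_indices(total_frames, fps, interval_seconds):
    """Frame indices sampled at equal time intervals from a video that
    has already undergone frame-rate unification to `fps` frames/sec."""
    step = max(1, round(fps * interval_seconds))  # frames per sampling interval
    return list(range(0, total_frames, step))
```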
3. The method according to claim 1, wherein the method further comprises:
performing coding processing on the watermark information to obtain a watermark code corresponding to the watermark information;
converting the bit representation of the watermark code into bit stream form to obtain the bit stream corresponding to the watermark information;
determining the bit length of the bit stream corresponding to the watermark information;
and determining, according to the bit length, the specific number of target region blocks.
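Claim 3 leaves the concrete coding of the watermark open. The sketch below assumes UTF-8 bytes serialized most-significant-bit first as an example coding, and takes the specific number N of target region blocks equal to the bit length of the stream.

```python
def watermark_bitstream(watermark):
    """Encode watermark text as a bit stream (assumed coding:
    UTF-8 bytes, most significant bit first)."""
    bits = []
    for byte in watermark.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def target_block_count(bitstream):
    # one target region block is needed per embedded bit, so the
    # specific number N equals the bit length of the stream
    return len(bitstream)
```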
4. The method of claim 1, wherein performing the multiple iterative quadtree divisions on each video frame to obtain the plurality of region blocks corresponding to the video frame comprises:
dividing the image of the video frame into four region blocks with the same grid size parameter when performing the first quadtree division on the video frame;
when performing the nth quadtree division on the video frame, obtaining each region block produced by the (n-1)th quadtree division, and dividing the image of each region block produced by the (n-1)th quadtree division again into four region blocks with the same grid size parameter; n is an integer greater than 1; and the plurality of region blocks comprise the region blocks obtained after each quadtree division.
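The iterative division of claim 4 can be sketched as follows: each round splits every block of the previous round into four equal quadrants, and blocks from every round are retained, so the result carries at least two grid size parameters. A minimal sketch assuming frame dimensions divisible by 2**depth.

```python
def quadtree_blocks(width, height, depth):
    """All region blocks produced by `depth` rounds of quadtree division.

    Returns (x, y, w, h) tuples; blocks from every round are kept."""
    blocks = []
    current = [(0, 0, width, height)]  # the full frame, split in round 1
    for _ in range(depth):
        nxt = []
        for (x, y, w, h) in current:
            hw, hh = w // 2, h // 2
            nxt += [(x, y, hw, hh), (x + hw, y, hw, hh),
                    (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
        blocks += nxt      # retain this round's blocks
        current = nxt      # and split them again next round
    return blocks
```

For an 8x8 frame with two rounds this yields 4 blocks of size 4 plus 16 blocks of size 2, i.e. two coexisting grid size parameters.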
5. The method of claim 1, wherein no region overlap exists between the specific number of target region blocks sequentially selected from the region block sequence.
6. The method according to claim 1, wherein, for each video frame, sequentially embedding each bit of the bit stream corresponding to the watermark information into a target region block in the video frame to obtain a watermark-embedded video frame comprises:
for each video frame, sorting the specific number of target region blocks in the video frame in descending order of their energy values to form a target region block sequence;
performing four-quadrant division on each target region block in the target region block sequence to obtain four sub-regions;
determining the sub-region located at a specific position among the four sub-regions as the information embedding sub-region of the target region block;
and sequentially embedding, in the order of the target region blocks in the target region block sequence, each bit of the bit stream corresponding to the watermark information into the information embedding sub-region of a target region block, to obtain the watermark-embedded video frame.
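The four-quadrant split of claim 6 can be sketched as follows; which quadrant counts as the "specific position" is left open by the claim, so the quadrant index here is an assumed parameter.

```python
def information_subregion(x, y, size, quadrant=0):
    """Split a square target region block at (x, y) with side `size`
    into four quadrants and return the sub-region at a fixed position.

    Quadrant indices (assumption, not from the claim):
    0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
    Returns (sub_x, sub_y, sub_side)."""
    half = size // 2
    dx = (quadrant % 2) * half
    dy = (quadrant // 2) * half
    return (x + dx, y + dy, half)
```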
7. The method according to claim 6, wherein sequentially embedding each bit of the bit stream corresponding to the watermark information into the information embedding sub-regions of the target region blocks in the target region block sequence comprises:
converting the pixels of each target region block in the target region block sequence from the spatial domain to the frequency domain to obtain a plurality of frequency domain coefficients of the target region block;
determining the region center position and the sub-region side length of the information embedding sub-region of the target region block;
if the bit to be embedded from the bit stream corresponding to the watermark information is first-class bit information, constructing a first-class circle in the information embedding sub-region, with the region center position of the information embedding sub-region as the circle center and a first proportion of the sub-region side length as the radius;
performing increment processing, with a preset frequency domain coefficient increment value, on the frequency domain coefficient at each position on the circumference of the first-class circle to obtain first-processed frequency domain coefficients; wherein the target region block having the first-processed frequency domain coefficients constitutes a target region block in which the first-class bit information is embedded;
if the bit to be embedded from the bit stream corresponding to the watermark information is second-class bit information, constructing a second-class circle in the information embedding sub-region, with the region center position of the information embedding sub-region as the circle center and a second proportion of the sub-region side length as the radius; the second proportion being different from the first proportion;
performing increment processing, with the frequency domain coefficient increment value, on the frequency domain coefficient at each position on the circumference of the second-class circle to obtain second-processed frequency domain coefficients; wherein the target region block having the second-processed frequency domain coefficients constitutes a target region block in which the second-class bit information is embedded.
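One way to realize the two-circle embedding of claim 7 is sketched below for a single square gray block: a 2-D DFT is taken, the coefficients lying on a circle whose radius encodes the bit are incremented, and the block is transformed back. The radius fractions r0/r1 and the increment delta are illustrative assumptions rather than values from the claims, and for simplicity the circle is drawn over the whole block instead of inside the information embedding sub-region as the claim specifies.

```python
import numpy as np

def embed_bit(block, bit, r0=0.20, r1=0.35, delta=50.0):
    """Embed one bit into a square gray block by incrementing DFT
    coefficients on a circle whose radius encodes the bit value."""
    f = np.fft.fftshift(np.fft.fft2(block.astype(np.float64)))
    h, w = block.shape
    cy, cx = h // 2, w // 2
    radius = (r1 if bit else r0) * min(h, w)   # bit selects the circle radius
    yy, xx = np.ogrid[:h, :w]
    ring = np.abs(np.hypot(yy - cy, xx - cx) - radius) < 0.5
    f[ring] += delta                           # increment coefficients on the circle
    # back to the spatial domain; the real part is kept in this sketch,
    # ignoring the conjugate-symmetry bookkeeping a full implementation needs
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```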
8. The method according to claim 7, wherein, after sequentially embedding each bit of the bit stream corresponding to the watermark information into the information embedding sub-regions of the target region blocks in the target region block sequence, the method further comprises:
converting the target region blocks having the first-processed frequency domain coefficients and the target region blocks having the second-processed frequency domain coefficients from the frequency domain to the spatial domain through inverse Fourier transform processing, to obtain a plurality of target region blocks embedded with the watermark information;
and superimposing, according to the original position of each target region block in the video frame, the plurality of target region blocks embedded with the watermark information at their original positions, to obtain the watermark-embedded video frame.
9. A video watermark processing method, the method comprising:
extracting frames of the watermark-embedded video to obtain a plurality of watermark video frames;
performing multiple iterative quadtree divisions on each watermark video frame to obtain a plurality of watermark region blocks corresponding to the watermark video frame; the plurality of watermark region blocks having at least two grid size parameters; the image of the watermark video frame being a gray image;
for each watermark region block in the gray image, acquiring the gray value of each pixel in the watermark region block; determining the squared gray value of each pixel; summing the squared gray values of all pixels in the watermark region block to obtain a sum of squared gray values; determining the ratio of the sum of squared gray values to the total number of pixels in the watermark region block as the energy value of the watermark region block;
sorting all the watermark region blocks in descending order of their energy values to obtain a watermark region block sequence;
in response to a specific number N, determining the first N region blocks in the watermark region block sequence as initially selected target watermark region blocks; determining whether at least two watermark region blocks with overlapping regions exist among the N initially selected target watermark region blocks;
if at least two watermark region blocks with overlapping regions exist, determining the watermark region block with the largest grid size parameter among them as a watermark region block to be retained, and deleting the other watermark region blocks, other than the watermark region block to be retained, from the at least two overlapping watermark region blocks;
after deleting the other watermark region blocks, continuing to select, following the order of the watermark region block sequence, the same number of watermark region blocks as were deleted as initially selected target watermark region blocks;
when it is determined that no watermark region blocks with overlapping regions exist among the N initially selected target watermark region blocks, determining the N initially selected target watermark region blocks as the finally selected target watermark region blocks;
for each watermark video frame, sequentially extracting from each target watermark region block the bit information of the bit stream corresponding to the watermark information;
and determining the watermark information embedded in the watermark-embedded video based on the bit information extracted from all the watermark video frames.
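Claim 9 does not spell out how a bit is read back from a target watermark region block. A decoding heuristic consistent with the two-circle embedding of claim 7 is to compare DFT magnitude energy on the two candidate circles and report which ring is stronger; the radius fractions below are assumed values, not taken from the claims.

```python
import numpy as np

def extract_bit(block, r0=0.20, r1=0.35):
    """Recover one bit from a square gray block by comparing the mean
    DFT magnitude on the two candidate circles; the stronger ring
    indicates which radius was boosted at embedding time."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(block.astype(np.float64))))
    h, w = block.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    d = np.hypot(yy - cy, xx - cx)             # distance of each coefficient from center
    e0 = f[np.abs(d - r0 * min(h, w)) < 0.5].mean()
    e1 = f[np.abs(d - r1 * min(h, w)) < 0.5].mean()
    return int(e1 > e0)
```

A block whose spectrum concentrates near the larger radius decodes as 1, near the smaller radius as 0.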
10. A video watermarking apparatus, the apparatus comprising:
a first video frame extraction module, configured to extract frames of the original video to which watermark information is to be added, to obtain a plurality of video frames;
a first dividing module, configured to perform multiple iterative quadtree divisions on each video frame to obtain a plurality of region blocks corresponding to the video frame; the plurality of region blocks having at least two grid size parameters; the image of the video frame being a gray image;
a first energy calculation module, configured to, for each region block in the gray image, acquire the gray value of each pixel in the region block; determine the squared gray value of each pixel; sum the squared gray values of all pixels in the region block to obtain a sum of squared gray values; and determine the ratio of the sum of squared gray values to the total number of pixels in the region block as the energy value of the region block;
a first screening module, configured to sort all the region blocks in descending order of their energy values to obtain a region block sequence; in response to a specific number N, determine the first N region blocks in the region block sequence as initially selected target region blocks; determine whether at least two region blocks with overlapping regions exist among the N initially selected target region blocks; if at least two region blocks with overlapping regions exist, determine the region block with the largest grid size parameter among them as a region block to be retained, and delete the other region blocks, other than the region block to be retained, from the at least two overlapping region blocks; after deleting the other region blocks, continue to select, following the order of the region block sequence, the same number of region blocks as were deleted as initially selected target region blocks; and when it is determined that no region blocks with overlapping regions exist among the N initially selected target region blocks, determine the N initially selected target region blocks as the finally selected target region blocks;
an information embedding module, configured to, for each video frame, sequentially embed each bit of the bit stream corresponding to the watermark information into a target region block in the video frame to obtain a watermark-embedded video frame;
and a video coding module, configured to perform video coding on all the watermark-embedded video frames to obtain the watermark-embedded video.
11. A video watermarking apparatus, the apparatus comprising:
a second video frame extraction module, configured to extract frames of the watermark-embedded video to obtain a plurality of watermark video frames;
a second dividing module, configured to perform multiple iterative quadtree divisions on each watermark video frame to obtain a plurality of watermark region blocks corresponding to the watermark video frame; the plurality of watermark region blocks having at least two grid size parameters; the image of the watermark video frame being a gray image;
a second energy calculation module, configured to, for each watermark region block in the gray image, acquire the gray value of each pixel in the watermark region block; determine the squared gray value of each pixel; sum the squared gray values of all pixels in the watermark region block to obtain a sum of squared gray values; and determine the ratio of the sum of squared gray values to the total number of pixels in the watermark region block as the energy value of the watermark region block;
a second screening module, configured to sort all the watermark region blocks in descending order of their energy values to obtain a watermark region block sequence; in response to a specific number N, determine the first N region blocks in the watermark region block sequence as initially selected target watermark region blocks; determine whether at least two watermark region blocks with overlapping regions exist among the N initially selected target watermark region blocks; if at least two watermark region blocks with overlapping regions exist, determine the watermark region block with the largest grid size parameter among them as a watermark region block to be retained, and delete the other watermark region blocks, other than the watermark region block to be retained, from the at least two overlapping watermark region blocks; after deleting the other watermark region blocks, continue to select, following the order of the watermark region block sequence, the same number of watermark region blocks as were deleted as initially selected target watermark region blocks; and when it is determined that no watermark region blocks with overlapping regions exist among the N initially selected target watermark region blocks, determine the N initially selected target watermark region blocks as the finally selected target watermark region blocks;
an information extraction module, configured to, for each watermark video frame, sequentially extract from each target watermark region block the bit information of the bit stream corresponding to the watermark information;
and a watermark information determining module, configured to determine, based on the bit information extracted from all the watermark video frames, the watermark information embedded in the watermark-embedded video.
12. An electronic device, comprising:
a memory for storing executable instructions; and a processor, configured to implement, when executing the executable instructions stored in the memory, the video watermark processing method of any one of claims 1 to 8, or of claim 9.
13. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the video watermark processing method according to any one of claims 1 to 8, or claim 9.
CN202311320719.9A 2023-10-12 2023-10-12 Video watermark processing method, video watermark processing device, electronic equipment and storage medium Active CN117061768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311320719.9A CN117061768B (en) 2023-10-12 2023-10-12 Video watermark processing method, video watermark processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117061768A CN117061768A (en) 2023-11-14
CN117061768B true CN117061768B (en) 2024-01-30

Family

ID=88655802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311320719.9A Active CN117061768B (en) 2023-10-12 2023-10-12 Video watermark processing method, video watermark processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117061768B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950405A (en) * 2010-08-10 2011-01-19 浙江大学 Video content-based watermarks adding method
CN112217958A (en) * 2020-09-15 2021-01-12 陕西科技大学 Method for preprocessing digital watermark carrier image irrelevant to device color space
WO2021093648A1 (en) * 2019-11-11 2021-05-20 阿里巴巴集团控股有限公司 Watermark information embedding method and apparatus
CN112907433A (en) * 2021-03-25 2021-06-04 苏州科达科技股份有限公司 Digital watermark embedding method, digital watermark extracting device, digital watermark embedding apparatus, digital watermark extracting apparatus, and digital watermark extracting medium
CN113538197A (en) * 2020-04-15 2021-10-22 北京达佳互联信息技术有限公司 Watermark extraction method, device, storage medium and electronic equipment
CN114626967A (en) * 2022-03-17 2022-06-14 阳光电源股份有限公司 Digital watermark embedding and extracting method, device, equipment and storage medium
CN116095341A (en) * 2023-01-13 2023-05-09 北京永新视博数字电视技术有限公司 Watermark embedding method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040337B (en) * 2020-09-01 2022-04-15 腾讯科技(深圳)有限公司 Video watermark adding and extracting method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112561766B (en) Image steganography and extraction method and device and electronic equipment
Ghadi et al. A novel zero‐watermarking approach of medical images based on Jacobian matrix model
Wahed et al. Reversible data hiding with interpolation and adaptive embedding
Li et al. LSB-based steganography using reflected gray code for color quantum images
Pevny et al. Exploring non-additive distortion in steganography
Ghosal et al. Application of Lah transform for security and privacy of data through information hiding in telecommunication
KR102637177B1 (en) Method and apparatus for verifying integrity of image based on watermark
Srinivas et al. Web image authentication using embedding invisible watermarking
CN110634096B (en) Self-adaptive multi-mode information hiding method and device
CN115358911A (en) Screen watermark generation method, device, equipment and computer readable storage medium
Mukherjee et al. A multi level image steganography methodology based on adaptive PMS and block based pixel swapping
CN117061768B (en) Video watermark processing method, video watermark processing device, electronic equipment and storage medium
Khosravi et al. A novel joint secret image sharing and robust steganography method using wavelet
Baziyad et al. Toward stronger energy compaction for high capacity dct-based steganography: a region-growing approach
Zhang et al. A robust and high-efficiency blind watermarking method for color images in the spatial domain
CN111738972A (en) Building detection system, method and device
Margalikas et al. Image steganography based on color palette transformation in color space
Channapragada et al. Watermarking techniques in curvelet domain
CN113763225A (en) Image perception hashing method, system and equipment and information data processing terminal
Du et al. Robust HDR video watermarking method based on the HVS model and T-QR
CN112613055A (en) Image processing system and method based on distributed cloud server and digital-image conversion
Mandal et al. Variant of LSB steganography algorithm for hiding information in RGB images
CN112966230A (en) Information steganography and extraction method, device and equipment
CN117437108B (en) Watermark embedding method for image data
Lydia et al. Robust Digital Image Watermarking Approach for Secure Medical Data Transmission in Smart Cities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40098962; Country of ref document: HK