CN113766273A - Method and device for processing video data

Method and device for processing video data

Info

Publication number
CN113766273A
Authority
CN
China
Prior art keywords
video frame
processed
histogram
value
video
Prior art date
Legal status
Pending
Application number
CN202110009538.9A
Other languages
Chinese (zh)
Inventor
左鑫孟
赖荣凤
梅涛
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202110009538.9A
Publication of CN113766273A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 - Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309 - Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration using histogram techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 - Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345 - Processing of video elementary streams involving reformatting operations, the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218 - Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245 - Processing of video elementary streams involving reformatting operations, the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method and a device for processing video data, and relates to the technical field of video processing. One embodiment of the method comprises: acquiring a video frame to be processed from a video to be processed; graying the video frame to be processed to obtain a gray level histogram; calculating the coding value of the video frame to be processed according to the gray level histogram; processing each pixel point in the video frame to be processed according to the coded value of the video frame to be processed, thereby obtaining a coded video frame; and replacing the video frame to be processed with the encoded video frame so as to synthesize the encoded video. This implementation solves the technical problems of a complex processing procedure and high computational overhead.

Description

Method and device for processing video data
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method and an apparatus for processing video data.
Background
With the development of science and technology, Internet services have become increasingly integrated into daily life and have replaced many traditional ways of doing business; Internet applications, mini-programs and the like have become indispensable tools in people's life and work. More and more users share photos or videos of their lives in short-video, microblog and similar applications, or have their videos passively collected by financial institutions, medical institutions and the like through liveness detection and other video-based checks when handling important online business involving money, health and so on.
If the video data is attacked by hackers, or if security vulnerabilities are implanted in the applications on user terminals, the video data, and with it the image information of the users, can be leaked. Lawbreakers can then use the leaked image information to train a face model that passes user identity verification. Therefore, video data needs to be processed to protect user privacy.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
the existing video processing approaches suffer from technical problems such as a complex processing procedure and high computational overhead.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for processing video data, so as to solve the technical problems of complex processing procedure and high computational overhead.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of processing video data, including:
acquiring a video frame to be processed from a video to be processed;
graying the video frame to be processed to obtain a gray level histogram;
calculating the coding value of the video frame to be processed according to the gray level histogram;
processing each pixel point in the video frame to be processed according to the coded value of the video frame to be processed, thereby obtaining a coded video frame;
and replacing the video frame to be processed with the coded video frame so as to synthesize the coded video.
Optionally, graying the to-be-processed video frame to obtain a grayscale histogram, including:
graying the video frame to be processed to obtain a first gray level histogram;
and according to a preset color interval step length, carrying out color interval combination on the first gray level histogram to obtain a second gray level histogram.
Optionally, the abscissa of the first grayscale histogram is composed of 256 color bins, the abscissa of the second grayscale histogram is composed of 8 color bins, and the color bin step size is 32.
Optionally, calculating an encoding value of the video frame to be processed according to the gray histogram includes:
calculating a median color interval of the gray level histogram;
and calculating the coding value of the video frame to be processed according to the gray level histogram and the median color interval.
Optionally, calculating a median color interval of the gray histogram includes:
calculating the total number of pixel points of the video frame to be processed to determine the median of the total number of the pixel points;
and finding out the median color interval to which the median of the total number of the pixel points belongs from the gray level histogram.
Optionally, calculating an encoding value of the video frame to be processed according to the gray level histogram and the median color interval, including:
comparing the number of pixel points corresponding to each color interval of the gray level histogram with the number of pixel points corresponding to the median color interval;
obtaining a binary character string of the gray level histogram according to the comparison result; wherein the comparison results are distinguished by 0 and 1;
and calculating the coding value of the video frame to be processed according to the binary character string.
Optionally, obtaining a binary string of the gray histogram according to the comparison result, including:
for each color interval of the gray histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 1, otherwise, setting the value to be 0, and thus obtaining a binary string of the gray histogram; or alternatively,
for each color interval of the gray histogram, if the number of the pixel points corresponding to the color interval is greater than or equal to the number of the pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 0, otherwise, setting the value to be 1, thereby obtaining the binary string of the gray histogram.
Optionally, calculating an encoded value of the video frame to be processed according to the binary string includes:
carrying out reverse order operation on the binary character string to obtain a reverse order binary character string;
and converting the reverse order binary character string into a decimal character string so as to obtain the coded value of the video frame to be processed.
Optionally, processing each pixel point in the video frame to be processed according to the encoded value of the video frame to be processed, so as to obtain an encoded video frame, including:
for each channel of each pixel point in the video frame to be processed, if the pixel value of the channel is greater than or equal to the encoding value of the video frame to be processed, adding M to the pixel value of the channel, otherwise subtracting M, thereby obtaining an encoded video frame;
wherein M is greater than zero.
In addition, according to another aspect of an embodiment of the present invention, there is provided an apparatus for processing video data, including:
the acquisition module is used for acquiring a video frame to be processed from a video to be processed;
the graying module is used for graying the video frame to be processed to obtain a grayscale histogram;
the calculation module is used for calculating the coding value of the video frame to be processed according to the gray level histogram;
the encoding module is used for processing each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, so as to obtain an encoded video frame;
a synthesizing module for replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
Optionally, the graying module is further configured to:
graying the video frame to be processed to obtain a first gray level histogram;
and according to a preset color interval step length, carrying out color interval combination on the first gray level histogram to obtain a second gray level histogram.
Optionally, the abscissa of the first grayscale histogram is composed of 256 color bins, the abscissa of the second grayscale histogram is composed of 8 color bins, and the color bin step size is 32.
Optionally, the computing module is further configured to:
calculating a median color interval of the gray level histogram;
and calculating the coding value of the video frame to be processed according to the gray level histogram and the median color interval.
Optionally, the computing module is further configured to:
calculating the total number of pixel points of the video frame to be processed to determine the median of the total number of the pixel points;
and finding out the median color interval to which the median of the total number of the pixel points belongs from the gray level histogram.
Optionally, the computing module is further configured to:
comparing the number of pixel points corresponding to each color interval of the gray level histogram with the number of pixel points corresponding to the median color interval;
obtaining a binary character string of the gray level histogram according to the comparison result; wherein the comparison results are distinguished by 0 and 1;
and calculating the coding value of the video frame to be processed according to the binary character string.
Optionally, the computing module is further configured to:
for each color interval of the gray histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 1, otherwise, setting the value to be 0, and thus obtaining a binary string of the gray histogram; or alternatively,
for each color interval of the gray histogram, if the number of the pixel points corresponding to the color interval is greater than or equal to the number of the pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 0, otherwise, setting the value to be 1, thereby obtaining the binary string of the gray histogram.
Optionally, the computing module is further configured to:
carrying out reverse order operation on the binary character string to obtain a reverse order binary character string;
and converting the reverse order binary character string into a decimal character string so as to obtain the coded value of the video frame to be processed.
Optionally, the encoding module is further configured to:
for each channel of each pixel point in the video frame to be processed, if the pixel value of the channel is greater than or equal to the encoding value of the video frame to be processed, adding M to the pixel value of the channel, otherwise subtracting M, thereby obtaining an encoded video frame;
wherein M is greater than zero.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method of any of the embodiments described above.
According to another aspect of the embodiments of the present invention, there is also provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above embodiments.
One embodiment of the above invention has the following advantages or benefits: because the technical means of processing each pixel point in the video frame to be processed according to the coded value of that frame and then synthesizing the coded video is adopted, the technical problems of a complex processing procedure and high computational overhead in the prior art are solved. The embodiment of the invention modifies the video content by processing video frames at the pixel level, protecting user privacy and preventing it from being stolen without noticeably distorting the video content; the processing procedure is simple, the computational cost is very small, and the processing can be done in real time. Therefore, the embodiment of the invention does not affect the user's viewing of the video, while also preventing the video content from being analyzed through illicit use of AI-related content recognition algorithms (such as face recognition, OCR recognition, action recognition and scene recognition), thereby preventing user information from being leaked.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram of a main flow of a method of processing video data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a main flow of a method of processing video data according to one referenceable embodiment of the present invention;
fig. 3 is a schematic diagram of a main flow of a method of processing video data according to another referenceable embodiment of the present invention;
fig. 4 is a schematic diagram of main blocks of an apparatus for processing video data according to an embodiment of the present invention;
FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 6 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, methods of processing video data mainly fall into the following categories:
1) Methods based on adversarial network conversion
This kind of method has advantages in image style transfer: it can convert an image from an X domain to a Y domain and then restore it from the Y domain back to the X domain. Structurally, a Cycle-GAN comprises two generators, each of which is paired with its own discriminator.
2) Method based on video encryption
This method encrypts the video pixel values and video content to obtain an encrypted video and a video key, and uses a pseudo-random generator to randomly permute integers within the range of a random seed to generate the variables.
3) Method based on replacement processing
This method replaces the parts of the video that contain sensitive information, for example by mosaic or masking processing, replacing the background, or substituting other images or synthesized faces.
The above approaches mainly have the following disadvantages:
1) Methods based on adversarial network conversion: when the application scenario only needs to convert video frames from the X domain to the Y domain, the extra discriminator is redundant; it introduces additional loss whose rate is difficult to converge, prolongs training, and limits generalization capability.
2) Methods based on video encryption: all computation has to run on the user terminal, and the continual encryption and decryption of data causes large computational overhead; the method is too complex for ordinary applications and requires more compute instances and hardware support.
3) Methods based on replacement processing: such a method cannot protect user privacy while still allowing the video to be used for recognition.
To solve the technical problems in the prior art, embodiments of the present invention provide a method for processing video data, which applies slight pixel-level modifications to video frame images so as to defeat AI-based inspection. In fact, for deep neural networks, small perturbations can alter what the model perceives. The embodiment of the invention processes the video content with imperceptible, slight data perturbations to protect user privacy. In this way, even if a user's video data on the network is captured illegally, a face model trained on that data cannot successfully identify the user.
Fig. 1 is a schematic diagram of a main flow of a method of processing video data according to an embodiment of the present invention. As an embodiment of the present invention, as shown in fig. 1, the method for processing video data may include:
step 101, obtaining a video frame to be processed from a video to be processed.
A frame-capture tool may be employed to extract at least one video frame from the video to be processed as a video frame to be processed. Alternatively, frames may be extracted at equal steps, resulting in multiple video frames that form a video frame sequence, as in the sketch below. The size of the step may be set according to actual needs, which is not limited in the embodiment of the present invention.
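For illustration only, the equal-step frame extraction described above might look like the following minimal sketch (OpenCV is assumed as the toolkit; the function name and the step value are assumptions for illustration, not part of this disclosure):

```python
import cv2

def extract_frames(video_path, step=10):
    """Grab every `step`-th frame from the video to be processed (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)  # one video frame to be processed (BGR)
        index += 1
    cap.release()
    return frames
```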
Step 102, graying the video frame to be processed to obtain a gray histogram.
Steps 102 to 105 are executed for each frame of the video frames to be processed obtained from the video to be processed. Specifically, in step 102, each video frame to be processed is grayed to obtain its grayscale histogram, which is the basis for the subsequent processing of that frame.
Optionally, step 102 may comprise: graying the video frame to be processed to obtain a first gray level histogram; and according to a preset color interval step length, carrying out color interval combination on the first gray level histogram to obtain a second gray level histogram. Optionally, the abscissa of the first grayscale histogram is composed of 256 color bins, the abscissa of the second grayscale histogram is composed of 8 color bins, and the color bin step size is 32. In the embodiment of the present invention, a graying formula may be adopted to gray the to-be-processed video frame to obtain a first grayscale histogram (i.e., a 256-bin grayscale histogram whose abscissa takes integer values in [0, 255]), and then bin merging is performed on the first grayscale histogram with 32 as the step length to obtain a second grayscale histogram (i.e., an 8-bin grayscale histogram). The abscissa of the 8-bin grayscale histogram comprises eight bins: [0, 31], [32, 63], [64, 95], [96, 127], [128, 159], [160, 191], [192, 223], [224, 255].
The embodiment of the invention merges the 256-bin gray level histogram into an 8-bin gray level histogram, which makes it convenient to represent the gray level histogram of the video frame to be processed as an eight-bit binary number and to calculate the coding value from that eight-bit binary number. Therefore, after the processing of step 102, an 8-bin gray histogram is obtained for each video frame to be processed.
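As a hedged sketch of this step (OpenCV and NumPy are assumed as the toolkit; the embodiment itself does not prescribe a library), the graying and the 256-bin-to-8-bin merging could be written as:

```python
import cv2
import numpy as np

def gray_histogram_8bin(frame):
    """Gray the frame, build a 256-bin histogram, then merge bins with a step of 32 (sketch)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)              # graying of the frame
    hist256, _ = np.histogram(gray, bins=256, range=(0, 256))   # first gray level histogram
    hist8 = hist256.reshape(8, 32).sum(axis=1)                  # second histogram: 8 color intervals
    return hist8
```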
Step 103, calculating the coding value of the video frame to be processed according to the gray level histogram.
After the gray level histogram of the video frame to be processed is obtained, the coding value of the video frame to be processed is calculated according to the gray level histogram. Optionally, step 103 may comprise: calculating a median color interval of the gray level histogram; and calculating the coding value of the video frame to be processed according to the gray level histogram and the median color interval. In the embodiment of the invention, the median bin of the gray histogram is calculated according to the gray histogram of the video frame to be processed, and then the coding value of the video frame to be processed is calculated according to the gray histogram and the median bin.
Optionally, calculating a median color interval of the gray histogram includes: calculating the total number of pixel points of the video frame to be processed to determine the median of the total number of the pixel points; and finding out, from the gray level histogram, the median color interval to which the median of the total number of the pixel points belongs. For example, if the ordinate values of the 8-bin gray histogram (representing the number of pixels in each color interval) are [10, 20, 20, 10, 10, 10, 20, 0], then the total number of pixels of the video frame to be processed is 10+20+20+10+10+10+20+0 = 100, and the median of the total number of pixels is 50 (it may also be taken as 51, which is not limited in the embodiment of the present invention). Since the color interval to which the 50th pixel belongs is the third in the sequence (i.e., the color interval whose count is 20), [64, 95] is taken as the median color interval.
It should be noted that if the median of the total number of the pixel points is taken as 51, the color interval to which the 51st pixel point belongs is the fourth in the sequence (i.e., the color interval whose count is 10), so [96, 127] is taken as the median color interval.
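A minimal sketch of the median-color-interval computation, consistent with the example above (the median rank of total // 2 is an assumption here; the embodiment allows taking either 50 or 51 for 100 pixels):

```python
def median_bin(hist8):
    """Return the index of the color interval containing the median pixel (sketch)."""
    total = int(sum(hist8))
    median_rank = total // 2          # e.g. 50 for 100 pixels (51 would also be acceptable)
    cumulative = 0
    for i, count in enumerate(hist8):
        cumulative += count
        if cumulative >= median_rank:
            return i                  # index of the median color interval
    return len(hist8) - 1
```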
Optionally, calculating an encoding value of the video frame to be processed according to the gray level histogram and the median color interval includes: comparing the number of pixel points corresponding to each color interval of the gray level histogram with the number of pixel points corresponding to the median color interval; obtaining a binary character string of the gray level histogram according to the comparison result, wherein the comparison results are distinguished by 0 and 1; and calculating the coding value of the video frame to be processed according to the binary character string. After the median bin in the gray histogram is determined, the number of pixel points corresponding to each bin of the gray histogram (namely, the ordinate values of the gray histogram) is compared with the number of pixel points corresponding to the median bin, and an eight-bit binary character string is obtained from the comparison results. Since the gray histogram has 8 bins, an eight-bit binary string is obtained, which can conveniently be converted into a decimal coded value.
Optionally, obtaining a binary string of the gray histogram according to the comparison result, including: for each color interval of the gray histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 1, otherwise, setting the value to be 0, and thus obtaining a binary string of the gray histogram; or, for each color interval of the gray histogram, if the number of pixels corresponding to the color interval is greater than or equal to the number of pixels corresponding to the median color interval, setting the value corresponding to the color interval to be 0, otherwise, setting the value to be 1, thereby obtaining the binary string of the gray histogram.
For example, if the ordinate values of the 8-bin gray histogram are [10, 20, 20, 10, 10, 10, 20, 0] and [64, 95] is the median bin, then comparing the number of pixels in each bin of the sequence with the number of pixels in [64, 95] gives [0, 1, 1, 0, 0, 0, 1, 0], i.e. the binary string 01100010.
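A sketch of the comparison rule just illustrated (using the ">= maps to 1" variant; the mirrored 0/1 assignment described above would simply invert the characters):

```python
def binary_string(hist8, median_index):
    """Compare each bin count with the median bin count: '1' if >=, else '0' (sketch)."""
    median_count = hist8[median_index]
    return ''.join('1' if count >= median_count else '0' for count in hist8)

# For [10, 20, 20, 10, 10, 10, 20, 0] with median interval index 2 this yields '01100010'.
```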
Optionally, calculating the encoded value of the video frame to be processed according to the binary string includes: carrying out a reverse-order operation on the binary character string to obtain a reverse-order binary character string; and converting the reverse-order binary character string into a decimal value so as to obtain the coded value of the video frame to be processed. When the eight-bit binary character string is converted into a decimal value, the resulting coded value lies in the range 0-255. Since the abscissa values of the gray level histogram increase from left to right, which is opposite to the carry order of binary numbers, the binary string is reversed first, so that the ordering direction of the abscissa values of the gray level histogram matches the ordering direction of the binary digits.
For example, the binary string 01100010 is subjected to a reverse order operation, so as to obtain a reverse order binary string 01000110, which is converted into a decimal string 70, and then 70 is the encoded value of the video frame to be processed.
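The reverse-order and decimal-conversion step can be sketched in one line; for the binary string above it reproduces the encoded value 70 (the function name is an assumption for illustration):

```python
def encode_value(binary_str):
    """Reverse the binary string and interpret it as a decimal number in [0, 255] (sketch)."""
    return int(binary_str[::-1], 2)

# encode_value('01100010') -> int('01000110', 2) -> 70
```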
Step 104, processing each pixel point in the video frame to be processed according to the coding value of the video frame to be processed, thereby obtaining a coded video frame.
After the coding value of the video frame to be processed is calculated, each pixel point in the video frame to be processed is processed according to the coding value, thereby obtaining the processed, coded video frame. In fact, this step is a pixel-level encoding of the video frame to be processed, which protects the privacy of the user.
Optionally, step 104 may include: for each channel of each pixel point in the video frame to be processed, if the pixel value of the channel is greater than or equal to the encoding value of the video frame to be processed, adding M to the pixel value of the channel, otherwise subtracting M, thereby obtaining an encoded video frame, wherein M is greater than zero. According to the embodiment of the invention, a slight pixel-level modification is applied to the video frame to be processed according to its coding value, which defeats illegal AI recognition without affecting the user's viewing of the video. AI recognition essentially loses its recognition capability on such an artificially modified picture or video, because the generalization capability of a trained model follows, to some extent, the distribution of the data it was trained on; after the picture or video is re-encoded at the pixel level, its distribution becomes unfamiliar to the model, the recognition is disrupted, and an erroneous recognition result is produced, thereby protecting user privacy and preventing illegal AI recognition.
Optionally, the value of M may be between 1 and 3, so that the processed video does not noticeably affect the user's viewing. For example, for each channel of each pixel point in the video frame to be processed (each pixel point generally has three RGB channels), if the pixel value of the channel is less than the coding value, 1 is subtracted from the pixel value of the channel (if the pixel value of the channel is 0, no processing is performed); if the pixel value of the channel is greater than or equal to the coding value, 1 is added to the pixel value of the channel (if the pixel value of the channel is 255, no processing is performed). Because the pixel values of a video frame lie in 0-255, pixel-level encoding performed in this way does not affect the user's viewing while still preventing illegal AI recognition from stealing user privacy. In effect, step 104 de-smooths the video frame to be processed: pixel values that were locally smooth become non-smooth, which degrades AI recognition capability.
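A hedged sketch of the pixel-level encoding in step 104 (the clipping to [0, 255] stands in for the "leave 0 and 255 untouched" boundary handling mentioned above, which it reproduces exactly for M = 1; M defaulting to 1 is an assumption):

```python
import numpy as np

def encode_frame(frame, code, m=1):
    """Add m to channel values >= code and subtract m otherwise, clipped to [0, 255] (sketch)."""
    values = frame.astype(np.int16)                       # avoid uint8 wrap-around
    shifted = np.where(values >= code, values + m, values - m)
    return np.clip(shifted, 0, 255).astype(np.uint8)      # encoded video frame
```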
Step 105, replacing the video frame to be processed with the coded video frame so as to synthesize a coded video.
After each video frame to be processed has been processed, the coded video frame corresponding to each of them is obtained; the coded video frames then replace the corresponding video frames to be processed, thereby yielding the coded video, namely the processed video.
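Writing the replaced frames back out into the coded video can be sketched with OpenCV's VideoWriter (the codec, container and frame rate below are assumptions for illustration, not part of this disclosure):

```python
import cv2

def synthesize_video(frames, out_path, fps=25.0):
    """Write a sequence of already encoded frames to disk as the coded video (sketch)."""
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```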
According to the various embodiments described above, it can be seen that the technical means of processing each pixel point in the video frame to be processed according to the coded value of that frame and then synthesizing the coded video solves the technical problems of a complex processing procedure and high computational overhead in the prior art. The embodiment of the invention modifies the video content by processing video frames at the pixel level, protecting user privacy and preventing it from being stolen without noticeably distorting the video content; the processing procedure is simple, the computational cost is very small, and the processing can be done in real time. Therefore, the embodiment of the invention does not affect the user's viewing of the video, while also preventing the video content from being analyzed through illicit use of AI-related content recognition algorithms (such as face recognition, OCR recognition, action recognition and scene recognition), thereby preventing user information from being leaked.
Fig. 2 is a schematic diagram of a main flow of a method of processing video data according to one referenceable embodiment of the present invention. As still another embodiment of the present invention, as shown in fig. 2, the method of processing video data may include:
step 201, acquiring multiple frames of video frames to be processed from a video to be processed according to a preset time step.
The size of the time step may be set according to actual needs, which is not limited in the embodiment of the present invention. The multiple video frames obtained from the video to be processed constitute a video frame sequence, and steps 202 to 211 are executed for each video frame to be processed.
Step 202, graying the video frame to be processed to obtain a 256-bin grayscale histogram.
Step 203, performing bin combination on the 256-bin gray level histogram by taking 32 as a step length to obtain an 8-bin gray level histogram.
Wherein the abscissa of the 8-bin gray histogram includes eight bins, which are respectively: [0, 31], [32, 63], [64, 95], [96, 127], [128, 159], [160, 191], [192, 223], [224, 255].
The embodiment of the invention merges the 256-bin gray level histogram into an 8-bin gray level histogram, which makes it convenient to represent the gray level histogram of the video frame to be processed as an eight-bit binary number and to calculate the coding value from that eight-bit binary number.
Step 204, calculating the total number of the pixel points of the video frame to be processed to determine the median of the total number of the pixel points.
Step 205, finding out, from the 8-bin gray histogram, the median bin to which the median of the total number of the pixel points belongs.
Step 206, comparing the number of pixel points corresponding to each color interval of the 8-bin gray level histogram with the number of pixel points corresponding to the median bin.
Step 207, obtaining a binary character string of the 8-bin gray histogram according to the comparison result.
For example, for each color interval of the gray level histogram, if the number of pixels corresponding to the color interval is greater than or equal to the number of pixels corresponding to the median bin, setting the value corresponding to the color interval to be 1, otherwise, setting the value to be 0, thereby obtaining a binary string of the gray level histogram; or, for each color interval of the gray histogram, if the number of pixels corresponding to the color interval is greater than or equal to the number of pixels corresponding to the median bin, setting the value corresponding to the color interval to be 0, otherwise, setting the value to be 1, thereby obtaining the binary string of the gray histogram of the 8-bin.
Step 208, performing a reverse-order operation on the binary character string to obtain a reverse-order binary character string.
Step 209, converting the reverse binary character string into a decimal character string, thereby obtaining the coded value of the video frame to be processed.
Step 210, processing each pixel point in the video frame to be processed according to the coded value of the video frame to be processed, thereby obtaining a coded video frame.
Step 211, replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
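Chaining the sketches above gives a hedged end-to-end illustration of steps 201 to 211 (all helper names are the assumed ones introduced in the earlier sketches, not names used by this disclosure):

```python
def process_video(video_path, out_path, step=10, m=1):
    """End-to-end sketch: extract, histogram, encoded value, pixel-level encode, re-synthesize."""
    frames = extract_frames(video_path, step=step)             # step 201
    encoded_frames = []
    for frame in frames:
        hist8 = gray_histogram_8bin(frame)                     # steps 202-203
        med_idx = median_bin(hist8)                            # steps 204-205
        code = encode_value(binary_string(hist8, med_idx))     # steps 206-209
        encoded_frames.append(encode_frame(frame, code, m=m))  # step 210
    synthesize_video(encoded_frames, out_path)                 # step 211
```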
In addition, the specific implementation of the method for processing video data in this referenceable embodiment has already been described in detail in the method for processing video data above, so it is not repeated here.
Fig. 3 is a schematic diagram of a main flow of a method of processing video data according to another referenceable embodiment of the present invention. As another embodiment of the present invention, as shown in fig. 3, the method of processing video data may include:
step 301, acquiring multiple frames of video frames to be processed from a video to be processed according to a preset time step.
Step 302, graying the video frame to be processed to obtain a grayscale histogram.
Step 303, calculating the encoding value of the video frame to be processed according to the gray histogram.
Step 304, for each channel of each pixel point in the video frame to be processed, judging whether the pixel value of the channel is greater than or equal to the coding value of the video frame to be processed; if yes, go to step 305; if not, go to step 306.
Step 305, add M to the pixel value of the channel, where the value of M may be between 1 and 3.
Step 306, subtracting M from the pixel value of the channel, wherein the value of M may be between 1 and 3.
Step 307, replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
In addition, the specific implementation of the method for processing video data in this other referenceable embodiment has already been described in detail in the method for processing video data above, so it is not repeated here.
Fig. 4 is a schematic diagram of main blocks of an apparatus for processing video data according to an embodiment of the present invention, and as shown in fig. 4, the apparatus 400 for processing video data includes an acquisition module 401, a graying module 402, a calculation module 403, an encoding module 404, and a composition module 405; the obtaining module 401 is configured to obtain a video frame to be processed from a video to be processed; the graying module 402 is configured to graying the video frame to be processed to obtain a grayscale histogram; the calculating module 403 is configured to calculate an encoding value of the video frame to be processed according to the gray histogram; the encoding module 404 is configured to process each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, so as to obtain an encoded video frame; the composition module 405 is configured to replace the video frame to be processed with the encoded video frame, thereby composing an encoded video.
Optionally, the graying module 402 is further configured to:
graying the video frame to be processed to obtain a first gray level histogram;
and according to a preset color interval step length, carrying out color interval combination on the first gray level histogram to obtain a second gray level histogram.
Optionally, the abscissa of the first grayscale histogram is composed of 256 color bins, the abscissa of the second grayscale histogram is composed of 8 color bins, and the color bin step size is 32.
Optionally, the computing module 403 is further configured to:
calculating a median color interval of the gray level histogram;
and calculating the coding value of the video frame to be processed according to the gray level histogram and the median color interval.
Optionally, the computing module 403 is further configured to:
calculating the total number of pixel points of the video frame to be processed to determine the median of the total number of the pixel points;
and finding out the median color interval to which the median of the total number of the pixel points belongs from the gray level histogram.
Optionally, the computing module 403 is further configured to:
comparing the number of pixel points corresponding to each color interval of the gray level histogram with the number of pixel points corresponding to the median color interval;
obtaining a binary character string of the gray level histogram according to the comparison result; wherein the comparison results are distinguished by 0 and 1;
and calculating the coding value of the video frame to be processed according to the binary character string.
Optionally, the computing module 403 is further configured to:
for each color interval of the gray histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 1, otherwise, setting the value to be 0, and thus obtaining a binary string of the gray histogram; or alternatively,
for each color interval of the gray histogram, if the number of the pixel points corresponding to the color interval is greater than or equal to the number of the pixel points corresponding to the median color interval, setting the value corresponding to the color interval to be 0, otherwise, setting the value to be 1, thereby obtaining the binary string of the gray histogram.
Optionally, the computing module 403 is further configured to:
carrying out reverse order operation on the binary character string to obtain a reverse order binary character string;
and converting the reverse order binary character string into a decimal character string so as to obtain the coded value of the video frame to be processed.
Optionally, the encoding module 404 is further configured to:
for each channel of each pixel point in the video frame to be processed, if the pixel value of the channel is greater than or equal to the encoding value of the video frame to be processed, adding M to the pixel value of the channel, otherwise subtracting M, thereby obtaining an encoded video frame;
wherein M is greater than zero.
According to the various embodiments described above, it can be seen that the technical means of processing each pixel point in the video frame to be processed according to the coded value of that frame and then synthesizing the coded video solves the technical problems of a complex processing procedure and high computational overhead in the prior art. The embodiment of the invention modifies the video content by processing video frames at the pixel level, protecting user privacy and preventing it from being stolen without noticeably distorting the video content; the apparatus has a simple processing procedure and a very small computational cost and can process in real time. Therefore, the embodiment of the invention does not affect the user's viewing of the video, while also preventing the video content from being analyzed through illicit use of AI-related content recognition algorithms (such as face recognition, OCR recognition, action recognition and scene recognition), thereby preventing user information from being leaked.
It should be noted that the specific implementation of the apparatus for processing video data has already been described in detail in the method for processing video data above, so it is not repeated here.
Fig. 5 illustrates an exemplary system architecture 500 of a method of processing video data or an apparatus for processing video data to which embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 501, 502, 503 to interact with a server 505 over a network 504 to receive or send messages or the like. The terminal devices 501, 502, 503 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 501, 502, 503. The background management server can analyze and process the received data such as the article information query request and feed back the processing result to the terminal equipment.
It should be noted that the method for processing video data provided by the embodiment of the present invention is generally executed by the server 505, and accordingly, the apparatus for processing video data is generally disposed in the server 505. The method for processing video data provided by the embodiment of the present invention may also be executed by the terminal devices 501, 502, 503, and accordingly, the apparatus for processing video data may be disposed in the terminal devices 501, 502, 503.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the Central Processing Unit (CPU) 601, performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may, for example, be described as: a processor comprising an acquisition module, a graying module, a calculation module, an encoding module, and a synthesizing module, where the names of these modules do not, in some cases, limit the modules themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to implement the following method: acquiring a video frame to be processed from a video to be processed; graying the video frame to be processed to obtain a grayscale histogram; calculating the encoding value of the video frame to be processed according to the grayscale histogram; processing each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, thereby obtaining an encoded video frame; and replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
According to the technical scheme of the embodiments of the present invention, because each pixel point in the video frame to be processed is processed according to the encoding value of that frame and the encoded video is then synthesized, the technical problems of complex processing and high computational cost in the prior art are solved. By modifying the video content at the pixel level, the embodiments of the present invention protect user privacy and prevent it from being stolen without perceptibly distorting the video content; the processing is simple, the computational cost is very small, and the processing can be performed in real time. Therefore, the embodiments of the present invention do not affect the user's viewing of the video, while preventing the video content from being analyzed by illicitly applied AI-based content recognition algorithms (such as face recognition, OCR recognition, action recognition, and scene recognition), thereby preventing user information from being leaked.
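As a non-limiting illustration of the scheme summarized above, the per-frame processing can be sketched in Python with numpy as follows. The graying weights, the color interval step of 32, the offset M, the clipping to the 0-255 range, the assumed BGR channel order, and all function names are assumptions made for illustration only and are not prescribed by the embodiments.

import numpy as np

BIN_STEP = 32   # assumed color interval step: 256 gray levels merged into 8 intervals
M = 10          # assumed per-channel offset; any value greater than zero

def compute_encoding_value(frame_bgr: np.ndarray) -> int:
    """Derive the encoding value (0-255) of a frame from its grayscale histogram."""
    # Gray the frame (BT.601 luma weights, assumed) and build the 8-interval histogram.
    gray = (0.114 * frame_bgr[..., 0] + 0.587 * frame_bgr[..., 1]
            + 0.299 * frame_bgr[..., 2]).astype(np.uint8)
    hist, _ = np.histogram(gray, bins=256 // BIN_STEP, range=(0, 256))
    # Median color interval: the interval containing the pixel ranked at half the total count.
    median_rank = gray.size // 2
    median_bin = int(np.searchsorted(np.cumsum(hist), median_rank, side="right"))
    median_count = hist[median_bin]
    # Compare every interval with the median interval, reverse the bits, read as decimal.
    bits = "".join("1" if count >= median_count else "0" for count in hist)
    return int(bits[::-1], 2)

def encode_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Shift every channel of every pixel by +M or -M around the frame's encoding value."""
    value = compute_encoding_value(frame_bgr)
    frame = frame_bgr.astype(np.int16)
    shifted = np.where(frame >= value, frame + M, frame - M)
    return np.clip(shifted, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    demo = np.random.default_rng(0).integers(0, 256, (720, 1280, 3), dtype=np.uint8)
    print("encoding value:", compute_encoding_value(demo))
    encoded = encode_frame(demo)  # the encoded frame would replace the original when the video is re-synthesized
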
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method of processing video data, comprising:
acquiring a video frame to be processed from a video to be processed;
graying the video frame to be processed to obtain a grayscale histogram;
calculating the encoding value of the video frame to be processed according to the grayscale histogram;
processing each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, thereby obtaining an encoded video frame;
and replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
2. The method of claim 1, wherein graying the video frame to be processed to obtain a grayscale histogram comprises:
graying the video frame to be processed to obtain a first grayscale histogram;
and combining color intervals of the first grayscale histogram according to a preset color interval step size, thereby obtaining a second grayscale histogram.
3. The method of claim 2, wherein the abscissa of the first grayscale histogram consists of 256 color intervals, the abscissa of the second grayscale histogram consists of 8 color intervals, and the color interval step size is 32.
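As a non-limiting illustration of claims 2 and 3 (and not part of the claims), a first grayscale histogram of 256 color intervals can be merged into a second histogram of 8 color intervals with a color interval step size of 32 roughly as follows; the synthetic frame and the use of numpy are assumptions for illustration.

import numpy as np

gray = np.random.default_rng(1).integers(0, 256, (480, 640), dtype=np.uint8)  # assumed grayed frame

first_hist, _ = np.histogram(gray, bins=256, range=(0, 256))  # first grayscale histogram, 256 intervals
second_hist = first_hist.reshape(8, 32).sum(axis=1)           # merge every 32 adjacent intervals into one

assert second_hist.sum() == gray.size  # no pixels lost by the merge
print(second_hist)                     # 8 counts, one per color interval
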
4. The method according to claim 1, wherein calculating the encoding value of the video frame to be processed according to the grayscale histogram comprises:
calculating a median color interval of the grayscale histogram;
and calculating the encoding value of the video frame to be processed according to the grayscale histogram and the median color interval.
5. The method of claim 4, wherein calculating the median color interval of the grayscale histogram comprises:
calculating the total number of pixel points of the video frame to be processed to determine the median of the total number of pixel points;
and finding, from the grayscale histogram, the median color interval to which the median of the total number of pixel points belongs.
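As a non-limiting illustration of claims 4 and 5 (and not part of the claims), one plausible reading of the median color interval is the interval that contains the pixel ranked at half of the total pixel count; the 8-interval histogram counts below are invented solely for illustration.

import numpy as np

hist = np.array([12000, 45000, 98000, 61000, 40000, 30000, 15000, 6200])  # assumed second grayscale histogram
total = int(hist.sum())                 # total number of pixel points (here 307200, i.e. 640 x 480)
median_rank = total // 2                # median of the total number of pixel points
median_bin = int(np.searchsorted(np.cumsum(hist), median_rank, side="right"))
print("median color interval:", median_bin, "with", int(hist[median_bin]), "pixel points")
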
6. The method of claim 4, wherein calculating the encoding value of the video frame to be processed according to the grayscale histogram and the median color interval comprises:
comparing the number of pixel points corresponding to each color interval of the grayscale histogram with the number of pixel points corresponding to the median color interval;
obtaining a binary string of the grayscale histogram according to the comparison results, wherein the comparison results are distinguished by 0 and 1;
and calculating the encoding value of the video frame to be processed according to the binary string.
7. The method of claim 6, wherein obtaining the binary string of the grayscale histogram according to the comparison results comprises:
for each color interval of the grayscale histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to 1, and otherwise to 0, thereby obtaining the binary string of the grayscale histogram; or,
for each color interval of the grayscale histogram, if the number of pixel points corresponding to the color interval is greater than or equal to the number of pixel points corresponding to the median color interval, setting the value corresponding to the color interval to 0, and otherwise to 1, thereby obtaining the binary string of the grayscale histogram.
8. The method according to claim 6, wherein calculating the encoding value of the video frame to be processed according to the binary string comprises:
reversing the order of the binary string to obtain a reverse-order binary string;
and converting the reverse-order binary string into a decimal value, thereby obtaining the encoding value of the video frame to be processed.
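As a non-limiting illustration of claims 6 to 8 (and not part of the claims), the binary string and the encoding value may be computed roughly as follows; the histogram counts are invented, and the mapping of "greater than or equal" to 1 follows the first alternative of claim 7.

import numpy as np

hist = np.array([12000, 45000, 98000, 61000, 40000, 30000, 15000, 6200])  # assumed 8-interval histogram
median_count = 98000  # pixel count of the median color interval, found as in claim 5

bits = "".join("1" if c >= median_count else "0" for c in hist)  # comparison result per interval: "00100000"
encoding_value = int(bits[::-1], 2)                              # reverse order "00000100" read as decimal: 4
print(bits, "->", encoding_value)
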
9. The method according to claim 1, wherein processing each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, thereby obtaining an encoded video frame, comprises:
for each channel of each pixel point in the video frame to be processed, if the pixel value of the channel is greater than or equal to the encoding value of the video frame to be processed, adding M to the pixel value of the channel, and otherwise subtracting M, thereby obtaining an encoded video frame;
wherein M is greater than zero.
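As a non-limiting illustration of claim 9 (and not part of the claims), each channel of each pixel point can be shifted by +M or -M around the encoding value roughly as follows; the value of M, the synthetic frame, and the clipping to the 0-255 range are assumptions for illustration.

import numpy as np

encoding_value = 4                      # assumed, as derived in the sketch after claim 8
M = 10                                  # assumed offset, any value greater than zero
frame = np.random.default_rng(2).integers(0, 256, (480, 640, 3)).astype(np.int16)  # assumed frame to be processed

encoded = np.where(frame >= encoding_value, frame + M, frame - M)
encoded_frame = np.clip(encoded, 0, 255).astype(np.uint8)  # encoded video frame that replaces the original
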
10. An apparatus for processing video data, comprising:
an acquisition module for acquiring a video frame to be processed from a video to be processed;
a graying module for graying the video frame to be processed to obtain a grayscale histogram;
a calculation module for calculating the encoding value of the video frame to be processed according to the grayscale histogram;
an encoding module for processing each pixel point in the video frame to be processed according to the encoding value of the video frame to be processed, thereby obtaining an encoded video frame;
and a synthesizing module for replacing the video frame to be processed with the encoded video frame, thereby synthesizing an encoded video.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
12. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-9.
CN202110009538.9A 2021-01-05 2021-01-05 Method and device for processing video data Pending CN113766273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110009538.9A CN113766273A (en) 2021-01-05 2021-01-05 Method and device for processing video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110009538.9A CN113766273A (en) 2021-01-05 2021-01-05 Method and device for processing video data

Publications (1)

Publication Number Publication Date
CN113766273A true CN113766273A (en) 2021-12-07

Family

ID=78786329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110009538.9A Pending CN113766273A (en) 2021-01-05 2021-01-05 Method and device for processing video data

Country Status (1)

Country Link
CN (1) CN113766273A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339742A (en) * 2007-03-29 2009-01-07 英特尔公司 Using spatial distribution of pixel values when determining adjustments to be made to image luminance and backlight
CN106791623A (en) * 2016-12-09 2017-05-31 深圳市云宙多媒体技术有限公司 A kind of panoramic video joining method and device
US20180247125A1 (en) * 2017-02-27 2018-08-30 Smart Engines Service LLC Method for holographic elements detection in video stream
WO2020244474A1 (en) * 2019-06-03 2020-12-10 中兴通讯股份有限公司 Method, device and apparatus for adding and extracting video watermark
CN110248195A (en) * 2019-07-17 2019-09-17 北京百度网讯科技有限公司 Method and apparatus for output information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋磊; 朱晓强; 叶翰辰; 史璇; 朱梦尧; 王向阳: "Parallel collage of artistic Mosaic images based on histogram matching" (基于直方图匹配的艺术Mosaic图像并行拼贴), Journal of Anhui University of Technology (Natural Science Edition) (安徽工业大学学报(自然科学版)), no. 04 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118134436A (en) * 2024-05-06 2024-06-04 江西微博科技有限公司 Cloud-based remote business service method and system

Similar Documents

Publication Publication Date Title
Dhawan et al. Analysis of various data security techniques of steganography: A survey
Prasad et al. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing
Zhang et al. A brief survey on deep learning based data hiding
Nazir et al. Robust secure color image watermarking using 4D hyperchaotic system, DWT, HbD, and SVD based on improved FOA algorithm
CN116383793B (en) Face data processing method, device, electronic equipment and computer readable medium
CN115643001B (en) Image encryption method and system based on bit plane and readable storage medium
CN112802138A (en) Image processing method and device, storage medium and electronic equipment
WO2022241307A1 (en) Image steganography utilizing adversarial perturbations
Kumar et al. Steganography techniques using convolutional neural networks
CN114880687A (en) Document security protection method and device, electronic equipment and storage medium
CN110069907A (en) Big data source tracing method and system based on digital watermarking
CN113766273A (en) Method and device for processing video data
Debnath et al. Secret data sharing through coverless video steganography based on bit plane segmentation
Liu et al. Image processing method based on chaotic encryption and wavelet transform for planar design
Fadhil et al. Improved Security of a Deep Learning-Based Steganography System with Imperceptibility Preservation
CN113537516B (en) Training method, device, equipment and medium for distributed machine learning model
Sun et al. High‐Capacity Data Hiding Method Based on Two Subgroup Pixels‐Value Adjustment Using Encoding Function
Liu et al. A Larger Capacity Data Hiding Scheme Based on DNN
Zhou et al. Latent vector optimization-based generative image steganography for consumer electronic applications
Bharti et al. Security enhancements for high quality image transaction with hybrid image steganography algorithm
CN107862211B (en) JPEG image encryption method for avoiding image enhancement filtering of social network platform
Li et al. Reversible data hiding for encrypted 3D model based on prediction error expansion
Shawkat et al. Evolutionary programming approach for securing medical images using genetic algorithm and standard deviation
CN115643348B (en) Method and device for certifiable safety natural steganography based on reversible image processing network
Gao et al. An Improved Image Processing Based on Deep Learning Backpropagation Technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination