CN115550730A - Video distribution method and device, electronic equipment and storage medium


Info

Publication number
CN115550730A
Authority
CN
China
Prior art keywords: video, abstract, signature, published, calculation
Prior art date
Legal status (assumed; not a legal conclusion): Pending
Application number
CN202211181374.9A
Other languages
Chinese (zh)
Inventor
谢志钢
胡小鹏
顾振华
Current Assignee
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd
Priority to CN202211181374.9A
Publication of CN115550730A
Legal status: Pending

Classifications

    All under H04N21/00 (H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION) — Selective content distribution, e.g. interactive television or video on demand [VOD]:
    • H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 — Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/4627 — Rights management associated to the content
    • H04N21/835 — Generation of protective data, e.g. certificates
    • H04N21/8549 — Creating video summaries, e.g. movie trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of video processing, and in particular to a video publishing method and device, an electronic device and a storage medium. The method comprises: acquiring an original video of a video to be published and publisher identity information of the video to be published; performing abstract calculation on the content of the original video to determine a video abstract index; determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature; splicing the reference abstract and the digital signature to determine a signature reference abstract; and encoding the signature reference abstract into a video code stream of the video to be published to determine and publish the target published video, the video code stream being the code stream of the video to be published after compression encoding. An image has invariants related to it, and performing the abstract calculation based on these invariants ensures the reliability of the video abstract index; combining the publisher identity information on this basis makes the published video carry the publisher's identity mark, giving the video good anti-counterfeiting capability.

Description

Video distribution method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of video processing, in particular to a video publishing method, a video publishing device, electronic equipment and a storage medium.
Background
With the increasingly widespread application of intelligent technology in the video field, current intelligent video editing technology can edit videos in such a way that the authenticity of the video content is difficult to distinguish with the naked eye. For example, given several photographs and a reference video, the person in the photographs can be made to imitate a given character in the reference video and generate content within it, thereby producing untrustworthy video content about the person in the photographs. Therefore, videos published by existing methods have low reliability.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video distribution method and apparatus, an electronic device, and a storage medium, so as to improve reliability of video distribution.
According to a first aspect, an embodiment of the present invention provides a video distribution method, including:
acquiring an original video of a video to be published and publisher identity information of the video to be published;
performing abstract calculation on the content of the original video to determine a video abstract index;
determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature;
splicing the reference abstract and the digital signature to determine a signature reference abstract;
and coding the signature reference abstract into a video code stream of the video to be published to determine and publish the target published video, wherein the video code stream is the code stream of the video to be published after compression coding.
According to the video publishing method provided by the embodiment of the invention, the original video can be regarded as an image sequence at successive video time points, and each image has its own unique content. For each image, certain invariants can be obtained from that content by mathematical calculation; that is, the image has invariants related to it. Splitting the video into images to obtain their invariants, and performing the abstract calculation based on those invariants, ensures that the result of the video abstract index is not affected by external factors, thereby ensuring the reliability of the video abstract index. Meanwhile, combining the publisher identity information on this basis makes the published video carry the publisher's identity mark, giving it good anti-counterfeiting capability. At the same time, the video content of the published video remains publicly accessible, so the unencrypted nature of the public domain is maintained, and the reliability of the published video can be ensured without affecting its public accessibility.
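The five steps of the first aspect (S11-S15 below) can be sketched end to end. This is a minimal illustration, not the patent's concrete implementation: the hash, the HMAC stand-in for a certificate signature, and the byte layout (side data simply prepended to the compressed stream) are all assumptions chosen to keep the sketch self-contained.

```python
import hashlib
import hmac


def publish(frames, publisher_info, key, compressed_stream):
    """Illustrative sketch of steps S11-S15.

    frames: list of raw image byte strings from the original video (S11).
    All names and byte layouts here are hypothetical.
    """
    # S12: abstract calculation over the *content* of the original video
    index = hashlib.sha256(b"".join(frames)).digest()
    # S13: reference abstract = publisher identity + video abstract index, then sign.
    # HMAC is a stand-in for a real certificate/private-key signature.
    reference = publisher_info + index
    signature = hmac.new(key, reference, hashlib.sha256).digest()
    # S14: splice reference abstract and digital signature
    signed_reference = reference + signature
    # S15: encode into the compressed video code stream (prepended as side data here)
    return signed_reference + compressed_stream
```

The essential property shown is that the compressed stream itself is left readable: the signature reference abstract rides alongside it rather than encrypting it.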
In some embodiments, the performing a summarization calculation on the content of the original video and determining a video summarization index includes:
performing feature processing on an original video image in the original video to determine a feature processing result;
and determining the characteristic processing result as a video abstract index.
In some embodiments, the performing feature processing on an original video image in the original video and determining a feature processing result includes:
reducing the size of the original video image to a preset size to obtain a thumbnail of the original video so as to determine the feature processing result;
and/or,
and analyzing the color characteristics and/or the light and shade characteristics of the original video image to determine the characteristic processing result.
According to the video publishing method provided by the embodiment of the invention, the feature processing result is determined from the reduced-size thumbnail or from the analysis of color and/or light-and-shade features, so the calculation is simple, fast and effective, which improves the real-time performance of video publishing.
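The two feature-processing options above (thumbnail by size reduction, and a light/shade feature) can be sketched in pure Python on a grayscale image represented as a list of pixel rows. The block-averaging downscale and the mean-brightness formula are illustrative choices; the patent does not prescribe a specific algorithm.

```python
def thumbnail(image, size=4):
    """Reduce a grayscale image (list of rows of ints) to a size x size
    thumbnail by block averaging -- one possible feature processing result."""
    h, w = len(image), len(image[0])
    bh, bw = h // size, w // size
    thumb = []
    for i in range(size):
        row = []
        for j in range(size):
            block = [image[y][x]
                     for y in range(i * bh, (i + 1) * bh)
                     for x in range(j * bw, (j + 1) * bw)]
            row.append(sum(block) // len(block))
        thumb.append(row)
    return thumb


def brightness_feature(image):
    """Average brightness of all pixels -- a simple light/shade feature."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)
```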
In some embodiments, the performing a summarization calculation on the content of the original video and determining a video summarization index includes:
acquiring data with a preset length;
and carrying out numerical calculation on the original video image in the original video and the data with the preset length to determine the video abstract index.
According to the video publishing method provided by the embodiment of the invention, the data with the preset length is combined in the calculation process, and the data with the preset length can be different along with different use scenes, so that the calculation of the video abstract index can be suitable for different use scenes.
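One way to read "numerical calculation with data of a preset length" is as a salted digest: the scenario-specific data is mixed into the hash input so the same video content yields different abstract indices in different usage scenarios. This is an assumed interpretation; SHA-256 is an illustrative choice of function.

```python
import hashlib


def salted_digest(frame_bytes, preset_data):
    """Combine the original video image bytes with scenario-specific
    preset-length data before hashing, so the video abstract index
    differs between usage scenarios (illustrative construction)."""
    h = hashlib.sha256()
    h.update(preset_data)   # preset-length data, e.g. a per-scenario salt
    h.update(frame_bytes)   # pixel data of the original video image
    return h.hexdigest()
```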
In some embodiments, the determining a reference digest based on the publisher identity information and the video digest indicator and signing the reference digest to obtain a digital signature includes:
obtaining a first reference abstract, wherein the first reference abstract comprises the publisher identity information and a calculation description used for the abstract calculation;
signing the first reference digest to determine a first signature;
and determining the video abstract index as a second reference abstract, signing the second reference abstract, and determining a second signature.
According to the video publishing method provided by the embodiment of the invention, since the publisher identity information and the calculation description used for the abstract calculation do not change with the video content, they can be shared throughout one video publishing process. They can therefore be stored once and extracted directly whenever needed, which avoids repeated calculation and improves video publishing efficiency.
In some embodiments, the obtaining a first reference summary comprises:
acquiring the video description of the video to be published;
and splicing the publisher identity information, the video description and the calculation description to determine the first reference abstract.
According to the video publishing method provided by the embodiment of the invention, the video description represents additional information about the video, such as its usage scope, so as to facilitate subsequent distribution of the video.
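The splicing of publisher identity information, video description and calculation description into the first reference abstract can be sketched with length-prefixed fields, so the receiver can split the fields back out unambiguously. The 4-byte big-endian length prefix is an assumed layout; the patent does not fix a byte format.

```python
import struct


def build_first_reference_digest(publisher_info, video_desc, calc_desc):
    """Splice the three byte-string fields with 4-byte length prefixes
    (illustrative layout for the first reference abstract)."""
    out = b""
    for field in (publisher_info, video_desc, calc_desc):
        out += struct.pack(">I", len(field)) + field
    return out


def parse_fields(blob):
    """Inverse operation: recover the spliced fields."""
    fields, pos = [], 0
    while pos < len(blob):
        (n,) = struct.unpack_from(">I", blob, pos)
        fields.append(blob[pos + 4: pos + 4 + n])
        pos += 4 + n
    return fields
```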
In some embodiments, the encoding the signature reference digest into the video code stream of the video to be published to determine and publish the target published video includes:
compiling a first signature reference abstract into a target position of the video code stream, wherein the first signature reference abstract is obtained by splicing the first reference abstract and the first signature;
coding a second signature reference abstract into a target position of each compressed coding video stream in the video code stream, wherein the second signature reference abstract is obtained by splicing the second reference abstract and the second signature;
and determining and publishing the target release video based on the video code stream coded into the first signature reference abstract and the second signature reference abstract.
According to the video publishing method provided by the embodiment of the invention, the first reference abstract is used only in the publishing process and is encoded into a single target position of the video code stream rather than into every compressed encoded video stream, which reduces the data volume of the published video; when all necessary information is transmitted, the smaller the data volume the better. The second reference abstract, by contrast, changes over time.
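Encoding a signature reference abstract into a target position of the code stream can be sketched as inserting a marker-delimited payload at a byte offset. The marker and 2-byte length field are invented for this sketch; a real encoder would more likely use a standard side-data mechanism of the codec (for example an SEI message in H.264/H.265 streams).

```python
def encode_digest_into_stream(stream, signed_digest, position=0, marker=b"\x00SRD"):
    """Insert the signature reference abstract at a target position in the
    compressed code stream, preceded by a marker and a 2-byte length
    (illustrative container format, not a codec-defined one)."""
    payload = marker + len(signed_digest).to_bytes(2, "big") + signed_digest
    return stream[:position] + payload + stream[position:]
```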
According to a second aspect, an embodiment of the present invention further provides a video distribution apparatus, including:
the system comprises an acquisition module, a video distribution module and a video distribution module, wherein the acquisition module is used for acquiring an original video of a video to be distributed and distributor identity information of the video to be distributed;
the abstract calculation module is used for performing abstract calculation on the content of the original video and determining video abstract indexes;
the signature module is used for determining a reference abstract based on the publisher identity information and the video abstract index and signing the reference abstract to obtain a digital signature;
the splicing module is used for splicing the reference abstract and the digital signature to determine a signature reference abstract;
and the publishing module is used for coding the signature reference abstract into a video code stream of the video to be published, and determining and publishing a target published video, wherein the video code stream is a code stream obtained by compressing and coding the video to be published.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing therein computer instructions, and the processor executing the computer instructions to perform the video distribution method according to the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the video distribution method described in the first aspect or any one of the implementation manners of the first aspect.
It should be noted that, for corresponding beneficial effects of the video publishing device, the electronic device and the computer-readable storage medium provided in the embodiment of the present invention, please refer to the description of the corresponding beneficial effects of the video publishing method above, which is not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a video distribution method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video distribution method according to an embodiment of the present invention;
fig. 3 is a flowchart of a video distribution method according to an embodiment of the present invention;
fig. 4 is a flowchart of a video distribution method according to an embodiment of the present invention;
fig. 5 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 6 is a block diagram of the structure of a video distribution apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The whole process from generation to playing of a video can be mainly divided into video publishing, video distribution and video playing. In video publishing, a video publishing server or the like performs publishing processing on the obtained video to be published to obtain a target published video, and sends the target published video to a video distribution server or the like. Taking a video distribution server as an example, it makes the target published video into target distribution videos of different preset service qualities and sends the target distribution videos of different preset service qualities to the corresponding terminals for playing; after receiving the target distribution video of the corresponding service quality, a terminal decodes it and performs other operations, i.e., plays the target video of the corresponding service quality.
It should be noted that video publishing, video distribution and video playing are not strictly processed in the above order. For example, the video obtained by a video distribution server may come directly from a video publishing server, or may have been processed by another video distribution server at a previous stage, and so on. Likewise, the video played by a terminal may be delivered by a video publishing server or by a video distribution server; the terminal that plays the video does not know whether the received target video comes from video publishing or from video distribution.
The video publishing method provided by the embodiment of the invention performs abstract calculation on the original video of the video to be published to determine the video abstract index, forms a signature reference abstract on this basis, and encodes the signature reference abstract into the code stream obtained after the video to be published is compressed and encoded, thereby obtaining the target published video. The original video comprises original video images, where an original video image refers to the raw data formed by the matrix of pixel points; the video code stream refers to the media data stream after compression encoding, including the data streams of audio media and video media, and is usually obtained by greatly compressing the original media in a lossy manner. Therefore, in the embodiment of the invention, the reference abstract is directed at the original video, not at the compressed and encoded video media code stream.
The original video can be regarded as a sequence of images at successive video time points, and each image can be mathematically transformed to obtain invariants. An invariant is a data quantity that is insensitive to changes of the video image within a certain range of deformations and adjustments: as long as the content of the image does not change substantially, a well-chosen invariant will not change noticeably with changes in image quality; but whenever the content of the image changes significantly, the invariant should change sensitively with it. The invariants of an image thus reflect what the image depicts. Arranging and aggregating, in time order, the invariants obtained by splitting the video into images yields what can be regarded as an invariant of the video content, reflecting the video content over a period of time. Therefore, the processing object for generating the video abstract index is the content of the original video.
For example, the average brightness of all pixels of an image can be used as an invariant: as long as the brightness of the image is not adjusted, the average brightness does not change significantly whether the image is enlarged or reduced. As another example, the position of the center of gravity of an image can be calculated as an invariant: the center of gravity remains basically unchanged whether the image is enlarged, reduced, brightened or dimmed. As another example, the N-order central moments of the image can be calculated and used as geometric invariants similar to the center-of-gravity position. As another example, the eigenvalues/singular values of the normalized numerical matrix of the image can be calculated; after the image undergoes various processing, the principal part of these eigenvalues/singular values does not change significantly. As another example, the frequency spectrum of the image can be calculated and features extracted at its high-frequency and low-frequency positions, which can also serve as invariants. As another example, several representative key points can be selected from the image together with appearance-similarity feature descriptors of their neighborhoods, and the arrangement of these key points can serve as an invariant.
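Two of the invariants named above, the center-of-gravity position and a central moment, can be computed directly on a grayscale image given as a list of pixel rows. These are minimal pure-Python sketches of the standard formulas; normalizing the centroid to [0, 1) coordinates is an assumption made here so the value is comparable across image sizes.

```python
def center_of_gravity(image):
    """Intensity-weighted centroid in normalized (y, x) coordinates;
    roughly unchanged under scaling or uniform brightness changes."""
    h, w = len(image), len(image[0])
    total = ty = tx = 0.0
    for y, row in enumerate(image):
        for x, p in enumerate(row):
            total += p
            ty += p * y
            tx += p * x
    return ty / total / h, tx / total / w


def central_moment(image, p, q):
    """Central moment mu_pq about the (unnormalized) centroid -- a
    geometric invariant similar to the center-of-gravity position."""
    h, w = len(image), len(image[0])
    cy, cx = center_of_gravity(image)
    cy, cx = cy * h, cx * w
    return sum(v * (y - cy) ** p * (x - cx) ** q
               for y, row in enumerate(image)
               for x, v in enumerate(row))
```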
Therefore, performing the abstract calculation based on the content of the original video may be regarded as an invariant-based abstract calculation. The target published video is obtained by encoding the signature reference abstract into the code stream of the compressed and encoded video to be published, so the target published video carries the publisher identity information and has good anti-counterfeiting capability.
In accordance with an embodiment of the present invention, there is provided a video distribution method embodiment, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
In this embodiment, a video distribution method is provided, which may be used in electronic devices, such as a video distribution server, a computer, a mobile terminal, and the like, and fig. 1 is a flowchart of the video distribution method according to the embodiment of the present invention, and as shown in fig. 1, the flowchart includes the following steps:
s11, acquiring the original video of the video to be published and the publisher identity information of the video to be published.
The original video of the video to be published may be a video of a target duration within the video to be published; for example, a video of the target duration is extracted from the video to be published every preset duration and used as the original video for the subsequent video abstract index calculation.
The target duration may be as small as 0 seconds, in which case the video of the target duration contains no original video image, the corresponding video abstract index is empty, and it can be represented by a data length of 0. A length of 0 indicates that the content is empty. As will be seen from the description of the subsequent steps, the reference abstract contains more than just the video abstract index, so a meaningful reference abstract can still be formed, for functional or identification purposes, even when the video abstract index is empty.
Alternatively, the target time period may be a time length of 1 to 10 seconds or the like. The specific length of the target duration is not limited at all, and is specifically set according to actual requirements.
For example, suppose the video to be published is 30 minutes long, the preset duration is 5 minutes, and the target duration is 1 second. Then 1 s of original video is extracted from minutes 0-5 of the video to be published for calculating the video abstract index; 1 s of original video is extracted from minutes 6-10 for calculating the video abstract index; and so on.
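The extraction schedule in this example can be computed directly: one clip start per preset duration, keeping only clips that fit within the video. The function name and the choice to measure everything in whole seconds are illustrative.

```python
def extraction_schedule(total_s, period_s, clip_s):
    """Start offsets (seconds) of the target-duration clips extracted
    from the video to be published every preset duration."""
    return [t for t in range(0, total_s, period_s) if t + clip_s <= total_s]
```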
The publisher identity information of the video to be published may include: the publisher's common name, the publisher's entity name, the publisher's address, the author's name, the author's contact information, the publisher's digital-certificate access descriptor, the publisher's identity public-key access descriptor, and the publisher's identity description number. It may further include extension data for service-related general descriptions added for specific requirements, to be submitted to a specific service system or access descriptor to obtain the digital certificate or the identity public key. The publisher identity information is set according to actual needs and is not limited here.
And S12, performing summary calculation on the content of the original video and determining a video summary index.
As described above, the abstract calculation on the content of the original video is a construction of invariants of the video. A video is a sequence of images arranged in time, and an invariant is a summarizing data description of the image and video content. Viewing the same video content from multiple perspectives yields the same invariant data description, i.e., an invariant. The advantage of using invariants is that the calculation of the video abstract index is concerned not with the data of the video but with its content.
When performing the summary calculation, the brightness or the picture of the original video image in the original video may be analyzed, or the value calculation may be performed on the original video image in the original video, and so on. Alternatively, the video summary index includes at least one video summary sub-index, and the video summary sub-index includes, but is not limited to, luminance, chrominance, and the like.
And S13, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
The reference summary includes publisher identity information and video summary index, or further includes other information based on the publisher identity information and the video summary index, such as video description information and the like. After the reference digest is determined, it is signed to obtain a digital signature.
The manner of signing includes, but is not limited to, digital certificates with private keys, quantum entanglement techniques, or other techniques. Taking signing the reference abstract with a digital certificate and a private key as an example, the digital certificate may be issued by a widely recognized authority or digital certificate center, by a digital certificate center accepted within a small scope, or be a self-signed digital certificate that is not widely accepted, or even merely an asymmetric public key that is not widely recognized. Of course, the form and source of the digital certificate selected by the publisher will have a corresponding indirect effect on the trust level of the video stream published by that publisher.
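The sign-then-verify round trip can be sketched without an external crypto library by substituting HMAC-SHA256 for the certificate/private-key signature the patent contemplates. This substitution is purely for a self-contained sketch: HMAC is a symmetric construction, whereas a real publisher would use an asymmetric scheme such as RSA or ECDSA so that anyone holding the public key can verify.

```python
import hashlib
import hmac


def sign_reference_digest(reference_digest, key):
    """Stand-in signature over the reference abstract (HMAC-SHA256 in
    place of a certificate/private-key signature)."""
    return hmac.new(key, reference_digest, hashlib.sha256).digest()


def verify_reference_digest(reference_digest, signature, key):
    """Recompute and compare in constant time; returns False if either
    the reference abstract or the signature was tampered with."""
    expected = sign_reference_digest(reference_digest, key)
    return hmac.compare_digest(expected, signature)
```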
And S14, splicing the reference abstract and the digital signature to determine a signature reference abstract.
The signature reference digest is obtained by splicing the reference digest and the corresponding digital signature, wherein the splicing of the reference digest and the digital signature can be to place the reference digest before or after the digital signature.
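The splicing of reference abstract and digital signature, with the reference abstract placed first, can be sketched with a trailing signature-length byte so a verifier can split the two parts again. The byte layout is illustrative; the patent only requires that the two parts be concatenated in some recoverable order.

```python
def splice_signed_reference(reference_digest, signature):
    """Reference abstract placed before the digital signature, with the
    signature length appended as a final byte (illustrative layout)."""
    assert len(signature) < 256
    return reference_digest + signature + bytes([len(signature)])


def split_signed_reference(blob):
    """Inverse operation: recover (reference abstract, signature)."""
    n = blob[-1]
    return blob[:-1 - n], blob[-1 - n:-1]
```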
And S15, compiling the signature reference abstract into a video code stream of the video to be published, and determining and publishing the target published video.
The video code stream is a code stream obtained after the video to be released is compressed and coded.
First, the video to be published is compression-encoded to obtain its video code stream. The signature reference abstract is then encoded into the corresponding position of the video code stream to determine the target published video. Since the reference abstract contained in the signature reference abstract and the digital signature are both related to the original video, they vary with the video content. Therefore, when encoding the signature reference abstract, the position within the video to be published of the original video that generated the video abstract index can be taken into account.
Details about this step will be described later.
In the video publishing method provided by this embodiment, the original video can be regarded as a sequence of images at consecutive video points. Each image has its own unique content, and from that content a certain invariant can be obtained by mathematical calculation; that is, each image has an invariant associated with it. The video is split into images to obtain their invariants, and performing the digest calculation on these invariants ensures that the video digest index is not affected by external factors, thereby ensuring its reliability. Meanwhile, combining the publisher identity information on this basis makes the published video carry the publisher's identity stamp, giving it good anti-counterfeiting capability. At the same time, because the video content of the published video remains publicly accessible, the non-encrypted nature of the public domain is maintained, so the reliability of the published video can be ensured without affecting its public accessibility.
In this embodiment, a video distribution method is provided, which may be used in electronic devices, such as a video distribution server, a computer, a mobile terminal, and the like, and fig. 2 is a flowchart of the video distribution method according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
S21, acquiring the original video of the video to be published and the publisher identity information of the video to be published.
Please refer to S11 in fig. 1, which is not repeated herein.
And S22, performing summary calculation on the content of the original video to determine a video summary index.
Specifically, the above S22 includes:
S221, performing feature processing on the original video images in the original video and determining a feature processing result.
S222, determining the characteristic processing result as a video abstract index.
In the digest calculation, the original video images in the original video are processed. The original video comprises at least one original video image. If it comprises at least two, feature processing may be performed on each image separately and the individual feature processing results fused into the feature processing result of the original video; fusion methods include, but are not limited to, averaging, weighted summation, and the like. Alternatively, if the original video comprises at least two original video images, just one of them may be extracted and used as the original video image for feature processing.
In some embodiments, the S221 includes: and reducing the size of the original video image to a preset size to obtain a thumbnail of the original video so as to determine a feature processing result.
For example, a first video image in the original video is determined as an original video image for feature processing, and the size of the original video image is reduced to a preset size, for example, to a 64 × 32 thumbnail image, thereby obtaining a feature processing result.
Alternatively, a preset number of original video images are extracted from the original video, each is reduced to the preset size, and the reduced images are superposed and averaged pixel by pixel to obtain an average thumbnail, which is determined as the feature processing result. For example, three original video images may be obtained by taking a video image near each of the 1/3, 2/3, and 3/3 time points of the original video; thumbnails of the three images are made, superposed pixel by pixel, and averaged into an average thumbnail, which serves as the feature processing result.
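The average-thumbnail feature can be sketched as follows, using plain Python lists as grayscale frames; real code would decode frames and use a proper resampler, and the nearest-neighbour reduction here is only an assumption for illustration:

```python
def shrink(image, out_w, out_h):
    """Nearest-neighbour reduction of a 2-D pixel grid to out_w x out_h."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

def average_thumbnail(images, out_w=64, out_h=32):
    """Shrink each sampled frame, then average the thumbnails pixel by pixel."""
    thumbs = [shrink(img, out_w, out_h) for img in images]
    n = len(thumbs)
    return [[sum(t[y][x] for t in thumbs) // n for x in range(out_w)]
            for y in range(out_h)]
```

With the three frames sampled near the 1/3, 2/3, and 3/3 time points, `average_thumbnail` yields the single feature processing result described above.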
In some embodiments, the S221 includes: and analyzing the color characteristics and/or the light and shade characteristics of the original video image to determine the characteristic processing result.
The color feature and/or the shading feature may be obtained based on a partial region in the original video image, or may be obtained based on the entire region in the original video image, and so on.
For example, each original video image is divided evenly into 16 columns and 9 rows of blocks, and each block is represented by the average brightness of the pixels in roughly the central one fifth of that block, so that each original video image can be represented by a 16 × 9 brightness vector. The brightness vectors of all the original video images are then superposed and averaged to obtain a single 16 × 9 brightness vector, which is determined as the feature processing result.
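The block-brightness feature can be sketched as below; interpreting "roughly the central one fifth" as a fraction of the block per side is an assumption, as is the use of plain lists for grayscale frames:

```python
def block_luma_vector(image, cols=16, rows=9, center_frac=0.2):
    """Split a grayscale frame into cols x rows blocks; represent each block
    by the mean luminance of a small central region of the block."""
    h, w = len(image), len(image[0])
    bw, bh = w // cols, h // rows
    vec = []
    for r in range(rows):
        for c in range(cols):
            # top-left corner and size of the central region of this block
            cx0 = c * bw + int(bw * (0.5 - center_frac / 2))
            cy0 = r * bh + int(bh * (0.5 - center_frac / 2))
            cw = max(1, int(bw * center_frac))
            ch = max(1, int(bh * center_frac))
            pixels = [image[y][x]
                      for y in range(cy0, cy0 + ch)
                      for x in range(cx0, cx0 + cw)]
            vec.append(sum(pixels) // len(pixels))
    return vec
```

Averaging the per-frame vectors element-wise then gives the single 16 × 9 brightness vector of the original video.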
In some embodiments, the S22 includes:
(1) Acquiring data of a preset length.
(2) Performing numerical calculation on the original video images in the original video together with the data of the preset length, to determine the video digest index.
The data of the preset length is auxiliary configuration data; different digest-index calculation programs require configuration data of corresponding lengths as input. The configuration data are arranged into a data column of finite length, and such a data column is a vector. The data of the preset length may be generated from pre-configured data or by a user's selection, and differs according to the mode, program, and so on selected by the user.
For example, all the original video images in the original video are used as input factors, and the data of the preset length is used as an additional input factor. Numerical calculation on the original video images and the preset-length data yields a finite vector of finite dimension, i.e., a group of numbers.
The numerical calculation is not limited to the above; other manners may be adopted, and no limitation is placed on the specific calculation.
In the calculation process, data with a preset length is combined, and the data with the preset length can be different along with different use scenes, so that the calculation of the video abstract index can be suitable for different use scenes.
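The patent leaves the concrete numerical calculation open; as one hedged sketch (the choice of SHA-256 and the folding into eight integers are arbitrary here), the frames and the preset-length configuration data can be combined into a finite number vector:

```python
import hashlib

def digest_index(images, preset, dims=8):
    """Fold raw frame bytes plus preset-length config data into `dims` integers."""
    h = hashlib.sha256()
    h.update(preset)                    # the preset-length auxiliary configuration data
    for frame in images:
        h.update(frame)                 # each original video image as an input factor
    raw = h.digest()
    step = len(raw) // dims             # fold the 32-byte digest into `dims` integers
    return [int.from_bytes(raw[i * step:(i + 1) * step], "big")
            for i in range(dims)]
```

Because the preset data enters the calculation, the same frames yield different indexes under different usage scenarios, matching the adaptability described above.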
It should be noted that the manner of determining the video digest index described in S22 may be based on one or more feature processing results, on numerical calculation, or on a combination of the two, and so on. No limitation is placed on the method, which is set according to actual requirements.
And S23, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
Please refer to S13 in fig. 1 for details, which are not described herein again.
And S24, splicing the reference digest and the digital signature to determine a signature reference digest.
Please refer to S14 in fig. 1, which is not repeated herein.
And S25, compiling the signature reference abstract into a video code stream of the video to be published, and determining and publishing the target published video.
The video code stream is a code stream obtained by compressing and coding a video to be released.
Please refer to S15 in fig. 1 for details, which are not described herein again.
In the video publishing method provided by this embodiment, the feature processing result is determined from the reduced-size thumbnail or from the analysis of color or brightness features; the calculation is simple, fast, and effective, improving the real-time performance of video publishing.
In this embodiment, a video distribution method is provided, which can be used in electronic devices, such as a video distribution server, a computer, a mobile terminal, and the like, and fig. 3 is a flowchart of the video distribution method according to an embodiment of the present invention, as shown in fig. 3, where the flowchart includes the following steps:
S31, acquiring the original video of the video to be published and the publisher identity information of the video to be published.
Please refer to S11 in fig. 1 for details, which are not described herein again.
And S32, performing summary calculation on the content of the original video, and determining a video summary index.
Please refer to S22 in fig. 2 for details, which are not described herein.
And S33, determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature.
Specifically, the above S33 includes:
S331, obtaining a first reference digest.
Wherein the first reference summary comprises publisher identity information and a calculation description for summary calculation.
The calculation description for the summary calculation includes, but is not limited to, the name of the calculation program used, the number of original video images for generating the first reference summary, and a list of configuration parameters to be used when performing the calculation using the specified calculation program, and the like.
In some embodiments, the S331 includes:
(1) And acquiring the video description of the video to be released.
(2) And splicing the publisher identity information, the video description and the calculation description to determine a first reference abstract.
When the first reference digest is generated, the video description of the video to be published needs to be incorporated. The video description includes, but is not limited to, the program title, program duration, a brief text sketch of the program content, the program cover image, the program category, program-related participants, program-related contributors, and so on. The publisher identity information, the video description, and the calculation description are spliced to obtain the first reference digest. The first reference digest therefore comprises description information that, absent external modification, does not change as the video content changes. It can thus be stored and used as a common backup: whenever it needs to be encoded into the video code stream of the video to be published, the first reference digest is signed to obtain the first signed reference digest.
The video description is used to represent some side information of the video, such as video usage scope, etc., to facilitate distribution of subsequent videos, etc.
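As a hedged illustration of the splice in S331, the three parts can be combined into a single JSON object; the author/video/feature key names mirror the RA structure shown later in this document, while the exact serialization (sorted keys, compact separators) is an assumption:

```python
import json

def make_first_reference_digest(author, video, feature):
    """Splice publisher identity, video description, and calculation
    description into one encoded first reference digest."""
    ra = {"author": author, "video": video, "feature": feature}
    return json.dumps(ra, sort_keys=True, separators=(",", ":")).encode()
```

Because none of the three inputs changes with the video content, this block can be computed once per publishing session and reused as the common backup described above.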
S332, sign the first reference digest, and determine a first signature.
In this embodiment, a specific manner of the signature is not limited, for example, the signature is performed by using a digital certificate and a private key, and a first signature is obtained after the first reference digest is signed.
And S333, determining the video abstract index as a second reference abstract, signing the second reference abstract, and determining a second signature.
The video summary index is obtained by performing summary calculation by using the original video image of the original video, and is changed along with the change of the original video image. Therefore, the summary calculation is required for each extracted original video image. And determining the video abstract index as a second reference abstract, and then signing the second reference abstract by using a corresponding signature mode to determine a second signature.
And S34, splicing the reference digest and the digital signature to determine a signature reference digest.
Splicing the first reference abstract and the first signature to obtain a first signature reference abstract; and splicing the second reference digest and the second signature to obtain a second signature reference digest.
And S35, compiling the signature reference abstract into a video code stream of the video to be published, and determining and publishing the target published video.
The video code stream is a code stream obtained by compressing and coding a video to be released.
Specifically, the above S35 includes:
S351, encoding the first signed reference digest at target positions of the video code stream.
As described above, the first reference digest is a common backup, and the first signed reference digest obtained from it may be encoded at or before the first frame of the video code stream, and so on. It may also be inserted at intervals, for example once every several segments of the video code stream.
Because it is a common backup, it need not be encoded into every compressed encoded video stream, but only once per group of several compressed encoded video streams. The number of insertions, however, is not limited to one; it may be several. The advantage is that if the total content is long, finding the data does not require backtracking far; a short backtracking time suffices.
And S352, coding the second signature reference abstract into the target position of each compressed and coded video stream in the video code stream.
Each compressed encoded video stream corresponds to the original video used to generate its video digest index, since the second signed reference digest changes with the original video, i.e., over time. Therefore, when encoding the second signed reference digest, the position within the video to be published of the original video that generated it must be taken into account. After the position is determined, the second signed reference digest may be encoded into the first or last frame of the compressed encoded video stream corresponding to that original video segment.
And S353, determining and releasing the target release video based on the video code stream coded with the first signature reference abstract and the second signature reference abstract.
The target published video is the video code stream containing the first and second signed reference digests; no limitation is placed on the order in which they are encoded. For example, the second signed reference digest may be encoded first and then the first, or both may be encoded simultaneously, and so on.
According to the video publishing method provided by this embodiment, because the publisher identity information and the calculation description used for the digest calculation do not change as the video content changes, they can be used in common throughout one video publishing process; storing them directly and extracting them when needed avoids repeated calculation and improves publishing efficiency. Moreover, the first reference digest is used only during publishing and is encoded at target positions of the video code stream rather than into every compressed encoded video stream, which reduces the data volume of the published video.
As a specific application example of the embodiment of the present invention, the generated signature reference digests are classified into two types, one type is a first signature reference digest, hereinafter abbreviated as RA; the other type is a second signature reference digest, hereinafter abbreviated RB. The first reference abstract forming the RA comprises publisher identity information, video description and calculation description used for abstract calculation; the second reference summary forming the RB contains only the video summary index.
For example, JSON is used in RA to organize and encode information into data blocks, or other forms, such as XML, or protobuf, or Box structures of ISOBMFF, may be used to organize and encode information into data blocks.
Taking JSON as an example, RA has the following structure:
{
    "author":  { "sn": "…", "cn": "…", "cert": "…" },
    "video":   { … },
    "feature": { "id": …, "program": "…", "length": …, "init": [ … ] },
    "sign":    { "digest": "…", "signature": "…" }
}
As shown in the structure code above, author represents the publisher identity information, where sn, cn, and cert respectively represent the name, the common name for display, and the digital certificate; these must be provided, and providing them anonymously is not allowed. The other data items in author are optional, and other data items that do not conflict with the definitions above may also be added.
video represents the video description, each data item of which is optional. The video description entry is allowed to be empty, and other data items that do not conflict with the definitions above may be added.
feature represents the calculation description of the video digest index, where program represents the name of the calculation program used, length represents the number of image frames in a short segment of video, and init represents the list of configuration parameters to be used when calculating with the specified program. program must be provided; length and init may be omitted, and other data items may be added, depending on the chosen calculation program name.
When a plurality of video summary sub-indexes are included in the video summary index, each feature object needs to use a unique id to indicate which video summary sub-index is currently described. When a plurality of feature objects appear in array form, id is a data item that cannot be omitted.
sign denotes the digital signature of the RA, where digest denotes the hash algorithm used in computing the digital signature, and signature represents the computed digital signature result, expressed in Base64 encoding. digest and signature must both be supplied, as must sign itself. Usable hash algorithms include SHA-256, SM3, and so on.
Since JSON-form data is not convenient for carrying binary data directly, wherever binary data values are involved, the binary data is encoded using Base64 or any other suitable encoding.
In the signing process, a hash code is computed, using the hash algorithm named by digest, over the RA-encoded data block before the sign object/data block is added (or, equivalently, after the sign object/data block has been deleted from the RA-encoded data block). The hash code thus obtained is then signed with the private key associated with the digital certificate cert in the publisher identity description, and the signature is Base64-encoded to obtain the signature value.
The private key associated with cert is private information and is the key material by which the publisher proves its true identity; the publisher is required to keep it safe. Under the X.509 public-key cryptosystem, precisely because the publisher owns this private key that no one else owns, it is believed that no one other than the publisher can, within any practically meaningful time, forge a digital signature of equivalent cryptographic effect.
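The signing flow of the two paragraphs above can be sketched as follows. The private-key operation is mocked here (`fake_sign`); a real implementation would use the key associated with cert, and the compact JSON serialization is an assumption:

```python
import base64
import hashlib
import json

def ra_hash(ra, algo="sha256"):
    """Hash the RA data block with the sign object removed, per the digest algorithm."""
    body = {k: v for k, v in ra.items() if k != "sign"}   # delete the sign object
    encoded = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.new(algo, encoded).digest()

def fake_sign(hash_code):
    """Stand-in for the real private-key signature operation."""
    return hash_code[::-1]

def attach_signature(ra):
    """Compute the hash, sign it, and attach the Base64-encoded sign object."""
    h = ra_hash(ra)
    ra["sign"] = {"digest": "SHA-256",
                  "signature": base64.b64encode(fake_sign(h)).decode()}
    return ra
```

Because the hash excludes the sign object, verification can recompute `ra_hash` over the received RA unchanged and check it against the decoded signature.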
In RB, JSON, XML, protobuf, box structure of ISOBMFF, or the like is used to organize and encode information into data blocks. Taking JSON as an example, the specific structure of RB is shown below:
{
    "feature": { "id": …, "program": "…", "length": …, "init": [ … ], "result": "…" },
    "sign":    { "digest": "…", "signature": "…" }
}
In the RB, only the feature and sign objects need to be included. The sign object is computed in the same way as in the RA, using the private key associated with cert as described for the RA.
In the RB, the feature object has the same id, program, length, and init data items as in the RA, but program, length, and init may be omitted. When an RB that omits program, length, and init is encountered, they may be filled in from the feature object of the RA. The biggest difference from the feature object in the RA is the data item result, which is the video digest index; its specific calculation method is described in detail by program and init.
Since the data items of the feature objects in the RA and RB, such as id, program, length, and init, are essentially identical, the following description treats them without distinction, and likewise for the sign object.
In this embodiment, the feature object has an optional calculation model, i.e., the calculation model used for the digest calculation; its name, program, has two optional models/programs, thumbnail and blocks.
The index calculation model/program thumbnail computes the index result by reducing the enumerated picture images to an average thumbnail. Its configuration parameters are: "init": ["thumbnail width", "thumbnail height", "sampling mode"]. The thumbnail width configures the pixel width of the thumbnail finally generated by the program, the thumbnail height configures its final pixel height, and the sampling mode takes an integer value such as -2, -1, 0, 1, 2, and so on, indicating which images are extracted.
When n ≤ 0, the 1st through the |n|-th images are taken to generate thumbnails; when n > 0, every n-th original video image is taken (i.e., n − 1 images are skipped between samples) and the average thumbnail is then computed. For example, n = 1 means taking all the original video images; n = 2 means taking every other image after the first; n = 3 means taking one image out of every three after the first. When creating the average thumbnail, the thumbnails are accumulated pixel by pixel and then divided by the number of accumulated images. The thumbnail is encoded into a data block as JPEG and then Base64-encoded to obtain the value of the result data item.
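The "sampling mode" selection can be sketched as an index chooser; the exact boundary behaviour for n ≤ 0 is an interpretation of the text above, not a definitive reading:

```python
def sample_indices(total, n):
    """Return the 0-based indices of the frames selected by sampling mode n."""
    if n <= 0:
        # interpretation: a leading run of |n| frames (at least one)
        return list(range(min(total, abs(n) or 1)))
    # n > 0: the first frame, then every n-th frame thereafter
    return list(range(0, total, n))
```

The selected frames are then shrunk and averaged as described for the thumbnail model.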
The index calculation model/program blocks divides each enumerated picture image into equal blocks; each block is then subdivided as a nine-square grid and represented by the average brightness of its central sub-block, so each picture yields a group of representative block brightnesses. These brightness matrices are combined into a brightness tensor, which forms the value of the result data item.
The model configuration parameters of the index calculation model/program blocks are: "init": ["number of block columns", "number of block rows", "sampling mode"]. The "sampling mode" here has the same meaning as in thumbnail and describes how images are extracted from the image group for calculation.
As a specific application example of the foregoing embodiments, the video publishing method includes: 30 color original images of BT.601 specification, 1920 pixels wide and 1080 pixels high, are captured per second in the YUV420P data format; these constitute the video to be published. Image compression encoding is performed on the video to be published; specifically, each 1920 × 1080 YUV420P image is compression-encoded to obtain compressed frame data. By default, H.264 image-frame Slice compressed data blocks are obtained, e.g., I-Slice, B-Slice, and P-Slice frame data blocks.
During encoding, a frame may be designated for encoding as an H.264 key-image I-Slice compressed data block, producing an SPS data block and a PPS data block; the SPS, PPS, and I-Slice blocks are spliced together to form a key-frame image data block, i.e., an IDR frame data block. Encoding keeps pace with capture: for example, 30 color 1920 × 1080 YUV420P images are received per second and 30 H.264 frame data blocks are output. After a key IDR frame is output, another IDR frame is output at every subsequent 300th frame data block.
When the digest calculation is performed on the original video images in the original video, the digest index is calculated with the blocks index calculation model/program: each image is divided into 16 columns and 9 rows of blocks, and the sampling mode is configured as 6, i.e., after one YUV420 image is taken, five images are skipped and the 6th is taken. length is configured as 30, i.e., one video digest index is output for every 30 input images. During the digest calculation, each time a video digest index is produced, it is made into an RB.
When an IDR frame is received, a copy of the RA is produced and the digest-index calculation module is reset. When an RA or RB digest is obtained, it is made into a supplemental enhancement information (SEI) data block; an SEI NAL Unit containing the signed reference digest is constructed according to the Annex B specification of the IEC/ISO 14496-10 standard and output to the data interface of the video stream.
In particular, to prevent the signed reference digest from being semantically confused, in the resulting SEI data block, with SEI data blocks generated by any other application, the auxiliary data payload of SEI payloadType 5, i.e., user_data_unregistered, is used, and a UUID is specifically introduced and assigned to the signed reference digest; it is placed in the first 16 bytes of the SEI data block to lead the signed reference digest.
For example, the UUID is defined as 1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8.
When an IDR frame or a Slice frame is obtained, a NAL Unit is generated according to the Annex B rules and output to the data interface of the video stream. When a digest SEI NAL Unit arrives at the same time as an IDR or Slice frame, the SEI NAL Unit containing the RA digest is placed before the frame's NAL Unit, and the SEI NAL Unit containing the RB digest is placed after it.
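A minimal sketch of wrapping a signed reference digest into an H.264 SEI NAL unit (payloadType 5, user_data_unregistered) led by the UUID defined above. Emulation-prevention byte insertion is omitted for brevity, so this is not a spec-complete encoder:

```python
SIG_UUID = bytes.fromhex("1e2bc68c33d25ca2af3b0b5e5469c7b8")

def build_sei_nal(payload: bytes) -> bytes:
    """Build an Annex B SEI NAL unit carrying the signed reference digest."""
    body = SIG_UUID + payload            # the UUID leads the SEI data block
    sei = bytes([5])                     # payloadType 5: user_data_unregistered
    size = len(body)
    while size >= 255:                   # payloadSize is coded in 255-byte chunks
        sei += bytes([255])
        size -= 255
    sei += bytes([size]) + body
    sei += bytes([0x80])                 # rbsp_trailing_bits
    # Annex B start code + SEI NAL header (nal_unit_type 6)
    return b"\x00\x00\x00\x01" + bytes([0x06]) + sei
```

The resulting unit can then be placed before the frame NAL unit (RA digest) or after it (RB digest), as described above.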
The video publishing method provided by this embodiment allows a video publisher to publish, into the public domain, a trustworthy video based on the signed reference digest. The video has anti-counterfeiting characteristics and carries the publisher's identity stamp, yet the non-encrypted nature of the public domain is maintained and the video content remains publicly accessible. The anti-counterfeiting characteristics manifest in three aspects: first, the video digest index can be calculated from the video content and checked against the index in the signed reference digest; second, the signed reference digest is the reference digest plus its signature, and a person without the private key cannot compute a new reference digest from forged video content and produce a valid signature for it; third, the signed reference digest carries a digital certificate indicating the publisher's identity, which can be managed authoritatively. Therefore, if the digital certificate verifies as genuine, the reference digest signature verifies as genuine, and the recomputed digest index matches closely, the video was necessarily published by the holder of the private key of the valid digital certificate.
In the present embodiment, a video distribution method is provided, which may be used in a video distribution server, a mobile terminal, and the like, fig. 4 is a flowchart of a video distribution method according to an embodiment of the present invention, as shown in fig. 4, the flowchart includes the following steps:
S41, acquiring the target published video.
The target published video comprises a signed reference digest and a compressed encoded video stream; the signed reference digest comprises a reference digest and the signature of the reference digest, and the reference digest is obtained by splicing the digest calculation result of the content of the original video in the compressed encoded video stream with the publisher identity information of the published video.
It should be noted that the target published video here is not necessarily delivered directly from a video publishing server; it may also be obtained from a higher-level video distribution server, and so on.
For the generation process of the target release video, please refer to the above details, and details are not repeated herein.
And S42, separating the signature reference abstract and the compressed and coded video stream from the target release video based on the identification of the signature reference abstract.
The signed reference digest is encoded within the compressed encoded video stream and is distinguished from it by a corresponding identifier. For example, the signed reference digest may be encoded into the compressed encoded video stream as an SEI field, and that field can then be used to locate the signed reference digest in the target published video.
The signed reference digests may include several of the same type or several of different types. One type is obtained based on the content of the original video, e.g., the second signed reference digest described above; another may additionally combine the publisher identity information, e.g., the first signed reference digest described above.
As described above, the signed reference digests correspond to the compressed encoded video streams, and accordingly the signed reference digests separated from the target published video correspond to compressed encoded video streams. For example, for a video to be published, an original video of a target duration is extracted every preset duration, as required for generating the second signed reference digests. Continuing the example, if the video to be published is 30 minutes long, the preset duration is 5 minutes, and the target duration is 10 seconds, the video to be published is processed as follows:
to-be-published sub-video 1: representing videos from [0,5] minutes in the to-be-released videos, obtaining a compressed coding video stream 1 after compression coding, extracting 10 seconds of original videos from the to-be-released sub-videos 1, and generating a second signature reference abstract 1;
to-be-published sub-video 2: representing videos of (5, 10) minutes in the to-be-released videos, obtaining a compressed coding video stream 2 after compression coding, extracting 10 seconds of original videos from the to-be-released sub-videos 2, and generating a second signature reference abstract 2;
to-be-published sub-video 3: representing videos of (10, 15) minutes in the to-be-released videos, obtaining compressed coded video streams 3 after compression coding, extracting 10 seconds of original videos from the to-be-released sub-videos 3, and generating second signature reference digests 3;
and so on;
to-be-published sub-video 6: and (3) representing the video (25, 30) minutes in the video to be published, obtaining a compressed and encoded video stream 6 after compression encoding, extracting 10 seconds of original video from the sub-video 6 to be published, and generating a second signature reference digest 6.
As indicated above, the second signed reference digests are in one-to-one correspondence with the compressed encoded video streams; therefore, a second signed reference digest and its corresponding compressed encoded video stream can be separated using the identifier of the second signed reference digest.
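The separation in S42 can be sketched as a scan of an Annex B byte stream for the SEI NAL units identified by the guiding UUID defined earlier in this document. Start-code parsing is deliberately simplified here (no emulation-prevention handling), so this is an illustrative sketch rather than a full demuxer:

```python
GUIDE_UUID = bytes.fromhex("1e2bc68c33d25ca2af3b0b5e5469c7b8")

def separate_digests(stream: bytes):
    """Split signed-reference-digest SEI units from the remaining NAL units."""
    digests, frames = [], []
    for unit in stream.split(b"\x00\x00\x00\x01"):
        if not unit:
            continue
        # nal_unit_type 6 = SEI; keep only SEI units carrying our UUID
        if unit[0] & 0x1F == 6 and GUIDE_UUID in unit:
            digests.append(unit)
        else:
            frames.append(unit)
    return digests, frames
```

The recovered digest units can then be matched back to their compressed encoded video streams by position, as in the sub-video example above.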
S43, processing the compressed encoded video stream at a preset service quality to obtain a video to be distributed with the preset service quality.
The preset service quality includes, but is not limited to, 8K, 4K, high definition, standard definition, smooth, and the like, and is determined according to actual requirements. The compressed encoded video stream is processed according to the determined preset service quality to obtain the video to be distributed with that quality. That is, through this step, videos to be distributed with different service qualities can be obtained from the same compressed encoded video stream.
In some embodiments, before the above S43, the method further includes: verifying the publisher identity information based on the signature reference digest; when the verification passes, S43 is performed.
For example, the authenticity of the publisher's digital certificate carried in the signature reference digest may be verified; for a certificate with a false identity, a warning may be issued and a warning information log recorded, and further distribution of the untrusted video stream may be prevented.
Alternatively, the signature in the signature reference digest may be verified; for an invalid signature, a warning may be issued and a warning information log recorded, and further distribution of the untrusted video stream may likewise be prevented.
In some embodiments, when obtaining the video to be distributed with the preset service quality, since the compressed encoded video stream must be processed in any case, the images of the compressed encoded video stream may be scanned to verify whether the video content satisfies a preset distribution condition. For video that does not satisfy the preset distribution condition, a warning event may be issued and a warning information log recorded, or distribution of the video may be restricted.
S44, compiling the signature reference digest into the video to be distributed to determine a target distribution video with the preset service quality, and distributing it.
For compiling the signature reference digest into the video to be distributed, reference may be made to the above description of compiling the signature reference digest during generation of the target release video. Alternatively, the position of the signature reference digest is recorded when it is separated from the compressed encoded video stream; after the video to be distributed is obtained, the signature reference digest is compiled back into the recorded position, thereby determining the target distribution video. Finally, the target distribution video is distributed to the corresponding terminal.
In some embodiments, the signature reference digest includes a first signature reference digest and second signature reference digests, where the first signature reference digest includes the publisher identity information and a calculation description for the digest calculation, and each second signature reference digest includes a digest calculation result of the content of the original video in one compressed encoded video stream; the video to be distributed includes sub-videos to be distributed in one-to-one correspondence with the compressed encoded video streams. Based on this, S44 includes:
(1) Compiling each second signature reference digest into the corresponding sub-video to be distributed to obtain target distribution sub-videos with the preset service quality.
(2) Splicing the target distribution sub-videos with the preset service quality, compiling the first signature reference digest into the splicing result to determine the target distribution video with the preset service quality, and distributing it.
It should be noted that the target release video includes multiple segments of compressed encoded video stream, and each segment can be processed by the above steps into a sub-video to be distributed with the preset service quality. Based on the correspondence between the compressed encoded video streams and the second signature reference digests, the correspondence between the sub-videos to be distributed and the second signature reference digests can be determined. On this basis, each second signature reference digest is compiled into its corresponding sub-video to be distributed, forming a target distribution sub-video with the preset service quality.
The target distribution sub-videos are spliced to obtain a splicing result, and the first signature reference digest is then compiled into the splicing result to obtain the target distribution video. The number of first signature reference digests can be determined by the duration of the splicing result: if the splicing result is long, several first signature reference digests may be compiled in, distributed at different positions of the splicing result; if it is short, a single first signature reference digest may suffice. The positions of the first signature reference digests are set according to actual requirements and are not limited here.
It should also be noted that compiling the first signature reference digest is not limited to obtaining the splicing result first and compiling afterwards. Alternatively, the number of first signature reference digests to compile may be determined first, the number of compressed encoded video streams between two adjacent first signature reference digests derived from that number, and then, while the second signature reference digests are being compiled in, the number already compiled is counted so as to determine the compiling positions of the first signature reference digests.
The first signature reference digest carries descriptive information that does not change over time, whereas the second signature reference digests are closely tied to the video content. Adopting different compiling schemes for the two kinds of signature reference digest therefore reduces the extra data volume that compiling the digests adds to the target distribution video, while still guaranteeing its reliability.
In the video distribution method provided by this embodiment, a signature reference digest is carried in the target release video, the signature reference digest being obtained by digest calculation based on the content of the original video in the compressed encoded video stream. Since an image has its own unique content, that is, an invariant associated with it, a signature reference digest calculated from this invariant is reliable. Meanwhile, the signature reference digest also includes the publisher identity information, so the resulting target distribution video carries the publisher's identity mark and has good anti-counterfeiting capability, ensuring that the obtained target distribution video with the preset service quality is highly reliable.
As a specific application example of the video distribution method according to the embodiment of the present invention, the video distribution method includes: obtaining a target release video at 30 frames per second together with signature reference digests RA or RB, and performing the signature reference digest separation process on it.
In the separation process, SEI data blocks whose payload type (PayloadType) is 5 and that are guided by the UUID {1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8} defined in the above embodiment are detected, and the signature reference digest is extracted from each such SEI data block; all other video frame data is used for the subsequent production at the preset service quality.
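A minimal sketch of that separation step, assuming the SEI payloads have already been parsed out of the bitstream (only the payload type 5 and the UUID come from the text; the tuple format is an assumption for illustration):

```python
import uuid

# UUID guiding the signature reference digest SEI blocks (from the embodiment)
DIGEST_UUID = uuid.UUID("1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8").bytes

def split_sei(payloads):
    """payloads: iterable of (payload_type, payload_bytes) tuples.
    SEI payload type 5 (user data unregistered) starts with a 16-byte UUID;
    blocks led by DIGEST_UUID carry a signature reference digest, everything
    else is passed through for normal video production."""
    digests, passthrough = [], []
    for ptype, body in payloads:
        if ptype == 5 and body[:16] == DIGEST_UUID:
            digests.append(body[16:])      # digest data follows the UUID
        else:
            passthrough.append((ptype, body))
    return digests, passthrough
```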
When producing videos with different service qualities, each input video frame (or batch of compressed video-frame data) must be decoded, outputting one or more video frame images; for example, 30 decoded 1920 × 1080 video picture images are obtained each second.
Each decoded video image is then reduced or enlarged according to the specified configuration and compression encoded, outputting video frames at one or more different sizes or bitrates. For example, a video with an input image size of 1920 × 1080 may be produced into smaller 1280 × 720, 704 × 576, and 352 × 288 videos, each then H.264 compression encoded.
After an image is reduced or enlarged, the original video aspect ratio information can be kept in the encoded video frame or the related description information. For example, after a 1920 × 1080 image is reduced to 704 × 576 and encoded, the original aspect ratio of 16:9 can be described in the encoded frame data.
During decoding, whenever an IDR frame is received and decoded, the encoding process is instructed to synchronously encode an IDR frame at each of the different sizes/bitrates.
When a copy of an RA is received from the reference digest separation module, it is temporarily stored until the next signature RA arrives. When an IDR frame is received from the image encoding module, the stored RA is first output to the video stream data interface of the corresponding size, and then the IDR frame of that size is output. For example, on receiving a 704 × 576 IDR encoded frame from the image encoding module, the buffered RA is output on the 704 × 576 video output data interface before the 704 × 576 video encoded frame itself.
When a non-IDR frame is received from the image encoding module, the video frame is output directly to the corresponding video stream data output interface.
When a copy of an RB is received, the RB is output simultaneously to the video stream data output interfaces of all streams.
When an RA or RB is separated, and an RA is received, the digital certificate in the RA is verified first. If the digital certificate carried in the RA is signed by a trusted CA certificate already stored locally, the certificate is confirmed as authentic. If no valid certificate can be extracted, a warning event is issued, a log is recorded, and a recommendation is made to the system to block the distribution process.
Optionally, the authentic and trusted digital certificate may be extracted and the value of its CN data field compared with the value of the author CN data field in the RA; if the two are not identical and have no inclusion relationship, a warning event is issued, a log is recorded, and a potential risk is reported.
Optionally, the sign object is taken out of the RA and deleted from the RA. The publisher's public key is then extracted from the digital certificate, the hash algorithm is obtained from the digest data field of the sign object, a hash value of the reference digest data block (with the sign object deleted) is calculated using that algorithm, and the signature data of the sign object is decrypted with the publisher's public key to obtain another hash value. The two hash values are compared; if they are identical, the original signature reference digest is authenticated as true. Otherwise, a warning event may be issued, a log recorded, and a recommendation made to the system to block the distribution process.
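The hash-comparison check just described can be sketched as follows. The dict layout and the `decrypt_with_public_key` callback are hypothetical stand-ins; the real RA layout and public-key scheme are whatever the publisher used (e.g. RSA).

```python
import hashlib
import json

def verify_ra(ra, decrypt_with_public_key):
    """ra: dict whose 'sign' object carries the hash algorithm name in its
    'digest' field and the publisher-encrypted hash in 'signature'.
    Removes the sign object, hashes the remaining reference digest data,
    decrypts the signature with the publisher's public key, and compares."""
    ra = dict(ra)
    sign = ra.pop("sign")                           # delete sign object first
    data = json.dumps(ra, sort_keys=True).encode()  # canonical serialization
    local_hash = hashlib.new(sign["digest"], data).digest()
    claimed_hash = decrypt_with_public_key(sign["signature"])
    return local_hash == claimed_hash               # identical => authentic
```

With a correct public key, a mismatch here is what triggers the warning event and the recommendation to block the distribution process.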
When producing video at the preset service quality, video that does not meet the distribution conditions can be screened out by scanning the video. Specifically, when the RA is received, the author object and the video object are extracted and relevant records are made. When an event that does not meet the distribution conditions is triggered, it is recorded and either a recommendation to block the distribution process is made to the system, or a recommended image processing method is used to treat the image appropriately.
The video distribution method provided by this embodiment does not require encrypting the video content; the video tool or service can distribute the video content at different service qualities without reducing the trustworthiness of the video stream, and can likewise scan the video content and take necessary supervision measures without reducing that trustworthiness.
This embodiment provides a video playing method, which can be used in a playing terminal such as a computer or a mobile terminal. Fig. 5 is a flowchart of the video playing method according to an embodiment of the present invention. As shown in fig. 5, the flow includes the following steps:
S51, acquiring a target video.
The target video includes a signature reference digest and a compressed encoded video stream; the signature reference digest includes a reference digest and a signature of the reference digest, and the reference digest is obtained by splicing the digest calculation result of the content of the original video in the compressed encoded video stream with the publisher identity information of the video.
When the target video is the target distribution video, refer to the above description of the method for generating the target distribution video; when the target video is the target release video, refer to the above description of the method for generating the target release video. Neither is repeated here.
S52, separating the signature reference digest and the compressed encoded video stream from the target video based on the identifier of the signature reference digest.
The manner of separating the signature reference digest and the compressed encoded video stream from the target video is similar to that of separating them from the target release video described in S42 of the embodiment shown in fig. 4, and is therefore not repeated here.
In some embodiments, the signed reference digest includes a first signed reference digest including publisher identity information and a computation description for digest computation, and a second signed reference digest including a digest computation result of content of an original video in the compressed encoded video stream. Based on this, S52 includes:
(1) Separating the first signature reference digest from the target video by using the identifier of the first signature reference digest.
(2) Separating each second signature reference digest and the compressed encoded video stream corresponding to it from the target video by using the identifier of the second signature reference digest.
The specific contents of the first signature reference digest and the second signature reference digest are described above, and are not described herein again.
For example, a video frame queue is used to process the received target video: whenever a compressed encoded video frame is found, it is placed into the video frame queue; when a second signature reference digest is found, all video frames currently in the queue are taken out as the compressed encoded video stream for that time period.
Alternatively, the video frame queue may be empty when a second signature reference digest is encountered; in that case the length of the compressed encoded video stream is regarded as 0, i.e. an empty video stream is obtained.
When putting video frames into the video frame queue, the compressed video frames may instead be decoded first, so that decoded video frame images rather than compressed encoded data are placed in the queue; the effect for verifying the authenticity of the video stream is the same.
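The queue-based separation can be sketched as follows; the tuple tagging is an illustrative assumption, not the actual bitstream format:

```python
from collections import deque

def separate_streams(items):
    """items: interleaved ('frame', data) and ('rb', digest) entries in the
    order they appear in the target video. Yields one (compressed_stream,
    second_digest) pair per RB; an RB arriving on an empty queue yields a
    zero-length (empty) video stream, as described above."""
    queue = deque()
    for kind, payload in items:
        if kind == "frame":
            queue.append(payload)
        elif kind == "rb":
            yield list(queue), payload
            queue.clear()
```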
The target release video comprises two types of signature reference digests, and the two types of signature reference digests can be accurately separated from the target release video by using the corresponding identifiers.
S53, decoding the compressed encoded video stream to determine the decoded video in the compressed encoded video stream.
The compressed encoded video stream is fed into a video decoder for decoding, thereby obtaining the decoded video in the compressed encoded video stream.
S54, performing digest calculation on the content of the decoded video to determine a decoding digest index.
The digest calculation here is consistent with the calculation used to obtain the digest calculation result carried in the signature reference digest.
Specifically, please refer to the above description of S22 in the embodiment shown in fig. 2, and details thereof are not repeated herein.
For example, when the digest calculation result was calculated as a first-frame thumbnail of size 64 × 32, the first original video image of the decoded video is reduced to 64 × 32 and used as the decoding digest index.
For example, when the digest calculation result was calculated as an average thumbnail of three enumerated frames at size 64 × 32, the three frames of original video images near the 1/3, 2/3, and 3/3 time points of the decoded video are reduced to 64 × 32 and averaged pixel by pixel; the resulting average thumbnail is used as the decoding digest index.
For example, when the calculation mode of the digest calculation result is the 16 × 9 block average luminance of specified enumerated images, the specified number of images are taken, each image is divided evenly into 16 horizontal by 9 vertical blocks, and the mean luminance of roughly the central fifth of the pixels of each block represents that block, so each image is represented by a 16 × 9 luminance vector; these representative vectors are summed and averaged into a single 16 × 9 luminance vector, which is used as the decoding digest index.
For example, when the calculation mode of the digest calculation result is the singular values of the 32 × 18 block average luminance of specified enumerated images, the specified number of images are each divided into 32 × 18 blocks and the block average luminances counted to obtain 32 × 18 numerical matrices; the matrices are accumulated and then subjected to singular value decomposition, and the non-zero singular values are truncated into a variable-length vector used as the decoding digest index.
For example, when the calculation mode of the digest calculation result is the 16 × 9 block average luminance joint vector of the enumerated images, the 16 × 9 block average-luminance numerical matrix is obtained for the specified number of images and the matrices are directly concatenated into one long numerical vector, which is used as the decoding digest index.
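As one concrete sketch of the 16 × 9 block average-luminance index described above (pure Python, frames as 2-D lists of luminance values; the exact "central fifth" sampling fraction is an assumption):

```python
def block_luminance_index(frames, rows=9, cols=16, center_frac=0.2):
    """Divide each frame into 16 horizontal x 9 vertical blocks, represent
    each block by the mean luminance of roughly its central fifth, and
    average the per-frame 16x9 vectors into one decoding digest index."""
    acc = [[0.0] * cols for _ in range(rows)]
    for frame in frames:                      # frame: list of pixel rows
        h, w = len(frame), len(frame[0])
        bh, bw = h // rows, w // cols
        for r in range(rows):
            for c in range(cols):
                # central portion of the block (about one fifth of each side)
                ch = max(1, int(bh * center_frac))
                cw = max(1, int(bw * center_frac))
                y0 = r * bh + (bh - ch) // 2
                x0 = c * bw + (bw - cw) // 2
                vals = [frame[y][x]
                        for y in range(y0, y0 + ch)
                        for x in range(x0, x0 + cw)]
                acc[r][c] += sum(vals) / len(vals)
    n = len(frames)
    return [acc[r][c] / n for r in range(rows) for c in range(cols)]
```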
S55, verifying the target video based on the decoding digest index and the signature reference digest.
When verifying the target video, the calculated decoding digest index is compared with the video digest index and the difference is recorded. When the difference is large, a relevant warning is given or a relevant record is made, indicating that the compressed encoded video stream is to some degree not authentic.
In some embodiments, the first signature reference digest includes a first reference digest and a first signature of the first reference digest, the first reference digest including the publisher identity information and the calculation description for the digest calculation. Based on this, the above S55 includes:
(1) Extracting the publisher identity information and/or the first signature from the first signature reference digest.
(2) Verifying the publisher identity information and/or the first signature to determine a verification result for the target video.
When verifying with the first signature reference digest, the authenticity of the digital certificate in the publisher identity information and/or of the first signature can be verified. When the publisher's digital certificate is obtained, its authenticity is verified; when authenticity cannot be verified for lack of information, a relevant warning is given or a relevant record is made, indicating that the authenticity of the publisher's identity is in doubt.
When the obtained publisher digital certificate is not signed by a trusted organization or CA center, a relevant warning is given or a relevant record is made, indicating that the authenticity of the publisher's identity is in doubt.
When no publisher digital certificate is provided, no trusted publisher digital certificate access descriptor is provided, and no trusted publisher identity public key access descriptor is provided, a relevant warning is given or a relevant record is made, indicating that the authenticity of the publisher's identity is suspect.
When the publisher's digital certificate is obtained but verification of the certificate fails, a relevant warning is given or a relevant record is made, indicating that the publisher's identity is false and not to be trusted.
The obtained publisher public key is used, with the reference digest data excluding the signature part of the signature reference digest as input, to verify the authenticity of the digital signature; if verification fails, a relevant warning is given or a relevant record is made, indicating that the signature reference digest is not authentic and not to be trusted.
The first signature reference digest contains descriptive information that does not change as the images in the video change; it can therefore be used to characterize the target video as a whole.
In some embodiments, the above S55 includes:
(1) Extracting the reference digest from the second signature reference digest to obtain a video digest index, where the video digest index is the digest calculation result of the content of the original video in the compressed encoded video stream.
(2) Calculating the similarity between the decoding digest index and the video digest index.
(3) Determining the verification result of the target video based on the similarity.
For example, when comparing the vector of the decoding digest index with the vector of the video digest index, the absolute differences may be calculated element by element and averaged, the cosine of the angle between the two vectors may be calculated, the covariance or correlation coefficient of the two vectors may be calculated, the structural similarity between them may be calculated, and so on.
Taking the cosine of the angle between the two digest index vectors as an example: a cosine close to 1.0 indicates that the authenticity of the segment of compressed encoded video stream is very high; a cosine close to 0.0 indicates that it is very low; a cosine > 0.7 can be judged as good authenticity, and a cosine < 0.5 as poor authenticity.
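The cosine comparison and its thresholds can be sketched as:

```python
import math

def cosine(u, v):
    """Cosine of the angle between the decoding digest index vector and
    the video digest index vector."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def judge_authenticity(cos):
    """Thresholds from the description: > 0.7 good, < 0.5 poor."""
    if cos > 0.7:
        return "good"
    if cos < 0.5:
        return "poor"
    return "inconclusive"
```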
When the video digest index is calculated as an average thumbnail, the average thumbnail representing the decoding digest index and the one representing the video digest index can be compared pixel by pixel; the regions corresponding to pixels whose difference exceeds 30% are then tinted and displayed during playback as a warning, and relevant records are made.
Each second signature reference digest is in one-to-one correspondence with a compressed encoded video stream and can therefore be used to characterize its corresponding stream, from which the verification result of the target video is determined.
S56, playing the decoded video when the verification passes.
When the verification passes, the obtained decoded video can be determined to be a trustworthy video and can therefore be played.
When the verification fails, a warning mark can be placed in the video image according to the verification result when the decoded video is rendered, for example by displaying a target image; this is not limited here.
In the video playing method provided by this embodiment, a signature reference digest is carried in the target video, the signature reference digest being obtained by digest calculation based on the content of the original video in the compressed encoded video stream. Since an image has its own unique content, that is, an invariant associated with it, a signature reference digest calculated from this invariant is reliable. Meanwhile, the signature reference digest also includes the publisher identity information, so the target video carries the publisher's identity mark and has good anti-counterfeiting capability.
As a specific application example of the video playing method, the video playing method includes: acquiring a target video and separating the signature reference digest and the compressed encoded video stream (without the reference digest) from it. For example, video frames at 30 frames per second and signature reference digests RA or RB are obtained for the subsequent separation processing.
In the separation process, SEI data blocks whose PayloadType is 5 and that are guided by the UUID {1e2bc68c-33d2-5ca2-af3b-0b5e5469c7b8} defined in the specific application example are detected, and the signature reference digest data is extracted from them for signature verification; all other video frame data is used for decoding.
In decoding, each input video frame (or batch of compressed video-frame data) is decoded and one or more video frame images are output; for example, 30 decoded 1920 × 1080 video picture images are obtained each second.
When verifying the signature reference digest data blocks, each time a signature reference digest data block is received from the reference digest separation module, whether it is an RA or an RB is determined by judging whether it contains a sign result.
When an RA is received, the digital certificate in the RA is verified first. If the digital certificate carried in the RA is signed by a trusted CA certificate already stored locally for playback purposes, the certificate is confirmed as authentic; alternatively, its authenticity may be confirmed in some other trusted way for playback purposes. When a certificate can be verified as authentic and trusted, authentication of the certificate passes and further verification operations are enabled. If a valid certificate cannot be extracted, the potential risk is reported to the image rendering module, which warns of the risk in the video display window.
Optionally, the authentic and trusted digital certificate is extracted and the value of its CN data field compared with the value of the author CN data field in the RA; if the two are not identical and have no inclusion relationship, a potential CN spoofing risk is reported to the image rendering module, and the author information in the RA together with the CN data field (or more data fields) of the actual certificate is displayed by the image rendering module to warn the viewer of the possible CN spoofing risk.
The sign object is taken out of the signature reference digest data block and deleted from it. The publisher's public key is then extracted from the digital certificate, the hash algorithm is obtained from the digest data field of the sign object, a hash value of the reference digest data block (with the sign object deleted) is calculated using that algorithm, and the signature data of the sign object is decrypted with the publisher's public key to obtain another hash value. The two hash values are compared; if they are identical, the original signature reference digest is authenticated as true. Otherwise, the false reference digest is not used, a warning is sent to the image rendering module, and further index verification work is terminated until the next RA is encountered.
When an RA is received, the author object and the video object are extracted and the image rendering module is informed so that it can display them appropriately.
By the time each IDR frame of a video compression-encoded key frame is received, a matching signature reference digest RA should have been received, and from this RA it is known whether the received video stream carries signature reference digests. When the received video stream carries no signature reference digest, its processing requires no signature authentication: only digest separation, decoding, and rendering are needed to complete playback.
In digest index verification, the index calculation models/programs described in the above examples, thumbnail and blocks, are used. According to the sign configuration in the RA, each image received from the image decoding module has its data contribution within the current time period calculated by the configured thumbnail or blocks program.
For example, taking the thumbnail as an example, if the sequence number of the current image since the module reset does not conform to the sampling pattern of the thumbnail, the image is directly discarded/ignored. If the sampling mode is met, the image is reduced to a thumbnail of a specified width and height by a faster algorithm, such as a bilinear method, and is accumulated on a preset floating point thumbnail base image. This thumbnail floor is created when the thumbnail program is reset and configured. When the received image sequence number is the same as the length, it indicates that the current video segment has been processed, and at this time, the floating point value on each pixel is averaged once according to the number of the accumulated images on the accumulated floating point thumbnail base map, and then is arranged into an integer thumbnail image. Namely, an index result is calculated according to the thunmbnail description, but the jpeg compression is not carried out.
And when the feature result of the RB is received, performing Base64 decoding and JPEG decompression on the image carried in the feature, namely obtaining an integer thumbnail. And comparing the thumbnail obtained by the calculation with the currently decompressed thumbnail pixel by pixel, marking the pixel as unreal (value 2) when the pixel difference is more than 50%, marking the pixel as possible unreal (value 1) when the pixel difference is more than 30%, and marking the pixel as acceptable (value 0) when the pixel difference is less than 30%. Thus, a real score thumbnail is made. The abstract index verification module sends the real calculation thumbnail to the image rendering module to guide the image rendering module to dye the video picture which is currently played/rendered according to scores. Meanwhile, all scores are accumulated and then divided by the total number of pixels of the thumbnail to obtain a comprehensive score, which indicates unreal when the comprehensive score is 2, may indicate unreal when the comprehensive score is 1, and indicates that the degree of reality is acceptable when the comprehensive score is 0. And the comprehensive score is reported to the image rendering module.
Each time a complete index verification finishes, that is, once the feature result of the RB has been received and the verification report completed, the calculation parameters are cleared immediately in preparation for index calculation on the next batch of decoded video.
When the blocks program is configured, the calculation proceeds by a method similar to that for the thumbnail, yielding a tinting guide picture and a composite score for subsequent image rendering and display.
Through the above presentation of authenticity information and image rendering, viewers learn of possibly untrustworthy behavior in the video directly while it plays.
During rendering, the following may be drawn on the video rendering window:
when a copy of an author object is received, its content is displayed appropriately, such as the publisher's name and organization;
when an author object arrives accompanied by a digital-certificate falsity alarm, the alarm is displayed with suitable prominence, and optionally the playback picture is tinted with a strong deception warning;
and when an index verification result, i.e. a tinting guide and a composite score, is received, the played video is tinted according to the guide and the composite score is recorded appropriately, or a deception warning sign is posted on the video.
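A possible mapping from the verification outputs to overlay actions on the rendering window might look like the following; all names, tint choices, and labels are illustrative and not from the patent.

```python
def overlay_decision(composite_score, cert_alarm=False):
    """Map a composite authenticity score (0/1/2) and an optional
    certificate alarm to a hypothetical overlay action for the player."""
    if cert_alarm:
        # A certificate falsity alarm overrides the score-based decision.
        return {"tint": "red", "label": "certificate warning"}
    return {
        0: {"tint": None,    "label": "authenticity acceptable"},
        1: {"tint": "amber", "label": "possibly tampered"},
        2: {"tint": "red",   "label": "tampered"},
    }[composite_score]
```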
The video playing method provided by this embodiment plays trusted video based on the signature reference abstract, traces the video's publisher during playback, and identifies and warns about the degree of authenticity of the video stream. The player can thus confirm the publisher's identity while playing the video and determine whether the video may have been tampered with to the point that the publisher's commitment is lost. When used to play video content requiring high credibility, the method allows the publisher's true intent in publishing the video to be confirmed.
In this embodiment, a video distribution apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
The present embodiment provides a video distribution apparatus, as shown in fig. 6, including:
the acquisition module 61 is configured to acquire an original video of a video to be published and publisher identity information of the video to be published;
the abstract calculation module 62 is configured to perform abstract calculation on the content of the original video to determine a video abstract index;
the signature module 63 is configured to determine a reference digest based on the publisher identity information and the video digest index, and sign the reference digest to obtain a digital signature;
a concatenation module 64, configured to concatenate the reference digest and the digital signature to determine a signature reference digest;
the publishing module 65 is configured to code the signature reference digest into a video code stream of the video to be published, and determine and publish a target published video, where the video code stream is a code stream obtained by compression coding of the video to be published.
In some embodiments, the summary calculation module 62 includes:
the processing unit is used for carrying out feature processing on an original video image in the original video and determining a feature processing result;
and the determining unit is used for determining the characteristic processing result as a video abstract index.
In some embodiments, the processing unit comprises:
the reducing subunit is configured to reduce the size of the original video image to a preset size to obtain a thumbnail of the original video, so as to determine the feature processing result;
alternatively,
and the analysis subunit is used for analyzing the color characteristics and/or the light and shade characteristics of the original video image and determining the characteristic processing result.
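In the spirit of the analysis subunit, a feature processing result might combine a coarse per-channel color histogram with a mean-luminance value. The concrete features are not fixed by the patent, so the extractor below is hypothetical.

```python
import numpy as np

def color_brightness_features(img, bins=8):
    """Hypothetical feature extractor: a coarse color histogram per channel
    plus mean luminance, concatenated into one feature vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    # ITU-R BT.601 luma weights for the brightness feature.
    luma = (0.299 * img[..., 0] + 0.587 * img[..., 1]
            + 0.114 * img[..., 2]).mean()
    return np.concatenate([np.concatenate(hists), [luma]])
```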
In some embodiments, the summary calculation module 62 includes:
the first acquisition unit is used for acquiring data with preset length;
and the calculating unit is used for carrying out numerical calculation on the original video image in the original video and the data with the preset length to determine the video abstract index.
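Combining the image with the preset-length data before hashing might look like the sketch below. The 16-byte preset length and the use of SHA-256 are assumptions; the patent does not fix a concrete numerical calculation.

```python
import hashlib

def salted_digest(frame_bytes: bytes, preset_data: bytes) -> bytes:
    """Numerically combine the image data with fixed-length preset data
    before hashing, so the abstract index depends on both inputs."""
    assert len(preset_data) == 16  # assumed preset length
    return hashlib.sha256(preset_data + frame_bytes).digest()
```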
In some embodiments, the signature module 63 comprises:
a second obtaining unit, configured to obtain a first reference digest, where the first reference digest includes the publisher identity information and a computation description used for the digest computation;
the first signature unit is used for signing the first reference digest and determining a first signature reference digest;
and the second signature unit is used for determining the video abstract index as a second reference abstract, signing the second reference abstract and determining a second signed reference abstract.
In some embodiments, the second acquiring unit includes:
the obtaining subunit is used for obtaining the video description of the video to be published;
and the splicing subunit is used for splicing the publisher identity information, the video description and the calculation description to determine the first reference abstract.
In some embodiments, the publication module 65 includes:
the first signature reference abstract is coded into the target position of the video code stream;
and coding the second signature reference abstract into the target position of each compressed and coded video stream in the video code stream, and determining and releasing a target release video.
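Embedding a signature reference abstract at a target position might be sketched as a length-prefixed, tagged container unit spliced ahead of the code stream. The 4-byte marker below is purely illustrative and is not defined by the patent or by any codec standard; a real encoder would use a standard-defined mechanism such as a user-data unit.

```python
def embed_payload(code_stream: bytes, payload: bytes,
                  marker: bytes = b"\x00\x00\x01\xee") -> bytes:
    """Wrap the payload in a tagged unit (marker + 4-byte length + data)
    and splice it at the head of the code stream (illustrative position)."""
    unit = marker + len(payload).to_bytes(4, "big") + payload
    return unit + code_stream

def extract_payload(stream: bytes,
                    marker: bytes = b"\x00\x00\x01\xee") -> bytes:
    """Recover the payload embedded by embed_payload."""
    assert stream.startswith(marker)
    n = int.from_bytes(stream[4:8], "big")
    return stream[8:8 + n]
```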
The video distribution apparatus in this embodiment is presented in the form of functional units, where a unit may be an ASIC, a processor and memory executing one or more pieces of software or firmware, and/or other devices that can provide the functions described above.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which has the video distribution apparatus shown in fig. 6.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 7, the electronic device may include: at least one processor 71, such as a CPU (Central Processing Unit); at least one communication interface 73; a memory 74; and at least one communication bus 72. The communication bus 72 enables communication among these components. The communication interface 73 may include a display and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The memory 74 may be high-speed volatile RAM (Random Access Memory) or non-volatile memory, such as at least one disk memory. The memory 74 may optionally also be at least one storage device located remotely from the processor 71. The processor 71 may be connected with the apparatus described in fig. 6; an application program is stored in the memory 74, and the processor 71 calls the program code stored in the memory 74 to perform any of the above-mentioned method steps.
The communication bus 72 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 72 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but that does not indicate only one bus or one type of bus.
The memory 74 may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 74 may also comprise a combination of the above kinds of memory.
The processor 71 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of CPU and NP.
The processor 71 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), general Array Logic (GAL), or any combination thereof.
Optionally, the memory 74 is also used for storing program instructions. Processor 71 may invoke program instructions to implement a video distribution method as shown in any of the embodiments of the present application.
An embodiment of the present invention further provides a non-transitory computer storage medium, where a computer-executable instruction is stored in the computer storage medium, and the computer-executable instruction can execute the video publishing method in any method embodiment. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method for video distribution, comprising:
acquiring an original video of a video to be published and publisher identity information of the video to be published;
performing abstract calculation on the content of the original video to determine a video abstract index;
determining a reference abstract based on the publisher identity information and the video abstract index, and signing the reference abstract to obtain a digital signature;
splicing the reference abstract and the digital signature to determine a signature reference abstract;
and coding the signature reference abstract into a video code stream of the video to be published to determine and publish the target published video, wherein the video code stream is the code stream of the video to be published after compression coding.
2. The method of claim 1, wherein the performing a summarization calculation on the content of the original video to determine a video summarization index comprises:
performing feature processing on an original video image in the original video to determine a feature processing result;
and determining the characteristic processing result as a video abstract index.
3. The method according to claim 2, wherein the performing feature processing on the original video image in the original video and determining a feature processing result comprises:
reducing the size of the original video image to a preset size to obtain a thumbnail of the original video so as to determine the feature processing result;
and/or,
and analyzing the color characteristics and/or the light and shade characteristics of the original video image to determine the characteristic processing result.
4. The method according to any one of claims 1-3, wherein said performing a summarization calculation on the content of the original video to determine a video summarization index comprises:
acquiring data with a preset length;
and carrying out numerical calculation on the original video image in the original video and the data with the preset length to determine the video abstract index.
5. The method of claim 1, wherein determining a reference digest based on the publisher identity information and the video digest indicator and signing the reference digest results in a digital signature comprises:
obtaining a first reference abstract, wherein the first reference abstract comprises the publisher identity information and a calculation description used for the abstract calculation;
signing the first reference digest to determine a first signature;
and determining the video abstract index as a second reference abstract, signing the second reference abstract, and determining a second signature.
6. The method of claim 5, wherein the obtaining the first reference summary comprises:
acquiring the video description of the video to be published;
and splicing the publisher identity information, the video description and the calculation description to determine the first reference abstract.
7. The method according to claim 5, wherein said encoding the signature reference digest into the video code stream of the video to be published to determine and publish the target published video comprises:
coding a first signature reference abstract into a target position of the video code stream, wherein the first signature reference abstract is obtained by splicing the first reference abstract and the first signature;
coding a second signature reference abstract into the target position of each compressed coding video stream in the video code stream, wherein the second signature reference abstract is obtained by splicing the second reference abstract and the second signature;
and determining and publishing the target release video based on the video code stream coded into the first signature reference abstract and the second signature reference abstract.
8. A video distribution apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an original video of a video to be published and publisher identity information of the video to be published;
the abstract calculation module is used for performing abstract calculation on the content of the original video and determining video abstract indexes;
the signature module is used for determining a reference abstract based on the publisher identity information and the video abstract index and signing the reference abstract to obtain a digital signature;
the splicing module is used for splicing the reference abstract and the digital signature to determine a signature reference abstract;
and the publishing module is used for encoding the signature reference abstract into a video code stream of the video to be published to determine and publish a target published video, wherein the video code stream is a code stream obtained by compressing and encoding the video to be published.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the video distribution method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the video distribution method of any one of claims 1 to 7.
CN202211181374.9A 2022-09-27 2022-09-27 Video distribution method and device, electronic equipment and storage medium Pending CN115550730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211181374.9A CN115550730A (en) 2022-09-27 2022-09-27 Video distribution method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115550730A true CN115550730A (en) 2022-12-30

Family

ID=84729775


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination