CN115022672A - Calculation method for remote video traffic fusion load - Google Patents

Calculation method for remote video traffic fusion load

Info

Publication number
CN115022672A
CN115022672A (application CN202210429104.9A)
Authority
CN
China
Prior art keywords
video
data
module
streaming media
podcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210429104.9A
Other languages
Chinese (zh)
Inventor
吴溪
范卓琳
金秋
朱珊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Yun Fan Intelligent Engineering Co ltd
Original Assignee
Jilin Yun Fan Intelligent Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Yun Fan Intelligent Engineering Co ltd filed Critical Jilin Yun Fan Intelligent Engineering Co ltd
Priority to CN202210429104.9A
Publication of CN115022672A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23103Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64715Protecting content from unauthorized alteration within the network

Abstract

The invention discloses a calculation method for remote video traffic fusion load. The method comprises a video streaming media data packing module, a video streaming media center weight module and a video material communication module. The video streaming media data packing module stores and uploads, in a distributed manner, the theme editing data required by video editors; the video streaming media center weight module establishes video streaming media links between different video editors and the video podcaster's video material according to how closely they are related; and the video material communication module connects video editors whose video material is highly similar so that they can exchange material. The three modules are interconnected through a video streaming media service network, and the video streaming media data packing module comprises a required theme data storage module, a retrieval report data storage module, a video material characteristic data storage module and a digital signature verification module. The invention has the characteristic of strong practicability.

Description

Calculation method for remote video traffic fusion load
Technical Field
The invention relates to the technical field of video editing, in particular to a calculation method for remote video traffic fusion load.
Background
Video editors often find it difficult to locate material when creating videos on original topics. Because each video podcaster has a different knowledge background and a different reserve of viewed material, many creators have to search for related film or television material on the fly when a video needs to express a particular theme, and some material cannot be found at all by keyword search.
For example, when a video creator wants to make a video about start-ups, keyword retrieval only returns film and television works whose main theme is entrepreneurship. Many works with other themes nevertheless contain details that relate to start-ups, and such fragmented material may fit the creator's original idea better; limited by the creator's own viewing range, however, the most suitable material is never found and clipped, so the expressiveness of the video material is narrow and the practicability is poor. It is therefore necessary to design a highly practical calculation method for remote video traffic fusion load.
Disclosure of Invention
The present invention is directed to a method for calculating a remote video traffic fusion load, so as to solve the problems in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: a calculation method for remote video traffic fusion load comprises a video streaming media data packing module, a video streaming media center weight module and a video material communication module, wherein the video streaming media data packing module is used for storing and uploading, in a distributed manner, the theme editing data required by video editors, the video streaming media center weight module is used for establishing video streaming media links that connect different video editors to the video podcaster's video material according to similarity, the video material communication module is used for connecting video editors with higher video material similarity so that they can exchange video material, and the video streaming media data packing module, the video streaming media center weight module and the video material communication module are all connected through a video streaming media service network.
According to the above technical scheme, the video streaming media data packing module comprises a required theme data storage module, a retrieval report data storage module, a video material characteristic data storage module and a digital signature verification module, wherein the required theme data storage module is used for recording and storing the video material requirement themes fed back by the video communication platform, the retrieval report data storage module is used for registering, recording and storing the content analysis report data obtained by analyzing video material big data, the video material characteristic data storage module is used for recording and storing screening result data and video material scheme data, and the digital signature verification module, which is connected with the required theme data storage module, the retrieval report data storage module and the video material characteristic data storage module, is used for packing the video editor's required theme data into video segments for storage and for performing digital signature verification.
According to the above technical scheme, the video streaming media center weight module comprises a video editor video streaming media extraction module, a keyword locking module, a video material similarity calculation module and a video streaming media weight establishment module, wherein the video editor video streaming media extraction module is used for extracting all video segments of the video editors' required theme editing data in the video streaming media service network, the keyword locking module is used for locking the data keywords in each such video segment, the video material similarity calculation module is used for calculating the similarity between the character language of different video editors' video material and the character language of the video podcaster's video material, and the video streaming media weight establishment module is used for establishing a video streaming media weight centered on the current video podcaster's video material data.
According to the above technical scheme, the video material similarity calculation module comprises a radar image establishing submodule and an overlap ratio comparison submodule, wherein the radar image establishing submodule is used for establishing radar maps from the keywords extracted from the video segments of different video editors' required theme editing data, and the overlap ratio comparison submodule is used for comparing the radar dimension map of the video podcaster's required theme data with the radar dimension maps of other video editors' video material theme data in the video streaming media service network.
According to the above technical scheme, the video material communication module comprises a weight sub-center communication module and a weight center main communication module, wherein the weight sub-center communication module is used for granting partial rights of video material exchange, at the established video streaming media weight sub-center, to video editors whose required theme data is similar to the video podcaster's required theme data, and the weight center main communication module is used for granting full rights of video material exchange to video editors whose video material is highly similar to the video podcaster's.
According to the above technical scheme, the calculation method mainly comprises the following steps:
step S1: when the video podcaster searches the network for video material, the editing requirement theme is described to the video exchange platform; after the network editors receive it, the video streaming media data packing module stores the podcaster's current requirement theme, retrieval report, screening results and video material scheme data;
step S2: the digital signature verification module packs the video editor's required theme editing data into video segments, stores them in the video streaming media service network, and performs digital signature verification to prevent data tampering and improve the security of data storage (a sketch of this step follows the list);
step S3: the video editor video streaming media extraction module extracts the video segments in the video streaming media service network and decompresses the stored video segment data;
step S4: the keyword locking module performs keyword locking and data sorting on the video material data stored in the video segments;
step S5: taking the video material data that the video podcaster wants to express as the center, the video material similarity calculation module calculates the similarity between the video segment data of each video material in the video streaming media service network and the video material data the podcaster wants to express;
step S6: a video streaming media weight centered on the video podcaster's video material data is established in the video streaming media service network, and the video segments are arranged at chain lengths from that weight center according to their different similarities;
step S7: taking the video streaming media weight established by the video podcaster as the center, different radius lengths are drawn to form a weight sub-center communication module area and a weight center main communication module area, which are granted different rights for video material exchange.
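As a rough illustration of step S2 above, the sketch below packs a requirement-theme record into a segment-like byte blob and attaches an integrity tag so tampering can be detected on retrieval. The segment layout, the key handling and the `pack_segment`/`verify_segment` helpers are illustrative assumptions rather than anything specified in this application; in particular, no signature algorithm is named in the text, so HMAC-SHA256 is used here purely as a stand-in for the digital signature verification step.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # assumption: key management is out of scope here


def pack_segment(requirement_theme: dict, secret: bytes = SECRET_KEY) -> bytes:
    """Serialize the requirement-theme data and append an HMAC-SHA256 tag."""
    payload = json.dumps(requirement_theme, sort_keys=True).encode("utf-8")
    tag = hmac.new(secret, payload, hashlib.sha256).hexdigest().encode("ascii")
    # Assumed segment layout: payload, newline, hex tag.
    return payload + b"\n" + tag


def verify_segment(segment: bytes, secret: bytes = SECRET_KEY) -> dict:
    """Check the tag before trusting stored requirement-theme data."""
    payload, _, tag = segment.rpartition(b"\n")
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest().encode("ascii")
    if not hmac.compare_digest(tag, expected):
        raise ValueError("segment tag mismatch: data may have been tampered with")
    return json.loads(payload.decode("utf-8"))


if __name__ == "__main__":
    segment = pack_segment({"theme": "start-up stories", "keywords": ["founder", "pitch"]})
    print(verify_segment(segment))
```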
According to the above technical solution, the step S4 further includes:
step S41: lock keywords for the subject character's actions, the character's language, the character's scene and the character's mood in the video material required by the video editor (a keyword-locking sketch follows these sub-steps);
step S42: calling video material retrieval report data, and locking similar data in the video material retrieval report data as keywords;
step S43: and extracting the screening result of the video communication platform and the keywords of the video material scheme.
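The keyword-locking sub-steps above can be pictured as taking, for each video segment, the character-action, language, scene and mood terms and keeping those that also occur in the retrieval report or the material scheme. The sketch below is a minimal illustration under that assumption; the category names and the simple set-intersection logic are not taken from this application.

```python
from collections import defaultdict

# Assumed keyword categories, following step S41.
CATEGORIES = ("action", "language", "scene", "mood")


def lock_keywords(segment_terms: dict, retrieval_report_terms: set, scheme_terms: set) -> dict:
    """Keep only the segment terms that also appear in the retrieval report or material scheme."""
    locked = defaultdict(set)
    for category in CATEGORIES:
        for term in segment_terms.get(category, ()):
            if term in retrieval_report_terms or term in scheme_terms:
                locked[category].add(term)
    return dict(locked)


if __name__ == "__main__":
    segment = {"action": {"pitching", "coding"}, "mood": {"hopeful"}, "scene": {"garage"}}
    report = {"pitching", "garage", "investor"}
    scheme = {"hopeful"}
    print(lock_keywords(segment, report, scheme))
```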
According to the above technical scheme, the step S5 further includes:
step S51: a plane rectangular coordinate system is established, a radar map is established with its origin as the center, and the radar image establishing submodule simulates a triangular radar map from the keywords extracted from the video segments of different video editors' required theme editing data;
step S52: the three corners of the triangular radar map correspond to the requirement theme data keywords, the report data keywords and the editing data keywords extracted by the three types of keyword extraction modules; each corner of the triangle points into a 120° sector, and every degree of deviation within that range represents a different keyword, so different keywords are identified automatically and the closer two terms are in meaning, the smaller the angular deviation of the corner. Meanwhile, the distance from each corner vertex of the triangular radar map to the origin of the coordinate system represents the severity index of the video material data. The closer the triangular radar map is to an equilateral triangle, the more severe the video material requirement theme corresponding to the requirement theme data keywords, the report data keywords and the editing data keywords, the more accurate the content analysis report, the more complete the video material scheme, the better the pertinence of the overall editing of the video material, and the stronger the technique of the network video material reflected overall;
step S53: the overlap ratio comparison submodule overlays the radar dimension map of the video podcaster's video material requirement theme data on the radar dimension maps of the other video editors' video material requirement theme data in the video streaming media service network, obtaining the overlap area S_overlap, the area S_podcast of the video podcaster's triangular radar map, and the area S_compared of the compared triangular radar map;
Step S54: the similarity formula of video clip data of video materials in a video streaming media service network and video material data of a video podcast is calculated as follows:
Figure RE-GDA0003735025010000061
in the formula, Y is a value of similarity between video clip data of one of the video materials in the video streaming service network and video material data of the video podcast, and when the value of Y is closer to 100%, it indicates that the similarity between the video clip data of the video material in the video streaming service network and the video material data of the video podcast is higher, and when the similarities of the video clip data of a plurality of video materials are equal, the alternating reference degree of the corresponding video material closer to the equilateral triangle shape is higher by observing the imaging shape of the triangular radar image.
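To make steps S51 to S54 concrete, the sketch below builds each triangular radar map from three keyword angle deviations (one per 120° sector) and three severity indices, clips the two triangles against each other to obtain the overlap area S_overlap, and then forms a similarity score. Because the exact formula is given only as an image in the original, the last function uses the assumed reading T = 2·S_overlap / (S_podcast + S_compared); the vertex construction, sector centers and the Sutherland-Hodgman clipping helper are likewise illustrative assumptions, not the application's own definitions.

```python
import math


def triangle_vertices(angle_offsets_deg, severities):
    """Vertex k sits in the k-th 120-degree sector, rotated by its keyword deviation,
    at a distance equal to the severity index (one reading of step S52)."""
    vertices = []
    for k, (offset, radius) in enumerate(zip(angle_offsets_deg, severities)):
        theta = math.radians(90 + 120 * k + offset)  # assumed sector centres at 90/210/330 degrees
        vertices.append((radius * math.cos(theta), radius * math.sin(theta)))
    return vertices


def polygon_area(points):
    """Shoelace formula; returns the absolute area (0 for an empty polygon)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0


def clip(subject, clipper):
    """Sutherland-Hodgman clipping of a convex subject polygon by a convex clipper (both CCW)."""
    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersection(p1, p2, a, b):
        x1, y1, x2, y2 = *p1, *p2
        x3, y3, x4, y4 = *a, *b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
        return (px, py)

    output = subject
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        if not output:
            break
        input_pts, output = output, []
        for p1, p2 in zip(input_pts, input_pts[1:] + input_pts[:1]):
            if inside(p2, a, b):
                if not inside(p1, a, b):
                    output.append(intersection(p1, p2, a, b))
                output.append(p2)
            elif inside(p1, a, b):
                output.append(intersection(p1, p2, a, b))
    return output


def similarity(podcast_tri, editor_tri):
    """Assumed reading of the step S54 formula: T = 2*S_overlap / (S_podcast + S_compared)."""
    s_overlap = polygon_area(clip(editor_tri, podcast_tri))
    s_podcast, s_compared = polygon_area(podcast_tri), polygon_area(editor_tri)
    return 2.0 * s_overlap / (s_podcast + s_compared)


if __name__ == "__main__":
    podcast = triangle_vertices((0.0, 5.0, -3.0), (0.8, 0.9, 0.7))
    editor = triangle_vertices((2.0, 4.0, -1.0), (0.7, 0.9, 0.8))
    print(f"T = {similarity(podcast, editor):.2%}")
```

Two near-equilateral, strongly overlapping triangles give a T close to 100%, matching the text's observation that a radar map closer to an equilateral triangle indicates a better-rounded material scheme.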
According to the above technical scheme, in step S6, each video podcaster in the video streaming media service network can establish a video streaming media weight centered on its own video material data, and the similarity T between each video material's video segment data and the podcaster's video material data is converted into a chain length from the podcaster's video streaming media weight: the larger the T value, the shorter the corresponding chain length and the closer the segment lies to the podcaster's weight center; conversely, the smaller the T value, the longer the corresponding chain length and the farther the segment lies from the weight center.
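Step S6 only requires that a larger similarity T map to a shorter chain length toward the podcaster's weight center. One simple monotone mapping consistent with that statement is sketched below; the linear form and the `base_length` parameter are assumptions for illustration, not values from this application.

```python
def chain_length(t: float, base_length: float = 100.0) -> float:
    """Map similarity T in [0, 1] to a chain length: T = 1 gives length 0 (at the weight
    center), T = 0 gives base_length (farthest away). Any monotone decreasing map would do."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("similarity T must lie in [0, 1]")
    return base_length * (1.0 - t)


# Example: editors sorted from closest to farthest around the podcaster's weight center.
editors = {"editor_a": 0.93, "editor_b": 0.85, "editor_c": 0.40}
for name, t in sorted(editors.items(), key=lambda kv: -kv[1]):
    print(name, round(chain_length(t), 1))
```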
According to the above technical solution, the step S7 further includes:
step S71: the area where 80% < T ≤ 90% is designated the weight sub-center communication module area; video editors in this area can exchange rough video themes with the video podcaster, and the rough theme data stored and recorded in the video segments of the editors in this area can be viewed, which helps the podcaster keep enriching the material library after network editing;
step S72: the area where 90% < T ≤ 100% is designated the weight center main communication module area; video editors in this area exchange complete video material schemes with the video podcaster, and all editing data stored and recorded in the video segments of the editors in this area can be viewed, so that editing exchange between podcasters is realized. A small sketch of this threshold rule follows.
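Steps S71 and S72 partition editors by similarity T into the weight sub-center area (80% < T ≤ 90%, partial rights) and the weight-center main area (90% < T ≤ 100%, full rights). A direct transcription of those thresholds is sketched below; the returned labels are descriptive placeholders, not terminology from this application.

```python
def communication_region(t_percent: float) -> str:
    """Assign an editor to a communication area from the similarity T (in percent),
    following the thresholds stated in steps S71 and S72."""
    if 90.0 < t_percent <= 100.0:
        return "weight-center main area: full video material scheme exchange"
    if 80.0 < t_percent <= 90.0:
        return "weight sub-center area: rough theme exchange only"
    return "outside the communication areas"


for t in (95.0, 86.5, 61.0):
    print(t, "->", communication_region(t))
```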
Compared with the prior art, the invention has the following beneficial effects: by analyzing the video material used by video editors, the invention obtains features such as the characters' actions and language, derives the precise meaning of the video, and matches the expressed themes by similarity, so that video creators working on the same theme can be connected for editing exchange. Creators can therefore quickly find the material clips that best match their video theme, which improves the podcaster's video production efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
In the drawings:
FIG. 1 is a schematic view of the overall module structure of the present invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution: a calculation method for remote video flow fusion load comprises a video streaming media data packing module, a video streaming media center weighting module and a video material communication module, wherein the video streaming media data packing module is used for storing and uploading theme editing data required by video editing people in a distributed mode, the video streaming media center weighting module is used for establishing video streaming media of different video editing people and video podcasts, the video material communication module is used for connecting the video editing people with higher video material similarity for video material communication, and the video streaming media data packing module, the video streaming media center weighting module and the video material communication module are all connected through a video streaming media service network;
the video streaming media data packaging module comprises a demand theme data storage module, a retrieval report data storage module, a video material characteristic data storage module and a digital signature verification module, wherein the demand theme data storage module is used for recording and storing a video material demand theme fed back by a video communication platform, the retrieval report data storage module is used for registering clip content analysis report data obtained by video material big data analysis and recording and storing the content analysis report data, the video material characteristic data storage module is used for recording and storing screening result data and video material scheme data, the digital signature verification module is connected with the demand theme data storage module, the retrieval report data storage module and the video material characteristic data storage module, and is used for packaging the video clip data of the demand theme of a video clip into a video segment form for storage, and carrying out digital signature verification;
the video streaming media center weighting module comprises a video clip person video streaming media extraction module, a keyword locking module, a video material similarity calculation module and a video streaming media weight establishment module, wherein the video clip person video streaming media extraction module is used for extracting all video clip person requirement subject clip data video segments in a video streaming media service network, the keyword locking module is used for locking data keywords in each video clip person requirement subject clip data video segment, the video material similarity calculation module is used for calculating the similarity between the languages of video material characters of different video clip persons and the languages of video podcast video material characters, and the video streaming media weight establishment module is used for establishing video streaming media weight by taking the current video podcast video material data as the video streaming media weight;
the video material similarity calculation module comprises a radar image establishing submodule and an overlap ratio comparison submodule, wherein the radar image establishing submodule is used for establishing a radar map according to keywords extracted from video segments of the required subject editing data of different video editors, and the overlap ratio comparison submodule is used for comparing a radar dimension map of the required subject data of the video podcast video material with radar dimension maps of the subject data of other video editors in a video streaming media service network;
the video material communication module comprises a weight sub-center communication module and a weight center main communication module, the weight sub-center communication module is used for performing partial right communication on video clips at the established video streaming media weight sub-center, wherein the video clip video material demand subject data is similar to the video podcast video material demand subject data, and the weight center main communication module is used for performing all right communication on video clips with high similarity to the video podcast video material;
the calculation method mainly comprises the following steps:
step S1: when the video podcast goes to the network for video material retrieval, explaining the editing requirement theme to the video exchange platform, and after receiving the network editing, the video streaming media data packaging module stores the requirement theme of the current characteristics of the video podcast, a retrieval report, a screening result and data of a video material scheme;
step S2: the digital signature verification module packs the theme editing data required by the video editor into a video segment form to be stored in a video streaming media service network, and performs digital signature verification to avoid data tampering and improve the safety of data storage;
step S3: the video editor video streaming media extraction module extracts the video segments in the video streaming media service network and decompresses the stored video segment data;
step S4: the keyword locking module is used for performing keyword locking and data sorting on video material data stored in a plurality of video clips;
step S5: by taking video material data which the video podcast wants to express as a center, the video material similarity calculation module calculates the similarity between video clip data of the video material in the video streaming media service network and the video material data which the video podcast wants to express;
step S6: establishing video streaming media weight by taking video podcast video material data as a center in a video streaming media service network, and arranging chain lengths of video clips and the video podcast video streaming media weight in different similarity degrees in the video streaming media service network;
step S7: with the weight of video streaming media established by the video podcast as the center, respectively drawing different radius lengths to form a weight sub-center communication module area and a weight center main communication module area, and respectively granting different rights to carry out video material communication;
step S4 further includes:
step S41: lock keywords for the subject character's actions, the character's language, the character's scene and the character's mood in the video material required by the video editor;
step S42: calling video material retrieval report data, and locking similar data in the video material retrieval report data as keywords;
step S43: and extracting the screening result of the video communication platform and the keywords of the video material scheme.
step S5 further includes:
step S51: establishing a plane rectangular coordinate system, establishing a radar map by taking the origin of the plane rectangular coordinate system as the center, and performing triangular radar map simulation by a radar image establishing submodule according to keywords extracted from video segments of theme clip data required by different video clips;
step S52: the three corners of the triangular radar map correspond to the requirement theme data keywords, the report data keywords and the editing data keywords extracted by the three types of keyword extraction modules; each corner of the triangle points into a 120° sector, and every degree of deviation within that range represents a different keyword, so different keywords are identified automatically and the closer two terms are in meaning, the smaller the angular deviation of the corner. Meanwhile, the distance from each corner vertex of the triangular radar map to the origin of the coordinate system represents the severity index of the video material data. The closer the triangular radar map is to an equilateral triangle, the more severe the video material requirement theme corresponding to the requirement theme data keywords, the report data keywords and the editing data keywords, the more accurate the content analysis report, the more complete the video material scheme, the better the pertinence of the overall editing of the video material, and the stronger the technique of the network video material reflected overall;
step S53: the overlap ratio comparison submodule overlays the radar dimension map of the video podcaster's video material requirement theme data on the radar dimension maps of the video editors' video material requirement theme data in the video streaming media service network, obtaining the overlap area S_overlap, the area S_podcast of the video podcaster's triangular radar map, and the area S_compared of the compared triangular radar map;
Step S54: the similarity formula of video clip data of video materials in a video streaming media service network and video material data of a video podcast is calculated as follows:
Figure RE-GDA0003735025010000121
wherein T is one of video material video clip data and video podcast video pixel in video streaming media service networkThe similarity value of the material data indicates that the similarity of video clip data of the video material in the video streaming media service network and video material data of the video podcast is higher when the value of T is closer to 100%, and when the similarity of the video clip data of a plurality of video materials is equal, the alternating current reference degree of the corresponding video material closer to the equilateral triangle shape is higher by observing the imaging shape of the triangular radar image;
in step S6, each video podcaster in the video streaming media service network can establish a video streaming media weight centered on its own video material data, and the similarity T between each video material's video segment data and the podcaster's video material data is converted into a chain length from the podcaster's video streaming media weight: the larger the T value, the shorter the corresponding chain length and the closer the segment lies to the podcaster's weight center; the smaller the T value, the longer the corresponding chain length and the farther the segment lies from the weight center;
step S7 further includes:
step S71: the area where 80% < T ≤ 90% is designated the weight sub-center communication module area; video editors in this area can exchange rough video themes with the video podcaster, and the rough theme data stored and recorded in the video segments of the editors in this area can be viewed, which helps the podcaster keep enriching the material library after network editing;
step S72: the area where 90% < T ≤ 100% is designated the weight center main communication module area; video editors in this area exchange complete video material schemes with the video podcaster, and all editing data stored and recorded in the video segments of the editors in this area can be viewed, so that editing exchange between podcasters is realized.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described above, or equivalents may be substituted for elements thereof. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A calculation method for remote video traffic fusion load, characterized in that: the method comprises a video streaming media data packing module, a video streaming media center weight module and a video material communication module, wherein the video streaming media data packing module is used for storing and uploading, in a distributed manner, the theme editing data required by video editors, the video streaming media center weight module is used for establishing video streaming media links that connect different video editors to the video podcaster's video material according to similarity, the video material communication module is used for connecting video editors with high video material similarity so that they can exchange video material, and the video streaming media data packing module, the video streaming media center weight module and the video material communication module are connected through a video streaming media service network.
2. The method of claim 1, wherein the method comprises the following steps: the video streaming media data packaging module comprises a demand theme data storage module, a retrieval report data storage module, a video material characteristic data storage module and a digital signature verification module, wherein the demand theme data storage module is used for recording and storing a video material demand theme fed back by a video communication platform, the retrieval report data storage module is used for registering and storing editing content analysis report data obtained by analyzing video material big data and recording and storing the content analysis report data, the video material characteristic data storage module is used for recording and storing screening result data and video material scheme data, the digital signature verification module is connected with the demand theme data storage module, the retrieval report data storage module and the video material characteristic data storage module, and the digital signature verification module is used for packaging and storing the video editor demand theme editing data into a video segment form, and performs digital signature verification.
3. The method of claim 2, wherein the method comprises: the video streaming media center weighting module comprises a video clip person video streaming media extracting module, a keyword locking module, a video material similarity calculating module and a video streaming media weight establishing module, wherein the video clip person video streaming media extracting module is used for extracting all video clip person requirement subject clip data video segments in a video streaming media service network, the keyword locking module is used for locking data keywords in each video clip person requirement subject clip data video segment, the video material similarity calculating module is used for calculating the similarity between the languages of video material characters of different video clip persons and the languages of video podcast video material characters, and the video streaming media weight establishing module is used for establishing video streaming media weight by taking the current video podcast video material data as the video streaming media weight.
4. The calculation method for remote video traffic fusion load according to claim 3, characterized in that: the video material similarity calculation module comprises a radar image establishing submodule and an overlap ratio comparison submodule, the radar image establishing submodule is used for establishing radar maps from the keywords extracted from the video segments of different video editors' required theme editing data, and the overlap ratio comparison submodule is used for comparing the radar dimension map of the video podcaster's required theme data with the radar dimension maps of other video editors' video material theme data in the video streaming media service network.
5. The method of claim 4, wherein the method comprises: the video material communication module comprises a weight sub-center communication module and a weight center main communication module, the weight sub-center communication module is used for performing partial right communication on video editing person video material demand theme data at the established video streaming media weight sub-center, which is similar to the video podcast video material demand theme data, and the weight center main communication module is used for performing all right video material communication on video editing persons with high similarity to the video podcast video material.
6. The method of claim 5, wherein the method comprises the following steps: the video streaming media data storage method mainly comprises the following steps:
step S1: when the video podcast goes to the network for video material retrieval, explaining the editing requirement theme to the video exchange platform, and after receiving the network editing, the video streaming media data packaging module stores the requirement theme of the current characteristics of the video podcast, a retrieval report, a screening result and data of a video material scheme;
step S2: the digital signature verification module packs the theme editing data required by the video editor into a video segment form to be stored in a video streaming media service network, and performs digital signature verification to avoid data tampering and improve the safety of data storage;
step S3: the video editor video streaming media extraction module extracts the video segments in the video streaming media service network and decompresses the stored video segment data;
step S4: the keyword locking module is used for performing keyword locking and data sorting on video material data stored in a plurality of video clips;
step S5: by taking video material data which the video podcast wants to express as a center, the video material similarity calculation module calculates the similarity between video clip data of the video material in the video streaming media service network and the video material data which the video podcast wants to express;
step S6: establishing video streaming media weight by taking video podcast video material data as a center in a video streaming media service network, and arranging chain lengths of video clips and the video podcast video streaming media weight in different similarities in the video streaming media service network;
step S7: the method comprises the steps of respectively dividing different radius lengths to form a weight sub-center communication module area and a weight center main communication module area by taking the weight of video streaming media established by a video podcast as a center, and respectively granting different rights to carry out video material communication.
7. The method of claim 6, wherein the method comprises: the step S4 further includes:
step S41: lock keywords for the subject character's actions, the character's language, the character's scene and the character's mood in the video material required by the video editor;
step S42: calling video material retrieval report data, and locking similar data in the video material retrieval report data as keywords;
step S43: and extracting the screening result of the video communication platform and the keywords of the video material scheme.
8. The method of claim 7, wherein the method comprises: the step S5 further includes:
step S51: establishing a plane rectangular coordinate system, establishing a radar map by taking the origin of the plane rectangular coordinate system as the center, and performing triangular radar map simulation by a radar image establishing submodule according to keywords extracted from video segments of theme clip data required by different video clips;
step S52: the three corners of the triangular radar map correspond to the requirement theme data keywords, the report data keywords and the editing data keywords extracted by the three types of keyword extraction modules; each corner of the triangle points into a 120° sector, and every degree of deviation within that range represents a different keyword, so different keywords are identified automatically and the closer two terms are in meaning, the smaller the angular deviation of the corner; meanwhile, the distance from each corner vertex of the triangular radar map to the origin of the coordinate system represents the severity index of the video material data; the closer the triangular radar map is to an equilateral triangle, the more severe the video material requirement theme corresponding to the requirement theme data keywords, the report data keywords and the editing data keywords, the more accurate the content analysis report, the more complete the video material scheme, the better the pertinence of the overall editing of the video material, and the stronger the technique of the network video material reflected overall;
step S53: the overlap ratio comparison submodule overlays the radar dimension map of the video podcaster's video material requirement theme data on the radar dimension maps of the video editors' video material requirement theme data in the video streaming media service network, obtaining the overlap area S_overlap, the area S_podcast of the video podcaster's triangular radar map, and the area S_compared of the compared triangular radar map;
Step S54: the similarity formula of video clip data of video materials in a video streaming media service network and video material data of a video podcast is calculated as follows:
Figure RE-FDA0003735018000000051
in the formula, T is a value of similarity between one of video material video clip data in the video streaming media service network and video podcast video material data, and when the value of T is closer to 100%, it indicates that the similarity between the video material video clip data in the video streaming media service network and the video podcast video material data is higher, and when the similarities of a plurality of video material video clip data are equal, the more the similarity is equal to each other, the higher the corresponding video material alternating-current reference degree is, the closer the similarity is to the equilateral triangle shape, by observing the imaging shape of the triangular radar image.
9. The method of claim 8, wherein the method comprises: in step S6, each video podcaster in the video streaming media service network can establish a video streaming media weight centered on its own video material data, and the similarity T between each video material's video segment data and the podcaster's video material data is converted into a chain length from the podcaster's video streaming media weight, wherein the larger the T value, the shorter the corresponding chain length and the closer the segment lies to the podcaster's weight center, and the smaller the T value, the longer the corresponding chain length and the farther the segment lies from the weight center.
10. The method of claim 9, wherein the method comprises: the step S7 further includes:
step S71: the area where 80% < T ≤ 90% is designated the weight sub-center communication module area; video editors in this area can exchange rough video themes with the video podcaster, and the rough theme data stored and recorded in the video segments of the editors in this area can be viewed, which helps the podcaster keep enriching the material library after network editing;
step S72: the area where 90% < T ≤ 100% is designated the weight center main communication module area; video editors in this area exchange complete video material schemes with the video podcaster, and all editing data stored and recorded in the video segments of the editors in this area can be viewed, so that editing exchange between podcasters is realized.
CN202210429104.9A 2022-04-22 2022-04-22 Calculation method for remote video traffic fusion load Pending CN115022672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210429104.9A CN115022672A (en) 2022-04-22 2022-04-22 Calculation method for remote video traffic fusion load

Publications (1)

Publication Number Publication Date
CN115022672A true CN115022672A (en) 2022-09-06

Family

ID=83067301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210429104.9A Pending CN115022672A (en) 2022-04-22 2022-04-22 Calculation method for remote video traffic fusion load

Country Status (1)

Country Link
CN (1) CN115022672A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003030204A (en) * 2001-07-17 2003-01-31 Takami Yasuda Server for providing video contents, device and method for preparing file for video contents retrieval, computer program and device and method for supporting video clip preparation
US20040025180A1 (en) * 2001-04-06 2004-02-05 Lee Begeja Method and apparatus for interactively retrieving content related to previous query results
US20060294571A1 (en) * 2005-06-27 2006-12-28 Microsoft Corporation Collaborative video via distributed storage and blogging
US20070299870A1 (en) * 2006-06-21 2007-12-27 Microsoft Corporation Dynamic insertion of supplemental video based on metadata
EP2104103A1 (en) * 2008-03-20 2009-09-23 British Telecommunications Public Limited Company Digital audio and video clip assembling
CN105743881A (en) * 2016-01-21 2016-07-06 成都索贝数码科技股份有限公司 Media content business process integrated control application cloud platform
KR20190063352A (en) * 2017-11-29 2019-06-07 한국전자통신연구원 Apparatus and method for clip connection of image contents by similarity analysis between clips

Similar Documents

Publication Publication Date Title
Naphade et al. Factor graph framework for semantic video indexing
Araujo et al. Large-scale video retrieval using image queries
US9805064B2 (en) System, apparatus, method, program and recording medium for processing image
JP3568117B2 (en) Method and system for video image segmentation, classification, and summarization
EP2005364B1 (en) Image classification based on a mixture of elliptical color models
US8135239B2 (en) Display control apparatus, display control method, computer program, and recording medium
CN101853377A (en) Method for identifying content of digital video
Bost et al. Remembering winter was coming: Character-oriented video summaries of TV series
Zhang et al. Efficient summarization from multiple georeferenced user-generated videos
Jensen et al. Valid Time.
TW200951832A (en) Universal lookup of video-related data
CN115022672A (en) Calculation method for remote video traffic fusion load
Widiarto et al. Video summarization using a key frame selection based on shot segmentation
Vega et al. A robust video identification framework using perceptual image hashing
CN107748761A (en) A kind of extraction method of key frame of video frequency abstract
Cheng et al. Stratification-based keyframe cliques for effective and efficient video representation
Sudha et al. Reducing semantic gap in video retrieval with fusion: A survey
Ranathunga et al. Performance evaluation of the combination of Compacted Dither Pattern Codes with Bhattacharyya classifier in video visual concept depiction
Sebastine et al. Semantic web for content based video retrieval
Fu et al. A framework for video structure mining
Doss A novel clustering based near duplicate video retrieval model
Fegade et al. Content-based video retrieval by genre recognition using tree pruning technique
Jacob et al. An innovative Method of Accessing Digital Video Archives through Video Indexing
Rani et al. Key Frame Extraction Techniques: A Survey
Chen et al. Research on key frame extraction algorithm based on deep convolutional neural network in video catalogue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination