CN116437127B - Video stutter optimization method based on user data sharing - Google Patents

Video stutter optimization method based on user data sharing

Info

Publication number
CN116437127B
CN116437127B (application CN202310693142.XA)
Authority
CN
China
Prior art keywords
video
user node
data
user
video segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310693142.XA
Other languages
Chinese (zh)
Other versions
CN116437127A (en)
Inventor
张克东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dianji Network Technology Shanghai Co ltd
Original Assignee
Dianji Network Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dianji Network Technology Shanghai Co ltd filed Critical Dianji Network Technology Shanghai Co ltd
Priority to CN202310693142.XA priority Critical patent/CN116437127B/en
Publication of CN116437127A publication Critical patent/CN116437127A/en
Application granted granted Critical
Publication of CN116437127B publication Critical patent/CN116437127B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • H04N21/26216Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints involving the channel capacity, e.g. network bandwidth
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64723Monitoring of network processes or resources, e.g. monitoring of network load
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64746Control signals issued by the network directed to the server or the client
    • H04N21/64761Control signals issued by the network directed to the server or the client directed to the server

Abstract

The invention relates to the field of image communication, in particular to a video stutter optimization method based on user data sharing, which comprises the following steps: obtaining the video segments of the video data according to the corner matching rate of each video frame; obtaining the importance degree of each video segment from its comprehensive matching rate and the number of video frames it contains, and distributing the video segments accordingly; acquiring all user nodes that hold a target video segment, and obtaining the traffic quality degree and network demand degree of each user node from its network traffic data and network occupancy data over a historical time period, thereby obtaining the quality degree of each user node; and selecting the optimal user node according to the quality degrees and downloading the target video segment from it. The invention ensures transmission efficiency while the data sharing of one user node does not affect the viewing experience of other user nodes.

Description

Video stutter optimization method based on user data sharing
Technical Field
The invention relates to the field of image communication, in particular to a video stutter optimization method based on user data sharing.
Background
With the rapid development of 5G technology, user demand for video content keeps growing. This massive demand reduces the rate at which any single user can obtain video data from a terminal device and seriously degrades the viewing experience, so the viewing experience needs to be optimized through user data sharing.
In user data sharing, the video data is sliced and a HomeCDN server distributes the slices to different user nodes; each node downloads different fragments of the video data from the HomeCDN server, announces to the other nodes which fragments it owns, and content is shared by exchanging fragments, so each node can obtain the video data fragments it needs from its peers. After a user terminal has downloaded the different fragments from other user nodes, it reassembles them in order into the original video data. However, if the network state of the node being downloaded from is poor, video playback is prone to stuttering, which harms the viewing experience.
Disclosure of Invention
The invention provides a video stutter optimization method based on user data sharing, which aims to solve the above problems.
The video stutter optimization method based on user data sharing adopts the following technical scheme:
An embodiment of the present invention provides a video stutter optimization method based on user data sharing, which includes the following steps:
acquiring video data, and obtaining the corner matching rate of each video frame from the corner points of adjacent video frames in the acquired video data; obtaining the video segments of the video data according to the corner matching rate of each video frame;
obtaining the comprehensive matching rate of each video segment from the corner matching rates of the video frames it contains; obtaining the importance degree of each video segment from its comprehensive matching rate and the number of video frames it contains; obtaining the allocation repetition rate of each video segment from its importance degree, and the number of users to be allocated to each video segment from its allocation repetition rate; distributing the video segments according to the obtained user numbers to obtain all user nodes corresponding to each video segment;
taking any user node as the target user node and the video segment it needs to download as the target video segment, acquiring all user nodes that hold the target video segment, and obtaining the predicted traffic sequence and predicted network occupancy sequence of each such user node from its network traffic data and network occupancy data over a historical time period; obtaining the traffic quality degree of each user node from its predicted traffic sequence; obtaining the network demand degree of each user node from its predicted network occupancy sequence; obtaining the quality degree of each user node from its traffic quality degree and network demand degree; selecting the optimal user node according to the quality degrees of the user nodes, and downloading the target video segment from the optimal user node;
the obtaining expression of the importance degree of each video segment is as follows:
in the method, in the process of the invention,representing the importance of the t-th video segment, < >>Representing the number of video frames of the t-th video segment; />A maximum value representing the number of video frames contained in all video segments; />Representing corner matching rate corresponding to a ith video frame in a ith video segment; />For the match rate threshold, ++>The comprehensive matching rate of the t-th video segment;
the acquisition expression of the number of users is:
in the method, in the process of the invention,the number of users needing to be allocated to the t-th video segment is represented; />Representing the importance of the t-th video segment;importance level for the s-th video segment; m represents the total number of user nodes, n represents the total number of video segments, +.>Representing a rounding down, a +.>The repetition rate is allocated for the t-th video segment;
the method for obtaining each video segment of the video data according to the corner matching rate corresponding to each video frame comprises the following steps:
setting a matching rate threshold, and dividing each video frame and adjacent video frames into a group when the matching rate of the corner points corresponding to each video frame in the video data is greater than or equal to the matching rate threshold; otherwise, dividing the two video frames into two groups, and sequentially processing each video frame in the video data to obtain each initial video segment; and taking all the initial video segments with the number of the video frames larger than or equal to the basic number value as all the video segments, and combining all the video frames between two adjacent video segments into one video segment to obtain all the video segments of the video data.
Preferably, the method for obtaining the corner matching rate corresponding to each video frame comprises the following steps:
the method comprises the steps of referring to the number of corner points in each video frame as a first number value of each video frame, referring to the number of corner points in video frames adjacent to each video frame as a second number value of each video frame, conducting corner matching on each video frame and the adjacent video frame, and obtaining the pair number of corner points matched with each other in each video frame and the adjacent video frame; calculating the addition result between the first quantity value and the second quantity value of each video frame; and calculating the product between the obtained corner pair number and 2.0, and taking the ratio between the obtained product and the obtained addition result as the corner matching rate of each video frame.
Preferably, the traffic quality degree of each user node is obtained by an expression (not reproduced here) in which: W_j is the traffic quality degree of user node b_j; l_{j,i} represents the i-th predicted traffic value in the predicted traffic sequence of user node b_j; v represents the number of values contained in the predicted traffic sequence and the predicted network occupancy sequence; l̄_j represents the mean of the predicted traffic values of user node b_j; exp() is the exponential function with the natural constant as its base; and L_max is the theoretical maximum traffic.
Preferably, the network demand degree of each user node is obtained by an expression (not reproduced here) in which: D_j is the network demand degree of user node b_j; z_{j,i} represents the i-th predicted occupancy value in the predicted network occupancy sequence of user node b_j; v represents the number of values contained in the predicted traffic sequence and the predicted network occupancy sequence; exp() is the exponential function with the natural constant as its base; z̄_j represents the mean of the predicted occupancy values of user node b_j; and Z_max is the theoretical maximum occupancy.
The beneficial effects of the invention are as follows: first, the similarity of each video frame is judged from the corner matching rate between that frame and its adjacent frame in the acquired video data, so that video segments with relatively stable shots are extracted; a basic number value limits how few video frames a segment may contain, avoiding segments with too few frames being distributed to different user nodes. Because data sharing between user nodes is similar to forming a local area network, it can greatly increase the transmission rate of the video data; to let video segments participate in this local transmission as much as possible and optimize the viewing experience, the comprehensive matching rate of each video segment is obtained from the corner matching rates of its video frames, and the importance degree of each segment is judged from its comprehensive matching rate and the number of video frames it contains, so that more important segments, such as segments with frequent shot transitions, are distributed to more user nodes, increasing the choice of download sources for them and reducing the probability of stutter. When user nodes need to share data, the nodes that can satisfy the request are found through the video segment numbers, and the quality degree of each candidate node is obtained from its traffic quality degree and network demand degree, so as to find the node best suited for data sharing, i.e. the optimal user node, and acquire the required video segment from it. This keeps transmission efficiency high while leaving the viewing experience of other user nodes unaffected, and improves on the video stutter caused by acquiring video segments at random in traditional methods.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of the steps of the video stutter optimization method based on user data sharing according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve its intended purpose, the following gives a detailed description of the specific implementation, structure, features and effects of the video stutter optimization method based on user data sharing according to the present invention, with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the video stutter optimization method based on user data sharing.
Referring to fig. 1, a flowchart of a video stutter optimization method based on user data sharing according to an embodiment of the present invention is shown, where the method includes the following steps:
step S001: acquiring video data, and acquiring corner matching rates corresponding to all video frames according to all corners of adjacent video frames in the acquired video data; and obtaining each video segment of the video data according to the corner matching rate corresponding to each video frame.
For HLS live streaming, in order to let the user side support resumable downloads, ts files must be downloaded from the CDN to a HomeCDN server and then distributed by the HomeCDN server to the users, each user corresponding to a user node of the HomeCDN server. The ts file format is a common container format for high-definition video data. The HomeCDN server exploits the computing, storage and transmission capacity of large numbers of home smart network devices and, scheduled by server software, uses P2P technology to interconnect home network devices across local networks and distribute content between them; this effectively pushes the edge-CDN node position one step further down, into the home local network, and can provide both file distribution for set-top-box television playback based on home network devices and internet content distribution services to the outside. Different user nodes download different fragments of the video data from the HomeCDN server; at the same time, each node announces to the other nodes which fragments it owns, and content sharing is achieved by exchanging the different video data fragments.
Therefore, the ts file, i.e. the video data, is first downloaded from the CDN to the HomeCDN server, and the HomeCDN server then performs adaptive slicing on it. The specific process is as follows:
since the live broadcast process is a dynamic process, each video frame of the obtained video data has a change in scale, the embodiment firstly uses a SIFT algorithm to perform corner detection on each video frame in the obtained video data, the SIFT algorithm comprises a corner detector and a descriptor, wherein the corner detector is used for judging whether each pixel point in each video frame is a corner, and the descriptor is used for forming feature vector description for each corner, so that each corner in each video frame and the descriptor of each corner are obtained.
Corner matching is then performed between adjacent video frames by comparing the descriptors of the corner points; two corner points matched with each other are called a corner pair and correspond to the same position in the actual scene. In other words, corner matching between adjacent frames finds, in different video frames, the corner points that correspond to the same real-world position. Corner matching itself is prior art and is not described further here.
The corner matching rate of two adjacent video frames is calculated from their corner matches, and the video data is segmented according to these rates. For example, for the p-th video frame, the adjacent frame is the (p+1)-th video frame, and the corner matching rate of the two frames can be expressed as:
P_p = 2·Q_p / (N_p + N_{p+1})
where P_p represents the corner matching rate of the p-th video frame; N_p and N_{p+1} represent the number of corner points in the p-th and (p+1)-th video frames respectively; and Q_p represents the number of corner pairs matched between the two frames. The higher the corner matching rate between adjacent video frames, the more similar the two frames are and the more likely they are grouped together, i.e. the lower the probability of cutting between them; conversely, the lower the rate, the higher the probability of cutting between the two frames.
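For illustration, a minimal Python sketch of how this per-frame corner matching rate could be computed with OpenCV's SIFT implementation follows; the grayscale conversion and the Lowe ratio-test threshold of 0.75 are assumptions, since the text only states that matched descriptors define corner pairs.

# Minimal sketch: corner matching rate between two adjacent frames,
# P_p = 2*Q_p / (N_p + N_{p+1}), using OpenCV SIFT (assumed implementation details).
import cv2

def corner_matching_rate(frame_a, frame_b, ratio=0.75):
    """Return the corner matching rate of frame_a with respect to its neighbor frame_b."""
    sift = cv2.SIFT_create()
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)   # corner points + descriptors
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return 0.0
    # Match descriptors; Lowe's ratio test with threshold 0.75 is an assumption.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    pairs = sum(1 for mn in knn if len(mn) == 2 and mn[0].distance < ratio * mn[1].distance)
    # pairs = Q_p, the number of mutually matched corner pairs; len(kp_*) = N_p, N_{p+1}.
    return 2.0 * pairs / (len(kp_a) + len(kp_b))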
the embodiment sets the matching rate thresholdEmpirical value->Corner matching rate formed between the p-th video frame and its neighboring video frame +.>Greater than or equal to the match rate threshold->Dividing the two video frames into one group, otherwise, performing sectional slicing from between the two video frames, namely dividing the two video frames into two groups; calculating corner matching probabilities formed between each video frame in the video data and adjacent video frames, and carrying out initial segmentation of the video data according to the obtained corner matching probabilities to obtain each initial video segment;
in view of excessive waste of resources such as storage space caused by repeated distribution of a large amount of video data when the number of video frames included in the obtained initial video segment is too small, in order to avoid the occurrence of the phenomenon that the number of video frames included in the video segment finally distributed to different user nodes is too small, the embodiment sets a basic number value n=10, uses the initial video segment including the number of video frames equal to or greater than the basic number value as the video segment finally distributed to different user nodes, and combines all video frames between two adjacent video segments to form one video segment, thereby completing adaptive slicing processing of video data and obtaining each video segment.
It should be noted that each video segment obtained in this embodiment is a segment that will ultimately be distributed to different user nodes. A video segment may be formed while the shot is relatively stable, in which case the actual scenes in the segment are similar and the corner matching rates of its frames are high. During a shot transition, by contrast, the actual scene changes quickly, the differences between frames are large and the corner matching rates are low, but the initial video segments formed there contain few frames, so they are merged into a single video segment. In other words, the frames within one video segment may be similar or dissimilar.
Step S002: and obtaining the importance degree of each video segment according to the comprehensive matching rate of each video segment and the number of video frames contained in each video segment, and further distributing each video segment.
Video stuttering seriously harms the viewing experience, and stutter in different kinds of content harms it to different degrees: in highlight parts, where the picture changes frequently, a stutter is very noticeable, whereas in a section consisting of many similar consecutive frames the viewer can anticipate the next frame from the current one, so a stutter there matters relatively little. Highlight segments are therefore distributed to more user nodes, giving more nodes to choose from when resuming interrupted transfers between nodes. The network information of each user node over the coming period is predicted from its network conditions over the historical period, giving each node's quality degree; the optimal user node is then chosen according to these quality degrees and the corresponding video segment is downloaded from it. This realizes resumable transfer at the user terminal, ensures transmission efficiency of the video data and avoids stutter as much as possible. The specific process is as follows:
when a video segment is more important, it is necessary to put the video segment into motionDistributing video segments to more user nodes to ensure multiple selectivities of the user nodes and reduce video cartoon probability, wherein the importance degree of one video segment is determined according to the comprehensive matching rate of the video segment, and the higher the comprehensive matching rate is, the more similar video frames in the video segment are, the lower the video highlight degree is, and the importance degree corresponding to the video segment is also lower; the lower the comprehensive matching rate is, the larger the difference between the video frames in the video segment is, i.e. the more the shots are turned, the higher the video highlight is, and the importance of the corresponding video segment is also higher, and taking the t-th video segment as an example, the importance of the video segment isCan be expressed as:
in the method, in the process of the invention,representing the importance of the t-th video segment, < >>Representing the number of video frames of the t-th video segment; />Representing the maximum value of the number of video frames contained in all video segments, namely the maximum number of video frames; />Representing the corner matching rate between the (u) th video frame and the adjacent video frames in the video segment;
for the comprehensive matching rate of the t-th video segment, since the corner matching rate between adjacent video frames in the t-th video segment can represent the similarity degree between the adjacent video frames, when two adjacent video frames in the t-th video segment are more similar, the two adjacent video frames correspond to each otherThe higher the corner matching rate between video frames, the higher the overall matching rate for the video segment>The larger the video segment is, the more stable the lens is in the video segment at the moment, and the less the lens is transferred, otherwise, the more frequent the lens transfer in the video segment at the next time is indicated, so the embodiment judges the type of each video segment according to the comprehensive matching rate of different video segments, namely, the more stable the lens in one video segment is or the more frequent the lens transfer is;
because the comprehensive matching rate of the t-th video segment is the average value of the corner matching rates corresponding to all video frames in the t-th video segment, the overall similarity of the video frames in the single video segment can only be reflected, namely, the higher the internal similarity is, the lower the importance of the video segment is; the number of video frames contained in different video segments also affects the judgment of the importance degree of the video segments, for example, for two video segments with the same comprehensive matching rate and higher comprehensive matching rate, the importance degree corresponding to the video segment with the larger number of the video frames contained therein is lower, i.e. the number of similar video frames contained therein is larger, which means that the smaller the change degree of the actual scene in the video segment is, the user can more easily associate the next video frame from the current video frame, so that when the video segment has video clip, the influence on the visual experience of the user is smaller, and the importance degree of the video segment is lower; for video segments with fewer video frames, the video segments may be video segments formed when shot transition is more frequent, and at this time, the video segments have a large influence on the visual experience of the user due to the blocking, so that the importance of the video segments is high;
for two video segments with the same comprehensive matching rate but lower comprehensive matching rate, the corresponding importance degree of the video segment with more video frames is higher, namely the number of video frames contained in the video segment is more, which means that the shot transition in the video segment is more frequent, and at the moment, a user is difficult to associate the next video frame from the current video frame, so when the video segment has video clamping, the influence on the visual experience of the user is greater, and the importance degree of the video segment is higher, otherwise, the importance degree of the video segment is lower, namely the importance degree of each video segment is judged according to the comprehensive matching rate between the video frames in each video segment and the number of the video frames contained in each video segment;
and repeating the method, and sequentially processing each video segment to obtain the importance degree of each video segment.
This embodiment aims to distribute video segments with more frequent shot transitions, i.e. more important segments, to more user nodes, so that a single user node has more nodes to choose from when acquiring other video segments. More important video segments therefore need to be replicated across more user nodes, i.e. they correspond to a higher allocation repetition rate, which reduces the stutter probability. The number of users to be allocated to the t-th video segment, y_t, is obtained from an expression (not reproduced here) in which y_t represents the number of users to be allocated to the t-th video segment; G_t represents the importance degree of the t-th video segment; G_s is the importance degree of the s-th video segment; m represents the total number of user nodes; n represents the total number of video segments; ⌊·⌋ denotes rounding down; and R_t is the allocation repetition rate of the t-th video segment.
The allocation repetition rate R_t of the t-th video segment is determined from the importance degree of each video segment, which gives the number of users to be allocated to each segment. Each video segment is then randomly distributed to that many user nodes, with the allocation following a Gaussian distribution; once a user node has been allocated one video segment it is not allocated any other, i.e. one video segment may be allocated to several user nodes, but each user node holds only one video segment. After allocation, the user nodes are numbered in the order of the video segments to facilitate data sharing between them.
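A sketch of this allocation step follows, under the assumption (the formula image is not reproduced in the text) that the allocation repetition rate of a segment is its share of total importance, R_t = G_t / ΣG_s, and y_t = ⌊m·R_t⌋. The Gaussian aspect of the random assignment and the handling of nodes left over by the rounding are simplified here.

import random

def allocate_segments(importances, m, seed=None):
    """Assumed allocation: y_t = floor(m * G_t / sum(G_s)); each user node gets
    at most one segment, while a segment may go to several nodes."""
    rng = random.Random(seed)
    total = sum(importances)
    counts = [int(m * g / total) for g in importances]   # floor via int() for positive values
    nodes = list(range(m))
    rng.shuffle(nodes)                                   # simplified random assignment
    assignment, cursor = {}, 0
    for t, y in enumerate(counts):
        for node in nodes[cursor:cursor + y]:
            assignment[node] = t                         # node holds segment t
        cursor += y
    # Nodes left unassigned because of flooring are not handled here; how the
    # original method deals with the remainder is not specified.
    return assignment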
Step S003: acquiring all user nodes with target video segments, and acquiring the flow quality degree and the network demand degree of each user node according to the acquired network flow data and network occupation data of each user node in a historical time period, thereby acquiring the quality degree of each user node; and obtaining an optimal user node according to the quality degree of each user node, and downloading the target video segment from the optimal node.
The quality of a user node is positively correlated with its network speed. Because user nodes need to share data with each other, if the sending node's network is poor, transmission during sharing will be slow and video stuttering is very likely; at the same time, sharing should disturb the sending node's own internet experience as little as possible. Therefore, the sending node's network speed over a historical period is used to predict its future network speed, from which the stability of its connection is judged, and its network occupancy over the same period is used to predict its future network demand. Together these determine how suitable the sending node is, preventing stutter at the receiving node while protecting the sending node's own experience. The specific process is as follows:
for example, the user node a needs a video segment Q of the video data, where the user node a is a target user node, the video segment Q is a video segment that the target user node needs to download, i.e. a target video segment, and other user nodes have the video segment QThe number of the user nodes is x, and the user nodes with video segments Q are respectively:wherein->The user node with the video segment Q is the x-th user node; with user node->For example, the user node +.>Network traffic data and network occupation data at the first c moments are obtained, and the network traffic data and the network occupation data in a historical time period are obtained, wherein c=30 is set in the embodiment;
because the network traffic data corresponding to each time in the historical time period is obtained in the embodiment, the obtained network traffic data is equivalent to the network speed information corresponding to each time; if the current time is T, the historical time period corresponding to the current time is a time range from the T-c time to the T time, curve fitting is respectively carried out on network flow data and network occupation data corresponding to each time in the historical time period by using a least square method, a flow curve and a network occupation curve are obtained, and a user node is predicted according to the obtained flow curve and the network occupation curveThe least square method is the prior art, and v=30 is set in this embodiment, which is not described in detail herein.
The predicted network traffic of user node b_j at the v time instants after the current time is recorded as the predicted traffic sequence L_j = (l_{j,1}, l_{j,2}, ..., l_{j,v}), where l_{j,v} is the v-th predicted traffic value in the sequence; the predicted network occupancy of user node b_j at the same v time instants is recorded as the predicted network occupancy sequence Z_j = (z_{j,1}, z_{j,2}, ..., z_{j,v}), where z_{j,v} is the v-th predicted occupancy value. The quality degree of user node b_j is then calculated from the predicted traffic sequence and the predicted network occupancy sequence by an expression (not reproduced here) in which: W_j and D_j are the traffic quality degree and network demand degree of user node b_j; l_{j,i} is the i-th predicted traffic value in the predicted traffic sequence of user node b_j; z_{j,i} is the i-th predicted occupancy value in its predicted network occupancy sequence; v is the number of values in the predicted traffic sequence and the predicted network occupancy sequence; l̄_j and z̄_j are the means of the predicted traffic values and of the predicted occupancy values of user node b_j; exp() is the exponential function with the natural constant as its base; and L_max and Z_max are the theoretical maximum traffic and theoretical maximum occupancy, which are chosen according to the specific network type actually used, for example, the maximum network bandwidth of a gigabit network is 1000M and that of a 100-megabit network is 100M.
The traffic quality degree W_j of user node b_j characterizes the stability of its network environment: the smaller the differences among the node's predicted traffic values, the smaller the fluctuation of its predicted traffic; if, in addition, the mean of its predicted traffic values is large relative to the theoretical maximum traffic L_max, then W_j is larger, the network is more stable, and the probability of stutter during transmission is smaller.
The network demand degree D_j of user node b_j is based on the fluctuation of its predicted occupancy values: the smaller the differences among the node's predicted occupancy values, the smaller their fluctuation, i.e. the more stable the node's network demand over the next v time instants; and the smaller the mean of its predicted occupancy values relative to the theoretical maximum occupancy Z_max, the smaller D_j, i.e. the less network capacity the node will need over the next v time instants.
when a user nodeThe more stable the corresponding network environment is, and the less the network demand is, for example, the user node +.>The downloading of video segments from other user nodes is not required, which means that when the user node transmits the video segment Q, the network traffic of the network node can be used for sharing the data of the video segment QSo that the transmission efficiency is high and the user node is not influenced>Is a look and feel experience of (c).
Repeating the above for every user node that holds the target video segment Q gives the quality degree of each of them; the user node with the largest quality degree is the optimal user node for target user node A, and the target user node downloads the target video segment Q from this optimal user node.
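Since the quality-degree expression itself is not reproduced in the text, the following is a purely illustrative stand-in that follows the behavior described above: low traffic fluctuation with a high mean relative to L_max raises the traffic quality degree, low occupancy fluctuation with a low mean relative to Z_max lowers the network demand degree, and the node maximizing the combined score is chosen. The exact functional form and the way the two degrees are combined are assumptions.

import math

def node_quality(traffic_pred, occupancy_pred, l_max, z_max):
    """Illustrative stand-in for the unreproduced quality expression: stable,
    high predicted traffic and stable, low predicted occupancy score higher."""
    v = len(traffic_pred)
    l_mean = sum(traffic_pred) / v
    z_mean = sum(occupancy_pred) / v
    l_fluct = sum(abs(x - l_mean) for x in traffic_pred) / v
    z_fluct = sum(abs(x - z_mean) for x in occupancy_pred) / v
    traffic_quality = math.exp(-l_fluct / l_max) * (l_mean / l_max)
    network_demand = math.exp(-z_fluct / z_max) * (z_mean / z_max)
    return traffic_quality * (1.0 - network_demand)   # combination is an assumption

def pick_optimal_node(candidates, l_max=1000.0, z_max=1000.0):
    """candidates: {node_id: (traffic_pred, occupancy_pred)}; returns the best node.
    Defaults assume a gigabit link measured in Mbps."""
    return max(candidates,
               key=lambda n: node_quality(candidates[n][0], candidates[n][1], l_max, z_max))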
Similarly, every other user node that needs to download a video segment is treated as the target user node, its optimal user node is obtained, and the required video segment is downloaded from that optimal user node.
Through the above steps, stutter optimization in the user data sharing process is completed.
In this embodiment, the similarity of each video frame is first judged from the corner matching rate between that frame and its adjacent frame in the acquired video data, so that video segments with stable shots are extracted, and a basic number value limits the number of video frames a segment may contain, avoiding segments with too few frames being distributed to different user nodes. Because data sharing between user nodes is similar to forming a local area network, it can greatly increase the transmission rate of the video data. To let video segments participate in this local transmission as much as possible and optimize the viewing experience, this embodiment obtains the comprehensive matching rate of each video segment from the corner matching rates of its video frames and judges the importance degree of each segment from its comprehensive matching rate and the number of video frames it contains, so that more important segments, such as segments with frequent shot transitions, are distributed to more user nodes, increasing the choice of download sources for them and reducing the probability of stutter. When user nodes need to share data, the nodes that can satisfy the request are found through the video segment numbers, and the quality degree of each candidate node is obtained from its traffic quality degree and network demand degree, so as to find the node best suited for data sharing, i.e. the optimal user node, and acquire the required video segment from it. This keeps transmission efficiency high while leaving the viewing experience of other user nodes unaffected, and improves on the video stutter caused by acquiring video segments at random in traditional methods.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (4)

1. A video stutter optimization method based on user data sharing, characterized by comprising the following steps:
acquiring video data, and obtaining the corner matching rate of each video frame from the corner points of adjacent video frames in the acquired video data; obtaining the video segments of the video data according to the corner matching rate of each video frame;
obtaining the comprehensive matching rate of each video segment from the corner matching rates of the video frames it contains; obtaining the importance degree of each video segment from its comprehensive matching rate and the number of video frames it contains; obtaining the allocation repetition rate of each video segment from its importance degree, and the number of users to be allocated to each video segment from its allocation repetition rate; distributing the video segments according to the obtained user numbers to obtain all user nodes corresponding to each video segment;
taking any user node as the target user node and the video segment it needs to download as the target video segment, acquiring all user nodes that hold the target video segment, and obtaining the predicted traffic sequence and predicted network occupancy sequence of each such user node from its network traffic data and network occupancy data over a historical time period; obtaining the traffic quality degree of each user node from its predicted traffic sequence; obtaining the network demand degree of each user node from its predicted network occupancy sequence; obtaining the quality degree of each user node from its traffic quality degree and network demand degree; selecting the optimal user node according to the quality degrees of the user nodes, and downloading the target video segment from the optimal user node;
the obtaining expression of the importance degree of each video segment is as follows:
in the method, in the process of the invention,representing the importance of the t-th video segment, < >>Representing the number of video frames of the t-th video segment; />A maximum value representing the number of video frames contained in all video segments; />Representing corner matching rate corresponding to a ith video frame in a ith video segment; />For the match rate threshold, ++>The comprehensive matching rate of the t-th video segment;
the acquisition expression of the number of users is:
in the method, in the process of the invention,the number of users needing to be allocated to the t-th video segment is represented; />Representing the importance of the t-th video segment; />Importance level for the s-th video segment; m represents the total number of user nodes, n represents the total number of video segments, +.>Representing a rounding down, a +.>The repetition rate is allocated for the t-th video segment;
the method for obtaining each video segment of the video data according to the corner matching rate corresponding to each video frame comprises the following steps:
setting a matching rate threshold, and dividing each video frame and adjacent video frames into a group when the matching rate of the corner points corresponding to each video frame in the video data is greater than or equal to the matching rate threshold; otherwise, dividing the two video frames into two groups, and sequentially processing each video frame in the video data to obtain each initial video segment; and taking all the initial video segments with the number of the video frames larger than or equal to the basic number value as all the video segments, and combining all the video frames between two adjacent video segments into one video segment to obtain all the video segments of the video data.
2. The video stutter optimization method based on user data sharing according to claim 1, wherein the method for obtaining the corner matching rate corresponding to each video frame is as follows:
the method comprises the steps of referring to the number of corner points in each video frame as a first number value of each video frame, referring to the number of corner points in video frames adjacent to each video frame as a second number value of each video frame, conducting corner matching on each video frame and the adjacent video frame, and obtaining the pair number of corner points matched with each other in each video frame and the adjacent video frame; calculating the addition result between the first quantity value and the second quantity value of each video frame; and calculating the product between the obtained corner pair number and 2.0, and taking the ratio between the obtained product and the obtained addition result as the corner matching rate of each video frame.
3. The video stutter optimization method based on user data sharing according to claim 1, wherein the traffic quality degree of each user node is obtained by an expression (not reproduced here) in which: W_j is the traffic quality degree of user node b_j; l_{j,i} represents the i-th predicted traffic value in the predicted traffic sequence of user node b_j; v represents the number of values contained in the predicted traffic sequence and the predicted network occupancy sequence; l̄_j represents the mean of the predicted traffic values of user node b_j; exp() is the exponential function with the natural constant as its base; and L_max is the theoretical maximum traffic.
4. The video stutter optimization method based on user data sharing according to claim 1, wherein the network demand degree of each user node is obtained by an expression (not reproduced here) in which: D_j is the network demand degree of user node b_j; z_{j,i} represents the i-th predicted occupancy value in the predicted network occupancy sequence of user node b_j; v represents the number of values contained in the predicted traffic sequence and the predicted network occupancy sequence; exp() is the exponential function with the natural constant as its base; z̄_j represents the mean of the predicted occupancy values of user node b_j; and Z_max is the theoretical maximum occupancy.
CN202310693142.XA 2023-06-13 2023-06-13 Video stutter optimization method based on user data sharing Active CN116437127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310693142.XA CN116437127B (en) 2023-06-13 2023-06-13 Video stutter optimization method based on user data sharing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310693142.XA CN116437127B (en) 2023-06-13 2023-06-13 Video stutter optimization method based on user data sharing

Publications (2)

Publication Number Publication Date
CN116437127A CN116437127A (en) 2023-07-14
CN116437127B true CN116437127B (en) 2023-08-11

Family

ID=87092943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310693142.XA Active CN116437127B (en) 2023-06-13 2023-06-13 Video stutter optimization method based on user data sharing

Country Status (1)

Country Link
CN (1) CN116437127B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600095A (en) * 2009-07-02 2009-12-09 谢佳亮 A kind of video frequency monitoring method and video monitoring system
CN103593464A (en) * 2013-11-25 2014-02-19 华中科技大学 Video fingerprint detecting and video sequence matching method and system based on visual features
CN109684530A (en) * 2018-12-07 2019-04-26 石河子大学 Information Push Service system based on web-based management and the application of mobile phone small routine
CN115294409A (en) * 2022-10-08 2022-11-04 南通商翼信息科技有限公司 Video compression method, system and medium for security monitoring
CN115695919A (en) * 2022-09-28 2023-02-03 中国电信股份有限公司 Decentralized video processing method and device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7954128B2 (en) * 2005-02-11 2011-05-31 Time Warner Cable Inc. Methods and apparatus for variable delay compensation in networks
US11770498B2 (en) * 2020-09-29 2023-09-26 Lemon Inc. Supplemental enhancement information for multi-layer video streams

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600095A (en) * 2009-07-02 2009-12-09 谢佳亮 A kind of video frequency monitoring method and video monitoring system
CN103593464A (en) * 2013-11-25 2014-02-19 华中科技大学 Video fingerprint detecting and video sequence matching method and system based on visual features
CN109684530A (en) * 2018-12-07 2019-04-26 石河子大学 Information Push Service system based on web-based management and the application of mobile phone small routine
CN115695919A (en) * 2022-09-28 2023-02-03 中国电信股份有限公司 Decentralized video processing method and device and electronic equipment
CN115294409A (en) * 2022-10-08 2022-11-04 南通商翼信息科技有限公司 Video compression method, system and medium for security monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Annual table of contents of Computer Engineering and Applications (《计算机工程与应用》), 2012, Vol. 48; Computer Engineering and Applications, No. 36; full text *

Also Published As

Publication number Publication date
CN116437127A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
US10999175B2 (en) Network data flow classification method and system
Zhong et al. A deep reinforcement learning-based framework for content caching
CN106713956B (en) Code rate control and version selection method and system for dynamic self-adaptive video streaming media
CN110049357B (en) Bandwidth estimation method, device, equipment and storage medium
CN111447083A (en) Federal learning framework under dynamic bandwidth and unreliable network and compression algorithm thereof
CN105915602B (en) Dispatching method and system based on community detection algorithm P2P network
CN111654712A (en) Dynamic self-adaptive streaming media multicast method suitable for mobile edge computing scene
CN108848395B (en) Edge cooperative cache arrangement method based on fruit fly optimization algorithm
CN105979274A (en) Distributive cache storage method for dynamic self-adaptive video streaming media
CN113783944B (en) Video data processing method, device, system and equipment based on cloud edge cooperation
CN113992945B (en) Multi-server multi-user video analysis task unloading method based on game theory
CN111866601A (en) Cooperative game-based video code rate decision method in mobile marginal scene
CN110913239B (en) Video cache updating method for refined mobile edge calculation
CN115002113A (en) Mobile base station edge computing power resource scheduling method, system and electronic equipment
CN116437127B (en) Video cartoon optimizing method based on user data sharing
Yu et al. Co-optimizing latency and energy with learning based 360 video edge caching policy
CN108810139B (en) Monte Carlo tree search-assisted wireless caching method
CN112488563B (en) Method and device for determining calculation force parameters
CN112672227B (en) Service processing method, device, node and storage medium based on edge node
Dai et al. An mec-enabled wireless vr transmission system with view synthesis-based caching
CN111124298B (en) Mist computing network content cache replacement method based on value function
CN115278290B (en) Virtual reality video caching method and device based on edge nodes
CN108322768B (en) CDN-based video space distribution method
Maniotis et al. Smart caching for live 360° video streaming in mobile networks
CN112822726B (en) Modeling and decision-making method for Fog-RAN network cache placement problem

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant