CN113556573A - Method and system for selecting push flow link - Google Patents
- Publication number
- CN113556573A (application CN202110835972.2A)
- Authority
- CN
- China
- Prior art keywords
- flow
- node
- pushing
- push
- link
- Prior art date
- Legal status
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/2187—Live feed
- H04N21/2402—Monitoring of the downstream path of the transmission network, e.g. bandwidth available
- H04N21/26208—Content or additional data distribution scheduling, the scheduling operation being performed under constraints
- H04N21/64723—Monitoring of network processes or resources, e.g. monitoring of network load
Abstract
The application discloses a method for selecting a push stream link, comprising the following steps: acquiring information of all server nodes between a first push node and a second push node; pushing a stream between every pair of interconnectable server nodes and scoring the stream quality, thereby obtaining all push paths between the first push node and the second push node together with their corresponding scores; generating a directed graph from all push paths, with the scores as the weights of the corresponding edges; and computing the optimal push link from the first push node to the second push node in the directed graph with a shortest path algorithm. The application also discloses a push stream link selection system, an electronic device, and a computer-readable storage medium. By scoring the push quality between each pair of server nodes, the optimal push link can be selected automatically, with scientific data support, and the optimal solution can be obtained quickly.
Description
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a method, a system, an electronic device, and a computer-readable storage medium for selecting a push stream link.
Background
In live streaming, the broadcaster obtains a push address from a live cloud platform via a service server and pushes the captured media stream to the live cloud receiving end in real time through that address. Existing live systems often need to carry live events. From production to playback in the live room, the event stream passes through a series of relay servers before the viewer sees the picture. Network connectivity between servers and the configuration of each machine vary, and between the originating push node and the final push into the live room there are multiple intermediate servers and many possible lines. In theory, any one line would suffice.
However, only one push line can ultimately be used for a live event broadcast, and at present research and development personnel must select that line manually.
It should be noted that the above-mentioned contents are not intended to limit the scope of protection of the application.
Disclosure of Invention
The present application mainly aims to provide a method, a system, an electronic device, and a computer-readable storage medium for selecting a push stream link, so as to solve the problem of how to scientifically screen out the optimal link during live push streaming.
To achieve the above object, an embodiment of the present application provides a method for selecting a push stream link, the method comprising:
acquiring information of all server nodes between a first push node and a second push node;
pushing a stream between every pair of interconnectable server nodes and scoring the stream quality, thereby obtaining all push paths between the first push node and the second push node together with their corresponding scores;
generating a directed graph from all push paths, with the scores as the weights of the corresponding edges; and
computing the optimal push link from the first push node to the second push node in the directed graph with a shortest path algorithm.
Optionally, the pushing between any two server nodes is performed with each of a plurality of protocols, and, for each protocol, a corresponding directed graph is generated from all push paths between the first push node and the second push node, yielding a plurality of directed graphs.
Optionally, the method further comprises:
monitoring the real-time scores of all edges of the optimal push link during actual pushing;
and when the viewing experience is abnormal, inspecting the real-time scores of all edges of the optimal push link and locating the faulty hop according to those scores.
Optionally, the scoring factors include at least one of the stream stutter rate, the number of stream interruptions, and the resource configuration of the two server nodes themselves, where the resource configuration covers CPU usage and memory usage.
Optionally, the total score is positively correlated with the scores corresponding to the two server nodes' own resource configurations, and negatively correlated with the scores corresponding to the stutter rate and the number of interruptions.
Optionally, the generating of a directed graph from all push paths includes:
taking the first push node as the start node of the directed graph and the second push node as its end node, forming one edge of the directed graph from every two interconnected server nodes, and taking the push direction as the direction of the edge.
Optionally, the plurality of protocols includes the Real-Time Messaging Protocol (RTMP) and the Secure Reliable Transport (SRT) protocol.
In addition, to achieve the above object, an embodiment of the present application further provides a system for selecting a push stream link, the system comprising:
an acquisition module, for acquiring information of all server nodes between the first push node and the second push node;
a scoring module, for pushing a stream between any two server nodes and scoring the stream quality, thereby obtaining all push paths between the first push node and the second push node together with their scores;
a generation module, for generating a directed graph from all push paths with the scores as the weights of the corresponding edges;
and a screening module, for computing the optimal push link from the first push node to the second push node in the directed graph with a shortest path algorithm.
In order to achieve the above object, an embodiment of the present application further provides an electronic device, including: the device comprises a memory, a processor and a push link selection program stored on the memory and capable of running on the processor, wherein the push link selection program realizes the push link selection method when being executed by the processor.
To achieve the above object, an embodiment of the present application further provides a computer-readable storage medium, where a push link selection program is stored, and when executed by a processor, the push link selection program implements the push link selection method as described above.
The method, system, electronic device, and computer-readable storage medium for selecting a push stream link provided by the embodiments of the application abstract the relations between server nodes into a directed graph, score each path by stream quality as the weight of the corresponding edge, and finally derive the shortest path. By scoring the push quality between each pair of server nodes, the optimal push link is selected automatically, with scientific data support; the optimal solution is obtained quickly and without manual operation.
Drawings
FIG. 1 is a diagram of an application environment architecture in which various embodiments of the present application may be implemented;
fig. 2 is a flowchart of a method for selecting a push link according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of a directed graph generated in a first embodiment of the present application;
fig. 4 is a flowchart of a method for selecting a push link according to a second embodiment of the present application;
fig. 5 is a flowchart of a method for selecting a push link according to a third embodiment of the present application;
fig. 6 is a schematic hardware architecture diagram of an electronic device according to a fourth embodiment of the present application;
fig. 7 is a block diagram of a push link selection system according to a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that descriptions involving "first", "second", etc. in the embodiments of the present application are for description only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided a person skilled in the art can realize the combination; where a combination is contradictory or infeasible, it should be considered absent and outside the protection scope of the present application.
Referring to fig. 1, fig. 1 illustrates an application environment architecture for implementing the various embodiments of the present application. The application runs in an environment including, but not limited to, a first push node 2, a second push node 4, a scheduling device 6, and a plurality of server nodes 8.
In the various embodiments of the present application, the first push node 2 is the start node of the push, and the second push node 4 is its end node. Pushing from the first push node 2 to the second push node 4 requires relaying through a plurality of server nodes 8. The scheduling device 6 screens, among all server nodes 8, the optimal push link from the first push node 2 to the second push node 4, so as to improve push quality and efficiency. A server node 8 is a small push node (edge node) deployed in various places, domestic and foreign, and may be a cloud server.
For example, in a cross-border event live broadcast, the source stream is pushed from abroad, re-produced domestically, and the produced stream is pushed to the live room for viewers. The first push node 2 may be the overseas event-production push node and the second push node 4 the domestic one, with relaying through several server nodes 8, some of which are overseas cloud servers and some domestic. Servers at home and abroad can interconnect, and every link can satisfy event production, but the transmission link and the server configuration affect the quality of the stream.
Only one push link can ultimately be used for the event broadcast, and prior art schemes generally require developers to pick it by hand. Because every server can act as a hop, the number of link combinations is too large for manual screening in such a complicated situation. A link chosen purely from manual experience also lacks data to support it being optimal. Moreover, when a link fails, it is hard to locate quickly which hop in the server relay is at fault; finding the problem is difficult and slow. The scheduling device 6 of the present application selects the optimal push link automatically, scientifically, and quickly, without manual operation.
The first push node 2, the second push node 4, the scheduling device 6, and the plurality of server nodes 8 are communicatively connected through a wired or wireless network for data transmission and interaction. In the various embodiments of the present application, the server nodes 8 (including the first and second push nodes 2 and 4) may push streams via the Real-Time Messaging Protocol (RTMP) or the Secure Reliable Transport (SRT) protocol.
Example one
Fig. 2 is a flowchart of a method for selecting a push link according to a first embodiment of the present application. It is to be understood that the flowcharts in the method embodiments do not limit the order in which the steps are performed; steps may be added or removed as needed. The method is described below with the scheduling device 6 as the execution subject.
The method comprises the following steps:
S200, acquiring information of all server nodes between the first push node and the second push node.
The first push node is the start node of the push and the second push node its end node. Pushing from the first push node to the second requires relaying through a plurality of server nodes. The scheduling device first obtains information on these server nodes (addresses, resource configuration, and so on) in preparation for the subsequent screening of the optimal link.
S202, pushing a stream between any two server nodes and scoring the stream quality.
After the information of all server nodes is acquired, a stream is pushed between every pair of interconnectable server nodes, from one server node A to another server node B, and the stream quality is judged at the receiving node (the receiver either scores and reports the result to the scheduling device, or forwards the data needed for scoring to the scheduling device). Applying a preset scoring rule yields the quality score of that push path (from server node A to server node B).
In this embodiment, the scoring factors may include the stream stutter rate, the number of stream interruptions, the resource configuration (e.g., CPU usage, memory usage) of the two server nodes (A and B) themselves, and so on. Specifically:
(1) Stream stutter rate
Overall stutter rate = number of stuttering streams / total number of streams. A single stream is considered to stutter when its frames-per-second (FPS) value changes beyond a certain magnitude within a short time, indicating jitter that the viewer may perceive as stuttering.
FPS is used as the stutter indicator for the following reason. FPS is the number of frames transmitted per second, and in a live broadcast the broadcaster sends data to the cloud server at a constant FPS. If everything is normal, the FPS received by the cloud server stays constant. If the cloud server's data handling is abnormal or its load too high, it cannot process all the frames sent; only part of the transmitted frames are received and the rest are buffered at the sender. Once the machine load returns to normal, the server receives the backlog together with the current frames, so the measured frame rate before and after swings over a wide range.
The stutter calculation is as follows: take a period of duration T, collect all FPS samples of the stream received within it, and compute the average FPS = total of the FPS values / T. A time point is judged to stutter when |FPS at that point - average FPS| / average FPS > 30% (a preset threshold; other values may be set according to actual conditions). When this condition holds, the stream stuttered at that time point.
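The period-average comparison above can be sketched in a few lines of Python; the list-of-samples layout and the helper names are illustrative only:

```python
def stutter_points(fps_samples, threshold=0.3):
    """Indices of samples whose FPS deviates from the period average
    by more than `threshold` (30% in the embodiment)."""
    avg = sum(fps_samples) / len(fps_samples)
    return [i for i, fps in enumerate(fps_samples)
            if abs(fps - avg) / avg > threshold]

def overall_stutter_rate(streams_fps, threshold=0.3):
    """Fraction of streams that stuttered at least once in the period:
    number of stuttering streams / total number of streams."""
    stuttered = sum(1 for fps in streams_fps if stutter_points(fps, threshold))
    return stuttered / len(streams_fps)
```

For instance, a stream sampled at [30, 30, 30, 10, 30] FPS averages 26 FPS; the 10-FPS sample deviates by about 62%, so that time point is flagged as a stutter.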
(2) Number of stream interruptions
When one server node pushes to another, the node receiving the pushed stream may drop it because of network jitter, excessive bitrate, high machine load, and the like; that is, the receiver's link is cut off. The number of cut-offs indirectly reflects the quality of network transmission. In this example, a fixed value is deducted from the score for each cut-off.
(3) Resource configuration of the server itself
The server's configuration also determines transmission quality. When the machine load is too high, the speed of stream transmission drops. Evaluation generally uses CPU usage and/or memory usage; when they exceed certain values, stuttering occurs.
In this embodiment, let the server configuration base score be 100, and perform the following calculation for each of the two server nodes:
if the CPU usage is at or below a first preset threshold, no deduction is made; otherwise, server configuration score = base score - (CPU usage - first preset threshold) × 100;
if the memory usage is at or below a second preset threshold, no deduction is made; otherwise, server configuration score = base score - (memory usage - second preset threshold) × 100.
If both factors, CPU usage and memory usage, are adopted, the two deductions combine: server configuration score = base score - (CPU usage - first preset threshold) × 100 - (memory usage - second preset threshold) × 100.
Combining the scores of the above factors yields the total quality score of the push path (from server node A to server node B). In this embodiment, the total score is positively correlated with the scores for the two server nodes' own resource configurations, and negatively correlated with the stutter rate and the number of interruptions. For example, the calculation rule may be: total score = configuration score of server A + configuration score of server B - stutter rate × base score (a fixed value, e.g., 100) - number of interruptions × fixed deduction.
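A minimal sketch of such a scoring rule follows. The usage thresholds of 0.8 and the per-interruption deduction of 10 are hypothetical values chosen for illustration, not taken from the embodiment:

```python
BASE = 100          # server configuration base score per the embodiment
BREAK_PENALTY = 10  # fixed deduction per interruption (hypothetical value)

def server_config_score(cpu, mem, cpu_thresh=0.8, mem_thresh=0.8, base=BASE):
    """Configuration score: deduct only for usage above the preset thresholds."""
    score = base
    if cpu > cpu_thresh:
        score -= (cpu - cpu_thresh) * 100
    if mem > mem_thresh:
        score -= (mem - mem_thresh) * 100
    return score

def path_score(cpu_a, mem_a, cpu_b, mem_b, stutter_rate, breaks):
    """Total quality score for the push path A -> B: positively correlated
    with both nodes' configuration scores, negatively correlated with the
    stutter rate and the number of interruptions."""
    return (server_config_score(cpu_a, mem_a)
            + server_config_score(cpu_b, mem_b)
            - stutter_rate * BASE
            - breaks * BREAK_PENALTY)
```

A node at 90% CPU and 50% memory, for example, scores 100 - (0.9 - 0.8) × 100 = 90 under these assumed thresholds.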
In other embodiments, other reasonable scoring factors or scoring rules may be used to score each of the plug flow paths, which will not be described herein.
S204, generating a directed graph from all push paths between the first push node and the second push node, with the scores as the weights of the corresponding edges.
After pushing between every pair of server nodes and scoring per step S202, all push paths between the first and second push nodes and their corresponding scores are available. The first push node serves as the start node of the directed graph and the second push node as its end node; every two interconnected server nodes form one edge, the push direction giving the edge its direction (e.g., from server node A to server node B). All push paths together yield the complete directed graph between the first and second push nodes (for example, as shown in fig. 3). The quality score of each push path (from server node A to server node B) is then attached as the weight of the corresponding edge (the edge joining server nodes A and B) in the directed graph.
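Under the assumption that each tested push path is recorded as a (src, dst, score) tuple, the graph construction of this step can be sketched as:

```python
from collections import defaultdict

def build_push_graph(measurements):
    """Build the directed graph of step S204. `measurements` is an
    iterable of (src, dst, score) tuples, one per scored push path;
    the result maps src -> {dst: score}, the edge direction being
    the push direction."""
    graph = defaultdict(dict)
    for src, dst, score in measurements:
        graph[src][dst] = score
    return dict(graph)
```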
S206, computing the optimal push link from the first push node to the second push node from the directed graph with a shortest path algorithm.
Once the complete directed graph between the first and second push nodes and the weight of every edge are obtained, a shortest path algorithm computes the shortest path in the directed graph; that path is the optimal push link from the first push node to the second push node.
In this embodiment, the shortest path algorithm may be Dijkstra's algorithm, which finds shortest paths from one vertex to the others in a weighted graph. Its main feature is a greedy strategy: starting from the start node, it repeatedly visits the unvisited vertex nearest to the start, expanding outward until the end node is reached.
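A minimal Dijkstra sketch for this step. One assumption deserves a flag: the embodiment only says the scores serve as edge weights, but since a higher score means better stream quality, the sketch inverts each score into a cost (max_score - score) so that higher-quality edges are "shorter":

```python
import heapq

def best_link(graph, start, end, max_score=200):
    """Dijkstra over cost = max_score - quality_score.
    `graph` maps src -> {dst: score}. Returns the node list of the
    optimal push link, or None if `end` is unreachable."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == end:
            break
        for nxt, score in graph.get(node, {}).items():
            cost = d + (max_score - score)  # invert: better score, lower cost
            if cost < dist.get(nxt, float("inf")):
                dist[nxt] = cost
                prev[nxt] = node
                heapq.heappush(heap, (cost, nxt))
    if end != start and end not in prev:
        return None
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With this inversion, minimizing total cost is equivalent to maximizing the total quality score along the link.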
The method for selecting a push link provided in this embodiment abstracts the relations between server nodes into a directed graph, scores each path by stream quality as the weight of its edge, and finally derives the shortest path. By scoring the push quality between each pair of server nodes, the optimal push link is selected automatically, with scientific data support; the optimal solution is obtained quickly and without manual operation.
Example two
Fig. 4 is a flowchart of a method for selecting a push link according to a second embodiment of the present application. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. Some steps in the flowchart may be added or deleted as desired.
The method comprises the following steps:
S300, acquiring information of all server nodes between the first push node and the second push node.
The first push node is the start node of the push and the second push node its end node. Pushing from the first push node to the second requires relaying through a plurality of server nodes. The scheduling device first obtains information on these server nodes (addresses, resource configuration, and so on) in preparation for the subsequent screening of the optimal link.
S302, pushing streams between any two server nodes with each of a plurality of protocols, and scoring the stream quality.
After the information of all server nodes is acquired, a stream is pushed between every pair of interconnectable server nodes, from one server node A to another server node B, and the stream quality is judged at the receiving node (the receiver either scores and reports the result to the scheduling device, or forwards the data needed for scoring). Applying a preset scoring rule yields the quality score of that push path (from server node A to server node B).
In this embodiment, the scoring factors may include the stream stutter rate, the number of stream interruptions, the resource configuration (e.g., CPU usage, memory usage) of the two server nodes (A and B) themselves, and so on; for the detailed factors and scoring rules, refer to the first embodiment. Combining the factor scores yields the total quality score of the push path (from server node A to server node B). The total score is positively correlated with the scores for the two nodes' own resource configurations, and negatively correlated with the stutter rate and the interruption count; for example: total score = configuration score of server A + configuration score of server B - stutter rate × base score (a fixed value, e.g., 100) - number of interruptions × fixed deduction.
In this embodiment, any two server nodes may communicate over a plurality of protocols to complete the push, for example over both the RTMP protocol and the SRT protocol. Because the push between two server nodes is performed with each protocol, a quality score is computed per protocol, e.g., a first score for RTMP and a second score for SRT.
S304, for each protocol, generating a corresponding directed graph from all push paths between the first push node and the second push node, with the protocol's scores as the weights of the corresponding edges.
After pushing between any two server nodes with the plurality of protocols and scoring per step S302, all push paths between the first and second push nodes and their scores are available. For each protocol, the first push node serves as the start node of the directed graph and the second push node as its end node; every two interconnected server nodes form one edge, the push direction giving the edge its direction (e.g., from server node A to server node B). All push paths together yield one complete directed graph per protocol, for example a first directed graph for RTMP and a second directed graph for SRT. Then, per protocol, the quality score of each push path (from server node A to server node B) is attached as the weight of the corresponding edge (the edge joining server nodes A and B) in that protocol's graph: the first score weights edges of the first directed graph, and the second score weights edges of the second.
S306: using a shortest path algorithm, calculate the optimal push link and protocol from the first push node to the second push node across the obtained directed graphs.
After a complete directed graph and the weight of each edge have been obtained for each protocol between the first push node and the second push node, a shortest path algorithm (for example, Dijkstra's algorithm) computes the shortest path across all the directed graphs (e.g., the first directed graph and the second directed graph). That shortest path is the optimal push link from the first push node to the second push node, and the protocol of the graph it belongs to is the protocol selected by the final screening. For example, if the selected optimal push link belongs to the first directed graph, the selected protocol is the RTMP protocol corresponding to the first directed graph.
Subsequently, in actual stream pushing, the optimal push link is used to relay the stream from the first push node to the second push node, with the server nodes communicating over the selected protocol (the RTMP protocol in this example).
The push link selection method of this embodiment abstracts the relationships between server nodes into two directed graphs, pushes streams over the two protocols respectively, scores each path by stream quality (including the stall rate, the number of stream interruptions, and the server configuration) as the weight of its edge, and finally derives the optimal push link and push protocol. By scoring the pushing quality between server nodes, the optimal push link is selected automatically and with objective data support; the optimal solution is obtained quickly and no manual operation is needed.
Embodiment Three
Fig. 5 is a flowchart of a push link selection method according to a third embodiment of the present application. The third embodiment adds steps S408 to S410 to the method of the first or second embodiment. The flowcharts in these method embodiments do not limit the order in which the steps are performed, and steps may be added to or removed from a flowchart as needed.
Building on the second embodiment described above, the method comprises the following steps:
S400: acquire information on all server nodes between the first push node and the second push node.
The first push node is the start node of the stream pushing, and the second push node is its end node. Pushing a stream from the first push node to the second push node requires relaying through several server nodes. The scheduling device first obtains information on these server nodes, including their addresses and resource configurations, in preparation for the subsequent screening of the optimal link.
S402: push streams between any two server nodes using multiple protocols, and score each push according to stream quality.
After the information on all server nodes has been acquired, a stream is pushed between every two interconnectable server nodes, from one server node A to another server node B, and the stream quality is judged at the receiving node (the receiving node either computes the score and sends it to the scheduling device, or forwards the data required for scoring to the scheduling device, which computes the score). According to a preset scoring rule, a quality score for the push path (from server node A to server node B) is obtained.
In this embodiment, the scoring criteria may include the stream stall rate, the number of stream interruptions, the resource configurations of the two server nodes themselves (server nodes A and B; e.g., CPU utilization and memory utilization), and so on. For a detailed description of each factor and the scoring rule, refer to the first embodiment; the details are not repeated here. Combining the scores of these factors yields the total quality score of the push path (from server node A to server node B). In this embodiment, the total score is positively correlated with the scores for the two server nodes' own resource configurations, and negatively correlated with the stall rate and the number of stream interruptions. For example, the score may be calculated as: total score = base score (a fixed value, e.g., 100) + server A configuration score + server B configuration score - stall rate × base score - number of stream interruptions.
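The example rule above can be written as a one-line function. The coefficients are illustrative, not prescribed by the embodiment, which only requires that the total rise with the nodes' resource scores and fall with the stall rate and interruption count.

```python
def link_quality_score(cfg_score_a, cfg_score_b, stall_rate,
                       interruptions, base=100.0):
    """Hypothetical scoring rule from the example above:
    total = base + cfg(A) + cfg(B) - stall_rate * base - interruptions.

    stall_rate is a fraction in [0, 1]; cfg_score_a/b are the two
    nodes' own resource-configuration scores."""
    return (base + cfg_score_a + cfg_score_b
            - stall_rate * base - interruptions)
```

For instance, with configuration scores 10 and 8, a 5% stall rate, and 2 interruptions, the path scores 100 + 10 + 8 - 5 - 2 = 111.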
In this embodiment, a plurality of protocols may be used for communication between any two of the server nodes to perform the stream pushing. For example, streams may be pushed over the RTMP protocol and the SRT protocol, respectively. Because the stream is pushed between two server nodes using multiple protocols, a quality score can be calculated for each protocol, for example a first score for the RTMP protocol and a second score for the SRT protocol.
S404: for each protocol, generate a corresponding directed graph from all push paths between the first push node and the second push node, using the score for that protocol as the weight of the corresponding edge.
After streams have been pushed between every two server nodes using the multiple protocols as in step S402 and the scores have been obtained, all push paths and their corresponding scores between the first push node and the second push node are available. For each protocol, the first push node serves as the start node of the directed graph and the second push node as its end node; every two interconnected server nodes form one edge, and the pushing direction is the direction of the edge (for example, from server node A to server node B). A complete directed graph between the first push node and the second push node is thus obtained from all push paths. In this embodiment, one directed graph is obtained per protocol, for example a first directed graph for the RTMP protocol and a second directed graph for the SRT protocol. Then, for each protocol, the quality score of each push path (from server node A to server node B) is used as the weight of the corresponding edge (the edge connecting server nodes A and B) in the corresponding graph. That is, the first score becomes the edge weight in the first directed graph, and the second score becomes the edge weight in the second directed graph.
S406: using a shortest path algorithm, calculate the optimal push link and protocol from the first push node to the second push node across the obtained directed graphs.
After a complete directed graph and the weight of each edge have been obtained for each protocol between the first push node and the second push node, a shortest path algorithm (for example, Dijkstra's algorithm) computes the shortest path across all the directed graphs (e.g., the first directed graph and the second directed graph). That shortest path is the optimal push link from the first push node to the second push node, and the protocol of the graph it belongs to is the protocol selected by the final screening. For example, if the selected optimal push link belongs to the first directed graph, the selected protocol is the RTMP protocol corresponding to the first directed graph.
Subsequently, in actual stream pushing, the optimal push link is used to relay the stream from the first push node to the second push node, with the server nodes communicating over the selected protocol (the RTMP protocol in this example).
S408: monitor the real-time scores of all edges in the optimal push link during actual stream pushing.
After the optimal push link has been screened out, it is used for the actual stream pushing from the first push node to the second push node. During this process, the scheduling device monitors the score of each edge in the optimal push link in real time (recomputing it according to the scoring rule of step S402 above) and thereby obtains a real-time score for each edge.
S410: when a viewing abnormality occurs, check the real-time scores of all edges in the optimal push link and locate the abnormal link according to those scores.
If an abnormality such as viewing stutter occurs during actual stream pushing, the problem lies in the transmission over one of the edges, that is, the link between two server nodes. By checking the real-time scores of all edges in the optimal push link and finding the edge with the lowest real-time score, the faulty link can be located quickly, making it convenient to repair or switch it promptly.
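The localization rule above reduces to taking the minimum over the monitored edges. A minimal sketch, assuming real-time scores are kept as a mapping from (source node, destination node) edge pairs to their latest score:

```python
def locate_abnormal_edge(realtime_scores):
    """realtime_scores: {(src, dst): latest_score} for every edge of
    the active optimal push link. Returns the edge with the lowest
    real-time score, i.e. the most likely faulty hop."""
    return min(realtime_scores, key=realtime_scores.get)
```

In practice the operator would then repair or switch the returned hop while the rest of the link keeps serving.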
The push link selection method of this embodiment abstracts the relationships between server nodes into two directed graphs, pushes streams over the two protocols respectively, scores each path by stream quality (including the stall rate, the number of stream interruptions, and the server configuration) as the weight of its edge, and finally derives the optimal push link and push protocol. By scoring the pushing quality between server nodes, the optimal push link is selected automatically and with objective data support; the optimal solution is obtained quickly and no manual operation is needed. Moreover, the score of each edge in the link is monitored in real time, so that when viewing stutter or a similar problem occurs, checking for the lowest-scoring edge in the link locates the problem quickly.
Embodiment Four
Fig. 6 shows the hardware architecture of an electronic device 20 according to a fourth embodiment of the present application. In this embodiment, the electronic device 20 may include, but is not limited to, a memory 21, a processor 22, and a network interface 23, communicatively connected to one another through a system bus. Note that Fig. 6 shows only the electronic device 20 with components 21-23; not all of the shown components are required, and more or fewer components may be implemented instead. In this embodiment, the electronic device 20 may be the central server 2.
The memory 21 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the memory 21 may be an internal storage unit of the electronic device 20, such as its hard disk or main memory. In other embodiments, the memory 21 may be an external storage device of the electronic device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 20. Of course, the memory 21 may also include both an internal storage unit and an external storage device of the electronic device 20. In this embodiment, the memory 21 is generally used to store the operating system and the application software installed on the electronic device 20, such as the program code of the push link selection system 60, and may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 22 may be a CPU, controller, microcontroller, microprocessor, or other data processing chip. The processor 22 generally controls the overall operation of the electronic device 20. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 and to process data, for example to run the push link selection system 60.
The network interface 23 may include a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing a communication connection between the electronic apparatus 20 and other electronic devices.
Embodiment Five
Fig. 7 is a block diagram of a push link selection system 60 according to a fifth embodiment of the present application. The push link selection system 60 may be partitioned into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiments of the present application. A program module here refers to a series of computer program instruction segments capable of performing a specific function; the functions of each module are described below.
In this embodiment, the push link selection system 60 includes:
the obtaining module 600 is configured to obtain information of all server nodes between a first push flow node and a second push flow node.
The first push node is the start node of the stream pushing, and the second push node is its end node. Pushing a stream from the first push node to the second push node requires relaying through several server nodes. The scheduling device first obtains information on these server nodes, including their addresses and resource configurations, in preparation for the subsequent screening of the optimal link.
The scoring module 602, configured to push streams between any two server nodes and score each push according to stream quality.
After the information on all server nodes has been acquired, a stream is pushed between every two interconnectable server nodes, from one server node A to another server node B, and the stream quality is judged at the receiving node (the receiving node either computes the score and sends it to the scheduling device, or forwards the data required for scoring to the scheduling device, which computes the score). According to a preset scoring rule, a quality score for the push path (from server node A to server node B) is obtained.
In this embodiment, the scoring criteria may include the stream stall rate, the number of stream interruptions, the resource configurations of the two server nodes themselves (server nodes A and B; e.g., CPU utilization and memory utilization), and so on. For a detailed description of each factor and the scoring rule, refer to the first embodiment; the details are not repeated here. Combining the scores of these factors yields the total quality score of the push path (from server node A to server node B). In this embodiment, the total score is positively correlated with the scores for the two server nodes' own resource configurations, and negatively correlated with the stall rate and the number of stream interruptions. For example, the score may be calculated as: total score = base score (a fixed value, e.g., 100) + server A configuration score + server B configuration score - stall rate × base score - number of stream interruptions.
The generating module 604, configured to generate a directed graph from all push paths between the first push node and the second push node, using each score as the weight of the corresponding edge.
After streams have been pushed between every two server nodes and the scores obtained, all push paths and their corresponding scores between the first push node and the second push node are available. The first push node serves as the start node of the directed graph and the second push node as its end node; every two interconnected server nodes form one edge, and the pushing direction is the direction of the edge (for example, from server node A to server node B). A complete directed graph between the first push node and the second push node is obtained from all push paths, and the quality score of each push path (from server node A to server node B) is used as the weight of the corresponding edge (the edge connecting server nodes A and B) in the directed graph.
The screening module 606, configured to calculate the optimal push link from the first push node to the second push node from the directed graph according to a shortest path algorithm.
After the complete directed graph between the first push node and the second push node and the weight of each edge have been obtained, a shortest path algorithm (e.g., Dijkstra's algorithm) computes the shortest path in the directed graph, which is the optimal push link from the first push node to the second push node.
The push link selection system of this embodiment abstracts the relationships between server nodes into a directed graph, scores each path by stream quality as the weight of its edge, and finally derives the shortest path. By scoring the pushing quality between server nodes, the optimal push link is selected automatically and with objective data support; the optimal solution is obtained quickly and no manual operation is needed.
Embodiment Six
The present application further provides another embodiment: a computer-readable storage medium storing a push link selection program executable by at least one processor, causing the at least one processor to perform the steps of the push link selection method described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications that can be made by the use of the equivalent structures or equivalent processes in the specification and drawings of the present application or that can be directly or indirectly applied to other related technologies are also included in the scope of the present application.
Claims (10)
1. A method for push link selection, the method comprising:
acquiring information on all server nodes between a first push node and a second push node;
performing stream pushing between any two of the server nodes, and scoring according to stream quality, to obtain all push paths and corresponding scores between the first push node and the second push node;
generating a directed graph according to all the push paths, and using the scores as the weights of the corresponding edges; and
calculating the optimal push link from the first push node to the second push node from the directed graph according to a shortest path algorithm.
2. The method according to claim 1, wherein the performing of stream pushing between any two of the server nodes comprises performing the stream pushing using a plurality of protocols, and the generating of the directed graph according to all the push paths comprises generating, for each protocol, a corresponding directed graph according to all the push paths between the first push node and the second push node, so as to obtain a plurality of directed graphs.
3. The push link selection method according to claim 1 or 2, wherein the method further comprises:
monitoring real-time scores of all edges in the optimal push link during actual stream pushing; and
when a viewing abnormality occurs, checking the real-time scores of all edges in the optimal push link, and locating the abnormal link according to the real-time scores.
4. The method according to claim 1, wherein the reference factors for scoring include at least one of a stream stall rate, a number of stream interruptions, and resource configurations of the two server nodes themselves, the resource configurations including a CPU utilization rate and a memory utilization rate.
5. The push link selection method according to claim 4, wherein the total score is positively correlated with the scores corresponding to the resource configurations of the two server nodes themselves, and negatively correlated with the scores corresponding to the stall rate and the number of stream interruptions.
6. The method of claim 1, wherein the generating a directed graph according to all the push paths comprises:
taking the first push node as the start node of the directed graph and the second push node as the end node of the directed graph, every two interconnected server nodes forming one edge of the directed graph, with the pushing direction as the direction of the edge.
7. The method of claim 2, wherein the plurality of protocols used for the stream pushing comprises the Real-Time Messaging Protocol (RTMP) and the Secure Reliable Transport (SRT) protocol.
8. A push link selection system, the system comprising:
an obtaining module, configured to obtain information on all server nodes between a first push node and a second push node;
a scoring module, configured to push streams between any two of the server nodes and score each push according to stream quality, obtaining all push paths and corresponding scores between the first push node and the second push node;
a generating module, configured to generate a directed graph according to all the push paths and use the scores as the weights of the corresponding edges; and
a screening module, configured to calculate the optimal push link from the first push node to the second push node from the directed graph according to a shortest path algorithm.
9. An electronic device, comprising: a memory, a processor, and a push link selection program stored on the memory and executable on the processor, the push link selection program when executed by the processor implementing the push link selection method of any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a push link selection program which, when executed by a processor, implements a push link selection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110835972.2A CN113556573A (en) | 2021-07-23 | 2021-07-23 | Method and system for selecting push flow link |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110835972.2A CN113556573A (en) | 2021-07-23 | 2021-07-23 | Method and system for selecting push flow link |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113556573A true CN113556573A (en) | 2021-10-26 |
Family
ID=78104242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110835972.2A Pending CN113556573A (en) | 2021-07-23 | 2021-07-23 | Method and system for selecting push flow link |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113556573A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117135046A (en) * | 2023-10-26 | 2023-11-28 | 北京中企慧云科技有限公司 | Target resource configuration method, device, equipment and medium based on node association degree |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150089077A1 (en) * | 2012-03-14 | 2015-03-26 | Amazon Technologies, Inc. | Managing data transfer using streaming protocols |
CN108401492A (en) * | 2017-09-28 | 2018-08-14 | 深圳前海达闼云端智能科技有限公司 | A kind of route selection method, device and server based on mixing resource |
US20180343193A1 (en) * | 2017-05-25 | 2018-11-29 | Fang Hao | Method and apparatus for minimum label bandwidth guaranteed path for segment routing |
CN112260961A (en) * | 2020-09-23 | 2021-01-22 | 北京金山云网络技术有限公司 | Network traffic scheduling method and device, electronic equipment and storage medium |
CN112565082A (en) * | 2020-12-25 | 2021-03-26 | 鹏城实验室 | Service chain mapping method based on hybrid network, intelligent terminal and storage medium |
CN112737897A (en) * | 2021-04-06 | 2021-04-30 | 北京百家视联科技有限公司 | Link monitoring and scheduling method, device, equipment and storage medium |
CN113055693A (en) * | 2021-04-20 | 2021-06-29 | 上海哔哩哔哩科技有限公司 | Data processing method and device |
CN113099261A (en) * | 2021-04-27 | 2021-07-09 | 上海哔哩哔哩科技有限公司 | Node processing method and device and node processing system |
2021-07-23: application CN202110835972.2A filed (CN); publication CN113556573A, status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150089077A1 (en) * | 2012-03-14 | 2015-03-26 | Amazon Technologies, Inc. | Managing data transfer using streaming protocols |
US20180343193A1 (en) * | 2017-05-25 | 2018-11-29 | Fang Hao | Method and apparatus for minimum label bandwidth guaranteed path for segment routing |
CN108401492A (en) * | 2017-09-28 | 2018-08-14 | 深圳前海达闼云端智能科技有限公司 | A kind of route selection method, device and server based on mixing resource |
CN112260961A (en) * | 2020-09-23 | 2021-01-22 | 北京金山云网络技术有限公司 | Network traffic scheduling method and device, electronic equipment and storage medium |
CN112565082A (en) * | 2020-12-25 | 2021-03-26 | 鹏城实验室 | Service chain mapping method based on hybrid network, intelligent terminal and storage medium |
CN112737897A (en) * | 2021-04-06 | 2021-04-30 | 北京百家视联科技有限公司 | Link monitoring and scheduling method, device, equipment and storage medium |
CN113055693A (en) * | 2021-04-20 | 2021-06-29 | 上海哔哩哔哩科技有限公司 | Data processing method and device |
CN113099261A (en) * | 2021-04-27 | 2021-07-09 | 上海哔哩哔哩科技有限公司 | Node processing method and device and node processing system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117135046A (en) * | 2023-10-26 | 2023-11-28 | 北京中企慧云科技有限公司 | Target resource configuration method, device, equipment and medium based on node association degree |
CN117135046B (en) * | 2023-10-26 | 2024-01-12 | 北京中企慧云科技有限公司 | Target resource configuration method, device, equipment and medium based on node association degree |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10484464B2 (en) | Connection control device, connection control system, and non-transitory computer readable medium | |
CN109787827B (en) | CDN network monitoring method and device | |
CN103002069A (en) | Domain name resolution method, device and system | |
CN105103495B (en) | For allowing or refusing the admission control of the measurement request between the first and second equipment | |
CN110336848B (en) | Scheduling method, scheduling system and scheduling equipment for access request | |
CN110011926B (en) | Method, device, equipment and storage medium for adjusting message sending time | |
CN110012076B (en) | Connection establishing method and device | |
EP3291592A1 (en) | Monitoring management method and apparatus | |
CN108683528B (en) | Data transmission method, central server, server and data transmission system | |
CN114285795B (en) | State control method, device, equipment and storage medium of virtual equipment | |
CN113630616A (en) | Live broadcast edge node resource control method and system | |
US8972225B2 (en) | Method and system for constructing optimized network simulation environment | |
CN112040407A (en) | Beacon data processing method and device, electronic equipment and readable storage medium | |
CN112636979A (en) | Cluster alarm method and related device | |
CN103152261A (en) | Method and equipment for generating and distributing link state protocol data unit fragment messages | |
CN113556573A (en) | Method and system for selecting push flow link | |
CN110113222B (en) | Method and device for acquiring link bandwidth utilization rate and terminal | |
CN106021026B (en) | Backup method and device | |
KR100994880B1 (en) | System and method for acquiring power monitoring data using distributed network protocol | |
CN112004161A (en) | Processing method and device of address resources, terminal equipment and storage medium | |
CN112887224A (en) | Traffic scheduling processing method and device, electronic equipment and storage medium | |
CN113965538B (en) | Equipment state message processing method, device and storage medium | |
CN112437146B (en) | Equipment state synchronization method, device and system | |
CN117640766A (en) | CDN scheduling method, CDN scheduling system and storage medium | |
CN111508214B (en) | Alarm control method and control equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||