CN111565323B - Flow control method and device, electronic equipment and storage medium

Info

Publication number
CN111565323B
Authority
CN
China
Prior art keywords
bandwidth
service
preset
processed
predicted
Prior art date
Legal status
Active
Application number
CN202010209474.2A
Other languages
Chinese (zh)
Other versions
CN111565323A (en)
Inventor
谢文龙
李云鹏
吕亚亚
杨春晖
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN202010209474.2A
Publication of CN111565323A
Application granted
Publication of CN111565323B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385 Channel allocation; Bandwidth allocation
    • H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647 Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64723 Monitoring of network processes or resources, e.g. monitoring of network load
    • H04N21/64738 Monitoring network characteristics, e.g. bandwidth, congestion level
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour


Abstract

The application provides a flow control method and device, an electronic device, and a storage medium. The method includes: obtaining a service to be processed; predicting the bandwidth required to process that service to obtain a predicted bandwidth; obtaining, according to the predicted bandwidth, the preset bandwidth of the service to be processed, and other borrowable bandwidths, an actual bandwidth prepared for processing the service, where the other borrowable bandwidths include at least one of a preset reserved bandwidth and idle bandwidth of other services; and processing the service through the actual bandwidth. Because the service server controls the flow of each individual service, every service link inside the server can be monitored in real time and adjusted dynamically according to the server's current bandwidth conditions. In addition, the actual bandwidth prepared for each service is assembled from several bandwidth resources, which improves the service server's utilization of its bandwidth resources.

Description

Flow control method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data transmission technologies, and in particular, to a flow control method and apparatus, an electronic device, and a storage medium.
Background
The video network is a real-time, high-bandwidth transport network built on Ethernet hardware; it is a dedicated network that transmits high-definition video at high speed over a special protocol. When data transmission services run over the video network, the bandwidth a service server can request from the operator is limited, and existing flow control strategies for the server's internal data transmission services are generally designed for the server as a whole rather than for the individual services inside it. As a result, bandwidth resources cannot be allocated promptly and reasonably according to real-time service demand or service priority, and bandwidth utilization is low. A method that controls the traffic of a service server in a more reasonable way is therefore needed.
Disclosure of Invention
The embodiments of the present application provide a flow control method and device, an electronic device, and a storage medium, aiming to improve a service server's utilization of bandwidth resources.
A first aspect of an embodiment of the present application provides a flow control method, where the method includes:
obtaining a service to be processed;
predicting the bandwidth required when the service to be processed is processed to obtain predicted bandwidth;
obtaining an actual bandwidth prepared for processing the service to be processed according to the predicted bandwidth, a preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include at least one of a preset reserved bandwidth and idle bandwidth of other services;
and processing the service to be processed through the actual bandwidth.
Optionally, obtaining an actual bandwidth to be prepared for processing the service to be processed according to the predicted bandwidth, the preset bandwidth of the service to be processed, and other borrowable bandwidths, includes:
when the predicted bandwidth is not higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth;
and when the predicted bandwidth is higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths.
Optionally, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth and the other borrowable bandwidth comprises:
when the predicted bandwidth is not higher than a sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the preset reserved bandwidth, wherein the sum value is the sum value of the preset bandwidth and the preset reserved bandwidth;
and when the predicted bandwidth is higher than the sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of other services.
Optionally, the method further comprises:
when the bandwidth obtained from the preset bandwidth and the other borrowable bandwidths does not match the predicted bandwidth, determining the total bandwidth formed by the preset bandwidth and the other borrowable bandwidths as the actual bandwidth ready for processing the to-be-processed service.
Optionally, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of the other service includes:
determining the priority of the service to be processed;
determining at least one target service from other services except the service to be processed, wherein the priority of the target service is lower than that of the service to be processed;
and obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of the at least one target service.
Optionally, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and an idle bandwidth of the at least one target service includes:
determining target services with idle bandwidth according to the bandwidth state of each target service, wherein the bandwidth state of one target service represents whether the target service currently has the idle bandwidth;
and after the preset bandwidth and the preset reserved bandwidth are obtained, obtaining the idle bandwidth of the target service with the idle bandwidth according to the sequence of the priority from low to high until the actual bandwidth matched with the predicted bandwidth is obtained.
Optionally, processing the service to be processed through the actual bandwidth includes:
determining a client side for sending the service to be processed;
allocating bandwidth resources matched with the actual bandwidth to a data transmission channel connected with the client;
when the actual bandwidth is matched with the predicted bandwidth, processing the service data corresponding to the service to be processed according to a first preset code rate;
when the actual bandwidth is not matched with the predicted bandwidth, processing the service data according to a second preset code rate, wherein the second preset code rate is lower than the first preset code rate;
and sending the processed service data to the client through the data transmission channel.
Optionally, predicting a bandwidth required when the service to be processed is processed to obtain a predicted bandwidth, including:
sending a flow statistic request to the client;
receiving a statistic parameter returned by the client aiming at the flow statistic request;
and predicting the bandwidth required when the service to be processed is processed according to the statistical parameters to obtain the predicted bandwidth.
A second aspect of the embodiments of the present application provides a flow control device, including:
the first obtaining module is used for obtaining the service to be processed;
the second obtaining module is used for predicting the bandwidth required by processing the service to be processed to obtain the predicted bandwidth;
a third obtaining module, configured to obtain an actual bandwidth prepared for processing the service to be processed according to the predicted bandwidth, a preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include at least one of a preset reserved bandwidth and idle bandwidth of other services;
and the processing module is used for processing the service to be processed through the actual bandwidth.
Optionally, the third obtaining module includes:
a first obtaining sub-module, configured to obtain, when the predicted bandwidth is not higher than the preset bandwidth, an actual bandwidth that matches the predicted bandwidth from the preset bandwidth;
a second obtaining sub-module, configured to obtain, when the predicted bandwidth is higher than the preset bandwidth, an actual bandwidth that matches the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths.
Optionally, the second obtaining sub-module includes:
a third obtaining submodule, configured to obtain, when the predicted bandwidth is not higher than a sum value, an actual bandwidth that matches the predicted bandwidth from the preset bandwidth and the preset reserved bandwidth, where the sum value is a sum value of the preset bandwidth and the preset reserved bandwidth;
and the fourth obtaining submodule is used for obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of other services when the predicted bandwidth is higher than the sum value.
Optionally, the apparatus further comprises:
a fourth obtaining module, configured to determine, when a bandwidth obtained from the preset bandwidth and the other borrowable bandwidths does not match the predicted bandwidth, a total bandwidth formed by the preset bandwidth and the other borrowable bandwidths as an actual bandwidth to be prepared for processing the to-be-processed traffic.
Optionally, the fourth obtaining sub-module includes:
the first determining module is used for determining the priority of the service to be processed;
a second determining module, configured to determine at least one target service from services other than the service to be processed, where a priority of the target service is lower than a priority of the service to be processed;
and a fifth obtaining submodule, configured to obtain an actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and an idle bandwidth of the at least one target service.
Optionally, the fifth obtaining sub-module includes:
a third determining module, configured to determine, according to a bandwidth state of each target service, a target service with an idle bandwidth, where a bandwidth state of one target service represents whether the target service currently has the idle bandwidth;
and the sixth obtaining submodule is used for obtaining the idle bandwidth of the target service with the idle bandwidth according to the sequence from low priority to high priority after obtaining the preset bandwidth and the preset reserved bandwidth until obtaining the actual bandwidth matched with the predicted bandwidth.
Optionally, the processing module includes:
a fourth determining module, configured to determine a client that sends the service to be processed;
the distribution module is used for distributing bandwidth resources matched with the actual bandwidth to a data transmission channel connected with the client;
the first processing submodule is used for processing the service data corresponding to the service to be processed according to a first preset code rate when the actual bandwidth is matched with the predicted bandwidth;
a second processing sub-module, configured to process the service data according to a second preset code rate when the actual bandwidth does not match the predicted bandwidth, where the second preset code rate is lower than the first preset code rate;
and the first sending module is used for sending the processed service data to the client through the data transmission channel.
Optionally, the second obtaining module includes:
the second sending module is used for sending a flow statistic request to the client;
a receiving module, configured to receive a statistic parameter returned by the client for the traffic statistic request;
and the seventh obtaining submodule is used for predicting the bandwidth required by processing the service to be processed according to the statistical parameters to obtain the predicted bandwidth.
A third aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the steps of the method according to the first aspect of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method according to the first aspect of the present application.
With the flow control method above, the service server first obtains the service to be processed and predicts the bandwidth required to process it, obtaining the predicted bandwidth; it then assembles, according to the predicted bandwidth, an actual bandwidth prepared for processing the service from the service's preset bandwidth and other borrowable bandwidths (the preset reserved bandwidth and idle bandwidth of other services), and finally processes the service through that actual bandwidth. In this process the service server controls the flow of each individual service, so each service link inside the server is monitored in real time and can be adjusted dynamically according to the server's current bandwidth conditions. In addition, the actual bandwidth prepared for each service is drawn from several bandwidth resources, namely the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of other services, which makes reasonable use of the bandwidth resources and improves the service server's utilization of them.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic diagram of an implementation environment shown in an embodiment of the present application;
fig. 2 is a flow chart illustrating a flow control method according to an embodiment of the present application;
fig. 3 is a process diagram illustrating a flow control method according to an embodiment of the present application;
fig. 4 is a block diagram illustrating a flow control device according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a networking of a video network according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a hardware structure of a node server according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a hardware structure of an access switch according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of an implementation environment according to an embodiment of the present application. In fig. 1, a service server receives a service request sent by a plurality of user terminals (including user terminal 1-user terminal N), processes the service request, and returns the processed result to the terminal.
The flow control method provided by the present application is applied to the service server in fig. 1. Fig. 2 is a flowchart illustrating a flow control method according to an embodiment of the present application. Referring to fig. 2, the flow control method of the present application includes the following steps:
step S11: and obtaining the service to be processed.
In this embodiment, a client for initiating a service request may be installed on a user terminal, a user initiates a service request to a service server through the client, and the service server parses the service request after receiving the service request to obtain a service to be processed. For example, the service to be processed may be to obtain a video resource, or to obtain an audio resource, and of course, the service to be processed may also be other types of services, and this is not specifically limited in this embodiment of the application.
Step S12: and predicting the bandwidth required when the service to be processed is processed to obtain the predicted bandwidth.
In this embodiment, after obtaining the service to be processed, the service server predicts the bandwidth required to process it and obtains the predicted bandwidth, that is, the bandwidth the data transmission channel (the data link corresponding to the service to be processed) would need in order to send the response data while keeping the preset code rate used for sending it.
Step S13: obtaining an actual bandwidth prepared for processing the service to be processed according to the predicted bandwidth, the preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include: at least one of a reserved bandwidth and a free bandwidth of other traffic is preset.
In this embodiment, different virtual terminals are set up in the service server to process different types of services, with one virtual terminal handling one type of service. The service server pre-allocates a bandwidth, that is, a preset bandwidth, for each type of service. For example, virtual terminal 1 processes the first type of service, which is allocated the first bandwidth; virtual terminal 2 processes the second type of service, which is allocated the second bandwidth; and so on.
Next, for ease of describing the flow control method, the bandwidth that the service server has requested from the operator in advance is called the total bandwidth. From this total bandwidth, the service server sets aside a portion in advance as the preset reserved bandwidth, which is shared by all services.
In this embodiment, when a virtual terminal processes its service, the bandwidth it actually uses may not coincide with the predicted bandwidth; when the bandwidth actually used is smaller than the predicted bandwidth, the service has unoccupied idle bandwidth, and this part of the bandwidth can be lent to other services. Therefore, in the embodiment of the present application, each pending service can borrow bandwidth not only from its preset bandwidth but also from other borrowable bandwidths, which include the preset reserved bandwidth and the idle bandwidth of other services. Of course, the other borrowable bandwidths are not limited to these two, and this application does not specifically limit them.
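To make the bookkeeping above concrete, the following is a minimal sketch of how a service server might record per-service preset bandwidths, each service's current bandwidth state, and the shared pools. The class names, field names, and Mbps figures are illustrative assumptions and are not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class ServiceEntry:
        """Hypothetical per-service record kept by the service server."""
        name: str
        priority: int            # larger number = higher priority (assumption)
        preset_mbps: float       # preset bandwidth pre-allocated to this service type
        state_mbps: float = 0.0  # "bandwidth state": bandwidth currently reserved for the service

        @property
        def idle_mbps(self) -> float:
            # A service has idle bandwidth when its bandwidth state is below its preset bandwidth.
            return max(self.preset_mbps - self.state_mbps, 0.0)

    @dataclass
    class BandwidthPools:
        """Hypothetical server-wide pools: total bandwidth from the operator and the shared reserve."""
        total_mbps: float
        reserved_mbps: float     # preset reserved bandwidth shared by all services

    # Example setup, loosely following the figures used later in the description.
    services = [
        ServiceEntry("video_on_demand", priority=2, preset_mbps=8.0),
        ServiceEntry("monitoring",      priority=1, preset_mbps=6.0),
    ]
    pools = BandwidthPools(total_mbps=100.0, reserved_mbps=20.0)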
In this embodiment, after the predicted bandwidth is obtained, the actual bandwidth to be prepared for processing the service to be processed is further obtained from the preset bandwidth and other borrowable bandwidths according to the size relationship between the predicted bandwidth and the preset bandwidth.
Step S14: and processing the service to be processed through the actual bandwidth.
In this embodiment, after obtaining the actual bandwidth to be prepared for processing the service to be processed, the actual bandwidth may be allocated to a specific link for processing the service to be processed, so that the virtual terminal for processing the service to be processed may process the service to be processed when a preset condition (for example, a code rate condition) is satisfied.
As an example, suppose user 1 requests a target video resource X from service server 1 through a client. After receiving the request, service server 1 parses it to obtain pending service 1. It then predicts the bandwidth required to process pending service 1, for example 10 Mbps, while the preset bandwidth allocated in advance to this type of service is 8 Mbps. The predicted bandwidth therefore cannot be obtained from the preset bandwidth alone, and 2 Mbps must be borrowed from the other borrowable bandwidths. Suppose the total bandwidth that service server 1 has requested from the operator is 100 Mbps, the preset reserved bandwidth is 20 Mbps, and the idle bandwidth of all other services at this moment adds up to 30 Mbps. The borrowed 2 Mbps can then be taken from the 20 Mbps reserved bandwidth, or from the 30 Mbps of idle bandwidth of other services, or from both in some ratio; the specific way of obtaining it can be set according to actual requirements and is not specifically limited by this application. Once the 10 Mbps matching the predicted bandwidth has been assembled, it becomes the actual bandwidth prepared for processing pending service 1, and the bandwidth of the link that processes pending service 1 is set to 10 Mbps, so that the virtual terminal handling the service can process it once the preset condition (for example, a code rate condition) is satisfied. Of course, if the predicted bandwidth were 8 Mbps, equal to the preset bandwidth allocated in advance to this type of service, the preset bandwidth could be used directly to set the bandwidth of the link that processes pending service 1.
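The decision just illustrated can be sketched as a small helper. The sketch below assumes one particular borrowing order (preset bandwidth first, then the preset reserved bandwidth, then idle bandwidth of other services); as noted above, the description allows other orders as well, and the function name and units are illustrative.

    def acquire_actual_bandwidth(predicted, preset, reserved_free, idle_free):
        """Assemble the actual bandwidth (in Mbps) prepared for a pending service.

        Borrowing order assumed here: own preset bandwidth, then the shared
        preset reserved bandwidth, then idle bandwidth of other services.
        If even that is not enough, the total that could be assembled is used.
        """
        take_preset = min(predicted, preset)
        remaining = predicted - take_preset
        take_reserved = min(remaining, reserved_free)
        remaining -= take_reserved
        take_idle = min(remaining, idle_free)
        actual = take_preset + take_reserved + take_idle
        return actual, {"preset": take_preset, "reserved": take_reserved, "idle": take_idle}

    # Worked example from the paragraph above: predicted 10 Mbps, preset 8 Mbps,
    # reserve 20 Mbps, idle bandwidth of other services 30 Mbps.
    actual, parts = acquire_actual_bandwidth(10.0, 8.0, 20.0, 30.0)
    assert actual == 10.0 and parts == {"preset": 8.0, "reserved": 2.0, "idle": 0.0}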
In this embodiment, the service server first obtains the service to be processed and then predicts the bandwidth required to process it, obtaining the predicted bandwidth. It then assembles, according to the predicted bandwidth, the actual bandwidth prepared for processing the service from the service's preset bandwidth and other borrowable bandwidths (the preset reserved bandwidth and idle bandwidth of other services), and finally processes the service through that actual bandwidth. In this process the service server controls the flow of each individual service, so each service link inside the server is monitored in real time and can be adjusted dynamically according to the server's current bandwidth conditions. In addition, the actual bandwidth prepared for each service is drawn from several bandwidth resources, namely the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of other services, which makes reasonable use of the bandwidth resources and improves the service server's utilization of them.
In combination with the above embodiments, in an implementation, step S13 may include:
when the predicted bandwidth is not higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth;
and when the predicted bandwidth is higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths.
In this embodiment, the predicted bandwidth may be understood as an ideal bandwidth for processing the pending traffic, and the obtained actual bandwidth prepared for processing the pending traffic may be the same as or different from the predicted bandwidth.
Specifically, when obtaining the actual bandwidth prepared for processing the service to be processed: if the predicted bandwidth is not higher than the preset bandwidth, the actual bandwidth matching the predicted bandwidth is taken directly from the preset bandwidth; if the predicted bandwidth is higher than the preset bandwidth, the actual bandwidth matching the predicted bandwidth is obtained from the preset bandwidth together with the other borrowable bandwidths (this process is described in detail later).
This embodiment thus provides a way to obtain the actual bandwidth prepared for processing the service to be processed: the actual bandwidth is obtained according to how the predicted bandwidth compares with the preset bandwidth, and when the predicted bandwidth is not higher than the preset bandwidth it is taken directly from the preset bandwidth. This allows the actual bandwidth to be obtained quickly and speeds up processing of the service.
With reference to the above embodiment, in an implementation manner, when the predicted bandwidth is higher than the preset bandwidth, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths includes:
when the predicted bandwidth is not higher than a sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the preset reserved bandwidth, wherein the sum value is the sum value of the preset bandwidth and the preset reserved bandwidth;
and when the predicted bandwidth is higher than a sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of other services.
In this embodiment, when the predicted bandwidth is higher than the preset bandwidth but not higher than the sum of the preset bandwidth and the preset reserved bandwidth, the actual bandwidth matching the predicted bandwidth is obtained from the preset bandwidth and the preset reserved bandwidth. For example, if the predicted bandwidth is 10 Mbps and the preset bandwidth is 6 Mbps, 6 Mbps can be taken from the preset bandwidth and the remaining 4 Mbps from the preset reserved bandwidth, so that the 10 Mbps actual bandwidth matching the predicted bandwidth is obtained.
When the predicted bandwidth is higher than the sum of the preset bandwidth and the preset reserved bandwidth, the actual bandwidth matching the predicted bandwidth is obtained from the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of other services. For example, if the predicted bandwidth is 20 Mbps, the preset bandwidth is 6 Mbps, the preset reserved bandwidth is 10 Mbps, and the idle bandwidth of other services at the current moment is 20 Mbps, then 6 Mbps can be taken from the preset bandwidth, 10 Mbps from the preset reserved bandwidth, and 4 Mbps from the idle bandwidth of other services, giving the 20 Mbps actual bandwidth matching the predicted bandwidth. Alternatively, 6 Mbps can be taken from the preset bandwidth first and 14 Mbps from the idle bandwidth of other services. This embodiment does not specifically limit the order in which bandwidth is taken from the other borrowable bandwidths (that is, whether the preset reserved bandwidth or the idle bandwidth of other services is used first), and it can be set according to actual requirements.
When the predicted bandwidth is higher than the sum of the preset bandwidth and the preset reserved bandwidth, the borrowing order for assembling the actual bandwidth from the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of other services may be, for example: first take all of the preset bandwidth, then all of the preset reserved bandwidth, then determine the part of the predicted bandwidth still not covered and take it from the idle bandwidth of other services. Or: first take all of the preset bandwidth, then determine the first remaining part of the predicted bandwidth beyond the preset bandwidth; if this first remainder is not higher than the idle bandwidth of other services, take it entirely from that idle bandwidth, otherwise take all of the idle bandwidth of other services, determine the second remaining part of the predicted bandwidth beyond the preset bandwidth and the idle bandwidth of other services, and take that second remainder from the preset reserved bandwidth. Or: first take all of the preset bandwidth, then take the remaining part of the predicted bandwidth from the preset reserved bandwidth and the idle bandwidth of other services in a preset ratio. The embodiment of the present application does not limit which order is used.
The embodiment provides a way for obtaining the actual bandwidth matched with the predicted bandwidth, so that when the predicted bandwidth is higher than the preset bandwidth, the actual bandwidth matched with the predicted bandwidth can be flexibly obtained from the preset bandwidth and other borrowable bandwidths according to requirements, the reasonable utilization of bandwidth resources is realized, and meanwhile, support is provided for realizing the flow control of a single service.
With reference to the foregoing embodiment, in an implementation manner, the flow control method according to the embodiment of the present application may further include the following steps:
when the bandwidth obtained from the preset bandwidth and the other borrowable bandwidths does not match the predicted bandwidth, determining the total bandwidth formed by the preset bandwidth and the other borrowable bandwidths as the actual bandwidth ready for processing the to-be-processed service.
In this embodiment, if the sum of the preset bandwidth and the other borrowable bandwidths is still smaller than the predicted bandwidth, which indicates that the actual bandwidth matching the predicted bandwidth (ideal bandwidth) and prepared for processing the service to be processed cannot be obtained, the total bandwidth formed by the preset bandwidth and the other borrowable bandwidths is directly determined as the actual bandwidth prepared for processing the service to be processed, and the service to be processed is processed through the actual bandwidth.
In this embodiment, even if the ideal actual bandwidth cannot be obtained, the actual bandwidth available for processing the pending service can be obtained to the maximum extent, so that the pending service is smoothly executed.
With reference to the foregoing embodiment, in an implementation manner, when the predicted bandwidth is higher than the preset bandwidth and the predicted bandwidth is higher than a sum, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and idle bandwidths of the other services includes:
determining the priority of the service to be processed;
determining at least one target service from other services except the service to be processed, wherein the priority of the target service is lower than that of the service to be processed;
and obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of the at least one target service.
In this embodiment, the service server sets a priority for each type of service (a virtual terminal is used for processing one type of service) in advance, so that, when acquiring bandwidth from the idle bandwidth of other services, bandwidth can be acquired from the idle bandwidth of the service with the priority lower than that of the service to be processed first, without affecting the execution of the service with the high priority.
In this embodiment, after determining the priority of the to-be-processed service, at least one target service with a priority lower than that of the to-be-processed service is determined from other services except the to-be-processed service, then an idle bandwidth of the at least one target service is obtained, and then an actual bandwidth matching the predicted bandwidth is obtained from the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of the at least one target service.
In the embodiment, when the bandwidth is obtained from the idle bandwidth of other services, the bandwidth is obtained from the idle bandwidth of the service with the priority lower than that of the service to be processed, so that the execution of the service with high priority is effectively ensured.
With reference to the foregoing embodiment, in an implementation manner, obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and an idle bandwidth of the at least one target service includes:
determining target services with idle bandwidth according to the bandwidth state of each target service, wherein the bandwidth state of one target service represents whether the target service currently has the idle bandwidth;
and after the preset bandwidth and the preset reserved bandwidth are obtained, obtaining the idle bandwidth of the target service with the idle bandwidth according to the sequence from low priority to high priority until obtaining the actual bandwidth matched with the predicted bandwidth.
In this embodiment, each service has a bandwidth state whose value is the actual bandwidth currently reserved for processing that service. After the actual bandwidth for a service is obtained, the service server updates the service's bandwidth state to the current actual bandwidth. Whether a service has idle bandwidth can therefore be determined from its bandwidth state and its preset bandwidth: when the bandwidth state is smaller than the preset bandwidth, the service has idle bandwidth; when the bandwidth state is not smaller than the preset bandwidth, it does not.
In this embodiment, when the sum of the preset bandwidth and the preset reserved bandwidth is smaller than the predicted bandwidth, the preset bandwidth and the preset reserved bandwidth are taken first, and idle bandwidth is then taken from the target services that have it, in order of priority from low to high, until the actual bandwidth matching the predicted bandwidth is obtained. For example, suppose 10 Mbps still needs to be obtained from target services with idle bandwidth, the target services ordered from lowest to highest priority are service 1 through service 5, and the idle bandwidths they can provide are 2 Mbps, 5 Mbps, 4 Mbps, 2 Mbps, and 2 Mbps respectively. Then 2 Mbps is taken from the idle bandwidth of service 1, 5 Mbps from the idle bandwidth of service 2, and 3 Mbps from the idle bandwidth of service 3, so that the 10 Mbps is obtained.
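A minimal sketch of this priority-ordered borrowing is shown below. It assumes the target services are given as (priority, idle bandwidth) pairs and that a smaller number means a lower priority; these representation details are illustrative assumptions.

    def borrow_idle_bandwidth(needed, target_services):
        """Borrow idle bandwidth from lower-priority target services, lowest priority first.

        `target_services` is a list of (priority, idle_mbps) pairs for services whose
        priority is below that of the pending service and whose bandwidth state shows
        idle bandwidth. Returns (borrowed_total, per_service_amounts).
        """
        borrowed = []
        remaining = needed
        for priority, idle in sorted(target_services, key=lambda t: t[0]):
            if remaining <= 0:
                break
            take = min(idle, remaining)
            if take > 0:
                borrowed.append((priority, take))
                remaining -= take
        return needed - remaining, borrowed

    # Example from the paragraph above: 10 Mbps is needed; services 1..5 (priority from
    # low to high) can offer 2, 5, 4, 2 and 2 Mbps of idle bandwidth respectively.
    total, plan = borrow_idle_bandwidth(10, [(1, 2), (2, 5), (3, 4), (4, 2), (5, 2)])
    assert total == 10 and plan == [(1, 2), (2, 5), (3, 3)]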
In this embodiment, when bandwidth is borrowed from the idle bandwidth of other services, it is borrowed in order of priority from low to high from services whose priority is lower than that of the service to be processed, so that the execution of high-priority services is effectively ensured.
With reference to the foregoing embodiment, in an implementation manner, processing the service to be processed through the actual bandwidth includes:
determining a client side for sending the service to be processed;
allocating bandwidth resources matched with the actual bandwidth to a data transmission channel connected with the client;
when the actual bandwidth is matched with the predicted bandwidth, processing the service data corresponding to the service to be processed according to a first preset code rate;
when the actual bandwidth is not matched with the predicted bandwidth, processing the service data according to a second preset code rate, wherein the second preset code rate is lower than the first preset code rate;
and sending the processed service data to the client through the data transmission channel.
In this embodiment, after the actual bandwidth prepared for processing the service to be processed has been obtained, bandwidth resources matching the actual bandwidth are allocated to the data transmission channel connected to the client, so that the virtual terminal can process the service with that actual bandwidth. When the actual bandwidth matches the predicted bandwidth, the service data corresponding to the service is processed at a first preset code rate. When the actual bandwidth does not match the predicted bandwidth, the service data is processed at a second preset code rate lower than the first preset code rate, or the first preset code rate is reduced and the service data is processed at the reduced rate. The processed service data is then sent to the client through the data transmission channel.
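A small sketch of this channel setup and code-rate choice follows. The concrete code-rate values and the use of a simple "actual covers predicted" test for a match are illustrative assumptions.

    def process_pending_service(actual_mbps, predicted_mbps, service_data,
                                first_rate_kbps=4000, second_rate_kbps=2000):
        """Allocate the channel bandwidth and pick the code rate for a pending service.

        The data channel to the client is given the actual bandwidth; if the actual
        bandwidth covers the predicted bandwidth the data is processed at the first
        (higher) preset code rate, otherwise at the second (lower) preset code rate.
        """
        channel = {"bandwidth_mbps": actual_mbps}       # bandwidth allocated to the channel
        if actual_mbps >= predicted_mbps:
            rate = first_rate_kbps                      # full-quality processing
        else:
            rate = second_rate_kbps                     # degrade so the data still fits
        processed = {"payload": service_data, "code_rate_kbps": rate}
        return channel, processed

    channel, packet = process_pending_service(8.0, 10.0, b"...video frames...")
    assert packet["code_rate_kbps"] == 2000   # actual below predicted -> lower code rate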
In this embodiment, the bandwidth resources of the data transmission channel are configured according to the actual bandwidth. When the actual bandwidth matches the predicted bandwidth, the service data is processed and sent at the higher code rate, which preserves its quality; when the actual bandwidth does not match the predicted bandwidth, the service data is processed and sent at the lower code rate, so that it can still be delivered to the client smoothly. Flexible flow control of a single task is thus achieved.
With reference to the foregoing embodiment, in an implementation manner, predicting a bandwidth required when processing the service to be processed to obtain a predicted bandwidth includes:
sending a flow statistic request to the client;
receiving a statistic parameter returned by the client aiming at the flow statistic request;
and predicting the bandwidth required when the service to be processed is processed according to the statistical parameters to obtain the predicted bandwidth.
In this embodiment, after the client sends the service to be processed to the service server, the service server sends a flow statistic request to the client. After receiving the request, the client returns the statistical parameters to the service server, and the service server can then predict, from these statistical parameters, the bandwidth required to process the service to be processed, obtaining the predicted bandwidth.
In this embodiment, the service server obtains the predicted bandwidth using WebRTC (Web Real-Time Communication, an API that lets a Web browser carry out real-time voice or video calls). The WebRTC flow control strategy is a flow control strategy for a single service link and can be used to predict the traffic required when a link in the service server sends data to the opposite end. Applying WebRTC technology in a service server (a server capable of handling multiple services) requires calculating the real-time traffic of each service link, both uplink and downlink.
In this embodiment, the client needs to calculate a statistical parameter according to a traffic statistics request sent by the service server, and then sends the statistical parameter to the service server, so that the service server predicts a bandwidth required when processing a service to be processed according to the statistical parameter, where the statistical parameter may include: the packet loss rate, the network jitter, and the timestamp, and the process in which the service server obtains the predicted bandwidth according to the WebRTC technology may refer to the prior art specifically, which is not described herein in detail.
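For illustration only, the sketch below shows the general shape of such a prediction from the client's statistics. It is not the actual WebRTC estimator referred to above; the loss thresholds, scaling factors, and delay handling are simplified assumptions.

    def predict_bandwidth(prev_estimate_mbps, loss_rate, send_ts_ms, recv_ts_ms):
        """Very simplified bandwidth prediction from client-reported statistics.

        Uses the packet loss rate and the send/receive timestamps as a crude
        delay signal; real WebRTC estimation is considerably more involved.
        """
        delay_ms = max(recv_ts_ms - send_ts_ms, 0)   # one-way transit time as a rough delay signal
        if loss_rate > 0.10:
            # Heavy loss: back off roughly in proportion to the loss rate.
            estimate = prev_estimate_mbps * (1.0 - 0.5 * loss_rate)
        elif loss_rate < 0.02 and delay_ms < 300:
            # Link looks healthy: probe slightly upwards.
            estimate = prev_estimate_mbps * 1.05
        else:
            estimate = prev_estimate_mbps
        return estimate

    print(predict_bandwidth(8.0, loss_rate=0.01, send_ts_ms=0, recv_ts_ms=40))  # about 8.4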
Fig. 3 is a process diagram of a flow control method according to an embodiment of the present application. The overall process of the flow control method of the present application will be described in a specific embodiment with reference to fig. 3.
In fig. 3, the total bandwidth that the server (i.e., the service server) has requested from the operator is bandwidth Z, and the preset reserved bandwidth is bandwidth Y. Several virtual terminals are created, a service class is set for each virtual terminal, and a bandwidth flow threshold, namely the preset bandwidth M, is set for the type of service each virtual terminal processes. After a client sends pending service A to the server, the server returns a flow statistic request. In response, the client performs flow statistics for pending service A, for example counting the packet loss rate, the network jitter, and the timestamps of the data packets (including sending time and receiving time), and sends the resulting statistical parameters to the server. The server calculates the RTT (Round-Trip Time, an intermediate value used when predicting bandwidth with WebRTC) from the statistical parameters and then calculates the predicted bandwidth N based on the RTT (the calculation of the predicted bandwidth can follow the prior art). If the predicted bandwidth N is not larger than the preset bandwidth M, the predicted bandwidth N is taken directly from the preset bandwidth M and the bandwidth state of pending service A is updated. If N is larger than M, the shortfall is first obtained from the preset reserved bandwidth Y; if the bandwidth obtained from the preset reserved bandwidth Y still cannot satisfy the predicted bandwidth N, bandwidth is then obtained from services whose class is lower than that of pending service A until the predicted bandwidth N is satisfied, and the bandwidth state of pending service A is updated. If the predicted bandwidth N still cannot be assembled, the code rate at which the server processes the service is reduced, or pending service A is transcoded and processed at the lower code rate. After processing, the processed service data is sent to the client through the sending module, where UDP-RTP is the protocol used when the server returns the service data to the client. In fig. 3, the bandwidth state of the task is updated only when the predicted bandwidth can be obtained, which indicates that the current service may have idle bandwidth; of course, the bandwidth state of the task may also be updated whenever bandwidth is obtained, and this application does not specifically limit this.
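Read end to end, the fig. 3 flow can be outlined as below. This is a condensed, self-contained sketch under the same assumptions as the earlier snippets (one particular borrowing order, illustrative code-rate values); it is not the patented implementation itself.

    def handle_request(predicted, preset, reserved_free, lower_priority_idle,
                       high_rate_kbps=4000, low_rate_kbps=2000):
        """Outline of the fig. 3 flow for one pending service A (all bandwidths in Mbps).

        `lower_priority_idle` lists the idle bandwidths of lower-priority services,
        already ordered from lowest to highest priority.
        """
        # 1. Take what is possible from the service's own preset bandwidth M.
        actual = min(predicted, preset)
        # 2. Borrow the shortfall from the preset reserved bandwidth Y.
        actual += min(predicted - actual, reserved_free)
        # 3. Still short: borrow idle bandwidth from lower-priority services, low priority first.
        for idle in lower_priority_idle:
            if actual >= predicted:
                break
            actual += min(predicted - actual, idle)
        # 4. Pick the code rate: the full rate if the predicted bandwidth was assembled,
        #    otherwise a reduced rate (or transcoding) so the data still fits.
        rate = high_rate_kbps if actual >= predicted else low_rate_kbps
        # 5. The processed data would then be returned to the client over UDP-RTP on a
        #    channel configured with `actual`, and the service's bandwidth state updated.
        return actual, rate

    print(handle_request(predicted=20.0, preset=6.0, reserved_free=10.0,
                         lower_priority_idle=[2.0, 5.0]))   # -> (20.0, 4000)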
In this application, the service server controls the flow of each individual service, so each service link inside the server is monitored in real time and can be adjusted dynamically according to the server's current bandwidth conditions. In addition, the actual bandwidth prepared for each service is drawn from several bandwidth resources, namely the preset bandwidth, the preset reserved bandwidth, and the idle bandwidth of other services, which makes reasonable use of the bandwidth resources and improves the service server's utilization of them.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
The present application further provides a flow control device 1000, as shown in fig. 4. Fig. 4 is a block diagram illustrating a flow control device according to an embodiment of the present application. Referring to fig. 4, the flow control device 1000 of the present application may include:
a first obtaining module 1001, configured to obtain a service to be processed;
a second obtaining module 1002, configured to predict a bandwidth required when the to-be-processed service is processed, so as to obtain a predicted bandwidth;
a third obtaining module 1003, configured to obtain an actual bandwidth prepared for processing the service to be processed according to the predicted bandwidth, a preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include at least one of a preset reserved bandwidth and idle bandwidth of other services;
a processing module 1004, configured to process the service to be processed through the actual bandwidth.
Optionally, the third obtaining module 1003 includes:
a first obtaining sub-module, configured to obtain, when the predicted bandwidth is not higher than the preset bandwidth, an actual bandwidth that matches the predicted bandwidth from the preset bandwidth;
and the second obtaining sub-module is used for obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths when the predicted bandwidth is higher than the preset bandwidth.
Optionally, the second obtaining sub-module includes:
a third obtaining submodule, configured to obtain, when the predicted bandwidth is not higher than a sum value, an actual bandwidth that matches the predicted bandwidth from the preset bandwidth and the preset reserved bandwidth, where the sum value is a sum value of the preset bandwidth and the preset reserved bandwidth;
and the fourth obtaining submodule is used for obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of other services when the predicted bandwidth is higher than the sum value.
Optionally, the apparatus 1000 further comprises:
a fourth obtaining module, configured to determine, when a bandwidth obtained from the preset bandwidth and the other borrowable bandwidths does not match the predicted bandwidth, a total bandwidth formed by the preset bandwidth and the other borrowable bandwidths as an actual bandwidth to be prepared for processing the to-be-processed traffic.
Optionally, the fourth obtaining sub-module includes:
the first determining module is used for determining the priority of the service to be processed;
a second determining module, configured to determine at least one target service from services other than the service to be processed, where a priority of the target service is lower than a priority of the service to be processed;
and a fifth obtaining submodule, configured to obtain an actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth, and an idle bandwidth of the at least one target service.
Optionally, the fifth obtaining sub-module includes:
a third determining module, configured to determine, according to a bandwidth state of each target service, a target service with an idle bandwidth, where a bandwidth state of one target service represents whether the target service currently has the idle bandwidth;
and a sixth obtaining submodule, configured to obtain, after obtaining the preset bandwidth and the preset reserved bandwidth, idle bandwidths of the target service with the idle bandwidths according to a sequence from a low priority to a high priority until obtaining an actual bandwidth matched with the predicted bandwidth.
Optionally, the processing module 1004 includes:
a fourth determining module, configured to determine a client that sends the service to be processed;
the distribution module is used for distributing bandwidth resources matched with the actual bandwidth to a data transmission channel connected with the client;
the first processing submodule is used for processing the service data corresponding to the service to be processed according to a first preset code rate when the actual bandwidth is matched with the predicted bandwidth;
the second processing submodule is used for processing the service data according to a second preset code rate when the actual bandwidth is not matched with the predicted bandwidth, and the second preset code rate is lower than the first preset code rate;
and the first sending module is used for sending the processed service data to the client through the data transmission channel.
Optionally, the second obtaining module 1002 includes:
the second sending module is used for sending a flow statistic request to the client;
a receiving module, configured to receive a statistic parameter returned by the client for the traffic statistic request;
and the seventh obtaining submodule is used for predicting the bandwidth required by processing the service to be processed according to the statistical parameters to obtain the predicted bandwidth.
Based on the same inventive concept, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the electronic device implements the steps in a flow control method according to any of the embodiments of the present application.
Based on the same inventive concept, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in a flow control method according to any of the above embodiments of the present application.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing many Internet applications toward high-definition, face-to-face video.
The video networking adopts real-time high-definition video switching technology and can integrate dozens of services, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, network teaching, live broadcast, video on demand (VOD), television mail, Personal Video Recorder (PVR), intranet (self-run) channels, intelligent video broadcast control, and information distribution, into one system platform, realizing high-definition video broadcast through a television or a computer.
To better understand the embodiments of the present invention, the video networking is introduced below:
some of the technologies applied in the video networking are as follows:
Network Technology
The network technology of the video networking improves on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure packet switching or circuit switching, the video networking technology adopts packet switching in a way that meets streaming requirements. The video networking technology has the flexibility, simplicity, and low cost of packet switching while also providing the quality and security guarantees of circuit switching, realizing seamless, switched virtual circuits across the whole network together with a unified data format.
Switching Technology
The video networking adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It provides end-to-end seamless connection across the whole network, connects directly to user terminals, and directly carries IP data packets. User data requires no format conversion anywhere in the network. As a higher-level form of Ethernet, the video networking is a real-time switching platform that can realize the whole-network, large-scale, real-time transmission of high-definition video that the existing Internet cannot, pushing many network video applications toward high definition and unification.
Server Technology
The server technology of the video networking and unified video platform differs from that of traditional servers. Its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than data processing, and efficiency is improved by more than a hundred times compared with a traditional server.
Storage Technology
To handle media content of very large capacity and very large traffic, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. The program information in a server instruction is mapped to a specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, so the typical user waiting time is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical movement of hard disk head seeking; resource consumption is only 20% of that of an IP Internet system of the same grade, while concurrent traffic 3 times larger than that of a traditional hard disk array is produced, improving overall efficiency by more than 10 times.
Network Security Technology
The structural design of the video networking, through measures such as independent permission control for each service and complete isolation of devices and user data, structurally eliminates the network security problems that trouble the Internet. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides users with a structurally worry-free, secure network.
Service Innovation Technology
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, only one automatic connection is needed. The user terminal, set-top box, or PC connects directly to the unified video platform to obtain a rich variety of multimedia video services. The unified video platform uses a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video networking has a centrally controlled network structure. The network can be a tree network, a star network, a ring network, or the like, and in each case the whole network is controlled by a centralized control node in the network.
Fig. 5 is a networking diagram of a video network according to an embodiment of the present application. As shown in fig. 5, the view network is divided into two parts, an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: metropolitan area server, node switch, node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
fig. 6 is a schematic diagram illustrating a hardware structure of a node server according to an embodiment of the present application. As shown in fig. 6, the network interface module 201, the switching engine module 202, the CPU module 203, and the disk array module 204 are mainly included;
the packets coming from the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 looks up the address table 205 for each incoming packet to obtain its direction information, and stores the packet in the queue of the corresponding packet buffer 206 based on that direction information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly controls the hard disk, including initialization, read-write, and other operations; the CPU module 203 is mainly responsible for protocol processing with the access switch and terminal (not shown in the figure), configuring the address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
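Purely as an illustration of the forwarding rule just described, the sketch below models a packet buffer queue whose packets are forwarded only when the output port's send buffer is not full and the queue's packet counter is above zero; the class and function names are assumptions, not part of the node server design.

from collections import deque

class PortQueue:
    def __init__(self, send_buffer_capacity):
        self.packets = deque()       # queued packets; len() acts as the packet counter
        self.send_buffer = deque()
        self.send_buffer_capacity = send_buffer_capacity

    def can_forward(self):
        return (len(self.send_buffer) < self.send_buffer_capacity  # 1) send buffer not full
                and len(self.packets) > 0)                         # 2) packet counter > 0

def poll_queues(queues):
    """One polling round of the switching engine over all packet buffer queues."""
    for q in queues:
        if q.can_forward():
            q.send_buffer.append(q.packets.popleft())

q = PortQueue(send_buffer_capacity=4)
q.packets.append(b"packet-0")
poll_queues([q])
print(len(q.send_buffer))  # 1 packet moved to the send buffer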
The access switch:
fig. 7 is a schematic diagram illustrating a hardware structure of an access switch according to an embodiment of the present application. As shown in fig. 7, the network interface module (downlink network interface module 301, uplink network interface module 302), switching engine module 303 and CPU module 304 are mainly included;
wherein the packets (uplink data) coming from the downlink network interface module 301 enter the packet detection module 305. The packet detection module 305 checks whether the Destination Address (DA), Source Address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. The packets (downlink data) coming from the uplink network interface module 302 enter the switching engine module 303, as do the packets coming from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 goes from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is close to full, the packet is discarded. If a packet entering the switching engine module 303 does not go from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue goes from a downlink network interface to an uplink network interface, a packet is forwarded only when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the code rate control module has been obtained;
if the queue does not go from a downlink network interface to an uplink network interface, a packet is forwarded only when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downlink network interfaces to uplink network interfaces, so as to control the rate of uplink forwarding.
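The token generation described above can be pictured, as a hedged sketch with assumed names and parameters only, as a small token bucket that is refreshed at a programmable interval and that an uplink-bound queue must draw from before forwarding:

import time

class RateController:
    """Assumed model: one token is issued per programmable interval, up to a cap."""
    def __init__(self, interval_s, max_tokens=32):
        self.interval_s = interval_s
        self.max_tokens = max_tokens
        self.tokens = 0
        self._last = time.monotonic()

    def refresh(self):
        elapsed = time.monotonic() - self._last
        new_tokens = int(elapsed / self.interval_s)
        if new_tokens:
            self.tokens = min(self.max_tokens, self.tokens + new_tokens)
            self._last += new_tokens * self.interval_s

    def try_consume(self):
        """Return True if a token is available; condition 3) of uplink forwarding."""
        self.refresh()
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

rc = RateController(interval_s=0.01)
time.sleep(0.02)
print(rc.try_consume())  # True once at least one interval has elapsed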
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
Fig. 8 is a schematic hardware structure diagram of an ethernet protocol conversion gateway according to an embodiment of the present application. As shown in fig. 8, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein the data packets coming from the downlink network interface module 401 enter the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port; if there is a packet, it obtains the Ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the terminal's Ethernet MAC DA, the Ethernet protocol gateway's MAC SA, and the Ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
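To make the gateway's header handling concrete, the following hedged sketch shows MAC header stripping for uplink packets and re-adding for downlink packets, assuming a standard 14-byte Ethernet header (6-byte MAC DA, 6-byte MAC SA, 2-byte length/type) and an illustrative type value; it is not the gateway's actual code.

def strip_ethernet_header(frame: bytes) -> bytes:
    """Uplink direction: remove MAC DA (6) + MAC SA (6) + length/type (2) = 14 bytes."""
    return frame[14:]

def add_ethernet_header(packet: bytes, terminal_mac: bytes, gateway_mac: bytes,
                        eth_type: bytes = b"\x08\x00") -> bytes:
    """Downlink direction: prepend the terminal's MAC DA, the gateway's MAC SA,
    and a length/type field (0x0800 is used here only as an example value)."""
    return terminal_mac + gateway_mac + eth_type + packet

inner = strip_ethernet_header(b"\xaa" * 14 + b"video networking payload")
print(add_ethernet_header(inner, terminal_mac=b"\x02" * 6, gateway_mac=b"\x04" * 6))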
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 Devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), source Address (SA), reserved byte, payload (PDU), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to the type of datagram: 64 bytes for the various protocol packets and 32+1024=1056 bytes for unicast and multicast data packets, although it is certainly not limited to these 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more than 2 connections between two devices, that is, there may be more than 2 connections between a node switch and a node server, between a node switch and another node switch, and between a node server and another node server. However, the metropolitan area network address of a metropolitan area network device is unique, so in order to accurately describe the connection relationship between metropolitan area network devices, the embodiment of the present invention introduces a parameter: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming there are two connections between device A and device B, a packet going from device A to device B has 2 possible labels, and a packet going from device B to device A likewise has 2 possible labels. Labels are divided into in-labels and out-labels; assuming the label of a packet entering device A (the in-label) is 0x0000, the label of the packet leaving device A (the out-label) may become 0x0001. The network access process of the metropolitan area network is carried out under centralized control, that is, both address allocation and label allocation for the metropolitan area network are dominated by the metropolitan area server, and the node switches and node servers execute passively. This differs from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely, Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may be defined with reference to the following: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used, and it is positioned between the reserved bytes and the payload of the packet.
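A minimal sketch, under the assumptions that the label occupies 4 bytes immediately after the 18-byte DA/SA/Reserved header and that CRC recomputation is out of scope, of how such a label could be inserted and swapped; the function names are hypothetical.

import struct

def add_metro_label(access_packet: bytes, label: int) -> bytes:
    """Insert a 32-bit label (upper 16 bits reserved, lower 16 bits used)
    between the reserved bytes and the payload. CRC handling is omitted here."""
    assert 0 <= label <= 0xFFFF, "only the lower 16 bits of the label are used"
    head, rest = access_packet[:18], access_packet[18:]  # DA(8) + SA(8) + Reserved(2)
    return head + struct.pack(">I", label) + rest

def swap_label(metro_packet: bytes, out_label: int) -> bytes:
    """Replace the in-label with the out-label when the packet leaves a device."""
    return add_metro_label(metro_packet[:18] + metro_packet[22:], out_label)

pkt = add_metro_label(b"\x01" * 8 + b"\x02" * 8 + b"\x00\x00" + b"payload+crc", 0x0000)
print(swap_label(pkt, 0x0001))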
Based on the characteristics of the video networking, one of the core concepts of the embodiment of the invention is proposed: following the protocol of the video networking, a service server first obtains a service to be processed and predicts the bandwidth required to process it, obtaining the predicted bandwidth; then, according to the predicted bandwidth, it acquires the actual bandwidth prepared for processing the service to be processed from the preset bandwidth of the service to be processed and other borrowable bandwidths (the preset reserved bandwidth and the idle bandwidth of other services); finally, it processes the service to be processed through the actual bandwidth.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "include", "including" or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or terminal device that comprises the element.
The flow control method, the flow control device, the electronic device and the storage medium provided by the present invention are described in detail above, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (11)

1. A method of flow control, comprising:
obtaining a service to be processed;
predicting the bandwidth required when the service to be processed is processed to obtain predicted bandwidth;
obtaining an actual bandwidth prepared for processing the service to be processed according to the predicted bandwidth, the preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include: presetting at least one of reserved bandwidth and idle bandwidth of other services;
processing the service to be processed through the actual bandwidth;
obtaining an actual bandwidth for preparing to process the service to be processed according to the predicted bandwidth, the preset bandwidth of the service to be processed and other borrowable bandwidths, including:
and when the predicted bandwidth is higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths.
2. The method of claim 1, wherein obtaining an actual bandwidth to be used for processing the pending traffic according to the predicted bandwidth, the preset bandwidth of the pending traffic, and other borrowable bandwidths comprises:
and when the predicted bandwidth is not higher than the preset bandwidth, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth.
3. The method of claim 2, wherein obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth and the other borrowable bandwidth comprises:
when the predicted bandwidth is not higher than a sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the preset reserved bandwidth, wherein the sum value is the sum value of the preset bandwidth and the preset reserved bandwidth;
and when the predicted bandwidth is higher than the sum value, obtaining an actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of other services.
4. The method of claim 3, further comprising:
when the bandwidth obtained from the preset bandwidth and the other borrowable bandwidths does not match with the predicted bandwidth, determining the total bandwidth formed by the preset bandwidth and the other borrowable bandwidths as the actual bandwidth ready for processing the service to be processed.
5. The method of claim 3, wherein obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and a free bandwidth of the other traffic comprises:
determining the priority of the service to be processed;
determining at least one target service from other services except the service to be processed, wherein the priority of the target service is lower than that of the service to be processed;
and obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and the idle bandwidth of the at least one target service.
6. The method of claim 5, wherein obtaining an actual bandwidth matching the predicted bandwidth from the preset bandwidth, the preset reserved bandwidth and a free bandwidth of the at least one target service comprises:
determining target services with idle bandwidth according to the bandwidth state of each target service, wherein the bandwidth state of one target service represents whether the target service currently has the idle bandwidth;
and after the preset bandwidth and the preset reserved bandwidth are obtained, obtaining the idle bandwidth of the target service with the idle bandwidth according to the sequence of the priority from low to high until the actual bandwidth matched with the predicted bandwidth is obtained.
7. The method of claim 1, wherein processing the pending traffic through the actual bandwidth comprises:
determining a client side for sending the service to be processed;
allocating bandwidth resources matched with the actual bandwidth to a data transmission channel connected with the client;
when the actual bandwidth is matched with the predicted bandwidth, processing the service data corresponding to the service to be processed according to a first preset code rate;
when the actual bandwidth is not matched with the predicted bandwidth, processing the service data according to a second preset code rate, wherein the second preset code rate is lower than the first preset code rate;
and sending the processed service data to the client through the data transmission channel.
8. The method of claim 7, wherein predicting a bandwidth required for processing the to-be-processed service to obtain a predicted bandwidth comprises:
sending a flow statistic request to the client;
receiving a statistic parameter returned by the client aiming at the flow statistic request;
and predicting the bandwidth required when the service to be processed is processed according to the statistical parameters to obtain the predicted bandwidth.
9. A flow control device, comprising:
the first obtaining module is used for obtaining the service to be processed;
the second obtaining module is used for predicting the bandwidth required by processing the service to be processed to obtain the predicted bandwidth;
a third obtaining module, configured to obtain an actual bandwidth to be prepared for processing the service to be processed according to the predicted bandwidth, the preset bandwidth of the service to be processed, and other borrowable bandwidths, where the other borrowable bandwidths include: presetting at least one of reserved bandwidth and idle bandwidth of other services;
the processing module is used for processing the service to be processed through the actual bandwidth;
the third obtaining module includes:
and the second obtaining sub-module is used for obtaining the actual bandwidth matched with the predicted bandwidth from the preset bandwidth and the other borrowable bandwidths when the predicted bandwidth is higher than the preset bandwidth.
10. An electronic device, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the steps of a flow control method as recited in any of claims 1-8.
11. A computer-readable storage medium storing a computer program for causing a processor to perform the steps of a method of flow control according to any one of claims 1 to 8.
CN202010209474.2A 2020-03-23 2020-03-23 Flow control method and device, electronic equipment and storage medium Active CN111565323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010209474.2A CN111565323B (en) 2020-03-23 2020-03-23 Flow control method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010209474.2A CN111565323B (en) 2020-03-23 2020-03-23 Flow control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111565323A CN111565323A (en) 2020-08-21
CN111565323B true CN111565323B (en) 2022-11-08

Family

ID=72071468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010209474.2A Active CN111565323B (en) 2020-03-23 2020-03-23 Flow control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111565323B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629826B (en) * 2020-12-14 2024-05-28 京东方科技集团股份有限公司 Network maximum bandwidth estimation method and device, electronic equipment and storage medium
CN113207107A (en) * 2021-04-25 2021-08-03 浙江吉利控股集团有限公司 Multichannel bandwidth regulation and control method, device, equipment and storage medium
CN113660173B (en) * 2021-08-16 2024-04-26 抖音视界有限公司 Flow control method, device, computer equipment and storage medium
CN113726691B (en) * 2021-08-20 2024-04-30 北京字节跳动网络技术有限公司 Bandwidth reservation method, device, equipment and storage medium
CN115941622A (en) * 2022-10-25 2023-04-07 阿里巴巴(中国)有限公司 Bandwidth adjusting method, system, equipment and storage medium
CN118175110B (en) * 2024-05-13 2024-07-09 中宇联云计算服务(上海)有限公司 Data resource delivery method based on dynamic flow pool

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001031605A1 (en) * 1999-10-28 2001-05-03 Ncube Corporation Adaptive bandwidth system and method for broadcast data
US7499453B2 (en) * 2000-05-19 2009-03-03 Cisco Technology, Inc. Apparatus and methods for incorporating bandwidth forecasting and dynamic bandwidth allocation into a broadband communication system
JP2005006062A (en) * 2003-06-12 2005-01-06 Nec Corp Voice communication band management system and method, communication connection server, network device, and voice communication band management program
TWI276334B (en) * 2005-09-16 2007-03-11 Ind Tech Res Inst Methods for allocating transmission bandwidths of a network
WO2007071198A1 (en) * 2005-12-23 2007-06-28 Hongkong Applied Science And Technology Research Institute Co., Ltd. A distributed wireless network with dynamic bandwidth allocation
CN100571175C (en) * 2006-09-30 2009-12-16 华为技术有限公司 A kind of cordless communication network bandwidth allocation methods and device
CN101009655B (en) * 2007-02-05 2011-04-20 华为技术有限公司 Traffic scheduling method and device
CN101827027B (en) * 2009-12-25 2013-02-13 中国科学院声学研究所 Interlayer coordination-based home network QoS guarantee method
CN101986619A (en) * 2010-10-29 2011-03-16 南京丹奥科技有限公司 Bandwidth reservation-based VSAT satellite communication system bandwidth distribution method
IL219839A0 (en) * 2012-05-16 2012-08-30 Elbit Systems Land & C4I Ltd Bandwidth prediction for cellular backhauling
CN103685072B (en) * 2013-11-27 2016-11-02 中国电子科技集团公司第三十研究所 A kind of method that network traffics are quickly distributed
CN105099778A (en) * 2015-07-27 2015-11-25 中国联合网络通信集团有限公司 Bandwidth allocation method and device
CN106412628B (en) * 2015-07-30 2020-07-24 华为技术有限公司 Bandwidth adjusting method and related equipment
CN105743562A (en) * 2016-03-21 2016-07-06 南京邮电大学 Satellite network access method based on predicted dynamic bandwidth allocation
CN105703916B (en) * 2016-03-21 2018-09-28 国网信息通信产业集团有限公司 A kind of control method and device based on SDN multiple domain distribution optical-fiber networks
CN105897612B (en) * 2016-06-06 2019-05-28 中国电子科技集团公司第三十研究所 A kind of method and system based on the distribution of SDN multi service dynamic bandwidth
CN105959974B (en) * 2016-06-14 2019-11-29 深圳市海思半导体有限公司 A kind of method and apparatus for predicting bandwidth of air-interface
CN109257304A (en) * 2017-07-12 2019-01-22 中兴通讯股份有限公司 A kind of bandwidth adjusting method, device, storage medium and the network equipment
CN108449286B (en) * 2018-03-01 2020-07-03 北京邮电大学 Network bandwidth resource allocation method and device
CN109246023A (en) * 2018-11-16 2019-01-18 锐捷网络股份有限公司 Flow control methods, the network equipment and storage medium
CN109639470B (en) * 2018-11-30 2021-10-15 四川安迪科技实业有限公司 VSAT satellite communication system bandwidth allocation method based on star networking

Also Published As

Publication number Publication date
CN111565323A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN111565323B (en) Flow control method and device, electronic equipment and storage medium
CN110138632B (en) Data transmission method and device
CN109150905B (en) Video network resource release method and video network sharing platform server
CN110049341B (en) Video processing method and device
CN112311610B (en) Communication method and device for realizing QOS guarantee under non-IP system
CN111193767B (en) Request data sending method and device and clustered server system
CN110049280B (en) Method and device for processing monitoring data
CN111669337A (en) Flow control method and device
CN109347930B (en) Task processing method and device
CN108881134B (en) Communication method and system based on video conference
CN110336710B (en) Terminal testing method, system and device and storage medium
CN110740087B (en) Message transmission method, terminal, gateway device, electronic device and storage medium
CN110519549B (en) Conference terminal list obtaining method and system
CN109862439B (en) Data processing method and device
CN109842630B (en) Video processing method and device
CN109769012B (en) Web server access method and device
CN111245733A (en) Data transmission method and device
CN110650169A (en) Terminal equipment upgrading method and device
CN109474848B (en) Video processing method and device based on video network, electronic equipment and medium
CN110493311B (en) Service processing method and device
CN110113563B (en) Data processing method based on video network and video network server
CN110113553B (en) Method and device for processing video telephone
CN111064988A (en) Log saving method and device
CN110769324A (en) Processing method and device of virtual terminal
CN111479136B (en) Monitoring resource transmission method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant