WO2022057131A1 - Data congestion processing method and apparatus, computer device, and storage medium - Google Patents


Info

Publication number
WO2022057131A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
transmitted
remote unit
buffer queue
priority
Prior art date
Application number
PCT/CN2020/138089
Other languages
English (en)
French (fr)
Inventor
帅福利
杨波
徐胤
龚贺
Original Assignee
京信网络系统股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京信网络系统股份有限公司
Publication of WO2022057131A1 publication Critical patent/WO2022057131A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/12 - Avoiding congestion; Recovering from congestion
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L47/2433 - Allocation of priorities to traffic types
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/90 - Buffering arrangements
    • H04L49/9063 - Intermediate storage in different physical parts of a node or terminal

Definitions

  • the present application relates to the technical field of microwave communication, and in particular, to a data congestion processing method, apparatus, computer equipment and storage medium.
  • a method for processing data congestion, comprising:
  • detecting the amount of data that can be transmitted on the link between the near-end unit and each remote unit;
  • obtaining the data required by each remote unit from a first buffer queue according to the data transmission volume corresponding to each remote unit, wherein the first buffer queue includes data on which congestion processing has been performed according to the user priority of the data to be transmitted;
  • buffering the data required by each remote unit in a second buffer queue according to the service priority of the data required by each remote unit, wherein the second buffer queue includes data on which congestion processing has been performed according to that service priority; and
  • sending the data required by each remote unit in the second buffer queue to each remote unit respectively.
  • a data congestion processing apparatus, comprising:
  • a detection module configured to detect the amount of data that can be transmitted on the link between the near-end unit and each remote unit;
  • a first processing module configured to obtain the data required by each remote unit from a first buffer queue according to the data transmission volume corresponding to each remote unit, wherein the first buffer queue includes data on which congestion processing has been performed according to the user priority of the data to be transmitted;
  • a second processing module configured to buffer the data required by each remote unit in a second buffer queue according to the service priority of the data required by each remote unit, wherein the second buffer queue includes data on which congestion processing has been performed according to that service priority; and
  • a sending module configured to send the data required by each remote unit in the second buffer queue to each remote unit respectively.
  • in the above data congestion processing method and apparatus, the amount of data that can be transmitted on the link between the near-end unit and each remote unit is detected; the data required by each remote unit is obtained from the first buffer queue according to the data transmission volume corresponding to that remote unit; the data required by each remote unit is then buffered in the second buffer queue according to its service priority; and the data required by each remote unit in the second buffer queue is finally sent to each remote unit respectively.
  • the above method performs two-level congestion processing when data congestion occurs: the near-end unit first performs congestion processing according to the user priority of the data to be transmitted, and then performs congestion processing according to the service priority of the data required by each remote unit. As a result, the near-end unit sends as much data with high user priority and high service priority as possible to each remote unit, which greatly improves the transmission efficiency of such data.
  • the above method also dynamically adjusts, according to the data transmission volume between the near-end unit and each remote unit, the amount of data buffered in the second buffer queue that needs to be sent to each remote unit, so that the data required by each remote unit in the second buffer queue matches the amount of data that can be transmitted on the corresponding link. This overcomes the problem of inefficient data transmission caused by changes in the state of the link between the near-end unit and each remote unit.
  • FIG. 1 is a schematic structural diagram of a data transmission system provided in an embodiment
  • FIG. 2 is a schematic flowchart of a method for processing data congestion in one embodiment
  • FIG. 3 is a schematic flowchart of a method for processing data congestion in one embodiment
  • FIG. 4 is a schematic flowchart of a specific implementation manner of S202 in the embodiment of FIG. 3;
  • FIG. 5 is a schematic flowchart of a specific implementation manner of S303 in the embodiment of FIG. 4;
  • FIG. 6 is a schematic flowchart of a specific implementation manner of S402 in the embodiment of FIG. 4;
  • FIG. 7 is a schematic flowchart of a specific implementation manner of S102 in the embodiment of FIG. 2;
  • FIG. 8 is a schematic flowchart of a method for processing data congestion in one embodiment
  • FIG. 9 is a schematic flowchart of a specific implementation manner of S602 in the embodiment of FIG. 8;
  • FIG. 10 is a schematic flowchart of a specific implementation manner of S702 in the embodiment of FIG. 9;
  • FIG. 11 is a schematic flowchart of a method for processing data congestion in one embodiment;
  • FIG. 12 is a schematic flowchart of a specific implementation manner of S101 in the embodiment of FIG. 2;
  • FIG. 13 is a schematic flowchart of a method for processing data congestion in one embodiment
  • FIG. 14 is a structural block diagram of an apparatus for processing data congestion in one embodiment
  • FIG. 15 is a diagram of the internal structure of a computer device in one embodiment.
  • the data congestion processing method provided by the present application can be applied to the data transmission system shown in FIG. 1 .
  • the data transmission system includes a near-end unit, at least one remote unit, at least one user terminal, and a server, wherein the near-end unit is wirelessly connected to each remote unit, the near-end unit is connected to the server by wire, and each remote unit is connected to its corresponding user terminal by wire or wirelessly.
  • the user terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the near-end unit and the remote unit may each be, but are not limited to, various switching devices, personal computers, notebook computers, and the like;
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a data congestion processing method is provided, and the method is applied to the near-end unit in FIG. 1 as an example for description, including the following steps:
  • S101 Detect the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit.
  • the near-end unit is used for receiving the data to be transmitted sent by the server connected to it, and sending the data to be transmitted to each remote unit on the air interface side.
  • the remote unit is used for receiving the data to be transmitted sent by the near-end unit, and sending the data to be transmitted to the user terminal connected to it.
  • when the near-end unit receives the to-be-transmitted data and needs to send it to each remote unit connected to it, it can first detect the amount of data that can be transmitted on the link between the near-end unit and each remote unit, that is, the maximum transmission bandwidth between them, so that the amount of data sent by the near-end unit to each remote unit can be adjusted in real time according to the data transmission volume corresponding to each remote unit.
  • the amount of data transmission that can be transmitted on the link between the near-end unit and a certain remote unit is related to the link quality between the near-end unit and the remote unit.
  • if the quality of the link between the near-end unit and the remote unit is good, the amount of data that can be transmitted on the link is large; if the link quality is poor, the amount of data that can be transmitted is small.
  • S102 Acquire data required by each remote unit from a first buffer queue according to a data transmission amount corresponding to each remote unit, where the first buffer queue includes data after congestion processing according to user priorities of the data to be transmitted.
  • the user priority is determined in advance by the near-end unit according to the type of data. For example, the user priority of data sent to user A is higher than that of data sent to user B, and the user priorities of user A and user B can be determined when their services are registered.
  • the first buffer queue is a buffer queue preset by the near-end unit, and its size is determined by the near-end unit according to the maximum transmission bandwidth on the air interface side. For example, the first buffer queue may be set to a size of 200M.
  • the first buffer queue includes multiple buffer spaces, each with a weight, and the weight of each buffer space corresponds to the user priority of the data stored in it: the higher the user priority of the stored data, the greater the weight of the buffer space; the lower the user priority, the smaller the weight.
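The weighted buffer-queue layout described above can be sketched as follows. This is a minimal illustration only, assuming a fixed set of priority levels mapped one-to-one to weighted buffer spaces of equal capacity; the class and all names are hypothetical, not from the patent.

```python
from collections import deque

class WeightedBufferQueue:
    """A buffer queue made of per-priority buffer spaces.

    Each buffer space holds data of one priority level; its weight
    mirrors that priority (higher priority -> greater weight).
    """

    def __init__(self, priority_weights, space_capacity):
        # priority -> weight, e.g. {0: 1, 1: 2, 2: 4} (hypothetical values)
        self.weights = dict(priority_weights)
        self.capacity = space_capacity          # max items per buffer space
        self.spaces = {p: deque() for p in priority_weights}

    def has_free_space(self, priority):
        return len(self.spaces[priority]) < self.capacity

    def lowest_weight_priority(self):
        # priority level whose buffer space has the smallest weight
        return min(self.weights, key=self.weights.get)

    def enqueue(self, priority, item):
        if not self.has_free_space(priority):
            raise OverflowError("buffer space full: congestion")
        self.spaces[priority].append(item)

q = WeightedBufferQueue({0: 1, 1: 2, 2: 4}, space_capacity=2)
q.enqueue(2, "pkt-a")
print(q.lowest_weight_priority())  # -> 0
```

The same shape serves for the second buffer queue, with service priorities in place of user priorities.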
  • the data required by each remote unit represents the data to be sent by the near-end unit to each remote unit, that is, the data to be sent to each remote unit determined by the near-end unit according to the data transmission amount corresponding to the remote unit.
  • the first buffer queue also contains the data to be transmitted that the near-end unit sends to multiple remote units.
  • when the near-end unit needs to send data to each remote unit, it first determines, from the first buffer queue, the data that needs to be sent to each remote unit, and then, according to the data transmission volume corresponding to each remote unit, selects from that data the portion matching the data transmission volume, that is, the data that each remote unit needs to be sent later.
  • when the near-end unit receives the data to be transmitted, data congestion may occur. In that case, congestion processing is performed on the congested data in the first buffer queue, so the first buffer queue stores the data after congestion processing.
  • the service priority can be pre-determined by the near-end unit according to the service type of the data.
  • for example, control-type data has a higher service priority than non-control-type data, and data of operator A has a higher service priority than data of operator B.
  • the second buffer queue is a buffer queue preset by the near-end unit, and its size is determined by the near-end unit according to the maximum transmission bandwidth on the air interface side. For example, the second buffer queue may be set to a size of 200M.
  • the second buffer queue includes multiple buffer spaces, each with a weight, and the weight of each buffer space corresponds to the service priority of the data stored in it: the higher the service priority of the stored data, the greater the weight of the buffer space; the lower the service priority, the smaller the weight.
  • the near-end unit when the near-end unit extracts the data required by each remote unit from the first buffer queue, the data required by each remote unit may be further correspondingly buffered in the second buffer queue according to the service priority.
  • the weight of each buffer space corresponds to the service priority of the data to be buffered.
  • S104 Send the data required by each remote unit in the second buffer queue to each remote unit respectively.
  • the data required by each remote unit can be sorted according to its service priority, or according to the weight of each buffer space, and sent to each remote unit in that order, so that each remote unit receives as much high-service-priority data as possible within a given period of time.
  • in the above method, the amount of data that can be transmitted on the link between the near-end unit and each remote unit is detected; the data required by each remote unit is obtained from the first buffer queue according to the corresponding data transmission volume; that data is then buffered in the second buffer queue according to its service priority; and finally the data required by each remote unit in the second buffer queue is sent to each remote unit respectively.
  • the above method performs two-level congestion processing when data congestion occurs: the near-end unit first performs congestion processing according to the user priority of the data to be transmitted, and then performs congestion processing according to the service priority of the data required by each remote unit, so that as much data with high user priority and high service priority as possible is sent to each remote unit, greatly improving the transmission efficiency of such data.
  • the above method also dynamically adjusts, according to the data transmission volume between the near-end unit and each remote unit, the amount of data buffered in the second buffer queue that needs to be sent to each remote unit, so that the buffered data matches the amount of data that can be transmitted on the corresponding link, overcoming the inefficient transmission caused by changes in link state.
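One full cycle of steps S101 to S104 can be summarized in code. This is only an illustrative sketch under simplifying assumptions (per-link capacities are given up front, the first queue is a plain list, and the second-level queue is modeled as a per-remote ordering by service priority); the `Packet` fields and function name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str               # target remote unit
    size: int               # bytes
    user_priority: int
    service_priority: int

def one_cycle(first_queue, link_capacity):
    """One cycle of the two-level method (S101-S104): for each remote
    unit, fit packets to the link budget, then order them by service
    priority (highest first) for sending."""
    plan = {}
    for remote, capacity in link_capacity.items():     # S101: per-link volume
        taken, used = [], 0
        for pkt in list(first_queue):                  # S102: fit link budget
            if pkt.dest == remote and used + pkt.size <= capacity:
                taken.append(pkt)
                used += pkt.size
                first_queue.remove(pkt)
        # S103/S104: second-level ordering, high service priority first
        plan[remote] = sorted(taken, key=lambda p: -p.service_priority)
    return plan

fq = [Packet("r1", 40, 2, 1), Packet("r1", 40, 1, 3), Packet("r2", 30, 1, 2)]
plan = one_cycle(fq, {"r1": 100, "r2": 10})
print([p.service_priority for p in plan["r1"]])  # -> [3, 1]
print(plan["r2"])                                # -> [] (link too small)
```

Note how the r2 packet stays in the first queue when its link cannot carry it, matching the dynamic adjustment described above.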
  • the present application further provides a method for processing data congestion, as shown in FIG. 3 , the method described in the embodiment of FIG. 2 further includes:
  • when the near-end unit receives the data to be transmitted, it can first determine the user priority of the data, then search the first buffer queue for the buffer space whose weight corresponds to that user priority, and then check whether that buffer space is full. If the buffer space is full, the data to be transmitted is congested; if not, it is not congested.
  • the discardability indicates whether the data to be transmitted can be discarded, and the near-end unit can pre-mark whether the data to be transmitted is discardable data according to the usage type of the data to be transmitted.
  • if the to-be-transmitted data is important data, the near-end unit marks it as non-discardable data; if the to-be-transmitted data is non-important data, the near-end unit marks it as discardable data.
  • the storage state of the first cache queue indicates whether there is free cache space in the first cache queue.
  • this embodiment relates to an application scenario in which the data to be transmitted is congested.
  • the near-end unit can perform congestion processing on the data to be transmitted according to both the discardability of the data and the storage state of the first buffer queue;
  • alternatively, the near-end unit can perform congestion processing on the data to be transmitted according to the discardability of the data alone;
  • or the near-end unit can perform congestion processing on the data to be transmitted according to the storage state of the first buffer queue alone.
  • the method described in the above embodiment performs congestion processing on the data to be transmitted. Determining the processing method in combination with the discardability of the data ensures the quality of subsequent data transmission, and determining it in combination with the storage state of the first buffer queue makes full use of the preset buffer space and avoids wasting resources.
  • the above S202, "performing congestion processing on the data to be transmitted according to the discardability of the data to be transmitted and/or the storage state of the first buffer queue", includes:
  • step S301: determine whether the data to be transmitted is discardable data; if it is discardable, go to step S302; if it is not discardable, go to step S303.
  • This embodiment relates to an application scenario in which congestion processing is performed on the data to be transmitted according to the discardability of the data to be transmitted and the storage state of the first buffer queue.
  • the near-end unit first judges whether the data to be transmitted is discardable data. If it is, the data received when congestion occurs will not affect the quality of later data transmission and can be discarded; if it is not, the data is important and cannot be discarded, since discarding it would affect the quality of later data transmission.
  • This embodiment involves that the data to be transmitted received by the near-end unit is discardable data.
  • the near-end unit directly discards the data to be transmitted.
  • S303 Perform congestion processing on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data to be transmitted.
  • the near-end unit determines the storage state of the first buffer queue, that is, whether there is free buffer space. If there is free buffer space, the near-end unit still has room to buffer the data to be transmitted when congestion occurs and can use that space to cache it. If there is no free buffer space, every buffer space in the preset first buffer queue already stores data, and the near-end unit needs to further decide whether to discard the data to be transmitted according to its user priority.
  • this embodiment provides a specific implementation of the above S303.
  • the above S303 "congestion processing is performed on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data to be transmitted", including:
  • This embodiment relates to an application scenario in which the near-end unit determines that there is free buffer space in the first buffer queue.
  • the near-end unit directly buffers the data to be transmitted in the free buffer space, determines the weight corresponding to the user priority of the data according to the preset correspondence between user priorities and weights, and then modifies the weight of that buffer space accordingly, so that the weight of the buffer space corresponds to the user priority of the data buffered in it.
  • the first target data to be transmitted is the data to be transmitted buffered in the buffer space with the lowest weight in the first buffer queue.
  • this embodiment relates to an application scenario in which the near-end unit determines that there is no free buffer space in the first buffer queue.
  • the near-end unit first finds the buffer space with the lowest weight in the first buffer queue and determines the user priority of the data cached there, that is, the user priority of the first target data to be transmitted. It then compares that priority with the user priority of the data to be transmitted received when congestion occurred to obtain a comparison result, and selects a congestion processing method for the data to be transmitted according to that result.
  • this embodiment provides a specific implementation manner of the above S402.
  • the above S402 “determining to perform congestion processing on the data to be transmitted according to the comparison result” includes:
  • This embodiment relates to an application scenario where the comparison result is that the priority of the user of the data to be transmitted is higher than the priority of the user of the data to be transmitted of the first target.
  • the near-end unit directly discards the first target data to be transmitted to clear the buffer space where it was stored, and buffers the data to be transmitted in that cleared buffer space.
  • according to the preset correspondence between user priorities and weights, the near-end unit determines the weight corresponding to the user priority of the newly stored data and modifies the weight of the cleared buffer space accordingly, so that its weight corresponds to the user priority of the data now cached in it.
  • This embodiment relates to an application scenario where the comparison result is that the user priority of the data to be transmitted is lower than or equal to the user priority of the first target data to be transmitted.
  • the near-end unit directly discards the to-be-transmitted data.
  • the above embodiment decides whether to discard the data to be transmitted by judging its user priority, and discards the data with low user priority in the first buffer queue, so that data with high user priority is transmitted as much as possible. This ensures the effective transmission of high-user-priority data and improves its transmission efficiency.
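The decision procedure of S301 through S403 above can be condensed into a single function. This is a sketch under the simplifying assumption of one data item per buffer space, with a space's weight represented directly by the stored item's user priority; the dict keys and return strings are hypothetical.

```python
def handle_congested_packet(queue, capacity, packet):
    """First-queue congestion handling for one incoming packet.

    queue    : list of dicts like {"user_priority": int, "discardable": bool},
               one entry per occupied buffer space
    capacity : total number of buffer spaces in the first queue
    Returns a string describing the action taken.
    """
    # S301/S302: discardable data received during congestion is dropped
    if packet["discardable"]:
        return "discarded incoming"
    # S303/S401: non-discardable data -> check the queue's storage state
    if len(queue) < capacity:              # free buffer space exists
        queue.append(packet)               # buffer it; the space's weight
        return "buffered"                  # follows the packet's priority
    # S402/S403: compare with the first target data (lowest-weight space)
    lowest = min(queue, key=lambda p: p["user_priority"])
    if packet["user_priority"] > lowest["user_priority"]:
        queue.remove(lowest)               # clear that buffer space
        queue.append(packet)
        return "evicted lowest, buffered incoming"
    return "discarded incoming"            # equal or lower user priority

q = [{"user_priority": 1, "discardable": False},
     {"user_priority": 3, "discardable": False}]
print(handle_congested_packet(q, 2, {"user_priority": 2, "discardable": False}))
# -> evicted lowest, buffered incoming
```

The eviction branch is what keeps high-user-priority data flowing during congestion, as the paragraph above explains.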
  • FIG. 7 is a specific implementation of S102 in the embodiment of FIG. 2 .
  • the above S102, "obtaining the data required by each remote unit from the first buffer queue according to the data transmission volume corresponding to each remote unit", includes:
  • S1021 Determine from the first buffer queue the data to be transmitted that needs to be sent to each remote unit.
  • when the near-end unit buffers the received data to be transmitted in the first buffer queue, the queue contains data required by different remote units, so the near-end unit needs to determine, from the first buffer queue, the data to be transmitted that needs to be sent to each remote unit. Specifically, this can be determined according to the target address of the data, for example its MAC address or destination IP address. Of course, the near-end unit may also determine the data to be sent to each remote unit in other ways, which is not limited here.
  • after the near-end unit determines the data to be transmitted that needs to be sent to each remote unit, it can further extract, from that data, the portion corresponding to the data transmission volume of each remote unit, and determine each extracted portion as the data required by the corresponding remote unit, waiting to be sent to it.
  • the method described in the above embodiment dynamically adjusts the amount of data sent to each remote unit according to its corresponding data transmission volume, so that data can be transmitted effectively regardless of the state of the link between the near-end unit and each remote unit, and the data transmissions between the near-end unit and different remote units do not affect each other.
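Selecting buffered data by destination address and trimming it to the link's currently transmittable volume (S1021 and the extraction step above) might look like this hypothetical sketch, keying on a destination address as the text suggests:

```python
def take_for_remote(first_queue, remote_addr, max_bytes):
    """Pick packets destined for `remote_addr` from the first buffer queue,
    up to the link's currently transmittable volume `max_bytes`.

    first_queue: list of (dest_addr, size, payload) tuples.
    Returns the selected packets; the rest stay in the queue.
    """
    selected, budget = [], max_bytes
    for pkt in list(first_queue):
        dest, size, _payload = pkt
        if dest == remote_addr and size <= budget:   # matches target address
            selected.append(pkt)                     # and fits the budget
            budget -= size
            first_queue.remove(pkt)
    return selected

fq = [("aa:bb", 60, b"x"), ("aa:bb", 50, b"y"), ("cc:dd", 10, b"z")]
got = take_for_remote(fq, "aa:bb", 120)
print(len(got), len(fq))  # -> 2 1
```

Because the budget comes from the per-link detection of S101, the selection shrinks or grows automatically as link quality changes.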
  • the present application further provides a method for processing data congestion, as shown in FIG. 8 , and the method described in the embodiment of FIG. 2 further includes:
  • S601 according to the service priority of the data required by each remote unit, detect whether there is congestion in the data required by each remote unit.
  • the near-end unit may first determine the service priority of the data to be transferred, then search the second buffer queue for the buffer space whose weight corresponds to that service priority, and then check whether that buffer space is full. If it is full, the data required by the remote unit is congested; if not, it is not congested.
  • the storage state of the second cache queue indicates whether there is free cache space in the second cache queue.
  • this embodiment relates to an application scenario where the data required by the remote unit is congested.
  • the near-end unit can perform congestion processing on the data required by the remote unit according to the storage state of the second buffer queue.
  • the above S602, "performing congestion processing on the required data according to the storage state of the second buffer queue", includes:
  • This embodiment relates to an application scenario in which the near-end unit determines that there is free buffer space in the second buffer queue.
  • the near-end unit directly buffers the data required by the remote unit in the free buffer space, determines the weight corresponding to the service priority of that data according to the preset correspondence between service priorities and weights, and then modifies the weight of that buffer space accordingly, so that the weight of the buffer space corresponds to the service priority of the buffered data.
  • the second target data to be transmitted is the data required by the remote unit buffered in the buffer space with the lowest weight in the second buffer queue.
  • this embodiment relates to an application scenario where the near-end unit determines that there is no free buffer space in the second buffer queue.
  • the near-end unit first finds the buffer space with the lowest weight in the second buffer queue and determines the service priority of the data cached there, that is, the service priority of the second target data to be transmitted. It then compares that priority with the service priority of the data required by the remote unit that needs to be transferred when congestion occurs to obtain a comparison result, and selects a congestion processing method for that data according to the result.
  • this embodiment provides a specific implementation manner of the above S702.
  • the above S702 "determining to perform congestion processing on the data to be transmitted according to the comparison result" includes:
  • the second target data to be transmitted is the data required by the remote unit buffered in the buffer space with the lowest weight in the second buffer queue.
  • this embodiment relates to an application scenario in which the comparison result is that the service priority of the data required by the remote unit is higher than the service priority of the data to be transmitted by the second target.
  • the near-end unit directly discards the second target data to be transmitted to clear the buffer space where it was stored, and buffers the data required by the remote unit in that cleared buffer space.
  • according to the preset correspondence between service priorities and weights, the near-end unit determines the weight corresponding to the service priority of the newly stored data and modifies the weight of the cleared buffer space accordingly, so that its weight corresponds to the service priority of the data required by the remote unit that is now cached in it.
  • This embodiment relates to an application scenario in which the comparison result shows that the service priority of the required data is lower than or equal to the service priority of the second target data to be transmitted.
  • In this case, the near-end unit directly discards the required data.
  • The above embodiment decides whether to discard the data required by each remote unit by comparing service priorities, and discards the low-service-priority data in the second buffer queue, so that data with high service priority is transmitted as much as possible. This guarantees the effective transmission of high-service-priority data and improves its transmission efficiency.
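  The second-level eviction rule above can be roughly sketched as follows. This is an illustrative simplification, not the patent's implementation: the queue layout and field names (`weight`, `service_prio`) are assumptions.

```python
# Sketch of second-level congestion handling: when the second buffer queue
# has no free space, the occupant of the lowest-weight buffer space is
# compared against the incoming remote-unit data by service priority.

def handle_second_queue_congestion(queue, incoming):
    """queue: list of buffer spaces, each a dict with 'weight' and 'data'
    (where 'data' carries a 'service_prio'). Returns the action taken."""
    victim = min(queue, key=lambda space: space["weight"])
    if incoming["service_prio"] > victim["data"]["service_prio"]:
        # Evict the lowest-priority occupant, reuse its buffer space, and
        # update the space's weight to match the new occupant's priority.
        victim["data"] = incoming
        victim["weight"] = incoming["service_prio"]
        return "evicted"
    # Otherwise the incoming data itself is dropped.
    return "dropped_incoming"
```

  A real near-end unit would key weights through a preset priority-to-weight table rather than reusing the priority value directly.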
  • Based on the embodiment of FIG. 2, the present application also provides a data congestion processing method. As shown in FIG. 11, the method of the FIG. 2 embodiment further includes the following steps:
  • S901: Receive the data to be transmitted. In practice, the near-end unit receives, in real time, the data to be transmitted sent by the server.
  • S902: Mark the user priority, service priority, and discardability of the data to be transmitted according to its attribute information.
  • The attribute information includes the user type, service type, and usage type of the data to be transmitted. Specifically, when the near-end unit receives data to be transmitted, it can determine the data's user priority from its user type and mark it with a corresponding user-priority identifier, so that the near-end unit can later determine the user priority from that identifier. Likewise, it can determine the service priority from the service type and mark it with a corresponding service-priority identifier, so that the service priority can later be read from that identifier. It can also determine from the usage type whether the data is discardable and mark it with a corresponding discard identifier, so that the discardability can later be read from that identifier.
  • S903: Store the data to be transmitted into the first buffer queue at the corresponding weight. After the near-end unit marks the received data as above, it determines the user priority of the data from its user identifier, determines the corresponding weight from the preset correspondence between user priority and weight, locates the buffer space with that weight in the first buffer queue, and buffers the data there, so that the weight of the buffer space corresponds to the user priority of the stored data.
  • In the method of this embodiment, because the near-end unit stores data to be transmitted into buffer spaces of the corresponding weight, in order of user priority, as soon as the data is received, when the near-end unit later sends data according to the weights of the buffer spaces in the first buffer queue, data with high user priority is guaranteed to be forwarded preferentially, which improves its transmission efficiency.
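  The weighted-buffering step above can be sketched as below. The patent only states that a preset priority-to-weight correspondence exists; the concrete mapping and field names here are invented for illustration.

```python
# Illustrative sketch: each arriving packet is placed in the buffer space
# whose weight matches its user priority.

PRIO_TO_WEIGHT = {"gold": 3, "silver": 2, "bronze": 1}  # assumed mapping

def store_by_user_priority(first_queue, packet):
    """first_queue: dict mapping weight -> list of buffered packets.
    Returns the weight of the buffer space the packet was stored in."""
    weight = PRIO_TO_WEIGHT[packet["user_prio"]]
    first_queue.setdefault(weight, []).append(packet)
    return weight
```

  Sending in descending weight order then yields the user-priority ordering described in the text.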
  • In an embodiment, the present application also provides a specific implementation of the above S104, which includes: using a round-robin scheduling method to send the data required by each remote unit in the second buffer queue to the respective remote units.
  • In this embodiment, when the near-end unit is about to send the data buffered in the second buffer queue to the remote units, round-robin scheduling can be used to give precedence to the effective sending of data with high service priority, thereby improving the transmission efficiency of high-service-priority data.
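  A minimal round-robin scheduler is sketched below, under the assumption (not stated explicitly in the patent) that the second buffer queue holds one sub-queue of pending data per remote unit. One item per unit is sent on each pass, so no single unit starves the others.

```python
from collections import deque

def round_robin_send(per_unit_queues):
    """per_unit_queues: dict unit_id -> deque of pending data.
    Yields (unit_id, data) pairs in round-robin order until all drain."""
    while any(per_unit_queues.values()):
        for unit_id, q in per_unit_queues.items():
            if q:
                yield unit_id, q.popleft()
```

  In the patent's scheme the per-unit sub-queues would already be ordered by service priority, so round-robin across units preserves per-unit priority order.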
  • In an embodiment, the present application also provides a specific implementation of the above S101.
  • The above S101, "detecting the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit", includes:
  • S1001: Detect the air-interface transmission quality between the near-end unit and each remote unit.
  • The air-interface transmission quality characterizes the state of the air-interface transmission link between the near-end unit and a remote unit, and that state can be determined by the environment in which the link is located. For example, if it is raining where the link runs, the link state degrades, which affects and lowers the air-interface transmission quality between the near-end unit and the remote unit.
  • Optionally, the state of the air-interface transmission link may also be determined by the operating state of the remote unit. For example, if a remote unit connected to the near-end unit fails, the state of the air-interface transmission link becomes extremely poor, resulting in low air-interface transmission quality between the near-end unit and that remote unit.
  • Specifically, when the near-end unit receives data to be transmitted and needs to send it to each connected remote unit, it first detects the air-interface transmission quality between itself and each remote unit, and then determines, according to each remote unit's air-interface transmission quality, the maximum transmission bandwidth between them, i.e., the maximum data transmission amount. For example, when the air-interface quality is good, the corresponding maximum transmission bandwidth may be 200M; when the quality deteriorates, the bandwidth may fall to 100k.
  • S1002: Determine the data transmission amount corresponding to each remote unit according to each air-interface transmission quality.
  • This embodiment relates to a method of specifically determining the data transmission amount corresponding to each remote unit. Once the near-end unit has detected, per the step above, the air-interface transmission quality between itself and each remote unit, it can determine, by analyzing that quality, the data transmission amount the link to each remote unit can carry.
  • Because the air-interface transmission quality between the near-end unit and each remote unit is a relatively easy metric to detect, and because it truly reflects the state of the transmission links between them, determining each remote unit's data transmission amount from the air-interface quality, so that the near-end unit sends each unit's required data according to that amount, matches each remote unit's current ability to receive data, improves data transmission efficiency, and avoids the resource waste of sending data that a remote unit cannot receive normally. For example, if a remote unit connected to the near-end unit fails and cannot receive data, continuing to buffer and process that unit's data would waste resources on the near-end unit.
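  The quality-to-budget step can be sketched as a simple tiered mapping. The 200M/100k endpoints follow the example figures in the text; the quality metric, thresholds, and intermediate tier are assumptions for illustration only.

```python
# Hedged example: map a measured air-interface quality value to a per-unit
# data transmission amount (in bytes per scheduling round).

def transmission_amount(quality_db):
    """quality_db: assumed link-quality metric (e.g. SNR in dB)."""
    if quality_db >= 30:
        return 200 * 1024 * 1024   # good link: ~200M (example from the text)
    if quality_db >= 15:
        return 10 * 1024 * 1024    # degraded link (assumed intermediate tier)
    return 100 * 1024              # poor link: ~100k (example from the text)
```

  A deployment would calibrate these tiers against the actual modulation and coding schemes of the air interface.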
  • Combining all of the above embodiments, the present application also provides a data congestion processing method. As shown in FIG. 13, the method includes:
  • S1101: Detect, according to the user priority of the data to be transmitted, whether the data to be transmitted is congested; if so, go to step S1102; if not, go to step S1103.
  • S1102: Determine, according to the discardability of the data to be transmitted, whether it is discardable data; if it is discardable, go to step S1104; if it is non-discardable, go to step S1105.
  • S1103: Continue receiving data to be transmitted.
  • S1104: Discard the data to be transmitted.
  • S1105: Determine the storage state of the first buffer queue; if there is free buffer space in the first buffer queue, go to step S1106; if there is none, go to step S1107.
  • S1106: Buffer the data to be transmitted in the free buffer space, and reset the weight of that buffer space according to the user priority of the data to be transmitted.
  • S1107: Compare the user priority of the data to be transmitted with the user priority of the first target data to be transmitted in the first buffer queue; if the former is higher, go to step S1108; if it is lower than or equal to the latter, go to step S1109.
  • S1108: Discard the first target data, buffer the data to be transmitted in the freed buffer space, and modify the weight of that buffer space according to the user priority of the data to be transmitted.
  • S1109: Discard the data to be transmitted.
  • S1110: Detect, according to the service priority of the data required by each remote unit, whether that data is congested; if so, go to step S1111; if not, go to step S1112.
  • S1111: Determine whether there is free buffer space in the second buffer queue; if so, go to step S1113; if not, go to step S1114.
  • S1112: Continue transferring the data required by each remote unit stored in the first buffer queue to the second buffer queue.
  • S1113: Buffer the required data in the free buffer space, and reset the weight of that buffer space according to the service priority of the required data.
  • S1114: Compare the service priority of the required data with the service priority of the second target data to be transmitted in the second buffer queue; if the former is higher, go to step S1115; if it is lower than or equal to the latter, go to step S1116.
  • S1115: Discard the second target data, buffer the required data in the freed buffer space, and modify the weight of that buffer space according to the service priority of the required data.
  • S1116: Discard the required data.
  • The above method clearly provides a two-level congestion processing method: the first level performs congestion processing on the wired data to be transmitted that the near-end unit receives, and the second level performs congestion processing on the transferred data required by each remote unit. The two levels together greatly improve the efficiency of congestion processing and thus of data transmission.
  • Because the first level of congestion processing is performed according to user priority, preferential transmission of data with high user priority is guaranteed; because the second level is performed according to service priority, transmission of data with high service priority is guaranteed.
  • The two-level data congestion processing method provided by the present application therefore ensures the effective transmission of both high-user-priority data and high-service-priority data, greatly improving the transmission efficiency of high-priority data.
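  The two stages of steps S1101 to S1116 can be condensed into the following sketch. All data structures are illustrative simplifications of the weighted buffer queues described above; field names are assumptions.

```python
# Stage one filters incoming data by discardability and user priority;
# stage two re-buffers per-unit data by service priority.

def stage_one(first_queue, pkt, capacity):
    """First-level congestion handling (S1101-S1109), simplified."""
    if len(first_queue) < capacity:
        first_queue.append(pkt)          # no congestion: just buffer
        return "buffered"
    if pkt["discardable"]:
        return "dropped"                 # discardable data is simply dropped
    lowest = min(first_queue, key=lambda p: p["user_prio"])
    if pkt["user_prio"] > lowest["user_prio"]:
        first_queue[first_queue.index(lowest)] = pkt
        return "evicted_lowest"
    return "dropped"

def stage_two(second_queue, pkt, capacity):
    """Second-level congestion handling (S1110-S1116), simplified."""
    if len(second_queue) < capacity:
        second_queue.append(pkt)
        return "buffered"
    lowest = min(second_queue, key=lambda p: p["service_prio"])
    if pkt["service_prio"] > lowest["service_prio"]:
        second_queue[second_queue.index(lowest)] = pkt
        return "evicted_lowest"
    return "dropped"
```

  The real scheme additionally tracks per-space weights and per-unit transmission budgets, omitted here for brevity.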
  • It should be understood that although the steps in the flowcharts of FIGS. 2-13 are displayed in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, there is no strict order restriction on their execution, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-13 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages within other steps.
  • In one embodiment, as shown in FIG. 14, a data congestion processing apparatus is provided, including a detection module 11, a first processing module 12, a second processing module 13, and a sending module 14, wherein:
  • the detection module 11 is used to detect the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit;
  • The first processing module 12 is configured to obtain the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that remote unit; the first buffer queue includes data that has undergone congestion processing according to the user priority of the data to be transmitted;
  • The second processing module 13 is configured to buffer the data required by each remote unit into a second buffer queue according to the service priority of that data; the second buffer queue includes data that has undergone congestion processing according to the service priority of the data required by each remote unit;
  • the sending module 14 is configured to send the data required by each of the remote units in the second buffer queue to each of the remote units respectively.
  • Each module in the above data congestion processing apparatus may be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a terminal or a server, and its internal structure diagram may be as shown in FIG. 15 .
  • The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus.
  • The processor of the computer device is configured to provide computing and control capabilities.
  • The memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program when executed by a processor, implements a data congestion handling method.
  • The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • FIG. 15 is only a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, a computer program is stored in the memory, and the processor implements the following steps when executing the computer program:
  • receiving target data, the target data including data sent by a source base station and data sent by a core-network server;
  • monitoring the current CPU load of a target base station and/or the number of data packets corresponding to the current target data received by the target base station;
  • determining, according to the current CPU load and/or the number of data packets corresponding to the current target data, a target number of data packets to be sent by the target base station at a time;
  • sending the target data to a user terminal according to the target number.
  • a computer-readable storage medium is provided on which a computer program is stored, and when the computer program is executed by a processor, the above steps are implemented.


Abstract

The present application relates to a data congestion processing method and apparatus, a computer device, and a storage medium. The method detects the data transmission amount that can be transmitted on the link between a near-end unit and each remote unit, obtains the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that unit, buffers the data required by each remote unit into a second buffer queue according to the service priority of that data, and then sends the data required by each remote unit in the second buffer queue to the respective remote units. The method implements two-level congestion processing: the near-end unit performs congestion processing according to the user priority of the data to be transmitted, and again according to the service priority of the data required by each remote unit, so that the near-end unit sends as much high-priority data to the remote units as possible, greatly improving the transmission efficiency of high-priority data.

Description

Data congestion processing method and apparatus, computer device, and storage medium
Technical Field
The present application relates to the field of microwave communication technologies, and in particular to a data congestion processing method and apparatus, a computer device, and a storage medium.
Background
In existing communication networks, there are often specific environments where a wired network cannot be deployed, such as mountainous areas, or where deploying a wired network requires a large investment of manpower; for example, branch offices of a large enterprise at different geographic locations may find it difficult to lease wired networks for internal company business transmission. In these application scenarios, wireless multi-path data transmission techniques are often adopted to achieve effective transmission of service data.
However, with the sharp increase in service data volume and the dynamic variation of air-interface transmission quality, data congestion on transmission lines becomes increasingly severe and greatly affects data transmission efficiency. There are many existing ways of handling congestion, for example, relieving congestion by dropping packets, by increasing the bandwidth of the transmission line, or by enlarging the buffer space of the transmission device.
However, when the air-interface transmission quality varies dynamically, the above congestion-relief methods still suffer from low data transmission efficiency.
Summary
On this basis, it is necessary, in view of the above technical problems, to provide a data congestion processing method and apparatus, a computer device, and a storage medium that can effectively improve data transmission efficiency.
A data congestion processing method, the method comprising:
detecting the data transmission amount that can be transmitted on the link between a near-end unit and each remote unit;
obtaining the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that remote unit, the first buffer queue including data that has undergone congestion processing according to the user priority of the data to be transmitted;
buffering the data required by each remote unit into a second buffer queue according to the service priority of that data, the second buffer queue including data that has undergone congestion processing according to the service priority of the data required by each remote unit;
sending the data required by each remote unit in the second buffer queue to the respective remote units.
A data congestion processing apparatus, the apparatus comprising:
a detection module, configured to detect the data transmission amount that can be transmitted on the link between a near-end unit and each remote unit;
a first processing module, configured to obtain the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that remote unit, the first buffer queue including data that has undergone congestion processing according to the user priority of the data to be transmitted;
a second processing module, configured to buffer the data required by each remote unit into a second buffer queue according to the service priority of that data, the second buffer queue including data that has undergone congestion processing according to the service priority of the data required by each remote unit;
a sending module, configured to send the data required by each remote unit in the second buffer queue to the respective remote units.
The above data congestion processing method and apparatus detect the data transmission amount that the link between the near-end unit and each remote unit can carry, obtain the data required by each remote unit from the first buffer queue according to the corresponding data transmission amount, buffer that data into the second buffer queue according to its service priority, and then send the data in the second buffer queue to the respective remote units. Under data congestion, the method performs two levels of congestion processing: the near-end unit performs congestion processing according to the user priority of the data to be transmitted, and again according to the service priority of the data required by each remote unit, so that the near-end unit sends as much data of high user priority and high service priority to the remote units as possible, greatly improving the transmission efficiency of such data. In addition, the method dynamically adjusts, according to the data transmission amount between the near-end unit and each remote unit, the amount of data buffered in the second buffer queue for sending to each remote unit, so that the buffered data matches the data transmission amount the link can carry, overcoming the low data transmission efficiency caused by changes in the state of the links between the near-end unit and the remote units.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a data transmission system provided in an embodiment;
FIG. 2 is a schematic flowchart of a data congestion processing method in an embodiment;
FIG. 3 is a schematic flowchart of a data congestion processing method in an embodiment;
FIG. 4 is a schematic flowchart of a specific implementation of S202 in the embodiment of FIG. 3;
FIG. 5 is a schematic flowchart of a specific implementation of S303 in the embodiment of FIG. 4;
FIG. 6 is a schematic flowchart of a specific implementation of S402 in the embodiment of FIG. 4;
FIG. 7 is a schematic flowchart of a specific implementation of S102 in the embodiment of FIG. 2;
FIG. 8 is a schematic flowchart of a data congestion processing method in an embodiment;
FIG. 9 is a schematic flowchart of a specific implementation of S602 in the embodiment of FIG. 8;
FIG. 10 is a schematic flowchart of a specific implementation of S702 in the embodiment of FIG. 9;
FIG. 11 is a schematic flowchart of a data congestion processing method in an embodiment;
FIG. 12 is a schematic flowchart of a specific implementation of S101 in the embodiment of FIG. 2;
FIG. 13 is a schematic flowchart of a data congestion processing method in an embodiment;
FIG. 14 is a structural block diagram of a data congestion processing apparatus in an embodiment;
FIG. 15 is an internal structure diagram of a computer device in an embodiment.
Detailed Description
The data congestion processing method provided by the present application can be applied to the data transmission system shown in FIG. 1, which includes a near-end unit, at least one remote unit, at least one user terminal, and a server, where the near-end unit is wirelessly connected to each remote unit, the near-end unit is connected to the server by wire, and each remote unit is connected to its corresponding user terminal by wire or wirelessly. The user terminal may be, but is not limited to, a personal computer, laptop, smartphone, tablet, or portable wearable device; the near-end unit or remote unit may be, but is not limited to, a switch device, personal computer, laptop, or the like; and the server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a data congestion processing method is provided. Taking the application of the method to the near-end unit in FIG. 1 as an example, it includes the following steps:
S101: Detect the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit.
The near-end unit receives the data to be transmitted sent by the server connected to it and sends that data to the remote units on the air-interface side. Each remote unit receives the data sent by the near-end unit and forwards it to the user terminal connected to it.
Specifically, when the near-end unit receives data to be transmitted and needs to send it to the connected remote units, it can first detect the data transmission amount the link to each remote unit can carry, i.e., the maximum transmission bandwidth between the near-end unit and each remote unit, so that it can later adjust in real time, according to each unit's data transmission amount, how much data it sends to that unit. It should be noted that the data transmission amount a link can carry is related to the link quality between the near-end unit and the remote unit: if the link quality is good, the link can carry a large data transmission amount; if the link quality is poor, it can carry only a small one.
S102: Obtain the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that remote unit; the first buffer queue includes data that has undergone congestion processing according to the user priority of the data to be transmitted.
The user priority is determined in advance by the near-end unit according to the type the data belongs to; for example, data sent to user A may have a higher user priority than data sent to user B, and the user priorities of users A and B can be determined when their services are registered. The first buffer queue is a buffer queue preset by the near-end unit, whose size is determined by the near-end unit according to the maximum transmission bandwidth on the air-interface side, for example, a 200M buffer queue. The first buffer queue includes multiple buffer spaces, each with a weight, and the weight of each buffer space corresponds to the user priority of the data it stores: the higher the user priority of the stored data, the larger the weight; the lower the user priority, the smaller the weight. The data required by each remote unit is the data the near-end unit is about to send to that unit, i.e., the data determined by the near-end unit, according to the unit's data transmission amount, to be sent to it.
Specifically, because the first buffer queue simultaneously contains data to be transmitted to multiple remote units, when the near-end unit needs to send data to the remote units, it first determines from the first buffer queue the data to be sent to each unit, and then, according to each unit's data transmission amount, determines from that data the portion matching the transmission amount, i.e., the data required by each remote unit, for subsequent sending. It should be noted that, in the steps of this embodiment, data congestion may occur while the near-end unit is receiving data to be transmitted; under congestion, the near-end unit performs congestion processing on the congested data in the first buffer queue according to the user priority of the data to be transmitted, so the first buffer queue buffers data that has undergone congestion processing.
S103: Buffer the data required by each remote unit into a second buffer queue according to the service priority of that data; the second buffer queue includes data that has undergone congestion processing according to the service priority of the data required by each remote unit.
The service priority may be determined in advance by the near-end unit according to the service type of the data; for example, control-type data has a higher service priority than non-control data, and data of operator A may have a higher service priority than data of operator B. The second buffer queue is a buffer queue preset by the near-end unit, whose size is determined according to the maximum transmission bandwidth on the air-interface side, for example, a 200M buffer queue. The second buffer queue includes multiple buffer spaces, each with a weight, and the weight of each buffer space corresponds to the service priority of the data it stores: the higher the service priority of the stored data, the larger the weight; the lower the service priority, the smaller the weight.
Specifically, when the near-end unit extracts the data required by each remote unit from the first buffer queue, it can further buffer that data, according to service priority, into the corresponding buffer spaces of the second buffer queue, the weight of each buffer space corresponding to the service priority of the data to be buffered. It should be noted that data congestion may also occur when the near-end unit transfers the data required by each remote unit to the second buffer queue; under congestion, the near-end unit performs congestion processing on the congested data in the second buffer queue according to the service priority of the required data, so the second buffer queue buffers data that has undergone congestion processing.
S104: Send the data required by each remote unit in the second buffer queue to the respective remote units.
Specifically, once the near-end unit has stored the data required by each remote unit in the second buffer queue, it can send that data to the remote units in order of service priority, or in order of the weights of the buffer spaces, so that each remote unit receives as much high-service-priority data as possible within a given time.
In the above embodiment, by detecting the data transmission amount that the link between the near-end unit and each remote unit can carry, obtaining each unit's required data from the first buffer queue according to the corresponding transmission amount, buffering that data into the second buffer queue according to its service priority, and then sending the data in the second buffer queue to the respective remote units, two levels of congestion processing are achieved under data congestion: by the user priority of the data to be transmitted and by the service priority of the data required by each remote unit. The near-end unit thus sends as much high-user-priority and high-service-priority data to the remote units as possible, greatly improving the transmission efficiency of such data. In addition, the method dynamically adjusts the amount of data buffered in the second buffer queue for each remote unit according to the data transmission amount between the near-end unit and that unit, so that the buffered data matches what the links can carry, overcoming the low transmission efficiency caused by changes in link state.
Based on the embodiment of FIG. 2, the present application also provides a data congestion processing method. As shown in FIG. 3, the method of the FIG. 2 embodiment further includes:
S201: Detect, according to the user priority of the data to be transmitted, whether the data to be transmitted is congested.
Specifically, when the near-end unit receives data to be transmitted, it can first determine the data's user priority, then look up in the first buffer queue the buffer space whose weight corresponds to that user priority, and then check whether that buffer space is full: if it is full, the data to be transmitted is congested; if not, it is not congested.
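The congestion check described above can be sketched as follows. The queue representation and the use of the priority value as the weight key are simplifying assumptions, not details from the patent.

```python
# Sketch of S201: look up the buffer space whose weight matches the
# packet's user priority and report congestion when that space is full.

def is_congested(first_queue, user_prio, space_capacity):
    """first_queue: dict mapping weight -> list of buffered packets."""
    space = first_queue.get(user_prio, [])
    return len(space) >= space_capacity
```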
S202: If congestion exists, perform congestion processing on the data to be transmitted according to the discardability of the data and/or the storage state of the first buffer queue.
Discardability indicates whether the data to be transmitted may be discarded. The near-end unit may mark in advance, according to the usage type of the data, whether it is discardable; for example, data used at low frequency may be marked discardable, and non-important data may likewise be marked discardable. The storage state of the first buffer queue indicates whether free buffer space exists in the first buffer queue.
Specifically, this embodiment relates to an application scenario in which the data to be transmitted is congested. In this scenario, the near-end unit may perform congestion processing on the data to be transmitted according to both the data's discardability and the storage state of the first buffer queue; optionally, according to the discardability alone; or, optionally, according to the storage state of the first buffer queue alone.
The method of the above embodiment implements congestion processing of the data to be transmitted. Determining the processing method in combination with the data's discardability safeguards subsequent transmission quality, and determining it in combination with the storage state of the first buffer queue makes full use of the preset buffer space and avoids wasting resources.
In one embodiment, a specific implementation of the above S202 is provided. As shown in FIG. 4, the above S202, "performing congestion processing on the data to be transmitted according to the discardability of the data and/or the storage state of the first buffer queue", includes:
S301: Judge, according to the discardability of the data to be transmitted, whether it is discardable data; if it is discardable, execute step S302; if it is non-discardable, execute step S303.
This embodiment relates to the scenario of performing congestion processing according to both the discardability of the data and the storage state of the first buffer queue. In this scenario, the near-end unit first judges from the data's discardability whether it may be discarded. If the data is discardable, the data received when congestion occurs will not affect the quality of later data transmission and can be dropped. If the data is non-discardable, the data received when congestion occurs is relatively important and cannot be dropped; discarding it would affect the quality of later data transmission.
S302: Discard the data to be transmitted.
This embodiment concerns the case where the data received by the near-end unit is discardable; in this case, the near-end unit discards the data directly.
S303: Perform congestion processing on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data.
This embodiment concerns the case where the data received by the near-end unit is non-discardable. In this case, the near-end unit further checks the storage state of the first buffer queue to determine whether free buffer space exists. If free buffer space exists, the near-end unit still has room to buffer the data received when congestion occurred and can use that space. If no free buffer space exists, every buffer space in the preset first buffer queue already stores data, and the near-end unit must further decide, according to the user priority of the data to be transmitted, whether to discard it.
Further, this embodiment provides a specific implementation of the above S303. As shown in FIG. 5, the above S303, "performing congestion processing on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data", includes:
S401: If free buffer space exists in the first buffer queue, buffer the data to be transmitted in the free buffer space, and reset the weight of that buffer space according to the user priority of the data.
This embodiment concerns the scenario in which the near-end unit determines that free buffer space exists in the first buffer queue. In this scenario, the near-end unit buffers the data directly into the free space, determines from the preset correspondence between user priority and weight the weight corresponding to the data's user priority, and then modifies the weight of the buffer space accordingly, so that the weight of the space matches the user priority of the buffered data.
S402: If no free buffer space exists in the first buffer queue, compare the user priority of the data to be transmitted with the user priority of the first target data to be transmitted in the first buffer queue, and perform congestion processing on the data according to the comparison result.
The first target data to be transmitted is the data buffered in the lowest-weight buffer space of the first buffer queue. Specifically, this embodiment concerns the scenario in which the near-end unit determines that no free buffer space exists in the first buffer queue. The near-end unit first finds the lowest-weight buffer space in the first buffer queue, determines the user priority of the data buffered there, i.e., of the first target data, compares it with the user priority of the data received when congestion occurred, obtains a comparison result, and then selects a congestion-handling method accordingly.
Still further, this embodiment provides a specific implementation of the above S402. As shown in FIG. 6, the above S402, "performing congestion processing on the data to be transmitted according to the comparison result", includes:
S501: If the user priority of the data to be transmitted is higher than that of the first target data, discard the first target data, buffer the data to be transmitted in the freed buffer space, and modify the weight of that buffer space according to the user priority of the data to be transmitted.
This embodiment concerns the case where the comparison shows that the user priority of the data to be transmitted is higher than that of the first target data. In this scenario, the near-end unit directly discards the first target data to clear its buffer space and buffers the incoming data there. It then determines, from the preset correspondence between user priority and weight, the weight corresponding to the user priority of the data to be stored, and updates the weight of the freed buffer space accordingly, so that the weight matches the user priority of the newly buffered data.
S502: If the user priority of the data to be transmitted is lower than or equal to that of the first target data, discard the data to be transmitted.
This embodiment concerns the case where the comparison shows that the user priority of the data to be transmitted is lower than or equal to that of the target data; in this scenario, the near-end unit discards the data directly.
By judging the user priority of the data to be transmitted to decide whether to discard it, and by discarding the low-user-priority data in the first buffer queue, the above embodiment transmits as much high-user-priority data as possible, guaranteeing its effective transmission and improving its transmission efficiency.
The embodiment of FIG. 7 is a specific implementation of S102 in the embodiment of FIG. 2. As shown in FIG. 7, the above S102, "obtaining the data required by each remote unit from the first buffer queue according to the data transmission amount corresponding to that remote unit", includes:
S1021: Determine, from the first buffer queue, the data to be transmitted that needs to be sent to each remote unit.
Specifically, when the near-end unit buffers received data into the first buffer queue, the queue contains data required by different remote units; the near-end unit therefore needs to determine from the first buffer queue which data is to be sent to which remote unit. It can do so according to the destination address of the data, for example its MAC address or destination IP address; of course, the near-end unit may also determine this in other ways, which are not limited here.
S1022: Extract, according to the data transmission amount corresponding to each remote unit, the data required by that unit from the data to be sent to it.
Specifically, once the near-end unit has determined the data to be sent to each remote unit, it can further extract from that data, according to each unit's data transmission amount, the portion corresponding to that amount, and determine the extracted data as the data required by each remote unit, to await sending to the corresponding remote unit.
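Steps S1021 and S1022 can be sketched together as below. The `dest` field stands in for the MAC/destination-IP lookup mentioned in the text, and the per-unit byte budgets are an assumed representation of the data transmission amounts.

```python
# Group buffered packets by destination remote unit, then take per-unit
# data up to each unit's transmission budget for this sending round.

def extract_for_units(first_queue, budgets):
    """first_queue: list of packets with 'dest' and 'size' fields.
    budgets: dict unit_id -> max bytes this round.
    Returns dict unit_id -> list of packets selected for sending."""
    selected = {unit: [] for unit in budgets}
    used = {unit: 0 for unit in budgets}
    for pkt in first_queue:
        unit = pkt["dest"]
        if unit in budgets and used[unit] + pkt["size"] <= budgets[unit]:
            selected[unit].append(pkt)
            used[unit] += pkt["size"]
    return selected
```

  Packets left unselected simply wait in the first buffer queue for a later round with a fresh budget.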
The method of the above embodiment dynamically adjusts, according to the data transmission amount corresponding to each remote unit, the amount of data sent to that unit, so that the link between the near-end unit and a remote unit can transmit data effectively in any state, and so that data transmission between the near-end unit and different remote units does not interfere.
Based on the embodiment of FIG. 2, the present application also provides a data congestion processing method. As shown in FIG. 8, the method of the FIG. 2 embodiment further includes:
S601: Detect, according to the service priority of the data required by each remote unit, whether that data is congested.
Specifically, when the near-end unit is about to transfer data buffered in the first buffer queue to the second buffer queue, it can first determine the service priority of the data to be transferred, then look up in the second buffer queue the buffer space whose weight corresponds to that service priority, and then check whether that space is full: if it is full, the data required by the remote unit is congested; if not, it is not congested.
S602: If congestion exists, perform congestion processing on the required data according to the storage state of the second buffer queue.
The storage state of the second buffer queue indicates whether free buffer space exists in the second buffer queue. Specifically, this embodiment concerns the scenario in which the data required by a remote unit is congested; in this scenario, the near-end unit may perform congestion processing on that data according to the storage state of the second buffer queue.
Further, in one embodiment, a specific implementation of the above S602 is provided. As shown in FIG. 9, the above S602, "performing congestion processing on the required data according to the storage state of the second buffer queue", includes:
S701: If free buffer space exists in the second buffer queue, buffer the required data in the free buffer space, and reset the weight of that buffer space according to the service priority of the required data.
This embodiment concerns the scenario in which the near-end unit determines that free buffer space exists in the second buffer queue. In this scenario, the near-end unit buffers the data required by the remote unit directly into the free space, determines from the preset correspondence between service priority and weight the weight corresponding to that data's service priority, and modifies the weight of the buffer space accordingly, so that the weight matches the service priority of the buffered data required by the remote unit.
S702: If no free buffer space exists in the second buffer queue, compare the service priority of the required data with the service priority of the second target data to be transmitted in the second buffer queue, and perform congestion processing on the required data according to the comparison result.
The second target data to be transmitted is the remote-unit data buffered in the lowest-weight buffer space of the second buffer queue. Specifically, this embodiment concerns the scenario in which no free buffer space exists in the second buffer queue: the near-end unit first finds the lowest-weight buffer space in the second buffer queue, determines the service priority of the data buffered there, i.e., of the second target data, compares it with the service priority of the remote-unit data that must be transferred when congestion occurs, obtains a comparison result, and then selects a congestion-handling method for the required data accordingly.
Still further, this embodiment provides a specific implementation of the above S702. As shown in FIG. 10, the above S702, "performing congestion processing on the required data according to the comparison result", includes:
S801: If the service priority of the required data is higher than that of the second target data, discard the second target data, buffer the required data in the freed buffer space, and modify the weight of that buffer space according to the service priority of the required data.
The second target data to be transmitted is the remote-unit data buffered in the lowest-weight buffer space of the second buffer queue. Specifically, this embodiment concerns the case where the comparison shows that the service priority of the required data is higher than that of the second target data. In this scenario, the near-end unit directly discards the second target data to clear its buffer space and buffers the required data there. It then determines, from the preset correspondence between service priority and weight, the weight corresponding to the service priority of the required data to be stored, and updates the weight of the freed buffer space accordingly, so that the weight matches the service priority of the newly buffered data required by the remote unit.
S802: If the service priority of the required data is lower than or equal to that of the second target data, discard the required data.
This embodiment concerns the case where the comparison shows that the service priority of the required data is lower than or equal to that of the second target data; in this scenario, the near-end unit discards the required data directly.
By judging the service priority of the data required by each remote unit to decide whether to discard it, and by discarding the low-service-priority data in the second buffer queue, the above embodiment transmits as much high-service-priority data as possible, guaranteeing its effective transmission and improving its transmission efficiency.
On the basis of the embodiment of FIG. 2, the present application also provides a data congestion processing method. As shown in FIG. 11, the method of the FIG. 2 embodiment further includes the steps:
S901: Receive the data to be transmitted.
In practical applications, the near-end unit receives, in real time, the data to be transmitted sent by the server.
S902: Mark the user priority, service priority, and discardability of the data to be transmitted according to its attribute information.
The attribute information includes the user type, service type, and usage type of the data to be transmitted. Specifically, upon receiving data to be transmitted, the near-end unit can determine its user priority from the user type and mark it with a corresponding user-priority identifier, so that the user priority can later be read from that identifier; it can likewise determine the service priority from the service type and mark it with a corresponding service-priority identifier, so that the service priority can later be read from that identifier; and it can determine from the usage type whether the data is discardable and mark it with a corresponding discard identifier, so that the discardability can later be read from that identifier.
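The marking step S902 can be sketched as below. The classification tables are invented for illustration; the patent specifies only that the three identifiers are derived from user type, service type, and usage type.

```python
# Tag each arriving packet with user-priority, service-priority, and
# discardability identifiers derived from its attribute information.

USER_PRIO = {"vip": 3, "standard": 1}          # assumed user-type mapping
SERVICE_PRIO = {"control": 3, "payload": 1}    # assumed service-type mapping

def mark_packet(pkt):
    pkt["user_prio"] = USER_PRIO.get(pkt["user_type"], 1)
    pkt["service_prio"] = SERVICE_PRIO.get(pkt["service_type"], 1)
    # e.g. low-frequency-use data is treated as discardable
    pkt["discardable"] = pkt["usage_type"] == "low_frequency"
    return pkt
```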
S903: Store the data to be transmitted into the first buffer queue at the corresponding weight according to its user priority.
After marking the received data as above, the near-end unit can determine the data's user priority from its user identifier, determine the corresponding weight from the preset correspondence between user priority and weight, locate the buffer space with that weight in the first buffer queue, and buffer the data there, so that the weight of the buffer space corresponds to the user priority of the stored data.
In the method of the above embodiment, because the near-end unit stores data into buffer spaces of the corresponding weight, in order of user priority, as soon as the data is received, when it later sends data according to the weights of the buffer spaces in the first buffer queue, data of high user priority is guaranteed to be forwarded preferentially, improving its transmission efficiency.
In one embodiment, the present application also provides a specific implementation of the above S104, which includes: sending the data required by each remote unit in the second buffer queue to the respective remote units using a round-robin scheduling method. In this embodiment, when the near-end unit is about to send the data buffered in the second buffer queue to the remote units, round-robin scheduling can be used to give precedence to the effective sending of high-service-priority data, improving the transmission efficiency of high-service-priority data.
In one embodiment, the present application also provides a specific implementation of the above S101. As shown in FIG. 12, the above S101, "detecting the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit", includes:
S1001: Detect the air-interface transmission quality between the near-end unit and each remote unit.
The air-interface transmission quality characterizes the state of the air-interface transmission link between the near-end unit and a remote unit, and that state can be determined by the environment in which the link is located. For example, if it is raining where the link runs, the link state degrades, which affects and lowers the air-interface transmission quality between the near-end unit and the remote unit. Optionally, the link state may also be determined by the operating state of the remote unit; for example, if a remote unit connected to the near-end unit fails, the link state becomes extremely poor, resulting in low air-interface transmission quality between the two units.
Specifically, when the near-end unit receives data to be transmitted and needs to send it to the connected remote units, it first detects the air-interface transmission quality between itself and each remote unit, so that it can then determine, according to each unit's air-interface transmission quality, the maximum transmission bandwidth between them, i.e., the maximum data transmission amount. For example, when the air-interface quality is good, the corresponding maximum transmission bandwidth may be 200M; when the quality deteriorates, it may fall to 100k.
S1002: Determine the data transmission amount corresponding to each remote unit according to each air-interface transmission quality.
Specifically, this embodiment concerns how to determine the data transmission amount corresponding to each remote unit. Once the near-end unit has detected the air-interface transmission quality to each remote unit per the above step, it can determine, by analyzing that quality, the data transmission amount the link to each unit can carry. Because the air-interface transmission quality between the near-end unit and each remote unit is a relatively easy metric to detect and truly reflects the state of the transmission links, determining each unit's data transmission amount from it, so that the near-end unit then sends each unit's required data according to that amount, matches each remote unit's current ability to receive data, improves data transmission efficiency, and avoids the resource waste of sending data that a remote unit cannot receive normally. For example, if a remote unit connected to the near-end unit fails and cannot receive data, continuing to buffer and process that unit's data would waste resources on the near-end unit.
Combining all of the above embodiments, the present application also provides a data congestion processing method. As shown in FIG. 13, the method includes:
S1101: Detect, according to the user priority of the data to be transmitted, whether the data is congested; if so, execute step S1102; if not, execute step S1103.
S1102: Judge, according to the discardability of the data to be transmitted, whether it is discardable data; if discardable, execute step S1104; if non-discardable, execute step S1105.
S1103: Continue receiving data to be transmitted.
S1104: Discard the data to be transmitted.
S1105: Judge the storage state of the first buffer queue; if free buffer space exists in the first buffer queue, execute step S1106; if not, execute step S1107.
S1106: Buffer the data to be transmitted in the free buffer space, and reset the weight of that buffer space according to the user priority of the data.
S1107: Compare the user priority of the data to be transmitted with that of the first target data to be transmitted in the first buffer queue; if the former is higher, execute step S1108; if it is lower than or equal to the latter, execute step S1109.
S1108: Discard the first target data, buffer the data to be transmitted in the freed buffer space, and modify the weight of that buffer space according to the user priority of the data to be transmitted.
S1109: Discard the data to be transmitted.
S1110: Detect, according to the service priority of the data required by each remote unit, whether that data is congested; if so, execute step S1111; if not, execute step S1112.
S1111: Judge whether free buffer space exists in the second buffer queue; if so, execute step S1113; if not, execute step S1114.
S1112: Continue transferring the data required by each remote unit stored in the first buffer queue to the second buffer queue.
S1113: Buffer the required data in the free buffer space, and reset the weight of that buffer space according to the service priority of the required data.
S1114: Compare the service priority of the required data with that of the second target data to be transmitted in the second buffer queue; if the former is higher, execute step S1115; if it is lower than or equal to the latter, execute step S1116.
S1115: Discard the second target data, buffer the required data in the freed buffer space, and modify the weight of that buffer space according to the service priority of the required data.
S1116: Discard the required data.
For explanations of each step in the above embodiment, refer to the foregoing descriptions, which are not repeated here. It should be noted that the above method clearly provides a two-level congestion processing method: the first level performs congestion processing on the wired data to be transmitted received by the near-end unit, and the second level on the transferred data required by each remote unit; the two levels together greatly improve the efficiency of congestion processing and thus of data transmission. In addition, because the first level of congestion processing is performed according to user priority, preferential transmission of high-user-priority data is guaranteed, and because the second level is performed according to service priority, preferential transmission of high-service-priority data is guaranteed. The two-level data congestion processing method provided by the present application therefore ensures the effective transmission of both high-user-priority data and high-service-priority data, greatly improving the transmission efficiency of high-priority data.
It should be understood that, although the steps in the flowcharts of FIGS. 2-13 are displayed in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, there is no strict order restriction on their execution, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-13 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages within other steps.
In one embodiment, as shown in FIG. 14, a data congestion processing apparatus is provided, including a detection module 11, a first processing module 12, a second processing module 13, and a sending module 14, wherein:
the detection module 11 is configured to detect the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit;
the first processing module 12 is configured to obtain the data required by each remote unit from a first buffer queue according to the data transmission amount corresponding to that remote unit, the first buffer queue including data that has undergone congestion processing according to the user priority of the data to be transmitted;
the second processing module 13 is configured to buffer the data required by each remote unit into a second buffer queue according to the service priority of that data, the second buffer queue including data that has undergone congestion processing according to the service priority of the data required by each remote unit;
the sending module 14 is configured to send the data required by each remote unit in the second buffer queue to the respective remote units.
For the specific limitations of the data congestion processing apparatus, refer to the limitations of the data congestion processing method above, which are not repeated here. Each module in the above data congestion processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal or a server, and its internal structure may be as shown in FIG. 15. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a data congestion processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art can understand that the structure shown in FIG. 15 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
receiving target data, the target data including data sent by a source base station and data sent by a core-network server;
monitoring the current CPU load of a target base station and/or the number of data packets corresponding to the current target data received by the target base station;
determining, according to the current CPU load and/or the number of data packets corresponding to the current target data, a target number of data packets to be sent by the target base station at a time;
sending the target data to a user terminal according to the target number.
The implementation principle and technical effects of the computer device provided in the above embodiment are similar to those of the above method embodiments and are not repeated here.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the above steps are implemented.

Claims (13)

  1. A data congestion processing method, the method comprising:
    detecting a data transmission amount that can be transmitted on a link between a near-end unit and each remote unit;
    acquiring, from a first buffer queue, data needed by each remote unit according to the data transmission amount corresponding to that remote unit, wherein the first buffer queue contains data that has undergone congestion processing according to the user priority of data to be transmitted;
    buffering the data needed by each remote unit into a second buffer queue according to the service priority of the data needed by that remote unit, wherein the second buffer queue contains data that has undergone congestion processing according to the service priority of the data needed by each remote unit; and
    sending the data needed by each remote unit in the second buffer queue to the respective remote unit.
  2. The method according to claim 1, characterized in that the method further comprises:
    detecting, according to the user priority of the data to be transmitted, whether the data to be transmitted is congested;
    if congestion exists, performing congestion processing on the data to be transmitted according to the droppability of the data to be transmitted and/or the storage state of the first buffer queue, wherein the droppability indicates whether the data to be transmitted can be dropped, and the storage state of the first buffer queue indicates whether free buffer space exists in the first buffer queue.
  3. The method according to claim 2, characterized in that performing congestion processing on the data to be transmitted according to the droppability of the data to be transmitted and/or the storage state of the first buffer queue comprises:
    determining, according to the droppability of the data to be transmitted, whether the data to be transmitted is droppable data;
    if the data to be transmitted is droppable data, dropping the data to be transmitted;
    if the data to be transmitted is non-droppable data, performing congestion processing on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data to be transmitted.
  4. The method according to claim 3, characterized in that performing congestion processing on the data to be transmitted according to the storage state of the first buffer queue and the user priority of the data to be transmitted comprises:
    if free buffer space exists in the first buffer queue, buffering the data to be transmitted in the free buffer space, and resetting the weight of the buffer space holding the data to be transmitted according to the user priority of the data to be transmitted;
    if no free buffer space exists in the first buffer queue, comparing the user priority of the data to be transmitted with the user priority of first target to-be-transmitted data in the first buffer queue, and performing congestion processing on the data to be transmitted according to the comparison result, wherein the first target to-be-transmitted data is the to-be-transmitted data buffered in the lowest-weight buffer space of the first buffer queue.
  5. The method according to claim 4, characterized in that performing congestion processing on the data to be transmitted according to the comparison result comprises:
    if the user priority of the data to be transmitted is higher than that of the first target to-be-transmitted data, dropping the first target to-be-transmitted data, buffering the data to be transmitted in the buffer space freed by dropping the first target to-be-transmitted data, and modifying the weight of that buffer space accordingly based on the user priority of the data to be transmitted;
    if the user priority of the data to be transmitted is lower than or equal to that of the first target to-be-transmitted data, dropping the data to be transmitted.
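One possible reading of claims 2–5 as a single decision procedure, invoked only after first-level congestion has been detected, is sketched below. The `Packet` type, function name, and return labels are illustrative assumptions, not claim language:

```python
from collections import namedtuple

Packet = namedtuple("Packet", "user_prio droppable")

def first_level_congestion(pkt, queue, capacity):
    """First-level congestion processing on one to-be-transmitted packet."""
    if pkt.droppable:
        return "dropped"                    # claim 3: droppable data is dropped
    if len(queue) < capacity:
        queue.append(pkt)                   # claim 4: free space -> buffer it
        return "buffered"
    # claim 4: no free space -> compare against lowest-weight buffered data
    low = min(range(len(queue)), key=lambda i: queue[i].user_prio)
    if pkt.user_prio > queue[low].user_prio:
        queue[low] = pkt                    # claim 5: evict lowest-weight entry
        return "replaced"
    return "dropped"                        # claim 5: otherwise drop new data
```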
  6. The method according to claim 1, characterized in that acquiring, from the first buffer queue, the data needed by each remote unit according to the data transmission amount corresponding to that remote unit comprises:
    determining, from the first buffer queue, the to-be-transmitted data that needs to be sent to each remote unit;
    extracting, according to the data transmission amount corresponding to each remote unit, the data needed by that remote unit from the to-be-transmitted data that needs to be sent to it.
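Claim 6 amounts to taking, for each remote unit, at most its link's transmittable amount from the data pending for it. A minimal sketch (the dict-of-lists layout and names are illustrative assumptions):

```python
def fetch_needed(first_queue, capacities):
    """Extract each remote unit's needed data, bounded by its link capacity."""
    needed = {}
    for ru, cap in capacities.items():
        pending = first_queue.get(ru, [])
        needed[ru] = pending[:cap]        # extract what the link can carry
        first_queue[ru] = pending[cap:]   # the rest stays buffered
    return needed
```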
  7. The method according to claim 1, characterized in that the method further comprises:
    detecting, according to the service priority of the data needed by each remote unit, whether the data needed by each remote unit is congested;
    if congestion exists, performing congestion processing on the needed data according to the storage state of the second buffer queue, wherein the storage state of the second buffer queue indicates whether free buffer space exists in the second buffer queue.
  8. The method according to claim 7, characterized in that performing congestion processing on the needed data according to the storage state of the second buffer queue comprises:
    if free buffer space exists in the second buffer queue, buffering the needed data in the free buffer space, and resetting the weight of the buffer space holding the needed data according to the service priority of the needed data;
    if no free buffer space exists in the second buffer queue, comparing the service priority of the needed data with the service priority of second target to-be-transmitted data in the second buffer queue, and performing congestion processing on the needed data according to the comparison result, wherein the second target to-be-transmitted data is the remote-unit-needed data buffered in the lowest-weight buffer space of the second buffer queue.
  9. The method according to claim 8, characterized in that performing congestion processing on the needed data according to the comparison result comprises:
    if the service priority of the needed data is higher than that of the second target to-be-transmitted data, dropping the second target to-be-transmitted data, buffering the needed data in the buffer space freed by dropping the second target to-be-transmitted data, and modifying the weight of that buffer space accordingly based on the user priority of the needed data, wherein the second target to-be-transmitted data is the remote-unit-needed data buffered in the lowest-weight buffer space of the second buffer queue;
    if the service priority of the needed data is lower than or equal to that of the second target to-be-transmitted data, dropping the needed data.
  10. The method according to claim 1, characterized in that the method further comprises:
    receiving the data to be transmitted;
    marking the user priority, service priority and droppability of the data to be transmitted according to its attribute information, wherein the attribute information comprises the user type, service type and usage type of the data to be transmitted; and
    storing the data to be transmitted into the first buffer queue of the corresponding weight according to the user priority of the data to be transmitted.
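The marking step of claim 10 can be sketched as a lookup from attribute information to the three tags. The dictionary keys and priority maps are illustrative assumptions; the claim does not specify concrete types or values:

```python
def mark(data, user_prio_map, service_prio_map, droppable_types):
    """Tag incoming data with user priority, service priority and
    droppability derived from its attribute information."""
    return {
        "payload": data["payload"],
        # user type -> user priority (unknown types default to lowest)
        "user_prio": user_prio_map.get(data["user_type"], 0),
        # service type -> service priority
        "service_prio": service_prio_map.get(data["service_type"], 0),
        # usage type decides whether the data may be dropped under congestion
        "droppable": data["usage_type"] in droppable_types,
    }
```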
  11. The method according to claim 1, characterized in that sending the data needed by each remote unit in the second buffer queue to the respective remote unit comprises:
    sending the data needed by each remote unit in the second buffer queue to the respective remote unit by round-robin scheduling.
  12. The method according to claim 1, characterized in that detecting the data transmission amount that can be transmitted on the link between the near-end unit and each remote unit comprises:
    detecting the air-interface transmission quality between the near-end unit and each remote unit; and
    determining the data transmission amount corresponding to each remote unit according to each air-interface transmission quality.
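One way claim 12's quality-to-amount mapping could look in practice is a stepwise rate table keyed on measured link quality. The SNR thresholds, bits-per-symbol values, and interval size below are purely illustrative assumptions; the claim leaves the mapping unspecified:

```python
def capacity_from_quality(snr_db, symbols=1000):
    """Map measured air-interface quality (SNR in dB) to the number of
    bytes the link can carry per scheduling interval."""
    if snr_db >= 20:
        bits_per_symbol = 6      # good link: high-order modulation
    elif snr_db >= 10:
        bits_per_symbol = 4      # fair link
    elif snr_db >= 0:
        bits_per_symbol = 2      # poor link
    else:
        bits_per_symbol = 0      # link unusable: transmit nothing
    return bits_per_symbol * symbols // 8   # bytes per interval
```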
  13. A data congestion processing apparatus, characterized in that the apparatus comprises:
    a detection module configured to detect the data transmission amount that can be transmitted on the link between a near-end unit and each remote unit;
    a first processing module configured to acquire, from a first buffer queue, the data needed by each remote unit according to the data transmission amount corresponding to that remote unit, the first buffer queue containing data that has undergone congestion processing according to the user priority of data to be transmitted;
    a second processing module configured to buffer the data needed by each remote unit into a second buffer queue according to the service priority of that data, the second buffer queue containing data that has undergone congestion processing according to the service priority of the data needed by each remote unit; and
    a sending module configured to send the data needed by each remote unit in the second buffer queue to the respective remote unit.
PCT/CN2020/138089 2020-09-18 2020-12-21 Data congestion processing method and apparatus, computer device, and storage medium WO2022057131A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010984728.8A CN112202681B (zh) 2020-09-18 2020-09-18 Data congestion processing method and apparatus, computer device, and storage medium
CN202010984728.8 2020-09-18

Publications (1)

Publication Number Publication Date
WO2022057131A1 true WO2022057131A1 (zh) 2022-03-24

Family

ID=74015525

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138089 WO2022057131A1 (zh) 2020-09-18 2020-12-21 数据拥塞处理方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN112202681B (zh)
WO (1) WO2022057131A1 (zh)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070201499A1 (en) * 2006-02-24 2007-08-30 Texas Instruments Incorporated Device, system and/or method for managing packet congestion in a packet switching network
CN101938403A * 2009-06-30 2011-01-05 China Telecom Corporation Limited Method for guaranteeing quality of service for multiple users and multiple services, and service access control point
CN102811159A * 2011-06-03 2012-12-05 ZTE Corporation Method and apparatus for scheduling uplink services
WO2013082789A1 * 2011-12-08 2013-06-13 Huawei Technologies Co., Ltd. Congestion control method and apparatus
CN103596224A * 2012-08-13 2014-02-19 Shanghai Research Center for Wireless Communications Resource scheduling method based on multi-level mapping in high-speed mobile environments
CN105591970A * 2015-08-31 2016-05-18 Hangzhou H3C Technologies Co., Ltd. Flow control method and apparatus
CN110290554A * 2019-06-28 2019-09-27 Comba Telecom Systems (China) Ltd. Data transmission processing method, apparatus and communication device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101232455B * 2008-02-04 2011-05-11 ZTE Corporation Congestion control method and apparatus
CN101753440A * 2009-12-18 2010-06-23 Huawei Technologies Co., Ltd. Active queue management method, apparatus and radio network controller
CN101800699A * 2010-02-09 2010-08-11 Shanghai Huawei Technologies Co., Ltd. Method and apparatus for discarding packets
CN102291779B * 2010-06-17 2014-01-01 TD Tech Ltd. Method for scheduling user-plane data
CN104092619B * 2014-07-25 2017-07-21 Huawei Technologies Co., Ltd. Flow control method and apparatus
CN107820275B * 2017-10-18 2021-09-14 China United Network Communications Group Co., Ltd. Congestion handling method for UDP services in a mobile network, and base station
CN111355673A * 2018-12-24 2020-06-30 Sanechips Technology Co., Ltd. Data processing method, apparatus, device and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023241649A1 (en) * 2022-06-17 2023-12-21 Huawei Technologies Co., Ltd. Method and apparatus for managing a packet received at a switch
CN116204455A * 2023-04-28 2023-06-02 Alibaba DAMO Academy (Hangzhou) Technology Co., Ltd. Cache management system and method, private-network cache management system, and device
CN116204455B * 2023-04-28 2023-09-22 Alibaba DAMO Academy (Hangzhou) Technology Co., Ltd. Cache management system and method, private-network cache management system, and device

Also Published As

Publication number Publication date
CN112202681B (zh) 2022-07-29
CN112202681A (zh) 2021-01-08


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/08/2023)