CN112583726A - Flow control method and device

Flow control method and device

Info

Publication number
CN112583726A
Authority
CN
China
Prior art keywords
message
flow control
client
server
control policy
Prior art date
Legal status
Granted
Application number
CN201910924070.9A
Other languages
Chinese (zh)
Other versions
CN112583726B (en)
Inventor
李华
段宝平
乔佳伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910924070.9A
Priority to PCT/CN2020/117212
Publication of CN112583726A
Application granted
Publication of CN112583726B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/12: Applying verification of the received information
    • H04L63/123: Applying verification of the received information, received data contents, e.g. message integrity
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The application relates to the field of communication technologies and discloses a flow control method and device for solving the problem that the current flow control approach is unreasonable and inflexible. The method includes the following steps: a server receives a first message sent by a client, where the first message is used to indicate that the client supports flow control; the server sends a second message for the first message to the client, where the second message includes a first flow control policy, the first flow control policy is used to determine a first number of service request messages that the client is allowed to send to the server, and the first number is an integer greater than or equal to 0. When the server determines that the client supports flow control, it may issue a flow control policy to the client to indicate the number of service request messages the client should report. The client can therefore send a reasonable number of service request messages to the server according to the server's indication.

Description

Flow control method and device
Technical Field
The embodiments of the application relate to the field of communication technologies, and in particular, to a flow control method and device.
Background
A client sends service request messages to a server to request services. The traffic handling capacity of the server (the number of service request messages it can process within a period of time) is limited; if a client sends more service request messages than the server can handle, the server may become congested. To avoid congestion, the server may return a service-unavailable response for some of the service request messages.
Currently, a client may perform flow control as follows to avoid congesting the server: the client discards service request messages based on the number of non-service-unavailable responses it has received (i.e., service request messages successfully processed by the server) and on an empirically derived scaling factor. With this scheme, the client easily discards either too many or too few service request messages.
Disclosure of Invention
The embodiments of the application provide a flow control method and a flow control device to solve the problem that the current flow control approach is unreasonable and inflexible.
In a first aspect, a flow control method is provided, where the execution subject of the method may be a server. The server may receive a first message sent by a client, where the first message may be used to indicate that the client supports flow control. The server may further send a second message for the first message to the client, where the second message may include a first flow control policy, and the first flow control policy may be used to determine a first number of service request messages that the client is allowed to send to the server, where the first number is an integer greater than or equal to 0.
When the server determines that the client supports flow control, it may issue a flow control policy to the client to indicate the number of service request messages the client should send. The client can then discard service request messages reasonably according to the server's indication, that is, send a reasonable number of service request messages to the server, so that service data is processed flexibly and reasonably between the client and the server.
In one possible implementation, the first message sent by the client to the server may include a second traffic control policy being executed by the client, and the second traffic control policy may be used to determine the first traffic control policy. The second traffic control policy is used for determining a second number of service request messages allowed to be sent by the client to the server, wherein the second number is an integer greater than or equal to 0.
The client reports not only that it supports flow control but also the second flow control policy it is executing, so that the server can determine the first flow control policy with reference to the second flow control policy, and the first flow control policy formulated for the client better matches the traffic handling capacity of the server and the service requirements of the client.
In a possible implementation, when the server receives the first message, if the server is in a congested state, the server may determine the first traffic control policy according to its traffic handling capacity. The server may formulate a first flow control policy for the client when it is not in the congestion state, or may determine the first flow control policy according to its current traffic handling capacity when it is in the congestion state. Further, if the first message includes a second traffic control policy being executed by the client, the server may determine the first traffic control policy according to its own traffic handling capacity and the second traffic control policy.
In a possible implementation, the server may further receive a fifth message from a superior server, where the fifth message may include a third flow control policy, and the third flow control policy may be used to determine a third number of service request messages that the server is allowed to send to the superior server, where the third number is an integer greater than or equal to 0. Further, the server may determine the first flow control policy according to a local policy of the server and the third flow control policy. Further, if the first message includes a second traffic control policy being executed by the client, the server may determine the first traffic control policy according to a local policy of the server, the second traffic control policy, and the third traffic control policy.
In one possible implementation, in addition to being used to determine the first number of service request messages that the client is allowed to send to the server, the first traffic control policy may further include, but is not limited to: one or more of an effective time of the first traffic control policy, a traffic control mode, a type of service request message, a generation time of the first traffic control policy, and an incremental counter. The traffic control mode includes sending service request messages exceeding the first number to another server for processing, or discarding service request messages exceeding the first number; the incremental counter is used to determine whether the first traffic control policy is the latest traffic control policy.
In one possible implementation, the second traffic control policy, which is used to determine the second number of service request messages that the client is allowed to send to the server, may further include, but is not limited to: one or more of an effective time of the second traffic control policy, a traffic control mode, a type of service request message, a generation time of the second traffic control policy, and an incremental counter. The traffic control mode includes sending service request messages exceeding the second number to another server for processing, or discarding service request messages exceeding the second number; the incremental counter is used to determine whether the second traffic control policy is the latest traffic control policy.
In a possible implementation, the third flow control policy, which is used to determine the third number of service request messages that the server is allowed to send to the superior server, may further include, but is not limited to: one or more of an effective time of the third flow control policy, a traffic control mode, a type of service request message, a generation time of the third flow control policy, and an incremental counter. The traffic control mode includes sending service request messages exceeding the third number to another superior server for processing, or discarding service request messages exceeding the third number; the incremental counter is used to determine whether the third flow control policy is the latest flow control policy.
In one possible implementation, the first message may be a hypertext transfer protocol (HTTP)-based service request message, and the second message may be an HTTP-based service response message.
In a possible implementation, the server may further receive a third message sent by the client, where the third message may include the first traffic control policy being executed by the client and/or an indication that the client supports traffic control; the server may send, to the client, a fourth message for the third message when determining that the congestion state of the server is relieved, where the fourth message may be used to indicate that the congestion state of the server is relieved.
In a possible implementation, the fourth message may include a first field for indicating a congestion state, and the server may indicate, by the value of the first field, that its congestion state is released. For example, the value "00" indicates that the congestion state is released, and the value "01" indicates that the server is in the congestion state.
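As a minimal illustration of interpreting such a field (the constant names, the helper name, and the two values are taken from the example above; they are assumptions rather than an encoding defined by this application):

```python
# Values used in the example above: "00" = congestion released, "01" = congested.
CONGESTION_RELEASED = "00"
CONGESTION_ONGOING = "01"

def congestion_released(first_field: str) -> bool:
    """Return True if the first field signals that the server's congestion state is released."""
    return first_field == CONGESTION_RELEASED
```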
The server informs the client in time whether it is in the congestion state, so that the client can execute the flow control policy reasonably.
In one possible implementation, the third message may be an HTTP-based service request message, and the fourth message may be an HTTP-based service response message.
In a possible implementation, the messages exchanged between the server and the client have integrity protection. For example, the first message has integrity protection; before sending the second message to the client, the server may check the integrity of the first message and send the second message only after the integrity check of the first message passes. The second message may also have integrity protection.
In another example, the third message has integrity protection; before sending the fourth message to the client, the server may perform an integrity check on the third message and send the fourth message only after the integrity check of the third message passes. The fourth message may also have integrity protection.
Integrity protection of the messages exchanged between the server and the client helps defend against attackers.
In a possible implementation, the server may further receive a sixth message sent by the superior server, where the sixth message may include a first field used to indicate a congestion state, and the superior server may indicate, by the value of the first field, that its congestion state is released. When the server learns that the congestion state of the superior server is released, it may determine that its own congestion state is released, and may further send a fourth message to the client, where the fourth message may be used to indicate that the congestion state of the server is released.
In one possible implementation, the fifth message is an HTTP-based service request message, and the sixth message is an HTTP-based service request message.
In a second aspect, a method for flow control is provided, where an execution subject of the method may be a client, and the client may send a first message to a server, where the first message may be used to indicate that the client supports flow control. The client may further receive a second message for the first message from the server, where the second message may include a first traffic control policy, and the first traffic control policy may be used to determine a first number of service request messages allowed to be sent by the client to the server, where the first number is an integer greater than or equal to 0. And the client performs flow control according to the first flow control strategy.
The client informs the server that it supports flow control and performs flow control according to the first flow control policy indicated by the server. The client can therefore discard service request messages reasonably, that is, send a reasonable number of service request messages to the server, so that the service operates properly.
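A minimal client-side sketch of enforcing the first number within one period, assuming the first flow control policy has already been received and parsed (the class and method names are illustrative, not defined by this application):

```python
class RequestBudget:
    """Tracks how many service request messages may still be sent in the current period."""

    def __init__(self, allowed: int):
        self.allowed = allowed   # the first number from the first flow control policy
        self.sent = 0

    def try_send(self) -> bool:
        """Return True if one more service request message may be sent to the server."""
        if self.sent < self.allowed:
            self.sent += 1
            return True
        return False             # excess requests are discarded or redirected per the policy

    def reset(self) -> None:
        """Call at the start of each new period (fixed or non-fixed)."""
        self.sent = 0
```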
In one possible implementation, the first message received by the server may include a second traffic control policy being executed by the client, and the second traffic control policy may be used to determine the first traffic control policy. The second traffic control policy is used for determining a second number of service request messages allowed to be sent by the client to the server, wherein the second number is an integer greater than or equal to 0.
The client can report that it supports flow control and inform the server of the second flow control policy it is currently executing, so that the server can formulate a first flow control policy for the client based on the second flow control policy, and the formulated first flow control policy better matches the traffic handling capacity of the server and the service requirements of the client.
In one possible implementation, in addition to being used to determine the first number of service request messages that the client is allowed to send to the server, the first traffic control policy may further include, but is not limited to, at least one of: the effective time of the first traffic control policy, the traffic control mode, the type of service request message, the generation time of the first traffic control policy, and an incremental counter. The traffic control mode includes sending service request messages exceeding the first number to another server for processing, or discarding service request messages exceeding the first number; the incremental counter is used to determine whether the first traffic control policy is the latest traffic control policy.
In one possible implementation, the second traffic control policy, which is used to determine the second number of service request messages that the client is allowed to send to the server, may further include, but is not limited to: one or more of the effective time of the second traffic control policy, the traffic control mode, the type of service request message, the generation time of the second traffic control policy, and an incremental counter. The traffic control mode includes sending service request messages exceeding the second number to another server for processing, or discarding service request messages exceeding the second number; the incremental counter is used to determine whether the second traffic control policy is the latest traffic control policy.
In a possible implementation, if the first traffic control policy includes the generation time of the first traffic control policy, before the client performs traffic control according to the first traffic control policy, the client may further determine whether the generation time of the executing second traffic control policy is earlier than the generation time of the first traffic control policy; if yes, flow control can be carried out according to the first flow control strategy; if not, flow control can be carried out according to the second flow control strategy.
The client may determine, according to the generation time of each flow control policy, which flow control policy is the newest, that is, the latest flow control policy, and perform flow control according to the latest flow control policy so as to process service data more reasonably.
In one possible implementation, if an incremental counter is included in the first traffic control policy, the client may further compare, before performing traffic control according to the first traffic control policy, the incremental counter in the first traffic control policy with the incremental counter of the second traffic control policy being executed, to determine whether to use the first traffic control policy.
The client can determine the latest flow control policy according to the incremental counter of each flow control policy, and perform flow control according to the latest flow control policy so as to process service data more reasonably.
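A minimal client-side sketch of this freshness check, assuming the generation time and incremental counter are optional fields of the policy (the parameter names and the fallback behaviour when neither field is present are assumptions):

```python
from typing import Optional

def use_received_policy(current_generation_time: Optional[float],
                        current_counter: Optional[int],
                        received_generation_time: Optional[float],
                        received_counter: Optional[int]) -> bool:
    """Return True if the newly received (first) policy should replace the
    policy currently being executed (the second policy)."""
    # Prefer the incremental counter when both policies carry one.
    if current_counter is not None and received_counter is not None:
        return received_counter > current_counter
    # Otherwise fall back to the generation time, as described above.
    if current_generation_time is not None and received_generation_time is not None:
        return current_generation_time < received_generation_time
    # With nothing to compare, adopt the policy issued by the server.
    return True
```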
In one possible implementation, the first message may be an HTTP-based service request message, and the second message may be an HTTP-based service response message.
In a possible implementation, the client may further send a third message to the server, where the third message may include the second traffic control policy being executed by the client and/or an indication that the client supports traffic control. The client may further receive a fourth message sent by the server for the third message, where the fourth message may be used to indicate that the congestion state of the server is released; the client then no longer executes the first traffic control policy. The fourth message is sent by the server after receiving the first flow control policy and when it determines that its congestion state is released.
In a possible implementation, the fourth message may include a first field for indicating a congestion state, and the server may indicate, by the value of the first field, that its congestion state is released. For example, the value "00" indicates that the congestion state is released, and the value "01" indicates that the server is in the congestion state.
The server informs the client in time whether it is in the congestion state, so that the client can execute the flow control policy reasonably.
In one possible implementation, the third message may be an HTTP-based service request message, and the fourth message may be an HTTP-based service response message.
In a possible implementation, the messages exchanged between the server and the client may have integrity protection. For example, the second message has integrity protection; before performing flow control according to the first flow control policy, the client may check the integrity of the second message and perform flow control according to the first flow control policy only after the integrity check of the second message passes. The first message may also have integrity protection.
In another example, the fourth message may have integrity protection; before determining to no longer execute the first traffic control policy, the client may perform an integrity check on the fourth message and determine not to execute the first flow control policy only after the integrity check of the fourth message passes. The third message may also have integrity protection.
Integrity protection of the messages exchanged between the server and the client helps defend against attackers.
In a third aspect, a communication device is provided, where the communication device has a function of implementing a server in the above method embodiments. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more functional modules corresponding to the above functions.
In a fourth aspect, a communication device is provided, which has a function of implementing a client in the above method embodiments. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more functional modules corresponding to the above functions.
In a fifth aspect, a communication device is provided, where the communication device may be the server in the above method embodiment, or a chip disposed in the server. The communication device comprises a transceiver and a processor, and optionally a memory, wherein the memory is used for storing a computer program or instructions, and the processor is coupled to the memory and the transceiver respectively, and when the processor executes the computer program or instructions, the communication device is enabled to execute the method executed by the server in the method embodiment.
In a sixth aspect, a communication device is provided, and the communication device may be the client in the above method embodiment, or a chip provided in the client. The communication device comprises a transceiver and a processor, and optionally a memory, wherein the memory is used for storing a computer program or instructions, and the processor is coupled with the memory and the transceiver respectively, and when the processor executes the computer program or instructions, the communication device is caused to execute the method executed by the client in the method embodiment.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program code for causing a computer to perform the method performed by the server in any of the above described first aspect and possible implementations of the first aspect when said computer program code is run on a computer.
In an eighth aspect, there is provided a computer program product comprising: computer program code for causing a computer to perform the method performed by the client in any of the above described second aspect and possible implementations of the second aspect when said computer program code is run on a computer.
In a ninth aspect, the present application provides a chip system, which includes a processor and a memory, where the processor and the memory are electrically coupled; the memory is configured to store computer program instructions; the processor is configured to execute part or all of the computer program instructions in the memory, and when the part or all of the computer program instructions are executed, the processor is configured to implement the functions of the server in the method according to any one of the foregoing first aspect and the possible implementations of the first aspect.
In one possible design, the chip system further includes a transceiver, and the transceiver is configured to transmit a signal processed by the processor or receive a signal input to the processor. The chip system may be formed by a chip, or may include a chip and other discrete devices.
In a tenth aspect, a chip system is provided, which includes a processor and a memory, wherein the processor and the memory are electrically coupled; the memory to store computer program instructions; the processor is configured to execute part or all of the computer program instructions in the memory, and when the part or all of the computer program instructions are executed, the processor is configured to implement the functions of the client in the method according to any one of the second aspect and the possible implementation of the second aspect.
In one possible design, the chip system further includes a transceiver, and the transceiver is configured to transmit a signal processed by the processor or receive a signal input to the processor. The chip system may be formed by a chip, or may include a chip and other discrete devices.
In an eleventh aspect, a computer-readable storage medium is provided, where the computer-readable storage medium stores a computer program, and when the computer program is executed, the method performed by the server in any one of the foregoing first aspect and the possible implementations of the first aspect is implemented.
In a twelfth aspect, the present application provides a computer-readable storage medium, which stores a computer program that, when executed, implements the method performed by the client in any of the second aspect and the possible implementations of the second aspect.
In a thirteenth aspect, a communication system is provided, which may comprise a server performing the method described in any of the possible implementations of the first aspect and the first aspect above, and a client performing the method described in any of the possible implementations of the second aspect and the second aspect above.
Drawings
Fig. 1 is a schematic structural diagram provided in an embodiment of the present application;
fig. 2 is a flow chart of flow control provided in an embodiment of the present application;
fig. 3 is a flow chart of flow control provided in an embodiment of the present application;
fig. 4 is a flow chart of flow control provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a communication device provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a communication device provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a communication device provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a communication device provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In order to facilitate understanding of the embodiments of the present application, some terms of the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1) A terminal, also referred to as user equipment (UE), a mobile station (MS), a mobile terminal (MT), etc., is a device that provides voice and/or data connectivity to a user. For example, terminal devices include handheld devices, in-vehicle devices, and internet-of-things devices having a wireless connection function. Currently, a terminal device may be: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, and the like.
2) Client: the end that actively sends a service request message to the server, or the end that invokes a service of the server.
3) Server: the end that feeds back a response to the client after receiving a service request message sent by the client, or the end that provides a service for the client.
4) Integrity protection refers to ensuring that information or data is not altered without authorization, or that any alteration can be discovered quickly, during transmission, storage, and so on. It should be noted that a message with integrity protection generally carries message authentication code (MAC) information. When such a message is checked, the same hash algorithm may be used to compute MAC information over the content of the message; the computed MAC information is compared with the MAC information carried in the message, and if the two are the same, the integrity check passes (a minimal sketch of such a check is given after this list of terms).
In addition, it should be noted that a message with integrity protection in the present application may also have confidentiality protection and/or anti-replay protection. Confidentiality protection refers to the property that information cannot be accessed or disclosed by unauthorized persons, entities, or processes. Anti-replay protection means that the receiving party can detect and discard a message replayed by an attacker.
5) Traffic handling capacity refers to the number of service request messages that can be handled over a period of time.
6) Congestion, which means that the number of service request messages received within a period of time exceeds the number of service request messages that can be processed.
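As a minimal sketch of the integrity check described in term 4), using an HMAC over the message content as the MAC (the actual algorithm, key length, and key distribution are not specified in this application and are assumed here):

```python
import hashlib
import hmac

def mac_of(message_content: bytes, key: bytes) -> bytes:
    """Compute MAC information over the message content (HMAC-SHA-256 assumed)."""
    return hmac.new(key, message_content, hashlib.sha256).digest()

def integrity_check(message_content: bytes, received_mac: bytes, key: bytes) -> bool:
    """Recompute the MAC with the same algorithm and compare it with the MAC carried in the message."""
    return hmac.compare_digest(mac_of(message_content, key), received_mac)
```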
"and/or" in the present application, describing an association relationship of associated objects, means that there may be three relationships, for example, a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The plural in the present application means two or more.
In the description of the present application, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, nor order.
In addition, in the embodiments of the present application, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any embodiment or implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or implementations. Rather, the term using examples is intended to present concepts in a concrete fashion.
The embodiments of the application provide a flow control method and device that are based on the same technical concept. Because the principles by which the method and the device solve the problem are similar, the implementations of the device and the method may refer to each other, and repeated descriptions are not given again.
The technical scheme of the embodiment of the application can be applied to various communication systems, for example: long Term Evolution (LTE) systems, Worldwide Interoperability for Microwave Access (WiMAX) communication systems, future fifth Generation (5G) systems, such as new radio access technology (NR), future communication systems, and the like.
Fig. 1 is a schematic diagram of a possible communication system architecture applicable to the present application, including: a UE, a (radio) access network ((R)AN), a UPF network element, a DN network element, an AUSF network element, an AMF network element, an SMF network element, an NSSF network element, an NRF network element, a PCF network element, a UDM network element, and an AF network element. These network elements may also be referred to as functional entity devices; the UPF, DN, AUSF, AMF, SMF, NSSF, NRF, PCF, UDM, and AF network elements may be core network elements. In this network architecture, Nnssf is a service-based interface presented by the NSSF, Nausf is a service-based interface presented by the AUSF, Namf is a service-based interface presented by the AMF, Nsmf is a service-based interface presented by the SMF, Nnef is a service-based interface presented by the NEF, Nnrf is a service-based interface presented by the NRF, Npcf is a service-based interface presented by the PCF, Nudm is a service-based interface presented by the UDM, and Naf is a service-based interface presented by the AF. A service-based interface may also be referred to as a servitized interface. Each network element may provide services through one or more service interfaces that are called by other network elements, and may also call the services of the service interfaces of other network elements.
N1 is a reference point between the UE and the AMF; N2 is a reference point between the (R)AN and the AMF, used for transmission of non-access-stratum messages and the like; N3 is a reference point between the (R)AN and the UPF, used for transmitting user plane data and the like; N4 is a reference point between the SMF and the UPF, used for transmitting information such as tunnel identification information of the N3 connection, data cache indication information, and downlink data notification messages; the N6 interface is a reference point between the UPF and the DN, used for transmitting user plane data and the like.
The access and mobility management function (AMF) network element has core network control plane functions and provides user mobility management and access management; the SEAF (security anchor function) provides security authentication and management and the derivation of security keys.
Network storage function (NRF) network element: the method is used for storing information of network functions deployed in a core network, providing discovery of the network functions and services, and the like.
User Plane Function (UPF) network element: for packet routing and forwarding, quality of service (QoS) handling of user plane data, etc.
Data Network (DN) network elements: for providing a network for transmitting data.
Authentication server function (AUSF) network element: the method is used for realizing authentication and authorization of the user and the like.
Session Management Function (SMF) network element: the method is mainly used for session management, Internet Protocol (IP) address allocation and management of terminal equipment, selection of a termination point capable of managing a user plane function, a policy control and charging function interface, downlink data notification and the like.
Network open function (NEF) network element: for securely opening services and capabilities etc. provided by the 3GPP network function element to the outside.
Policy Control Function (PCF) network element: the unified policy framework is used for guiding network behavior, providing policy rule information for control plane function network elements (such as AMF, SMF network elements and the like), and the like.
Application function (AF) network element: an apparatus for managing a terminal, which stores attribute information of the managed terminal, such as the location information and type of the terminal.
Unified Data Management (UDM) network elements: for handling subscriber identities, access authentication, registration, mobility management, etc.
Network slice selection function (NSSF) network element: used to select a set of network slice instances for serving the UE.
It should be understood that the network architecture applied to the embodiment of the present application is only an exemplary network architecture described in the service architecture, and the network architecture to which the embodiment of the present application is applied is not limited thereto, and any network architecture capable of implementing the functions of the network elements described above is applicable to the embodiment of the present application.
To facilitate understanding of the embodiments of the present application, an application scenario of the present application is introduced next. The service scenario described in the embodiments of the present application is intended to explain the technical solutions more clearly and does not limit the technical solutions provided herein. A person skilled in the art will appreciate that, as new service scenarios emerge, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
The UE side or the (R)AN side may request a service from a network element on the core network side, and a lower-level network element on the core network side may also send a service request message to an upper-level network element to request a service. Referring to fig. 1, in one example, after receiving a request from the UE side or the (R)AN side, an AMF network element (lower-level network element) may send a service request message to a UDM network element (upper-level network element); the UDM network element processes the service request message and feeds back a service response message to the AMF. After receiving the service response message for the service request message, the AMF network element feeds back a response to the UE side or the (R)AN side. In this application, the end that actively sends the service request message, or the end that invokes the service of the other end, is referred to as the client; the end that feeds back a response to the client after receiving the service request message, or the end that provides a service for the client, is referred to as the server. If the AMF network element sends the service request message directly to the UDM network element, the AMF network element is the client and the UDM network element is the server.
In another example, the AMF network element communicates with the UDM network element through an AUSF network element: the AMF network element sends a service request message to the AUSF network element, and the AUSF network element sends the service request message to the UDM network element. When the AUSF network element receives the service request message from the AMF network element, the AMF network element is the client and the AUSF network element is the server. When the AUSF network element sends the service request message to the UDM network element, the AUSF network element is the client and the UDM network element is the server. The role of the AUSF network element thus changes depending on which network element it communicates with; a network element can be either a client or a server.
The traffic handling capacity of the server is limited; if clients send a large number of service request messages to the server within a period of time and exceed the traffic handling capacity of the server, the server becomes congested. To avoid congestion, the server may return a service-unavailable response for some of the service request messages.
Currently, a client may perform flow control as follows to avoid congesting the server: the client estimates the traffic handling capacity that the server has for this client based on the number of non-service-unavailable responses it has received (service request messages successfully processed by the server) and on an empirically derived scaling factor. The client then discards the service request messages that exceed the estimated traffic handling capacity the server has for this client.
In the current flow control scheme, the client cannot accurately estimate the real situation of the server, so it easily discards either too many or too few service request messages. Moreover, the server does not know whether a reduction in the client's service request messages is caused by the client performing flow control or by the client simply having fewer service request messages to send. Generally, multiple clients request services from the same server, and one client may perform flow control while the others do not. If the server replies with service-unavailable responses to each client according to a uniform policy (for example, a fixed proportion of the received service request messages), the client that actively performs flow control has more and more of its request messages discarded, and fewer and fewer of its service request messages are processed normally.
For example, three clients c1, c2, and c3 are connected to the server, and c1, c2, and c3 each send 200 service request messages to the server. According to its own traffic handling capacity, the server determines that the number of request messages it cannot successfully process (for example, that it drops or answers with a service-unavailable response) is 600 × 50% = 300. The server may therefore process 100 service request messages from each of c1, c2, and c3, and drop or reply with a service-unavailable response to the other 100 from each. If c3 performs flow control at this point, c3 considers that the server can only process 100 of its service request messages; even if c3 has 150, 200, or more service request messages in the next period, it sends only about 100 to the server and discards the rest. The server does not know that c3 reported only 100 service request messages because it performed flow control; it assumes that c3 only has 100 service request messages and, applying the same uniform proportion to the now smaller total, processes only 60 of the 100 reported by c3. The client then considers that the server can only process 60 of its service request messages and reports only about 60 in the next period, and so on, so that the client that actively performs flow control has more and more of its request messages discarded and fewer and fewer of its service request messages processed normally.
Based on this, an embodiment of the present application provides a flow control method in which the client notifies the server that it has flow control capability, and the server may issue a flow control policy to the client. The flow control policy is used to determine the number of service request messages that the server expects the client to send to it, or the number of service request messages that the server expects the client to discard. The client can send a reasonable number of service request messages to the server according to the flow control policy issued by the server, so that both parties process service data more flexibly and reasonably.
As shown in fig. 2, a flow chart for flow control is provided. The client shown in fig. 2 may be the AMF network element shown in fig. 1 with the UDM network element as the corresponding server, the AMF network element with the AUSF network element as the corresponding server, or the AUSF network element with the UDM network element as the corresponding server. Only a few examples of clients and servers are given here; a person skilled in the art can reasonably devise other examples of clients and servers based on the service requirements of the actual application.
Step 201: The client sends a first message to the server, and correspondingly, the server receives the first message sent by the client. The first message is used to indicate that the client supports flow control.
A lower-level network element may send a service request message to an upper-level network element to request a service, and the upper-level network element replies with a service response message to provide the service. The client, as a network element, may therefore send a service request message to an upper-level network element (i.e., the server) when it receives a service request message from a lower-level network element, and may inform the server that it supports flow control when sending this service request message. For example, the first message may be a service request message, and further, an HTTP-based service request message. The first message may carry an indication that the client supports flow control; the indication may be carried in the header of the HTTP message or in the body of the HTTP message. The first message may also be a message separate from the service request message, for example a newly defined message. The first message is not limited in this application.
In addition to reporting its flow control capability to the server, the client may also report the flow control modes it supports, so that the server can formulate a more reasonable flow control policy for it. The first message may therefore further include the flow control modes supported by the client, which may be sending service request messages to another server for processing, discarding service request messages, or both.
If the client is currently executing a second flow control policy, the client may also notify the server of the second flow control policy being executed, so that the server can formulate a more reasonable flow control policy for the client; that is, the first message may further include the second flow control policy.
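For illustration only, the first message could be an HTTP service request that carries the indication in a custom header or in the body; the header name, JSON field names, and URL below are assumptions, not names defined by this application or by 3GPP:

```python
import requests  # generic HTTP client used here for brevity; any SBI-capable HTTP stack would do

SERVER_URL = "https://server.example/some-service/v1/resource"  # placeholder URL

response = requests.post(
    SERVER_URL,
    headers={"X-Flow-Control-Supported": "true"},       # indication carried in the HTTP header
    json={
        "flowControl": {                                 # or: indication carried in the HTTP body,
            "supported": True,                           # optionally with the supported modes and
            "supportedModes": ["discard", "redirect"],   # the second policy being executed
            "currentPolicy": {"allowedCount": 120},
        },
        # ... the actual service request payload would go here ...
    },
    timeout=5,
)
print(response.status_code)
```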
Step 202: The server sends a second message for the first message to the client, and correspondingly, the client receives the second message sent by the server, where the second message includes a first flow control policy.
After receiving the indication, sent by the client, that the client supports flow control, the server may formulate a first flow control policy for the client and send the formulated first flow control policy to the client. The first traffic control policy may be used to determine a first number of service request messages that the client is allowed to send to the server, where the first number is an integer greater than or equal to 0. The first number may be the number of service request messages allowed to be sent within a fixed time period, or within a non-fixed time period. A fixed duration is, for example, one cycle; a non-fixed duration is, for example, the interval between two service response messages fed back by the server to the client, or the interval until a specific type of service response message is fed back. The first number may be a relative number, for example a proportion by which the server instructs the client to reduce its service request messages, or an absolute number, for example a value by which the server indicates how many service request messages the client may send.
Further, the first traffic control policy may also include, but is not limited to, at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and the increment counter.
The effective time can be a duration, such as 3 s or 10 s, or a point in time, such as 23:20 on May 1, 2019. If the effective time is a duration, it may be expressed as a specific time value, for example 10 s, or as a number of cycles; for example, the server may agree with the client in advance that one cycle lasts 2 s, and the effective time may then be 5 cycles. The client may execute the first traffic control policy for the effective time of the first traffic control policy.
The flow control mode includes: the client sends service request messages exceeding the first number to another server for processing, or discards service request messages exceeding the first number. The identifiers of the other servers may be preconfigured on the client, the client may query them through the network storage function (NRF), or the flow control policy may carry them.
The type of service request message indicates for which types of messages the server instructs the client to perform flow control; other types of messages may be exempt from flow control. The type of service request message may be, for example, a subscription data change request message, and the specific types may refer to the definitions in the 3GPP specifications.
The generation time of the first flow control policy may be used to determine whether the first flow control policy is the most up-to-date flow control policy.
The incremented counter of the first flow control policy may also be used to determine whether the first flow control policy is the most current flow control policy.
The client may inform the server of the second traffic control policy it is executing. The second traffic control policy may be used to determine a second number of service request messages that the client is allowed to send to the server, and may further include, but is not limited to: one or more of the effective time of the second traffic control policy, the traffic control mode, the type of service request message, the generation time of the second traffic control policy, and an incremental counter. The traffic control mode includes: the client sends service request messages exceeding the second number to another server for processing, or discards service request messages exceeding the second number; the incremental counter is used to determine whether the second traffic control policy is the latest traffic control policy.
If the client reports the second traffic control policy in the first message, the first traffic control policy sent by the server to the client may include one or more of the effective time, the traffic control mode, the type of service request message, the generation time, and the incremental counter described above; alternatively, the server may notify the client only of the differences between the first traffic control policy and the second traffic control policy. The client then adjusts the second traffic control policy it is executing according to the notified differences, and the adjusted policy is the first traffic control policy.
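The policy fields above can be pictured as a small structure carried in the second message. This is only a sketch; the field names, types, and encoding are assumptions, since the application does not mandate any particular representation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ControlMode(Enum):
    REDIRECT_TO_OTHER_SERVER = "redirect"     # send excess requests to another server
    DISCARD = "discard"                       # drop excess requests

@dataclass
class FlowControlPolicy:
    allowed_count: int                        # first number: requests the client may send (>= 0)
    valid_time_s: Optional[float] = None      # effective time, e.g. a duration in seconds or cycles
    control_mode: Optional[ControlMode] = None
    request_types: List[str] = field(default_factory=list)  # message types the policy applies to
    generation_time: Optional[float] = None   # when the policy was generated
    counter: Optional[int] = None             # incremental counter for "latest policy" checks
```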
The server may formulate a first flow control policy for the client in any state, as long as it receives the indication of flow control support sent by the client. In a specific example, the server may instead formulate the first traffic control policy for the client only when it is in a congestion state: after receiving the indication that the client supports flow control, if the server is currently in a congestion state, it formulates a first flow control policy for the client according to its traffic handling capacity; if the server is not currently in a congestion state, it may ignore the indication of flow control support sent by the client.
Specifically, the process by which the server, when in the congestion state, formulates a first traffic control policy for the client according to its traffic handling capacity is described as follows:
the server can estimate the load degree of the cycle according to the use conditions of a plurality of resources (such as the use rate of a CPU, the message processing time and the like). For example, if the CPU utilization rate is greater than 98%, the cycle is estimated to be an extremely high load; the CPU utilization rate is between 90% and 98%, and the cycle is estimated to be high load; the CPU utilization rate is between 80% and 90%, and the period is estimated to be a higher load; the CPU utilization is lower than 80%, and the cycle is estimated to be light load. For another example, if the message processing delay is longer than 3 seconds, the cycle is estimated to be very high load, if the message processing delay is 1 second to 3 seconds, the cycle is estimated to be high load, and if the message processing delay is shorter than 1 second, the cycle is estimated to be light load. The load degree of the server may be the highest degree of the loads corresponding to the plurality of resources, or may be obtained by performing comprehensive calculation according to the load degrees corresponding to the plurality of resources.
The server may estimate the number of service request messages it can process in the current cycle (whether successfully or not), that is, its traffic handling capacity, from the number of service request messages processed in the previous cycle, the load level of the previous cycle, and the load trend. The server may also estimate the number of service request messages it will receive in the current cycle from the number of service request messages received in the previous cycle.
When performing flow control, the system load should be kept as high as possible. For example, if 1000 messages were processed in the previous cycle at an extremely high load, and the load of this cycle is also estimated to be extremely high, i.e., unchanged, it is considered that 900 messages can be processed in this cycle (the load is extremely high, so the number of processed messages needs to be reduced). As another example, if 1000 messages were processed in the previous cycle at a high load, and the load of this cycle is also estimated to be high and unchanged, it is considered that 1000 messages can be processed in this cycle (the system is stable at high load, and the number may not need to be adjusted). As a further example, if 1000 messages were processed in the previous cycle at a high load, and the load of this cycle is estimated to be medium, i.e., the load is falling from high to medium, it is considered that 1100 messages can be processed in this cycle (more messages can be processed while the system load is decreasing).
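A sketch of the capacity estimate following the three examples above; the 0.9 and 1.1 adjustment factors are assumptions chosen only to reproduce the 900 / 1000 / 1100 figures:

```python
def estimate_capacity(prev_processed: int, prev_level: int, this_level: int) -> int:
    """Estimate how many service request messages can be processed this cycle.

    Levels follow server_load_level(): 0 light .. 3 extremely high.
    """
    if this_level == prev_level:
        if this_level >= 3:                  # stuck at extremely high load: process fewer
            return int(prev_processed * 0.9)
        return prev_processed                # stable at (or below) high load: keep the number
    if this_level < prev_level:              # load is decreasing: process slightly more
        return int(prev_processed * 1.1)
    return int(prev_processed * 0.9)         # load is increasing: process fewer
```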
Furthermore, the server may estimate the number of service request messages that it cannot process, according to the estimated number of service request messages that can be processed in the current period and the estimated number of service request messages to be received in the current period. A service request message that the server cannot process may be directly discarded, or a failure (e.g., service unavailable) may be returned to the client.
The server may make the total number of service request messages sent by all clients connected to it close to the estimated traffic processing capability (the number of service request messages that can be processed). For example, if the estimated traffic processing capability of the server in the current period is 1000 service request messages, and 2000 service request messages are estimated to be received in the current period, then 1000 service request messages are estimated to be unprocessable. The server may instruct all clients connected to it to send a total of 1000 service request messages.
In order to process more service request messages, the server may also make the total number of service request messages sent by all clients connected to it greater than the estimated traffic processing capability. For example, the server multiplies the number of service request messages it can process by a proportional value, which is generally greater than 1 (for example, 1.1 or 1.2), and uses the result as the total number of service request messages that all clients may send. For example, if the estimated traffic processing capability of the server in the current period is 1000 service request messages, and 2000 service request messages are estimated to be received, then 1000 service request messages are estimated to be unprocessable. With a proportional value of 1.2, the server may instruct all clients connected to it to send a total of 1200 service request messages; that is, for the 1000 service request messages estimated to be unprocessable, the server instructs the clients to discard 800 of them. For the extra 200 of the 1200 messages sent by the clients, the server may return failures according to its own traffic processing capability, or forward these 200 service request messages to other servers for processing after receiving them. When instructing all connected clients to send a total of 1200 service request messages, the server may also instruct the clients to send some service request messages to other servers, so as to disperse the pressure on the current server.
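Putting the two variants together, the total quota announced to the clients for a period could be derived as in the following sketch; the capacity of 1000 messages, the expected 2000 messages, and the proportional value 1.2 are the examples from the preceding paragraphs, and the names are hypothetical.

```python
# Sketch of deriving the total client-side quota for a period, under the
# example values above (capacity 1000, expected 2000, proportional value 1.2).

def client_quota(capacity: int, expected: int, over_admit_ratio: float = 1.0) -> dict:
    quota = min(expected, int(capacity * over_admit_ratio))
    return {
        "quota": quota,                               # total messages clients may send
        "client_discard": expected - quota,           # messages clients are told to drop
        "server_overflow": max(0, quota - capacity),  # handled by failure or forwarding
    }

print(client_quota(1000, 2000))       # quota 1000, client_discard 1000, server_overflow 0
print(client_quota(1000, 2000, 1.2))  # quota 1200, client_discard 800, server_overflow 200
```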
If the load of the server in the current period is higher than that in the previous period, it indicates that the number of service request messages the server instructed the client to discard was not enough, and the client may be instructed to discard more service request messages in the next period. For example, if the client discards 60% of the service request messages in the current period, the client may discard 80% of the service request messages in the next period.
If the load of the current period is significantly lower than the load of the previous period, for example, the load changes from high to low, the server may instruct the client to discard a smaller number of service request messages in the next period, or even not to discard service request messages at all. When the server formulates the flow control policy for the client, it may consider both the current load degree and the variation trend of the load degree.
The second message sent by the server to the client may be a service response message; further, the second message may be an HTTP-based service response message. The first flow control policy may be carried in a header field of the HTTP message, such as an extended Retry-After header field or a non-standard header field such as an X- header, or may be carried in the body of the HTTP message. The second message may also be a message other than the service response message, for example, a newly defined message; the second message is not limited in this application.
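As an illustration only, a server could serialize the first flow control policy into an HTTP response along the lines below; the X- header name and the JSON field names are hypothetical and are not defined by this application or by any standard.

```python
# Illustrative only: carrying a flow control policy in an HTTP service
# response, either in a non-standard header or in the body. The header
# name "X-Flow-Control-Policy" and the JSON fields are hypothetical.
import json

policy = {
    "allowed_messages": 1000,              # the "first number"
    "validity_seconds": 5,                 # effective time of the policy
    "mode": "discard",                     # or "redirect" to other servers
    "generated_at": "2019-09-27T12:00:00Z",
    "counter": 42,                         # incremental counter
}

headers = {
    "Content-Type": "application/json",
    "X-Flow-Control-Policy": json.dumps(policy),   # header-based variant
}
body = json.dumps({"flowControlPolicy": policy})   # body-based variant

print(headers["X-Flow-Control-Policy"])
print(body)
```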
In order to prevent tampering by an attacker, the client may perform integrity protection on the first message. After receiving the first message, the server may perform an integrity check on the first message, and formulate the first flow control policy for the client after the integrity check passes. If the integrity check on the first message fails, the server may ignore the first message, or inform the client that an attacker may exist.
Step 203: the client performs flow control according to the first flow control policy.
After determining, according to the first flow control policy, the first number of service request messages allowed to be sent to the server, the client may send approximately the first number of service request messages to the server, so as to discard service request messages in a reasonable way. It should be noted that the first number may be larger than the number of service request messages that the client needs to send, in which case the client does not need to discard any service request messages.
The client may also have some local flow control policies, for example, certain types of service request messages have a higher priority.
If the first flow control policy issued by the server includes the type of service request message, that is, the server instructs the client to limit the traffic of service request messages of that type, but the client has set the priority of messages of that type to be high, the client may choose not to discard service request messages of that type. Similarly, if a certain session is in progress, even if the server instructs the client to limit the traffic of service request messages of that session, the client may choose not to discard subsequent service request messages of that session, so as to ensure that the session proceeds normally.
The generation time of the first flow control policy may be used to determine whether the first flow control policy is the latest flow control policy. If the first flow control policy includes its generation time, then before performing flow control according to the first flow control policy, the client may further determine whether the generation time of the second flow control policy currently being executed is earlier than the generation time of the first flow control policy; if so, the client performs flow control according to the first flow control policy, and if not, the client may perform flow control according to the second flow control policy.
In this way, the client can determine the latest flow control policy according to the generation time of each flow control policy, and perform flow control according to the latest flow control policy, so as to process service data more reasonably.
The incremental counter of the first flow control policy may also be used to determine whether the first flow control policy is the latest flow control policy. If the first flow control policy includes an incremental counter, then before performing flow control according to the first flow control policy, the client may compare the incremental counter in the first flow control policy with the incremental counter of the second flow control policy currently being executed, to determine which flow control policy to use. For example, the larger the value of the incremental counter, the later the flow control policy was generated, that is, the latest flow control policy. The first flow control policy may be used when its counter is greater than the counter of the second flow control policy.
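A minimal sketch of the freshness check described above follows; it assumes a policy record carrying an optional generation time and an optional incremental counter, and the field names are hypothetical.

```python
# Minimal sketch: pick the newer of two flow control policies by
# incremental counter (preferred) or generation time. Field names are
# hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowControlPolicy:
    allowed_messages: int
    counter: Optional[int] = None
    generated_at: Optional[float] = None  # e.g. a Unix timestamp

def newer_policy(current: FlowControlPolicy, received: FlowControlPolicy) -> FlowControlPolicy:
    if current.counter is not None and received.counter is not None:
        return received if received.counter > current.counter else current
    if current.generated_at is not None and received.generated_at is not None:
        return received if received.generated_at > current.generated_at else current
    return received  # no freshness information: assume the new policy applies

old = FlowControlPolicy(allowed_messages=500, counter=7)
new = FlowControlPolicy(allowed_messages=1000, counter=8)
print(newer_policy(old, new).allowed_messages)  # 1000
```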
If the flow control mode issued by the server is that the client sends the service request messages exceeding the first number to other servers for processing, the flow control policy may carry the identifiers of the other servers. The client may also be preconfigured with the identifiers of the other servers, or the client may query the identifiers of the other servers through a network repository function (NRF).
In order to prevent an attacker from tampering with the first flow control policy, the server may perform integrity protection on the second message. If the second message has integrity protection, the client may perform an integrity check on the second message after receiving it, and perform flow control according to the first flow control policy after the integrity check passes. If the integrity check on the second message fails, the client may ignore the first flow control policy in the second message, or inform the server that an attacker may exist.
When integrity protection is performed on the first message and the second message, the integrity protection key used may be a symmetric key preconfigured between the server and the client, or an asymmetric key mechanism may be used, for example, the first flow control policy is signed with the private key corresponding to the server's certificate.
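For the symmetric-key case, the integrity protection could look like the following sketch using an HMAC; the choice of HMAC-SHA-256 and the message layout are assumptions for illustration, not something specified by this application.

```python
# Sketch of symmetric-key integrity protection for a message carrying a
# flow control policy. HMAC-SHA-256 and the layout are assumptions.
import hashlib
import hmac
from typing import Optional

PRESHARED_KEY = b"preconfigured-key-between-client-and-server"  # hypothetical

def protect(message: bytes, key: bytes = PRESHARED_KEY) -> bytes:
    mac = hmac.new(key, message, hashlib.sha256).digest()
    return message + mac  # append the 32-byte MAC

def verify(protected: bytes, key: bytes = PRESHARED_KEY) -> Optional[bytes]:
    message, mac = protected[:-32], protected[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(mac, expected) else None  # None: ignore or report attacker

wire = protect(b'{"allowed_messages": 1000}')
print(verify(wire))             # b'{"allowed_messages": 1000}'
print(verify(b"X" + wire[1:]))  # None (message was tampered with)
```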
The above describes the process of issuing the first flow control policy for the client when the server is in a congestion state. The congestion state of the server may be relieved as time passes, and the server may then issue an indication of congestion state relief to the client, so that the client can again report service request messages to the server in a reasonable manner.
Referring to fig. 3, the flow control procedure is further described:
Steps 301 to 303 are the same as steps 201 to 203, and details are not repeated here.
Step 304: optionally, the client sends a third message to the server, and correspondingly, the server receives the third message sent by the client, where the third message may include the first flow control policy being executed by the client and/or an indication that the client supports flow control.
Step 305: if the server determines that its congestion state is relieved, the server sends a fourth message to the client, and correspondingly, the client receives the fourth message, where the fourth message is used to indicate the congestion state relief of the server.
Step 306: the client determines that the first traffic control policy is no longer to be executed.
In one example, when determining congestion state release, the server may issue an indication of congestion state release to all clients connected to the server.
In another example, to save signaling overhead, after receiving a flow control policy reported by the client or an indication that the client supports flow control, the server may issue an indication of congestion state relief to the client when determining that its congestion state is relieved, for example, as in step 304 and step 305 in fig. 3.
In another example, the server may also store, against the identifier of each client, the flow control policy issued to that client. The server may monitor in real time whether the flow control policy for each client is still valid. If the congestion state of the server is relieved and the server finds that some flow control policies are still valid, the server may issue congestion state relief indications to the clients whose flow control policies are still valid. For clients whose flow control policies are no longer valid, the server may not issue congestion state relief indications, because those clients no longer execute the flow control policies.
In one example, the fourth message includes a first field for indicating congestion status, and the congestion status release of the server is indicated by a value of the first field.
In another example, the server may indicate the release of the congestion state of the server by setting the validity time of the flow control policy to 0. For example, the fourth message may include a fourth flow control policy, and the effective time of the fourth flow control policy is 0.
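As a sketch only, the client-side interpretation of this "validity time 0" convention could look as follows; the field name is hypothetical.

```python
# Sketch: a received policy whose effective time is 0 is read as an
# indication that the server's congestion state is relieved. Field name
# is hypothetical.
def congestion_relieved(policy: dict) -> bool:
    return policy.get("validity_seconds") == 0

print(congestion_relieved({"validity_seconds": 0}))  # True  -> stop executing the policy
print(congestion_relieved({"validity_seconds": 5}))  # False -> keep applying the policy
```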
The third message may be a service request message; further, the third message may be an HTTP-based service request message. The first flow control policy and/or the indication that the client supports flow control may be carried in a header field of the HTTP message or in the body of the HTTP message. The third message may also be a message other than the service request message, for example, a newly defined message; the third message is not limited in this application.
To avoid attack by the attacker, the client may integrity protect the third message. If the third message has integrity protection, the server may perform integrity check on the third message after receiving the third message, and issue an indication of releasing the congestion state of the server for the client after the integrity check is passed. If the integrity check of the third message is not passed, the server may ignore the third message, or inform the client that an attacker exists.
The fourth message may be a service response message, and further, the fourth message may be an HTTP-based service response message. For example, the fourth message may carry a fourth traffic control policy, and the fourth traffic control policy may be carried in a header field (header) of the HTTP message, for example, an extended Retry-After header field, or a non-standard header field such as X-headers, and may also be carried in a body of the HTTP message. In another example, the fourth message may also carry an identifier of the congestion relief status of the server, for example, a value of a field. The fourth message may also be a message other than the service response message, for example, a new message is defined, and the fourth message is not limited in this application.
In order to prevent tampering by an attacker, the server may perform integrity protection on the fourth message. If the fourth message has integrity protection, the client may perform an integrity check on the fourth message after receiving it, and determine not to execute the first flow control policy after the integrity check passes. If the integrity check on the fourth message fails, the client may ignore the indication of the server's congestion state relief in the fourth message, or inform the server that an attacker may exist.
It should be noted that, as described above, the flow control policy may include the generation time and/or the incremental counter. After receiving the second message, if the client determines, according to the generation time and/or the incremental counter of the first flow control policy, to execute the first flow control policy, the third message includes the first flow control policy; if the client determines, according to the generation time and/or the incremental counter, not to execute the first flow control policy but to continue executing the second flow control policy, the aforementioned third message may include the second flow control policy instead of the first flow control policy. Of course, the third message may also include both the second flow control policy and the first flow control policy.
In another example, the server may further serve as a client, and receive a third flow control policy sent by a superior server, where the third flow control policy is used to determine a third number of service request messages that the server is allowed to send to the superior server, and the third number is an integer greater than or equal to 0. The server can make a first flow control strategy for the client according to the local strategy of the server and the third flow control strategy.
As shown in fig. 4, a flowchart of flow control is provided. The client shown in fig. 4 may be the AMF network element shown in fig. 1, the corresponding server may be the AUSF network element shown in fig. 1, and the superior server may be the UDM network element shown in fig. 1. The AUSF network element is a server for the AMF network element, and the AUSF network element is a client for the UDM network element. Only one example of the client, the server, and the superior server is given here; those skilled in the art can reasonably conceive of other examples of the client, the server, and the superior server according to service requirements in practical applications.
Step 401: the method comprises the steps that a client (AMF network element) sends a service request message to a server (AUSF network element), and correspondingly, the server (AUSF network element) receives the service request message sent by the client, wherein the service request message comprises an indication for indicating that the client (AMF network element) supports flow control.
Optionally, the service request message may further include a flow control manner supported by the AMF network element.
Step 402: the service end (AUSF network element) is used as a client end to send a service request message to a superior service end (UDM network element), wherein the service request message comprises an indication for indicating that the client end (AUSF network element) supports flow control. Optionally, the service request message may further include a flow control manner supported by the AUSF network element.
Step 403: and the UDM network element establishes a third flow control strategy for the AUSF network element.
Illustratively, when the UDM network element determines that it is in a congestion state, it formulates the third flow control policy for the AUSF network element according to its system capability.
Step 404: and the UDM network element sends a service response message to the AUSF network element, and correspondingly, the AUSF network element receives the service response message sent by the UDM network element, wherein the service response message comprises a third flow control strategy.
Step 405: and the AUSF network element determines a first flow control strategy according to the third flow control strategy and the local strategy.
Specifically, the AMF network element reports the second flow control policy it is executing to the AUSF network element, and the AUSF network element determines the first flow control policy according to the third flow control policy, the local policy, and the second flow control policy.
The following specifically introduces a process of determining, by the AUSF network element, the first flow control policy according to the third flow control policy and the local policy:
For example, the third flow control policy issued by the UDM network element indicates that the AUSF network element may send 1000 service request messages, for example, 1000 messages in one period, where one period may be, for example, 1 s. The AUSF network element expects to receive 1500 service request messages in this period, of which 500 are high-priority and cannot be discarded and 1000 are low-priority and can be discarded; the AUSF network element may then send 500 high-priority messages and 500 low-priority messages to the UDM network element.
In this scenario, the AUSF network element needs to discard 500 low-priority messages. The local policy of the AUSF network element may be, for example, a ratio between the number of service request messages that the AUSF network element discards itself (from those to be sent to the UDM network element) and the number of service request messages that the lower-level clients are required to reduce, which may be, for example, 1:1, 1:2, 1:1.5, and the like. If the ratio between what the AUSF network element discards itself and what the AMF network elements are required to reduce is 1:1, the AUSF network element may instruct, in the next period, all AMF network elements connected to it to send a total of 750 low-priority messages and 500 high-priority messages.
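Under the interpretation above (the intermediate node splits the required reduction between what it discards itself and what it asks its clients to withhold, according to a configured ratio), the next-period instruction to the AMF network elements could be computed as in the following sketch; the function and parameter names are hypothetical.

```python
# Sketch of the intermediate node (AUSF) deriving the next-period quota
# for its clients (AMFs) from the upstream (UDM) quota and a local
# self-discard : client-reduction ratio. Names are hypothetical.

def derive_client_quota(upstream_quota: int, expected_high: int, expected_low: int,
                        self_share: float, client_share: float) -> dict:
    total_expected = expected_high + expected_low
    reduction_needed = max(0, total_expected - upstream_quota)          # 1500 - 1000 = 500
    client_reduction = reduction_needed * client_share / (self_share + client_share)
    low_quota = int(expected_low - client_reduction)                    # 1000 - 250 = 750
    return {"high_priority": expected_high, "low_priority": low_quota}

# Ratio 1:1 between the AUSF's own discards and the AMFs' reduction.
print(derive_client_quota(1000, 500, 1000, self_share=1, client_share=1))
# -> {'high_priority': 500, 'low_priority': 750}
```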
Step 406: and the AUSF network element sends a service response message to the AMF network element, and correspondingly, the AMF network element receives the service response message sent by the AUSF network element, wherein the service response message comprises a first flow control strategy.
Step 407: the AMF network element may perform flow control according to the first flow control policy.
Further, optionally, step 408: the AMF network element may further send a service request message to the AUSF network element, and correspondingly, the AUSF network element receives the service request message sent by the AMF network element, where the service request message includes an indication that the AMF network element supports flow control and/or a first flow control policy being executed by the AMF network element.
Optionally, in step 409: the AUSF network element sends a service request message to the UDM network element, correspondingly, the UDM network element receives the service request message sent by the AUSF network element, and the service request message comprises an indication that the AUSF network element supports flow control and/or a third flow control strategy.
The AUSF network element plays a role of an intermediate node between the AMF network element and the UDM network element, and the process of determining the third flow control policy by the AUSF network element according to the first flow control policy may be a reverse process of the step 405.
Step 410: the UDM network element may send a service response message to the AUSF network element when determining the congestion relief, and correspondingly, the AUSF network element receives the service response message sent by the UDM network element, where the service response message is used to indicate the congestion relief of the UDM network element.
Step 411: after receiving the indication of the congestion state relief of the UDM network element, the AUSF network element may no longer execute the third flow control policy. In general, the AUSF network element then no longer needs to perform flow control on the AMF network elements, and the AUSF network element may send a service response message to the AMF network element to indicate the congestion state relief of the AUSF network element. The AMF network element may then no longer execute the first flow control policy.
In fig. 4, information interaction may be performed between network elements through a service request message and a service response message, where the service request message may be an HTTP-based service request message, and the service response message may be an HTTP-based service response message. In fig. 4, the service request message and the service response message interacted between the network elements may have integrity protection, and one end receiving the message may perform integrity check on the message first, and then perform a corresponding process after the integrity check is passed.
The HTTP-based service request messages and service response messages in the various embodiments provided above may be HTTP 2.0-based service request messages and service response messages.
Based on the same technical concept as the flow control method described above, as shown in fig. 5, a communication apparatus 500 is provided. The communication apparatus 500 can perform each step performed by the server in the methods of fig. 2, fig. 3 and fig. 4, and details are not repeated here to avoid redundancy. The communication apparatus 500 may be a server, or may be a chip applied to a server. The communication apparatus 500 includes a transceiver module 510 and, optionally, a processing module 520 and a storage module 530; the processing module 520 may be connected to the storage module 530 and the transceiver module 510, respectively, and the storage module 530 may also be connected to the transceiver module 510:
the storage module 530 is used for storing a computer program;
illustratively, the transceiver module 510 is configured to receive a first message sent by a client, where the first message is used to indicate that the client supports flow control; and sending a second message for the first message to the client, the second message comprising a first traffic control policy, the first traffic control policy being used to determine a first number of service request messages allowed to be sent by the client to the device, the first number being an integer greater than or equal to 0.
In one possible implementation, the processing module 520 is configured to determine the first traffic control policy according to a traffic processing capability of the apparatus when the apparatus is in a congestion state.
In a possible implementation, the transceiver module 510 is further configured to receive a fifth message from a superior server, where the fifth message includes a third flow control policy, and the third flow control policy is used to determine a third number of service request messages that are allowed to be sent by the apparatus to the superior server, where the third number is an integer greater than or equal to 0.
In one possible implementation, the processing module 520 is configured to determine the first flow control policy according to a local policy of the apparatus and the third flow control policy.
In a possible implementation, the transceiver module 510 is further configured to receive a third message sent by the client, where the third message includes the first traffic control policy being executed by the client; and in the event that congestion status release is determined, sending a fourth message for the third message to the client, the fourth message indicating congestion status release of the apparatus.
Based on the same technical concept as the flow control method described above, as shown in fig. 6, a communication apparatus 600 is provided. The communication apparatus 600 can perform each step performed by the client in the methods of fig. 2, fig. 3 and fig. 4, and details are not repeated here to avoid redundancy. The communication apparatus 600 may be a client, or may be a chip applied to a client. The communication apparatus 600 includes a transceiver module 610 and, optionally, a processing module 620 and a storage module 630; the processing module 620 may be connected to the storage module 630 and the transceiver module 610, respectively, and the storage module 630 may also be connected to the transceiver module 610:
the storage module 630 is configured to store a computer program;
for example, the transceiver module 610 is configured to send a first message to a server, where the first message is used to indicate that the apparatus supports flow control; and receiving a second message aiming at the first message from the server, wherein the second message comprises a first flow control strategy, the first flow control strategy is used for determining a first number of service request messages allowed to be sent to the server by the device, and the first number is an integer greater than or equal to 0;
and the processing module 620 is configured to perform flow control according to the first flow control policy.
In one possible implementation, the first traffic control policy includes at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: and sending the service request messages exceeding the first quantity to other service terminals for processing or discarding the service request messages exceeding the first quantity, wherein the incremental counter is used for judging whether the first flow control strategy is the latest flow control strategy.
In one possible implementation, if the first traffic control policy includes a generation time of the first traffic control policy, the processing module 620, before being configured to perform traffic control according to the first traffic control policy, is further configured to: determining that a generation time of a second traffic control policy being executed is earlier than a generation time of the first traffic control policy.
In one possible implementation, if the incremented counter is included in the first traffic control policy, the processing module 620, before being configured to perform traffic control according to the first traffic control policy, is further configured to: comparing the incremented counter in the first flow control policy to the incremented counter of the executing second flow control policy to determine to use the first flow control policy.
In a possible implementation, the transceiver module 610 is further configured to send a third message to the server, where the third message includes the first traffic control policy being executed by the apparatus; receiving a fourth message aiming at the third message of the server, wherein the fourth message is used for indicating the congestion state release of the server;
the processing module 620 is further configured to determine that the first traffic control policy is no longer to be executed.
Fig. 7 is a schematic block diagram of a communication apparatus 700 according to an embodiment of the present application. It should be understood that the communication device 700 is capable of executing the steps executed by the server in the methods shown in fig. 2, 3 and 4, and the detailed description is omitted here to avoid redundancy. The communication apparatus 700 includes: a processor 701 and a memory 703, the processor 701 and the memory 703 being electrically coupled;
the memory 703 for storing computer program instructions;
the processor 701 is configured to execute a part or all of the computer program instructions in the memory, and when the part or all of the computer program instructions are executed, the apparatus receives a first message sent by a client, where the first message is used to indicate that the client supports flow control; and sending a second message for the first message to the client, the second message comprising a first traffic control policy, the first traffic control policy being used to determine a first number of service request messages allowed to be sent by the client to the device, the first number being an integer greater than or equal to 0.
In one possible implementation, the processor 701 is configured to determine the first traffic control policy according to a traffic processing capability of the apparatus when the apparatus is in a congestion state.
In a possible implementation, the processor 701 is further configured to receive a fifth message from a superior server, where the fifth message includes a third flow control policy, and the third flow control policy is used to determine a third number of service request messages that are allowed to be sent by the apparatus to the superior server, where the third number is an integer greater than or equal to 0.
In one possible implementation, the processor 701 is configured to determine the first flow control policy according to a local policy of the apparatus and the third flow control policy.
In a possible implementation, the processor 701 is further configured to receive a third message sent by the client, where the third message includes the first traffic control policy being executed by the client; and in the event that congestion status release is determined, sending a fourth message for the third message to the client, the fourth message indicating congestion status release of the apparatus.
Optionally, the apparatus further includes: a transceiver 702, configured to communicate with other devices, for example, to send the second message and the fourth message to the client, and to receive the first message and the third message sent by the client.
It should be understood that the communication device 700 shown in fig. 7 may be a chip or a circuit. Such as a chip or circuit that may be provided within the server. The transceiver 702 may also be a communication interface. The transceiver includes a receiver and a transmitter. Further, the communication device 700 may also include a bus system.
The processor 701, the memory 703 and the transceiver 702 are connected by a bus system, and the processor 701 is configured to execute an instruction stored in the memory 703 to control the transceiver to receive and transmit a signal, thereby completing the steps of the server in the flow control method of the present application. The memory 703 may be integrated in the processor 701 or may be provided separately from the processor 701.
As an implementation manner, the function of the transceiver 702 can be considered to be implemented by a transceiver circuit or a transceiver dedicated chip. The processor 701 may be considered to be implemented by a dedicated processing chip, processing circuitry, a processor, or a general-purpose chip.
Fig. 8 is a schematic block diagram of a communication device 800 according to an embodiment of the present application. It should be understood that the communication device 800 is capable of performing the steps performed by the client in the methods of fig. 2, 3, and 4, and will not be described in detail herein to avoid repetition. The communication apparatus 800 includes: a processor 801 and a memory 803, said processor 801 and said memory 803 being electrically coupled;
the memory 803 for storing computer program instructions;
the processor 801 is configured to execute some or all of the computer program instructions in the memory, and when the some or all of the computer program instructions are executed, the apparatus sends a first message to a server, where the first message is used to indicate that the apparatus supports flow control; and receiving a second message aiming at the first message from the server, wherein the second message comprises a first flow control strategy, the first flow control strategy is used for determining a first number of service request messages allowed to be sent to the server by the device, and the first number is an integer greater than or equal to 0; and controlling the flow according to the first flow control strategy.
In one possible implementation, the first traffic control policy includes at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: and sending the service request messages exceeding the first quantity to other service terminals for processing or discarding the service request messages exceeding the first quantity, wherein the incremental counter is used for judging whether the first flow control strategy is the latest flow control strategy.
In one possible implementation, if the first flow control policy includes a generation time of the first flow control policy, the processor 801, before being configured to perform flow control according to the first flow control policy, is further configured to: determining that a generation time of a second traffic control policy being executed is earlier than a generation time of the first traffic control policy.
In one possible implementation, if the incrementing counter is included in the first flow control policy, the processor 801, before being configured to perform flow control according to the first flow control policy, is further configured to: comparing the incremented counter in the first flow control policy to the incremented counter of the executing second flow control policy to determine to use the first flow control policy.
In a possible implementation, the processor 801 is further configured to send a third message to the server, where the third message includes the first traffic control policy being executed by the apparatus; receiving a fourth message aiming at the third message of the server, wherein the fourth message is used for indicating the congestion state release of the server; and determining that the first traffic control policy is no longer to be executed.
Optionally, the apparatus further includes: a transceiver 802, configured to communicate with other devices, for example, to receive the second message and the fourth message sent by the server, and to send the first message and the third message to the server.
It should be understood that the communication device 800 shown in fig. 8 may be a chip or a circuit. Such as a chip or circuit that may be provided within the client. The transceiver 802 may also be a communication interface. The transceiver includes a receiver and a transmitter. Further, the communication device 800 may also include a bus system.
The processor 801, the memory 803, and the transceiver 802 are connected via a bus system, and the processor 801 is configured to execute instructions stored in the memory 803 to control the transceiver to receive and transmit signals, thereby completing the steps of the client in the method for controlling traffic according to the present application. The memory 803 may be integrated with the processor 801 or may be separate from the processor 801.
As an implementation manner, the function of the transceiver 802 can be considered to be implemented by a transceiver circuit or a transceiver dedicated chip. The processor 801 may be considered to be implemented by a dedicated processing chip, processing circuit, processor, or a general-purpose chip.
The processor may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor may further include a hardware chip or other general purpose processor. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The aforementioned PLDs may be Complex Programmable Logic Devices (CPLDs), field-programmable gate arrays (FPGAs), General Array Logic (GAL) and other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc., or any combination thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory referred to in the embodiments of the application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, but not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic random access memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous link SDRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
An embodiment of the present application provides a computer storage medium storing a computer program, where the computer program includes instructions for executing the flow control method described above.
Embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the method for flow control provided above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the above-described apparatus embodiments are merely illustrative, for example, the division of the units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the communication connections shown or discussed may be indirect couplings or communication connections between devices or units through interfaces, and may be electrical, mechanical or other forms.
In addition, each unit in the embodiments of the apparatus of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It is understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), other general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor, but may be any conventional processor.
The methods in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer program or instructions may be stored in or transmitted over a computer-readable storage medium. The computer readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape; or optical media, such as CD-ROM, DVD; it may also be a semiconductor medium, such as a Solid State Disk (SSD), a Random Access Memory (RAM), a read-only memory (ROM), a register, and the like.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (30)

1. A method of flow control, the method comprising:
a server receives a first message sent by a client, wherein the first message is used for indicating that the client supports flow control;
the server sends a second message aiming at the first message to the client, wherein the second message comprises a first flow control strategy, the first flow control strategy is used for determining a first number of service request messages which are allowed to be sent to the server by the client, and the first number is an integer which is greater than or equal to 0.
2. The method of claim 1, wherein the first message comprises: a second traffic control policy being executed by the client.
3. The method of claim 1 or 2, further comprising:
and when the server side is in a congestion state, determining the first flow control strategy according to the flow processing capacity of the server side.
4. The method of claim 1 or 2, further comprising:
the server receives a fifth message from a superior server, where the fifth message includes a third flow control policy, the third flow control policy is used to determine a third number of service request messages that the server is allowed to send to the superior server, and the third number is an integer greater than or equal to 0.
5. The method of claim 4, further comprising:
and the server side determines the first flow control strategy according to the local strategy and the third flow control strategy of the server side.
6. The method of any of claims 1-5, wherein the first flow control policy comprises at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: sending the service request messages exceeding the first number to other service terminals for processing or discarding the service request messages exceeding the first number; the increment counter is used for judging whether the first flow control strategy is the latest flow control strategy.
7. The method of claim 1, further comprising:
the server receives a third message sent by the client, wherein the third message comprises a first flow control strategy executed by the client;
and the server sends a fourth message aiming at the third message to the client under the condition of determining that the congestion state is relieved, wherein the fourth message is used for indicating the relief of the congestion state of the server.
8. The method of claim 7, wherein a first field for indicating the congestion status is included in the fourth message, and the congestion status release of the server is indicated by a value of the first field.
9. A method of flow control, the method comprising:
a client sends a first message to a server, wherein the first message is used for indicating that the client supports flow control;
the client receives a second message aiming at the first message from the server, wherein the second message comprises a first flow control strategy, the first flow control strategy is used for determining a first number of service request messages allowed to be sent to the server by the client, and the first number is an integer greater than or equal to 0;
and the client performs flow control according to the first flow control strategy.
10. The method of claim 9, wherein the first message comprises: a second traffic control policy being executed by the client, the second traffic control policy being used to determine the first traffic control policy.
11. The method of claim 9 or 10, wherein the first traffic control policy comprises at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: and sending the service request messages exceeding the first quantity to other service terminals for processing or discarding the service request messages exceeding the first quantity, wherein the incremental counter is used for judging whether the first flow control strategy is the latest flow control strategy.
12. The method of claim 11, wherein if the first traffic control policy includes a generation time of the first traffic control policy, the client, before performing traffic control according to the first traffic control policy, further comprises:
the client determines that a generation time of a second traffic control policy being executed is earlier than a generation time of the first traffic control policy.
13. The method of claim 11, wherein if the incremented counter is included in the first traffic control policy, the client, prior to controlling traffic according to the first traffic control policy, further comprises:
the client compares the incremented counter in the first traffic control policy to the incremented counter of the executing second traffic control policy to determine to use the first traffic control policy.
14. The method of claim 9, further comprising:
the client sends a third message to the server, wherein the third message comprises a first flow control strategy executed by the client;
the client receives a fourth message aiming at the third message of the server, wherein the fourth message is used for indicating the congestion state release of the server;
the client no longer executes the first traffic control policy.
15. The method of claim 14, wherein the fourth message comprises a first field for indicating a congestion status, wherein a congestion status release of the server is indicated by a value of the first field.
16. A communications apparatus, the apparatus comprising:
a transceiver module, configured to receive a first message sent by a client, where the first message is used to indicate that the client supports flow control; and sending a second message for the first message to the client, the second message comprising a first traffic control policy, the first traffic control policy being used to determine a first number of service request messages allowed to be sent by the client to the apparatus, the first number being an integer greater than or equal to 0.
17. The apparatus of claim 16, wherein the first message comprises: a second traffic control policy being executed by the client.
18. The apparatus of claim 16 or 17, further comprising:
and the processing module is used for determining the first flow control strategy according to the flow processing capacity of the device when the device is in the congestion state.
19. The apparatus of claim 16 or 17, wherein the transceiver module is further configured to receive a fifth message from a superior server, the fifth message comprising a third flow control policy, the third flow control policy being configured to determine a third number of service request messages that the apparatus is allowed to send to the superior server, the third number being an integer greater than or equal to 0.
20. The apparatus of claim 19, further comprising:
and the processing module is used for determining the first flow control strategy according to the local strategy of the device and the third flow control strategy.
21. The apparatus of any of claims 16-20, wherein the first flow control policy comprises at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: sending the service request messages exceeding the first number to other devices for processing or discarding the service request messages exceeding the first number; the increment counter is used for judging whether the first flow control strategy is the latest flow control strategy.
22. The apparatus of claim 16, wherein the transceiver module is further configured to receive a third message sent by the client, the third message comprising a first traffic control policy being executed by the client; and in the event that congestion status release is determined, sending a fourth message for the third message to the client, the fourth message indicating congestion status release of the apparatus.
23. The apparatus of claim 22, wherein the fourth message includes a first field for indicating a congestion state, and wherein a congestion state release of the apparatus is indicated by a value of the first field.
24. A communications apparatus, the apparatus comprising:
a transceiver module, configured to send a first message to a server, where the first message is used to indicate that the apparatus supports flow control; and receiving a second message aiming at the first message from the server, wherein the second message comprises a first flow control strategy, the first flow control strategy is used for determining a first number of service request messages allowed to be sent to the server by the device, and the first number is an integer greater than or equal to 0;
and the processing module is used for controlling the flow according to the first flow control strategy.
25. The apparatus of claim 24, wherein the first message comprises: a second traffic control policy being executed by the device, the second traffic control policy being used to determine the first traffic control policy.
26. The apparatus of claim 24 or 25, wherein the first traffic control policy comprises at least one of:
the effective time of the first flow control strategy, the flow control mode, the type of the service request message, the generation time of the first flow control strategy and an incremental counter;
wherein, the flow control mode comprises: and sending the service request messages exceeding the first quantity to other service terminals for processing or discarding the service request messages exceeding the first quantity, wherein the incremental counter is used for judging whether the first flow control strategy is the latest flow control strategy.
27. The apparatus of claim 26, wherein if the first flow control policy includes the generation time of the first flow control policy, the processing module, before performing flow control according to the first flow control policy, is further configured to: determine that a generation time of a second flow control policy being executed is earlier than the generation time of the first flow control policy.
28. The apparatus of claim 26, wherein if the first flow control policy includes the increment counter, the processing module, before performing flow control according to the first flow control policy, is further configured to: compare the increment counter in the first flow control policy with the increment counter of the second flow control policy being executed, to determine to use the first flow control policy.
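The freshness checks described in claims 27 and 28 can be sketched as follows. This hypothetical helper combines both checks in one function; the parameter names and the decision to require both conditions are assumptions for illustration only.

```python
def should_apply_new_policy(new_generated_at: float, new_counter: int,
                            cur_generated_at: float, cur_counter: int) -> bool:
    """Return True if a newly received (first) policy should replace the
    currently executing (second) policy."""
    # Claim 27: the executing policy must have been generated earlier
    # than the newly received one.
    if new_generated_at <= cur_generated_at:
        return False
    # Claim 28: the increment counter indicates whether the new policy
    # is the latest one.
    return new_counter > cur_counter
```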
29. The apparatus of claim 24, wherein:
the transceiver module is further configured to: send a third message to the server, where the third message includes the first flow control policy being executed by the apparatus; and receive, from the server, a fourth message for the third message, where the fourth message is used to indicate that the congestion state of the server is released; and
the processing module is further configured to determine that the first flow control policy is no longer to be executed.
30. A communication system, comprising: a server and a client;
wherein the client is configured to:
send a first message to the server, where the first message is used to indicate that the client supports flow control;
receive a second message for the first message from the server, where the second message comprises a first flow control policy, the first flow control policy is used to determine a first number of service request messages that the client is allowed to send to the server, and the first number is an integer greater than or equal to 0; and
perform flow control according to the first flow control policy; and
the server is configured to:
receive the first message sent by the client; and
send the second message for the first message to the client.
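For orientation only, the exchange recited in claim 30 is sketched below from the client's side. The transport, message encoding, and field names are assumptions; the claims do not specify a concrete format, and the abstract only suggests a web-protocol-based service.

```python
def client_session(send, receive):
    """Hypothetical client-side flow for claim 30.

    `send` and `receive` are assumed callables that exchange dict-encoded
    messages with the server over some unspecified transport.
    """
    # First message: declare support for flow control.
    send({"type": "first_message", "flow_control_supported": True})

    # Second message: carries the first flow control policy.
    second = receive()
    policy = second.get("flow_control_policy", {})
    allowance = policy.get("allowed_requests", 0)  # the "first number"

    # Perform flow control: send at most `allowance` service request
    # messages; requests beyond that are redirected or discarded
    # according to the policy's flow control mode (claim 26).
    for _ in range(allowance):
        send({"type": "service_request"})
```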
CN201910924070.9A 2019-09-27 2019-09-27 Flow control method and device Active CN112583726B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910924070.9A CN112583726B (en) 2019-09-27 2019-09-27 Flow control method and device
PCT/CN2020/117212 WO2021057808A1 (en) 2019-09-27 2020-09-23 Method and apparatus for traffic control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910924070.9A CN112583726B (en) 2019-09-27 2019-09-27 Flow control method and device

Publications (2)

Publication Number Publication Date
CN112583726A true CN112583726A (en) 2021-03-30
CN112583726B (en) 2022-11-11

Family

ID=75109944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910924070.9A Active CN112583726B (en) 2019-09-27 2019-09-27 Flow control method and device

Country Status (2)

Country Link
CN (1) CN112583726B (en)
WO (1) WO2021057808A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531374B (en) * 2022-02-25 2023-08-25 深圳平安智慧医健科技有限公司 Network monitoring method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6982956B2 (en) * 2000-04-26 2006-01-03 International Business Machines Corporation System and method for controlling communications network traffic through phased discard strategy selection
CN101917431A (en) * 2010-08-13 2010-12-15 中兴通讯股份有限公司 Method and device for preventing illegal invasion of internal network of intelligent home
CN107508860A (en) * 2017-07-21 2017-12-22 深圳市金立通信设备有限公司 One kind service current-limiting method, server and terminal
CN107707488A (en) * 2017-10-25 2018-02-16 北京数码视讯支付技术有限公司 Pay on-line transaction flow control methods, current limliting service end and client

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102164384A (en) * 2010-06-17 2011-08-24 华为技术有限公司 Method, device and system for improving service success rate
CN102137091A (en) * 2010-11-15 2011-07-27 华为技术有限公司 Overload control method, device and system as well as client-side
CN103369601A (en) * 2013-07-15 2013-10-23 厦门卓讯信息技术有限公司 Method for providing large concurrent processing and flow control for mobile phone client sides
CN105024933A (en) * 2014-04-22 2015-11-04 腾讯科技(深圳)有限公司 Request packet sending frequency control method and apparatus
US20160234118A1 (en) * 2015-02-10 2016-08-11 Telefonica Digital Espana, S.L.U. Method, System and Device for Managing Congestion in Network Services
CN108702240A (en) * 2016-02-18 2018-10-23 瑞典爱立信有限公司 System, method and apparatus for the data rate for managing control plane optimization
CN106161219A (en) * 2016-09-29 2016-11-23 广州华多网络科技有限公司 Message treatment method and device
CN108134808A (en) * 2016-12-01 2018-06-08 阿里巴巴集团控股有限公司 A kind of network request method and device
CN109088828A (en) * 2018-08-03 2018-12-25 网宿科技股份有限公司 server overload control method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cao Fenghua: "A Small-File Access Optimization Strategy for Distributed File Systems Based on an Authorization Mechanism", Computer Systems & Applications *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489572A (en) * 2021-08-23 2021-10-08 杭州安恒信息技术股份有限公司 Request sending method, device, equipment and storage medium
CN114143377A (en) * 2021-11-29 2022-03-04 杭州逗酷软件科技有限公司 Resource request configuration method, server, client, equipment and storage medium
CN114143377B (en) * 2021-11-29 2024-04-02 杭州逗酷软件科技有限公司 Resource request configuration method, server, client, device and storage medium
CN115396375A (en) * 2022-08-17 2022-11-25 支付宝(杭州)信息技术有限公司 Service processing method, device and equipment
CN115396375B (en) * 2022-08-17 2024-02-27 支付宝(杭州)信息技术有限公司 Service processing method, device and equipment

Also Published As

Publication number Publication date
CN112583726B (en) 2022-11-11
WO2021057808A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
CN112583726B (en) Flow control method and device
US11917498B2 (en) Communication method and communications apparatus
JP7093842B2 (en) Techniques for managing integrity protection
WO2018045877A1 (en) Network slicing control method and related device
US11296976B2 (en) Optimization of MTC device trigger delivery
US11206538B2 (en) Control signaling processing method, device, and system
US20200228977A1 (en) Parameter Protection Method And Device, And System
US11848963B2 (en) Method for providing restricted service, and communications device
WO2017166221A1 (en) Radio access control method, device and system
EP3981190B1 (en) Method and apparatus for enforcement of maximum number of protocol data unit sessions per network slice in a communication system
EP3649761B1 (en) User data transported over non-access stratum
WO2011006410A1 (en) Network access control method, network access control device and network access system
US11689565B2 (en) Device monitoring method and apparatus and deregistration method and apparatus
WO2019096306A1 (en) Request processing method, and corresponding entity
CN113114649B (en) Method, device, equipment and medium for solving denial of service attack
WO2023041056A1 (en) Network verification method and apparatus
WO2022032525A1 (en) Group key distribution method and apparatus
US20230164566A1 (en) Network attack handling method and apparatus, device, computer-readable storage medium, and computer program product
JPWO2017022646A1 (en) COMMUNICATION SYSTEM, COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, AND COMMUNICATION PROGRAM
WO2023079075A1 (en) Usage monitoring control at smf relocation
WO2014071770A1 (en) Method, apparatus and user equipment for sending request
CN116684387A (en) IP address allocation in a wireless communication network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant