WO2023143276A1 - Flow control method, device and storage medium - Google Patents

Flow control method, device and storage medium

Info

Publication number
WO2023143276A1
Authority
WO
WIPO (PCT)
Prior art keywords
backlog
short message
risk level
message
target
Prior art date
Application number
PCT/CN2023/072761
Other languages
English (en)
French (fr)
Inventor
陆宗泽
罗自荣
曹栋尧
Original Assignee
阿里巴巴(中国)有限公司
Priority date
Filing date
Publication date
Application filed by 阿里巴巴(中国)有限公司
Publication of WO2023143276A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/12 Messaging; Mailboxes; Announcements
    • H04W4/14 Short messaging services, e.g. short message services [SMS] or unstructured supplementary service data [USSD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60 Context-dependent security
    • H04W12/61 Time-dependent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60 Context-dependent security
    • H04W12/67 Risk-dependent, e.g. selecting a security level depending on risk profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints

Definitions

  • the present application relates to the technical field of communications, and in particular to a flow control method, device and storage medium.
  • SMS is a common and effective means of directly reaching users, and is widely used in various information push scenarios.
  • At present, there is an intelligent short message service platform that can receive short messages sent by upstream customers and deliver them to target users through downstream operators.
  • In some scenarios, when the demand for pushing short messages is large (for example, SMS verification codes are pushed intensively during a shopping festival), the traffic generated by customers (such as merchants, financial institutions, etc.) sending short messages to the short message service platform far exceeds the traffic that the downstream operators of the platform can handle. In this scenario, the sending speed of the short message platform needs to be adjusted to avoid placing a heavy load on the resources of the downstream operators.
  • In the prior art, the short message service platform usually relies on manually estimating the receiving capacity of the downstream operators, and the short message sending speed is adjusted manually based on that estimated capacity.
  • This manual speed regulation method has poor accuracy and high labor cost. Therefore, a new solution remains to be proposed.
  • Various aspects of the present application provide a flow control method, device and storage medium, which automatically control the flow of short messages delivered downstream and thereby help reduce labor costs.
  • An embodiment of the present application further provides a flow control method, including: receiving, at a set short message receiving rate, a short message sent by a message system; allocating the short message to a target resource object, so that the short message is sent to a target user through the target resource object; obtaining a backlog risk level fed back by the target resource object for the short message; and adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object.
  • Further optionally, after receiving the short message sent by the message system, the method further includes: determining, from multiple message queues of the message system, the target message queue to which the short message belongs. Adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object includes: determining, from the backlog risk levels fed back by the target resource object, the backlog risk levels corresponding to the target message queue; and adjusting the short message receiving rate corresponding to the target message queue according to the distribution characteristics of the backlog risk levels corresponding to the target message queue.
  • the multiple message queues respectively correspond to multiple scenarios, and the short messages in the message queues corresponding to any scenario have the same scenario tag.
  • Further optionally, obtaining the backlog risk level corresponding to the target resource object includes: determining multiple queue groups contained in the message topic corresponding to the target resource object, where any queue group contains multiple priority queues; calculating the respective backlog risk levels of the multiple queue groups according to the short message backlog amount and/or short message backlog time of the priority queues contained in each of the multiple queue groups; and calculating the backlog risk level of the message topic according to the respective backlog risk levels of the multiple queue groups.
  • Further optionally, calculating the respective backlog risk levels of the multiple queue groups according to the short message backlog amount and/or short message backlog time of the priority queues contained in each of the multiple queue groups includes: for any queue group, determining the short message backlog amount of the higher-priority target priority queue in the queue group and the backlog duration of the head-of-queue short message of the target priority queue; and calculating the backlog risk level of the queue group according to the short message backlog amount of the target priority queue and the backlog duration of its head-of-queue short message.
  • Further optionally, adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object includes: calculating, according to the number of times the target resource object feeds back at least one backlog risk level within a set duration range, the proportion of feedback times of each of the at least one backlog risk level; determining a target speed regulation ratio according to the proportion of feedback times of each of the at least one backlog risk level and a preset correspondence between feedback-times proportion ranges of backlog risk levels and speed regulation ratios; and calculating an updated value of the short message receiving rate according to the target speed regulation ratio and a preset basic receiving rate.
  • Further optionally, determining the target speed regulation ratio according to the proportion of feedback times of each of the at least one backlog risk level and the preset correspondence between feedback-times proportion ranges of backlog risk levels and speed regulation ratios includes: judging in turn, in order from high to low, whether the proportion of feedback times of each of the at least one backlog risk level is greater than the proportion threshold corresponding to that level; and when it is determined that the proportion of feedback times of any one of the at least one backlog risk level is greater than the proportion threshold of that level, stopping the judging operation, and determining the target speed regulation ratio according to the proportion of feedback times of that backlog risk level and the correspondence between its feedback-times proportion range and the speed regulation ratio.
  • Further optionally, after adjusting the short message receiving rate, the method further includes: setting, according to the short message receiving rate, the token throwing rate of a token bucket used to control short message traffic; and controlling the short message sending rate of the message system by using a token bucket algorithm according to the number of remaining tokens in the token bucket.
  • An embodiment of the present application further provides a server, including a memory and a processor, where the memory is used to store one or more computer instructions, and the processor is used to execute the one or more computer instructions to perform the steps of the methods provided in the embodiments of the present application.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program.
  • When the computer program is executed by a processor, the steps of the methods provided in the embodiments of the present application can be implemented.
  • the scheduling node can receive the short message sent by the message system according to the set short message receiving rate, and send the short message to the target user through the target resource object.
  • the scheduling node can obtain the backlog risk level fed back by the target resource object for the short message, and adjust the short message receiving rate according to the distribution characteristics of the backlog risk level fed back by the target resource object.
  • In this way, the scheduling node can adjust the short message receiving rate according to the backlog risk of the resource objects, realizing automatic control of the downstream short message traffic, which helps reduce labor costs. At the same time, the speed-regulation lag caused by human factors is reduced, so the timeliness and accuracy are higher.
  • FIG. 1 is a schematic structural diagram of a short message service system provided by an exemplary embodiment of the present application
  • Fig. 2 is a schematic flow chart of short message processing provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a resource object provided by an exemplary embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a flow control method provided by an exemplary embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of a server provided by an exemplary embodiment of the present application.
  • Fig. 1 is a schematic structural diagram of the short message service system provided by an exemplary embodiment of the present application. As shown in Fig. 1, the short message service system 100 includes a client 10, an access node 20, a scheduling node 30, a decision node 40 and a message system 50. There may be multiple scheduling nodes, as shown in Fig. 1, and the multiple scheduling nodes can perform distributed parallel processing on short messages.
  • the short message refers to the object of communication transmission, which is a carrier of information.
  • The short message can be implemented as a text message (such as an SMS), a voice message, an image message, etc., which is not limited in this embodiment.
  • The client 10 is located on the customer side of the short message service system 100, where a customer refers to a user who needs to send short messages.
  • the customer may be implemented as a merchant. Merchants can issue marketing advertisements, preferential information, etc. to consumers through the short message service system 100 .
  • In other embodiments, the customer can be implemented as a financial institution. The financial institution can send a verification code through the short message service system 100 to a user who is about to make a payment, so as to ensure the security of the payment process.
  • the client 10 may be installed on a smart device on the user side, such as a computer, a tablet computer, a smart phone, etc., which is not limited in this embodiment.
  • The access node 20 is located at the access layer of the short message service system 100 and refers to a device for receiving short messages sent by external devices (such as client devices); it can be implemented as a router, switch, modem, etc., which is not limited in this embodiment.
  • the access node 20 is configured to: receive the short message sent by the client, and send the short message to any scheduling node.
  • FIG. 1 illustrates a situation where an access node 20 sends a short message to a scheduling node 30 .
  • the scheduling node 30 may be the scheduling node closest to the access node 20 .
  • the access node 20 may also send the short message to other scheduling nodes other than the scheduling node 30, which will not be shown one by one.
  • After receiving the short message, the scheduling node 30 may send a queuing request to the message system 50 for the short message, so that the message system 50 can backlog (buffer) the short message to facilitate subsequent distributed processing of the short message.
  • The message system 50 is mainly used for adding the received short messages to message queues and dequeuing the short messages in the message queues according to the first-in, first-out principle.
  • the message system 50 can send the short message to any scheduling node.
  • Fig. 1 illustrates the case where the message system 50 sends short messages to the scheduling node 30; besides the scheduling node 30, the message system 50 can also send short messages to other scheduling nodes. That is, multiple scheduling nodes can all acquire short messages from the message system 50, so as to perform distributed processing on the short messages backlogged in the message system 50, thereby improving the efficiency of sending short messages.
  • After any scheduling node receives a short message sent by the message system 50, it can distribute the short message to the corresponding user and control the flow from the message system 50 during the distribution process.
  • different scheduling nodes have the same flow control logic for the message system, so any scheduling node (that is, the scheduling node 30 ) will be used as an example for illustration in subsequent embodiments.
  • the scheduling node 30 can make a decision on the resource path of the short message based on the decision node 40, and use the decided resource path to send the short message to the target user.
  • The target user refers to the user finally reached by the short message, such as a consumer in an e-commerce scenario, a user waiting to pay in an electronic payment scenario, and the like.
  • the message system 50 can be implemented based on a single server or a server cluster.
  • a short message processing application may be deployed on the single server or server cluster to provide consumers with short message related services.
  • In some embodiments, the short message processing application can be implemented as MetaQ (a message service engine).
  • any scheduling node can be implemented based on a server device, and the server device can be a conventional server, cloud server, cloud host, virtual center, or elastic computing instance on the cloud, etc., which is not limited in this embodiment.
  • the composition of the server equipment mainly includes a processor, a hard disk, a memory, a system bus, etc., and is similar to a general computer architecture, and will not be repeated here.
  • any scheduling node can execute the flow control method to control the rate of short messages received from the message system 50 .
  • the scheduling node 30 will be taken as an example to illustrate the flow control method executed by any scheduling node side.
  • the scheduling node 30 includes a flow control module and resource objects.
  • The scheduling node 30 can, based on the flow control module, receive the short message sent by the message system 50 at the set short message receiving rate, and distribute the short message to the target resource object so as to send the short message to the target user through the target resource object.
  • The scheduling node 30 can also, based on the flow control module, obtain the backlog risk level fed back by the target resource object for the short message, and adjust the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object within a set time range.
  • the short message reception rate refers to the rate at which the scheduling node receives short messages from the message system.
  • The short message receiving rate can be represented by QPS (queries per second).
  • the short message receiving rate can be dynamically adjusted.
  • the short message receiving rate can be dynamically adjusted according to a set adjustment period, or can be dynamically adjusted according to the usage of resource objects, which is not limited in this embodiment.
  • the set short message receiving rate may be the short message receiving rate obtained after last dynamic adjustment.
  • the resource object refers to a virtual resource obtained by combining the physical resources corresponding to the scheduling node 30, and one resource object may include multiple physical resources.
  • Each physical resource may be referred to as a child resource object of the resource object.
  • the physical resource refers to a communication channel resource for sending short messages, and one physical resource may correspond to one communication channel between the scheduling node and the operator.
  • the communication channel is used to provide short message transmission capability, and one communication channel can undertake multiple connections to realize concurrent processing of short messages.
  • the scheduling node 30 may correspond to multiple resource objects, and after receiving the short message sent by the message system 50, it may make a decision on the sending path of the short message.
  • the scheduling node can make a decision on the sending path of the short message through the decision node 40 shown in FIG. 1 .
  • The scheduling node 30 can send a decision request to the decision node 40, and the decision node 40 can, according to the set decision logic, select a resource object for sending the short message from the multiple resource objects corresponding to the scheduling node 30 and return the resource object decision result to the scheduling node 30.
  • The set decision logic may include: making a decision according to the short message type of the short message, making a decision according to the regional signature to which the short message belongs, making a decision according to the complaint rate of the short message, etc., which is not limited in this embodiment.
  • the decision node 40 may further consider the QPS of different communication channels when making specific decisions, so as to balance the pressure of different communication channels.
  • The resource object determined by the scheduling node from the multiple resource objects for delivering the short message is denoted as the target resource object.
  • the scheduling node 30 may distribute the short message to the target resource object, so as to send the short message to the target user through the target resource object.
  • In some embodiments, the decision node 40 can also be used to make a second-layer decision, so as to determine the physical resource used for sending the short message.
  • the scheduling node 30 may send a decision request to the decision node 40 .
  • the decision node 40 may select a physical resource (ie, a sub-resource object) for sending the short message according to the respective utilization rates of multiple physical resources corresponding to the target resource object, and return the physical resource decision result to the scheduling node 30 .
  • the scheduling node 30 can issue a short message based on the determined physical resource, and send the short message to the user's terminal device through the gateway.
  • Each time a short message is distributed to a target resource object, the scheduling node 30 can acquire the backlog risk level fed back by that target resource object for the short message.
  • the backlog risk level is used to indicate the level of the current message backlog risk of the target resource object, and the level of the backlog risk may be represented by a backlog risk level.
  • the backlog risk levels may be high, medium, and low; or, the backlog risk levels may be level one, level two, level three, level four, level five, etc. from high to low.
  • the backlog risk level can be calculated according to the actual backlog situation of the physical resources included in the target resource object, and the specific calculation process will be described in subsequent embodiments, and will not be repeated here.
  • That is, each time a received short message is sent to the user through the target resource object, the scheduling node 30 can obtain the backlog risk level fed back by the target resource object in real time, so as to perceive the real-time short message backlog of the target resource object. On this basis, the scheduling node 30 can adjust the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object.
  • the distribution characteristics of the backlog risk levels may be determined according to a certain number of backlog risk levels fed back by the target resource object. For example, according to the 500 backlog risk levels fed back by the target resource object, the distribution characteristics of the backlog risk levels can be analyzed. For example, the distribution characteristic analysis result of the 500 backlog risk levels may be: low backlog risks account for 30%, high backlog risks account for 50%, and medium backlog risks account for 20%.
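  • As a minimal illustration only (not part of the original embodiments), the following Python sketch computes such a distribution from a window of backlog risk level feedbacks; the level names and the windowing strategy are assumptions.

```python
from collections import Counter

def risk_level_distribution(feedback_levels):
    """Compute the feedback-count proportion of each backlog risk level over a
    window of feedbacks (e.g. the last 500, or those within the set time range)."""
    total = len(feedback_levels)
    if total == 0:
        return {}
    return {level: count / total for level, count in Counter(feedback_levels).items()}

# Example: 500 feedbacks could yield {"high": 0.5, "low": 0.3, "medium": 0.2}.
```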
  • the distribution characteristics of the backlog risk level may be determined according to the backlog risk level fed back by the target resource object within a set time range.
  • the set duration range can be set according to requirements, for example, the set duration range can be set to 30 seconds, 60 seconds, 90 seconds, etc., which is not limited in this embodiment.
  • the distribution characteristics of the backlog risk level are used to describe the feedback frequency of different backlog risk levels.
  • Generally, the greater the feedback frequency of the high backlog risk level, the more serious the short message backlog of the target resource object, and the scheduling node 30 should appropriately reduce the short message receiving rate to avoid overloading downstream resources.
  • Conversely, the greater the feedback frequency of the low backlog risk level, the smaller the short message backlog of the target resource object, and the short message receiving rate should be appropriately increased to make full use of the existing resources.
  • the scheduling node can receive the short message sent by the message system according to the set short message receiving rate, and send the short message to the target user through the target resource object.
  • the scheduling node can obtain the backlog risk level fed back by the target resource object for the short message, and adjust the short message receiving rate according to the distribution characteristics of the backlog risk level fed back by the target resource object within the set time range.
  • In this way, the scheduling node can adjust the short message receiving rate according to the backlog risk of the resource objects, realizing automatic control of the downstream short message traffic, which helps reduce labor costs. At the same time, the speed-regulation lag caused by human factors is reduced, so the timeliness and accuracy are higher.
  • short messages are stored in queues.
  • the message system has multiple message queues, and each message queue can send short messages to different scheduling nodes according to their respective short message sending rates.
  • the scheduling node 30 after receiving the short message sent by the message system 50 , the scheduling node 30 can determine the target message queue to which the short message belongs from the multiple message queues of the message system 50 .
  • When the scheduling node 30 adjusts the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object within the set time range, it can determine, from the backlog risk levels fed back by the target resource object within the set time range, the backlog risk levels corresponding to the target message queue, and then adjust the short message receiving rate corresponding to the target message queue according to the distribution characteristics of those backlog risk levels. That is, the scheduling node can implement speed regulation at the message queue level.
  • the message queue on the message system side is divided according to scenarios. That is, multiple message queues of the message system respectively correspond to multiple scenarios, and each message queue may be called a scenario queue.
  • the scene is used to indicate the use of the short message
  • the multiple scene queues are respectively used to store short messages corresponding to different scenes sent by the client.
  • the message system may include message queues corresponding to education scenarios, message queues corresponding to financial scenarios, message queues corresponding to catering scenarios, message queues corresponding to audio and video scenarios, and so on.
  • it may also include a general scenario queue for storing short messages that cannot be classified into scenarios, as shown in FIG. 2 .
  • When a customer needs to send short messages for different scenarios, it can apply for different accounts from the short message service system 100.
  • the sent short message may carry a scene tag matched with the account.
  • the message system 50 in the short message service system 100 can add the short message to the corresponding scene queue according to the scene tag carried in the short message.
  • tags of different dimensions may be added to each short message.
  • a tag corresponding to the type of the short message and/or the complaint rate may also be added to the short message.
  • the short message type includes: verification code type, notification type or advertisement type.
  • For example, the label of a short message can be: finance_verification code_0001 (1% complaint rate), finance_verification code_0002 (2% complaint rate), finance_notification_0001 (5% complaint rate), and so on. A simple representation of such multi-dimension labels is sketched below.
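  • For illustration only, such a multi-dimension label could be represented roughly as follows; the field names are hypothetical and not taken from this application.

```python
from dataclasses import dataclass

@dataclass
class ShortMessageLabel:
    scene: str             # e.g. "finance", "education"; decides the scene queue
    msg_type: str          # e.g. "verification_code", "notification", "advertisement"
    account_id: str        # e.g. "0001", the account the customer applied for
    complaint_rate: float  # e.g. 0.01 for a 1% complaint rate

def scene_queue_for(label: ShortMessageLabel) -> str:
    # Short messages carrying a scene tag are added to the matching scene queue;
    # messages that cannot be classified fall back to the general scenario queue.
    return label.scene if label.scene else "general"

print(scene_queue_for(ShortMessageLabel("finance", "verification_code", "0001", 0.01)))  # finance
```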
  • On the message system side, the different scene queues can send their backlogged short messages to the corresponding scheduling nodes according to their respective short message sending rates.
  • After receiving a short message, the scheduling node 30 can determine the target scene queue to which the short message belongs and, through the first-layer decision shown in Figure 2, determine the target resource object used to send the short message. After the target resource object is determined, the scheduling node 30 can send the short message to the user through the target resource object and obtain the backlog risk level fed back by the target resource object for the short message.
  • On this basis, the scheduling node 30 can determine the distribution characteristics of the backlog risk levels corresponding to the target scene queue, and adjust the short message receiving rate corresponding to the target scene queue according to those distribution characteristics.
  • For example, suppose a scheduling node receives short messages from the financial queue of the message system at a rate of V1 and receives short messages from the education queue at a rate of V2. After receiving each short message, the scheduling node can send it to the user through the corresponding resource object and obtain the backlog risk level fed back by the resource object for that short message. If, within the set time range, the resource objects frequently feed back a high backlog risk level for short messages from the education queue, the scheduling node may reduce the rate V2. If, within the set time range, the feedback for short messages from the financial queue mostly indicates a low backlog risk level, the scheduling node may increase the rate V1.
  • In this way, the pressure on the resource objects of receiving short messages from different message queues can be calculated in real time. Based on the calculated receiving pressure, the scheduling node can adjust the short message receiving rate for each message queue, thereby realizing precise flow control at the queue level. A rough sketch of such per-queue adjustment is given below.
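  • The following self-contained Python sketch illustrates per-queue speed regulation of this kind; the 50% thresholds and the 0.8/1.2 multipliers are placeholder values, not values taken from this application.

```python
def adjust_per_queue_rates(rates, feedback_by_queue):
    """rates: current per-queue receiving rates in QPS, e.g. {"finance": V1, "education": V2}.
    feedback_by_queue: backlog risk levels fed back within the set time range,
    grouped by scene queue, e.g. {"education": ["high", "high", "low"], ...}."""
    new_rates = {}
    for queue, rate in rates.items():
        levels = feedback_by_queue.get(queue, [])
        high_share = levels.count("high") / len(levels) if levels else 0.0
        low_share = levels.count("low") / len(levels) if levels else 0.0
        if high_share > 0.5:        # mostly high-risk feedback: slow this queue down
            new_rates[queue] = rate * 0.8
        elif low_share > 0.5:       # mostly low-risk feedback: speed this queue up
            new_rates[queue] = rate * 1.2
        else:
            new_rates[queue] = rate
    return new_rates
```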
  • the following will exemplify an optional implementation manner in which the scheduling node obtains the backlog risk level corresponding to the resource object.
  • any resource object may use a message topic (topic) to store received short messages, such as topic short message a, topic short message b, and topic short message c shown in FIG. 2 .
  • the topic queue can be implemented based on EMQ (a short message push service), which is not limited in this embodiment.
  • EMQ is a message queue similar to MetaQ after secondary encapsulation, and its underlying storage can be split into a large number of message topics (topics).
  • a resource object refers to a virtual resource obtained by combining physical resources.
  • When a short message is sent through a resource object, the short message can be taken from the message topic of the resource object and sent to the user.
  • any message topic can be divided into multiple queue groups (groups) according to the number of connections of the communication channel, such as queue group 1, queue group 2...queue group N shown in FIG. 3 .
  • For example, every 50 connections can be grouped into one queue group, so that the traffic acceptance capacity of one queue group is 50 QPS.
  • any queue group includes multiple priority queues.
  • The priority queues refer to message queues generated at different priority levels. In some embodiments, they can be divided according to user priority and/or short message type.
  • the short message type may include: a verification code type, a notification type, an advertisement type, and the like. For example, queues for member users have higher priority than queues for non-member users. For another example, the priority of the verification code type queue is higher than that of the notification type queue, and the priority of the notification type queue is higher than that of the advertisement type queue.
  • each queue group may include three priority queues: high, medium and low. The higher the priority level of the priority queue, the faster the short messages in the queue will be delivered to the user.
  • When the scheduling node 30 acquires the backlog risk level corresponding to the target resource object, it can determine the multiple queue groups contained in the message topic corresponding to the target resource object, and calculate the backlog risk level of each of the multiple queue groups according to the short message backlog amount and/or the short message backlog time of the priority queues contained in each queue group.
  • The short message backlog amount of any priority queue refers to the number of short messages in the priority queue that have not yet been dequeued.
  • The short message backlog time of any priority queue may be the backlog time of the short message at the head of the priority queue. According to the first-in, first-out principle of the queue, the backlog time of the head-of-queue short message is the maximum backlog time of the priority queue.
  • Optionally, for any queue group, the short message backlog amount of the higher-priority target priority queue in the queue group and the backlog duration of the head-of-queue short message of the target priority queue can be determined; the backlog risk level of the queue group is then calculated according to the short message backlog amount of the target priority queue and the backlog duration of its head-of-queue short message.
  • When the queue group includes three priority queues of high, medium and low, the target priority queue may be the high-priority queue.
  • In some embodiments, the relationship between the short message backlog amount, the backlog duration of the head-of-queue short message and the backlog risk level can be expressed in a two-dimensional table.
  • The two-dimensional table can store the correspondence between different value ranges of the short message backlog amount and of the head-of-queue backlog duration on one hand, and the backlog risk levels on the other.
  • The backlog risk level of the target priority queue can be determined by querying the two-dimensional table according to the short message backlog amount of the target priority queue and the backlog duration of its head-of-queue short message. Since the priority of the target priority queue is relatively high, the backlog risk level of the target priority queue can be directly used as the backlog risk level of the queue group to which it belongs.
  • the respective backlog risk levels of multiple queue groups can be calculated and obtained.
  • the backlog risk level of the message topic can be calculated.
  • For example, the highest backlog risk level may be selected from the respective backlog risk levels of the multiple queue groups as the backlog risk level of the message topic; or the average of the backlog risk levels of the multiple queue groups may be calculated as the backlog risk level of the message topic, which is not limited in this embodiment. A minimal sketch of this calculation is given below.
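  • The Python sketch below is one way this calculation might look. The numeric thresholds standing in for the two-dimensional table are placeholders, and taking the maximum over queue groups is only one of the aggregation options described above.

```python
def group_risk_level(backlog_amount, head_backlog_seconds):
    """Backlog risk level of a queue group, derived from the backlog amount and
    head-of-queue backlog duration of its high-priority (target) queue.
    The thresholds below are placeholders for the two-dimensional table."""
    if backlog_amount > 10000 or head_backlog_seconds > 60:
        return "high"
    if backlog_amount > 1000 or head_backlog_seconds > 10:
        return "medium"
    return "low"

def topic_risk_level(queue_groups):
    """queue_groups: iterable of (backlog_amount, head_backlog_seconds) pairs, one per
    queue group. The topic-level risk here is taken as the highest group-level risk."""
    order = {"low": 0, "medium": 1, "high": 2}
    levels = [group_risk_level(amount, age) for amount, age in queue_groups]
    return max(levels, key=order.__getitem__) if levels else "low"

# Example: topic_risk_level([(500, 2), (20000, 5), (50, 1)]) -> "high"
```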
  • Optionally, the scheduling node 30 can adjust the short message receiving rate according to the number of times the target resource object feeds back each backlog risk level within the set duration range.
  • Each time a short message is sent, the resource object may feed back a backlog risk level.
  • the data collection module in the scheduling node can collect the backlog risk levels of short message feedback from resource objects in queues in different scenarios according to the labels of multiple dimensions of short messages, and calculate the distribution of backlog risk levels.
  • the scheduled tasks in the scheduling node can regularly obtain the backlog risk level, and adjust the speed according to the rules and scenarios.
  • the timing duration of the timing task may be 30 seconds, 60 seconds, 90 seconds or other durations, which is not limited in this embodiment.
  • When the data collection module in the scheduling node adjusts the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object within the set duration range, it may, based on the number of times the target resource object feeds back each of the at least one backlog risk level within that duration range, calculate the respective proportions of feedback times of the at least one backlog risk level.
  • The proportions of feedback times of the at least one backlog risk level can be used to represent the distribution characteristics of the different backlog risk levels.
  • According to the proportions of feedback times of the at least one backlog risk level and the preset correspondence between the feedback-times proportion ranges of the backlog risk levels and the speed regulation ratios, the scheduling node can determine the target speed regulation ratio.
  • The preset correspondence between the feedback-times proportion ranges of the backlog risk levels and the speed regulation ratios can be shown in the following table:
  • When determining the target speed regulation ratio based on the above correspondence, the scheduling node may, in order from high to low, judge in turn whether the proportion of feedback times of each of the at least one backlog risk level is greater than the proportion threshold corresponding to that level. When it is determined that the proportion of feedback times of any one of the at least one backlog risk level is greater than that level's proportion threshold, the judging operation stops, and the target speed regulation ratio is determined according to the proportion of feedback times of that backlog risk level and the correspondence between its feedback-times proportion range and the speed regulation ratio.
  • Step S1: count the number of feedbacks corresponding to the high backlog risk level within one backlog risk level collection cycle.
  • If the proportion of feedback times of the high backlog risk level does not reach its proportion threshold, step S2 may be performed.
  • Step S2: count the number of feedbacks corresponding to the medium backlog risk level within one backlog risk level collection cycle.
  • If the proportion of feedback times of the medium backlog risk level is within the 90-100(%) range, the above table is queried and the target speed regulation ratio is determined to be 0.5; if it is within the 80-90(%) range, the target speed regulation ratio is determined to be 0.7; if it is within the 50-80(%) range, the target speed regulation ratio is determined to be 0.8; if it is within the 30-50(%) range, the target speed regulation ratio is determined to be 0.9; if it is below 30(%), step S3 is executed.
  • Step S3: count the number of feedbacks corresponding to the low backlog risk level within one backlog risk level collection cycle.
  • If the proportion of feedback times of the low backlog risk level is within the 90-100(%) range, the above table is queried and the target speed regulation ratio is determined to be 1.0; if it is within the 80-90(%) range, the target speed regulation ratio is determined to be 1.5; if it is within the 50-80(%) range, the target speed regulation ratio is determined to be 2.0; if it is within the 30-50(%) range, the target speed regulation ratio is determined to be 2.7; if it is below 30(%), step S4 is executed.
  • Step S4: set the speed regulation ratio to 4.0.
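  • The Python sketch below mirrors steps S1 to S4. The ranges and ratios for the medium and low levels are the values listed in this example; the values for the high backlog risk level are placeholders, since they are not spelled out here, and the final rate update follows the "target ratio times preset basic receiving rate" rule described earlier.

```python
# (lower %, upper %, speed regulation ratio) per backlog risk level.
RATIO_TABLE = {
    "high":   [(90, 100, 0.1), (80, 90, 0.2), (50, 80, 0.3), (30, 50, 0.4)],  # placeholders
    "medium": [(90, 100, 0.5), (80, 90, 0.7), (50, 80, 0.8), (30, 50, 0.9)],
    "low":    [(90, 100, 1.0), (80, 90, 1.5), (50, 80, 2.0), (30, 50, 2.7)],
}
DEFAULT_RATIO = 4.0  # step S4: no level reached its 30% threshold

def target_speed_ratio(proportions):
    """proportions: feedback-count shares within one collection cycle,
    e.g. {"high": 0.05, "medium": 0.15, "low": 0.80}. Levels are checked from
    high to low; the first level whose share falls into a table range wins."""
    for level in ("high", "medium", "low"):
        pct = proportions.get(level, 0.0) * 100
        for lower, upper, ratio in RATIO_TABLE[level]:
            if lower <= pct <= upper:
                return ratio
    return DEFAULT_RATIO

def updated_receiving_rate(base_rate_qps, proportions):
    # Updated short message receiving rate = preset basic rate x target speed ratio.
    return base_rate_qps * target_speed_ratio(proportions)
```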
  • the backlog capacity of downstream resources can be measured more accurately, and whether the downstream resources are sufficient can be judged.
  • the scheduling node can further set the token throwing rate of the token bucket for controlling short message flow according to the short message receiving rate.
  • a short message may correspond to a token in the token bucket.
  • For example, if the short message receiving rate is 14W (140,000) QPS, the token release rate of the token bucket can also be set to 14W QPS.
  • the scheduling node can use the token bucket algorithm to control the short message sending rate of the message system.
  • The token bucket algorithm means that, when a request to send a short message arrives, if there is at least one token in the token bucket, the short message is received and one token is removed from the bucket.
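  • A simplified token bucket sketch could look like the following; the class and parameter names are illustrative, not taken from this application.

```python
import time

class TokenBucket:
    """Tokens are added at a throwing rate derived from the short message
    receiving rate; one short message consumes one token."""

    def __init__(self, rate_qps, capacity):
        self.rate = rate_qps           # token throwing rate, set from the receiving rate
        self.capacity = capacity       # maximum number of tokens the bucket can hold
        self.tokens = capacity
        self.last = time.monotonic()

    def set_rate(self, rate_qps):
        # Called after the short message receiving rate has been adjusted.
        self.rate = rate_qps

    def try_consume(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:           # at least one token left: accept the short message
            self.tokens -= 1
            return True
        return False                   # no token: do not pull the message for now
```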
  • Optionally, any scheduling node can use the flow control method provided by the above and following embodiments to control its short message receiving rate, realizing flow control on a single machine (that is, a single scheduling node).
  • In this way, cluster-level flow control is transformed into single-machine flow control, which avoids the problems caused by data interaction among multiple devices in the cluster and greatly improves data processing capability and response efficiency.
  • the embodiment of the present application also provides a flow control method executed on the scheduling node side.
  • Fig. 4 is a schematic flow diagram of a flow control method provided by an exemplary embodiment of the present application. As shown in Fig. 4, when executed on any scheduling node side, the method mainly includes the following steps:
  • Step 401: receive the short message sent by the message system at the set short message receiving rate.
  • Step 402: distribute the short message to the target resource object, so as to send the short message to the target user through the target resource object.
  • Step 403: obtain the backlog risk level fed back by the target resource object for the short message.
  • Step 404: adjust the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object.
  • Further optionally, after receiving the short message sent by the message system, the method further includes: determining, from multiple message queues of the message system, the target message queue to which the short message belongs. Adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object includes: determining, from the backlog risk levels fed back by the target resource object, the backlog risk levels corresponding to the target message queue; and adjusting the short message receiving rate corresponding to the target message queue according to the distribution characteristics of those backlog risk levels.
  • the multiple message queues respectively correspond to multiple scenarios, and the short messages in the message queues corresponding to any scenario have the same scenario tag.
  • Optionally, obtaining the backlog risk level corresponding to the target resource object may include: determining multiple queue groups contained in the message topic corresponding to the target resource object, where any queue group contains multiple priority queues; calculating the respective backlog risk levels of the multiple queue groups according to the short message backlog amount and/or short message backlog time of the priority queues contained in each of the multiple queue groups; and calculating the backlog risk level of the message topic according to the respective backlog risk levels of the multiple queue groups.
  • Optionally, calculating the respective backlog risk levels of the multiple queue groups may include: for any queue group, determining the short message backlog amount of the higher-priority target priority queue in the queue group and the backlog duration of the head-of-queue short message of the target priority queue; and calculating the backlog risk level of the queue group according to the short message backlog amount of the target priority queue and the backlog duration of its head-of-queue short message.
  • Optionally, adjusting the short message receiving rate may include: calculating, according to the number of times the target resource object feeds back at least one backlog risk level within a set duration range, the proportion of feedback times of each of the at least one backlog risk level; determining a target speed regulation ratio according to the proportion of feedback times of each of the at least one backlog risk level and the preset correspondence between feedback-times proportion ranges of backlog risk levels and speed regulation ratios; and calculating an updated value of the short message receiving rate according to the target speed regulation ratio and a preset basic receiving rate.
  • Optionally, determining the target speed regulation ratio may include: judging in turn, in order from high to low, whether the proportion of feedback times of each of the at least one backlog risk level is greater than the proportion threshold corresponding to that level; and when it is determined that the proportion of feedback times of any one of the at least one backlog risk level is greater than the proportion threshold of that level, stopping the judging operation, and determining the target speed regulation ratio according to the proportion of feedback times of that backlog risk level and the correspondence between its feedback-times proportion range and the speed regulation ratio.
  • Optionally, after adjusting the short message receiving rate, the method further includes: setting, according to the short message receiving rate, the token delivery rate of the token bucket used to control short message traffic; and controlling the short message sending rate of the message system by using the token bucket algorithm according to the number of remaining tokens in the token bucket.
  • the scheduling node can receive the short message sent by the message system according to the set short message receiving rate, and send the short message to the target user through the target resource object.
  • the scheduling node can obtain the backlog risk level fed back by the target resource object for the short message, and adjust the short message receiving rate according to the distribution characteristics of the backlog risk level fed back by the target resource object.
  • In this way, the scheduling node can adjust the short message receiving rate according to the backlog risk of the resource objects, realizing automatic control of the downstream short message traffic, which helps reduce labor costs. At the same time, the speed-regulation lag caused by human factors is reduced, so the timeliness and accuracy are higher.
  • It should be noted that the execution subject of each step of the method may be the same device, or the method may be executed by different devices.
  • the execution subject of steps 401 to 404 may be device A; for another example, the execution subject of steps 401 and 402 may be device A, and the execution subject of step 403 may be device B; and so on.
  • Fig. 5 shows a schematic structural diagram of a server provided by an exemplary embodiment of the present application, and the server is suitable for a scheduling node in the short message service system provided by the foregoing embodiments.
  • the server includes: a memory 501 , a processor 502 and a communication component 503 .
  • the memory 501 is used to store computer programs, and can be configured to store other various data to support operations on the server. Examples of such data include instructions for any application or method operating on the server.
  • The memory 501 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • The processor 502, coupled with the memory 501, is used to execute the computer program in the memory 501 so as to: receive, through the communication component 503 and at the set short message receiving rate, the short message sent by the message system; distribute the short message to the target resource object, so as to send the short message to the target user through the target resource object; obtain the backlog risk level fed back by the target resource object for the short message; and adjust the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object.
  • Optionally, the processor 502 is further configured to determine, from multiple message queues of the message system, the target message queue to which the short message belongs. When adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels, the processor 502 is configured to: determine, from the backlog risk levels fed back by the target resource object, the backlog risk levels corresponding to the target message queue; and adjust the short message receiving rate corresponding to the target message queue according to the distribution characteristics of those backlog risk levels.
  • the multiple message queues respectively correspond to multiple scenarios, and the short messages in the message queues corresponding to any scenario have the same scenario tag.
  • Optionally, when acquiring the backlog risk level corresponding to the target resource object, the processor 502 is specifically configured to: determine the multiple queue groups contained in the message topic corresponding to the target resource object, where any queue group contains multiple priority queues; calculate the respective backlog risk levels of the multiple queue groups according to the short message backlog amount and/or short message backlog time of the priority queues contained in each of the multiple queue groups; and calculate the backlog risk level of the message topic according to the respective backlog risk levels of the multiple queue groups.
  • Optionally, when calculating the respective backlog risk levels of the multiple queue groups according to the short message backlog amount and/or short message backlog time of the priority queues contained in each of the multiple queue groups, the processor 502 is specifically configured to: for any queue group, determine the short message backlog amount of the higher-priority target priority queue in the queue group and the backlog duration of the head-of-queue short message of the target priority queue; and calculate the backlog risk level of the queue group according to the short message backlog amount of the target priority queue and the backlog duration of its head-of-queue short message.
  • Optionally, when adjusting the short message receiving rate according to the distribution characteristics of the backlog risk levels fed back by the target resource object, the processor 502 is specifically configured to: calculate, according to the number of times the target resource object feeds back at least one backlog risk level within the set duration range, the respective proportions of feedback times of the at least one backlog risk level; determine the target speed regulation ratio according to those proportions and the preset correspondence between feedback-times proportion ranges of backlog risk levels and speed regulation ratios; and calculate an updated value of the short message receiving rate according to the target speed regulation ratio and the preset basic receiving rate.
  • Optionally, when determining the target speed regulation ratio according to the respective proportions of feedback times of the at least one backlog risk level and the preset correspondence between the feedback-times proportion ranges of backlog risk levels and speed regulation ratios, the processor 502 is specifically configured to: judge in turn, in order from high to low, whether the respective proportions of feedback times of the at least one backlog risk level are greater than the proportion thresholds corresponding to the at least one backlog risk level; and when it is determined that the proportion of feedback times of any backlog risk level is greater than that level's proportion threshold, stop the judging operation, and determine the target speed regulation ratio according to the proportion of feedback times of that backlog risk level and the correspondence between its feedback-times proportion range and the speed regulation ratio.
  • Optionally, the processor 502 is further configured to: set, according to the short message receiving rate, the token throwing rate of the token bucket used to control short message traffic; and control the short message sending rate of the message system by using the token bucket algorithm according to the number of remaining tokens in the token bucket.
  • the server further includes: a power supply component 504 and other components.
  • Fig. 5 only schematically shows some components, which does not mean that the server only includes the components shown in Fig. 5 .
  • the communication component 503 is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices.
  • the device where the communication component is located can access a wireless network based on communication standards, such as WiFi, 2G, 3G, 4G or 5G, or a combination thereof.
  • the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • The communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the power supply component 504 provides power for various components of the device where the power supply component is located.
  • a power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the device in which the power supply component resides.
  • the scheduling node can receive the short message sent by the message system according to the set short message receiving rate, and send the short message to the target user through the target resource object.
  • the scheduling node can obtain the backlog risk level fed back by the target resource object for the short message, and adjust the short message receiving rate according to the distribution characteristics of the backlog risk level fed back by the target resource object within the set time range.
  • In this way, the scheduling node can adjust the short message receiving rate according to the backlog risk of the resource objects, realizing automatic control of the downstream short message traffic, which helps reduce labor costs. At the same time, the speed-regulation lag caused by human factors is reduced, so the timeliness and accuracy are higher.
  • the embodiment of the present application also provides a computer-readable storage medium storing a computer program.
  • When the computer program is executed, the steps that can be performed by the server in the above method embodiments can be implemented.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Accordingly, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, and the instruction means implement the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-permanent storage in computer readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

本申请实施例提供一种流量控制方法、设备及存储介质。在流量控制方法中,调度节点可按照设定的短消息接收速率,接收消息系统发送的短消息,并通过目标资源对象将该短消息发送至目标用户。同时,调度节点可获取目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率。在这种实施方式中,基于资源对象的反馈机制,调度节点可根据资源对象的积压风险,对短消息接收速率进行调整,实现了短消息下放流量的自动控制,有利于降低人力资源成本。同时,可降低人为因素产生的调速滞后影响,时效性和准确性更高。

Description

流量控制方法、设备及存储介质
本申请要求于2022年01月28日提交中国专利局、申请号为202210108247.X、申请名称为“流量控制方法、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,尤其涉及一种流量控制方法、设备及存储介质。
背景技术
短信是直接触达用户的一种常见且有效的手段,被广泛应用于多种信息推送场景。目前,存在一种智能的短信服务平台,可用于接收上游客户发送的短信,并将短信通过下游运营商下发给目标用户。在一些场景中,当短信推送需求量较大(例如购物节集中推送短信验证码)时,客户(例如商家、金融机构等)向短信服务平台发送短信产生的流量,远大于短信服务平台的下游运营商所能承接的流量。在这种场景下,需要对短信平台的短信下发速度进行调整,以避免对下游运营商的资源造成较大负荷。
现有技术中,在短信服务平台侧通常依赖于人工估算下游运营商的承接能力,并基于该承接能力由人工手动完成短信下放速度的调整。这种人工调速方式准确率较差,且人力成本较高。因此,有待提出一种新的解决方案。
发明内容
本申请的多个方面提供一种流量控制方法、设备及存储介质，用以实现短消息下放流量的自动控制，降低人力资源成本。
本申请实施例还提供一种流量控制方法,包括:按照设定的短消息接收速率,接收消息系统发送的短消息;将所述短消息分配给目标资源对象,以通过所述目标资源对象将所述短消息发送至目标用户;获取所述目标资源对象针对所述短消息反馈的积压风险等级;根据所述目标资源对象反馈的积压风险等级的分布特征,调整所述短消息接收速率。
进一步可选地，接收消息系统发送的短消息之后，还包括：从所述消息系统的多个消息队列中，确定所述短消息所属的目标消息队列；根据所述目标资源对象反馈的积压风险等级的分布特征，调整所述短消息接收速率，包括：从所述目标资源对象反馈的积压风险等级中，确定与所述目标消息队列对应的积压风险等级；根据与所述目标消息队列对应的积压风险等级的分布特征，调整与所述目标消息队列对应的短消息接收速率。
进一步可选地,所述多个消息队列分别与多个场景对应,任一场景对应的消息队列中的短消息具有相同的场景标签。
进一步可选地,获取所述目标资源对象对应的积压风险等级,包括:确定所述目标资源对象对应的消息主题包含的多个队列分组;任一队列分组包含多个优先级队列;根据所述多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算所述多个队列分组各自的积压风险等级;根据所述多个队列分组各自的积压风险等级,计算所述消息主题的积压风险等级。
进一步可选地,根据所述多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算所述多个队列分组各自的积压风险等级,包括:针对所述任一队列分组,确定所述队列分组中的优先级较高的目标优先级队列的短消息积压量,以及所述目标优先级队列的队首短消息的积压时长;根据所述目标优先级队列的短消息积压量以及所述队首短消息的积压时长,计算所述队列分组的积压风险等级。
进一步可选地,根据所述目标资源对象反馈的积压风险等级的分布特征,调整所述短消息接收速率,包括:根据所述目标资源对象在设定时长范围内对至少一种积压风险等级的反馈次数,计算所述至少一种积压风险等级各自的反馈次数占比;根据所述至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例;根据所述目标调速比例以及预设的基础接收速率,计算所述短消息接收速率的更新值。
进一步可选地,根据所述至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例,包括:按照由高到低的顺序,依次判断所述至少一种积压风险等级各自的反馈次数占比是否大于所述至少一种积压风险等级各自对应的占比阈值;在确定所述至少一种积压风险等级中的任一积压风险等级的反馈次数占比大于所述积压风险等级的占比阈值时,停止所述判断的操作,并根据积压风险等级的反馈次数占比以及所述积压风险等级的反馈次数比例范围和调速比例的对应关系,确定所述目标调速比例。
进一步可选地,调整所述短消息接收速率之后,还包括:根据所述短消息接收速率,设置用于控制短消息流量的令牌桶的令牌投放速率;根据所述令牌桶的剩余令牌数,采用令牌桶算法,对所述消息系统的短消息发送速率进行控制。
本申请实施例还提供一种服务器,包括:存储器和处理器;所述存储器用于存储一条或多条计算机指令;所述处理器用于执行所述一条或多条计算机指令以用于:执行本申请实施例提供的方法中的步骤。
本申请实施例还提供一种存储有计算机程序的计算机可读存储介质,计算机程序被处理器执行时能够实现本申请实施例提供的方法中的步骤。
在本申请实施例中,调度节点可按照设定的短消息接收速率,接收消息系统发送的短消息,并通过目标资源对象将该短消息发送至目标用户。同时,调度节点可获取目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率。在这种实施方式中,基于资源对象的反馈机制,调度节点可根据资源对象的积压风险,对短消息接收速率进行调整,实现了短消息下放流量的自动控制,有利于降低人力资源成本。同时,可降低人为因素产生的调速滞后影响,时效性和准确性更高。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1为本申请一示例性实施例提供的短消息服务系统的结构示意图;
图2为本申请一示例性实施例提供的短消息处理的流程示意图;
图3为本申请一示例性实施例提供的资源对象的结构示意图;
图4为本申请一示例性实施例提供的流量控制方法的流程示意图;
图5为本申请一示例性实施例提供的服务器的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请具体实施例及相应的附图对本申请技术方案进行清楚、完整地描述。显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在本申请实施例中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本申请。在本申请实施例和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义,“多种”一般包含至少两种,但是不排除包含至少一种的情况。
针对现有技术中,通过人工对短信下放速度进行调整存在的准确率较差,且人力成本较高的技术问题,在本申请一些实施例中,提供了一种解决方案,以下结合附图,详细说明本申请各实施例提供的技术方案。
图1为本申请一示例性实施例提供的短消息服务系统的结构示意图,如图1所示,短消息服务系统100包括:客户端10、接入节点20、调度节点30、决策节点40以及消息系统50。其中,调度节点的数量可以为多个,如图1所示。多个调度节点可对短消息进行分布式并行处理。
在短消息服务系统100中,短消息是指通信传输的对象,是一种信息的载体,短消息可实现为文本短消息(例如短信)、语音短消息、图像短消息等,本实施例不做限制。
其中，客户端10位于短消息服务系统100的客户侧，该客户指的是存在短消息发送需求的用户。例如，在电子商务场景中，该客户可实现为商家。商家可通过短消息服务系统100向消费者发放营销广告、优惠信息等。在电子支付场景中，该客户可实现为金融机构。金融机构可通过短消息服务系统100向待支付用户发送验证码，以确保支付过程的安全性。其中，该客户端10可安装在用户侧的智能设备上，例如计算机、平板电脑、智能手机等，本实施例不做限制。
其中,接入节点20位于短消息服务系统100的接入层,指的是用于接收外部设备(例如客户端设备)发送的短消息的设备,可实现为路由器、交换机、调制解调器等,本实施例不做限制。在本实施例中,接入节点20,用于:接收客户端发送的短消息,并将该短消息发送至任一调度节点。图1示意了接入节点20将短消息发送至调度节点30的情形。调度节点30可以是与接入节点20距离最近的调度节点。除调度节点30之外,接入节点20也可将短消息发送至调度节点30之外的其他调度节点,不再一一进行图示。
调度节点30可在接收到该短消息后,可根据该短消息,向消息系统50发送积压请求,以使得消息系统50对该短消息进行积压,以便于后续对短消息进行分布式处理。
其中，消息系统50，主要用于：将接收到的短消息添加到消息队列中，并按照“先进先出”的原则，对消息队列中的短消息进行出队处理。其中，任一短消息出队时，消息系统50可将该短消息发送到任一调度节点。图1示意了消息系统50将短消息发送至调度节点30的情形，除调度节点30之外，消息系统50也可将短消息发送至调度节点30之外的其他调度节点。即，多个调度节点均可从消息系统50获取短消息，以对消息系统50积压的短消息进行分布式处理，从而提升短消息发送效率。
任一调度节点接收到消息系统50发送的短消息后,可将该短消息分发至对应的用户,并在分发的过程中对消息系统50进行流量控制。其中,不同的调度节点对消息系统的流量控制逻辑相同,因此,在后续实施例中将以任一调度节点(即调度节点30)为例进行示例性说明。
调度节点30接收到消息系统50发送的短消息后，可基于决策节点40对该短消息的资源路径进行决策，并利用决策出的资源路径将该短消息发送至目标用户。其中，该目标用户指的是短消息最终触达的用户，例如电子商务场景中的消费者、电子支付场景中的待支付用户等等。
在短消息服务系统100中,消息系统50可基于单个服务器或者服务器集群实现。该单个服务器或者服务器集群上,可部署有短消息处理应用,以向消费者提供短消息相关的服务。在一些实施例中,该短消息处理应用可实现为MetaQ(一种短消息服务引擎)。
其中,任一调度节点,可基于服务器设备实现,该服务器设备可以是常规服务器、云服务器、云主机、虚拟中心或者云上的弹性计算实例等,本实施例不做限制。其中,服务器设备的构成主要包括处理器、硬盘、内存、系统总线等,和通用的计算机架构类似,不再赘述。在本实施例中,任一调度节点可执行流量控制方法,以控制从消息系统50处接收到短消息的速率。以下将以调度节点30为例,对任一调度节点侧执行的流量控制方法进行示例性说明。
如图1所示,调度节点30包括流量控制模块以及资源对象。调度节点30可基于流量控制模块,按照设定的短消息接收速率,接收消息系统50发送的短消息,并将该短消息分配给目标资源对象,以通过该目标资源对象将该短消息发送至目标用户。调度节点30可基于流量控制模块,获取该目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象在设定时长范围内反馈的积压风险等级的分布特征,调整该短消息接收速率。
其中，短消息接收速率，指的是调度节点从消息系统处接收短消息的速率。该短消息接收速率，可采用QPS（Queries-per-second，每秒查询率）进行表示。在本实施例中，该短消息接收速率可以是动态调整的。例如，可按照设定的调整周期，动态调整短消息接收速率，也可根据资源对象的使用情况动态调整短消息接收速率，本实施例不做限制。该设定的短消息接收速率，可以是上一次动态调整后得到的短消息接收速率。
其中,资源对象,指的是对调度节点30对应的物理资源进行组合后得到的虚拟资源,一个资源对象可包括多个物理资源。每个物理资源可称为资源对象的一个子资源对象。其中,物理资源,指的是用于发送短消息的通信通道资源,一个物理资源,可对应调度节点与运营商之间的一个通信通道。其中,该通信通道用于提供短消息传输能力,一个通信通道可承接多个连接以实现短消息的并发处理。
其中,调度节点30可对应多个资源对象,在接收到消息系统50发送的短消息之后,可对该短消息的下发路径进行决策。在一些实施例中,调度节点可通过图1示意的决策节点40对短消息的下发路径进行决策。如图1所示,调度节点30可向决策节点40发送决策请求,决策节点40可按照设定的决策逻辑,从调度节点30对应的多个资源对象中,选择出用于下发该短消息的资源对象,并向调度节点30返回资源对象决策结果。
其中，该设定的决策逻辑可包括：按照短消息对应的短消息类型进行决策、按照短消息所属的地区签名进行决策、按照短消息的投诉率进行决策等，本实施例不做限制。决策节点40在具体进行决策时，还可进一步考虑不同通信通道的QPS，以平衡不同通信通道的压力。在本实施例中，将调度节点从多个资源对象中决策出的用于下放该短消息的资源对象，标记为目标资源对象。
通过决策节点40决策出目标资源对象后,调度节点30可将该短消息分配给目标资源对象,以通过该目标资源对象将该短消息发送至目标用户。
对于目标资源对象而言,在接收到短消息后,可利用决策节点40进行二层决策,以决策出用于下发短消息的物理资源。如图1所示,调度节点30可向决策节点40发送决策请求。决策节点40可按照目标资源对象对应的多个物理资源各自的利用率,选择出用于下发该短消息的物理资源(即子资源对象),并向调度节点30返回物理资源决策结果。调度节点30可基于该决策出的物理资源下发短消息,并通过网关将短消息发送至用户的终端设备。
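作为对上述“一层决策选资源对象、二层决策选物理资源”过程的一个极简示意，下面给出一段Python代码草图；其中的字段名以及具体决策依据（按短消息类型过滤、按当前QPS与通道利用率择优）均为示例性假设，并非决策节点40的限定实现：

```python
# 一层决策：从多个资源对象中选出用于下发该短消息的目标资源对象（示意写法）
def choose_resource_object(resource_objects, message):
    # 示例：先按短消息类型过滤，再选当前QPS最低者，以平衡不同资源对象的压力
    candidates = [r for r in resource_objects
                  if message["type"] in r["supported_types"]]
    return min(candidates, key=lambda r: r["current_qps"])

# 二层决策：在目标资源对象包含的物理资源（通信通道）中，选择利用率最低者（示意写法）
def choose_channel(resource_object):
    return min(resource_object["channels"], key=lambda ch: ch["utilization"])

resource_objects = [
    {"supported_types": {"验证码", "通知"}, "current_qps": 120,
     "channels": [{"id": "ch-1", "utilization": 0.4},
                  {"id": "ch-2", "utilization": 0.7}]},
    {"supported_types": {"验证码"}, "current_qps": 80,
     "channels": [{"id": "ch-3", "utilization": 0.9}]},
]
target = choose_resource_object(resource_objects, {"type": "验证码"})
channel = choose_channel(target)  # 通过该通道下发短消息，并经网关发送至用户终端
```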
在本实施例中，调度节点30可在每次确定短消息对应的目标资源对象时，获取该目标资源对象针对该短消息反馈的积压风险等级。其中，该积压风险等级，用于表示目标资源对象当前的消息积压风险的高低程度。例如，积压风险等级可以为高级、中级、低级；或者，积压风险等级可以从高到低依次为一级、二级、三级、四级、五级等。当目标资源对象包含的物理资源的数量相对固定时，向目标资源对象分配短消息的速率越快，则目标资源对象的消息积压风险越高。其中，积压风险等级，可根据目标资源对象包含的物理资源的实际积压情况进行计算得到，具体的计算过程将在后续的实施例中进行说明，此处不赘述。
对调度节点30而言,可在每次通过目标资源对象将接收到的短消息发送给用户时,实时获取目标资源对象反馈的积压风险等级,以感知目标资源对象实时的短消息积压情况。在此基础上,调度节点30可根据目标资源对象反馈的积压风险等级的分布特征,调整短消息接收速率。
在一些实施例中,积压风险等级的分布特征可根据目标资源对象反馈的一定数量的积压风险等级确定。例如,可根据目标资源对象反馈的500个积压风险等级,分析积压风险等级的分布特征。例如,该500个积压风险等级的分布特征分析结果可以为:低积压风险占比为30%、高积压风险占比为50%、中积压风险占比为20%。
在另一些实施例中,积压风险等级的分布特征可根据目标资源对象在设定时长范围内反馈的积压风险等级确定。其中,该设定时长范围,可根据需求进行设置,例如,可设置设定时长范围为30秒、60秒、90秒等,本实施例不做限制。
其中,积压风险等级的分布特征,用于描述不同积压风险等级的反馈频次。高积压风险等级的反馈频次越大,则表明目标资源对象的短消息积压情况越严重,调度节点30应当适当降低短消息接收速率以避免下游资源被击穿。低积压风险等级的反馈频次越大,则表明目标资源对象的短消息积压情况越轻微,应当适当提升短消息接收速率以充分利用已有资源。
在本实施例中,调度节点可按照设定的短消息接收速率,接收消息系统发送的短消息,并通过目标资源对象将该短消息发送至目标用户。同时,调度节点可获取目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象在设定时长范围内反馈的积压风险等级的分布特征,调整该短消息接收速率。在这种实施方式中,基于资源对象的反馈机制,调度节点可根据资源对象的积压风险,对短消息接收速率进行调整,实现了短消息下放流量的自动控制,有利于降低人力资源成本。同时,可降低人为因素产生的调速滞后影响,时效性和准确性更高。
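为便于理解上述反馈与调速过程，下面给出一个示意性的Python代码草图，展示调度节点如何在一个统计窗口内收集资源对象反馈的积压风险等级，并由定时任务根据其分布特征重新计算短消息接收速率；其中的类名、方法名与窗口长度均为示例性假设，并非本申请限定的实现：

```python
import threading
from collections import Counter

class ReceiveRateController:
    """调度节点接收速率控制的极简示意（类名、窗口长度等均为示例假设）。"""

    def __init__(self, base_rate_qps, window_seconds=60):
        self.base_rate_qps = base_rate_qps        # 预设的基础接收速率
        self.current_rate_qps = base_rate_qps     # 当前生效的短消息接收速率
        self.window_seconds = window_seconds      # 统计窗口，即设定时长范围
        self._feedback = []                       # 本窗口内反馈的积压风险等级
        self._lock = threading.Lock()

    def record_feedback(self, risk_level):
        """每次目标资源对象针对短消息反馈积压风险等级时调用。"""
        with self._lock:
            self._feedback.append(risk_level)

    def adjust(self, ratio_from_distribution):
        """定时任务：按本窗口内各积压风险等级的反馈占比重新计算接收速率。"""
        with self._lock:
            counts, total = Counter(self._feedback), len(self._feedback)
            self._feedback = []
        if total == 0:
            return self.current_rate_qps          # 无反馈时保持原速率
        shares = {level: n / total for level, n in counts.items()}
        self.current_rate_qps = self.base_rate_qps * ratio_from_distribution(shares)
        return self.current_rate_qps
```

其中，ratio_from_distribution表示由反馈次数占比到调速比例的映射，其一种可能的写法可参考后文步骤S1～S4之后给出的代码草图。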
在一些示例性的实施例中,在消息系统侧,短消息按照队列进行存储。消息系统具有多个消息队列,每个消息队列可按照各自的短消息发送速率向不同的调度节点下发短消息。
基于此，调度节点30在接收消息系统50发送的短消息之后，可从消息系统50的多个消息队列中，确定该短消息所属的目标消息队列。相应地，调度节点30根据该目标资源对象在设定时长范围内反馈的积压风险等级的分布特征，调整该短消息接收速率时，可从该目标资源对象在设定时长范围内反馈的积压风险等级中，确定与该目标消息队列对应的积压风险等级；根据与该目标消息队列对应的积压风险等级的分布特征，调整与该目标消息队列对应的短消息接收速率。即，调度节点可实现消息队列级别的调速。
可选地,消息系统侧的消息队列按照场景进行划分。即,消息系统的多个消息队列分别与多个场景对应,每个消息队列可称为场景队列。其中,场景用于表示短消息的用途,多个场景队列,分别用于存放客户端发送的不同场景对应的短消息。例如,消息系统可包括教育场景对应的消息队列、金融场景对应的消息队列、餐饮场景对应的消息队列、音视频场景对应的消息队列等等。除上述场景队列外,还可包括用于存放未能进行场景分类的短消息的通用场景队列,如图2所示。
其中,对于客户端而言,当需要发送不同场景的短消息时,可向短消息服务系统100申请不同的账号。当通过某一账号向短消息服务系统发送短消息时,该被发送的短消息可携带与账号匹配的场景标签。短消息服务系统100中的消息系统50,可根据短消息携带的场景标签,将短消息添加至对应的场景队列中。
在本实施例中,为便于将不同的短消息存放进不同的场景队列,可为每个短消息添加不同维度的标签。可选地,除了场景标签之外,还可为短消息添加短消息类型和/或投诉率对应的标签。其中,该短消息类型包括:验证码类型、通知类型或者广告类型。例如,金融场景对应的消息队列中,每个短消息的标签可以为:金融_验证码_0001(投诉率为1%)、金融_验证码_0002(投诉率为2%)、金融_通知_0001(投诉率为5%)等等。
对于消息系统50而言，其上的不同场景队列，可分别按照各自的短消息发送速率向各自对应的调度节点下放被积压的短消息。调度节点30在接收消息系统发送的短消息之后，可确定该短消息所属的目标场景队列。并通过图2所示的一层决策，确定用于下发该短消息的目标资源对象。确定目标资源对象后，调度节点30可通过目标资源对象将该短消息发送至用户，并可获取目标资源对象针对该短消息反馈的积压风险等级。基于目标资源对象在设定时长范围内反馈的积压风险等级，调度节点30可确定与该目标场景队列对应的积压风险等级的分布，并根据与目标场景队列对应的积压风险等级的分布，调整与目标场景对应的短消息接收速率。
继续以金融场景以及教育场景为例。假设，某一调度节点按照速率V1接收消息系统的金融队列下放的短消息，按照速率V2接收消息系统的教育队列下放的短消息。在每次接收到短消息后，调度节点可利用对应的资源对象将短消息发送至用户，并获取资源对象针对短消息反馈的积压风险等级。若在设定时长范围内，接收到资源对象针对教育队列中的短消息反馈的高积压风险等级的次数较多，则调度节点可降低速率V2。若在设定时长范围内，接收到针对金融队列中的短消息反馈的低积压风险等级的次数较少，则调度节点可提升速率V1。
在这种实施方式中,通过统计资源对象对不同消息队列中的短消息反馈的积压风险等级,可实时计算出资源对象针对不同消息队列中的短消息的承接压力。基于计算出的承接压力,调度节点可调整针对不同消息队列的短消息接收速率,从而实现了队列级别的、精准的流量控制。
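下面的Python代码草图示意上述“按消息队列（场景）分别统计反馈、分别调速”的思路；场景标签与基础速率数值均为示例假设，ratio_from_distribution仍表示由反馈占比到调速比例的映射：

```python
from collections import Counter, defaultdict

# 按场景队列分别维护基础速率、当前速率与反馈统计（数值仅为示例）
base_rate = {"金融": 2000, "教育": 1000}      # 各场景队列的基础接收速率（QPS）
current_rate = dict(base_rate)
feedback = defaultdict(list)                   # 场景 -> 本周期内反馈的积压风险等级

def on_feedback(scene_tag, risk_level):
    # 资源对象针对某条短消息反馈积压风险等级时，按其所属场景队列计入统计
    feedback[scene_tag].append(risk_level)

def periodic_adjust(ratio_from_distribution):
    # 定时任务：分场景统计反馈占比并分别调速，各场景互不影响
    for scene, levels in feedback.items():
        if not levels:
            continue
        total = len(levels)
        shares = {lv: n / total for lv, n in Counter(levels).items()}
        current_rate[scene] = base_rate[scene] * ratio_from_distribution(shares)
        levels.clear()
    return current_rate
```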
以下将对调度节点获取资源对象对应的积压风险等级的可选实施方式进行示例性说明。
在一些可选的实施例中，任一资源对象可采用消息主题（topic）存放接收到的短消息，如图2所示的主题短消息a、主题短消息b以及主题短消息c。可选地，该主题队列，可基于EMQ（一种短消息推送服务）实现，本实施例对此不做限制。EMQ是一种封装后与MetaQ相似的消息队列，其底层数据库可拆分出大量的消息主题（topic）。
根据前述实施例的记载,资源对象,是指对物理资源进行组合后得到的虚拟资源。任一该资源对象对应的物理资源,可从该资源对象的消息主题中取出短消息,并发送至用户。
其中,任一消息主题可按照通信通道的连接数,被划分为多个队列分组(group),如图3所示的队列分组1、队列分组2…队列分组N。例如,可将50个连接划分为一个队列分组,从而使得一个队列分组的流量承接能力为50QPS。
其中，任一队列分组包含多个优先级队列。其中，优先级队列，指的是按照优先级高低不同生成的不同消息队列。在一些实施例中，可按照用户优先级和/或短消息类型进行划分。可选地，该短消息类型可包括：验证码类型、通知类型、广告类型等等。例如，会员用户的队列的优先级高于非会员用户的队列的优先级。又例如，验证码类型的队列的优先级高于通知类型的队列的优先级，通知类型的队列的优先级，高于广告类型的队列的优先级。如图3所示，每个队列分组可包括高、中、低三个优先级队列。优先级队列的优先级等级越高，则其队列中的短消息下发至用户的速率越快。
继续以目标资源对象为例,可选地,调度节点30在获取目标资源对象对应的积压风险等级时,可确定该目标资源对象对应的消息主题包含的多个队列分组,并根据该多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算该多个队列分组各自的积压风险等级。
其中,任一优先级队列的短消息积压量,指的是该优先级队列中尚未出队的短消息的数量。任一优先级队列的短消息积压时间,可以是该优先级队列中位于队首的短消息的积压时间。按照队列先进先出的原则,该队首的短消息的积压时间,是该优先级队列的最大积压时间。
以任一队列分组为例，可选地，可确定该队列分组中的优先级较高的目标优先级队列的短消息积压量，以及该目标优先级队列的队首短消息的积压时长；根据该目标优先级队列的短消息积压量以及该队首短消息的积压时长，计算该队列分组的积压风险等级。若队列分组包含高、中、低三个优先级队列，则目标优先级队列可以为高优先级队列。其中，短消息积压量、队首短消息的积压时长与积压风险等级的关系，可以采用二维表格进行表达。在该二维表格中，可存储有不同数值范围的短消息积压量以及队首短消息的积压时长与积压风险等级的对应关系。根据目标优先级队列的短消息积压量以及队首短消息的积压时长查询该二维表格，即可确定目标优先级队列的积压风险等级。由于目标优先级队列的优先级较高，因此，可直接将目标优先级队列的积压风险等级作为目标优先级队列所属的队列分组的积压风险等级。
同理,可基于上述实施方式,计算得到多个队列分组各自的积压风险等级。根据该多个队列分组各自的积压风险等级,可计算该消息主题的积压风险等级。可选地,可从该多个队列分组各自的积压风险等级中,选择最高的积压风险等级,作为该消息主题的积压风险等级;或者,可以计算多个队列分组的积压风险等级的平均值,作为该消息主题的积压风险等级,本实施例不做限制。
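下面给出根据短消息积压量与队首短消息积压时长查二维表格计算队列分组积压风险等级、并汇总得到消息主题积压风险等级的一个Python代码草图；表格的分档区间与等级划分均为示例性假设，并非本申请限定的数值：

```python
import bisect

# 示意性的二维阈值表：短消息积压量档位 × 队首短消息积压时长档位 -> 积压风险等级
BACKLOG_COUNT_EDGES = [1_000, 10_000]   # 积压量分档边界（条），示例值
HEAD_AGE_EDGES = [10, 60]               # 队首积压时长分档边界（秒），示例值
RISK_TABLE = [                          # 行对应积压量档位，列对应积压时长档位
    ["低", "低", "中"],
    ["低", "中", "高"],
    ["中", "高", "高"],
]
RISK_ORDER = {"低": 0, "中": 1, "高": 2}

def group_risk_level(backlog_count, head_age_seconds):
    """由高优先级队列的短消息积压量与队首短消息积压时长查表得到队列分组的积压风险等级。"""
    row = bisect.bisect_right(BACKLOG_COUNT_EDGES, backlog_count)
    col = bisect.bisect_right(HEAD_AGE_EDGES, head_age_seconds)
    return RISK_TABLE[row][col]

def topic_risk_level(groups):
    """groups为[(积压量, 队首积压时长), ...]；取各队列分组中最高的积压风险等级作为消息主题的等级。"""
    levels = [group_risk_level(count, age) for count, age in groups]
    return max(levels, key=lambda lv: RISK_ORDER[lv])

# 例：两个队列分组分别为(500条, 5秒)与(20000条, 90秒)，则消息主题的积压风险等级为"高"
print(topic_risk_level([(500, 5), (20_000, 90)]))
```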
在上述各实施例的基础上，调度节点30可根据该目标资源对象在设定时长范围内的积压风险等级的反馈次数，对短消息接收速率进行调整。继续参考图2所示，调度节点在将场景队列中的短消息下放到资源对象后，资源对象可反馈积压风险等级。调度节点中的数据收集模块，可按照短消息的多个维度的标签，收集资源对象对不同场景队列中的短消息反馈的积压风险等级，并计算积压风险等级的分布。调度节点中的定时任务，可定时获取积压风险等级，并按规则分场景调速。其中，该定时任务的定时的时长可以是30秒、60秒、90秒或者其他时长，本实施例不做限制。
可选地,调度节点中的数据收集模块在根据该目标资源对象在设定时长范围内反馈的积压风险等级的分布特征,调整该短消息接收速率时,可根据该目标资源对象在该设定时长范围内对至少一种积压风险等级的反馈次数,计算该至少一种积压风险等级各自的反馈次数占比。该至少一种积压风险等级各自的反馈次数占比,可用于表示不同积压风险等级的分布特征。根据该至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,调度节点可确定目标调速比例。
在一些可选的实施例中，预设的积压风险等级的反馈次数比例范围和调速比例的对应关系可采用以下表格所示：

积压风险等级 | 反馈次数占比范围（%） | 调速比例
高 | 90～100 | 0.1
高 | 60～90 | 0.2
高 | 30～60 | 0.3
中 | 90～100 | 0.5
中 | 80～90 | 0.7
中 | 50～80 | 0.8
中 | 30～50 | 0.9
低 | 90～100 | 1.0
低 | 80～90 | 1.5
低 | 50～80 | 2.0
低 | 30～50 | 2.7

当各积压风险等级的反馈次数占比均不超过30%时，调速比例取4.0。
应当理解，上述表格中的积压风险等级划分以及具体数值，仅用于示例性举例说明，并不对本申请的保护范围构成限制。
在一些示例性的实施例中,基于上述调速比例的对应关系,确定目标调速比例时,调度节点可按照由高到低的顺序,依次判断该至少一种积压风险等级各自的反馈次数占比是否大于该至少一种积压风险等级各自对应的占比阈值。在确定该至少一种积压风险等级中的任一积压风险等级的反馈次数占比大于该积压风险等级的占比阈值时,停止该判断的操作,并根据积压风险等级的反馈次数占比以及该积压风险等级的反馈次数比例范围和调速比例的对应关系,确定该目标调速比例。
以下将结合上述表格进行进一步示例性说明。
步骤S1、统计一个积压风险等级收集周期内,高积压风险等级对应的反馈次数。
若高积压风险等级的反馈次数的占比在90-100(%)范围内,则查询上述表格确定目标调速比例为0.1;若高积压风险等级的反馈次数的占比在60-90(%)范围内,则查询上述表格确定目标调速比例为0.2;若高积压风险等级的反馈次数的占比在30-60(%)范围内,则查询上述表格确定目标调速比例为0.3;若高积压风险等级的反馈次数的占比在30(%)以下,则可执行步骤S2。
步骤S2、统计一个积压风险等级收集周期内,中积压风险等级对应的反馈次数。
若中积压风险等级的反馈次数的占比在90-100(%)范围内,则查询上述表格确定目标调速比例为0.5;若中积压风险等级的反馈次数的占比在80-90(%)范围内,则查询上述表格确定目标调速比例为0.7;若中积压风险等级的反馈次数的占比在50-80(%)范围内,则查询上述表格确定目标调速比例为0.8;若中积压风险等级的反馈次数的占比在30-50(%)范围内,则查询上述表格确定目标调速比例为0.9;若中积压风险等级的反馈次数的占比在30(%)以下,则执行步骤S3。
步骤S3、统计一个积压风险等级收集周期内,低积压风险等级对应的反馈次数。
若低积压风险等级的反馈次数的占比在90-100(%)范围内,则查询上述表格确定目标调速比例为1.0;若低积压风险等级的反馈次数的占比在80-90(%)范围内,则查询上述表格确定目标调速比例为1.5;若低积压风险等级的反馈次数的占比在50-80(%)范围内,则查询上述表格确定目标调速比例为2.0;若低积压风险等级的反馈次数的占比在30-50(%)范围内,则查询上述表格确定目标调速比例为2.7;若低积压风险等级的反馈次数的占比在30(%)以下,则执行步骤S4。
步骤S4、将调速比例设置为4.0。
确定目标调速比例后，调度节点可根据该目标调速比例以及预设的基础接收速率，计算该短消息接收速率的更新值。即，新的短消息接收速率=基础接收速率*目标调速比例。
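将上述步骤S1～S4以及“新的短消息接收速率=基础接收速率*目标调速比例”写成代码，可得到如下Python草图；其中阈值与调速比例取自上文表格，等级占比恰好等于阈值时的边界处理方式为示例性假设：

```python
# 按积压风险等级由高到低依次判断反馈次数占比，命中即返回对应调速比例
CASCADE = [
    ("高", [(0.90, 0.1), (0.60, 0.2), (0.30, 0.3)]),
    ("中", [(0.90, 0.5), (0.80, 0.7), (0.50, 0.8), (0.30, 0.9)]),
    ("低", [(0.90, 1.0), (0.80, 1.5), (0.50, 2.0), (0.30, 2.7)]),
]
DEFAULT_RATIO = 4.0   # 步骤S4：各等级占比均不超过30%时的调速比例

def target_ratio(shares):
    """shares形如{"高": 0.1, "中": 0.2, "低": 0.7}，即各积压风险等级的反馈次数占比。"""
    for level, rules in CASCADE:
        share = shares.get(level, 0.0)
        for threshold, ratio in rules:
            if share > threshold:
                return ratio
    return DEFAULT_RATIO

def updated_receiving_rate(base_rate_qps, shares):
    # 新的短消息接收速率 = 基础接收速率 * 目标调速比例
    return base_rate_qps * target_ratio(shares)

# 例：高积压风险等级占比95% -> 调速比例0.1，新的接收速率为基础速率的10%
print(updated_receiving_rate(140_000, {"高": 0.95, "中": 0.03, "低": 0.02}))
```

上述target_ratio即可作为前文ReceiveRateController示意代码中ratio_from_distribution参数的一种实现。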
在上述实施方式中,通过建立积压风险等级,可较为准确地衡量下游资源的积压能力,并判断下游资源是否充足。下游资源的高风险等级的占比越高,则短消息的接收速率越低,从而,可灵活地根据下游资源的积压风险比例分布调整短消息的接收速率,提升资源的利用率。
在上述各实施例的基础上,可选地,调度节点在调整该短消息接收速率之后,可进一步根据该短消息接收速率,设置用于控制短消息流量的令牌桶的令牌投放速率。例如,一个短消息可对应令牌桶中的一个令牌。若调度节点的短消息接收速率为14W QPS,则可设置令牌桶的令牌投放速率也为14W QPS。根据该令牌桶的剩余令牌数,调度节点可采用令牌桶算法,对消息系统的短消息发送速率进行控制。其中,令牌桶算法是指,当一个短消息下发请求到达时,若令牌桶中存在至少一个令牌,则接收该短消息,并删除一个令牌。当一个短消息下发请求到达时,若令牌桶中不存在令牌,则拒绝该短消息。当令牌桶中的令牌可按照令牌投放速率不断增加时,可通过令牌投放速率,控制消息系统的短消息发送速率,不再赘述。
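下面是令牌桶限流的一个极简Python示意实现，用于说明“按调整后的短消息接收速率设置令牌投放速率，并根据剩余令牌数决定是否接收短消息”的过程；桶容量等参数为示例假设：

```python
import time

class TokenBucket:
    """令牌桶限流的极简示意：一条短消息对应一个令牌（参数与写法仅为示例）。"""

    def __init__(self, rate_qps, capacity):
        self.rate = rate_qps          # 令牌投放速率，与调整后的短消息接收速率一致
        self.capacity = capacity      # 桶容量
        self.tokens = capacity
        self.last = time.monotonic()

    def set_rate(self, rate_qps):
        # 短消息接收速率调整后，同步更新令牌投放速率
        self.rate = rate_qps

    def try_acquire(self):
        now = time.monotonic()
        # 按投放速率补充令牌，但不超过桶容量
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # 桶中存在令牌：接收该短消息，并扣除一个令牌
        return False                  # 桶中不存在令牌：拒绝该短消息

bucket = TokenBucket(rate_qps=140_000, capacity=140_000)   # 对应示例中的14W QPS
accepted = bucket.try_acquire()
```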
值得说明的是,在本申请实施例中,任一调度节点,可采用上述以及下述各实施例提供的流量控制方法控制短消息的接收速率,实现了单机(即单个调度节点)的流量控制。从而,可将集群流量控制转化为单机流量控制,避免集群多设备之间因数据交互产生的问题,极大提升了数据处理能力以及响应效率。
除前述实施例提供的短消息服务系统之外,本申请实施例还提供一种在调度节点侧执行的流量控制方法。
图4为本申请一示例性实施例提供的流量控制方法的流程示意图，如图4所示，该方法在任一调度节点一侧执行时，主要用于：
步骤401、按照设定的短消息接收速率,接收消息系统发送的短消息。
步骤402、将该短消息分配给目标资源对象,以通过该目标资源对象将该短消息发送至目标用户。
步骤403、获取该目标资源对象针对该短消息反馈的积压风险等级。
步骤404、根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率。
进一步可选地，接收消息系统发送的短消息之后，还包括：从该消息系统的多个消息队列中，确定该短消息所属的目标消息队列；根据该目标资源对象反馈的积压风险等级的分布特征，调整该短消息接收速率，包括：从该目标资源对象反馈的积压风险等级中，确定与该目标消息队列对应的积压风险等级；根据与该目标消息队列对应的积压风险等级的分布特征，调整与该目标消息队列对应的短消息接收速率。
进一步可选地,该多个消息队列分别与多个场景对应,任一场景对应的消息队列中的短消息具有相同的场景标签。
进一步可选地,获取该目标资源对象对应的积压风险等级的一种方式,可包括:确定该目标资源对象对应的消息主题包含的多个队列分组;任一队列分组包含多个优先级队列;根据该多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算该多个队列分组各自的积压风险等级;根据该多个队列分组各自的积压风险等级,计算该消息主题的积压风险等级。
进一步可选地,根据该多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算该多个队列分组各自的积压风险等级的一种方式,可包括:针对该任一队列分组,确定该队列分组中的优先级较高的目标优先级队列的短消息积压量,以及该目标优先级队列的队首短消息的积压时长;根据该目标优先级队列的短消息积压量以及该队首短消息的积压时长,计算该队列分组的积压风险等级。
进一步可选地,根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率的一种方式,可包括:根据该目标资源对象在设定时长范围内对至少一种积压风险等级的反馈次数,计算该至少一种积压风险等级各自的反馈次数占比;根据该至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例;根据该目标调速比例以及预设的基础接收速率,计算该短消息接收速率的更新值。
进一步可选地,根据该至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例的一种方式,可包括:按照由高到低的顺序,依次判断该至少一种积压风险等级各自的反馈次数占比是否大于该至少一种积压风险等级各自对应的占比阈值;在确定该至少一种积压风险等级中的任一积压风险等级的反馈次数占比大于该积压风险等级的占比阈值时,停止该判断的操作,并根据积压风险等级的反馈次数占比以及该积压风险等级的反馈次数比例范围和调速比例的对应关系,确定该目标调速比例。
进一步可选地,调整该短消息接收速率之后,还包括:根据该短消息接收速率,设置用于控制短消息流量的令牌桶的令牌投放速率;根据该令牌桶的剩余令牌数,采用令牌桶算法,对该消息系统的短消息发送速率进行控制。
在本申请实施例中,调度节点可按照设定的短消息接收速率,接收消息系统发送的短消息,并通过目标资源对象将该短消息发送至目标用户。同时,调度节点可获取目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率。在这种实施方式中,基于资源对象的反馈机制,调度节点可根据资源对象的积压风险,对短消息接收速率进行调整,实现了短消息下放流量的自动控制,有利于降低人力资源成本。同时,可降低人为因素产生的调速滞后影响,时效性和准确性更高。
需要说明的是,上述实施例所提供方法的各步骤的执行主体均可以是同一设备,或者,该方法也由不同设备作为执行主体。比如,步骤401至步骤404的执行主体可以为设备A;又比如,步骤401和402的执行主体可以为设备A,步骤403的执行主体可以为设备B;等等。
另外,在上述实施例及附图中的描述的一些流程中,包含了按照特定顺序出现的多个操作,但是应该清楚了解,这些操作可以不按照其在本文中出现的顺序来执行或并行执行,操作的序号如401、402等,仅仅是用于区分开各个不同的操作,序号本身不代表任何的执行顺序。另外,这些流程可以包括更多或更少的操作,并且这些操作可以按顺序执行或并行执行。需要说明的是,本文中的“第一”、“第二”等描述,是用于区分不同的短消息、设备、模块等,不代表先后顺序,也不限定“第一”和“第二”是不同的类型。
图5示意了本申请一示例性实施例提供的服务器的结构示意图,该服务器适用于前述实施例提供的短消息服务系统中的调度节点。如图5所示,该服务器包括:存储器501、处理器502以及通信组件503。
存储器501,用于存储计算机程序,并可被配置为存储其它各种数据以支持在服务器上的操作。这些数据的示例包括用于在服务器上操作的任何应用程序或方法的指令。
其中，存储器501可以由任何类型的易失性或非易失性存储设备或者它们的组合实现，如静态随机存取存储器（SRAM），电可擦除可编程只读存储器（EEPROM），可擦除可编程只读存储器（EPROM），可编程只读存储器（PROM），只读存储器（ROM），磁存储器，快闪存储器，磁盘或光盘。
处理器502,与存储器501耦合,用于执行存储器501中的计算机程序,以用于:通过通信组件503按照设定的短消息接收速率,接收消息系统发送的短消息;将该短消息分配给目标资源对象,以通过该目标资源对象将该短消息发送至目标用户;获取该目标资源对象针对该短消息反馈的积压风险等级;根据该目标资源对象反馈的积压风险等级的分布特征,调整该短消息接收速率。
进一步可选地，处理器502在接收消息系统发送的短消息之后，还用于：从该消息系统的多个消息队列中，确定该短消息所属的目标消息队列；根据该目标资源对象反馈的积压风险等级的分布特征，调整该短消息接收速率，包括：从该目标资源对象反馈的积压风险等级中，确定与该目标消息队列对应的积压风险等级；根据与该目标消息队列对应的积压风险等级的分布特征，调整与该目标消息队列对应的短消息接收速率。
进一步可选地,该多个消息队列分别与多个场景对应,任一场景对应的消息队列中的短消息具有相同的场景标签。
进一步可选地,处理器502在获取该目标资源对象对应的积压风险等级时,具体用于:确定该目标资源对象对应的消息主题包含的多个队列分组;任一队列分组包含多个优先级队列;根据该多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算该多个队列分组各自的积压风险等级;根据该多个队列分组各自的积压风险等级,计算该消息主题的积压风险等级。
进一步可选地,处理器502在根据该多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算该多个队列分组各自的积压风险等级时,具体用于:针对该任一队列分组,确定该队列分组中的优先级较高的目标优先级队列的短消息积压量,以及该目标优先级队列的队首短消息的积压时长;根据该目标优先级队列的短消息积压量以及该队首短消息的积压时长,计算该队列分组的积压风险等级。
进一步可选地，处理器502在根据该目标资源对象反馈的积压风险等级的分布特征，调整该短消息接收速率时，具体用于：根据该目标资源对象在设定时长范围内对至少一种积压风险等级的反馈次数，计算该至少一种积压风险等级各自的反馈次数占比；根据该至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系，确定目标调速比例；根据该目标调速比例以及预设的基础接收速率，计算该短消息接收速率的更新值。
进一步可选地,处理器502在根据该至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例时,具体用于:按照由高到低的顺序,依次判断该至少一种积压风险等级各自的反馈次数占比是否大于该至少一种积压风险等级各自对应的占比阈值;在确定该至少一种积压风险等级中的任一积压风险等级的反馈次数占比大于该积压风险等级的占比阈值时,停止该判断的操作,并根据积压风险等级的反馈次数占比以及该积压风险等级的反馈次数比例范围和调速比例的对应关系,确定该目标调速比例。
进一步可选地,处理器502在调整该短消息接收速率之后,还用于:根据该短消息接收速率,设置用于控制短消息流量的令牌桶的令牌投放速率;根据该令牌桶的剩余令牌数,采用令牌桶算法,对该消息系统的短消息发送速率进行控制。
进一步,如图5所示,该服务器还包括:电源组件504等其它组件。图5中仅示意性给出部分组件,并不意味着服务器只包括图5所示组件。
通信组件503被配置为便于通信组件所在设备和其他设备之间有线或无线方式的通信。通信组件所在设备可以接入基于通信标准的无线网络,如WiFi,2G、3G、4G或5G,或它们的组合。在一个示例性实施例中,通信组件经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,通信组件可基于近场通信(NFC)技术、射频识别(RFID)技术、红外数据协会(IrDA)技术、超宽带(UWB)技术、蓝牙(BT)技术和其他技术来实现。
电源组件504,为电源组件所在设备的各种组件提供电力。电源组件可以包括电源管理系统,一个或多个电源,及其他与为电源组件所在设备生成、管理和分配电力相关联的组件。
在本申请实施例中,调度节点可按照设定的短消息接收速率,接收消息系统发送的短消息,并通过目标资源对象将该短消息发送至目标用户。同时,调度节点可获取目标资源对象针对该短消息反馈的积压风险等级,并根据该目标资源对象在设定时长范围内反馈的积压风险等级的分布特征,调整该短消息接收速率。在这种实施方式中,基于资源对象的反馈机制,调度节点可根据资源对象的积压风险,对短消息接收速率进行调整,实现了短消息下放流量的自动控制,有利于降低人力资源成本。同时,可降低人为因素产生的调速滞后影响,时效性和准确性更高。
相应地，本申请实施例还提供一种存储有计算机程序的计算机可读存储介质，计算机程序被执行时能够实现上述方法实施例中可由服务器执行的各步骤。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体，可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括，但不限于相变内存（PRAM）、静态随机存取存储器（SRAM）、动态随机存取存储器（DRAM）、其他类型的随机存取存储器（RAM）、只读存储器（ROM）、电可擦除可编程只读存储器（EEPROM）、快闪记忆体或其他内存技术、只读光盘只读存储器（CD-ROM）、数字多功能光盘（DVD）或其他光学存储、磁盒式磁带、磁盘存储或其他磁性存储设备或任何其他非传输介质，可用于存储可以被计算设备访问的信息。按照本文中的界定，计算机可读介质不包括暂存电脑可读媒体（transitory media），如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (10)

  1. 一种流量控制方法,其特征在于,包括:
    按照设定的短消息接收速率,接收消息系统发送的短消息;
    将所述短消息分配给目标资源对象,以通过所述目标资源对象将所述短消息发送至目标用户;
    获取所述目标资源对象针对所述短消息反馈的积压风险等级;
    根据所述目标资源对象反馈的积压风险等级的分布特征,调整所述短消息接收速率。
  2. 根据权利要求1所述的方法,其特征在于,接收消息系统发送的短消息之后,还包括:从所述消息系统的多个消息队列中,确定所述短消息所属的目标消息队列;
    根据所述目标资源对象反馈的积压风险等级的分布特征,调整所述短消息接收速率,包括:
    从所述目标资源对象反馈的积压风险等级中，确定与所述目标消息队列对应的积压风险等级；
    根据与所述目标消息队列对应的积压风险等级的分布特征，调整与所述目标消息队列对应的短消息接收速率。
  3. 根据权利要求2所述的方法,其特征在于,所述多个消息队列分别与多个场景对应,任一场景对应的消息队列中的短消息具有相同的场景标签。
  4. 根据权利要求1所述的方法,其特征在于,获取所述目标资源对象对应的积压风险等级,包括:
    确定所述目标资源对象对应的消息主题包含的多个队列分组;任一队列分组包含多个优先级队列;
    根据所述多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算所述多个队列分组各自的积压风险等级;
    根据所述多个队列分组各自的积压风险等级,计算所述消息主题的积压风险等级。
  5. 根据权利要求4所述的方法,其特征在于,根据所述多个队列分组各自包含的优先级队列的短消息积压量和/或短消息积压时间,计算所述多个队列分组各自的积压风险等级,包括:
    针对所述任一队列分组,确定所述队列分组中的优先级较高的目标优先级队列的短消息积压量,以及所述目标优先级队列的队首短消息的积压时长;
    根据所述目标优先级队列的短消息积压量以及所述队首短消息的积压时长,计算所述队列分组的积压风险等级。
  6. 根据权利要求1-5任一所述的方法,其特征在于,根据所述目标资源对象反馈的积压风险等级的分布特征,调整所述短消息接收速率,包括:
    根据所述目标资源对象在设定时长范围内对至少一种积压风险等级的反馈次数，计算所述至少一种积压风险等级各自的反馈次数占比；
    根据所述至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例;
    根据所述目标调速比例以及预设的基础接收速率,计算所述短消息接收速率的更新值。
  7. 根据权利要求6所述的方法,其特征在于,根据所述至少一种积压风险等级各自的反馈次数占比以及预设的积压风险等级的反馈次数比例范围和调速比例的对应关系,确定目标调速比例,包括:
    按照由高到低的顺序,依次判断所述至少一种积压风险等级各自的反馈次数占比是否大于所述至少一种积压风险等级各自对应的占比阈值;
    在确定所述至少一种积压风险等级中的任一积压风险等级的反馈次数占比大于所述积压风险等级的占比阈值时,停止所述判断的操作,并根据积压风险等级的反馈次数占比以及所述积压风险等级的反馈次数比例范围和调速比例的对应关系,确定所述目标调速比例。
  8. 根据权利要求1-5任一项所述的方法,其特征在于,调整所述短消息接收速率之后,还包括:
    根据所述短消息接收速率,设置用于控制短消息流量的令牌桶的令牌投放速率;
    根据所述令牌桶的剩余令牌数,采用令牌桶算法,对所述消息系统的短消息发送速率进行控制。
  9. 一种服务器,其特征在于,包括:存储器和处理器;
    所述存储器用于存储一条或多条计算机指令;
    所述处理器用于执行所述一条或多条计算机指令以用于:执行权利要求1-8任一项所述的方法中的步骤。
  10. 一种存储有计算机程序的计算机可读存储介质,其特征在于,计算机程序被处理器执行时能够实现权利要求1-8任一项所述方法中的步骤。
PCT/CN2023/072761 2022-01-28 2023-01-18 流量控制方法、设备及存储介质 WO2023143276A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210108247.X 2022-01-28
CN202210108247.XA CN114501351B (zh) 2022-01-28 2022-01-28 流量控制方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023143276A1 true WO2023143276A1 (zh) 2023-08-03

Family

ID=81477373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072761 WO2023143276A1 (zh) 2022-01-28 2023-01-18 流量控制方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114501351B (zh)
WO (1) WO2023143276A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116719630A (zh) * 2023-08-11 2023-09-08 中邮消费金融有限公司 案件调度方法、设备、存储介质及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501351B (zh) * 2022-01-28 2024-04-26 阿里巴巴(中国)有限公司 流量控制方法、设备及存储介质
CN115277597B (zh) * 2022-09-30 2022-12-23 北京金楼世纪科技有限公司 一种短消息队列调度方法、装置及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120155297A1 (en) * 2010-12-17 2012-06-21 Verizon Patent And Licensing Inc. Media gateway health
CN108989239A (zh) * 2017-06-02 2018-12-11 中兴通讯股份有限公司 过载保护方法及装置、控制器及存储介质
CN110324250A (zh) * 2018-03-29 2019-10-11 阿里巴巴集团控股有限公司 消息推送方法、设备及系统
CN114501351A (zh) * 2022-01-28 2022-05-13 阿里巴巴(中国)有限公司 流量控制方法、设备及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8812712B2 (en) * 2007-08-24 2014-08-19 Alcatel Lucent Proxy-driven content rate selection for streaming media servers
CN101404811B (zh) * 2008-11-17 2010-12-08 中兴通讯股份有限公司 一种小区短信的流量控制方法及系统
CN101594588B (zh) * 2009-06-30 2012-05-23 中兴通讯股份有限公司 一种短信终呼流量控制方法和系统
CN102065382B (zh) * 2010-12-02 2014-10-22 中兴通讯股份有限公司 一种多媒体消息业务下发消息动态流控的方法和系统
CN102740258B (zh) * 2012-07-05 2015-09-30 甘肃银光聚银化工有限公司 一种短信平台控制装置
CN105306258A (zh) * 2015-09-25 2016-02-03 上海大汉三通数据通信有限公司 一种提交短信的控制方法及系统
CN105813040B (zh) * 2016-05-12 2019-04-23 中国联合网络通信集团有限公司 短信发送方法、服务器和移动终端
CN107734475B (zh) * 2017-11-15 2021-05-18 中国联合网络通信集团有限公司 基于短信链路的短信发送方法及业务平台
CN108200544B (zh) * 2018-03-02 2021-12-28 北京中电普华信息技术有限公司 短信下发方法和短信平台
CN108933993B (zh) * 2018-07-03 2021-08-24 平安科技(深圳)有限公司 短信缓存队列选择方法、装置、计算机设备和存储介质
CN108966160B (zh) * 2018-09-25 2021-07-16 厦门集微科技有限公司 一种短信处理方法、装置及计算机可读存储介质
CN109347757B (zh) * 2018-11-09 2022-12-09 锐捷网络股份有限公司 消息拥塞控制方法、系统、设备及存储介质
CN111355669B (zh) * 2018-12-20 2022-11-25 华为技术有限公司 控制网络拥塞的方法、装置及系统
CN113301515B (zh) * 2020-06-01 2022-07-05 阿里巴巴集团控股有限公司 短信通道连接的处理方法、装置、系统、设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120155297A1 (en) * 2010-12-17 2012-06-21 Verizon Patent And Licensing Inc. Media gateway health
CN108989239A (zh) * 2017-06-02 2018-12-11 中兴通讯股份有限公司 过载保护方法及装置、控制器及存储介质
CN110324250A (zh) * 2018-03-29 2019-10-11 阿里巴巴集团控股有限公司 消息推送方法、设备及系统
CN114501351A (zh) * 2022-01-28 2022-05-13 阿里巴巴(中国)有限公司 流量控制方法、设备及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116719630A (zh) * 2023-08-11 2023-09-08 中邮消费金融有限公司 案件调度方法、设备、存储介质及装置
CN116719630B (zh) * 2023-08-11 2024-03-15 中邮消费金融有限公司 案件调度方法、设备、存储介质及装置

Also Published As

Publication number Publication date
CN114501351A (zh) 2022-05-13
CN114501351B (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
WO2023143276A1 (zh) 流量控制方法、设备及存储介质
WO2018133306A1 (zh) 内容分发网络中的调度方法和设备
Gao et al. Resource provisioning and profit maximization for transcoding in clouds: A two-timescale approach
CN110443695A (zh) 数据处理方法及其装置、电子设备和介质
CN107580023A (zh) 一种动态调整任务分配的流处理作业调度方法及系统
CN105760449B (zh) 一种面向多源异构数据的云推送方法
AU2012308935A1 (en) Marketplace for timely event data distribution
CN105446810B (zh) 基于成本代价的多农场云渲染任务分发系统与方法
CN110619701A (zh) 排队渠道推荐方法及装置、存储介质及电子设备
CN109347757A (zh) 消息拥塞控制方法、系统、设备及存储介质
CN105446817B (zh) 移动云计算中一种基于鲁棒优化的联合资源预留配置算法
WO2020015578A1 (zh) 一种调度缓存节点的方法、装置、系统、介质及设备
CN103139100A (zh) 处理业务的方法和系统
US9721215B2 (en) Enhanced management of a web conferencing server
CN109656685A (zh) 容器资源调度方法和系统、服务器及计算机可读存储介质
WO2019056484A1 (zh) 保险产品配送管理方法、装置、计算机设备及存储介质
US9326296B2 (en) Method and apparatus for scheduling delivery of content according to quality of service parameters
CN112468551A (zh) 一种基于业务优先级的智能调度工作方法
US9125045B2 (en) Delayed data delivery options
CN107645411B (zh) 一种基于线性规划的通道流量调拨方法及装置
CN109688171B (zh) 缓存空间调度方法、装置和系统
CN110782167B (zh) 一种收派件区域管理方法、装置及存储介质
CN113852723B (zh) 号码调度方法、设备及存储介质
US20150003238A1 (en) System and method for management and control of communication channels
Larisa et al. The method of resources allocation for processing requests in online charging system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23746158

Country of ref document: EP

Kind code of ref document: A1