CN114500402A - Message processing method, device and storage medium

Message processing method, device and storage medium

Info

Publication number
CN114500402A
CN114500402A
Authority
CN
China
Prior art keywords
message
task queue
time slot
processing
bandwidth
Prior art date
Legal status
Pending
Application number
CN202111626083.1A
Other languages
Chinese (zh)
Inventor
陈理辉
石金博
郑荣魁
沙琪
王彬
Current Assignee
QKM Technology Dongguan Co Ltd
Original Assignee
QKM Technology Dongguan Co Ltd
Priority date
Filing date
Publication date
Application filed by QKM Technology Dongguan Co Ltd
Priority to CN202111626083.1A
Publication of CN114500402A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/525 Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/6215 Individual queue per QOS, rate or priority
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/626 Queue scheduling characterised by scheduling criteria for service slots or service orders: channel conditions
    • H04L47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders: based on priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a message processing method, a device and a storage medium, relating to the field of communication technology. The method includes: caching a first message to be processed to a first task queue or a second task queue according to the message parameters of the first message, where the second task queue is one of a plurality of third task queues with different processing priorities and the processing priority of the first task queue is higher than that of any third task queue; and, in each scheduling period and according to processing priority, scheduling and processing the second messages of the first task queue in a first time slot corresponding to the first task queue, and then scheduling and processing the third messages of each third task queue, in sequence, in the second time slot corresponding to that queue. The device and the storage medium apply the method, which improves the transmission bandwidth and the real-time performance of first-message transmission.

Description

Message processing method, device and storage medium
Technical Field
The present embodiments relate to, but are not limited to, the field of communication technology, and in particular to a message processing method, device and storage medium.
Background
Existing industrial field buses rely entirely on the data link layer self-regulation of IEEE 802.3. As industrial automation places ever higher demands on the industrial field network, massive amounts of data must be processed and transmitted in time in addition to guaranteeing the stable, punctual transmission of real-time messages. How to guarantee real-time message transmission while improving the bandwidth utilization of the network so that more data can be transmitted is therefore an urgent problem in the field of industrial control.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the present invention provide a message processing method, device and storage medium, which can improve transmission real-time performance and bandwidth utilization.
In a first aspect, an embodiment of the present invention provides a message processing method, where the method includes:
caching a first message to be processed to a first task queue or a second task queue according to a message parameter of the first message; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than that of any third task queue;
and in each scheduling period, according to the processing priority, scheduling and processing the second message of the first task queue in a first time slot corresponding to the first task queue, and scheduling and processing the third message of the third task queue in a second time slot corresponding to each third task queue.
In a second aspect, an embodiment of the present invention further provides an apparatus, including:
the distribution module is used for caching the first message to a first task queue or a second task queue according to the message parameter of the first message to be processed; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than that of any third task queue;
and the scheduling processing module is used for scheduling and processing the second message of the first task queue in a first time slot corresponding to the first task queue and scheduling and processing the third message of the third task queue in a second time slot corresponding to each third task queue in sequence according to the processing priority in each scheduling period.
In a third aspect, an embodiment of the present invention further provides an apparatus, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the message processing method according to any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are configured to execute the message processing method according to any one of the first aspect.
According to the above embodiments of the application, at least the following beneficial effects are achieved. By setting a first time slot and a plurality of second time slots, the first task queue and the third task queues are scheduled separately, and the processing priority of each second time slot is matched with the configuration priority in the message parameters. As a result, first messages can still be scheduled from the first task queue in an emergency situation, ensuring message real-time performance; first messages configured with a configuration priority are processed in order, with higher configuration priorities processed first; and under poor link quality the transmission link remains occupied by messages of higher configuration priority and by the second messages in the first task queue, thereby improving the transmission bandwidth and the real-time performance of first-message transmission.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
FIG. 1 is a schematic block diagram of an apparatus according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a message processing method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating adjustment of transmission bandwidth in a message processing method according to an embodiment of the present application;
fig. 4 is a schematic flow chart of second time slot fusion in the message processing method according to the embodiment of the present application;
fig. 5 is a schematic structural diagram of another apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Existing industrial field buses rely entirely on the data link layer self-regulation of IEEE 802.3. As industrial automation places ever higher demands on the industrial field network, massive amounts of data must be processed and transmitted in time in addition to guaranteeing the stable, punctual transmission of real-time messages; how to guarantee real-time message transmission while improving the bandwidth utilization of the network so that more data can be transmitted is an urgent problem in the field of industrial control. Based on this, embodiments of the present application provide a message processing method, device and storage medium, which can improve transmission real-time performance and bandwidth utilization.
As an example shown in fig. 1, the present application proposes an apparatus comprising:
the distribution module 100 is configured to cache the first packet to the first task queue or the second task queue according to a packet parameter of the first packet to be processed; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than that of any third task queue;
and the scheduling processing module 200 is configured to sequentially schedule and process the second packet of the first task queue in the first time slot corresponding to the first task queue and schedule and process the third packet of the third task queue in the second time slot corresponding to each third task queue according to the processing priority in each scheduling period.
It should be noted that, in some embodiments, the apparatus further includes a buffer module 300 configured to buffer the messages matched with the first time slot and the second time slots; that is, the first task queue and the second task queue reside in the buffer module 300, and both the scheduling processing module 200 and the distribution module 100 are electrically connected to the buffer module 300.
It should be noted that the first message may be transmitted to the device by an external device over a communication link, transmitted directly to the device over a network cable, optical fiber or the like, or generated by an upper-layer interaction component of the device.
It should be noted that the message parameters include at least one of a priority and a message type. In some embodiments the second time slots are set in one-to-one correspondence with message types, that is, with the configuration priorities corresponding to those message types; in other embodiments a second time slot is associated with both a message type and a priority. For example, assume that first message A, first message B and first message C are all non-real-time messages of different message types; then each of them corresponds to its own second time slot. In other embodiments, assume that first message B and first message C have the same priority, first message A has a different priority, and the processing priorities of the second time slots are matched with the priorities of first messages A, B and C; then first message B and first message C correspond to the same second time slot, and first message A corresponds to another second time slot. As a further example, in other embodiments a second time slot is configured with a processing priority and a set of supported message types; when first message B has no priority set and its message type matches the message type of one of the second time slots, first message B may be cached in the third task queue corresponding to that second time slot.
It should be noted that, when the device initializes, the processing priority of each second time slot may be set according to the priorities or message types supported by the device. In actual operation, the processing priority of a second time slot may be changed manually or modified through a configuration file.
It should be noted that the second message and the third message are both first messages after buffering: a first message buffered in the first task queue is referred to as a second message, and a first message buffered in a third task queue is referred to as a third message. Each first time slot is assigned a first task queue and each second time slot is assigned a third task queue; the processing priority and the third task queue change together with the corresponding second time slot. The first time slot is used to transmit real-time messages; a real-time message is one with a high real-time requirement and may be of any message type.
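By way of illustration only, the following Python sketch shows one possible shape of the distribution logic described above; the names Message, Dispatcher, is_real_time, config_priority and msg_type are assumptions introduced for this example and do not appear in the disclosure.

    from collections import deque
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        payload: bytes
        is_real_time: bool = False              # real-time messages go to the first task queue
        config_priority: Optional[int] = None   # configuration priority carried in the message parameters
        msg_type: Optional[str] = None          # message type carried in the message parameters

    class Dispatcher:
        def __init__(self, third_queue_priorities, type_to_priority):
            # first task queue: served in the first time slot of every scheduling period
            self.first_queue = deque()
            # one third task queue per second time slot, keyed by processing priority
            self.third_queues = {p: deque() for p in third_queue_priorities}
            # optional mapping from message type to a processing priority (type-associated second time slots)
            self.type_to_priority = type_to_priority

        def dispatch(self, msg: Message) -> None:
            if msg.is_real_time:
                self.first_queue.append(msg)                        # buffered message becomes a "second message"
            elif msg.config_priority in self.third_queues:
                self.third_queues[msg.config_priority].append(msg)  # buffered message becomes a "third message"
            elif msg.msg_type in self.type_to_priority:
                # no priority configured: fall back to the second time slot that supports this message type
                self.third_queues[self.type_to_priority[msg.msg_type]].append(msg)
            else:
                raise ValueError("message matches neither the first time slot nor any second time slot")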
Those skilled in the art will appreciate that the topology shown in fig. 1 does not constitute a limitation of embodiments of the present application and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
On the other hand, embodiments of the message processing method of the present application are provided.
Referring to the embodiment shown in fig. 2, the method for processing a message includes:
step S100, caching the first message to a first task queue or a second task queue according to the message parameter of the first message to be processed; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than the processing priority of any third task queue.
It should be noted that the message parameters include a message type, a configuration priority, and the like.
And step S200, in each scheduling period, according to the processing priority, scheduling and processing the second message of the first task queue in a first time slot corresponding to the first task queue, and scheduling and processing the third message of the third task queue in a second time slot corresponding to each third task queue.
In this way, by setting a first time slot and a plurality of second time slots, the first task queue and the third task queues are scheduled separately, and the processing priority of each second time slot is matched with the configuration priority in the message parameters. In an emergency situation, first messages can still be scheduled from the first task queue, ensuring real-time performance; first messages configured with a configuration priority are processed in order, with higher configuration priorities processed first; and under poor link quality the transmission link remains occupied by messages of higher configuration priority and by the second messages of the first task queue, thereby improving the transmission bandwidth and the real-time performance of first-message transmission.
It should be noted that a scheduling period consists of a first time slot and several second time slots. The execution duration of each second time slot may be the same or different (that is, the occupied transmission bandwidths may differ or coincide), and the first time slot and the second time slots may likewise have the same or different durations. For example, assume a scheduling period of 1 s with a first time slot of 300 ms and four second time slots of 200 ms (second time slot 1), 200 ms (second time slot 2), 200 ms (second time slot 3) and 100 ms (second time slot 4). In each scheduling period, second messages of the first task queue are processed first for 300 ms, and third messages are then processed for the corresponding durations in the order of the processing priorities of second time slots 1, 2, 3 and 4.
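A minimal sketch of one scheduling period under the 1 s example above, assuming time-slice handling is simplified to a wall-clock deadline per slot; run_scheduling_period, slot_table and process are illustrative names, not part of the disclosure.

    import time

    def run_scheduling_period(slot_table):
        """slot_table: list of (queue, duration_s) pairs, the first time slot first,
        then the second time slots in descending processing priority."""
        for queue, duration in slot_table:
            deadline = time.monotonic() + duration
            # serve this queue until its time slot is exhausted or the queue drains
            while queue and time.monotonic() < deadline:
                process(queue.popleft())

    def process(msg):
        """Placeholder for transmitting or otherwise handling one message."""
        pass

    # Example for the 1 s period above: a 300 ms first time slot followed by
    # 200/200/200/100 ms second time slots in descending processing priority.
    # run_scheduling_period([(first_queue, 0.3), (q1, 0.2), (q2, 0.2), (q3, 0.2), (q4, 0.1)])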
It should be noted that the durations of the second time slots can be adjusted dynamically, so that the transmission of real-time messages is guaranteed through the first time slot while other first messages, whose real-time requirements are lower than those of real-time messages, are still processed in time by dynamically adjusting the second time slots, thereby improving bandwidth utilization.
It can be understood that the message parameters include a message type, and the method further includes: matching the configuration priority of the first message to the highest processing priority among the third task queues when the message type of the first message is a construction event.
It should be noted that a construction event is a condition-triggered message requiring priority processing, such as an emergency event or an alarm event. It is given the higher processing priority among the third task queues so that the construction event can be processed in time.
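A hedged illustration of this rule, reusing the Message fields from the earlier sketch; CONSTRUCTION_EVENT and assign_config_priority are assumed names.

    CONSTRUCTION_EVENT = "construction_event"   # e.g. condition-triggered emergency or alarm events

    def assign_config_priority(msg, third_queue_priorities):
        # a construction event is matched to the highest processing priority among the third task queues
        if msg.msg_type == CONSTRUCTION_EVENT:
            msg.config_priority = max(third_queue_priorities)
        return msg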
It can be understood that the message processing method further includes: detecting the link quality of a preset transmission link; the transmission link is used for transmitting a third message; and adjusting the transmission bandwidth of at least one second time slot in the scheduling period according to the link quality.
It should be noted that there may be one transmission link or a plurality of them. When a plurality of transmission links are provided, the link quality of all the transmission links is counted. The link quality may be determined from the MSE or the signal-to-noise ratio (SNR) of the link, among other indicators. The transmission link can also be used to transmit the second message.
It should be noted that when the link quality of the transmission link deteriorates, the total available transmission bandwidth decreases; transmitting a large third message over the link at that point may cause the third message to be lost. When the link quality recovers, the corresponding second time slot is restored. Illustratively, when several transmission links are provided, the third messages processed in a second time slot are distributed across the links according to the bandwidth ratio of each transmission link, which further improves the accuracy of third-message transmission on each link and the effective utilization of the transmission links.
It can be understood that, referring to the flow chart shown in fig. 3, the method for message processing further includes:
and step S410, calculating the network transmission rate of the transmission link in a preset first historical time period.
It should be noted that, by counting the network transmission rate over the first historical time period, the state of the current transmission link can be reflected more accurately and the number of adjustments reduced. When a plurality of transmission links are provided, the statistics cover the network transmission rates of all of them over the first historical time period. Taking the network transmission rate as an average rate as an example, the average transmission rate v_n of each transmission link over the first historical time period is calculated to obtain the network transmission rate. The transmission rate represents the bandwidth per unit time and corresponds to the transmission capacity of the link per unit time.
Step S420, when the network transmission rate satisfies a preset threshold, calculating a total transmission bandwidth corresponding to the transmission link in a scheduling period.
It should be noted that, according to the relationship between the scheduling period and the unit time, the total transmission bandwidth in one scheduling period can be calculated.
Step S430, readjusting the transmission bandwidth of at least one second time slot in the scheduling period according to the total transmission bandwidth.
It should be noted that when the total transmission bandwidth decreases, the number of third messages processed in the second time slots with lower processing priority may be adjusted. In other embodiments, the transmission bandwidth may be reallocated to all or some of the second time slots by a certain algorithm.
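One way to read steps S410 to S430 as code, assuming the rate is measured from a byte counter over the first historical time period and that re-division is triggered only when the rate falls below the threshold (consistent with the note on reallocation below); all names are illustrative, and divide_total_bandwidth is sketched after the worked example that follows.

    def adjust_bandwidth_if_degraded(bytes_sent_in_window, window_s, scheduling_period_s,
                                     rate_threshold_bps, second_slot_count):
        # S410: average network transmission rate over the first historical time period
        rate_bps = 8 * bytes_sent_in_window / window_s
        if rate_bps >= rate_threshold_bps:
            return None   # rate does not meet the adjustment condition; keep current slot bandwidths
        # S420: total transmission bandwidth available within one scheduling period
        total_bandwidth = rate_bps * scheduling_period_s
        # S430: re-divide that budget among the second time slots
        return divide_total_bandwidth(total_bandwidth, second_slot_count)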
It is understood that step S430 includes: and dividing the total transmission bandwidth through a preset bandwidth allocation algorithm to obtain the transmission bandwidth corresponding to each second time slot.
It can be understood that, the step of dividing the total transmission bandwidth by a preset bandwidth allocation algorithm to obtain the transmission bandwidth corresponding to each second time slot includes: calculating to obtain an average transmission bandwidth according to the number N of the second time slots and the total transmission bandwidth; rounding the average transmission bandwidth downwards to obtain a first transmission bandwidth; calculating to obtain residual bandwidth according to the total transmission bandwidth and the N-1 first transmission bandwidths; and allocating the residual bandwidth and the first transmission bandwidth to the N second time slots according to a preset allocation principle.
It should be noted that, in some embodiments, when the network transmission rate is lower than the preset threshold, transmission bandwidth is reallocated to all of the second time slots. In some embodiments, when the remaining bandwidth is relatively small, the number of third messages buffered in the third task queue corresponding to each second time slot over a historical time period may be counted, and the remaining bandwidth is allocated to the second time slot whose third task queue has the smallest buffered amount.
For example, assume the total transmission bandwidth is band1 = 50 M and the number N of second time slots is 4. The average transmission bandwidth is av = band1 / 4 = 12.5 M, so the first transmission bandwidth is 12 M and the remaining bandwidth is 50 - 12 x 3 = 14 M. The transmission bandwidths of the four second time slots are then 12 M, 12 M, 12 M and 14 M.
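A sketch of this division step under the stated assumptions (floor of the average, remainder kept as residual bandwidth); the function name is illustrative.

    from math import floor

    def divide_total_bandwidth(total_bw, n_slots):
        """Floor the average to obtain the first transmission bandwidth and keep
        total_bw - (n_slots - 1) * first as the remaining (residual) bandwidth."""
        first_bw = floor(total_bw / n_slots)                  # e.g. floor(50 / 4) = 12
        remaining_bw = total_bw - (n_slots - 1) * first_bw    # e.g. 50 - 3 * 12 = 14
        return first_bw, remaining_bw

    # Worked example from the text: divide_total_bandwidth(50, 4) -> (12, 14),
    # i.e. the four second time slots receive 12 M, 12 M, 12 M and 14 M.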
It can be understood that, when the remaining bandwidth is greater than the first transmission bandwidth, allocating the remaining bandwidth and the first transmission bandwidth to N second time slots according to a preset allocation principle includes: allocating the residual bandwidth to a second time slot with the highest processing priority; the first transmission bandwidth is assigned to other unassigned second time slots.
Therefore, the third message with high priority can be further ensured to be processed by the above mode.
It can be understood that, according to the preset allocation principle, the remaining bandwidth and the first transmission bandwidth are allocated to N second time slots, including: determining a second time slot for distributing the residual bandwidth according to the first message quantity of a third message cached in each second time slot in a preset second historical time period; the first transmission bandwidth is assigned to other unassigned second time slots.
It should be noted that when the remaining bandwidth is greater than the first transmission bandwidth, the remaining bandwidth is allocated to the second time slot whose third task queue holds the largest number of first messages, and when the remaining bandwidth is smaller than the first transmission bandwidth, it is allocated to the second time slot whose third task queue holds the smallest number. Illustratively, there are three second time slots A, B and C, with first message counts N1, N2 and N3 respectively, where N3 is the largest and N1 the smallest. When the remaining bandwidth is greater than the first transmission bandwidth, the remaining bandwidth is allocated to second time slot C (corresponding to N3) and the first transmission bandwidth to second time slots A and B. When the remaining bandwidth is less than the first transmission bandwidth, the remaining bandwidth is allocated to second time slot A and the first transmission bandwidth to second time slots B and C.
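The two allocation principles described above could be sketched as follows; the backlog bookkeeping and slot identifiers are assumptions introduced for the example.

    def allocate_by_priority(first_bw, remaining_bw, priorities_desc):
        """Used when remaining_bw > first_bw: the residual bandwidth goes to the second time
        slot with the highest processing priority, the others get the first transmission bandwidth."""
        return {p: (remaining_bw if i == 0 else first_bw) for i, p in enumerate(priorities_desc)}

    def allocate_by_backlog(first_bw, remaining_bw, buffered_counts):
        """buffered_counts: {slot_id: number of third messages buffered over the second historical
        time period}. If the residual bandwidth exceeds the first transmission bandwidth it goes to
        the most backlogged slot, otherwise to the least backlogged one."""
        pick = max if remaining_bw > first_bw else min
        target = pick(buffered_counts, key=buffered_counts.get)
        return {slot: (remaining_bw if slot == target else first_bw) for slot in buffered_counts}

    # Example from the text (N1 < N2 < N3):
    # allocate_by_backlog(12, 14, {"A": 5, "B": 8, "C": 20}) -> {"A": 12, "B": 12, "C": 14}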
Therefore, the utilization rate of the transmission link can be further improved through the mode.
It should be noted that, in practical applications, either of the two allocation principles may be selected as needed for dynamic allocation of the second time slots, and the configuration policy may also be adjusted dynamically. For example, during idle periods such as night time the allocation may preferentially follow the number of first messages in the second historical time period, while during peak periods more bandwidth may be allocated to the second time slot with the higher processing priority so that its third messages are served first.
It will be appreciated that the method further comprises: respectively acquiring the second message quantity of the third messages cached in each second time slot in a preset third history time period and the third message quantity cached at the current moment; and readjusting the processing priority of at least one second time slot according to the second message quantity and the third message quantity corresponding to the second time slots.
It should be noted that when the number of messages processed in a second time slot over a duration is small, that is, when the cumulative amount of the second message quantity and the third message quantity is smaller than a preset message threshold, or when the current third message quantity does not follow the pattern of change of the historical second message quantity, it can be determined that the slot processes few messages. Its processing priority can then be reduced so that message processing at other processing priorities is served first, further ensuring that all third messages are processed in time.
It can be understood that the step of readjusting the processing priority of at least one second time slot according to the second packet number and the third packet number corresponding to the plurality of second time slots includes: obtaining the number of cache messages according to the number of second messages and the number of third messages corresponding to the plurality of second time slots; and reducing the processing priority of the second time slot corresponding to the number of the cache messages lower than the preset lower limit threshold value.
It should be noted that the cached-message count may be the sum of the second message quantity and the third message quantity, or the smaller of the two; in other embodiments, weights may be assigned to the historical second message quantity and the current third message quantity to obtain the cached-message count for each second time slot. By comparing the cached-message count with the lower-limit threshold, the processing priority of any second time slot below the threshold is reduced.
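A sketch of this adjustment, assuming a weighted combination of the historical and current counts and a demotion of one priority level; both choices are assumptions, since the disclosure leaves the exact weighting and the amount of the reduction open.

    def lower_underused_slots(priorities, hist_counts, current_counts, lower_limit, weight=0.5):
        """priorities: {slot_id: processing_priority}. The cached-message figure per second time
        slot is taken as a weighted sum of the historical count (third historical time period) and
        the count buffered at the current moment; slots below the lower-limit threshold are demoted."""
        adjusted = dict(priorities)
        for slot, prio in priorities.items():
            cached = weight * hist_counts.get(slot, 0) + (1 - weight) * current_counts.get(slot, 0)
            if cached < lower_limit:
                adjusted[slot] = prio - 1   # reduce the processing priority of an underused slot
        return adjusted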
It can be understood that, referring to the embodiment shown in fig. 4, the method for message processing further includes:
step S510, in each scheduling period, a third task queue that is not scheduled is obtained.
And step S520, fusing a second time slot corresponding to the third task queue which continues not to be subjected to scheduling processing to a second time slot with adjacent processing priority.
It should be noted that, during polling of the second time slots and the first time slot, one second time slot may occupy the scheduling time of an adjacent second time slot (for example, when time-slice scheduling is implemented by statistical multiplexing of the messages rather than strictly by the duration of each second time slot), leaving insufficient remaining time for the adjacent second time slot to process the messages of its corresponding task queue, so that at least one task queue is not executed. Through this fusion, the scheduling probability of the third messages belonging to the continuously unscheduled third task queue can be improved. Whether a second time slot is always occupied, such that the third messages of its corresponding task queue cannot be scheduled, can be determined by setting a flag or by keeping a time count.
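A sketch of the fusion step, assuming each second time slot is represented by its duration and the task queue(s) it serves, and that the neighbouring slot of adjacent priority takes over both; this data layout is an assumption, not part of the disclosure.

    def fuse_starved_slot(slots, starved_id):
        """slots: list of dicts ordered by descending processing priority, each like
        {"id": ..., "duration": ..., "queues": [...]}. The second time slot whose third task
        queue keeps missing scheduling is fused into the neighbouring slot of adjacent
        priority, which takes over its duration and its task queue."""
        if len(slots) < 2:
            return slots
        idx = next(i for i, s in enumerate(slots) if s["id"] == starved_id)
        neighbour = slots[idx - 1] if idx > 0 else slots[idx + 1]
        starved = slots.pop(idx)
        neighbour["duration"] += starved["duration"]
        neighbour["queues"].extend(starved["queues"])
        return slots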
It is to be appreciated that the plurality of second time slots includes a polling slot.
It should be noted that the polling slot is used for processing first messages that match neither the first time slot nor any other second time slot. Providing a polling slot improves compatibility within the communication system and thus further improves its reliability.
It is understood that, before step S100, the method further includes: acquiring a preset processing priority list; allocating an initial processing priority to each second time slot according to the processing priority list; a third task queue is allocated for each second time slot.
It should be noted that, in some embodiments, the processing priority list corresponds one-to-one to the configuration priorities of the supported first messages. In other embodiments, the processing priority list matches the types of the first messages.
It will be appreciated that, with reference to the embodiment shown in fig. 5, the present application also proposes an apparatus comprising: a memory 500, a processor 600, and a computer program stored on the memory 500 and executable on the processor 600, the processor 600 implementing the message processing method described above when executing the computer program.
The memory 500, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that the device in this embodiment may be applied to the device in the embodiment shown in fig. 1, and the device in this embodiment and the message processing method shown in fig. 2 have the same inventive concept, so that these embodiments have the same implementation principle and technical effect, and are not described in detail here.
The non-transitory software programs and instructions required to implement the message processing method of the above embodiments are stored in the memory and, when executed by the processor, perform the message processing method of the above embodiments, for example the method steps corresponding to fig. 2, 3 and 4, or their sub-steps, described above.
It is understood that the present invention also provides a computer-readable storage medium storing computer-executable instructions for performing the above-mentioned message processing method.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media known to those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that various changes, omissions and substitutions in form and detail may be made without departing from the scope of this invention.

Claims (16)

1. A method for message processing, the method comprising:
caching a first message to be processed to a first task queue or a second task queue according to a message parameter of the first message; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than that of any third task queue;
and in each scheduling period, according to the processing priority, scheduling and processing the second message of the first task queue in a first time slot corresponding to the first task queue, and scheduling and processing the third message of the third task queue in a second time slot corresponding to each third task queue.
2. The method of claim 1, wherein the packet parameter comprises a packet type, the method further comprising:
and when the message type of the first message is a construction event, matching the configuration priority of the first message with the highest processing priority in the third task queues.
3. The method of claim 1, further comprising:
detecting the link quality of a preset transmission link; wherein, the transmission link is used for transmitting the third message;
and adjusting the transmission bandwidth of at least one second time slot in the scheduling period according to the link quality.
4. The method of claim 1, further comprising:
calculating the network transmission rate of a transmission link in a preset first historical time period;
when the network transmission rate meets a preset threshold value, calculating the total transmission bandwidth corresponding to the transmission link in a scheduling period;
and readjusting the transmission bandwidth of at least one second time slot in the scheduling period according to the total transmission bandwidth.
5. The method of claim 4,
the readjusting the transmission bandwidth of at least one of the second time slots in the scheduling period according to the total transmission bandwidth includes:
and dividing the total transmission bandwidth by a preset bandwidth allocation algorithm to obtain the transmission bandwidth corresponding to each second time slot.
6. The method of claim 5,
the dividing the total transmission bandwidth by a preset bandwidth allocation algorithm to obtain the transmission bandwidth corresponding to each second time slot includes:
calculating to obtain an average transmission bandwidth according to the number N of the second time slots and the total transmission bandwidth;
rounding the average transmission bandwidth downwards to obtain a first transmission bandwidth;
calculating to obtain residual bandwidth according to the total transmission bandwidth and the N-1 first transmission bandwidths;
and allocating the residual bandwidth and the first transmission bandwidth to the N second time slots according to a preset allocation principle.
7. The method of claim 6, wherein when the remaining bandwidth is larger than the first transmission bandwidth, the allocating the remaining bandwidth and the first transmission bandwidth to N second time slots according to a preset allocation rule comprises:
allocating the residual bandwidth to a second time slot with highest processing priority;
the first transmission bandwidth is allocated to other unassigned second time slots.
8. The method according to claim 6, wherein said allocating the remaining bandwidth and the first transmission bandwidth to N second time slots according to a preset allocation principle comprises:
determining the second time slot allocated by the residual bandwidth according to the first message quantity of the third messages cached in each second time slot in a preset second historical time period;
the first transmission bandwidth is allocated to other unassigned second time slots.
9. The method of claim 1, further comprising:
respectively acquiring the second message quantity of the third messages cached in each second time slot in a preset third history time period and the third message quantity cached at the current moment;
and readjusting the processing priority of at least one second time slot according to the second message quantity and the third message quantity corresponding to the second time slots.
10. The method according to claim 9, wherein said re-adjusting the processing priority of at least one of the second time slots according to the second packet number and the third packet number corresponding to a plurality of the second time slots comprises:
obtaining the number of cache messages according to the number of the second messages and the number of the third messages corresponding to the second time slots;
and reducing the processing priority of the second time slot corresponding to the number of the cache messages lower than the preset lower limit threshold value.
11. The method of claim 1, further comprising:
in each scheduling period, acquiring the third task queue which is not scheduled to be processed;
and fusing a second time slot corresponding to the third task queue which is continuously not scheduled to be processed to a second time slot of an adjacent processing priority.
12. The method of claim 1, wherein the number of second time slots comprises a polling slot.
13. The method of claim 1, wherein between buffering the first packet into a first task queue or a second task queue, the method further comprises:
acquiring a preset processing priority list;
allocating an initial processing priority to each second time slot according to the processing priority list;
and allocating the third task queue for each second time slot.
14. An apparatus, comprising:
the distribution module is used for caching the first message to a first task queue or a second task queue according to the message parameter of the first message to be processed; the second task queue is one of a plurality of third task queues with different processing priorities, and the processing priority of the second task queue is matched with the configuration priority in the message parameter; the processing priority of the first task queue is greater than that of any third task queue;
and the scheduling processing module is used for scheduling and processing the second message of the first task queue in a first time slot corresponding to the first task queue and scheduling and processing the third message of the third task queue in a second time slot corresponding to each third task queue in sequence according to the processing priority in each scheduling period.
15. An apparatus, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of message processing according to any one of claims 1 to 13 when executing the computer program.
16. A computer-readable storage medium having stored thereon computer-executable instructions for performing at least the method of message processing according to any of claims 1 to 13.
CN202111626083.1A 2021-12-28 2021-12-28 Message processing method, device and storage medium Pending CN114500402A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111626083.1A CN114500402A (en) 2021-12-28 2021-12-28 Message processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111626083.1A CN114500402A (en) 2021-12-28 2021-12-28 Message processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114500402A true CN114500402A (en) 2022-05-13

Family

ID=81496513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111626083.1A Pending CN114500402A (en) 2021-12-28 2021-12-28 Message processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114500402A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478476A (en) * 2008-12-08 2009-07-08 华为技术有限公司 Transmission processing method, apparatus and system for packet microwave data
CN101656585A (en) * 2009-08-27 2010-02-24 华为技术有限公司 Method for dispatching time division multiplexing business and device thereof
CN101790239A (en) * 2010-02-23 2010-07-28 中国电信股份有限公司 Packet dispatching method and forward service dispatcher
CN102025639A (en) * 2010-12-23 2011-04-20 北京星网锐捷网络技术有限公司 Queue scheduling method and system
WO2014067052A1 (en) * 2012-10-29 2014-05-08 Qualcomm Incorporated Selective high-priority bandwidth allocation for time-division multiple access communications
CN112929974A (en) * 2021-02-05 2021-06-08 四川安迪科技实业有限公司 Efficient TDMA satellite resource allocation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BELLE COOPER: "How to Ruthlessly Prioritize Tasks to Get More Done", Retrieved from the Internet <URL:https://zapier.com/blog/prioritize-task-list-methods/> *
王利村; 王秀妮: "Research on a dynamic bandwidth algorithm supporting QoS in EPON" (EPON支持QoS的动态带宽算法研究), 广东通信技术 (Guangdong Communication Technology), no. 09

Similar Documents

Publication Publication Date Title
CN107682135B (en) NOMA-based network slice self-adaptive virtual resource allocation method
CN107579926B (en) QoS setting method of Ceph cloud storage system based on token bucket algorithm
EP1415443B1 (en) Apparatus and method for flow scheduling based on priorities in a mobile network
EP2702730B1 (en) Effective circuits in packet-switched networks
US20020052205A1 (en) Quality of service scheduling scheme for a broadband wireless access system
CN113141590B (en) Industrial Internet of things-oriented wireless communication scheduling method and device
CN112996116B (en) Resource allocation method and system for guaranteeing quality of power time delay sensitive service
CN109617836B (en) Intelligent bandwidth allocation method and system for satellite data transmission
CN111654712A (en) Dynamic self-adaptive streaming media multicast method suitable for mobile edge computing scene
US20220104213A1 (en) Method of scheduling plurality of packets related to tasks of plurality of user equipments using artificial intelligence and electronic device performing the method
CN109699089A (en) A kind of channel access method and device
CN115473855B (en) Network system and data transmission method
US9621438B2 (en) Network traffic management
CN109257303B (en) QoS queue scheduling method and device and satellite communication system
CN112039804B (en) Method and system for dynamically allocating burst service bandwidth based on weight ratio
CN112737980B (en) Time-based network slice resource dynamic partitioning method and device
CN109792411B (en) Apparatus and method for managing end-to-end connections
US10439947B2 (en) Method and apparatus for providing deadline-based segmentation for video traffic
CN112714081B (en) Data processing method and device
CN114500402A (en) Message processing method, device and storage medium
CN111131066B (en) Traffic shaping method and device
CN114845400A (en) Flexe-based resource allocation method and system
CN110636013B (en) Dynamic scheduling method and device for message queue
CN115774614A (en) Resource regulation and control method, terminal and storage medium
CN110996398A (en) Wireless network resource scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination