CN105763481A - Information caching method and device - Google Patents


Info

Publication number
CN105763481A
CN105763481A (application CN201410804262.3A)
Authority
CN
China
Prior art keywords
information
queue
priority
stored
message queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410804262.3A
Other languages
Chinese (zh)
Inventor
赵嫘
王世平
刘云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Founder Mobile Media Technology Beijing Co Ltd
Peking University Founder Group Co Ltd
Original Assignee
Founder Mobile Media Technology Beijing Co Ltd
Peking University Founder Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Founder Mobile Media Technology Beijing Co Ltd, Peking University Founder Group Co Ltd filed Critical Founder Mobile Media Technology Beijing Co Ltd
Priority to CN201410804262.3A priority Critical patent/CN105763481A/en
Publication of CN105763481A publication Critical patent/CN105763481A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/50 — Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides an information caching method comprising the following steps: information is stored into corresponding information queues according to its priority, wherein multiple information queues of differing priorities are provided, high-priority information is stored in high-priority queues, and low-priority information is stored in low-priority queues; and information is taken out of each information queue, in order of queue priority, and stored in a cache queue. Because the information from all the queues is gathered, in priority order, into one and the same cache queue, only a single information processing channel is needed when the cache queue is processed. This saves system resources and avoids the waste caused by maintaining multiple processing channels.

Description

Information caching method and device
Technical field
The present invention relates to the technical field of data processing, and specifically to an information caching method and device.
Background technology
Against the background of rapidly developing informatization, systems must process large amounts of information. Some of this information has high real-time requirements and must be processed as soon as possible, while some has low real-time requirements and can be processed when the system is relatively idle.
At present there are two main message scheduling approaches. In the first, the information to be processed is stored in a single queue and taken out for processing in first-in-first-out order; this treats all information identically and cannot reflect its real-time requirements. In the second, the processing channels are separated, with high- and low-priority information handled by different, mutually independent channels. Although this approach does differentiate the information, the channels may be under-utilized, which wastes resources, and it may fail to meet actual needs when sending-channel resources are limited.
Summary of the invention
Accordingly, the technical problem to be solved by the present invention is how to store information of various priorities differentially, saving resources while still reflecting the real-time requirements of the information.
The present invention provides an information caching method, comprising:
storing information into the corresponding information queue according to its priority, wherein there are multiple said information queues, the priorities of the information queues differ, high-priority information is stored in a high-priority information queue, and low-priority information is stored in a low-priority information queue; and
taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in a cache queue.
Preferably, taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in the cache queue comprises:
taking at least part of the information out of the high-priority information queue and storing it in the cache queue; and
if the cache queue is not full, taking at least part of the information out of the low-priority information queue and storing it in the cache queue.
Preferably, taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in the cache queue comprises:
taking at least part of the information out of the highest-priority information queue and storing it in the cache queue; and
if the cache queue is not full, taking at least part of the information out of each next-lower-priority information queue in turn and storing it in the cache queue, until the cache queue is full.
Preferably, taking at least part of the information out of each next-lower-priority information queue in turn and storing it in the cache queue comprises:
assigning a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue; and taking information out of each next-lower-priority information queue in turn according to its proportion and storing it in the cache queue.
Preferably, the at least part of the information is all of the information.
Correspondingly, an embodiment of the present invention provides an information caching device, comprising:
an enqueue unit, configured to store information into the corresponding information queue according to its priority, wherein there are multiple said information queues, the priorities of the information queues differ, high-priority information is stored in a high-priority information queue, and low-priority information is stored in a low-priority information queue; and
a dequeue-and-cache unit, configured to take information out of each said information queue in turn, in order of the priorities of the multiple information queues, and store it in a cache queue.
Preferably, the dequeue-and-cache unit comprises:
a first high-priority dequeue subunit, configured to take at least part of the information out of the high-priority information queue and store it in the cache queue; and
a first low-priority dequeue subunit, configured to, when the cache queue is not full, take at least part of the information out of the low-priority information queue and store it in the cache queue.
Preferably, the dequeue-and-cache unit comprises:
a second high-priority dequeue subunit, configured to take at least part of the information out of the highest-priority information queue and store it in the cache queue; and
a second low-priority dequeue subunit, configured to, when the cache queue is not full, take at least part of the information out of each next-lower-priority information queue in turn and store it in the cache queue, until the cache queue is full.
Preferably, the second low-priority dequeue subunit comprises:
a dequeue-proportion allocation module, configured to assign a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue; and
a low-priority dequeue module, configured to take information out of each next-lower-priority information queue in turn according to its proportion and store it in the cache queue.
Preferably, the at least part of the information is all of the information.
The information caching method and device provided by these embodiments store information of different priorities in information queues of different priorities, then take the information out of each queue in order of queue priority and store it in a single cache queue. When the information is processed, it need only be extracted from this one cache queue, which reflects its real-time requirements. When the cache queue formed by the device is processed, only one information processing channel need be provided, avoiding the resource waste caused by using multiple processing channels.
Accompanying drawing explanation
To make the content of the present invention easier to understand clearly, the present invention is described in further detail below according to specific embodiments and with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the information caching method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of caching information with two information queues in Embodiment 1;
Fig. 3 is a flow chart of a first way of storing information into the cache queue in the method of Embodiment 1;
Fig. 4 is a flow chart of a second way of storing information into the cache queue in the method of Embodiment 1;
Fig. 5 is a schematic diagram of caching information with four information queues in Embodiment 1;
Fig. 6 is a sub-flow chart of the method shown in Fig. 4;
Fig. 7 is a structural diagram of the information caching device provided by Embodiment 2 of the present invention;
Fig. 8 is a structural diagram of one dequeue-and-cache unit in the information caching device of Embodiment 2.
Detailed description of the invention
Embodiment one
This embodiment provides an information caching method. As shown in Fig. 1, the method comprises:
Step 11: store information into the corresponding information queue according to its priority. There are multiple information queues, the priorities of the queues differ, high-priority information is stored in a high-priority queue, and low-priority information is stored in a low-priority queue. Specifically, information with high real-time requirements, i.e. information that must be processed as soon as possible, can be given a high-priority flag, while information with low real-time requirements, i.e. information that can wait until resources are idle, can be given a low-priority flag; the information queues likewise have different priorities, and each piece of information is stored in the queue corresponding to its flag. For example, as shown in Fig. 2, the information processing system is provided with a first information queue 21 and a second information queue 22, the first having higher priority than the second. When a high-priority message needs to be processed, it is placed in the first information queue 21; when a low-priority message needs to be processed, it is placed in the second information queue 22. Those skilled in the art will appreciate that in practice there may be more than two priority levels, in which case a corresponding number of information queues can be provided.
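Step 11 can be sketched in a few lines of Python (a minimal illustration only, not the patented implementation; the priority labels and message payloads are made up):

```python
from collections import deque

def enqueue_by_priority(messages, queues):
    """Store each (priority, payload) pair in the information queue
    matching its priority flag, as in step 11."""
    for priority, payload in messages:
        queues[priority].append(payload)

# Two queues as in Fig. 2: a high- and a low-priority information queue.
queues = {"high": deque(), "low": deque()}
enqueue_by_priority([("high", "a"), ("low", "b"), ("high", "c")], queues)
print(list(queues["high"]))  # → ['a', 'c']
```

With more priority levels, the `queues` mapping simply gains more entries, one queue per level.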
Step 12: take information out of each information queue in turn, in order of queue priority, and store it in a cache queue. The message processing system provides only one cache queue; specifically, the queues are selected for dequeuing in order of priority from high to low. For example, as shown in Fig. 2, because the first information queue 21 has higher priority than the second information queue 22, messages are first taken from queue 21 and stored in the cache queue 23, and only then taken from queue 22 and stored in the cache queue 23. Once all pending messages have been stored in the cache queue, they can be taken out and processed directly in the order in which they were stored, which reflects their real-time requirements.
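The priority-ordered drain of step 12 can be sketched as follows (a hedged Python sketch; the function name, queue contents and capacity value are illustrative, not from the patent):

```python
from collections import deque

def cache_messages(queues, capacity):
    """Drain per-priority information queues (highest priority first)
    into a single cache queue, stopping when the cache is full.

    `queues` is a list of deques ordered from highest to lowest priority.
    """
    cache = deque()
    for q in queues:
        while q and len(cache) < capacity:
            cache.append(q.popleft())
    return cache

high = deque(["h1", "h2"])
low = deque(["l1", "l2", "l3"])
cache = cache_messages([high, low], capacity=4)
print(list(cache))  # → ['h1', 'h2', 'l1', 'l2']; 'l3' stays queued
```

A single consumer can then process `cache` front to back, which is the one-channel arrangement the patent describes.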
In the information caching method of this embodiment, information queues of different priorities store information of different priorities, and the information is then taken out of each queue in order of queue priority and stored in a single cache queue. When processing messages, they need only be extracted from this cache queue, reflecting their real-time requirements. When the cache queue formed by this method is processed, only one information processing channel need be provided, avoiding the resource waste caused by using multiple processing channels.
Preferably, as shown in Fig. 3, step 12 of the above method may comprise:
Step 121: take at least part of the information out of the high-priority information queue and store it in the cache queue. In practice the cache queue has a fixed storage capacity while the number of messages in each queue changes dynamically, so the cache queue's current remaining capacity should be checked before taking information from an information queue. For example, suppose the cache queue can store 1000 pieces of information and is currently empty. If the high-priority queue holds more than 1000 pieces, at most 1000 can be taken from it; if it holds 1000 or fewer, all of its information can be taken and stored in the cache queue. If the cache queue is full after the high-priority queue has been handled, no more information is taken from the low-priority queue.
Step 122: if the cache queue is not full, take at least part of the information out of the low-priority information queue and store it in the cache queue. When taking information from the low-priority queue, the cache queue's current remaining capacity should likewise be checked; either part or all of the information may be taken. This variant of the method makes full use of the cache queue's capacity while guaranteeing real-time behavior, further improving the efficiency of the caching operation.
Preferably, for the case of more than two information queues, as shown in Fig. 4, step 12 of the above method may comprise:
Step 123: take at least part of the information out of the highest-priority information queue and store it in the cache queue. With more than two queues, information is first taken from the highest-priority queue; depending on the cache queue's capacity, either all or part of it may be taken.
Step 124: if the cache queue is not full, take at least part of the information out of each next-lower-priority information queue in turn and store it in the cache queue, until the cache queue is full. This operation takes information from each queue in order of priority from high to low until the cache queue is full. For example, as shown in Fig. 5, the message processing system is provided with a first information queue 51, a second information queue 52, a third information queue 53 and a fourth information queue 54, with the priority order first queue 51 > second queue 52 > third queue 53 > fourth queue 54. Part or all of the information is first taken from the first queue 51 and stored in the cache queue 55; if the cache queue 55 is not full, part or all of the information is taken from the second queue 52; if it is still not full, part or all of the information is taken in turn from the third queue 53 and the fourth queue 54. If the cache queue becomes full at any point in this process, no further information is taken from the lower-priority queues.
Preferably, as shown in Fig. 6, step 124 of the above method may comprise:
Step 1241: assign a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue;
Step 1242: take information out of each next-lower-priority information queue in turn according to its proportion and store it in the cache queue.
This operation is a preferred way of extracting information from each queue below the highest priority. For example, the proportions of messages to be taken from the second information queue 52, the third information queue 53 and the fourth information queue 54 can be preset according to each queue's priority, and the actual number of messages taken is determined by the remaining capacity of the cache queue 55. Specifically, suppose that after information has been taken from the highest-priority queue 51, the remaining capacity of the cache queue 55 is 500. If the proportion for the second queue 52 is 0.6, at most 300 pieces of information are taken from it; if the proportion for the third queue 53 is 0.3, at most 150 pieces are taken from it; and if the proportion for the fourth queue 54 is 0.1, at most 50 pieces are taken from it. If the information remaining in a queue is less than its allotted upper limit, the cache queue 55 may still have free space after all queues have been handled; in that case, information can continue to be taken from each queue according to the preset proportions and stored in the cache queue 55.
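The proportion-based allocation of steps 1241-1242 can be sketched as follows, using the 0.6 / 0.3 / 0.1 example above (the function name and queue contents are illustrative, not from the patent):

```python
def proportional_take(queues, weights, remaining):
    """Split the cache queue's remaining capacity among lower-priority
    queues by preset proportions; each queue contributes at most its
    quota, or everything it holds if that is less."""
    taken = []
    for q, w in zip(queues, weights):
        quota = int(remaining * w)   # per-queue upper limit
        n = min(quota, len(q))
        taken.extend(q[:n])
        del q[:n]
    return taken

# Remaining capacity 500, proportions 0.6 / 0.3 / 0.1 as in the example.
q2 = [f"m2-{i}" for i in range(400)]   # quota 300 → takes 300
q3 = [f"m3-{i}" for i in range(100)]   # quota 150, only 100 held → 100
q4 = [f"m4-{i}" for i in range(60)]    # quota 50 → takes 50
out = proportional_take([q2, q3, q4], [0.6, 0.3, 0.1], remaining=500)
print(len(out))  # → 450
```

Because `q3` held fewer messages than its quota, 50 slots of the 500 stay free; the patent notes that a further round of proportional takes can then fill the remainder.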
Preferably, the at least part of the information in this embodiment is all of the information; that is, when the cache queue's current capacity is large enough, it is preferable to take all of the information out of each information queue and store it in the cache queue. If the cache queue's current capacity is insufficient to take all of the information from a given queue, the system can wait for a certain time until some information has left the cache queue, and then take information from that queue.
The information caching method of this embodiment can be applied in the field of mobile communications, for example to sending short messages. Short messages to be sent generally have different real-time requirements, in which case the above caching method can be used to cache them, and a short message transmitter can take messages directly from the cache queue and send them.
The method of sending short messages comprises the following steps:
(1) check the length len of the cache queue;
(2) if len = 0, pause for a period of time and jump to (1);
(3) if len > 0, the dequeue thread takes one SMS request message from the queue;
(4) parse each field from the SMS request message;
(5) check the length of the message content to decide whether the long SMS needs to be split; if not, jump to (7);
(6) split the long SMS into several sub-messages according to the splitting algorithm;
(7) assemble the message into one conforming to the SUBMIT command of the SMS gateway CMPP specification;
(8) submit the assembled SUBMIT message to a submit thread in the thread pool, which calls the submission agent service and submits it to the carrier-side SMS gateway, i.e. executes the SMS SUBMIT command;
(9) receive SMS delivery reports and add them to the delivery report queue;
(10) receive SMS status reports and add them to the status report queue;
then continue the cycle by jumping to (1).
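The polling-and-submit loop above can be sketched as follows (a simplified Python illustration; `submit` stands in for the CMPP SUBMIT call, and the 70-character limit comes from the rate-control description below — the rest of the names are assumptions):

```python
import time
from collections import deque

LONG_SMS_LIMIT = 70  # max Chinese characters per single SMS

def split_long_sms(content, limit=LONG_SMS_LIMIT):
    """Split a long message into sub-messages of at most `limit` chars."""
    return [content[i:i + limit] for i in range(0, len(content), limit)]

def send_loop(cache_queue, submit, poll_delay=0.1, max_iter=None):
    """Check the cache queue's length; sleep when it is empty, otherwise
    dequeue one request, split it if needed, and submit each part."""
    it = 0
    while max_iter is None or it < max_iter:
        it += 1
        if not cache_queue:          # len == 0: pause, then re-check
            time.sleep(poll_delay)
            continue
        msg = cache_queue.popleft()  # len > 0: take one SMS request
        for part in split_long_sms(msg):
            submit(part)             # assemble + execute SUBMIT command

sent = []
q = deque(["short", "x" * 150])      # one short and one long message
send_loop(q, sent.append, max_iter=2)
print(len(sent))  # → 4  (1 part + 3 parts of the 150-char message)
```

Parsing of request fields and the delivery/status report queues of steps (4), (9) and (10) are omitted here for brevity.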
The rate control mechanism for sending short messages is as follows:
(1) long SMS messages are split after dequeuing (content longer than 70 Chinese characters is split into several sub-messages);
(2) if the gateway-side rate limit is N messages per second, N submit threads are created;
(3) each submit thread performs one SMS submission (the Submit command of the CMPP protocol) per second.
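The N-threads-at-one-submission-per-second scheme can be sketched like this (an illustrative Python sketch under stated assumptions: `submit` stands in for the CMPP Submit call, and the function and variable names are made up):

```python
import threading
import time
from collections import deque

def rate_limited_submitters(cache, submit, n_threads, duration=1.0):
    """With a gateway limit of N messages/second, run N submit threads,
    each doing one submission per second, so the aggregate rate is N/s."""
    lock = threading.Lock()
    stop_at = time.monotonic() + duration

    def worker():
        while time.monotonic() < stop_at:
            with lock:                       # one thread pops at a time
                msg = cache.popleft() if cache else None
            if msg is not None:
                submit(msg)
            time.sleep(1.0)                  # one submission per second

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

cache = deque(f"sms-{i}" for i in range(10))
sent = []
# Gateway limit of 3/s → 3 threads; run for half a second (one round).
rate_limited_submitters(cache, sent.append, n_threads=3, duration=0.5)
print(len(sent))  # → 3 (each thread submitted once)
```

Capping each thread at one Submit per second keeps the aggregate rate at exactly the gateway limit without any shared rate counter.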
When the cache queue generated by the above information caching method is used for sending short messages, only one sending channel need be provided, and it can be ensured that this channel is fully utilized, avoiding resource waste while reflecting the real-time requirements of each short message.
Embodiment two
This embodiment provides an information caching device. As shown in Fig. 7, the device comprises:
an enqueue unit 71, configured to store information into the corresponding information queue according to its priority, wherein there are multiple said information queues, the priorities of the information queues differ, high-priority information is stored in a high-priority information queue, and low-priority information is stored in a low-priority information queue; and
a dequeue-and-cache unit 72, configured to take information out of each said information queue in turn, in order of the priorities of the multiple information queues, and store it in a cache queue.
Preferably, the dequeue-and-cache unit 72 may comprise:
a first high-priority dequeue subunit 721, configured to take at least part of the information out of the high-priority information queue and store it in the cache queue; and
a first low-priority dequeue subunit 722, configured to, when the cache queue is not full, take at least part of the information out of the low-priority information queue and store it in the cache queue.
Preferably, as shown in Fig. 8, the dequeue-and-cache unit 72 may comprise:
a second high-priority dequeue subunit 723, configured to take at least part of the information out of the highest-priority information queue and store it in the cache queue; and
a second low-priority dequeue subunit 724, configured to, when the cache queue is not full, take at least part of the information out of each next-lower-priority information queue in turn and store it in the cache queue, until the cache queue is full.
Preferably, the second low-priority dequeue subunit 724 may comprise:
a dequeue-proportion allocation module 7241, configured to assign a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue; and
a low-priority dequeue module 7242, configured to take information out of each next-lower-priority information queue in turn according to its proportion and store it in the cache queue.
Preferably, the at least part of the information in this embodiment is all of the information.
The information caching device provided by this embodiment stores information of different priorities in information queues of different priorities, then takes the information out of each queue in order of queue priority and stores it in a single cache queue. When the information is processed, it need only be extracted from this one cache queue, which reflects its real-time requirements. When the cache queue formed by the device is processed, only one information processing channel need be provided, avoiding the resource waste caused by using multiple processing channels.
Obviously, the above embodiments are merely examples given for clarity of description and are not limiting. Those of ordinary skill in the art may make other changes in different forms on the basis of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious changes or variations derived therefrom remain within the protection scope of the invention.

Claims (10)

1. An information caching method, characterized by comprising:
storing information into the corresponding information queue according to its priority, wherein there are multiple said information queues, the priorities of the information queues differ, high-priority information is stored in a high-priority information queue, and low-priority information is stored in a low-priority information queue; and
taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in a cache queue.
2. The method according to claim 1, characterized in that taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in the cache queue comprises:
taking at least part of the information out of the high-priority information queue and storing it in the cache queue; and
if the cache queue is not full, taking at least part of the information out of the low-priority information queue and storing it in the cache queue.
3. The method according to claim 1, characterized in that taking information out of each said information queue in turn, in order of the priorities of the multiple information queues, and storing it in the cache queue comprises:
taking at least part of the information out of the highest-priority information queue and storing it in the cache queue; and
if the cache queue is not full, taking at least part of the information out of each next-lower-priority information queue in turn and storing it in the cache queue, until the cache queue is full.
4. The method according to claim 3, characterized in that taking at least part of the information out of each next-lower-priority information queue in turn and storing it in the cache queue comprises:
assigning a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue; and taking information out of each next-lower-priority information queue in turn according to its proportion and storing it in the cache queue.
5. The method according to any one of claims 2-4, characterized in that the at least part of the information is all of the information.
6. An information caching device, characterized by comprising:
an enqueue unit, configured to store information into the corresponding information queue according to its priority, wherein there are multiple said information queues, the priorities of the information queues differ, high-priority information is stored in a high-priority information queue, and low-priority information is stored in a low-priority information queue; and
a dequeue-and-cache unit, configured to take information out of each said information queue in turn, in order of the priorities of the multiple information queues, and store it in a cache queue.
7. The device according to claim 6, characterized in that the dequeue-and-cache unit comprises:
a first high-priority dequeue subunit, configured to take at least part of the information out of the high-priority information queue and store it in the cache queue; and
a first low-priority dequeue subunit, configured to, when the cache queue is not full, take at least part of the information out of the low-priority information queue and store it in the cache queue.
8. The device according to claim 6, characterized in that the dequeue-and-cache unit comprises:
a second high-priority dequeue subunit, configured to take at least part of the information out of the highest-priority information queue and store it in the cache queue; and
a second low-priority dequeue subunit, configured to, when the cache queue is not full, take at least part of the information out of each next-lower-priority information queue in turn and store it in the cache queue, until the cache queue is full.
9. The device according to claim 8, characterized in that the second low-priority dequeue subunit comprises:
a dequeue-proportion allocation module, configured to assign a take-out proportion to each information queue, wherein the proportion of a higher-priority queue is greater than that of a lower-priority queue; and
a low-priority dequeue module, configured to take information out of each next-lower-priority information queue in turn according to its proportion and store it in the cache queue.
10. The device according to any one of claims 7-9, characterized in that the at least part of the information is all of the information.
CN201410804262.3A 2014-12-19 2014-12-19 Information caching method and device Pending CN105763481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410804262.3A CN105763481A (en) 2014-12-19 2014-12-19 Information caching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410804262.3A CN105763481A (en) 2014-12-19 2014-12-19 Information caching method and device

Publications (1)

Publication Number Publication Date
CN105763481A true CN105763481A (en) 2016-07-13

Family

ID=56341368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410804262.3A Pending CN105763481A (en) 2014-12-19 2014-12-19 Information caching method and device

Country Status (1)

Country Link
CN (1) CN105763481A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471264A * 2002-07-27 2004-01-28 Huawei Technologies Co., Ltd. Dynamic RAM queue regulating method based on dynamic packet transmission
CN1536820A * 2003-04-09 2004-10-13 Huawei Technologies Co., Ltd. Method for improving data transmission performance when the network is congested
CN101035067A * 2007-01-25 2007-09-12 Huawei Technologies Co., Ltd. Flow control implementation method and device based on the output queue
CN101094181A * 2007-07-25 2007-12-26 Huawei Technologies Co., Ltd. Dispatching device and method for enqueuing and dequeuing messages
CN101242360A * 2008-03-13 2008-08-13 ZTE Corporation Network address translation method and system based on priority queue
CN101291546A * 2008-06-11 2008-10-22 Tsinghua University Switching fabric coprocessor of a core router
CN101374109A * 2008-10-07 2009-02-25 ZTE Corporation Method and apparatus for scheduling packets
CN101834790A * 2010-04-22 2010-09-15 Shanghai Huawei Technologies Co., Ltd. Multi-core-processor-based flow control method and multi-core processor
EP2301002A1 * 2008-05-30 2011-03-30 Sony Computer Entertainment America LLC File input/output scheduler
CN102195885A * 2011-05-27 2011-09-21 Chengdu Huawei Symantec Technologies Co., Ltd. Message processing method and device
CN102752321A * 2012-08-07 2012-10-24 Guangzhou Weishike Information Technology Co., Ltd. Firewall implementation method based on a multi-core network processor


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107770090A * 2017-10-20 2018-03-06 Shenzhen Nanfei Microelectronics Co., Ltd. Method and apparatus for controlling registers in a pipeline
CN107770090B * 2017-10-20 2020-05-01 Shenzhen Nanfei Microelectronics Co., Ltd. Method and apparatus for controlling registers in a pipeline
CN108037983A * 2017-11-22 2018-05-15 Lianjia (Beijing) Technology Co., Ltd. Task scheduling method in a distributed scheduling system, and distributed scheduling system
CN108200134A * 2017-12-25 2018-06-22 Tencent Technology (Shenzhen) Company Limited Request message management method and device, storage medium
WO2019128535A1 * 2017-12-25 2019-07-04 Tencent Technology (Shenzhen) Company Limited Message management method and device, and storage medium
US10891177B2 2017-12-25 2021-01-12 Tencent Technology (Shenzhen) Company Limited Message management method and device, and storage medium
CN108200134B * 2017-12-25 2021-08-10 Tencent Technology (Shenzhen) Company Limited Request message management method and device, and storage medium
CN111355673A * 2018-12-24 2020-06-30 Shenzhen ZTE Microelectronics Technology Co., Ltd. Data processing method, device, equipment and storage medium
CN115002224A * 2022-05-26 2022-09-02 Alibaba (China) Co., Ltd. Message processing method, device, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105763481A (en) Information caching method and device
CN105511954B (en) Message processing method and device
WO2016011894A1 (en) Message processing method and apparatus
EP2783490B1 (en) Time-sensitive data delivery
CN102223306B Message transmitting method and device
CN107832143B (en) Method and device for processing physical machine resources
US11507419B2 (en) Method,electronic device and computer program product for scheduling computer resources in a task processing environment
CN104202261A (en) Service request processing method and device
US20140215492A1 (en) Dynamic provisioning of message groups
CN107454014A Priority queuing method and device
CN111221638B (en) Concurrent task scheduling processing method, device, equipment and medium
CN102891809B Per-interface message order-preserving method and system for a multi-core network device
EP3238386B1 (en) Apparatus and method for routing data in a switch
CN105409170A Packet output controller and method for dequeuing multiple packets from one scheduled output queue and/or using over-scheduling to schedule output queues
WO2014173166A1 (en) Shared resource scheduling method and system
RU2641250C2 (en) Device and method of queue management
CN102609307A (en) Multi-core multi-thread dual-operating system network equipment and control method thereof
CN108462649B (en) Method and device for reducing high-priority data transmission delay in congestion state of ONU
CN114363269B (en) Message transmission method, system, equipment and medium
EP2922257A1 (en) Traffic management scheduling method and apparatus
CN102333280A (en) Business secret key renewing method and system and business processing server
CN108462653B (en) TTE-based rapid protocol control frame sending method
CN104951373A (en) Message queue processing method of scheduling system
CN110865891B (en) Asynchronous message arrangement method and device
CN114168367A (en) Method and system for solving queue backlog through batch tasks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160713