CN109039898B - Management method and device of drainage information - Google Patents


Info

Publication number
CN109039898B
CN109039898B (granted publication of application CN201810898266.0A)
Authority
CN
China
Prior art keywords
drainage
information
data set
drainage information
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810898266.0A
Other languages
Chinese (zh)
Other versions
CN109039898A (en)
Inventor
张聪桂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd
Priority to CN201810898266.0A
Publication of CN109039898A
Application granted
Publication of CN109039898B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/38: Flow based routing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a management method and device for drainage information. The method comprises the following steps: receiving drainage information sent by a drainage node, wherein the drainage information comprises an IP address, a port and application information; storing the drainage information in a cache unit; when request information sent by the drainage node is received, acquiring global drainage information from the cache unit; and sending the global drainage information to the drainage node. The method collects and organizes the drainage information generated by clients in each locality, so that every drainage node can obtain comprehensive drainage information and use it to plan the routing paths taken by the traffic each application program on a client sends. Because the routing paths are used in a distributed manner, the network service quality of the applications on each client can be effectively guaranteed.

Description

Management method and device of drainage information
Technical Field
The invention relates to the technical field of internet, in particular to a management method and a management device for drainage information.
Background
With the rapid development of the internet, application programs of all kinds have emerged. Many applications, such as network games, online video and online music, consume a significant amount of bandwidth. In practice, a client typically runs several application programs simultaneously, many clients exist in the same region, and each client must connect to the corresponding application server when running an application. To ensure the network service quality of each application and to avoid network congestion, a network operator deploys a drainage node server (hereinafter a "drainage node") to perform drainage, that is, to plan the routing path used by the traffic sent by each application on a client, thereby preventing any single link from becoming overloaded.
Each drainage node is generally responsible for the clients in a certain area. When a client runs an application program, the generated traffic first reaches the drainage node; based on that traffic, the drainage node obtains the client's IP address, the port used, and the application information of the application program generating the traffic. Within the area it is responsible for, the drainage node can then perform drainage based on the IP address, port and application information corresponding to the traffic, so that routing paths are used in a distributed manner and network congestion is avoided.
However, a drainage node can only drain based on information obtained from client traffic in its own area, whereas a link between a client and an application server may pass through areas served by several drainage nodes. Since the node does not know the link load conditions in those other areas, traffic is likely to be routed over heavily loaded links, which degrades the running speed of the application program.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a management method and an apparatus for drainage information. The technical scheme is as follows:
In a first aspect, a management method for drainage information is provided, applied to a summary node, the method including:
receiving drainage information sent by a drainage node, wherein the drainage information comprises an IP address, a port and application information;
storing the drainage information in a cache unit;
when request information sent by the drainage node is received, acquiring global drainage information in the cache unit;
and sending the global drainage information to the drainage node.
Optionally, the storing of the drainage information in a cache unit includes:
determining whether the drainage information is stored in a first data set in the cache unit;
when the drainage information is stored in the first data set, storing the drainage information in a second data set in the cache unit;
and when the drainage information is not stored in the first data set, storing the drainage information in a third data set in the cache unit.
Optionally, after storing the drainage information in the second data set in the cache unit, the method further includes:
and updating the expiration time of the drainage information in the first data set according to a first preset period.
Optionally, after storing the drainage information in the third data set in the cache unit, the method further includes:
and storing the drainage information in the first data set according to a first preset period, setting the expiration time of the drainage information, and deleting the drainage information in the third data set.
Optionally, the obtaining of the global drainage information in the cache unit includes:
and acquiring the drainage information contained in the first data set and the third data set.
Optionally, the method further comprises:
and generating a backup file according to a second preset period and storing the backup file, wherein the backup file comprises the global drainage information in the cache unit.
In a second aspect, there is provided an apparatus for managing drainage information, the apparatus including:
the receiving unit is used for receiving the drainage information sent by the drainage node, wherein the drainage information comprises an IP address, a port and application information;
the storage unit is used for storing the drainage information in a cache unit;
the acquisition unit is used for acquiring the global drainage information in the cache unit when receiving the request information sent by the drainage node;
and the sending unit is used for sending the global drainage information to the drainage node.
Optionally, the storage unit is specifically configured to:
determining whether the drainage information is stored in a first data set in the cache unit;
when the drainage information is stored in the first data set, storing the drainage information in a second data set in the cache unit;
and when the drainage information is not stored in the first data set, storing the drainage information in a third data set in the cache unit.
Optionally, the storage unit is further configured to update an expiration time of the drainage information in the first data set according to a first preset period.
Optionally, the storage unit is further configured to store the drainage information in the first data set according to a first preset period, set an expiration time of the drainage information, and delete the drainage information in the third data set.
Optionally, the obtaining unit is specifically configured to obtain the drainage information included in the first data set and the third data set.
Optionally, the obtaining unit is further configured to generate and store a backup file according to a second preset period, where the backup file includes the global drainage information in the cache unit.
In a third aspect, there is provided a summarizing node comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the management method of the drainage information according to the first aspect.
The management method and device for drainage information provided by the embodiments of the present invention collect and organize the drainage information reported by drainage nodes in each locality, so that every drainage node can obtain comprehensive drainage information, quickly determine the application information corresponding to network traffic, and plan the routing path used by the traffic each application sends from a client. Routing paths can be used in a distributed manner or allocated on demand in a targeted way; for example, the traffic of a video application can be planned onto a line with larger bandwidth, so that optimal allocation is performed based on existing network resources and the network service quality of the applications on each client is ensured. Moreover, because the method uses a cache unit as the storage medium, it can respond quickly even under heavy concurrent requests, improving system performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic diagram of a network framework according to an embodiment of the present invention;
fig. 2 is a flowchart of a management method of drainage information according to an embodiment of the present invention;
fig. 3 is a block diagram of a management apparatus for drainage information according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a summary node according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides a management method of drainage information, which can be applied to the network framework shown in fig. 1. The network framework comprises clients, drainage nodes and a summary node. A drainage node is responsible for guiding the traffic of clients in a certain area, and the clients in the same area all connect to the drainage node responsible for that area. When an application program running on a client generates network traffic, the traffic first reaches the drainage node, and the drainage node can determine drainage information based on the traffic, where the drainage information comprises the client's IP address, the port used, and the application information of the application program.
Each drainage node can report the drainage information obtained by the drainage node to the summarizing node at regular time, and the summarizing node receives the drainage information reported by each drainage node and performs summarizing and sorting to obtain the global drainage information.
When the drainage node sends a drainage information request to the summary node, the summary node sends the summarized global drainage information to the drainage node, the drainage node can plan a routing path used by flow sent by each application program on a client according to the global drainage information, and the routing path is used dispersedly, so that optimal distribution can be carried out based on the existing network resources, and the network service quality of the application programs on each client can be effectively ensured.
Referring to fig. 2, a flowchart of a management method for drainage information according to an embodiment of the present invention is provided, where the method is applied to a summary node, that is, executed by the summary node, and the method specifically includes the following steps.
Step 201, receiving drainage information sent by a drainage node, where the drainage information includes an IP address, a port, and application information.
The IP address and the port included in the drainage information are the IP address and port of the client, and the application information is the application category and application identifier of the application program running on the client; for example, the application category may be games and the application identifier the identifier of a specific game. The drainage information may also include a transport protocol, so that the drainage node can select a corresponding link according to the particular transport protocol. Because drainage nodes are distributed in various localities, the summary node receives the drainage information reported by all of them and performs comprehensive summarization and organization.
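As a minimal sketch, one piece of drainage information could be modeled as a record keyed by client address and protocol. The field names and key format below are illustrative assumptions, not specified by the patent:

```python
from dataclasses import dataclass

# Illustrative record layout for one piece of drainage information; the
# field names and the cache-key format are assumptions for this sketch.
@dataclass(frozen=True)
class DrainageInfo:
    ip: str                 # client IP address
    port: int               # client port
    app_category: str       # application category, e.g. "game" or "video"
    app_id: str             # identifier of the specific application
    protocol: str = "tcp"   # optional transport protocol

    def key(self) -> str:
        """Key under which this record could sit in a key-value cache."""
        return f"{self.ip}:{self.port}:{self.protocol}"

info = DrainageInfo("203.0.113.7", 51234, "game", "game-001")
assert info.key() == "203.0.113.7:51234:tcp"
```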
Specifically, when a drainage node receives network traffic sent by a client in the region it is responsible for, it performs matching based on its local drainage information. If the match succeeds, it stores the drainage information record and prepares to upload it; if the match fails, it can further identify the network traffic using a traffic-characteristic identification technique to determine which application program sent the traffic, and at the same time store the source IP address and port (i.e., the client IP and port), the application information, and so on corresponding to the network traffic, in preparation for uploading.
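The match-or-identify step on the drainage node might look like the sketch below. `local_table` and `classify_by_signature()` are illustrative stand-ins; the patent names a "traffic characteristic identification technology" without specifying a concrete algorithm:

```python
# Placeholder heuristic for traffic-characteristic identification; a real
# implementation would inspect protocol signatures, packet sizes, timing, etc.
def classify_by_signature(payload: bytes):
    return ("video", "video-xyz") if payload.startswith(b"\x47") else ("unknown", "unknown")

local_table = {("203.0.113.7", 51234): ("game", "game-001")}  # known flows

def identify_flow(src_ip: str, src_port: int, payload: bytes):
    key = (src_ip, src_port)
    if key in local_table:                 # match succeeded: reuse the record
        return local_table[key]
    app = classify_by_signature(payload)   # match failed: identify the traffic
    local_table[key] = app                 # store the record, ready for upload
    return app

assert identify_flow("203.0.113.7", 51234, b"") == ("game", "game-001")
assert identify_flow("198.51.100.2", 443, b"\x47") == ("video", "video-xyz")
```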
After the drainage node obtains the application information corresponding to the network traffic, it can determine the routing path corresponding to that application information based on a preset routing strategy and forward the traffic accordingly. In this way, traffic from different application programs is guided onto the routing paths configured for each application, ensuring a reasonable allocation of resources.
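The preset routing strategy can be sketched as a simple lookup from application category to a preconfigured path. The policy names and paths here are hypothetical:

```python
# Illustrative routing policy: application information selects a
# preconfigured routing path; names and paths are assumptions.
routing_policy = {
    "video": "high-bandwidth-line",
    "game":  "low-latency-line",
}
DEFAULT_PATH = "default-line"

def select_path(app_category: str) -> str:
    # Unrecognized applications fall back to a default path.
    return routing_policy.get(app_category, DEFAULT_PATH)

assert select_path("video") == "high-bandwidth-line"
assert select_path("unclassified") == "default-line"
```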
When receiving the drainage information sent by the drainage node, the summary node may check the drainage information and the identity of the drainage node, for example, may verify the type and format of the drainage information, so as to ensure the integrity, validity, and correctness of the data.
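A minimal sketch of such a check on the summary node follows; the required fields and the specific validations are assumptions illustrating the type/format verification the text describes:

```python
import ipaddress

# Fields assumed mandatory for a well-formed drainage information report.
REQUIRED = ("ip", "port", "app_category", "app_id")

def validate(record: dict) -> bool:
    if any(f not in record for f in REQUIRED):
        return False                          # incomplete report
    try:
        ipaddress.ip_address(record["ip"])    # well-formed IP address
    except ValueError:
        return False
    return isinstance(record["port"], int) and 0 < record["port"] < 65536

assert validate({"ip": "203.0.113.7", "port": 51234,
                 "app_category": "game", "app_id": "g1"})
assert not validate({"ip": "not-an-ip", "port": 51234,
                     "app_category": "game", "app_id": "g1"})
```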
Step 202, storing the drainage information in a cache unit.
The cache unit may use a non-relational database, such as a redis cache or a Memcached cache, etc. Both the redis cache and the Memcached cache belong to a key-value distributed storage system, and have the characteristics of high query speed, large data storage quantity and high support for concurrency.
The invention does not adopt the traditional storage approach of writing data into a (relational) database; instead, the received drainage information is stored in the cache unit. This avoids problems such as slow response and degraded system performance caused by pressure on the number of database connections when many drainage nodes make requests simultaneously. By adopting a cache, the embodiments of the invention keep the data in memory to speed up processing, and the application can read and write the data directly without establishing a connection, so there is no connection-count limit and scenarios with highly concurrent requests can be supported.
The summary node can store the drainage information in a classified manner. The specific process may be as follows: determine whether the drainage information is already stored in a first data set in the cache unit; if it is, store the drainage information in a second data set in the cache unit; if it is not, store the drainage information in a third data set in the cache unit. Drainage information received by the summary node is stored in the cache unit according to this process, and the drainage information in the second and third data sets is then consolidated into the first data set according to a first preset period. The specific consolidation process is as follows.
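The classified write can be sketched with plain dicts standing in for the cache's hash tables (a minimal sketch; in practice the sets would live in redis or Memcached):

```python
# Three dicts stand in for the cache's data sets, following the text:
# first = already summarized, second = seen again, third = brand new.
first, second, third = {}, {}, {}

def store(key: str, record: dict) -> None:
    if key in first:
        second[key] = record   # already summarized: mark for expiry refresh
    else:
        third[key] = record    # new record: queue for merge into first set

first["a"] = {"app": "game"}
store("a", {"app": "game"})    # known key goes to the second set
store("b", {"app": "video"})   # new key goes to the third set
assert "a" in second and "b" in third
```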
Because the drainage information in the second data set already exists in the first data set, when the second data set is consolidated into the first data set according to the first preset period, the drainage information does not need to be written into the first data set again; only its expiration time in the first data set is updated, and the consolidated drainage information is deleted from the second data set. When the third data set is consolidated into the first data set according to the first preset period, the drainage information in the third data set is written into the first data set, its expiration time is set, and the consolidated drainage information is deleted from the third data set. In other words, if the drainage information was stored in the second data set, its expiration time in the first data set is updated according to the first preset period; if it was stored in the third data set, it is stored in the first data set according to the first preset period, its expiration time is set, and it is deleted from the third data set.
When sorting the drainage information in the second data set and the third data set, the second data set may be renamed to a fourth data set, the third data set may be renamed to a fifth data set, and the drainage information in the fourth data set and the fifth data set may be sorted into the first data set according to the sorting process. Meanwhile, an empty second data set and an empty third data set are newly built for storing the next received new drainage information, so that the accuracy of summarizing and sorting the drainage information can be guaranteed, and the accurate storage of the new drainage information can be guaranteed. Each data set in the embodiment of the present invention may adopt a storage manner of a hash table.
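The rename-and-merge cycle described above can be sketched as follows, with plain dicts as stand-ins for the cache's hash tables. The TTL value and the (record, expiry) tuple layout are assumptions:

```python
import time

# first = summarized records as (record, expiry); second/third = incoming.
first, second, third = {}, {}, {}
TTL = 300  # assumed expiration window in seconds

def consolidate():
    """Periodic merge (the first preset period): rename the incoming sets so
    new reports land in fresh empty sets, then fold the renamed copies into
    the first data set."""
    global second, third
    fourth, second = second, {}     # second -> fourth, new empty second
    fifth, third = third, {}        # third  -> fifth,  new empty third
    now = time.time()
    for key in fourth:              # already summarized: refresh expiry only
        record, _ = first[key]
        first[key] = (record, now + TTL)
    for key, record in fifth.items():   # new records: write with an expiry
        first[key] = (record, now + TTL)

first["a"] = ({"app": "game"}, 0.0)   # stale record already summarized
second["a"] = {"app": "game"}         # reported again
third["b"] = {"app": "video"}         # brand-new report
consolidate()
assert first["a"][1] > 0 and "b" in first and not second and not third
```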
Each piece of drainage information carries an expiration time; when the same drainage information is received again, its expiration time is extended, and when the expiration time is reached, the drainage information is deleted. The expiration time may be obtained by extending the current time by a preset duration, or by extending the reporting time of the drainage information by a preset duration. The reporting time may be the time at which the drainage node confirmed the drainage information based on client traffic, or the time at which the drainage node reported it to the summary node.
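Expiry handling can be sketched as below; the extension duration and the (record, expiry) layout are assumptions, and timestamps are passed in explicitly to keep the sketch deterministic:

```python
EXTENSION = 300  # assumed preset duration by which the expiry is extended

def touch(store: dict, key: str, now: float) -> None:
    record, _ = store[key]
    store[key] = (record, now + EXTENSION)    # re-report extends the expiry

def purge(store: dict, now: float) -> None:
    for key in [k for k, (_, exp) in store.items() if exp <= now]:
        del store[key]                        # expiry reached: delete record

store = {"a": ({"app": "game"}, 100.0)}
touch(store, "a", now=200.0)   # re-reported at t=200, so expires at t=500
purge(store, now=350.0)
assert "a" in store            # still before its expiry
purge(store, now=600.0)
assert "a" not in store        # expired and deleted
```

With a redis backend, the same effect is commonly achieved with per-key TTLs rather than an explicit purge loop.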
The embodiment of the invention first classifies and stores newly received drainage information, then periodically consolidates it in stages, keeping new information in data sets separate from the summarized information, so that the processing of new information and the summarization of existing information can proceed in parallel. This avoids the situation that arises when a database is used for storage: if new information is reported while historical drainage information is being summarized and collated, the time-consuming summarization can delay the processing of the new information, leaving the response information incomplete. The embodiment ensures that newly reported drainage information is stored in the cache unit in time, guaranteeing the completeness of the information in each response and effectively shortening the overall time spent on processing and summarizing new information.
Step 203, when receiving the request information sent by the drainage node, obtaining the global drainage information in the cache unit.
The drainage node can request the drainage information through a request interface provided by the summary node, and the request interface can handle highly concurrent requests.
And step 204, sending the global drainage information to the drainage node.
When request information sent by the drainage node is received, the drainage information contained in the first data set and the third data set can be obtained, and the drainage information in the first data set and the third data set is all the drainage information received by the summary node.
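The response-assembly step can be sketched as a union of the two sets (a minimal sketch; the dict representation is an assumption):

```python
def global_info(first: dict, third: dict) -> dict:
    """Global drainage information: the summarized first set plus the
    not-yet-consolidated new records in the third set."""
    merged = dict(first)
    merged.update(third)   # records awaiting consolidation are included too
    return merged

g = global_info({"a": "summarized"}, {"b": "new"})
assert g == {"a": "summarized", "b": "new"}
```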
In this embodiment of the present invention, the summary node may further generate a backup file at regular time, that is, according to a second preset period, and store the backup file in the database, so as to facilitate error checking and manual verification, where the backup file includes the global drainage information in the cache unit, or in other words, the backup file includes the drainage information included in the first data set and the third data set.
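The periodic backup might be sketched as a simple dump of the global drainage information; the JSON format and file name are assumptions, since the patent only says a backup file is generated and stored:

```python
import json
import os
import tempfile

def write_backup(global_info: dict, path: str) -> None:
    """Dump the global drainage information for error checking and
    manual verification (the second preset period would trigger this)."""
    with open(path, "w") as f:
        json.dump(global_info, f)

path = os.path.join(tempfile.gettempdir(), "drainage_backup.json")
write_backup({"203.0.113.7:51234": {"app": "game"}}, path)
with open(path) as f:
    assert json.load(f) == {"203.0.113.7:51234": {"app": "game"}}
```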
The management method for drainage information provided by the embodiment of the present invention collects and organizes the drainage information reported by drainage nodes in each locality, so that every drainage node can obtain comprehensive drainage information, quickly determine the application information corresponding to network traffic, and plan the routing path used by the traffic each application sends from a client. Routing paths can be used in a distributed manner or allocated on demand in a targeted way; for example, the traffic of a video application can be planned onto a line with larger bandwidth, so that optimal allocation is performed based on existing network resources and the network service quality of the applications on each client is ensured. Moreover, because the method uses a cache unit as the storage medium, it can respond quickly even under heavy concurrent requests, improving system performance.
Referring to fig. 3, a block diagram of a management apparatus for drainage information according to an embodiment of the present invention is shown, where the apparatus may be configured in a summary node or may be a summary node itself, and the apparatus may include a receiving unit 301, a storing unit 302, an obtaining unit 303, and a sending unit 304.
The receiving unit 301 is configured to receive drainage information sent by a drainage node, where the drainage information includes an IP address, a port, and application information.
A storage unit 302, configured to store the drainage information in a cache unit.
An obtaining unit 303, configured to obtain, when receiving the request information sent by the drainage node, the global drainage information in the cache unit.
A sending unit 304, configured to send the global drainage information to the drainage node.
Preferably, the receiving unit 301 may send the received drainage information to the storage unit 302 through an ActiveMQ message queue.
Preferably, the storage unit 302 is specifically configured to:
determining whether the drainage information is stored in a first data set in the cache unit;
when the drainage information is stored in the first data set, storing the drainage information in a second data set in the cache unit;
and when the drainage information is not stored in the first data set, storing the drainage information in a third data set in the cache unit.
Preferably, the storage unit 302 is further configured to update the expiration time of the drainage information in the first data set according to a first preset period.
Preferably, the storage unit 302 is further configured to store the drainage information in the first data set according to a first preset period, set an expiration time of the drainage information, and delete the drainage information in the third data set.
Preferably, the obtaining unit 303 is specifically configured to obtain the drainage information included in the first data set and the third data set.
Preferably, the obtaining unit 303 is further configured to generate and store a backup file according to a second preset period, where the backup file includes the global drainage information in the cache unit.
The management device for drainage information provided by the embodiment of the present invention collects and organizes the drainage information reported by drainage nodes in each locality, so that every drainage node can obtain comprehensive drainage information, quickly determine the application information corresponding to network traffic, and plan the routing path used by the traffic each application sends from a client. Routing paths can be used in a distributed manner or allocated on demand in a targeted way; for example, the traffic of a video application can be planned onto a line with larger bandwidth, so that optimal allocation is performed based on existing network resources and the network service quality of the applications on each client is ensured. Moreover, because the device uses a cache unit as the storage medium, it can respond quickly even under heavy concurrent requests, improving system performance.
It should be noted that the management device for drainage information provided in the above embodiment is described, when summarizing drainage information, only in terms of an exemplary division of functional units. In practical applications, these functions may be distributed among different functional units as needed; that is, the internal structure of the device may be divided into different functional units to complete all or part of the functions described above. In addition, the management device and the management method for drainage information provided in the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
Fig. 4 is a schematic structural diagram of a summary node provided in an embodiment of the present invention. The summary node 400 may vary significantly depending on configuration or performance, and may include one or more central processing units 422, memory 432, and one or more storage media 430 (e.g., one or more mass storage devices) storing application programs 442 or data 444. The memory 432 and the storage medium 430 may be transient or persistent storage. The program stored on the storage medium 430 may include one or more unit modules (not shown), each of which may include a series of instruction operations on the summary node. Further, the central processing unit 422 may be arranged to communicate with the storage medium 430 and to execute, on the summary node 400, the series of instruction operations in the storage medium 430.
The summary node 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 450, one or more input-output interfaces 458, one or more keyboards 456, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The summary node 400 may include memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the above management method of drainage information.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A management method for drainage information, applied to a summary node, the method comprising the following steps:
receiving drainage information sent by a drainage node, wherein the drainage information comprises an IP address, a port and application information;
storing the drainage information in a cache unit, including:
determining whether the drainage information is stored in a first data set in the cache unit;
when the drainage information is stored in the first data set, storing the drainage information in a second data set in the cache unit;
when the drainage information is not stored in the first data set, storing the drainage information in a third data set in the cache unit;
then, sorting the drainage information in the second data set and the third data set into the first data set according to a first preset period;
when request information sent by the drainage node is received, acquiring global drainage information in the cache unit; wherein the global drainage information comprises drainage information contained in the first data set and the third data set;
and sending the global drainage information to the drainage node so that the drainage node plans a routing path used by the flow sent by each application program on the client according to the global flow information.
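The storage flow of claim 1 can be illustrated with a short sketch. This is a minimal, hypothetical Python model, assuming drainage information is keyed by (IP, port, application); the class, method names, and time-to-live value are illustrative assumptions, not specified by the patent.

```python
import time

class DrainageCache:
    """Hypothetical model of the claimed cache unit with three data sets."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl       # assumed expiration interval (not specified)
        self.first = {}      # consolidated set: key -> expiration timestamp
        self.second = set()  # keys reported again while already in `first`
        self.third = {}      # newly seen keys awaiting consolidation

    def store(self, ip, port, app):
        """Store received drainage information as described in claim 1."""
        key = (ip, port, app)
        if key in self.first:
            self.second.add(key)   # known entry: queue an expiration refresh
        else:
            self.third[key] = key  # new entry: hold it in the third data set

    def consolidate(self):
        """Run once per 'first preset period': fold the second and third
        data sets into the first data set."""
        now = time.time()
        for key in self.second:
            self.first[key] = now + self.ttl  # refresh expiration of known entries
        self.second.clear()
        for key in list(self.third):
            self.first[key] = now + self.ttl  # promote new entry, set expiration
            del self.third[key]               # then delete it from the third set

    def global_info(self):
        """Global drainage information: the first set plus the third set."""
        return set(self.first) | set(self.third)
```

Because the third data set is included in the global view, a drainage node that requests global information before the next consolidation still sees newly reported entries.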
2. The method of claim 1, wherein storing the drainage information in the second data set in the cache unit comprises:
updating the expiration time of the drainage information in the first data set according to the first preset period.
3. The method of claim 1, wherein after storing the drainage information in the third data set in the cache unit, the method further comprises:
storing the drainage information in the first data set according to the first preset period, setting an expiration time for the drainage information, and deleting the drainage information from the third data set.
4. The method of claim 1, further comprising:
generating a backup file according to a second preset period and storing the backup file, where the backup file comprises the global drainage information in the cache unit.
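The periodic backup of claim 4 can be sketched as follows. The JSON layout, file name, and record fields are assumptions for illustration; the patent does not specify a backup format.

```python
import json
import time

def write_backup(global_info, path):
    """Persist the global drainage information (run once per 'second
    preset period') so the cache can be rebuilt after a restart.
    `global_info` is assumed to be a set of (ip, port, app) tuples."""
    records = [{"ip": ip, "port": port, "app": app}
               for ip, port, app in sorted(global_info)]
    with open(path, "w") as f:
        json.dump({"saved_at": time.time(), "drainage": records}, f)
    return len(records)

def load_backup(path):
    """Rebuild the set of (ip, port, app) tuples from a backup file."""
    with open(path) as f:
        data = json.load(f)
    return {(r["ip"], r["port"], r["app"]) for r in data["drainage"]}
```

On restart, the summary node could reload the backup into the first data set rather than waiting for every drainage node to re-report its information.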
5. An apparatus for managing drainage information, the apparatus comprising:
a receiving unit, configured to receive the drainage information sent by a drainage node, where the drainage information comprises an IP address, a port, and application information;
a storage unit, configured to store the drainage information in a cache unit, and specifically configured to:
determining whether the drainage information is stored in the first data set in the cache unit;
when the drainage information is stored in the first data set, storing the drainage information in a second data set in the cache unit;
when the drainage information is not stored in the first data set, storing the drainage information in a third data set in the cache unit;
and merging the drainage information in the second data set and the third data set into the first data set according to a first preset period;
an acquisition unit, configured to acquire the global drainage information in the cache unit when request information sent by the drainage node is received, where the global drainage information comprises the drainage information contained in the first data set and the third data set;
and a sending unit, configured to send the global drainage information to the drainage node, so that the drainage node plans a routing path for the traffic sent by each application on the client according to the global drainage information.
6. The apparatus of claim 5,
the storage unit is further configured to update the expiration time of the drainage information in the first data set according to the first preset period.
7. The apparatus of claim 5,
the storage unit is further configured to store the drainage information in the first data set according to the first preset period, set an expiration time for the drainage information, and delete the drainage information from the third data set.
8. The apparatus of claim 5,
the acquisition unit is further configured to generate a backup file according to a second preset period and store the backup file, where the backup file comprises the global drainage information in the cache unit.
9. An aggregator node, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for managing drainage information according to any one of claims 1 to 4.
CN201810898266.0A 2018-08-08 2018-08-08 Management method and device of drainage information Active CN109039898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810898266.0A CN109039898B (en) 2018-08-08 2018-08-08 Management method and device of drainage information

Publications (2)

Publication Number Publication Date
CN109039898A CN109039898A (en) 2018-12-18
CN109039898B true CN109039898B (en) 2021-12-07

Family

ID=64632281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810898266.0A Active CN109039898B (en) 2018-08-08 2018-08-08 Management method and device of drainage information

Country Status (1)

Country Link
CN (1) CN109039898B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446653A (en) * 2014-08-27 2016-03-30 阿里巴巴集团控股有限公司 Data merging method and device
CN106034330A (en) * 2015-03-17 2016-10-19 网宿科技股份有限公司 Mobile terminal flow processing method based on content distribution network, apparatus and system thereof
CN107295573A (en) * 2017-07-12 2017-10-24 网宿科技股份有限公司 The bootstrap technique and system of a kind of service application flow
CN108282414A (en) * 2017-12-29 2018-07-13 网宿科技股份有限公司 A kind of bootstrap technique of data flow, server and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090015447A1 (en) * 2007-03-16 2009-01-15 Daniel Kilbank Method for processing data using quantum system
CN104734964B (en) * 2013-12-24 2018-12-14 华为技术有限公司 Message processing method, node and system
CN104811380B (en) * 2014-01-26 2018-08-14 华为技术有限公司 A kind of method and cleaning equipment sending drainage routing iinformation
CN105591967B (en) * 2014-11-12 2019-06-28 华为技术有限公司 A kind of data transmission method and device
CN108234312B (en) * 2016-12-15 2021-03-05 中国电信股份有限公司 Flow scheduling method, PCE (path computation element) and SDN (software defined network) system

Similar Documents

Publication Publication Date Title
JP7291719B2 (en) Automatically optimize resource usage on the target database management system to increase workload performance
US20120233308A1 (en) Determining Network Node Performance Data Based on Location and Proximity of Nodes
CN111200657B (en) Method for managing resource state information and resource downloading system
US20190182108A1 (en) Message Flow Management for Virtual Networks
CN111966289B (en) Partition optimization method and system based on Kafka cluster
CN111177222A (en) Model testing method and device, computing equipment and storage medium
CN110445828B (en) Data distributed processing method based on Redis and related equipment thereof
US11102289B2 (en) Method for managing resource state information and system for downloading resource
CN108563697B (en) Data processing method, device and storage medium
US20180337840A1 (en) System and method for testing filters for data streams in publisher-subscriber networks
WO2016169237A1 (en) Data processing method and device
CN112121413A (en) Response method, system, device, terminal and medium of function service
CN111339183A (en) Data processing method, edge node, data center and storage medium
CN103164262B (en) A kind of task management method and device
CN113127477A (en) Method and device for accessing database, computer equipment and storage medium
CN106656592B (en) Service management method and device based on role configuration
CN109039898B (en) Management method and device of drainage information
CN110286854B (en) Method, device, equipment and storage medium for group member management and group message processing
CN105893150B (en) Interface calling frequency control method and device and interface calling request processing method and device
CN113448747B (en) Data transmission method, device, computer equipment and storage medium
CN112035274B (en) Service processing method, device and system
CN111221857B (en) Method and apparatus for reading data records from a distributed system
US8713149B2 (en) Data feed management
CN115174696B (en) Node scheduling method and device, electronic equipment and storage medium
CN111917599B (en) Management system and method for cloud platform host state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant