CN112199185A - Data exchange method and device, readable storage medium and computer equipment - Google Patents


Info

Publication number: CN112199185A
Application number: CN202011413483.XA
Authority: CN (China)
Prior art keywords: strategy, historical, policy, target, time cost
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 涂旭青, 周金平, 闵红星
Current assignee: Thinvent Digital Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Thinvent Digital Technology Co Ltd
Application filed by Thinvent Digital Technology Co Ltd
Priority to CN202011413483.XA, published as CN112199185A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a data exchange method and device, a readable storage medium, and computer equipment. The method comprises the following steps: acquiring historical configuration information of the whole data exchange network, wherein the historical configuration information comprises a historical trigger time strategy, a historical routing strategy, a historical message queue strategy, a historical data packet cutting strategy and a historical thread use strategy; analyzing the historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing a data exchange task, wherein the optimal configuration information comprises a target trigger time strategy, a target routing strategy, a target message queue strategy, a target data packet cutting strategy and a target thread use strategy; and executing the data exchange task according to these five target strategies. The invention can improve the execution efficiency of data exchange tasks.

Description

Data exchange method and device, readable storage medium and computer equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data exchange method, an apparatus, a readable storage medium, and a computer device.
Background
Data exchange is a critical link in big data applications. Various types of data (structured, semi-structured, unstructured, and so on) generally need to be transmitted across complex network environments. In a large-scale cross-network, multi-stage exchange system, the exchange nodes (front-end processors) are numerous and large in scale, the historical task volume of the running exchange system is huge, and data transmission and exchange between nodes must be forwarded over multi-link routes.
In the prior art, platform implementers manually configure the various data exchange parameters by experience. As shared exchange services and network complexity grow, many exchange tasks are interleaved and executed concurrently in the cross-network, multi-stage environment, so load increases and message packet congestion readily occur, causing platform-wide message congestion and load imbalance and degrading the execution efficiency of data exchange tasks.
Disclosure of Invention
Therefore, an object of the present invention is to provide a data exchange method to improve the execution efficiency of data exchange tasks.
The invention provides a data exchange method, which comprises the following steps:
acquiring historical configuration information of a data exchange task whole network, wherein the historical configuration information comprises a historical trigger time strategy, a historical routing strategy, a historical message queue strategy, a historical data packet cutting strategy and a historical thread use strategy;
analyzing the historical configuration information through a clustering algorithm to obtain optimal configuration information executed by a data exchange task, wherein the optimal configuration information comprises a target trigger time strategy, a target routing strategy, a target message queue strategy, a target data packet cutting strategy and a target thread using strategy;
and executing a data exchange task according to the target trigger time strategy, the target routing strategy, the target message queue strategy, the target data packet cutting strategy and the target thread using strategy.
According to the data exchange method provided by the invention, in a cross-network multi-stage network environment, and especially on a large cross-network multi-stage data exchange platform with many data exchange nodes, the factors influencing exchange efficiency mainly comprise: the idleness of the trigger time, the route forwarding path of message transmission, the utilization rate of message queues, the size of a single message packet after data packet cutting, and the number of processing threads. The invention analyzes historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing a data exchange task, and plans an optimal exchange scheme in advance along the dimensions of trigger time, route forwarding, queue usage, data packet size and thread count, thereby improving the execution efficiency of the data exchange task and keeping data exchange fast, stable and smooth.
In addition, the data exchange method according to the present invention may further have the following additional technical features:
further, the step of analyzing the historical configuration information through a clustering algorithm to obtain the optimal configuration information executed by the data exchange task includes:
analyzing result data recovered by an actively-initiated full-route test based on the link performance of the switching node, exhausting all possible solutions of the historical trigger time strategy, the historical routing strategy, the historical message queue strategy, the historical data packet cutting strategy and the historical thread use strategy within a preset rule range based on a mathematical model of a time cost function, calculating total time cost corresponding to different configuration information, and taking the configuration information with the lowest total time cost as the optimal configuration information.
Further, the full routing test based on the link performance of the switching node is a test process in which a data packet is transmitted from a source node to a target node through a preset number of node links.
Further, the calculation formula of the total time cost is as follows:
T=t1+ t2+ t3+ t4+ t5;
wherein T is the total time cost, t1 is the contribution value of the trigger time policy to the total time cost, t2 is the contribution value of the routing policy, t3 is the contribution value of the message queue policy, t4 is the contribution value of the packet fragmentation policy, and t5 is the contribution value of the thread usage policy.
Further, the calculation formula of the total time cost is as follows:
T=a*t1+ b*t2+ c*t3+ d*t4+ e*t5;
wherein T is the total time cost; t1 is the contribution value of the trigger time policy to the total time cost and a its weight coefficient; t2 is the contribution value of the routing policy and b its weight coefficient; t3 is the contribution value of the message queue policy and c its weight coefficient; t4 is the contribution value of the packet fragmentation policy and d its weight coefficient; and t5 is the contribution value of the thread usage policy and e its weight coefficient.
Another objective of the present invention is to provide a data exchange device to improve the execution efficiency of data exchange tasks.
The invention provides a data exchange device, comprising:
the acquisition module is used for acquiring historical configuration information of the whole network of the data exchange task, wherein the historical configuration information comprises a historical trigger time strategy, a historical routing strategy, a historical message queue strategy, a historical data packet cutting strategy and a historical thread use strategy;
the analysis module is used for analyzing the historical configuration information through a clustering algorithm to obtain optimal configuration information executed by the data exchange task, wherein the optimal configuration information comprises a target trigger time strategy, a target routing strategy, a target message queue strategy, a target data packet cutting strategy and a target thread using strategy;
and the execution module is used for executing a data exchange task according to the target trigger time strategy, the target routing strategy, the target message queue strategy, the target data packet cutting strategy and the target thread use strategy.
According to the data exchange device provided by the invention, in a cross-network multi-stage network environment, and especially on a large cross-network multi-stage data exchange platform with many data exchange nodes, the factors influencing exchange efficiency mainly comprise: the idleness of the trigger time, the route forwarding path of message transmission, the utilization rate of message queues, the size of a single message packet after data packet cutting, and the number of processing threads. The invention analyzes historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing a data exchange task, and plans an optimal exchange scheme in advance along the dimensions of trigger time, route forwarding, queue usage, data packet size and thread count, thereby improving the execution efficiency of the data exchange task and keeping data exchange fast, stable and smooth.
In addition, the data exchange device according to the present invention may further have the following additional features:
further, the analysis module is specifically configured to:
analyzing result data recovered by an actively-initiated full-route test based on the link performance of the switching node, exhausting all possible solutions of the historical trigger time strategy, the historical routing strategy, the historical message queue strategy, the historical data packet cutting strategy and the historical thread use strategy within a preset rule range based on a mathematical model of a time cost function, calculating total time cost corresponding to different configuration information, and taking the configuration information with the lowest total time cost as the optimal configuration information.
Further, the full routing test based on the link performance of the switching node is a test process in which a data packet is transmitted from a source node to a target node through a preset number of node links.
Further, the calculation formula of the total time cost is as follows:
T=t1+ t2+ t3+ t4+ t5;
wherein T is the total time cost, t1 is the contribution value of the trigger time policy to the total time cost, t2 is the contribution value of the routing policy, t3 is the contribution value of the message queue policy, t4 is the contribution value of the packet fragmentation policy, and t5 is the contribution value of the thread usage policy.
Further, the calculation formula of the total time cost is as follows:
T=a*t1+ b*t2+ c*t3+ d*t4+ e*t5;
wherein T is the total time cost; t1 is the contribution value of the trigger time policy to the total time cost and a its weight coefficient; t2 is the contribution value of the routing policy and b its weight coefficient; t3 is the contribution value of the message queue policy and c its weight coefficient; t4 is the contribution value of the packet fragmentation policy and d its weight coefficient; and t5 is the contribution value of the thread usage policy and e its weight coefficient.
The invention also proposes a readable storage medium on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
The invention also proposes a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method of data exchange according to an embodiment of the invention;
fig. 2 is a block diagram of a data exchange apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a data exchange method according to an embodiment of the invention includes steps S101 to S103.
S101, obtaining historical configuration information of the whole network of the data exchange task, wherein the historical configuration information comprises a historical trigger time strategy, a historical routing strategy, a historical message queue strategy, a historical data packet cutting strategy and a historical thread using strategy.
In the cross-network multi-stage network environment, especially a large cross-network multi-stage data exchange platform with a large number of data exchange nodes, the factors influencing the exchange efficiency mainly include: the idle degree of trigger time, the route forwarding path of message transmission, the utilization rate of message queues, the size of a single message packet after data packet cutting, and the number of processing threads.
Specifically, the trigger time strategy determines, from the time periods in which exchange tasks are executed between nodes, the optimal time point for periodically triggering an exchange task, so as to obtain the triggering scheme that disturbs the platform least and finds it most idle (the candidate trigger times are each hour and half-hour of the day).
Routing strategy: the optimal exchange node channel, obtained by analyzing factors that affect data exchange efficiency between nodes, such as exchange task state, exchange frequency, exchange time, task starting, and software and hardware parameters.
Message queue strategy: whether to create a new message queue or share an existing one, decided by analysis and calculation over factors such as the number of data packets, network bandwidth, and hardware performance.
Data packet cutting strategy: the optimal packet cutting granularity, obtained by analyzing and calculating factors such as the network state, the volume of exchanged data, the server resource state, and the number of exchange tasks (the candidate cut sizes are 500K/1M/2M/5M/10M/20M/50M and other customized packet size parameters).
Thread use strategy: the optimal resource utilization scheme, obtained by analyzing and calculating factors on the server such as CPU, memory, task count, and task frequency (the candidate thread counts are 5/10/15/20/30 and other customized values).
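The five strategy dimensions above form a finite search space. The following minimal sketch uses the candidate packet sizes and thread counts listed in the text; the trigger times, routes, and queue modes are hypothetical placeholders, not values from the patent:

```python
# Candidate values for each of the five policy dimensions.
# Packet sizes and thread counts come from the text; the trigger times,
# routes, and queue modes are illustrative placeholders only.
POLICY_SPACE = {
    "trigger_time": ["00:30", "01:30", "12:30"],          # hour/half-hour slots
    "route": ["line_A", "line_B"],                        # candidate node channels
    "queue": ["shared", "new"],                           # reuse vs. create a queue
    "packet_size": ["500K", "1M", "2M", "5M", "10M", "20M", "50M"],
    "threads": [5, 10, 15, 20, 30],
}

def num_combinations(space):
    """Total number of policy combinations the exhaustive search must score."""
    n = 1
    for options in space.values():
        n *= len(options)
    return n
```

With these placeholder dimensions the exhaustive search scores 3 * 2 * 2 * 7 * 5 = 420 combinations, which is why the patent can afford to enumerate them all within a preset rule range.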
In step S101, historical configuration information of the whole data exchange network is first acquired; it specifically includes a historical trigger time policy, a historical routing policy, a historical message queue policy, a historical data packet cutting policy, and a historical thread usage policy, for example the trigger time, routing, message queue, packet cutting, and thread usage policies used in the last 6 months.
S102, analyzing the historical configuration information through a clustering algorithm to obtain optimal configuration information executed by the data exchange task, wherein the optimal configuration information comprises a target trigger time strategy, a target routing strategy, a target message queue strategy, a target data packet cutting strategy and a target thread using strategy.
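The patent does not name a specific clustering algorithm. As an illustration only, a minimal one-dimensional k-means (with k=2 and deterministic min/max initialization) can group historical task execution times, so that configurations falling in the low-centroid cluster are candidates for the optimal configuration; the duration values below are hypothetical:

```python
def kmeans_1d(values, iters=20):
    """Minimal 1-D k-means with k=2 and deterministic min/max initialisation.

    Returns the two cluster centroids and the member values of each cluster.
    """
    centroids = [min(values), max(values)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            nearest = min(range(2), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical historical task durations (seconds); the low-centroid cluster
# identifies the historically efficient configurations.
durations = [10.9, 11.2, 13.2, 15.1, 15.4, 11.0]
centroids, clusters = kmeans_1d(durations)
```

The configurations attached to the fast cluster would then feed the exhaustive time-cost evaluation described below.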
The step of analyzing the historical configuration information through a clustering algorithm to obtain the optimal configuration information executed by the data exchange task comprises the following steps:
analyzing result data recovered by an actively-initiated full-route test based on the link performance of the switching node, exhausting all possible solutions of the historical trigger time strategy, the historical routing strategy, the historical message queue strategy, the historical data packet cutting strategy and the historical thread use strategy within a preset rule range based on a mathematical model of a time cost function, calculating total time cost corresponding to different configuration information, and taking the configuration information with the lowest total time cost as the optimal configuration information.
The full routing test based on the link performance of the switching node is a test process of transmitting a data packet from a source node to a target node through a preset number of node links.
Whether the execution strategy of a data exchange task is optimal is jointly determined by the trigger time strategy, routing strategy, message queue strategy, data packet cutting strategy, thread use strategy and so on; different permutations and combinations of these strategies produce completely different execution results. The method analyzes the result data returned by an actively initiated full-route test based on the link performance of the exchange nodes and, with a mathematical model based on a time cost function, exhaustively enumerates all possible combinations of the trigger time strategy, routing strategy, message queue strategy, data packet cutting strategy and thread use strategy within a preset rule range, then computes and executes the combination that minimizes the cost function for task execution.
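The exhaustive step described above can be sketched as a Cartesian-product search over the strategy space; the two-dimensional toy space and the made-up cost function below are stand-ins for the patent's actual model:

```python
from itertools import product

def best_configuration(space, cost_fn):
    """Score every combination of policy values with the supplied time-cost
    function and return the combination with the lowest total time cost."""
    keys = list(space)
    best_cfg, best_cost = None, float("inf")
    for combo in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, combo))
        cost = cost_fn(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

# Toy example: two policy dimensions and a hypothetical cost function.
space = {"route": ["line_A", "line_B"], "threads": [5, 10]}
cost = lambda c: (1.0 if c["route"] == "line_A" else 0.5) + 0.1 * c["threads"]
```

In the full system the cost function would be the time cost model T described below, evaluated against the recovered full-route test data.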
In a large-scale multi-stage exchange system that must transmit various kinds of data (structured, semi-structured, unstructured, and so on) across complex network environments, the exchange nodes are numerous, structurally complex, and somewhat unstable. A model is therefore needed that can judge the combined effect of the trigger time strategy, routing strategy, message queue strategy, data packet cutting strategy and thread use strategy and quickly derive an optimal execution scheme for a data exchange task. From the data perspective, the goal is, at any time t, to minimize the total cost formed by the transit time paid by all fragmented data packets of one data transmission task due to network congestion and similar causes; one data transmission task comprises all the packets into which that task is cut. All other concepts, such as the optimal trigger time point, optimal routing, optimal slicing, optimal queues, and optimal thread allocation, are sub-concepts derived from this objective and need no special handling in the present model.
Time cost: a data transmission task is cut into a series of data packets, each transmitted from a source node to a target node over a series of node links. Let s be the transmission time of a whole data packet, and let z be the set of strategy variables of any cut packet in the task (z may cover the trigger time strategy, the routing strategy, the message queue strategy selection, the data packet cutting size selection, the thread use strategy selection, and so on). The time cost function can then be written as f(s, z), where the change of f with respect to z reflects the time cost incurred by the combined effect of different strategy variable values. As a simple example, z may take the five values of the trigger time, routing, message queue, packet cutting and thread use strategies; different configuration choices for these strategies give a packet different time costs over a given node link.
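The dependence of f(s, z) on the strategy variables can be sketched as a multiplicative adjustment of the raw transmission time; the penalty factors below are hypothetical, chosen only to illustrate how different z values change the cost:

```python
# Hypothetical per-choice penalty multipliers for two of the five strategy
# variables in z; unlisted choices are treated as neutral (factor 1.0).
PENALTY = {
    ("route", "line_A"): 1.2,
    ("route", "line_B"): 1.0,
    ("queue", "shared"): 1.1,
    ("queue", "new"): 1.0,
}

def f(s, z):
    """Time cost of one cut packet: raw transmission time s scaled by the
    combined effect of the strategy variable values in z."""
    cost = s
    for key, value in z.items():
        cost *= PENALTY.get((key, value), 1.0)
    return cost
```

Under this sketch a packet whose raw transmission time is 10 seconds costs 12 seconds on line A with a fresh queue, but only 10 seconds on line B, mirroring how the model weighs strategy choices against each other.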
Specifically, the calculation formula of the total time cost is as follows:
T=t1+ t2+ t3+ t4+ t5;
wherein T is the total time cost, t1 is the contribution value of the trigger time policy to the total time cost, t2 is the contribution value of the routing policy, t3 is the contribution value of the message queue policy, t4 is the contribution value of the packet fragmentation policy, and t5 is the contribution value of the thread usage policy.
For example:
Alternative 1: test reference packet size 100M, triggered at 12:30, routing strategy selects line A, shared queue strategy, packet cutting parameter 2M, thread parameter 10;
The total time cost of alternative 1: T = t1 + t2 + t3 + t4 + t5 = 15.11 seconds;
Alternative 2: test reference packet size 100M, triggered at 0:30, routing strategy selects line B, new queue strategy, packet cutting parameter 5M, thread parameter 15;
The total time cost of alternative 2: T = t1 + t2 + t3 + t4 + t5 = 13.21 seconds;
Alternative 3: test reference packet size 100M, triggered at 1:30, routing strategy selects line B, new queue strategy, packet cutting parameter 2M, thread parameter 5;
The total time cost of alternative 3: T = t1 + t2 + t3 + t4 + t5 = 10.91 seconds.
Conclusion: alternative 3 has the lowest total time cost and is the optimal solution, so its configuration information is the optimal configuration information. The target trigger time policy is therefore: trigger at 1:30 each day; the target routing policy: select line B; the target message queue policy: create a new queue; the target data packet cutting policy: packet cutting parameter 2M; and the target thread use policy: thread parameter 5.
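The comparison of the three alternatives can be reproduced in a few lines. The individual contributions t1..t5 below are hypothetical splits whose sums match the quoted totals, since the text gives only the totals:

```python
# Hypothetical per-policy contributions (t1..t5, in seconds) whose sums
# match the totals quoted for the three alternatives in the text.
ALTERNATIVES = {
    "alternative 1": [4.0, 3.5, 2.61, 3.0, 2.0],   # sums to 15.11
    "alternative 2": [2.0, 3.0, 2.21, 4.0, 2.0],   # sums to 13.21
    "alternative 3": [1.5, 3.0, 1.41, 3.0, 2.0],   # sums to 10.91
}

def total_time_cost(contributions):
    """T = t1 + t2 + t3 + t4 + t5 (the unweighted model)."""
    return sum(contributions)

def optimal_alternative(alternatives):
    """Pick the alternative with the lowest total time cost."""
    return min(alternatives, key=lambda name: total_time_cost(alternatives[name]))
```

Running `optimal_alternative(ALTERNATIVES)` selects alternative 3, matching the conclusion above.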
Furthermore, as another embodiment, if the thread use policy is considered to cost twice as much as the routing policy for the same one-second transit over a given node link (say, because the thread use policy puts more stress on the routing link), then f(s, thread) = 2f(s, route) may be set. Based on the known causes of high load or message blocking in each data exchange, and on the known hardware environment of the optional transmission links, time costs can likewise be assigned to the five z values z1 (trigger time policy), z2 (routing policy), z3 (message queue policy), z4 (packet cutting policy) and z5 (thread use policy), so that the model derives the optimal scheme.
Therefore, the calculation formula of the total time cost may also be as follows:
T=a*t1+ b*t2+ c*t3+ d*t4+ e*t5;
wherein T is the total time cost; t1 is the contribution value of the trigger time policy to the total time cost and a its weight coefficient; t2 is the contribution value of the routing policy and b its weight coefficient; t3 is the contribution value of the message queue policy and c its weight coefficient; t4 is the contribution value of the packet fragmentation policy and d its weight coefficient; and t5 is the contribution value of the thread usage policy and e its weight coefficient. The weight coefficients a, b, c, d and e can first be set manually to empirical values and then adjusted according to actual conditions, improving the accuracy of strategy optimization.
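The weighted variant is a one-line change to the unweighted sum; the coefficients used in the comment are arbitrary illustrative values, not tuned empirical ones:

```python
def weighted_time_cost(contribs, weights):
    """T = a*t1 + b*t2 + c*t3 + d*t4 + e*t5 for the five policy contributions.

    `contribs` holds (t1..t5) and `weights` holds (a..e).
    """
    assert len(contribs) == len(weights) == 5
    return sum(w * t for w, t in zip(weights, contribs))

# Example: doubling only the thread-use weight (e = 2.0), in the spirit of
# the f(s, thread) = 2f(s, route) example, raises the total accordingly.
```

Adjusting a..e shifts which configuration wins the exhaustive comparison, which is how the model is steered toward links and policies known to be bottlenecks.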
S103, executing a data exchange task according to the target trigger time strategy, the target routing strategy, the target message queue strategy, the target data packet cutting strategy and the target thread use strategy.
When an exchange task is configured, the system automatically recommends the optimal execution strategy scheme. After the administrator confirms it, the current exchange task is automatically parameterized according to that strategy; once the task is issued, it executes according to the recommended optimal strategy, so the data exchange task runs in the best possible way and the recommendation copes with the high-load periods of long-term, periodic data exchange operation.
In summary, according to the data exchange method provided in this embodiment, in a cross-network multi-stage network environment, and especially on a large cross-network multi-stage data exchange platform with many data exchange nodes, the factors influencing exchange efficiency mainly comprise: the idleness of the trigger time, the route forwarding path of message transmission, the utilization rate of message queues, the size of a single message packet after data packet cutting, and the number of processing threads. The invention analyzes historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing a data exchange task, and plans an optimal exchange scheme in advance along the dimensions of trigger time, route forwarding, queue usage, data packet size and thread count, thereby improving the execution efficiency of the data exchange task and keeping data exchange fast, stable and smooth.
Referring to fig. 2, a data exchange device according to an embodiment of the present invention includes:
the acquisition module is used for acquiring historical configuration information of the whole network of the data exchange task, wherein the historical configuration information comprises a historical trigger time strategy, a historical routing strategy, a historical message queue strategy, a historical data packet cutting strategy and a historical thread use strategy;
the analysis module is used for analyzing the historical configuration information through a clustering algorithm to obtain optimal configuration information executed by the data exchange task, wherein the optimal configuration information comprises a target trigger time strategy, a target routing strategy, a target message queue strategy, a target data packet cutting strategy and a target thread using strategy;
and the execution module is used for executing a data exchange task according to the target trigger time strategy, the target routing strategy, the target message queue strategy, the target data packet cutting strategy and the target thread use strategy.
In this embodiment, the analysis module is specifically configured to:
analyzing result data recovered by an actively-initiated full-route test based on the link performance of the switching node, exhausting all possible solutions of the historical trigger time strategy, the historical routing strategy, the historical message queue strategy, the historical data packet cutting strategy and the historical thread use strategy within a preset rule range based on a mathematical model of a time cost function, calculating total time cost corresponding to different configuration information, and taking the configuration information with the lowest total time cost as the optimal configuration information.
In this embodiment, the full routing test based on the link performance of the switching node is a test process in which a data packet is transmitted from a source node to a target node through a preset number of node links.
Optionally, the total time cost is calculated as follows:
T = t1 + t2 + t3 + t4 + t5;
where T is the total time cost, t1 is the contribution value of the trigger time policy to the total time cost, t2 is the contribution value of the routing policy, t3 is the contribution value of the message queue policy, t4 is the contribution value of the packet fragmentation policy, and t5 is the contribution value of the thread usage policy.
Optionally, with per-policy weights, the total time cost is calculated as follows:
T = a*t1 + b*t2 + c*t3 + d*t4 + e*t5;
where T is the total time cost; t1 to t5 are the contribution values of the trigger time, routing, message queue, packet fragmentation and thread usage policies to the total time cost, as above; and a, b, c, d and e are the corresponding weight coefficients.
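A small worked example of the weighted formula follows; the contribution values and weights are illustrative assumptions, and setting all five weights to 1 recovers the unweighted formula of the previous embodiment.

```python
# Sketch of the weighted total time cost T = a*t1 + b*t2 + c*t3 + d*t4 + e*t5.
# Contribution values and weights below are illustrative assumptions.

def total_time_cost(contributions, weights=(1, 1, 1, 1, 1)):
    """contributions: (t1..t5); weights: (a, b, c, d, e)."""
    if len(contributions) != 5 or len(weights) != 5:
        raise ValueError("expected five contributions and five weights")
    return sum(w * t for w, t in zip(weights, contributions))

t = (1.0, 5.0, 1.0, 2.0, 2.0)                       # t1..t5
unweighted = total_time_cost(t)                     # all weights 1: T = 11.0
weighted = total_time_cost(t, (2, 1, 1, 0.5, 0.5))  # emphasizes trigger time
```

The weights let an operator bias the search toward the policy dimensions that matter most on a given platform, e.g. doubling the trigger-time weight while halving the packet-size and thread weights as above.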
According to the data exchange device provided in this embodiment, in a cross-network multistage environment, and especially on a large data exchange platform with many exchange nodes, exchange efficiency is mainly affected by the trigger time, route forwarding, queue usage, data packet size and thread count. The invention analyzes the historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing the data exchange task, and plans an optimal exchange scheme in advance along these dimensions, thereby improving the execution efficiency of the data exchange task and keeping data exchange fast, stable and smooth.
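The patent names a clustering algorithm without specifying one. As one plausible reading, the sketch below clusters the durations of historical runs with a tiny 1-D k-means and then adopts the configuration of the fastest run in the fastest cluster; the function names and all data values are illustrative assumptions.

```python
import random

# Hypothetical reading of the clustering step: group historical run durations
# with a tiny 1-D k-means, then pick the configuration of the fastest run in
# the fastest (lowest-mean) cluster. Data values are illustrative.

def kmeans_1d(values, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # k distinct starting centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Recompute each center as its cluster mean (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

history = [                                   # (configuration, duration in s)
    ({"threads": 4},  130.0), ({"threads": 4},  125.0),
    ({"threads": 16},  62.0), ({"threads": 16},  58.0),
]
clusters = kmeans_1d([d for _, d in history], k=2)
fast_cluster = min(clusters, key=lambda c: sum(c) / len(c))
best_config, best_duration = min(
    ((cfg, d) for cfg, d in history if d in fast_cluster),
    key=lambda pair: pair[1])
```

Clustering first, rather than simply taking the single fastest run, smooths over one-off outliers: a configuration is favored only when a whole group of its runs is fast.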
Furthermore, an embodiment of the present invention provides a readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the steps of the method described in the above embodiment.
Furthermore, an embodiment of the present invention provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the method in the above embodiment when executing the program.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A method of data exchange, comprising:
acquiring network-wide historical configuration information of a data exchange task, wherein the historical configuration information comprises a historical trigger time policy, a historical routing policy, a historical message queue policy, a historical packet fragmentation policy and a historical thread usage policy;
analyzing the historical configuration information through a clustering algorithm to obtain optimal configuration information for executing the data exchange task, wherein the optimal configuration information comprises a target trigger time policy, a target routing policy, a target message queue policy, a target packet fragmentation policy and a target thread usage policy;
and executing the data exchange task according to the target trigger time policy, the target routing policy, the target message queue policy, the target packet fragmentation policy and the target thread usage policy.
2. The data exchange method of claim 1, wherein the step of analyzing the historical configuration information through a clustering algorithm to obtain the optimal configuration information for executing the data exchange task comprises:
analyzing the result data returned by an actively initiated full-route test based on the link performance of the exchange nodes; exhaustively enumerating, within a preset rule range and based on a mathematical model of a time cost function, all feasible combinations of the historical trigger time policy, the historical routing policy, the historical message queue policy, the historical packet fragmentation policy and the historical thread usage policy; calculating the total time cost corresponding to each configuration; and taking the configuration with the lowest total time cost as the optimal configuration information.
3. The data exchange method according to claim 2, wherein the full-route test based on the link performance of the exchange nodes is a test procedure in which a data packet is transmitted from a source node to a target node over a preset number of node links.
4. The data exchange method according to claim 2, wherein the total time cost is calculated as follows:
T = t1 + t2 + t3 + t4 + t5;
wherein T is the total time cost, t1 is a contribution value of the trigger time policy to the total time cost, t2 is a contribution value of the routing policy to the total time cost, t3 is a contribution value of the message queue policy to the total time cost, t4 is a contribution value of the packet fragmentation policy to the total time cost, and t5 is a contribution value of the thread usage policy to the total time cost.
5. The data exchange method according to claim 2, wherein the total time cost is calculated as follows:
T = a*t1 + b*t2 + c*t3 + d*t4 + e*t5;
wherein T is the total time cost, t1 is a contribution value of the trigger time policy to the total time cost and a is a weight coefficient of the trigger time policy, t2 is a contribution value of the routing policy to the total time cost and b is a weight coefficient of the routing policy, t3 is a contribution value of the message queue policy to the total time cost and c is a weight coefficient of the message queue policy, t4 is a contribution value of the packet fragmentation policy to the total time cost and d is a weight coefficient of the packet fragmentation policy, and t5 is a contribution value of the thread usage policy to the total time cost and e is a weight coefficient of the thread usage policy.
6. A data exchange device, comprising:
the acquisition module is configured to acquire network-wide historical configuration information of the data exchange task, wherein the historical configuration information comprises a historical trigger time policy, a historical routing policy, a historical message queue policy, a historical packet fragmentation policy and a historical thread usage policy;
the analysis module is configured to analyze the historical configuration information through a clustering algorithm to obtain optimal configuration information for executing the data exchange task, wherein the optimal configuration information comprises a target trigger time policy, a target routing policy, a target message queue policy, a target packet fragmentation policy and a target thread usage policy;
and the execution module is configured to execute the data exchange task according to the target trigger time policy, the target routing policy, the target message queue policy, the target packet fragmentation policy and the target thread usage policy.
7. The data exchange device of claim 6, wherein the analysis module is specifically configured to:
analyzing the result data returned by an actively initiated full-route test based on the link performance of the exchange nodes; exhaustively enumerating, within a preset rule range and based on a mathematical model of a time cost function, all feasible combinations of the historical trigger time policy, the historical routing policy, the historical message queue policy, the historical packet fragmentation policy and the historical thread usage policy; calculating the total time cost corresponding to each configuration; and taking the configuration with the lowest total time cost as the optimal configuration information.
8. The data exchange device of claim 7, wherein the total time cost is calculated as follows:
T = a*t1 + b*t2 + c*t3 + d*t4 + e*t5;
wherein T is the total time cost, t1 is a contribution value of the trigger time policy to the total time cost and a is a weight coefficient of the trigger time policy, t2 is a contribution value of the routing policy to the total time cost and b is a weight coefficient of the routing policy, t3 is a contribution value of the message queue policy to the total time cost and c is a weight coefficient of the message queue policy, t4 is a contribution value of the packet fragmentation policy to the total time cost and d is a weight coefficient of the packet fragmentation policy, and t5 is a contribution value of the thread usage policy to the total time cost and e is a weight coefficient of the thread usage policy.
9. A readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method according to any one of claims 1-5.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the program.
CN202011413483.XA 2020-12-07 2020-12-07 Data exchange method and device, readable storage medium and computer equipment Pending CN112199185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011413483.XA CN112199185A (en) 2020-12-07 2020-12-07 Data exchange method and device, readable storage medium and computer equipment


Publications (1)

Publication Number Publication Date
CN112199185A true CN112199185A (en) 2021-01-08

Family

ID=74033813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011413483.XA Pending CN112199185A (en) 2020-12-07 2020-12-07 Data exchange method and device, readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112199185A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519231A (en) * 2019-07-25 2019-11-29 浙江公共安全技术研究院有限公司 A kind of cross-domain data exchange supervisory systems and method
US20200342490A1 (en) * 2018-05-18 2020-10-29 Thryv, Inc. Method and system for lead budget allocation and optimization on a multi-channel multi-media campaign management and payment platform


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUN YANG et al.: "Analysis and Research on Intelligent Scheduling Algorithms for Handling Data Exchange Tides and Data Surges", Computer & Network *
SHANG JINCHENG, HUANG YONGHAO et al.: "Design of Power Market Technical Support Systems and Research on Key Technologies", 30 August 2002 *
YANG PING: "Design and Implementation of the Data Management Center Subsystem of a G2G Data Exchange Platform", China Masters' Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
Mitrani Managing performance and power consumption in a server farm
CN102724103B (en) Proxy server, hierarchical network system and distributed workload management method
US8797864B2 (en) Adaptive traffic management via analytics based volume reduction
US10833934B2 (en) Energy management in a network
WO2017000628A1 (en) Resource scheduling method and apparatus for cloud computing system
EP2116014B1 (en) System and method for balancing information loads
CN108491255B (en) Self-service MapReduce data optimal distribution method and system
US10044621B2 (en) Methods and systems for transport SDN traffic engineering using dual variables
CN112261120B (en) Cloud-side cooperative task unloading method and device for power distribution internet of things
CN107291544A (en) Method and device, the distributed task scheduling execution system of task scheduling
WO2009029833A1 (en) Scheduling processing tasks used in active network measurement
CN112202644B (en) Collaborative network measurement method and system oriented to hybrid programmable network environment
Monil et al. QoS-aware virtual machine consolidation in cloud datacenter
Ning et al. Deep reinforcement learning for NFV-based service function chaining in multi-service networks
Breitgand et al. On cost-aware monitoring for self-adaptive load sharing
Truong et al. Performance analysis of large-scale distributed stream processing systems on the cloud
EP2545684B1 (en) Capacity adaptation between service classes in a packet network.
CN116302578B (en) QoS (quality of service) constraint stream application delay ensuring method and system
CN117608840A (en) Task processing method and system for comprehensive management of resources of intelligent monitoring system
CN112199185A (en) Data exchange method and device, readable storage medium and computer equipment
Tabash et al. A fuzzy logic based network congestion control using active queue management techniques
CN113965616B (en) SFC mapping method based on VNF resource change matrix
Wang et al. Model-based scheduling for stream processing systems
Ethilu et al. An Efficient Switch Migration Scheme for Load Balancing in Software Defined Networking
Kargahi et al. Utility accrual dynamic routing in real-time parallel systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210108