CN109905459B - Data transmission method and device - Google Patents

Data transmission method and device

Info

Publication number
CN109905459B
Authority
CN
China
Prior art keywords
execution result
server
instruction
cache server
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910039461.2A
Other languages
Chinese (zh)
Other versions
CN109905459A (en)
Inventor
王龙龙
陈聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910039461.2A priority Critical patent/CN109905459B/en
Publication of CN109905459A publication Critical patent/CN109905459A/en
Application granted granted Critical
Publication of CN109905459B publication Critical patent/CN109905459B/en

Abstract

The application provides a data transmission method and a data transmission device, applied to a distributed system, where the distributed system includes a load balancing server, a cache server and a server cluster, the server cluster includes a first server and a second server, and the load balancing server is configured to fragment an original instruction after receiving the original instruction from a calling system, so as to obtain a first instruction and a second instruction. The method includes the following steps: the first server receives the first instruction and executes the first instruction; in the case that the first instruction is executed successfully, a first execution result is obtained; the cache server is queried for a second execution result, where the second execution result is an execution result obtained by the second server executing the second instruction; if the second execution result is found, the first execution result and the second execution result are sent to the calling system; otherwise, the first execution result is stored in the cache server. By implementing the method and the device, server overhead can be reduced and server performance can be improved.

Description

Data transmission method and device
Technical Field
The present invention relates to the field of computers, and in particular, to a data transmission method and apparatus.
Background
With the development of the internet, call instructions between network systems have become more and more complex. In practice, to simplify the functions of a single server, a complex instruction is fragmented into a plurality of asynchronous instructions with relatively simple functions, thereby reducing the pressure on the server. For example, after receiving a call instruction indicating that 300,000 pieces of user information are to be stored, a distributed system may fragment the call instruction into a first instruction, a second instruction and a third instruction, where the first instruction indicates storing the 1st to 100,000th pieces of user information, the second instruction indicates storing the 100,001st to 200,000th pieces, and the third instruction indicates storing the 200,001st to 300,000th pieces. A single server obviously cannot process such complex and numerous asynchronous instructions in time.
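For illustration only, the following Python sketch shows one way such fragmentation could be expressed; the Instruction type and fragment_instruction function are assumptions of this sketch, not names from the patent.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Instruction:
        action: str   # e.g. "store_user_info"
        start: int    # index of the first record covered by this fragment
        end: int      # index one past the last record covered

    def fragment_instruction(original: Instruction, chunk_size: int) -> List[Instruction]:
        # Split one large instruction into fragments of at most chunk_size records.
        fragments = []
        for lo in range(original.start, original.end, chunk_size):
            hi = min(lo + chunk_size, original.end)
            fragments.append(Instruction(original.action, lo, hi))
        return fragments

    # The 300,000-record example above yields three fragments of 100,000 records each.
    print(fragment_instruction(Instruction("store_user_info", 0, 300_000), 100_000))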
To handle a large number of complex asynchronous instructions, these asynchronous instructions are typically sent to all servers in a server cluster at the same time. After a server in the cluster finishes executing an asynchronous instruction, it broadcasts a message announcing the successful execution to the other servers, so that once all asynchronous instructions have been executed, the execution results can be returned to the calling system in time.
In this scheme, each server in the server cluster must send broadcast messages to, and receive broadcast messages from, multiple other servers. The resulting server resource consumption is high, the security and stability of data transmission are low, and the degree of coupling between the servers in the server cluster is high.
Disclosure of Invention
The application provides a data transmission method and device, which can reduce server overhead, improve server performance, improve the security and stability of data transmission, and reduce the coupling between servers in a server cluster.
In a first aspect, the present application provides a data transmission method applied to a distributed system, where the distributed system includes a load balancing server, a cache server, and a server cluster, where the server cluster includes a first server and a second server, and the load balancing server is configured to fragment an original instruction from a calling system after receiving the original instruction, so as to obtain the first instruction and the second instruction, where the method includes:
the first server receives the first instruction and executes the first instruction;
under the condition that the first instruction is successfully executed, obtaining a first execution result;
Querying a second execution result in the cache server, where the second execution result is an execution result obtained by the second server executing the second instruction;
if the second execution result is inquired, sending the first execution result and the second execution result to the calling system;
otherwise, storing the first execution result in the cache server.
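The following Python sketch is a minimal, illustrative rendering of this flow under assumed names (InMemoryCache, handle_fragment, execute, send_to_caller); it uses an in-process dictionary in place of the cache server and is not the patent's implementation.

    class InMemoryCache:
        # Stand-in for the cache server; a real deployment would use a networked cache.
        def __init__(self):
            self._store = {}
        def get(self, key):
            return self._store.get(key)
        def put(self, key, value):
            self._store[key] = value

    def handle_fragment(own_id, peer_id, execute, cache, send_to_caller):
        own_result = execute(own_id)        # execute the received instruction fragment
        peer_result = cache.get(peer_id)    # query the peer fragment's execution result
        if peer_result is not None:         # both fragments done: return both results
            send_to_caller({own_id: own_result, peer_id: peer_result})
        else:                               # peer not done yet: store own result instead
            cache.put(own_id, own_result)

    cache = InMemoryCache()
    handle_fragment("first", "second", lambda i: f"{i} ok", cache, print)   # stores result
    handle_fragment("second", "first", lambda i: f"{i} ok", cache, print)   # sends both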
With reference to the first aspect, in a possible implementation manner, before the receiving, by the first server, the first instruction from a load balancing server and executing the first instruction, the method further includes:
sending a first request message to the load balancing server, where the first request message is used to request the first instruction; the first request message is a request message which reaches the load balancing server first.
By implementing the embodiment of the application, the first server sends the first request message to the load balancing server, that is, the first instruction is actively acquired through the task preemption mechanism, so that the efficiency of executing the instruction can be improved.
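A minimal sketch of this task preemption idea is shown below, assuming an in-process queue in place of the real request channel between the cluster servers and the load balancing server; all names are illustrative.

    import queue
    import threading

    requests = queue.Queue()   # request messages arriving at the load balancing server

    def cluster_server(name: str) -> None:
        requests.put(name)     # each idle server sends a request message asking for work

    def load_balancer(instruction: str) -> None:
        winner = requests.get()                       # the first request to arrive wins
        print(f"sending {instruction!r} to {winner}")

    threads = [threading.Thread(target=cluster_server, args=(f"server-{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    load_balancer("first instruction")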
With reference to the first aspect, in a possible implementation manner, the querying a second execution result in the cache server includes:
Sending a query instruction to the cache server, wherein the query instruction is used for indicating to query the second execution result;
receiving a query result from the cache server;
if the second execution result is queried, the sending the first execution result and the second execution result to a calling system includes:
and sending the first execution result and the second execution result to the calling system under the condition that the second execution result is determined to be inquired according to the inquiry result.
With reference to the first aspect, in a possible implementation manner, the storing the first execution result in the cache server includes:
sending a storage request to the cache server, wherein the storage request is used for requesting to store the first execution result;
receiving a storage response from the cache server;
and if the storage response is received, sending the first execution result to the cache server.
By implementing the method and the device, the storage request is sent to the cache server before the first execution result is stored, and the response message is received, so that the stability of storing the first execution result can be improved, and transmission errors can be avoided.
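The following minimal sketch illustrates this two-step store handshake with made-up class and message names (CacheServerStub, handle_storage_request); a real cache server would expose its own protocol.

    class CacheServerStub:
        # Illustrative stand-in for the cache server's request/response interface.
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.entries = {}
        def handle_storage_request(self, n_entries: int) -> bool:
            # Storage response: True means the cache can accept the data.
            return len(self.entries) + n_entries <= self.capacity
        def store(self, key: str, value: str) -> None:
            self.entries[key] = value

    def store_result(cache: CacheServerStub, key: str, result: str) -> bool:
        if not cache.handle_storage_request(1):   # send the storage request first
            return False                          # no positive response: do not transmit
        cache.store(key, result)                  # response received: send the result
        return True

    print(store_result(CacheServerStub(capacity=10), "first_execution_result", "ok"))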
With reference to the first aspect, in a possible implementation manner, after the executing the first instruction, before the obtaining a first execution result, the method further includes:
and when the first instruction fails to be executed, the first instruction is received again, and the first instruction is executed.
By implementing the method, the first instruction is re-executed under the condition that the execution of the first instruction fails, so that the first instruction can be guaranteed to be successfully executed.
With reference to the first aspect, in a possible implementation manner, after the storing the first execution result in the cache server, the method further includes:
starting a timer for timing;
when the timing duration of the timer reaches a preset duration, querying the first execution result and the second execution result in the cache server;
and generating error information and sending the error information to a calling system under the condition that the first execution result is inquired and the second execution result is not inquired.
By implementing the method and the system, an exception handling mechanism can be added, namely after the preset time length, if the distributed system does not successfully execute all the instructions corresponding to the original instructions, the task is abandoned, and error information is returned to the calling system, so that the resources of the server can be saved, and the waiting time of the calling system is reduced.
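A minimal sketch of this timer-based check is shown below, assuming a plain dictionary as the cache and a send_error callback toward the calling system; the names are illustrative, not the patent's.

    import threading

    def schedule_timeout_check(cache, own_key, peer_key, preset_seconds, send_error):
        def check():
            own, peer = cache.get(own_key), cache.get(peer_key)
            if own is not None and peer is None:
                # Own result was stored but the peer's never arrived: abandon and report.
                send_error(f"fragment {peer_key} did not finish within {preset_seconds}s")
            # If neither result is present, the peer already forwarded both to the caller.
        timer = threading.Timer(preset_seconds, check)   # start the timer
        timer.start()
        return timer

    cache = {"first_execution_result": "ok"}
    schedule_timeout_check(cache, "first_execution_result", "second_execution_result", 0.1, print)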
In a second aspect, the present application provides a data transmission apparatus, including:
the first execution unit is used for receiving a first instruction and executing the first instruction;
the acquisition unit is used for acquiring a first execution result under the condition that the first instruction is successfully executed;
the first query unit is used for querying a second execution result in the cache server, wherein the second execution result is an execution result obtained by executing a second instruction by the second server;
a first sending unit, configured to send the first execution result and the second execution result to a calling system when the second execution result is queried;
and a storage unit, configured to store the first execution result in the cache server if the second execution result is not queried.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes:
a second sending unit, configured to send a first request message to a load balancing server, where the first request message is used to request the first instruction.
With reference to the second aspect, in a possible implementation manner, the first querying unit is specifically configured to send a query instruction to the cache server, where the query instruction is used to instruct to query the second execution result; receiving the query result from the cache server; the first sending unit is specifically configured to send the first execution result and the second execution result to the calling system when the second execution result is determined to be queried according to the query result.
With reference to the second aspect, in a possible implementation manner, the storage unit is specifically configured to send a storage request to the cache server, where the storage request is used to request to store the first execution result; receiving a storage response from the cache server; and if the storage response is received, sending the first execution result to the cache server.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes:
and a second execution unit, configured to, in a case where execution of the first instruction fails, re-receive the first instruction and execute the first instruction.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes:
the second query unit is used for starting a timer to time after the first execution result is stored in the cache server; under the condition that the timing duration of the timer reaches a preset duration, inquiring the first execution result and the second execution result in the cache server; and generating error information and sending the error information to a calling system under the condition that the first execution result is inquired and the second execution result is not inquired.
In a third aspect, the present application provides a data transmission apparatus, including a processor and a memory; the processor and the memory are connected with each other through a bus; wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to perform the method according to the first aspect.
In a fourth aspect, the present application proposes a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method as set forth in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product containing program instructions, which when run on a computer, cause the computer to perform the method as set forth in the first aspect.
By implementing the method and the system, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a block diagram of a data transmission system according to the present application;
FIG. 2 is a flow chart of a data transmission method proposed in the present application;
FIG. 3 is a flow chart of another data transmission method proposed in the present application;
fig. 4 is a flowchart of a specific application scenario of a data transmission method proposed in the present application;
fig. 5 is a schematic structural diagram of a data transmission device proposed in the present application;
fig. 6 is a schematic structural diagram of another data transmission device proposed in the present application.
Detailed Description
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, or apparatus.
The application provides a data transmission method, which can reduce server overhead, improve server performance, improve the security and stability of data transmission, and reduce the coupling between servers in a server cluster.
Fig. 1 is a block diagram of a data transmission system proposed in the present application, which includes a calling system 101 and a distributed system 102. The distributed system 102 includes a load balancing server 1021, a server cluster 1022, and a cache server 1023. The server cluster 1022 includes at least two servers.
The load balancing server 1021 receives the original instruction from the calling system 101, and segments the original instruction to obtain a first instruction and a second instruction. The load balancing server 1021 sends the first instruction to a first server in a server cluster and sends the second instruction to a second server in the server cluster.
After receiving the first instruction, the first server executes the first instruction to obtain a first execution result, and queries whether the cache server 1023 includes a second execution result. If the second execution result is included, the first execution result and the second execution result are sent to the calling system 101. Otherwise, the first execution result is sent to the cache server 1023.
After receiving the second instruction, the second server executes the second instruction to obtain a second execution result, and queries whether the cache server 1023 includes the first execution result. If the first execution result is included, the first execution result and the second execution result are sent to the calling system 101. Otherwise, the second execution result is sent to the cache server 1023.
The cache server 1023 is used for storing the execution result of the servers in the server cluster 1022. The server that has executed the instruction last in the server cluster 1022 reads all the execution results from the cache server 1023, and sends all the execution results to the calling system 101.
In a possible implementation manner, the load balancing server 1021 may be coupled with the calling system 101 through a queue server, that is, the calling system 101 sends the original instruction to the queue server, and the load balancing server 1021 receives the original instruction from the queue server, so that the calling system can call the distributed system asynchronously. Alternatively, the calling system 101 may connect directly to the load balancing server 1021 through the Extensible Messaging and Presence Protocol (XMPP) and send the original instruction to the load balancing server 1021.
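As a rough illustration, the sketch below uses Python's in-process queue module as a stand-in for the separate queue server; the function names are assumptions of this sketch.

    import queue

    queue_server = queue.Queue()   # stand-in for the intermediate queue server

    def calling_system_submit(original_instruction: str) -> None:
        queue_server.put(original_instruction)   # the caller returns immediately

    def load_balancer_poll() -> str:
        return queue_server.get()                # the load balancer consumes when ready

    calling_system_submit("store 300,000 user records")
    print(load_balancer_poll())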
By implementing the method and the system, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Fig. 2 is a data transmission method provided in the present application, where the method is applied to a distributed system, where the distributed system includes a load balancing server, a cache server, and a server cluster, where the server cluster includes a first server and a second server, and the load balancing server is configured to fragment an original instruction after receiving the original instruction from a calling system, so as to obtain the first instruction and the second instruction, where the method includes:
201. the first server receives the first instruction and executes the first instruction.
Wherein the first instruction is sent from the load balancing server to the first server.
In a possible implementation manner, before the first server receives the first instruction, a first request message may be sent to the load balancing server, where the first request message is used to request the first instruction. Specifically, the server cluster includes at least two server devices, and the first request message is a request message that reaches the load balancing server first among all request messages sent by the server cluster. Correspondingly, after receiving the first request message, the load balancing server determines the server corresponding to the first request message as the first server.
In a possible implementation manner, the first server may perform data communication with the load balancing server through the XMPP protocol. That is, the first server sends a first request message to the load balancing server, and the load balancing server sends the first instruction to the first server after receiving the first request message.
Specifically, the priority of the first instruction is the same as that of the second instruction, that is, there is no precedence relationship between the first instruction and the second instruction. The load balancing server may send the first instruction to a first server and send the second instruction to a second server at the same time.
In a possible implementation manner, the manner in which the load balancing server selects the second server may be the same as the manner in which the first server is selected, that is, the server corresponding to the first arriving second request message in the server cluster is selected as the second server.
202. And obtaining a first execution result under the condition that the first instruction is successfully executed.
In a possible implementation manner, the first execution result may include a first execution identifier, where the first execution identifier is used to indicate that the execution of the first instruction is successful. Alternatively, the first execution result may include a first execution identifier and first execution result data, where the first execution result data is data obtained successfully by executing the first instruction.
For example, the first instruction may instruct the first server to read the information of ten thousand users. When the first instruction is executed successfully, the first server obtains a first execution result, which includes a first execution identifier indicating that the first instruction was executed successfully and the information data of the ten thousand users that was read.
In a possible scenario, the first server may execute the first instruction in error. When the first instruction is executed incorrectly, a correct first execution result cannot be obtained. The first server may take action to remedy the situation.
Specifically, if the first server executes the first instruction incorrectly, the first server may receive the first instruction from the load balancing server again and execute the first instruction again.
In a possible implementation manner, if the number of times the first server has re-executed the first instruction with errors reaches a threshold, the task may be abandoned and an error message returned to the load balancing server.
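The sketch below illustrates, under assumed names, an execution result carrying an execution identifier plus optional result data, and a bounded retry loop that gives up once the threshold is reached; it is not the patent's implementation.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class ExecutionResult:
        succeeded: bool               # the execution identifier
        data: Optional[str] = None    # execution result data, if any

    def run_with_retries(fetch_instruction: Callable[[], str],
                         execute: Callable[[str], str],
                         max_attempts: int) -> ExecutionResult:
        for _ in range(max_attempts):
            instruction = fetch_instruction()      # (re-)receive the instruction
            try:
                return ExecutionResult(True, execute(instruction))
            except Exception:
                continue                           # execution failed: try again
        return ExecutionResult(False)              # threshold reached: give up

    print(run_with_retries(lambda: "read user info", lambda i: i.upper(), max_attempts=3))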
203. And querying a second execution result in the cache server, wherein the second execution result is an execution result obtained by the second server executing the second instruction.
The second instruction is sent from the load balancing server to the second server, and the cache server is configured to store execution results of the first server and the second server.
In a possible implementation manner, after executing the first instruction and obtaining the execution result, the first server may send a query instruction to the cache server, where the query instruction is used to instruct querying of the second execution result. After receiving the query instruction, the cache server queries the second execution result according to the query instruction to obtain a query result, and sends the query result to the first server.
204. And if the second execution result is inquired, sending the first execution result and the second execution result to the calling system.
In a possible implementation manner, the first server receives a query result, and determines whether the cache server includes a second execution result according to the query result. If the cache server contains a second execution result, it indicates that the second server has executed the second instruction.
Specifically, when the second execution result is queried, the first server may receive the second execution result from the cache server and send the first execution result and the second execution result to the calling system. Alternatively, the first server may send the first execution result to the cache server together with a forwarding instruction, where the forwarding instruction instructs the cache server to send the execution results to the calling system.
Specifically, the first server may be coupled to the calling system through a queue server. When the first server queries the second execution result, it sends the first execution result and the second execution result to the queue server, and the calling system reads the first execution result and the second execution result from the queue server.
In the embodiment of the present application, the server cluster may further include a third server and a fourth server in addition to the first server and the second server. After the original instruction is fragmented, a third instruction and a fourth instruction may also be obtained.
Specifically, when there are more than two servers or more than two instructions, after each server in the server cluster executes the instruction, it is queried whether the cache server contains the execution result of all the instructions, so as to determine whether the original instruction is executed completely. When it is determined that the cache server contains the execution results of all the instructions, the server executing the last instruction in the server cluster may send all the execution results to the calling system.
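A minimal sketch of that completion check for an arbitrary number of fragments is shown below, again using a dictionary in place of the cache server; all names are illustrative.

    def finish_fragment(cache, all_fragment_ids, own_id, own_result, send_to_caller):
        cache[own_id] = own_result                               # store own result first
        if all(fid in cache for fid in all_fragment_ids):        # every fragment finished?
            send_to_caller({fid: cache[fid] for fid in all_fragment_ids})

    cache = {}
    ids = ["frag-1", "frag-2", "frag-3"]
    finish_fragment(cache, ids, "frag-1", "ok", print)   # nothing sent yet
    finish_fragment(cache, ids, "frag-2", "ok", print)
    finish_fragment(cache, ids, "frag-3", "ok", print)   # the last server sends all results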
205. Otherwise, storing the first execution result to the cache server.
Specifically, if the second execution result is not queried, it indicates that the second server has not yet executed the second instruction, that is, not all instruction fragments of the original instruction have been executed.
In a possible implementation manner, if the second execution result is not queried, the first server may send a storage request to the cache server, where the storage request is used to request to store the first execution result. The first server receives a storage response, where the storage response indicates that the cache server may store the first execution result. And if the storage response is received, the first server sends the first execution result to a cache server.
By implementing the method and the system, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Fig. 3 is a flowchart of another data transmission method provided in the present application, where the method is applied to a distributed system, where the distributed system includes a load balancing server, a cache server, and a server cluster, where the server cluster includes a first server and a second server, and the load balancing server is configured to fragment an original instruction after receiving the original instruction from a calling system, so as to obtain a first instruction and a second instruction, where the method includes the following steps:
301. The first server sends a first request message to the load balancing server, where the first request message is used to request a first instruction.
Specifically, the load balancing server may send each instruction obtained by fragmenting the original instruction to a server in the server cluster by using a task preemption mechanism. That is, the load balancing server receives a request message from each server in the server cluster, and determines a server corresponding to a received first request message as a first server. Accordingly, the first request message is the request message that reaches the load balancing server first.
302. And receiving a first instruction and executing the first instruction.
After selecting the first server, the load balancing server sends the first instruction to the first server. After receiving the first instruction, the first server executes the first instruction.
303. And when the first instruction fails to be executed, re-receiving the first instruction and executing the first instruction.
Specifically, if the execution of the first instruction fails, the first server may resend the request message to the load balancing server to request the load balancing server to resend the first instruction. After receiving the request message, the load balancing server resends the first instruction to the first server. The first server receives the first instruction again and executes the first instruction.
In a possible implementation manner, the load balancing server may back up the first instruction, and when it receives the first server's request message again, resend the backed-up first instruction to the first server.
304. And obtaining a first execution result under the condition that the first instruction is successfully executed.
305. Sending a query instruction to the cache server, wherein the query instruction is used for indicating to query a second execution result; receiving the query result from the cache server; the second execution result is an execution result obtained by the second server executing the second instruction.
Specifically, when the first instruction is executed successfully, the first server sends a query instruction to the cache server, where the query instruction is used to instruct to query a second execution result. And the cache server receives the query instruction, executes the query instruction and obtains a query result. The query result may indicate that the second execution result is queried or not queried.
306. And sending the first execution result and the second execution result to the calling system under the condition that the second execution result is determined to be inquired according to the inquiry result.
Specifically, after receiving the query result, the first server parses the query result, and determines whether the cache server includes a second execution result. And under the condition that the second execution result is determined to be inquired, sending the first execution result and the second execution result to the calling system.
Specifically, after determining that the second execution result is queried, the first server may send a third request message to the cache server, where the third request message is used to request the second execution result stored in the cache server. After the cache server responds, the first server receives the second execution result and sends the second execution result and the first execution result to the calling system.
In a possible implementation manner, the first server may send the first execution result to a cache server, and send a forwarding instruction to the cache server. The forwarding instruction is used for instructing the cache server to send the first execution result and the second execution result to the calling system.
307. And storing the first execution result in the cache server under the condition that the second execution result is not inquired according to the inquiry result.
Specifically, if the second execution result is not queried, it indicates that the second server has not yet executed the second instruction. The first execution result is stored in the cache server so that, after executing the second instruction, the second server can send the first execution result and the second execution result to the calling system.
Specifically, the storing the first execution result in the cache server includes: sending a storage request to the cache server, wherein the storage request is used for requesting to store the first execution result; receiving a storage response from the cache server; and if the storage response is received, sending the first execution result to the cache server.
By implementing the method and the device, a storage request is sent before the first execution result is stored, which prevents the situation where the cache server is unable to store the first execution result for reasons such as insufficient storage space, so that loss of the execution result can be avoided.
308. Starting a timer for timing; when the timing duration of the timer reaches a preset duration, inquiring a first execution result and a second execution result in the cache server; and if the first execution result is inquired and the second execution result is not inquired, generating error information and sending the error information to a calling system.
Specifically, after waiting for the preset duration, if the first server queries the first execution result but does not query the second execution result, it determines that the second server made an error while executing the second instruction. If the first server queries neither the first execution result nor the second execution result, it indicates that the second server has already sent the first execution result and the second execution result to the calling system.
Specifically, if the preset duration is too short, the servers in the server cluster may not have enough time to process all the instructions, and the task may be abandoned while it is still in progress. If the preset duration is too long, an error made by the server cluster while executing the instructions cannot be reported in time, which prolongs the waiting time of the calling system. Preferably, the preset duration may be between 1 and 10 seconds.
Specifically, if the first server queries the first execution result and does not query the second execution result, an error message is generated and sent to a calling system. The error information may include information of a cause of the error, an address of the server where the error occurred, and an obtained abnormal execution result.
By adding this exception handling mechanism, the embodiment of the application allows error information to be returned to the calling system in time when a server makes an error while executing an instruction, which saves time and improves instruction handling efficiency.
By implementing the embodiment of the application, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Fig. 4 is a flowchart of a specific application scenario of a data transmission method proposed in the present application, where the method includes the following steps:
401. The first server sends a first request message to the load balancing server, where the first request message is used to request a first instruction, and the first instruction instructs sending advertisement information to the users numbered 0 to 100,000.
Specifically, before the first server sends the first request message to the load balancing server, the load balancing server receives an original instruction from the calling system, where the original instruction instructs sending advertisement information to the users numbered 0 to 200,000. After receiving the original instruction, the load balancing server fragments the original instruction to obtain a first instruction and a second instruction. The first instruction instructs sending the advertisement information to the users numbered 0 to 100,000, and the second instruction instructs sending the advertisement information to the users numbered 100,000 to 200,000.
Specifically, the load balancing server sends the first instruction to the servers in the server cluster by using a task preemption mechanism. That is, the load balancing server receives a request message from one or more servers in the server cluster, and determines a server corresponding to a received first request message as a first server. Accordingly, the first request message is the request message that reaches the load balancing server first.
402. And receiving the first instruction and executing the first instruction.
403. And when the first instruction fails to be executed, re-receiving the first instruction and executing the first instruction.
404. And under the condition that the first instruction is successfully executed, obtaining a first execution result, where the first execution result indicates that the advertisement information has been successfully sent to the users numbered 0 to 100,000.
Wherein the first execution result is an identifier of successful execution of the first instruction.
405. Sending a query instruction to the cache server, where the query instruction is used to instruct querying of a second execution result; receiving the query result from the cache server; the second execution result is an execution result obtained by the second server executing a second instruction, and the second instruction instructs sending the advertisement information to the users numbered 100,000 to 200,000.
Wherein the second execution result is an identifier indicating that the second instruction was successfully executed.
406. And sending the first execution result and the second execution result to the calling system under the condition that the second execution result is determined to be inquired according to the inquiry result.
Specifically, when the second execution result is determined to be queried according to the query result, the first server sends a third request message to the cache server, where the third request message is used to request the second execution result. And after receiving a second execution result, the first server sends the first execution result and the second execution result to a calling system.
407. And storing the first execution result in the cache server under the condition that the second execution result is not inquired according to the inquiry result.
Specifically, if the second execution result is not queried, it indicates that the second instruction is not successfully executed, and the first execution result needs to be stored in the cache server, so that the second server can send the first execution result and the second execution result to the calling system after the second instruction is successfully executed.
Specifically, the storing the first execution result in the cache server includes: sending a storage request to the cache server, wherein the storage request is used for requesting to store the first execution result; receiving a storage response from the cache server; and if the storage response is received, sending the first execution result to the cache server.
408. Starting a timer to time; when the timing duration of the timer reaches 5 seconds, inquiring a first execution result and a second execution result in the cache server; and if the first execution result is inquired and the second execution result is not inquired, generating error information and sending the error information to a calling system.
Specifically, when the timing duration reaches 5 seconds, the first server may construct a second query instruction, which is used to instruct querying of the first execution result and the second execution result, and send the second query instruction to the cache server. After the cache server obtains the query result, the first server receives the query result and determines, according to the query result, whether the cache server contains the first execution result and the second execution result.
Specifically, the error information includes information of a cause of the error and a destination address of the second server, and the cause information may be that the second server has failed.
By implementing the method and the system, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Fig. 5 is a schematic structural diagram of a data transmission device proposed in the present application, where the device includes:
a first execution unit 501, configured to receive a first instruction and execute the first instruction;
an obtaining unit 502, configured to obtain a first execution result when the first instruction is successfully executed;
a first querying unit 503, configured to query a second execution result in the cache server, where the second execution result is an execution result obtained by executing a second instruction by a second server;
a first sending unit 504, configured to send the first execution result and the second execution result to a calling system when the second execution result is queried;
a storage unit 505, configured to store the first execution result in the cache server if the second execution result is not queried.
As shown in fig. 5, the apparatus further includes:
a second sending unit 506, configured to send a first request message to the load balancing server, where the first request message is used to request the first instruction.
A second execution unit 507, configured to receive the first instruction again and execute the first instruction if the execution of the first instruction fails.
A second query unit 508, configured to start a timer to time after the first execution result is stored in the cache server; when the timing duration of the timer reaches a preset duration, inquiring a first execution result and a second execution result in the cache server; and if the first execution result is inquired and the second execution result is not inquired, generating error information and sending the error information to the calling system.
In a possible implementation manner, the first querying unit 503 is specifically configured to send a query instruction to the cache server, where the query instruction is used to instruct to query the second execution result; receiving the query result from the cache server; the first sending unit is specifically configured to send the first execution result and the second execution result to the calling system when the second execution result is determined to be queried according to the query result.
In a possible implementation manner, the storage unit 505 is specifically configured to send a storage request to the cache server, where the storage request is used to request to store the first execution result; receiving a storage response from the cache server; and if the storage response is received, sending the first execution result to the cache server.
It is understood that the specific implementation of the data transmission apparatus shown in fig. 5 can also refer to the methods shown in fig. 2, fig. 3 and fig. 4, and detailed description thereof is omitted here.
By implementing the device provided by the application, server overhead can be reduced, server performance can be improved, the security and stability of data transmission can be improved, and the coupling between the servers in the server cluster can be reduced.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another data transmission device according to an embodiment of the present disclosure. The device includes: at least one processor 601, such as a Central Processing Unit (CPU), at least one memory 602, at least one transceiver 603, and at least one bus 604. The bus 604 may be a set of parallel data lines for interconnecting the processor 601, the memory 602, and the transceiver 603; the memory 602 may be a random access memory (RAM) or a non-volatile memory, such as at least one read-only memory (ROM).
Specifically, the transceiver 603 may be configured to receive a first instruction sent by a load balancing server; sending a request message to a load balancing server; sending a query instruction and a first execution result to a cache server; and sending the execution result and the error information to the client.
In one possible implementation, the memory 602 may store program instructions, and the processor 601 may be configured to call the program instructions to execute the methods shown in fig. 2, fig. 3, and fig. 4.
It will be understood by those of ordinary skill in the art that all or part of the steps in the methods of the above embodiments may be performed by associated hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a one-time programmable read-only memory (OTPROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other memory, a magnetic disk, a magnetic tape, or any other medium that can be used to carry or store data readable by a computer.
The data transmission method and apparatus disclosed in the embodiments of the present application are described in detail above, and specific examples are applied in the description to explain the principle and the implementation of the present application, and the description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific implementation and the application range may be changed. In view of the above, the description should not be taken as limiting the application.

Claims (10)

1. A data transmission method is applied to a distributed system, the distributed system comprises a load balancing server, a cache server and a server cluster, the server cluster comprises a first server and a second server, a server in the server cluster obtains an instruction from the load balancing server through a task preemption mechanism, the load balancing server is used for fragmenting an original instruction after receiving the original instruction from a calling system to obtain a first instruction and a second instruction, and the method comprises the following steps:
the first server receives the first instruction and executes the first instruction;
Obtaining a first execution result under the condition that the first instruction is successfully executed;
querying a second execution result in the cache server, wherein the second execution result is an execution result obtained by the second server executing the second instruction;
if the second execution result is inquired, sending the first execution result and the second execution result to the calling system;
otherwise, storing the first execution result to the cache server.
2. The method of claim 1, wherein before the first server receives the first instruction from a load balancing server and executes the first instruction, the method further comprises:
and sending a first request message to the load balancing server, wherein the first request message is used for requesting the first instruction.
3. The method of claim 1, wherein querying the second execution result in the cache server comprises:
sending a query instruction to the cache server, wherein the query instruction is used for indicating to query the second execution result;
receiving a query result from the cache server;
if the second execution result is queried, sending the first execution result and the second execution result to a calling system, including:
And sending the first execution result and the second execution result to the calling system under the condition that the second execution result is determined to be queried according to the query result.
4. The method of claim 1, wherein storing the first execution result to the cache server comprises:
sending a storage request to the cache server, wherein the storage request is used for requesting to store the first execution result;
receiving a storage response from the cache server;
and if the storage response is received, sending the first execution result to the cache server.
5. The method of claim 1, wherein after said executing said first instruction, and before said obtaining a first execution result, further comprising:
and in the case of failure of executing the first instruction, re-receiving the first instruction and executing the first instruction.
6. The method according to any one of claims 1 to 5, further comprising, after the storing the first execution result to the cache server:
starting a timer to time;
when the timing duration of the timer reaches a preset duration, inquiring the first execution result and the second execution result in the cache server;
And generating error information under the condition that the first execution result is inquired and the second execution result is not inquired, and sending the error information to a calling system.
7. A data transmission apparatus, comprising:
the first execution unit is used for receiving a first instruction and executing the first instruction;
the acquisition unit is used for acquiring a first execution result under the condition that the first instruction is successfully executed;
the first query unit is used for querying a second execution result in the cache server, wherein the second execution result is an execution result obtained by the second server executing a second instruction;
the first sending unit is used for sending the first execution result and the second execution result to a calling system under the condition that the second execution result is inquired;
the storage unit is used for storing the first execution result to the cache server under the condition that the second execution result is not inquired;
the first instruction and the second instruction are obtained from a load balancing server through a task preemption mechanism.
8. The apparatus of claim 7, further comprising:
a second sending unit, configured to send a first request message to a load balancing server, where the first request message is used to request the first instruction.
9. A data transmission device comprising a processor and a memory; the processor and the memory are connected with each other through a bus; wherein the memory is for storing a computer program comprising program instructions, the processor being configured for invoking the program instructions for performing the method of any of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1 to 6.
CN201910039461.2A 2019-01-16 2019-01-16 Data transmission method and device Active CN109905459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910039461.2A CN109905459B (en) 2019-01-16 2019-01-16 Data transmission method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910039461.2A CN109905459B (en) 2019-01-16 2019-01-16 Data transmission method and device

Publications (2)

Publication Number Publication Date
CN109905459A CN109905459A (en) 2019-06-18
CN109905459B true CN109905459B (en) 2022-06-28

Family

ID=66943738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910039461.2A Active CN109905459B (en) 2019-01-16 2019-01-16 Data transmission method and device

Country Status (1)

Country Link
CN (1) CN109905459B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286952B (en) * 2020-12-23 2021-10-01 智道网联科技(北京)有限公司 Method, device and system for processing real-time traffic information
CN113158002A (en) * 2021-04-28 2021-07-23 北京达佳互联信息技术有限公司 Searching method, searching device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107179940A (en) * 2016-03-10 2017-09-19 阿里巴巴集团控股有限公司 A kind of method and device of tasks carrying
CN107766136A (en) * 2017-09-30 2018-03-06 南威软件股份有限公司 A kind of method of task cluster management and running
CN108804214A (en) * 2018-05-24 2018-11-13 阿里巴巴集团控股有限公司 A kind of dispatching method of asynchronous task, device and electronic equipment
CN108829508A (en) * 2018-03-30 2018-11-16 北京趣拿信息技术有限公司 task processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120151479A1 (en) * 2010-12-10 2012-06-14 Salesforce.Com, Inc. Horizontal splitting of tasks within a homogenous pool of virtual machines

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107179940A (en) * 2016-03-10 2017-09-19 阿里巴巴集团控股有限公司 A kind of method and device of tasks carrying
CN107766136A (en) * 2017-09-30 2018-03-06 南威软件股份有限公司 A kind of method of task cluster management and running
CN108829508A (en) * 2018-03-30 2018-11-16 北京趣拿信息技术有限公司 task processing method and device
CN108804214A (en) * 2018-05-24 2018-11-13 阿里巴巴集团控股有限公司 A kind of dispatching method of asynchronous task, device and electronic equipment

Also Published As

Publication number Publication date
CN109905459A (en) 2019-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant