Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a flowchart of a speed limiting method according to an embodiment of the present invention. As shown in Fig. 1, in this embodiment, the present invention provides a speed limiting method applicable to a blockchain node, the method comprising:
S12: when the data uploaded by the current node is congested, monitoring whether a block synchronization task exists:
if yes, go to step S13: limiting the speed of the traffic occupied by the block synchronization task according to a preconfigured speed limit rule; and
S14: monitoring, after a preconfigured duration, whether the data uploaded by the current node is still congested: if yes, further limiting the speed of the already-limited traffic occupied by the block synchronization task according to the speed limit rule, and returning to monitor, after the preconfigured duration, whether the data uploaded by the current node is congested;
wherein the block synchronization task comprises sending blocks and/or block headers to other blockchain nodes.
Specifically, take as an example a preconfigured speed limit rule that limits the traffic occupied by the block synchronization task to 50% of its current traffic, with a preconfigured duration of 2 minutes;
the blockchain node executes step S12 and, when the data uploaded by the current node is congested, monitors whether a block synchronization task exists:
if yes, it executes step S13: limiting the traffic occupied by the block synchronization task to 50% of its current traffic; assuming the block synchronization task currently occupies 1MB, the limit is then 512KB; and
the blockchain node executes step S141, monitoring after 2 minutes whether the data uploaded by the current node is still congested: if yes, it executes step S142: again limiting the traffic occupied by the block synchronization task to 50% of its current traffic; the traffic after the first limit being 512KB, the limit is now 256KB; it then returns to monitor, after the preconfigured duration, whether the data uploaded by the current node is congested;
when the blockchain node monitors after 2 minutes that the data uploaded by the current node is no longer congested, it releases the speed limit on the block synchronization task and restores the traffic to 1MB;
and when the data uploaded by the current node is congested but no block synchronization task exists, no speed limit is applied to the network traffic.
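The speed-limiting loop of steps S12 to S14 can be sketched in Python as follows; note that the `Node` class, its fields, and the simulated congestion signal are hypothetical stand-ins for a real node's traffic controller, not part of the disclosed method:

```python
import time

class Node:
    """Hypothetical stand-in for a blockchain node's upload-traffic state."""
    def __init__(self, sync_rate_kb, congested_checks):
        self.sync_rate_kb = sync_rate_kb           # traffic of the sync task, in KB
        self.has_sync_task = True
        self._congested_checks = congested_checks  # checks until congestion clears

    def upload_congested(self):
        # Simulated congestion signal: congested for the first N checks.
        if self._congested_checks > 0:
            self._congested_checks -= 1
            return True
        return False

def limit_block_sync_rate(node, factor=0.5, check_interval=0):
    """Steps S12-S14: while the uploaded data is congested and a block
    synchronization task exists, repeatedly limit its traffic to `factor`
    of the current value, waiting `check_interval` seconds (2 minutes in
    the example above) between checks; release the limit and restore the
    original traffic once congestion clears."""
    applied = []
    if not node.has_sync_task:          # S12 "no": do not limit anything
        return applied
    original = node.sync_rate_kb
    while node.upload_congested():      # S12 / S14 congestion check
        node.sync_rate_kb *= factor     # S13 / S142: limit to 50%
        applied.append(node.sync_rate_kb)
        time.sleep(check_interval)      # preconfigured duration
    node.sync_rate_kb = original        # congestion cleared: release limit
    return applied
```

With an initial traffic of 1024KB and two congested checks, the applied limits are 512KB and then 256KB, after which the traffic is restored to 1024KB, matching the worked example above.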
In more embodiments, the preconfigured speed limit rule is not limited to the above example and may also be configured according to actual requirements; for example, it may be configured to limit the traffic occupied by the block synchronization task to 40% of its current traffic, achieving the same technical effect.
In further embodiments, the preconfigured duration is not limited to the above examples, and may also be configured according to actual requirements, for example, configured to be 30 seconds, so that the same technical effect can be achieved.
In more embodiments, the operation performed when the data uploaded by the current node is congested but no block synchronization task exists may also be configured according to actual requirements; for example, it may be configured to limit the speed of the traffic to be occupied by block synchronization tasks yet to be generated, achieving the same technical effect.
In more embodiments, the operation performed by the blockchain node when it monitors, after 2 minutes, that the data uploaded by the current node is no longer congested may also be configured according to actual requirements; for example, it may be configured to release the speed limit on the block synchronization task and restore the traffic to the last limited speed, namely 512KB, achieving the same technical effect.
In more embodiments, the final limited speed may be recorded according to actual requirements and used as a reference initial value for limiting the speed of the next block synchronization task.
The above-described embodiments effectively manage network traffic for blockchain nodes.
Fig. 2 is a flowchart of a preferred embodiment of the method shown in Fig. 1. As shown in Fig. 2, in a preferred embodiment, the method further comprises:
S151: judging whether the first list of block synchronization tasks has sufficient capacity to store a block and/or block header task to be sent:
if yes, go to step S152: storing the block and/or block header task to be sent in the first list;
otherwise, go to step S153: discarding the block and/or block header task to be sent.
This follows from the characteristics of the block synchronization task: the peer blockchain node will repeatedly request a block and/or block header from the current node, or, when congestion of the data uploaded by the current node prevents the requested block and/or block header from being transmitted, the peer blockchain node will request the needed block and/or block header from other blockchain nodes. Therefore, when the first list lacks sufficient capacity to store a block and/or block header task to be sent, that task is discarded.
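Under these assumptions, the store-or-discard decision of steps S151 to S153 reduces to a bounded queue; a minimal sketch follows, where the `capacity` parameter is a hypothetical configured size for the first list:

```python
from collections import deque

def enqueue_sync_task(first_list, capacity, task):
    """Steps S151-S153: store the block/block-header task if the first
    list still has room; otherwise discard it, since the peer node will
    re-request the block or fetch it from another blockchain node."""
    if len(first_list) < capacity:   # S151: sufficient capacity?
        first_list.append(task)      # S152: store the task
        return True
    return False                     # S153: discard the task
```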
The above-described embodiments efficiently manage the storage and discarding of block synchronization tasks for blockchain nodes.
Fig. 3 is a flowchart of a preferred embodiment of the method shown in Fig. 1. As shown in Fig. 3, in a preferred embodiment, step S12 comprises:
S121: when the data uploaded by the current node is congested, judging whether the second list of broadcast data tasks is empty:
if not, go to step S122: monitoring whether a block synchronization task exists; wherein the broadcast data task comprises broadcasting blocks and/or transactions to other blockchain nodes, the second list is used to store first-type transaction tasks to be broadcast and/or first-type block tasks to be broadcast, the transactions of the first-type transaction tasks having been broadcast by other blockchain nodes, and the blocks of the first-type block tasks having been generated and broadcast by other blockchain nodes;
step S141 includes:
S1411: monitoring, after the preconfigured duration, whether the remaining capacity of the second list is greater than a preconfigured first value;
step S142 includes:
S1421: limiting the speed of the already-limited traffic occupied by the block synchronization task according to the speed limit rule, and returning to monitor, after the preconfigured duration, whether the remaining capacity of the second list is greater than the preconfigured first value.
Specifically, assume that the preconfigured first value is 60% of the total capacity of the second list;
when the data uploaded by the current node is not congested, the current node broadcasts a block and/or transaction to be broadcast immediately upon receiving it from another blockchain node;
when the data uploaded by the current node is congested and the second list has sufficient capacity: upon receiving a block broadcast by another blockchain node, the current node adds the block task to be broadcast to the second list; upon receiving a transaction broadcast by another blockchain node, it adds the transaction task to be broadcast to the second list; when the current node itself generates a block, it broadcasts the generated block immediately; and upon receiving a transaction broadcast by a client, it broadcasts the transaction immediately;
the blockchain node executes step S121 and, when the data uploaded by the current node is congested, judges whether the second list of broadcast data tasks is empty:
if not, it executes step S122: monitoring whether a block synchronization task exists:
if yes, it executes step S13: limiting the traffic occupied by the block synchronization task to 50% of its current traffic; assuming the block synchronization task currently occupies 1MB, the limit is then 512KB; and
the blockchain node executes step S141, monitoring after 2 minutes whether the remaining capacity of the second list is greater than 60% of its total capacity: if not, it limits the traffic occupied by the block synchronization task to 50% of its current traffic (the traffic after the first limit being 512KB, the limit is now 256KB) and returns to monitor, after the preconfigured duration, whether the remaining capacity of the second list is greater than the preconfigured first value;
when the blockchain node monitors after 2 minutes that the remaining capacity of the second list of the current node is greater than 60% of its total capacity, it releases the speed limit on the block synchronization task and restores the traffic to 1MB;
and when the data uploaded by the current node is congested but no block synchronization task exists, no speed limit is applied to the network traffic.
In further embodiments, the preconfigured first value is not limited to the above example and may also be configured according to actual requirements, for example as 80% of the total capacity of the second list, so that the same technical effect can be achieved.
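The congestion test of step S1411, which uses the backlog of the second list as the signal, can be sketched as follows; the list contents and capacity values are illustrative only:

```python
def congestion_relieved(second_list, total_capacity, first_value_ratio=0.6):
    """Step S1411: congestion is considered relieved when the remaining
    capacity of the second list exceeds the preconfigured first value,
    here expressed as a fraction (60%) of the list's total capacity."""
    remaining = total_capacity - len(second_list)
    return remaining > total_capacity * first_value_ratio
```

When this check fails, step S1421 applies the speed limit rule again; when it succeeds, the speed limit on the block synchronization task is released.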
The above-described embodiments effectively manage network traffic for blockchain nodes.
Fig. 4 is a flowchart of a preferred embodiment of the method shown in Fig. 3. As shown in Fig. 4, in a preferred embodiment, the method further comprises:
S161: judging whether the second list has sufficient capacity to store a first-type transaction task and/or first-type block task to be broadcast:
if not, go to step S162: deleting the earliest transactions and/or blocks in the second list so as to store the first-type transaction task and/or first-type block task to be broadcast.
This follows from a characteristic of the broadcast data task: the earlier first-type transaction tasks and/or first-type block tasks in the second list will already have been broadcast by other blockchain nodes whose uploaded data is not congested. Therefore, when the second list lacks sufficient capacity to store the first-type transaction tasks and/or first-type block tasks to be broadcast, the earliest transactions and/or blocks in the second list are deleted to store them.
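The eviction behaviour described above can be sketched as a bounded queue that drops its oldest entries rather than the newly arrived one; the `capacity` parameter is again a hypothetical configured size:

```python
from collections import deque

def enqueue_broadcast_task(second_list, capacity, task):
    """Steps S161-S162: unlike a block synchronization task, a first-type
    broadcast task is never dropped on arrival; instead the earliest
    tasks in the second list are deleted, since uncongested peers will
    already have broadcast them."""
    while len(second_list) >= capacity:
        second_list.popleft()        # S162: delete the earliest task(s)
    second_list.append(task)
```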
The above-described embodiments efficiently manage the storage and discarding of broadcast data tasks for blockchain nodes.
Fig. 5 is a flowchart of a preferred embodiment of the method shown in Fig. 4. As shown in Fig. 5, in a preferred embodiment, the method further comprises:
S171: monitoring whether a node information request task exists:
if yes, go to step S172: executing the node information request task immediately.
The node information request task includes, but is not limited to: testing whether a connection with another blockchain node is established; broadcasting the node information of the current node; and acquiring the node information of a peer blockchain node.
The above-described embodiment efficiently manages the node information request tasks of the blockchain node.
Fig. 6 is a flowchart of a preferred embodiment of the method shown in Fig. 5. As shown in Fig. 6, in a preferred embodiment, before step S12 the method further comprises:
S11: setting an upper limit on the total traffic occupied by the node information request task, the block synchronization task and the broadcast data task; for example, the upper limit of the total traffic occupied by the three task types is set to Z.
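A minimal sketch of the budgeting implied by step S11 follows, assuming a concrete value for Z and KB-denominated traffic figures (both hypothetical; the specification leaves Z abstract):

```python
TOTAL_UPLOAD_LIMIT_KB = 2048  # Z: assumed combined upper limit, for illustration

def sync_traffic_budget(node_info_kb, broadcast_kb, limit=TOTAL_UPLOAD_LIMIT_KB):
    """Step S11: the three task types share the upper limit Z; node
    information requests execute immediately (step S172) and broadcast
    data tasks are served next, so the block synchronization task gets
    whatever traffic remains under Z."""
    return max(0, limit - node_info_kb - broadcast_kb)
```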
Fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
As shown in Fig. 7, as another aspect, the present application also provides an apparatus 700 including one or more central processing units (CPUs) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the apparatus 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the method described in any of the above embodiments may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing any of the methods described above. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
As yet another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present application.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, for example, each unit may be a software program provided in a computer or a mobile intelligent device, or may be a separately configured hardware device. Wherein the designation of a unit or module does not in some way constitute a limitation of the unit or module itself.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the present application. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.