CN109376020B - Data processing method, device and storage medium under multi-blockchain interaction concurrency


Info

Publication number: CN109376020B (grant); published earlier as CN109376020A
Application number: CN201811086564.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 祝赫
Applicant and assignee: Bank of China Ltd
Legal status: Active
Prior art keywords: channel, transaction information, blockchain, queue, thread

Classifications

    • G06F 9/542: Event management; Broadcasting; Multicasting; Notifications
    • G06F 9/547: Remote procedure calls [RPC]; Web services
    • G06Q 40/04: Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange


Abstract

The application provides a data processing method, a device, and a storage medium for use under multi-blockchain interaction concurrency, wherein the method comprises the following steps: after a transaction uplink message is received, an idle thread is taken from a designated thread pool; in the idle thread, the transaction uplink message is parsed to obtain transaction information, a Channel is taken from the head of a designated bidirectional queue, and it is determined whether the Channel corresponds to the blockchain to which the transaction information belongs; if so, the Channel is used to send the transaction information and is then returned to the head of the bidirectional queue; if not, the Channel is placed at the tail of the bidirectional queue. The method and device can improve the reusability of Channels in a multi-blockchain interaction concurrency scenario.

Description

Data processing method, device and storage medium under multi-blockchain interaction concurrency
Technical Field
The present application relates to the field of blockchain technologies, and in particular, to a method, an apparatus, and a storage medium for processing data under multi-blockchain interaction concurrency.
Background
At present, in a blockchain transaction, before each transaction a node address and port recorded in a Channel must be opened to establish the socket connection over which the transaction is performed; the Channel is re-created at each use and closed when the use is completed. In a multi-blockchain interaction concurrency scenario, Channels are repeatedly switched, created, and closed, which means the same socket connection may be opened and closed many times. This easily wastes performance (for example, memory overflow caused by overload) and may even cause the program to crash and interrupt the service. The uneven, highly concurrent nature of multi-blockchain interaction therefore makes multiplexing Channels very difficult. Although it is possible to split different blockchains into independent programs, each separately processing the information of one blockchain, in a scenario with many chains this requires developing many Java programs, and the development and maintenance costs are enormous, which is impractical.
Therefore, how to improve Channel multiplexing in a multi-blockchain interaction concurrency scenario is a technical problem that currently needs to be solved.
Disclosure of Invention
An object of the present invention is to provide a data processing method, an apparatus, and a storage medium for a multi-blockchain interaction concurrency scenario, so as to improve the reusability of Channels in such a scenario.
To achieve the above object, in one aspect, an embodiment of the present application provides a data processing method under multi-blockchain interaction concurrency, including:
taking an idle thread from a designated thread pool after receiving a transaction uplink message;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs;
if the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs, placing the Channel at the tail of the bidirectional queue.
Preferably, the data processing method under multi-blockchain interaction concurrency further includes:
if the bidirectional queue is empty, creating a Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and placing the Channel at the head of the bidirectional queue after use.
Preferably, the data processing method under multi-blockchain interaction concurrency further includes:
when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length of the bidirectional queue and no Channel corresponding to the blockchain to which the transaction information belongs has been matched, closing the last Channel taken, creating a Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and placing the Channel at the head of the bidirectional queue after use.
Preferably, in the data processing method under multi-blockchain interaction concurrency, the queue length of the bidirectional queue is equal to the thread pool size of the thread pool.
On the other hand, an embodiment of the present application further provides a data processing apparatus under multi-blockchain interaction concurrency, including:
a thread fetching module, configured to take an idle thread from a designated thread pool after receiving a transaction uplink message;
a Channel matching module, configured to parse, in the idle thread, the transaction uplink message to obtain transaction information, take a Channel from the head of a designated bidirectional queue, and determine whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs;
a thread first logic module, configured to send the transaction information using the Channel when the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, and return the Channel to the head of the bidirectional queue after use;
and a thread second logic module, configured to place the Channel at the tail of the bidirectional queue when the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs.
Preferably, the data processing apparatus under multi-blockchain interaction concurrency further includes:
a thread third logic module, configured to create a Channel corresponding to the blockchain to which the transaction information belongs when the bidirectional queue is empty, send the transaction information using the Channel, and place the Channel at the head of the bidirectional queue after use.
Preferably, the data processing apparatus under multi-blockchain interaction concurrency further includes:
a thread fourth logic module, configured to close the last Channel taken when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length and no Channel corresponding to the blockchain to which the transaction information belongs has been matched, create a Channel corresponding to that blockchain, send the transaction information using it, and place the Channel at the head of the bidirectional queue after use.
Preferably, in the data processing apparatus under multi-blockchain interaction concurrency, the queue length of the bidirectional queue is equal to the thread pool size of the thread pool.
In another aspect, an embodiment of the present application further provides another data processing apparatus under multi-blockchain interaction concurrency, including a memory, a processor, and a computer program stored on the memory, where the computer program, when executed by the processor, performs the following steps:
taking an idle thread from a designated thread pool after receiving a transaction uplink message;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs;
if the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs, placing the Channel at the tail of the bidirectional queue.
In another aspect, an embodiment of the present application further provides a computer storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
taking an idle thread from a designated thread pool after receiving a transaction uplink message;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs;
if the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs, placing the Channel at the tail of the bidirectional queue.
As can be seen from the technical solutions provided by the embodiments of the present application, Channels that are used frequently become more likely to appear at the head of the bidirectional queue, and Channels that are used rarely more likely to appear at the tail. Moreover, the bidirectional queue adapts itself to changing concurrency: if a certain blockchain is accessed frequently during some period, the position of its Channel in the queue moves forward. Through this mechanism, the most recently and frequently used Channels are taken out preferentially the next time a Channel is needed, which improves Channel reusability in a multi-blockchain interaction concurrency scenario.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some of the embodiments described in the present application, and that those skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a flow chart of a data processing method under multi-blockchain interaction concurrency according to an embodiment of the present application;
FIG. 2 is a flow chart of a data processing method under multi-blockchain interaction concurrency according to another embodiment of the present application;
FIG. 3 is a block diagram of a data processing apparatus under multi-blockchain interaction concurrency according to an embodiment of the present application;
FIG. 4 is a block diagram of another data processing apparatus under multi-blockchain interaction concurrency according to an embodiment of the present application.
Detailed Description
In order to help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following embodiments of the present application are applicable to a multi-blockchain interaction concurrency scenario, and in particular to a scenario of uneven, high concurrency across blockchains. Specifically, one application processes the uplink messages of multiple blockchains, and the uplink messages of different blockchains may peak at different times or in different scenarios. In such a scenario, repeatedly switching, creating, and closing Channels easily wastes performance, which makes multiplexing Channels very difficult. Although it is possible to split different blockchains into independent programs, each separately processing the information of one blockchain, a scenario with many chains would then require developing many Java programs, with enormous development and maintenance costs, which is impractical. In addition, in such an uneven, highly concurrent multi-blockchain scenario, simply storing the Channels required by all blockchains together and traversing them uniformly cannot use time or memory effectively. In view of these technical problems, the inventors of the present application propose the solutions described below.
In the following embodiments of the present application, a Channel refers to a channel that represents a blockchain (Channels and blockchains are in one-to-one correspondence); specifically, it may include the IPs and ports of a series of peers (nodes), orderers (sorting nodes), and event hubs. During communication, a Channel opens a free local port to connect to some of the ports stored in the Channel, and in Java each Channel can act as a client instance of one blockchain.
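As an illustrative sketch, such a Channel can be modeled in Java as a holder of one blockchain's endpoints. The class and field names below are assumptions for illustration, not the patent's actual implementation:

```java
import java.util.List;

// Hypothetical model of the Channel described above: one Channel per
// blockchain, holding the peer, orderer, and event-hub ip:port entries.
public class ChainChannel {
    final String chainId;                 // identifier of the blockchain
    final List<String> peerEndpoints;     // peer ip:port entries
    final List<String> ordererEndpoints;  // orderer ip:port entries
    final List<String> eventHubEndpoints; // event hub ip:port entries

    public ChainChannel(String chainId, List<String> peers,
                        List<String> orderers, List<String> hubs) {
        this.chainId = chainId;
        this.peerEndpoints = peers;
        this.ordererEndpoints = orderers;
        this.eventHubEndpoints = hubs;
    }

    // One-to-one correspondence: a Channel matches exactly one blockchain,
    // which is the test step S102 performs against the uplink message.
    public boolean matchesBlockchain(String blockchainId) {
        return chainId.equals(blockchainId);
    }
}
```

The one-to-one mapping is what makes the later identifier comparison in step S102 a simple equality check.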
Referring to FIG. 1, a data processing method under multi-blockchain interaction concurrency according to an embodiment of the present application may include the following steps:
S101, taking an idle thread from a designated thread pool after receiving a transaction uplink message.
In an embodiment of the present application, when a node in a blockchain is ready to send an uplink transaction, the node may first encapsulate the information to generate a transaction uplink message and provide it to the node's sending module. Before this, a series of blockchain initialization actions may of course be performed, such as registering a certificate, creating a Channel, installing a contract, and initializing a contract.
The transaction uplink message may include, for example, the transaction information (e.g., transaction time and transaction content) and the blockchain identifier of the blockchain on which the transaction is to be made. An idle thread may then be taken from the designated thread pool to process the message. If no thread is currently available in the thread pool (i.e., no idle thread), the call may block synchronously until an idle thread appears. Of course, in other embodiments, if no thread is available, modes such as synchronous non-blocking, asynchronous blocking, or asynchronous non-blocking may also be used according to actual needs.
In one embodiment of the present application, the thread pool can be created before the business logic starts and reclaimed before the program ends. Using a thread pool prevents the system crashes that could result from excessive concurrency occupying too much memory. The size (or dimension) of the thread pool determines how many transaction uplink messages the node can process concurrently; the larger the pool, the better the node's concurrent processing performance, but the higher the memory requirement. The thread pool size therefore needs to be set reasonably for the specific situation. In addition, each thread must be returned to the pool after use.
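Such a pool can be sketched with the JDK's standard executor. This is a minimal illustration: the pool size of 4 and the task shape are assumptions, not values from the patent.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UplinkPool {
    // Fixed-size pool created once before the business logic starts; its
    // size bounds how many uplink messages are processed concurrently.
    static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Submitting when all threads are busy queues the task, which gives
    // the synchronous-blocking behavior described in the text. The body
    // here is a stand-in for parsing and sending the uplink message.
    static String handle(String uplinkMessage) {
        try {
            return pool.submit(() -> "processed:" + uplinkMessage).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Calling `UplinkPool.pool.shutdown()` before the program ends corresponds to the reclamation step mentioned above.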
S102, in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of the designated bidirectional queue, and determining whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs. If so, step S103 is executed; otherwise, step S104 is executed.
In an embodiment of the present application, before taking a Channel from the head of the designated bidirectional queue, the node's sending module may also perform some preprocessing; for example, after receiving a transaction uplink message, it decapsulates the message to parse out the transaction information that needs to be written to the chain (including the message content, the corresponding blockchain scenario, etc.), depending on the service requirements.
In an embodiment of the present application, the transaction uplink message carries the blockchain identifier of the corresponding uplink transaction, and Channels are in one-to-one correspondence with blockchains; therefore, by matching the Channel identifier of the taken-out Channel against the blockchain identifier carried in the transaction uplink message, it can be determined whether the Channel is the Channel corresponding to the blockchain to which the message belongs.
In one embodiment of the present application, the bidirectional queue is created before the business logic begins. A bidirectional queue is a data structure that can store information and from which data can be taken out or stored at either the head or the tail. In an exemplary embodiment, the bidirectional queue may be implemented as a doubly linked list, usually represented as LinkedList in Java; each data node of a doubly linked list has two pointers, pointing to its direct successor and direct predecessor respectively. The head and tail elements can therefore be accessed very quickly, and a bidirectional loop can be constructed through the doubly linked list. In general, using a data structure to manage the life cycle of objects in a highly concurrent scenario is common practice in Java programs; the embodiments of the present application mainly target the situation where different blockchain scenarios are interleaved and concurrent at different frequencies, for which a doubly-linked-list implementation can be particularly efficient.
In some embodiments of the present application, it is considered that an oversized thread pool may cause Channels entering at the tail of the queue to take too long to reach the head, while an undersized thread pool may cause the queue to overflow and waste resources because some threads create too many new Channels; the thread pool size therefore needs to be set reasonably, and likewise the bidirectional queue. Assuming a total of n threads and m blockchains, a queue length of m x n would in theory be required to guarantee a queue with no duplicates; however, this would leave a large number of redundant Channels idle and waste space. Preferably, the queue length of the bidirectional queue may be equal to the thread pool size. A queue length of n ensures that each thread can be assigned a Channel even when the program is busy, so setting the queue length to the thread pool size keeps the system at good operating efficiency and memory occupation once the program runs stably. During development, the queue length can additionally be tuned according to the concurrency frequency and the number of transaction types to optimize performance.
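The sizing rule above can be sketched as follows, using the JDK's capacity-bounded deque. The helper name is an assumption for illustration:

```java
import java.util.concurrent.LinkedBlockingDeque;

public class QueueSizing {
    // The text recommends queue length == thread pool size: with n threads,
    // a capacity of n guarantees every busy thread can be assigned a Channel,
    // without the idle redundancy of the theoretical m*n "no duplicates" queue.
    static LinkedBlockingDeque<String> newChannelDeque(int threadPoolSize) {
        return new LinkedBlockingDeque<>(threadPoolSize);
    }
}
```

`LinkedBlockingDeque` is used here because it enforces the capacity bound; an unbounded `LinkedList` would also work if the capacity is enforced by the surrounding logic.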
S103, if the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use.
In an embodiment of the present application, if the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, the node's sending module performs the uplink transaction using the Channel by way of a function call or remote transmission.
S104, if the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs, placing the Channel at the tail of the bidirectional queue.
It can be seen that in some embodiments of the present application, the Channels required by blockchain transactions are stored in a bidirectional queue (specifically, a Channel that was just used is placed at the head, and an unneeded one at the tail), and Channels are not closed immediately after a transaction ends but are retained in the queue. As the above steps iterate, Channels with high usage frequency become more likely to appear at the head of the queue, and Channels with low usage frequency at the tail. The queue also adapts to changing concurrency: if a certain blockchain is accessed frequently during some period, the position of its Channel in the queue moves forward. Through this mechanism, the most recently and frequently used Channels are taken out preferentially the next time a Channel is needed, improving Channel reusability in a multi-blockchain interaction concurrency scenario.
In the data processing method under multi-blockchain interaction concurrency of another embodiment of the present application, if the bidirectional queue is empty, a Channel corresponding to the blockchain to which the transaction information belongs is created, the transaction information is sent using that Channel, and the Channel is placed at the head of the bidirectional queue after use. At the beginning of trading, threads may create many Channels because they cannot find the Channels they need in the queue; the same happens when the concurrency pattern of transactions changes greatly. Once the concurrency pattern approaches stability, a thread can with high probability take the Channel it needs directly from the bidirectional queue, without closing an unneeded Channel and creating a new one.
In the data processing method under multi-blockchain interaction concurrency of another embodiment of the present application, when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length and no Channel corresponding to the blockchain to which the transaction information belongs has been matched, the last Channel taken is closed, a Channel corresponding to that blockchain is created, the transaction information is sent using it, and the Channel is placed at the head of the bidirectional queue after use. Based on this, in other embodiments of the present application, the data processing method under multi-blockchain interaction concurrency may be as shown in FIG. 2. In FIG. 2, the step of determining whether the currently taken Channel is usable refers to determining whether it is the Channel corresponding to the blockchain to which the transaction uplink message belongs. By creating a new Channel once the number of failed attempts exceeds the queue length, the program is prevented from endlessly taking Channels from the bidirectional queue in a dead loop. In addition, setting the upper limit of Channel-taking attempts to the thread pool length helps guarantee good operating efficiency and memory occupation once the program runs stably.
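Putting the branches of FIG. 1 and FIG. 2 together, the take/match/rotate logic, including the empty-queue and retry-limit cases, can be sketched as below. This is a single-threaded illustration in which plain chain-id strings stand in for Channels; the method names and the use of `ArrayDeque` are assumptions, not the patent's code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ChannelReuse {
    // Returns a Channel (here, just its chain id) for the given blockchain.
    static String acquire(Deque<String> deque, String chainId) {
        int limit = deque.size();  // retry upper bound = current queue length
        for (int tries = 0; tries <= limit; tries++) {
            String ch = deque.pollFirst();
            if (ch == null) {
                break;             // queue empty: fall through and create one
            }
            if (ch.equals(chainId)) {
                return ch;         // match: reuse the taken Channel (S103)
            }
            if (tries == limit) {
                break;             // misses exceeded queue length: close
                                   // (here, drop) the taken Channel
            }
            deque.addLast(ch);     // no match: rotate to the tail (S104)
        }
        return chainId;            // create a Channel for this blockchain
    }

    // After the transaction is sent, the Channel goes back to the head,
    // so frequently used Channels drift toward the front of the queue.
    static void release(Deque<String> deque, String ch) {
        deque.addFirst(ch);
    }
}
```

A concurrent deployment would guard the deque with a thread-safe deque or external locking; that machinery is omitted here to keep the rotation logic visible.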
Referring to FIG. 3, a data processing apparatus under multi-blockchain interaction concurrency according to an embodiment of the present application may include:
a thread fetching module 31, which may be configured to take an idle thread from a designated thread pool after receiving a transaction uplink message;
a Channel matching module 32, which may be configured to parse, in the idle thread, the transaction uplink message to obtain transaction information, take a Channel from the head of a designated bidirectional queue, and determine whether the Channel is the Channel corresponding to the blockchain to which the transaction information belongs;
a thread first logic module 33, which may be configured to send the transaction information using the Channel when the Channel is the Channel corresponding to the blockchain to which the transaction information belongs, and return the Channel to the head of the bidirectional queue after use;
and a thread second logic module 34, which may be configured to place the Channel at the tail of the bidirectional queue when the Channel is not the Channel corresponding to the blockchain to which the transaction information belongs.
In another embodiment of the present application, the data processing apparatus under multi-blockchain interaction concurrency may further include:
a thread third logic module, which may be configured to create a Channel corresponding to the blockchain to which the transaction information belongs when the bidirectional queue is empty, send the transaction information using the Channel, and place the Channel at the head of the bidirectional queue after use.
In another embodiment of the present application, the data processing apparatus under multi-blockchain interaction concurrency may further include:
a thread fourth logic module, which may be configured to close the last Channel taken when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length and no Channel corresponding to the blockchain to which the transaction information belongs has been matched, create a Channel corresponding to that blockchain, send the transaction information using it, and place the Channel at the head of the bidirectional queue after use.
For convenience of description, the above apparatus is described as being divided into various units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
Referring to fig. 4, another data processing apparatus according to an embodiment of the present application, which is capable of performing multi-zone chain interaction concurrently, includes a memory, a processor, and a computer program stored in the memory, where the computer program is executed by the processor to perform the following steps:
after receiving a transaction uplink message, taking an idle thread from a designated thread pool;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel corresponds to the blockchain to which the transaction information belongs;
if the Channel corresponds to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel does not correspond to the blockchain to which the transaction information belongs, moving the Channel to the tail of the bidirectional queue.
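The steps above amount to a deque-based Channel cache: matching Channels return to the head, mismatched ones rotate to the tail, and a Channel is evicted only after a full pass over the deque finds no match. The following Python sketch is an illustrative approximation, not the patented implementation: the `Channel` and `ChannelPool` names are invented, the Channel is a stub rather than a real blockchain connection, and "exceeds the queue length" is read here as one full pass over the deque as it stood when the send began.

```python
from collections import deque
import threading


class Channel:
    """Hypothetical stand-in for a connection to one blockchain network."""

    def __init__(self, chain_id):
        self.chain_id = chain_id
        self.closed = False

    def send(self, tx):
        # A real Channel would submit the transaction to its blockchain here.
        return f"sent {tx} on chain {self.chain_id}"

    def close(self):
        self.closed = True


class ChannelPool:
    """Bidirectional queue of Channels shared by the worker threads."""

    def __init__(self):
        self.dq = deque()
        self.lock = threading.Lock()  # threads from the pool share the deque

    def send(self, chain_id, tx):
        with self.lock:
            limit = len(self.dq)  # allow one full pass over the current deque
        attempts = 0
        ch = None
        while ch is None:
            with self.lock:
                cand = self.dq.popleft() if self.dq else None
            if cand is None:
                ch = Channel(chain_id)        # deque empty: create a new Channel
            elif cand.chain_id == chain_id:
                ch = cand                     # match: reuse the cached Channel
            else:
                attempts += 1
                if attempts > limit:
                    cand.close()              # scanned the whole deque without a match:
                    ch = Channel(chain_id)    # evict the taken-out Channel and replace it
                else:
                    with self.lock:
                        self.dq.append(cand)  # mismatch: rotate to the tail
        result = ch.send(tx)
        with self.lock:
            self.dq.appendleft(ch)            # back to the head: hot Channels stay in front
        return result
```

An idle thread taken from the pool would call `pool.send(chain_id, tx)` for each parsed uplink message; because a matching Channel returns to the head, consecutive transactions for the same blockchain reuse the same connection without rescanning the deque.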
While the process flows described above include operations that occur in a particular order, it should be appreciated that the processes may include more or fewer operations, which may be performed sequentially or in parallel (e.g., using parallel processors or a multi-threaded environment).
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" or "comprising an" does not exclude the presence of other identical elements in the process, method, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A data processing method under concurrent multi-blockchain interaction, characterized by comprising:
after receiving a transaction uplink message, taking an idle thread from a designated thread pool;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel corresponds to the blockchain to which the transaction information belongs;
if the Channel corresponds to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel does not correspond to the blockchain to which the transaction information belongs, moving the Channel to the tail of the bidirectional queue.
2. The data processing method under concurrent multi-blockchain interaction according to claim 1, further comprising:
if the bidirectional queue is empty, creating a Channel corresponding to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and placing the Channel at the head of the bidirectional queue after use.
3. The data processing method under concurrent multi-blockchain interaction according to claim 1, further comprising:
when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length of the bidirectional queue without matching a Channel corresponding to the blockchain to which the transaction information belongs, closing the taken-out Channel, creating a Channel corresponding to that blockchain, sending the transaction information using the new Channel, and placing it at the head of the bidirectional queue after use.
4. The data processing method under concurrent multi-blockchain interaction according to claim 1, wherein the queue length of the bidirectional queue is equal to the size of the thread pool.
5. A data processing apparatus for concurrent multi-blockchain interaction, characterized by comprising:
a thread taking module, configured to take an idle thread from a designated thread pool after receiving a transaction uplink message;
a Channel matching module, configured to parse the transaction uplink message in the idle thread to obtain transaction information, take a Channel from the head of a designated bidirectional queue, and determine whether the Channel corresponds to the blockchain to which the transaction information belongs;
a thread first logic module, configured to send the transaction information using the Channel when the Channel corresponds to the blockchain to which the transaction information belongs, and return the Channel to the head of the bidirectional queue after use;
and a thread second logic module, configured to move the Channel to the tail of the bidirectional queue when the Channel does not correspond to the blockchain to which the transaction information belongs.
6. The data processing apparatus for concurrent multi-blockchain interaction according to claim 5, further comprising:
a thread third logic module, configured to create a Channel corresponding to the blockchain to which the transaction information belongs when the bidirectional queue is empty, send the transaction information using the Channel, and place the Channel at the head of the bidirectional queue after use.
7. The data processing apparatus for concurrent multi-blockchain interaction according to claim 5, further comprising:
a thread fourth logic module, configured to close the taken-out Channel when the number of times a Channel has been taken from the head of the bidirectional queue exceeds the queue length of the bidirectional queue without matching a Channel corresponding to the blockchain to which the transaction information belongs, create a Channel corresponding to that blockchain, send the transaction information using the new Channel, and place it at the head of the bidirectional queue after use.
8. The data processing apparatus for concurrent multi-blockchain interaction according to claim 5, wherein the queue length of the bidirectional queue is equal to the size of the thread pool.
9. A data processing apparatus for concurrent multi-blockchain interaction, comprising a memory, a processor, and a computer program stored on the memory, wherein the computer program, when executed by the processor, performs the following steps:
after receiving a transaction uplink message, taking an idle thread from a designated thread pool;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel corresponds to the blockchain to which the transaction information belongs;
if the Channel corresponds to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel does not correspond to the blockchain to which the transaction information belongs, moving the Channel to the tail of the bidirectional queue.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the following steps:
after receiving a transaction uplink message, taking an idle thread from a designated thread pool;
in the idle thread, parsing the transaction uplink message to obtain transaction information, taking a Channel from the head of a designated bidirectional queue, and determining whether the Channel corresponds to the blockchain to which the transaction information belongs;
if the Channel corresponds to the blockchain to which the transaction information belongs, sending the transaction information using the Channel, and returning the Channel to the head of the bidirectional queue after use;
and if the Channel does not correspond to the blockchain to which the transaction information belongs, moving the Channel to the tail of the bidirectional queue.
CN201811086564.6A 2018-09-18 2018-09-18 Data processing method, device and storage medium under multi-block chain interaction concurrence Active CN109376020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811086564.6A CN109376020B (en) 2018-09-18 2018-09-18 Data processing method, device and storage medium under multi-block chain interaction concurrence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811086564.6A CN109376020B (en) 2018-09-18 2018-09-18 Data processing method, device and storage medium under multi-block chain interaction concurrence

Publications (2)

Publication Number Publication Date
CN109376020A CN109376020A (en) 2019-02-22
CN109376020B true CN109376020B (en) 2021-02-12

Family

ID=65405516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811086564.6A Active CN109376020B (en) 2018-09-18 2018-09-18 Data processing method, device and storage medium under multi-block chain interaction concurrence

Country Status (1)

Country Link
CN (1) CN109376020B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264348B (en) * 2019-05-07 2021-08-20 北京奇艺世纪科技有限公司 Processing method, device and storage medium for transaction uplink
CN110163609B (en) * 2019-05-28 2024-02-27 深圳前海微众银行股份有限公司 Method and device for processing data in block chain
CN111782350B (en) * 2019-06-25 2024-06-14 北京京东尚科信息技术有限公司 Service processing method and device
CN113656408A (en) * 2021-08-19 2021-11-16 龙兴(杭州)航空电子有限公司 Full-life-cycle management method and system for aviation material based on RFID combined block chain technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103543988A (en) * 2013-10-23 2014-01-29 华为终端有限公司 Method for processing array information, method and device of controlling information to enter arrays
CN104516827A (en) * 2013-09-27 2015-04-15 杭州信核数据科技有限公司 Cache reading method and device
CN106293973A (en) * 2016-08-17 2017-01-04 深圳市金证科技股份有限公司 Lock-free message queue communication means and system
CN107241279A (en) * 2017-06-22 2017-10-10 北京天德科技有限公司 A kind of block chain transaction current-limiting method based on multi-buffer queue
CN107402824A (en) * 2017-05-31 2017-11-28 阿里巴巴集团控股有限公司 A kind of method and device of data processing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117309B (en) * 2010-01-06 2013-04-17 卓望数码技术(深圳)有限公司 Data caching system and data query method
US8544029B2 (en) * 2011-05-24 2013-09-24 International Business Machines Corporation Implementing storage adapter performance optimization with chained hardware operations minimizing hardware/firmware interactions
US20160283379A1 (en) * 2015-03-27 2016-09-29 Avago Technologies General Ip (Singapore) Pte. Ltd. Cache flushing utilizing linked lists
CN106453029A (en) * 2015-08-07 2017-02-22 中兴通讯股份有限公司 Notification information processing method and apparatus
US9319365B1 (en) * 2015-10-09 2016-04-19 Machine Zone, Inc. Systems and methods for storing and transferring message data
CN106777371B (en) * 2017-01-23 2019-12-06 北京齐尔布莱特科技有限公司 Log collection system and method
CN107562535A (en) * 2017-08-02 2018-01-09 广东睿江云计算股份有限公司 A kind of load-balancing method of task based access control scheduling, system
CN107656812A (en) * 2017-09-27 2018-02-02 咪咕文化科技有限公司 block chain processing method, system, node device, terminal and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516827A (en) * 2013-09-27 2015-04-15 杭州信核数据科技有限公司 Cache reading method and device
CN103543988A (en) * 2013-10-23 2014-01-29 华为终端有限公司 Method for processing array information, method and device of controlling information to enter arrays
CN106293973A (en) * 2016-08-17 2017-01-04 深圳市金证科技股份有限公司 Lock-free message queue communication means and system
CN107402824A (en) * 2017-05-31 2017-11-28 阿里巴巴集团控股有限公司 A kind of method and device of data processing
CN107241279A (en) * 2017-06-22 2017-10-10 北京天德科技有限公司 A kind of block chain transaction current-limiting method based on multi-buffer queue

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Real-time bidirectional communication system for an Internet-of-Things gateway based on Socket.IO"; Chen Wenyi et al.; Journal of Xi'an University of Posts and Telecommunications; 2017-11-10; Vol. 22, No. 6; pp. 111-116 *

Also Published As

Publication number Publication date
CN109376020A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109376020B (en) Data processing method, device and storage medium under multi-block chain interaction concurrence
CN111756550B (en) Block chain consensus method and device
CN108280080B (en) Data synchronization method and device and electronic equipment
CN107241281B (en) Data processing method and device
CN111143093B (en) Asynchronous message distributed processing method, device, equipment and storage medium
CN112506498A (en) Intelligent visual API arrangement method, storage medium and electronic equipment
CN112288423A (en) Aggregation payment method and system of distributed framework
WO2021103646A1 (en) Pod deployment method and device
CN107977254B (en) Method for responding to request in cloud data system and computer-readable storage medium
US20140245309A1 (en) System and method for transforming a queue from non-blocking to blocking
CN109857516B (en) Cluster migration method and device based on container
CN112463290A (en) Method, system, apparatus and storage medium for dynamically adjusting the number of computing containers
CN109561128A (en) Data transmission method and device
CN108304272B (en) Data IO request processing method and device
CN110716813A (en) Data stream processing method and device, readable storage medium and processor
CN113032134B (en) Method and device for realizing cloud computing resource allocation and cloud management server
CN103927244A (en) Plug-in scheduling process monitoring method implemented based on dynamic proxy
CN107678863A (en) The page assembly means of communication and device
CN111435329A (en) Automatic testing method and device
CN110413427B (en) Subscription data pulling method, device, equipment and storage medium
CN115220887A (en) Processing method of scheduling information, task processing system, processor and electronic equipment
CN112015515B (en) Instantiation method and device of virtual network function
CN110971642B (en) Data processing method and device for cloud computing platform
CN113254143A (en) Virtual network function network element arranging and scheduling method, device and system
CN111797070A (en) Ticket data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant