CN113691611B - Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium - Google Patents


Info

Publication number
CN113691611B
CN113691611B (application CN202110965424.1A)
Authority
CN
China
Prior art keywords
processing
request
cluster
pool
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110965424.1A
Other languages
Chinese (zh)
Other versions
CN113691611A (en
Inventor
马超群
熊园坤
周中定
李信儒
兰秋军
万丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110965424.1A priority Critical patent/CN113691611B/en
Publication of CN113691611A publication Critical patent/CN113691611A/en
Application granted granted Critical
Publication of CN113691611B publication Critical patent/CN113691611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04L67/1023: Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L61/4511: Network directories; name-to-address mapping using standardised directory access protocols using the domain name system [DNS]
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63: Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a blockchain distributed high-concurrency transaction processing method, system, device and storage medium. The method distributes user requests across a plurality of processing servers through a multi-level load balancing and multi-level caching mechanism, which speeds up the response to user requests, improves throughput, and prevents processing servers from crashing when user requests surge. A high-concurrency pool cluster then performs distributed, parallel processing on the mass of data, which reduces scheduling pressure, effectively solves the large-scale scheduling problem, relieves memory pressure, and improves resource utilization efficiency. Finally, the data processed by the high-concurrency pool cluster is published through a distributed message system, so that messages can be persisted and processed in batches, and the data is written to the corresponding blockchains at high speed and in parallel through a plurality of cache clusters.

Description

Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium
Technical Field
The present invention relates to the field of blockchain technologies, and in particular, to a method, a system, a device, and a computer-readable storage medium for processing distributed highly concurrent transactions of blockchains.
Background
Blockchain technology, also called distributed ledger technology, is an emerging technology in which several computer devices jointly participate in "bookkeeping" and jointly maintain a complete distributed database. Because of its decentralization and public transparency, blockchain technology is favored in many fields. Every computer on the blockchain holds the data records, and all computer devices can synchronize data in real time, guaranteeing the security and timeliness of the data. At present, the transaction processing flow of a blockchain is as follows: the client sends a request, the service node interacts with the blockchain network after receiving the user request and then calls the chaincode to put the data on the chain, and this process is executed once for every request the client sends. However, the existing blockchain transaction processing method cannot cope with a surge of user requests: computing-resource scheduling is difficult, resource utilization efficiency is low, distributed and parallel computing is impossible, response is slow, and system throughput is small.
Disclosure of Invention
The invention provides a distributed high-concurrency transaction processing method, a system, equipment and a computer readable storage medium of a block chain, which are used for solving the defects of the existing block chain transaction processing method.
According to an aspect of the present invention, there is provided a method for distributed highly concurrent transaction processing of a blockchain, including the following steps:
step S1: the client sends a plurality of user requests, and the service node receives the user requests and stores the plurality of request messages uniformly in a request cache pool;
step S2: a DNS server cluster performs domain name resolution for the plurality of request messages to obtain IP addresses;
step S3: a load balancer cluster converts the IP address resolved from each request message; each load balancer modifies the target IP address in a request message to the server address selected by a scheduling algorithm and re-encapsulates the request message to form a data packet;
step S4: a reverse proxy server cluster processes the re-encapsulated data packets; if a reverse proxy server can obtain a processing result, it feeds the result back to the client, and if not, it forwards the data packet;
step S5: a high-concurrency pool cluster receives the data packets forwarded by the plurality of reverse proxy servers and performs distributed, parallel processing on them through a plurality of concurrency pools;
step S6: a distributed message system publishes the data processed by the high-concurrency pool cluster;
step S7: a plurality of cache clusters correspondingly synchronize the data published by the distributed message system to different blockchains.
Further, in step S2, each DNS server compares a request message against the resource records in the DNS database to obtain a domain name resolution result, implementing first-level load balancing, and sends the domain name resolution result together with the request message to the load balancer cluster.
Further, in step S3, each load balancer converts the IP address resolved from a request message to implement second-level load balancing: through its scheduler it modifies the target IP address in the request message to the IP address of the next-level proxy server selected by a weighted round-robin scheduling algorithm, modifies the port address correspondingly, re-encapsulates the request message into a data packet, and sends the data packet to the corresponding next-level proxy server.
Further, in step S4, each reverse proxy server receives the data packets sent by its corresponding load balancer, implementing third-level load balancing. After receiving a request data packet, the reverse proxy server checks the request against its local cache; if a processing result can be obtained, it is fed back directly to the client. If not, the request is checked against the cache cluster; if a processing result can be obtained there, it is fed back directly to the client; otherwise the data packet is sent to a processing server, which handles the forwarded request.
Further, in step S5, after a concurrency pool receives data packets it constructs a task queue and a dispatcher and places the request packets in the task queue to wait to be scheduled. The dispatcher then creates a work pool and instantiates workers up to the pool's capacity so that processing can proceed at maximum concurrency. The dispatcher continuously retrieves tasks from the task queue; each time a task is retrieved it schedules a waiting worker to process it, building a task channel between the worker and the task so that requests and resources are scheduled one to one. A worker registers itself in the work pool when it starts, and the worker completes the data processing.
Further, if the dispatcher finds that the number of running workers exceeds the capacity of the work pool, it stops scheduling and the concurrency pool blocks; scheduling resumes once a worker that has finished its task is released back into the work pool.
Further, in step S6, a plurality of message producers of the distributed message system publish the data processed by the high-concurrency pool cluster to different partitions; each partition has a specific number and keeps its messages in order. Each consumer of the distributed message system subscribes to different partitions, and messages are published and subscribed asynchronously.
In addition, the present invention also provides a block chain distributed highly concurrent transaction processing system, including:
the request cache pool is used for uniformly storing a plurality of user requests sent by the client;
the DNS server cluster is used for performing domain name resolution for the plurality of request messages;
the load balancer cluster is used for converting the IP address resolved from each request message; each load balancer modifies the target IP address in a request message to the server address selected by the scheduling algorithm and re-encapsulates the request message to form a data packet;
the reverse proxy server cluster is used for processing the data packet re-encapsulated by the load balancer, if the reverse proxy server can obtain a processing result, the result is fed back to the client, and if the reverse proxy server cannot obtain the processing result, the data packet is forwarded;
the high concurrency pool cluster is used for receiving a plurality of data packets forwarded by the plurality of reverse proxy servers and performing distributed parallel processing on the forwarded data packets through the plurality of concurrency pools;
the distributed message system is used for issuing the data processed by the high concurrency pool cluster;
and the cache clusters are used for correspondingly synchronizing the data issued by the distributed message system to different block chains respectively.
In addition, the present invention also provides an apparatus comprising a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the steps of the method by calling the computer program stored in the memory.
The present invention also provides a computer-readable storage medium for storing a computer program for distributed highly concurrent transaction processing of a block chain, which computer program, when running on a computer, performs the steps of the method as described above.
The invention has the following effects:
according to the distributed high-concurrency transaction processing method of the block chain, disclosed by the invention, a large number of user requests sent by the client are received and then uniformly stored in the request cache pool, so that the first-level cache is realized, the problem of request loss during the sudden increase of the requests is avoided, and then, a DNS server cluster is utilized to carry out IP address resolution on a large number of request messages, so that the first-level load balance is realized. And then, converting the IP address analyzed by the DNS server cluster through the load balancer cluster, modifying a target IP address in a request message into a server address selected by a scheduling algorithm by each load balancer, repackaging the request message to form a data packet, and realizing second-level load balancing through the load balancer cluster. Then, the mass data packets are processed through the reverse proxy server cluster, and third-level load balancing and multi-level caching are achieved. By adopting a multi-level load balancing and multi-level cache mechanism to distribute the user request to a plurality of processing servers, the response speed of the user request can be accelerated, the throughput is improved, and the condition that the processing servers are crashed due to the sudden increase of the user request is prevented. And then, the high concurrency pool cluster is used for carrying out distributed and parallel processing on the mass data, so that the scheduling pressure can be reduced, the problem of large-scale scheduling is effectively solved, the memory pressure is relieved, and the resource utilization efficiency is improved. 
And then the data processed by the high concurrency pool cluster is issued through a distributed message system, so that the message can be duralized, the data can be conveniently processed in batch, and the data is written into the corresponding block chain in high speed and in parallel through a plurality of cache clusters. The distributed high-concurrency transaction processing method of the block chain can effectively improve cluster throughput and data processing capacity, can respond to a large number of high-concurrency user requests in each scene of the block chain, and meets the requirement of high concurrency and high performance of the block chain in the future.
In addition, the distributed high-concurrency transaction processing system, the distributed high-concurrency transaction processing equipment and the computer-readable storage medium of the block chain have the advantages.
In addition to the above-described objects, features and advantages, the present invention has other objects, features and advantages. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flow chart of a distributed high-concurrency transaction processing method of a block chain according to a preferred embodiment of the present invention.
Fig. 2 is a schematic block diagram of a distributed high-concurrency transaction processing system with a block chain according to another embodiment of the present invention.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be embodied in many different ways, as defined and covered below.
As shown in fig. 1, a preferred embodiment of the present invention provides a method for distributed high-concurrency transaction processing of a blockchain, which includes the following steps:
step S1: the client sends a plurality of user requests, and the service node receives the user requests and stores the plurality of request messages uniformly in a request cache pool;
step S2: a DNS server cluster performs domain name resolution for the plurality of request messages to obtain IP addresses;
step S3: a load balancer cluster converts the IP address resolved from each request message; each load balancer modifies the target IP address in a request message to the server address selected by a scheduling algorithm and re-encapsulates the request message to form a data packet;
step S4: a reverse proxy server cluster processes the re-encapsulated data packets; if a reverse proxy server can obtain a processing result, it feeds the result back to the client, and if not, it forwards the data packet;
step S5: a high-concurrency pool cluster receives the data packets forwarded by the plurality of reverse proxy servers and performs distributed, parallel processing on them through a plurality of concurrency pools;
step S6: a distributed message system publishes the data processed by the high-concurrency pool cluster;
step S7: a plurality of cache clusters correspondingly synchronize the data published by the distributed message system to different blockchains.
It can be understood that, in the blockchain distributed high-concurrency transaction processing method of this embodiment, a large number of user requests sent by clients are received and then stored uniformly in the request cache pool, implementing the first-level cache and avoiding request loss during a request surge; a DNS server cluster then performs domain name resolution for the large number of request messages, implementing first-level load balancing. Next, the load balancer cluster converts the IP addresses resolved by the DNS server cluster: each load balancer modifies the target IP address in a request message to the server address selected by the scheduling algorithm and re-encapsulates the request message into a data packet, so the load balancer cluster implements second-level load balancing. The mass of data packets is then processed by the reverse proxy server cluster, implementing third-level load balancing and multi-level caching. Distributing user requests to a plurality of processing servers through this multi-level load balancing and multi-level caching mechanism speeds up the response to user requests, improves throughput, and prevents processing servers from crashing under a surge of requests. The high-concurrency pool cluster then performs distributed, parallel processing on the mass data, which reduces scheduling pressure, effectively solves the large-scale scheduling problem, relieves memory pressure, and improves resource utilization efficiency. The data processed by the high-concurrency pool cluster is then published through the distributed message system, so that messages can be persisted and processed in batches, and the data is written to the corresponding blockchains at high speed and in parallel through a plurality of cache clusters.
The distributed high-concurrency transaction processing method of the block chain can effectively improve cluster throughput and data processing capacity, can respond to a large number of high-concurrency user requests in each scene of the block chain, and meets the requirement of high concurrency and high performance of the block chain in the future.
It can be understood that, in step S1, when user requests surge, a plurality of clients may send a large number of user requests; the service node receives them and stores them uniformly in the request cache pool to await batch processing, rather than processing them one by one, thereby preventing request loss.
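As an illustrative sketch only (the patent does not specify an implementation), the request cache pool of step S1 can be modeled as a bounded buffer that absorbs a burst of requests and hands them out in batches; the class and method names below are hypothetical:

```python
from queue import Queue, Empty

class RequestCachePool:
    """Buffers incoming request messages so they can be drained in
    batches instead of being processed (or dropped) one by one."""

    def __init__(self, maxsize=10000):
        self._queue = Queue(maxsize=maxsize)  # bounded first-level cache

    def store(self, request_message):
        # Enqueue without blocking; a full pool raises queue.Full,
        # which a real service node would turn into back-pressure.
        self._queue.put_nowait(request_message)

    def drain_batch(self, batch_size):
        # Pull up to batch_size messages for downstream batch processing.
        batch = []
        while len(batch) < batch_size:
            try:
                batch.append(self._queue.get_nowait())
            except Empty:
                break
        return batch

pool = RequestCachePool()
for i in range(5):
    pool.store({"request_id": i})
print(len(pool.drain_batch(3)))   # 3
print(len(pool.drain_batch(10)))  # 2
```

A bounded queue is used deliberately: during a surge the pool fills up instead of the processing servers being overwhelmed.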
It can be understood that, in step S2, each DNS server compares a request message against the resource records in the DNS database to obtain a domain name resolution result, implementing first-level load balancing, and sends the domain name resolution result together with the request message to the load balancer cluster. The DNS is a distributed database in which domain names and IP addresses are mapped to each other; it stores a number of resource records, each representing a specific resource, and SOA, NS and A records are used to build the DNS. When a domain name resolution request arrives, the server completes one round of scheduling according to a preset scheduling algorithm and returns the computed result; by building a DNS cluster that matches requests against different A records, load balancing can be realized.
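The idea of matching different A records to spread load can be sketched as follows; `DnsLoadBalancer` and its zone data are hypothetical illustrations, not the patent's implementation:

```python
from itertools import cycle

class DnsLoadBalancer:
    """Toy first-level load balancer: a domain name maps to several
    A records, and each resolution returns the next address in turn."""

    def __init__(self, zone):
        # zone: domain name -> list of A-record IP addresses
        self._rotations = {name: cycle(ips) for name, ips in zone.items()}

    def resolve(self, domain):
        # A real DNS server would consult SOA/NS/A resource records;
        # here one lookup simply completes one scheduling decision.
        return next(self._rotations[domain])

dns = DnsLoadBalancer({"chain.example": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
print([dns.resolve("chain.example") for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```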
It can be understood that, in step S3, each load balancer converts the IP address resolved from a request message to implement second-level load balancing: through its scheduler it modifies the target IP address in the request message to the IP address of the next-level proxy server selected by a weighted round-robin scheduling algorithm, modifies the port address correspondingly, re-encapsulates the request message into a data packet, and sends the data packet to the corresponding next-level proxy server. Load balancing is a technique for optimizing resource utilization; applying different load balancing techniques makes full use of all kinds of resources, including network resources, memory, CPUs and disk devices. Used properly, load balancing improves system throughput and application response time while avoiding overload of computing resources. The scheduling algorithm used for load balancing in the present invention is a weighted round-robin algorithm, which distributes requests according to the running condition of each server. Specifically, the scheduler assigns each server a weight: a server with a low load and fast responses receives a high weight, and otherwise a low one. The scheduler dynamically updates a server's weight according to its load and schedules in first-to-last order; requests wait in a queue and servers wait in a queue, so when weights are equal every request and server is treated fairly under a first-come, first-served rule. Resources are therefore allocated more reasonably and requests are answered faster.
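As a hedged sketch, one well-known weighted round-robin variant (the "smooth" algorithm popularized by nginx) shows how higher-weight servers are selected more often without bursts; the patent's scheduler additionally updates weights dynamically from server load, which is omitted here:

```python
class SmoothWeightedRoundRobin:
    """Smooth weighted round-robin: each pick, every server's current
    score grows by its weight; the highest score wins and pays back the
    total weight, which interleaves choices instead of bunching them."""

    def __init__(self, weights):
        # weights: server name -> static weight (a real scheduler, as
        # described above, would refresh these from measured load).
        self._weights = dict(weights)
        self._current = {name: 0 for name in weights}

    def next_server(self):
        total = sum(self._weights.values())
        for name, weight in self._weights.items():
            self._current[name] += weight
        best = max(self._current, key=self._current.get)
        self._current[best] -= total
        return best

wrr = SmoothWeightedRoundRobin({"s1": 3, "s2": 1, "s3": 1})
print([wrr.next_server() for _ in range(5)])
# ['s1', 's2', 's1', 's3', 's1']
```

Over any window of five picks, `s1` is chosen three times and the others once each, matching the 3:1:1 weights.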
It can be understood that, in step S4, each reverse proxy server receives the data packets sent by its corresponding load balancer, implementing third-level load balancing. After receiving a request data packet, the reverse proxy server checks the request against its local cache; if a processing result can be obtained, it is fed back directly to the client. If not, the request is checked against the cache cluster; if a processing result can be obtained there, it is fed back directly to the client; otherwise the data packet is sent to a processing server, which handles the forwarded request.
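The lookup order of step S4 (local cache, then cache cluster, then processing server) can be sketched as a simple function; the names below are illustrative only:

```python
def handle_request(request_key, local_cache, cache_cluster, processing_server):
    """Multi-level lookup: local cache first, then the shared cache
    cluster, and only then the processing server."""
    if request_key in local_cache:
        return local_cache[request_key]        # fastest path
    if request_key in cache_cluster:
        result = cache_cluster[request_key]
        local_cache[request_key] = result      # warm the local cache
        return result
    result = processing_server(request_key)    # forwarded for processing
    cache_cluster[request_key] = result        # populate both cache levels
    local_cache[request_key] = result
    return result

calls = []
def server(key):
    calls.append(key)
    return f"result-for-{key}"

local, shared = {}, {}
print(handle_request("tx1", local, shared, server))  # result-for-tx1
print(handle_request("tx1", local, shared, server))  # same, from cache
print(len(calls))  # 1 -> the processing server was hit only once
```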
It can be understood that, in step S5, after a concurrency pool receives data packets it constructs a task queue and a dispatcher and places the request packets in the task queue to wait to be scheduled. The dispatcher then creates a work pool and instantiates workers up to the pool's capacity so that processing can proceed at maximum concurrency. The dispatcher continuously retrieves tasks from the task queue; each time a task is retrieved it schedules a waiting worker to process it, building a task channel between the worker and the task so that requests and resources are scheduled one to one. A worker registers itself in the work pool when it starts, and the worker completes the data processing. To address the low data processing capacity of blockchains, a high-concurrency pool cluster performs distributed parallel computing and multiplexes computing resources, which reduces the scheduling pressure on those resources, raises resource utilization, and speeds up data processing.
In addition, if the dispatcher finds that the number of running workers exceeds the capacity of the work pool, it stops scheduling and the concurrency pool blocks; scheduling resumes once a worker that has finished its task is released back into the work pool.
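Under these assumptions, one concurrency pool with its task queue, dispatcher, bounded work pool and blocking behavior might be sketched as follows (a toy model, not the patent's implementation; a semaphore stands in for the work-pool capacity check):

```python
import queue
import threading

def run_concurrency_pool(tasks, pool_capacity=4):
    """Task queue + dispatcher + bounded work pool: the dispatcher blocks
    when pool_capacity workers are running and resumes when one finishes."""
    task_queue = queue.Queue()
    for t in tasks:
        task_queue.put(t)          # requests queue up, waiting to be scheduled

    results = []
    lock = threading.Lock()
    work_pool = threading.Semaphore(pool_capacity)  # work-pool capacity
    workers = []

    def worker(task):
        try:
            with lock:
                results.append(task * task)  # stand-in for real processing
        finally:
            work_pool.release()              # worker returns to the pool

    # Dispatcher loop: one task per available worker, one-to-one.
    while not task_queue.empty():
        work_pool.acquire()                  # blocks while the pool is full
        t = threading.Thread(target=worker, args=(task_queue.get(),))
        workers.append(t)
        t.start()

    for t in workers:
        t.join()
    return sorted(results)

print(run_concurrency_pool(range(6)))  # [0, 1, 4, 9, 16, 25]
```

The semaphore makes the blocking rule above concrete: `acquire` stalls the dispatcher exactly when all workers are busy, and each worker's `release` is what "releases it back into the work pool".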
It can be understood that, in step S6, a plurality of message producers of the distributed message system publish the data processed by the high-concurrency pool cluster to different partitions; each partition has its own specific number and keeps its messages in order. Each consumer of the distributed message system subscribes to different partitions: one consumer can subscribe to multiple partitions at the same time, and one partition can be subscribed to by multiple consumers. A message is not cleared immediately after being consumed, so the data can be stored persistently; responses to the data are very fast and there is a degree of fault tolerance, and messages can be published and subscribed either synchronously or asynchronously.
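A toy model of the partitioned publish/subscribe behavior described above, with per-consumer read offsets so that consuming a message does not clear it (the names are illustrative; a production system would use a broker such as Kafka, which the patent does not name):

```python
class PartitionedLog:
    """Toy distributed-message-system broker: producers append to
    numbered partitions, messages stay ordered, and the broker keeps
    them after consumption (persistence)."""

    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def publish(self, key, message):
        # A stable hash of the key picks the partition, so messages with
        # the same key always land in the same ordered partition.
        self.partitions[hash(key) % len(self.partitions)].append(message)

class Consumer:
    """Each consumer tracks its own read offset per partition, so many
    consumers can subscribe to the same partition independently."""

    def __init__(self, log, partition):
        self.log, self.partition, self.offset = log, partition, 0

    def poll(self):
        msgs = self.log.partitions[self.partition][self.offset:]
        self.offset += len(msgs)  # only the offset advances;
        return msgs               # the broker still holds the messages

log = PartitionedLog(num_partitions=2)
for i in range(4):
    log.publish("chain-a", f"tx{i}")
part = hash("chain-a") % 2
c1, c2 = Consumer(log, part), Consumer(log, part)  # two consumers, one partition
print(c1.poll())  # ['tx0', 'tx1', 'tx2', 'tx3']
print(c2.poll())  # ['tx0', 'tx1', 'tx2', 'tx3'] -> nothing was cleared
```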
It can be understood that, in step S7, each blockchain corresponds to one cache cluster, each cache cluster corresponds to one consumer in the distributed message system, and each cache cluster synchronizes the data published by the distributed message system onto its corresponding blockchain, thereby putting the data on the chain. Specifically, the cache cluster synchronizes the data it subscribed to onto the blockchain: it creates a configuration file for the client, creates and instantiates an SDK (software development kit), and uses the SDK to interact with the blockchain network.
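Since the patent names no concrete SDK, the chaining step can only be sketched against a stand-in client; every class, method and configuration key below is hypothetical:

```python
class FakeBlockchainSDK:
    """Hypothetical stand-in for a real blockchain SDK; the patent does
    not name one, so every call here is illustrative only."""

    def __init__(self, config):
        self.config = config  # stands in for the client configuration file
        self.ledger = []

    def submit_transaction(self, payload):
        # A real SDK would sign the transaction and send it to the
        # blockchain network; here we just append to a local list.
        self.ledger.append(payload)
        return len(self.ledger) - 1  # pretend position on the chain

def sync_cache_cluster_to_chain(subscribed_batch, sdk):
    """Step S7 sketch: a cache cluster takes the batch it subscribed to
    from the message system and writes each record onto its blockchain."""
    return [sdk.submit_transaction(record) for record in subscribed_batch]

sdk = FakeBlockchainSDK(config={"channel": "demo-channel"})  # hypothetical config
ids = sync_cache_cluster_to_chain(["tx0", "tx1", "tx2"], sdk)
print(ids)         # [0, 1, 2]
print(sdk.ledger)  # ['tx0', 'tx1', 'tx2']
```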
It can be understood that the blockchain distributed high-concurrency transaction processing method of the present invention uses DNS domain name resolution, load balancers and reverse proxies to distribute user requests across a plurality of servers in a load-balanced way, improving the throughput and network carrying capacity of the system; it uses a distributed message system to persist messages for convenient batch consumption and builds producer and consumer clusters that can scale elastically, thereby providing high throughput. In addition, a high-concurrency pool cluster is built to process mass data in parallel, optimizing resource and service scheduling and improving both the response speed of user requests and the efficiency of resource utilization; the data processed by the high-concurrency pools is quickly put on the chain using the SDK, solving the problem of slow blockchain data processing.
In addition, as shown in fig. 2, another embodiment of the present invention further provides a distributed high-concurrency transaction processing system for a blockchain, which preferably adopts the method of the foregoing embodiment, where the system includes:
the request cache pool is used for uniformly storing a plurality of user requests sent by the client;
the DNS server cluster is used for performing domain name resolution for the plurality of request messages;
the load balancer cluster is used for converting the IP address resolved from each request message; each load balancer modifies the target IP address in a request message to the server address selected by the scheduling algorithm and re-encapsulates the request message to form a data packet;
the reverse proxy server cluster is used for processing the data packet re-encapsulated by the load balancer, if the reverse proxy server can obtain a processing result, the result is fed back to the client, and if the reverse proxy server cannot obtain the processing result, the data packet is forwarded;
the high concurrency pool cluster is used for receiving a plurality of data packets forwarded by the plurality of reverse proxy servers and performing distributed parallel processing on the forwarded data packets through the plurality of concurrency pools;
the distributed message system is used for issuing the data processed by the high concurrency pool cluster;
and the cache clusters are used for correspondingly synchronizing the data issued by the distributed message system to different block chains respectively.
It can be understood that, in the blockchain distributed high-concurrency transaction processing system of this embodiment, a large number of user requests sent by clients are received and then stored uniformly in the request cache pool, implementing the first-level cache and avoiding request loss during a request surge; a DNS server cluster then performs domain name resolution for the large number of request messages, implementing first-level load balancing. Next, the load balancer cluster converts the IP addresses resolved by the DNS server cluster: each load balancer modifies the target IP address in a request message to the server address selected by the scheduling algorithm and re-encapsulates the request message into a data packet, so the load balancer cluster implements second-level load balancing. The mass of data packets is then processed by the reverse proxy server cluster, implementing third-level load balancing and multi-level caching. Distributing user requests to a plurality of processing servers through this multi-level load balancing and multi-level caching mechanism speeds up the response to user requests, improves throughput, and prevents processing servers from crashing under a surge of requests. The high-concurrency pool cluster then performs distributed, parallel processing on the mass data, which reduces scheduling pressure, effectively solves the large-scale scheduling problem, relieves memory pressure, and improves resource utilization efficiency. The data processed by the high-concurrency pool cluster is then published through the distributed message system, so that messages can be persisted and processed in batches, and the data is written to the corresponding blockchains at high speed and in parallel through a plurality of cache clusters.
The blockchain distributed high-concurrency transaction processing method can effectively improve cluster throughput and data processing capacity, can respond to large numbers of highly concurrent user requests in every blockchain scenario, and meets the future demand for high concurrency and high performance in blockchains.
In addition, another embodiment of the present invention provides an apparatus comprising a processor and a memory, wherein the memory stores a computer program and the processor is configured to perform the steps of the method described above by invoking the computer program stored in the memory.
In addition, as shown in fig. 2, another embodiment of the present invention provides a computer-readable storage medium for storing a computer program for blockchain distributed high-concurrency transaction processing, the computer program performing the steps of the method described above when run on a computer.
Typical forms of computer-readable storage media include: floppy disks, flexible disks, hard disks, magnetic tape or any other magnetic medium, CD-ROM or any other optical medium, punch cards, paper tape or any other physical medium with patterns of holes, Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. The instructions may further be transmitted or received over a transmission medium. The term transmission medium includes any tangible or intangible medium operable to store, encode, or carry instructions for execution by a machine, including digital or analog communication signals or other intangible media that facilitate communication of such instructions. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus for carrying a computer data signal.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (8)

1. A blockchain distributed high-concurrency transaction processing method, characterized by comprising the following steps:
step S1: a client sends a plurality of user requests, and a service node receives the user requests and uniformly stores the plurality of request messages in a request cache pool;
step S2: performing domain name resolution for the request messages by using a DNS server cluster; each DNS server compares a request message against the resource records in a DNS database to obtain a domain name resolution result, thereby implementing first-level load balancing, and sends the domain name resolution result and the request message to a load balancer cluster;
step S3: converting, by the load balancer cluster, the IP address resolved for each request message: each load balancer modifies the target IP address in a request message to the server address selected by a scheduling algorithm and re-encapsulates the request message to form a data packet;
step S4: processing the plurality of re-encapsulated data packets with a reverse proxy server cluster; if a reverse proxy server can obtain a processing result, the result is fed back to the client, and if it cannot, the data packet is forwarded; each reverse proxy server receives the data packets sent by a corresponding load balancer, thereby implementing third-level load balancing; after receiving a request data packet, each reverse proxy server compares the request message against its local cache, and if a processing result is obtained it is fed back to the client directly; otherwise the request message is compared against the cache cluster, and if a processing result is obtained it is fed back to the client directly; otherwise the data packet is sent to a processing server, which forwards it;
step S5: receiving, with a high-concurrency pool cluster, the plurality of data packets forwarded by the plurality of reverse proxy servers, and performing distributed parallel processing on the forwarded data packets through a plurality of concurrency pools;
step S6: publishing the data processed by the high-concurrency pool cluster through a distributed message system;
step S7: synchronizing the data published by the distributed message system to the corresponding blockchains through a plurality of cache clusters.
2. The method according to claim 1, wherein in step S3 each load balancer converts the IP address resolved from a request message to implement second-level load balancing: the scheduler modifies the target IP address in the request message to the IP address of a next-level proxy server selected by a weighted round-robin scheduling algorithm, modifies the port address accordingly, re-encapsulates the request message, and sends the resulting data packet to that next-level proxy server.
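As an illustrative aside (not part of the claim language), the weighted round-robin selection named in claim 2 can be sketched as follows. The smooth variant shown, as popularized by Nginx, is an assumption; the claim does not specify which variant is used.

```python
# Hypothetical sketch of smooth weighted round-robin: servers with higher
# weights are selected proportionally more often, and selections are spread
# out rather than bunched together.

def smooth_weighted_rr(servers):
    """servers: dict mapping server name -> integer weight.
    Yields server names in smooth weighted round-robin order."""
    current = {s: 0 for s in servers}   # running "current weight" per server
    total = sum(servers.values())
    while True:
        # Every server's current weight grows by its configured weight...
        for s, w in servers.items():
            current[s] += w
        # ...the largest current weight wins, and is penalized by the total.
        best = max(current, key=current.get)
        current[best] -= total
        yield best
```

Over any window of `total` consecutive selections, each server is picked exactly `weight` times, which matches the proportional distribution a load balancer in step S3 would need.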
3. The method according to claim 1, wherein in step S5 each concurrency pool, after receiving data packets, establishes a task queue and a distributor; request messages are placed in the task queue to wait to be scheduled; the distributor then creates a work pool and instantiates workers so as to maximize processing; the distributor continuously pulls tasks from the task queue and, each time a task is pulled, schedules a waiting worker to process it; a task channel is established between the worker and the task so that requests and resources are scheduled one-to-one; each worker registers itself in the work pool on start-up, and the workers complete the data processing.
4. The blockchain distributed high-concurrency transaction processing method according to claim 3, wherein if the distributor determines that the number of workers operating in the work pool has reached the capacity of the work pool, it stops scheduling and the concurrency pool enters a blocking state; scheduling resumes once workers complete their tasks and are released back into the work pool.
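As an illustrative aside (not claim language), the task queue / distributor / bounded work pool of claims 3 and 4 can be sketched as below. This is a minimal sketch using Python threads; the function and parameter names (`run_concurrency_pool`, `capacity`) are hypothetical, and `queue.Queue` supplies the blocking behaviour of claim 4 for free.

```python
# Hypothetical sketch of claims 3-4: a distributor pulls tasks from a task
# queue and hands each to a worker drawn from a fixed-capacity work pool;
# when every worker is busy, the distributor blocks until one is released.
import queue
import threading

def run_concurrency_pool(tasks, handler, capacity=4):
    task_queue = queue.Queue()
    for t in tasks:                      # requests queue up, waiting to be scheduled
        task_queue.put(t)
    results = queue.Queue()
    work_pool = queue.Queue(maxsize=capacity)
    for wid in range(capacity):          # workers register in the pool on start-up
        work_pool.put(wid)

    def distribute():
        while True:
            try:
                task = task_queue.get_nowait()
            except queue.Empty:
                return                   # task queue drained: stop scheduling
            wid = work_pool.get()        # blocks while the pool is at capacity
            def work(task=task, wid=wid):
                try:
                    results.put(handler(task))
                finally:
                    work_pool.put(wid)   # release the worker back into the pool
            threading.Thread(target=work).start()

    d = threading.Thread(target=distribute)
    d.start()
    d.join()
    out = []
    while len(out) < len(tasks):         # collect one result per task
        out.append(results.get())
    return out
```

The one-to-one pairing of a pulled task with a drawn worker id mirrors the claim's "task channel" between a request and a resource.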
5. The blockchain distributed high-concurrency transaction processing method according to claim 3, wherein in step S6 a plurality of message producers of the distributed message system publish the data processed by the high-concurrency pool cluster to different partitions, each partition having its own specific number and keeping its records in order; each consumer of the distributed message system subscribes to a different partition, and messages are published and consumed asynchronously.
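As an illustrative aside (not claim language), the numbered, ordered partitions of claim 5 can be sketched as follows. The Kafka-style key-to-partition assignment and per-consumer offsets are assumptions for illustration; the claim itself only requires numbered partitions, ordering within a partition, and asynchronous publish/subscribe.

```python
# Hypothetical sketch of claim 5: producers publish records to numbered
# partitions (appended in order within each partition), and each consumer
# reads its own partition independently of the producers.
import collections

class PartitionedLog:
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]
        # (consumer, partition) -> next offset this consumer will read
        self.offsets = collections.defaultdict(int)

    def publish(self, key, value):
        # Records sharing a key land in one partition, so they stay ordered.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, consumer, partition):
        """Return the records appended to `partition` since this consumer last read."""
        off = self.offsets[(consumer, partition)]
        records = self.partitions[partition][off:]
        self.offsets[(consumer, partition)] = len(self.partitions[partition])
        return records
```

Because each consumer tracks its own offset per partition, publishing and consuming are decoupled, which is the asynchronous behaviour the claim describes.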
6. A blockchain distributed highly concurrent transaction processing system, comprising:
a request cache pool for uniformly storing a plurality of user requests sent by clients;
a DNS server cluster for performing domain name resolution for the request messages; each DNS server compares a request message against the resource records in a DNS database to obtain a domain name resolution result, thereby implementing first-level load balancing, and sends the domain name resolution result and the request message to a load balancer cluster;
the load balancer cluster, for converting the IP address resolved for each request message; each load balancer modifies the target IP address in a request message to the server address selected by the scheduling algorithm and re-encapsulates the request message to form a data packet;
a reverse proxy server cluster for processing the data packets re-encapsulated by the load balancers; if a reverse proxy server can obtain a processing result, the result is fed back to the client, and if it cannot, the data packet is forwarded; each reverse proxy server receives the data packets sent by a corresponding load balancer, thereby implementing third-level load balancing; after receiving a request data packet, each reverse proxy server compares the request message against its local cache, and if a processing result is obtained it is fed back to the client directly; otherwise the request message is compared against the cache cluster, and if a processing result is obtained it is fed back to the client directly; otherwise the data packet is sent to a processing server, which forwards it;
a high-concurrency pool cluster for receiving the plurality of data packets forwarded by the plurality of reverse proxy servers and performing distributed parallel processing on them through a plurality of concurrency pools;
a distributed message system for publishing the data processed by the high-concurrency pool cluster;
and a plurality of cache clusters for synchronizing the data published by the distributed message system to the corresponding blockchains.
7. An electronic device comprising a processor and a memory, the memory storing a computer program, the processor being configured to perform the steps of the method of any one of claims 1 to 5 by invoking the computer program stored in the memory.
8. A computer-readable storage medium storing a computer program for blockchain distributed highly concurrent transaction processing, wherein the computer program, when run on a computer, performs the steps of the method of any one of claims 1 to 5.
CN202110965424.1A 2021-08-23 2021-08-23 Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium Active CN113691611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110965424.1A CN113691611B (en) 2021-08-23 2021-08-23 Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113691611A CN113691611A (en) 2021-11-23
CN113691611B true CN113691611B (en) 2022-11-22

Family

ID=78581245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110965424.1A Active CN113691611B (en) 2021-08-23 2021-08-23 Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113691611B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986516A (en) * 2021-12-27 2022-01-28 广州朗国电子科技股份有限公司 Distributed task scheduling system based on Hongming system
CN114640681B (en) * 2022-03-10 2024-05-17 京东科技信息技术有限公司 Data processing method and system
CN115720238B (en) * 2022-09-01 2024-04-02 西安电子科技大学 System and method for processing block chain request supporting high concurrency
CN117354117B (en) * 2023-10-10 2024-05-31 南京汇银迅信息技术有限公司 Java and MQ-based distributed message communication method and system
CN117896380B (en) * 2024-03-14 2024-05-31 广州云积软件技术有限公司 High concurrency information processing method, system and device for cloud examination

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204232A (en) * 2016-07-18 2016-12-07 苏州华车网络科技有限公司 A kind of system and method processing high concurrent interaction data request
WO2017016336A1 (en) * 2015-07-30 2017-02-02 中兴通讯股份有限公司 Method and apparatus for data processing and query
CN109344172A (en) * 2018-08-31 2019-02-15 深圳市元征科技股份有限公司 A kind of high concurrent data processing method, device and client-server
CN109800260A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 High concurrent date storage method, device, computer equipment and storage medium
WO2021068567A1 (en) * 2019-10-12 2021-04-15 平安科技(深圳)有限公司 Blockchain block distribution method, apparatus, computer device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104734946A (en) * 2015-04-09 2015-06-24 北京易掌云峰科技有限公司 Multi-tenant high-concurrency instant messaging cloud platform
CN107872398A (en) * 2017-06-25 2018-04-03 平安科技(深圳)有限公司 High concurrent data processing method, device and computer-readable recording medium
CN107734004A (en) * 2017-09-26 2018-02-23 河海大学 A kind of high concurrent SiteServer LBS based on Nginx, Redis
CN109446273B (en) * 2018-12-04 2022-07-22 深圳前海环融联易信息科技服务有限公司 Data synchronization method and device of block chain, computer equipment and storage medium
CN112449012B (en) * 2020-11-17 2024-04-05 中国平安财产保险股份有限公司 Data resource scheduling method, system, server and read storage medium
CN112486655A (en) * 2020-12-08 2021-03-12 珠海格力电器股份有限公司 High-concurrency data processing system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis of Key Technologies for Building a Highly Available Web Platform; Xi Jianxiao; 《数字技术与应用》 (Digital Technology and Application); 2016-01-15 (No. 01); full text *

Also Published As

Publication number Publication date
CN113691611A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN113691611B (en) Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium
CN109257293B (en) Speed limiting method and device for network congestion and gateway server
CN106170016A (en) A kind of method and system processing high concurrent data requests
US20040024873A1 (en) Load balancing the servicing of received packets
CN111522653A (en) Container-based network function virtualization platform
CN111913784B (en) Task scheduling method and device, network element and storage medium
CN113422842B (en) Distributed power utilization information data acquisition system considering network load
CN110071965B (en) Data center management system based on cloud platform
WO2020019743A1 (en) Traffic control method and device
CN106294472A (en) The querying method of a kind of Hadoop data base HBase and device
CN113064742A (en) Message processing method, device, equipment and storage medium
US11556382B1 (en) Hardware accelerated compute kernels for heterogeneous compute environments
CN102904961A (en) Method and system for scheduling cloud computing resources
US20130054735A1 (en) Wake-up server
TWI647636B (en) Load balancing system for blockchain and method thereof
CN112104679B (en) Method, apparatus, device and medium for processing hypertext transfer protocol request
CN113626221B (en) Message enqueuing method and device
CN111741079A (en) Micro-service architecture based interface processing method and system
CN111338750A (en) Pressure adjusting method and device for execution node, server and storage medium
CN111475315A (en) Server and subscription notification push control and execution method
CN112260962B (en) Bandwidth control method and device
CN108259605B (en) Data calling system and method based on multiple data centers
CN109257303A (en) QoS queue dispatching method, device and satellite communication system
CN113364888A (en) Service scheduling method, system, electronic device and computer readable storage medium
CN115391053B (en) Online service method and device based on CPU and GPU hybrid calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant