CN109344172B - High-concurrency data processing method and device and client server


Info

Publication number
CN109344172B
Authority
CN
China
Prior art keywords
data
processed
task
query
processing
Prior art date
Legal status
Active
Application number
CN201811015482.2A
Other languages
Chinese (zh)
Other versions
CN109344172A (en)
Inventor
刘均
魏玉林
Current Assignee
Shenzhen Launch Technology Co Ltd
Original Assignee
Shenzhen Launch Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Launch Technology Co Ltd filed Critical Shenzhen Launch Technology Co Ltd
Priority to CN201811015482.2A priority Critical patent/CN109344172B/en
Publication of CN109344172A publication Critical patent/CN109344172A/en
Application granted granted Critical
Publication of CN109344172B publication Critical patent/CN109344172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a high-concurrency data processing method, a high-concurrency data processing apparatus, a client server, and a computer-readable storage medium. The high-concurrency data processing method comprises: periodically triggering a data query task based on a preset first interval time, and, if data to be processed exists, storing the data to be processed in a message queue; and periodically triggering a data processing task based on a preset second interval time, the data processing task comprising: detecting whether data to be processed exists in the message queue; if data to be processed exists in the message queue, acquiring a preset number of items of the data to be processed; and processing the acquired data to be processed. By periodically triggering the data query task and the data processing task, the scheme improves the speed and efficiency of data processing.

Description

High-concurrency data processing method and device and client server
Technical Field
The present application belongs to the field of data processing technologies, and in particular, to a high-concurrency data processing method, a high-concurrency data processing apparatus, a client server, and a computer-readable storage medium.
Background
Internet data is growing at a near-geometric rate. As the amount of information and the number of users increase, big data and high concurrency have become problems that software design must address: the traditional single request has evolved into large-scale, multi-level volumes of service requests. Faced with large-scale concurrent requests and data transmission, both the client server acting as the request side and the blockchain server providing the service are under enormous pressure.
Conventional approaches to high-concurrency requests over mass data include batched asynchronous requests and lightweight interface splitting. Batched asynchronous requests, however, increase the instantaneous pressure on the data processing server and the client server and raise the instantaneous data throughput, eventually causing the client server and/or the data processing server to go down, hang, crash, time out, or suffer increased delay. Lightweight interface splitting divides the interface request and reduces both the data requested and the data returned by the server, lowering data throughput, data exchange volume, and request volume so as to reduce the load on requester and service provider; but this approach frequently fails to meet the service requirements of large internet companies and multinational corporations.
Disclosure of Invention
In view of the above, the present application provides a high-concurrency data processing method, a high-concurrency data processing apparatus, a client server and a computer readable storage medium, which can improve the speed and efficiency of high-concurrency data processing.
A first aspect of the present application provides a high-concurrency data processing method, including:
periodically triggering a data query task based on a preset first interval time, and storing the data to be processed into a message queue if the data to be processed exists;
periodically triggering a data processing task based on a preset second interval time, wherein the data processing task comprises: detecting whether the data to be processed exists in the message queue; if the data to be processed exists in the message queue, acquiring a preset number of the data to be processed; and processing the acquired data to be processed.
Optionally, the periodically triggering the task of querying data based on the preset first interval time includes:
detecting the data amount of the queried data;
and if the ratio of the data volume of the queried data to the total data volume reaches a preset ratio, performing a page refresh jump, and after the page refresh jump, continuing to periodically trigger the data query task based on the first interval time.
Optionally, after the data query task is triggered periodically based on the preset first interval time, the data processing method further includes:
the flag bit of the data that has been queried is changed.
Optionally, the periodically triggering the data processing task based on the preset second interval time includes:
detecting whether the current data processing pressure condition is matched with a preset pressure condition or not;
and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on a preset second interval time.
Optionally, the method for processing high concurrency data further includes:
if the data to be processed fails to be processed, the data to be processed is pressed into the message queue again;
and writing the failure condition of the current processing into a log record.
Optionally, the data querying task includes a step of querying whether to-be-processed data exists in a paging batch, where the step of querying whether to-be-processed data exists in the paging batch includes:
for any data table, sorting each data in an ascending order according to the primary key ID of each data in the data table;
acquiring the maximum value of the primary key ID in the inquired data of the previous batch;
determining the value of the primary key ID of the initial data of the next batch to be inquired according to the maximum value of the primary key ID;
and querying a batch of data based on the initial data and the sorting sequence, wherein the data volume of the batch of data does not exceed a preset data volume threshold.
Optionally, if the highly concurrent data processing method is applied to the block chain uplink, the processing the acquired to-be-processed data includes:
sending an asynchronous uplink request to a block chain server so as to uplink the acquired data to be processed to the block chain server to become a node of a block chain;
the high concurrency data processing method further comprises the following steps:
if the data to be processed is successfully linked, receiving a transaction hash returned by the block chain server;
generating an asset ID based on the uplink data;
encrypting the transaction hash, the asset ID and the uplink data;
and updating the encrypted data to the corresponding position of the database.
A second aspect of the present application provides a high-concurrency data processing apparatus including:
the query data task triggering module is used for periodically triggering a query data task based on a preset first interval time, and storing the data to be processed into a message queue if the data to be processed exists;
the processing data task triggering module is used for periodically triggering the processing data task based on a preset second interval time;
wherein, the data task processing triggering module comprises:
a detecting unit, configured to detect whether the to-be-processed data exists in the message queue;
an obtaining unit, configured to obtain a preset number of the to-be-processed data if the to-be-processed data exists in the message queue;
and the processing unit is used for processing the acquired data to be processed.
Optionally, the query data task triggering module is specifically configured to detect the data amount of the queried data; and, if the ratio of the data volume of the queried data to the total data volume reaches a preset ratio, perform a page refresh jump, and after the page refresh jump continue to periodically trigger the data query task based on the first interval time.
Optionally, the data query task triggering module further includes:
and the flag bit changing unit is used for changing the flag bit of the inquired data.
Optionally, the data processing task triggering module is specifically configured to detect whether a current data processing pressure condition matches a preset pressure condition; and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on the preset second interval time.
Optionally, the high-concurrency data processing apparatus further includes a processing failure module, where the processing failure module includes:
a pushing unit, configured to push the to-be-processed data into the message queue again if the to-be-processed data fails to be processed;
and the recording unit is used for writing the failure condition of the current processing into the log record.
Optionally, the query data task triggering module includes a query unit;
the query unit is used for querying whether the data to be processed exists in a paging batch manner;
the query unit includes:
the sorting subunit is used for sorting each data in an ascending order according to the primary key ID of each data in any data table;
a primary key ID acquisition subunit, configured to acquire a value of a largest primary key ID in data that has been queried in a previous batch;
a primary key ID determining subunit, configured to determine, according to the maximum primary key ID value, a primary key ID value of the initial data of the next batch to be queried;
and the batch query subunit is used for querying data of a batch based on the initial data and the sorting sequence, wherein the data volume of the data of the batch does not exceed a preset data volume threshold.
Optionally, if the high-concurrency data processing apparatus is applied to the blockchain uplink, the processing unit is specifically configured to send an asynchronous uplink request to a blockchain server, so as to uplink the obtained to-be-processed data to the blockchain server to become a node of the blockchain;
the high concurrency data processing device further comprises a successful processing module, wherein the successful processing module comprises:
a receiving unit, configured to receive a transaction hash returned by the blockchain server if the pending data is successfully linked;
a generation unit, configured to generate an asset ID based on the data of the current uplink;
the encryption unit is used for encrypting the transaction hash, the asset ID and the uplink data;
and the updating unit is used for updating the encrypted data to the corresponding position of the database.
A third aspect of the present application provides a client server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect as described above.
A fifth aspect of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
According to the method, a data query task is triggered periodically based on a preset first interval time, and if data to be processed is found by the query, the data to be processed is stored in a message queue; a data processing task is triggered periodically based on a preset second interval time, the data processing task comprising: detecting whether the data to be processed exists in the message queue; if the data to be processed exists in the message queue, acquiring a preset number of the data to be processed; and processing the acquired data to be processed. The scheme divides the processing of high-concurrency data into a data query task, which stores the data to be processed into the message queue, and a data processing task, which processes the high-concurrency data in batches, improving the speed and efficiency of data processing in a multi-thread, multi-process manner.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of a high-concurrency data processing method provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a specific implementation of step 102 in the high-concurrency data processing method provided in the embodiment of the present application;
FIG. 3 is a schematic flow chart of another implementation of a high concurrency data processing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a high concurrency data processing apparatus provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a client server provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
The embodiment of the application provides a high-concurrency data processing method, a high-concurrency data processing apparatus, a client server, and a computer-readable storage medium. At present, with the development of big data, the number of users of a software architecture grows, as does the amount of data to be processed for them, and high-concurrency data processing has become a focus of research and development. Typical application scenarios include multiple users simultaneously accessing a cloud database (for example, several users editing the same cloud document together), or a user chaining data stored on a large number of client servers onto a blockchain for evidence preservation.
Example one
Referring to fig. 1, a method for processing high-concurrency data provided in an embodiment of the present application is described below, where the method for processing high-concurrency data in the embodiment of the present application includes:
in step 101, a data query task is triggered periodically based on a preset first interval time, and if the data to be processed exists, the data to be processed is stored in a message queue;
in the embodiment of the application, the client server is used for receiving data of each client node, and high concurrency data is formed. That is, the execution subject of the high-concurrency data processing method is the client server. The client server collects all the received data in a database, and in order to process the high-concurrency data, the data to be processed can be inquired in the database. In the embodiment of the present application, the client server queries the data to be processed by invoking the data query task of the Linux server, and since the data query task is triggered periodically, the data query task may also be referred to as a data query timing task, that is, the data query task is triggered and executed by the Linux server timing task. The first interval time may be 30 seconds, one minute, or the like, and may be set by a user, which is not limited herein. Optionally, the query data task may be a step of maintaining to periodically execute paging batch-wise query after one trigger, and then the main role of the periodically triggered query data task of this step is to prevent the query data task from being terminated due to downtime caused by excessive data processing of the client server, for example, after the query data task is triggered once, after the query data task finishes querying a batch of data, the query data task is paused for 2-5 seconds through a sleep function, and then queries data of a next batch, during this process, the first query data task may be kept running as long as no fault occurs, and the next trigger of the query data task is to re-trigger and activate a new query data task in time to keep query data operations from being terminated when a previous query data task fails, if the previous data query task does not have a fault, the operation of triggering the data query task can be ignored; or, after the data query task is triggered once, the step of executing the paged batch query for whether there is 
data to be processed exits, for example, after the data query task is triggered once, the data query task directly exits until the data query task is triggered again. The two different ways of triggering the data query task may be selected according to the requirements of the user or the developer, and are not limited herein. If the data to be processed exists, the data to be processed is stored in a message queue, the message queue is a redis message queue, and the redis message queue can replace the request processing of the instant server with asynchronous processing, so that the pressure of the server is relieved, the data sequence arrangement and acquisition are realized, and the message queue is a low-delay and high-concurrency lightweight message queue service. That is, the operation performed by the data query task is mainly two aspects, on one hand, to query and obtain the data to be processed, and on the other hand, to store the data to be processed obtained by the query into the message queue.
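The two operations of the query task — finding pending rows and pushing them onto the message queue — can be sketched as follows. This is an illustrative sketch only: a plain `deque` and a list of dicts stand in for the redis list and the database table, the flag-bit change described later in this section is included so that rows are not re-queried, and none of the names come from the patent itself.

```python
from collections import deque

# Stand-ins for the patent's components: a list of dicts for the database
# table, a deque for the redis message queue. All names are illustrative.
message_queue = deque()

def query_data_task(table):
    """One firing of the periodically triggered data query task:
    find rows whose flag bit still marks them as pending, push their
    ids onto the message queue, and change the flag bit so the same
    rows are not queried again on the next firing."""
    pending = [row for row in table if row["status"] == "PENDING"]
    for row in pending:
        message_queue.append(row["id"])  # redis equivalent: LPUSH queue id
        row["status"] = "QUEUED"         # change the flag bit of queried data
    return len(pending)

table = [{"id": i, "status": "PENDING"} for i in range(1, 6)]
first = query_data_task(table)   # first periodic firing queues all 5 rows
second = query_data_task(table)  # second firing finds nothing new
```

In a real deployment the function would be fired by a Linux timing task (e.g. cron) at the first interval time and would use a redis client instead of the deque.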
Specifically, the step 101 includes: paging and inquiring whether data to be processed exist in batches;
in the embodiment of the present application, all data in each data table in the database is traversed in batches through the limit page, the query operation is cyclic, but the query data is not cyclic, that is, repeated queries need to be avoided. The data volume of a batch of data may be set or changed to 1000, 1500, or 2000 pieces according to the needs of a user or a developer, or may be set or changed according to the data pressure condition of the database, which is not limited herein.
Optionally, in order to implement the ordered query, the step of paging in batches to query whether there is data to be processed includes:
for any data table, sorting each data in an ascending order according to the primary key ID of each data in the data table;
acquiring the maximum value of the primary key ID in the inquired data of the previous batch;
determining the value of the primary key ID of the initial data of the next batch to be inquired according to the maximum value of the primary key ID;
and querying a batch of data based on the initial data and the sorting sequence, wherein the data volume of the batch of data does not exceed a preset data volume threshold.
The database may include a plurality of data tables. Since the primary key ID is a continuous numerical value, once the maximum primary key ID of the previously queried batch is obtained, the primary key ID of the initial data of the next batch to be queried is simply that maximum value plus 1. A full-table ordered scan query of the database can thus be realized, and the query speed is increased.
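The keyset pagination described above can be sketched as follows, assuming an integer primary key sorted ascending; in SQL terms each batch corresponds to `SELECT ... WHERE id > :last_max_id ORDER BY id LIMIT :batch_size`. The function name and the in-memory row list are illustrative only.

```python
def next_batch(rows, last_max_id, batch_size=1000):
    """Return the next batch of rows whose primary key exceeds
    last_max_id, plus the new maximum key to resume from.
    `rows` must already be sorted ascending by primary key id."""
    batch = [r for r in rows if r["id"] > last_max_id][:batch_size]
    new_max = batch[-1]["id"] if batch else last_max_id
    return batch, new_max

rows = [{"id": i} for i in range(1, 2501)]   # 2500 rows, ids 1..2500
b1, m1 = next_batch(rows, last_max_id=0)     # ids 1..1000
b2, m2 = next_batch(rows, last_max_id=m1)    # ids 1001..2000
b3, m3 = next_batch(rows, last_max_id=m2)    # ids 2001..2500
```

Because each batch resumes from the previous maximum key rather than using an OFFSET, the scan stays ordered and never re-reads already-queried rows.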
Optionally, to avoid a crash or hang of the client server, the step 101 specifically includes:
detecting the data amount of the queried data;
and if the ratio of the data volume of the inquired data to the total data volume reaches a preset ratio, performing page refreshing jump, and after the page refreshing jump, continuing to periodically trigger the data inquiring task based on the first interval time.
The preset ratio may be one fifth, two fifths, three fifths, four fifths, or another value set by the user or developer, which is not limited herein. That is, when the queried data in a data table reaches a certain proportion of its total data, the data query task is suspended for a certain time, for example three minutes; by refreshing and jumping via a JavaScript page refresh, the client server gains a recovery interval, avoiding a hang or crash. Specifically, after each batch query for data to be processed, the flag bit of the queried data can be changed so that, when the data table is queried again after the JavaScript page refresh jump, already-queried data is not queried repeatedly.
In step 102, periodically triggering a data processing task based on a preset second interval time;
in the embodiment of the present application, the second interval time may be one minute, two minutes, or the like, and may be set by the user, which is not limited herein. Since the data processing task is triggered periodically, it is also called a data processing timing task; that is, it is triggered and executed as a Linux server timing task. As with the data query task in step 101, the periodic trigger can work in one of two modes. In the first mode, a single trigger starts a task that keeps executing; the periodic re-trigger then mainly serves to prevent the task from being terminated by a client-server crash or other abnormal condition. For example, after one trigger, the data processing task keeps listening to the message queue; once data exists in the queue it executes the subsequent operations without disconnecting from the queue. As long as no fault occurs, this first task keeps running, and the next trigger merely re-activates a new task in time if the previous one has failed, so that processing is not interrupted; if the previous task has not failed, the new trigger can be ignored. In the second mode, after one trigger the task executes the operations of steps 201 to 203 once and exits: it listens to the message queue, executes the subsequent operations if data exists and exits when they complete, or exits directly if the queue is empty, until it is triggered again. Which of the two trigger modes to use may be chosen according to the requirements of the user or the developer, and is not limited herein.
Specifically, the step 102 includes:
in step 201, detecting whether the pending data exists in the message queue;
in this embodiment, the data processing task monitors the message queue to detect whether the pending data exists in the message queue.
In step 202, if the pending data exists in the message queue, a preset number of the pending data is obtained;
in this embodiment of the present application, if it is found that the to-be-processed data exists in the message queue after the message queue is monitored, the to-be-processed data in the preset number in the message queue may be obtained in a first-in first-out manner according to a condition of storage time of each data in the message queue.
In step 203, the acquired data to be processed is processed.
Optionally, the redis message queue runs in producer/consumer mode, which may be one producer with one consumer, or one producer with a plurality of consumers. Thus, the step 102 may specifically be:
detecting whether the current data processing pressure condition is matched with a preset pressure condition or not;
and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on the second interval time.
In practical application, the number of triggered data processing tasks can be chosen flexibly. The data processing pressure condition may be the data-volume pressure of the data to be queried, the backlog of data accumulated in the message queue, or the query pressure on the database. That is, if the overall data pressure on the client server is high, a plurality of data processing tasks can be invoked; with one producer and a plurality of consumers, the peak pressure is shared, peak periods are flattened, and the concurrency pressure is balanced and relieved.
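The one-producer, several-consumers mode can be sketched with a shared thread-safe queue; here `queue.Queue` stands in for the redis queue, the worker count would in practice be derived from the measured pressure condition, and all names are illustrative.

```python
import queue
import threading

def drain_with_consumers(q, n_workers, handle):
    """Spawn n_workers consumer threads that share one queue until it
    is empty - the one-producer/several-consumers mode used to flatten
    peak pressure. n_workers would be chosen from the current data
    processing pressure condition."""
    def worker():
        while True:
            try:
                item = q.get_nowait()   # each item is taken by exactly one consumer
            except queue.Empty:
                return
            handle(item)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

q = queue.Queue()
for i in range(100):        # the producer side: 100 queued items
    q.put(i)
processed = []
drain_with_consumers(q, n_workers=4, handle=processed.append)
```

Each item is consumed exactly once regardless of how many workers run, which is what lets extra consumers be added under load without duplicating work.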
As can be seen from the above, in the embodiment of the present application, a data processing process is divided into a data query task and a data processing task, where the data query task is to query to obtain data to be processed and store the data to be processed in a message queue, and the data processing task is to process high-concurrency data in batches, so as to improve the speed and efficiency of data processing in a multi-thread and multi-process mode.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two
On the basis of the first embodiment, the second embodiment of the present application provides another high-concurrency data processing method, described below with reference to fig. 3. The high-concurrency data processing method in the second embodiment is applied to blockchain uplink, so the specific implementation given below takes blockchain uplink as its example. A brief explanation of blockchain uplink: a blockchain is in effect a cluster of dispersed client nodes, a distributed database shared by all participants that records, for example, the entire bitcoin transaction history. After bitcoin transaction data is packed into a "data block" or "block", the transaction is considered preliminarily confirmed; it is further confirmed once the block is linked to the preceding block; and after several successive block confirmations the transaction is regarded as irreversibly confirmed. This process is the uplink. The high-concurrency data processing method comprises the following steps:
in step 301, a data query task is triggered periodically based on a preset first interval time, and if the data to be processed is queried to exist, the data to be processed is stored in a message queue;
in the embodiment of the present application, the step 301 is the same as or similar to the step 101, and reference may be specifically made to the related description of the step 101, which is not repeated herein.
In step 302, periodically triggering a data processing task based on a preset second interval time;
in this embodiment of the application, the operation performed in the data processing task to process the acquired to-be-processed data may specifically be: and sending an asynchronous uplink request to the blockchain server so as to uplink the acquired data to be processed to the blockchain server to become a node of the blockchain. For other operations of processing the data task, reference may be made to the related description of step 102, and details are not described herein.
In step 303, determining whether the pending data is successfully uplink processed; if yes, go to step 304, otherwise go to step 308;
in step 304, a transaction hash returned by the blockchain server is received;
in this embodiment, when the pending uplink data is successfully processed, the blockchain server returns the transaction hash of the blockchain, and thus, in this step, the transaction hash returned by the blockchain server is received.
In step 305, an asset ID is generated based on the uplink data;
in step 306, encrypt the transaction hash, the asset ID, and the uplink data;
in the embodiment of the application, in order to ensure data security and prevent data leakage, the transaction hash, the asset ID and the uplink data may be encrypted, where an encryption algorithm used for encryption is not limited.
In step 307, updating the encrypted data to a corresponding position of the database;
in the embodiment of the present application, the currently uplinked data is found in the data table of the database, and the encrypted data is updated to the corresponding position of the database as the data's transaction-hash field, asset-ID field, and asset-hash field, proving that the data has undergone uplink processing and that the processing succeeded.
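Steps 304 to 307 can be sketched as follows. The patent leaves the encryption algorithm open, so SHA-256 digests are used here purely as a placeholder for the encryption step, and the asset-ID derivation, field names, and function name are assumptions for illustration only.

```python
import hashlib

def record_successful_uplink(db_row, tx_hash, uplink_data):
    """After the blockchain server returns a transaction hash (step 304):
    generate an asset ID from the chained data (step 305), digest the
    sensitive fields (step 306 - SHA-256 as a stand-in for the patent's
    unspecified encryption), and write the results back to the row's
    transaction-hash, asset-ID, and asset-hash fields (step 307)."""
    asset_id = hashlib.sha256(uplink_data).hexdigest()[:16]        # illustrative asset ID
    asset_hash = hashlib.sha256(tx_hash.encode() + uplink_data).hexdigest()
    db_row.update(tx_hash=tx_hash, asset_id=asset_id,
                  asset_hash=asset_hash, status="UPLINKED")
    return db_row

row = {"id": 42, "status": "QUEUED"}
row = record_successful_uplink(row, tx_hash="0xabc123", uplink_data=b"payload")
```

A production system would use a real cipher chosen by the developer and write the fields back through the database layer rather than a dict update.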
In step 308, the data to be processed is pushed into the message queue again;
in this application, if uplink processing of the data to be processed fails, the data is pushed into the message queue again and waits for the data processing task to attempt uplink processing once more, until uplink processing succeeds.
In step 309, the failure of the current processing is written into the log record.
In the embodiment of the application, when the client server's request to the corresponding interface of the blockchain server fails and uplink processing is unsuccessful, a log record is written, so that evidence is preserved and the specific circumstances of the uplink failure can later be queried.
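The success and failure branches of steps 303-309 can be sketched as follows. This is a minimal illustration, not the patented implementation: the in-memory list and dict stand in for the message queue and database table, the SHA-256 asset ID and the XOR "encryption" are placeholders (the patent leaves both the asset-ID scheme and the cipher unspecified), and the field names are assumed.

```python
import hashlib
import json
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("uplink")

def handle_uplink_result(item, tx_hash, msg_queue, db_table):
    """Handle steps 303-309 for one item of to-be-processed data.

    On success (tx_hash is not None): derive an asset ID, encrypt the proof
    fields, and update the row in the data table (steps 304-307).
    On failure: push the item back into the message queue and log it
    (steps 308-309).
    """
    if tx_hash is not None:
        # Step 305 (assumed scheme): a hash of the uplinked payload stands in
        # for the asset ID, which the patent does not specify.
        asset_id = hashlib.sha256(item["payload"].encode()).hexdigest()
        record = json.dumps(
            {"tx_hash": tx_hash, "asset_id": asset_id, "data": item["payload"]}
        )
        # Step 306: the encryption algorithm "is not limited" in the patent;
        # a byte-wise XOR is a placeholder, not a real cipher.
        encrypted = bytes(b ^ 0x5A for b in record.encode())
        db_table[item["id"]] = encrypted               # step 307: update the row
        return True
    msg_queue.append(item)                             # step 308: re-queue for retry
    log.warning("uplink failed for id=%s", item["id"])  # step 309: keep evidence
    return False
```

A failed item simply reappears in the queue, so the next run of the data processing task retries it, matching the "until uplink processing succeeds" behavior described above.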
As can be seen from the above, in the embodiment of the present application, a data processing process is divided into a data query task and a data processing task: the data query task queries for data to be processed and stores it in a message queue, and the data processing task processes the high-concurrency data in batches, improving the speed and efficiency of data processing in a multi-thread, multi-process mode. If the data is processed successfully, the corresponding data table in the database is updated; if processing fails, the data is pushed into the message queue again to await the next processing operation, and a log record is also written, ensuring that failed operations leave a traceable record.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example three
As shown in fig. 4, the high-concurrency data processing apparatus 400 in the embodiment of the present application includes:
a query data task triggering module 401, configured to periodically trigger a query data task based on a preset first interval time, and if there is data to be processed in the query, store the data to be processed in a message queue;
a data processing task triggering module 402, configured to periodically trigger a data processing task based on a preset second interval time;
the data processing task triggering module 402 includes:
a detecting unit 4021, configured to detect whether the to-be-processed data exists in the message queue;
an obtaining unit 4022, configured to obtain a preset number of the to-be-processed data if the to-be-processed data exists in the message queue;
the processing unit 4023 is configured to process the acquired data to be processed based on the acquired data to be processed.
Optionally, the query data task triggering module 401 is specifically configured to detect a data amount of queried data; and if the ratio of the data volume of the inquired data to the total data volume reaches a preset ratio, performing page refreshing jump, and after the page refreshing jump, continuing to periodically trigger the data inquiring task based on the first interval time.
Optionally, the query data task triggering module 401 further includes:
and the flag bit changing unit is used for changing the flag bit of the inquired data.
Optionally, the data processing task triggering module 402 is specifically configured to detect whether a current data processing pressure condition matches a preset pressure condition; and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on a preset second interval time.
Optionally, the high-concurrency data processing apparatus 400 further includes a processing failure module, where the processing failure module includes:
a pushing unit, configured to push the to-be-processed data into the message queue again if the to-be-processed data fails to be processed;
and the recording unit is used for writing the failure condition of the current processing into the log record.
Optionally, the query data task triggering module 401 includes a query unit;
the query unit is used for querying whether the data to be processed exists in a paging batch manner;
the query unit includes:
the sorting subunit is used for sorting each data in an ascending order according to the primary key ID of each data in any data table;
a primary key ID acquisition subunit, configured to acquire a value of a largest primary key ID in data that has been queried in a previous batch;
a primary key ID determining subunit, configured to determine, according to the maximum primary key ID value, a primary key ID value of the initial data of the next batch to be queried;
and the batch query subunit is used for querying data of a batch based on the initial data and the sorting sequence, wherein the data volume of the data of the batch does not exceed a preset data volume threshold.
Optionally, if the high-concurrency data processing apparatus 400 is applied to blockchain uplink, the processing unit 4023 is specifically configured to send an asynchronous uplink request to a blockchain server, so as to uplink the obtained to-be-processed data to the blockchain server to become a node of the blockchain;
the high-concurrency data processing apparatus 400 further includes a successful processing module, where the successful processing module includes:
a receiving unit, configured to receive a transaction hash returned by the blockchain server if the pending data is successfully linked;
a generation unit, configured to generate an asset ID based on the uplink data of this time;
the encryption unit is used for encrypting the transaction hash, the asset ID and the uplink data;
and the updating unit is used for updating the encrypted data to the corresponding position of the database.
As can be seen from the above, in the embodiment of the present application, the high-concurrency data processing apparatus decomposes a data processing process into a data query task and a data processing task: the data query task queries for data to be processed and stores it in a message queue, and the data processing task processes the high-concurrency data in batches, improving the speed and efficiency of data processing in a multi-thread, multi-process mode.
Example four
An embodiment of the present application provides a client server; please refer to fig. 5. The client server in the embodiment of the present application includes: a memory 501, one or more processors 502 (only one shown in fig. 5), and a computer program stored on the memory 501 and executable on the processors. The memory 501 is used for storing software programs and modules, and the processor 502 executes various functional applications and data processing by running the software programs and modules stored in the memory 501, so as to acquire resources corresponding to preset events. Specifically, the processor 502 implements the following steps by running the above-mentioned computer program stored in the memory 501:
periodically triggering a data query task based on a preset first interval time, and storing the data to be processed into a message queue if the data to be processed exists;
periodically triggering a data processing task based on a preset second interval time, wherein the data processing task comprises the following steps: detecting whether the data to be processed exists in the message queue; if the data to be processed exists in the message queue, acquiring a preset number of the data to be processed; and sending an asynchronous processing request to a data processing server based on the acquired data to be processed so as to process the acquired data to be processed.
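The two periodic tasks described above can be sketched with two timer threads sharing one queue. This is a simplified illustration: the interval lengths, the batch size, and the doubling "processing" step are all assumed stand-ins, not values from the patent.

```python
import queue
import threading

FIRST_INTERVAL = 0.01    # preset first interval time (shortened for the sketch)
SECOND_INTERVAL = 0.01   # preset second interval time (shortened for the sketch)
BATCH_SIZE = 3           # the "preset number" fetched per processing run (assumed)

msg_queue = queue.Queue()

def query_data_task(source_rows, stop):
    """Periodically move to-be-processed rows into the message queue."""
    while not stop.is_set():
        while source_rows:
            msg_queue.put(source_rows.pop(0))   # store pending data in the queue
        stop.wait(FIRST_INTERVAL)               # wait out the first interval

def process_data_task(processed, stop):
    """Periodically drain up to BATCH_SIZE items and process them
    (doubling stands in for the asynchronous processing request)."""
    while not stop.is_set() or not msg_queue.empty():
        for _ in range(BATCH_SIZE):
            if msg_queue.empty():
                break
            processed.append(msg_queue.get() * 2)
        stop.wait(SECOND_INTERVAL)              # wait out the second interval
```

Because the queue decouples the two timers, the query side and the processing side can run at different rates, which is what lets the processing side batch high-concurrency data.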
Assuming that the foregoing is a first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the periodically triggering a query data task based on the preset first interval time includes:
detecting the data amount of the queried data;
and if the ratio of the data volume of the inquired data to the total data volume in the database reaches a preset ratio, performing page refreshing jump, and after the page refreshing jump, continuing to periodically trigger the data inquiring task based on the first interval time.
In a third possible implementation manner provided as a basis for the second possible implementation manner, after the aforementioned periodically triggering the data query task based on the preset first interval time, the processor 502 further implements the following steps when executing the aforementioned computer program stored in the memory 501:
the flag bit of the data that has been queried is changed.
In a fourth possible implementation manner provided as a basis for the first possible implementation manner, the periodically triggering a data processing task based on a preset second interval time includes:
detecting whether the current data processing pressure condition is matched with a preset pressure condition or not;
and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on the preset second interval time.
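The pressure-matched scaling of the fourth implementation manner might look like the following sketch, where the current pressure is modelled as message-queue depth and a thread pool triggers more than two processing tasks once the preset condition is met. The threshold, the task count, and the doubling step are assumed values for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

PRESSURE_THRESHOLD = 100   # preset pressure condition (assumed: queue depth)

def trigger_processing_tasks(pending, queue_depth):
    """Trigger more than two data processing tasks in parallel when the
    pressure condition matches; otherwise a single task. Doubling each
    item stands in for the real processing work."""
    n_tasks = 3 if queue_depth >= PRESSURE_THRESHOLD else 1
    chunk = max(1, -(-len(pending) // n_tasks))   # ceiling division
    results = []
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        futures = [pool.submit(lambda part: [x * 2 for x in part],
                               pending[i:i + chunk])
                   for i in range(0, len(pending), chunk)]
        for f in futures:
            results.extend(f.result())
    return n_tasks, sorted(results)
```

The point of the design is that the number of processing tasks is a function of observed load, so idle periods cost one worker while bursts get batch-parallel draining.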
In a fifth possible implementation manner provided on the basis of the first possible implementation manner, the processor 502 further implements the following steps when executing the above computer program stored in the memory 501:
if the data to be processed fails to be processed, the data to be processed is pressed into the message queue again;
and writing the failure condition of the current processing into a log record.
In a sixth possible implementation manner provided on the basis of the first, second, third, fourth, or fifth possible implementation manner, the query data task includes querying, in a paged batch manner, whether there is data to be processed, and the paged batch query includes:
for any data table, sorting each data in an ascending order according to the primary key ID of each data in the data table;
acquiring the maximum value of the primary key ID in the inquired data of the previous batch;
determining the value of the primary key ID of the initial data of the next batch to be inquired according to the maximum value of the primary key ID;
and querying a batch of data based on the initial data and the sorting sequence, wherein the data volume of the batch of data does not exceed a preset data volume threshold.
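The paged batch query above amounts to keyset pagination, sketched below with an in-memory stand-in for the data table; in SQL it corresponds to `SELECT ... WHERE id > :last_id ORDER BY id LIMIT :n`, which avoids the cost of deep `LIMIT ... OFFSET` scans (the optimization discussed in the cited MySQL pagination reference).

```python
def paged_batches(data_table, batch_limit):
    """Yield batches via keyset pagination: sort rows by primary-key ID
    ascending, remember the largest ID of the previous batch, start the
    next batch just past it, and never exceed the preset data-volume
    threshold (batch_limit)."""
    rows = sorted(data_table, key=lambda r: r["id"])   # ascending by primary key ID
    last_id = float("-inf")                            # nothing queried yet
    while True:
        batch = [r for r in rows if r["id"] > last_id][:batch_limit]
        if not batch:
            return
        last_id = batch[-1]["id"]   # max primary-key ID of the queried batch
        yield batch
```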
In a seventh possible implementation manner provided on the basis of the sixth possible implementation manner, if the high-concurrency data processing method is applied to blockchain uplink, the processing the acquired data to be processed includes:
sending an asynchronous uplink request to a blockchain server, so as to uplink the acquired data to be processed to the blockchain server to become a node of the blockchain;
the processor 502 further implements the following steps by executing the computer program stored in the memory 501:
if the data to be processed is successfully uplinked, receiving a transaction hash returned by the blockchain server;
generating an asset ID based on the uplink data;
encrypting the transaction hash, the asset ID and the uplink data;
and updating the encrypted data to the corresponding position of the database.
Further, as shown in fig. 5, the client server may further include: one or more input devices 503 (only one shown in fig. 5) and one or more output devices 504 (only one shown in fig. 5). The memory 501, processor 502, input device 503, and output device 504 are connected by a bus 505.
It should be understood that, in the embodiments of the present application, the processor 502 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Input devices 503 may include a keyboard, touchpad, fingerprint sensor, microphone, etc., and output devices 504 may include a display, speaker, etc.
Memory 501 may include both read-only memory and random access memory and provides instructions and data to processor 502. Some or all of the memory 501 may also include non-volatile random access memory. For example, the memory 501 may also store device type information.
As can be seen from the above, in the embodiment of the present application, the client server decomposes a data processing process into a data query task and a data processing task, where the data query task is to query to obtain data to be processed and store the data to be processed in a message queue, and the data processing task is to send an asynchronous processing request to the data processing server based on the data to be processed, so that the data processing server can process high-concurrency data in batches, and the speed and efficiency of data processing are improved in a multi-thread and multi-process mode.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.

Claims (7)

1. A high concurrency data processing method is characterized by comprising the following steps:
periodically triggering a data query task based on a preset first interval time, and if the query finds data to be processed, storing the data to be processed into a message queue and changing a flag bit of the queried data, wherein the data query task, after being triggered once, periodically queries whether data to be processed exists;
periodically triggering a data processing task based on a preset second interval time, wherein the data processing task comprises: detecting whether the data to be processed exists in the message queue; if the data to be processed exists in the message queue, acquiring a preset number of the data to be processed; processing the acquired data to be processed based on the acquired data to be processed;
wherein the periodically triggering a data query task based on the preset first interval time comprises:
detecting the data amount of the queried data;
if the ratio of the data volume of the queried data to the total data volume reaches a preset ratio, pausing the query data task for a certain time, performing page refreshing jump, and continuing to periodically trigger the query data task based on the first interval time after the page refreshing jump;
wherein the periodically triggering a data processing task based on the preset second interval time comprises:
detecting whether the current data processing pressure condition is matched with a preset pressure condition or not;
and if the current data processing pressure condition is matched with the preset pressure condition, periodically triggering more than two data processing tasks based on the second interval time.
2. The high concurrency data processing method according to claim 1, wherein the high concurrency data processing method further comprises:
if the data to be processed fails to be processed, the data to be processed is pressed into the message queue again;
and writing the failure condition of the current processing into a log record.
3. The method of any of claims 1 to 2, wherein the query data task comprises a paged batch query whether there is data to be processed, the paged batch query whether there is data to be processed comprising:
for any data table, sorting each data in an ascending order according to the primary key ID of each data in the data table;
acquiring the maximum value of the primary key ID in the inquired data of the previous batch;
determining the value of the primary key ID of the initial data of the next batch to be inquired according to the maximum value of the primary key ID;
and querying a batch of data based on the initial data and the sorting sequence, wherein the data volume of the batch of data does not exceed a preset data volume threshold.
4. The high-concurrency data processing method according to claim 3, wherein if the high-concurrency data processing method is applied to blockchain uplink, the processing the acquired data to be processed comprises:
sending an asynchronous uplink request to a blockchain server, so as to uplink the acquired data to be processed into the blockchain server to become a node of a blockchain;
the high concurrency data processing method further comprises the following steps:
if the data to be processed is successfully uplinked, receiving a transaction hash returned by the blockchain server;
generating an asset ID based on the uplink data;
encrypting the transaction hash, the asset ID and the uplink data;
and updating the encrypted data to the corresponding position of the database.
5. A highly concurrent data processing apparatus, characterized in that the highly concurrent data processing apparatus comprises:
the query data task triggering module is used for periodically triggering a query data task based on a preset first interval time, and storing the data to be processed into a message queue if the data to be processed exists in the query; the data query task is to periodically query whether to-be-processed data exists after one-time triggering;
the processing data task triggering module is used for periodically triggering the processing data task based on a preset second interval time;
wherein, the data task processing triggering module comprises:
a detecting unit, configured to detect whether the to-be-processed data exists in the message queue;
the acquisition unit is used for acquiring a preset number of the data to be processed if the data to be processed exists in the message queue;
the processing unit is used for processing the acquired data to be processed based on the acquired data to be processed;
the query data task triggering module is specifically used for detecting the data volume of the queried data; if the ratio of the data volume of the queried data to the total data volume reaches a preset ratio, pausing the query data task for a certain time, performing page refreshing jump, and continuing to periodically trigger the query data task based on the first interval time after the page refreshing jump;
the data processing task triggering module is specifically used for detecting whether the current data processing pressure condition is matched with a preset pressure condition; if the current data processing pressure condition is matched with a preset pressure condition, periodically triggering more than two data processing tasks based on the second interval time;
the query data task triggering module further comprises: and the flag bit changing unit is used for changing the flag bit of the inquired data.
6. A client server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201811015482.2A 2018-08-31 2018-08-31 High-concurrency data processing method and device and client server Active CN109344172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811015482.2A CN109344172B (en) 2018-08-31 2018-08-31 High-concurrency data processing method and device and client server


Publications (2)

Publication Number Publication Date
CN109344172A CN109344172A (en) 2019-02-15
CN109344172B true CN109344172B (en) 2022-05-17

Family

ID=65292039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811015482.2A Active CN109344172B (en) 2018-08-31 2018-08-31 High-concurrency data processing method and device and client server

Country Status (1)

Country Link
CN (1) CN109344172B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245006B (en) * 2019-05-07 2023-05-02 深圳壹账通智能科技有限公司 Method, device, equipment and storage medium for processing block chain transaction
CN110264348B (en) * 2019-05-07 2021-08-20 北京奇艺世纪科技有限公司 Processing method, device and storage medium for transaction uplink
CN110471774A (en) * 2019-06-28 2019-11-19 苏宁云计算有限公司 A kind of data processing method and device based on unified task schedule
CN110347607A (en) * 2019-07-16 2019-10-18 北京首汽智行科技有限公司 A kind of data cochain test method
CN110727655B (en) * 2019-09-10 2022-03-15 连连银通电子支付有限公司 Method, device, equipment and medium for building shadow database of block chain
CN111147355B (en) * 2019-12-25 2022-08-09 北京五八信息技术有限公司 Message sending method and device, electronic equipment and storage medium
CN113127546A (en) * 2019-12-30 2021-07-16 中国移动通信集团湖南有限公司 Data processing method and device and electronic equipment
CN111274252B (en) * 2020-01-08 2023-11-28 平安科技(深圳)有限公司 Block chain data uplink method and device, storage medium and server
CN111274258B (en) * 2020-02-10 2024-06-14 深圳市数联通科技有限公司 Block chain data uplink method
CN111367688A (en) * 2020-02-28 2020-07-03 京东数字科技控股有限公司 Service data processing method and device
CN111400390B (en) * 2020-04-08 2023-11-17 上海东普信息科技有限公司 Data processing method and device
CN111506430B (en) * 2020-04-23 2024-04-19 上海数禾信息科技有限公司 Method and device for processing data under multitasking and electronic equipment
CN111708618A (en) * 2020-06-12 2020-09-25 北京思特奇信息技术股份有限公司 Processing method and device based on Java multithreading
CN112597162B (en) * 2020-12-25 2023-08-08 平安银行股份有限公司 Data set acquisition method, system, equipment and storage medium
CN113360463A (en) * 2021-04-15 2021-09-07 网宿科技股份有限公司 Data processing method, device, server and readable storage medium
CN113312386B (en) * 2021-05-10 2022-06-24 四川新网银行股份有限公司 Batch warehousing method based on distributed messages
CN113691611B (en) * 2021-08-23 2022-11-22 湖南大学 Block chain distributed high-concurrency transaction processing method, system, equipment and storage medium
CN115344403A (en) * 2022-07-27 2022-11-15 广州方舟信息科技有限公司 User rights and interests data processing method based on distributed message queue
CN116302616A (en) * 2023-03-28 2023-06-23 之江实验室 Data processing method and device, storage medium and electronic equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
KR970078307A (en) * 1996-05-07 1997-12-12 유기범 How to Wait for a Message on a Centrix / Attendant Console Device
WO2013085166A1 (en) * 2011-12-08 2013-06-13 (주)네오위즈게임즈 Method for providing a soccer game to which a message broadcast item is applied, soccer game server, soccer game providing system, and recording medium
CN104811459A (en) * 2014-01-23 2015-07-29 阿里巴巴集团控股有限公司 Processing method, processing device and system for message services and message service system
CN105068864A (en) * 2015-07-24 2015-11-18 北京京东尚科信息技术有限公司 Method and system for processing asynchronous message queue


Non-Patent Citations (4)

Title
How 58 Daojia's MQ quickly implements traffic peak shaving and valley filling; Road to Architect; W3Cschool; 2017-04-12; entire document *
Usage of MySQL limit and performance analysis and optimization of paged queries; Tang Chengyong; segmentfault; 2017-03-28; section 3 of the main text *
Implementing a distributed message queue based on Redis (I); Back-end Technology Exploration; Tencent Cloud; 2018-08-09; entire document *

Also Published As

Publication number Publication date
CN109344172A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109344172B (en) High-concurrency data processing method and device and client server
CN112910945B (en) Request link tracking method and service request processing method
US20180357111A1 (en) Data center operation
US9563426B1 (en) Partitioned key-value store with atomic memory operations
US9560165B2 (en) BT offline data download system and method, and computer storage medium
CN108228322B (en) Distributed link tracking and analyzing method, server and global scheduler
CN109039817B (en) Information processing method, device, equipment and medium for flow monitoring
CN110633309A (en) Block chain transaction processing method and device
CN108206776B (en) Group history message query method and device
US11892976B2 (en) Enhanced search performance using data model summaries stored in a remote data store
CN111338834B (en) Data storage method and device
CN112035531A (en) Sensitive data processing method, device, equipment and medium
US10331484B2 (en) Distributed data platform resource allocator
CN113821506A (en) Task execution method, device, system, server and medium for task system
CN115934414A (en) Data backup method, data recovery method, device, equipment and storage medium
CN107885634B (en) Method and device for processing abnormal information in monitoring
CN108595121B (en) Data storage method and device
CN107276998B (en) OpenSSL-based performance optimization method and device
CN108390770B (en) Information generation method and device and server
CN112148705A (en) Data migration method and device
CN116108036A (en) Method and device for off-line exporting back-end system data
CN115757642A (en) Data synchronization method and device based on filing log file
CN115664992A (en) Network operation data processing method and device, electronic equipment and medium
CN114844771A (en) Monitoring method, device, storage medium and program product for micro-service system
CN113468218A (en) Method and device for monitoring and managing database slow SQL

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant