CN115329002A - Data asynchronous processing method based on artificial intelligence and related equipment - Google Patents


Info

Publication number
CN115329002A
Authority
CN
China
Prior art keywords: data, plaintext data, ciphertext, preset, initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210987077.7A
Other languages
Chinese (zh)
Inventor
苏媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210987077.7A
Publication of CN115329002A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services


Abstract

The application provides an artificial intelligence-based data asynchronous processing method and apparatus, an electronic device, and a storage medium. The method comprises: preprocessing to obtain a plurality of initial plaintext data; dividing the initial plaintext data into a plurality of batches; calculating a load value of each server node in a server cluster and distributing the batches of initial plaintext data to the server nodes according to the load values; encrypting the initial plaintext data within a preset encryption duration to obtain ciphertext data and an index of the ciphertext data; marking the ciphertext data with a subject identifier derived from the plaintext data, and distributing the ciphertext data to the server nodes for storage according to the index of the ciphertext data and the load values of the server nodes; and pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identifier of the ciphertext data. The method processes the data encryption and query tasks asynchronously, thereby improving the efficiency of data query.

Description

Data asynchronous processing method based on artificial intelligence and related equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an artificial intelligence-based data asynchronous processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of big data technology, most enterprises have accumulated billions or even trillions of records through years of business. Enterprises generally protect such mass data by first querying it and then encrypting it before storing it in a data warehouse, so that even if the data is maliciously stolen, the encryption keeps the sensitive data protected.
At present, the query rate for trillion-scale data is extremely slow, which further slows the encryption of the queried data and degrades both user experience and enterprise service quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an artificial intelligence-based data asynchronous processing method and related apparatus to solve the technical problem of how to improve the efficiency of asynchronous data processing, where the related apparatus includes an artificial intelligence-based data asynchronous processing device, an electronic device, and a storage medium.
The embodiment of the application provides a data asynchronous processing method based on artificial intelligence, which comprises the following steps:
preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size;
dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data;
calculating a load value of each server node in the server cluster, and distributing the multiple batches of initial plaintext data to the server nodes according to the load values;
encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data;
marking the ciphertext data with a subject identifier derived from the corresponding plaintext data, and distributing the ciphertext data to the server nodes for storage according to the index of the ciphertext data and the load values of the server nodes;
and pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identifier of the ciphertext data.
In some embodiments, the preprocessing the plaintext data table in the preset server cluster to obtain a plurality of initial plaintext data includes:
the plaintext data table comprises a plurality of rows and columns; the data type of each column in the plaintext data table is queried, and the columns whose type is not the auto-increment primary key are used as initial plaintext data, wherein each initial plaintext data comprises a plurality of dimensions;
querying the data type and length corresponding to each dimension, and querying the data size of each dimension according to the length and the data type;
and taking the sum of the data sizes corresponding to all dimensions in each initial plaintext data as the data size of the initial plaintext data.
In some embodiments, the dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size comprises:
a, uniformly dividing the initial plaintext data into a plurality of candidate batches according to a preset dividing threshold, wherein each candidate batch comprises a plurality of initial plaintext data and the number of candidate batches is equal to the dividing threshold;
b, respectively calculating the sum of the data sizes of all initial plaintext data in each candidate batch as the evaluation value of that batch;
c, calculating the variance of all the evaluation values; if the variance is smaller than a preset first termination threshold, the data sizes of the candidate batches differ only slightly, and the candidate batches are taken as the final batches, completing the batching of the initial plaintext data; if the variance is not smaller than the first termination threshold, the data sizes of the candidate batches differ significantly, so the dividing threshold is updated and steps a to c are repeated to obtain the batches of initial plaintext data;
d, if no final batches have been obtained once the number of repeated divisions reaches a preset second termination threshold, taking the candidate batches with the minimum variance as the final batches, wherein the preset second termination threshold is equal to the initial value of the preset dividing threshold.
In some embodiments, the calculating a load value of each server node in the server cluster and distributing the batches of initial plaintext data to the server nodes according to the load value includes:
inquiring the number of processing tasks of each server node in the server cluster as a load value of each server node;
taking the minimum auto-increment primary key among all initial plaintext data in each batch as the index of that batch;
sorting the batches according to the sequence of the indexes from small to large to obtain the sequence of each batch, and sorting the server nodes according to the sequence of the load values from small to large to obtain the sequence of each server node;
and sending the batches to server nodes with the same sequence for subsequent data encryption processing.
In some embodiments, the encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data includes:
a, encrypting the initial plaintext data within a preset encryption duration range to obtain ciphertext data;
b, pausing data encryption when the preset encryption duration ends, and recording the time at which each piece of ciphertext data finished encrypting as the index of that ciphertext data;
c, if initial plaintext data remains unencrypted after the preset encryption duration ends, continuously comparing the resource occupancy rate of the server cluster with a preset occupancy threshold; if the occupancy rate is lower than the threshold, updating the preset encryption duration according to the resource occupancy rate of the server cluster and the current time, and repeating steps a to c to continue encrypting until all the initial plaintext data are encrypted, whereupon data encryption stops;
wherein, the updating the preset encryption duration according to the resource occupancy rate of the server cluster and the current time to obtain the updated preset encryption duration includes:
inquiring hardware information of the server cluster, and calculating the resource occupancy rate of the server cluster according to the hardware information;
calculating the time difference between the current time and a preset reference time;
normalizing the resource occupancy rate and the time difference to obtain a normalized resource occupancy rate and a normalized time difference;
inputting the normalized resource occupancy rate and the normalized time difference into a preset integration function to calculate an operation time length updating proportion;
and calculating the product of the preset initial operation time length and the updating proportion to be used as the updated preset encryption time length.
In some embodiments, the marking the ciphertext data with a subject identifier derived from the plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node, includes:
classifying the initial plaintext data according to a pre-trained initial plaintext classification model to obtain a category of each initial plaintext data, and using the category as a subject identifier of ciphertext data corresponding to the initial plaintext data;
sequencing the ciphertext data according to the indexes of the ciphertext data from early to late to obtain the sequence of each ciphertext data, and sequencing the server nodes according to the sequence of the load values from small to large to obtain the sequence of each server node;
and distributing the ciphertext data to the server nodes with the same sequence for storage.
In some embodiments, the pushing the plaintext data corresponding to the ciphertext data to the user according to the subject identifier of the ciphertext data includes:
inquiring the type of a preset user requirement, wherein the type of the user requirement comprises user order information, user logistics information and user transaction information;
in each server node of the server cluster, sequentially querying the subject identifier of each ciphertext data in order of ciphertext index from latest to earliest; if the subject identifier matches the category of the user requirement, decrypting the ciphertext data with a preset key to obtain the initial plaintext data and pushing it to the user;
if the subject identifier does not match the category of the user requirement, continuing to query the subject identifier of each ciphertext data, stopping once all ciphertext data have been queried;
and if no ciphertext data corresponding to the type of user requirement can be found, sending an information delay prompt to the user.
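A toy sketch of this subject-matched push, with a hypothetical reversible "cipher" standing in for decryption with the preset key (the record layout is our assumption):

```python
def push_by_subject(ciphertexts, required_category, decrypt):
    """Find and decrypt the newest ciphertext matching the user's need.

    ciphertexts: list of (index_timestamp, subject_id, ciphertext)
    required_category: e.g. "user order information"
    decrypt: function ciphertext -> plaintext (inverse of the preset key)

    Ciphertexts are scanned from the latest index to the earliest; the
    first subject match is decrypted and returned. If nothing matches,
    None is returned and the caller should send the delay prompt.
    """
    for _, subject, ct in sorted(ciphertexts, reverse=True):
        if subject == required_category:
            return decrypt(ct)
    return None  # caller sends the "information delayed" prompt

# Hypothetical usage with string reversal as a stand-in cipher:
records = [(1, "user order information", "redro"),
           (2, "user logistics information", "scitsigol")]
plaintext = push_by_subject(records, "user order information",
                            lambda s: s[::-1])
```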
The embodiment of the present application further provides an artificial intelligence-based data asynchronous processing device, the device includes:
the preprocessing unit is used for preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size;
the batching unit is used for dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data;
the distribution unit is used for calculating a load value of each server node in the server cluster, and distributing the initial plaintext data of the batches to the server nodes according to the load values;
the encryption unit is used for encrypting the initial plaintext data according to preset encryption duration to obtain ciphertext data and an index of the ciphertext data;
the storage unit is used for marking the ciphertext data with a subject identifier derived from the plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node;
and the pushing unit is used for pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identification of the ciphertext data.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based data asynchronous processing method.
Embodiments of the present application further provide a computer-readable storage medium, in which computer-readable instructions are stored, and the computer-readable instructions are executed by a processor in an electronic device to implement the artificial intelligence based data asynchronous processing method.
According to the artificial intelligence-based data asynchronous processing method, the plaintext data are divided into a plurality of batches according to the data size of the initial plaintext data, and the batches are distributed to the server nodes according to the index of each batch to keep the server cluster load-balanced. The initial plaintext data are encrypted within the encryption duration to obtain ciphertext data, and the encryption duration is continuously updated to keep the server cluster stable. Finally, each ciphertext data is marked with a subject identifier, and the initial plaintext corresponding to the ciphertext data is pushed to the user by means of that identifier. The data query task can thus be processed asynchronously, improving the efficiency of data query.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of an artificial intelligence based asynchronous data processing method to which the present application relates.
FIG. 2 is a functional block diagram of a preferred embodiment of an artificial intelligence based data asynchronous processing device to which the present application relates.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the artificial intelligence-based asynchronous data processing method.
Detailed Description
For a clearer understanding of the objects, features and advantages of the present application, reference is made to the following detailed description along with the accompanying drawings and specific examples. It should be noted that the embodiments and the features of the embodiments may be combined with each other when no conflict arises. Numerous specific details are set forth in the following description to provide a thorough understanding of the present application; the described embodiments are merely some, not all, of the embodiments of the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The embodiment of the present application provides an artificial intelligence-based data asynchronous processing method, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
Fig. 1 is a flow chart of a preferred embodiment of the data asynchronous processing method based on artificial intelligence according to the present application. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
S10, preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size.
In an optional embodiment, the preprocessing the plaintext data table in the preset server cluster to obtain a plurality of initial plaintext data includes:
the plaintext data table comprises a plurality of rows and columns; the data type of each column in the plaintext data table is queried, and the columns whose type is not the auto-increment primary key are taken as initial plaintext data, wherein each initial plaintext data comprises a plurality of dimensions;
querying the data type and length corresponding to each dimension, and querying the data size of each dimension according to the length and the data type;
and taking the sum of the data sizes corresponding to all dimensions in each initial plaintext data as the data size of the initial plaintext data.
In this optional embodiment, the server cluster stores plaintext data and forwards data; it includes a plurality of server nodes, and data can be transmitted between the server nodes.
In this optional embodiment, the plaintext data in the server cluster is stored in a format of a plaintext data table, where the plaintext data table includes n rows and m columns, where n and m are integers greater than 1, each row in the plaintext data table corresponds to a piece of plaintext data, and each column corresponds to one feature in the plaintext data.
In this alternative embodiment, the data type of each column of data in the plaintext data table may be queried, and the columns whose type is not the auto-increment primary key may be used as the initial plaintext data, where each initial plaintext data includes multiple dimensions. The data types include the auto-increment primary key, varchar (variable-length string), int (integer value), float (single-precision floating point) and double (double-precision floating point).
In this optional embodiment, the value of the auto-increment primary key is a positive integer, and the difference between two adjacent keys is 1. The smaller the value of the auto-increment primary key, the earlier the corresponding plaintext data was stored in the server cluster, and the more preferentially that plaintext data should be encrypted.
In this optional embodiment, the data type and length corresponding to each dimension may be queried, and the data size of each dimension determined from the length and the data type. Illustratively, when the type is varchar, the size of the dimension data is its declared length plus 1 byte; when the type is int, the size is 4 bytes; when the type is double, the size is 8 bytes; and when the type is float, the size is 4 bytes.
In this alternative embodiment, the sum of the data sizes of all dimensions in each piece of plaintext data may be calculated as the data size corresponding to the piece of plaintext data. The larger the data size is, the larger the storage space occupied by the plaintext data in the server cluster is, and the more time is consumed for encrypting the plaintext data.
Therefore, initial plaintext data is screened out by inquiring the data type of each column in the plaintext data table, the size of each initial plaintext data is calculated according to the type and the length of the data in each initial plaintext data, data guidance is provided for the batching of subsequent data, the data size of each batch is convenient to balance, and the efficiency of subsequent data encryption can be improved.
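The per-type sizing described above can be sketched as follows (the example row layout at the end is hypothetical):

```python
# Approximate per-dimension byte sizes, as described in this section:
# varchar occupies its declared length plus 1 byte; int and float occupy
# 4 bytes; double occupies 8 bytes.
def dimension_size(data_type: str, length: int = 0) -> int:
    if data_type == "varchar":
        return length + 1
    if data_type in ("int", "float"):
        return 4
    if data_type == "double":
        return 8
    raise ValueError(f"unknown type: {data_type}")

def plaintext_data_size(columns) -> int:
    """Sum the sizes of all dimensions of one initial plaintext data."""
    return sum(dimension_size(t, n) for t, n in columns)

# Hypothetical row: a varchar(20) name, an int id, a double amount.
size = plaintext_data_size([("varchar", 20), ("int", 0), ("double", 0)])
# 21 + 4 + 8 = 33 bytes
```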
S11, dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data.
In an optional embodiment, the dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size includes:
a, uniformly dividing the initial plaintext data into a plurality of candidate batches according to a preset dividing threshold, wherein each candidate batch comprises a plurality of initial plaintext data and the number of candidate batches is equal to the dividing threshold. For example, the preset dividing threshold may initially be set to 20, yielding 20 candidate batches.
b, respectively calculating the sum of the data sizes of all the initial plaintext data in each candidate batch as the evaluation value of that batch.
c, calculating the variance of all the evaluation values; if the variance is smaller than a preset first termination threshold, the data sizes of the candidate batches differ only slightly, and the candidate batches are taken as the final batches, completing the batching of the initial plaintext data; if the variance is not smaller than the first termination threshold, the data sizes of the candidate batches differ significantly, so the preset dividing threshold is updated and steps a to c are repeated. The preset first termination threshold may be 0.001, and updating the preset dividing threshold may consist of reducing it by 1;
d, if no final batches have been obtained once the number of repeated divisions reaches a preset second termination threshold, taking the candidate batches with the minimum variance as the final batches, wherein the preset second termination threshold is equal to the initial value of the preset dividing threshold.
Therefore, the original data are divided for multiple times, and the variance of the batch data after each division is recorded so as to perform multi-batch operation, so that the small difference between the data sizes of each batch can be ensured, the time for performing data encryption on each batch subsequently is balanced, and the efficiency of subsequent data encryption can be improved.
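Steps a to d above can be sketched as follows (representing each initial plaintext data as an (auto-increment key, data size) pair is our assumption):

```python
import statistics

def batch_by_variance(items, threshold=20, var_limit=0.001):
    """Split (key, size) items into batches whose total sizes are balanced.

    Repeatedly divides the items evenly into `threshold` candidate batches
    and accepts the division once the variance of the per-batch size sums
    falls below var_limit (step c); otherwise the threshold is reduced by 1
    and the division retried. The lowest-variance division seen is kept as
    the fallback (step d).
    """
    items = sorted(items)                      # order by auto-increment key
    best, best_var = None, float("inf")
    for _ in range(threshold):                 # second termination threshold
        if threshold < 1:
            break
        # step a: uniform division into `threshold` candidate batches
        per = -(-len(items) // threshold)      # ceiling division
        batches = [items[i:i + per] for i in range(0, len(items), per)]
        # step b: evaluation value = total data size of each batch
        evals = [sum(size for _, size in b) for b in batches]
        # step c: variance test
        var = statistics.pvariance(evals) if len(evals) > 1 else 0.0
        if var < best_var:
            best, best_var = batches, var
        if var < var_limit:
            return batches
        threshold -= 1                         # update the dividing threshold
    # step d: fall back to the lowest-variance division
    return best
```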
And S12, calculating a load value of each server node in the server cluster, and distributing the initial plaintext data of the plurality of batches to the server nodes according to the load value.
In an optional embodiment, the calculating a load value of each server node in the server cluster, and distributing the plurality of batches of initial plaintext data to the server nodes according to the load value includes:
inquiring the number of processing tasks of each server node in the server cluster as a load value of each server node;
taking the minimum auto-increment primary key among all initial plaintext data in each batch as the index of that batch;
sorting the batches according to the sequence of the indexes from small to large to obtain the sequence of each batch, and sorting the server nodes according to the sequence of the load values from small to large to obtain the sequence of each server node;
and sending the batches to server nodes with the same sequence for subsequent data encryption processing.
The smaller the number of tasks a server node is processing, the lower its load and the more preferentially it is allocated data processing tasks, where the data processing tasks include data encryption and data transmission.
In this optional embodiment, the smaller the index, the earlier all the initial plaintext data in the corresponding batch were stored in the server cluster, and the more preferentially the initial plaintext data in that batch should be encrypted.
Therefore, the batches corresponding to the smaller indexes are distributed to the server nodes corresponding to the smaller load values according to the load values of the server nodes and the indexes of each batch, so that the load balance of the server cluster is ensured, and the stability of data transmission is improved.
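The rank-matching distribution above can be sketched as follows (the wrap-around for surplus batches is our assumption, since the text only pairs equal-rank batches and nodes):

```python
def distribute_batches(batches, node_loads):
    """Pair batches of initial plaintext data with server nodes by rank.

    batches: list of batches, each a list of (auto_increment_key, size)
    node_loads: dict mapping node name -> number of tasks in progress

    The index of a batch is the minimum auto-increment key it contains;
    batches are sorted by index ascending and nodes by load ascending,
    so the oldest data goes to the least-loaded node.
    """
    indexed = sorted(batches, key=lambda b: min(k for k, _ in b))
    nodes = sorted(node_loads, key=node_loads.get)
    # Wrap around when there are more batches than nodes (our assumption).
    return [(nodes[i % len(nodes)], batch)
            for i, batch in enumerate(indexed)]
```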
And S13, encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data.
In an optional embodiment, the encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data includes:
a, encrypting the initial plaintext data within a preset encryption duration to obtain ciphertext data, wherein the preset encryption duration may be 2 hours, 3 hours, 4 hours, or the like, which is not limited in this application.
In this optional embodiment, for the initial plaintext data in each server node, the initial plaintext data may be encrypted within the preset encryption duration in ascending order of their auto-increment primary keys to obtain the ciphertext data corresponding to each initial plaintext data. The encryption method may be the RSA algorithm or another existing encryption algorithm, which is not limited in this application.
b, to avoid overloading the server by executing a large number of data encryption tasks for a long time, data encryption is suspended when the preset encryption duration ends, and the time at which each piece of ciphertext data finished encrypting may be recorded as the index of that ciphertext data.
c, to restart the data encryption task while server load is low and thereby process the data asynchronously, when initial plaintext data remains unencrypted after the preset encryption duration ends, the resource occupancy rate of the server cluster is continuously compared with a preset occupancy threshold; if the occupancy rate is lower than the threshold, the preset encryption duration is updated according to the resource occupancy rate of the server cluster and the current time, and steps a to c are repeated to continue encrypting until all the initial plaintext data are encrypted, whereupon data encryption stops. The preset occupancy threshold may be 50%.
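The time-boxed encrypt-pause-resume loop of steps a to c can be sketched as follows (the encryption function, occupancy probe, and duration-update hook are caller-supplied placeholders, not the patent's concrete choices):

```python
import time

def timeboxed_encrypt(pending, encrypt, duration, occupancy,
                      occ_threshold=0.5, next_duration=lambda: 3600.0):
    """Encrypt items until the time budget runs out, then wait for low load.

    pending: list of plaintext items in ascending auto-increment key order
    encrypt: function plaintext -> ciphertext (e.g. an RSA wrapper)
    duration: initial preset encryption duration in seconds
    occupancy: zero-argument function returning current cluster occupancy
    next_duration: supplies the updated duration (the patent derives it
    from occupancy and time of day; here it is a caller-supplied hook)

    Returns a list of (ciphertext, finished_at) pairs, where finished_at
    is the completion timestamp used as the ciphertext's index.
    """
    done = []
    while pending:
        deadline = time.monotonic() + duration        # step a: time budget
        while pending and time.monotonic() < deadline:
            ct = encrypt(pending.pop(0))
            done.append((ct, time.time()))            # step b: record index
        if pending:                                   # step c: wait for load
            while occupancy() >= occ_threshold:       # to drop, then resume
                time.sleep(1.0)
            duration = next_duration()
    return done
```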
In an optional embodiment, the updating the preset encryption duration according to the resource occupancy of the server cluster and the current time to obtain the updated preset encryption duration includes:
inquiring hardware information of the server cluster, and calculating the resource occupancy rate of the server cluster according to the hardware information;
calculating the time difference between the current time and a preset reference time;
normalizing the resource occupancy rate and the time difference to obtain a normalized resource occupancy rate and a normalized time difference;
inputting the normalized resource occupancy rate and the normalized time difference into a preset integration function to calculate a duration adjustment ratio;
and calculating the product of the preset encryption duration and the adjustment ratio as the updated preset encryption duration.
In this optional embodiment, the resource occupancy rate S of the server cluster may be calculated from hardware information of the server cluster, including the CPU occupancy rate S1, the cache occupancy rate S2, the storage occupancy rate S3, and the I/O occupancy rate S4; the mean of S1, S2, S3, and S4 is calculated as the resource occupancy rate of the server cluster. A smaller resource occupancy rate indicates a lighter load on the server cluster at the current time, so the data encryption duration may be increased.
In this optional embodiment, the time difference between the current time and a preset reference time may be recorded as T, where the preset reference time may be 12 midnight local time at the location of the server cluster; a smaller time difference indicates that the local time at the location of the server cluster is later in the night, when the data transmission demand is lower, so the data encryption duration may be increased.
In this optional embodiment, in order to eliminate the dimensional difference between the time difference and the resource occupancy rate, normalization may be performed on the time difference and the resource occupancy rate to obtain a normalized time difference, denoted TG, and a normalized resource occupancy rate, denoted SG. The preset normalization algorithm may be an existing normalization algorithm such as max normalization, min normalization, the arctangent function, or the S-shaped growth curve (sigmoid) function, which is not limited in this application.
In this alternative embodiment, the normalized time difference and the normalized resource occupancy may be input into a preset integration function to calculate an adjustment ratio, where the preset integration function satisfies the following relation:
X = f(TG, SG) (the specific form of the integration function is given only as an equation image, Figure BDA0003802250300000081, in the original publication)
wherein X represents the adjustment ratio; TG represents the normalized time difference; SG represents the normalized resource occupancy.
In this alternative embodiment, the product of the preset encryption duration and the adjustment ratio X may be calculated as the updated preset encryption duration.
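The duration-update pipeline can be sketched as below. Note that the patent gives the integration function only as an equation image, so the relation `X = 2 - (TG + SG) / 2` used here is purely an assumed placeholder that preserves the stated monotonicity (smaller occupancy and smaller time difference both lengthen the duration); the normalization of the time difference by a one-day span is likewise an assumption:

```python
def update_encryption_duration(cpu, cache, storage, io, now_s, reference_s,
                               base_duration_s, max_diff_s=86_400.0):
    """Update the preset encryption duration from cluster occupancy and time.

    cpu/cache/storage/io are the occupancy rates S1..S4 in [0, 1]; now_s and
    reference_s are seconds (the reference being midnight local time).
    """
    occupancy = (cpu + cache + storage + io) / 4   # S = mean of S1..S4
    time_diff = abs(now_s - reference_s)           # T, vs. the midnight reference
    sg = min(occupancy, 1.0)                       # normalized occupancy SG
    tg = min(time_diff / max_diff_s, 1.0)          # normalized time difference TG
    x = 2 - (tg + sg) / 2                          # ASSUMED integration function
    return base_duration_s * x                     # updated preset duration
```

With both inputs at zero the duration doubles; with both at their maxima it stays unchanged, mirroring the "lower load and later hour allow longer encryption" reasoning above.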
Therefore, data encryption is performed in periods when the server load is small, and the encryption duration is adjusted in real time according to the current time and resource occupancy rate of the server cluster. This avoids any negative impact of the data encryption task on data transmission, enables the data to be encrypted without interruption during low-load periods in an asynchronous manner, and improves the stability of both data encryption and data transmission.
And S14, marking the subject identifier of the ciphertext data according to the plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node.
In an optional embodiment, the marking the subject identifier of the ciphertext data according to the plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node includes:
classifying the initial plaintext data according to a pre-trained initial plaintext classification model to obtain a category of each initial plaintext data, and using the category as a subject identifier of ciphertext data corresponding to the initial plaintext data;
sequencing the ciphertext data according to the sequence of the indexes of the ciphertext data from early to late to obtain the sequence of each ciphertext data, and sequencing the server nodes according to the sequence of the load values from small to large to obtain the sequence of each server node;
and distributing the ciphertext data to the server nodes with the same sequence for storage.
In this optional embodiment, the pre-trained initial plaintext classification model may be an existing classification model such as XGBoost (Extreme Gradient Boosting algorithm), lightGBM (Light Gradient Boosting Machine), GBDT (Gradient Boosting Decision Tree), and the like, which is not limited in this application. The input of the pre-trained initial plaintext classification model is the initial plaintext data, and the output of the pre-trained initial plaintext classification model is the category of the initial plaintext data, wherein the category comprises user order information, user logistics information and user transaction information.
In this alternative embodiment, the category of the initial plaintext data may be used as the subject identifier of the corresponding ciphertext data.
In this optional embodiment, the ciphertext data may be sorted from the earliest index to the latest; an earlier index indicates that the ciphertext data was encrypted earlier and may therefore be pushed to the user earlier. The server nodes are sorted from the smallest load value to the largest; the smaller the load value, the higher the priority with which the server node can currently process data.
In this alternative embodiment, the ciphertext data may be distributed to the server nodes in the same order for storage.
In this way, the load values of the server nodes and the indexes of the ciphertext data are respectively calculated, the server nodes are sorted according to the load values, the ciphertext data are sorted according to the indexes, the ciphertext data are distributed to the server nodes with the same order again, the server cluster is ensured to be in a load balancing state, and the stability of the server cluster can be improved.
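The sort-and-pair distribution described above can be sketched as follows, assuming one ciphertext per node per round and hypothetical `index`, `load`, and `stored` fields:

```python
def distribute_ciphertexts(ciphertexts, nodes):
    """Pair ciphertexts (earliest index first) with nodes (lightest load first).

    ciphertexts: list of {"index": encryption_time, ...}
    nodes:       list of {"load": task_count, "stored": [...]}
    Ciphertext i in index order is stored on node i in load order, so the
    earliest-encrypted data lands on the least-loaded node.
    """
    by_index = sorted(ciphertexts, key=lambda c: c["index"])  # early -> late
    by_load = sorted(nodes, key=lambda n: n["load"])          # small -> large
    for ct, node in zip(by_index, by_load):
        node["stored"].append(ct)
    return by_load
```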
And S15, pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identification of the ciphertext data.
In an optional embodiment, the pushing the plaintext data corresponding to the ciphertext data to the user according to the subject identifier of the ciphertext data includes:
inquiring the type of a preset user requirement, wherein the type of the user requirement comprises user order information, user logistics information and user transaction information;
in each server node of the server cluster, sequentially querying the subject identifier corresponding to each piece of ciphertext data from the latest index to the earliest; if the subject identifier is consistent with the category of the user requirement, decrypting the ciphertext data according to a preset secret key to obtain the initial plaintext data, and pushing the initial plaintext data to the user;
if the subject identifier is inconsistent with the category of the user requirement, continuing to query the subject identifier of each piece of ciphertext data, and stopping the query once all ciphertext data have been queried;
and if no ciphertext data corresponding to the category of the user requirement can be found, sending an information delay prompt to the user.
In this optional embodiment, the category of the user requirement refers to a category of a request for querying data sent by a user to a server, and the user requirement includes user order information, user logistics information, and user transaction information.
In this optional embodiment, in each server node of the server cluster, the subject identifier of each piece of ciphertext data may be sequentially queried from late to early according to an index of the piece of ciphertext data, if the subject identifier is consistent with the category required by the user, the querying is stopped, the piece of ciphertext data is decrypted according to a preset secret key to obtain initial plaintext data, and the initial plaintext data is pushed to the user.
In this optional embodiment, if the topic identifier is not consistent with the category of the user requirement, the query is continued until all ciphertext data is queried.
In this optional embodiment, if ciphertext data corresponding to the user requirement is not queried after all ciphertext data is queried, an information delay hint may be sent to the user, for example, the information delay hint may include: "your order information is to be updated, please try again later", "your transaction information is delayed, please check later", "your logistics information is still being updated, please check later".
Therefore, the corresponding plaintext information is pushed for the user by comparing the user requirement with the theme identification of the ciphertext data, the target of the user can be quickly positioned, and the information inquiring efficiency of the user is improved.
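A minimal sketch of the late-to-early topic lookup with the delay-prompt fallback might look like this; the field names, the `decrypt_fn` stand-in, and the prompt wording are illustrative assumptions:

```python
def query_by_topic(nodes, category, decrypt_fn):
    """Scan every node's ciphertexts from the latest index to the earliest;
    on the first topic match, decrypt and return the plaintext, otherwise
    fall back to an information delay prompt."""
    for node in nodes:
        for ct in sorted(node["stored"], key=lambda c: c["index"], reverse=True):
            if ct["topic"] == category:           # subject identifier matches
                return decrypt_fn(ct["ciphertext"])
    # no matching ciphertext on any node: send the delay prompt instead
    return "Your %s is still being updated, please check later" % category
```

Scanning latest-first means the user receives the most recently encrypted record of the requested category.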
According to the artificial-intelligence-based data asynchronous processing method, the initial plaintext data is divided into a plurality of batches according to its data size, and the batches are distributed to the server nodes according to each batch's index to keep the server cluster load-balanced. The initial plaintext data is continuously encrypted within the encryption duration to obtain ciphertext data, and the encryption duration is continuously updated to maintain the stability of the server cluster. Finally, each piece of ciphertext data is marked with a subject identifier, which is used to push the initial plaintext corresponding to the ciphertext data to the user. The data query task can thus be processed asynchronously, so the efficiency of data query can be improved.
Fig. 2 is a functional block diagram of a preferred embodiment of an artificial intelligence based data asynchronous processing device according to an embodiment of the present application. The artificial intelligence based data asynchronous processing device 11 comprises a preprocessing unit 110, a batching unit 111, a distribution unit 112, an encryption unit 113, a storage unit 114 and a pushing unit 115. The module/unit referred to in this application refers to a series of computer program segments that can be executed by the processor 13 and that can perform a fixed function, and that are stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In an alternative embodiment, the preprocessing unit 110 is configured to preprocess a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, where each initial plaintext data corresponds to an auto-increment primary key and a data size.
In an optional embodiment, the preprocessing unit 110 preprocesses the plaintext data table in the preset server cluster to obtain a plurality of initial plaintext data, including:
the plaintext data table comprises a plurality of rows and a plurality of columns; the type of each column of data in the plaintext data table is queried, the data in the columns whose type is not the auto-increment primary key is taken as the initial plaintext data, and each initial plaintext data comprises a plurality of dimensions;
querying the data type and length corresponding to each dimension, and querying the data size of each dimension according to the length and the data type;
and taking the sum of the data sizes corresponding to all dimensions in each initial plaintext data as the data size of the initial plaintext data.
In this optional embodiment, the function of the server cluster is to store plaintext data and forward the data, the server cluster includes a plurality of server nodes, and data transmission can be performed between each server node.
In this optional embodiment, the plaintext data in the server cluster is stored in a format of a plaintext data table, where the plaintext data table includes n rows and m columns, where n and m are integers greater than 1, each row in the plaintext data table corresponds to a piece of plaintext data, and each column corresponds to a feature in the plaintext data.
In this alternative embodiment, the data type of each column of data in the plaintext data table may be queried, and the columns whose type is not the auto-increment primary key may be used as the initial plaintext data, where each initial plaintext data includes multiple dimensions. The data types include the auto-increment primary key, varchar (variable-length string), int (integer value), float (single-precision floating point), and double (double-precision floating point).
In this optional embodiment, the value of the auto-increment primary key is a positive integer, and the difference between two adjacent auto-increment primary keys is 1. The smaller the value of the auto-increment primary key, the earlier the plaintext data corresponding to that key was stored in the server cluster, and the more preferentially that plaintext data should be encrypted.
In this optional embodiment, the data type and length corresponding to each dimension may be queried, and the data size of each dimension may be determined from the length and the data type. Illustratively, when the type is varchar, the size of the dimension data is its length plus 1 byte; when the type is int, the size of the dimension data is 4 bytes; when the type is double, the size of the dimension data is 8 bytes; and when the type is float, the size of the dimension data is 4 bytes.
In this alternative embodiment, the sum of the data sizes of all dimensions in each piece of plaintext data may be calculated as the data size corresponding to the piece of plaintext data. The larger the data size is, the larger the storage space occupied by the plaintext data in the server cluster is, and the more time is consumed for encrypting the plaintext data.
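The per-dimension size rules above can be sketched as a small helper. The column representation is hypothetical; the varchar rule follows the "length plus 1" convention stated in the text, and the auto-increment key column is skipped because it is not part of the plaintext data:

```python
# Byte sizes per column type as stated in the text: int and float are
# 4 bytes, double is 8 bytes; varchar is handled as length + 1 below.
TYPE_SIZES = {"int": 4, "float": 4, "double": 8}

def row_data_size(columns):
    """Sum the per-dimension sizes of one initial-plaintext row.

    columns: list of {"type": ..., "length": ...} dicts (length used for varchar).
    """
    total = 0
    for col in columns:
        if col["type"] == "auto_increment":  # primary key column is excluded
            continue
        if col["type"] == "varchar":
            total += col["length"] + 1       # length + 1 byte
        else:
            total += TYPE_SIZES[col["type"]]
    return total
```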
In an alternative embodiment, the batching unit 111 is configured to divide the initial plaintext data into a plurality of batches according to the autonomy key and the data size, and each batch contains a plurality of initial plaintext data.
In an alternative embodiment, the batching unit 111 divides the initial plaintext data into a plurality of batches according to the autonomous key and the data size, and includes:
a, uniformly dividing the initial plaintext data into a plurality of alternative batches according to a preset dividing threshold, wherein each alternative batch comprises a plurality of initial plaintext data, and the number of the alternative batches is equal to the dividing threshold. For example, the preset partition threshold may be initially set to 20, and then 20 candidate batches may be obtained.
And b, respectively calculating the sum of the data sizes of all the initial plaintext data in each alternative batch to serve as the evaluation value of the batch.
c, calculating the variance of all the evaluation values; if the variance is smaller than a preset first termination threshold, indicating that the data sizes of the candidate batches differ little, the candidate batches are taken as the final batches, completing the batching of the initial plaintext data. If the variance is not smaller than the preset first termination threshold, indicating that the data sizes of the candidate batches differ greatly, the preset dividing threshold is updated to obtain an updated dividing threshold, and steps a to c are repeated to obtain new candidate batches of the initial plaintext data. The preset first termination threshold may be 0.001, and updating the preset dividing threshold may be reducing it by 1;
d, if the final batches are not obtained after the number of times of repeated division reaches a preset second termination threshold, taking the candidate batches with the minimum variance as the final batches, wherein the preset second termination threshold is equal to the initial value of the preset division threshold.
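Steps a to d can be sketched as follows. The round-robin split and the parameter names are assumptions; the variance test against a first termination threshold of 0.001, the dividing threshold shrinking by 1 per round, and the fall-back to the minimum-variance split follow the text:

```python
from statistics import pvariance

def split_into_batches(records, threshold=20, var_limit=1e-3, max_rounds=20):
    """Split records into `threshold` batches; shrink the dividing threshold
    until the variance of the per-batch data sizes falls below var_limit,
    otherwise keep the minimum-variance split seen (step d)."""
    best, best_var = None, float("inf")
    for k in range(threshold, max(threshold - max_rounds, 0), -1):
        batches = [records[i::k] for i in range(k)]            # even split
        scores = [sum(r["size"] for r in b) for b in batches]  # evaluation values
        var = pvariance(scores)
        if var < best_var:
            best, best_var = batches, var
        if var < var_limit:          # first termination threshold reached
            break
    return best
```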
In an optional embodiment, the distributing unit 112 is configured to calculate a load value of each server node in the server cluster, and distribute the multiple batches of initial plaintext data to the server nodes according to the load value.
In an optional embodiment, the distributing unit 112 calculates a load value of each server node in the server cluster, and distributes the multiple batches of initial plaintext data to the server nodes according to the load value, including:
inquiring the number of processing tasks of each server node in the server cluster as a load value of each server node;
taking the minimum value of the corresponding self-increment key of all initial plaintext data in each batch as the index of the batch;
sorting the batches according to the sequence of the indexes from small to large, and sorting the server nodes according to the sequence of the load values from small to large;
and sending the batch to server nodes with the same sequence for subsequent data encryption processing.
The smaller the number of processing tasks on a server node, the lower the load of that server node, and the more preferentially data processing tasks, including data encryption and data transmission, should be allocated to it.
In this optional embodiment, the smaller the index is, the earlier all the initial plaintext data in the batch corresponding to the index is stored in the server cluster, and the earlier the initial plaintext data in the batch should be encrypted.
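The batch-to-node assignment performed by the distributing unit can be sketched as below, assuming records carry a hypothetical `pk` (auto-increment primary key) field and nodes carry `name` and `load` fields:

```python
def assign_batches(batches, nodes):
    """Give each batch an index equal to its smallest auto-increment key,
    sort batches by index ascending and nodes by load ascending, and pair
    them up so the oldest batch goes to the least-loaded node."""
    indexed = sorted(batches, key=lambda b: min(r["pk"] for r in b))
    by_load = sorted(nodes, key=lambda n: n["load"])
    return {node["name"]: batch for node, batch in zip(by_load, indexed)}
```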
In an alternative embodiment, the encryption unit 113 is configured to encrypt the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data.
In an optional embodiment, the encrypting unit 113 encrypts the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data, including:
and a, encrypting the initial plaintext data within a preset encryption duration to obtain ciphertext data, wherein the preset encryption duration may be, for example, 2 hours, 3 hours, or 4 hours, which is not limited in this application.
In this optional embodiment, for the initial plaintext data in each server node, the initial plaintext data may be encrypted within a preset encryption duration according to a sequence from small to large of a self-increment primary key of the initial plaintext data to obtain ciphertext data corresponding to each initial plaintext data, and the encryption method may be an RSA encryption algorithm or other existing encryption algorithms, which is not limited in this application.
b, in order to prevent the server load from becoming too high because a large number of data encryption tasks run for a long time, data encryption is suspended when the preset encryption duration ends, yielding a plurality of ciphertext data, and the time at which each piece of ciphertext data finished being encrypted may be recorded as the index of that ciphertext data.
c, in order to restart the data encryption task when the server load is low and thereby process the data asynchronously, when some initial plaintext data remains unencrypted after the preset encryption duration ends, the resource occupancy rate of the server cluster is continuously compared with a preset occupancy rate threshold; if the occupancy rate is lower than the preset occupancy rate threshold, the preset encryption duration is updated according to the resource occupancy rate of the server cluster and the current time to obtain an updated preset encryption duration, and steps a to c are repeated to continue encrypting the data until all the initial plaintext data are encrypted, at which point data encryption stops. The preset occupancy rate threshold may be, for example, 50%.
In an optional embodiment, the updating the preset encryption duration according to the resource occupancy of the server cluster and the current time to obtain the updated preset encryption duration includes:
inquiring hardware information of the server cluster, and calculating the resource occupancy rate of the server cluster according to the hardware information;
calculating the time difference between the current time and a preset reference time;
normalizing the resource occupancy rate and the time difference to obtain a normalized resource occupancy rate and a normalized time difference;
inputting the normalized resource occupancy rate and the normalized time difference into a preset integration function to calculate a duration adjustment ratio;
and calculating the product of the preset encryption duration and the adjustment ratio as the updated preset encryption duration.
In this optional embodiment, the resource occupancy rate S of the server cluster may be calculated from hardware information of the server cluster, including the CPU occupancy rate S1, the cache occupancy rate S2, the storage occupancy rate S3, and the I/O occupancy rate S4; the mean of S1, S2, S3, and S4 is calculated as the resource occupancy rate of the server cluster. A smaller resource occupancy rate indicates a lighter load on the server cluster at the current time, so the data encryption duration may be increased.
In this optional embodiment, the time difference between the current time and a preset reference time may be recorded as T, where the preset reference time may be 12 midnight local time at the location of the server cluster; a smaller time difference indicates that the local time at the location of the server cluster is later in the night, when the data transmission demand is lower, so the data encryption duration may be increased.
In this optional embodiment, in order to eliminate the dimensional difference between the time difference and the resource occupancy rate, a normalization process may be performed on the time difference and the resource occupancy rate according to a preset normalization algorithm to obtain a normalized time difference and a normalized resource occupancy rate, where the normalized time difference may be denoted by TG and the normalized resource occupancy rate may be denoted by SG, and the preset normalization algorithm may be an existing normalization algorithm such as a maximization algorithm, a minimization algorithm, an arctan function algorithm, an S-type growth curve algorithm, and the like, which is not limited in this application.
In this alternative embodiment, the normalized time difference and the normalized resource occupancy may be input into a preset integration function to calculate an adjustment ratio, where the preset integration function satisfies the following relation:
X = f(TG, SG) (the specific form of the integration function is given only as an equation image, Figure BDA0003802250300000141, in the original publication)
wherein X represents the adjustment ratio; TG represents the normalized time difference; SG represents the normalized resource occupancy.
In this alternative embodiment, the product of the preset encryption duration and the adjustment ratio X may be calculated as the updated preset encryption duration.
In an optional embodiment, the storage unit 114 is configured to mark the subject identifier of the ciphertext data according to the plaintext data, and distribute the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node.
In an optional embodiment, the storage unit 114 marking the subject identifier of the ciphertext data according to the plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node, includes:
classifying the initial plaintext data according to a pre-trained initial plaintext classification model to obtain a category of each initial plaintext data, and using the category as a subject identifier of ciphertext data corresponding to the initial plaintext data;
sequencing the ciphertext data according to the sequence of the indexes of the ciphertext data from early to late, and sequencing the server nodes according to the sequence of the load values from small to large;
and distributing the ciphertext data to the server nodes with the same sequence for storage.
In this optional embodiment, the pre-trained initial plaintext classification model may be an existing classification model such as XGBoost (Extreme Gradient Boosting algorithm), LightGBM (Light Gradient Boosting Machine), GBDT (Gradient Boosting Decision Tree), and the like, which is not limited in this application. The input of the pre-trained initial plaintext classification model is the initial plaintext data, and the output is the category of the initial plaintext data, wherein the category comprises user order information, user logistics information and user transaction information.
In this alternative embodiment, the category of the initial plaintext data may be used as the subject identifier of the corresponding ciphertext data.
In this optional embodiment, the ciphertext data may be sorted from the earliest index to the latest; an earlier index indicates that the ciphertext data was encrypted earlier and may therefore be pushed to the user earlier. The server nodes may be sorted from the smallest load value to the largest; the smaller the load value, the higher the priority with which the server node can currently process data.
In this alternative embodiment, the ciphertext data may be distributed to the server nodes in the same order for storage.
In an optional embodiment, the pushing unit 115 is configured to push plaintext data corresponding to the ciphertext data to the user according to the subject identifier of the ciphertext data.
In an optional embodiment, the pushing unit 115 pushes the plaintext data corresponding to the ciphertext data to the user according to the subject identifier of the ciphertext data, including:
inquiring the type of a preset user requirement, wherein the type of the user requirement comprises user order information, user logistics information and user transaction information;
in each server node of the server cluster, sequentially querying the subject identifier corresponding to each piece of ciphertext data from the latest index to the earliest; if the subject identifier is consistent with the category of the user requirement, decrypting the ciphertext data according to a preset secret key to obtain the initial plaintext data, and pushing the initial plaintext data to the user;
if the subject identifier is inconsistent with the category of the user requirement, continuing to query the subject identifier of each piece of ciphertext data, and stopping the query once all ciphertext data have been queried;
and if no ciphertext data corresponding to the category of the user requirement can be found, sending an information delay prompt to the user.
In this optional embodiment, the category of the user requirement refers to a category of a request for querying data sent by a user to a server, and the user requirement includes user order information, user logistics information, and user transaction information.
In this optional embodiment, in each server node of the server cluster, the subject identifier of each piece of ciphertext data may be sequentially queried from late to early according to the index of the piece of ciphertext data, if the subject identifier is consistent with the category required by the user, the querying is stopped, the piece of ciphertext data is decrypted according to a preset secret key to obtain initial plaintext data, and the initial plaintext data is pushed to the user.
In this optional embodiment, if the topic identifier is not consistent with the category of the user requirement, the query is continued until all ciphertext data is queried.
In this optional embodiment, if ciphertext data corresponding to the user requirement is not queried after all ciphertext data are queried, an information delay hint may be sent to the user, for example, the information delay hint may include: "your order information is to be updated, please try again later", "your transaction information is delayed, please check later", "your logistics information is still being updated, please check later".
The artificial-intelligence-based data asynchronous processing device divides the plaintext data into a plurality of batches according to the data size of the initial plaintext data, and distributes the batches to the server nodes according to each batch's index to keep the server cluster load-balanced. It continuously encrypts the initial plaintext data within the encryption duration to obtain ciphertext data, and continuously updates the encryption duration to maintain the stability of the server cluster. Finally, it marks each piece of ciphertext data with a subject identifier and uses the subject identifier to push the initial plaintext corresponding to the ciphertext data to the user. The data query task can thus be processed asynchronously, so the efficiency of data query can be improved.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1 comprises a memory 12 and a processor 13. The memory 12 is used for storing computer readable instructions, and the processor 13 is used for executing the computer readable instructions stored in the memory to implement the artificial intelligence based data asynchronous processing method of any of the above embodiments.
In an alternative embodiment, the electronic device 1 further comprises a bus, a computer program stored in the memory 12 and executable on the processor 13, for example an artificial intelligence based data asynchronous processing program.
Fig. 3 only shows the electronic device 1 with components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
In conjunction with fig. 1, the memory 12 in the electronic device 1 stores a plurality of computer-readable instructions to implement an artificial intelligence based asynchronous processing method of data, and the processor 13 can execute the plurality of instructions to implement:
preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size;
dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data;
calculating the load value of each server node in the server cluster, wherein the server cluster comprises a plurality of server nodes, and distributing the plurality of batches of initial plaintext data to the server nodes according to the load values;
encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data;
marking the subject identification of the ciphertext data according to the initial plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node;
and pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identification of the ciphertext data.
Specifically, the specific implementation method of the instruction by the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the electronic device 1, and does not constitute a limitation to the electronic device 1, the electronic device 1 may have a bus-type structure or a star-type structure, and the electronic device 1 may further include more or less hardware or software than those shown in the figures, or different component arrangements, for example, the electronic device 1 may further include an input and output device, a network access device, etc.
It should be noted that the electronic device 1 is only an example; other existing or future electronic products that can be adapted to the present application should also fall within the scope of protection of the present application and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes a flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 12 may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the memory 12 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1. Further, the memory 12 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 12 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of an artificial intelligence based data asynchronous processing program, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 13 may be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 13 is the control unit of the electronic device 1: it connects the various components of the whole electronic device 1 by using various interfaces and lines, and executes the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 12 (for example, an artificial intelligence based data asynchronous processing program) and calling the data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 1 and various types of application programs installed. The processor 13 executes the application program to implement the steps of the various artificial intelligence based data asynchronous processing method embodiments described above, such as the steps shown in FIG. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to accomplish the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the electronic device 1. For example, the computer program may be divided into a preprocessing unit 110, a batching unit 111, a distribution unit 112, an encryption unit 113, a storage unit 114, a push unit 115.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a processor (processor) to execute parts of the artificial intelligence based data asynchronous processing method according to the embodiments of the present application.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of the embodiments of the methods described above may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), and other memories.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain referred to in the present application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one arrow is shown in FIG. 3, but this does not indicate only one bus or one type of bus. The bus is arranged to enable communication between the memory 12, the at least one processor 13, and the other components.
Although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
The embodiment of the present application further provides a computer-readable storage medium (not shown), in which computer-readable instructions are stored, and the computer-readable instructions are executed by a processor in the electronic device to implement the artificial intelligence based data asynchronous processing method according to any of the above embodiments.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the specification may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting, and although the present application is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (10)

1. An artificial intelligence based data asynchronous processing method is characterized by comprising the following steps:
preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size;
dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data;
calculating a load value of each server node in the server cluster, and distributing the multiple batches of initial plaintext data to the server nodes according to the load values;
encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data;
marking the subject identification of the ciphertext data according to the initial plaintext data, and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node;
and pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identification of the ciphertext data.
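As an illustrative, non-limiting aid, the six steps of claim 1 can be traced with a toy end-to-end run. Every helper here (the XOR "encryption", the first-word "subject classification", the stride-based batching) is a stand-in chosen only to make the flow executable; none of them is the technique the patent actually claims.

```python
def run(table, nodes, wanted):
    """Toy pass over the six claimed steps.
    table: iterable of (auto_increment_key, payload_text);
    nodes: {node_name: current_load}; wanted: subject category to push."""
    rows = list(table)                                          # 1. preprocess: (key, payload) pairs
    batches = [rows[i::len(nodes)] for i in range(len(nodes))]  # 2. batch (toy even split)
    order = sorted(nodes, key=nodes.get)                        # 3. lightest-loaded node first
    plan = dict(zip(order, batches))
    store = []
    for node, batch in plan.items():                            # 4.-5. encrypt, tag subject, store
        for _key, text in batch:
            ciphertext = bytes(b ^ 0x5A for b in text.encode()) # toy XOR "encryption"
            store.append((node, text.split()[0], ciphertext))   # toy subject id = first word
    for _node, subject, ciphertext in store:                    # 6. push matching plaintext
        if subject == wanted:
            return bytes(b ^ 0x5A for b in ciphertext).decode()
    return "information delay prompt"
```

Because XOR with a fixed byte is an involution, decrypting is the same operation as encrypting; the final branch mirrors claim 7's delay prompt when no subject matches.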
2. The artificial intelligence based data asynchronous processing method as claimed in claim 1, wherein said pre-processing the plaintext data tables in the preset server cluster to obtain a plurality of initial plaintext data comprises:
wherein the plaintext data table comprises a plurality of rows and a plurality of columns; querying the data type of each column in the plaintext data table, and taking the columns whose type is not the auto-increment primary key as the initial plaintext data, wherein each initial plaintext data comprises a plurality of dimensions;
querying the data type and length corresponding to each dimension, and determining the data size of each dimension according to the length and the data type;
and taking the sum of the data sizes corresponding to all dimensions in each initial plaintext data as the data size of the initial plaintext data.
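A minimal sketch of the size computation in claim 2. The per-type byte sizes and type names below are illustrative assumptions; the patent only states that the data size of a dimension is derived from its type and length, and that a row's size is the sum over its dimensions.

```python
# Assumed bytes-per-element for a few common column types (not from the patent).
TYPE_BYTES = {"int": 4, "bigint": 8, "char": 1, "datetime": 8}

def data_size(row_schema):
    """row_schema: list of (data_type, length) pairs, one per non-key dimension.
    Returns the summed data size of the row (claim 2, last step)."""
    return sum(TYPE_BYTES[data_type] * length for data_type, length in row_schema)

# e.g. an int column, a char(32) column, and a datetime column
size = data_size([("int", 1), ("char", 32), ("datetime", 1)])
```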
3. The artificial intelligence based data asynchronous processing method as claimed in claim 1, wherein said dividing said initial plaintext data into a plurality of batches according to said autonomous key and data size comprises:
a, uniformly dividing the initial plaintext data into a plurality of candidate batches according to a preset dividing threshold, wherein each candidate batch comprises a plurality of initial plaintext data, and the number of candidate batches is equal to the dividing threshold;
b, respectively calculating the sum of the data sizes of all initial plaintext data in each candidate batch as the evaluation value of that batch;
c, calculating the variance of all the evaluation values; if the variance is smaller than a preset first termination threshold, indicating that the differences between the data sizes contained in the candidate batches are small, taking the candidate batches as final batches to complete the batching of the initial plaintext data; if the variance is not smaller than the preset first termination threshold, indicating that the differences between the data sizes contained in the candidate batches are large, updating the dividing threshold to obtain an updated dividing threshold, and repeatedly executing steps a to c to obtain the batches of initial plaintext data;
d, if no final batches are obtained after the number of repeated divisions reaches a preset second termination threshold, taking the candidate batches with the smallest variance as the final batches, wherein the preset second termination threshold is equal to the initial value of the preset dividing threshold.
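Steps a-d of claim 3 can be sketched as follows. The even stride-based split and the update rule for the dividing threshold (incrementing it by one per round) are assumptions made only so the loop is concrete; the patent does not specify either.

```python
import statistics

def partition(records, k, var_threshold, max_rounds):
    """records: list of (auto_increment_key, data_size) tuples.
    Splits records into k candidate batches, re-splitting with an updated
    k until the variance of the batches' summed sizes is small enough
    (step c) or max_rounds is reached, then falls back to the
    lowest-variance split seen (step d)."""
    best = None
    for _ in range(max_rounds):
        batches = [records[i::k] for i in range(k)]          # step a: even split (assumed)
        sums = [sum(size for _key, size in b) for b in batches]  # step b: evaluation values
        var = statistics.pvariance(sums)                     # step c: variance of evaluations
        if best is None or var < best[0]:
            best = (var, batches)
        if var < var_threshold:
            return batches                                   # differences are small: done
        k += 1                                               # assumed update of the threshold
    return best[1]                                           # step d: lowest-variance fallback
```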
4. The artificial intelligence based data asynchronous processing method according to claim 1, wherein said calculating a load value of each server node in the server cluster and distributing the plurality of batches of initial plaintext data to the server nodes according to the load value comprises:
inquiring the number of processing tasks of each server node in the server cluster as a load value of each server node;
taking the minimum value of the auto-increment primary keys corresponding to all initial plaintext data in each batch as the index of that batch;
sorting the batches in ascending order of their indexes to obtain the rank of each batch, and sorting the server nodes in ascending order of their load values to obtain the rank of each server node;
and sending each batch to the server node with the same rank for subsequent data encryption processing.
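The rank-for-rank pairing of claim 4 reduces to two sorts and a zip. The function and variable names below are illustrative; only the pairing rule (smallest batch index to the least-loaded node) comes from the claim.

```python
def assign(batches, node_loads):
    """batches: {batch_index: payload} where batch_index is the minimum
    auto-increment key in the batch; node_loads: {node: task_count}.
    Sorts batches by index ascending and nodes by load ascending, then
    pairs them rank-for-rank so the lightest node gets the first batch."""
    ordered_batches = sorted(batches)                     # batch indexes, ascending
    ordered_nodes = sorted(node_loads, key=node_loads.get)  # nodes by load, ascending
    return dict(zip(ordered_nodes, (batches[i] for i in ordered_batches)))

plan = assign({1: "b1", 5: "b2"}, {"nodeA": 7, "nodeB": 2})
# nodeB carries the lighter load, so it receives the batch with index 1
```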
5. The artificial intelligence based data asynchronous processing method as claimed in claim 1, wherein said encrypting the initial plaintext data according to a preset encryption duration to obtain ciphertext data and an index of the ciphertext data comprises:
a, encrypting the initial plaintext data within a preset encryption duration range to obtain ciphertext data;
b, pausing data encryption when the preset encryption duration is over, and respectively recording the moment when each ciphertext data is encrypted to be used as an index of the ciphertext data;
c, if initial plaintext data remain unencrypted after the preset encryption duration is over, continuously comparing the resource occupancy rate of the server cluster with a preset occupancy rate threshold; if the occupancy rate is lower than the preset occupancy rate threshold, updating the preset encryption duration according to the resource occupancy rate of the server cluster and the current moment to obtain an updated preset encryption duration, and repeatedly executing steps a to c to continue encrypting the data until all the initial plaintext data are encrypted, whereupon data encryption stops; wherein updating the preset encryption duration according to the resource occupancy rate of the server cluster and the current moment to obtain the updated preset encryption duration comprises:
inquiring hardware information of the server cluster, and calculating the resource occupancy rate of the server cluster according to the hardware information;
calculating the time difference between the current time and a preset reference time;
normalizing the resource occupancy rate and the time difference to obtain a normalized resource occupancy rate and a normalized time difference;
inputting the normalized resource occupancy rate and the normalized time difference into a preset integration function to calculate an update proportion for the operation duration;
and calculating the product of a preset initial operation duration and the update proportion as the updated preset encryption duration.
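The duration update of claim 5 can be sketched as below. The min-max style normalization bounds and the integration function (a simple weighted mean of the two normalized inputs) are assumptions; the patent names the steps but does not define either function.

```python
def updated_duration(initial_duration, occupancy, now, reference_time,
                     occ_max=1.0, dt_max=3600.0):
    """initial_duration: preset initial operation duration (seconds);
    occupancy: cluster resource occupancy in [0, occ_max];
    now, reference_time: timestamps whose difference is normalized by dt_max."""
    norm_occ = occupancy / occ_max                           # normalize occupancy to [0, 1]
    norm_dt = min((now - reference_time) / dt_max, 1.0)      # normalize time difference
    # Assumed integration function: longer runs allowed when the cluster is
    # idle and little time has elapsed since the reference moment.
    ratio = 0.5 * (1.0 - norm_occ) + 0.5 * (1.0 - norm_dt)
    return initial_duration * ratio                          # product with initial duration
```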
6. The artificial intelligence based data asynchronous processing method as claimed in claim 5, wherein said marking the subject identification of said ciphertext data according to plaintext data, and distributing said ciphertext data to each said server node for storage according to the index of said ciphertext data and the load value of said server node, comprises:
classifying the initial plaintext data according to a pre-trained initial plaintext classification model to obtain a category of each initial plaintext data, and using the category as a subject identifier of ciphertext data corresponding to the initial plaintext data;
sorting the ciphertext data in order of their indexes from earliest to latest to obtain the rank of each ciphertext data, and sorting the server nodes in ascending order of their load values to obtain the rank of each server node;
and distributing the ciphertext data to the server nodes with the same rank for storage.
7. The artificial intelligence based data asynchronous processing method as claimed in claim 1, wherein said pushing plaintext data corresponding to said ciphertext data to a user according to a subject identifier of said ciphertext data comprises:
querying a preset category of user requirement, wherein the categories of user requirement comprise user order information, user logistics information and user transaction information;
in each server node of the server cluster, sequentially querying the subject identifier corresponding to the ciphertext data in order of the ciphertext data's indexes from latest to earliest; if the subject identifier is consistent with the category of the user requirement, decrypting the ciphertext data according to a preset secret key to obtain the initial plaintext data, and pushing the initial plaintext data to the user;
if the subject identifier is inconsistent with the category of the user requirement, continuing to query the subject identifier of each ciphertext data until all ciphertext data have been queried, and then stopping the query;
and if no ciphertext data corresponding to the category of the user requirement can be found, sending an information delay prompt to the user.
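The newest-first scan of claim 7 can be sketched as below. The store layout and the `decrypt` callable are assumptions; the claim only fixes the scan order (latest index first), the subject-identifier match, and the delay prompt on a miss.

```python
def push_for(requirement, store, decrypt):
    """store: list of (timestamp_index, subject_id, ciphertext) records;
    decrypt: whatever cipher the deployment uses (assumed, injected here).
    Scans from the latest index to the earliest, decrypts and returns the
    first ciphertext whose subject matches, else signals a delay prompt."""
    for _ts, subject, ciphertext in sorted(store, key=lambda r: r[0], reverse=True):
        if subject == requirement:
            return decrypt(ciphertext)
    return "information delay prompt"
```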
8. An artificial intelligence based data asynchronous processing device, characterized in that the device comprises:
the preprocessing unit is used for preprocessing a plaintext data table in a preset server cluster to obtain a plurality of initial plaintext data, wherein each initial plaintext data corresponds to an auto-increment primary key and a data size;
the batching unit is used for dividing the initial plaintext data into a plurality of batches according to the auto-increment primary key and the data size, wherein each batch comprises a plurality of initial plaintext data;
the distribution unit is used for calculating a load value of each server node in the server cluster, and distributing the initial plaintext data of the batches to the server nodes according to the load values;
the encryption unit is used for encrypting the initial plaintext data according to preset encryption duration to obtain ciphertext data and an index of the ciphertext data;
the storage unit is used for marking the subject identification of the ciphertext data according to plaintext data and distributing the ciphertext data to each server node for storage according to the index of the ciphertext data and the load value of the server node;
and the pushing unit is used for pushing the plaintext data corresponding to the ciphertext data to a user according to the subject identification of the ciphertext data.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based data asynchronous processing method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that: the computer-readable storage medium stores therein computer-readable instructions which are executed by a processor in an electronic device to implement the artificial intelligence based data asynchronous processing method according to any one of claims 1 to 7.
CN202210987077.7A 2022-08-17 2022-08-17 Data asynchronous processing method based on artificial intelligence and related equipment Pending CN115329002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210987077.7A CN115329002A (en) 2022-08-17 2022-08-17 Data asynchronous processing method based on artificial intelligence and related equipment


Publications (1)

Publication Number Publication Date
CN115329002A true CN115329002A (en) 2022-11-11

Family

ID=83923687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210987077.7A Pending CN115329002A (en) 2022-08-17 2022-08-17 Data asynchronous processing method based on artificial intelligence and related equipment

Country Status (1)

Country Link
CN (1) CN115329002A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117194020A (en) * 2023-09-04 2023-12-08 北京宝联之星科技股份有限公司 Cloud computing original big data processing method, system and storage medium
CN117194020B (en) * 2023-09-04 2024-04-05 北京宝联之星科技股份有限公司 Cloud computing original big data processing method, system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination