CN114780564A - Data processing method, data processing apparatus, electronic device, and storage medium

Info

Publication number: CN114780564A
Application number: CN202210432671.XA
Authority: CN (China)
Prior art keywords: data, attribute data, message, key value, target
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Original language: Chinese (zh)
Inventors: 孙亮, 雷寿国
Current and original assignee: Jingdong Technology Holding Co Ltd
Application filed by Jingdong Technology Holding Co Ltd

Classifications

All four classifications fall under G (physics), G06 (computing; calculating or counting), G06F (electric digital data processing), G06F 16/00 (information retrieval; database structures therefor; file system structures therefor), and G06F 16/20 (structured data, e.g. relational data):

    • G06F 16/2358 Updating: change logging, detection, and notification
    • G06F 16/2343 Updating: concurrency control; pessimistic approaches; locking methods, e.g. distributed locking or locking implementation details
    • G06F 16/24552 Querying: query execution; database cache management
    • G06F 16/283 Database models: multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP


Abstract

The present disclosure provides a data processing method, including: in response to the timing task being triggered, acquiring a plurality of key values configured in a cache database; for each key value, generating message data based on the key value and the attribute data stored under it; and sending the plurality of message data to a distributed message queue, so that after receiving the message data from the distributed message queue, a consumer processes the attribute data contained in the message data and writes the processed attribute data into a storage device of a search engine cluster. The disclosure also provides a data processing apparatus, an electronic device, and a readable storage medium.

Description

Data processing method, data processing apparatus, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of big data technologies, and more particularly, to a data processing method, a data processing apparatus, an electronic device, a readable storage medium, and a computer program product.
Background
The business data generated by an enterprise in the course of production and operation is mostly multi-dimensional; for example, customer data usually includes the customer's basic information and relationship information, such as the customer account, source, and attribution type. To support searching over business data, enterprises in the related art typically store customer information in a nested model of a search engine.
In the course of implementing the disclosed concept, the inventors found at least the following problem in the related art: due to limitations of the search engine's underlying implementation, when the volume of business data is large and updates are frequent, writing the data occupies a large amount of computing resources.
Disclosure of Invention
In view of the above, the present disclosure provides a data processing method, a data processing apparatus, an electronic device, a readable storage medium, and a computer program product.
One aspect of the present disclosure provides a data processing method, including: in response to the timing task being triggered, acquiring a plurality of key values configured in a cache database; for each key value, generating message data based on the key value and the attribute data stored under it; and sending the plurality of message data to a distributed message queue, so that after receiving the message data from the distributed message queue, a consumer processes the attribute data contained in the message data and writes the processed attribute data into a storage device of a search engine cluster.
According to an embodiment of the present disclosure, the method further includes: acquiring change data generated in a database within a preset time period, wherein the preset time period comprises a trigger interval of the timing task; for each acquired change data, determining a target key value from a plurality of key values of the cache database; and storing the changed data as the attribute data of the target key value into the cache database.
According to an embodiment of the present disclosure, the acquiring the changed data generated in the database within the preset time period includes: acquiring the log information generated in the database in the preset time period; and analyzing the log information to obtain the changed data.
According to the embodiment of the disclosure, the change data is configured with a service line identifier and a client number; and multiple key values of the cache database are respectively attributed to the key value groups of multiple service lines.
According to an embodiment of the present disclosure, the determining, for each obtained change data, a target key value from a plurality of key values in the cache database includes: determining a target key value group based on the service line identification of the changed data; and determining the target key value from a plurality of key values in the target key value group based on the client number of the changed data.
According to an embodiment of the present disclosure, processing the attribute data contained in the message data after the consumer receives it from the distributed message queue, and writing the processed attribute data into a storage device of the search engine cluster, includes: after the consumer receives the message data, in response to successfully acquiring a distributed lock, taking out a preset number of attribute data from the attribute data contained in the message data to obtain a plurality of first target attribute data; processing the plurality of first target attribute data to obtain second target attribute data; writing the second target attribute data into the storage device of the search engine cluster through a preset data interface; and releasing the distributed lock.
According to an embodiment of the present disclosure, the attribute data includes main data and sub-data of multiple dimensions, and the attribute data stored under the same key value has a preset order. Processing the plurality of first target attribute data includes: based on the preset order, deduplicating those first target attribute data whose main data and sub-data are in the same state, to obtain a plurality of third target attribute data; and for each third target attribute data, processing its sub-data based on its main data.
According to an embodiment of the present disclosure, processing the sub-data of each third target attribute data based on its main data includes: if the main data of the third target attribute data is determined to be main data of the add operation type, deleting the sub-data carried in the third target attribute data and supplementing, from the database, the sub-data of all dimensions corresponding to the main data; if the main data is determined to be main data of the modify operation type, retaining the dimensions of the sub-data carried in the third target attribute data and supplementing, from the database, the sub-data of all dimensions corresponding to the main data; and if the third target attribute data contains no main data, retaining the dimensions of its sub-data, supplementing the main data from the database based on the service line identifier and client number of the third target attribute data, and supplementing, from the database, the sub-data of all dimensions corresponding to the main data.
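The three supplement rules just described can be sketched as follows. The record layout, the field names, and the plain-dict stand-in for the backing database are illustrative assumptions, not the patent's actual interfaces.

```python
# Sketch of the per-record supplement rules: how sub-data is completed
# depending on the operation type of the main data. All names are assumed.

def supplement(record, db):
    """Complete one third-target attribute record from a backing database.

    record: {"main": {"op": "add"|"modify", ...} or None,
             "subs": {dimension: value, ...},
             "line": service line id, "client": client number}
    db:     {"main": {(line, client): {...}},
             "subs": {(line, client): {dimension: value, ...}}}
    """
    key = (record["line"], record["client"])
    main = record["main"]
    if main is not None and main["op"] == "add":
        # Newly added main data: discard the carried sub-data and reload
        # every dimension belonging to this main record from the database.
        subs = dict(db["subs"][key])
    elif main is not None and main["op"] == "modify":
        # Modified main data: keep the carried dimensions (they are newer),
        # then fill in the remaining dimensions from the database.
        subs = dict(db["subs"][key])
        subs.update(record["subs"])
    else:
        # No main data: keep the carried dimensions, load the main record
        # by service line id + client number, then fill in the rest.
        main = dict(db["main"][key])
        subs = dict(db["subs"][key])
        subs.update(record["subs"])
    return {"main": main, "subs": subs}
```

In the "modify" branch the carried sub-data is assumed to take precedence over the database copy; the original text leaves that precedence implicit.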
According to an embodiment of the present disclosure, the method further includes: after the preset number of attribute data have been taken out of the message data, if the message data still contains attribute data, sending the message data remaining after the take-out operation back to the distributed message queue, so that the consumer can consume it again.
Another aspect of the present disclosure provides a data processing apparatus including: the first acquisition module is used for responding to a trigger timing task and acquiring a plurality of key values configured in a cache database; the generation module is used for generating message data respectively aiming at each key value based on the key value and the attribute data stored in the key value; and the first processing module is used for sending the message data to the distributed message queue so that a consumer can process the attribute data contained in the message data after receiving the message data from the distributed message queue, and the processed attribute data is written into the storage device of the search engine cluster.
Another aspect of the present disclosure provides an electronic device including: one or more processors; memory to store one or more instructions, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement a method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, by setting a timing task, when the timing task is triggered the attribute data stored under the plurality of key values in the cache database can be sent to the distributed message queue, so that a consumer can consume the message data in the queue and write it into the storage device of the search engine. Using the cache database in this way achieves secondary distribution and quasi-real-time writing of data, which at least partially solves the technical problem in the related art that, due to limitations of the search engine's underlying implementation, writing data occupies a large amount of computing resources when the volume of customer information is large and updates are frequent; data storage efficiency is thereby effectively improved and server resources are saved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically shows an exemplary system architecture to which the data processing method may be applied, according to an embodiment of the present disclosure.
Fig. 2 schematically shows a flow chart of a data processing method according to an embodiment of the present disclosure.
Fig. 3 schematically shows a schematic diagram of a process flow of first-time distribution of change data according to an embodiment of the present disclosure.
Fig. 4 schematically shows a schematic diagram of a process flow of second distribution of change data according to an embodiment of the present disclosure.
Fig. 5 schematically shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Fig. 6 schematically shows a block diagram of an electronic device adapted to implement a data processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that these descriptions are illustrative only and are not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Elasticsearch is an open-source search engine based on Apache Lucene. Thanks to Lucene's high performance and rich feature set, it is widely used in enterprise business systems to search, analyze, and explore business data. However, because the underlying layer of Elasticsearch is implemented in Java, language constraints mean that frequent data writes occupy a large amount of computing resources, so fast writing cannot be achieved.
In view of this, the embodiments of the present disclosure perform secondary distribution of data by using the cache database, and implement quasi real-time write and frequent write of data in a scene where the service data has a large dimension and the data amount of each dimension is huge.
In particular, embodiments of the present disclosure provide a data processing method, a data processing apparatus, an electronic device, a readable storage medium, and a computer program product. The method includes: in response to the timing task being triggered, acquiring a plurality of key values configured in a cache database; for each key value, generating message data based on the key value and the attribute data stored under it; and sending the plurality of message data to the distributed message queue, so that after receiving the message data from the distributed message queue, the consumer processes the attribute data contained in the message data and writes the processed attribute data into the storage device of the search engine cluster.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
In the technical scheme of the disclosure, before the personal information of the user is obtained or collected, the authorization or the consent of the user is obtained.
Fig. 1 schematically shows an exemplary system architecture to which the data processing method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include front-end devices 101 and 102, a backend server 103, and a server cluster 104.
The front-end devices 101 and 102 may be various electronic devices that support human-computer interaction, including but not limited to smartphones, tablets, laptop computers, desktop computers, and the like.
The front-end devices 101 and 102 may be configured with various client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and/or social platform software.
The backend server 103 may be a server providing various services and may be configured with a storage device that includes a database and a cache database. Any operation performed by a user on the front-end devices 101 and 102 may be reflected as a data change in the database of the backend server 103.
The server cluster 104 may be composed of multiple servers providing the same service; for example, it may be composed of multiple servers supporting a search engine.
The connections between the front-end devices 101 and 102 and the backend server 103, and between the backend server 103 and the server cluster 104, may be over a network, which may include various connection types, such as wired and/or wireless communication links.
It should be noted that the data processing method provided by the embodiments of the present disclosure may generally be executed by the server 103; accordingly, the data processing apparatus provided by the embodiments of the present disclosure may generally be disposed in the server 103. The data processing method may also be executed by a server or server cluster that is different from the server 103 and capable of communicating with the front-end devices 101 and 102, the server 103, and the server cluster 104; accordingly, the data processing apparatus may also be disposed in such a server or server cluster.
For example, the operation of the user in the front-end device 101 may be reflected by the network as a data change in the database of the server 103, which may be collected by the cache database in the server 103 and perform the data processing method provided by the embodiment of the present disclosure to process the changed data; alternatively, other servers or server clusters may also read the changed data from the cache database of the server 103 and execute the data processing method provided by the embodiment of the present disclosure.
It should be understood that the number of front-end devices, servers, and server clusters in fig. 1 is merely illustrative. There may be any number of front-end devices, servers, and clusters of servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a data processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S203.
In operation S201, in response to triggering a timing task, a plurality of key values configured in a cache database are obtained.
In operation S202, message data is generated based on a key value and attribute data stored in the key value, respectively for each key value.
In operation S203, the plurality of message data are sent to the distributed message queue, so that the consumer processes the attribute data included in the message data after receiving the message data from the distributed message queue, and writes the processed attribute data into the storage device of the search engine cluster.
According to an embodiment of the present disclosure, the timing task may be a task executed at a certain frequency; for example, the trigger interval of the timing task may be set to 1 s, that is, the time window between two adjacent executions of the task is 1 s.
According to an embodiment of the present disclosure, the cache database may be any key-value database, such as Redis or Memcache.
According to an embodiment of the present disclosure, a plurality of key values can be pre-configured in the cache database, and the business data written into the cache database can be distributed across these key values, avoiding the hot-key problem caused by a large number of requests accessing a single key value within a short time.
According to the embodiment of the disclosure, when the timing task is triggered, one message datum can be generated from the data contained in each key value pair in the cache database.
According to the embodiment of the present disclosure, the generated plurality of message data may be transmitted to the distributed message queue in an arbitrary order.
According to embodiments of the present disclosure, a consumer may be any one of a cluster of search engines.
According to the embodiment of the disclosure, after consuming message data, a consumer can acquire data contained in the message data, namely, corresponding key values and attribute data in the original cache database, and process the acquired attribute data, wherein the processing mode includes but is not limited to deletion, combination, supplementation and the like.
According to the embodiments of the present disclosure, the cooperative use of the cache database and the distributed message queue effectively reduces the number of writes to the search engine's storage device within one timing task, thereby also reducing the computing resources occupied by write operations; on the other hand, by setting a small trigger interval for the timing task, quasi-real-time writing of data can be achieved.
According to the embodiments of the present disclosure, by setting a timing task, when the timing task is triggered the attribute data stored under the plurality of key values in the cache database can be sent to the distributed message queue, so that a consumer can consume the message data in the queue and write it into the storage device of the search engine. Using the cache database in this way achieves secondary distribution and quasi-real-time writing of data, which at least partially overcomes the technical problem in the related art that, due to limitations of the search engine's underlying implementation, writing data occupies a large amount of computing resources when the volume of customer information is large and updates are frequent; data storage efficiency is thereby effectively improved and server resources are saved.
The method shown in fig. 2 is further described with reference to fig. 3-4 in conjunction with specific embodiments.
Fig. 3 is a schematic diagram schematically illustrating a processing flow of first distribution of change data according to an embodiment of the present disclosure.
As shown in fig. 3, the process flow includes operations S301 to S304.
It should be noted that, unless an execution order between operations is explicitly stated or is required by the technical implementation, the operations in the flowcharts of the present disclosure need not be executed in any particular order, and multiple operations may be executed simultaneously.
In operation S301, change data generated in a database is acquired.
In operation S302, the changed data is allocated to the corresponding key value of the cache database.
In operation S303, it is determined whether to trigger a timing task; under the condition that the timing task is not triggered, returning to execute operation S301 to continuously store the data generated in the database into the cache database; in response to triggering the timing task, operation S304 is performed.
In operation S304, each key value of the cache database is packed into message data, and sent to a message queue.
According to an embodiment of the present disclosure, the database may be a relational database such as MySQL or Oracle, or a non-relational database such as MongoDB or CouchDB; no limitation is imposed here.
According to an embodiment of the present disclosure, the database stores the business data generated in the business system. When business data is added, modified, or deleted, the log file of the database records the relevant information of the data change. For example, when any field in MySQL changes, the binlog generates a log record for that change.
According to the embodiment of the disclosure, the change data generated in the database can be acquired by acquiring the log information generated in the database and analyzing the log information.
According to an embodiment of the present disclosure, the obtained change data can be sent to the cache database through a data-change transmission system, which distributes data by id hash to guarantee that multiple changes to one piece of data remain in order. For example, for the record with id 1, if its name is changed first to b and then to c, the cache database receives the changes in that same order: name b, then name c.
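The ordering guarantee of id-hash distribution can be illustrated with a minimal sketch: every change to one record id lands on the same partition, so its changes keep their relative order even if different ids are handled concurrently. The partition count, the change-record layout, and the plain modulo standing in for the hash are assumptions.

```python
# Distribution by id hash: a stable function of the record id picks the
# partition, so all changes of one record stay on one ordered stream.

def partition_for(record_id, num_partitions):
    # A plain modulo stands in here for a stable hash of the id.
    return record_id % num_partitions

# Change stream as in the text: record 1 renamed to b, then to c.
changes = [{"id": 1, "name": "b"}, {"id": 2, "name": "x"}, {"id": 1, "name": "c"}]
partitions = {p: [] for p in range(4)}
for change in changes:
    partitions[partition_for(change["id"], 4)].append(change)
```

Both changes to id 1 end up on the same partition, in the order b then c, which is exactly the ordering property the transmission system is said to provide.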
According to the embodiment of the disclosure, the target key value can be determined from a plurality of key values configured in advance in the cache database, and then the changed data is stored in the cache database as the attribute data of the target key value.
According to the embodiment of the disclosure, when the key values are configured for the cache database, a key value group can be configured for each service line, and each key value group can include a plurality of key values. For example, the service line 100 is configured with 2 key values, which may be named redis _100_0, redis _100_1, respectively.
According to an embodiment of the present disclosure, configuring a plurality of key values for each service line effectively prevents the hot-key problem that an excessive data volume on a single service line would otherwise cause, which would affect reads and writes of the cache database.
According to the embodiment of the disclosure, the change data may be configured with the service line identifier and the client number, and when the distribution of the change data is performed, the target key value group may be determined based on the service line identifier of the change data, and the target key value may be determined from a plurality of key values of the target key value group based on the client number of the change data. For example, if the service line identifier of the changed data is 100 and the client number is 100001, the changed data may be allocated to the key value redis _100_1 according to the set rule.
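The routing just described can be sketched as follows. The `redis_<line>_<index>` naming follows the example in the text; the modulo rule standing in for the unspecified "set rule" is an assumption.

```python
# Two-level routing of change data: the key value group is chosen by the
# service line identifier, the member within the group by the client number.

def target_key(line_id, client_no, keys_per_line):
    """keys_per_line maps a service line id to the number of key values
    configured for that line (e.g. {100: 2} for redis_100_0, redis_100_1)."""
    index = client_no % keys_per_line[line_id]  # assumed concrete "set rule"
    return f"redis_{line_id}_{index}"
```

With two key values configured for service line 100, client 100001 routes to `redis_100_1`, matching the example above.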
According to the embodiment of the disclosure, the timing task may be configured with a certain trigger interval, the cache database continuously acquires the changed data from the database within a time window of the trigger interval, and after the timing task is triggered, the cache database may package the data acquired within the time window into message data and send the message data to the distributed message queue to be consumed by the consumer.
According to the embodiment of the disclosure, all key values configured in the cache database can be packaged into message data.
In some embodiments, the key values subjected to the packing operation may be further filtered, and in the case that the corresponding key values do not contain attribute data, an identifier of "unchanged data" may be added to the packed message data, so that the consumer may directly discard the message data after receiving the message data, thereby improving the efficiency of data synchronization.
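The packing step, including the "unchanged data" tag for key values that collected no attribute data in the time window, can be sketched as follows; the message layout and field names are assumptions.

```python
# Pack each configured key value of the cache database into one message.
# Empty key values are tagged so the consumer can discard them immediately.

def pack_messages(cache):
    """cache maps each configured key value to the list of attribute data
    collected under it during the current time window (possibly empty)."""
    messages = []
    for key, attrs in cache.items():
        message = {"key": key, "attrs": list(attrs)}
        if not attrs:
            message["unchanged"] = True  # consumer drops these directly
        messages.append(message)
    return messages
```

Tagging rather than skipping empty key values keeps the message count per timing task constant, while still sparing the consumer any processing for unchanged data.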
Fig. 4 schematically shows a processing flow of second distribution of change data according to an embodiment of the present disclosure.
As shown in fig. 4, the process flow includes operations S401 to S409.
In operation S401, it is determined whether the distributed lock is successfully acquired; in a case where it is determined that the distributed lock cannot be successfully acquired, performing operation S402; in the case where it is determined that the distributed lock is successfully acquired, operation S403 is performed.
In operation S402, the locking operation is exited.
In operation S403, message data is consumed from the distributed message queue to obtain a plurality of attribute data.
In operation S404, a preset number of attribute data are extracted from the plurality of attribute data, resulting in first target attribute data.
In operation S405, deduplication and replenishment processing is performed on the first target attribute data to obtain second target attribute data.
In operation S406, the second target attribute data is written into the storage device of the search engine cluster through the preset data interface.
In operation S407, it is determined whether there is remaining attribute data in the message data; in the case where it is determined that remaining attribute data exists in the message data, operation S408 is performed; in the case where it is determined that the message data is empty, operation S409 is performed.
In operation S408, the remaining attribute data is packaged into message data and sent to the distributed message queue.
In operation S409, the distributed lock is released.
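The flow of operations S401-S409 can be sketched with in-memory stand-ins: a `deque` for the distributed message queue, a list acting as a one-slot lock, and a list acting as the search-engine storage. All names and these simplifications are assumptions; the deduplication/supplement step (S405) is elided.

```python
from collections import deque

def consume_once(queue: deque, sink: list, lock: list, batch_size: int = 1000):
    """One consumer pass over the Fig. 4 flow, using in-memory stand-ins."""
    if lock:                          # S401: lock held elsewhere
        return                        # S402: exit the locking operation
    lock.append(True)                 # S401: distributed lock acquired
    try:
        message = queue.popleft()     # S403: consume message data
        attrs = message["attributes"]
        batch = attrs[:batch_size]    # S404: extract a preset number
        # S405 (dedup/supplement) elided here for brevity
        sink.extend(batch)            # S406: write to search-engine storage
        rest = attrs[batch_size:]
        if rest:                      # S407/S408: repackage the remainder
            queue.append({"attributes": rest})
    finally:
        lock.clear()                  # S409: release the distributed lock
```

Note that the remainder is re-queued rather than processed in the same pass, so each consumer handles at most one batch per lock acquisition.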
According to the embodiment of the present disclosure, the implementation manner of the distributed lock is not limited, and for example, the distributed lock may be implemented based on redis, zookeeper, or the like.
According to the embodiment of the present disclosure, by using the distributed lock, the atomicity of write operations on the resource can be ensured, thereby avoiding logical errors in the resource.
According to the embodiment of the present disclosure, after the locking operation is exited, depending on the program logic configured by the developer, the locking operation may be retried after a period of time, or an operation failure may be reported directly to the user while waiting for the user's next operation instruction.
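For Redis, a common pattern is SET with the NX option, an expiry, and an owner token. The toy in-memory class below only illustrates those semantics under stated assumptions; it is not a real distributed lock.

```python
import time
import uuid

class MiniLock:
    """In-memory illustration of the Redis SET NX EX locking pattern
    (set-if-absent with a TTL and an owner token); illustrative only."""
    def __init__(self):
        self._store = {}                        # key -> (token, expiry)

    def acquire(self, key, ttl=30.0):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return None                         # unexpired lock held elsewhere
        token = uuid.uuid4().hex                # unique owner token
        self._store[key] = (token, now + ttl)
        return token

    def release(self, key, token):
        entry = self._store.get(key)
        if entry is not None and entry[0] == token:
            del self._store[key]                # only the owner may release
            return True
        return False
```

The owner token prevents one consumer from releasing a lock that has expired and been re-acquired by another consumer.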
According to the embodiment of the present disclosure, the preset number may be set according to a specific service scenario, for example, may be set to 1000.
According to the embodiment of the present disclosure, when the key value contains fewer attribute data than the preset number, all of the attribute data in the key value may be taken out.
According to the embodiment of the present disclosure, if the key value contains more attribute data than the preset number, the remaining attribute data in the key value may be repackaged into message data and sent to the message queue for consumption by the next consumer.
According to the embodiment of the disclosure, the attribute data may include data of multiple dimensions, data of one dimension may be set as main data, and data of other dimensions may be set as sub data. For example, in an application scenario of online shopping, a customer identifier or customer information in service data may be set as main data, and order information may be set as sub-data.
According to an embodiment of the present disclosure, the specific step of processing the first target attribute data may include:
First, single-record deduplication may be performed: based on a preset order, attribute data having the same main data and sub-data among the plurality of first target attribute data are deduplicated, so as to obtain a plurality of third target attribute data.
According to an embodiment of the present disclosure, the preset order may be an original storage order of the attribute data.
According to the embodiment of the present disclosure, in the case that one piece of data has been changed multiple times, deduplication based on the preset order retains only the data obtained after the last change, thereby reducing the amount of data that needs to be processed subsequently and improving the efficiency of data synchronization.
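The keep-only-the-last-change deduplication described above can be sketched as below; the `(main_id, sub_id)` identity is an assumed stand-in for "same main data and sub-data".

```python
def dedupe_keep_last(records: list) -> list:
    """Deduplicate records sharing the same (main, sub) identity, keeping
    only the last change in the preset (original storage) order."""
    latest = {}
    for rec in records:               # later records overwrite earlier ones
        latest[(rec["main_id"], rec["sub_id"])] = rec
    return list(latest.values())
```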
Then, the sub-data may be processed against the main data: for each third target attribute data, the sub-data of the third target attribute data may be processed based on the main data of the third target attribute data.
According to the embodiment of the present disclosure, for each third target attribute data, in the case that the main data of the third target attribute data is determined to be main data of the newly added operation type, the sub-data in the third target attribute data is deleted, and the sub-data of all dimensions corresponding to the main data is supplemented from the database. For example, if the client information represented by the main data is not yet recorded in the storage device of the search engine, the client corresponding to that information may be considered a new user; to avoid missing information about the new user, the sub-data carried with the main data may be discarded, and the full set of data related to that user may be obtained from the database as the sub-data of the main data.
In the case that the main data of the third target attribute data is determined to be main data of the modification operation type, the dimensions of the sub-data of the third target attribute data are retained, and the sub-data of all dimensions corresponding to the main data is supplemented from the database. For example, if the sub-data corresponding to the main data has been added or modified several times, only the data after the last change may be retained.
In the case that the third target attribute data does not contain main data, the dimensions of the sub-data of the third target attribute data are retained, the main data is supplemented from the database based on the service line identifier and the client number of the third target attribute data, and the sub-data of all dimensions corresponding to the main data is supplemented from the database.
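The three supplement rules above can be sketched for a single deduplicated record as follows. Here `db` is an in-memory stand-in for the source database, and all field names (`op`, `service_line`, `client_number`, etc.) are assumptions for illustration.

```python
def supplement(record: dict, db: dict) -> dict:
    """Apply the three supplement rules to one deduplicated record.
    db["mains"] maps (service_line, client_number) -> main data;
    db["subs"] maps main id -> full sub-data of all dimensions."""
    main = record.get("main")
    if main is None:
        # no main data: recover it from the DB by business keys, keep the
        # record's sub-data dimensions and fill the rest from the DB
        main = db["mains"][(record["service_line"], record["client_number"])]
        return {"main": main, "sub": {**db["subs"][main["id"]], **record["sub"]}}
    if main["op"] == "insert":
        # newly added: drop carried sub-data, take the full set from the DB
        return {"main": main, "sub": dict(db["subs"][main["id"]])}
    # modification: keep the record's dimensions, supplement the rest
    return {"main": main, "sub": {**db["subs"][main["id"]], **record["sub"]}}
```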
According to the embodiment of the present disclosure, the preset data interface may be set according to the search engine; for example, in the case that the search engine is Elasticsearch, the preset data interface may be the Bulk API.
According to the embodiment of the disclosure, by merging the attribute data, invalid data in the original data can be effectively deleted, the number of times of write-in operations to the storage device of the search engine is reduced, the operation pressure of the search engine server is reduced, and server resources are saved.
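For the Bulk API mentioned above, Elasticsearch accepts an NDJSON body of alternating action and document lines. The helper below builds such a body; the index name and the use of an `id` field as `_id` are illustrative assumptions.

```python
import json

def build_bulk_payload(index: str, docs: list) -> str:
    """Build an Elasticsearch Bulk API body: one action line followed by one
    document line per record, NDJSON-framed with a trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"    # the Bulk API requires a trailing newline
```

Batching many documents into a single bulk request is what reduces the number of write operations against the search-engine storage, as described above.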
Fig. 5 schematically shows a block diagram of a data processing device according to an embodiment of the present disclosure.
As shown in fig. 5, the data processing apparatus 500 includes a first obtaining module 510, a generating module 520, and a first processing module 530.
The first obtaining module 510 is configured to obtain a plurality of key values configured in the cache database in response to triggering the timing task.
A generating module 520, configured to generate, for each key value, message data based on the key value and the attribute data stored in the key value.
The first processing module 530 is configured to send a plurality of message data to the distributed message queue, so that after receiving the message data from the distributed message queue, a consumer processes attribute data included in the message data, and writes the processed attribute data into a storage device of the search engine cluster.
According to the embodiment of the present disclosure, by setting the timing task, the attribute data stored in the plurality of key values in the cache database can be sent to the distributed message queue when the timing task is triggered, so that a consumer can consume the message data in the message queue and write it into the storage device of the search engine. Through this technical means, the cache database is used to realize secondary distribution and near-real-time writing of data, which at least partially overcomes the technical problem in the related art that, due to limitations of the underlying implementation of a search engine, writing data occupies a large amount of computing resources when the data volume of client information is large and updates are frequent, thereby effectively improving data storage efficiency and saving server resources.
According to an embodiment of the present disclosure, the data processing apparatus 500 further includes a second obtaining module, a first determining module, and a first storing module.
And the second acquisition module is used for acquiring the change data generated in the database within a preset time period, wherein the preset time period comprises the trigger interval of the timing task.
And the first determining module is used for determining a target key value from a plurality of key values of the cache database for each acquired change data.
And the first storing module is used for storing the change data as the attribute data of the target key value into the cache database.
According to an embodiment of the present disclosure, the second acquisition module includes a first acquisition unit and a second acquisition unit.
The first acquisition unit is used for acquiring the log information generated in the database in a preset time period.
And the second acquisition unit is used for analyzing the log information to obtain the changed data.
According to the embodiment of the disclosure, the change data is configured with the service line identification and the client number, and a plurality of key values of the cache database are respectively attributed to key value groups of a plurality of service lines.
According to an embodiment of the present disclosure, the first determining module includes a first determination unit and a second determination unit.
And a first determination unit for determining the target key value group based on the service line identification of the changed data.
A second determining unit configured to determine a target key value from a plurality of key values of the target key value group based on the client number of the changed data.
According to an embodiment of the present disclosure, the first processing module 530 includes a first processing sub-module, a second processing sub-module, a third processing sub-module, and a fourth processing sub-module.
And the first processing submodule is used for responding to the successful acquisition of the distributed lock after the consumer receives the message data, and extracting a preset number of attribute data from the attribute data contained in the message data to obtain a plurality of first target attribute data.
And the second processing submodule is used for processing the plurality of first target attribute data to obtain second target attribute data.
And the third processing submodule is used for writing the second target attribute data into the storage equipment of the search engine cluster through a preset data interface.
And the fourth processing submodule is used for releasing the distributed lock.
According to the embodiment of the disclosure, the attribute data includes main data and sub-data of multiple dimensions, and the attribute data stored in the same key value has a preset sequence.
According to an embodiment of the present disclosure, the second processing submodule includes a first processing unit and a second processing unit.
And the first processing unit is used for carrying out duplicate removal on the attribute data with the same main data and subdata in the plurality of first target attribute data based on a preset sequence to obtain a plurality of third target attribute data.
And the second processing unit is used for processing the sub data of the third target attribute data based on the main data of the third target attribute data for each third target attribute data.
According to an embodiment of the present disclosure, the second processing unit includes a first processing subunit, a second processing subunit, and a third processing subunit.
And the first processing subunit is used for deleting the sub-data in the third target attribute data and supplementing the sub-data of all dimensions corresponding to the main data from the database under the condition that the main data of the third target attribute data is determined as the main data of the newly added operation type for each third target attribute data.
And a second processing subunit, configured to, in a case where the main data of the third target attribute data is determined to be the main data of the modification operation type, reserve the dimensions of the sub data of the third target attribute data, and supplement the sub data of all the dimensions corresponding to the main data from the database.
And the third processing subunit is used for keeping the dimensionality of the subdata of the third target attribute data under the condition that the third target attribute data does not contain the main data, supplementing the main data from the database based on the service line identifier and the client number of the third target attribute data, and supplementing the subdata of all the dimensionalities corresponding to the main data from the database.
According to an embodiment of the present disclosure, the data processing apparatus 500 further comprises a second processing module.
And the second processing module is used for, in the case that the message data still contains attribute data after the preset number of attribute data have been taken out from the attribute data contained in the message data, sending the message data on which the taking-out operation has been completed to the distributed message queue, so that the consumer can consume that message data again.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or by any other reasonable means of hardware or firmware for integrating or packaging a circuit, or by any one of or a suitable combination of any of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be implemented at least partly as a computer program module, which when executed, may perform a corresponding function.
For example, any plurality of the first obtaining module 510, the generating module 520 and the first processing module 530 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first obtaining module 510, the generating module 520, and the first processing module 530 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or may be implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of the three. Alternatively, at least one of the first obtaining module 510, the generating module 520 and the first processing module 530 may be at least partially implemented as a computer program module, which, when executed, may perform a corresponding function.
It should be noted that, the data processing apparatus portion in the embodiment of the present disclosure corresponds to the data processing method portion in the embodiment of the present disclosure, and the description of the data processing apparatus portion specifically refers to the data processing method portion, which is not described herein again.
Fig. 6 schematically shows a block diagram of an electronic device adapted to implement a data processing method according to an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present disclosure includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. The processor 601 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or the RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM 602 and the RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be embodied in the device/apparatus/system described in the above embodiments; or may exist alone without being assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to an embodiment of the present disclosure, a computer-readable storage medium may include the ROM 602 and/or the RAM 603 and/or one or more memories other than the ROM 602 and the RAM 603 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure; when the computer program product is run on an electronic device, the program code causes the electronic device to carry out the data processing method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 601, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The above described systems, devices, modules, units, etc. may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, and the like. In another embodiment, the computer program may also be transmitted, distributed in the form of signals over a network medium, downloaded and installed via the communication section 609, and/or installed from a removable medium 611. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high level procedural and/or object oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not expressly recited in the present disclosure. In particular, various combinations of the features recited in the various embodiments of the present disclosure and/or the claims may be made without departing from the spirit and teachings of the present disclosure. All such combinations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (13)

1. A method of data processing, comprising:
responding to a trigger timing task, and acquiring a plurality of key values configured in a cache database;
generating message data based on the key value and the attribute data stored in the key value respectively for each key value; and
sending the plurality of message data to a distributed message queue, so that a consumer, after receiving the message data from the distributed message queue, processes the attribute data contained in the message data and writes the processed attribute data into a storage device of a search engine cluster.
2. The method of claim 1, further comprising:
acquiring change data generated in a database within a preset time period, wherein the preset time period comprises a trigger interval of the timing task;
for each acquired change data, determining a target key value from a plurality of key values of the cache database; and
storing the change data into the cache database as the attribute data of the target key value.
3. The method of claim 2, wherein the obtaining of the changed data generated in the database within the preset time period comprises:
acquiring log information generated in the database within the preset time period; and
analyzing the log information to obtain the change data.
4. The method of claim 2, wherein,
the change data is configured with a service line identifier and a client number;
and multiple key values of the cache database are respectively attributed to the key value groups of multiple service lines.
5. The method of claim 4, wherein the determining a target key value from a plurality of key values of the cached database for each retrieved change data comprises:
determining a target key value group based on the service line identification of the changed data; and
determining the target key value from a plurality of key values of the target key value set based on the customer number of the changed data.
6. The method of claim 4, wherein the consumer processes attribute data contained in the message data after receiving the message data from the distributed message queue and writes the processed attribute data into a storage device of a search engine cluster, comprising:
after the consumer receives the message data, in response to successfully acquiring the distributed locks, taking out a preset number of attribute data from the attribute data contained in the message data to obtain a plurality of first target attribute data;
processing the plurality of first target attribute data to obtain second target attribute data;
writing the second target attribute data into storage equipment of the search engine cluster through a preset data interface; and
releasing the distributed lock.
7. The method of claim 6, wherein the attribute data comprises main data and sub-data of multiple dimensions, and the attribute data stored in the same key value has a preset order;
the processing of the plurality of first target attribute data includes:
based on the preset sequence, performing deduplication on attribute data with the same main data and sub-data in the plurality of first target attribute data to obtain a plurality of third target attribute data; and
for each third target attribute data, processing the sub-data of the third target attribute data based on the main data of the third target attribute data.
8. The method of claim 7, wherein the processing, for each third target attribute data, sub-data of the third target attribute data based on main data of the third target attribute data comprises:
for each third target attribute data, deleting the subdata in the third target attribute data under the condition that the main data of the third target attribute data is determined to be the main data of the newly added operation type, and supplementing the subdata of all dimensions corresponding to the main data from the database;
under the condition that the main data of the third target attribute data is determined to be the main data of the modification operation type, keeping the dimensionality of the subdata of the third target attribute data, and supplementing the subdata of all dimensionalities corresponding to the main data from the database;
and under the condition that the third target attribute data does not contain main data, retaining the dimensionality of the subdata of the third target attribute data, supplementing the main data from the database based on the service line identification and the client number of the third target attribute data, and supplementing the subdata of all the dimensionalities corresponding to the main data from the database.
9. The method of claim 6, further comprising:
after a preset number of attribute data are taken out of the attribute data contained in the message data, when the message data still contains attribute data, sending the message data remaining after the take-out operation to the distributed message queue, so that the consumer consumes the remaining message data again.
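The take-out-and-requeue loop of claim 9 amounts to processing one batch and re-publishing the remainder. A minimal sketch under assumed shapes (`attrs` as the message's attribute-data list; `process` and `requeue` as caller-supplied hooks, all hypothetical):

```python
def consume(message, batch_size, process, requeue):
    """Take a preset number of attribute data items from the message, process
    them, and send what remains back to the queue for another consumption round."""
    attrs = message["attrs"]
    batch, rest = attrs[:batch_size], attrs[batch_size:]
    process(batch)
    if rest:                               # attribute data still left in the message
        requeue({**message, "attrs": rest})
```

Each consumption round therefore handles at most `batch_size` items, and a large message drains across several queue deliveries instead of one long-running consumer call.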
10. A data processing apparatus comprising:
a first acquisition module configured to acquire, in response to a timed task being triggered, a plurality of key values configured in a cache database;
a generation module, configured to generate, for each key value, message data based on the key value and the attribute data stored in the key value; and
a first processing module configured to send the message data to a distributed message queue, so that after receiving the message data from the distributed message queue, a consumer processes the attribute data contained in the message data and writes the processed attribute data into a storage device of a search engine cluster.
11. An electronic device, comprising:
one or more processors;
a memory storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 9.
13. A computer program product comprising computer-executable instructions which, when executed, implement the method of any one of claims 1 to 9.
CN202210432671.XA 2022-04-21 2022-04-21 Data processing method, data processing apparatus, electronic device, and storage medium Pending CN114780564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210432671.XA CN114780564A (en) 2022-04-21 2022-04-21 Data processing method, data processing apparatus, electronic device, and storage medium


Publications (1)

Publication Number Publication Date
CN114780564A true CN114780564A (en) 2022-07-22

Family

ID=82433303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210432671.XA Pending CN114780564A (en) 2022-04-21 2022-04-21 Data processing method, data processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114780564A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180159731A1 (en) * 2015-01-23 2018-06-07 Ebay Inc. Processing high volume network data
CN111858496A (en) * 2020-07-27 2020-10-30 北京大道云行科技有限公司 Metadata retrieval method and device, storage medium and electronic equipment
CN114357337A (en) * 2022-01-11 2022-04-15 平安普惠企业管理有限公司 Cache management method, device, equipment and storage medium
CN114356921A (en) * 2021-12-28 2022-04-15 中国农业银行股份有限公司 Data processing method, device, server and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024022329A1 (en) * 2022-07-25 2024-02-01 华为云计算技术有限公司 Data management method based on key value storage system and related device thereof
CN116186059A (en) * 2023-04-24 2023-05-30 民航成都信息技术有限公司 Flight data updating method, system, electronic device and storage medium
CN116186059B (en) * 2023-04-24 2023-06-30 民航成都信息技术有限公司 Flight data updating method, system, electronic device and storage medium
CN116821245A (en) * 2023-07-05 2023-09-29 贝壳找房(北京)科技有限公司 Data aggregation synchronization method and storage medium in distributed scene

Similar Documents

Publication Publication Date Title
US10185603B2 (en) System having in-memory buffer service, temporary events file storage system and backup events file uploader service
CN107690616B (en) Streaming join in a constrained memory environment
CN114780564A (en) Data processing method, data processing apparatus, electronic device, and storage medium
US9990224B2 (en) Relaxing transaction serializability with statement-based data replication
WO2018052907A1 (en) Data serialization in a distributed event processing system
US10331669B2 (en) Fast query processing in columnar databases with GPUs
US11016971B2 (en) Splitting a time-range query into multiple sub-queries for parallel execution
US10176205B2 (en) Using parallel insert sub-ranges to insert into a column store
US20140324917A1 (en) Reclamation of empty pages in database tables
US9501313B2 (en) Resource management and allocation using history information stored in application's commit signature log
CN113094434A (en) Database synchronization method, system, device, electronic equipment and medium
US20200050785A1 (en) Database record access through use of a multi-value alternate primary key
US11681709B2 (en) Joining remote tables by a federated server
US10565202B2 (en) Data write/import performance in a database through distributed memory
CN112506490A (en) Interface generation method and device, electronic equipment and storage medium
CN114201508A (en) Data processing method, data processing apparatus, electronic device, and storage medium
CN114417112A (en) Data processing method, data processing apparatus, electronic device, and storage medium
US11907176B2 (en) Container-based virtualization for testing database system
CN113986833A (en) File merging method, system, computer system and storage medium
CN114168607A (en) Global serial number generation method, device, equipment, medium and product
CN113688160A (en) Data processing method, processing device, electronic device and storage medium
US20140053128A1 (en) Persisting state using scripts
US12072882B2 (en) Database query processing
US11099970B2 (en) Capturing task traces for multiple tasks
US11899811B2 (en) Processing data pages under group-level encryption

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination