CN115048411A - Data processing method, apparatus, system, medium, and program product - Google Patents

Data processing method, apparatus, system, medium, and program product

Info

Publication number
CN115048411A
CN115048411A
Authority
CN
China
Prior art keywords
data
target data
cache
target
identification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110253003.6A
Other languages
Chinese (zh)
Inventor
黄增荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202110253003.6A priority Critical patent/CN115048411A/en
Publication of CN115048411A publication Critical patent/CN115048411A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/219 Managing data history or versioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2358 Change logging, detection, and notification

Abstract

The present disclosure provides a data processing method, including: acquiring a target data set from a database according to an identification information set, wherein the data in the target data set is data decoupled from the service; storing the data in the target data set into a cache; traversing the data in the database and determining target data, wherein the target data is changed data in the target data set; and updating associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data. The present disclosure also provides a data processing apparatus, a computer system, a readable storage medium, and a computer program product.

Description

Data processing method, apparatus, system, medium, and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, an apparatus, a computer system, a readable storage medium, and a computer program product.
Background
In Internet systems, data is generally stored in a database; when an application server starts, the data is fetched from the database through remote requests and loaded into the system memory in sequence.
In implementing the disclosed concept, the inventors found at least the following problems in the related art: the data is too deeply coupled with the service, so the data volume becomes too large, the processing speed is low every time the data is loaded, and frequent queries put pressure on the database.
Disclosure of Invention
In view of the above, the present disclosure provides a data processing method, apparatus, computer system, readable storage medium and computer program product.
One aspect of the present disclosure provides a data processing method, including:
acquiring a target data set from a database according to the identification information set, wherein the data in the target data set is data decoupled from the service;
storing the data in the target data set into a cache;
traversing the data in the database and determining target data, wherein the target data is changed data in the target data set; and
updating associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
According to an embodiment of the present disclosure, the data processing method further includes:
storing target identification information corresponding to the target data into a cache queue of a cache; and
updating a data version that characterizes different batches of target identification information in the cache queue.
According to an embodiment of the present disclosure, the data processing method further includes:
receiving an instruction for loading a target data set in a cache to a system memory; and
in response to the instruction, loading the target data set in the cache into the system memory.
According to an embodiment of the present disclosure, the data processing method further includes:
determining whether the target data set has been completely loaded into the system memory;
querying the data version when the target data set has been completely loaded into the system memory;
determining, according to the data version, whether target data has been generated;
acquiring the target identification information in the cache queue when it is determined that target data has been generated; and
loading the target data into the system memory based on the target identification information.
According to an embodiment of the present disclosure, in a case where the target data set has been completely loaded into the system memory, querying the data version includes:
cyclically querying the data version at a preset time interval when the target data set has been completely loaded into the system memory.
According to an embodiment of the present disclosure, loading the target data set into the system memory or loading the target data into the system memory includes:
processing the target data set or the target data, wherein the processing comprises classification and/or format conversion;
and loading the processed data into the system memory.
According to an embodiment of the present disclosure, wherein traversing the data in the database, determining the target data comprises:
traversing the data in the database based on the identification information corresponding to the data in the target data set that has been stored in the cache; and
determining the target data.
According to an embodiment of the present disclosure, traversing data in a database includes:
cyclically traversing the data in the database at a preset time interval.
According to an embodiment of the present disclosure, wherein storing the data in the target data set into the cache includes:
storing the data in the target data set into the cache; and
storing the identification information set into the cache.
Another aspect of the present disclosure provides a data processing apparatus including:
an acquisition module, configured to acquire a target data set from the database according to the identification information set, wherein the data in the target data set is data decoupled from the service;
a storage module, configured to store the data in the target data set into a cache;
a traversal module, configured to traverse the data in the database and determine target data, wherein the target data is changed data in the target data set; and
an update module, configured to update associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
Yet another aspect of the present disclosure provides a computer system comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method.
Yet another aspect of the disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement a method.
Yet another aspect of the present disclosure provides a computer program product comprising a computer program, the computer program comprising computer-executable instructions that, when executed, implement the method described above.
According to the embodiments of the present disclosure, a target data set is obtained from a database according to an identification information set, wherein the data in the target data set is data decoupled from the service; the data in the target data set is stored in a cache; the data in the database is traversed to determine target data, wherein the target data is changed data in the target data set; and associated data in the cache is updated with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data. Because the data in the target data set is decoupled from the service, association with other services is avoided; and because the associated data in the cache is updated with the target data, the cache is updated in time. Therefore, the technical problems in the related art that directly querying data from the database and loading it into the system memory is slow and affects other services when the data volume is large are at least partially solved, and the technical effects of decoupled data querying and timely data updating are achieved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the data processing methods and apparatus of the present disclosure may be applied;
FIG. 2 schematically shows a flow diagram of a data processing method according to an embodiment of the present disclosure;
fig. 3 schematically shows a signaling diagram of a data processing method according to an embodiment of the present disclosure;
fig. 4 schematically shows a signaling diagram of a data processing method according to another embodiment of the present disclosure;
fig. 5 schematically shows a signaling diagram of a data processing method according to another embodiment of the present disclosure;
FIG. 6 schematically shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure; and
FIG. 7 schematically shows a block diagram of a computer system suitable for implementing a data processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that these descriptions are illustrative only and are not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense that one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is generally intended in the sense that one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The embodiment of the present disclosure provides a data processing method, including: acquiring a target data set from a database according to an identification information set, wherein the data in the target data set is data decoupled from the service; storing the data in the target data set into a cache; traversing the data in the database and determining target data, wherein the target data is changed data in the target data set; and updating associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
According to the embodiment of the present disclosure, the data in the target data set is decoupled from the service, so that association with other services is avoided; moreover, the associated data in the cache is updated with the target data, so that the cache is updated in time.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the data processing methods and apparatus may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the data processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the data processing apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 105. The data processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the data processing apparatus provided in the embodiments of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, the data in the database may be originally stored in any one of the terminal devices 101, 102, or 103 (e.g., the terminal device 101, but not limited thereto), or stored on an external storage device and imported into the terminal device 101. The terminal device 101 may then send the data to a database of another terminal device, server, or server cluster, and the data processing method provided by the embodiments of the present disclosure may be executed by another server or server cluster that can query the data in the database.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a data processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S240.
In operation S210, a target data set is obtained from the database according to the identification information set, where data in the target data set is data decoupled from the service.
According to the embodiment of the disclosure, a request for acquiring a target data set from a database may be received before acquiring the target data set from the database according to an identification information set, where the request carries the identification information set.
According to the embodiment of the present disclosure, the identification information in the identification information set may be ID (identification) information characterizing the data, but is not limited thereto, and may also be other unique attribute identification information of the data.
According to an embodiment of the present disclosure, the data in the target data set may be large object data on the order of hundreds of thousands of records. More specifically, the target data set may include tens or hundreds of thousands of pieces of data, each including tens or hundreds of thousands of pieces of sub-data; however, this is not limiting, and the target data set may also be data of ordinary magnitude. When the target data set is large object data on the order of hundreds of thousands of records, the data processing method better demonstrates its advantages of speed and stability in data retrieval and loading.
According to the embodiment of the present disclosure, the database may be an ordinary database in which the data is stored; when the application server is started, the data is retrieved from the database by way of an RPC call (i.e., remote procedure call) and loaded into the system memory.
According to the embodiment of the disclosure, the data in the target data set is data decoupled from the service, so that the data is prevented from being coupled with the service too deeply.
In operation S220, data in the target data set is stored in the cache.
According to an embodiment of the present disclosure, the cache may be a Redis cache (i.e., a kind of key-value database).
According to the embodiment of the present disclosure, the data of the target data set is stored in the cache, and the cache serves as an intermediate transmission medium, rather than the data being called directly from the database and loaded into the system memory. On the one hand, when the data in the cache is loaded into the system memory, the entire target data set can be loaded within seconds, so the loading speed is high; on the other hand, when the amount of data being called is large, the pressure on the database is reduced compared with querying and loading directly from the database into the system memory.
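As a concrete illustration of operations S210 and S220 (together with the identification-set storage described later in operation S330), the following minimal Python sketch queries a hypothetical relational table by an identification information set and writes the result into a Redis cache. The table name biz_data, the column names, the connection parameters, and the key naming scheme are illustrative assumptions only, not part of the disclosed method.

```python
import json

import pymysql
import redis


def load_target_data_set(id_set):
    """Fetch the target data set by the identification information set (S210)
    and store each record in the Redis cache keyed by its ID (S220)."""
    db = pymysql.connect(host="localhost", user="app",
                         password="secret", database="demo")
    cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
    try:
        with db.cursor(pymysql.cursors.DictCursor) as cur:
            placeholders = ", ".join(["%s"] * len(id_set))
            cur.execute(
                "SELECT id, payload, updated_at FROM biz_data "
                f"WHERE id IN ({placeholders})",
                list(id_set),
            )
            rows = cur.fetchall()
        for row in rows:
            # One cache entry per record; the record ID doubles as the cache key.
            cache.set(f"target:{row['id']}", json.dumps(row, default=str))
        # Keep the identification information set itself in the cache as well,
        # so the service system can later load the full set (operation S330).
        if id_set:
            cache.sadd("target:id_set", *[str(i) for i in id_set])
    finally:
        db.close()
```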
In operation S230, data in the database is traversed, and target data is determined, where the target data is changed data in the target data set.
In operation S240, the associated data in the cache is updated with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
According to the embodiment of the present disclosure, when the data in the database is updated, the target data is determined by traversing the data in the database, and the associated data in the cache is updated with the target data, so that the data in the cache is synchronized with the data in the database in time, and the problem that the data loaded into the system memory is stale rather than updated in real time is avoided.
According to the embodiment of the present disclosure, the data in the target data set is decoupled from the service and the cache is used as an intermediate transmission medium, which improves the robustness of the system; moreover, the target data is determined in time and the associated data in the cache is updated with it, so that the data in the cache and the data in the database stay synchronized and the responsiveness of the system is improved.
The method shown in fig. 2 is further described with reference to fig. 3-5 in conjunction with specific embodiments.
Fig. 3 schematically shows a signaling diagram of a data processing method according to an embodiment of the disclosure.
As shown in fig. 3, the data processing method includes operations S310 to S370 as follows.
In operation S310, a target data set is obtained from the database according to the identification information set, where data in the target data set is data decoupled from the service.
According to the embodiment of the present disclosure, after the scheduling system queries the data from the database, the queried data can be stored in the cache.
According to the embodiment of the present disclosure, the data in the target data set is data decoupled from the service. The data may be decoupled from the service by using the scheduling system, for example by handling one step or one stage of the service processing separately, thereby achieving the decoupling effect. This is not limiting, however, and other ways of decoupling the data from the service may be used. In this way, a change in the logic of one side does not affect the whole system, and simultaneous calls to the associated data of the underlying services are avoided.
In operation S320, data in the target data set is stored in the buffer.
According to the embodiment of the present disclosure, the amount of data in the target data set may be large, for example large object data on the order of hundreds of thousands of records. When the scheduling system performs data scheduling, the data can be extracted in batches in a preset order rather than all at once within seconds.
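One possible way to realize this batched extraction, building on the hypothetical load_target_data_set sketch above, is to split the identification information set into fixed-size chunks and cache them in a preset order; the batch size here is an arbitrary example.

```python
def extract_in_batches(id_set, batch_size=1000):
    """Pull the target data set into the cache chunk by chunk, in a preset
    order, instead of fetching all records at once."""
    ordered_ids = sorted(id_set)  # the preset extraction order
    for start in range(0, len(ordered_ids), batch_size):
        load_target_data_set(ordered_ids[start:start + batch_size])
```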
In operation S330, the set of identification information is stored in the cache.
According to the embodiment of the present disclosure, the identification information set may be stored in the cache at the same time as the data in the target data set, and the service system loads the corresponding target data set into the system memory based on the identification information set.
In operation S340, data in the database is traversed, and target data is determined, where the target data is changed data in the target data set.
According to an alternative embodiment of the present disclosure, the data in the database is traversed based on the identification information corresponding to the data in the target data set that has been stored in the cache, and the target data is determined.
According to an optional embodiment of the present disclosure, the changed data in the database may be queried using, as retrieval information, the identification information corresponding to the data in the target data set that has been stored in the cache; however, the method is not limited to this. The changed data in the database may instead be queried based on a preset time period, the identification information of the changed data may then be matched against the identification information corresponding to the data in the target data set that has been stored in the cache, and the identification information that matches can be determined as the target identification information.
According to an alternative embodiment of the present disclosure, traversing the data in the database may include cycling through the data in the database at preset time intervals.
According to an alternative embodiment of the present disclosure, the preset time interval may be a time interval of 5 seconds to 10 seconds, but is not limited thereto, and may also be a time interval of 1 minute. It can be set according to the actual situation.
According to the optional embodiment of the present disclosure, cyclically traversing the data in the database at the preset time interval ensures that the data stored in the cache stays synchronized with the changed data in the database in time.
In operation S350, the associated data in the cache is updated with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
According to the embodiment of the present disclosure, the associated data can be understood as expired data. In the present disclosure, the associated data can be replaced with the target data based on the target identification information, so that the expired data in the cache is replaced and updated.
According to the embodiment of the present disclosure, only the data that has changed is replaced, rather than re-fetching the whole target data set. This avoids a backlog of data, simplifies storage, and relieves the storage load in the cache. In addition, the changed data in the database receives faster response processing: only the changed data is subsequently called, which reduces the pressure on the database.
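The following sketch illustrates one way to implement operations S340 and S350, under the assumption that each database record carries an updated_at timestamp: only rows changed since the previous scan are read back, and only the matching cache entries (the associated data) are replaced. The column and key names continue the hypothetical layout used above.

```python
import datetime


def sync_changed_data(db, cache, last_scan_time):
    """Traverse the database for records changed since last_scan_time (S340)
    and replace the associated data in the cache (S350). Returns the IDs of
    the replaced records and the new scan timestamp."""
    with db.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute(
            "SELECT id, payload, updated_at FROM biz_data WHERE updated_at > %s",
            (last_scan_time,),
        )
        changed_rows = cur.fetchall()

    cached_ids = cache.smembers("target:id_set")  # IDs already in the cache
    target_ids = []
    for row in changed_rows:
        record_id = str(row["id"])
        if record_id in cached_ids:  # only data belonging to the target data set
            cache.set(f"target:{record_id}", json.dumps(row, default=str))
            target_ids.append(record_id)
    return target_ids, datetime.datetime.now()
```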
In operation S360, target identification information corresponding to the target data is stored in the buffered buffer queue.
According to the embodiment of the present disclosure, when the target data is used to update the associated data in the cache, the target identification information corresponding to the target data can be stored in the cache at the same time, so that when the service system loads the updated data into the system memory based on the target identification information in the cache queue, the two correspond to each other and incorrect loading is prevented.
In operation S370, the data versions of the target identification information characterizing different batches in the buffer queue are updated.
According to the embodiment of the present disclosure, a cache queue and a data version can be set at the same time; the cache queue is implemented with key-value pairs, the data version corresponds to the target identification information of different batches in the cache queue, and the data version may be a long integer that increases gradually from 0.
According to the embodiment of the present disclosure, each time the scheduling system writes data into the cache, a data version is generated according to the current cache queue. For example, if the data version corresponding to the current cache queue is 5, then when new target identification information is written into the cache queue, the data version is incremented by 1 to 6. When the updated data is subsequently loaded into the system memory, the target identification information in the cache queue corresponding to the data version can be determined and queried according to the data version, and the target data can then be queried using the target identification information.
According to the embodiment of the disclosure, data loss can be effectively prevented by using the data version as the identifier.
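Continuing the same hypothetical key layout, operations S360 and S370 can be sketched as pushing the changed IDs into a cache queue and incrementing a long-integer data version, so the service system can detect that a new batch of target data exists; the key names are again illustrative assumptions.

```python
def publish_target_batch(cache, target_ids):
    """Store the target identification information in the cache queue (S360)
    and bump the data version that marks the new batch (S370)."""
    if not target_ids:
        return int(cache.get("target:data_version") or 0)
    cache.rpush("target:change_queue", *target_ids)  # cache queue of target IDs
    return cache.incr("target:data_version")         # e.g. version 5 becomes 6
```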
According to an alternative embodiment of the present disclosure, an instruction for loading the target data set in the cache into the system memory is received; and in response to the instruction, the target data set in the cache is loaded into the system memory.
According to the embodiment of the present disclosure, the target data set stored in the cache can be loaded into the system memory through the constructed service system, and the target data set in the cache can be loaded into the system memory based on the identification information set.
According to an alternative embodiment of the present disclosure, the target data set may be processed before loading, and the processed data may be loaded to the system memory.
According to the embodiment of the disclosure, the data in the target data set can be classified and converted into the format required by the application processor corresponding to the system memory.
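As a sketch of this pre-load processing, the records read back from the cache might be grouped (classified) by a hypothetical category field and converted from JSON strings into native objects; the field name and grouping rule are assumptions, since the disclosure leaves the concrete classification open.

```python
from collections import defaultdict


def process_for_memory(raw_values):
    """Classify and format-convert cached records before loading them
    into the system memory."""
    grouped = defaultdict(list)
    for raw in raw_values:
        record = json.loads(raw)  # format conversion: JSON string -> dict
        grouped[record.get("category", "default")].append(record)
    return dict(grouped)
```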
Fig. 4 schematically shows a signaling diagram of a data processing method according to another embodiment of the present disclosure.
As shown in fig. 4, the data processing method includes operations S410 to S470.
In operation S410, an instruction for loading a target data set in a cache to a system memory is received.
According to the embodiment of the disclosure, the target data set can be loaded from the cache to the system memory by using the service system.
According to the embodiment of the present disclosure, the instruction for loading the target data set in the cache to the system memory may be sent by the scheduling system to the service system, but is not limited to this, and may also be sent by a user to the service system. Its main purpose is to trigger the start of the loading service.
In operation S420, in response to the instruction, the target data set in the cache is loaded to the system memory.
According to the embodiment of the present disclosure, the operation of loading the target data set from the cache into the system memory can be completed within seconds.
In operation S430, it is determined whether the target data set has been completely loaded into the system memory.
In operation S440, in the case that the target data set has been completely loaded into the system memory, the data version is queried.
According to the embodiment of the present disclosure, the service system can determine whether all of the data in the target data set has been loaded into the system memory, and then query the data version once it has all been loaded.
In operation S450, it is determined whether target data is generated according to the data version.
According to an optional embodiment of the present disclosure, when the target data set has been completely loaded into the system memory, the data version is cyclically queried at a preset time interval.
According to an alternative embodiment of the present disclosure, the preset time interval may be a time interval of 5 seconds to 10 seconds, but is not limited thereto, and may also be a time interval of 1 minute. The interval can be set to match the preset time interval at which the scheduling system traverses the data in the database, so that updated data is fetched from the database and stored in the cache in time and is then loaded from the cache into the system memory in time.
According to the embodiment of the disclosure, the data version is used as the identifier to inquire and confirm the target data, so that data loss can be effectively prevented.
In operation S460, in the case where it is determined that target data is generated, target identification information in the buffer queue is acquired.
In operation S470, target data is loaded into the system memory based on the target identification information.
According to the optional embodiment of the present disclosure, by cyclically querying the data version at the preset time interval, updated data is loaded into the system memory in time, which achieves faster response processing and supports loading larger volumes of data.
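A minimal sketch of the polling flow in operations S430 to S470, reusing the hypothetical key names from the earlier sketches: after the full target data set has been loaded, the data version is queried at a preset interval, and when it advances, the target identification information is drained from the cache queue and the corresponding records are loaded into the system memory.

```python
import time


def poll_and_load(cache, system_memory, interval_seconds=5):
    """Cyclically query the data version and load newly generated target data
    from the cache into the in-memory store."""
    seen_version = int(cache.get("target:data_version") or 0)
    while True:
        time.sleep(interval_seconds)  # preset time interval (e.g. 5-10 seconds)
        version = int(cache.get("target:data_version") or 0)
        if version == seen_version:
            continue  # no new target data has been generated
        target_ids = cache.lrange("target:change_queue", 0, -1)
        cache.ltrim("target:change_queue", len(target_ids), -1)  # drain the queue
        for record_id in target_ids:
            raw = cache.get(f"target:{record_id}")
            if raw is not None:
                # Classification/format conversion (see the processing sketch
                # above) could be applied here before the record is stored.
                system_memory[record_id] = json.loads(raw)
        seen_version = version
```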
According to an optional embodiment of the present disclosure, the target data stored in the cache may be loaded into the system memory through the constructed service system. The target data in the cache can be loaded to the system memory based on the target identification information corresponding to the target data.
According to an alternative embodiment of the present disclosure, the target data may be processed before loading, and the processed data may be loaded to the system memory. In the present disclosure, the target data may be classified and format-converted so as to be converted into a format required by a processor corresponding to a system memory.
Fig. 5 schematically shows a signaling diagram of a data processing method according to another embodiment of the present disclosure.
As shown in fig. 5, the data processing method relies on a scheduling system and a service system, and a cache (Redis) is used as an intermediate transmission medium before the data in the database (DB) is loaded into the system memory. First, the scheduling system stores the data into the cache and periodically checks whether the corresponding data in the database has been updated; subsequently, only the changed data needs to be replaced in the cache, so the cache is updated in time. Then the service of the service system is started, the data in the cache is loaded into the system memory, and the data in the system memory is kept up to date by periodically checking whether the corresponding data in the cache has been updated.
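Putting the pieces together, the fig. 5 flow could be wired up roughly as below, with the scheduling side keeping the Redis cache in step with the database and the service side keeping the system memory in step with the cache; this is only an illustrative arrangement of the earlier hypothetical sketches, not the definitive implementation.

```python
import threading


def scheduler_loop(db, cache, interval_seconds=5):
    """Scheduling-system side: periodically traverse the database and publish changes."""
    last_scan = datetime.datetime.min
    while True:
        time.sleep(interval_seconds)
        target_ids, last_scan = sync_changed_data(db, cache, last_scan)
        publish_target_batch(cache, target_ids)


def run(db, cache, id_set, system_memory):
    """Full load into the cache, then keep cache and system memory up to date."""
    load_target_data_set(id_set)
    threading.Thread(target=scheduler_loop, args=(db, cache), daemon=True).start()
    poll_and_load(cache, system_memory)  # blocks; service-system side
```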
In summary, according to the embodiments of the present disclosure, by designing the scheduling system and the service system, tens or hundreds of thousands of records are loaded into the system memory within seconds without affecting other services. This solves the problems of directly loading data from a database into a system memory in the related art, for example: loading the data takes a long time and is slow when the data volume is large; frequent operations put pressure on the database; and the data is coupled with services, so that when the data is called, the pieces of data depend on each other and must be called from the underlying layer.
Fig. 6 schematically shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
As shown in FIG. 6, the data processing apparatus 600 includes an acquisition module 610, a storage module 620, a traversal module 630, and an update module 640.
The acquisition module 610 is configured to acquire a target data set from the database according to the identification information set, wherein the data in the target data set is data decoupled from the service.
The storage module 620 is configured to store the data in the target data set into a cache.
The traversal module 630 is configured to traverse the data in the database and determine target data, wherein the target data is changed data in the target data set.
The update module 640 is configured to update associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
According to an embodiment of the present disclosure, the data processing apparatus 600 further comprises an identification storage module and a version update module.
The identification storage module is configured to store target identification information corresponding to the target data into a cache queue of the cache.
The version update module is configured to update the data version that characterizes different batches of target identification information in the cache queue.
According to an embodiment of the present disclosure, the data processing apparatus 600 further includes a receiving module and a response module.
The receiving module is configured to receive an instruction for loading the target data set in the cache into the system memory.
The response module is configured to, in response to the instruction, load the target data set in the cache into the system memory.
According to an embodiment of the present disclosure, the data processing apparatus 600 further includes a load determination module, a version query module, a target determination module, an identification acquisition module, and a data loading module.
The load determination module is configured to determine whether the target data set has been completely loaded into the system memory.
The version query module is configured to query the data version when the target data set has been completely loaded into the system memory.
The target determination module is configured to determine, according to the data version, whether target data has been generated.
The identification acquisition module is configured to acquire the target identification information in the cache queue when it is determined that target data has been generated.
The data loading module is configured to load the target data into the system memory based on the target identification information.
According to an embodiment of the present disclosure, the version query module includes a version loop query unit.
The version loop query unit is configured to cyclically query the data version at a preset time interval when the target data set has been completely loaded into the system memory.
According to an embodiment of the present disclosure, loading the target data set into the system memory or loading the target data into the system memory includes processing the target data set or the target data, where the processing includes classification and/or format conversion; and loading the processed data to a system memory.
According to an embodiment of the present disclosure, the traversal module 630 includes a database traversal unit and a target data determination unit.
The database traversal unit is configured to traverse the data in the database based on the identification information corresponding to the data in the target data set that has been stored in the cache.
The target data determination unit is configured to determine the target data.
According to an embodiment of the present disclosure, traversing the data in the database may include cycling through the data in the database at preset time intervals.
According to an embodiment of the present disclosure, the storage module 620 includes a data storage unit and an identification storage unit.
The data storage unit is configured to store the data in the target data set into the cache.
The identification storage unit is configured to store the identification information set into the cache.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the acquisition module 610, the storage module 620, the traversal module 630, and the update module 640 may be combined into one module/unit/sub-unit to be implemented, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the acquisition module 610, the storage module 620, the traversal module 630, and the update module 640 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of the three implementations of software, hardware, and firmware, or any suitable combination thereof. Alternatively, at least one of the acquisition module 610, the storage module 620, the traversal module 630, and the update module 640 may be at least partially implemented as a computer program module, which, when executed, may perform a corresponding function.
It should be noted that, the data processing apparatus portion in the embodiment of the present disclosure corresponds to the data processing method portion in the embodiment of the present disclosure, and the description of the data processing apparatus portion specifically refers to the data processing method portion, which is not described herein again.
FIG. 7 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method, according to an embodiment of the present disclosure. The computer system illustrated in FIG. 7 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the disclosure.
As shown in fig. 7, a computer system 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the system 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and RAM 703. The processor 701 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
The system 700 may also include an input/output (I/O) interface 705, which is also connected to the bus 704, according to an embodiment of the present disclosure. The system 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that the computer program read therefrom is installed into the storage section 708 as needed.
According to an embodiment of the present disclosure, the method flow according to an embodiment of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the processor 701, performs the above-described functions defined in the system of the embodiment of the present disclosure. The above described systems, devices, apparatuses, modules, units, etc. may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and the RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by embodiments of the present disclosure, which, when the computer program product is run on an electronic device, is adapted to cause the electronic device to carry out the data processing method provided by embodiments of the present disclosure.
The computer program, when executed by the processor 701, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The above described systems, devices, modules, units, etc. may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Such programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teachings of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (13)

1. A method of data processing, comprising:
acquiring a target data set from a database according to the identification information set, wherein the data in the target data set is data decoupled from the service;
storing the data in the target data set into a cache;
traversing data in the database, and determining target data, wherein the target data is changed data in the target data set; and
updating associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
2. The method of claim 1, further comprising:
storing target identification information corresponding to the target data into a cache queue of the cache; and
updating a data version that characterizes different batches of target identification information in the cache queue.
3. The method of claim 2, further comprising:
receiving an instruction for loading the target data set in the cache to a system memory; and
in response to the instruction, loading the target data set in the cache to the system memory.
4. The method of claim 3, further comprising:
determining whether the target data set has been completely loaded into the system memory;
querying the data version when the target data set has been completely loaded into the system memory;
determining, according to the data version, whether the target data has been generated;
acquiring the target identification information in the cache queue when it is determined that the target data has been generated; and
loading the target data into the system memory based on the target identification information.
5. The method of claim 4, wherein the querying the data version when the target data set has been completely loaded into the system memory comprises:
cyclically querying the data version at a preset time interval when the target data set has been completely loaded into the system memory.
6. The method of claim 4, wherein the loading the set of target data into the system memory or the loading the target data into the system memory comprises:
processing the target data set or the target data, wherein the processing comprises classification and/or format conversion;
and loading the processed data to the system memory.
7. The method of claim 1, wherein traversing the data in the database, determining target data comprises:
traversing the data in the database based on the identification information corresponding to the data in the target data set stored in the cache; and
determining the target data.
8. The method of claim 7, the traversing the data in the database comprising:
cyclically traversing the data in the database at a preset time interval.
9. The method of claim 1, wherein the caching the data in the target data set comprises:
storing the data in the target data set into the cache; and
storing the identification information set into the cache.
10. A data processing apparatus comprising:
an acquisition module, configured to acquire a target data set from a database according to an identification information set, wherein the data in the target data set is data decoupled from the service;
a storage module, configured to store the data in the target data set into a cache;
a traversal module, configured to traverse the data in the database and determine target data, wherein the target data is changed data in the target data set; and
an update module, configured to update associated data in the cache with the target data, wherein the associated data is data that has been stored in the cache and has the same identification information as the target data.
11. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-9.
12. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 9.
13. A computer program product comprising a computer program comprising computer executable instructions for implementing the method of any one of claims 1 to 9 when executed.
CN202110253003.6A 2021-03-08 2021-03-08 Data processing method, apparatus, system, medium, and program product Pending CN115048411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110253003.6A CN115048411A (en) 2021-03-08 2021-03-08 Data processing method, apparatus, system, medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110253003.6A CN115048411A (en) 2021-03-08 2021-03-08 Data processing method, apparatus, system, medium, and program product

Publications (1)

Publication Number Publication Date
CN115048411A 2022-09-13

Family

ID=83156528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110253003.6A Pending CN115048411A (en) 2021-03-08 2021-03-08 Data processing method, apparatus, system, medium, and program product

Country Status (1)

Country Link
CN (1) CN115048411A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination