CN110427386B - Data processing method, device and computer storage medium - Google Patents

Data processing method, device and computer storage medium

Info

Publication number
CN110427386B
CN110427386B (application CN201910718451.1A)
Authority
CN
China
Prior art keywords
data
request
data processing
memory queue
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910718451.1A
Other languages
Chinese (zh)
Other versions
CN110427386A (en)
Inventor
Chen Kai (陈锴)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201910718451.1A
Publication of CN110427386A
Application granted
Publication of CN110427386B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2365 Ensuring data consistency and integrity
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management

Abstract

The application discloses a data processing method, a data processing device, and a computer storage medium, belonging to the technical field of computers. In the application, all data processing requests for the same data are added to the same memory queue, and the data processing requests in the memory queue are processed sequentially in order of addition time from earliest to latest. Therefore, even if a data query request for the same data is received after a data update request, the query request sits behind the update request in the memory queue and is not processed until the update request has been processed, which ensures that the data acquired based on the data query request is the updated data.

Description

Data processing method, device and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, and a computer storage medium.
Background
Currently, in order to increase the speed at which a server accesses a database, hot spot data that is accessed frequently can be backed up in a cache. When the server receives a data query request, it first queries the data from the cache, and queries the database only when the data to be queried does not exist in the cache.
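This cache-aside read path can be sketched as follows; the `cache` and `database` dicts are illustrative stand-ins for a real cache service and database:

```python
def read_data(data_id, cache, database):
    # Query the cache first; fall back to the database only on a miss.
    if data_id in cache:
        return cache[data_id]
    return database.get(data_id)
```

For hot data present in the cache, the database is never touched; only non-cached identifiers reach the database.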
In the related art, updating data in a database generally involves two operations: updating the data in the database, and deleting the pre-update data from the cache. If a query request for the data being updated arrives while the update is in progress, two stale-read cases can occur. If the database update has completed but the pre-update data has not yet been deleted from the cache, the server preferentially finds the pre-update data in the cache. Conversely, if the pre-update data has been deleted from the cache but the database update has not yet completed, the cache lookup fails, the server falls back to the database, and the data it finally acquires is still the pre-update data.
In other words, if the server performs a data query while the data is being updated, the data stored in the cache and in the database may be inconsistent, so the data acquired by the server may not be the data currently required.
Disclosure of Invention
The embodiments of the application provide a data processing method, a data processing apparatus, and a computer storage medium, which can ensure that the acquired data is the latest updated data. The technical solution is as follows:
in one aspect, a data processing method is provided, the method including:
receiving a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data updating request and a data query request;
selecting a process from a plurality of processes according to the data identification so that the data processing requests with the same data identification are executed by the same process, wherein each process in the plurality of processes is started with a plurality of memory queues, and each memory queue is used for processing the data processing requests with the same data identification;
selecting one memory queue from a plurality of memory queues started by the selected process according to the data identification;
and adding the data processing requests to a selected memory queue, wherein the data processing requests in the selected memory queue are sequentially processed according to the sequence from the early to the late of the adding time.
In one possible example, the data processing request includes a data update request;
the adding the data processing request to the selected memory queue includes:
generating a cache deleting operation request and a database updating operation request according to the data updating request;
and sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding the data processing request to the selected memory queue includes:
searching data corresponding to the data identifier from a cache of a server;
and when the data corresponding to the data identifier is searched, adding the data query request to the selected memory queue.
In one possible example, after searching the data corresponding to the data identifier in the cache of the server, the method further includes:
when the data corresponding to the data identifier is not found, generating an update cache operation request and a read data operation request according to the data query request;
and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
In one possible example, the selecting a process from a plurality of processes according to the data identification includes:
performing hash calculation on the data identifier to obtain a hash value of the data identifier;
and selecting one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with the plurality of hash values.
In one possible example, the selecting a memory queue from the plurality of memory queues started by the selected process according to the data identifier includes:
performing a modulo operation on the hash value of the data identifier to obtain a remainder;
and selecting one memory queue from the plurality of memory queues according to the remainder.
In another aspect, there is provided a data processing apparatus, the apparatus comprising:
the receiving module is used for receiving a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data updating request and a data query request;
the first selection module is used for selecting one process from a plurality of processes according to the data identification so that the data processing requests with the same data identification are executed by the same process, each process in the plurality of processes is started with a plurality of memory queues, and each memory queue is used for processing the data processing requests with the same data identification;
the second selection module is used for selecting one memory queue from a plurality of memory queues started by the selected process according to the data identification;
and the adding module is used for adding the data processing requests to the selected memory queue, wherein the data processing requests in the selected memory queue are sequentially processed according to the sequence from the early to the late of the adding time.
In one possible example, the data processing request includes a data update request;
the adding module is specifically configured to:
generating a cache deleting operation request and a database updating operation request according to the data updating request;
and sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding module is specifically configured to:
searching data corresponding to the data identifier from a cache of a server;
and when the data corresponding to the data identifier is searched, adding the data query request to the selected memory queue.
In one possible example, the adding module is further specifically configured to:
when the data corresponding to the data identifier is not found, generate an update cache operation request and a read data operation request according to the data query request;
and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
In one possible example, the first selection module is specifically configured to:
perform hash calculation on the data identifier to obtain a hash value of the data identifier;
and selecting one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with the plurality of hash values.
In one possible example, the second selection module is specifically configured to:
perform a modulo operation on the hash value of the data identifier to obtain a remainder;
and select one memory queue from the plurality of memory queues according to the remainder.
In another aspect, there is provided a data processing apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the data processing methods described above.
In another aspect, a computer readable storage medium is provided, on which instructions are stored, which when executed by a processor implement the steps of any of the data processing methods described above.
In another aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of implementing any of the data processing methods described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
in the application, when a data processing request carrying a data identifier is received, one process is selected from a plurality of processes according to the data identifier no matter the data processing request is a data updating request or a data inquiring request, so that the data processing request carrying the same data identifier is executed by the same process. Because each process is started with one or more memory queues, each memory queue is used for processing the data processing request with the same carried data identifier, one memory queue can be continuously selected from the memory queues started by the selected process according to the data identifier, and the data processing request is added into the selected memory queue. It is known that, in the present application, all data processing requests for the same data are added to the same memory queue, and the data processing requests in the memory queue are processed sequentially from the early to the late in the addition time, so that, after receiving the data update request, even if receiving the data query request for the same data, the data query request is not processed until the data update request is not processed, and only after the data update request is processed, it is ensured that the data acquired based on the data query request is updated data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application.
FIG. 2 is a schematic diagram of a data processing architecture according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before the embodiments of the present application are explained, the application scenario involved is described. Because the response speed of a cache is far greater than that of a database, after hot spot data in the database is backed up to the cache, the server can access the hot spot data directly from the cache; only non-hotspot data needs to be accessed from the database. This improves the overall speed at which the server accesses data. The data processing method provided by the embodiments of the application applies to the scenario in which a server is configured with both a cache and a database. Of course, the method may also apply to other scenarios in which a cache and a database are configured at the same time, for example a terminal configured with a multi-level storage medium; this is not specifically limited in the present application.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application, as shown in fig. 1, where the method includes the following steps:
step 101: a data processing request carrying a data identification is received, the data processing request comprising any one of a data update request and a data query request.
In the embodiments of the present application, in order to ensure that the data obtained from the server based on a data query request issued after an update is indeed the updated data, any data processing request, whether a data update request or a data query request, is added to a memory queue through steps 102 to 104 described below.
The data identifier is used to distinguish the data of different users and may be an identifier of the user to which the data belongs. For example, a user generates some data during a live broadcast, and other users can access that data; the live account of each user can therefore be used to distinguish the data of different users.
Step 102: according to the data identification, selecting a process from a plurality of processes so that the data processing requests with the same data identification are executed by the same process, wherein each process in the plurality of processes is started with a plurality of memory queues, and each memory queue is used for processing the data processing requests with the same data identification.
Because different processes can run in parallel, in the embodiments of the application a plurality of processes are preconfigured to process different data processing requests simultaneously, thereby improving the speed at which the server processes data. Further, to make data processing requests carrying the same data identifier be executed by the same process, when a data processing request carrying a data identifier is received, one process needs to be selected from the plurality of processes according to the data identifier.
It should be noted that, in the embodiments of the present application, data processing requests carrying the same data identifier are processed by the same process, while a single process can process data processing requests corresponding to a plurality of different data identifiers. For example, the data processing requests carrying data identifier A may all be executed by a first process, and the data processing requests carrying data identifier B may also all be executed by that same first process; it is only necessary to ensure that data processing requests carrying the same data identifier are processed by the same process.
Thus, in one possible implementation, step 102 may be implemented as follows: performing hash calculation on the data identifier to obtain a hash value of the data identifier; and selecting one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
Hash calculation has the following feature: each piece of data has a unique corresponding hash value, but different data may correspond to the same hash value. Therefore, when a process is selected in this way, data processing requests carrying the same data identifier are processed by the same process, while one process may process data processing requests carrying different data identifiers.
In addition, to ensure that data processing requests for the same data identifier are executed in sequence, each process starts a plurality of memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier. That is, each memory queue corresponds to one data identifier. For example, if the data processing requests carrying data identifier A and data identifier B are both processed in a first process, the data processing requests carrying data identifier A may correspond to a first memory queue, and the data processing requests carrying data identifier B may correspond to a second memory queue.
Step 103: and selecting one memory queue from a plurality of memory queues started by the selected process according to the data identification.
Based on step 102, each of the plurality of processes is started with a plurality of memory queues, each of which is used for processing the data processing request with the same carried data identifier. Therefore, after the process is selected, a memory queue can be further selected from a plurality of memory queues started by the selected process according to the data identifier.
Based on step 102, the process may be selected by hash calculation; accordingly, step 103 may be implemented as follows: performing a modulo operation on the hash value of the data identifier to obtain a remainder, and selecting one memory queue from the plurality of memory queues according to the remainder. By associating each memory queue with a remainder value, a memory queue can be selected in step 103 according to the data identifier, so that requests carrying the same data identifier always map to the same memory queue.
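Steps 102 and 103 can be sketched together as follows. The pool sizes and the MD5-based hash are illustrative assumptions; the application does not fix a particular hash function or count:

```python
import hashlib

NUM_PROCESSES = 4       # assumed process-pool size (not fixed by the application)
QUEUES_PER_PROCESS = 8  # assumed number of memory queues per process

def stable_hash(data_id: str) -> int:
    # A hash that is stable across runs (Python's built-in hash() is salted
    # per interpreter run, so it would not route consistently).
    return int.from_bytes(hashlib.md5(data_id.encode()).digest()[:8], "big")

def route(data_id: str):
    # Step 102: the same identifier always maps to the same process
    # (different identifiers may share a process). Step 103: the remainder
    # of the hash value picks one memory queue inside that process.
    h = stable_hash(data_id)
    return h % NUM_PROCESSES, h % QUEUES_PER_PROCESS
```

Because the routing depends only on the data identifier, every request for the same data lands on the same process and the same memory queue.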
Step 104: the data processing requests are added to a selected memory queue, and the data processing requests in the selected memory queue are sequentially processed according to the sequence from the early to the late of the adding time.
After the memory queue is selected in step 103, the data processing request may be added to the selected memory queue in accordance with step 104.
In the embodiments of the application, after each process starts a plurality of memory queues, it can further start a plurality of consumption threads in one-to-one correspondence with the memory queues, so that the consumption threads process the data processing requests in their corresponding memory queues in parallel, further improving the speed at which the server processes data.
Each consumption thread processes the data processing requests in its corresponding memory queue sequentially, in order of addition time from earliest to latest. As can be seen from steps 102 and 103, each consumption thread processes data processing requests carrying the same data identifier, and data processing requests carrying different data identifiers are handled by different consumption threads.
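The one-queue-one-consumer arrangement can be sketched as follows; the `None` sentinel is an artifact of the sketch, not part of the application:

```python
import queue
import threading

def start_consumer(mem_queue, results):
    # One consumption thread per memory queue: the requests for one data
    # identifier are handled strictly in insertion (FIFO) order.
    def consume():
        while True:
            request = mem_queue.get()
            if request is None:       # sentinel, used only to end this sketch
                break
            results.append(request)   # stand-in for executing the request
    worker = threading.Thread(target=consume)
    worker.start()
    return worker
```

Adding an update request and then a query request for the same identifier to one queue guarantees the consumer handles the update before the query.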
In addition, for a server that configures both the database and the cache, the process of updating the data includes two operations, one is an operation to update the database and one is an operation to delete the cache. Thus, if the data processing request in step 101 is a data update request, the implementation of adding the data processing request to the selected memory queue in step 104 may be: generating a cache deleting operation request and a database updating operation request according to the data updating request; and sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue.
Sequentially adding the delete cache operation request and the update database operation request to the selected memory queue means that the delete cache operation request is placed before the update database operation request, so that the consumption thread performs the delete cache operation first and then the update database operation.
In addition, for a server that configures both the database and the cache, the server typically queries the cache before querying the database, so if the data processing request in step 101 is a data query request, the implementation manner of adding the data processing request to the selected memory queue in step 104 may be: searching data corresponding to the data identifier from the cache of the server; when the data corresponding to the data identification is found, the data query request is added to the selected memory queue.
Correspondingly, when the data corresponding to the data identifier is not found, generating an update cache operation request and a read data operation request according to the query request; and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
That is, in the embodiments of the present application, if the data processing request in step 101 is a data query request and the cache contains no data corresponding to the data identifier, the data corresponding to the data identifier can be backed up to the cache when it is queried, to facilitate subsequent reads of that data.
The update cache operation request and the read data operation request may be added to the selected memory queue in either order; the order is not limited herein.
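The two query paths (cache hit versus cache miss) can be sketched as follows; since the application leaves the order of the two miss-path operations open, update-cache-first is chosen here as an assumption:

```python
from queue import Queue

def enqueue_query(mem_queue, cache, data_id):
    # Cache hit: enqueue the query request itself.
    # Cache miss: enqueue an update cache operation and a read data
    # operation (their relative order is not fixed by the application).
    if data_id in cache:
        mem_queue.put(("query", data_id))
    else:
        mem_queue.put(("update_cache", data_id))
        mem_queue.put(("read_data", data_id))
```

Either way, the operations join the same per-identifier queue, so they are serialized behind any pending update operations for the same data.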
Optionally, if the data processing request in step 101 is a data query request, the data query request may also be added directly to the memory queue. When the consumption thread consumes the data query request, it searches the cache of the server for the data corresponding to the data identifier; if the data is found, it is acquired from the cache, and if not, it is acquired from the database.
FIG. 2 is a schematic diagram of a data processing architecture according to an embodiment of the present application. As shown in fig. 2, when the server receives a data processing request, it selects a process from a plurality of processes according to the data identifier; the selected process includes a plurality of memory queues, each consumed by a consumption thread. The server then selects a memory queue according to the data identifier and adds the data processing request to the selected memory queue.
The server in fig. 2 may be an Nginx proxy server or a server at the routing layer that processes the data processing requests of individual users.
In addition, when each request is added to the memory queue, the request may be encapsulated into a message body, and the encapsulated message body is then added to the memory queue; this is not described in detail herein.
The following illustrates the technical effects of the data processing method provided by the embodiment of the present application:
it is assumed that both user a and user B are operating on the same user C's data. The user A performs a data updating operation at this time, and then sequentially puts the cache deleting request and the database updating operation request into a selected memory queue, where the memory queue corresponds to the user C. If the consuming thread corresponding to the memory queue consumes the request for deleting the cache operation, but does not consume the request for updating the database operation, that is, does not perform the updating operation of the database, then the cache data will be empty. If the server receives the data query request of the user B aiming at the user C, the server searches that the cache is empty, and then the update cache operation request and the read data operation request are sequentially put into the memory queue, so that the update cache operation request and the read data operation request are backlogged in the memory queue. After the previous database update operation request is consumed and executed, the backlogged update cache operation request and read data operation request can be consumed and the cache is updated again. At this point the data that user B queried for was up-to-date.
It can be seen that, in the embodiments of the present application, when a data processing request carrying a data identifier is received, whether it is a data update request or a data query request, one process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process. Because each process starts one or more memory queues, each used for processing data processing requests carrying the same data identifier, a memory queue can then be selected from the memory queues of the selected process according to the data identifier, and the data processing request is added to the selected memory queue. Thus, all data processing requests for the same data are added to the same memory queue and processed in order of addition time from earliest to latest. Even if a data query request for the same data is received after a data update request, the query request is located behind the update request in the memory queue and is not processed until the update request has been processed, which ensures that the data acquired based on the data query request is the updated data.
Fig. 3 is a data processing apparatus according to an embodiment of the present application, where the apparatus 300 includes:
a receiving module 301, configured to receive a data processing request carrying a data identifier, where the data processing request includes any one of a data update request and a data query request;
a first selection module 302, configured to select a process from a plurality of processes according to the data identifier, so that the data processing request with the same data identifier is executed by the same process, where each process in the plurality of processes starts a plurality of memory queues, and each memory queue is used to process the data processing request with the same data identifier;
a second selecting module 303, configured to select, according to the data identifier, one memory queue from a plurality of memory queues started by the selected process;
the adding module 304 is configured to add the data processing requests to the selected memory queue, where the data processing requests in the selected memory queue are sequentially processed in order of addition time from early to late.
In one possible example, the data processing request includes a data update request;
the adding module is specifically used for:
generating a cache deleting operation request and a database updating operation request according to the data updating request;
and sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding module is specifically used for:
searching data corresponding to the data identifier from the cache of the server;
when the data corresponding to the data identification is found, the data query request is added to the selected memory queue.
In one possible example, the adding module is further specifically configured to:
when the data corresponding to the data identifier is not found, generate an update cache operation request and a read data operation request according to the data query request;
and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
In one possible example, the first selection module is specifically configured to:
perform a hash calculation on the data identifier to obtain a hash value of the data identifier;
and select one process from the plurality of processes according to the hash value, where the plurality of processes are in one-to-one correspondence with a plurality of hash values.
In one possible example, the second selection module is specifically configured to:
perform a modulo (remainder) operation on the hash value of the data identifier;
and select one memory queue from the plurality of memory queues according to the remainder.
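A worked example of the selection arithmetic, with assumed process and queue counts and an arbitrary hash value (none of these numbers come from the patent):

```python
# Worked example of the two-step selection described above; the counts and
# the concrete hash value are assumptions for illustration only.
num_processes = 4
num_queues = 8

hash_value = 1_234_567            # hash of some data identifier
process_index = hash_value % num_processes   # remainder picks the process
queue_index = hash_value % num_queues        # remainder picks the queue
print(process_index, queue_index)  # 3 7
```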
In the embodiments of the present application, when a data processing request carrying a data identifier is received, whether it is a data update request or a data query request, one process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process. Because each process starts one or more memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier, one memory queue can then be selected from the memory queues started by the selected process according to the data identifier, and the data processing request is added to the selected memory queue. It can be seen that, in the embodiments of the present application, all data processing requests for the same data are added to the same memory queue, and the data processing requests in that queue are processed sequentially in order of addition time, from earliest to latest. Therefore, even if a data query request for the same data is received after a data update request, the query request sits behind the update request in the memory queue and is not processed until the update request has been processed, which ensures that the data obtained for the data query request is the updated data.
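The ordering guarantee argued here can be demonstrated with a single consumption thread draining one FIFO memory queue. This is a minimal sketch under assumed operation names and data; it is not the patent's implementation.

```python
import queue
import threading

mem_queue = queue.Queue()            # one memory queue, drained in FIFO order
db = {"user:42": "old-value"}        # stands in for the database
cache = {"user:42": "old-value"}     # stands in for the cache
results = []

def consume():
    """One consumption thread per memory queue, as described above."""
    while True:
        op = mem_queue.get()
        if op is None:               # sentinel: stop the worker
            break
        kind, data_id, *args = op
        if kind == "delete_cache":
            cache.pop(data_id, None)
        elif kind == "update_db":
            db[data_id] = args[0]
        elif kind == "query":
            # cache first, fall back to the database on a miss
            results.append(cache.get(data_id, db.get(data_id)))

worker = threading.Thread(target=consume)
worker.start()
# The update's two sub-operations are enqueued before the query, so the
# query cannot observe the pre-update value:
mem_queue.put(("delete_cache", "user:42"))
mem_queue.put(("update_db", "user:42", "new-value"))
mem_queue.put(("query", "user:42"))
mem_queue.put(None)
worker.join()
print(results)  # ['new-value']
```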
It should be noted that: in the data processing apparatus provided in the foregoing embodiments, only the division of the above functional modules is used as an example when performing data processing, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data processing apparatus and the data processing method embodiment provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the data processing apparatus and the data processing method embodiment are detailed in the method embodiment, which is not described herein again.
Fig. 4 is a schematic diagram of a server architecture according to an exemplary embodiment. The server may be a server in a backend server cluster.
The server 400 includes a Central Processing Unit (CPU) 401, a system memory 404 including a Random Access Memory (RAM) 402 and a Read-Only Memory (ROM) 403, and a system bus 405 connecting the system memory 404 and the central processing unit 401. The server 400 also includes a basic input/output system (I/O system) 406 for facilitating the transfer of information between devices within the computer, and a mass storage device 407 for storing an operating system 413, application programs 414, and other program modules 415.
The basic input/output system 406 includes a display 408 for displaying information and an input device 409, such as a mouse or keyboard, for the user to input information. Both the display 408 and the input device 409 are connected to the central processing unit 401 via an input/output controller 410 coupled to the system bus 405. The basic input/output system 406 may also include the input/output controller 410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 410 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 407 is connected to the central processing unit 401 through a mass storage controller (not shown) connected to the system bus 405. The mass storage device 407 and its associated computer-readable medium provide non-volatile storage for the server 400. That is, mass storage device 407 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Computer readable media may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the ones described above. The system memory 404 and mass storage device 407 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 400 may also be operated by a remote computer connected through a network, such as the Internet. That is, the server 400 may be connected to the network 412 through a network interface unit 411 coupled to the system bus 405, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 411.
The memory also includes one or more programs that are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing the data processing methods provided by the embodiments of the present application.
The embodiments of the present application also provide a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a server, enable the server to perform the data processing method provided in the above embodiments.
The embodiment of the application also provides a computer program product containing instructions, which when run on a server, cause the server to execute the data processing method provided in the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the protection scope of the application.

Claims (5)

1. A data processing method, wherein the method is applied to a scenario in which a cache and a database are simultaneously configured for a server, hot spot data in the database is backed up into the cache, the server accesses hot spot data directly from the cache and accesses non-hot spot data from the database, and the method comprises the following steps:
receiving a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data updating request and a data query request;
selecting a process from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, wherein each process in the plurality of processes starts a plurality of memory queues, each memory queue is used for processing data processing requests carrying the same data identifier, and each memory queue corresponds to one data identifier;
performing a modulo (remainder) operation on a hash value of the data identifier; and selecting one memory queue from the plurality of memory queues started by the selected process according to the remainder;
adding the data processing request to the selected memory queue, wherein the data processing requests in the selected memory queue are processed sequentially in order of addition time, from earliest to latest, each memory queue corresponds to one consumption thread, each consumption thread is used for processing the data processing requests in the corresponding memory queue, and each consumption thread processes data processing requests carrying the same data identifier;
when the data processing request includes a data update request, the adding the data processing request to the selected memory queue includes:
generating a cache deleting operation request and a database updating operation request according to the data updating request;
sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue;
when the data processing request includes a data query request, the adding the data processing request to the selected memory queue includes:
searching data corresponding to the data identifier from a cache of a server;
when the data corresponding to the data identifier is found, adding the data query request to the selected memory queue;
when the data corresponding to the data identifier is not found, generating an updating cache operation request and a data reading operation request according to the query request;
and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
2. The method of claim 1, wherein selecting one process from a plurality of processes based on the data identification comprises:
performing a hash calculation on the data identifier to obtain a hash value of the data identifier;
and selecting one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
3. A data processing apparatus, wherein the apparatus is applied to a scenario in which a cache and a database are simultaneously configured for a server, hot spot data in the database is backed up into the cache, the server accesses hot spot data directly from the cache and accesses non-hot spot data from the database, and the apparatus comprises:
the receiving module is used for receiving a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data updating request and a data query request;
the first selection module, configured to select one process from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, wherein each process in the plurality of processes starts a plurality of memory queues, each memory queue is used for processing data processing requests carrying the same data identifier, and each memory queue corresponds to one data identifier;
the second selection module, configured to perform a modulo (remainder) operation on a hash value of the data identifier, and to select one memory queue from the plurality of memory queues started by the selected process according to the remainder;
the adding module, configured to add the data processing request to the selected memory queue, wherein the data processing requests in the selected memory queue are processed sequentially in order of addition time, from earliest to latest, each memory queue corresponds to one consumption thread, each consumption thread is used for processing the data processing requests in the corresponding memory queue, and each consumption thread processes data processing requests carrying the same data identifier;
when the data processing request includes a data update request, the adding module is specifically configured to:
generating a cache deleting operation request and a database updating operation request according to the data updating request;
sequentially adding the cache deleting operation request and the database updating operation request to the selected memory queue;
when the data processing request includes a data query request, the adding module is specifically configured to:
searching data corresponding to the data identifier from a cache of a server;
when the data corresponding to the data identifier is found, adding the data query request to the selected memory queue;
when the data corresponding to the data identifier is not found, generating an updating cache operation request and a data reading operation request according to the query request;
and sequentially adding the update cache operation request and the read data operation request to the selected memory queue.
4. A data processing apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 2.
5. A computer-readable storage medium having stored thereon instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 2.
CN201910718451.1A 2019-08-05 2019-08-05 Data processing method, device and computer storage medium Active CN110427386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910718451.1A CN110427386B (en) 2019-08-05 2019-08-05 Data processing method, device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910718451.1A CN110427386B (en) 2019-08-05 2019-08-05 Data processing method, device and computer storage medium

Publications (2)

Publication Number Publication Date
CN110427386A CN110427386A (en) 2019-11-08
CN110427386B true CN110427386B (en) 2023-09-19

Family

ID=68412683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910718451.1A Active CN110427386B (en) 2019-08-05 2019-08-05 Data processing method, device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110427386B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414259A (en) * 2020-02-10 2020-07-14 北京声智科技有限公司 Resource updating method, system, device, server and storage medium
CN111488366B (en) * 2020-04-09 2023-08-01 百度在线网络技术(北京)有限公司 Relational database updating method, relational database updating device, relational database updating equipment and storage medium
CN111782399B (en) * 2020-07-03 2023-12-01 北京思特奇信息技术股份有限公司 UDP-based efficient realization method for configuration server
CN112104731B (en) * 2020-09-11 2022-05-20 北京奇艺世纪科技有限公司 Request processing method and device, electronic equipment and storage medium
CN112364061A (en) * 2020-11-18 2021-02-12 浪潮云信息技术股份公司 Mysql-based high-concurrency database access method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012015673A2 (en) * 2010-07-27 2012-02-02 Microsoft Corporation Application instance and query stores
CN103955486A (en) * 2014-04-14 2014-07-30 五八同城信息技术有限公司 Distributed service system as well as data updating method and data query method thereof
CN106294607A (en) * 2016-07-29 2017-01-04 北京奇虎科技有限公司 Data cached update method and updating device
CN106354851A (en) * 2016-08-31 2017-01-25 广州市乐商软件科技有限公司 Data-caching method and device
CN106874067A (en) * 2017-01-24 2017-06-20 华南理工大学 Parallel calculating method, apparatus and system based on lightweight virtual machine
CN107368502A (en) * 2016-05-13 2017-11-21 北京京东尚科信息技术有限公司 Information synchronization method and device
CN107508757A (en) * 2017-08-15 2017-12-22 网宿科技股份有限公司 Multi-process load-balancing method and device
CN107992517A (en) * 2017-10-26 2018-05-04 深圳市金立通信设备有限公司 A kind of data processing method, server and computer-readable medium
CN108459917A (en) * 2018-03-15 2018-08-28 欧普照明股份有限公司 A kind of message distribution member, message handling system and message distribution method
WO2018169429A1 (en) * 2017-03-17 2018-09-20 Oracle International Corporation Framework for the deployment of event-based applications
CN109710402A (en) * 2018-12-17 2019-05-03 平安普惠企业管理有限公司 Method, apparatus, computer equipment and the storage medium of process resource acquisition request

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI20115269A0 (en) * 2011-03-18 2011-03-18 Tekla Corp Delayed update of shared information

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012015673A2 (en) * 2010-07-27 2012-02-02 Microsoft Corporation Application instance and query stores
CN103955486A (en) * 2014-04-14 2014-07-30 五八同城信息技术有限公司 Distributed service system as well as data updating method and data query method thereof
CN107368502A (en) * 2016-05-13 2017-11-21 北京京东尚科信息技术有限公司 Information synchronization method and device
CN106294607A (en) * 2016-07-29 2017-01-04 北京奇虎科技有限公司 Data cached update method and updating device
CN106354851A (en) * 2016-08-31 2017-01-25 广州市乐商软件科技有限公司 Data-caching method and device
CN106874067A (en) * 2017-01-24 2017-06-20 华南理工大学 Parallel calculating method, apparatus and system based on lightweight virtual machine
WO2018169429A1 (en) * 2017-03-17 2018-09-20 Oracle International Corporation Framework for the deployment of event-based applications
CN107508757A (en) * 2017-08-15 2017-12-22 网宿科技股份有限公司 Multi-process load-balancing method and device
CN107992517A (en) * 2017-10-26 2018-05-04 深圳市金立通信设备有限公司 A kind of data processing method, server and computer-readable medium
CN108459917A (en) * 2018-03-15 2018-08-28 欧普照明股份有限公司 A kind of message distribution member, message handling system and message distribution method
CN109710402A (en) * 2018-12-17 2019-05-03 平安普惠企业管理有限公司 Method, apparatus, computer equipment and the storage medium of process resource acquisition request

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on data collection strategy of FTP search engine; Guo Lili et al.; Computer Engineering and Design; 2009-04-28 (No. 08); pp. 1853-1855 *

Also Published As

Publication number Publication date
CN110427386A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110427386B (en) Data processing method, device and computer storage medium
CN108108463B (en) Synchronous task processing method and device based on time slice scheduling
CN108228799B (en) Object index information storage method and device
US11310066B2 (en) Method and apparatus for pushing information
CN109766318B (en) File reading method and device
CN107515879B (en) Method and electronic equipment for document retrieval
CN109933585B (en) Data query method and data query system
CN108154024B (en) Data retrieval method and device and electronic equipment
CN112860953A (en) Data importing method, device, equipment and storage medium of graph database
CN109271193B (en) Data processing method, device, equipment and storage medium
CN112866339B (en) Data transmission method and device, computer equipment and storage medium
CN102724301B (en) Cloud database system and method and equipment for reading and writing cloud data
CN110162395B (en) Memory allocation method and device
CN113886496A (en) Data synchronization method and device of block chain, computer equipment and storage medium
CN109885729B (en) Method, device and system for displaying data
CN112395337B (en) Data export method and device
CN110515979B (en) Data query method, device, equipment and storage medium
CN111061557B (en) Method and device for balancing distributed memory database load
CN112764897B (en) Task request processing method, device and system and computer readable storage medium
CN110765125A (en) Data storage method and device
CN111399753B (en) Method and device for writing pictures
CN112783866A (en) Data reading method and device, computer equipment and storage medium
CN112685474A (en) Application management method, device, equipment and storage medium
CN112527900A (en) Method, device, equipment and medium for database multi-copy reading consistency
CN114189490B (en) User list processing method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210112

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511446 28th floor, block B1, Wanda Plaza, Wanbo business district, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20191108

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000054

Denomination of invention: Data processing method, device and computer storage medium

License type: Common License

Record date: 20210208

GR01 Patent grant