Disclosure of Invention
Embodiments of the present application provide a data processing method, a data processing apparatus, and a computer storage medium, which can ensure that acquired data is the most recently updated data. The technical solution is as follows:
in one aspect, a data processing method is provided, the method including:
receiving a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data update request and a data query request;
selecting a process from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, wherein each process in the plurality of processes starts a plurality of memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier;
selecting one memory queue from the plurality of memory queues started by the selected process according to the data identifier; and
adding the data processing request to the selected memory queue, wherein the data processing requests in the selected memory queue are processed sequentially in order of addition time from earliest to latest.
In one possible example, the data processing request includes a data update request;
the adding the data processing request to the selected memory queue includes:
generating a delete-cache operation request and an update-database operation request according to the data update request; and
sequentially adding the delete-cache operation request and the update-database operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding the data processing request to the selected memory queue includes:
searching a cache of a server for data corresponding to the data identifier; and
when the data corresponding to the data identifier is found, adding the data query request to the selected memory queue.
In one possible example, after the searching the cache of the server for the data corresponding to the data identifier, the method further includes:
when the data corresponding to the data identifier is not found, generating an update-cache operation request and a read-data operation request according to the data query request; and
sequentially adding the update-cache operation request and the read-data operation request to the selected memory queue.
In one possible example, the selecting a process from a plurality of processes according to the data identifier includes:
performing a hash calculation on the data identifier to obtain a hash value of the data identifier; and
selecting one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
In one possible example, the selecting one memory queue from the plurality of memory queues started by the selected process according to the data identifier includes:
performing a remainder (modulo) operation on the hash value using the data identifier; and
selecting one memory queue from the plurality of memory queues according to the resulting remainder.
In another aspect, there is provided a data processing apparatus, the apparatus comprising:
a receiving module, configured to receive a data processing request carrying a data identifier, wherein the data processing request comprises any one of a data update request and a data query request;
a first selection module, configured to select a process from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, wherein each process in the plurality of processes starts a plurality of memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier;
a second selection module, configured to select one memory queue from the plurality of memory queues started by the selected process according to the data identifier; and
an adding module, configured to add the data processing request to the selected memory queue, wherein the data processing requests in the selected memory queue are processed sequentially in order of addition time from earliest to latest.
In one possible example, the data processing request includes a data update request;
the adding module is specifically configured to:
generate a delete-cache operation request and an update-database operation request according to the data update request; and
sequentially add the delete-cache operation request and the update-database operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding module is specifically configured to:
search the cache of the server for the data corresponding to the data identifier; and
when the data corresponding to the data identifier is found, add the data query request to the selected memory queue.
In one possible example, the adding module is further configured to:
when the data corresponding to the data identifier is not found, generate an update-cache operation request and a read-data operation request according to the data query request; and
sequentially add the update-cache operation request and the read-data operation request to the selected memory queue.
In one possible example, the first selection module is specifically configured to:
perform a hash calculation on the data identifier to obtain a hash value of the data identifier; and
select one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
In one possible example, the second selecting module is specifically configured to:
perform a remainder (modulo) operation on the hash value using the data identifier; and
select one memory queue from the plurality of memory queues according to the resulting remainder.
In another aspect, there is provided a data processing apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the data processing methods described above.
In another aspect, a computer readable storage medium is provided, on which instructions are stored, which when executed by a processor implement the steps of any of the data processing methods described above.
In another aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of any of the data processing methods described above.
The technical solution provided by the embodiments of the present application has the following beneficial effects:
in the present application, when a data processing request carrying a data identifier is received, whether the data processing request is a data update request or a data query request, a process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process. Because each process starts one or more memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier, a memory queue can then be selected from the memory queues started by the selected process according to the data identifier, and the data processing request is added to the selected memory queue. In this way, all data processing requests for the same data are added to the same memory queue, and the data processing requests in that memory queue are processed sequentially in order of addition time from earliest to latest. Therefore, if a data query request for the same data is received after a data update request, the data query request is not processed until the data update request has been processed, which ensures that the data acquired based on the data query request is the updated data.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before the embodiments of the present application are explained, the application scenario involved is described. Because the response speed of a cache is far higher than that of a database, after hot-spot data in the database is backed up to the cache, the server can access the hot-spot data directly from the cache; access to the database is required only for non-hot-spot data, thereby improving the overall speed at which the server accesses data. The data processing method provided by the embodiments of the present application is applied to a scenario in which both a cache and a database are configured for a server. Of course, the present application may also be applied to other scenarios in which a cache and a database are configured at the same time, for example, a terminal configured with a multi-level storage medium; this is not specifically limited in the present application.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps:
step 101: a data processing request carrying a data identification is received, the data processing request comprising any one of a data update request and a data query request.
In the embodiments of the present application, in order to ensure that the data obtained from the server based on a data query request after the data is updated is the updated data, any data processing request, whether a data update request or a data query request, is added to a memory queue through steps 102 to 104 described below.
The data identifier is used to distinguish the data of different users, and may be an identifier of the user to whom the data belongs. For example, any user generates some data during a live broadcast, and other users can access that user's data. In this way, the live-broadcast account number of each user can be used to distinguish the data of different users.
Step 102: a process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, wherein each process in the plurality of processes starts a plurality of memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier.
Because different processes can run in parallel, in the embodiments of the present application a plurality of processes are preconfigured to process different data processing requests simultaneously, thereby improving the speed at which the server processes data. Further, in order for data processing requests carrying the same data identifier to be executed by the same process, when a data processing request carrying a data identifier is received, a process needs to be selected from the plurality of processes according to the data identifier.
It should be noted that, in the embodiments of the present application, data processing requests carrying the same data identifier are processed by the same process, and one process can process data processing requests corresponding to a plurality of different data identifiers. For example, the data processing requests carrying data identifier A may all be executed by a first process, and the data processing requests carrying data identifier B may also all be executed by that first process; it is only necessary to ensure that data processing requests carrying the same data identifier are processed by the same process.
Thus, in one possible implementation, step 102 may be implemented as follows: a hash calculation is performed on the data identifier to obtain a hash value of the data identifier; and one process is selected from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
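By way of illustration only, the hash-based process selection described above might be sketched as follows in Python. This is not the claimed implementation: the pool size, the use of an MD5 digest as the hash calculation, and the function names are all assumptions made for this sketch.

```python
import hashlib

NUM_PROCESSES = 4  # assumed size of the pre-started process pool


def hash_of_identifier(data_id: str) -> int:
    """Compute a stable hash value for a data identifier.

    A fixed digest is used instead of Python's built-in hash(),
    which is randomized per interpreter run and therefore would not
    route the same identifier consistently across restarts.
    """
    digest = hashlib.md5(data_id.encode("utf-8")).hexdigest()
    return int(digest, 16)


def select_process(data_id: str) -> int:
    """Map a data identifier to the index of one pre-started process.

    The same identifier always produces the same hash value, so all
    requests carrying that identifier are routed to the same process.
    """
    return hash_of_identifier(data_id) % NUM_PROCESSES
```

Because the mapping is deterministic, two data processing requests carrying the same identifier always land on the same process, which is the property the method relies on.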
Hash calculation has the following feature: each piece of data has a unique corresponding hash value, but different pieces of data may correspond to the same hash value. Therefore, when a process is selected from the plurality of processes in this implementation, data processing requests carrying the same data identifier are processed by the same process, and the same process may also process data processing requests carrying different data identifiers.
In addition, in order to ensure that data processing requests for the same data identifier are executed in sequence, each process starts a plurality of memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier. That is, each memory queue corresponds to one data identifier. For example, if the data processing requests carrying data identifier A and data identifier B are both processed by the first process, the data processing requests carrying data identifier A may correspond to a first memory queue, and the data processing requests carrying data identifier B may correspond to a second memory queue.
Step 103: and selecting one memory queue from a plurality of memory queues started by the selected process according to the data identification.
Based on step 102, each of the plurality of processes is started with a plurality of memory queues, each of which is used for processing the data processing request with the same carried data identifier. Therefore, after the process is selected, a memory queue can be further selected from a plurality of memory queues started by the selected process according to the data identifier.
Based on step 102, the process is selected by means of a hash calculation. Accordingly, step 103 may be implemented as follows: a remainder (modulo) operation is performed on the hash value using the data identifier, and one memory queue is selected from the plurality of memory queues according to the resulting remainder. Although different data identifiers may correspond to the same hash value, the remainders obtained for different data identifiers that correspond to the same hash value are different; therefore, by associating each memory queue with a remainder value, a memory queue can be selected in step 103 according to the data identifier.
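By way of illustration only, the remainder-based queue selection might be sketched as follows in Python. The exact operands of the remainder operation and the final mapping onto the queue pool are assumptions made for this sketch, not the claimed implementation:

```python
NUM_QUEUES = 8  # assumed number of memory queues started per process


def select_queue(data_id: int, hash_value: int) -> int:
    """Select a memory queue index via a remainder operation.

    Per the description above, a remainder is computed from the data
    identifier and its hash value; that remainder is then mapped onto
    the queue pool. Both steps are assumptions for illustration.
    """
    remainder = data_id % max(hash_value, 1)  # remainder of id w.r.t. its hash
    return remainder % NUM_QUEUES             # map the remainder onto a queue
```

Since the computation depends only on the data identifier and its hash value, every request carrying the same identifier is placed into the same memory queue.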
Step 104: the data processing requests are added to a selected memory queue, and the data processing requests in the selected memory queue are sequentially processed according to the sequence from the early to the late of the adding time.
After the memory queue is selected in step 103, the data processing request may be added to the selected memory queue in accordance with step 104.
In the embodiments of the present application, after each process starts its plurality of memory queues, it can further start a plurality of consumer threads in one-to-one correspondence with the memory queues, so that each consumer thread processes the data processing requests in its corresponding memory queue in parallel, further improving the speed at which the server processes data.
Each consumer thread processes the data processing requests in its corresponding memory queue sequentially, in order of addition time from earliest to latest. As can be seen from steps 102 and 103, each consumer thread processes data processing requests carrying the same data identifier, and data processing requests carrying different data identifiers are handled by different consumer threads.
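By way of illustration only, the one-consumer-thread-per-memory-queue arrangement can be sketched with Python's standard `queue` and `threading` modules. Modeling each request as a callable and using `None` as a stop sentinel are assumptions made for this sketch:

```python
import queue
import threading


def start_consumers(memory_queues):
    """Start one consumer thread per memory queue.

    Each consumer drains its own queue in FIFO order, so requests
    carrying the same data identifier are processed strictly in the
    order they were added, while different queues run in parallel.
    """
    def consume(q: "queue.Queue"):
        while True:
            request = q.get()      # blocks until a request arrives; FIFO order
            if request is None:    # sentinel: stop this consumer
                break
            request()              # requests are modeled as callables here
            q.task_done()

    threads = []
    for q in memory_queues:
        t = threading.Thread(target=consume, args=(q,), daemon=True)
        t.start()
        threads.append(t)
    return threads
```

Because `queue.Queue` is a FIFO and each queue has exactly one consumer, the ordering guarantee described above (earliest added, earliest processed) holds per data identifier while separate identifiers are still served in parallel.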
In addition, for a server configured with both a database and a cache, the process of updating data includes two operations: an operation of updating the database and an operation of deleting the cache entry. Therefore, if the data processing request in step 101 is a data update request, adding the data processing request to the selected memory queue in step 104 may be implemented as follows: a delete-cache operation request and an update-database operation request are generated according to the data update request, and the delete-cache operation request and the update-database operation request are sequentially added to the selected memory queue.
Sequentially adding the delete-cache operation request and the update-database operation request to the selected memory queue means that the delete-cache operation request is placed before the update-database operation request, so that the consumer thread performs the delete-cache operation first and then performs the update-database operation.
In addition, a server configured with both a database and a cache typically queries the cache before querying the database. Therefore, if the data processing request in step 101 is a data query request, adding the data processing request to the selected memory queue in step 104 may be implemented as follows: the cache of the server is searched for the data corresponding to the data identifier, and when the data corresponding to the data identifier is found, the data query request is added to the selected memory queue.
Correspondingly, when the data corresponding to the data identifier is not found, an update-cache operation request and a read-data operation request are generated according to the data query request, and the update-cache operation request and the read-data operation request are sequentially added to the selected memory queue.
That is, in the embodiments of the present application, if the data processing request in step 101 is a data query request and there is no data corresponding to the data identifier in the cache, the data corresponding to the data identifier can be backed up to the cache when it is queried, to facilitate subsequent reads of that data.
In the above steps, the update-cache operation request and the read-data operation request may be added to the selected memory queue in that order, or the read-data operation request may be added before the update-cache operation request; this is not limited herein.
Optionally, if the data processing request in step 101 is a data query request, the data query request may also be added directly to the memory queue. When the consumer thread consumes the data query request, it searches the cache of the server for the data corresponding to the data identifier and acquires the data if it is found; if not, the data is retrieved from the database.
FIG. 2 is a schematic diagram of a data processing architecture according to an embodiment of the present application. As shown in Fig. 2, when the server receives a data processing request, it selects a process from a plurality of processes according to the data identifier; the selected process contains a plurality of memory queues, each of which is consumed by one consumer thread. The server then selects a memory queue according to the data identifier and adds the data processing request to the selected memory queue.
The server in Fig. 2 may be an nginx proxy server, or a server at the routing layer for processing the data processing requests of individual users.
In addition, when each request is added to the memory queue, the request may first be encapsulated into a message body, and the encapsulated message body is then added to the memory queue; this is not described in detail herein.
The following illustrates the technical effects of the data processing method provided by the embodiment of the present application:
Assume that both user A and user B operate on the data of the same user C. User A performs a data update operation, so a delete-cache operation request and an update-database operation request are sequentially placed into a selected memory queue, where the memory queue corresponds to user C. Suppose the consumer thread corresponding to the memory queue has consumed the delete-cache operation request but has not yet consumed the update-database operation request, that is, the database update has not yet been performed; the cached data is then empty. If the server now receives a data query request from user B for user C's data, it finds that the cache is empty, so an update-cache operation request and a read-data operation request are sequentially placed into the same memory queue, where they are backlogged behind the earlier requests. Only after the preceding update-database operation request has been consumed and executed are the backlogged update-cache operation request and read-data operation request consumed, and the cache is updated again. At this point, the data queried by user B is the latest data.
As can be seen, in the embodiments of the present application, when a data processing request carrying a data identifier is received, whether it is a data update request or a data query request, a process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process. Because each process starts one or more memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier, a memory queue can then be selected from the memory queues started by the selected process according to the data identifier, and the data processing request is added to the selected memory queue. Thus, all data processing requests for the same data are added to the same memory queue, and the data processing requests in that memory queue are processed sequentially in order of addition time from earliest to latest. Therefore, if a data query request for the same data is received after a data update request, the data query request is located behind the data update request in the memory queue and is not processed until the data update request has been processed, which ensures that the data acquired based on the data query request is the updated data.
Fig. 3 is a schematic diagram of a data processing apparatus according to an embodiment of the present application, where the apparatus 300 includes:
a receiving module 301, configured to receive a data processing request carrying a data identifier, where the data processing request comprises any one of a data update request and a data query request;
a first selection module 302, configured to select a process from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process, where each process in the plurality of processes starts a plurality of memory queues, and each memory queue is used to process data processing requests carrying the same data identifier;
a second selection module 303, configured to select, according to the data identifier, one memory queue from the plurality of memory queues started by the selected process; and
an adding module 304, configured to add the data processing request to the selected memory queue, where the data processing requests in the selected memory queue are processed sequentially in order of addition time from earliest to latest.
In one possible example, the data processing request includes a data update request;
the adding module is specifically configured to:
generate a delete-cache operation request and an update-database operation request according to the data update request; and
sequentially add the delete-cache operation request and the update-database operation request to the selected memory queue.
In one possible example, the data processing request comprises a data query request;
the adding module is specifically configured to:
search the cache of the server for the data corresponding to the data identifier; and
when the data corresponding to the data identifier is found, add the data query request to the selected memory queue.
In one possible example, the adding module is further configured to:
when the data corresponding to the data identifier is not found, generate an update-cache operation request and a read-data operation request according to the data query request; and
sequentially add the update-cache operation request and the read-data operation request to the selected memory queue.
In one possible example, the first selection module is specifically configured to:
perform a hash calculation on the data identifier to obtain a hash value of the data identifier; and
select one process from the plurality of processes according to the hash value, wherein the plurality of processes are in one-to-one correspondence with a plurality of hash values.
In one possible example, the second selection module is specifically configured to:
perform a remainder (modulo) operation on the hash value using the data identifier; and
select one memory queue from the plurality of memory queues according to the resulting remainder.
In the embodiments of the present application, when a data processing request carrying a data identifier is received, whether it is a data update request or a data query request, a process is selected from a plurality of processes according to the data identifier, so that data processing requests carrying the same data identifier are executed by the same process. Because each process starts one or more memory queues, and each memory queue is used for processing data processing requests carrying the same data identifier, a memory queue can then be selected from the memory queues started by the selected process according to the data identifier, and the data processing request is added to the selected memory queue. Thus, all data processing requests for the same data are added to the same memory queue, and the data processing requests in that memory queue are processed sequentially in order of addition time from earliest to latest. Therefore, if a data query request for the same data is received after a data update request, the data query request is located behind the data update request in the memory queue and is not processed until the data update request has been processed, which ensures that the data acquired based on the data query request is the updated data.
It should be noted that, when the data processing apparatus provided in the foregoing embodiments performs data processing, the division into the above functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the data processing apparatus provided in the foregoing embodiments and the data processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not described herein again.
Fig. 4 is a schematic diagram of a server structure according to an exemplary embodiment. The server may be a server in a backend server cluster. Specifically:
The server 400 includes a central processing unit (CPU) 401, a system memory 404 including a random access memory (RAM) 402 and a read-only memory (ROM) 403, and a system bus 405 connecting the system memory 404 and the central processing unit 401. The server 400 also includes a basic input/output system (I/O system) 406 for facilitating information transfer between the various components within the computer, and a mass storage device 407 for storing an operating system 413, application programs 414, and other program modules 415.
The basic input/output system 406 includes a display 408 for displaying information and an input device 409, such as a mouse or keyboard, for the user to input information. Both the display 408 and the input device 409 are connected to the central processing unit 401 via an input/output controller 410 connected to the system bus 405. The basic input/output system 406 may also include the input/output controller 410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 410 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 407 is connected to the central processing unit 401 through a mass storage controller (not shown) connected to the system bus 405. The mass storage device 407 and its associated computer-readable medium provide non-volatile storage for the server 400. That is, mass storage device 407 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Computer readable media may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the ones described above. The system memory 404 and mass storage device 407 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 400 may also operate through a remote computer connected via a network, such as the Internet. That is, the server 400 may connect to a network 412 through a network interface unit 411 connected to the system bus 405, or the network interface unit 411 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes one or more programs, which are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing the data processing methods provided by the embodiments of the present application.
The embodiments of the present application also provide a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the data processing method provided in the foregoing embodiments.
The embodiment of the application also provides a computer program product containing instructions, which when run on a server, cause the server to execute the data processing method provided in the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the application; any modifications, equivalents, and improvements made within the spirit and scope of the present application are intended to be included within the scope of protection of the present application.