CN113901262A - Method and device for acquiring data to be processed, server and storage medium


Info

Publication number
CN113901262A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202111123434.7A
Other languages
Chinese (zh)
Inventor
郭鹏
贾碧莹
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111123434.7A
Publication of CN113901262A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/75 Clustering; Classification
    • G06F 16/73 Querying
    • G06F 16/735 Filtering based on additional data, e.g. user or group profiles


Abstract

The disclosure relates to a method, a device, a server and a storage medium for acquiring data to be processed, and relates to the technical field of data processing. The method includes: receiving an acquisition request, where the acquisition request includes the category of a to-be-processed list; counting the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table, the sub-tables being database tables that store the to-be-processed data of each to-be-processed list; determining the read quantity of the to-be-processed data belonging to the category in each sub-table according to the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table; and reading the corresponding read quantity of to-be-processed data from each sub-table according to the read quantity of the to-be-processed data belonging to the category in each sub-table. In this way, a corresponding quantity of to-be-processed data can be read from each database sub-table, which solves the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables.

Description

Method and device for acquiring data to be processed, server and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for acquiring data to be processed, a server, and a storage medium.
Background
Generally, in order to ensure the security of video content, each video uploaded by a user needs to be processed, for example audited, before it is released. However, with the continuous development of short video services, the amount of video uploaded by users keeps increasing.
In the related art, video data is stored in a single data table. However, as the amount of video grows, query efficiency decreases. To support large data volumes and high concurrency, the video data is instead stored in database sub-tables. However, when there are many to-be-processed lists, the to-be-processed data may always be obtained from the same database sub-table, which causes the data to be acquired unevenly across the sub-tables.
Disclosure of Invention
The disclosure provides a method, a device, a server and a storage medium for acquiring data to be processed, which at least solve the problem in the related art that data to be processed is acquired unevenly from database sub-tables. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for acquiring data to be processed, including:
receiving an acquisition request, wherein the acquisition request comprises the category of a list to be processed;
counting the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table, wherein the sub-tables are database tables for storing the data to be processed in each list to be processed;
determining the reading quantity of the data to be processed belonging to the category in each sub-table according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table;
and reading, from each sub-table, the corresponding read quantity of the data to be processed according to the read quantity of the data to be processed belonging to the category in each sub-table.
In some embodiments of the present disclosure, the determining, according to the current total quantity to be marked of the to-be-processed list and the current quantity to be marked of the to-be-processed data belonging to the category in each of the sublists, the reading quantity of the to-be-processed data belonging to the category in each of the sublists includes:
determining the proportion of the data to be processed belonging to the category in each sublist to the current total quantity to be marked according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sublist;
and determining the reading quantity of the data to be processed belonging to the category in each sub-table according to the proportion of the data to be processed belonging to the category in each sub-table in the current total quantity to be marked.
In some embodiments of the present disclosure, the method further comprises:
reading the data to be processed of each category from each sub-table at regular time according to the category of each list to be processed and the identification of each sub-table;
and loading the data to be processed of each category read from each sublist into a cache unit of the list to be processed of the corresponding category.
In a possible implementation manner, the reading, according to the read number of the to-be-processed data belonging to the category in each sub-table, a corresponding read number of the to-be-processed data from each sub-table includes:
and reading the data to be processed with the corresponding reading quantity of each sub-table from the target cache unit of the list to be processed corresponding to the category according to the reading quantity of the data to be processed belonging to the category in each sub-table.
In a possible implementation manner, the reading, according to the read quantity of the to-be-processed data belonging to the category in each sub-table, the to-be-processed data of the corresponding read quantity of each sub-table from the target cache unit of the to-be-processed list corresponding to the category includes:
determining processed data and locked data among the target cache units;
filtering the processed data and the locked data in the target cache unit;
and reading the data to be processed with the corresponding reading quantity of each sub-table from the target cache unit which filters the processed data and the locked data according to the reading quantity of the data to be processed belonging to the category in each sub-table.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for acquiring data to be processed, including:
a receiving module, configured to receive an acquisition request, wherein the acquisition request comprises the category of a list to be processed;
a counting module, configured to count the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table, wherein the sub-tables are database tables for storing the data to be processed in each list to be processed;
a determining module, configured to determine, according to the current total quantity to be marked of the to-be-processed list and the current quantity to be marked of the to-be-processed data belonging to the category in each sublist, a read quantity of the to-be-processed data belonging to the category in each sublist;
and the reading module is used for reading the data to be processed with the corresponding reading quantity from each sub-table according to the reading quantity of the data to be processed belonging to the category in each sub-table.
In some embodiments of the present disclosure, the determining module is specifically configured to:
determining the proportion of the data to be processed belonging to the category in each sublist to the current total quantity to be marked according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sublist;
and determining the reading quantity of the data to be processed belonging to the category in each sub-table according to the proportion of the data to be processed belonging to the category in each sub-table in the current total quantity to be marked.
In some embodiments of the present disclosure, the apparatus further comprises:
and the cache module is used for regularly reading the data to be processed of each category from each sublist according to the category of each list to be processed and the identification of each sublist, and loading the data to be processed of each category read from each sublist into the cache unit of the list to be processed of the corresponding category.
In a possible implementation manner, the reading module is specifically configured to:
and reading the data to be processed with the corresponding reading quantity of each sub-table from the target cache unit of the list to be processed corresponding to the category according to the reading quantity of the data to be processed belonging to the category in each sub-table.
In a possible implementation manner, the reading module is specifically configured to:
determining processed data and locked data among the target cache units;
filtering the processed data and the locked data in the target cache unit;
and reading the data to be processed with the corresponding reading quantity of each sub-table from the target cache unit which filters the processed data and the locked data according to the reading quantity of the data to be processed belonging to the category in each sub-table.
According to a third aspect of the embodiments of the present disclosure, there is provided a server, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for acquiring data to be processed described in the embodiment of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein, when instructions in the storage medium are executed by a processor of a server, the server is enabled to execute the method for acquiring data to be processed according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
receiving an acquisition request, wherein the acquisition request includes the category of a to-be-processed list; counting the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table, the sub-tables being database tables that store the to-be-processed data of each to-be-processed list; determining the read quantity of the to-be-processed data belonging to the category in each sub-table according to the current total to-be-marked amount and the per-sub-table current to-be-marked amounts; and reading the corresponding read quantity of to-be-processed data from each sub-table. In this way, a corresponding quantity of to-be-processed data can be read from each database sub-table, which solves the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables, achieves uniform allocation of a mixed data stream, ensures that to-be-processed data of the same category can be read from every database sub-table, and avoids an excessive difference in the read quantity of same-category data across different sub-tables.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow chart illustrating a method of obtaining data to be processed in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of obtaining data to be processed in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating yet another method of obtaining data to be processed in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating yet another method of obtaining data to be processed in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating an apparatus for acquiring data to be processed in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for acquiring data to be processed in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, to facilitate understanding of the present disclosure, the terms used in the present disclosure are first explained.
In the embodiment of the present disclosure, the term "single data table" refers to a single data table, for example, when video data is stored by using a database, the video data is stored in the same data table in the database, and this case may be referred to as storing the video data by using a single data table manner.
In the embodiment of the present disclosure, the term "storing video data in a database in a table manner" may be understood as storing video data by using a plurality of data tables in a database. For example, a large amount of video data may be stored in different data tables in the database, respectively, for the purpose of storing the video data in a sub-table manner.
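For illustration only, a minimal Java sketch of one possible routing rule for such sub-table storage is given below. The sharding key (here, a video id taken modulo the number of sub-tables) and the table naming scheme are assumptions; the present disclosure does not prescribe any particular sharding rule.

```java
// Minimal sketch of database sub-table routing (assumed rule: id modulo table count).
public final class SubTableRouter {
    private final int tableCount;

    public SubTableRouter(int tableCount) {
        this.tableCount = tableCount;
    }

    /** Returns the name of the sub-table in which a record with the given id would be stored. */
    public String tableFor(long videoId) {
        int index = (int) Math.floorMod(videoId, (long) tableCount) + 1; // sub-tables numbered 1..N
        return "video_pending_" + index;                                 // hypothetical table name
    }

    public static void main(String[] args) {
        SubTableRouter router = new SubTableRouter(10);
        System.out.println(router.tableFor(123456789L));                 // e.g. "video_pending_10"
    }
}
```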
Generally, in order to ensure the security of video content, each video uploaded by a user needs to be processed before it is released. However, with the continuous development of short video services, the amount of video uploaded by users keeps increasing.
In the related art, video data is stored in a single data table. However, as the amount of video grows, query efficiency decreases. To support large data volumes and high concurrency, the video data is instead stored in database sub-tables. However, when there are many to-be-processed lists, the to-be-processed data may always be obtained from the same database sub-table, which causes the data to be acquired unevenly across the sub-tables.
To solve the above problem, the present disclosure provides a method for acquiring data to be processed, which reads a corresponding quantity of to-be-processed data from each database sub-table, thereby solving the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables.
Fig. 1 is a flowchart illustrating a method for acquiring data to be processed according to an exemplary embodiment. It should be noted that the method for acquiring data to be processed according to the embodiments of the present application may be applied to a server. As shown in fig. 1, the method may include the following steps:
In step S101, an acquisition request is received, the acquisition request including the category of the to-be-processed list.
It can be understood that a handler performs processing operations on to-be-processed data through a processing operation terminal, and the to-be-processed data of each processing operation terminal is assigned to that terminal in the form of a to-be-processed list. For example, the handler may trigger an acquisition request for to-be-processed data through the processing operation terminal and then perform processing operations on the data, where the processing operation terminal may be a personal computer, a mobile terminal, or the like. In the embodiments of the present disclosure, the server may simultaneously receive acquisition requests for to-be-processed data sent by one or more processing operation terminals.
The processing operation terminal may send the acquisition request for the to-be-processed data in many ways, for example, through a Remote Procedure Call (RPC) service. In addition, to distinguish different processing operation terminals, each processing operation terminal may carry a different identifier, i.e., an identifier uniquely identifies one processing operation terminal.
In the embodiments of the present disclosure, after an acquisition request for to-be-processed data sent by a processing operation terminal is received, the category of the to-be-processed list may be extracted from the acquisition request. The category of the to-be-processed list is used to distinguish the type to which the to-be-processed data in each to-be-processed list belongs, that is, the data in one to-be-processed list all belong to the same category. The to-be-processed data is stored in the database sub-tables, and since different pieces of to-be-processed data may belong to different categories, the categories are usually distinguished by a category type field in the database sub-tables. Therefore, to-be-processed data belonging to the same category can be read from the database sub-tables and assigned to a processing operation terminal in the form of a list. For example, after to-be-processed data belonging to the same category is read from the database sub-tables, the read data may be put into the to-be-processed list of the corresponding category, and the to-be-processed list may then be assigned to the processing operation terminal so that the terminal processes the data in the list.
For example, taking video data as an example, suppose that a large amount of to-be-processed video data is stored in the database sub-tables and belongs to different categories: some videos are short videos, some are recordings of live broadcasts, some are movie videos, and so on. Each category of video data has a corresponding field in the database sub-tables for identification. When to-be-processed video data is read from the database sub-tables and distributed to a processing operation terminal in the form of a list, the video data belonging to the same category is put into one to-be-processed list, that is, the to-be-processed video data in a to-be-processed list all belong to the same category, which facilitates data reading and processing of the video data by the processing operation terminal.
In step S102, the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table are counted.
In the embodiment of the present disclosure, the sub-table may be understood as a database table storing to-be-processed data in each to-be-processed list.
In the embodiments of the present disclosure, the current total to-be-marked amount of a to-be-processed list may be understood as the total amount of data currently waiting to be processed under the same category, and the current to-be-marked amount may be understood as the number of items currently waiting to be processed. Because the to-be-processed data is stored across the database sub-tables, and each sub-table holds part of the data currently waiting to be processed, when a processing operation terminal sends an acquisition request, to-be-processed data is read from each sub-table, filled into the to-be-processed list, and the list is assigned to the processing operation terminal. Therefore, the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table may be understood as the number of items in each sub-table whose category is the same as that of the to-be-processed list.
For example, take to-be-processed list a, whose data belongs to category A, and suppose the database has 10 sub-tables (sub-table 1, sub-table 2, ..., sub-table 10). The total number of category A items currently waiting to be processed is 1000, that is, the current total to-be-marked amount of to-be-processed list a is 1000, and this data is distributed over the 10 sub-tables. The number of category A items currently waiting to be processed may differ from sub-table to sub-table, for example: 100 in sub-table 1, 100 in sub-table 2, 500 in sub-table 3, 200 in sub-table 4, 20 in each of sub-tables 5 to 8, and 10 in each of sub-tables 9 and 10. The sum of the category A items currently waiting to be processed across the 10 sub-tables equals the current total to-be-marked amount of to-be-processed list a: 100 + 100 + 500 + 200 + 20 + 20 + 20 + 20 + 10 + 10 = 1000.
In one implementation, since each row of a database sub-table represents one piece of data and the categories of the to-be-processed data are distinguished by a field in the sub-table, the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table can be determined by counting the number of rows in that sub-table whose field value equals the category. For example, for sub-table 1 among the 10 sub-tables above, assume that sub-table 1 contains 1000 rows, of which 100 rows have a type field value of A; by counting the rows whose type field value is A, the current to-be-marked amount of the to-be-processed data belonging to category A in sub-table 1 is determined to be 100.
In one implementation, after the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table is obtained through statistics, the counted amounts may be summed, and the resulting sum is the current total to-be-marked amount of the to-be-processed list. For example, continuing with the 10 sub-tables and category A, after counting the number of category A items currently waiting to be processed in each of the 10 sub-tables, these 10 numbers are summed to obtain the current total to-be-marked amount of to-be-processed list a, which is 1000.
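A minimal JDBC-style Java sketch of this statistics step follows. The sub-table names, the type and status column names, and the 'pending' status value are assumptions used only for illustration; the disclosure does not fix a particular schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public final class PendingCounter {

    /**
     * Counts, for each sub-table, the rows whose category field matches the requested category
     * and that are still waiting to be processed, and returns the per-table counts.
     * The sum of the returned values is the current total to-be-marked amount of the list.
     */
    public static Map<String, Long> countPending(Connection conn, List<String> subTables,
                                                 String category) throws SQLException {
        Map<String, Long> perTable = new LinkedHashMap<>();
        for (String table : subTables) {
            // Assumed schema: a "type" column for the category and a "status" column for the state.
            String sql = "SELECT COUNT(*) FROM " + table + " WHERE type = ? AND status = 'pending'";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, category);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        perTable.put(table, rs.getLong(1));
                    }
                }
            }
        }
        return perTable;
    }

    /** Current total to-be-marked amount of the to-be-processed list. */
    public static long total(Map<String, Long> perTable) {
        return perTable.values().stream().mapToLong(Long::longValue).sum();
    }
}
```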
In step S103, the reading number of the to-be-processed data belonging to the category in each sublist is determined according to the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sublist.
In one implementation, the proportion of the to-be-processed data belonging to the category in each sub-table to the current total to-be-marked amount is determined according to the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table, and the read quantity of the to-be-processed data belonging to the category in each sub-table is then determined according to that proportion.
For example, continuing with the 10 sub-tables and to-be-processed list a of category A, the counted current total to-be-marked amount of list a is 1000, and the numbers of category A items currently waiting to be processed are: 100 in sub-table 1, 100 in sub-table 2, 500 in sub-table 3, 200 in sub-table 4, 20 in each of sub-tables 5 to 8, and 10 in each of sub-tables 9 and 10.
To achieve uniform allocation of the mixed data stream, in the embodiments of the present application the proportion of the to-be-processed data belonging to category A in each sub-table to the current total to-be-marked amount is calculated from the current total to-be-marked amount of to-be-processed list a and the current to-be-marked amounts of the 10 sub-tables, and the read quantity of the to-be-processed data belonging to category A in each sub-table is determined from that proportion. In this example the proportions and read quantities are: sub-table 1, proportion 0.1, read quantity 10; sub-table 2, proportion 0.1, read quantity 10; sub-table 3, proportion 0.5, read quantity 50; sub-table 4, proportion 0.2, read quantity 20; sub-tables 5 to 8, proportion 0.02 each, read quantity 2 each; and sub-tables 9 and 10, proportion 0.01 each, read quantity 1 each (the read quantities sum to 100 in this example).
In this way, the proportion of the to-be-processed data belonging to the category in each sub-table to the current total to-be-marked amount can be determined from the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table, and a corresponding quantity of to-be-processed data can then be read from each sub-table according to that proportion. As a result, to-be-processed data of the same category is read from every database sub-table, an excessive difference in the quantity of same-category data read from different sub-tables is avoided, the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables is solved, and uniform allocation of the mixed data stream is achieved.
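The proportional allocation described above can be sketched in Java as follows. The batch size of 100 matches the numbers in the example (the disclosure does not fix a batch size), and the rounding rule is an assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class ReadQuotaAllocator {

    /**
     * Splits a read batch across sub-tables in proportion to each sub-table's current
     * to-be-marked amount for the requested category.
     */
    public static Map<String, Long> allocate(Map<String, Long> pendingPerTable, long batchSize) {
        long total = pendingPerTable.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Long> quota = new LinkedHashMap<>();
        if (total == 0) {
            return quota;                                  // nothing to read
        }
        for (Map.Entry<String, Long> e : pendingPerTable.entrySet()) {
            // proportion of this sub-table in the current total, scaled to the batch size
            long reads = Math.round((double) e.getValue() / total * batchSize);
            quota.put(e.getKey(), Math.min(reads, e.getValue()));
        }
        return quota;
    }

    public static void main(String[] args) {
        Map<String, Long> pending = new LinkedHashMap<>();
        pending.put("sub_table_1", 100L);  pending.put("sub_table_2", 100L);
        pending.put("sub_table_3", 500L);  pending.put("sub_table_4", 200L);
        pending.put("sub_table_5", 20L);   pending.put("sub_table_6", 20L);
        pending.put("sub_table_7", 20L);   pending.put("sub_table_8", 20L);
        pending.put("sub_table_9", 10L);   pending.put("sub_table_10", 10L);
        // With a batch of 100: 10, 10, 50, 20, 2, 2, 2, 2, 1, 1, as in the example above.
        System.out.println(allocate(pending, 100));
    }
}
```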
In step S104, according to the read number of the to-be-processed data belonging to the category among the respective sub-tables, the corresponding read number of the to-be-processed data is read from the respective sub-tables.
Optionally, after the read quantity of the to-be-processed data belonging to the category in each sub-table is obtained, the corresponding read quantity of to-be-processed data may be read from each sub-table and assigned to the processing operation terminal in the form of a list (for example, the to-be-processed data is loaded into the to-be-processed list), so that a user of the processing operation terminal (e.g., a handler) performs processing operations on the to-be-processed data in the list.
For example, continuing the example of step S103, according to the read quantities of the to-be-processed data belonging to category A in the 10 sub-tables, the corresponding quantities are read from the 10 sub-tables. Assume the read quantities are: 10 for sub-table 1, 10 for sub-table 2, 50 for sub-table 3, 20 for sub-table 4, 2 for each of sub-tables 5 to 8, and 1 for each of sub-tables 9 and 10. Then 10 pieces of to-be-processed data belonging to category A are read from sub-table 1, 10 from sub-table 2, 50 from sub-table 3, 20 from sub-table 4, 2 from each of sub-tables 5 to 8, and 1 from each of sub-tables 9 and 10.
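Reading the allocated quantity of rows from each sub-table can be sketched as a per-table limited query, for example as below; the id, type and status column names and the 'pending' value are again assumed for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public final class SubTableReader {

    /** Reads, from each sub-table, the number of pending rows allocated to it for the given category. */
    public static List<Long> readPendingIds(Connection conn, Map<String, Long> quotaPerTable,
                                            String category) throws SQLException {
        List<Long> ids = new ArrayList<>();
        for (Map.Entry<String, Long> e : quotaPerTable.entrySet()) {
            if (e.getValue() <= 0) {
                continue;
            }
            // Assumed schema: an "id" primary key, a "type" column and a "status" column.
            String sql = "SELECT id FROM " + e.getKey()
                    + " WHERE type = ? AND status = 'pending' LIMIT " + e.getValue();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, category);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getLong("id"));
                    }
                }
            }
        }
        return ids;   // the ids are then filled into the to-be-processed list for the terminal
    }
}
```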
According to the method for acquiring to-be-processed data provided by the embodiments of the present disclosure, an acquisition request can be received, the acquisition request including the category of the to-be-processed list; the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table are counted, the sub-tables being database tables that store the to-be-processed data of each to-be-processed list; the read quantity of the to-be-processed data belonging to the category in each sub-table is determined according to the current total to-be-marked amount and the per-sub-table current to-be-marked amounts; and the corresponding read quantity of to-be-processed data is read from each sub-table. In this way, a corresponding quantity of to-be-processed data can be read from each database sub-table, which solves the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables, achieves uniform allocation of the mixed data stream, ensures that to-be-processed data of the same category is read from every database sub-table, and avoids an excessive difference in the read quantity of same-category data across different sub-tables.
To greatly reduce queries to the database sub-tables and improve response speed, optionally, the to-be-processed data of the same category in the database sub-tables may be cached in respective cache units at regular intervals, so that the to-be-processed data can be read from the cache units. As shown in fig. 2 and fig. 3, the method for acquiring data to be processed may include the following steps:
In step S201, the to-be-processed data of each category is read from each sub-table at regular intervals according to the category of each to-be-processed list and the identifier of each sub-table.
In one implementation, each to-be-processed list has a respective cache unit, which may be represented as a two-dimensional structure LocalCache<TwoTuple<type, table_suffix>, Queue>. Here, LocalCache denotes the cache unit, TwoTuple denotes a two-element key, type denotes the category of the to-be-processed data (or of the to-be-processed list), table_suffix is the identifier of a sub-table, the key <type, table_suffix> indicates in which sub-table the to-be-processed data of the category type is stored, and Queue is the value associated with <type, table_suffix>. For example, assuming type is A and the to-be-processed data of category A is stored in 3 sub-tables (sub-table 1, sub-table 2 and sub-table 3), LocalCache<TwoTuple<type, table_suffix>, Queue> contains: <TwoTuple<typeA, sub-table 1>, Queue1> (where Queue1 holds the to-be-processed data of category typeA in sub-table 1, i.e., Queue can be understood as the value in a key-value pair); <TwoTuple<typeA, sub-table 2>, Queue2> (where Queue2 holds the to-be-processed data of category typeA in sub-table 2); and <TwoTuple<typeA, sub-table 3>, Queue3> (where Queue3 holds the to-be-processed data of category typeA in sub-table 3).
In the embodiments of the present disclosure, the Queue values of a to-be-processed list can be obtained according to the category of the to-be-processed list and the identifiers of the sub-tables, each Queue corresponding to the to-be-processed data of that category in one sub-table. The sub-tables may be queried back at regular intervals, and the queried to-be-processed data is loaded into the cache unit of the to-be-processed list of the corresponding category.
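A minimal Java sketch of such a cache unit keyed by the <type, table_suffix> two-tuple is given below. The refresh period and the loader used to fetch pending items from a sub-table are assumptions for illustration; only the structure mirrors the LocalCache<TwoTuple<type, table_suffix>, Queue> description above.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;

public final class LocalCache {

    /** Key of the cache: the category of the to-be-processed list plus the sub-table identifier. */
    public record TwoTuple(String type, String tableSuffix) { }

    // One queue of pending item ids per (type, table_suffix) pair.
    private final Map<TwoTuple, Queue<Long>> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /**
     * Periodically re-reads the pending data of every category from every sub-table and reloads
     * the corresponding queue. The loader is assumed to return the ids currently pending in the
     * given sub-table for the given category.
     */
    public void startRefresh(Iterable<String> types, Iterable<String> tableSuffixes,
                             BiFunction<String, String, Queue<Long>> loader, long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> {
            for (String type : types) {
                for (String suffix : tableSuffixes) {
                    cache.put(new TwoTuple(type, suffix), loader.apply(type, suffix));
                }
            }
        }, 0, periodSeconds, TimeUnit.SECONDS);
    }

    /** Returns the queue of pending items of the given category coming from the given sub-table. */
    public Queue<Long> queueFor(String type, String tableSuffix) {
        return cache.getOrDefault(new TwoTuple(type, tableSuffix), new ConcurrentLinkedQueue<>());
    }
}
```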
In step S202, the data to be processed of each category read from each branch table is loaded into the cache unit of the list to be processed of the corresponding category.
For example, take sub-table 1, which stores to-be-processed data of several categories, say categories A, B, C and D. The to-be-processed data of category A read from sub-table 1 is loaded into the cache unit of to-be-processed list a of category A; the data of category B read from sub-table 1 is loaded into the cache unit of to-be-processed list b of category B; the data of category C is loaded into the cache unit of to-be-processed list c of category C; and the data of category D is loaded into the cache unit of to-be-processed list d of category D.
In step S203, an acquisition request is received, the acquisition request including the category of the to-be-processed list.
In the embodiment of the present disclosure, step S203 may be implemented by adopting any one of the embodiments of the present disclosure, which is not limited by the embodiment of the present disclosure and is not described again.
In step S204, the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sublist are counted; the sub-table is a database table for storing the data to be processed in each list to be processed.
In the embodiment of the present disclosure, step S204 may be implemented by any one of the embodiments of the present disclosure, which is not limited in this disclosure and is not described again.
In step S205, the reading number of the to-be-processed data belonging to the category in each sub-table is determined according to the current total to-be-marked amount of the to-be-processed list and the current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table.
In the embodiment of the present disclosure, step S205 may be implemented by any one of the embodiments of the present disclosure, which is not limited in this disclosure and is not described again.
In step S206, according to the read number of the to-be-processed data belonging to the category in each sub-table, the to-be-processed data of the read number corresponding to each sub-table is read from the target cache unit of the to-be-processed list corresponding to the category.
For example, take two sub-tables, sub-table 1 and sub-table 2. Suppose sub-table 1 stores to-be-processed data of categories A, B, C and D, and sub-table 2 stores to-be-processed data of categories A and B. The category A data read from sub-table 1 and from sub-table 2 is loaded into the cache unit of to-be-processed list a of category A, which is represented as LocalCache<TwoTuple<category A, sub-table 1>, Queue1>, <TwoTuple<category A, sub-table 2>, Queue2>, indicating that the cache unit contains the to-be-processed data Queue1 of category A from sub-table 1 and the to-be-processed data Queue2 of category A from sub-table 2. Likewise, the category B data read from sub-table 1 and sub-table 2 is loaded into the cache unit of to-be-processed list b of category B, represented as LocalCache<TwoTuple<category B, sub-table 1>, Queue3>, <TwoTuple<category B, sub-table 2>, Queue4>. The category C data read from sub-table 1 is loaded into the cache unit of to-be-processed list c of category C, represented as LocalCache<TwoTuple<category C, sub-table 1>, Queue5>, and the category D data read from sub-table 1 is loaded into the cache unit of to-be-processed list d of category D, represented as LocalCache<TwoTuple<category D, sub-table 1>, Queue6>.
Suppose the to-be-processed data belonging to category A needs to be read from sub-tables 1 and 2 above, with a read quantity of 10 for sub-table 1 and 10 for sub-table 2. Then 10 pieces of to-be-processed data belonging to category A (from Queue1, corresponding to sub-table 1) and another 10 pieces (from Queue2, corresponding to sub-table 2) are read from the target cache unit of to-be-processed list a of category A. In the earlier ten-sub-table example, the reads would likewise come from the cache: 10 pieces of category A data for sub-table 1, 10 for sub-table 2, 50 for sub-table 3, 20 for sub-table 4, 2 for each of sub-tables 5 to 8, and 1 for each of sub-tables 9 and 10.
It can be understood that, since the to-be-processed data of each category can be read from each sub-table at regular intervals and loaded into the cache unit of the to-be-processed list of the corresponding category, after an acquisition request for to-be-processed data is received, the corresponding read quantity of to-be-processed data of each sub-table can be read from the target cache unit of the to-be-processed list corresponding to the category in the request. That is, the corresponding read quantity of to-be-processed data can be read from the target cache unit of the to-be-processed list, so the caching technique greatly reduces queries to the database sub-tables and improves response speed.
To avoid repeated reading of data, in one implementation, as shown in fig. 4, reading the corresponding read quantity of to-be-processed data of each sub-table from the target cache unit of the to-be-processed list corresponding to the category, according to the read quantity of the to-be-processed data belonging to the category in each sub-table, may include the following steps:
in step S401, processed data and locked data among the target cache units are determined.
In the embodiments of the present disclosure, processed data may be understood as data that has already been processed. A certain identifier may be used to indicate that data has been processed; for example, an item whose flag in the cache unit is 1 may be understood as having already been processed.
In the embodiments of the present disclosure, locked data may be understood as data that is currently occupied. For example, data that is or may be occupied, such as data that has been allocated to another processing operation terminal for processing, may be marked as locked data, which prevents different processing operation terminals from processing the same data at the same time.
In step S402, the processed data and the locked data in the target cache unit are filtered.
In step S403, according to the read quantity of the to-be-processed data belonging to the category in each sub-table, the to-be-processed data of the read quantity corresponding to each sub-table is read from the target cache unit from which the processed data and the locked data are filtered.
That is, the corresponding read quantity of to-be-processed data of each sub-table is read from the target cache unit after the processed data and the locked data have been excluded, which prevents the same data from being processed by different processing operation terminals at the same time and avoids repeated reading of data.
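A minimal Java sketch of this filtering step is given below, under the assumption that each cached item carries a processed flag and a locked flag; the field names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public final class CacheFilter {

    /** A cached item; the processed and locked flags are assumed fields for illustration. */
    public record CachedItem(long id, boolean processed, boolean locked) { }

    /**
     * Drains up to readCount items from the target cache unit, skipping items that have already
     * been processed or that are currently locked by another processing operation terminal.
     */
    public static List<CachedItem> readUnlockedPending(Queue<CachedItem> targetCacheUnit, int readCount) {
        List<CachedItem> result = new ArrayList<>();
        CachedItem item;
        while (result.size() < readCount && (item = targetCacheUnit.poll()) != null) {
            if (item.processed() || item.locked()) {
                continue;              // filter out processed and locked data
            }
            result.add(item);          // in practice the item would also be marked as locked here
        }
        return result;
    }
}
```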
It should be noted that, in some embodiments of the present disclosure, the processing operation terminal may acquire the to-be-processed data through a remote procedure call (RPC) service. For example, when a to-be-processed list is assigned, it can be hashed to an RPC service instance by its list type. Under normal conditions, requests for to-be-processed data of the same type always land on the same instance, and the service is switched to another instance only after a restart or downtime. In one implementation, each RPC instance may hold the cache units (LocalCache) of one or more to-be-processed lists, and each time to-be-processed data is read, the corresponding data is looked up in each cache unit. By traversing the cache units of the to-be-processed lists in order, it can be ensured that the long-waiting data of each sub-table held in the cache units is read, which reduces the latency of service processing and ensures fairness across the sub-tables.
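The hashing of a to-be-processed list to an RPC service instance by its list type can be sketched in Java as follows; the instance list and the modulo rule are assumptions for illustration.

```java
import java.util.List;

public final class RpcInstanceRouter {

    /**
     * Routes requests for a given to-be-processed list type to a fixed RPC service instance,
     * so that, under normal conditions, requests for the same type land on the same instance.
     */
    public static String instanceFor(String listType, List<String> instances) {
        int index = Math.floorMod(listType.hashCode(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        List<String> instances = List.of("rpc-instance-0", "rpc-instance-1", "rpc-instance-2");
        System.out.println(instanceFor("typeA", instances));  // always the same instance for "typeA"
    }
}
```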
Fig. 5 is a block diagram illustrating an apparatus for acquiring data to be processed according to an exemplary embodiment. As shown in fig. 5, the acquiring apparatus may include: a receiving module 501, a counting module 502, a determining module 503 and a reading module 504.
The receiving module 501 is configured to receive an acquisition request, where the acquisition request includes a category of a to-be-processed list;
the counting module 502 is configured to count a current total to-be-marked amount of the to-be-processed list and a current to-be-marked amount of the to-be-processed data belonging to the category in each sub-table; the sub-tables are database tables for storing data to be processed in each list to be processed;
the determining module 503 is configured to determine, according to the current total quantity to be marked of the to-be-processed list and the current quantity to be marked of the to-be-processed data belonging to the category in each sublist, the read quantity of the to-be-processed data belonging to the category in each sublist; in one implementation, the determining module 503 is specifically configured to: determining the proportion of the data to be processed belonging to the category in each sublist to the current total quantity to be marked according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sublist; and determining the reading quantity of the data to be processed belonging to the category in each sub-table according to the proportion of the data to be processed belonging to the category in each sub-table to the current total quantity to be marked.
The reading module 504 is configured to read, according to the read quantity of the to-be-processed data belonging to the category in each sub-table, the to-be-processed data of the corresponding read quantity from each sub-table.
In one implementation, as shown in fig. 6, the acquiring apparatus may further include a caching module 605. The caching module 605 is configured to read the to-be-processed data of each category from each sub-table at regular intervals according to the category of each to-be-processed list and the identifier of each sub-table, and to load the to-be-processed data of each category read from each sub-table into the cache unit of the to-be-processed list of the corresponding category.
In this embodiment of the present disclosure, the reading module 604 is specifically configured to: and reading the corresponding read quantity of the data to be processed of each branch table from the target cache unit of the list to be processed corresponding to the category according to the read quantity of the data to be processed belonging to the category in each branch table.
In this embodiment of the disclosure, the reading module 604 is specifically configured to: determining processed data and locked data in a target cache unit; filtering processed data and locked data in the target cache unit; and reading the data to be processed with the corresponding reading quantity of each sublist from the target cache unit which filters the processed data and the locked data according to the reading quantity of the data to be processed belonging to the category in each sublist. Wherein 601-604 in fig. 6 and 501-504 in fig. 5 have the same functions and structures.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the apparatus for acquiring to-be-processed data provided by the embodiments of the present disclosure, a corresponding quantity of to-be-processed data can be read from each database sub-table, which solves the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables, achieves uniform allocation of the mixed data stream, ensures that to-be-processed data of the same category is read from every database sub-table, and avoids an excessive difference in the read quantity of same-category data across different sub-tables.
In order to implement the above embodiments, the embodiment of the present disclosure further provides a server.
Wherein, the server includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for acquiring the data to be processed as described above.
As an example, fig. 7 is a block diagram illustrating a server 200 for obtaining data to be processed according to an exemplary embodiment, where as shown in fig. 7, the server 200 may further include:
a memory 210 and a processor 220, a bus 230 connecting different components (including the memory 210 and the processor 220), wherein the memory 210 stores a computer program, and when the processor 220 executes the program, the method for acquiring the data to be processed according to the embodiment of the disclosure is implemented.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor bus or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Server 200 typically includes a variety of server readable media. Such media may be any available media that is accessible by server 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The server 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 280 having a set (at least one) of program modules 270 may be stored, for example, in the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 270 generally perform the functions and/or methods of the embodiments described in this disclosure.
The server 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with the server 200, and/or with any devices (e.g., network card, modem, etc.) that enable the server 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, server 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via network adapter 293. As shown in FIG. 7, network adapter 293 communicates with the other modules of server 200 via bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the server in this embodiment, reference is made to the foregoing explanation of the method for acquiring to-be-processed data in the embodiment of the present disclosure, and details are not described here again.
The server of the embodiments of the present disclosure can read a corresponding quantity of to-be-processed data from each database sub-table, which solves the problem in the related art that to-be-processed data is acquired unevenly from the database sub-tables, achieves uniform allocation of mixed data streams, ensures that the to-be-processed data of the same category in each database sub-table can be read, and avoids an excessive difference in the read quantities of data of the same category across different database sub-tables.
In order to implement the above embodiments, the embodiments of the present disclosure further provide a storage medium.
Wherein the instructions in the storage medium, when executed by the processor of the server, enable the server to perform the method of obtaining data to be processed as previously described.
In order to implement the above embodiments, the present disclosure further provides a computer program product. When instructions in the computer program product are executed by a processor, a server is enabled to execute the method for acquiring to-be-processed data as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for acquiring data to be processed is characterized by comprising the following steps:
receiving an acquisition request, wherein the acquisition request comprises the category of a list to be processed;
counting the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table; wherein the sub-tables are database tables for storing the data to be processed in each list to be processed;
determining the reading quantity of the data to be processed belonging to the category in each sub-table according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table;
and reading the corresponding read quantity of the data to be processed from each sub-table according to the read quantity of the data to be processed belonging to the category in each sub-table.
2. The method according to claim 1, wherein the determining the read quantity of the data to be processed belonging to the category in each sub-table according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table comprises:
determining the proportion of the data to be processed belonging to the category in each sub-table to the current total quantity to be marked according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table;
and determining the read quantity of the data to be processed belonging to the category in each sub-table according to the proportion of the data to be processed belonging to the category in each sub-table to the current total quantity to be marked.
3. The method of claim 1, further comprising:
reading the data to be processed of each category from each sub-table at regular time according to the category of each list to be processed and the identification of each sub-table;
and loading the data to be processed of each category read from each sublist into a cache unit of the list to be processed of the corresponding category.
4. The method according to claim 3, wherein the reading the corresponding read quantity of the data to be processed from each sub-table according to the read quantity of the data to be processed belonging to the category in each sub-table comprises:
and reading the data to be processed with the corresponding reading quantity of each sub-table from the target cache unit of the list to be processed corresponding to the category according to the reading quantity of the data to be processed belonging to the category in each sub-table.
5. The method according to claim 4, wherein the reading, according to the read quantity of the to-be-processed data belonging to the category in the sub-tables, the to-be-processed data of the corresponding read quantity of the sub-tables from the target cache unit of the to-be-processed list corresponding to the category comprises:
determining the processed data and the locked data in the target cache unit;
filtering the processed data and the locked data in the target cache unit;
and reading, from the target cache unit from which the processed data and the locked data have been filtered, the data to be processed of the corresponding read quantity for each sub-table, according to the read quantity of the data to be processed belonging to the category in each sub-table.
6. An apparatus for acquiring data to be processed, comprising:
a receiving module, configured to receive an acquisition request, wherein the acquisition request comprises the category of a list to be processed;
a counting module, configured to count the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table; wherein the sub-tables are database tables for storing the data to be processed in each list to be processed;
a determining module, configured to determine, according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table, the read quantity of the data to be processed belonging to the category in each sub-table;
and a reading module, configured to read the data to be processed of the corresponding read quantity from each sub-table according to the read quantity of the data to be processed belonging to the category in each sub-table.
7. The apparatus of claim 6, wherein the determining module is specifically configured to:
determine, according to the current total quantity to be marked of the list to be processed and the current quantity to be marked of the data to be processed belonging to the category in each sub-table, the proportion of the data to be processed belonging to the category in each sub-table to the current total quantity to be marked;
and determine, according to the proportion of the data to be processed belonging to the category in each sub-table to the current total quantity to be marked, the read quantity of the data to be processed belonging to the category in each sub-table.
8. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of acquiring data to be processed according to any one of claims 1 to 5.
9. A storage medium, wherein instructions in the storage medium, when executed by a processor of a server, enable the server to perform the method for acquiring data to be processed according to any one of claims 1 to 5.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 5 when executed by a processor.
CN202111123434.7A 2021-09-24 2021-09-24 Method and device for acquiring data to be processed, server and storage medium Pending CN113901262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111123434.7A CN113901262A (en) 2021-09-24 2021-09-24 Method and device for acquiring data to be processed, server and storage medium

Publications (1)

Publication Number Publication Date
CN113901262A true CN113901262A (en) 2022-01-07

Family

ID=79029387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123434.7A Pending CN113901262A (en) 2021-09-24 2021-09-24 Method and device for acquiring data to be processed, server and storage medium

Country Status (1)

Country Link
CN (1) CN113901262A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170093975A1 (en) * 2015-09-26 2017-03-30 Arun Raghunath Technologies for managing data object requests in a storage node cluster
CN106844397A (en) * 2015-12-07 2017-06-13 阿里巴巴集团控股有限公司 Multiplexed transport method, apparatus and system based on point storehouse point table
JP2018025911A (en) * 2016-08-09 2018-02-15 日本電信電話株式会社 Structure application propriety determination apparatus and distributed structure evaluation method
CN107040567A (en) * 2016-09-27 2017-08-11 阿里巴巴集团控股有限公司 The management-control method and device of pre-allocation of resources amount
CN107707680A (en) * 2017-11-24 2018-02-16 北京永洪商智科技有限公司 A kind of distributed data load-balancing method and system based on node computing capability
CN109725991A (en) * 2018-02-28 2019-05-07 平安普惠企业管理有限公司 Task processing method, device, equipment and readable storage medium storing program for executing
CN108881512A (en) * 2018-06-15 2018-11-23 郑州云海信息技术有限公司 Virtual IP address equilibrium assignment method, apparatus, equipment and the medium of CTDB
CN109597724A (en) * 2018-09-18 2019-04-09 北京微播视界科技有限公司 Service stability measurement method, device, computer equipment and storage medium
CN109325034A (en) * 2018-10-12 2019-02-12 平安科技(深圳)有限公司 Data processing method, device, computer equipment and storage medium
CN109634746A (en) * 2018-12-05 2019-04-16 四川长虹电器股份有限公司 A kind of the utilization system and optimization method of web cluster caching
CN109710651A (en) * 2018-12-25 2019-05-03 成都四方伟业软件股份有限公司 Data type recognition methods and device
CN109992715A (en) * 2019-03-28 2019-07-09 网易传媒科技(北京)有限公司 Information displaying method, device, medium and calculating equipment
CN111339088A (en) * 2020-02-21 2020-06-26 苏宁云计算有限公司 Database division and table division method, device, medium and computer equipment
CN111399909A (en) * 2020-03-02 2020-07-10 中国平安人寿保险股份有限公司 Service system data distribution processing method, device and storage medium
CN111953567A (en) * 2020-08-14 2020-11-17 苏州浪潮智能科技有限公司 Method, system, equipment and medium for configuring multi-cluster management software parameters
CN112330404A (en) * 2020-11-10 2021-02-05 广发证券股份有限公司 Data processing method and device, server and storage medium
CN112631805A (en) * 2020-12-28 2021-04-09 深圳壹账通智能科技有限公司 Data processing method and device, terminal equipment and storage medium
CN112988360A (en) * 2021-05-10 2021-06-18 杭州绿城信息技术有限公司 Task distribution system based on big data analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王亚玲; 杨超; 章名尚: "Sharding middleware for database system applications" (数据库系统应用分片中间件), 计算机系统应用 (Computer Systems & Applications), no. 10, 15 October 2015 (2015-10-15), page 74 *
韩兵; 王照清; 廖联军: "MySQL-based multi-table paged query optimization technology" (基于MySql多表分页查询优化技术), 计算机系统应用 (Computer Systems & Applications), no. 08, 15 August 2016 (2016-08-15), pages 171-175 *

Similar Documents

Publication Publication Date Title
WO2019205371A1 (en) Server, message allocation method, and storage medium
CN106407207B (en) Real-time newly-added data updating method and device
CN108009261B (en) Data synchronization method and device and electronic equipment
CN110633296A (en) Data query method, device, medium and electronic equipment
CN110750529B (en) Data processing method, device, equipment and storage medium
CN112115160B (en) Query request scheduling method and device and computer system
CN104238999A (en) Task scheduling method and device based on horizontal partitioning type distributed database
US11947534B2 (en) Connection pools for parallel processing applications accessing distributed databases
CN103067486A (en) Big-data processing method based on platform-as-a-service (PaaS) platform
CN111163186B (en) ID generation method, device, equipment and storage medium
CN114155026A (en) Resource allocation method, device, server and storage medium
CN110706148A (en) Face image processing method, device, equipment and storage medium
US20100058020A1 (en) Mobile phone and method for managing memory of the mobile phone
CN103106242A (en) Phone bill query method and phone bill query system
CN113901262A (en) Method and device for acquiring data to be processed, server and storage medium
CN110489356B (en) Information processing method, information processing device, electronic equipment and storage medium
CN111190910A (en) Quota resource processing method and device, electronic equipment and readable storage medium
CN109104506B (en) Method and device for determining domain name resolution rule and computer readable storage medium
CN110362575B (en) Method and device for generating global index of data
CN114924848A (en) IO (input/output) scheduling method, device and equipment
CN113760940A (en) Quota management method, device, equipment and medium applied to distributed system
CN110427377B (en) Data processing method, device, equipment and storage medium
CN111459654B (en) Method, device, equipment and storage medium for deploying server cluster
CN111796934A (en) Task issuing method and device, storage medium and electronic equipment
CN110908981A (en) Distributed data quality control method and system compatible with multiple databases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination