CN112965798A - Big data processing method and system based on distributed multithreading - Google Patents
Big data processing method and system based on distributed multithreading
- Publication number
- CN112965798A — application number CN202110245159.XA
- Authority
- CN
- China
- Prior art keywords
- data
- tasks
- priority queue
- database
- system server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
Abstract
The invention discloses a big data processing method and system based on distributed multithreading. The system comprises a system server and a plurality of intelligent execution terminals, wherein the system server comprises a database, a priority queue, a producer thread and a consumer thread. The method comprises the following steps: S1: the system server receives all data tasks to be processed and stores them in the database; it then reads a number of data tasks from the database and places them in the priority queue of tasks to be processed. S2: according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server. If an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again. The invention improves the reuse rate of the machines and reduces their rework rate.
Description
Technical Field
The invention relates to the technical field of computers, and in particular to a big data processing method and system based on distributed multithreading.
Background
Traditionally, to improve efficiency and save time, multiple machines are run simultaneously: the data is divided evenly into as many shares as there are machines, and each machine independently processes its share. Under this machine-stacking approach, the number of machines required grows in proportion to the data volume, the reuse rate of each machine is low, and unnecessary cost is easily incurred. Moreover, if any machine encounters an error while processing its data, the stability and accuracy of that machine's processing is directly affected, and with it the accuracy of the whole job. In particular, data interaction and processing between a plurality of intelligent supply-chain execution terminals and a system server suffer the following disadvantages:
(1) Large machine demand: the number of machines scales in proportion to the data volume, so machines must be added in multiples as the data grows. (2) Low machine reuse rate: before processing, the data is divided evenly among the machines, and each machine processes its assigned share with the same logic. Even with equal shares, processing time differs with the type and size of the data assigned: a machine with relatively simple data finishes early and then sits idle, because the whole job must wait until the last machine finishes. Machines that complete early can only wait, so the reuse rate is very low. (3) High task rework rate: a data-processing job typically must complete all of its processing to yield the expected result. The more data and the more machines involved, the higher the error rate of the processing tasks; once an error occurs, a single task, or even the whole job, must be reworked, which greatly reduces data-output efficiency. (4) Weak applicability to complex-logic data processing: for complex logic, all the processing stages of one piece of data cannot be realized on a single machine and require several machines to cooperate, so the traditional single-pipeline processing mode is ill-suited.
Disclosure of Invention
The technical problem the invention aims to solve is that existing methods for processing big data in a supply-chain system suffer from large machine demand, low machine reuse, a high task rework rate, and poor applicability to complex-logic data processing. The invention therefore aims to provide a distributed multithreading-based big data processing method and system that can process complex-logic data, improve the reuse rate of the processing machines, reduce the overall rework rate of processing tasks, improve data-processing efficiency, and reduce data-processing cost.
The invention is realized by the following technical scheme:
in a first aspect, the invention provides a big data processing method based on distributed multithreading, which comprises a system server and a plurality of intelligent execution terminals, wherein the system server comprises a database, a priority queue, a producer thread and a consumer thread; the method comprises the following steps:
s1: the system server receives all the data tasks to be processed and stores the data tasks in a database; reading a plurality of data tasks from the database, putting the data tasks into a priority queue of the tasks to be processed, and waiting for a processing machine to obtain and process the data tasks;
s2: according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server; if an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again.
Further, step S1 specifically includes the following steps:
s11: the system server receives all the data tasks to be processed and stores the data tasks in a database;
s12: the producer thread scans the database in real time and judges whether the priority queue is full; if not full, it performs an enqueue operation, reading a data task from the database and placing it at the tail of the priority queue; if full, go to step S13;
s13: the consumer thread judges, from the real-time condition of the priority queue, whether the queue is full; if full, a dequeue operation is performed and step S2 is executed; if not full, an enqueue operation is performed.
Further, the system server comprises a plurality of servers, and the scheduling policy of each server follows the formula Num = id % total, wherein Num is the serial number of the scheduled server (the first server is numbered 0, the second 1, and the remaining servers sequentially); id is a server id; % denotes the modulo (remainder) operation; and total is the total number of servers.
Furthermore, each server comprises one priority queue or a plurality of priority queues, and inter-queue priorities are also set among the priority queues.
Further, the attribute of the priority queue includes type information and size information.
Further, the priority and the time of each data task are stored in the database; the priority of a data task is its importance level: very important, generally important, or not important.
In a second aspect, the invention further provides a big data processing system based on distributed multithreading, which comprises a system server and a plurality of intelligent execution terminals, wherein the system server comprises a database, a priority queue, a producer thread and a consumer thread;
the database is used for storing all data tasks to be processed received by the system server;
the priority queue is used for putting a plurality of data tasks read from the database into the priority queue of the tasks to be processed and waiting for the intelligent execution terminal to obtain and process the data tasks;
the producer thread is used for scanning the database in real time and judging whether the priority queue is full; if not full, it performs an enqueue operation, reading a data task from the database and placing it at the tail of the priority queue; if full, the consumer thread flow is executed;
the consumer thread is used for judging, from the real-time condition of the priority queue, whether the queue is full; if full, a dequeue operation is performed and the subsequent process of calling the intelligent execution terminals is executed; if not full, an enqueue operation is performed;
the system server receives all the data tasks to be processed and stores them in the database, then reads a number of data tasks from the database and places them in the priority queue of tasks to be processed; according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server; if an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again.
In a third aspect, the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for processing big data based on distributed multithreading when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement the method for big data processing based on distributed multithreading.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention mainly optimizes and upgrades the multitask data-processing method: it replaces the traditional mode of dividing data equally with a queue mode and the principle of distributing data on demand; meanwhile, if a processing error occurs during task processing, another intelligent execution terminal (i.e., processing machine) takes over, so the capacity of every machine is fully used and the processing rework rate is greatly reduced.
2. The core of the invention lies in the data queue, which improves the reuse rate of the machines and reduces their rework rate. 1) An intelligent execution terminal processes a data task after obtaining it, fetches the next task to be processed from the priority queue when it finishes, and so on in a loop until all data tasks (except data requiring manual processing) are processed; all intelligent execution terminals work in parallel during this period, which improves machine reuse. 2) If an error is encountered during processing, the failed data is returned to the priority queue and placed at the head, so that another intelligent execution terminal reprocesses it; a task on which every intelligent execution terminal has failed is put into the queue requiring manual processing, which reduces the machine rework rate.
3. With the method of the invention, if data needs to be added during processing, it is added directly to the queue, without extra deployment or redistribution among machines. For complex-logic data processing (where one piece of data requires multiple processing stages), a plurality of task queues are adopted.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a flow chart of a big data processing method based on distributed multithreading according to the present invention.
FIG. 2 is a schematic diagram of the computer apparatus of the present invention.
Detailed Description
Hereinafter, the terms "comprising" or "may include" used in various embodiments of the present invention indicate the presence of the disclosed function, operation or element, and do not limit the addition of one or more further functions, operations or elements. Furthermore, as used in various embodiments of the present invention, the terms "comprises", "comprising", "includes", "including", "has", "having" and their derivatives indicate the presence of the specified features, numbers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the presence, or possible addition, of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
In various embodiments of the invention, the expression "or" or "at least one of A or/and B" includes any and all combinations of the words listed with it. For example, the expression "A or B" or "at least one of A or/and B" may include A, may include B, or may include both A and B.
Expressions such as "first" and "second" used in various embodiments of the present invention may modify various constituent elements, but do not limit those elements. In particular, they do not limit the order and/or importance of the elements described; they merely distinguish one element from another. For example, a first user device and a second user device are different user devices, although both are user devices. Likewise, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the various embodiments of the present invention.
It should be noted that if one constituent element is described as "connected" to another constituent element, the first may be directly connected to the second, or a third constituent element may be "connected" between them. In contrast, when one constituent element is "directly connected" to another, it is understood that no third constituent element exists between the two.
The terminology used in the various embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example 1
As shown in fig. 1, the big data processing method based on distributed multithreading of the present invention includes a system server and a plurality of intelligent execution terminals, where the system server includes a database, a priority queue, a producer thread and a consumer thread; the method comprises the following steps:
s1: the system server receives all the data tasks to be processed and stores the data tasks in a database; reading a plurality of data tasks from the database, putting the data tasks into a priority queue of the tasks to be processed, and waiting for a processing machine to obtain and process the data tasks;
s2: according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server; if an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again.
The invention mainly optimizes and upgrades the multitask data-processing method: it replaces the traditional mode of dividing data equally with a queue mode and the principle of distributing data on demand; meanwhile, if a processing error occurs during task processing, another intelligent execution terminal (i.e., processing machine) takes over, so the capacity of every machine is fully used and the processing rework rate is greatly reduced. The core of the invention lies in the data queue, which improves the reuse rate of the machines and reduces their rework rate. 1) An intelligent execution terminal processes a data task after obtaining it, fetches the next task to be processed from the priority queue when it finishes, and so on in a loop until all data tasks (except data requiring manual processing) are processed; all intelligent execution terminals work in parallel during this period, which improves machine reuse. 2) If an error is encountered during processing, the failed data is returned to the priority queue and placed at the head, so that another intelligent execution terminal reprocesses it; a task on which every intelligent execution terminal has failed is put into the queue requiring manual processing, which reduces the machine rework rate.
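The retry mechanism just described can be sketched as a small queue wrapper. The class name `RetryQueue`, the `failed_on` field, and the manual-handling list are illustrative assumptions, not names from the patent:

```python
import collections
import threading

class RetryQueue:
    """Sketch of the patent's error handling: a failed task re-enters the
    queue at the head; once every terminal has failed on it, it is moved
    to a list of tasks requiring manual processing."""

    def __init__(self, num_terminals):
        self.dq = collections.deque()
        self.lock = threading.Lock()
        self.num_terminals = num_terminals
        self.manual = []  # tasks every terminal has failed on

    def put(self, task):
        with self.lock:
            self.dq.append(task)          # normal tasks enqueue at the tail

    def get(self):
        with self.lock:
            return self.dq.popleft() if self.dq else None

    def report_error(self, task, terminal_id):
        with self.lock:
            task.setdefault("failed_on", set()).add(terminal_id)
            if len(task["failed_on"]) >= self.num_terminals:
                self.manual.append(task)  # all terminals failed: manual handling
            else:
                self.dq.appendleft(task)  # back to the head for another terminal
```

A terminal that hits an error calls `report_error`; the task then reappears at the head of the queue for the next idle terminal to pick up.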
In addition, if data needs to be added during processing, it is added directly to the queue, without extra deployment or redistribution among machines. For complex-logic data processing (where one piece of data requires multiple processing stages), a plurality of task queues are adopted.
In this embodiment, step S1 specifically includes the following steps:
s11: the system server receives all the data tasks to be processed and stores the data tasks in a database;
s12: the producer thread scans the database in real time and judges whether the priority queue is full; if not full, it performs an enqueue operation, reading a data task from the database and placing it at the tail of the priority queue; if full, go to step S13;
s13: the consumer thread judges, from the real-time condition of the priority queue, whether the queue is full; if full, a dequeue operation is performed and step S2 is executed; if not full, an enqueue operation is performed.
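A minimal sketch of steps S11–S13, assuming the "database" is just an in-memory list of (priority, task) rows and modelling the to-be-processed queue as a bounded `queue.PriorityQueue` (all names below are illustrative, not from the patent):

```python
import queue
import threading

# Sketch of S11-S13: the "database" holds (priority, task) rows; a bounded
# PriorityQueue models the priority queue of tasks to be processed.
database = [(2, "task-B"), (1, "task-A"), (3, "task-C")]
pq = queue.PriorityQueue(maxsize=2)   # small bound to exercise the full/not-full branch
processed = []

def producer():
    # S12: scan the database and enqueue while the queue is not full;
    # put() blocks when the queue is full, yielding to the consumer (S13).
    for row in sorted(database):      # lowest number = highest priority
        pq.put(row)
    pq.put((99, None))                # sentinel: no more tasks

def consumer():
    # S13/S2: dequeue tasks and hand them to an idle execution terminal.
    while True:
        prio, task = pq.get()
        if task is None:
            break
        processed.append(task)

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```

Because `put()` blocks on a full queue, the producer and consumer naturally alternate without an explicit full/empty check, which is the standard producer–consumer pattern the steps describe.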
In this embodiment, the system server includes a plurality of servers, and the scheduling policy of each server follows the formula Num = id % total, wherein Num is the serial number of the scheduled server (the first server is numbered 0, the second 1, and the remaining servers sequentially); id is a server id; % denotes the modulo (remainder) operation; and total is the total number of servers.
For example, in one implementation the system server includes three servers (server 1, server 2 and server 3), so total = 3; Num of server 1 is 0, Num of server 2 is 1, and Num of server 3 is 2. When id % total = 1 % 3 = 1, server 2 performs the scheduling task; when 2 % 3 = 2, server 3 performs it; when 3 % 3 = 0, server 1 performs it. Therefore, when one server fails, the normal operation of the whole system server is unaffected, which solves the single-point problem of the system server.
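The formula Num = id % total amounts to round-robin dispatch by remainder. The sketch below reproduces the three-server example; the patent leaves open exactly what the incrementing id counts, so treating it as the id of each incoming scheduling request is an assumption:

```python
def scheduled_server(req_id: int, total: int) -> int:
    """Num = id % total: 0-based serial number of the server that takes the request."""
    return req_id % total

# Three servers, numbered 0, 1 and 2 (server 1, 2 and 3 in the text):
assignments = [scheduled_server(i, 3) for i in (1, 2, 3)]
# id 1 -> Num 1 (server 2), id 2 -> Num 2 (server 3), id 3 -> Num 0 (server 1)
```

If one server fails, the remaining ids can simply be taken modulo a smaller total, which is why no single server is a point of failure.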
In this embodiment, each server includes one priority queue or a plurality of priority queues, and inter-queue priorities are also set among the priority queues.
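One way to realise the inter-queue priorities is to drain the per-server queues in a fixed order; the queue names and the `next_task` helper below are assumptions for illustration, not part of the patent:

```python
import queue

# Sketch (assumed structure): each server holds several priority queues, and
# an inter-queue priority decides which queue is drained first.
queues = {
    "urgent": queue.PriorityQueue(),
    "normal": queue.PriorityQueue(),
}
inter_queue_order = ["urgent", "normal"]   # inter-queue priority, highest first

def next_task():
    # Take from the highest-priority non-empty queue, if any.
    for name in inter_queue_order:
        q = queues[name]
        if not q.empty():
            return q.get()[1]
    return None

queues["normal"].put((1, "report"))
queues["urgent"].put((1, "alert"))
```

Here any task in the "urgent" queue is dispatched before the "normal" queue is touched, regardless of the in-queue priorities.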
In this embodiment, the attribute of the priority queue includes type information and size information.
In this embodiment, the priority and the time of each data task are stored in the database; the priority of a data task is its importance level: very important, generally important, or not important. When a data task is read from the database and put into the priority queue, the position it takes in the queue can be decided according to the priority stored in the database.
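The three importance levels plus the stored time map naturally onto a tuple key for a priority queue. This is a sketch under the assumption that a smaller tuple sorts first and that ties in importance are broken by the older timestamp; the level-to-number mapping is illustrative:

```python
import queue

# Map the stored importance level to a numeric priority; ties are broken by
# the stored time, so older tasks of equal importance dequeue first.
LEVELS = {"very important": 0, "generally important": 1, "not important": 2}

pq = queue.PriorityQueue()

def enqueue(task_name, level, timestamp):
    pq.put((LEVELS[level], timestamp, task_name))

enqueue("t1", "not important", 100)
enqueue("t2", "very important", 200)
enqueue("t3", "generally important", 50)
order = [pq.get()[2] for _ in range(3)]
```

Dequeue order follows importance first and age second, which matches deciding a task's queue position from its stored priority.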
Example 2
As shown in fig. 1 and fig. 2, the present embodiment is different from embodiment 1 in that the present embodiment provides a distributed multithreading-based big data processing system, which supports the distributed multithreading-based big data processing method described in embodiment 1, the system includes a system server and a plurality of intelligent execution terminals, the system server includes a database, a priority queue, a producer thread and a consumer thread;
the database is used for storing all data tasks to be processed received by the system server;
the priority queue is used for putting a plurality of data tasks read from the database into the priority queue of the tasks to be processed and waiting for the intelligent execution terminal to obtain and process the data tasks;
the producer thread is used for scanning the database in real time and judging whether the priority queue is full; if not full, it performs an enqueue operation, reading a data task from the database and placing it at the tail of the priority queue; if full, the consumer thread flow is executed;
the consumer thread is used for judging, from the real-time condition of the priority queue, whether the queue is full; if full, a dequeue operation is performed and the subsequent process of calling the intelligent execution terminals is executed; if not full, an enqueue operation is performed;
the system server receives all the data tasks to be processed and stores them in the database, then reads a number of data tasks from the database and places them in the priority queue of tasks to be processed; according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server; if an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again.
As shown in fig. 2, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for processing big data based on distributed multithreading when executing the computer program.
The invention also provides a computer readable storage medium, which stores a computer program, wherein the computer program is used for realizing the big data processing method based on distributed multithreading when being executed by a processor.
The invention arranges big data in a distributed queue, lets each machine fetch data on demand for processing, and provides a corresponding error-handling mechanism, thereby forming an efficient data-processing pipeline. Compared with the prior art, the technical scheme provided by the invention greatly improves machine reuse in big data processing, reduces the rework rate of big data processing by adopting a task-per-datum mode, and provides a more efficient and simpler solution for complex-logic data processing by adopting a distributed queue mode.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (9)
1. A big data processing method based on distributed multithreading is characterized by comprising a system server and a plurality of intelligent execution terminals, wherein the system server comprises a database, a priority queue, a producer thread and a consumer thread; the method comprises the following steps:
s1: the system server receives all the data tasks to be processed and stores the data tasks in a database; reading a plurality of data tasks from the database and putting the data tasks into a priority queue of the tasks to be processed;
s2: according to the running state of each intelligent execution terminal, the system server calls the terminals that are idle to execute the tasks waiting in the priority queue, and each intelligent execution terminal reports its real-time running state to the system server; if an error is encountered during execution, the failed data task is returned to the head of the priority queue to wait to be called again.
2. The big data processing method based on distributed multithreading of claim 1, wherein the step S1 specifically comprises the following steps:
s11: the system server receives all the data tasks to be processed and stores the data tasks in a database;
s12: the producer thread scans the database in real time and judges whether the priority queue is full; if not full, it performs an enqueue operation, reading a data task from the database and placing it at the tail of the priority queue; if full, go to step S13;
s13: the consumer thread judges, from the real-time condition of the priority queue, whether the queue is full; if full, a dequeue operation is performed and step S2 is executed; if not full, an enqueue operation is performed.
3. The big data processing method based on distributed multithreading of claim 1, wherein the system server comprises a plurality of servers, and the scheduling policy formula of each server is as follows:
Num=id%total
wherein Num represents the serial number of the scheduled server: the first server is numbered 0, the second server 1, and the remaining servers sequentially; id represents a server id; % represents the modulo (remainder) operation; and total denotes the total number of servers.
4. The big data processing method based on distributed multithreading of claim 1, wherein each server comprises one or more priority queues, and inter-queue priorities are further set among the priority queues.
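One way to realize inter-queue priorities on a server holding several priority queues is to scan the queues in inter-queue priority order and pop from the first non-empty one; the claims do not fix a mechanism, so the list-of-heaps layout below is an assumption made for illustration:

```python
import heapq

def next_task(queues):
    """queues: list of heaps ordered by inter-queue priority (index 0 = highest).
    Pop from the first non-empty queue; within a queue, heapq keeps the
    smallest (priority, task) entry at the front."""
    for q in queues:
        if q:
            return heapq.heappop(q)
    return None
```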
5. The big data processing method based on distributed multithreading of claim 1, wherein the attributes of the priority queue comprise type information and size information.
6. The big data processing method based on distributed multithreading of claim 1, wherein the database stores the priority and time of each data task; the priority of a data task is its importance level, comprising very important, generally important, and not important.
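The three importance levels and the stored time can be combined into a sort key, with earlier tasks winning ties within a level. A sketch with assumed field names (`priority`, `time`, `name` are not specified in the claims):

```python
PRIORITY = {"very important": 0, "generally important": 1, "not important": 2}

def queue_key(task):
    """Order tasks by importance level first, then by stored time (earlier first)."""
    return (PRIORITY[task["priority"]], task["time"])

tasks = [
    {"name": "t1", "priority": "not important",  "time": 1},
    {"name": "t2", "priority": "very important", "time": 3},
    {"name": "t3", "priority": "very important", "time": 2},
]
ordered = sorted(tasks, key=queue_key)
```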
7. A system implementing the big data processing method based on distributed multithreading according to any one of claims 1 to 6, wherein the system comprises a system server and a plurality of intelligent execution terminals, and the system server comprises a database, a priority queue, a producer thread and a consumer thread;
the database is used for storing all data tasks to be processed received by the system server;
the priority queue is used for holding the plurality of data tasks read from the database as tasks to be processed, awaiting acquisition and processing by the intelligent execution terminals;
the producer thread is used for scanning the database in real time and judging whether the priority queue is full; if it is not full, an enqueue operation is performed: a data task is read from the database and placed at the tail of the priority queue; if it is full, the consumer thread flow is executed;
the consumer thread is used for judging, according to the real-time condition of the priority queue, whether the priority queue is full; if it is full, a dequeue operation is performed and the subsequent flow of calling the intelligent execution terminals is executed; if it is not full, an enqueue operation is performed;
the system server receives all data tasks to be processed and stores them in the database; reads a plurality of data tasks from the database and places them into the priority queue of tasks to be processed; and, according to the running state of each intelligent execution terminal, calls the intelligent execution terminals in an idle state to execute the tasks to be processed in the priority queue, each intelligent execution terminal notifying the system server of its real-time running state; during execution, if an error is encountered, the erroneous data task is returned to the head of the priority queue to await being called for execution again.
8. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the big data processing method based on distributed multithreading according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the big data processing method based on distributed multithreading according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110245159.XA CN112965798A (en) | 2021-03-05 | 2021-03-05 | Big data processing method and system based on distributed multithreading |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110245159.XA CN112965798A (en) | 2021-03-05 | 2021-03-05 | Big data processing method and system based on distributed multithreading |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112965798A true CN112965798A (en) | 2021-06-15 |
Family
ID=76276635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110245159.XA Pending CN112965798A (en) | 2021-03-05 | 2021-03-05 | Big data processing method and system based on distributed multithreading |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112965798A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113726620A (en) * | 2021-11-02 | 2021-11-30 | 深圳市发掘科技有限公司 | Management method, device and system of intelligent cooking device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106802826B (en) | Service processing method and device based on thread pool | |
CN105893126A (en) | Task scheduling method and device | |
CN102063286B (en) | Program flow controls | |
CN113238838A (en) | Task scheduling method and device and computer readable storage medium | |
CN108710536B (en) | Multilevel fine-grained virtualized GPU (graphics processing Unit) scheduling optimization method | |
CN103034554A (en) | ETL (Extraction-Transformation-Loading) dispatching system and method for error-correction restarting and automatic-judgment starting | |
US20150205633A1 (en) | Task management in single-threaded environments | |
CN107479981B (en) | Processing method and device for realizing synchronous call based on asynchronous call | |
CN110851246A (en) | Batch task processing method, device and system and storage medium | |
CN113504985A (en) | Task processing method and network equipment | |
CN110162344B (en) | Isolation current limiting method and device, computer equipment and readable storage medium | |
CN110287018A (en) | Batch tasks method of combination and device | |
CN110599341A (en) | Transaction calling method and system | |
CN112965798A (en) | Big data processing method and system based on distributed multithreading | |
CN112181522A (en) | Data processing method and device and electronic equipment | |
CN111553652A (en) | Service processing method and device | |
CN104426964B (en) | Data transmission method, device and terminal, computer storage media | |
CN101361039A (en) | Processor | |
CN101794215B (en) | Method and device for assembling and performing complex tasks | |
CN110046809B (en) | Job scheduling method and device | |
CN115098232A (en) | Task scheduling method, device and equipment | |
CN113806055A (en) | Lightweight task scheduling method, system, device and storage medium | |
JP5630798B1 (en) | Processor and method | |
CN109656708B (en) | Animation playing limiting method for Android, storage medium, electronic equipment and system | |
CN112083952A (en) | Spring architecture-based exception handling method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||