CN112840320A - Method and device for resource platform to share resources exclusively and electronic equipment

Method and device for resource platform to share resources exclusively and electronic equipment

Info

Publication number
CN112840320A
Authority
CN
China
Prior art keywords
task
processor
task processing
queue
processing
Prior art date
Legal status
Pending
Application number
CN201880098613.XA
Other languages
Chinese (zh)
Inventor
王博
牛昕宇
蔡权雄
熊超
Current Assignee
Shenzhen Corerain Technologies Co Ltd
Original Assignee
Shenzhen Corerain Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Corerain Technologies Co Ltd filed Critical Shenzhen Corerain Technologies Co Ltd
Publication of CN112840320A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Multi Processors (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method, an apparatus and an electronic device for a resource platform to share resources exclusively are provided, wherein the method comprises the following steps: acquiring, through a task manager, task request information submitted by a plurality of users (S201); for each piece of task request information, forming a tuple from the corresponding task parameters and the task ID and inserting the tuple into a task queue (S202); extracting, through each of a plurality of task processing functions, the head task parameters from the task queue, calling the task processor corresponding to that task processing function to process the task, and respectively obtaining the task processing results (S203); and storing each task processing result and the corresponding task ID in a result database (S204). The method improves both the efficiency and the success rate of task processing.

Description

Method and device for resource platform to share resources exclusively and electronic equipment
Technical Field
The present application relates to the field of cloud technology, and in particular to a method and an apparatus for a resource platform to share resources exclusively, and to an electronic device.
Background
With the rapid growth of GPU computing power, deep learning is increasingly applied in industry and has achieved great success. In practical applications, obtaining higher accuracy and a lower misrecognition rate requires running long training jobs on the GPU to obtain a good model. During the training of a deep learning algorithm or other algorithms, the GPU generally occupies all of its video memory. Training times are typically long, ranging from several hours to several days. During this process, the GPU cannot be used by other programs; if another program attempts to use the already occupied GPU, that program will report an error. In other words, in most cases, training an algorithm on the GPU monopolizes the GPU resources for a long time. For a platform on which multiple users run training tasks, existing resource platforms therefore suffer from low task processing efficiency and a low task processing success rate.
Summary of the Application
Embodiments of the present application provide a method and an apparatus for a resource platform to share resources exclusively, and related products. Task processing is performed through a plurality of task processing functions and their corresponding task processors, and the executing tasks are independent of each other, which improves both task processing efficiency and the task processing success rate.
In a first aspect, an embodiment of the present application provides a method for a resource platform to share resources exclusively, where the resource platform includes a task manager, a task queue, a plurality of task processing functions, task processors corresponding to the task processing functions, and a result database. The method comprises the following steps:
acquiring, through the task manager, task request information submitted by a plurality of users;
for each piece of task request information, forming a tuple from the corresponding task parameters and the task ID, and inserting the tuple into the task queue;
extracting, through each of the plurality of task processing functions, the head task parameters from the task queue, calling the task processor corresponding to that task processing function to process the task, and respectively obtaining the task processing results;
and storing each task processing result and the corresponding task ID in the result database.
In a second aspect, an embodiment of the present application provides an apparatus for resource platforms to share resources exclusively, including:
the task acquisition module is used for acquiring task request information submitted by a plurality of users through the task manager;
the task storage module is used for forming a tuple by the corresponding task parameter and the task ID according to each task request message and inserting the tuple into the task queue;
the task processing module is used for extracting the head task parameters of the task queue from the task queue through a plurality of task processing functions, calling the task processor corresponding to the task processing functions to process the tasks and respectively acquiring task processing results;
and the result storage module is used for storing each task processing result and the corresponding task ID in a result database.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the method provided in the embodiments of the invention when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the method provided in the present application.
In the embodiments of the present application, task request information submitted by a plurality of users is acquired through the task manager; for each piece of task request information, a tuple is formed from the corresponding task parameters and the task ID and inserted into the task queue; the head task parameters are extracted from the task queue through each of a plurality of task processing functions, the task processor corresponding to each task processing function is called to process the task, and the respective task processing results are obtained; and each task processing result and the corresponding task ID are stored in the result database. Thus, when a plurality of task requests are processed, each user's task request is handled by a task processing function working together with its corresponding task processor, each task is processed independently, and multiple tasks do not need to contend for the same task processing function and the same task processor. This improves task processing efficiency as well as the task processing success rate.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to describe the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of a network architecture for a resource platform to share resources exclusively;
FIG. 2 is a flowchart illustrating a method for resource platform exclusive resource sharing provided in the present application;
FIG. 3 is a schematic diagram of an apparatus for sharing resources exclusively by a resource platform according to the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Wherein: 1. a user; 2. a task manager; 3. a task queue; 4. a task processing function; 5. a task processor; 6. a result database.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the drawings described herein are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a network architecture diagram of a resource platform for exclusive resource sharing. The network architecture includes: a user 1, a task manager 2, a task queue 3, a task processing function 4, a task processor 5 and a result database 6. There may be one or more users; the task manager 2 may be an application program; the task queue 3 may be a storage database for tasks to be processed; the task processing function 4 may be the code of a task processing algorithm, the task processing function 4 can call the task processor 5, and the task processing functions 4 correspond one-to-one to the task processors 5. There may be one or more task processors 5, set according to the requirements of the resource platform. The result database 6 is used to store task processing results and task IDs, and may be a key-value database or a relational database. A user submits a task to the task manager 2; the task manager 2 inserts the task parameters and the task ID of the submitted task into the task queue 3; a task processing function 4 extracts the task parameters from the task queue 3 and calls the corresponding task processor 5 to process the task; and the task processing result is stored in the result database 6. When a user wants to query a task processing result, the user enters the task ID in the task manager 2, and the corresponding task processing result is retrieved from the result database 6.
Referring to fig. 2, fig. 2 is a schematic flowchart of a method for a resource platform to share resources exclusively, where the resource platform includes a task manager, a task queue, a plurality of task processing functions, task processors corresponding to the task processing functions, and a result database. As shown in fig. 2, the method includes the following steps:
step S201, acquiring task request information submitted by a plurality of users through the task manager.
The task manager is used for receiving task requests provided by a plurality of users and can be used for the users to inquire task processing results. The task manager may also be referred to as an application. The task manager provides a task submission interface through which users can submit tasks, which is an interface that supports concurrent requests, i.e., through which multiple users can simultaneously submit tasks. The task manager also provides a result query interface through which a user can query task processing results, the result query interface is also a concurrent request supporting interface, and a plurality of users can query task processing results simultaneously. The task request information may include: task submission time, task description, and task parameters, etc. After a user submits a task, the task manager can directly acquire task request information of the task.
Step S202, for each piece of task request information, forming a tuple from the corresponding task parameters and the task ID and inserting the tuple into a task queue.
The task ID (number) may be an identifier of the task, such as a task serial number; the task serial number may be derived from the submission time and submission order of the task. The task ID may be written as task_id (task number), and may be generated from a combination of the task submission time and the task serial number when the task request submitted by the user is received, or may be generated automatically by the system. The task parameters may be the function parameters required when the task is executed. The task queue may be a database for storing tasks to be performed; the data structure of the database is a queue, which can store the task parameters and task IDs of a plurality of tasks. After a task request submitted by a user is acquired, a task ID corresponding to the task is generated, and the task parameters and the task ID of the task form a tuple that is stored in the task queue. If multiple pieces of task request information are received, they are ordered by the time of receipt, and the tuple of each task is stored in the task queue on a first-come, first-served basis. This ensures that tasks submitted first are processed first. The ordering policy for tasks can be selected according to actual needs.
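By way of illustration only, step S202 might be sketched in Python as follows, with an in-process FIFO queue standing in for the task queue and a hypothetical make_task_id helper that combines the submission time with a sequence number:

```python
import queue
import threading
import time

# FIFO queue: tuples are taken out in first-come, first-served order (step S202).
task_queue = queue.Queue()

_seq_lock = threading.Lock()
_seq = 0


def make_task_id():
    """Hypothetical task_id scheme: submission time combined with a sequence number."""
    global _seq
    with _seq_lock:
        _seq += 1
        return f"{time.strftime('%Y%m%d-%H%M%S')}-{_seq:06d}"


def enqueue_task(task_params):
    """Form the (task_id, task_params) tuple and insert it at the tail of the task queue."""
    task_id = make_task_id()
    task_queue.put((task_id, task_params))
    return task_id  # returned to the user so the result can be queried later


if __name__ == "__main__":
    tid = enqueue_task({"model": "resnet50", "epochs": 10})
    print("queued task", tid, "queue length:", task_queue.qsize())
```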
Step S203, extracting, through each of a plurality of task processing functions, the head task parameters from the task queue, calling the task processor corresponding to the task processing function to process the task, and respectively obtaining the task processing results.
The task processing function may be the code of a task execution algorithm. Each task processing function is configured with a corresponding task processor, and the task processing function can call its corresponding task processor to process a task. In the resource platform, the number of task processing functions is the same as the number of task processors; when tasks are processed, each task processing function corresponds one-to-one to a task processor, executes only one task at a time, and can execute the next task only after the current task has been processed. The head task parameters of the task queue are the task parameters of the first task stored in the queue. The task processor may be a processor for processing tasks, such as a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array), and can be selected according to the task type and purpose of the resource platform. When the resource platform needs to process a plurality of tasks, a plurality of task groups can be formed by pairing the task processing functions and the task processors one to one; each task group in turn extracts the first task from the task queue, processes it, and obtains the corresponding task processing result. For example, if 10 tasks are stored in the task queue and the resource platform has 5 task processors with 5 corresponding task processing functions, then 5 task groups are formed, and the 5 task groups in turn extract the task parameters of the next task from the task queue, process the tasks, and obtain the corresponding task processing results. Because the tasks differ in difficulty, their processing times differ, so the 5 task groups finish at different times, and a task group that finishes earlier extracts the parameters of the next task from the task queue and processes it. In this way, the 5 task groups alternately and cyclically extract task parameters from the task queue and process the extracted tasks, and once all 10 tasks have been processed, the task groups stop working. Alternatively, it is judged whether a task group has obtained task parameters; if not, the task group waits for a preset time threshold, such as 1 s, 2 s or 10 s, and then extracts a task from the task queue again for processing to obtain a task processing result; if so, it continues to extract tasks from the task queue, process them, and obtain the task processing results.
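A minimal Python sketch of such a task group is given below, assuming an in-process queue and a placeholder process_on_device function in place of a real GPU/FPGA task processor; each task group repeatedly takes the head of the queue, processes it, and waits a preset time threshold before retrying when the queue is momentarily empty:

```python
import queue
import threading
import time

task_queue = queue.Queue()
results = {}                      # stand-in for the result database
results_lock = threading.Lock()
WAIT_THRESHOLD = 1.0              # preset time threshold (e.g. 1 s) before retrying an empty queue


def process_on_device(device_no, task_params):
    """Placeholder for the task processor (GPU/FPGA) bound to this task group."""
    time.sleep(0.1 * task_params["difficulty"])   # tasks of different difficulty take different times
    return f"done on device {device_no}: {task_params}"


def task_group(device_no, stop_event):
    """One task group = one task processing function paired with one task processor."""
    while not stop_event.is_set():
        try:
            task_id, params = task_queue.get_nowait()   # extract the head task parameters
        except queue.Empty:
            time.sleep(WAIT_THRESHOLD)                  # wait, then try the queue again
            continue
        result = process_on_device(device_no, params)   # one task at a time per group
        with results_lock:
            results[task_id] = result
        task_queue.task_done()


if __name__ == "__main__":
    for i in range(10):                                  # e.g. 10 tasks ...
        task_queue.put((f"task-{i}", {"difficulty": (i % 3) + 1}))
    stop = threading.Event()
    groups = [threading.Thread(target=task_group, args=(n, stop)) for n in range(5)]  # ... and 5 task groups
    for g in groups:
        g.start()
    task_queue.join()                                    # all 10 tasks processed
    stop.set()
    for g in groups:
        g.join()
    print(len(results), "results collected")
```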
Step S204, storing each task processing result and the corresponding task ID in a result database.
The task processing result may include data, graphics, text, and the like obtained after the task processing. And storing the task processing result and the corresponding task ID in a result database together, so that a user can conveniently inquire the task processing result.
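Since the result database may be a key-value or relational database, the following sketch uses SQLite from the Python standard library purely as an example; the task_results table name and schema are illustrative assumptions:

```python
import sqlite3


def open_result_db(path="results.db"):
    """Create the result database with a (task_id -> result) table if it does not exist."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS task_results ("
        "task_id TEXT PRIMARY KEY, "
        "result  TEXT NOT NULL)"
    )
    conn.commit()
    return conn


def store_result(conn, task_id, result):
    """Bind the task processing result to its task ID and persist it (step S204)."""
    conn.execute(
        "INSERT OR REPLACE INTO task_results (task_id, result) VALUES (?, ?)",
        (task_id, result),
    )
    conn.commit()


if __name__ == "__main__":
    db = open_result_db()
    store_result(db, "20181221-000001", "accuracy=0.93, model saved")
    print(db.execute("SELECT result FROM task_results WHERE task_id = ?",
                     ("20181221-000001",)).fetchone())
```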
In the embodiments of the present application, task request information submitted by a plurality of users is acquired through the task manager; for each piece of task request information, a tuple is formed from the corresponding task parameters and the task ID and inserted into the task queue; the head task parameters are extracted from the task queue through each of a plurality of task processing functions, the task processor corresponding to each task processing function is called to process the task, and the respective task processing results are obtained; and each task processing result and the corresponding task ID are stored in the result database. Thus, when a plurality of task requests are processed, each user's task request is handled by a task processing function working together with its corresponding task processor, each task is processed independently, and multiple tasks do not need to contend for the same task processing function and the same task processor. This improves task processing efficiency as well as the task processing success rate.
Optionally, before receiving task request information submitted by a plurality of users, the method includes:
starting a task manager, and enabling the task manager to receive a task request of a user in a background;
starting a task processing function and a task processor corresponding to the task processing function;
a task queue and a results database are created.
Specifically, the task manager is started in advance to receive task requests submitted by users; the task processing functions and the task processors are combined into task groups in advance, ready to process tasks; and the task queue and the result database are created in advance to store, respectively, the to-be-processed tasks submitted by users and the task processing results obtained after processing. There may be one or more to-be-processed tasks, and one or more task processing results.
In the above embodiment, preparations may be made for the resource platform to process the task in advance, which further improves task processing efficiency.
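One possible start-up sequence consistent with these preparation steps, sketched in Python with invented names (start_platform, worker_fn), is:

```python
import queue
import threading


def start_platform(num_processors, worker_fn):
    """Create the task queue and result store, then start one task group per processor."""
    task_queue = queue.Queue()        # created before any task request is accepted
    result_db = {}                    # created before any task request is accepted
    stop_event = threading.Event()

    # Start one task processing function per task processor (one-to-one task groups).
    groups = [
        threading.Thread(target=worker_fn, args=(n, task_queue, result_db, stop_event), daemon=True)
        for n in range(num_processors)
    ]
    for g in groups:
        g.start()

    # Only after the queue, result store and task groups exist does the task
    # manager start accepting task requests from users in the background.
    return task_queue, result_db, stop_event, groups


if __name__ == "__main__":
    def idle_worker(n, q, db, stop):
        while not stop.is_set():
            stop.wait(0.1)

    tq, db, stop, groups = start_platform(num_processors=2, worker_fn=idle_worker)
    print("platform ready:", len(groups), "task groups started")
    stop.set()
```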
Optionally, the step of starting the task processing function and the task processor corresponding to the task processing function includes:
configuring a number for each task processor;
maintaining a task processing file for recording the number of the task processor in use;
when a task processing function is started, acquiring the number of a first task processor in the resource platform;
judging whether the number of the first task processor is recorded in a task processing file or not;
and if the number of the first task processor is not recorded in the task processing file, starting the task processing function by using the number of the first task processor, and recording the number of the first task processor in the task processing file.
Specifically, the numbers are used to distinguish the plurality of task processors in the resource platform and make it easier to configure the corresponding task processing functions. The first task processor is simply the first task processor to be matched, and it may be selected at random. The task processing file may be a temporary file used when an available task processor is configured for a task processing function; it ensures a one-to-one correspondence between task processors and task processing functions, avoids task failures caused by two tasks using the same task processor at the same time, and thereby further improves the task processing success rate.
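A minimal sketch of this allocation scheme, assuming a plain text task processing file that records the numbers of task processors already in use (the file name and locking are illustrative; a multi-process deployment would need real file locking), is:

```python
import os
import threading

TASK_PROCESSING_FILE = "processors_in_use.txt"   # hypothetical temporary file
_file_lock = threading.Lock()                    # serialises access within this process


def read_in_use():
    """Return the set of processor numbers already recorded in the task processing file."""
    if not os.path.exists(TASK_PROCESSING_FILE):
        return set()
    with open(TASK_PROCESSING_FILE) as f:
        return {line.strip() for line in f if line.strip()}


def claim_processor(candidate_numbers):
    """Return the number of a free task processor and record it, or None if all are in use."""
    with _file_lock:
        in_use = read_in_use()
        for number in candidate_numbers:
            if str(number) not in in_use:          # number not recorded -> processor is free
                with open(TASK_PROCESSING_FILE, "a") as f:
                    f.write(f"{number}\n")         # record it so no other function picks it
                return number
    return None


if __name__ == "__main__":
    # e.g. a platform with processors numbered 0..4; each task processing function
    # claims one number at start-up so that two functions never share a processor.
    print("claimed:", claim_processor(range(5)))
    print("claimed:", claim_processor(range(5)))
```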
Optionally, after acquiring task request information submitted by a plurality of users, the method includes:
and respectively generating corresponding task IDs according to the task request information submitted by the users, and respectively sending each task ID to the user.
Specifically, after submitting a task the user obtains a corresponding task ID and can query the task processing result by that task ID, which prevents task results from being claimed by the wrong user and improves task processing efficiency.
Optionally, the step of storing each task processing result and the corresponding task ID in a result database includes:
and binding each task processing result with the corresponding task ID, and storing the task processing result in a result database.
Specifically, the task processing result is bound with the corresponding task ID, and when a user queries the task processing result in the result database through the task ID, the corresponding task processing result can be accurately queried, so that the task processing efficiency is further improved.
Optionally, the method further includes:
when a user inquires a task processing result, acquiring a task ID input by the user through a task manager;
extracting a corresponding task processing result from the result database according to the task ID;
and sending the task processing result to a user.
Specifically, the user only needs to provide his or her task ID to retrieve the corresponding task processing result from the result database.
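Continuing the illustrative SQLite sketch used above for the result database (again an assumption, not a prescribed implementation), the query step might look like:

```python
import sqlite3


def query_result(task_id, path="results.db"):
    """Look up the task processing result bound to task_id; None means still pending."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS task_results ("
        "task_id TEXT PRIMARY KEY, result TEXT NOT NULL)"
    )
    row = conn.execute(
        "SELECT result FROM task_results WHERE task_id = ?", (task_id,)
    ).fetchone()
    conn.close()
    return row[0] if row else None


if __name__ == "__main__":
    print(query_result("20181221-000001") or "task still being processed")
```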
Optionally, the method further includes:
after each task processing function completes one task processing, the next head task parameter can be circularly extracted from the task queue, and corresponding operation is executed.
In this way, the resource platform makes reasonable use of the task processors when processing a plurality of tasks and avoids wasting task processors by leaving them idle.
Referring to fig. 3, fig. 3 is a schematic diagram of an apparatus for resource platform exclusive resource sharing provided in the present application, as shown in fig. 3, including:
a task obtaining module 301, configured to obtain task request information submitted by multiple users through the task manager;
the task storage module 302 is configured to form a tuple by using the corresponding task parameter and the task ID according to each piece of task request information and insert the tuple into a task queue;
the task processing module 303 is configured to extract a head task parameter of a task queue from the task queue through a plurality of task processing functions, respectively, call a task processor corresponding to the task processing function to process the task, and respectively obtain task processing results;
a result storage module 304, configured to store each task processing result and the corresponding task ID in a result database.
In the embodiment of the present invention, the provided device for resource platforms to share resources can implement each process implemented in the above method embodiments, and achieve the same technical effect, and is not described herein again to avoid repetition.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device provided in the present application, and as shown in fig. 4, the electronic device includes: a memory 402, a processor 401 and a computer program stored on the memory 402 and executable on the processor 401, the processor 401 implementing the steps of the method in the embodiments of the present invention when executing the computer program.
Embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor 401, implements the steps of the method described in the embodiments of the present invention.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative.
In addition, a computer-readable storage medium or a computer program may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

  1. A method for a resource platform to share resources exclusively, the resource platform comprising a task manager, a task queue, a plurality of task processing functions, task processors corresponding to the task processing functions, and a result database, the method comprising the following steps:
    acquiring, through the task manager, task request information submitted by a plurality of users;
    for each piece of task request information, forming a tuple from the corresponding task parameters and the task ID, and inserting the tuple into the task queue;
    extracting, through each of the plurality of task processing functions, the head task parameters from the task queue, calling the task processor corresponding to that task processing function to process the task, and respectively obtaining the task processing results;
    and storing each task processing result and the corresponding task ID in the result database.
  2. The method of claim 1, wherein prior to receiving task request information submitted by a plurality of users, the method comprises:
    starting a task manager, and enabling the task manager to receive a task request of a user in a background;
    starting a task processing function and a task processor corresponding to the task processing function;
    a task queue and a results database are created.
  3. The method of claim 2, wherein the step of initiating a task processing function and a task processor corresponding to the task processing function comprises:
    configuring a number for each task processor;
    maintaining a task processing file for recording the number of the task processor in use;
    when a task processing function is started, acquiring the number of a first task processor in the resource platform;
    judging whether the number of the first task processor is recorded in a task processing file or not;
    and if the number of the first task processor is not recorded in the task processing file, starting the task processing function by using the number of the first task processor, and recording the number of the first task processor in the task processing file.
  4. The method of claim 1, wherein after obtaining task request information submitted by a plurality of users, the method comprises:
    and respectively generating corresponding task IDs according to the task request information submitted by the users, and respectively sending each task ID to the user.
  5. The method of claim 4, wherein the step of storing each of the task processing results and corresponding task ID in a results database comprises:
    and binding each task processing result with the corresponding task ID, and storing the task processing result in a result database.
  6. The method of claim 5, wherein the method further comprises:
    when a user inquires a task processing result, acquiring a task ID input by the user through a task manager;
    extracting a corresponding task processing result from the result database according to the task ID;
    and sending the task processing result to a user.
  7. The method of claim 1, wherein the method further comprises:
    after each task processing function completes one task processing, the next head task parameter can be circularly extracted from the task queue, and corresponding operation is executed.
  8. An apparatus for a resource platform to share resources exclusively, comprising:
    the task acquisition module is used for acquiring task request information submitted by a plurality of users through the task manager;
    the task storage module is used for forming a tuple by the corresponding task parameter and the task ID according to each task request message and inserting the tuple into the task queue;
    the task processing module is used for extracting the head task parameters of the task queue from the task queue through a plurality of task processing functions, calling the task processor corresponding to the task processing functions to process the tasks and respectively acquiring task processing results;
    and the result storage module is used for storing each task processing result and the corresponding task ID in a result database.
  9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 7 when executing the computer program.
  10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201880098613.XA 2018-12-21 2018-12-21 Method and device for resource platform to share resources exclusively and electronic equipment Pending CN112840320A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/122560 WO2020124524A1 (en) 2018-12-21 2018-12-21 Method and apparatus for exclusive use of resources by resource platform, and electronic device

Publications (1)

Publication Number Publication Date
CN112840320A (en) 2021-05-25

Family

ID=71101008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098613.XA Pending CN112840320A (en) 2018-12-21 2018-12-21 Method and device for resource platform to share resources exclusively and electronic equipment

Country Status (2)

Country Link
CN (1) CN112840320A (en)
WO (1) WO2020124524A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8209701B1 (en) * 2007-09-27 2012-06-26 Emc Corporation Task management using multiple processing threads
CN104462370A (en) * 2014-12-09 2015-03-25 北京百度网讯科技有限公司 Distributed task scheduling system and method
CN106776008A (en) * 2016-11-23 2017-05-31 福建六壬网安股份有限公司 A kind of method and system that load balancing is realized based on zookeeper
CN106775977B (en) * 2016-12-09 2020-06-02 北京小米移动软件有限公司 Task scheduling method, device and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080148271A1 (en) * 2006-12-19 2008-06-19 Ianywhere Solutions, Inc. Assigning tasks to threads requiring limited resources using programmable queues
CN103780635A (en) * 2012-10-17 2014-05-07 百度在线网络技术(北京)有限公司 System and method for distributed asynchronous task queue execution in cloud environment
CN104615487A (en) * 2015-01-12 2015-05-13 中国科学院计算机网络信息中心 System and method for optimizing parallel tasks
CN105022670A (en) * 2015-07-17 2015-11-04 中国海洋大学 Heterogeneous distributed task processing system and processing method in cloud computing platform
CN107729139A (en) * 2017-09-18 2018-02-23 北京京东尚科信息技术有限公司 A kind of method and apparatus for concurrently obtaining resource

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625502A (en) * 2022-03-03 2022-06-14 盐城金堤科技有限公司 Word-throwing task processing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2020124524A1 (en) 2020-06-25
WO2020124524A8 (en) 2020-08-20


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination