CN107092526B - Task processing method and device - Google Patents
Task processing method and device
- Publication number
- CN107092526B (application CN201610946065.4A)
- Authority
- CN
- China
- Prior art keywords
- task
- task processing
- processing
- processed
- party
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5015—Service provider selection
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The embodiment of the invention provides a task processing method and a task processing device. The task processing pressure of all task processing parties located in a preset area at the current moment of a server is detected; in response to the task processing pressure being greater than a preset task processing pressure threshold, the task processing pressure interval in which the task processing pressure falls is determined among a plurality of preset task processing pressure intervals, and the task processing state of each task processing party located in the preset area is set according to the task processing pressure. Therefore, when another task requesting party sends the server a generation request for a new to-be-processed task that requires a task processing party to process, the experience of that task requesting party is prevented from being degraded.
Description
Technical Field
The embodiment of the invention relates to the technical field of internet, in particular to a task processing method and device.
Background
With the rapid development of the internet, more and more physical restaurants provide online ordering services, and more and more users order food from physical restaurants over the network. When many physical restaurants are located in one area, that area forms a business district.
Before a user submits to the server an order for food from a physical restaurant in a business district, the server prompts the user with a default processing duration for the food. The default processing duration means that the delivery personnel in the business district will deliver the food to the user at the latest by the time the default processing duration has elapsed since the user submitted the order to the server. If the user can accept the default processing duration, the user may submit the order to the server; if not, the user may decide not to submit it.
However, when the weather is bad, or when several physical restaurants in a business district launch discount activities at the same time, many users may submit orders for food from those restaurants to the server within a short period. If the number of delivery personnel in the business district stays the same while the number of orders increases sharply, then when the server receives a new order from another user for food from a restaurant in that business district, it cannot be guaranteed that the new order will be delivered to that user within the default processing duration, which degrades that user's experience.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present invention provide a task processing method and apparatus.
According to a first aspect of the embodiments of the present invention, there is provided a task processing method applied to a server, the method including:
detecting the task processing pressure of all task processing parties in a preset area at the current moment of the server;
and responding to the fact that the task processing pressure is larger than a preset task processing pressure threshold value, and respectively setting the task processing state of each task processing party in the preset area according to the task processing pressure.
Wherein the task processing state comprises at least one of:
whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
Wherein the unprocessed to-be-processed tasks comprise:
to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that the task processing party has assigned to a processing unit but that the processing unit has not yet finished processing, the processing unit being used to process tasks.
The detecting task processing pressure of all task processing parties located in a preset area at the current moment of the server includes:
acquiring the total number of unprocessed tasks to be processed of all task processing parties in a preset area at the current moment of the server;
acquiring the allocation quantity of processing units allocated for the task processing party in the preset area in advance;
and determining the task processing pressure of all task processing parties located in a preset area at the current moment of the server according to the total quantity and the allocation quantity.
Wherein, the setting of the task processing state of each task processing party located in the preset area according to the task processing pressure comprises:
determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals;
and for any task processing party located in the preset area, acquiring attribute information of the task processing party, determining a task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relationship among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state.
Wherein, in a plurality of preset different task processing pressure intervals, determining the task processing pressure interval in which the task processing pressure is located includes:
and determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relation between the task processing pressure and the task processing pressure interval.
According to a second aspect of the embodiments of the present invention, there is provided a task processing apparatus applied to a server, the apparatus including:
the detection module is used for detecting the task processing pressure of all task processing parties in a preset area at the current moment of the server;
and the setting module is used for responding to the situation that the task processing pressure is greater than a preset task processing pressure threshold value, and respectively setting the task processing state of each task processing party in the preset area according to the task processing pressure.
Wherein the task processing state comprises at least one of:
whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
Wherein the unprocessed to-be-processed tasks comprise:
to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that the task processing party has assigned to a processing unit but that the processing unit has not yet finished processing, the processing unit being used to process tasks.
Wherein the detection module comprises:
the first acquisition unit is used for acquiring the total number of unprocessed tasks to be processed of all task processing parties in a preset area at the current moment of the server;
a second obtaining unit, configured to obtain the allocation number of the processing units that are allocated to the task processing party in the preset area in advance;
and the first determining unit is used for determining the task processing pressure of all the task processing parties located in a preset area at the current moment of the server according to the total number and the allocation number.
Wherein the setting module includes:
the second determining unit is used for determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals;
the setting unit is used for acquiring the attribute information of the task processing party for any task processing party in the preset area, determining the task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relation among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state.
Wherein the second determination unit includes:
and the determining subunit is used for determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relationship between the task processing pressure and the task processing pressure interval.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, the task processing pressure of all task processing parties located in a preset area at the current moment of the server is detected; in response to the task processing pressure being greater than a preset task processing pressure threshold, the task processing pressure interval in which the task processing pressure falls is determined among a plurality of preset task processing pressure intervals, and the task processing state of each task processing party located in the preset area is set accordingly. The task processing state includes at least one of: whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter. Therefore, when another task requesting party sends the server a generation request for a new to-be-processed task that requires the task processing party to process, the experience of that task requesting party is prevented from being degraded.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the embodiments of the invention.
FIG. 1 is a flow diagram illustrating a method of task processing in accordance with an exemplary embodiment;
fig. 2 is a block diagram illustrating a task processing device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with embodiments of the invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of embodiments of the invention, as detailed in the following claims.
Fig. 1 is a flowchart illustrating a task processing method, as shown in fig. 1, for use in a server, according to an exemplary embodiment, the method including the following steps.
In step S101, detecting task processing pressures of all task processing parties located in a preset area at the current time of the server;
in the embodiment of the invention, a plurality of task processing parties are included in the preset area.
This step can be specifically implemented by the following processes 11) to 13):
11) acquiring the total number of unprocessed tasks to be processed of all task processing parties in the preset area at the current moment of the server;
the scenes of the embodiment of the invention comprise: the system comprises a server, a plurality of task processing parties and at least one task requesting party, wherein the task processing parties are all located in the preset area.
For any task requesting party, when the task requesting party needs a to-be-processed task generated that requires processing by a certain task processing party located in the preset area, the task requesting party may send the server a generation request for that to-be-processed task. When the server receives the generation request, it generates the to-be-processed task and allocates it to that task processing party. A plurality of processing units are provided in advance for all the task processing parties located in the preset area; the processing units are used to process tasks, and a task processing party located in the preset area may assign its to-be-processed tasks to processing units among them. Accordingly, when the task processing party receives the to-be-processed task allocated by the server, it may assign the task to one of the processing units so that the processing unit processes it. When the processing unit finishes processing the to-be-processed task, the task processing party is considered to have finished processing it.
Wherein the unprocessed to-be-processed tasks comprise: to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that have been assigned to a processing unit but that the processing unit has not yet finished processing;
In the embodiment of the present invention, each time the server generates a to-be-processed task, the server allocates a task identifier to the generated task. The task identifier uniquely identifies the task, that is, different to-be-processed tasks have different task identifiers; the task identifier may be the name or the number of the to-be-processed task, which is not limited by the present invention.
The server stores an unprocessed list corresponding to the preset area, and the unprocessed list is used for storing task identifiers of unprocessed to-be-processed tasks of all task processing parties located in the preset area.
In the embodiment of the present invention, when the server allocates a to-be-processed task to a task processing party, it stores the task identifier allocated to that task in the locally stored unprocessed list corresponding to the preset area; when the task processing party finishes processing the task, the server deletes that task identifier from the locally stored unprocessed list corresponding to the preset area.
Therefore, in this step, the server may obtain a locally stored unprocessed list corresponding to the preset area, then count the number of task identifiers included in the unprocessed list corresponding to the preset area, and use the counted number as the total number of unprocessed to-be-processed tasks of all task processing parties located in the preset area at the current time of the server.
In the embodiment of the present invention, the server may periodically or in real time acquire the total number of unprocessed to-be-processed tasks of all the task processing parties located in the preset area at the current time of the server.
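By way of example and not limitation, the unprocessed-list bookkeeping described above can be sketched as follows in Python; the class and method names (UnprocessedListStore, on_task_allocated, and so on) are hypothetical and are not part of the claimed method.

```python
class UnprocessedListStore:
    """Hypothetical server-side store of unprocessed task identifiers per preset area."""

    def __init__(self):
        self._lists = {}  # preset area -> set of task identifiers not yet finished

    def on_task_allocated(self, area: str, task_id: str) -> None:
        # When the server allocates a to-be-processed task to a task processing
        # party in the area, the task identifier enters the area's unprocessed list.
        self._lists.setdefault(area, set()).add(task_id)

    def on_task_finished(self, area: str, task_id: str) -> None:
        # When the task processing party finishes the task, its identifier is removed.
        self._lists.get(area, set()).discard(task_id)

    def total_unprocessed(self, area: str) -> int:
        # Step 11): total number of unprocessed to-be-processed tasks of all
        # task processing parties in the area at the current moment.
        return len(self._lists.get(area, set()))


# Usage example
store = UnprocessedListStore()
store.on_task_allocated("business_district_1", "task-001")
store.on_task_allocated("business_district_1", "task-002")
store.on_task_finished("business_district_1", "task-001")
print(store.total_unprocessed("business_district_1"))  # -> 1
```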
12) Acquiring the allocation quantity of processing units allocated to all task processing parties in the preset area in advance; the processing unit is used for processing tasks;
Here, a technician may store, locally on the server in advance, the allocation quantity of processing units allocated to all the task processing parties located in the preset area. Therefore, in this step, the server can obtain the allocation quantity directly from local storage.
13) And determining the task processing pressure of all task processing parties located in the preset area at the current moment of the server according to the total quantity and the allocation quantity.
The total number may be divided by the allocation number to obtain a value, and that value is used as the task processing pressure of all the task processing parties located in the preset area at the current moment of the server.
Of course, in the embodiment of the present invention, the task processing pressure of all task processing parties located in the preset area at the current moment of the server may also be calculated from the total number and the allocation number in other ways, which is not limited by the present invention.
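By way of example and not limitation, the division described in 13) can be sketched as follows; the helper name task_processing_pressure is hypothetical, and, as noted above, other formulas may equally be used.

```python
def task_processing_pressure(total_unprocessed: int, allocated_units: int) -> float:
    """Hypothetical helper for step 13): pressure of all task processing parties
    in a preset area, taken here as unprocessed tasks divided by allocated units."""
    if allocated_units <= 0:
        raise ValueError("the preset area needs at least one allocated processing unit")
    return total_unprocessed / allocated_units


# Usage: 120 unprocessed to-be-processed tasks shared by 40 allocated processing units.
print(task_processing_pressure(total_unprocessed=120, allocated_units=40))  # -> 3.0
```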
In response to the task processing pressure being greater than the preset task processing pressure threshold, in step S102, the task processing state of each task processing party located in the preset area is set according to the task processing pressure.
In the embodiment of the present invention, the server may determine the preset task processing pressure threshold in advance from the number of unprocessed to-be-processed tasks, at each moment in the historical record, of all task processing parties located in the preset area, and from the number of processing units allocated to all task processing parties located in the preset area.
Specifically, for any moment in the historical record, the server may obtain the number of to-be-processed tasks that were unprocessed at that moment by all task processing parties located in the preset area, divide it by the number of processing units allocated to those task processing parties, and use the resulting value as the historical task processing pressure of all task processing parties located in the preset area at that moment. Performing this operation for every other moment in the historical record yields the historical task processing pressure at each moment.
The server then sorts the historical task processing pressures in order of magnitude, finds the pressure in the middle of the sorted sequence, determines the preset task processing pressure threshold from the found historical task processing pressure, and stores the threshold locally on the server.
Determining a preset task processing pressure threshold according to the searched historical task processing pressure, wherein the step of determining the preset task processing pressure threshold comprises the following steps:
the found historical task processing pressure is used as a preset task processing pressure threshold, or the found historical task processing pressure is multiplied by a preset coefficient to obtain a numerical value which is used as the preset task processing pressure threshold, where the preset coefficient may be a numerical value set in the server by a technician in advance, for example, 0.9, 1.5, or 2.0, and the like, and the present invention is not limited thereto.
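By way of example and not limitation, the threshold derivation described above can be sketched as follows, assuming the historical task processing pressures have already been computed for each past moment and that the value in the middle of the sorted sequence is taken as a median-like element; all names and sample values are hypothetical.

```python
def preset_pressure_threshold(historical_pressures: list, coefficient: float = 1.0) -> float:
    """Hypothetical sketch: sort the historical task processing pressures, take the
    value in the middle of the sorted sequence, and optionally scale it by a preset
    coefficient (e.g. 0.9, 1.5 or 2.0)."""
    if not historical_pressures:
        raise ValueError("historical task processing pressures are required")
    ordered = sorted(historical_pressures)
    middle = ordered[len(ordered) // 2]  # pressure in the middle of the sorted sequence
    return middle * coefficient


# Historical pressures at past moments (unprocessed tasks / allocated units at each moment).
history = [1.2, 0.8, 2.5, 1.9, 3.1]
print(preset_pressure_threshold(history))       # -> 1.9 (middle value used directly)
print(preset_pressure_threshold(history, 2.0))  # -> 3.8 (middle value times a preset coefficient)
```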
Therefore, in this step, the server may obtain the preset task processing pressure threshold from local storage and compare the task processing pressure with it. When the task processing pressure is greater than the preset task processing pressure threshold, the number of unprocessed to-be-processed tasks of all task processing parties located in the preset area is large. Since the number of processing units allocated to those task processing parties is limited and fixed, each processing unit must handle a large number of to-be-processed tasks, and each processing unit needs a certain amount of time to process each task. In this situation, if another task requesting party continues to send the server a generation request for a new to-be-processed task that requires processing by a certain task processing party in the preset area, then even if the server allocates the new task to that task processing party and the task processing party assigns it to one of the processing units, the processing unit cannot finish the new task within the time period that starts when the server receives the generation request and lasts for the default processing duration; in other words, the task processing party cannot finish the new task within that period, which gives that task requesting party a poor experience.
Therefore, in order to improve the experience of the other task requesting party, in response to the task processing pressure being greater than the preset task processing pressure threshold, the task processing pressure interval in which the task processing pressure is located needs to be determined among a plurality of preset different task processing pressure intervals. And then, for any task processing party in the preset area, acquiring the attribute information of the task processing party, determining a task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relation among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state. And executing the operation for each other task processing party in the preset area, so that the task processing states of all the task processing parties in the preset area can be set.
In this embodiment of the present invention, the attribute information of the task processing party may be the nature of the task processing party itself. For example, when the task processing party is a merchant, the attribute information may indicate the type of merchant, such as a group self-operated merchant, an individual ordinary merchant, an individual purchasing merchant, and the like. The embodiment of the present invention is not limited thereto.
In an embodiment of the invention, the task processing state comprises at least one of: whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
In the embodiment of the invention, in the preset correspondence among the attribute information of the task processing party, the task processing pressure interval and the task processing state, the processing state in each record may be any one of the following, or any combination of two or all three of them: whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
The method for determining the task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals comprises the following steps:
and determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relation between the task processing pressure and the task processing pressure interval.
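By way of example and not limitation, the interval lookup and the correspondence among attribute information, pressure interval and task processing state can be sketched as follows; the interval boundaries, attribute values and state values in the tables are invented for illustration only.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TaskProcessingState:
    """Hypothetical task processing state set for a task processing party."""
    accepts_new_tasks: bool = True
    extra_processing_minutes: int = 0  # additional processing duration for new tasks
    extra_resource_amount: int = 0     # additional amount of resources for new tasks


# Preset correspondence between task processing pressure and pressure intervals
# (lower bound inclusive, upper bound exclusive); the boundaries are invented.
PRESSURE_INTERVALS = {
    (0.0, 2.0): "moderate",
    (2.0, 4.0): "high",
    (4.0, float("inf")): "extreme",
}

# Preset correspondence among attribute information, pressure interval and state;
# the attribute values and states are invented examples.
STATE_TABLE = {
    ("self_operated_merchant", "high"): TaskProcessingState(extra_processing_minutes=15),
    ("ordinary_merchant", "high"): TaskProcessingState(extra_resource_amount=3),
    ("ordinary_merchant", "extreme"): TaskProcessingState(accepts_new_tasks=False),
}


def interval_for(pressure: float) -> Optional[str]:
    # Determine the task processing pressure interval in which the pressure falls.
    for (low, high), name in PRESSURE_INTERVALS.items():
        if low <= pressure < high:
            return name
    return None


def state_for(attribute: str, pressure: float) -> TaskProcessingState:
    # Look up the state corresponding jointly to the attribute information and
    # the determined pressure interval; fall back to an unrestricted default state.
    return STATE_TABLE.get((attribute, interval_for(pressure)), TaskProcessingState())


print(state_for("ordinary_merchant", 2.8))  # extra_resource_amount=3
print(state_for("ordinary_merchant", 5.0))  # accepts_new_tasks=False
```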
In response to the task processing pressure being less than or equal to the preset task processing pressure threshold, the number of unprocessed to-be-processed tasks of all task processing parties located in the preset area is small, so each processing unit allocated to those task processing parties has only a small number of to-be-processed tasks to handle. In this case, if another task requesting party continues to send the server a generation request for a new to-be-processed task that requires processing by a certain task processing party in the preset area, then after the server receives the generation request and allocates the new task to that task processing party, the task processing party assigns it to one of the processing units, and the processing unit can finish the new task within the time period that starts when the server receives the generation request and lasts for the default processing duration; in other words, the task processing party can finish the new task within that period, so the experience of the other task requesting party is not degraded. Thus, in response to the task processing pressure being less than or equal to the preset task processing pressure threshold, the flow ends.
When each processing unit allocated to all task processing parties located in the preset area has a large number of to-be-processed tasks to handle, and another task requesting party continues to send the server a generation request for a new to-be-processed task that requires processing by a certain task processing party in the preset area, the server needs to avoid the situation in which it allocates the new task to the task processing party, the task processing party assigns it to a processing unit, and the processing unit still cannot finish the new task within the time period that starts when the server receives the generation request and lasts for the default processing duration, which would give that task requesting party a poor experience. The embodiments described below address this situation.
In an embodiment of the present invention, when the server receives a generation request for a new to-be-processed task that requires processing by the task processing party located in the preset area, the server may add a processing duration on top of the default processing duration to obtain a delayed processing duration, which is greater than the default processing duration, and use it as the processing duration of the new to-be-processed task. The server then sends the delayed processing duration to the other task requesting party, which, upon receiving it, decides whether the delayed processing duration is acceptable.
If the other task requesting party cannot accept the delayed processing duration, it may send the server an instruction to cancel generation of the new to-be-processed task; when the server receives this cancellation instruction, the flow ends.
If the other task requesting party can accept the delayed processing duration, it may send the server an instruction to confirm generation of the new to-be-processed task; when the server receives this confirmation instruction, it generates the new to-be-processed task and sets its processing duration to the delayed processing duration. Since the other task requesting party has already accepted the delayed processing duration, as long as the task processing party finishes the new task within the time period that starts when the server receives the generation request and lasts for the delayed processing duration, the experience of the other task requesting party is prevented from being degraded, even if the task is not finished within the default processing duration.
In another embodiment of the present invention, the server may set the task processing party so that it cannot continue to receive new to-be-processed tasks. In that case, if a task requesting party sends the server a generation request for a new to-be-processed task that requires processing by the task processing party located in the preset area, the server refuses to generate it, so the question of whether the task processing party would finish the new task within the time period that starts when the server receives the generation request and lasts for the default processing duration does not arise, and the experience of other task requesting parties is likewise prevented from being degraded. When the task processing pressure of all task processing parties located in the preset area falls back to less than or equal to the preset task processing pressure threshold, the server resets the task processing party so that it can continue to receive new to-be-processed tasks.
In another embodiment of the present invention, if a task requester needs to send a generation request to a server for generating a to-be-processed task that requires a task handler located in the preset area to process, the task requester needs to provide the task handler with the amount of resources required by the task handler to process the to-be-processed task, otherwise, the server refuses to generate the to-be-processed task.
For example, when the server receives a generation request for generating a to-be-processed task that requires the task processor located in the preset area to process, the server needs to send the amount of resources required by the task processor to process a new to-be-processed task to the task requester so that the task requester can decide whether to accept the amount of resources; when the task requester receives the amount of the resource, the task requester may determine whether the amount of the resource can be accepted.
If the task requesting party cannot accept the amount of resources, it may send the server an instruction to cancel generation of the to-be-processed task; when the server receives the cancellation instruction sent by the task requesting party, the flow ends.
If the task requesting party can accept the amount of resources, it may send the server an instruction to confirm generation of the to-be-processed task; when the server receives the confirmation instruction sent by the task requesting party, it generates the to-be-processed task and allocates it to the task processing party, so that the task processing party assigns it to a processing unit for processing.
In a case where the number of to-be-processed tasks to be processed respectively by each task processing unit equipped for all task processing parties located in the preset area is large, if the server receives a generation request for generating a new to-be-processed task that requires processing by the task processing party located in the preset area, the server may increase the number of resources required by the task processing party for processing the to-be-processed task, and then send the increased number of required resources to the other task requesting party, so that the other task requesting party determines whether the increased number of required resources can be accepted.
In the embodiment of the present invention, if the amount of resources required by the task processing party to process a to-be-processed task increases, the task requesting party will generally be less willing to ask the server to generate a new to-be-processed task that requires processing by the task processing party located in the preset area.
Therefore, if the other task requester cannot accept the increased amount of the required resources, an instruction for canceling the generation of the new to-be-processed task may be sent to the server, and when the server receives the instruction for canceling the generation of the new to-be-processed task sent by the other task requester, the server may not generate the new to-be-processed task, so that there is no problem that whether the task handler will process the new to-be-processed task within a time period taking the time when the server receives the generation request as the starting time and the duration as the default processing duration, and the experience of the other task requester may also be avoided from being reduced.
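By way of example and not limitation, the three ways of applying a task processing state when a generation request arrives (refusing new tasks, quoting a delayed processing duration, and quoting an increased resource amount) can be sketched together as follows; the default duration, field names and return format are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class TaskProcessingState:
    accepts_new_tasks: bool = True
    extra_processing_minutes: int = 0
    extra_resource_amount: int = 0


DEFAULT_PROCESSING_MINUTES = 30  # illustrative default processing duration


def handle_generation_request(state: TaskProcessingState, base_resource_amount: int) -> dict:
    """Hypothetical server-side handling of a generation request under a given state;
    the returned dictionary stands in for what the server sends the task requesting
    party before the new to-be-processed task is actually generated."""
    if not state.accepts_new_tasks:
        # The task processing party cannot continue to receive new to-be-processed
        # tasks, so the server refuses to generate one.
        return {"accepted": False, "reason": "task processing party not accepting new tasks"}

    # Otherwise quote a (possibly delayed) processing duration and a (possibly
    # increased) resource amount, so the requesting party can confirm or cancel.
    return {
        "accepted": True,
        "quoted_processing_minutes": DEFAULT_PROCESSING_MINUTES + state.extra_processing_minutes,
        "quoted_resource_amount": base_resource_amount + state.extra_resource_amount,
    }


print(handle_generation_request(TaskProcessingState(extra_processing_minutes=15), 5))
print(handle_generation_request(TaskProcessingState(accepts_new_tasks=False), 5))
```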
In the embodiment of the invention, the task processing pressure of all task processing parties located in a preset area at the current moment of the server is detected; in response to the task processing pressure being greater than a preset task processing pressure threshold, the task processing pressure interval in which the task processing pressure falls is determined among a plurality of preset task processing pressure intervals, and the task processing state of each task processing party located in the preset area is set accordingly. The task processing state includes at least one of: whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter. Therefore, when another task requesting party sends the server a generation request for a new to-be-processed task that requires the task processing party to process, the experience of that task requesting party is prevented from being degraded.
Fig. 2 is a block diagram illustrating a task processing device according to an example embodiment. Referring to fig. 2, the apparatus includes:
the detection module 11 is configured to detect task processing pressures of all task processing parties located in a preset area at the current time of the server;
and the setting module 12 is configured to, in response to that the task processing pressure is greater than a preset task processing pressure threshold, respectively set a task processing state of each task processing party located in the preset area according to the task processing pressure.
Wherein the task processing state comprises at least one of:
whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
Wherein the unprocessed to-be-processed tasks comprise:
to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that the task processing party has assigned to a processing unit but that the processing unit has not yet finished processing, the processing unit being used to process tasks.
Wherein the detection module 11 comprises:
the first acquisition unit is used for acquiring the total number of unprocessed tasks to be processed of all task processing parties in a preset area at the current moment of the server;
a second obtaining unit, configured to obtain the allocation number of the processing units that are allocated to the task processing party in the preset area in advance;
and the first determining unit is used for determining the task processing pressure of all the task processing parties located in a preset area at the current moment of the server according to the total number and the allocation number.
Wherein the setting module 12 includes:
the second determining unit is used for determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals;
the setting unit is used for acquiring the attribute information of the task processing party for any task processing party in the preset area, determining the task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relation among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state.
Wherein the second determination unit includes:
and the determining subunit is used for determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relationship between the task processing pressure and the task processing pressure interval.
In the embodiment of the invention, the task processing pressure of all task processing parties located in a preset area at the current moment of the server is detected; in response to the task processing pressure being greater than a preset task processing pressure threshold, the task processing pressure interval in which the task processing pressure falls is determined among a plurality of preset task processing pressure intervals, and the task processing state of each task processing party located in the preset area is set accordingly. The task processing state includes at least one of: whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter. Therefore, when another task requesting party sends the server a generation request for a new to-be-processed task that requires the task processing party to process, the experience of that task requesting party is prevented from being degraded.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the embodiments of the invention following, in general, the principles of the embodiments of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the embodiments of the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the embodiments of the invention being indicated by the following claims.
It is to be understood that the embodiments of the present invention are not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of embodiments of the invention is limited only by the appended claims.
Claims (10)
1. A task processing method, comprising:
detecting the task processing pressure of all task processing parties located in a preset area at the current moment of a server, wherein a plurality of processing units are provided for the task processing party, and the task processing party assigns the to-be-processed tasks allocated by the server to the processing units for processing;
respectively setting the task processing state of each task processing party in the preset area according to the task processing pressure in response to the fact that the task processing pressure is larger than a preset task processing pressure threshold value;
the detecting task processing pressure of all task processing parties located in a preset area at the current moment of the server includes: acquiring the total number of unprocessed tasks to be processed of all task processing parties in a preset area at the current moment of the server; acquiring the allocation quantity of processing units allocated for the task processing party in the preset area in advance; and determining the task processing pressure of all task processing parties located in a preset area at the current moment of the server according to the total quantity and the allocation quantity.
2. The method of claim 1, wherein the task processing state comprises at least one of:
whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
3. The method according to claim 1 or 2, wherein the unprocessed to-be-processed tasks comprise:
to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that the task processing party has assigned to a processing unit but that the processing unit has not yet finished processing, the processing unit being used to process tasks.
4. The method according to claim 3, wherein the setting of the task processing state of each task processing party located in the preset area according to the task processing pressure comprises:
determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals;
and for any task processing party located in the preset area, acquiring attribute information of the task processing party, determining a task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relationship among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state.
5. The method according to claim 4, wherein the determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals comprises:
and determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relation between the task processing pressure and the task processing pressure interval.
6. A task processing apparatus, comprising:
the detection module is used for detecting the task processing pressure of all task processing parties located in a preset area at the current moment of the server, wherein a plurality of processing units are provided for the task processing party, and the task processing party assigns the to-be-processed tasks allocated by the server to the processing units for processing;
the setting module is used for responding to the fact that the task processing pressure is larger than a preset task processing pressure threshold value, and respectively setting the task processing state of each task processing party in the preset area according to the task processing pressure;
wherein the detection module comprises:
the first acquisition unit is used for acquiring the total number of unprocessed tasks to be processed of all task processing parties in a preset area at the current moment of the server;
a second obtaining unit, configured to obtain the allocation number of the processing units that are allocated to the task processing party in the preset area in advance;
and the first determining unit is used for determining the task processing pressure of all the task processing parties located in a preset area at the current moment of the server according to the total number and the allocation number.
7. The apparatus of claim 6, wherein the task processing state comprises at least one of:
whether the task processing party can continue to receive new to-be-processed tasks; the additional processing duration required for new to-be-processed tasks received by the task processing party thereafter; and the additional amount of resources required for new to-be-processed tasks received by the task processing party thereafter.
8. The apparatus according to claim 6 or 7, wherein the unprocessed to-be-processed tasks comprise:
to-be-processed tasks that the task processing party has not yet assigned to a processing unit, and to-be-processed tasks that the task processing party has assigned to a processing unit but that the processing unit has not yet finished processing, the processing unit being used to process tasks.
9. The apparatus of claim 8, wherein the setup module comprises:
the second determining unit is used for determining a task processing pressure interval in which the task processing pressure is located in a plurality of preset different task processing pressure intervals;
the setting unit is used for acquiring the attribute information of the task processing party for any task processing party in the preset area, determining the task processing state simultaneously corresponding to the attribute information of the task processing party and the determined task processing pressure interval according to the preset corresponding relation among the attribute information of the task processing party, the task processing pressure interval and the task processing state, and setting the task processing state of the task processing party as the determined task processing state.
10. The apparatus according to claim 9, wherein the second determining unit comprises:
and the determining subunit is used for determining a task processing pressure interval corresponding to the task processing pressure according to a preset corresponding relationship between the task processing pressure and the task processing pressure interval.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610946065.4A CN107092526B (en) | 2016-11-02 | 2016-11-02 | Task processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610946065.4A CN107092526B (en) | 2016-11-02 | 2016-11-02 | Task processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107092526A CN107092526A (en) | 2017-08-25 |
CN107092526B true CN107092526B (en) | 2021-06-15 |
Family
ID=59649287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610946065.4A Active CN107092526B (en) | 2016-11-02 | 2016-11-02 | Task processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107092526B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111988812B (en) * | 2019-05-21 | 2021-10-29 | 大唐移动通信设备有限公司 | Method and device for setting threshold |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6650620B1 (en) * | 1999-05-04 | 2003-11-18 | Tut Systems, Inc. | Resource constrained routing in active networks |
CN101504740A (en) * | 2009-03-19 | 2009-08-12 | 钟明 | Network ordering system and method |
CN102201096A (en) * | 2010-03-26 | 2011-09-28 | 吴凤瑞 | Online shopping delivery automatic ordering method |
CN104680383A (en) * | 2015-01-16 | 2015-06-03 | 上海我有信息科技有限公司 | Dispatch reminding system and method for order processing |
CN104680384A (en) * | 2015-01-16 | 2015-06-03 | 上海我有信息科技有限公司 | Timing order processing system and method |
CN105844349A (en) * | 2016-03-21 | 2016-08-10 | 上海壹佰米网络科技有限公司 | Method and system for automatically distributing orders |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101091164A (en) * | 2004-05-20 | 2007-12-19 | Bea系统公司 | System and method for application server with self-tuned threading model |
DE102012221355A1 (en) * | 2012-11-22 | 2014-05-22 | Siemens Aktiengesellschaft | Method for providing resources in a cloud and device |
CN104331328B (en) * | 2013-07-22 | 2018-06-12 | 中国电信股份有限公司 | Schedule virtual resources method and schedule virtual resources device |
CN104142862B (en) * | 2013-12-16 | 2015-09-16 | 腾讯科技(深圳)有限公司 | The overload protection method of server and device |
CN105471614A (en) * | 2014-09-11 | 2016-04-06 | 腾讯科技(深圳)有限公司 | Overload protection method and device and server |
CN104468506A (en) * | 2014-10-28 | 2015-03-25 | 大唐移动通信设备有限公司 | Session state detection method and device |
- 2016-11-02: Application CN201610946065.4A filed in China; granted as CN107092526B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN107092526A (en) | 2017-08-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 100085 Beijing, Haidian District on the road to the information on the ground floor of the 1 to the 3 floor of the 2 floor, room 11, 202 Applicant after: Beijing Xingxuan Technology Co.,Ltd. Address before: 100085 Beijing, Haidian District on the road to the information on the ground floor of the 1 to the 3 floor of the 2 floor, room 11, 202 Applicant before: Beijing Xiaodu Information Technology Co.,Ltd.
|
GR01 | Patent grant | ||