CN113449994A - Assignment method, assignment device, electronic device, medium, and program product for job ticket - Google Patents


Info

Publication number
CN113449994A
CN113449994A (publication) · CN202110729910.3A (application)
Authority
CN
China
Prior art keywords
queue, processing, target, task list, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110729910.3A
Other languages
Chinese (zh)
Inventor
唐韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110729910.3A priority Critical patent/CN113449994A/en
Publication of CN113449994A publication Critical patent/CN113449994A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06316Sequencing of tasks or work

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present disclosure provides an assignment method, apparatus, electronic device, medium, and program product for a task list, which may be used in the financial field or other fields. The method comprises the following steps: in response to receiving a target task list of a target user for a specified service, acquiring a user identifier that characterizes the user priority of the target user; acquiring a pre-created candidate processing queue set comprising a first processing queue with a high processing priority and a second processing queue with a low processing priority, wherein the second processing queue comprises a plurality of sub-queues pre-configured with different weights; in response to the user identifier indicating that the target user has a high user priority, allocating the target task list to the first processing queue for storage; in response to the user identifier indicating that the target user has a low user priority, acquiring a service identifier that characterizes the service priority of the specified service; and allocating the target task list, according to the service identifier and the weight of each sub-queue, to a target sub-queue determined from the plurality of sub-queues for storage.

Description

Assignment method, assignment device, electronic device, medium, and program product for job ticket
Technical Field
The present disclosure relates to the field of financial technology, and in particular, to a method and an apparatus for assigning a task list, an electronic device, a medium, and a program product.
Background
This section is intended to provide a background or context to the embodiments of the disclosure recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
As consumer finance has continued to extend to a broader customer base in recent years, the advance of inclusive finance has led the traditional approval business to gradually face the high-concurrency challenges typical of Internet scenarios.
However, the task-list distribution schemes in the related art have many defects: task processing is inefficient and insufficient to cope with high concurrency.
Disclosure of Invention
In view of the above, in order to at least partially overcome the above technical problems in the distribution solutions of the related art, the present disclosure provides a distribution method, apparatus, electronic device, medium, and program product for a task list that can cope with high-concurrency scenarios.
In order to achieve the above object, an aspect of the present disclosure provides a method of allocating a task list, which may include: in response to receiving a target task list of a target user for a specified service, acquiring a user identifier of the target user, wherein the user identifier is used to characterize the user priority of the target user; acquiring a pre-created candidate processing queue set, wherein the candidate processing queue set comprises a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue comprises a plurality of sub-queues pre-configured with different weights; in response to the user identifier indicating that the target user has a high user priority, allocating the target task list to the first processing queue for storage; in response to the user identifier indicating that the target user has a low user priority, acquiring a service identifier of the specified service, wherein the service identifier is used to characterize the service priority of the specified service; and allocating the target task list, according to the service identifier and the weight of each sub-queue, to a target sub-queue determined from the plurality of sub-queues for storage.
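The dispatch steps above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the `Dispatcher` class, the rule for mapping a service priority onto a sub-queue, and all names are assumptions.

```python
from collections import deque

class Dispatcher:
    """Illustrative two-level dispatcher: user priority first, then service priority."""

    def __init__(self, sub_queue_weights):
        self.first_queue = deque()                               # high processing priority
        self.sub_queues = [deque() for _ in sub_queue_weights]   # second processing queue
        self.weights = list(sub_queue_weights)                   # pre-configured weights

    def dispatch(self, task_list, high_user_priority, service_priority):
        if high_user_priority:
            # High user priority: store directly in the first processing queue.
            self.first_queue.append(task_list)
            return "first"
        # Low user priority: choose a target sub-queue from the service
        # priority and the weights. The concrete rule here (heaviest of the
        # first `service_priority` sub-queues) is an assumption; the patent
        # only states that the service identifier and weights are combined.
        limit = min(max(service_priority, 1), len(self.sub_queues))
        target = max(range(limit), key=lambda i: self.weights[i])
        self.sub_queues[target].append(task_list)
        return f"sub-{target}"
```

A higher service priority widens the set of candidate sub-queues in this sketch, so more important services can land in a more heavily weighted (hence more frequently served) sub-queue.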
According to an embodiment of the present disclosure, the method for allocating the task list may further include: in response to a processing failure of the target task list stored in the first processing queue, detecting whether the number of retries of the target task list exceeds a first retry threshold; if the first retry threshold is not exceeded, incrementing the retry count of the target task list by one; and moving the target task list to the tail of the first processing queue to await reprocessing.
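This retry behavior can be sketched as follows. The threshold value, the function name, and the use of a retry-count dictionary are assumptions for illustration; the escalation target corresponds to the third processing queue described in the surrounding embodiments.

```python
from collections import deque

FIRST_RETRY_THRESHOLD = 3  # assumed value; the disclosure leaves it unspecified

def on_processing_failure(first_queue, manual_queue, retries, task_id):
    """Requeue a failed task list at the tail of its queue, or escalate it
    for manual handling once its retry count exceeds the threshold."""
    if retries.get(task_id, 0) < FIRST_RETRY_THRESHOLD:
        retries[task_id] = retries.get(task_id, 0) + 1  # add one to the retry count
        first_queue.remove(task_id)
        first_queue.append(task_id)                     # tail: wait to be processed again
        return "requeued"
    first_queue.remove(task_id)
    manual_queue.append(task_id)                        # handled manually from here on
    return "escalated"
```

Requeueing at the tail rather than the head keeps one repeatedly failing task list from blocking everything behind it.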
According to an embodiment of the present disclosure, the candidate processing queue set may further include a third processing queue, and the method for allocating the task list may further include: and moving the target task list to the third processing queue when the first retry number threshold value is exceeded.
According to an embodiment of the present disclosure, the method for allocating the task list may further include: and in response to the successful processing of the target task list stored in the first processing queue, removing the target task list from the first processing queue.
According to an embodiment of the present disclosure, the method for allocating the task list may further include: responding to the processing failure of the target task list stored in the target sub-queue, and detecting whether the retry times of the target task list exceed a second retry time threshold value; adding one to the retry number of the target task list when the second retry number threshold is not exceeded; and moving the target task list to the tail of the target sub-queue to wait for processing again.
According to an embodiment of the present disclosure, the candidate processing queue set may further include a third processing queue, and the method for allocating the task list may further include: and if the second retry number threshold is exceeded, moving the target task list to the third processing queue.
According to an embodiment of the present disclosure, the method for allocating the task list may further include: and in response to the successful processing of the target task list stored in the target sub-queue, moving the target task list out of the target sub-queue.
According to an embodiment of the present disclosure, the method for allocating the task list may further include: and updating the weights of the plurality of sub-queues to obtain updated weights.
According to an embodiment of the present disclosure, updating the weights of the plurality of sub-queues to obtain updated weights may include: acquiring the weight of each sub-queue; determining, based on the weight of each sub-queue, the sum of the weights of the non-target sub-queues other than the target sub-queue; updating the weight of the target sub-queue according to the difference between the weight of the target sub-queue and that sum, to obtain an updated weight; and updating the weight of each non-target sub-queue according to the sum of that sub-queue's current weight and its own pre-configured weight, to obtain an updated weight.
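Read this way, the update resembles a smooth weighted round-robin: the served sub-queue's weight drops by the others' total, while every other sub-queue gains its pre-configured weight back. The sketch below follows that reading; the function name, the list representation, and the interpretation of the last update step are assumptions, not the patent's text.

```python
def update_weights(current, configured, target_idx):
    """Update sub-queue weights after the target sub-queue has been served.

    `current` holds the working weights, `configured` the pre-configured ones.
    """
    # Sum of the weights of the non-target sub-queues.
    others = sum(w for i, w in enumerate(current) if i != target_idx)
    updated = []
    for i, w in enumerate(current):
        if i == target_idx:
            updated.append(w - others)         # difference with the non-target sum
        else:
            updated.append(w + configured[i])  # current weight plus configured weight
    return updated
```

Starting from weights (2, 3, 4) and always serving the heaviest sub-queue, the weights rise and fall in turn, which spreads dispatches across the sub-queues roughly in proportion to their configured weights instead of starving the lighter ones.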
According to an embodiment of the present disclosure, the method for allocating the task list may further include: in response to the target task list being moved to the third processing queue, sending a notification message to prompt manual processing of the target task list.
To achieve the above object, another aspect of the present disclosure provides an apparatus for assigning a task list, which may include: a first obtaining module, configured to obtain a user identifier of a target user in response to receiving a target task list of the target user for a specified service, where the user identifier is used to characterize the user priority of the target user; a second obtaining module, configured to obtain a pre-created candidate processing queue set, where the candidate processing queue set includes a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue includes multiple sub-queues pre-configured with different weights; a first allocating module, configured to allocate the target task list to the first processing queue for storage in response to the user identifier indicating that the target user has a high user priority; a third obtaining module, configured to obtain a service identifier of the specified service in response to the user identifier indicating that the target user has a low user priority, where the service identifier is used to characterize the service priority of the specified service; and a second allocating module, configured to allocate the target task list, according to the service identifier and the weight of each sub-queue, to a target sub-queue determined from the plurality of sub-queues for storage.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: a first detection module, configured to detect whether a retry number of the target task list exceeds a first retry number threshold in response to a processing failure of the target task list stored in the first processing queue; the first processing module is used for adding one to the retry times of the target task list under the condition that the first retry time threshold value is not exceeded; and the first moving module is used for moving the target task list to the tail of the first processing queue to wait for processing again.
According to an embodiment of the present disclosure, the candidate processing queue set may further include a third processing queue, and the apparatus for allocating a task list may further include: and a second moving module, configured to move the target task list to the third processing queue when the first retry number threshold is exceeded.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: and the first shifting-out module is used for responding to the successful processing of the target task list stored in the first processing queue and shifting the target task list out of the first processing queue.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: a second detection module, configured to detect whether the retry count of the target task list exceeds a second retry threshold in response to a processing failure of the target task list stored in the target sub-queue; a second processing module, configured to increment the retry count of the target task list by one if the second retry threshold is not exceeded; and a third moving module, configured to move the target task list to the tail of the target sub-queue to await reprocessing.
According to an embodiment of the present disclosure, the candidate processing queue set may further include a third processing queue, and the apparatus for allocating a task list may further include: a fourth moving module, configured to move the target task list to the third processing queue when the second retry threshold is exceeded.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: and the second shifting-out module is used for responding to the successful processing of the target task list stored in the target sub-queue and shifting the target task list out of the target sub-queue.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: and the first updating module is used for updating the weights of the plurality of sub queues to obtain the updated weights.
According to an embodiment of the present disclosure, the first updating module may include: an obtaining submodule, configured to obtain the weight of each sub-queue; a determining submodule, configured to determine, based on the weight of each sub-queue, the sum of the weights of the non-target sub-queues other than the target sub-queue; a first updating submodule, configured to update the weight of the target sub-queue according to the difference between the weight of the target sub-queue and that sum, to obtain an updated weight; and a second updating submodule, configured to update the weight of each non-target sub-queue according to the sum of that sub-queue's current weight and its own pre-configured weight, to obtain an updated weight.
According to an embodiment of the present disclosure, the apparatus for allocating a task list may further include: a sending module, configured to send, in response to the target task list being moved to the third processing queue, a notification message prompting manual processing of the target task list.
In order to achieve the above object, another aspect of the present disclosure provides an electronic device including: one or more processors, a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the assignment method for the task list as described above.
To achieve the above object, another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the assignment method of a task sheet as described above when executed.
To achieve the above object, another aspect of the present disclosure provides a computer program comprising computer executable instructions for implementing the assignment method of a task sheet as described above when executed.
According to the task-list distribution method of the present disclosure, a candidate processing queue set comprising a first processing queue with a high processing priority and a second processing queue with a low processing priority is created in advance, and target task lists are distributed to queues of different processing priorities according to user priority and service priority, so that processing proceeds first by user priority and then by service priority. This can at least partially solve, reduce, mitigate, or even avoid the problems in the related art that the task-list distribution mode is deficient, task processing is inefficient, and high-concurrency scenarios cannot be handled, thereby improving task processing efficiency and efficiently coping with massive task lists in high-concurrency scenarios.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a system architecture of a method, apparatus, electronic device, medium and program product for assignment of a task sheet suitable for use with embodiments of the present disclosure;
FIG. 2 schematically illustrates an application scenario of a task sheet assignment method, apparatus, electronic device, medium, and program product suitable for use in embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of assignment of a task sheet according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of assignment of a task sheet according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an apparatus for assignment of a task sheet according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a computer-readable storage medium product suitable for implementing the assignment method of a job ticket described above according to an embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of an electronic device adapted to implement the above described assignment method of a task sheet according to an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
It should be noted that the figures are not drawn to scale and that elements of similar structure or function are generally represented by like reference numerals throughout the figures for illustrative purposes.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components. All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable task list distribution apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
In the related art, the task-list distribution mode has many defects, so that task-list processing is inefficient and insufficient to cope with high-concurrency scenarios.
Accordingly, the present disclosure provides a task-list assignment method, apparatus, electronic device, medium, and program product capable of coping with high-concurrency scenarios. The assignment method may comprise a data acquisition process and a task allocation process. In the data acquisition process, in response to receiving a target task list of a target user for a specified service, a user identifier characterizing the user priority of the target user is first acquired, and then a pre-created candidate processing queue set is acquired, where the set comprises a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue comprises a plurality of sub-queues pre-configured with different weights. After the data is acquired, the task allocation process begins: if the user identifier indicates that the target user has a high user priority, the target task list is allocated directly to the first processing queue for storage; if it indicates a low user priority, a service identifier characterizing the service priority of the specified service is first acquired, and the target task list is then allocated, according to the service identifier and the weight of each sub-queue, to a target sub-queue determined from the plurality of sub-queues for storage.
According to the task-list distribution method provided by the disclosure, with the pre-created candidate processing queue set comprising a first processing queue with a high processing priority and a second processing queue with a low processing priority, target task lists can be distributed to queues of different processing priorities according to user priority and service priority, so that processing proceeds first by user priority and then by service priority. This at least partially solves, alleviates, mitigates, or even avoids the problems in the related art that deficient task-list distribution modes make task processing inefficient and unable to handle high-concurrency scenarios, thereby achieving the technical effects of improving task processing efficiency and effectively handling high-concurrency scenarios.
It should be noted that the task-list assignment method, apparatus, electronic device, medium, and program product provided by the present disclosure may be used in the financial field and in any other field; the present disclosure therefore does not limit their fields of application.
Fig. 1 schematically illustrates a system architecture 100 of a method, apparatus, electronic device, medium, and program product for assignment of a task sheet suitable for use in embodiments of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the assignment method of the task list provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the distribution device of the task list provided by the embodiment of the present disclosure may be generally disposed in the server 105. The assignment method of the task list provided by the embodiment of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the distribution device of the task list provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates an application scenario of a task sheet distribution method, apparatus, electronic device, medium, and program product suitable for the embodiments of the present disclosure.
As shown in fig. 2, in an application scenario 200 of processing task lists under high concurrency, the task lists are usually stored in queues and processed in sequence to improve processing efficiency. Unlike the task-list allocation methods in the related art, the present disclosure adopts a solution that processes task lists of different user priorities and different service priorities differently, using multiple queues of different processing priorities. The multiple queues include a pre-created first processing queue QA, second processing queue QB, and third processing queue QC. The first processing queue QA stores the task lists of VIP customers; the second processing queue QB stores task lists for which service priorities need to be distinguished; and the third processing queue QC stores task lists requiring manual processing after the retry counts in QA or QB reach the upper limit. Because a VIP customer is a high-quality and very important user among the users, whose task lists need to be processed preferentially, the processing priority of the first processing queue QA is higher than that of the second processing queue QB. That is, when task lists are stored in both QA and QB, those in QA are processed first, and those in QB are processed only when no task list awaiting processing remains in QA.
The second processing queue QB includes a plurality of sub-queues pre-configured with different weights. For example, QB may include three sub-queues Q1, Q2, and Q3 with pre-configured weights of 2, 3, and 4, respectively. The higher the weight, the higher the processing priority.
It should be noted that a corresponding queue depth may be set for the first processing queue QA, for each of the sub-queues included in the second processing queue QB, and for the third processing queue QC; these depths may be the same or different. Considering that VIP customers submit fewer task lists than non-VIP customers, the depth of QA is generally smaller than that of QB. The specific queue depths may be set according to the actual number of task lists and the needs of the service scenario, which is not limited in this disclosure.
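The queue layout described above can be sketched in Python. This is a minimal illustration only; the `SubQueue` class and the variable names are assumptions made for the sketch, not part of the disclosure:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SubQueue:
    """One weighted sub-queue of the second processing queue QB."""
    name: str
    weight: int                      # higher weight = higher processing priority
    tasks: deque = field(default_factory=deque)

QA = deque()                         # first processing queue: VIP task lists
QB = [SubQueue("Q1", 2), SubQueue("Q2", 3), SubQueue("Q3", 4)]
QC = deque()                         # third processing queue: manual handling

# QB's sub-queue with the largest weight is served first.
highest = max(QB, key=lambda q: q.weight)
```

Each queue's depth could additionally be enforced by checking `len(...)` against a per-queue limit before enqueuing, in line with the depth discussion above.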
Fig. 3 schematically shows a flow chart of a method of assignment of a task sheet according to an embodiment of the present disclosure. As shown in fig. 3, the method 300 may include operations S310 to S350.
In operation S310, in response to receiving a target task sheet of a target user for a specified service, a user identifier of the target user is obtained.
According to an embodiment of the present disclosure, the user identifier is used to characterize the user priority of the target user. The target user is the user who submits the target task list for the specified service, and may be either a VIP customer or a non-VIP customer. A VIP customer has a high user priority, and a non-VIP customer has a low user priority. A VIP customer may be a customer who meets certain requirements in a service scenario; the definitions of VIP customer, specified service, and target task list all depend on the specific service scenario. For example, for a bank credit card service, a VIP customer is a customer whose assets in the bank have reached a certain amount (e.g., five million) or who has a good credit record and participates in many of the bank's services; the specified service may be a credit card application service; and the target task list may be a credit card application form submitted by either a VIP customer or a non-VIP customer. A task list may contain a task identifier and may also contain task details, which represent the specific content of the task.
In the present disclosure, a queue is a special linear list: it allows delete operations only at the front end (front) of the list and insert operations only at the back end (rear). Like a stack, a queue is a linear list with restricted operations. The end on which insertions are performed is called the tail of the queue, and the end on which deletions are performed is called the head. A queue containing no elements is called an empty queue, and the data elements of a queue are also referred to as queue elements. Inserting an element into a queue is called enqueuing, and removing one is called dequeuing. Since a queue allows insertion only at one end and deletion only at the other, only the element that entered the queue earliest can leave it first, so a queue is also known as a first-in-first-out (FIFO) linear list. The queue may be a memory queue or a message queue, which is not limited in this disclosure.
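The FIFO behavior described here can be demonstrated with Python's `collections.deque`; the element names are illustrative:

```python
from collections import deque

q = deque()            # an empty queue
q.append("t1")         # enqueue at the tail (rear)
q.append("t2")
q.append("t3")
head = q.popleft()     # dequeue from the head (front)
# First-in-first-out: the earliest enqueued element leaves first.
```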
In operation S320, a set of candidate processing queues created in advance is acquired.
According to an embodiment of the present disclosure, a plurality of processing queues are used to store task lists from users. The candidate processing queue set includes a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue includes a plurality of sub-queues pre-configured with different weights. Different weights correspond to different processing priorities: the larger the weight of a sub-queue, the higher its processing priority. The number of sub-queues in the second processing queue may be set according to the number of service priority levels; it may equal that number or exceed it. For example, with two service priority levels (high and low), the second processing queue may include two sub-queues or three. With two sub-queues Q1 and Q2 pre-configured with weights 2 and 3, the task lists stored in Q2 are processed first and those in Q1 afterwards, in descending order of weight. With three sub-queues Q1, Q2, and Q3 pre-configured with weights 2, 3, and 4, the task lists in Q3 are processed first, then those in Q2, and finally those in Q1. Creating a sequential queue structure requires statically allocating or dynamically applying for a contiguous piece of storage space and setting two pointers for management: a head pointer (front) pointing to the head element, and a tail pointer (rear) pointing to the storage location of the next element to be enqueued. The specific creation method is not limited in this disclosure and may be chosen by a person skilled in the art.
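A sequential queue managed by `front` and `rear` pointers over contiguous storage, as described above, might be implemented as a circular array. This is one illustrative realization among many:

```python
class SequentialQueue:
    """Fixed-capacity circular queue managed by front/rear pointers."""

    def __init__(self, capacity: int):
        self.data = [None] * capacity    # contiguous storage, allocated up front
        self.capacity = capacity
        self.front = 0                   # index of the head element
        self.rear = 0                    # index where the next element goes
        self.size = 0

    def enqueue(self, item):
        if self.size == self.capacity:
            raise OverflowError("queue is full")
        self.data[self.rear] = item
        self.rear = (self.rear + 1) % self.capacity   # wrap around
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        item = self.data[self.front]
        self.front = (self.front + 1) % self.capacity
        self.size -= 1
        return item
```

Taking indices modulo the capacity lets the rear pointer wrap back to the start of the array once dequeues have freed space at the front.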
In operation S330, in response to the user identifier indicating that the user priority of the target user is high, the target job ticket is allocated to the first processing queue to store the target job ticket.
According to the embodiment of the disclosure, if the user identifier indicates that the user priority of the target user is high, the target user is a VIP customer and the target user's task list needs to be processed preferentially, so the target task list is allocated to the first processing queue.
In operation S340, in response to that the user identifier represents that the user priority of the target user is low, a service identifier of the designated service is obtained.
According to the embodiment of the disclosure, the service identifier is used to represent the service priority of the specified service. Taking a credit card application service as the specified service, the service identifier may be the card type identifier of the credit card being applied for, with different card types corresponding to different service priorities. For example, if the card types are a new credit card type and an existing credit card type, the service priority of the new type can be configured higher than that of the existing type in order to promote the new product, so that approval task lists for the new credit card are processed before those for the existing one.
In operation S350, the target job ticket is allocated to the target sub-queue determined from the plurality of sub-queues to store the target job ticket according to the service identifier and the weight of each sub-queue.
According to the embodiment of the disclosure, task lists with different service priorities can be stored according to the different weights pre-configured for the sub-queues of the second processing queue, so that a task list with a high service priority is stored in a sub-queue with a high weight. When there are two service priority levels (high and low) and the second processing queue includes two sub-queues Q1 and Q2, sub-queue Q2 may be the target sub-queue determined from the plurality of sub-queues; when the second processing queue includes three sub-queues Q1, Q2, and Q3, sub-queue Q3 may be the target sub-queue.
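Operations S330 through S350 can be sketched as a single routing function. The mapping of a high service priority to the highest-weight sub-queue (and low to the lowest) follows the description above; the function and field names are assumptions for the sketch:

```python
from collections import deque

def assign_task(task, is_vip, high_service_priority, qa, sub_queues):
    """Route a task list as in operations S330-S350: VIP task lists go to
    the first processing queue; non-VIP task lists go to a sub-queue of the
    second processing queue chosen according to service priority."""
    if is_vip:                                    # S330
        qa.append(task)
        return "QA"
    by_weight = sorted(sub_queues, key=lambda q: q["weight"])   # S340/S350
    target = by_weight[-1] if high_service_priority else by_weight[0]
    target["tasks"].append(task)
    return target["name"]

qa = deque()
qb = [{"name": n, "weight": w, "tasks": deque()}
      for n, w in (("Q1", 2), ("Q2", 3), ("Q3", 4))]
```

With the example weights, a non-VIP task for the high-priority service lands in Q3 and one for the low-priority service in Q1, while any VIP task bypasses QB entirely.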
According to the task list allocation method provided by the embodiment of the present disclosure, a candidate processing queue set including a first processing queue with a high processing priority and a second processing queue with a low processing priority is created in advance, and the target task list is allocated to queues with different processing priorities according to user priority and service priority, so that task lists are processed first by user priority and then by service priority. This at least partially mitigates or avoids the problems of low task processing efficiency and inability to handle high-concurrency scenarios caused by the defects of related-art task list allocation, thereby achieving the technical effects of improving task processing efficiency and effectively handling high-concurrency scenarios.
As an alternative embodiment, the method for allocating the task list may further include: responding to the processing failure of the target task list stored in the first processing queue, and detecting whether the retry times of the target task list exceed a first retry time threshold; under the condition that the first retry number threshold value is not exceeded, adding one to the retry number of the target task list; and moving the target task list to the tail of the first processing queue to wait for processing again.
According to the embodiment of the disclosure, the task lists in the first processing queue are processed sequentially, that is, in the chronological order in which they were stored in the queue: the earlier a task list is stored, the earlier it is executed. This ensures that every task in the first processing queue is executed once.
In a specific implementation, the execution of the target task list may either succeed or fail. In the case of failure, the target task list may be retried, but to prevent unlimited retries from occupying the first processing queue, the present disclosure may set a first retry number threshold in advance as the maximum number of retries. Thus, when the retry count does not exceed the threshold, the target task list is moved to the tail of the first processing queue to wait for processing again.
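The retry logic of this embodiment (and the move to the third processing queue described next) can be sketched as follows. The threshold value 3 and all names are illustrative assumptions:

```python
from collections import deque

FIRST_RETRY_LIMIT = 3   # illustrative value for the first retry number threshold

def handle_failure(task, queue, manual_queue):
    """On a processing failure: while under the threshold, increment the
    retry count and move the task list to the queue tail to wait again;
    otherwise hand it over to the manual processing queue."""
    if task["retries"] < FIRST_RETRY_LIMIT:
        task["retries"] += 1
        queue.append(task)            # wait at the tail to be processed again
    else:
        manual_queue.append(task)     # threshold exceeded: manual processing

qa, qc = deque(), deque()
task = {"id": "T1", "retries": 0}
for _ in range(4):                    # simulate four consecutive failures
    handle_failure(task, qa, qc)
    if qa:
        qa.popleft()                  # take it off the head for the next attempt
```

After three failed retries the fourth failure routes the task list to the manual queue instead of back into QA, so a permanently failing task cannot occupy the processing queue indefinitely.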
As an alternative embodiment, the candidate processing queue set may further include a third processing queue, and the method for allocating the task list may further include: in the event that the first retry number threshold has been exceeded, moving the target task sheet to a third processing queue.
In order to avoid the situation that the target task list is stored in the first processing queue again, according to the embodiment of the disclosure, in the case that the retry number exceeds the threshold value, the target task list is moved to the third processing queue for manual processing.
As an alternative embodiment, the method for allocating the task list may further include: and in response to the successful processing of the target task list stored in the first processing queue, moving the target task list out of the first processing queue.
According to the embodiment of the disclosure, in order to avoid a successfully processed target task list continuing to occupy the first processing queue, the target task list is moved out of the first processing queue, which releases the queue's resources and relieves processing pressure.
As an alternative embodiment, the method for allocating the task list may further include: responding to the processing failure of the target task list stored in the target sub-queue, and detecting whether the retry times of the target task list exceed a second retry time threshold value; adding one to the retry number of the target task list under the condition that the second retry number threshold value is not exceeded; and moving the target task list to the tail of the target sub-queue to wait for processing again.
According to the embodiment of the disclosure, the execution of the target task list may either succeed or fail. In the case of failure, the target task list may be retried, but to prevent unlimited retries from occupying the target sub-queue, the present disclosure may set a second retry number threshold in advance as the maximum number of retries. Thus, when the retry count does not exceed the threshold, the target task list is moved to the tail of the target sub-queue to wait for processing again.
As an alternative embodiment, the candidate processing queue set may further include a third processing queue, and the method for allocating the task list may further include: in the event that the second retry number threshold has been exceeded, moving the target task sheet to a third processing queue.
In order to avoid the situation that the target task list is stored in the target sub-queue again, according to the embodiment of the disclosure, in the case that the retry number exceeds the threshold value, the target task list is moved to the third processing queue to be processed manually. The second retry number threshold may be the same as or different from the first retry number threshold, and this disclosure is not limited thereto.
As an alternative embodiment, the method for allocating the task list may further include: and in response to the successful processing of the target task list stored in the target sub-queue, moving the target task list out of the target sub-queue.
According to the embodiment of the disclosure, in order to avoid a successfully processed target task list continuing to occupy the target sub-queue, the target task list is removed from the target sub-queue, which releases the sub-queue's resources and relieves processing pressure.
As an alternative embodiment, the method for allocating the task list may further include: the weights of the plurality of sub-queues are updated to obtain updated weights.
According to the embodiment of the disclosure, since the target sub-queue is determined according to the relationship between the sub-queues' initially configured weights and the service priority of the target task list, the present disclosure provides a weight update method to ensure that, after the target task list is processed successfully, task lists stored in sub-queues with low initial weights can also be executed.
As an alternative embodiment, updating the weights of the plurality of sub-queues to obtain updated weights may include: acquiring the weight of each sub-queue; determining the sum of the weights of non-target sub-queues except the target sub-queue based on the weight of each sub-queue; updating the weight of the target sub-queue according to the difference between the weight of the target sub-queue and the sum of the weights to obtain the updated weight; and updating the weight of each non-target sub-queue according to the sum of the weight of each non-target sub-queue and the weight of the non-target sub-queue to obtain the updated weight.
In a specific implementation, suppose the second processing queue includes three sub-queues Q1, Q2, and Q3 with pre-configured weights of 2, 3, and 4, respectively, the target user is a non-VIP customer, and the target task list is an application form for a new credit card. Then Q3, with weight 4, is the selected target sub-queue, while Q1 (weight 2) and Q2 (weight 3) are the non-selected sub-queues, whose weights sum to 5. According to the weighted polling algorithm provided by the present disclosure, the updated weight of Q1 is its pre-update weight 2 plus its own weight 2, equal to 4; the updated weight of Q2 is its pre-update weight 3 plus its own weight 3, equal to 6; and the updated weight of Q3 is its pre-update weight 4 minus the non-selected sub-queues' weight sum 5, equal to -1. It should be noted that the weight update algorithm provided in this disclosure is only exemplary and does not limit the update algorithm; a person skilled in the art may select an appropriate algorithm to update the weights of the sub-queues in the second processing queue according to the actual needs of the service scenario, so as to ensure that low-priority sub-queues are also executed.
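The update step above can be expressed compactly. This sketch reads "its own weight" as the sub-queue's current weight (so an unselected sub-queue's weight doubles), which reproduces the numbers in the text; the function name is an assumption:

```python
def update_weights(weights, selected):
    """Exemplary weighted-polling update from the disclosure: the selected
    sub-queue loses the sum of the other sub-queues' weights; each
    unselected sub-queue gains its own weight (i.e. it doubles)."""
    others_sum = sum(w for name, w in weights.items() if name != selected)
    return {name: (w - others_sum if name == selected else w + w)
            for name, w in weights.items()}

weights = {"Q1": 2, "Q2": 3, "Q3": 4}
selected = max(weights, key=weights.get)      # Q3 holds the largest weight
updated = update_weights(weights, selected)
```

Because the selected sub-queue's weight can go negative while the others grow, the next selection will favor a previously unselected sub-queue, which is what prevents starvation.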
The weighted polling algorithm provided by the present disclosure updates the pre-configured weights of the sub-queues so that the weights change dynamically as task lists are processed. Task lists stored in low-priority sub-queues can therefore be executed rather than starved, making this a fair and reasonable weight update method.
As an alternative embodiment, the method for allocating the task list may further include: in response to moving the target task sheet to the third processing queue, a notification message is sent to prompt manual processing of the target task sheet.
According to the embodiment of the disclosure, after a target task list that still cannot be processed successfully after reaching the retry limit is moved from the first or second processing queue to the third processing queue, a notification message can be sent so that the relevant personnel can process it promptly upon receipt, shortening the processing time and feeding the result back to the user as soon as possible. For example, the contents of the target task list, such as the list number, task details, and failure reason, can be sent to the terminal device of the task approver via SMS, email, WeChat, or the like. The form and content of the message are not limited in this disclosure; a person skilled in the art can set them according to the actual needs of the service scenario.
The task list allocation method of the present disclosure, namely a multi-queue credit approval task allocation method based on a VIP-first weighted polling queue, is described below taking as an example a first processing queue serving as the VIP processing queue, a second processing queue serving as the weighted polling queue, and a third processing queue serving as the manual processing queue, where the weighted polling queue includes three sub-queues Q1, Q2, and Q3 with weights 2, 3, and 4, respectively. As previously described, the candidate processing queue set may include the VIP processing queue, the weighted polling queue, and the manual processing queue. The VIP processing queue stores the approval task lists of VIP customers. The weighted polling queue stores approval task lists that need to be distinguished by service priority. The manual processing queue stores approval task lists that have reached the retry limit and need manual processing. The VIP processing queue processes task lists sequentially, so that every VIP task in the queue is executed once. The weighted polling queue selects tasks as follows: initialize the sub-queue weights according to service requirements; select the first task in the sub-queue with the largest weight for processing; then update the sub-queue weights by subtracting the sum of the unselected sub-queues' weights from the selected sub-queue's weight and adding each unselected sub-queue's own weight to itself. The manual processing queue stores tasks that have repeatedly failed in the VIP processing queue or the weighted polling queue.
Fig. 4 schematically shows a flowchart of a method of assigning a task sheet according to another embodiment of the present disclosure. As shown in fig. 4, the assignment method 400 of the job ticket may include operations S410 to S440.
In operation S410, a VIP processing queue, a weighted polling queue, and a manual processing queue are created to store approval task lists for different service scenarios. In a specific implementation, a VIP processing queue is created to store the approval task lists of VIP customers; a weighted polling queue is created to store the approval task lists that need to be prioritized; and a manual processing queue is created to store the approval task lists that have reached the retry limit and need manual processing.
In operation S420, approval task lists are loaded into the VIP processing queue and the weighted polling queue. In a specific implementation, VIP approval forms are selected according to the service element, namely the user identifier, and placed at the tail of the VIP processing queue. Non-VIP approval forms are placed into the sub-queues of different weights in the weighted polling queue, each at the tail of the sub-queue corresponding to its service priority.
In operation S430, the approval tasks in the VIP processing queue are processed. In a specific implementation, it is first determined whether the VIP processing queue is empty; if so, it is determined whether the weighted polling queue is empty. If the VIP processing queue is not empty, the first approval task in it is selected for processing. If the approval task is processed successfully, it is removed, and the next approval task is selected for processing, until every approval task in the VIP processing queue has been processed once. If the processing of an approval task fails, its retry count is incremented by 1; if the retry count reaches the service threshold, the approval task is moved to the manual processing queue, and otherwise it is moved to the tail of the VIP processing queue.
In operation S440, the approval tasks in the weighted polling queue are processed. In a specific implementation, it is determined whether the weighted polling queue is empty; if so, VIP approval forms are again selected according to the service elements and placed at the tail of the VIP processing queue. If the weighted polling queue is not empty, one approval task in the sub-queue with the largest weight is selected for processing; if several sub-queues share the largest weight, an approval task may be selected at random from one of them. Since Q3 has the largest weight among Q1, Q2, and Q3, an approval task in Q3 is selected for processing. If the approval task is processed successfully, it is removed. If the processing fails, its retry count is incremented by 1; if the retry count has not reached the service threshold, the approval task is moved to the tail of its weighted polling sub-queue. The weights of the sub-queues are then updated: the sum of the weights of the unselected sub-queues is calculated and subtracted from the weight of the selected sub-queue, and each unselected sub-queue's own weight is added to itself. For example, among Q1, Q2, and Q3, Q1 with weight 2 and Q2 with weight 3 are unselected, so the sum of the unselected weights is 5, and Q3 with weight 4 is selected. The updated weight of Q1 is its pre-update weight 2 plus its own weight 2, equal to 4; the updated weight of Q2 is its pre-update weight 3 plus its own weight 3, equal to 6; and the updated weight of Q3 is its pre-update weight 4 minus the unselected weight sum 5, equal to -1. Finally, VIP approval forms are selected according to the service elements and placed at the tail of the VIP processing queue.
According to the embodiment of the disclosure, the three situations discussed in the technical background are resolved by using the VIP processing queue, the weighted polling queue, and the manual processing queue, so that VIP services are processed preferentially, low-priority services are not starved, and unprocessable services exit the processing queues, thereby improving service processing efficiency and customer experience.
The task list allocation solutions provided by the related art fail to consider three situations when allocating approval task lists: (1) the task lists of VIP customers should be processed preferentially; (2) low-priority approval task lists may be "starved"; and (3) approval task lists that cannot be processed may fill up the processing queue. These three situations directly result in low approval processing efficiency and poor timeliness, delay the processing of key task lists, and degrade the customer experience.
For ease of understanding, the implementation of the disclosed task list allocation method is explained below with reference to the processing of different types of approval task lists in a credit approval service.
Suppose a bank plans to launch a credit card approval service that must distinguish VIP customers from non-VIP customers according to a VIP flag: the credit card approval requests of VIP customers are processed preferentially, while those of non-VIP customers are prioritized according to the credit card type applied for, with the new credit card type currently promoted by the business department given high priority and the existing credit card type given low priority. These measures first ensure the experience of high-value customers, and second ensure that the newly promoted credit card type is rolled out quickly and captures the market. The present disclosure implements this service through a process comprising queue design and parameter setting, storing tasks into queues, and executing tasks.
In the queue design and parameter setting process, a VIP processing queue is created, and a weighted polling queue is created that, according to the credit card type applied for, includes two sub-queues: a credit card A queue with weight 7 and a credit card B queue with weight 3, where credit card A is the new type promoted by the business department and has high priority, and credit card B is the existing type with low priority. In addition to the VIP processing queue and the weighted polling queue, a manual processing queue is created so that unprocessable tasks can exit in time, preventing them from re-entering the VIP processing queue or the weighted polling queue and occupying those queues; when an approval task enters this queue, service personnel are notified to process it manually. Meanwhile, the number of automatic approval retries (such as the aforementioned first and second retry thresholds) may be set to 3, that is, after 3 failed executions an approval task list is moved to the manual processing queue for manual handling.
In the process of storing tasks into the queues, an approval application is received, and whether the approval task list should be placed into the VIP processing queue is determined according to the VIP flag. If so, the approval task list is placed into the VIP processing queue; if not, whether it should be placed into the credit card A queue or the credit card B queue of the weighted polling queue is determined according to the card type applied for. If the card type is the new credit card type, the approval task list is placed into the A queue; if it is the existing credit card type, it is placed into the B queue.
In the task execution process, the approval task lists stored in the VIP processing queue are processed first. If the processing of a task list fails and its cumulative retry count exceeds 3, the task list is moved to the manual processing queue. Then one approval task list in the weighted polling queue is selected for processing. Since the weight 7 of the credit card A queue is greater than the weight 3 of the credit card B queue, a task list in the A queue is selected preferentially; after processing, the weight of the A queue is updated to 4 (its pre-update weight 7 minus the weight 3 of the unselected B queue), and the weight of the B queue is updated to 6 (the unselected B queue's weight 3 plus its own weight 3). If execution of the approval task list fails, it is determined whether its retry count exceeds 3; if so, it is moved to the manual processing queue, and otherwise to the tail of the credit card A queue to await the next execution. Finally, the process returns to load new tasks.
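The arithmetic in this walkthrough can be checked mechanically. The sketch below replays the selection-and-update rule with the A/B queue weights 7 and 3 from the text; the function name is an assumption:

```python
def pick_and_update(weights):
    """Select the max-weight queue, then apply the update rule:
    selected -= sum of the others; each unselected += its own weight."""
    selected = max(weights, key=weights.get)
    others_sum = sum(w for q, w in weights.items() if q != selected)
    for q in list(weights):
        weights[q] = weights[q] - others_sum if q == selected else weights[q] * 2
    return selected

w = {"A": 7, "B": 3}
first = pick_and_update(w)     # A is processed first, weights become A=4, B=6
second = pick_and_update(w)    # the low-priority B queue then gets its turn
```

After the first round the weights match the values stated above (A=4, B=6), and in the second round the B queue is selected, illustrating that the low-priority credit card B tasks are not starved.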
In the above task list allocation method, multiple queues are introduced to handle the allocation of approval tasks: a credit approval task allocation method based on a VIP-first weighted polling queue. Three queues, namely the VIP processing queue, the weighted polling queue, and the manual processing queue, are used to process approval task lists. The VIP processing queue ensures that the approval tasks of VIP customers are processed preferentially; in an actual service scenario, the number of VIP approval tasks is not extremely large, so the weighted polling queue is not prevented from executing. The weighted polling queue supports configuring, via parameters, the number of sub-queues it contains, which adapts well to services of high and low priority. A manual processing queue is added, and approval tasks that have repeatedly failed are moved to it for manual processing, preventing them from re-entering the VIP processing queue or the weighted polling queue and preventing unprocessable approval tasks from filling the processing queues.
Fig. 5 schematically shows a block diagram of an apparatus for assigning a task sheet according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus 500 may include a first obtaining module 510, a second obtaining module 520, a first allocating module 530, a third obtaining module 540, and a second allocating module 550.
A first obtaining module 510, configured to obtain, in response to receiving a target task list of a target user for a specified service, a user identifier of the target user, where the user identifier is used to characterize a user priority of the target user. Optionally, the first obtaining module 510 may be configured to perform operation S310 described in fig. 3, for example, and is not described herein again.
A second obtaining module 520, configured to obtain a pre-created candidate processing queue set, where the candidate processing queue set includes a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue includes multiple sub-queues configured with different weights in advance. Optionally, the second obtaining module 520 may be configured to perform operation S320 described in fig. 3, for example, and is not described herein again.
And a first allocating module 530, configured to allocate the target task list to the first processing queue to store the target task list in response to the user identifier indicating that the user priority of the target user is high. Optionally, the first allocating module 530 may be configured to perform operation S330 described in fig. 3, for example, and is not described herein again.
A third obtaining module 540, configured to obtain a service identifier of the specified service in response to that the user identifier represents that the user priority of the target user is low, where the service identifier is used to represent the service priority of the specified service. Optionally, the third obtaining module 540 may be configured to perform operation S340 described in fig. 3, for example, and is not described herein again.
And a second allocating module 550, configured to allocate the target task sheet to a target sub-queue determined from the plurality of sub-queues to store the target task sheet according to the service identifier and the weight of each sub-queue. Optionally, the second allocating module 550 may be configured to perform operation S350 described in fig. 3, for example, and is not described herein again.
According to the task list distribution method of the present disclosure, a candidate processing queue set including a first processing queue with a high processing priority and a second processing queue with a low processing priority is created in advance, and the target task list is distributed to queues with different processing priorities according to the user priority and the service priority, so that tasks can be processed first by user priority and then by service priority. This can at least partially solve the problems in the related art that the task list distribution mode is inflexible, the task processing efficiency is low, and high-concurrency scenarios cannot be handled effectively.
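The routing decision described above (operations S310 to S350) can be sketched as follows. The function and names are hypothetical, and the sub-queue choice is simplified to a direct lookup by service identifier; the disclosure additionally factors in the sub-queue weights when determining the target sub-queue.

```python
from collections import deque

def assign(task, user_priority, service_id, first_queue, sub_queues):
    """Route a task list: a high-priority user's task goes to the first
    processing queue; otherwise the service identifier selects a target
    sub-queue of the weighted polling queue (simplified to a dict lookup)."""
    if user_priority == "high":
        first_queue.append(task)
        return "first"
    sub_queues[service_id].append(task)
    return service_id

first = deque()
subs = {"credit_card_A": deque(), "credit_card_B": deque()}
assign({"id": 1}, "high", "credit_card_A", first, subs)  # goes to first queue
assign({"id": 2}, "low", "credit_card_B", first, subs)   # goes to B sub-queue
```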
As an alternative embodiment, the assignment device of the task list may further include: the first detection module is used for responding to the processing failure of the target task list stored in the first processing queue and detecting whether the retry times of the target task list exceed a first retry time threshold value or not; the first processing module is used for adding one to the retry times of the target task list under the condition that the first retry time threshold value is not exceeded; and the first moving module is used for moving the target task list to the tail of the first processing queue to wait for processing again.
As an alternative embodiment, the set of candidate processing queues may further include a third processing queue, and the apparatus for allocating a task list may further include: and the second moving module is used for moving the target task list to the third processing queue under the condition that the first retry time threshold value is exceeded.
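The retry-and-escalate behavior of these modules (detect failure, increment the retry count and requeue at the tail, or move to the third, manual processing queue once the threshold is exceeded) can be sketched as follows, assuming a hypothetical dict-based task representation and a retry threshold of 3:

```python
from collections import deque

def handle_failure(task, processing_queue, manual_queue, max_retries=3):
    """On a processing failure: if the retry count has not exceeded the
    threshold, increment it and requeue the task at the tail of the same
    queue; otherwise move the task to the manual processing queue."""
    if task["retries"] > max_retries:
        manual_queue.append(task)            # escalate to manual handling
    else:
        task["retries"] += 1
        processing_queue.append(task)        # tail of the queue, retried later

queue, manual = deque(), deque()
fresh = {"id": "ticket-1", "retries": 0}
exhausted = {"id": "ticket-2", "retries": 4}
handle_failure(fresh, queue, manual)      # requeued for another attempt
handle_failure(exhausted, queue, manual)  # handed over to manual processing
```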
As an alternative embodiment, the assignment device of the task list may further include: and the first shifting-out module is used for responding to the successful processing of the target task list stored in the first processing queue and shifting the target task list out of the first processing queue.
As an alternative embodiment, the assignment device of the task list may further include: the second detection module is used for responding to the processing failure of the target task list stored in the target sub-queue and detecting whether the retry times of the target task list exceed a second retry time threshold value or not; the second processing module is used for adding one to the retry times of the target task list under the condition that the second retry time threshold value is not exceeded; and the third moving module is used for moving the target task list to the tail of the target sub-queue to wait for processing again.
As an alternative embodiment, the candidate processing queue set may further include a third processing queue, and the apparatus may further include: and the fourth moving module is used for moving the target task list to the third processing queue under the condition that the second retry time threshold value is exceeded.
As an alternative embodiment, the assignment device of the task list may further include: and the second shifting-out module is used for responding to the successful processing of the target task list stored in the target sub-queue and shifting the target task list out of the target sub-queue.
As an alternative embodiment, the assignment device of the task list may further include: and the first updating module is used for updating the weights of the plurality of sub queues to obtain the updated weights.
As an alternative embodiment, the first updating module may include: the obtaining submodule is used for obtaining the weight of each sub queue; the determining sub-module is used for determining the sum of the weights of the non-target sub-queues except the target sub-queue based on the weight of each sub-queue; the first updating submodule is used for updating the weight of the target sub-queue according to the difference between the weight of the target sub-queue and the sum of the weights to obtain the updated weight; and the second updating submodule is used for updating the weight of each non-target sub-queue according to the sum of the weight of each non-target sub-queue and the self weight to obtain the updated weight.
As an alternative embodiment, the assignment device of the task list may further include: and the sending module is used for responding to the movement of the target task list to the third processing queue and sending a notification message to prompt the execution of manual processing on the target task list.
It should be noted that the implementation of each module in the apparatus embodiments, the technical problems it solves, the functions it implements, and the technical effects it achieves are respectively the same as or similar to those of the corresponding steps in the embodiments of the task list assignment method, and are not described herein again.
Any number of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be split into a plurality of modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system on a package, or an application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of, or a suitable combination of, software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
For example, the first obtaining module, the second obtaining module, the first allocating module, the third obtaining module, the second allocating module, the first detecting module, the first processing module, the first moving module, the second moving module, the first moving-out module, the second detecting module, the second processing module, the third moving module, the fourth moving module, the second moving-out module, the first updating module, the obtaining sub-module, the determining sub-module, the first updating sub-module, the second updating sub-module, and the sending module may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the modules and sub-modules listed above may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system on a package, or an application specific integrated circuit (ASIC), or as hardware or firmware implemented in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementations of software, hardware, and firmware, or in any suitable combination of them. Alternatively, at least one of these modules and sub-modules may be at least partially implemented as a computer program module which, when executed, may perform the corresponding function.
FIG. 6 schematically illustrates a schematic diagram of a computer-readable storage medium product suitable for implementing the assignment method of a job ticket described above according to an embodiment of the present disclosure.
In some possible embodiments, aspects of the present invention may also be implemented as a program product including program code; when the program product is run on a device, the program code causes the device to perform the operations (or steps) in the task list assignment method according to the various exemplary embodiments of the present invention described in the "exemplary method" section above. For example, the electronic device may perform operations S310 to S350 shown in fig. 3 and operations S410 to S440 shown in fig. 4.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in FIG. 6, a program product 600 is depicted that can employ a portable compact disk read-only memory (CD-ROM), include program code, and be run on a device such as a personal computer, in accordance with an embodiment of the present invention. However, the program product of the present invention is not limited in this respect; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Fig. 7 schematically shows a block diagram of an electronic device adapted to implement the above described assignment method of a task sheet according to an embodiment of the present disclosure. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The processor 701 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special-purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 executes various operations of the assignment method flow of the job ticket according to the embodiment of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and RAM 703. The processor 701 may also perform operations S310 through S350 illustrated in fig. 3 according to the embodiment of the present disclosure by executing the program stored in the one or more memories. The electronic device may also perform operations S410 through S440 as shown in fig. 4.
According to an embodiment of the present disclosure, the electronic device 700 may also include an input/output (I/O) interface 705, which is also connected to the bus 704. The device 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is installed into the storage section 708 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the processor 701, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement a method of assigning a job ticket according to an embodiment of the present disclosure, including operations S310 to S350 shown in fig. 3. The electronic device may also perform operations S410 through S440 as shown in fig. 4.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and RAM 703 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (14)

1. A distribution method of a task list comprises the following steps:
in response to receiving a target task list of a target user for a specified service, acquiring a user identifier of the target user, wherein the user identifier is used for representing the user priority of the target user;
acquiring a pre-created candidate processing queue set, wherein the candidate processing queue set comprises a first processing queue with high processing priority and a second processing queue with low processing priority, and the second processing queue comprises a plurality of sub-queues configured with different weights in advance;
responding to the user identification representing that the user priority of the target user is high, and distributing the target task list to the first processing queue to store the target task list;
responding to the user identifier representing the low user priority of the target user, and acquiring the service identifier of the specified service, wherein the service identifier is used for representing the service priority of the specified service; and
and distributing the target task list to a target sub-queue determined from the plurality of sub-queues to store the target task list according to the service identifier and the weight of each sub-queue.
2. The method of claim 1, further comprising:
responding to the processing failure of the target task list stored in the first processing queue, and detecting whether the retry times of the target task list exceed a first retry time threshold;
in the case that the first retry number threshold value is not exceeded, adding one to the retry number of the target task list; and
and moving the target task list to the tail of the first processing queue to wait for processing again.
3. The method of claim 2, wherein the set of candidate processing queues further includes a third processing queue, the method further comprising:
moving the target task sheet to the third processing queue if the first retry number threshold has been exceeded.
4. The method of claim 1, further comprising:
and in response to the successful processing of the target task list stored in the first processing queue, removing the target task list from the first processing queue.
5. The method of claim 1, further comprising:
responding to the processing failure of the target task list stored in the target sub-queue, and detecting whether the retry times of the target task list exceed a second retry time threshold value;
adding one to the retry number of the target task order if the second retry number threshold is not exceeded; and
and moving the target task list to the tail of the target sub-queue to wait for processing again.
6. The method of claim 5, wherein the set of candidate processing queues further includes a third processing queue, the method further comprising:
moving the target task sheet to the third processing queue if the second retry number threshold has been exceeded.
7. The method of claim 1, further comprising:
and in response to the successful processing of the target task list stored in the target sub-queue, removing the target task list from the target sub-queue.
8. The method of claim 7, further comprising:
updating the weights of the plurality of sub-queues to obtain updated weights.
9. The method of claim 8, wherein the updating the weights of the plurality of sub-queues to obtain updated weights comprises:
acquiring the weight of each sub-queue;
determining the sum of the weights of non-target sub-queues except the target sub-queue based on the weight of each sub-queue;
updating the weight of the target sub-queue according to the difference between the weight of the target sub-queue and the sum of the weights to obtain an updated weight; and
and updating the weight of each non-target sub-queue according to the sum of the weight of the non-target sub-queue and its self weight, to obtain the updated weight.
10. The method of claim 3 or 6, further comprising:
in response to moving the target task sheet to the third processing queue, sending a notification message to prompt manual processing of the target task sheet.
11. An apparatus for distributing a task list, comprising:
the system comprises a first acquisition module, a first service processing module and a second service processing module, wherein the first acquisition module is used for responding to a received target task list of a target user for a specified service, and acquiring a user identifier of the target user, wherein the user identifier is used for representing the user priority of the target user;
a second obtaining module, configured to obtain a pre-created candidate processing queue set, where the candidate processing queue set includes a first processing queue with a high processing priority and a second processing queue with a low processing priority, and the second processing queue includes multiple sub-queues configured with different weights in advance;
the first distribution module is used for distributing the target task list to the first processing queue to store the target task list, in response to the user identifier representing that the user priority of the target user is high;
a third obtaining module, configured to obtain a service identifier of the specified service in response to a user priority of the target user represented by the user identifier being low, where the service identifier is used to represent a service priority of the specified service; and
and the second distribution module is used for distributing the target task list to the target sub-queues determined from the plurality of sub-queues to store the target task list according to the service identifiers and the weight of each sub-queue.
12. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-10.
13. A computer-readable storage medium storing computer-executable instructions that, when executed, cause a processor to perform the method of any one of claims 1 to 10.
14. A computer program product comprising a computer program which, when executed by a processor, performs the method according to any one of claims 1 to 10.
CN202110729910.3A 2021-06-29 2021-06-29 Assignment method, assignment device, electronic device, medium, and program product for job ticket Pending CN113449994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729910.3A CN113449994A (en) 2021-06-29 2021-06-29 Assignment method, assignment device, electronic device, medium, and program product for job ticket

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729910.3A CN113449994A (en) 2021-06-29 2021-06-29 Assignment method, assignment device, electronic device, medium, and program product for job ticket

Publications (1)

Publication Number Publication Date
CN113449994A true CN113449994A (en) 2021-09-28

Family

ID=77814302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729910.3A Pending CN113449994A (en) 2021-06-29 2021-06-29 Assignment method, assignment device, electronic device, medium, and program product for job ticket

Country Status (1)

Country Link
CN (1) CN113449994A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117406936A (en) * 2023-12-14 2024-01-16 成都泛联智存科技有限公司 IO request scheduling method and device, electronic equipment and storage medium
CN117406936B (en) * 2023-12-14 2024-04-05 成都泛联智存科技有限公司 IO request scheduling method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10572285B2 (en) Method and apparatus for elastically scaling virtual machine cluster
US20090113448A1 (en) Satisfying a request for an action in a virtual world
US11507419B2 (en) Method,electronic device and computer program product for scheduling computer resources in a task processing environment
US7681196B2 (en) Providing optimal number of threads to applications performing multi-tasking using threads
US9038093B1 (en) Retrieving service request messages from a message queue maintained by a messaging middleware tool based on the origination time of the service request message
CN109152061B (en) Channel allocation method, device, server and storage medium
CN113535367A (en) Task scheduling method and related device
CN113765820A (en) Token bucket-based current limiting method, token bucket-based current limiting device, token bucket-based computing equipment and token bucket-based current limiting medium
US20170024150A1 (en) Management of allocation for alias devices
CN110955640A (en) Cross-system data file processing method, device, server and storage medium
CN110851276A (en) Service request processing method, device, server and storage medium
CN113760488A (en) Method, device, equipment and computer readable medium for scheduling task
US7822918B2 (en) Preallocated disk queuing
CN113449994A (en) Assignment method, assignment device, electronic device, medium, and program product for job ticket
CN110413210B (en) Method, apparatus and computer program product for processing data
CN111144796A (en) Method and device for generating tally information
CN110515749B (en) Method, device, server and storage medium for queue scheduling of information transmission
CN107045452B (en) Virtual machine scheduling method and device
CN110825342B (en) Memory scheduling device and system, method and apparatus for processing information
CN111580882B (en) Application program starting method, device, computer system and medium
CN114169733A (en) Resource allocation method and device
CN112884387B (en) Method and device for controlling a vehicle
CN112785208A (en) Tourism order task allocation method, system, equipment and storage medium
CN112784187A (en) Page display method and device
US10956037B2 (en) Provisioning storage allocation using prioritized storage system capabilities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination