CN117499539A - Method and device for dynamically distributing agents in two stages, computer equipment and storage medium - Google Patents

Method and device for dynamically distributing agents in two stages, computer equipment and storage medium

Info

Publication number
CN117499539A
CN117499539A (application CN202311397742.8A)
Authority
CN
China
Prior art keywords
task
information
agent
algorithm
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311397742.8A
Other languages
Chinese (zh)
Inventor
戴敏
王道
杨耿
岑鹏涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Citic Bank Corp Ltd
Original Assignee
China Citic Bank Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Citic Bank Corp Ltd filed Critical China Citic Bank Corp Ltd
Priority to CN202311397742.8A priority Critical patent/CN117499539A/en
Publication of CN117499539A publication Critical patent/CN117499539A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/523 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing, with call distribution or queueing
    • H04M 3/5232 Call distribution algorithms
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application discloses a method, device, computer equipment and storage medium for dynamic two-stage agent allocation. S1: acquire first incoming line information and obtain first task metadata based on it. S2: judge whether the first task metadata belongs to the dynamic two-stage agent allocation flow. S3: if so, acquire a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the service type field of the first task metadata, and execute the first routing configuration algorithm to acquire first matching queue information. S4: acquire the full task list and the agent list, and execute a corresponding first agent allocation algorithm based on the full task list, the agent list and the first matching queue information to acquire first agent information. The method solves the problems that current agent allocation is inflexible and unreasonable and that the overall agent experience is poor.

Description

Method and device for dynamically distributing agents in two stages, computer equipment and storage medium
Technical Field
The invention belongs to the technical field of customer service, and particularly relates to a method and a device for dynamic two-stage agent allocation, computer equipment and a storage medium.
Background
At present, there are several common implementations of call-center agent allocation: allocation completed by preset rules built from manual experience, allocation results optimized by building a working model, model optimization through machine learning, allocation schemes optimized by artificial intelligence algorithms, and so on. In existing agent systems in the industry, after a customer comes in, an idle agent is allocated according to the channel. This allocation mode is simple, and the allocation strategy cannot be modified dynamically once set; in particular, when a customer has a complex problem, the agent with the richest relevant handling experience is not allocated according to the customer's needs, so the service experience is poor and the complaint rate is high.
Aiming at these technical problems, the application discloses a method, a device, computer equipment and a storage medium for dynamic two-stage agent allocation. The task allocation process is completed dynamically and efficiently based on a queue routing algorithm and an agent allocation algorithm that take effect as configured, which solves the pain points of traditional agent systems (unintelligent allocation methods, unreasonable agent allocation, and allocation logic that cannot take effect through configuration), thereby improving the efficiency of handling customer problems and service satisfaction.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method, a device, computer equipment and a storage medium for dynamic two-stage agent allocation, comprising the following:
the first aspect of the present application proposes a method for dynamically distributing agents in two stages, which is characterized in that the method includes:
S1, acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
S2, judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
S3, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
and S4, acquiring the task information and agent list under the full task queue, and executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information to acquire first agent information.
Further, the method further comprises S5, performing a locking operation on the first agent, judging whether the first task of the first agent is finished, and if so, updating the full task information and the agent list state.
Further, the step S1 further includes obtaining the first customer information from a preset database based on the first task metadata.
Further, the step S3 includes obtaining a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the first customer information and the service type field of the first task metadata.
Further, the step S2 further includes adding an idempotent lock to the first task.
Further, the step S4 further includes the sub-steps of:
S401, acquiring the task information and agent list under the full task queue with a scheduled multithreaded job, determining a first queue based on the first matching queue information, and judging whether the first queue is configured with a dynamic algorithm;
S402, if the first queue is configured with a dynamic algorithm, allocating the first task to a first agent based on the configured dynamic algorithm; if the first queue is not configured with a dynamic algorithm, allocating the first task to the first agent based on a preset static rule;
and S403, storing the first task information and the first agent information.
A second aspect of the present application proposes a device for dynamic two-stage agent allocation, comprising:
the task generation module is used for acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
the first judging module is used for judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
the queue matching module is used for, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from the task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
the agent matching module is used for acquiring task information and an agent list under the full task queue, executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information, and acquiring first agent information.
A third aspect of the present application proposes an electronic device, characterized by comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
and the processor executes the computer-executable instructions stored in the memory to implement the method for dynamic two-stage agent allocation.
A fourth aspect of the present application proposes a computer readable storage medium, wherein computer-executable instructions are stored in the computer readable storage medium, and the computer-executable instructions, when executed by a processor, are used to implement the method for dynamic two-stage agent allocation.
A fifth aspect of the present application proposes a computer program product, comprising a computer program which, when executed by a processor, implements the method for dynamic two-stage agent allocation.
The beneficial effects of the invention are as follows: the agent finally allocated to a task is obtained through a two-stage allocation scheme. In the first stage, a task-to-queue routing algorithm completes the allocation of the queue to which the task belongs through a dynamically executable routing algorithm; the dynamic flexibility lies in that an administrator can obtain the queue information of a task through a pre-written routing algorithm or a default general algorithm, and at the same time the routing algorithm can be optimized dynamically through a routing management background according to the overall operating effect after actual agent allocation, so the system has configure-to-take-effect hot deployment capability. The second stage is the task-to-queue-agent allocation algorithm. The task is allocated through this two-stage dynamic algorithm, and finally a pop-up and ring reminder from the task to the agent operation interface is completed through a real-time push bus. The overall technical scheme solves the problems of inflexible allocation, unreasonable allocation and poor overall agent experience in the current field; in addition, it breaks the limitation that traditional agents must work in a specific centralized area, so customer-service agent operations can be completed anytime and anywhere.
Drawings
Fig. 1 is a flowchart of a dynamic customer service agent allocation method of the present invention.
Fig. 2 is a block diagram of a dynamic customer service agent distribution device according to the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For a clearer understanding of the present invention, reference will be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The words "a", "an", and "the" as used herein are also intended to include the meaning of "a plurality", etc., unless the context clearly indicates otherwise. Furthermore, the terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner. The present invention is an improvement on the prior art; parts not described in this application are implemented according to the prior art.
It is worth noting that the data collection and transfer actions referred to in this application have been agreed to by the users and are necessary for the applicant to carry out normal business activities. The applicant has desensitized the collected data through anonymization, confidentiality and similar measures, and has set corresponding access rights to the data to ensure that user privacy cannot be disclosed. The data processing activities performed by the applicant comply with laws and regulations such as the Data Security Law and the Personal Information Protection Law.
At present, there are several common implementations of call-center agent allocation: allocation completed by preset rules built from manual experience, allocation results optimized by building a working model, model optimization through machine learning, allocation schemes optimized by artificial intelligence algorithms, and so on. In existing agent systems in the industry, after a customer comes in, an idle agent is allocated according to the channel; this allocation mode is simple, the allocation strategy cannot be modified dynamically once set, and when a customer has a complex problem, the agent with the richest relevant handling experience is not allocated according to the customer's needs, so the service experience is poor and the complaint rate is high. The embodiments of the application provide a method, a device, computer equipment and a storage medium for dynamic two-stage agent allocation, in which generated tasks are allocated to the most suitable agents through a two-stage dynamic allocation algorithm, which can significantly improve allocation accuracy and flexibility and improve the efficiency and satisfaction of handling customer problems. Each of these is described in detail below.
Fig. 1 is a flowchart of a dynamic customer service agent allocation method of the present invention. Firstly, the application provides a dynamic customer service agent allocation method, which comprises the following steps:
S1, acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
Specifically, upon receiving a first incoming line information request input by a user, the first incoming line information parsed by the call center system is generated for processing and first task metadata is generated. Users reach audio/video online customer service through different access channels, including traditional inbound and outbound call traffic and first incoming line information sent by an APP. When the incoming line succeeds, the call center system parses the incoming line information to obtain the metadata of the task, which includes metadata fields such as task type, service type, task source, task tag, incoming line channel, customer calling number and customer name. The unified call center system integrates core modules such as service tag identification, intelligent voice analysis and big-data incoming line preprocessing, and the first task metadata of the task is obtained through multi-component information preprocessing;
Optionally, the task metadata obtaining process includes: the call center system parses the incoming line information to obtain metadata fields such as task type, service type, task source, incoming line channel, customer calling number and customer name;
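By way of illustration only, the following sketch shows one possible shape for the first task metadata and its extraction from the parsed incoming line information; the field names and the parse_incoming_line helper are assumptions for readability, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskMetadata:
    # Fields named in the description: task type, service type, task source,
    # task tag, incoming line channel, customer calling number, customer name.
    task_type: str
    service_type: str
    task_source: str
    channel: str
    caller_number: str
    customer_name: Optional[str] = None
    tags: List[str] = field(default_factory=list)
    agent_id: Optional[str] = None  # pre-designated agent for batch outbound tasks

def parse_incoming_line(raw: dict) -> TaskMetadata:
    """Hypothetical adapter: map the call center system's parsed incoming line
    payload onto the task metadata used by the two-stage allocation flow."""
    return TaskMetadata(
        task_type=raw.get("taskType", "inbound"),
        service_type=raw["serviceType"],
        task_source=raw.get("source", "call"),
        channel=raw.get("channel", "phone"),
        caller_number=raw["callerNumber"],
        customer_name=raw.get("customerName"),
        tags=raw.get("tags", []),
        agent_id=raw.get("agentId"),
    )
```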
further, the step S1 further includes obtaining the first customer information from a preset database based on the first task metadata.
Preferably, the customer information platform is called based on the first task metadata to further enrich the first task metadata and obtain all the first customer information needed by subsequent procedures. Because the unified call center system cannot obtain detailed customer information, the core customer information required by the subsequent flow is completed through the customer information platform according to the customer's calling number, including the customer's ID number, card list, name, gender, age and so on.
Further, the step S2 further includes adding an idempotent lock to the first task.
After the first task metadata is obtained, an idempotent lock is added to the task through a distributed lock deployed on a distributed cluster, to prevent the task from being repeatedly allocated and processed;
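A minimal sketch of such an idempotent lock, assuming a Redis-backed distributed lock; the description only states that a distributed lock deployed on a distributed cluster is used, so the key scheme and timeout below are illustrative assumptions.

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # illustrative connection

def acquire_task_lock(task_key: str, ttl_ms: int = 60_000) -> bool:
    """Add an idempotent lock so the same task is never allocated twice.
    SET with nx=True, px=ttl is atomic: only the first caller succeeds
    within the TTL window; later callers see False and skip the task."""
    return bool(r.set(f"task:lock:{task_key}", "1", nx=True, px=ttl_ms))
```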
S2, judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
The agent ID field in the task information is checked. If the field has a value, the dynamic two-stage allocation flow is skipped and the task-push-to-agent flow is executed through the real-time event bus; this scenario suits batch outbound tasks where an administrator designates the handling agent. If the field is empty, the dynamic two-stage allocation flow is executed.
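To make the branch concrete, a hedged sketch of this check; push_to_agent and run_two_stage_allocation stand in for the real-time event bus push and the two-stage flow described below and are not disclosed interfaces.

```python
def push_to_agent(task, agent_id: str) -> None:
    ...  # placeholder: real-time event bus push (see the Kafka sketch later)

def run_two_stage_allocation(task) -> None:
    ...  # placeholder: S3 queue routing followed by S4 agent allocation

def dispatch(task) -> None:
    # Batch outbound tasks already carry the agent designated by an
    # administrator, so they bypass the dynamic two-stage flow.
    if getattr(task, "agent_id", None):
        push_to_agent(task, task.agent_id)
    else:
        run_two_stage_allocation(task)
```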
S3, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
A pre-configured queue routing algorithm library is queried, and the uniquely matched queue routing algorithm is obtained through the service type field of the first task metadata; the routing algorithm engine dynamically executes the corresponding queue routing algorithm to obtain the first matching queue information;
further, the step S3 includes obtaining a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the first client information and a traffic type field of the first task metadata.
Optionally, the process of obtaining the first matching queue information includes:
maintaining a queue routing algorithm library and storing it, wherein the queue routing algorithms are divided into two main types:
(1) Specific routing logic: specific constant fields in the task information are used as the conditions for hitting a queue, so as to obtain the queue to which the task belongs and enrich the corresponding task information. (2) General routing logic: when no algorithm in (1) is hit, a general routing algorithm is executed.
Acquiring a unique routing algorithm corresponding to a task according to the service type of the task, and executing the routing algorithm through a task routing engine to acquire first matching queue information of the task;
The algorithms in the queue routing algorithm library are configured in advance by an administrator and divided by service type; after each service is decomposed at fine granularity, the finest-grained routing rule of each service line is obtained. An algorithm is hit when the matching conditions specified in the routing algorithm correspond to the values in the task information; the queue information such as queue name and queue code, and the task information completed by the special service program executed within the routing algorithm, are then produced. If the values in the task information do not satisfy the algorithm, the next routing algorithm is traversed; if no algorithm's specified conditions hit the task, the default routing algorithm is executed.
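A sketch of how such a routing library could be traversed: rules for the task's service type are tried in order, and the default (general) routing algorithm is the fallback; the RoutingRule shape is an assumption, not the disclosed data model.

```python
from typing import Callable, Dict, List

class RoutingRule:
    """One illustrative entry of the queue routing algorithm library."""
    def __init__(self, service_type: str, conditions: Dict[str, object],
                 resolve: Callable[[object], Dict[str, str]]):
        self.service_type = service_type
        self.conditions = conditions  # task field name -> value required for a hit
        self.resolve = resolve        # returns e.g. {"queue_code": ..., "queue_name": ...}

    def matches(self, task) -> bool:
        return all(getattr(task, f, None) == v for f, v in self.conditions.items())

def route_to_queue(task, rules: List[RoutingRule],
                   default_resolve: Callable[[object], Dict[str, str]]) -> Dict[str, str]:
    """First stage: return the queue information of the first hit rule for this
    service type; if no rule hits, execute the default routing algorithm."""
    for rule in rules:
        if rule.service_type == task.service_type and rule.matches(task):
            return rule.resolve(task)
    return default_resolve(task)
```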
Further, after the task is routed to the queue, the queue-related information of the task is completed, the task number is generated in a business-code-plus-UUID manner, and the relatively complete task information after this process is persisted.
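The business-code-plus-UUID task number mentioned above could look like the following sketch; only the composition (business code plus UUID) comes from the description, while the separator and formatting are assumptions.

```python
import uuid

def make_task_number(business_code: str) -> str:
    """Task number = business code + UUID, e.g. 'CC-INB-3f2a9c...'."""
    return f"{business_code}-{uuid.uuid4().hex}"
```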
And S4, acquiring task information and an agent list under the full task queue, and executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information to acquire first agent information.
Further, the step S4 further includes the sub-steps of:
S401, acquiring the task information and agent list under the full task queue with a scheduled multithreaded job, determining a first queue based on the first matching queue information, and judging whether the first queue is configured with a dynamic algorithm;
S402, if the first queue is configured with a dynamic algorithm, allocating the first task to a first agent based on the configured dynamic algorithm; if the first queue is not configured with a dynamic algorithm, allocating the first task to the first agent based on a preset static rule;
and S403, storing the first task information and the first agent information.
Optionally, the process of allocating a task to an agent includes:
Agent allocation algorithm identifiers are maintained on the task queues and divided into two types: scene allocation algorithms and machine-learning algorithms. The task allocation decision engine runs on a scheduled basis: it queries the task information of all queues in the enabled state and the online agent list, and executes the corresponding agent allocation algorithm according to the allocation identifier in the task. The agent allocation algorithms include algorithms set according to scenes, such as last-served priority, idle priority, same-hometown priority and similar allocation rules, as well as a big-data machine-learning algorithm. The big-data machine-learning algorithm processes the agents' historical tag data on a big-data platform, takes the task information as input, and obtains the optimal agent allocation result by executing the machine-learning algorithm;
Specifically, based on the task allocation decision engine, the task information under all queues and the online agent list are obtained on a scheduled basis, so that the agent to which each task is finally allocated is obtained. Through the scheduled batch job, the task information and agent lists of all enabled queues are traversed concurrently;
For each queue being processed, the algorithm identifier information of the queue is examined and the agent allocation decision engine is called. If the identifier is a scene algorithm identifier, such as last-served priority, same-hometown priority or idle priority, the corresponding scene algorithm logic is executed: the agents in the queue are traversed and, combined with the algorithm logic, the agent ID for the task is obtained. If the identifier is a big-data algorithm identifier, the big-data machine-learning platform is called through the decision engine, and the optimal agent is obtained through a deep-learning algorithm based on the task information and the agents' historical handling data.
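As a hedged illustration of this second stage, a scene-algorithm dispatcher covering the idle-priority and last-served-priority cases named above; the agent record fields are assumptions, and the machine-learning branch is only indicated because its interface is not disclosed.

```python
from typing import List, Optional

def pick_agent(queue: dict, task, agents: List[dict]) -> Optional[str]:
    """Second stage: choose an agent according to the allocation algorithm
    identifier configured on the queue."""
    online = [a for a in agents if a.get("status") == "online"]
    if not online:
        return None

    algo = queue.get("allocation_algorithm", "idle_priority")
    if algo == "idle_priority":
        # The agent currently handling the fewest tasks wins.
        return min(online, key=lambda a: a["active_tasks"])["agent_id"]
    if algo == "last_priority":
        # Prefer the agent who last served this caller, if still online.
        last = next((a for a in online
                     if a.get("last_caller") == getattr(task, "caller_number", None)), None)
        return (last or min(online, key=lambda a: a["active_tasks"]))["agent_id"]
    # Any other identifier (e.g. the big-data machine-learning algorithm) would be
    # delegated to the ML platform; its interface is not disclosed, so this sketch
    # simply falls back to idle priority.
    return min(online, key=lambda a: a["active_tasks"])["agent_id"]
```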
Further, the method further comprises S5, performing a locking operation on the first agent, judging whether the first task of the first agent is finished, and if so, updating the full task information and the agent list state.
The selected agent is locked through the distributed lock to prevent the agent from being allocated other tasks while handling the current one. When the task allocated to the agent is completed, the agent-related information in the task information table is updated, as is the agent's state in the agent table.
Specifically, the selected agent is locked through the distributed lock; the task cannot be allocated to the agent again before the lock is released, and the full task information and the agent list state are updated.
Further, a real-time event bus is called. The real-time event bus consists of a Kafka message queue and a WebSocket service, and tasks are pushed to the agent operation interface in real time. After an agent receives a task, a pop-up reminder and ring are shown so that the agent can handle the allocated task in time; after the agent clicks the handling operation, the task list and the handling states in the agent list are updated.
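A minimal sketch of the producer side of such a push bus, assuming the kafka-python client; the topic name and message shape are assumptions, and the WebSocket fan-out to the agent workbench is only indicated in the comment.

```python
import json
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

def push_task_to_agent(task_id: str, agent_id: str) -> None:
    """Publish an allocation event; a WebSocket service subscribed to this topic
    would forward it to the agent's operation interface, triggering the pop-up
    and ring reminder described above."""
    producer.send("agent-task-push", {"taskId": task_id, "agentId": agent_id})
    producer.flush()
```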
Fig. 2 is a block diagram of a dynamic customer service agent distribution device according to the present invention. A second aspect of the present application proposes a device for dynamic two-stage agent allocation, which comprises:
the task generation module is used for acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
the first judging module is used for judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
the queue matching module is used for, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from the task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
the agent matching module is used for acquiring task information and an agent list under the full task queue, executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information, and acquiring first agent information.
Further, the device further comprises a locking updating module, which is used for performing locking operation on the first agent, judging whether the first task of the first agent is finished, and updating the full task information and the agent list state if the first task of the first agent is finished.
Further, the device also comprises a customer information acquisition module, which is used for acquiring the first customer information from the preset database based on the first task metadata.
Further, the queue matching module is further configured to obtain a corresponding first routing configuration algorithm from the task routing configuration algorithm library based on the first customer information and the service type field of the first task metadata.
Further, the device also comprises a first task locking module for adding an idempotent lock to the first task.
Further, the agent matching module is further configured to:
S401, acquiring the task information and agent list under the full task queue with a scheduled multithreaded job, determining a first queue based on the first matching queue information, and judging whether the first queue is configured with a dynamic algorithm;
S402, if the first queue is configured with a dynamic algorithm, allocating the first task to a first agent based on the configured dynamic algorithm; if the first queue is not configured with a dynamic algorithm, allocating the first task to the first agent based on a preset static rule;
and S403, storing the first task information and the first agent information.
Further, the device further comprises a real-time pushing module, which is used for pushing tasks to agents in real time based on the real-time event bus and reminding the agents to handle them, and for updating the task handling state after an agent handles a task, changing the task from waiting to in-process and then to completed.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device may include: a transceiver 121, a processor 122, a memory 123.
The transceiver 121 may be used to obtain the first incoming line information.
Processor 122 executes the computer-executable instructions stored in the memory, causing processor 122 to perform the aspects of the embodiments described above. The processor 122 may be a general-purpose processor including a central processing unit CPU, a network processor (network processor, NP), etc.; but may also be a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component.
Memory 123 is coupled to processor 122 via the system bus, and they communicate with each other; memory 123 is configured to store computer program instructions.
The system bus may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The system bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is used in the figure, but this does not mean that there is only one bus or only one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., a user side, a read-write library, and a read-only library). The memory may include random access memory (RAM) and may also include non-volatile memory.
The electronic device provided in the embodiment of the present application may be a terminal device in the above embodiment.
The embodiment of the application also provides a chip for running instructions, which is configured to execute the technical solution of the method for dynamic two-stage agent allocation in the above embodiments.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores computer instructions, and when the computer instructions run on a computer, the computer is caused to execute the technical scheme of the method for dynamically distributing the agents in two stages in the embodiment.
The embodiment of the application also provides a computer program product, which comprises a computer program stored in a computer readable storage medium, wherein at least one processor can read the computer program from the computer readable storage medium, and the at least one processor can realize the technical scheme of the method for dynamically distributing agents in two stages in the embodiment when executing the computer program.
It should be noted that although the operations of the method of the present invention are described in a particular order in the above embodiments and the accompanying drawings, this does not require or imply that the operations must be performed in the particular order or that all of the illustrated operations be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
The beneficial effects of the invention are as follows: according to the technical scheme provided by the application, based on a user's incoming line request, the incoming line parsing of the call center platform is first called to obtain the initial metadata of the task, and the agent finally allocated to the task is obtained through a two-stage allocation scheme. The allocation scheme is specifically as follows. The first stage is a task-to-queue routing algorithm, which completes the allocation of the queue to which the task belongs through a dynamically executable routing algorithm; the dynamic flexibility lies in that an administrator can obtain the queue information of a task through a pre-written routing algorithm or a default general algorithm, and at the same time the routing algorithm can be optimized dynamically through a routing management background according to the overall operating effect after actual agent allocation, so the system has configure-to-take-effect hot deployment capability. The second stage is the task-to-queue-agent allocation algorithm, which is divided into two types according to the algorithm type confirmed on the queue information. The first type is the scene algorithm, which allocates according to specific scenes, such as same-hometown priority, last-served priority, idle priority, traffic priority and similar algorithms. The second type is the big-data machine-learning model, which obtains the optimal agent allocation result through deep learning of the agents' historical handling tag data on a big-data platform. The task is allocated through this two-stage dynamic algorithm, and finally a pop-up and ring reminder from the task to the agent operation interface is completed through a real-time push bus. The overall technical scheme solves the problems of inflexible allocation, unreasonable allocation and poor overall agent experience in the current field; in addition, it breaks the limitation that traditional agents must work in a specific centralized area, so customer-service agent operations can be completed anytime and anywhere.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. The specification and examples are to be regarded in an illustrative manner only.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Those skilled in the art will further appreciate that the algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of both. Whether a function is carried out in hardware or software depends on the particular application and the design constraints of the solution; skilled artisans may use different methods to achieve the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (10)

1. A method for dynamic two-stage agent allocation, comprising:
S1, acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
S2, judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
S3, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
and S4, acquiring task information and an agent list under the full task queue, and executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information to acquire first agent information.
2. The method of claim 1, further comprising S5, performing a locking operation on the first agent, determining whether the first task of the first agent is finished, and if so, updating the full-scale task information and the agent list status.
3. The method of claim 1, wherein S1 further comprises retrieving first customer information from a pre-set database based on the first task metadata.
4. A method according to claim 3, wherein S3 comprises retrieving a corresponding first routing configuration algorithm from a task routing configuration algorithm library based on the first customer information and the service type field of the first task metadata.
5. The method of claim 1, wherein S2 is preceded by adding an idempotent lock to the first task.
6. The method according to claim 1, wherein said step S4 further comprises the sub-steps of:
S401, acquiring the task information and agent list under the full task queue with a scheduled multithreaded job, determining a first queue based on the first matching queue information, and judging whether the first queue is configured with a dynamic algorithm;
S402, if the first queue is configured with a dynamic algorithm, allocating the first task to a first agent based on the configured dynamic algorithm; if the first queue is not configured with a dynamic algorithm, allocating the first task to the first agent based on a preset static rule;
and S403, storing the first task information and the first agent information.
7. An apparatus for dynamic two-stage agent allocation, comprising:
the task generation module is used for acquiring first incoming line information, and acquiring first task metadata based on the first incoming line information, wherein the first task metadata comprises a service type field;
the first judging module is used for judging whether the first task metadata belongs to a dynamic two-stage agent allocation flow;
the queue matching module is used for, if the first task metadata belongs to the dynamic two-stage agent allocation flow, acquiring a corresponding first routing configuration algorithm from the task routing configuration algorithm library based on the service type field of the first task metadata, and executing the first routing configuration algorithm to acquire first matching queue information;
the agent matching module is used for acquiring task information and an agent list under the full task queue, executing a corresponding first agent allocation algorithm based on the task information, the agent list and the first matching queue information, and acquiring first agent information.
8. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-6.
9. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1-6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1-6.
CN202311397742.8A 2023-10-26 2023-10-26 Method and device for dynamically distributing agents in two stages, computer equipment and storage medium Pending CN117499539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311397742.8A CN117499539A (en) 2023-10-26 2023-10-26 Method and device for dynamically distributing agents in two stages, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311397742.8A CN117499539A (en) 2023-10-26 2023-10-26 Method and device for dynamically distributing agents in two stages, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117499539A true CN117499539A (en) 2024-02-02

Family

ID=89669929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311397742.8A Pending CN117499539A (en) 2023-10-26 2023-10-26 Method and device for dynamically distributing agents in two stages, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117499539A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination