CN117558079A - Queuing processing method and device for batch proxy service and electronic equipment - Google Patents

Queuing processing method and device for batch proxy service and electronic equipment

Info

Publication number
CN117558079A
Authority
CN
China
Prior art keywords
service
proxy
queuing
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311497341.XA
Other languages
Chinese (zh)
Inventor
王永隆
郁巍
程灿权
徐丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202311497341.XA priority Critical patent/CN117558079A/en
Publication of CN117558079A publication Critical patent/CN117558079A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C11/00: Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C11/00: Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere
    • G07C2011/04: Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere, related to queuing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a queuing processing method and device for batch proxy services, and an electronic device. It relates to the field of artificial intelligence, the field of financial technology, and other related technical fields. The method comprises the following steps: acquiring service data of a batch of proxy services to be processed; inputting the service data into a target neural network model to obtain an output result of the target neural network model, where the output result represents the service queuing channel corresponding to each proxy service, and the target neural network model extracts target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to that target feature information; and queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service. The method and device solve the technical problem in the prior art of low processing efficiency when queuing batch proxy services.

Description

Queuing processing method and device for batch proxy service and electronic equipment
Technical Field
The present application relates to the field of artificial intelligence, the field of financial technology, and other related technical fields, and in particular to a queuing processing method and device for batch proxy services and an electronic device.
Background
With the rapid development of internet finance, the service-experience expectations for enterprise proxy services have gradually risen, posing a new challenge to the timeliness with which the system back end processes batch proxy services. To serve clients better, how to arrange batch proxy services under a better and more reasonable queuing mechanism, so as to improve the overall quality of the services that financial institutions provide to clients, has become a problem that financial institutions need to solve.
In the prior art, batch proxy services submitted by clients are generally arranged serially according to fixed, experience-based queuing rules that weigh factors such as priority level, service data volume, and data arrival time. However, this queuing approach can only meet client requirements when environmental changes have little influence and client demands are not highly differentiated; when environmental changes have a large influence and client demands are highly differentiated, it can adapt to new requirements only by synchronously modifying the queuing rules. Synchronously modifying queuing rules in a complex environment increases the processing time of service queuing, resulting in low processing efficiency for queuing batch proxy services.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The present application provides a queuing processing method and device for batch proxy services, and an electronic device, which at least solve the technical problem in the prior art of low processing efficiency when queuing batch proxy services.
According to one aspect of the present application, there is provided a queuing processing method for batch proxy services, including: acquiring service data of a batch of proxy services to be processed, where each proxy service in the batch is a funds transaction service that a client entrusts a financial institution to process; inputting the service data into a target neural network model to obtain an output result of the target neural network model, where the output result represents the service queuing channel corresponding to each proxy service, the target neural network model extracts target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to its target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service; and queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service.
Optionally, inputting the service data into the target neural network model to obtain the output result of the target neural network model includes: extracting service features from the service data through an input layer of the target neural network model to obtain M pieces of first service feature information for each proxy service, where M is a positive integer; inputting the M pieces of first service feature information for each proxy service into a hidden layer of the target neural network model to obtain K pieces of second service feature information for each proxy service output by the hidden layer, where each piece of second service feature information is a piece of first service feature information whose importance level is higher than a preset level, and K is a positive integer less than or equal to M; determining the target feature information for each proxy service from its K pieces of second service feature information through an output layer of the target neural network model; and matching each proxy service to its corresponding service queuing channel according to its target feature information through the output layer, to obtain the output result.
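The patent specifies only the responsibilities of the three layers, not their computations. As a minimal sketch of the M-features-in, K-features-out flow, the snippet below represents features as (name, importance score) pairs; the feature names, scores, the preset importance level, and the score-to-channel rule are all illustrative assumptions:

```python
# Hedged sketch of the input -> hidden -> output layer flow.
# All names, scores, and thresholds are invented for illustration.

PRESET_LEVEL = 0.5  # hypothetical "preset importance level"

def input_layer(service_data):
    """Extract the M first-service-feature pairs (name, importance)."""
    return [(name, score) for name, score in service_data.items()]

def hidden_layer(first_features, preset_level=PRESET_LEVEL):
    """Screen out features at or below the preset level, keeping K <= M."""
    return [(n, s) for n, s in first_features if s > preset_level]

def output_layer(second_features):
    """Pick the most important feature as the target feature and map it
    to a service queuing channel (an illustrative rule, not the patent's)."""
    name, score = max(second_features, key=lambda f: f[1])
    if score > 0.9:
        return name, "real-time"
    if score > 0.7:
        return name, "queue-jump"
    return name, "queuing"

service = {"urgency": 0.95, "record_size": 0.2, "file_format": 0.1}
target, channel = output_layer(hidden_layer(input_layer(service)))
```

Here a highly urgent service is routed to the real-time channel; the low-importance features are filtered out in the hidden layer, matching the screening step described above.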
Optionally, inputting the M pieces of first service feature information for each proxy service into the hidden layer of the target neural network model to obtain the K pieces of second service feature information output by the hidden layer includes: performing data processing on the M pieces of first service feature information for each proxy service through the hidden layer of the target neural network model to obtain the K pieces of second service feature information, where the data processing at least comprises data screening and data merging: data screening filters out the service feature information whose importance level is lower than the preset level, and data merging integrates the service feature information whose importance level is higher than the preset level.
Optionally, the target neural network model is obtained as follows: acquiring a sample data set comprising service data of N historical proxy services, where N is a positive integer; setting a target label for each of the N historical proxy services, where the target label of each historical proxy service represents the actual service queuing channel corresponding to that service; and inputting the service data of the N historical proxy services and the target label of each historical proxy service into an initial neural network model and iteratively training it to obtain the target neural network model.
Optionally, inputting the service data of the N historical proxy services and the target label of each historical proxy service into the initial neural network model and iteratively training it to obtain the target neural network model includes: performing a first operation, a second operation, and a third operation on the initial neural network model according to the service data of the N historical proxy services and the target label of each historical proxy service, where the first operation inputs the service data of the N historical proxy services into the initial neural network model to obtain a prediction result representing the predicted service queuing channel for each historical proxy service; the second operation determines an error value of the prediction result by comparing the predicted service queuing channel of each historical proxy service with its actual service queuing channel; and the third operation judges whether the error value is smaller than a preset threshold and, when the error value is greater than or equal to the preset threshold, adjusts the model parameters of the initial neural network model based on the error value; and executing the first, second, and third operations in sequence multiple times until the error value of the prediction result output by the adjusted model is smaller than the preset threshold, at which point the adjusted model is determined to be the target neural network model.
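As a hedged illustration of this three-operation loop (predict, measure the error, compare with a preset threshold and adjust), the sketch below substitutes a one-parameter linear model and a simple gradient update for the unspecified network and optimiser; the sample data, learning rate, and threshold are all invented for the example:

```python
# Illustrative training loop: first operation = predict, second = error,
# third = threshold check + parameter adjustment. The "model" is a
# stand-in, not the patent's neural network.

def train(samples, labels, threshold=0.05, lr=0.1, max_rounds=1000):
    w = 0.0  # single illustrative model parameter
    for _ in range(max_rounds):
        preds = [w * x for x in samples]                       # first operation
        error = sum(abs(p - y) for p, y in zip(preds, labels)) / len(samples)  # second
        if error < threshold:                                  # third operation
            return w, error
        grad = sum((p - y) * x for p, y, x in
                   zip(preds, labels, samples)) / len(samples)
        w -= lr * grad                                         # adjust parameters
    return w, error

# Toy data whose true relation is y = 2x; training should recover w ~ 2.
w, err = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The loop terminates only once the error falls below the preset threshold, mirroring the stopping condition described above.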
Optionally, after queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service, the method further includes: determining a target duration for each proxy service according to its service queuing channel, where the target duration is the predicted time needed to process that proxy service; and displaying the target duration in a target page corresponding to each proxy service.
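The patent does not say how the target duration is derived from the channel; a minimal sketch, assuming hypothetical per-channel base estimates plus a per-position waiting cost (every number and channel name below is a placeholder):

```python
# Illustrative estimate of the "target duration" shown to the client.
# Base minutes per channel and the per-item cost are assumptions.

EXPECTED_MINUTES = {
    "real-time": 0,        # processed immediately
    "queue-jump": 5,       # short wait ahead of the regular queue
    "queuing": 30,         # regular queue base estimate
    "reservation": None,   # handled at the agreed service time
}

def target_duration(channel, position_in_queue=0, per_item_minutes=2):
    """Return a display string for the predicted processing time."""
    base = EXPECTED_MINUTES[channel]
    if base is None:
        return "at the agreed service time"
    return f"about {base + position_in_queue * per_item_minutes} minutes"

msg = target_duration("queuing", position_in_queue=3)
```

The returned string would then be rendered in the target page for the corresponding proxy service.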
Optionally, the service queuing channel is one of the following: a queue-jump channel, a queuing channel, a reservation channel, and a real-time channel, where the queue-jump channel queues proxy services whose urgency is of a first level, the queuing channel queues proxy services whose urgency is of a second level, the real-time channel queues proxy services whose urgency is of a third level, the reservation channel queues proxy services with an agreed service processing time, the first level of urgency is higher than the second level, and the third level of urgency is higher than the first level.
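A small sketch of this channel taxonomy and the stated urgency ordering (third level above first, first above second); the numeric ranks and the mapping function are illustrative assumptions, with "queue-jump" standing for the queue inserting channel:

```python
# Illustrative channel taxonomy. Higher rank = more urgent, per the
# ordering stated above: third > first > second.
URGENCY_RANK = {"third": 3, "first": 2, "second": 1}

def match_channel(urgency_level=None, agreed_time=None):
    """Map a proxy service to one of the four queuing channels.
    An agreed processing time takes it to the reservation channel."""
    if agreed_time is not None:
        return "reservation"
    return {"third": "real-time",    # most urgent: processed immediately
            "first": "queue-jump",   # urgent: placed ahead of the queue
            "second": "queuing"}[urgency_level]  # regular queue
```

For example, a service with an agreed time goes to the reservation channel regardless of urgency, while a third-level service goes to the real-time channel.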
According to another aspect of the present application, there is also provided a queuing processing apparatus for batch proxy services, including: an acquisition module for acquiring service data of a batch of proxy services to be processed, where each proxy service in the batch is a funds transaction service that a client entrusts a financial institution to process; a matching module for inputting the service data into a target neural network model to obtain an output result of the target neural network model, where the output result represents the service queuing channel corresponding to each proxy service, the target neural network model extracts target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to its target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service; and a processing module for queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service.
According to another aspect of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, where the computer program is configured to execute the queuing processing method for batch proxy services described above when run.
According to another aspect of the present application, there is also provided an electronic device including one or more processors and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the queuing processing method for batch proxy services described above.
In the present application, service data of a batch of proxy services to be processed is first obtained, where each proxy service in the batch is a funds transaction service that a client entrusts a financial institution to process; the service data is then input into a target neural network model to obtain an output result of the target neural network model, where the output result represents the service queuing channel corresponding to each proxy service, the target neural network model extracts target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to its target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service; finally, the batch of proxy services is queued based on the service queuing channel corresponding to each proxy service.
In this process, target feature information is extracted from the service data of the batch of proxy services by the pre-trained target neural network model, each proxy service is matched to a corresponding service queuing channel according to its target feature information, and the batch of proxy services is then queued based on those channels. The service queuing channel for each proxy service can thus be determined without modifying queuing rules in a complex environment, which reduces the processing time of queuing batch proxy services, achieves the technical effect of improving the processing efficiency of queuing batch proxy services, and solves the technical problem in the prior art of low processing efficiency when queuing batch proxy services.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of an alternative queuing method for batch proxy services according to embodiments of the present application;
FIG. 2 is a schematic illustration of an alternative process of a target neural network model, according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative training process for a target neural network model, according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative queuing processing apparatus for bulk proxy services according to embodiments of the present application;
fig. 5 is a schematic diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the queuing processing method and device for batch proxy services and the electronic device of the present application may be used in the field of artificial intelligence and the field of financial technology, and may also be used in other fields; the fields of application of the queuing processing method and device for batch proxy services and the electronic device of the present application are not limited.
It should be noted that the user information (including but not limited to user equipment information, personal user information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in the present application are all information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.
Example 1
In accordance with the embodiments of the present application, there is provided an alternative embodiment of a queuing processing method for batch proxy services. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be performed in a different order than illustrated herein.
FIG. 1 is a flowchart of an alternative queuing method for batch proxy services according to embodiments of the present application, as shown in FIG. 1, the method includes the steps of:
step S101, obtaining service data of batch of proxy services to be processed.
In an alternative embodiment, a queuing processing system for batch proxy services may serve as the execution body of the queuing processing method for batch proxy services in the embodiments of the present application. For ease of description, the queuing processing system for batch proxy services is hereinafter referred to simply as the system.
In step S101, each proxy service in the batch of proxy services to be processed is a funds transaction service that a customer entrusts a financial institution to process.
Alternatively, the service sources of the batch of proxy services to be processed shown in fig. 2 may include services that the customer transacts at a bank counter, services entrusted through the bank's proxy-service channel, and so on.
Step S102, inputting the service data into the target neural network model to obtain an output result of the target neural network model.
In step S102, the output result represents the service queuing channel corresponding to each proxy service; the target neural network model extracts the target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to its target feature information, where the target feature information of each proxy service represents the urgency of processing that proxy service.
Optionally, as shown in fig. 2, the target feature information includes the service timeliness information and the service urgency information of the proxy service.
Optionally, as shown in fig. 2, the service queuing channel is one of the following: a queue-jump channel, a queuing channel, a reservation channel, and a real-time channel. In this embodiment, service queuing channels may be added or removed according to the actual situation of the proxy services.
Step S103, queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service.
For example, when the service queuing channel corresponding to a proxy service is the queuing channel, the system automatically allocates the proxy service to the queuing channel and processes it according to its queuing order in that channel. When the channel is the queue-jump channel, the system automatically allocates the proxy service to the queue-jump channel and processes it within a reasonable time. When the channel is the reservation channel, the system automatically allocates the proxy service to the reservation channel and processes it at the agreed service processing time. When the channel is the real-time channel, the system automatically allocates the proxy service to the real-time channel and processes it immediately.
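The allocation behaviour in this example can be sketched with plain in-memory queues; the data structures and service identifiers below are assumptions, and the reservation and queue-jump scheduling policies are omitted for brevity:

```python
# Illustrative dispatch of proxy services to their queuing channels.
# Real-time services are processed immediately; queuing-channel
# services wait their turn.
from collections import deque

queues = {"queue-jump": deque(), "queuing": deque(), "reservation": deque()}
processed = []

def dispatch(service_id, channel):
    """Allocate a service to its channel's queue, or process it at once."""
    if channel == "real-time":
        processed.append(service_id)   # handled immediately
    else:
        queues[channel].append(service_id)

def drain_regular_queue():
    """Process waiting services in their queuing order."""
    while queues["queuing"]:
        processed.append(queues["queuing"].popleft())

dispatch("S1", "queuing")
dispatch("S2", "real-time")
dispatch("S3", "queuing")
drain_regular_queue()
```

Note how the real-time service S2 is processed before the earlier-arriving S1, reflecting the channel semantics described above.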
Based on the schemes defined in steps S101 to S103, it can be seen that in the present application, service data of a batch of proxy services to be processed is first obtained, where each proxy service in the batch is a funds transaction service that a client entrusts a financial institution to process; the service data is then input into a target neural network model to obtain an output result of the target neural network model, where the output result represents the service queuing channel corresponding to each proxy service, the target neural network model extracts target feature information from the service data of each proxy service and matches each proxy service to a corresponding service queuing channel according to its target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service; finally, the batch of proxy services is queued based on the service queuing channel corresponding to each proxy service.
In the above process, target feature information is extracted from the service data of the batch of proxy services by the pre-trained target neural network model, each proxy service is matched to a corresponding service queuing channel according to its target feature information, and the batch of proxy services is then queued based on those channels. The service queuing channel for each proxy service can thus be determined without modifying queuing rules in a complex environment, which reduces the processing time of queuing batch proxy services, achieves the technical effect of improving the processing efficiency of queuing batch proxy services, and solves the technical problem in the prior art of low processing efficiency when queuing batch proxy services.
Optionally, in the queuing processing method for batch proxy services provided in the embodiments of the present application, the service queuing channel is one of the following: a queue-jump channel, a queuing channel, a reservation channel, and a real-time channel, where the queue-jump channel queues proxy services whose urgency is of a first level, the queuing channel queues proxy services whose urgency is of a second level, the real-time channel queues proxy services whose urgency is of a third level, the reservation channel queues proxy services with an agreed service processing time, the first level of urgency is higher than the second level, and the third level of urgency is higher than the first level.
In this embodiment, the queue-jump channel queues proxy services whose urgency is of the first level, for example services with high urgency: the service volume of such a proxy service is not large, but the timeliness sensitivity of its processing is high, and the system needs to process it as soon as possible after the batch is uploaded. The queuing channel queues proxy services whose urgency is of the second level, for example conventional proxy services; the system can schedule these reasonably according to its own resource usage and process them in their queuing order, and when service-experience requirements and complexity are high, the system can allocate and dynamically adjust resources according to the client's actual situation. The real-time channel queues proxy services whose urgency is of the third level, for example emergency services; the scope and number of such services must be strictly controlled so that system resources are fully utilized while system stability is guaranteed. The reservation channel queues proxy services with an agreed service processing time, for example services whose processing time has been agreed in advance with the financial institution (such as a bank).
Optionally, in the queuing processing method for batch proxy services provided in the embodiments of the present application, inputting the service data into the target neural network model to obtain the output result includes: the system extracts service features from the service data through the input layer of the target neural network model to obtain M pieces of first service feature information for each proxy service, where M is a positive integer; it then inputs the M pieces of first service feature information for each proxy service into the hidden layer of the target neural network model to obtain K pieces of second service feature information for each proxy service output by the hidden layer, where each piece of second service feature information is a piece of first service feature information whose importance level is higher than a preset level, and K is a positive integer less than or equal to M; it then determines the target feature information for each proxy service from its K pieces of second service feature information through the output layer of the target neural network model; and finally it matches each proxy service to its corresponding service queuing channel according to its target feature information through the output layer, to obtain the output result.
Optionally, in the queuing processing method for batch proxy services provided in the embodiment of the present application, M pieces of first service feature information corresponding to each proxy service are input into a hidden layer of a target neural network model, so as to obtain K pieces of second service feature information corresponding to each proxy service output by the hidden layer, where the queuing processing method includes: the system performs data processing on M pieces of first service feature information corresponding to each proxy service through a hidden layer in the target neural network model to obtain K pieces of second service feature information corresponding to each proxy service, wherein the data processing at least comprises data screening and data merging, the data screening is used for filtering service feature information with an importance level lower than a preset level in the M pieces of first service feature information corresponding to each proxy service, and the data merging is used for integrating service feature information with an importance level higher than the preset level in the M pieces of first service feature information corresponding to each proxy service.
For example, the system may extract service characteristics of the service data through the input layer in the target neural network model to obtain characteristic information such as the service submission channel (i.e. the service source in fig. 2), data source, record size, file format and data form corresponding to each of the proxy services. The system may then process this characteristic information through the hidden layer in the target neural network model to obtain characteristic information such as the service protocol type, proxy service type, unit type and proxy service timeliness corresponding to each of the proxy services. Finally, through the output layer in the target neural network model, the system may amplify and distribute this characteristic information, determine from it the target characteristic information corresponding to each of the proxy services (i.e. the service information and service timeliness information in fig. 2), and match the corresponding service queuing channel for each of the proxy services according to that target characteristic information.
When the system processes the characteristic information such as the service submission channel, data source, record size, file format and data form corresponding to each proxy service through the hidden layer of the target neural network model, the factors with a small influence on service grouping (namely, the service characteristic information with an importance level lower than the preset level) can be filtered out, and the factors with a large influence on the service (namely, the service characteristic information with an importance level higher than the preset level) can be amplified and combined.
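A toy version of the screening and merging performed by the hidden layer might look like this; the feature names, importance scores and grouping rule are all assumptions made for illustration:

```python
def screen_and_merge(features, importance, threshold):
    """Data screening drops features whose importance is below the preset
    level; data merging integrates the survivors (here, features sharing the
    same name prefix are summed). Names and the grouping rule are illustrative."""
    kept = [(name, value) for (name, value), imp in zip(features, importance)
            if imp >= threshold]           # data screening
    merged = {}
    for name, value in kept:               # data merging
        key = name.split("_")[0]           # e.g. "size_bytes" -> "size"
        merged[key] = merged.get(key, 0) + value
    return merged

second_features = screen_and_merge(
    [("size_bytes", 3), ("size_records", 2), ("format_csv", 1)],
    importance=[0.9, 0.8, 0.1],
    threshold=0.5,
)
# "format_csv" is screened out; the two size features are merged into one.
```

In the actual model this screening and merging is learned by the hidden-layer weights rather than hand-coded.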
Optionally, to reduce the effects of short-term and incidental fluctuations in the assigned values, the system may normalize the data of all input and output layers.
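One common way to perform such normalization is min-max scaling to [0, 1]; this is only an assumed choice, since the embodiment does not fix a particular scheme:

```python
def min_max_normalize(values):
    """Min-max scaling to [0, 1]; a simple way to damp short-term
    fluctuations in the layer inputs and outputs."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant column: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([10, 20, 30]))  # [0.0, 0.5, 1.0]
```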
It should be noted that, the input layer, the hidden layer and the output layer in the target neural network model are used for extracting the target feature information corresponding to each proxy service, and matching the corresponding service queuing channel for each proxy service according to the target feature information corresponding to each proxy service, so that the service queuing channel corresponding to each proxy service can be determined without modifying the queuing rule in the complex environment, thereby realizing the improvement of the processing efficiency of queuing batch proxy services.
FIG. 3 is a flowchart of an alternative training process for a target neural network model, according to an embodiment of the present application, as shown in FIG. 3, the method includes the steps of:
step S31, a sample data set is acquired.
In step S31, the sample data set includes service data of N historical proxy services, where N is a positive integer.
And S32, setting a target label for each history agent service in the N history agent services to obtain a target label corresponding to each history agent service.
In step S32, the target label corresponding to each history proxy service is used to characterize the actual service queuing channel corresponding to the history proxy service.
And step S33, inputting the service data of the N historical proxy services and the target labels corresponding to each historical proxy service into an initial neural network model, and performing iterative training on the initial neural network model to obtain a target neural network model.
In this embodiment, the system may input service data of N historical proxy services and a target label corresponding to each historical proxy service into the initial neural network model, and after iterative training, a multi-layer neural network queuing prediction model (i.e., a target neural network model) may be obtained, so as to implement a service queuing channel for predicting the proxy service in a complex environment through the model.
Optionally, in the queuing processing method for batch proxy services provided in the embodiment of the present application, inputting the service data of the N historical proxy services and the target label corresponding to each historical proxy service into the initial neural network model and performing iterative training on the initial neural network model to obtain the target neural network model includes: the system may perform a first operation, a second operation and a third operation on the initial neural network model according to the service data of the N historical proxy services and the target label corresponding to each historical proxy service, where the first operation is used for inputting the service data of the N historical proxy services into the initial neural network model to obtain a prediction result output by the initial neural network model, the prediction result being used for representing the service queuing channel corresponding to each historical proxy service; the second operation is used for determining an error value of the prediction result output by the initial neural network model according to the service queuing channel corresponding to each historical proxy service and the actual service queuing channel corresponding to each historical proxy service; and the third operation is used for judging whether the error value is smaller than a preset threshold value, and adjusting model parameters of the initial neural network model based on the error value when the error value is greater than or equal to the preset threshold value. The system may sequentially perform the first, second and third operations on the initial neural network model a plurality of times until the error value of the prediction result output by the adjusted initial neural network model is smaller than the preset threshold value, and determine the adjusted initial neural network model as the target neural network model.
In this embodiment, the system may model train the initial neural network model by performing the following operations in sequence:
and the first operation is to input the service data of the N historical proxy services into the initial neural network model to obtain a prediction result output by the initial neural network model, wherein the prediction result is used for representing the service queuing channel corresponding to each historical proxy service.
And a second operation, determining an error value of a prediction result output by the initial neural network model according to the service queuing channel corresponding to each historical proxy service and the actual service queuing channel corresponding to each historical proxy service.
And third, judging whether the error value is smaller than a preset threshold value, and adjusting model parameters of the initial neural network model based on the error value under the condition that the error value is larger than or equal to the preset threshold value.
Further, the system may perform iterative training on the initial neural network model by sequentially performing the first operation, the second operation, and the third operation, until an error value of a prediction result output by the adjusted initial neural network model is smaller than a preset threshold value, and determine that the adjusted initial neural network model is the target neural network model.
It should be noted that, by sequentially performing the first operation, the second operation, and the third operation for the plurality of times on the initial neural network model until the error value of the prediction result output by the adjusted initial neural network model is smaller than the preset threshold, it is determined that the adjusted initial neural network model is the target neural network model, so that the prediction error of the target neural network model can be reduced to a range acceptable for the service, thereby improving the prediction accuracy of the model.
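The first, second and third operations can be sketched as a simple training loop. The toy linear forward function and the finite-difference parameter update below are placeholders for the actual multi-layer model and optimizer, which this embodiment leaves unspecified:

```python
import numpy as np

def train(initial_params, samples, labels, forward,
          threshold=0.05, lr=0.1, max_iters=1000):
    """Iterative training mirroring the three operations above: predict the
    service queuing channels for the N historical services (first operation),
    compute an error value against the actual channels (second operation),
    and, while the error is at or above the preset threshold, adjust the
    model parameters (third operation)."""
    params = np.asarray(initial_params, dtype=float).copy()
    error = float("inf")
    for _ in range(max_iters):
        preds = forward(params, samples)                  # first operation
        error = float(np.mean((preds - labels) ** 2))     # second operation
        if error < threshold:                             # third operation:
            break                                         # error small enough
        grad = np.zeros_like(params)                      # crude finite-
        for i in range(params.size):                      # difference gradient
            bumped = params.copy()
            bumped[i] += 1e-6
            e2 = float(np.mean((forward(bumped, samples) - labels) ** 2))
            grad[i] = (e2 - error) / 1e-6
        params -= lr * grad                               # adjust parameters
    return params, error

# Tiny illustration: fit a 2-parameter linear stand-in model to 3 samples.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
params, err = train(np.zeros(2), X, y, lambda p, x: x @ p)
```

A production model would use backpropagation and a cross-entropy loss over the channel classes rather than this finite-difference squared-error sketch.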
Optionally, in the queuing processing method for batch of proxy services provided in the embodiment of the present application, after queuing processing is performed on batch of proxy services based on a service queuing channel corresponding to each proxy service, the method further includes: the system can determine the target duration corresponding to each proxy service according to the service queuing channel corresponding to each proxy service, wherein the target duration is the estimated duration for processing each proxy service; and then displaying the target duration in a target page corresponding to each proxy service.
In order to improve the experience of the user, in this embodiment the system may determine, according to the service queuing channel corresponding to each of the proxy services, the estimated processing time length (i.e. the target duration) of each of the proxy services, and display it in the target page corresponding to each of the proxy services. For example, where the service queuing channel corresponding to a proxy service is the queuing channel, the system may determine, according to the correspondence between service queuing channels and target durations, that the estimated duration for processing the proxy service is 5-10 min. The system may then display this duration in the page that handles the to-be-handled service for the user to view.
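A minimal sketch of the channel-to-duration lookup described above; the table values are hypothetical, since the embodiment only gives 5-10 min for the queuing channel as an example:

```python
# Hypothetical correspondence between service queuing channels and the
# estimated processing durations shown to the user (values illustrative).
ESTIMATED_DURATION = {
    "real-time": "under 1 min",
    "queue-inserting": "1-5 min",
    "queuing": "5-10 min",
    "reservation": "at the agreed time",
}

def target_duration(channel):
    """Return the estimated duration to display on the target page for a
    proxy service queued in the given channel."""
    return ESTIMATED_DURATION.get(channel, "pending")

print(target_duration("queuing"))  # 5-10 min
```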
Therefore, according to the queuing processing method for batch proxy services, the target characteristic information in the service data of the batch proxy services can be extracted through the pre-trained target neural network model, the corresponding service queuing channels are matched for each proxy service according to the target characteristic information corresponding to each proxy service, then the batch proxy service is queued based on the service queuing channels corresponding to each proxy service, the service queuing channels corresponding to each proxy service can be determined without modifying the queuing rules in the complex environment, the processing time for queuing the batch proxy service is reduced, the technical effect of improving the processing efficiency of queuing the batch proxy service is achieved, and the technical problem that the processing efficiency is low in the queuing process of the batch proxy service in the prior art is solved. The method realizes ordered and quick arrangement processing of the bank proxy service by carrying out multi-level complex conversion on complex network factors influencing the proxy service, and can greatly improve the resource utilization rate and service experience of a bank background system.
Example 2
According to an embodiment of the present application, there is provided an embodiment of a queuing processing apparatus for batch-type proxy service, where fig. 4 is a schematic diagram of an alternative queuing processing apparatus for batch-type proxy service according to an embodiment of the present application, as shown in fig. 4, and the apparatus includes:
The acquiring module 401 is configured to acquire service data of a batch of to-be-processed proxy services, where each proxy service in the batch of to-be-processed proxy services is a funds transaction service that is to be processed by a client delegated financial institution;
the matching module 402 is configured to input service data into a target neural network model, obtain an output result output by the target neural network model, where the output result is used to represent a service queuing channel corresponding to each proxy service, the target neural network model is configured to extract target feature information in the service data of each proxy service, and match the corresponding service queuing channel for each proxy service according to the target feature information corresponding to each proxy service, where the target feature information corresponding to each proxy service is used to represent an emergency degree of processing the proxy service;
the processing module 403 is configured to queue a batch of proxy services based on a service queuing channel corresponding to each proxy service.
It should be noted that the above-mentioned acquiring module 401, matching module 402 and processing module 403 correspond to steps S101 to S103 in the above-mentioned embodiment 1; the examples and application scenarios implemented by the three modules are the same as those of the corresponding steps, but are not limited to those disclosed in embodiment 1.
Optionally, the matching module includes: the first extraction unit is used for extracting service characteristics of the service data through an input layer in the target neural network model to obtain M pieces of first service characteristic information corresponding to each proxy service, wherein M is a positive integer; the first processing unit is used for inputting M pieces of first service characteristic information corresponding to each proxy service into a hidden layer of the target neural network model to obtain K pieces of second service characteristic information corresponding to each proxy service output by the hidden layer, wherein each piece of second service characteristic information in the K pieces of second service characteristic information corresponding to each proxy service is first service characteristic information with an important grade higher than a preset grade in the M pieces of first service characteristic information corresponding to the proxy service, and K is a positive integer smaller than or equal to M; the first determining unit is used for determining target feature information corresponding to each proxy service from K pieces of second service feature information corresponding to each proxy service through an output layer in the target neural network model; and the matching unit is used for matching the corresponding service queuing channels for each proxy service according to the target characteristic information corresponding to each proxy service through the output layer to obtain an output result.
Optionally, the first processing unit includes: the data processing unit is used for carrying out data processing on M pieces of first service characteristic information corresponding to each proxy service through a hidden layer in the target neural network model to obtain K pieces of second service characteristic information corresponding to each proxy service, wherein the data processing at least comprises data screening and data merging, the data screening is used for filtering service characteristic information with an importance level lower than a preset level in the M pieces of first service characteristic information corresponding to each proxy service, and the data merging is used for integrating service characteristic information with an importance level higher than the preset level in the M pieces of first service characteristic information corresponding to each proxy service.
Optionally, the queuing processing device for batch proxy service further includes: the sample acquisition module is used for acquiring a sample data set, wherein the sample data set comprises service data of N historical proxy services, and N is a positive integer; the label setting module is used for setting a target label for each history agent service in the N history agent services to obtain a target label corresponding to each history agent service, wherein the target label corresponding to each history agent service is used for representing an actual service queuing channel corresponding to the history agent service; the iterative training module is used for inputting the service data of the N historical proxy services and the target labels corresponding to each historical proxy service into the initial neural network model, and carrying out iterative training on the initial neural network model to obtain the target neural network model.
Optionally, the iterative training module includes: the first execution unit is used for executing a first operation, a second operation and a third operation on the initial neural network model according to the service data of the N historical proxy services and the target label corresponding to each historical proxy service, wherein the first operation is used for inputting the service data of the N historical proxy services into the initial neural network model to obtain a prediction result output by the initial neural network model, the prediction result is used for representing a service queuing channel corresponding to each historical proxy service, the second operation is used for determining an error value of the prediction result output by the initial neural network model according to the service queuing channel corresponding to each historical proxy service and the actual service queuing channel corresponding to each historical proxy service, and the third operation is used for judging whether the error value is smaller than a preset threshold value or not and adjusting model parameters of the initial neural network model based on the error value under the condition that the error value is larger than or equal to the preset threshold value; the second execution unit is used for sequentially executing a plurality of first operations, a plurality of second operations and a plurality of third operations on the initial neural network model until the error value of the prediction result output by the adjusted initial neural network model is smaller than a preset threshold value, and determining the adjusted initial neural network model as a target neural network model.
Optionally, the queuing processing device for batch proxy service further includes: the first determining module is used for determining a target duration corresponding to each proxy service according to the service queuing channel corresponding to each proxy service after queuing the batch of the proxy services based on the service queuing channel corresponding to each proxy service, wherein the target duration is estimated duration for processing each proxy service; and the display module is used for displaying the target duration in the target page corresponding to each proxy service.
Optionally, the service queuing channel is one of: the system comprises a queue inserting channel, a queuing channel, a reservation channel and a real-time channel, wherein the queue inserting channel is used for queuing the proxy service with the emergency degree of a first level, the queuing channel is used for queuing the proxy service with the emergency degree of a second level, the real-time channel is used for queuing the proxy service with the emergency degree of a third level, the reservation channel is used for queuing the proxy service with the contracted service processing time, the emergency degree of the first level is higher than that of the second level, and the emergency degree of the third level is higher than that of the first level.
Example 3
According to another aspect of the embodiments of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the queuing processing method of bulk proxy service described above at runtime.
Example 4
According to another aspect of the embodiments of the present application, there is also provided an electronic device, where fig. 5 is a schematic diagram of an alternative electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes one or more processors, and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform the queuing processing method for batch proxy services described above.
As shown in fig. 5, an embodiment of the present application provides an electronic device, where the device includes a processor, a memory, and a program stored on the memory and executable on the processor, and when the processor executes the program, the following steps are implemented:
acquiring service data of batch of to-be-processed proxy services, wherein each proxy service in the batch of to-be-processed proxy services is a fund transaction service which is processed by a client entrusted financial institution; inputting service data into a target neural network model to obtain an output result output by the target neural network model, wherein the output result is used for representing a service queuing channel corresponding to each proxy service, the target neural network model is used for extracting target characteristic information in the service data of each proxy service and matching the corresponding service queuing channel for each proxy service according to the target characteristic information corresponding to each proxy service, and the target characteristic information corresponding to each proxy service is used for representing the emergency degree of processing the proxy service; and queuing the batch of the proxy services based on the service queuing channels corresponding to each proxy service.
Optionally, the processor when executing the program further implements the following steps: extracting service characteristics of service data through an input layer in a target neural network model to obtain M pieces of first service characteristic information corresponding to each proxy service, wherein M is a positive integer; inputting M pieces of first service characteristic information corresponding to each proxy service into a hidden layer of a target neural network model to obtain K pieces of second service characteristic information corresponding to each proxy service output by the hidden layer, wherein each piece of second service characteristic information in the K pieces of second service characteristic information corresponding to each proxy service is first service characteristic information with an important grade higher than a preset grade in the M pieces of first service characteristic information corresponding to the proxy service, and K is a positive integer smaller than or equal to M; determining target feature information corresponding to each proxy service from K pieces of second service feature information corresponding to each proxy service through an output layer in a target neural network model; and matching the corresponding service queuing channels for each proxy service according to the target characteristic information corresponding to each proxy service through the output layer to obtain an output result.
Optionally, the processor when executing the program further implements the following steps: and carrying out data processing on M pieces of first service characteristic information corresponding to each proxy service through a hidden layer in the target neural network model to obtain K pieces of second service characteristic information corresponding to each proxy service, wherein the data processing at least comprises data screening and data merging, the data screening is used for filtering service characteristic information with an importance level lower than a preset level in the M pieces of first service characteristic information corresponding to each proxy service, and the data merging is used for integrating service characteristic information with an importance level higher than the preset level in the M pieces of first service characteristic information corresponding to each proxy service.
Optionally, the processor when executing the program further implements the following steps: acquiring a sample data set, wherein the sample data set comprises service data of N historical proxy services, and N is a positive integer; setting a target label for each history agent service in N history agent services to obtain a target label corresponding to each history agent service, wherein the target label corresponding to each history agent service is used for representing an actual service queuing channel corresponding to the history agent service; and inputting the service data of the N historical proxy services and the target labels corresponding to each historical proxy service into an initial neural network model, and performing iterative training on the initial neural network model to obtain a target neural network model.
Optionally, the processor when executing the program further implements the following steps: according to the service data of N historical proxy services and the target label corresponding to each historical proxy service, performing a first operation, a second operation and a third operation sequentially on the initial neural network model, wherein the first operation is used for inputting the service data of the N historical proxy services into the initial neural network model to obtain a prediction result output by the initial neural network model, the prediction result is used for representing a service queuing channel corresponding to each historical proxy service, the second operation is used for determining an error value of the prediction result output by the initial neural network model according to the service queuing channel corresponding to each historical proxy service and an actual service queuing channel corresponding to each historical proxy service, and the third operation is used for judging whether the error value is smaller than a preset threshold value or not and adjusting model parameters of the initial neural network model based on the error value under the condition that the error value is larger than or equal to the preset threshold value; and sequentially executing a plurality of first operations, a plurality of second operations and a plurality of third operations on the initial neural network model until the error value of the prediction result output by the adjusted initial neural network model is smaller than a preset threshold value, and determining the adjusted initial neural network model as a target neural network model.
Optionally, the processor when executing the program further implements the following steps: after queuing batch of proxy services based on the service queuing channel corresponding to each proxy service, determining a target time length corresponding to each proxy service according to the service queuing channel corresponding to each proxy service, wherein the target time length is estimated time length for processing each proxy service; and displaying the target duration in a target page corresponding to each proxy service.
Optionally, the service queuing channel is one of: the system comprises a queue inserting channel, a queuing channel, a reservation channel and a real-time channel, wherein the queue inserting channel is used for queuing the proxy service with the emergency degree of a first level, the queuing channel is used for queuing the proxy service with the emergency degree of a second level, the real-time channel is used for queuing the proxy service with the emergency degree of a third level, the reservation channel is used for queuing the proxy service with the contracted service processing time, the emergency degree of the first level is higher than that of the second level, and the emergency degree of the third level is higher than that of the first level.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of units may be a logic function division, and there may be another division manner in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in whole or in part in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (10)

1. A queuing processing method for batch proxy services, characterized by comprising the following steps:
acquiring service data of a batch of to-be-processed proxy services, wherein each proxy service in the batch is a funds transaction service that a client has entrusted a financial institution to process;
inputting the service data into a target neural network model to obtain an output result of the target neural network model, wherein the output result represents the service queuing channel corresponding to each proxy service, the target neural network model is used for extracting target feature information from the service data of each proxy service and for matching each proxy service with a corresponding service queuing channel according to that proxy service's target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service;
and queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service.
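For illustration only (not part of the claims), the three steps of claim 1 can be sketched as plain Python; the model here is a stand-in stub, and all names (`predict_channels`, `enqueue`, the `urgency` field and the 0.5 cutoff) are assumptions, not the patent's actual implementation:

```python
from collections import defaultdict

def predict_channels(services, model):
    """Step 2: feed each service's data to the model; one queuing channel per service."""
    return [model(s) for s in services]

def enqueue(services, channels):
    """Step 3: group the batch into per-channel queues."""
    queues = defaultdict(list)
    for service, channel in zip(services, channels):
        queues[channel].append(service)
    return queues

# Stub model standing in for the target neural network:
# urgency above 0.5 -> queue-insertion channel, otherwise ordinary queuing.
model = lambda s: "insert" if s["urgency"] > 0.5 else "queuing"
services = [{"id": 1, "urgency": 0.9}, {"id": 2, "urgency": 0.2}]
queues = enqueue(services, predict_channels(services, model))
```

In this sketch the trained model is just a callable; the claims leave the channel vocabulary and model internals to claims 2–7.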
2. The method of claim 1, wherein inputting the service data into the target neural network model to obtain the output result comprises:
performing feature extraction on the service data through an input layer of the target neural network model to obtain M pieces of first service feature information for each proxy service, where M is a positive integer;
inputting the M pieces of first service feature information for each proxy service into a hidden layer of the target neural network model to obtain K pieces of second service feature information for each proxy service output by the hidden layer, wherein each of the K pieces of second service feature information for a proxy service is a piece of first service feature information whose importance level is higher than a preset level among the M pieces for that proxy service, and K is a positive integer less than or equal to M;
determining the target feature information for each proxy service from its K pieces of second service feature information through an output layer of the target neural network model;
and matching each proxy service with its corresponding service queuing channel through the output layer according to the target feature information for each proxy service, thereby obtaining the output result.
3. The method of claim 2, wherein inputting the M pieces of first service feature information for each proxy service into the hidden layer of the target neural network model to obtain the K pieces of second service feature information output by the hidden layer comprises:
performing data processing on the M pieces of first service feature information for each proxy service through the hidden layer of the target neural network model to obtain the K pieces of second service feature information for each proxy service, wherein the data processing comprises at least data screening and data merging, the data screening filters out service feature information whose importance level is lower than the preset level among the M pieces of first service feature information for each proxy service, and the data merging integrates the service feature information whose importance level is higher than the preset level.
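A minimal, non-authoritative sketch of the screening-and-merging of claims 2–3, with the neural-network hidden layer replaced by plain Python for clarity; the importance scores, the preset level, and the averaging rule for merging are invented for illustration:

```python
def hidden_layer(first_features, preset_level):
    """Keep only first features above the preset importance level (data screening),
    then integrate the survivors (data merging) into K second features, K <= M."""
    kept = [f for f in first_features if f["importance"] > preset_level]  # screening
    # Illustrative merge rule: features sharing a name are averaged into one.
    merged = {}
    for f in kept:
        merged.setdefault(f["name"], []).append(f["value"])
    return [{"name": n, "value": sum(v) / len(v)} for n, v in merged.items()]

# M = 4 first features for one proxy service; assumed preset level = 0.5.
first = [
    {"name": "amount", "value": 100.0, "importance": 0.9},
    {"name": "amount", "value": 120.0, "importance": 0.8},
    {"name": "deadline", "value": 2.0, "importance": 0.7},
    {"name": "channel_pref", "value": 1.0, "importance": 0.2},  # screened out
]
second = hidden_layer(first, preset_level=0.5)  # K = 2 second features
```

In the patent the screening and merging are learned inside the hidden layer; this sketch only makes the M-in, K-out contract concrete.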
4. The method of claim 1, wherein the target neural network model is obtained by:
acquiring a sample data set, wherein the sample data set comprises service data of N historical proxy services, and N is a positive integer;
setting a target label for each of the N historical proxy services to obtain a target label for each historical proxy service, wherein the target label represents the actual service queuing channel corresponding to that historical proxy service;
and inputting the service data of the N historical proxy services and the target label of each historical proxy service into an initial neural network model, and iteratively training the initial neural network model to obtain the target neural network model.
5. The method of claim 4, wherein inputting the service data of the N historical proxy services and the target label of each historical proxy service into the initial neural network model and iteratively training the initial neural network model to obtain the target neural network model comprises:
performing a first operation, a second operation and a third operation on the initial neural network model according to the service data of the N historical proxy services and the target label of each historical proxy service, wherein the first operation inputs the service data of the N historical proxy services into the initial neural network model to obtain a prediction result output by the initial neural network model, the prediction result representing the service queuing channel predicted for each historical proxy service; the second operation determines an error value of the prediction result according to the predicted service queuing channel and the actual service queuing channel of each historical proxy service; and the third operation judges whether the error value is less than a preset threshold and, when the error value is greater than or equal to the preset threshold, adjusts the model parameters of the initial neural network model based on the error value;
and performing the first, second and third operations in sequence on the initial neural network model repeatedly until the error value of the prediction result output by the adjusted initial neural network model is less than the preset threshold, and determining the adjusted initial neural network model as the target neural network model.
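The iterative training of claims 4–5 can be sketched as a generic loop over the three operations; the misclassification-rate error metric, the threshold, and the toy one-parameter "model" and update rule below are assumptions for illustration, not the patent's actual training procedure:

```python
def train(params, samples, labels, preset_threshold, predict, update, max_iters=1000):
    """Repeat the first/second/third operations until error < preset threshold."""
    for _ in range(max_iters):
        preds = [predict(params, s) for s in samples]                      # first operation
        error = sum(p != y for p, y in zip(preds, labels)) / len(labels)   # second operation
        if error < preset_threshold:                                       # third operation:
            return params                                                  # converged
        params = update(params, samples, labels)                           # else adjust params
    return params

# Toy model: a single decision threshold on one urgency feature;
# the update simply nudges the threshold downward each iteration.
predict = lambda t, s: "insert" if s > t else "queuing"
update = lambda t, xs, ys: t - 0.1
samples, labels = [0.9, 0.2], ["insert", "queuing"]
trained = train(0.95, samples, labels, preset_threshold=0.01,
                predict=predict, update=update)
```

A real implementation would replace `update` with backpropagation over the network's weights; the loop structure (predict, score, stop-or-adjust) is what the claim specifies.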
6. The method of claim 1, wherein after queuing the batch of proxy services based on the service queuing channel corresponding to each proxy service, the method further comprises:
determining a target duration for each proxy service according to its service queuing channel, wherein the target duration is the predicted time required to process that proxy service;
and displaying the target duration on a target page corresponding to each proxy service.
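Claim 6 does not specify how the target duration is computed; one simplistic, assumed model is queue position times the channel's average handling time. Everything below (function name, the per-channel figures) is hypothetical:

```python
def predicted_wait(queue_lengths, avg_service_seconds, channel):
    """Estimate a target duration for claim 6: jobs ahead in the channel's queue
    multiplied by that channel's average handling time (assumed, simplistic model)."""
    return queue_lengths[channel] * avg_service_seconds[channel]

# 5 services ahead in the ordinary queue at 120 s each -> 600 s estimated wait,
# which would then be shown on the service's target page.
eta = predicted_wait({"queuing": 5, "insert": 1},
                     {"queuing": 120, "insert": 60},
                     "queuing")
```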
7. The method of claim 1, wherein the service queuing channel is one of: a queue-insertion channel, a queuing channel, a reservation channel and a real-time channel, wherein the queue-insertion channel queues proxy services whose urgency is of a first grade, the queuing channel queues proxy services whose urgency is of a second grade, the real-time channel queues proxy services whose urgency is of a third grade, and the reservation channel queues proxy services that carry a service processing time; the first-grade urgency is higher than the second-grade urgency, and the third-grade urgency is higher than the first-grade urgency.
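The four channels of claim 7 amount to a dispatch on urgency grade; the sketch below follows the ordering as the claim states it (third grade most urgent, then first, then second), with a reserved service identified by an explicit processing time. The field names (`urgency_grade`, `reserved_time`) and channel strings are assumptions:

```python
def assign_channel(service):
    """Map one proxy service to one of the four queuing channels of claim 7."""
    if service.get("reserved_time") is not None:
        return "reservation"   # carries a service processing time -> reservation channel
    grade = service["urgency_grade"]
    if grade == 3:
        return "real-time"     # third grade: highest urgency per the claim
    if grade == 1:
        return "insert"        # first grade: queue-insertion channel
    return "queuing"           # second grade: ordinary queuing channel
```

A reservation check before the urgency check reflects that a reserved service is routed by its appointment time regardless of grade; the claim itself does not state this precedence, so it is a design assumption here.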
8. A queuing processing apparatus for batch proxy services, characterized by comprising:
an acquisition module, configured to acquire service data of a batch of to-be-processed proxy services, wherein each proxy service in the batch is a funds transaction service that a client has entrusted a financial institution to process;
a matching module, configured to input the service data into a target neural network model to obtain an output result of the target neural network model, wherein the output result represents the service queuing channel corresponding to each proxy service, the target neural network model is used for extracting target feature information from the service data of each proxy service and for matching each proxy service with a corresponding service queuing channel according to that proxy service's target feature information, and the target feature information of each proxy service represents the urgency of processing that proxy service;
and a processing module, configured to queue the batch of proxy services based on the service queuing channel corresponding to each proxy service.
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program is arranged to execute, when run, the queuing processing method for batch proxy services according to any one of claims 1 to 7.
10. An electronic device, comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the queuing processing method for batch proxy services according to any one of claims 1 to 7.
CN202311497341.XA 2023-11-10 2023-11-10 Queuing processing method and device for batch proxy service and electronic equipment Pending CN117558079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311497341.XA CN117558079A (en) 2023-11-10 2023-11-10 Queuing processing method and device for batch proxy service and electronic equipment


Publications (1)

Publication Number Publication Date
CN117558079A 2024-02-13

Family

ID=89813912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311497341.XA Pending CN117558079A (en) 2023-11-10 2023-11-10 Queuing processing method and device for batch proxy service and electronic equipment

Country Status (1)

Country Link
CN (1) CN117558079A (en)

Similar Documents

Publication Publication Date Title
CN110992167B (en) Bank customer business intention recognition method and device
CN110009417B (en) Target customer screening method, device, equipment and computer readable storage medium
US20180374029A1 (en) Selection of customer service requests
CN113724010A (en) Customer loss prediction method and device
CN116450951A (en) Service recommendation method and device, storage medium and electronic equipment
CN115438821A (en) Intelligent queuing method and related device
US20200210907A1 (en) Utilizing econometric and machine learning models to identify analytics data for an entity
CN111095328A (en) System and method for detecting and responding to transaction patterns
CA3109764A1 (en) Prediction of future occurrences of events using adaptively trained artificial-intelligence processes
CN110245985B (en) Information processing method and device
CN115048487B (en) Public opinion analysis method, device, computer equipment and medium based on artificial intelligence
CN117558079A (en) Queuing processing method and device for batch proxy service and electronic equipment
CN114520773B (en) Service request response method, device, server and storage medium
CN114581130A (en) Bank website number assigning method and device based on customer portrait and storage medium
CN115004182A (en) Data quantization method based on determined value and estimated value
CN113065892A (en) Information pushing method, device, equipment and storage medium
CN111882339A (en) Prediction model training and response rate prediction method, device, equipment and storage medium
CN110852854A (en) Generation method of quantitative yield model and evaluation method of risk control strategy
CN112215386A (en) Personnel activity prediction method and device and computer readable storage medium
CN111932018B (en) Bank business performance contribution information prediction method and device
US11443391B2 (en) Automated employee self-service and payroll processing for charitable contributions
US20210248617A1 (en) System and method for predicting support escalation
US20230196184A1 (en) Cross-label-correction for learning with noisy labels
CN115186896A (en) User loss early warning method and device, electronic equipment and computer storage medium
CN116362895A (en) Financial product recommendation method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination