CN113344415A - Deep neural network-based service distribution method, device, equipment and medium

Deep neural network-based service distribution method, device, equipment and medium

Info

Publication number
CN113344415A
Authority
CN
China
Prior art keywords
neural network
deep neural
training
data
training sample
Prior art date
Legal status
Pending
Application number
CN202110696468.9A
Other languages
Chinese (zh)
Inventor
严杨扬
程克喜
蔡灵敏
张政
晏湘涛
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202110696468.9A
Publication of CN113344415A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 - Operations research, analysis or management
    • G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q 10/06311 - Scheduling, planning or task assignment for a person or group
    • G06Q 10/063112 - Skill-based matching of a person or a group to a task
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The invention relates to intelligent decision technology and discloses a deep neural network-based service distribution method comprising the following steps: dividing acquired historical customer data into a first training sample and a second training sample; performing first training on an original deep neural network using the first training sample to obtain an initial deep neural network and a first training result; acquiring feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network; and analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network so as to distribute services to agents. The invention also relates to blockchain technology, and the resource data can be stored in blockchain nodes. The invention further provides a device, equipment and medium for deep neural network-based service distribution. The invention can solve the problem that services distributed to an agent match the agent poorly.

Description

Deep neural network-based service distribution method, device, equipment and medium
Technical Field
The present invention relates to the field of intelligent decision making technologies, and in particular, to a method and an apparatus for service allocation based on a deep neural network, an electronic device, and a computer-readable storage medium.
Background
As demand grows, companies and enterprises offer more and more services to users, and the growing volume of services requires a large number of service personnel to handle them. For example, an insurance company handles a large number of insurance businesses, which therefore need to be distributed among many business persons for processing.
Most existing service distribution schemes allocate by service level: each service and each business person is graded, and a service is assigned to a business person of the corresponding level for processing. However, business persons at the same level are often good at communicating with different customer groups, so allocating purely by level frequently results in a poor match between the service and the business person.
Disclosure of Invention
The invention provides a deep neural network-based service distribution method and apparatus and a computer-readable storage medium, and mainly aims to solve the problem that services distributed to an agent match the agent poorly.
In order to achieve the above object, the present invention provides a deep neural network-based service distribution method, including:
obtaining historical customer data of an agent, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
performing first training on a pre-constructed original deep neural network using the first training sample to obtain an initial deep neural network and a first training result;
obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network;
analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
and acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
Optionally, before the first training of the pre-constructed original deep neural network with the first training sample, the method further includes:
acquiring a deep neural network framework;
constructing a feature input layer in the deep neural network framework;
establishing a multi-layer fully connected layer after the feature input layer;
and constructing a batch standardization layer and a discarding layer among the multiple fully-connected layers to obtain the original deep neural network.
Optionally, the dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists includes:
extracting a status field of each piece of data in the historical customer data;
judging whether the status field is the same as a preset standard field; if the status field is the same as the standard field, determining that a transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the first training sample;
and if the status field is different from the standard field, determining that no transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the second training sample.
Optionally, the performing a first training on a pre-constructed original deep neural network by using the first training sample to obtain an initial deep neural network and a first training result includes:
acquiring a first data characteristic of the first training sample;
performing first training on the original deep neural network by using the first data feature;
calculating a first loss value of a training result generated by the first training by using a preset first loss function;
and adjusting parameters of the original deep neural network according to the first loss value, returning to the first-training step until the number of first trainings reaches a preset first number of times, and outputting the initial deep neural network and the first training result generated by the first training.
Optionally, the adjusting the parameter of the original deep neural network according to the first loss value includes:
calculating the update gradient of parameters in the original deep neural network by using a preset optimization algorithm according to the first loss value;
and updating the parameters of the original deep neural network according to the updating gradient.
Optionally, the allocating, according to the probability value, the service to the agent includes:
comparing the probability value with a preset probability threshold;
and selecting the services of the customers to be served whose probability values are greater than the preset probability threshold and allocating them to the agent.
Optionally, the allocating, according to the probability value, the service to the agent includes:
sorting the plurality of customers to be served in descending order of probability value to obtain a customer list;
and selecting a preset number of customers to be served from the front of the customer list in order, and distributing the services of the selected customers to the agent.
In order to solve the above problem, the present invention further provides a deep neural network-based service distribution apparatus, including:
obtaining historical customer data of an agent, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
performing first training on a pre-constructed original deep neural network using the first training sample to obtain an initial deep neural network and a first training result;
obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network;
analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
and acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the deep neural network-based service distribution method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium storing at least one instruction, where the at least one instruction is executed by a processor in an electronic device to implement the deep neural network-based service distribution method described above.
The embodiment of the invention trains the constructed network with the first training sample of concluded orders, then trains the network again with the user's feedback on the training result and the second training sample of unconcluded orders, which improves the accuracy of the network. The network then predicts the success rate with which the agent would conclude a transaction service order when serving each customer, and services are distributed to the agent according to that success rate, so that the agent is matched with the customer groups most likely to conclude orders and the matching degree between the agent and customers is improved. Therefore, the deep neural network-based service distribution method, apparatus, electronic device and computer-readable storage medium can solve the problem that services distributed to an agent match the agent poorly.
Drawings
Fig. 1 is a schematic flowchart of a deep neural network-based service allocation method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the partitioning of historical customer data according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of performing a first training according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a deep neural network-based service distribution apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the deep neural network-based service allocation method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a deep neural network-based service distribution method. The execution subject of the deep neural network-based service distribution method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the deep neural network-based service distribution method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a deep neural network-based service allocation method according to an embodiment of the present invention. In this embodiment, the method for service allocation based on a deep neural network includes:
s1, obtaining historical customer data of the seat, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists.
In the embodiment of the invention, the agent is a customer service person, such as a customer service person of an insurance company, a bank customer service person or a customer service person of a real estate company; the historical customer data is data of a plurality of historical customers served by the agent, such as the name, sex, occupation, business name and whether to reach a business order of the historical customer A.
In detail, the historical customer data of the agent may be provided by a user, or the historical customer data may be crawled from a storage area (database, block chain, network cache, etc.) for storing the historical customer data using a computer sentence (e.g., java sentence, python sentence, etc.) having a data crawling function.
In the embodiment of the invention, the historical customer data can be divided into the first training sample and the second training sample according to whether a transaction service order exists or not.
For example, the historical customer data comprises data of historical customer A, data of historical customer B and data of historical customer C, where the data of historical customer A and of historical customer B contain a transaction service order while the data of historical customer C does not. The data of historical customer A and of historical customer B are then divided into the first training sample and the data of historical customer C into the second training sample; alternatively, the data of historical customer C is divided into the first training sample, and the data of historical customer A and of historical customer B into the second training sample.
In an embodiment of the present invention, referring to fig. 2, the dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists includes:
S21, extracting the status field of each piece of data in the historical customer data;
S22, judging whether the status field is the same as a preset standard field;
if the status field is the same as the standard field, executing S23: determining that a transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the first training sample;
and if the status field is different from the standard field, executing S24: determining that no transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the second training sample.
In detail, the status field may be written into the historical customer data by a user in advance, and it marks whether each piece of data in the historical customer data contains a transaction service order.
The standard field may be used to mark the data in the historical customer data that contain a transaction service order; alternatively, the standard field may mark the data in the historical customer data that do not contain a transaction service order.
For example, when the standard field is used to mark data containing a transaction service order and the data of historical customer A contains a transaction service order, the standard field is used as the status field that marks historical customer A's data; if the data of historical customer B does not contain a transaction service order, historical customer B's data is marked with a status field different from the standard field.
In this embodiment, the status field of each piece of historical customer data is extracted and compared with the standard field, so that the historical customer data is divided into the first training sample and the second training sample, which improves the accuracy of the division.
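As an illustrative sketch of steps S21 to S24, the division could be written as follows in Python. This is only a sketch under stated assumptions: the patent does not prescribe a language or data layout, and the use of pandas, the column name status and the value DEAL are hypothetical.

```python
import pandas as pd

# Hypothetical preset standard field marking records that contain a transaction service order.
STANDARD_FIELD = "DEAL"

def split_by_status(historical_data: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Divide historical customer data into first/second training samples (S21-S24)."""
    status = historical_data["status"]           # S21: extract the status field of each record
    is_deal = status == STANDARD_FIELD           # S22: compare with the preset standard field
    first_sample = historical_data[is_deal]      # S23: records with a transaction service order
    second_sample = historical_data[~is_deal]    # S24: records without a transaction service order
    return first_sample, second_sample
```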
S2, performing first training on the pre-constructed original deep neural network by using the first training sample to obtain an initial deep neural network and a first training result.
In the embodiment of the invention, the original deep neural network is constructed with a multi-layer fully connected structure, which strengthens the network's ability to express data, increases the network's capacity, and thereby improves the accuracy of the subsequent service distribution performed with the network.
In an embodiment of the present invention, before performing the first training on the pre-constructed original deep neural network by using the first training sample, the method further includes:
acquiring a deep neural network framework;
constructing a feature input layer in the deep neural network framework;
establishing a multi-layer fully connected layer after the feature input layer;
and constructing a batch standardization layer and a discarding layer among the multiple fully-connected layers to obtain the original deep neural network.
In detail, the deep neural network framework can be preset by a user, and functions corresponding to different network layers are programmed in the deep neural network framework through computer languages such as java and python so as to realize the construction of a feature input layer, a full connection layer, a batch standardization layer and a discard layer.
The feature input layer is used for inputting data features of training data so that the network can analyze the training data. For example, when the original deep neural network is trained using a first training sample, the feature input layer inputs the data features of the first training sample.
The fully connected layers are used to express and analyse the data features input by the feature input layer according to preset weights, so as to better reveal the hidden relations among the features and obtain the training result for the training data. The multi-layer fully connected structure helps increase the capacity of the network and thus the accuracy of the training result it outputs, and the fully connected layers include the model's output layer, which outputs the model's analysis result.
For example, when the original deep neural network is trained using the first training sample, the fully connected layers express and analyse the data features of the first training sample, and the output layer among the fully connected layers outputs the first training result, which is a prediction of whether the agent and a customer conclude a transaction service order (e.g., the probability that customer service agent A concludes a transaction service order with historical customer A is 80%).
The batch standardization layer is used to standardize the data features expressed by the fully connected layers so as to alleviate vanishing gradients during network training, and it can adjust the weights of those data features according to preset weights so as to improve the gradient flow of the network.
The discarding layer, namely the Dropout layer, can temporarily discard the data features expressed by the fully connected layers according to a preset probability parameter, so as to prevent the network from overfitting when training data is scarce.
In one embodiment of the present invention, the original deep neural network includes a 7-layer structure, where the first layer is the feature input layer, the second layer is a fully connected layer with 128 neurons, the third layer is the batch standardization layer, the fourth layer is the discarding layer, the fifth layer is a fully connected layer with 64 neurons, the sixth layer is a fully connected layer with 32 neurons, and the seventh layer is a fully connected layer (the output layer) with 1 neuron; the second, fifth and sixth layers use the ReLU function as activation function, the seventh layer uses the Sigmoid function as activation function, and the probability parameter of the discarding layer is 0.3.
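A minimal sketch of this 7-layer structure, assuming Keras as the deep neural network framework (the patent does not name a specific framework) and a placeholder feature dimensionality num_features:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_original_dnn(num_features: int) -> keras.Model:
    """Original deep neural network: input, FC-128, batch norm, dropout(0.3), FC-64, FC-32, FC-1."""
    return keras.Sequential([
        layers.Input(shape=(num_features,)),       # layer 1: feature input layer
        layers.Dense(128, activation="relu"),      # layer 2: fully connected, 128 neurons, ReLU
        layers.BatchNormalization(),               # layer 3: batch standardization layer
        layers.Dropout(0.3),                       # layer 4: discarding layer, probability 0.3
        layers.Dense(64, activation="relu"),       # layer 5: fully connected, 64 neurons, ReLU
        layers.Dense(32, activation="relu"),       # layer 6: fully connected, 32 neurons, ReLU
        layers.Dense(1, activation="sigmoid"),     # layer 7: output layer, 1 neuron, Sigmoid
    ])
```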
In an embodiment of the present invention, referring to fig. 3, the performing a first training on a pre-constructed original deep neural network by using the first training sample to obtain an initial deep neural network and a first training result includes:
s31, acquiring a first data feature of the first training sample;
s32, performing first training on the original deep neural network by using the first data feature;
s33, calculating a first loss value of a training result generated by the first training by using a preset first loss function;
s34, adjusting the parameters of the original deep neural network according to the first loss value;
s35, judging whether the first training frequency reaches a preset first time;
if the number of times of the first training does not reach the preset first number, returning to the step S32;
and if the number of times of the first training reaches the preset first number of times, executing S36, outputting an initial deep neural network, and generating a first training result by the first training.
In detail, the first training sample may be analyzed through a preset intelligent model (e.g., NLP model, HMM model) to obtain first data features of the first training sample, where the first data features include, but are not limited to, a customer name, a customer gender, a customer occupation, a customer transaction name, and whether the customer has fulfilled a transaction order, which correspond to each historical customer data in the first training sample.
The network is trained by obtaining the first data characteristics of the first training sample, so that the data volume required to be processed by the network can be reduced, and the training efficiency and accuracy of the network are improved.
Further, after the first data features are input into the original deep neural network, the original deep neural network analyses the first data features to obtain predicted values for the first data features, a first loss value between the predicted values and the first training sample is calculated using a preset first loss function, and the parameters of the original deep neural network are then adjusted according to the first loss value so as to train the original deep neural network. The first loss function includes, but is not limited to, a square loss function and a logarithmic loss function.
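A sketch of this first training stage, assuming the first data features have already been extracted into a feature matrix X1 with binary labels y1 (1 = a transaction service order exists). The logarithmic loss and stochastic gradient descent are just one of the choices named in the text, and the iteration count and batch size are placeholders:

```python
import numpy as np

def first_training(model, X1: np.ndarray, y1: np.ndarray, first_number_of_times: int = 50):
    """Train the original deep neural network on the first training sample."""
    model.compile(optimizer="sgd",                 # preset optimization algorithm
                  loss="binary_crossentropy",      # logarithmic loss as the first loss function
                  metrics=["accuracy"])
    model.fit(X1, y1, epochs=first_number_of_times, batch_size=32, verbose=0)
    first_training_result = model.predict(X1)      # predicted probabilities for the first sample
    return model, first_training_result            # initial deep neural network + first result
```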
In one embodiment of the present invention, when the parameters of the original deep neural network are adjusted according to the first loss value, the parameters may be adjusted using a preset optimization algorithm, which includes, but is not limited to: a batch gradient descent algorithm, a mini-batch gradient descent algorithm, and a stochastic gradient descent algorithm.
For example, the current parameters of the original deep neural network are input into the optimization algorithm, the optimization algorithm performs an optimization calculation on the input current parameters to obtain optimized parameters, and the optimized parameters are used to update the current parameters of the original deep neural network, thereby adjusting the parameters in the network.
In an embodiment of the present invention, the adjusting parameters of the original deep neural network according to the first loss value includes:
calculating the update gradient of parameters in the original deep neural network by using a preset optimization algorithm according to the first loss value;
and updating the parameters of the original deep neural network according to the updating gradient.
When the current parameter is updated according to the update gradient, preset arithmetic operation can be carried out by using the parameter in the original deep neural network and the update gradient.
For example, if the current parameter is 10 and the preset optimization algorithm computes an update gradient of 0.1 for that parameter of the original deep neural network, the parameter can be updated to 10 × (1 + 0.1) = 11 according to the update gradient 0.1;
alternatively, the parameter may be updated to 10 + 0.1 = 10.1 according to the update gradient 0.1.
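The two update rules of this example, written out as a small sketch; the values 10 and 0.1 are the illustrative parameter and update gradient from the text, and which rule is used is a preset design choice:

```python
parameter, update_gradient = 10.0, 0.1

multiplicative_update = parameter * (1 + update_gradient)   # 10 * (1 + 0.1) = 11.0
additive_update = parameter + update_gradient                # 10 + 0.1 = 10.1
```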
And S3, obtaining feedback information of the user on the first training result, and performing second training on the initial deep neural network by using the feedback information and the second training sample to obtain a standard deep neural network.
In the embodiment of the invention, the feedback information is feedback of the user on the first training result, and the feedback information is used for correcting or confirming the first training result, so that the training samples are optimized and enriched, and a more accurate network is trained.
For example, the first training result includes the data of historical customer A in the first training sample (whether a transaction service order was concluded, with a probability value of 100% if it was and 0% if it was not) and the prediction result for historical customer A's data (the predicted probability that the order is concluded). If the difference between the prediction result and historical customer A's data is smaller than or equal to a preset probability, the feedback information is a confirmation of the prediction result; if the difference is greater than the preset probability, the feedback information is a correction of the prediction result.
In the embodiment of the present invention, the feedback information may be uploaded by a user at a client, for example, the user fills in the feedback information in a webpage used for collecting the feedback information at the client and uploads the feedback information.
Further, in the embodiment of the present invention, the feedback information and the second training sample are used to perform a second training on the initial deep neural network, so as to improve the accuracy of the initial deep neural network and obtain a standard deep neural network.
In an embodiment of the present invention, the performing a second training on the initial deep neural network by using the feedback information and the second training sample to obtain a standard deep neural network includes:
acquiring a second data characteristic of the second training sample;
performing second training on the initial deep neural network using the second data features and the feedback information;
calculating a second loss value of the training result generated by the second training by using a preset second loss function;
and adjusting the parameters of the initial deep neural network according to the second loss value, and returning to the second-training step until the number of second trainings reaches a preset second number of times, so as to obtain the standard deep neural network.
In detail, the step of obtaining the second data feature of the second training sample is consistent with the step of obtaining the first data feature of the first training sample, and is not repeated herein.
Specifically, the second loss function may be the same as the first loss function, and the preset second number may be the same as the preset first number.
In the embodiment of the invention, the feedback information and the second training sample are used for carrying out the second training on the initial deep neural network, so that the diversity of the training samples is increased, and the accuracy of the standard deep neural network obtained by training is improved.
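A sketch of this second training stage, assuming the user's feedback has been reduced to corrected or confirmed labels corrected_y1 for the first sample and that second data features X2 and labels y2 have been extracted in the same way as the first; as stated above, the second loss function and number of iterations may mirror the first:

```python
import numpy as np

def second_training(initial_model, X1, corrected_y1, X2, y2, second_number_of_times: int = 50):
    """Fine-tune the initial network with feedback-corrected labels plus the second sample."""
    X = np.concatenate([X1, X2], axis=0)            # enrich the training sample
    y = np.concatenate([corrected_y1, y2], axis=0)  # feedback corrects/confirms the labels
    initial_model.fit(X, y, epochs=second_number_of_times, batch_size=32, verbose=0)
    return initial_model                            # the standard deep neural network
```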
S4, analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served.
In the embodiment of the invention, the pre-acquired data of the plurality of customers to be served can be analyzed by the standard deep neural network so as to obtain the probability value that the agent concludes a transaction service order with each customer to be served.
In the embodiment of the present invention, the analyzing the pre-acquired data of the plurality of customers to be served using the standard deep neural network to obtain the probability value that the agent concludes a transaction service order with each customer to be served includes:
performing convolution, pooling and other processing on the data of the plurality of customers to be served to obtain customer features;
performing feature expression on the customer features using the fully connected layers of the standard deep neural network;
screening the features using the batch standardization layer and the discarding layer of the standard deep neural network to obtain screened features;
and acquiring a user portrait, performing a probability value calculation on the user portrait and the screened features using the output layer of the standard deep neural network, and determining the calculated probability value as the probability value that the agent concludes a transaction service order with the customer to be served corresponding to the screened features.
For example, there are customer A to be served with data a, customer B to be served with data b, customer C to be served with data c, and customer D to be served with data d. Convolution, pooling and other processing are performed on data a, data b, data c and data d respectively to obtain the customer features corresponding to each piece of customer data. The fully connected layers of the standard deep neural network then perform feature expression on the customer features so as to express the features corresponding to each piece of customer data in a computer-readable form (such as vectors or characters). The batch standardization layer and the discarding layer of the standard deep neural network screen the features, retaining the important customer features and filtering out the unimportant ones, which reduces the computing resources occupied by the subsequent analysis and improves its accuracy. Finally, the output layer of the standard deep neural network performs a probability value calculation on the user portrait and the screened features; the output layer contains an activation function that computes a correlation probability value between the user portrait and the screened features, and this probability value is taken as the probability value that the customer to be served corresponding to the screened features concludes a transaction service order with the agent.
In detail, the activation function includes, but is not limited to, a softmax activation function, a sigmoid activation function.
For example, the obtained probability value that the agent concludes a transaction service order with customer A to be served is 90%, with customer B to be served 75%, with customer C to be served 55%, and with customer D to be served 37%.
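A sketch of this scoring step, assuming the features of the customers to be served have been prepared in the same format as the training features; the customer identifiers are placeholders and the returned probabilities are whatever the standard network outputs:

```python
import numpy as np

def score_customers(standard_model, pending_features: np.ndarray,
                    customer_ids: list[str]) -> dict[str, float]:
    """Probability that the agent concludes a transaction service order with each customer."""
    probabilities = standard_model.predict(pending_features).reshape(-1)  # Sigmoid outputs in [0, 1]
    return dict(zip(customer_ids, probabilities.tolist()))
```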
S5, acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
In the embodiment of the present invention, the service of each customer to be served may be queried using computer program statements having a data query function (such as Python statements, Java statements, and the like), so as to obtain the service corresponding to each of the plurality of customers to be served.
In one embodiment of the present invention, the probability values may be compared with a preset probability threshold, so that the services of the plurality of customers to be served are distributed to the agent according to the comparison result.
In detail, the allocating the service to the agent according to the probability value includes:
comparing the probability value with a preset probability threshold;
and selecting the services of the customers to be served whose probability values are greater than the preset probability threshold and allocating them to the agent.
For example, there are customer A to be served, customer B to be served, customer C to be served and customer D to be served, where the probability value that the agent concludes a transaction service order with customer A to be served is 90%, with customer B to be served 75%, with customer C to be served 55%, and with customer D to be served 37%; if the preset probability threshold is 60%, the services corresponding to customer A to be served and customer B to be served are distributed to the agent.
In another embodiment of the present invention, the plurality of customers to be served may be ranked according to the probability values, and a preset number of services may be selected and allocated to the agent according to the ranking.
In detail, the allocating the service to the agent according to the probability value includes:
sorting the plurality of customers to be served in descending order of probability value to obtain a customer list;
and selecting a preset number of customers to be served from the front of the customer list in order, and distributing the services of the selected customers to the agent.
Specifically, the preset number may be the maximum number of customers the agent has received per day historically.
For example, if there are a customer a to be received, a customer B to be received, a customer C to be received, and a customer D to be received, where the probability value of the order of the business of the agent to be received to the customer a to be received is 90%, the probability value of the order of the business of the agent to be received to the customer B to be received to the customer C to be received to the customer D to be received: a waiting customer A, a waiting customer B, a waiting customer C and a waiting customer D; and when the preset number is 3, distributing the services corresponding to the customer to be met A, the customer to be met B and the customer to be met C to the seat.
The embodiment of the invention trains the constructed network with the first training sample of concluded orders, then trains the network again with the user's feedback on the training result and the second training sample of unconcluded orders, which improves the accuracy of the network. The network then predicts the success rate with which the agent would conclude a transaction service order when serving each customer, and services are distributed to the agent according to that success rate, so that the agent is matched with the customer groups most likely to conclude orders and the matching degree between the agent and customers is improved. Therefore, the deep neural network-based service distribution method can solve the problem that services distributed to an agent match the agent poorly.
Fig. 4 is a functional block diagram of a deep neural network-based service distribution apparatus according to an embodiment of the present invention.
The deep neural network-based service distribution apparatus 100 of the present invention may be installed in an electronic device. According to the implemented functions, the deep neural network-based service distribution apparatus 100 may include a data dividing module 101, a first training module 102, a second training module 103, a data analysis module 104, and a service distribution module 105. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform a fixed function, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the data dividing module 101 is configured to obtain historical customer data of an agent, and divide the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
the first training module 102 is configured to perform first training on a pre-constructed original deep neural network by using the first training sample to obtain an initial deep neural network and a first training result;
the second training module 103 is configured to obtain feedback information of the user on the first training result, and perform second training on the initial deep neural network by using the feedback information and the second training sample to obtain a standard deep neural network;
the data analysis module 104 is configured to analyze pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
the service distribution module 105 is configured to acquire the services of the plurality of customers to be served and distribute the services to the agent according to the probability values.
In detail, when used, the modules in the deep neural network-based service distribution apparatus 100 according to the embodiment of the present invention adopt the same technical means as the deep neural network-based service distribution method described with reference to fig. 1 to 3 and can produce the same technical effects, which are not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device implementing a deep neural network-based service allocation method according to an embodiment of the present invention.
The electronic device may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may further include a computer program, such as a front-end monitoring program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., executing a front end monitor program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a front-end monitoring program, but also to temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The service distribution program stored in the memory 11 of the electronic device 1 is a combination of instructions, which when executed in the processor 10, can implement:
obtaining historical customer data of an agent, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
performing first training on a pre-constructed original deep neural network using the first training sample to obtain an initial deep neural network and a first training result;
obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network;
analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
and acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
obtaining historical customer data of an agent, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
performing first training on a pre-constructed original deep neural network using the first training sample to obtain an initial deep neural network and a first training result;
obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network;
analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
and acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A deep neural network-based service distribution method, characterized by comprising the following steps:
obtaining historical customer data of an agent, and dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists;
performing first training on a pre-constructed original deep neural network using the first training sample to obtain an initial deep neural network and a first training result;
obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network using the feedback information and the second training sample to obtain a standard deep neural network;
analyzing pre-acquired data of a plurality of customers to be served using the standard deep neural network to obtain a probability value that the agent concludes a transaction service order with each customer to be served;
and acquiring the services of the plurality of customers to be served, and distributing the services to the agent according to the probability values.
2. The deep neural network-based service distribution method according to claim 1, wherein before the first training of the pre-constructed original deep neural network with the first training sample, the method further comprises:
acquiring a deep neural network framework;
constructing a feature input layer in the deep neural network framework;
establishing a multi-layer fully connected layer after the feature input layer;
and constructing a batch standardization layer and a discarding layer among the multiple fully-connected layers to obtain the original deep neural network.
3. The deep neural network-based service distribution method according to claim 1, wherein the dividing the historical customer data into a first training sample and a second training sample according to whether a transaction service order exists comprises:
extracting a status field of each piece of data in the historical customer data;
judging whether the status field is the same as a preset standard field; if the status field is the same as the standard field, determining that a transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the first training sample;
and if the status field is different from the standard field, determining that no transaction service order exists in the historical customer data, and classifying the data corresponding to the status field into the second training sample.
4. The deep neural network-based service distribution method of claim 1, wherein the performing the first training on the pre-constructed original deep neural network by using the first training sample to obtain the initial deep neural network and the first training result comprises:
acquiring a first data characteristic of the first training sample;
performing first training on the original deep neural network by using the first data feature;
calculating a first loss value of a training result generated by the first training by using a preset first loss function;
and adjusting parameters of the original deep neural network according to the first loss value, returning to the first-training step until the number of first-training iterations reaches a preset first number, and outputting the initial deep neural network and the first training result generated by the first training.
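A minimal sketch of the first-training loop recited in claim 4, assuming PyTorch, binary cross-entropy as the preset first loss function, and the OriginalDeepNeuralNetwork class from the sketch under claim 2 being in scope; the iteration count, learning rate and the all-positive label encoding of the first training sample are assumptions.

```python
import torch
import torch.nn as nn

model = OriginalDeepNeuralNetwork(num_features=32)
loss_fn = nn.BCELoss()                                      # preset first loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # preset optimization algorithm

features = torch.randn(256, 32)   # first data features of the first training sample
labels = torch.ones(256, 1)       # assumption: first sample labeled as "order closed"

FIRST_NUMBER = 100                # preset first number of training iterations
for step in range(FIRST_NUMBER):
    first_training_result = model(features)        # forward pass on the first data features
    loss = loss_fn(first_training_result, labels)  # first loss value
    optimizer.zero_grad()
    loss.backward()                                 # compute update gradients of the parameters
    optimizer.step()                                # adjust parameters according to the loss

initial_deep_neural_network = model                 # network after the first training
```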
5. The deep neural network-based service distribution method according to claim 4, wherein the adjusting the parameters of the original deep neural network according to the first loss value comprises:
calculating the update gradient of parameters in the original deep neural network by using a preset optimization algorithm according to the first loss value;
and updating the parameters of the original deep neural network according to the updating gradient.
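Continuing the claim-4 sketch, the update recited in claim 5 can be written as an explicit gradient-descent step once loss.backward() has populated the gradients; in practice this step is usually delegated to an optimizer such as the Adam instance above. The learning rate is an assumption.

```python
import torch

learning_rate = 1e-3
with torch.no_grad():
    for param in model.parameters():
        if param.grad is not None:
            param -= learning_rate * param.grad   # update each parameter along its gradient
```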
6. The deep neural network-based service distribution method according to any one of claims 1 to 5, wherein the distributing the services to the agent according to the probability values comprises:
comparing the probability value with a preset probability threshold;
and allocating to the agent the services of the customers to be served whose probability values are greater than the preset probability threshold.
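A minimal sketch of the threshold rule recited in claim 6; the probability values and the 0.5 threshold are illustrative assumptions.

```python
deal_probability = {"cust_A": 0.82, "cust_B": 0.35, "cust_C": 0.67}
PROBABILITY_THRESHOLD = 0.5   # preset probability threshold

# Assign to the agent only the customers whose deal probability exceeds the threshold.
assigned_to_agent = [cust for cust, p in deal_probability.items() if p > PROBABILITY_THRESHOLD]
print(assigned_to_agent)      # ['cust_A', 'cust_C']
```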
7. The deep neural network-based service distribution method according to any one of claims 1 to 5, wherein the distributing the services to the agent according to the probability values comprises:
sorting the plurality of customers to be served in descending order of the probability values to obtain a customer list;
and sequentially selecting a preset number of customers to be served from the front of the customer list, and distributing the services of the selected customers to the agent.
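A minimal sketch of the ranking rule recited in claim 7; the probability values and the preset number of customers per agent are illustrative assumptions.

```python
deal_probability = {"cust_A": 0.82, "cust_B": 0.35, "cust_C": 0.67, "cust_D": 0.91}
PRESET_NUMBER = 2             # preset number of customers assigned to the agent

# Sort customers by deal probability in descending order, then take the first PRESET_NUMBER.
customer_list = sorted(deal_probability, key=deal_probability.get, reverse=True)
assigned_to_agent = customer_list[:PRESET_NUMBER]
print(assigned_to_agent)      # ['cust_D', 'cust_A']
```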
8. A deep neural network-based service distribution apparatus, characterized by comprising:
the data dividing module is used for acquiring historical customer data of an agent and dividing the historical customer data into a first training sample and a second training sample according to whether a closed service order exists;
the first training module is used for carrying out first training on a pre-constructed original deep neural network by using the first training sample to obtain an initial deep neural network and a first training result;
the second training module is used for obtaining feedback information of a user on the first training result, and performing second training on the initial deep neural network by using the feedback information and the second training sample to obtain a standard deep neural network;
the data analysis module is used for analyzing pre-acquired data of a plurality of customers to be served by using the standard deep neural network to obtain, for each customer to be served, a probability value that the agent will close a service order with that customer;
and the service distribution module is used for acquiring the services of the plurality of customers to be served and distributing the services to the agent according to the probability values.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deep neural network-based service distribution method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the deep neural network-based service distribution method according to any one of claims 1 to 7.
CN202110696468.9A 2021-06-23 2021-06-23 Deep neural network-based service distribution method, device, equipment and medium Pending CN113344415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110696468.9A CN113344415A (en) 2021-06-23 2021-06-23 Deep neural network-based service distribution method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110696468.9A CN113344415A (en) 2021-06-23 2021-06-23 Deep neural network-based service distribution method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113344415A (en) 2021-09-03

Family

ID=77477914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110696468.9A Pending CN113344415A (en) 2021-06-23 2021-06-23 Deep neural network-based service distribution method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113344415A (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966097A (en) * 2015-06-12 2015-10-07 成都数联铭品科技有限公司 Complex character recognition method based on deep learning
CN107346448A (en) * 2016-05-06 2017-11-14 富士通株式会社 Identification device, trainer and method based on deep neural network
CN108108814A (en) * 2018-01-17 2018-06-01 北京中星微人工智能芯片技术有限公司 A kind of training method of deep neural network
WO2020020088A1 (en) * 2018-07-23 2020-01-30 第四范式(北京)技术有限公司 Neural network model training method and system, and prediction method and system
CN109598404A (en) * 2018-10-23 2019-04-09 平安科技(深圳)有限公司 Automatically to the method and apparatus for issuing the progress data processing of sales task list
CN109977394A (en) * 2018-12-10 2019-07-05 平安科技(深圳)有限公司 Text model training method, text analyzing method, apparatus, equipment and medium
CN109871885A (en) * 2019-01-28 2019-06-11 南京林业大学 A kind of plants identification method based on deep learning and Plant Taxonomy
CN110163655A (en) * 2019-04-15 2019-08-23 中国平安人寿保险股份有限公司 Agent distribution method, device, equipment and storage medium based on gradient boosted tree
CN110288191A (en) * 2019-05-22 2019-09-27 中国平安财产保险股份有限公司 Data matching method, device, computer equipment and storage medium
CN110766271A (en) * 2019-09-10 2020-02-07 中国平安财产保险股份有限公司 Customer service agent configuration method and device based on deep learning and computer equipment
CN111260448A (en) * 2020-02-13 2020-06-09 平安科技(深圳)有限公司 Artificial intelligence-based medicine recommendation method and related equipment
CN111582442A (en) * 2020-04-17 2020-08-25 中国科学院微电子研究所 Image identification method based on optimized deep neural network model
CN111640436A (en) * 2020-05-15 2020-09-08 北京青牛技术股份有限公司 Method for providing a dynamic customer representation of a call partner to an agent
CN111815124A (en) * 2020-06-18 2020-10-23 上海中通吉网络技术有限公司 Intelligent seat distribution method, device and equipment for express industry
CN111563152A (en) * 2020-06-19 2020-08-21 平安科技(深圳)有限公司 Intelligent question and answer corpus analysis method and device, electronic equipment and readable storage medium
CN112003989A (en) * 2020-08-26 2020-11-27 中国银行股份有限公司 Agent matching method and device, electronic equipment and computer storage medium
CN112036648A (en) * 2020-09-02 2020-12-04 中国平安财产保险股份有限公司 Model-based task allocation method and device, computer equipment and storage medium
CN112163887A (en) * 2020-09-30 2021-01-01 深圳前海微众银行股份有限公司 Electric sales system, electric sales list management method, device, equipment and storage medium
CN112668716A (en) * 2020-12-29 2021-04-16 奥比中光科技集团股份有限公司 Training method and device of neural network model

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401310A (en) * 2021-12-31 2022-04-26 苏州市瑞川尔自动化设备有限公司 Visual cloud service data optimization method and server

Similar Documents

Publication Publication Date Title
CN112541745B (en) User behavior data analysis method and device, electronic equipment and readable storage medium
CN111652280B (en) Behavior-based target object data analysis method, device and storage medium
CN113626606B (en) Information classification method, device, electronic equipment and readable storage medium
CN114663198A (en) Product recommendation method, device and equipment based on user portrait and storage medium
CN114722281B (en) Training course configuration method and device based on user portrait and user course selection behavior
CN113051480A (en) Resource pushing method and device, electronic equipment and storage medium
CN114612194A (en) Product recommendation method and device, electronic equipment and storage medium
CN114781832A (en) Course recommendation method and device, electronic equipment and storage medium
CN115081025A (en) Sensitive data management method and device based on digital middlebox and electronic equipment
CN113688923A (en) Intelligent order abnormity detection method and device, electronic equipment and storage medium
CN113516417A (en) Service evaluation method and device based on intelligent modeling, electronic equipment and medium
CN112465141A (en) Model compression method, model compression device, electronic device and medium
CN113627160B (en) Text error correction method and device, electronic equipment and storage medium
CN111652282B (en) Big data-based user preference analysis method and device and electronic equipment
CN113344415A (en) Deep neural network-based service distribution method, device, equipment and medium
CN112269875A (en) Text classification method and device, electronic equipment and storage medium
CN116403693A (en) Method, device, equipment and storage medium for dispatching questionnaire
CN113657499B (en) Rights and interests distribution method and device based on feature selection, electronic equipment and medium
CN113704407A (en) Complaint amount analysis method, device, equipment and storage medium based on category analysis
CN113822723B (en) Method, device, equipment and medium for analyzing website passenger flow based on big data
CN114968412B (en) Configuration file generation method, device, equipment and medium based on artificial intelligence
CN115225489B (en) Dynamic control method for queue service flow threshold, electronic equipment and storage medium
CN113793037B (en) Service distribution method, device, equipment and storage medium based on data analysis
CN114648368B (en) Economic information consultation system and method based on network big data
CN115146792A (en) Multitask learning model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination