CN113408934A - Collection task allocation method, apparatus, device, storage medium, and program product - Google Patents

Collection task allocation method, apparatus, device, storage medium, and program product

Info

Publication number
CN113408934A
CN113408934A (application CN202110759472.5A)
Authority
CN
China
Prior art keywords
task
collection
company
feature
urging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110759472.5A
Other languages
Chinese (zh)
Inventor
何亚喆
陶韬
朱贇
刘炼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110759472.5A priority Critical patent/CN113408934A/en
Publication of CN113408934A publication Critical patent/CN113408934A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q 10/06311 Scheduling, planning or task assignment for a person or group
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/02 Banking, e.g. interest calculation or account maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Technology Law (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a method for allocating debt collection tasks, applicable in the field of financial systems, comprising the following steps: acquiring data on each collection company; performing a convolution operation on the data and calculating each company's capability to execute collection tasks; and allocating collection tasks according to that capability. This convolutional-neural-network-based allocation method takes a company's scale, business profile, and past completed business volume as input feature values for evaluating its collection capability, and through calculation and analysis obtains the degree of match between the company and the collection tasks to be allocated, thereby determining the task types and task volume suitable for assignment. The invention also provides a corresponding apparatus, device, storage medium, and program product.

Description

Collection task allocation method, apparatus, device, storage medium, and program product
Technical Field
The invention relates to the technical field of debt collection, and in particular to a collection task allocation method, device, equipment, storage medium, and program product.
Background
In machine learning, a convolutional neural network (CNN) is a deep feedforward artificial neural network that has been applied successfully to image recognition. Its artificial neurons respond to units within a local receptive field, which makes large-scale image processing possible. A convolutional neural network typically contains convolutional layers and pooling layers, and may be one-, two-, or three-dimensional: one-dimensional CNNs are commonly applied to sequence data; two-dimensional CNNs to image and text recognition; and three-dimensional CNNs mainly to medical imaging and video data.
At present there are two main ways of allocating collection tasks. The first is to distribute all collection tasks evenly among all the collection companies in the system. This improves the time efficiency with which tasks are completed and allows the companies to be compared against one another so that high-quality partners can be screened out.
The second, introduced to address the drawbacks of the first, allocates tasks according to the scale of each collection company: larger companies are assigned a larger number of tasks, while smaller companies receive relatively fewer.
The first method assigns an equal number of tasks to every company, but because collection tasks differ in size, the time taken to complete them is often a poor measure of a company's collection capability.
Although the second method is more effective, it considers only company scale and ignores each company's areas of expertise; a given company may achieve very different results on different kinds of collection tasks, so this method also has obvious shortcomings.
The drawback of even allocation is thus very clear: it takes account of neither a company's scale nor the business types it is skilled at, markedly lengthening the collection time of smaller companies and inevitably reducing the completion efficiency of the overall collection effort. Allocation by company scale alleviates this shortfall to a certain extent, but it still ignores factors such as each company's past business profile and business volume; companies' specializations therefore cannot be put to good use, tasks may be allocated unreasonably, and the completion time of the overall collection effort suffers.
Disclosure of Invention
The main aim of the invention is to provide a collection task allocation method, device, equipment, storage medium, and program product, intended to solve the technical problem that collection task allocation in the prior art is unreasonable and unintelligent.
To achieve this aim, the invention provides a method for allocating collection tasks, applicable in the field of finance, comprising the following steps:
acquiring data on each collection company;
performing a convolution operation on the data and calculating each company's capability to execute collection tasks;
and allocating collection tasks according to the capability to execute them.
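Purely as an illustrative sketch, not the patent's actual model, the three steps above can be mimicked with a toy one-dimensional convolution; the company names, kernel, and feature values here are all hypothetical:

```python
import numpy as np

def score_companies(company_data, kernel):
    """Step 2 (sketch): convolve each company's feature vector and
    reduce it to a single capability score (ReLU, then mean)."""
    scores = {}
    for name, feats in company_data.items():
        conv = np.convolve(feats, kernel, mode="valid")  # 1-D convolution
        scores[name] = float(np.maximum(conv, 0).mean())
    return scores

def allocate_tasks(n_tasks, scores):
    """Step 3 (sketch): split the task volume in proportion to scores."""
    total = sum(scores.values())
    return {name: round(n_tasks * s / total) for name, s in scores.items()}

# Step 1 (sketch): acquired per-company data as toy 1-D feature vectors
company_data = {"A": np.array([1.0, 2.0, 3.0, 4.0]),
                "B": np.array([0.5, 0.5, 1.0, 1.0])}
scores = score_companies(company_data, kernel=np.array([0.5, 0.5]))
plan = allocate_tasks(100, scores)  # the stronger company receives more tasks
```

The proportional split in step 3 is one plausible reading of "allocating according to capability"; the patent itself leaves the allocation rule open.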
Optionally, acquiring data on each collection company includes:
acquiring an evaluation text for each collection company;
acquiring the word feature, position feature, and part-of-speech feature of each word in the evaluation text;
and concatenating the word features, position features, and part-of-speech features to form the data as a feature matrix.
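A minimal sketch of this concatenation step; the token count and the widths of the three feature types are hypothetical dimensions chosen only for illustration:

```python
import numpy as np

# Hypothetical dimensions: 4 tokens; word, position, and POS feature widths
n_tokens, d_word, d_pos, d_tag = 4, 8, 2, 3
rng = np.random.default_rng(0)

word_feats = rng.normal(size=(n_tokens, d_word))  # word embeddings
pos_feats = rng.normal(size=(n_tokens, d_pos))    # position features
tag_feats = rng.normal(size=(n_tokens, d_tag))    # part-of-speech features

# Concatenate along the feature axis: one row per token of the evaluation text
feature_matrix = np.concatenate([word_feats, pos_feats, tag_feats], axis=1)
# feature_matrix now has shape (n_tokens, d_word + d_pos + d_tag)
```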
Optionally, the evaluation text covers the company's scale, its related business categories, its business volume, and the time it takes to complete collection tasks.
Optionally, the step of performing the convolution operation on the data and calculating each company's capability to execute collection tasks includes:
performing convolution operations on the word features, position features, and part-of-speech features with several convolution kernels of different sizes;
and obtaining the associations among the word features, position features, and part-of-speech features.
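The multi-size-kernel convolution might be sketched as follows; the kernel sizes, the averaging kernels, and the all-ones feature matrix are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

def conv1d_valid(matrix, kernel):
    """Slide a (k, d) kernel down a (n, d) feature matrix, producing
    n - k + 1 scalar feature-map values (a 'valid' 1-D convolution)."""
    n, d = matrix.shape
    k = kernel.shape[0]
    return np.array([np.sum(matrix[i:i + k] * kernel)
                     for i in range(n - k + 1)])

n, d = 6, 5
matrix = np.ones((n, d))          # toy feature matrix (6 tokens, 5 features)
feature_maps = []
for k in (2, 3, 4):               # several kernel sizes, as the text describes
    kernel = np.full((k, d), 1.0 / (k * d))  # averaging kernel (illustrative)
    feature_maps.append(conv1d_valid(matrix, kernel))
# Each map has length n - k + 1: here 5, 4, and 3 values respectively
```

Kernels of different heights capture associations over word windows of different spans, which is how the text-CNN obtains relations among the word, position, and part-of-speech features.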
Optionally, after the step of obtaining the associations among the word features, position features, and part-of-speech features, the method further comprises:
applying a nonlinear mapping to those associations;
and determining, from the result of the nonlinear mapping, the stimulus strength passed on to the subsequent neurons.
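FIG. 9 of the drawings indicates the ReLU function; under that assumption, a minimal sketch of how the nonlinear mapping gates the stimulus passed to subsequent neurons:

```python
import numpy as np

def relu(x):
    """ReLU nonlinearity: positive activations pass through unchanged,
    negative ones are suppressed to zero, which fixes the stimulus
    strength handed on to the next layer's neurons."""
    return np.maximum(x, 0.0)

activations = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
stimuli = relu(activations)
```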
Optionally, the step of performing convolution operations with several convolution kernels of different sizes further includes:
selecting the maximum value from among the features produced by each convolution kernel operation.
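This per-kernel maximum selection is the max pooling step (cf. FIG. 10); a toy sketch with hypothetical feature-map values:

```python
import numpy as np

# Feature maps produced by kernels of different sizes (toy values);
# 'valid' convolution with larger kernels yields shorter maps
feature_maps = [np.array([0.2, 0.9, 0.4]),
                np.array([0.7, 0.1]),
                np.array([0.3])]

# Max-over-time pooling: keep only the strongest response per kernel,
# compressing variable-length maps into one fixed-size vector
pooled = np.array([fm.max() for fm in feature_maps])
```

Besides compressing the data and parameter volume, this makes the pooled vector's length independent of the input text length, which is what lets a fully connected layer follow it.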
In addition, to achieve the above aim, the invention further provides a collection task allocation device, including:
an input layer module for converting the evaluation text into an input feature matrix;
a convolutional layer module for performing the convolution operations;
and a classifier that allocates collection tasks according to the capability to execute them.
Optionally, the collection task allocation device further includes an excitation layer module configured to apply a nonlinear mapping to the output of the convolutional layer module, determining the stimulus strength passed to the subsequent neurons.
Optionally, the collection task allocation device further comprises a pooling layer module for compressing the volume of data and parameters.
Optionally, the collection task allocation device further includes a fully connected layer module configured to map the obtained feature associations into the sample label space.
Optionally, the classifier is a Softmax classifier.
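A minimal sketch of a Softmax classifier of the kind named here; the logits and the three-category setup are illustrative assumptions:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: turns the fully connected layer's
    raw scores into a probability distribution over task categories."""
    z = logits - np.max(logits)  # subtract the max for stability
    e = np.exp(z)
    return e / e.sum()

# Toy logits for three hypothetical collection-task categories
probs = softmax(np.array([2.0, 1.0, 0.1]))
best = int(np.argmax(probs))  # category best matched to the company
```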
In addition, to achieve the above aim, the invention also provides an electronic device including:
one or more processors; and
a storage device storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform any of the methods described above.
Furthermore, to achieve the above aim, the invention also proposes a computer program product comprising a computer program which, when executed by a processor, implements the steps of the collection task allocation method described above.
The technical solution provided by the invention can be applied in the field of financial systems. The collection task allocation method comprises: acquiring data on each collection company; performing a convolution operation on the data and calculating each company's capability to execute collection tasks; and allocating collection tasks according to that capability. This convolutional-neural-network-based method takes a company's scale, business profile, and past completed business volume as input feature values for evaluating its collection capability, and through calculation and analysis obtains the degree of match between the company and the collection tasks to be allocated, thereby determining the task types and task volume suitable for assignment. Convolution over features such as company scale, business volume, and business type yields each company's capability to execute collection tasks, and tasks are then distributed accordingly: companies with strong collection capability receive more tasks, while weaker companies receive more of the business they excel at. The overall completion rate and timeliness of collection tasks are therefore higher than under the general allocation methods in current use.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed to describe them are introduced briefly below. The drawings described here represent only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the system architecture of a hardware operating environment according to an embodiment of the invention;
FIG. 2 is a schematic structural diagram of the collection task allocation device in FIG. 1;
FIG. 3 is a schematic diagram of an electronic device;
FIG. 4 is a schematic flowchart of an embodiment of the collection task allocation method provided by the invention;
FIG. 5 is a schematic flowchart of another embodiment of the collection task allocation method provided by the invention;
FIG. 6 is a schematic flowchart of another embodiment of the collection task allocation method provided by the invention;
FIG. 7 is a schematic flowchart of another embodiment of the collection task allocation method provided by the invention;
FIG. 8 is a schematic diagram of the convolution process in the collection task allocation method provided by the invention;
FIG. 9 shows the ReLU function used in the collection task allocation method provided by the invention;
FIG. 10 is a schematic diagram of the max pooling process in the collection task allocation method provided by the invention.
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings. The technical solutions are described clearly and completely; evidently, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, the construction is intended in the sense one having skill in the art would understand it (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). The same applies to constructions analogous to "at least one of A, B, or C, etc."
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
It should be noted that, if directional indication is involved in the embodiment of the present invention, the directional indication is only used for explaining the relative positional relationship, the motion situation, and the like between the components in a certain posture, and if the certain posture is changed, the directional indication is changed accordingly.
In addition, if descriptions of "first", "second", etc. appear in an embodiment of the invention, they serve descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Throughout, "and/or" covers three parallel cases: "A and/or B", for example, includes A alone, B alone, and A together with B. The technical solutions of the embodiments may be combined with one another, but only where a person skilled in the art can realize the combination; when technical solutions contradict each other or a combination cannot be realized, that combination should be considered absent and outside the protection scope of the invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected", and "coupled" are to be construed broadly: as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; and as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
In view of this, the present invention provides a collection task allocation method, device, equipment, storage medium, and program product, intended to solve the technical problem that collection task allocation in the prior art is unreasonable and unintelligent.
As shown in fig. 1, the system architecture 100 of this embodiment may include a collection task allocation device 101, a network 102, and a server 103. The network 102 provides a communication link between the collection task allocation device 101 and the server 103 and may include various connection types, such as wired or wireless communication links or fiber-optic cables. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied, offered to help those skilled in the art understand the technical content of the disclosure; it does not mean that the embodiments cannot be applied to other devices, systems, environments, or scenarios.
It should be noted that the collection task allocation method provided by the embodiments of the present disclosure may be executed by the server 103; correspondingly, the collection task allocation device 101 may be disposed in the server 103. Alternatively, the method may be executed by a server or server cluster different from the server 103 that can communicate with the collection task allocation device 101 and/or the server 103, and the device 101 may likewise be disposed in such a server or server cluster. Alternatively again, the method may be executed partly by the server 103 and partly by the collection task allocation device 101, in which case the device 101 may accordingly be partly disposed in the server 103.
It should be understood that the numbers of collection task allocation devices 101, networks 102, and servers 103 in fig. 1 are merely illustrative; there may be any number of each, according to implementation needs.
Fig. 2 illustrates that the collection task allocation device 101 according to the embodiment of the present disclosure includes: an input layer module 104, a convolutional layer module 105, an excitation layer module 106, a pooling layer module 107, a fully connected layer module 108, and a classifier 109.
It should be noted that the parts of the collection task allocation device 101 in the embodiments of the present disclosure correspond to the parts of the collection task allocation method; their implementation details and technical effects are the same and are not repeated here. Fig. 2 schematically shows a block diagram of a system adapted to implement the method described above; it takes the collection task allocation device 101 as an example and imposes no limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 3, fig. 3 is a schematic structural diagram of an electronic device 1800 in a hardware operating environment according to an embodiment of the present invention. As shown in fig. 3, the electronic device 1800 may include: a processor 1801, which may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. The processor 1801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 1801 may also include onboard memory for caching purposes. The processor 1801 may include a single processing unit or multiple processing units for performing the different actions of the method flows in accordance with embodiments of the present disclosure.
In the RAM1803, various programs and data necessary for the operation of the control device 1800 of the text-to-flow method are stored. A processor 1801 and a memory unit 3, the memory unit 3 including a ROM 1802 and a RAM1803 being connected to each other by a bus 1804. The processor 1801 performs various operations of the method flows according to embodiments of the present disclosure by executing programs in the ROM 1802 and/or the RAM 1803. Note that the programs may also be stored in one or more memories other than ROM 1802 and RAM 1803. The processor 1801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 1800 may also include an input/output (I/O) interface 1805, which is likewise connected to the bus 1804. The electronic device 1800 may further comprise one or more of the following components connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output portion 1807 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage portion 1808 including a hard disk and the like; and a communication portion 1809 including a network interface card such as a LAN card or a modem. The communication portion 1809 performs communication processing via a network such as the internet. A drive 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1810 as necessary, so that a computer program read from it can be installed into the storage portion 1808 as needed. The communication portion 1809 implements connection communication among these components and supports various connection types, such as wired or wireless communication links or fiber-optic cables. The input/output (I/O) interface 1805 may also include a standard wired interface, such as a USB interface, or a wireless interface.
The electronic device 1800 shown in fig. 3 further comprises a network interface, mainly used for connecting to the background server 103 and exchanging data with it, and a user interface, mainly used for connecting user equipment. Through the processor 1801, the electronic device 1800 calls the control program of the collection task allocation method stored in the memory and executes the steps of the collection task allocation method provided by the embodiments of the present invention.
Those skilled in the art will appreciate that the configuration shown in fig. 3 does not limit the electronic device 1800, which may include more or fewer components than shown, combine some components, or arrange the components differently.
Based on the above hardware structure, embodiments of the collection task allocation method of the present invention are provided.
Referring to fig. 4, fig. 4 is a schematic flowchart of an embodiment of the collection task allocation method according to the present invention. In one embodiment, the collection task allocation method includes the following steps:
s10: and acquiring data of each collection urging company.
S20: and carrying out convolution operation on the data, and calculating the capacity of each company for executing the collection task.
S30: and allocating the collection urging task according to the capacity of executing the collection urging task.
It should be noted that a company's collection capability is evaluated mainly through a convolutional neural network. The company's size, business volume, business types, and past business completion amounts are used as feature values in a convolution computation that yields the company's collection-task performance. Collection tasks are then allocated according to each company's collection capability: a company with strong capability can be assigned more tasks, and a company with weaker capability can be assigned more of the business it excels at. The overall completion rate and timeliness of the collection tasks are therefore higher than with the general allocation methods currently in use.
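The three-step flow S10-S30 can be sketched as follows. This is an illustrative sketch only: every function and field name in it (`allocate_collection_tasks`, `score_fn`, and so on) is hypothetical rather than taken from the patent, and the toy scoring function stands in for the convolutional neural network the patent actually uses.

```python
# Hypothetical sketch of the S10-S30 flow; names and scoring are placeholders.

def allocate_collection_tasks(companies, tasks, score_fn):
    """Assign more tasks to companies with a higher predicted collection capability."""
    # S10/S20: score each company's capability to execute collection tasks
    scores = {c["name"]: score_fn(c) for c in companies}
    total = sum(scores.values())
    # S30: allocate task counts in proportion to the capability scores
    return {name: round(len(tasks) * s / total) for name, s in scores.items()}

companies = [{"name": "A", "size": 3}, {"name": "B", "size": 1}]
alloc = allocate_collection_tasks(companies, list(range(8)), lambda c: c["size"])
print(alloc)  # the stronger company A receives more tasks than B
```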
Referring to fig. 5, fig. 5 is a schematic flowchart of an embodiment included in step S10, and in an embodiment, step S10 includes the following steps:
step S11: and obtaining the evaluation text of each gathering company.
Step S12: and acquiring word characteristics, position characteristics and part-of-speech characteristics of each word in the evaluation text.
Step S13: concatenating the word features, the location features, and the part-of-speech features to form the data comprised of a feature matrix.
It should be noted that the input layer module 104 is mainly responsible for feeding the evaluation or introduction of a company's collection capability into the model as training data. The evaluation mainly covers the company size, the company's related business categories, the company's business volume, and the time the company takes to complete collection tasks.
First, data processing is carried out: the Chinese text is segmented into words using an open-source tool, and the text in the training data is then converted into word-vector representations using word2vec. Sample data containing these features is converted into a matrix; the word vectors carrying the various features are concatenated and fed into the model through the input layer for training. The input is formed as follows:
x_i = [S(w); L(w); A(w)] (1)
S = [x_1, x_2, …, x_n]^T (2)
In the above formulas (1) and (2), S(w) represents the word feature of each word in the sentence, L(w) represents the position feature of each word in the sentence text, and A(w) represents the part-of-speech feature of each word. The three features are concatenated to form the complete feature vector x_i of a word, and S is the feature matrix of the whole sentence. The sentence matrix is used as the input of the model. All of the above operations are completed in the input layer module 104.
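A minimal sketch of formulas (1) and (2) — concatenating the word, position, and part-of-speech features of each word and stacking them into the sentence matrix S — might look as follows. The dimensions and random vectors are placeholders; in the patent's model the word features come from word2vec.

```python
import numpy as np

# Toy dimensions; real word2vec embeddings would replace these random vectors.
d_word, d_pos, d_tag = 4, 2, 2   # word / position / part-of-speech feature sizes
words = ["company", "completes", "tasks", "quickly"]

rng = np.random.default_rng(0)
S_w = {w: rng.normal(size=d_word) for w in words}   # word features S(w)
L_w = {w: rng.normal(size=d_pos) for w in words}    # position features L(w)
A_w = {w: rng.normal(size=d_tag) for w in words}    # part-of-speech features A(w)

# Formula (1): x_i = [S(w); L(w); A(w)] — concatenate the three features
X = [np.concatenate([S_w[w], L_w[w], A_w[w]]) for w in words]
# Formula (2): S = [x_1, ..., x_n]^T — stack the rows into the sentence matrix
S = np.stack(X)
print(S.shape)  # (4, 8): n words × (d_word + d_pos + d_tag)
```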
Referring to fig. 6, fig. 6 is a schematic flowchart of an embodiment included in step S20, and in an embodiment, step S20 includes the following steps:
step S21: and performing convolution operation on the word feature, the position feature and the part of speech feature by using a plurality of convolution kernels with different sizes.
Step S22: and obtaining the association among the word characteristics, the position characteristics and the part of speech characteristics.
Referring to fig. 7, fig. 7 is a schematic flowchart of an embodiment included in step S22, and in an embodiment, step S22 includes the following steps:
step S221: and carrying out nonlinear mapping on the association among the word characteristics, the position characteristics and the part of speech characteristics.
Step S222: and determining the stimulation size transmitted to the subsequent neuron according to the result of the nonlinear mapping.
Further, step S21 includes:
step S211: the maximum value is selected among the features of each convolution kernel operation.
It should be noted that, referring to fig. 8, when the convolution operation module 105 operates, a square matrix of size F × F is given, called a filter or convolution kernel; the size of this matrix is also called the receptive field. The depth d of the filter is kept consistent with the depth of the input layer, giving a filter of size F × F × d. In practice, different models use different numbers of filters, denoted K; each of the K filters contains d matrices of size F × F and is used to compute an output matrix. An input of a given size and a filter of a given size, together with a few additional parameters, determine the size of the output matrix. In the present model, the input text matrix is one-dimensional, so the convolution kernel is also a one-dimensional matrix.
This layer performs convolution operations using convolution windows, each having a convolution kernel with weights W, to learn data features. The model uses several convolution kernels of different sizes, which guarantees that more associations among the features can be learned. All strides are set to 1, and features beyond the boundary are padded, with all padding values set to zero vectors. The kernel weights start from random initial values and a random initial bias matrix, which are adjusted toward the optimum during training. After a series of convolution operations with sliding windows of different sizes, the associations and weights among the various features can be extracted.
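The sliding-window convolution with several kernel sizes described above can be illustrated with a minimal sketch. The kernels here use toy uniform weights and no boundary padding, whereas the model pads out-of-boundary positions with zero vectors and learns the weights during training.

```python
import numpy as np

def conv1d_valid(seq, kernel):
    """Slide a 1-D kernel over seq with stride 1 ('valid' positions only, for brevity)."""
    k = len(kernel)
    return np.array([np.dot(seq[i:i + k], kernel) for i in range(len(seq) - k + 1)])

seq = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Several kernel sizes at once, as the model does; toy averaging weights here
outputs = {size: conv1d_valid(seq, np.ones(size) / size) for size in (2, 3)}
print(outputs[2])  # window averages from the size-2 kernel
print(outputs[3])  # window averages from the size-3 kernel
```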
The mathematical expression of the convolutional layer module 105 is:
y = f( ∑_{n=1}^{N} ∑_{m=1}^{M} ω_{n,m} · u_{n,m} + b ) (3)
In the above formula, f(x) represents the activation function; b is the bias; ω_{n,m} represents the weight at the corresponding position of the convolution kernel; N and M are the length and width of the convolution kernel; and u denotes the output of the previous layer.
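Formula (3) for a single output position might be implemented as the following sketch. The numbers are toy values; the real kernel weights ω and bias b are learned during training.

```python
import numpy as np

def conv_output(u, w, b, f=lambda x: np.maximum(0, x)):
    """One output value of formula (3): y = f(sum over n,m of w[n,m]*u[n,m] + b)."""
    return f(np.sum(w * u) + b)

u = np.array([[1.0, 2.0], [3.0, 4.0]])      # patch u of the previous layer's output
w = np.array([[0.5, -0.5], [0.25, 0.25]])   # kernel weights omega (toy values)
y = conv_output(u, w, b=0.1)
print(y)  # f(0.5 - 1.0 + 0.75 + 1.0 + 0.1) = 1.35
```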
The function of the excitation layer module 106 is to perform a nonlinear mapping on the output of the convolutional layer module 105 so as to determine the magnitude of stimulation transmitted to subsequent neurons. The activation function adopted in the model is ReLU, which converges quickly and has a simple gradient computation. When the same feature of different samples passes through the neural network built from ReLU units, it flows along different paths (where ReLU's activation is 0 the path is blocked; where the activation equals the input itself the path is open), so the final output space is a nonlinear transformation of the input space. Referring to fig. 9, the mathematical expression of the ReLU function is:
f(x)=max(0,x) (4)
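A direct transcription of formula (4), applied element-wise:

```python
import numpy as np

def relu(x):
    """Formula (4): f(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

activations = relu(np.array([-2.0, -0.5, 0.0, 1.5]))
print(activations)  # negative inputs are blocked (mapped to 0); positives pass through
```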
A pooling layer module 107 sits between successive convolutional layer modules 105 to compress the amount of data and the number of parameters, reduce overfitting, and facilitate later optimization. The common method is the max-pooling strategy: only the highest-scoring value is retained by the pooling layer and all other feature values are discarded, meaning only the strongest response of each feature is kept while the weaker ones are dropped. This converts a one-dimensional feature array into a single value, reducing the number of parameters for the subsequent fully connected layer. It also converts variable-length input into fixed-length input, so the number of fully connected layer neurons can be fixed in advance.
Referring to fig. 10, the max-pooling strategy selects the maximum value from the features learned by each convolution kernel, extracting the most effective features. This is a dimensionality-reduction method that alleviates oversized outputs, and discarding data in this way is also an effective means of learning a higher-order representation of the source data. If the original input shifts slightly in position across similar but distinct input samples, the max-pooling layer will still output similar content.
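The max-over-time pooling described above — keeping only the strongest activation of each kernel's feature map, which also turns variable-length maps into a fixed-length vector — can be sketched as:

```python
import numpy as np

def max_over_time(feature_maps):
    """Keep only the strongest activation from each convolution kernel's feature map."""
    return np.array([fm.max() for fm in feature_maps])

# Feature maps of different lengths, e.g. produced by kernels of different sizes
maps = [np.array([0.1, 0.9, 0.3]), np.array([0.7, 0.2])]
pooled = max_over_time(maps)
print(pooled)  # fixed-length output regardless of the input lengths
```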
Each node of the fully connected layer module 108 is connected to all nodes of the previous layer module, integrating the previously extracted features. Because of this full connectivity, the fully connected layer module typically also has the most parameters. Within the whole convolutional neural network, the fully connected layer module maps the "distributed feature representation" learned by the preceding modules to the sample label space. Random initial weights and random initial bias vectors are set in the model, and the Softmax classifier assigns an appropriate collection-capability value to each company.
The model uses convolution kernels of different sizes to simultaneously compute the associations and weights among the feature values, and predicts the error between the computed and actual time the company takes to complete collection tasks. The training parameters in the model are adjusted as this error changes until the number of iterations reaches the set threshold, at which point the model's training parameters are taken as the optimal parameters.
The last layer reduces the length-50 vector to a length-5 vector, because there are five collection-capability categories to predict (levels ranging from "strong" through "normal" to "weak"). The dimensionality reduction here is performed by another matrix multiplication. Softmax is used as the classification function; it forces the five output values of the neural network to sum to one, so each output value represents the probability of the corresponding category.
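The final projection and Softmax step might be sketched as follows. The random length-50 feature vector and 5×50 weight matrix are placeholders for the values the trained network would produce.

```python
import numpy as np

def softmax(z):
    """Classification function: forces the outputs to be positive and sum to one."""
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(42)
features = rng.normal(size=50)    # stand-in for the length-50 vector from pooling
W = rng.normal(size=(5, 50))      # the final matrix multiplication: 50 -> 5
probs = softmax(W @ features)
print(probs.shape, probs.sum())   # five class probabilities summing to 1
```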
In summary, the technical solution provided by the present invention can be applied in the field of financial systems. The collection task allocation method includes the following steps: acquiring data of each collection company; performing a convolution operation on the data and calculating each company's capability to execute collection tasks; and allocating collection tasks according to that capability. This convolutional-neural-network-based allocation method takes the company's size, business situation, and past business completion amounts as feature values of the company's collection-capability evaluation, and through computation and analysis obtains the degree of match between the company and the collection tasks to be allocated, thereby determining the task types and task volume suitable for assignment. Convolution is computed over features such as company size, business volume, and business type to obtain a company's capability to execute collection tasks; tasks are then allocated accordingly: companies with strong capability can be assigned more tasks, and companies with weaker capability can be assigned more of the business they excel at. The overall completion rate and timeliness of collection tasks are therefore higher than with the general allocation methods currently in use.
The present disclosure also provides a computer-readable storage medium, which may be embodied in the apparatus/system described in the above embodiments; or may exist separately and not be incorporated into the device/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM and/or RAM and/or one or more memories other than ROM and RAM described above.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a processor, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
When the computer program product runs in a computer system, the program code causes the computer system to implement the method provided by the embodiments of the present disclosure. The computer program, when executed by the processor 1801, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication section 1809, and/or installed from a removable media 1811. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable media 1811. The computer program, when executed by the processor 1801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In accordance with embodiments of the present disclosure, program code for the computer programs provided by the embodiments may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, C, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a Read Only Memory (ROM)/Random Access Memory (RAM), a magnetic disk, an optical disk), and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention. The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A collection task allocation method, characterized by comprising the following steps:
acquiring data of each collection company;
performing a convolution operation on the data and calculating each company's capability to execute collection tasks;
and allocating collection tasks according to the capability to execute collection tasks.
2. The collection task allocation method according to claim 1, wherein the acquiring of the data of each collection company comprises:
acquiring the evaluation text of each collection company;
acquiring word characteristics, position characteristics and part-of-speech characteristics of each word in the evaluation text;
concatenating the word features, the position features, and the part-of-speech features to form the data, which consists of a feature matrix.
3. The method of claim 2, wherein the evaluation text includes a company size, company related business categories, a company business volume, and the time the company takes to complete collection tasks.
4. The collection task allocation method according to claim 2, wherein the step of performing a convolution operation on the data and calculating each company's capability to execute collection tasks comprises:
performing convolution operation on the word feature, the position feature and the part-of-speech feature by using a plurality of convolution kernels with different sizes;
and obtaining the association among the word characteristics, the position characteristics and the part of speech characteristics.
5. The collection task allocation method of claim 4, wherein the step of obtaining the associations among the word feature, the position feature, and the part-of-speech feature comprises:
performing nonlinear mapping on the association among the word features, the position features and the part-of-speech features;
and determining the stimulation size transmitted to the subsequent neuron according to the result of the nonlinear mapping.
6. The collection task allocation method according to claim 4, wherein the step of performing convolution operations on the word feature, the position feature, and the part-of-speech feature using convolution kernels of different sizes comprises:
the maximum value is selected among the features of each convolution kernel operation.
7. A collection task allocation apparatus, comprising:
the input layer module is used for converting the evaluation text into an input feature matrix;
the convolution layer module is used for carrying out convolution operation;
and a classifier for allocating collection tasks according to the capability to execute collection tasks.
8. The apparatus according to claim 7, further comprising an excitation layer module for performing a nonlinear mapping on the output of the convolutional layer module to determine the magnitude of stimulation transmitted to subsequent neurons.
9. The collection task allocation device of claim 7, further comprising a pooling layer module for compressing the amount of data and the number of parameters.
10. The collection task allocation device of claim 7, further comprising a fully connected layer module for mapping the obtained feature associations to a sample label space.
11. The collection task allocation device of claim 7, wherein the classifier is a Softmax classifier.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
13. A computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 6.
14. A computer program product, comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 6.
CN202110759472.5A 2021-07-05 2021-07-05 Urging task allocation method, urging task allocation device, urging task allocation apparatus, storage medium, and program product Pending CN113408934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110759472.5A CN113408934A (en) 2021-07-05 2021-07-05 Urging task allocation method, urging task allocation device, urging task allocation apparatus, storage medium, and program product


Publications (1)

Publication Number Publication Date
CN113408934A true CN113408934A (en) 2021-09-17

Family

ID=77681270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110759472.5A Pending CN113408934A (en) 2021-07-05 2021-07-05 Urging task allocation method, urging task allocation device, urging task allocation apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN113408934A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740148A (en) * 2018-12-16 2019-05-10 北京工业大学 A kind of text emotion analysis method of BiLSTM combination Attention mechanism
CN109934433A (en) * 2017-12-15 2019-06-25 航天信息股份有限公司 A kind of personnel ability's appraisal procedure, device and cloud service platform
CN110618855A (en) * 2018-12-25 2019-12-27 北京时光荏苒科技有限公司 Task allocation method and device, electronic equipment and storage medium
CN111104513A (en) * 2019-12-13 2020-05-05 中山大学 Short text classification method for game platform user question-answer service
CN111292007A (en) * 2020-02-28 2020-06-16 中国工商银行股份有限公司 Supplier financial risk prediction method and device
CN111539606A (en) * 2020-04-14 2020-08-14 支付宝(杭州)信息技术有限公司 Service processing method, device and equipment
CN112613324A (en) * 2020-12-29 2021-04-06 北京中科闻歌科技股份有限公司 Semantic emotion recognition method, device, equipment and storage medium
CN112668329A (en) * 2020-12-28 2021-04-16 广州博士信息技术研究院有限公司 Policy text classification method based on machine learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
陈珂; 梁斌; 柯文德; 许波; 曾国超: "Sentiment analysis of Chinese microblogs based on multi-channel convolutional neural networks", Journal of Computer Research and Development, no. 05, 15 May 2018 (2018-05-15), pages 1 - 13 *
陶永才; 张鑫倩; 石磊; 卫琳: "Research on multi-feature fusion methods for short-text sentiment analysis", Journal of Chinese Computer Systems, no. 06, 29 May 2020 (2020-05-29) *

Similar Documents

Publication Publication Date Title
CN110674880B (en) Network training method, device, medium and electronic equipment for knowledge distillation
CN110880036B (en) Neural network compression method, device, computer equipment and storage medium
US10963817B2 (en) Training tree-based machine-learning modeling algorithms for predicting outputs and generating explanatory data
WO2022007823A1 (en) Text data processing method and device
CN111191791A (en) Application method, training method, device, equipment and medium of machine learning model
CN111523640B (en) Training method and device for neural network model
CN111582500A (en) Method and system for improving model training effect
CN107437111A (en) Data processing method, medium, device and computing device based on neutral net
CN112785005A (en) Multi-target task assistant decision-making method and device, computer equipment and medium
CN112418320A (en) Enterprise association relation identification method and device and storage medium
CN115238909A (en) Data value evaluation method based on federal learning and related equipment thereof
CN113128588A (en) Model training method and device, computer equipment and computer storage medium
US11869128B2 (en) Image generation based on ethical viewpoints
CN116229170A (en) Task migration-based federal unsupervised image classification model training method, classification method and equipment
WO2023050143A1 (en) Recommendation model training method and apparatus
CN115130573A (en) Data processing method, device, storage medium, equipment and product
WO2024114659A1 (en) Summary generation method and related device
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
WO2020042164A1 (en) Artificial intelligence systems and methods based on hierarchical clustering
WO2023045949A1 (en) Model training method and related device
CN113408934A (en) Urging task allocation method, urging task allocation device, urging task allocation apparatus, storage medium, and program product
CN116957006A (en) Training method, device, equipment, medium and program product of prediction model
CN114898184A (en) Model training method, data processing method and device and electronic equipment
CN115099988A (en) Model training method, data processing method, device and computer medium
CN115017321A (en) Knowledge point prediction method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination