CN112200664A - Repayment prediction method based on ERNIE model and DCNN model - Google Patents

Repayment prediction method based on ERNIE model and DCNN model

Info

Publication number
CN112200664A
CN112200664A (application CN202011181563.7A)
Authority
CN
China
Prior art keywords
model
ernie
prediction
training
dcnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011181563.7A
Other languages
Chinese (zh)
Inventor
李电祥 (Li Dianxiang)
陈学珉 (Chen Xuemin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Changsheng Computer Technology Co ltd
Original Assignee
Shanghai Changsheng Computer Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Changsheng Computer Technology Co ltd filed Critical Shanghai Changsheng Computer Technology Co ltd
Priority to CN202011181563.7A priority Critical patent/CN112200664A/en
Publication of CN112200664A publication Critical patent/CN112200664A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03 Credit; Loans; Processing thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Evolutionary Biology (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Technology Law (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a repayment prediction method based on an ERNIE model and a DCNN model, which makes full use of the voice data generated in the telephone collection process: a pre-trained ERNIE model generates a semantic representation of the (transcribed) voice data, and a DCNN model then determines the prediction result from that representation. Because the ERNIE model adopts four masking strategies in pre-training (word, entity, random and sentence masks), it can learn knowledge at the word level, entity level and so on, so the model captures semantic information better. The wide convolution of the DCNN model lengthens rather than shortens the sentence representation, avoiding the loss of edge information, and its dynamic pooling layer preserves the order of the original sequence, markedly improving the accuracy and reliability of repayment prediction. The application further provides a repayment prediction apparatus, device and readable storage medium based on the ERNIE model and the DCNN model, whose technical effects correspond to those of the method.

Description

Repayment prediction method based on ERNIE model and DCNN model
Technical Field
The application relates to the technical field of natural language processing, in particular to a repayment prediction method, a repayment prediction device, repayment prediction equipment and a readable storage medium based on an ERNIE model and a DCNN model.
Background
With the growing scale of credit-card business and the continued extension of lending toward lower-credit customer segments, the post-loan collection service for credit cards faces new challenges.
A traditional collection-stage repayment forecasting model takes as input a user's application data or current post-loan repayment information, and its forecasting process comprises the following steps: preprocessing the current application data to obtain the target independent-variable features of the current data; building an algorithmic model on this information; and then training a collection prediction model on a large amount of data, so as to improve prediction precision. However, this scheme does not effectively use the large amount of voice data generated during telephone conversations with the user. In the collection process, the user's voice data is authentic and valuable, and it can provide information useful for collection prediction that application data and post-loan repayment records cannot supply.
With the rapid development of machine learning and deep learning, learning useful information from large-scale dialogue data has become feasible. Text classification, an important branch of natural language processing, has advanced rapidly in recent years; it is now usually performed with deep learning, and the quality of a deep-learning text classifier depends on how well the latent semantic features of the data are extracted. Traditional machine learning extracts latent information through algorithms such as one-hot encoding, TF-IDF, LDA and LSA, but these suffer from the curse of dimensionality. Word-vector models such as word2vec, GloVe and fastText each have their advantages, but their quality also depends on the quality and quantity of the input data, and knowledge learned in one domain can generally be reused in another only after retraining.
These problems are effectively addressed by pre-training a model on a large-scale corpus and then fine-tuning it for different tasks; typical representatives are the BERT (Bidirectional Encoder Representations from Transformers) model and its generalizations. To further improve classification performance, the BERT-CNN (Convolutional Neural Network) model is a very good solution. However, this solution has at least the following two drawbacks:
(1) In the masking process, the BERT model masks only at the single-character level; correlations between words are not considered, the joint probability of the language model is estimated with bias, and the pre-training and generation processes are inconsistent, so prediction accuracy suffers.
(2) The convolution operation of the CNN model shortens the sentence, losing edge information, and its pooling operation disturbs the order of the sequence.
In summary, using the voice data generated during telephone collection calls to improve collection prediction accuracy while overcoming the above drawbacks is a problem urgently to be solved by those skilled in the art.
Disclosure of Invention
The aim of the application is to provide a repayment prediction method, apparatus, device and readable storage medium based on an ERNIE model and a DCNN model, so as to solve the problem that current collection prediction schemes have low prediction precision because they do not fully use the voice data from the telephone collection process. The specific scheme is as follows:
in a first aspect, the present application provides a repayment prediction method based on an ERNIE model and a DCNN model, including:
pre-training the ERNIE model by using a text data set;
carrying out hierarchical connection on the ERNIE model after pre-training and the DCNN model to obtain a repayment prediction model;
acquiring voice data generated in a telephone collection process, converting the voice data into text data by adopting ASR (Automatic Speech Recognition) technology, and adding a label to obtain a training sample;
training the repayment prediction model by using the training sample;
and inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
Preferably, after the ASR technology is adopted to convert the speech data into text data, the method further includes:
and correcting the text data by using a kenLM error correction module or a pycorrect error correction module.
Preferably, the pre-training of the ERNIE model using the text data set includes:
constructing a training set according to a text data set and a plurality of mask strategies, wherein the plurality of mask strategies comprise a word mask strategy, an entity mask strategy and a random mask strategy;
pre-training the ERNIE model using the training set.
Preferably, the plurality of mask policies further include a sentence mask policy, and the sentence mask policy is: and for the target sentence, randomly selecting a starting position for masking, wherein the proportion of the masking does not exceed the preset proportion of the length of the target sentence.
Preferably, the inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result includes:
inputting text data corresponding to the voice data to be tested into an ERNIE model of the trained repayment prediction model to obtain semantic representation;
and inputting the semantic representation into a DCNN model of the trained repayment prediction model to obtain a prediction result.
Preferably, the DCNN model includes a wide convolution layer, a dynamic pooling layer, a Folding layer, and a full-link layer, and the inputting the semantic representation into the DCNN model of the trained repayment prediction model to obtain a prediction result includes:
performing convolution operation on the input semantic layer representation by using the wide convolution layer, and extracting complete sentence information to obtain a convolution result, wherein the complete sentence information comprises sentence beginning information and sentence ending information;
performing pooling operation on the convolution result by using a dynamic pooling layer to obtain a pooling result;
using a Folding layer to perform dimensionality reduction on the pooling result to obtain a dimensionality reduction result;
and determining a prediction result from the dimension reduction result by using a full connection layer.
Preferably, the ERNIE model includes a text encoder and a knowledge-based encoder, and the inputting the text data corresponding to the voice data to be tested into the ERNIE model of the repayment prediction model after training to obtain semantic representation includes:
generating text information, lexical information and syntactic information of the text information according to the text data by using a text encoder;
and integrating knowledge information of the text data into the text information by using a knowledge type encoder to obtain semantic representation.
In a second aspect, the present application provides a repayment prediction apparatus based on the ERNIE model and the DCNN model, including:
a pre-training module: used for pre-training the ERNIE model with a text data set;
a model construction module: used for hierarchically connecting the pre-trained ERNIE model with the DCNN model to obtain a repayment prediction model;
a training sample generation module: used for acquiring voice data generated in a telephone collection process, converting the voice data into text data by ASR (Automatic Speech Recognition) technology, and adding labels to obtain training samples;
a model training module: used for training the repayment prediction model with the training samples;
a prediction module: used for inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
In a third aspect, the present application provides a repayment prediction apparatus based on an ERNIE model and a DCNN model, including:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the ERNIE model and DCNN model based payment prediction method as described above.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program for implementing the ERNIE model and DCNN model-based repayment prediction method as described above when executed by a processor.
The application provides a repayment prediction method based on an ERNIE model and a DCNN model, comprising: pre-training the ERNIE model with a text data set; hierarchically connecting the pre-trained ERNIE model with the DCNN model to obtain a repayment prediction model; acquiring voice data generated in a telephone collection process, converting the voice data into text data by adopting ASR (Automatic Speech Recognition) technology, and adding labels to obtain training samples; training the repayment prediction model with the training samples; and inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
Therefore, the method makes full use of the voice data generated in the telephone collection process, uses the pre-trained ERNIE model to generate a semantic representation of that data, and finally uses the DCNN model to determine the prediction result from the semantic representation. Because the ERNIE model adopts four masking strategies in pre-training (word, entity, random and sentence masks), it can learn knowledge at the word level, entity level and so on, so the model captures semantic information better and outputs a semantic representation of high reference value. The wide convolution of the DCNN model lengthens the sentence representation and avoids losing edge information, while its dynamic pooling layer preserves the order of the original sequence and extracts a corresponding amount of semantic feature information from sentences of different lengths, markedly improving the accuracy and reliability of repayment prediction.
In addition, the application also provides a repayment prediction device, equipment and a readable storage medium based on the ERNIE model and the DCNN model, and the technical effects correspond to the method, and are not repeated herein.
Drawings
For a clearer explanation of the embodiments of the present application or the technical solutions of the prior art, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of an embodiment of the repayment prediction method based on the ERNIE model and the DCNN model provided by the present application;
Fig. 2 is a refinement flowchart of S105 in the first embodiment of the repayment prediction method based on the ERNIE model and the DCNN model provided by the present application;
FIG. 3 is a structural diagram of the ERNIE model;
FIG. 4 is an internal structure diagram of the ERNIE + DCNN model;
Fig. 5 is a functional block diagram of an embodiment of the repayment prediction apparatus based on the ERNIE model and the DCNN model provided by the present application.
Detailed Description
The core of the application is to provide a repayment prediction method, apparatus, device and readable storage medium based on an ERNIE model and a DCNN model, in which the voice data generated in the telephone collection process is effectively used: a pre-trained ERNIE model generates a semantic representation of the voice data, and a DCNN model then determines the prediction result from that representation, markedly improving repayment prediction accuracy.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a first embodiment of a repayment prediction method based on an ERNIE model and a DCNN model provided by the present application is described below, where the first embodiment includes:
s101, pre-training an ERNIE model by using a text data set;
ERNIE is an enhanced semantic representation model based on knowledge masking strategies. The model learns the semantic representation of the complete concept through the mask of semantic units such as words, entities and the like. The ERNIE is mainly structurally divided into two parts, namely Transformer coding and knowledge integration, wherein the Transformer coding is used as a basic coder of a model to generate corresponding word vector representations, so that the context information of words in a text is reserved. The latter integrates knowledge at the phrase and entity level into the linguistic representation through a multi-stage knowledge masking strategy.
In the embodiment, the ERNIE model is pre-trained through the Chinese text data disclosed on the internet and the existing text data to obtain the pre-trained ERNIE model and all initialization parameters thereof. Specifically, a training set is constructed according to a text data set and a plurality of mask strategies, wherein the plurality of mask strategies comprise a word mask strategy, an entity mask strategy and a random mask strategy; the ERNIE model is then pre-trained using a training set.
The pre-trained ERNIE model can effectively capture the semantic information of the text data, which facilitates the subsequent repayment prediction.
S102, carrying out hierarchical connection on the ERNIE model after pre-training and the DCNN model to obtain a repayment prediction model;
the ERNIE model and the DCNN model are hierarchically connected to obtain a brand-new ERNIE + DCNN combined model, which is referred to as a repayment prediction model in this embodiment. In the actual prediction process, data firstly enter an ERNIE model, the output of the ERNIE model is used as input and enters a DCNN model, and a prediction result is finally obtained.
S103, acquiring voice data generated in the telephone collection process, converting the voice data into text data by adopting ASR (Automatic Speech Recognition) technology, and adding a label to obtain a training sample;
the model of the application is used for predicting whether the user repays, therefore, the labels can be divided into two types: yes (payment), no (non-payment).
Since the word list of ERNIE includes various punctuation marks, it is not necessary to perform operations such as punctuation removal on the text data. And because the model is analyzed from the word level, all the models do not need to be subjected to word segmentation operation. The data is divided into training set, verifying set and testing set. For simplicity of description, the present embodiment only introduces the training set.
Because the prediction is mainly the behavior prediction of the user, the voice data of the user is more concerned. However, in practical applications, since the voice data of the customer service also has a certain reference value, the voice data of the customer service can be taken into consideration.
S104, training the repayment prediction model by using the training sample;
before training begins, parameters of the ERNIE model and the DCNN model are initialized respectively. Wherein, the ERNIE parameter is initialized to the parameter obtained by the pre-training; the parameters of the DCNN model are initialized randomly, and the values conform to the standard normal distribution.
In addition, information such as training termination conditions and learning rates needs to be set before training is started.
And S105, inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
As a preferred embodiment, in order to improve the reference value of the text data, after converting the voice data into the text data, the method further includes: and correcting the text data by using a kenLM error correction module or a pycorrect error correction module.
In practical application, as shown in fig. 2, S105 specifically includes:
s201, inputting text data corresponding to voice data to be detected into an ERNIE model of a repayment prediction model after training is completed, and obtaining semantic representation;
s202, inputting the semantic representation into a DCNN model of the trained repayment prediction model to obtain a prediction result.
In a specific embodiment, S202 includes: performing a convolution operation on the input semantic representation using the wide convolution layer and extracting complete sentence information (including sentence-beginning and sentence-ending information) to obtain a convolution result; pooling the convolution result with the dynamic pooling layer to obtain a pooling result; reducing the dimensionality of the pooling result with the Folding layer to obtain a dimension-reduction result; and determining the prediction result from the dimension-reduction result with the fully connected layer.
In a specific embodiment, S201 includes: generating, with the text encoder, the text information together with its lexical and syntactic information from the text data; and integrating, with the knowledge-type encoder, the knowledge information of the text data into the text information to obtain the semantic representation.
The repayment prediction method based on the ERNIE model and the DCNN model provided in this embodiment makes full use of the voice data generated in the telephone collection process, generates the semantic representation of the voice data with the pre-trained ERNIE model, and finally determines the prediction result from that representation with the DCNN model. The method has at least the following three advantages:
1. In the pre-training of the ERNIE model, masking is performed not only at the character level but also at the word and entity levels, which avoids biased estimation of the language model's joint probability.
2. In constructing the model, knowledge-graph information is introduced: the multi-information entities of the knowledge graph serve as external knowledge to improve the model's language features, further fusing knowledge encoding with heterogeneous information.
3. Unlike the convolution and pooling layers of the CNN model, the DCNN model avoids the loss of edge information in the convolution operation and the disturbance of sentence order in the pooling operation.
The second embodiment of the repayment prediction method based on the ERNIE model and the DCNN model provided by the present application is described in detail below.
First, the ERNIE model is pre-trained with Chinese Wikipedia, Toutiao news, THUCNews, and the applicant's own text data set.
Pre-training the ERNIE model can be roughly divided into two parts: first, constructing the training set; second, training the ERNIE model with the constructed training set.
The specific steps for constructing the training set are: divide the text data set into sentences; segment each sentence at different granularities (characters, words, entities, etc.) with a lexical analysis tool; tokenize the segmented data to obtain the plaintext token sequence and segmentation boundaries; and then map these to id data through the word list. The serialized token data comprises five parts, separated by semicolons (an illustrative record follows the list):
(1) token_ids: the representation of the input sentence pair.
(2) sentence_type_ids: 0 or 1, indicating which sentence the token belongs to.
(3) position_ids: absolute position encoding.
(4) seg_labels: segmentation boundary information, where 0 marks the first token of a word, 1 marks a non-initial token, and -1 is a placeholder.
(5) next_sentence_label: whether the two sentences are consecutive (0 = no, 1 = yes).
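For illustration, a single serialized record with these five fields might look as follows (every id value here is made up; only the field names follow the list above):

    # One hypothetical serialized sample; all id values are illustrative.
    sample = {
        "token_ids": [1, 4963, 2051, 733, 2],      # vocabulary ids of the sentence pair
        "sentence_type_ids": [0, 0, 0, 1, 1],      # which sentence each token belongs to
        "position_ids": [0, 1, 2, 3, 4],           # absolute position encoding
        "seg_labels": [-1, 0, 1, 0, -1],           # 0 word-initial, 1 non-initial, -1 placeholder
        "next_sentence_label": 1,                  # 1 = the sentences are consecutive
    }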
15% of the tokens of a sentence are masked at random: of these, 80% are replaced with [MASK], 10% are replaced with random tokens, and the remaining 10% keep the original token. A [CLS] token is concatenated at its real position, and the sentence constructed in this way is used to train the ERNIE model to predict the 15% of masked tokens. Prior knowledge is introduced in the pre-training stage through 4 masking strategies (a sketch of the masking procedure follows the list):
(1) Basic-Level Masking: single characters are masked at random; this cannot capture higher-level semantics. I.e., the random (character-level) mask strategy described previously.
(2) Phrase-Level Masking: the input is still at the single-character level, but consecutive phrases are masked. I.e., the word mask strategy described above.
(3) Entity-Level Masking: entity recognition is performed first, and the recognized entities are then masked. I.e., the entity mask strategy described above.
(4) Span Masking: the sentence length is calculated, and a starting position of the span is selected at random for masking, with the masked proportion not exceeding 20% of the sentence length. I.e., the sentence mask strategy described above.
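A minimal sketch of the 15% rule with its 80/10/10 replacement split, and of the span selection in strategy (4), operating on token ids (the [MASK] id and helper names are assumptions, not part of the patent):

    import random

    MASK_ID = 3  # hypothetical vocabulary id of the [MASK] token

    def apply_mlm_mask(token_ids, vocab_size, mask_rate=0.15):
        # Returns (masked_ids, labels); labels are -1 where nothing is predicted.
        masked, labels = list(token_ids), [-1] * len(token_ids)
        for i, tok in enumerate(token_ids):
            if random.random() >= mask_rate:
                continue
            labels[i] = tok                                # predict the original token
            r = random.random()
            if r < 0.8:
                masked[i] = MASK_ID                        # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)   # 10%: random replacement
            # remaining 10%: keep the original token unchanged
        return masked, labels

    def sentence_span_mask(sent_len, max_ratio=0.2):
        # Random span start and length for the sentence mask strategy (4).
        span_len = random.randint(1, max(1, int(sent_len * max_ratio)))
        start = random.randint(0, sent_len - span_len)
        return start, span_len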
The ERNIE model is pre-trained through the steps above, and the final model's parameters are retained to initialize the ERNIE parameters in the repayment prediction model.
And secondly, carrying out hierarchical connection on the ERNIE model and the DCNN model to obtain a repayment prediction model.
In this embodiment, the ERNIE model includes, but is not limited to, a 12-layer Transformer encoder with a hidden size of 768 and 12 self-attention heads. The core component of the Transformer is the multi-head self-attention mechanism, in which the outputs of the attention heads are combined by concatenation (the original Transformer concatenates 8 heads). The DCNN model mainly comprises a one-dimensional wide convolution layer, a dynamic k-max pooling layer, a Folding layer and a fully connected layer.
Specifically, the hierarchical connection proceeds as follows: the output at the last position of each of the 12 ERNIE layers is taken as the input of the DCNN model, giving the ERNIE-DCNN model. This input matrix of width 12 passes through a 1x1 convolution and dynamic k-max pooling to obtain a feature vector rich in sentence-level semantic information; a fully connected layer is added after the feature vector, which finally passes through a classifier.
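A hedged PyTorch sketch of this connection follows (the ERNIE encoder is treated as a black box assumed to return the hidden states of all 12 layers; the exact shapes and the classifier head are illustrative, not the patent's implementation):

    import torch
    import torch.nn as nn

    def kmax_pooling(x, dim, k):
        # Keep the k largest activations along `dim` while preserving
        # their original order in the sequence.
        idx = x.topk(k, dim=dim).indices.sort(dim=dim).values
        return x.gather(dim, idx)

    class ErnieDCNN(nn.Module):
        """ERNIE hidden states -> 1x1 convolution -> k-max pooling -> classifier."""

        def __init__(self, ernie, num_layers=12, k=3, num_classes=2):
            super().__init__()
            self.ernie = ernie                      # pre-trained 12-layer encoder
            self.conv1x1 = nn.Conv1d(num_layers, num_layers, kernel_size=1)
            self.k = k
            self.classifier = nn.Linear(num_layers * k, num_classes)

        def forward(self, token_ids):
            # Assumed interface: ernie(token_ids) returns a list of 12 tensors
            # of shape (batch, seq_len, hidden); the last position of each
            # layer is stacked into a (batch, 12, hidden) matrix.
            layer_states = self.ernie(token_ids)
            x = torch.stack([h[:, -1, :] for h in layer_states], dim=1)
            x = torch.relu(self.conv1x1(x))         # (batch, 12, hidden)
            x = kmax_pooling(x, dim=2, k=self.k)    # (batch, 12, k)
            return self.classifier(x.flatten(1))    # logits: repay / not repay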
And thirdly, the customer-service voice data and user voice data generated during collection calls are extracted from the MySQL database and merged in order. The voice data is converted into text data with ASR technology, and the error-correction process is completed by the kenLM error correction module, yielding the text data of a normal-flow voice call.
The error correction module is divided into error detection, candidate recall, and candidate ranking. For error detection, unlabeled raw data is used for unsupervised training, and mixed character-pronunciation features are used to narrow the distribution of prediction probabilities. For candidate recall, edit distances are computed over homophone and similar-shape candidates, and a trie (dictionary tree) is used to optimize indexing efficiency. For candidate ranking, a kenLM-based language-model score is added as a semantic feature, improving the module's discrimination ability.
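For the ranking stage, a minimal sketch using the kenlm Python bindings (the model file, example sentences and the surrounding detection/recall pipeline are all assumptions):

    import kenlm

    lm = kenlm.Model("collection_calls.arpa")      # hypothetical pre-built LM file

    def rank_candidates(variants):
        # Score each candidate correction with the language model and return
        # them best-first; a higher log-probability means a more fluent sentence.
        # (KenLM scores whitespace-separated tokens, so Chinese text would be
        # space-joined by character or pre-segmented before scoring.)
        return sorted(variants,
                      key=lambda s: lm.score(s, bos=True, eos=True),
                      reverse=True)

    best = rank_candidates(["我 下 周 一 定 还 款", "我 下 周 一 定 环 款"])[0]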
And fourthly, initializing work.
And respectively carrying out parameter initialization on the ERNIE model and the DCNN model.
Core parameters such as the number of epochs, the learning rate and the batch size are set, and the model is trained; the evaluation metrics are mainly precision, recall and F1. If the model overfits, the training task is terminated ahead of time through the configured early stopping.
And fifthly, training a repayment prediction model.
The specific training steps are as follows. The input is sentence + label; in this embodiment the labels are of two kinds, 0 and 1. The data is shuffled with the random library and split by the sklearn library into training, validation and test sets in the ratio 0.64 : 0.16 : 0.2. The training-set data is fed into the model for training, and the model's performance on the training set is compared with that on the validation set; training stops when the number of epochs reaches the specified value or the gap between the validation metric and the specified target exceeds the specified threshold. The trained model is then run on the held-out test data and compared with the true labels to obtain the model's performance.
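For instance, the shuffling and 0.64 : 0.16 : 0.2 split can be reproduced with scikit-learn along these lines (variable names are illustrative):

    from sklearn.model_selection import train_test_split

    # texts: ASR-transcribed call texts; labels: 0 / 1 repayment tags.
    # Carving off 20% for test, then 20% of the remainder for validation,
    # yields the overall 0.64 : 0.16 : 0.2 split.
    X_tmp, X_test, y_tmp, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42, shuffle=True)
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, test_size=0.2, random_state=42, shuffle=True)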
The main relevant parameters of the repayment prediction model are as follows (a hedged fine-tuning sketch using them follows the list):
hidden_size: 768
num_hidden_layers: 12
num_attention_heads: 12
hidden_act: 'gelu'
hidden_dropout_prob: 0.1
batch_size: 128
pad_size: 32
learning_rate: 5e-5
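As an illustration only, these parameters could drive a standard fine-tuning loop such as the following (the optimizer choice, data loaders, metric helper and early-stopping settings are assumptions, not specified by the patent):

    import torch

    model = ErnieDCNN(pretrained_ernie)                    # from the sketch above
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)   # learning_rate
    criterion = torch.nn.CrossEntropyLoss()

    best_f1, patience, bad_epochs = 0.0, 3, 0
    for epoch in range(num_epochs):                        # epoch count set beforehand
        model.train()
        for token_ids, labels in train_loader:             # batches of batch_size = 128
            optimizer.zero_grad()
            loss = criterion(model(token_ids), labels)
            loss.backward()
            optimizer.step()
        f1 = evaluate_f1(model, val_loader)                # precision / recall / F1 helper
        if f1 > best_f1:
            best_f1, bad_epochs = f1, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                     # early stopping on overfitting
                break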
and sixthly, performing real prediction by using the trained repayment prediction model.
The prediction process is similar to the training process and will not be described in this section.
The principles of the ERNIE model and the DCNN model are briefly described below. In this embodiment, a specific structure diagram of the ERNIE model is shown in fig. 3, and a structure diagram of the ERNIE model and the DCNN model after splicing is shown in fig. 4.
Compared with BERT, ERNIE's structure is richer for extracting text features. The framework of the ERNIE model is divided into two main modules: the text encoder (T-Encoder) at the bottom, responsible for capturing the lexical and syntactic information of the input tokens; and the knowledge-based encoder (K-Encoder) on top, responsible for integrating additional token-oriented knowledge information into the textual information from the bottom layer.
The model inside the K-Encoder, also known as an aggregator, is shown on the right of FIG. 3; its input consists mainly of two parts: the output of the underlying T-Encoder, and the entity vectors obtained from the text by the TransE algorithm.
Multi-head self-attention (MH-ATT) is then applied to the text and the entities separately, and the entity information is fused with the text information. The ERNIE model thus extracts features from the text into a matrix rich in textual features, which enters the DCNN model as its input.
The DCNN model can capture semantic information of long-distance words. It mainly comprises a one-dimensional wide convolution layer, a dynamic k-max pooling layer, a Folding layer and a fully connected layer, described in turn below:
The one-dimensional wide convolution is applied per embedding dimension: if the embedding dimension is 300, 300 one-dimensional convolutions are used. In image tasks, wide convolution extracts edge and corner information more effectively, and it is equally effective in NLP tasks: sentence-beginning and sentence-ending information can be extracted more effectively this way.
An ordinary max pooling layer takes the maximum over the convolution outputs, whereas dynamic k-max pooling selects the k largest values. The value of k is chosen dynamically by a formula based on the network structure and a preset value of k; usually a minimum k (commonly 3, 4 or 5) is preset for each layer.
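The patent only says k is "dynamically selected by a formula"; the usual formulation from the original dynamic-CNN literature is sketched here as one plausible reading:

    import math

    def dynamic_k(layer, total_conv_layers, sent_len, k_top=3):
        # k for the l-th convolutional layer in the standard DCNN formulation:
        # k_l = max(k_top, ceil((L - l) / L * s)), with s the sentence length
        # and L the total number of convolutional layers.
        return max(k_top,
                   math.ceil((total_conv_layers - layer) / total_conv_layers * sent_len))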
The Folding layer maps a d-dimensional matrix to d/2 dimensions through a simple calculation, reducing the amount of computation and speeding up the calculation.
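One common realization of this d to d/2 mapping sums adjacent feature rows pairwise (a sketch; the patent does not spell out the exact calculation, and an even d is assumed):

    import torch

    def folding(feature_map: torch.Tensor) -> torch.Tensor:
        # Sum every pair of adjacent rows of a (batch, d, seq_len) feature
        # map, mapping d feature dimensions to d/2.
        return feature_map[:, 0::2, :] + feature_map[:, 1::2, :]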
The fully connected layer is the same as in an ordinary CNN model: all nodes of the current layer are connected, integrating the text features extracted in the preceding feature spaces and computing a probability value for each classification label.
It can be seen that the repayment prediction method based on the ERNIE model and the DCNN model provided by this embodiment uses the combined ERNIE-DCNN model to complete the repayment-prediction classification task. The pre-trained ERNIE model blends the structural information of the knowledge graph into the model, so the model can better perform semantic modeling of the real world and learn both the plausibility of language and the semantic relations within it. The DCNN model further fuses features with its wide convolution layer and dynamic k-max pooling layer and can extract several pieces of key information simultaneously, ensuring that sentence information is fully extracted.
In summary, compared with the repayment prediction scheme based on BERT-CNN, the present embodiment has at least the following advantages:
1. the ERNIE model has significant advantages over the BERT model.
(1) The masking mechanism adopted by ERNIE combines character, word and entity information and adds random masking, increasing the model's robustness and capturing better semantic information, whereas BERT only masks at the character level.
(2) The ERNIE model combines the multi-information entities of the knowledge graph as external knowledge to improve language features, and adopts the TransE knowledge embedding algorithm to integrate the encoded knowledge information into the semantic information, which benefits structured knowledge encoding and heterogeneous information fusion.
(3) To extract knowledge information while training the language model, ERNIE adds a K-Encoder on top of the BERT structure, fusing knowledge information with the tokens' original semantics and designing a brand-new pre-training task.
(4) On the input side, ERNIE considers not only the plain text but also extracts the entity information in the sentence, embeds it into vectors, and segments the input from different angles (characters, words, entities) with lexical analysis tools.
2. The DCNN model has significant advantages over the CNN model.
Unlike the normal convolutional layer and the normal pooling layer in the CNN model, the DCNN model is composed of a wide convolutional layer and a dynamic k-max pooling layer.
A conventional convolution tends to shorten the convolved sentence (to L - w + 1, where L is the sentence length and w the convolution kernel size), whereas the wide convolution of the DCNN lengthens it to L + w - 1, because the window of a wide convolution need not be fully covered by input values: positions without values are zero-padded. The advantage of this convolution is that no edge information is lost.
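This length arithmetic can be checked directly: padding a 1-D convolution with w - 1 zeros on each side reproduces the wide (full) convolution (a sketch in PyTorch):

    import torch
    import torch.nn as nn

    L, w = 10, 3
    seq = torch.randn(1, 1, L)                               # one 1-channel sequence

    narrow = nn.Conv1d(1, 1, kernel_size=w)                  # length L - w + 1 = 8
    wide = nn.Conv1d(1, 1, kernel_size=w, padding=w - 1)     # length L + w - 1 = 12

    print(narrow(seq).shape[-1], wide(seq).shape[-1])        # -> 8 12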
Conventional pooling layers are typically max pooling or average pooling layers. The DCNN model instead uses dynamic k-max pooling: the first k maxima in the sequence p are selected while the order of the original sequence is preserved. The benefit of k-max pooling is that more than one piece of important information in a sentence can be extracted with its relative order retained.
3. The data level also differs from the BERT-CNN scheme. Because the data source of this model is voice data, the text is obtained by transcribing that voice data with ASR technology, and the speech-to-text conversion inevitably introduces some errors. Therefore an error correction module is added after the ASR module and before the repayment prediction model to correct the text data and improve the quality of the model's input.
The following introduces the repayment prediction apparatus based on the ERNIE model and the DCNN model provided in the embodiment of the present application; the apparatus described below and the method described above may be referred to in correspondence with each other.
As shown in fig. 5, the repayment prediction apparatus based on the ERNIE model and the DCNN model of the present embodiment includes:
pre-training module 501: used for pre-training the ERNIE model with a text data set;
model building module 502: used for hierarchically connecting the pre-trained ERNIE model with the DCNN model to obtain a repayment prediction model;
training sample generation module 503: used for acquiring voice data generated in the telephone collection process, converting the voice data into text data by ASR (Automatic Speech Recognition) technology, and adding labels to obtain training samples;
model training module 504: used for training the repayment prediction model with the training samples;
prediction module 505: used for inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
The reimbursement prediction device based on the ERNIE model and the DCNN model of this embodiment is used for implementing the above-mentioned reimbursement prediction method based on the ERNIE model and the DCNN model, and therefore a specific implementation manner of the device may be found in the foregoing embodiment portions of the reimbursement prediction method based on the ERNIE model and the DCNN model, for example, the pre-training module 501, the model building module 502, the training sample generating module 503, the model training module 504, and the prediction module 505 are respectively used for implementing steps S101, S102, S103, S104, and S105 in the above-mentioned reimbursement prediction method based on the ERNIE model and the DCNN model. Therefore, specific embodiments thereof may be referred to in the description of the corresponding respective partial embodiments, and will not be described herein.
In addition, since the reimbursement prediction device based on the ERNIE model and the DCNN model of this embodiment is used for implementing the above-mentioned reimbursement prediction method based on the ERNIE model and the DCNN model, the action thereof corresponds to that of the above-mentioned method, and details thereof are not repeated here.
In addition, the present application further provides a repayment prediction device based on the ERNIE model and the DCNN model, including:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the ERNIE model and DCNN model based repayment prediction method as described above.
Finally, the present application provides a readable storage medium having stored thereon a computer program for implementing the ERNIE model and DCNN model based repayment prediction method as described above when executed by a processor.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above detailed descriptions of the solutions provided in the present application, and the specific examples applied herein are set forth to explain the principles and implementations of the present application, and the above descriptions of the examples are only used to help understand the method and its core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A repayment prediction method based on an ERNIE model and a DCNN model is characterized by comprising the following steps:
pre-training the ERNIE model by using a text data set;
carrying out hierarchical connection on the ERNIE model after pre-training and the DCNN model to obtain a repayment prediction model;
acquiring voice data generated in a telephone collection process, converting the voice data into text data by adopting ASR (Automatic Speech Recognition) technology, and adding a label to obtain a training sample;
training the repayment prediction model by using the training sample;
and inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
2. The method of claim 1, wherein after said converting the speech data into text data using ASR techniques, further comprising:
and correcting the text data by using a kenLM error correction module or a pycorrect error correction module.
3. The method of claim 1, wherein the pre-training of the ERNIE model with the text dataset comprises:
constructing a training set according to a text data set and a plurality of mask strategies, wherein the plurality of mask strategies comprise a word mask strategy, an entity mask strategy and a random mask strategy;
pre-training the ERNIE model using the training set.
4. The method of claim 3, wherein the plurality of mask policies further comprises a sentence mask policy, the sentence mask policy being: and for the target sentence, randomly selecting a starting position for masking, wherein the proportion of the masking does not exceed the preset proportion of the length of the target sentence.
5. The method of claim 1, wherein inputting text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result, comprises:
inputting text data corresponding to the voice data to be tested into an ERNIE model of the trained repayment prediction model to obtain semantic representation;
and inputting the semantic representation into a DCNN model of the trained repayment prediction model to obtain a prediction result.
6. The method of claim 5, wherein the DCNN model comprises a wide convolution layer, a dynamic pooling layer, a Folding layer, and a full-connectivity layer, and wherein inputting the semantic representation into the DCNN model of the trained repayment prediction model to obtain a prediction result comprises:
performing convolution operation on the input semantic layer representation by using the wide convolution layer, and extracting complete sentence information to obtain a convolution result, wherein the complete sentence information comprises sentence beginning information and sentence ending information;
performing pooling operation on the convolution result by using a dynamic pooling layer to obtain a pooling result;
using a Folding layer to perform dimensionality reduction on the pooling result to obtain a dimensionality reduction result;
and determining a prediction result from the dimension reduction result by using a full connection layer.
7. The method of claim 5, wherein the ERNIE model comprises a text encoder and a knowledge-based encoder, and the inputting the text data corresponding to the speech data to be tested into the ERNIE model of the trained repayment prediction model to obtain the semantic representation comprises:
generating text information, lexical information and syntactic information of the text information according to the text data by using a text encoder;
and integrating knowledge information of the text data into the text information by using a knowledge type encoder to obtain semantic representation.
8. A repayment prediction device based on an ERNIE model and a DCNN model, characterized by comprising:
a pre-training module: used for pre-training the ERNIE model with a text data set;
a model construction module: used for hierarchically connecting the pre-trained ERNIE model with the DCNN model to obtain a repayment prediction model;
a training sample generation module: used for acquiring voice data generated in a telephone collection process, converting the voice data into text data by ASR (Automatic Speech Recognition) technology, and adding labels to obtain training samples;
a model training module: used for training the repayment prediction model with the training samples;
a prediction module: used for inputting the text data corresponding to the voice data to be tested into the trained repayment prediction model to obtain a prediction result.
9. A repayment prediction device based on an ERNIE model and a DCNN model, comprising:
a memory: for storing a computer program;
a processor: for executing said computer program for implementing a method for predicting a payment based on the ERNIE model and the DCNN model according to any one of claims 1 to 7.
10. A readable storage medium having stored thereon a computer program for implementing the ERNIE model and DCNN model based payment prediction method according to any one of claims 1-7.
CN202011181563.7A 2020-10-29 2020-10-29 Repayment prediction method based on ERNIE model and DCNN model Pending CN112200664A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011181563.7A CN112200664A (en) 2020-10-29 2020-10-29 Repayment prediction method based on ERNIE model and DCNN model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011181563.7A CN112200664A (en) 2020-10-29 2020-10-29 Repayment prediction method based on ERNIE model and DCNN model

Publications (1)

Publication Number Publication Date
CN112200664A true CN112200664A (en) 2021-01-08

Family

ID=74012421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011181563.7A Pending CN112200664A (en) 2020-10-29 2020-10-29 Repayment prediction method based on ERNIE model and DCNN model

Country Status (1)

Country Link
CN (1) CN112200664A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990388A (en) * 2021-05-17 2021-06-18 成都数联铭品科技有限公司 Text clustering method based on concept words
CN113157913A (en) * 2021-01-30 2021-07-23 暨南大学 Ethical behavior discrimination method based on social news data set
CN114492387A (en) * 2022-04-18 2022-05-13 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Domain self-adaptive aspect term extraction method and system based on syntactic structure
WO2022160447A1 (en) * 2021-01-28 2022-08-04 平安科技(深圳)有限公司 Text error correction method, apparatus and device, and storage medium
CN116227484A (en) * 2023-05-09 2023-06-06 腾讯科技(深圳)有限公司 Model training method, apparatus, device, storage medium and computer program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599324A (en) * 2019-07-25 2019-12-20 阿里巴巴集团控股有限公司 Method and device for predicting refund rate
CN111144108A (en) * 2019-12-26 2020-05-12 北京百度网讯科技有限公司 Emotion tendency analysis model modeling method and device and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599324A (en) * 2019-07-25 2019-12-20 阿里巴巴集团控股有限公司 Method and device for predicting refund rate
CN111144108A (en) * 2019-12-26 2020-05-12 北京百度网讯科技有限公司 Emotion tendency analysis model modeling method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANG WEI: "Text Classification Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
JIN TING: "Applied Research on Semantic Understanding in Task-oriented Dialogue Systems", China Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160447A1 (en) * 2021-01-28 2022-08-04 平安科技(深圳)有限公司 Text error correction method, apparatus and device, and storage medium
CN113157913A (en) * 2021-01-30 2021-07-23 暨南大学 Ethical behavior discrimination method based on social news data set
CN112990388A (en) * 2021-05-17 2021-06-18 成都数联铭品科技有限公司 Text clustering method based on concept words
CN112990388B (en) * 2021-05-17 2021-08-24 成都数联铭品科技有限公司 Text clustering method based on concept words
CN114492387A (en) * 2022-04-18 2022-05-13 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Domain self-adaptive aspect term extraction method and system based on syntactic structure
CN114492387B (en) * 2022-04-18 2022-07-19 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Domain self-adaptive aspect term extraction method and system based on syntactic structure
CN116227484A (en) * 2023-05-09 2023-06-06 腾讯科技(深圳)有限公司 Model training method, apparatus, device, storage medium and computer program product

Similar Documents

Publication Publication Date Title
CN110096570B (en) Intention identification method and device applied to intelligent customer service robot
CN108416058B (en) Bi-LSTM input information enhancement-based relation extraction method
CN110427461B (en) Intelligent question and answer information processing method, electronic equipment and computer readable storage medium
CN112200664A (en) Repayment prediction method based on ERNIE model and DCNN model
CN112183094B (en) Chinese grammar debugging method and system based on multiple text features
CN109918681B (en) Chinese character-pinyin-based fusion problem semantic matching method
CN112734881B (en) Text synthesized image method and system based on saliency scene graph analysis
CN114580382A (en) Text error correction method and device
CN113268586A (en) Text abstract generation method, device, equipment and storage medium
CN115292463B (en) Information extraction-based method for joint multi-intention detection and overlapping slot filling
CN111966812A (en) Automatic question answering method based on dynamic word vector and storage medium
CN113255320A (en) Entity relation extraction method and device based on syntax tree and graph attention machine mechanism
CN112818698B (en) Fine-grained user comment sentiment analysis method based on dual-channel model
CN115545041B (en) Model construction method and system for enhancing semantic vector representation of medical statement
CN114818717A (en) Chinese named entity recognition method and system fusing vocabulary and syntax information
CN114168754A (en) Relation extraction method based on syntactic dependency and fusion information
CN115017916A (en) Aspect level emotion analysis method and device, electronic equipment and storage medium
CN111966811A (en) Intention recognition and slot filling method and device, readable storage medium and terminal equipment
CN114742016A (en) Chapter-level event extraction method and device based on multi-granularity entity differential composition
CN113380223B (en) Method, device, system and storage medium for disambiguating polyphone
CN112488111B (en) Indication expression understanding method based on multi-level expression guide attention network
CN112417890B (en) Fine granularity entity classification method based on diversified semantic attention model
CN113486174A (en) Model training, reading understanding method and device, electronic equipment and storage medium
CN116595023A (en) Address information updating method and device, electronic equipment and storage medium
CN116362242A (en) Small sample slot value extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210108