CN111127179B - Information pushing method, device, computer equipment and storage medium - Google Patents

Information pushing method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN111127179B
CN111127179B CN201911277050.3A
Authority
CN
China
Prior art keywords
sample
processed
label
probability
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911277050.3A
Other languages
Chinese (zh)
Other versions
CN111127179A (en)
Inventor
王璋琪
卢亿雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Enyike Beijing Data Technology Co ltd
Original Assignee
Enyike Beijing Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enyike Beijing Data Technology Co ltd filed Critical Enyike Beijing Data Technology Co ltd
Priority to CN201911277050.3A priority Critical patent/CN111127179B/en
Publication of CN111127179A publication Critical patent/CN111127179A/en
Application granted granted Critical
Publication of CN111127179B publication Critical patent/CN111127179B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/12Accounting
    • G06Q40/123Tax preparation or submission
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Engineering & Computer Science (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an information pushing method, an information pushing device, computer equipment and a storage medium, wherein the method comprises the steps of obtaining a sample set to be processed; generating a sample feature vector for each sample to be processed in a sample set to be processed, and predicting the initial probability of the sample to be processed under each sample label in a preset sample label set; determining a part of samples to be processed from a sample set to be processed based on initial probability of each sample to be processed under each sample label in the sample label set; based on sample feature vectors corresponding to each sample to be processed and real labels corresponding to part of the samples to be processed respectively, and a label probability prediction model corresponding to each sample label in the sample label set, selecting a target label for each sample to be processed from the sample label set, and pushing the target label.

Description

Information pushing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an information pushing method, an information pushing device, a computer device, and a storage medium.
Background
In model training, a sample input to a model generally needs to be labeled. The labeling content is usually a phrase of several words that covers a meaningful topic of the text; for example, for a tax-return problem in customer service, the labeling content may be a "tax return form" or "tax compensation" topic.
Sample labeling is generally performed manually, which is very time-consuming and labor-intensive. Moreover, when the sample volume is relatively large, labeling errors may occur during manual labeling, which reduces the accuracy of the finally obtained sample labels.
Disclosure of Invention
In view of the above, embodiments of the present application provide an information pushing method, an information pushing device, a computer device and a storage medium, so as to improve the efficiency of information pushing.
In a first aspect, an embodiment of the present application provides an information pushing device, which comprises:
the acquisition module is used for acquiring a sample set to be processed;
the first processing module is used for generating a sample feature vector for each sample to be processed in the sample set to be processed and predicting the initial probability of the sample to be processed under each sample label in the preset sample label set;
the determining module is used for determining partial samples to be processed from the sample set to be processed based on the initial probability of each sample to be processed under each sample label in the sample label set;
The second processing module is used for selecting a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed and the real labels corresponding to part of the samples to be processed respectively and the label probability prediction model corresponding to each sample label in the sample label set, and pushing the target label.
In one embodiment, the first processing module is configured to generate a sample feature vector for the sample to be processed according to the following steps:
aiming at each sample to be processed, carrying out word segmentation processing on the sample to be processed to obtain a vocabulary set corresponding to the sample to be processed;
generating a vocabulary vector for each vocabulary in the vocabulary set corresponding to the sample to be processed;
and carrying out weighting processing on vocabulary vectors corresponding to all vocabularies in the vocabulary set corresponding to the sample to be processed to obtain sample feature vectors corresponding to the sample to be processed.
In one embodiment, the first processing module is configured to predict an initial probability of the sample to be processed under each sample label in the preset sample label set according to the following steps:
and inputting the sample feature vector of the sample to be processed into a first label probability prediction model corresponding to the sample label aiming at each sample label in the sample label set, and predicting to obtain the initial probability of the sample to be processed under the sample label.
In one embodiment, the first processing module is further configured to:
for each sample to be processed in a sample set to be processed, determining the maximum initial probability as a target initial probability corresponding to the sample to be processed, and taking a sample label corresponding to the target initial probability as a first label selected by the sample to be processed;
determining a first number of samples to be processed belonging to the same sample tag based on the first tag selected for each sample to be processed;
based on the initial probability corresponding to each sample to be processed and the ratio of the first number corresponding to each sample label in the total number of the samples, model parameters of a first label probability prediction model corresponding to each sample label are adjusted, and adjusted model parameters corresponding to each sample label are obtained.
In one embodiment, the second processing module is configured to select a target tag for each sample to be processed from the set of sample tags according to the steps of:
for each sample label, a sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and a first probability of each sample to be processed under the sample label is obtained through prediction;
Based on the first probability of each sample to be processed under each sample label and the real labels of part of the samples to be processed, adjusting model parameters of a label probability prediction model corresponding to each sample label;
for each sample label, taking the adjusted model parameters corresponding to the sample label as final model parameters of the label probability prediction model;
the sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and the final probability of each sample to be processed under the sample label is obtained through prediction;
and determining a sample label corresponding to the maximum final probability as a target sample label corresponding to each sample to be processed.
In one embodiment, the second processing module is configured to adjust model parameters of the tag probability prediction model corresponding to each sample tag according to the following steps:
for each sample to be processed in a sample set to be processed, determining the maximum first probability as a target first probability corresponding to the sample to be processed, and taking a sample label corresponding to the target first probability as a second label selected by the sample to be processed;
Determining a second number of partial samples to be processed belonging to the same sample tag based on the second tag selected for each sample to be processed;
based on the real labels corresponding to the part of samples to be processed and the real probabilities under the corresponding real labels, the second probabilities corresponding to the other processed samples except the part of samples to be processed in the sample set to be processed, and the proportion of the second number corresponding to each sample label to the total number of samples, the model parameters of the label probability prediction model corresponding to each sample label are adjusted.
In one embodiment, the determining module is configured to determine the portion of the sample to be processed according to the steps of:
for each sample to be processed, sorting the initial probabilities corresponding to the sample to be processed according to the order of the probabilities from high to low;
determining a difference value between a first initial probability and a second initial probability in initial probability sequences corresponding to the samples to be processed;
and determining the samples to be processed, corresponding to which the difference value is smaller than the preset threshold value, as partial samples to be processed.
In a second aspect, an embodiment of the present application provides an information pushing method, where the method includes:
acquiring a sample set to be processed;
generating a sample feature vector for each sample to be processed in a sample set to be processed, and predicting the initial probability of the sample to be processed under each sample label in a preset sample label set;
Determining part of samples to be processed from the sample set to be processed based on the initial probability of each sample to be processed under each sample label in the sample label set;
based on sample feature vectors corresponding to each sample to be processed and real labels corresponding to part of the samples to be processed respectively, and a label probability prediction model corresponding to each sample label in the sample label set, selecting a target label for each sample to be processed from the sample label set, and pushing the target label.
In a third aspect, an embodiment of the present application provides a computer device, including: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the computer device is running, the processor executing the machine-readable instructions to perform the steps of the information pushing method described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, performs the steps of the information pushing method described above.
According to the information pushing method provided by the embodiments of the present application, a sample set to be processed is obtained; a sample feature vector is generated for each sample to be processed in the sample set, and the initial probability of the sample to be processed under each sample label in a preset sample label set is predicted; based on the initial probability of each sample to be processed under each sample label, part of the samples to be processed is determined from the sample set; and a target label is selected for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed, the real labels corresponding to the part of the samples to be processed, and a label probability prediction model corresponding to each sample label. In this way, when the target label is determined for a sample to be processed, both the real labels and the sample feature vectors of the samples to be processed are considered, so the accuracy of the sample label selected for each sample to be processed is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a first information pushing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an information pushing device according to an embodiment of the present application;
fig. 3 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
In the related art, text clustering methods include the K-nearest neighbor (KNN) algorithm and the K-means clustering algorithm (K-means). However, these two clustering methods involve a large amount of computation, cannot be applied to data sets with a large data volume or high dimensionality, cannot capture the data distribution well, provide no other information about the data, and are inefficient.
Based on this, the information pushing method provided by the embodiments of the present application obtains a sample set to be processed, generates a sample feature vector for each sample to be processed in the sample set, predicts the initial probability of the sample to be processed under each sample label in a preset sample label set, determines a part of the samples to be processed from the sample set based on the initial probability of each sample to be processed under each sample label, and selects a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed, the real labels corresponding to the part of the samples to be processed, and a label probability prediction model corresponding to each sample label in the sample label set. In this way, when the target label is determined for a sample to be processed, the real labels and the sample feature vectors are considered, the target label is determined automatically, and the accuracy of the sample label selected for each sample to be processed is improved.
The information pushing method of the embodiment of the application can be applied to a server and can also be applied to any other computing equipment with processing functions. In some embodiments, the server or computing device described above may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described in the present application.
The embodiment of the application provides an information pushing method, as shown in fig. 1, which specifically comprises the following steps:
s101, acquiring a sample set to be processed;
s102, generating sample feature vectors for each sample to be processed in a sample set to be processed, and predicting the initial probability of the sample to be processed under each sample label in a preset sample label set;
s103, determining part of samples to be processed from the sample set to be processed based on initial probability of each sample label in the sample label set;
s104, selecting a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed and the real labels corresponding to part of the samples to be processed respectively and the label probability prediction model corresponding to each sample label in the sample label set, and pushing the target label.
In S101, the sample set to be processed includes a plurality of samples to be processed, where the samples to be processed may be text samples or chat records in different fields, the text samples may be text samples in a communication field, text samples in a financial field, text samples in an education field, text samples in an accounting field, and the like; chat logs may be dialogue logs between different users, etc.; the set of samples to be processed may be sent by the requesting end.
In S102, the sample feature vector characterizes the meaning of the sample to be processed; the sample label set is preset; the initial probability represents the probability that the sample to be processed belongs to each sample label in the sample label set, and the larger the probability, the more likely the sample to be processed belongs to that sample label.
In generating a sample feature vector for each sample to be processed in a set of samples to be processed for the sample to be processed, the method may comprise the steps of:
aiming at each sample to be processed, carrying out word segmentation processing on the sample to be processed to obtain a vocabulary set corresponding to the sample to be processed;
generating a vocabulary vector for each vocabulary in the vocabulary set corresponding to the sample to be processed;
and carrying out weighting processing on vocabulary vectors corresponding to all vocabularies in the vocabulary set corresponding to the sample to be processed to obtain sample feature vectors corresponding to the sample to be processed.
Here, tools for performing word segmentation on each sample to be processed include the jieba word segmentation tool, the NLTK word segmentation tool, and the like; the vocabulary set includes a plurality of vocabularies; a vocabulary vector represents the meaning of a vocabulary.
In the specific implementation process, word segmentation is carried out on each sample to be processed by using a word segmentation tool, so as to obtain a vocabulary set corresponding to each sample to be processed.
For each sample to be processed, a vocabulary vector is generated for each vocabulary in the corresponding vocabulary set using a vocabulary vector generation model, and the vocabulary vectors are weighted and averaged: the product of each vocabulary vector and its corresponding weight is calculated, and the products are summed to obtain the sample feature vector corresponding to the sample to be processed. The vocabulary vector generation model may be a GloVe model. When weighting and averaging the vocabulary vectors, the weight corresponding to each vocabulary vector may be determined according to the frequency of the corresponding vocabulary in the sample to be processed: the higher the frequency, the larger the weight.
When generating a vocabulary vector for each vocabulary in a vocabulary set, the GloVe model considers the contextual relationships among vocabularies: the more often vocabularies appear together, the closer their relationship. The GloVe model therefore generates, for each vocabulary, a vocabulary vector that reflects these contextual relationships.
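As an illustration of the weighted-averaging step described above, the following Python sketch computes a sample feature vector as a frequency-weighted average of vocabulary vectors. The toy three-dimensional vectors and the frequency-based weighting are assumptions for illustration only; in practice the vocabulary vectors would come from a trained GloVe model.

```python
from collections import Counter

import numpy as np

def sample_feature_vector(tokens, word_vectors):
    """Frequency-weighted average of the vocabulary vectors of one sample:
    the more often a vocabulary appears, the larger its weight."""
    counts = Counter(tokens)
    total = sum(counts.values())
    dim = len(next(iter(word_vectors.values())))
    vec = np.zeros(dim)
    for word, count in counts.items():
        weight = count / total  # frequency of the vocabulary in the sample
        vec += weight * np.asarray(word_vectors[word])
    return vec

# Toy 3-dimensional "GloVe" vectors (hypothetical values for illustration).
vectors = {"tax": [1.0, 0.0, 0.0],
           "refund": [0.0, 1.0, 0.0],
           "form": [0.0, 0.0, 1.0]}
feat = sample_feature_vector(["tax", "refund", "tax"], vectors)  # -> [2/3, 1/3, 0]
```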
After obtaining the sample feature vector of each sample to be processed, predicting the initial probability of the sample to be processed under each sample label in a preset sample label set for each sample to be processed in the sample set to be processed, including:
For each sample label in a sample label set, inputting a sample feature vector of the sample to be processed into a first label probability prediction model corresponding to the sample label, and predicting to obtain the initial probability of the sample to be processed under the sample label;
for each sample to be processed in a sample set to be processed, determining the maximum initial probability as a target initial probability corresponding to the sample to be processed, and taking a sample label corresponding to the target initial probability as a first label selected by the sample to be processed;
determining a first number of samples to be processed belonging to the same sample tag based on the first tag selected for each sample to be processed;
based on the initial probability corresponding to each sample to be processed and the ratio of the first number corresponding to each sample label in the total number of the samples, model parameters of a first label probability prediction model corresponding to each sample label are adjusted, and adjusted model parameters corresponding to each sample label are obtained.
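The first-label selection and per-label counting in the steps above can be sketched as follows; the two-sample probability matrix is a hypothetical example.

```python
import numpy as np

def first_labels_and_ratios(init_probs):
    """For each sample, take the label with the maximum initial probability
    as its first label, then compute each label's share of the samples."""
    init_probs = np.asarray(init_probs)
    labels = np.argmax(init_probs, axis=1)          # first label per sample
    num_labels = init_probs.shape[1]
    counts = np.array([(labels == g).sum() for g in range(num_labels)])
    ratios = counts / len(labels)                   # first-number ratios per label
    return labels, ratios

labels, ratios = first_labels_and_ratios([[0.7, 0.3], [0.2, 0.8]])
```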
Here, the first tag probability prediction model may be a generalized hyperbolic distribution model, and model parameters of the first tag probability prediction model are set according to historical experience; the first tag is the initial sample tag determined for the sample to be processed.
In the related art, traditional distribution models have strong interpretability; however, the Gaussian distribution has drawbacks such as high symmetry and incompatibility with outliers. Text data generally follows some distribution, which is a basic assumption of natural language processing, but a common distribution (e.g., the Gaussian distribution) cannot describe well the distribution of text data after it is converted into numeric vectors. A more flexible distribution is therefore needed to describe text data. For this reason, the generalized hyperbolic distribution model is used: it is compatible with both symmetric and asymmetric distributions and has a high tolerance to outliers.
In a specific implementation process, for each sample tag in a sample tag set, a sample feature vector of the sample to be processed is input into a first tag probability prediction model corresponding to the sample tag, and initial probability of the sample to be processed under the sample tag is predicted.
Model parameters in the first tag probability prediction model may be adjusted in the following manner to improve accuracy of the initial probability predicted.
After the initial probability of each sample to be processed under each sample label is obtained, for each sample to be processed, the maximum initial probability is determined as the target initial probability corresponding to the sample to be processed, and the sample label corresponding to the target initial probability is taken as the first label selected for the sample to be processed.
For example, the sample set to be processed includes two samples to be processed, S1 and S2, and the preset sample label set includes two sample labels, T1 and T2. The initial probabilities of sample S1 under T1 and T2 are α1 and α2, respectively, and the initial probabilities of sample S2 under T1 and T2 are α3 and α4, respectively. The maximum initial probability corresponding to sample S1 is α1, and the maximum initial probability corresponding to sample S2 is α4. The sample label corresponding to α1 is T1, so T1 is the first label of sample S1; the sample label corresponding to α4 is T2, so T2 is the first label of sample S2.
The first number of samples to be processed corresponding to each sample label in the sample label set is counted, and the ratio of the first number corresponding to each sample label to the total number of samples in the sample set to be processed is calculated and taken as that label's ratio. Then, the first total loss value of the first label probability prediction models corresponding to the sample labels is determined using the initial probability corresponding to each sample to be processed, the ratio corresponding to each sample label, and the probability function corresponding to each sample label, and the model parameters of the first label probability prediction model corresponding to each sample label are adjusted so as to maximize the first total loss value.
The first total loss value satisfies the following equation:

$$L = \sum_{i=1}^{N} \sum_{g=1}^{G} Z_{i,g} \left( \log P_g + \log f\left(x_i; \theta_g\right) \right)$$

where L is the first total loss value of the first label probability prediction models corresponding to the sample labels; P_g is the ratio of the first number of samples to be processed corresponding to the g-th sample label to the total number of samples; f(x_i; θ_g) is the model probability distribution function, with model parameters θ_g corresponding to the g-th sample label, evaluated for the i-th sample to be processed; Z_{i,g} is the initial probability of the i-th sample to be processed under the g-th sample label; N is the total number of samples in the sample set to be processed; and G is the total number of sample labels in the sample label set.
In S103, the number of partial samples to be processed may be determined from the history data.
In determining a portion of the samples to be processed from the set of samples to be processed based on an initial probability that each sample to be processed is under each sample label in the set of sample labels, the steps of:
for each sample to be processed, sorting the initial probabilities corresponding to the sample to be processed according to the order of the probabilities from high to low;
determining a difference value between a first initial probability and a second initial probability in initial probability sequences corresponding to the samples to be processed;
And determining the samples to be processed, corresponding to which the difference value is smaller than the preset threshold value, as partial samples to be processed.
Here, the preset threshold may be determined according to history data, and may be determined according to actual conditions.
In a specific implementation process, after obtaining initial probability of each sample to be processed under each sample label, sequencing initial probability corresponding to each sample to be processed according to a sequence from big to small, calculating a difference value between the first initial probability and the second initial probability in initial probability sequencing corresponding to the sample to be processed, and selecting the sample to be processed with the difference value smaller than a preset threshold value as a part of samples to be processed.
For example, the sample set to be processed includes ten samples to be processed, S1 to S10, and the preset sample label set includes three sample labels, T1, T2 and T3. The initial probabilities of sample S1 under T1, T2 and T3, ordered from large to small, are α1, α2 and α3, and the difference between α1 and α2 is P1. The other samples S2 to S10 are handled in the same way as sample S1, yielding difference values P2, P3, …, P10 for samples S2, S3, …, S10. With a preset threshold O, the difference values smaller than O are P1, P2 and P5, so the partial samples to be processed are sample S1, sample S2 and sample S5.
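The margin-based selection described above can be sketched in Python as follows; the probability rows and the threshold value are hypothetical.

```python
def select_uncertain_samples(init_probs, threshold):
    """Keep the samples whose largest and second-largest initial
    probabilities differ by less than the preset threshold."""
    selected = []
    for idx, probs in enumerate(init_probs):
        ordered = sorted(probs, reverse=True)
        margin = ordered[0] - ordered[1]
        if margin < threshold:
            selected.append(idx)   # model is unsure between two labels
    return selected

# Sample 0 has a small top-two margin (0.05), sample 1 a large one (0.85).
partial = select_uncertain_samples([[0.50, 0.45, 0.05],
                                    [0.90, 0.05, 0.05]], threshold=0.10)
```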
In S104, the real label is the label assigned to a sample to be processed by manual annotation, that is, the actual label of the sample to be processed; the label probability prediction model is a generalized hyperbolic distribution model.
When selecting a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed and the real label corresponding to part of the sample to be processed respectively and the label probability prediction model corresponding to each sample label in the sample label set, the method may include the following steps:
for each sample label, a sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and a first probability of each sample to be processed under the sample label is obtained through prediction;
based on the first probability of each sample to be processed under each sample label and the real labels of part of the samples to be processed, adjusting model parameters of a label probability prediction model corresponding to each sample label;
for each sample label, taking the adjusted model parameters corresponding to the sample label as final model parameters of the label probability prediction model;
The sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and the final probability of each sample to be processed under the sample label is obtained through prediction;
and determining a sample label corresponding to the maximum final probability as a target sample label corresponding to each sample to be processed.
Here, the first probability is the probability of the sample to be processed under the sample label as predicted by the label probability prediction model; the higher this probability, the more likely it is that the sample to be processed belongs to that sample label.
In the implementation process, the initial model parameters of the label probability prediction model may be determined from historical data, or may be the adjusted model parameters corresponding to the first label probability prediction model. In practice, to improve the accuracy of the probabilities produced by the label probability prediction model, the adjusted model parameters are used as the initial model parameters of the label probability prediction model.
And for each sample label, taking the sample feature vector of each sample to be processed as a model input of a label probability prediction model corresponding to the sample label, so as to predict and obtain the first probability of each sample to be processed under the sample label.
Based on the first probability of each sample to be processed under each sample label and the real labels of part of the samples to be processed, the initial model parameters of the label probability prediction model corresponding to each sample label are adjusted so as to improve the prediction accuracy of the label probability prediction model.
When the model parameters of the label probability prediction model corresponding to each sample label are adjusted based on the first probability of each sample to be processed under each sample label and the real labels of the partial samples to be processed, the method may comprise the following steps:
for each sample to be processed in a sample set to be processed, determining the maximum first probability as a target first probability corresponding to the sample to be processed, and taking a sample label corresponding to the target first probability as a second label selected by the sample to be processed;
determining a second number of samples to be processed belonging to the same sample tag based on the second tag selected for each sample to be processed;
based on the real labels corresponding to the partial samples to be processed and the real probabilities under those real labels, the first probabilities corresponding to the other samples to be processed in the sample set (those outside the partial set), and the proportion of the number of such other samples corresponding to each sample label in the total number of samples, adjusting the model parameters of the label probability prediction model corresponding to each sample label.
Here, the first probability is the probability of the sample to be processed under the corresponding sample label, as predicted by the label probability prediction model; the second label is the sample label selected for a sample to be processed; and the second number is the total number of samples to be processed corresponding to a given sample label.
In a specific implementation, for each sample to be processed in the sample set, the maximum first probability is determined as the target first probability corresponding to that sample; that is, from the first probabilities of the sample under the different sample labels, the largest is selected as the target first probability, and the sample label corresponding to it is taken as the second label selected for that sample. In this way, each sample to be processed corresponds to one sample label. The second number of samples to be processed corresponding to each sample label is then counted, the ratio of the second number for each sample label to the total number of samples is calculated, and this ratio is taken as the first ratio. Likewise, the third number of samples to be processed other than the partial samples corresponding to each sample label is counted, the ratio of the third number for each sample label to the total number of samples is calculated, and this ratio is taken as the second ratio.
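The counting and ratio computation just described can be sketched as below. This is an illustrative sketch, not the patent's implementation; the function name, the sample ids, and the label names are assumptions.

```python
from collections import Counter

def label_counts_and_ratios(predicted_labels, partial_ids):
    """predicted_labels: dict sample id -> label chosen by argmax (the 'second label').
    partial_ids: set of ids of the manually labelled partial samples.
    Returns, per label, the first ratio (count over all samples) and the
    second ratio (count over samples outside the partial set), both
    divided by the total number of samples."""
    total = len(predicted_labels)
    second_number = Counter(predicted_labels.values())        # per-label count, all samples
    third_number = Counter(lbl for sid, lbl in predicted_labels.items()
                           if sid not in partial_ids)         # per-label count, non-partial samples
    first_ratio = {lbl: n / total for lbl, n in second_number.items()}
    second_ratio = {lbl: n / total for lbl, n in third_number.items()}
    return first_ratio, second_ratio

preds = {"S1": "T1", "S2": "T1", "S3": "T2", "S4": "T2", "S5": "T3"}
r1, r2 = label_counts_and_ratios(preds, partial_ids={"S1", "S5"})
# r1["T1"] == 0.4; r2["T1"] == 0.2 (only S2 remains outside the partial set)
```

These two per-label ratios correspond to the O_g and H_g weights used in the loss described next.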
A second total loss value of the label probability prediction model corresponding to each sample label is then determined based on the real labels corresponding to the partial samples to be processed and the real probabilities under those labels, the first probabilities corresponding to the samples to be processed other than the partial samples, the ratio of the second number for each sample label to the total number of samples, and the ratio of the third number to the total number of samples. The model parameters of the label probability prediction model corresponding to each sample label are adjusted so that this second total loss value is maximized.
The second total loss value satisfies the following equation (the formula itself appears only as an image in the source and is not reproduced here), where S is the second total loss value of the label probability prediction model corresponding to each sample label; O_g is the ratio of the second number of partial samples to be processed corresponding to the g-th sample label to the total number of samples; H_g is the ratio of the third number of samples to be processed other than the partial samples corresponding to the g-th sample label to the total number of samples; F(x_i; θ_g) denotes the model probability distribution function for the i-th sample to be processed under the g-th sample label when the model parameters corresponding to that label are θ_g, evaluated both for the partial samples and for the other samples to be processed (θ_g and F are placeholders for symbols rendered as images in the source); A_ig is the real probability of the i-th partial sample to be processed under the g-th sample label; K_ig is the first probability of the i-th sample to be processed other than the partial samples under the g-th sample label; N is the total number of samples in the sample set to be processed; G is the total number of sample labels in the sample label set; and K is the total number of partial samples to be processed.
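Since the equation image is not reproduced in the text, the following LaTeX is only one plausible reconstruction consistent with the variable definitions above — a hedged sketch, not the authoritative formula from the patent drawing:

```latex
S=\sum_{g=1}^{G}\left[\,O_g\sum_{i=1}^{K}A_{ig}\,\log F\!\left(x_i;\theta_g\right)
\;+\;H_g\sum_{i=1}^{N-K}K_{ig}\,\log F\!\left(x_i;\theta_g\right)\right]
```

Under this reading, maximizing S drives each per-label model toward the manually annotated probabilities on the partial samples and toward its own confident predictions on the remaining samples, with each label weighted by its share of the data.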
For each sample label, the adjusted model parameters corresponding to the sample label are taken as the final model parameters of the label probability prediction model; then the sample feature vector of each sample to be processed is used as the model input of the label probability prediction model corresponding to that sample label, and the final probability of each sample to be processed under the sample label is obtained by prediction. The sample label corresponding to the maximum final probability is determined as the target sample label for each sample to be processed.
For example, the sample set to be processed includes ten samples to be processed, S1 and S2 … … S10, the preset sample label set includes three sample labels, which are T1, T2 and T3, respectively, and the final probabilities of the sample S1 to be processed under T1, T2 and T3 are α10, α20 and α30, where α10 is the maximum final probability, and then T1 is the label of the sample S1 to be processed, and the calculation manners of other samples S2 to S10 to be processed are the same as those of the sample S1 to be processed, which will not be repeated herein.
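The final argmax selection can be sketched as follows; the function name and the probability values are illustrative, not from the patent.

```python
def assign_target_labels(final_probs):
    """final_probs: dict sample id -> dict of label -> final probability.
    Picks, for each sample, the label with the maximum final probability."""
    return {sid: max(probs, key=probs.get) for sid, probs in final_probs.items()}

# Mirroring the example: the largest of S1's final probabilities is under T1,
# so T1 becomes the target label of sample S1.
finals = {"S1": {"T1": 0.6, "T2": 0.3, "T3": 0.1},
          "S2": {"T1": 0.2, "T2": 0.7, "T3": 0.1}}
print(assign_target_labels(finals))  # {'S1': 'T1', 'S2': 'T2'}
```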
Based on the same inventive concept, the embodiment of the present application further provides an information pushing device corresponding to the information pushing method, and since the principle of solving the problem by the method in the embodiment of the present application is similar to that of the information pushing method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
The embodiment of the application provides an information pushing device, as shown in fig. 2, which comprises:
an acquisition module 21, configured to acquire a sample set to be processed;
a first processing module 22, configured to generate a sample feature vector for each sample to be processed in the set of samples to be processed, and predict an initial probability of the sample to be processed under each sample label in the set of preset sample labels;
a determining module 23, configured to determine a part of samples to be processed from the sample set to be processed based on an initial probability of each sample under each sample label in the sample label set;
the second processing module 24 is configured to select, from the sample tag set, a target tag for each sample to be processed based on the sample feature vector corresponding to each sample to be processed and the real tag corresponding to a part of the samples to be processed, and the tag probability prediction model corresponding to each sample tag in the sample tag set, and push the target tag.
Optionally, the first processing module 22 is configured to generate a sample feature vector for the sample to be processed according to the following steps:
aiming at each sample to be processed, carrying out word segmentation processing on the sample to be processed to obtain a vocabulary set corresponding to the sample to be processed;
generating a vocabulary vector for each vocabulary in the vocabulary set corresponding to the sample to be processed;
and carrying out weighting processing on vocabulary vectors corresponding to all vocabularies in the vocabulary set corresponding to the sample to be processed to obtain sample feature vectors corresponding to the sample to be processed.
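The three steps above — word segmentation, per-word vectors, and a weighted combination — can be sketched as below. This is a minimal illustration under assumptions: the patent names no concrete tokenizer, word-embedding method, or weighting scheme, so the pre-tokenized input, the toy vectors, and the per-word weights (e.g. TF-IDF-style) are all hypothetical.

```python
# Weighted average of word vectors, as an illustrative realization of the
# "weighting processing" step that produces a sample feature vector.

def sample_feature_vector(tokens, word_vectors, weights):
    """tokens: the vocabulary set obtained from word segmentation.
    word_vectors: dict word -> vector; weights: dict word -> weight."""
    dim = len(next(iter(word_vectors.values())))
    acc = [0.0] * dim
    total_weight = 0.0
    for tok in tokens:
        if tok in word_vectors:            # skip out-of-vocabulary words
            w = weights.get(tok, 1.0)
            acc = [a + w * v for a, v in zip(acc, word_vectors[tok])]
            total_weight += w
    return [a / total_weight for a in acc] if total_weight else acc

vecs = {"push": [1.0, 0.0], "info": [0.0, 1.0]}
print(sample_feature_vector(["push", "info"], vecs, {"push": 3.0, "info": 1.0}))
# [0.75, 0.25]
```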
Optionally, the first processing module 22 is configured to predict an initial probability of the sample to be processed under each sample label in the preset sample label set according to the following steps:
and inputting the sample feature vector of the sample to be processed into a first label probability prediction model corresponding to the sample label aiming at each sample label in the sample label set, and predicting to obtain the initial probability of the sample to be processed under the sample label.
Optionally, the first processing module 22 is further configured to:
for each sample to be processed in a sample set to be processed, determining the maximum initial probability as a target initial probability corresponding to the sample to be processed, and taking a sample label corresponding to the target initial probability as a first label selected by the sample to be processed;
Determining a first number of samples to be processed belonging to the same sample tag based on the first tag selected for each sample to be processed;
based on the initial probability corresponding to each sample to be processed and the ratio of the first number corresponding to each sample label to the total number of samples, adjusting the model parameters of the first label probability prediction model corresponding to each sample label to obtain adjusted model parameters corresponding to each sample label.
Optionally, the second processing module 24 is configured to select a target tag for each sample to be processed from the set of sample tags according to the following steps:
for each sample label, a sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and a first probability of each sample to be processed under the sample label is obtained through prediction;
based on the first probability of each sample to be processed under each sample label and the real labels of part of the samples to be processed, adjusting model parameters of a label probability prediction model corresponding to each sample label;
for each sample label, taking the adjusted model parameters corresponding to the sample label as final model parameters of the label probability prediction model;
The sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and the final probability of each sample to be processed under the sample label is obtained through prediction;
and determining a sample label corresponding to the maximum final probability as a target sample label corresponding to each sample to be processed.
Optionally, the second processing module 24 is configured to adjust model parameters of the tag probability prediction model corresponding to each sample tag according to the following steps:
for each sample to be processed in a sample set to be processed, determining the maximum first probability as a target first probability corresponding to the sample to be processed, and taking a sample label corresponding to the target first probability as a second label selected by the sample to be processed;
determining a second number of partial samples to be processed belonging to the same sample tag based on the second tag selected for each sample to be processed;
based on the real labels corresponding to the part of samples to be processed and the real probabilities under the corresponding real labels, the second probabilities corresponding to the other processed samples except the part of samples to be processed in the sample set to be processed, and the proportion of the second number corresponding to each sample label to the total number of samples, the model parameters of the label probability prediction model corresponding to each sample label are adjusted.
Optionally, the determining module 23 is configured to determine the part of the sample to be processed according to the following steps:
for each sample to be processed, sorting the initial probabilities corresponding to the sample to be processed according to the order of the probabilities from high to low;
determining a difference value between a first initial probability and a second initial probability in initial probability sequences corresponding to the samples to be processed;
and determining the samples to be processed whose corresponding difference value is smaller than a preset threshold as the partial samples to be processed.
Corresponding to the information pushing method in fig. 1, the embodiment of the present application further provides a computer device 300, as shown in fig. 3, where the device includes a memory 301, a processor 302, and a computer program stored in the memory 301 and capable of running on the processor 302, where the processor 302 implements the information pushing method when executing the computer program.
Specifically, the memory 301 and the processor 302 may be a general-purpose memory and processor, which are not specifically limited here. When the processor 302 runs the computer program stored in the memory 301, the above information pushing method can be executed, solving the problem of low accuracy of pushed information in the prior art. The present application acquires a sample set to be processed, generates a sample feature vector for each sample to be processed in the set, and predicts the initial probability of each sample under every sample label in a preset sample label set; it then determines partial samples to be processed from the sample set based on the initial probability of each sample under each sample label, and selects a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample, the real labels corresponding to the partial samples, and the label probability prediction model corresponding to each sample label. In this way, when the target label is determined for a sample to be processed, both the real labels and the sample feature vector of the sample are taken into account, which improves the accuracy of the label selected for each sample to be processed.
Corresponding to the information pushing method in fig. 1, the embodiment of the application further provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, performs the steps of the above information pushing method.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above information pushing method can be executed, solving the problem of low accuracy of pushed information in the prior art. A sample set to be processed is acquired, a sample feature vector is generated for each sample to be processed in the set, and the initial probability of each sample under every sample label in a preset sample label set is predicted; partial samples to be processed are determined from the sample set based on these initial probabilities, and a target label is selected for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample, the real labels corresponding to the partial samples, and the label probability prediction model corresponding to each sample label. In this way, when the target label is determined for a sample to be processed, both the real labels and the sample feature vector of the sample are taken into account, which improves the accuracy of the label selected for each sample to be processed.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the method embodiments and are not repeated in this disclosure. In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; the division into modules is merely a logical functional division, and there may be other divisions in actual implementation: for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling, direct coupling or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of devices or modules may be in electrical, mechanical or other form.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily appreciate variations or alternatives within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (9)

1. An information pushing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a sample set to be processed;
the first processing module is used for generating a sample feature vector for each sample to be processed in the sample set to be processed and predicting the initial probability of the sample to be processed under each sample label in the preset sample label set;
the determining module is used for determining partial samples to be processed from the sample set to be processed based on the initial probability of each sample to be processed under each sample label in the sample label set;
the second processing module is used for selecting a target label for each sample to be processed from the sample label set based on the sample feature vector corresponding to each sample to be processed and the real labels corresponding to part of the samples to be processed respectively and the label probability prediction model corresponding to each sample label in the sample label set, and pushing the target label;
the determining module is used for determining part of samples to be processed according to the following steps:
for each sample to be processed, sorting the initial probabilities corresponding to the sample to be processed according to the order of the probabilities from high to low;
determining a difference value between a first initial probability and a second initial probability in initial probability sequences corresponding to the samples to be processed;
and determining the samples to be processed whose corresponding difference value is smaller than a preset threshold as the partial samples to be processed.
2. The apparatus of claim 1, wherein the first processing module is configured to generate a sample feature vector for the sample to be processed according to:
aiming at each sample to be processed, carrying out word segmentation processing on the sample to be processed to obtain a vocabulary set corresponding to the sample to be processed;
generating a vocabulary vector for each vocabulary in the vocabulary set corresponding to the sample to be processed;
and carrying out weighting processing on vocabulary vectors corresponding to all vocabularies in the vocabulary set corresponding to the sample to be processed to obtain sample feature vectors corresponding to the sample to be processed.
3. The apparatus of claim 1, wherein the first processing module is configured to predict an initial probability of the sample to be processed under each sample label in a set of preset sample labels based on:
and inputting the sample feature vector of the sample to be processed into a first label probability prediction model corresponding to the sample label aiming at each sample label in the sample label set, and predicting to obtain the initial probability of the sample to be processed under the sample label.
4. The apparatus of claim 3, wherein the first processing module is further to:
For each sample to be processed in a sample set to be processed, determining the maximum initial probability as a target initial probability corresponding to the sample to be processed, and taking a sample label corresponding to the target initial probability as a first label selected by the sample to be processed;
determining a first number of samples to be processed belonging to the same sample tag based on the first tag selected for each sample to be processed;
based on the initial probability corresponding to each sample to be processed and the ratio of the first number corresponding to each sample label in the total number of the samples, model parameters of a first label probability prediction model corresponding to each sample label are adjusted, and adjusted model parameters corresponding to each sample label are obtained.
5. The apparatus of claim 1, wherein the second processing module is configured to select a target tag for each sample to be processed from the set of sample tags according to:
for each sample label, a sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and a first probability of each sample to be processed under the sample label is obtained through prediction;
based on the first probability of each sample to be processed under each sample label and the real labels of part of the samples to be processed, adjusting model parameters of a label probability prediction model corresponding to each sample label;
For each sample label, taking the adjusted model parameters corresponding to the sample label as final model parameters of the label probability prediction model;
the sample feature vector of each sample to be processed is used as a model input of a label probability prediction model corresponding to the sample label, and the final probability of each sample to be processed under the sample label is obtained through prediction;
and determining a sample label corresponding to the maximum final probability as a target sample label corresponding to each sample to be processed.
6. The apparatus of claim 1, wherein the second processing module is configured to adjust model parameters of the tag probability prediction model corresponding to each sample tag according to:
for each sample to be processed in a sample set to be processed, determining the maximum first probability as a target first probability corresponding to the sample to be processed, and taking a sample label corresponding to the target first probability as a second label selected by the sample to be processed;
determining a second number of partial samples to be processed belonging to the same sample tag based on the second tag selected for each sample to be processed;
based on the real labels corresponding to the part of samples to be processed and the real probabilities under the corresponding real labels, the second probabilities corresponding to the other processed samples except the part of samples to be processed in the sample set to be processed, and the proportion of the second number corresponding to each sample label to the total number of samples, the model parameters of the label probability prediction model corresponding to each sample label are adjusted.
7. An information pushing method, which is characterized in that the method comprises the following steps:
acquiring a sample set to be processed;
generating a sample feature vector for each sample to be processed in a sample set to be processed, and predicting the initial probability of the sample to be processed under each sample label in a preset sample label set;
determining partial samples to be processed from the sample set to be processed based on the initial probability of each sample to be processed under each sample label in the sample label set;
based on sample feature vectors corresponding to each sample to be processed and real labels corresponding to part of the samples to be processed respectively, and a label probability prediction model corresponding to each sample label in the sample label set, selecting a target label for each sample to be processed from the sample label set, and pushing the target label;
the determining partial samples to be processed from the sample set to be processed based on the initial probability of each sample to be processed under each sample label in the sample label set includes:
for each sample to be processed, sorting the initial probabilities corresponding to the sample to be processed according to the order of the probabilities from high to low;
determining a difference value between a first initial probability and a second initial probability in initial probability sequences corresponding to the samples to be processed;
and determining the samples to be processed whose corresponding difference value is smaller than a preset threshold as the partial samples to be processed.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of claim 7 when executing the computer program.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor performs the steps of the method of claim 7.
CN201911277050.3A 2019-12-12 2019-12-12 Information pushing method, device, computer equipment and storage medium Active CN111127179B (en)

Publications (2)

Publication Number Publication Date
CN111127179A CN111127179A (en) 2020-05-08
CN111127179B true CN111127179B (en) 2023-08-29

Family

ID=70498563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911277050.3A Active CN111127179B (en) 2019-12-12 2019-12-12 Information pushing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111127179B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009211693A (en) * 2008-02-29 2009-09-17 Fujitsu Ltd Pattern identification device and pattern identification method
CN103279868A (en) * 2013-05-22 2013-09-04 兰亭集势有限公司 Method and device for automatically identifying fraud order form
EP3171297A1 (en) * 2015-11-18 2017-05-24 CentraleSupélec Joint boundary detection image segmentation and object recognition using deep learning
CN107066449A (en) * 2017-05-09 2017-08-18 北京京东尚科信息技术有限公司 Information-pushing method and device
CN108763194A (en) * 2018-04-27 2018-11-06 广州优视网络科技有限公司 Using mark stamp methods, device, storage medium and computer equipment
AU2018101514A4 (en) * 2018-10-11 2018-11-15 Chi, Henan Mr An automatic text-generating program for Chinese Hip-hop lyrics
CN109344314A (en) * 2018-08-20 2019-02-15 腾讯科技(深圳)有限公司 A kind of data processing method, device and server
CN109710842A (en) * 2018-12-17 2019-05-03 泰康保险集团股份有限公司 Method for pushing, device and the readable storage medium storing program for executing of business information
CN109872242A (en) * 2019-01-30 2019-06-11 北京字节跳动网络技术有限公司 Information-pushing method and device
CN110163301A (en) * 2019-05-31 2019-08-23 北京金山云网络技术有限公司 A kind of classification method and device of image
CN110223114A (en) * 2019-05-29 2019-09-10 恩亿科(北京)数据科技有限公司 The method and apparatus for managing the characteristic information of user
CN110472154A (en) * 2019-08-26 2019-11-19 秒针信息技术有限公司 A kind of resource supplying method, apparatus, electronic equipment and readable storage medium storing program for executing
CN110535974A (en) * 2019-09-27 2019-12-03 恩亿科(北京)数据科技有限公司 Method for pushing, driving means, equipment and the storage medium of resource to be put

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2437801C (en) * 2001-02-08 2010-06-01 Sensormatic Electronics Corporation Differentially coherent combining for electronic article surveillance systems
US20130226758A1 (en) * 2011-08-26 2013-08-29 Reincloud Corporation Delivering aggregated social media with third party apis
GB2513105A (en) * 2013-03-15 2014-10-22 Deepmind Technologies Ltd Signal processing systems
US10339471B2 (en) * 2017-01-17 2019-07-02 International Business Machines Corporation Ensemble based labeling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Semi-supervised transductive SVM incremental learning algorithm based on locality-sensitive hashing; Yao Minghai; Lin Xuanmin; Wang Xianbao; Journal of Zhejiang University of Technology (No. 02); full text *

Also Published As

Publication number Publication date
CN111127179A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111222305B (en) Information structuring method and device
CN111310848B (en) Training method and device for multi-task model
CN111291183A (en) Method and device for carrying out classification prediction by using text classification model
CN111127364B (en) Image data enhancement strategy selection method and face recognition image data enhancement method
CN108446741B (en) Method, system and storage medium for evaluating importance of machine learning hyper-parameter
CN112732871A (en) Multi-label classification method for acquiring client intention label by robot
CN112148986B (en) Top-N service re-recommendation method and system based on crowdsourcing
CN110717027A (en) Multi-round intelligent question-answering method, system, controller and medium
CN113239702A (en) Intention recognition method and device and electronic equipment
CN113642652A (en) Method, device and equipment for generating fusion model
CN116127060A (en) Text classification method and system based on prompt words
CN113297355A (en) Method, device, equipment and medium for enhancing labeled data based on countermeasure interpolation sequence
CN114266252A (en) Named entity recognition method, device, equipment and storage medium
CN111127179B (en) Information pushing method, device, computer equipment and storage medium
CN110413750B (en) Method and device for recalling standard questions according to user questions
CN112115715A (en) Natural language text processing method and device, storage medium and electronic equipment
CN111445371A (en) Transportation route generation method and device, computer equipment and storage medium
CN110838021A (en) Conversion rate estimation method and device, electronic equipment and storage medium
CN110717037A (en) Method and device for classifying users
CN113238947B (en) Man-machine collaborative dialogue system evaluation method and system
CN116306663A (en) Semantic role labeling method, device, equipment and medium
CN115689603A (en) User feedback information collection method and device and user feedback system
CN113822455B (en) Time prediction method, device, server and storage medium
CN113762647A (en) Data prediction method, device and equipment
CN112200488A (en) Risk identification model training method and device for business object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant