CN112232067A - Method for generating file, method, device and equipment for training file evaluation model - Google Patents

Method for generating file, method, device and equipment for training file evaluation model

Info

Publication number
CN112232067A
CN112232067A (application CN202011210227.0A; granted as CN112232067B)
Authority
CN
China
Prior art keywords
target
candidate
pushed
file
case
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011210227.0A
Other languages
Chinese (zh)
Other versions
CN112232067B (en)
Inventor
何雪枫
魏安康
谢兴波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanhai Information Technology Shanghai Co Ltd
Original Assignee
Hanhai Information Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanhai Information Technology Shanghai Co Ltd filed Critical Hanhai Information Technology Shanghai Co Ltd
Priority to CN202011210227.0A priority Critical patent/CN112232067B/en
Publication of CN112232067A publication Critical patent/CN112232067A/en
Application granted granted Critical
Publication of CN112232067B publication Critical patent/CN112232067B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/20 — Natural language analysis
    • G06F 40/253 — Grammatical analysis; Style critique
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/20 — Natural language analysis
    • G06F 40/205 — Parsing
    • G06F 40/211 — Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/20 — Natural language analysis
    • G06F 40/279 — Recognition of textual entities
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/30 — Semantic analysis
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 — Commerce
    • G06Q 30/06 — Buying, selling or leasing transactions
    • G06Q 30/0601 — Electronic shopping [e-shopping]
    • G06Q 30/0631 — Item recommendations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a document generation method and a method, apparatus, and device for training a document evaluation model, belonging to the field of Internet technology. The method comprises the following steps: acquiring target text description information; acquiring a target candidate document set corresponding to the target text description information; acquiring, based on the target candidate document set, probability information for each candidate pushed document in the set; and determining, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information. This technical scheme solves the low generation efficiency caused by manually producing pushed documents and improves the efficiency of obtaining the target pushed document; moreover, by evaluating the accuracy of each candidate pushed document and selecting the most accurate candidate as the target pushed document, it ensures the accuracy of the target pushed document.

Description

Method for generating documents, and method, apparatus, and device for training a document evaluation model
Technical Field
The embodiments of the present application relate to the field of Internet technology, and in particular to a method, apparatus, and device for document generation and for training a document evaluation model.
Background
At present, the pushed document of an application program can directly influence whether a user browses the text description information corresponding to that pushed document.
In the related art, in order to attract a user to browse the corresponding text description information, after text description information submitted by a user for a certain commodity is acquired, a worker extracts keywords from it and configures pictures for it to generate a pushed document corresponding to the text description information. The pushed document is then shown in a display interface, attracting users to click it and browse the corresponding text description information, thereby promoting the commodity.
Disclosure of Invention
The embodiments of the present application provide a document generation method, a method for training a document evaluation model, an apparatus, and a device. The technical scheme is as follows:
In one aspect, an embodiment of the present application provides a document generation method, the method comprising:
acquiring target text description information;
acquiring a target candidate document set corresponding to the target text description information, the target candidate document set comprising at least one candidate pushed document;
acquiring, based on the target candidate document set, probability information for each candidate pushed document in the set, the probability information characterizing the accuracy with which the candidate pushed document corresponds to the target text description information;
and determining, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
In another aspect, an embodiment of the present application provides a method for training a document evaluation model, the method comprising:
obtaining a candidate training sample set comprising at least one positive sample and at least one negative sample, where a positive sample is a pushed document whose click-through rate is greater than a threshold and a negative sample is a pushed document whose click-through rate is less than the threshold;
expanding the candidate training sample set based on the feature words corresponding to each pushed document in the set, to obtain a target training sample set;
and training the document evaluation model with the target training sample set.
In another aspect, an embodiment of the present application provides a document generation apparatus, comprising:
an information acquisition module, configured to acquire target text description information;
a candidate acquisition module, configured to acquire a target candidate document set corresponding to the target text description information, the target candidate document set comprising at least one candidate pushed document;
a probability acquisition module, configured to acquire, based on the target candidate document set, probability information for each candidate pushed document in the set, the probability information characterizing the accuracy with which the candidate pushed document corresponds to the target text description information;
and a document determination module, configured to determine, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
In another aspect, an embodiment of the present application provides a training apparatus for a document evaluation model, the apparatus comprising:
a sample acquisition module, configured to obtain a candidate training sample set comprising at least one positive sample and at least one negative sample, where a positive sample is a pushed document whose click-through rate is greater than a threshold and a negative sample is a pushed document whose click-through rate is less than the threshold;
a sample expansion module, configured to expand the candidate training sample set based on the feature words corresponding to each pushed document in the set, to obtain a target training sample set;
and a model training module, configured to train the document evaluation model with the target training sample set.
In a further aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above-mentioned document generation method or implement the above-mentioned training method for the document evaluation model.
In a further aspect, the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the above-mentioned document generation method, or implements the above-mentioned training method for the document evaluation model.
In a further aspect, a computer program product is provided, which, when run on a computer device, causes the computer device to execute the above-mentioned document generation method, or to implement the above-mentioned training method of the document evaluation model.
The technical scheme provided by the embodiments of the present application can bring the following beneficial effects:
The computer device acquires the corresponding target pushed document from the target text description information, which solves the low generation efficiency caused by manually producing pushed documents and improves the efficiency of obtaining the target pushed document. Moreover, after the candidate pushed documents corresponding to the target text description information are acquired, the accuracy of each candidate pushed document is evaluated and the most accurate candidate is selected as the target pushed document, ensuring the accuracy of the target pushed document.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a document generation system provided by one embodiment of the present application;
FIG. 2 is a flow chart of a document generation method provided by an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of the working principle of an attention mechanism;
FIG. 4 is a diagram illustrating the structure of a document evaluation model;
FIG. 5 is a flow chart of a method of document generation provided by another embodiment of the present application;
fig. 6 is a diagram illustrating text attribute information;
FIG. 7 is a diagram illustrating a text statistic;
FIG. 8 illustrates a schematic diagram of a push document for presentation;
FIG. 9 is a schematic diagram illustrating a display of a push document for presentation;
FIG. 10 is a schematic diagram illustrating the manner in which a final presentation is obtained;
FIG. 11 is a flow chart of a method of training a document evaluation model provided in one embodiment of the present application;
FIG. 12 is a diagram illustrating an example of a target training sample set acquisition;
fig. 13 is a schematic diagram illustrating a difference between a pushed document acquired by the document generation method in the present application and a pushed document acquired by the related art;
FIG. 14 is a block diagram of a document generation apparatus provided by one embodiment of the present application;
FIG. 15 is a block diagram of a document generation apparatus provided in another embodiment of the present application;
FIG. 16 is a block diagram of a training apparatus for a document evaluation model provided in one embodiment of the present application;
FIG. 17 is a block diagram of a training apparatus for a document evaluation model provided in another embodiment of the present application;
fig. 18 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Please refer to fig. 1, which shows a schematic diagram of a document generation system according to an embodiment of the present application. The document generation system may include: a terminal 10 and a computer device 20.
The terminal 10 is used to send text information to the computer device 20; the text information comprises the specific content of the text description information and the statistical information of the text description information. The specific content is what the text description information displays to the user in a display interface, and the statistical information corresponds to user operations on the text description information, such as its number of likes, views, and shares. Optionally, the terminal 10 may be an electronic device such as a mobile phone, tablet computer, game console, e-book reader, multimedia player, wearable device, or PC (Personal Computer). The terminal 10 may have a client of an application program installed; the application program may be any application capable of presenting a pushed document to a user, such as a shopping, reading, social, or news application. In the embodiments of the present application, through a trigger operation on a pushed document, the user can make the current display interface jump to the display interface of the text description information corresponding to that pushed document. The trigger operation may be a click, slide, or long-press operation, which is not limited in the embodiments of the present application. Optionally, the application may be one that needs to be downloaded and installed, or one that can be used without installation.
The computer device 20 is used to acquire a corresponding pushed document according to the text information sent by the terminal 10. Optionally, the computer device 20 may be a server providing background services for the clients of applications in the terminal 10, for example a background server of the application program. The server may be a single server, a server cluster composed of multiple servers, or a cloud computing service center, and it may provide background services for applications in multiple terminals 10 simultaneously. In the embodiments of the present application, a document evaluation model is provided in the computer device 20. The document evaluation model is a deep learning model used to evaluate the accuracy of candidate pushed documents corresponding to the text description information, and it may be a bidirectional neural network model based on an attention mechanism. Optionally, after extracting the candidate pushed documents corresponding to the text description information from the text information sent by the terminal 10, the computer device 20 evaluates their accuracy with the document evaluation model and, based on the evaluation result, selects the most accurate candidate as the pushed document corresponding to the text description information. The computer device 20 then sends this pushed document to the terminal 10, so that the terminal 10 can present it to the user.
Alternatively, the terminal 10 and the computer device 20 may communicate with each other via a network.
Please refer to fig. 2, which shows a flowchart of a document generation method according to an embodiment of the present application. The method can be applied to the computer device 20 of the document generation system shown in fig. 1; for example, the execution subject of each step can be the computer device 20. The method comprises the following steps (201-204):
step 201, obtaining target text description information.
The target text description information refers to text input by the user, and the text may be an evaluation for a certain function, article, store, or the like. Optionally, the target text description information includes text information, picture information, or voice information, and the like, which is not limited in this embodiment of the application. The user may be a user of a certain application program or application platform, or may be a background worker of a certain application program or application platform. In the embodiment of the application, the computer device may obtain the target text description information through a terminal of a user.
In one possible implementation, the terminal automatically sends the target text description information to the computer device. Optionally, upon detecting target text description information, the terminal sends it to the computer device, which thereby obtains it; alternatively, the terminal collects and sends text description information at a certain time interval, so that the computer device obtains the target text description information at that interval. The time interval may be 0.1 s, 1 h, 1 day, 1 week, etc., which is not limited in the embodiments of the present application.
In another possible implementation, the computer device actively acquires the target text description information from the terminal. Optionally, when determining that it is in a loadable state, the computer device sends a text acquisition request to the terminal, and the terminal returns the target text description information accordingly; alternatively, the computer device sends a text acquisition request at a certain time interval, and the terminal returns text description information accordingly, so that the computer device obtains the target text description information. The time interval may be 0.1 s, 1 h, 1 day, 1 week, etc., which is not limited in the embodiments of the present application.
It should be noted that, after obtaining the target text description information, the computer device may process it in real time, or may store it and process it at a later time, which is not limited in the embodiments of the present application. Optionally, when storing the target text description information, the computer device may classify it according to its specific content and store it into the storage queue corresponding to the classification result.
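As a concrete illustration of the classify-and-store behavior just described, the following Python sketch routes incoming text description information into per-category storage queues for later batch processing. The category names and the keyword-based `classify()` heuristic are hypothetical stand-ins, not part of the patent:

```python
from collections import defaultdict, deque

# One FIFO storage queue per classification result.
storage_queues = defaultdict(deque)

def classify(description: str) -> str:
    # Stand-in classifier: a trivial heuristic on the specific content.
    if "?" in description:
        return "dialogue"
    return "comment"

def enqueue_description(description: str) -> str:
    # Classify the text description information, then store it in the
    # queue corresponding to its classification result.
    category = classify(description)
    storage_queues[category].append(description)
    return category
```

Each queue can then be drained later by whichever processing schedule the computer device uses.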
Step 202, a target candidate document set corresponding to the target text description information is obtained.
The target candidate document set is the set of candidate pushed documents corresponding to the target text description information, and it includes at least one candidate pushed document. A pushed document is text used to represent the main content of the text description information, and it may include text information, picture information, or voice information, which is not limited in the embodiments of the present application.
In the embodiments of the present application, after obtaining the target text description information, the computer device obtains the target candidate document set corresponding to it. Optionally, the computer device may do so based on the target text description information itself.
In one possible implementation, the computer device extracts the target candidate document set from the target text description information based on a document extraction rule. Optionally, after obtaining the target text description information, the computer device obtains the document extraction rule corresponding to it; different types of target text description information correspond to different document extraction rules. For example, if the target text description information is comment information, the rule is to extract the target candidate document set from the text information and the picture information; if it is dialogue information, the rule is to extract the set from the text information and the voice information. After obtaining the document extraction rule, the computer device extracts the target candidate document set from the target text description information according to that rule.
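The type-dependent extraction rules described above can be sketched as a small dispatch table: comment-type information draws candidates from text and pictures, dialogue-type information from text and voice. The rule table and the `extract_candidates()` helper below are illustrative assumptions, not the patent's implementation:

```python
# Maps an information type to the modalities its extraction rule uses.
EXTRACTION_RULES = {
    "comment": ("text", "picture"),
    "dialogue": ("text", "voice"),
}

def extract_candidates(info_type, parts):
    # parts maps a modality name ("text", "picture", "voice") to the list
    # of candidate snippets already derived from that modality.
    sources = EXTRACTION_RULES.get(info_type, ("text",))  # default: text only
    candidates = []
    for source in sources:
        candidates.extend(parts.get(source, []))
    return candidates
```

A real system would replace the per-modality snippet lists with actual extraction logic; the dispatch structure is the point here.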
In another possible implementation, a worker summarizes and condenses the target text description information to obtain the target candidate document set. Optionally, after obtaining the target text description information, the computer device sends it to a worker's terminal, and the worker summarizes and condenses the received information to obtain the corresponding target candidate document set. Different types of target text description information may correspond to different workers' terminals.
In yet another possible implementation, the computer device extracts the target candidate document set from the target text description information based on a document extraction model. Optionally, after obtaining the target text description information, the computer device inputs it into the document extraction model and obtains the target candidate document set output by the model. The document extraction model can be a deep learning model, and different types of target text description information may correspond to different extraction models.
Step 203, based on the target candidate document set, probability information for each candidate pushed document in the set is obtained.
The probability information is used to characterize the accuracy with which a candidate pushed document corresponds to the target text description information. Optionally, the probability information includes a probability score: a value in the range [0, 1] that is positively correlated with accuracy, i.e. the larger the probability score, the higher the accuracy of the candidate pushed document. In the embodiments of the present application, after acquiring the target candidate document set, the computer device acquires, based on the set, the probability information of each candidate pushed document in it.
In one possible implementation, the computer device obtains the probability information of each candidate pushed document in the target candidate document set from the document evaluation model. Optionally, step 203 includes the following steps:
1. Each candidate pushed document in the target candidate document set is input to the document evaluation model.
The document evaluation model is a bidirectional neural network model based on an attention mechanism. The attention mechanism is used to determine the distribution of important content in a candidate pushed document and, according to that distribution, to determine the allocation of computing resources for the candidate pushed document.
By way of example, and with reference to fig. 3, the principle of the attention mechanism is briefly described. A small neural network g approximates the relationship score e_tj between the state output S_(t-1) of the output layer at time t-1 and the word embedding h_j of the candidate pushed document:

e_tj = g(S_(t-1), h_j)

The relationship scores are then normalized to determine the weight a_tj of the word embedding h_j:

a_tj = exp(e_tj) / Σ_(k=1..T_x) exp(e_tk)

where T_x is the length of the sentence vector of the candidate pushed document, and e_tk is the relationship score of the word embedding of any participle in the candidate pushed document.

Each word embedding h_j in the candidate pushed document is then weighted by its weight a_tj, and the weighted embeddings are summed to obtain the weighted result c_t for the candidate pushed document:

c_t = Σ_(j=1..T_x) a_tj · h_j

Finally, a recurrent neural network f processes the state output S_(t-1) of the output layer at time t-1, the weighted result c_t for the candidate pushed document, and the output-layer result y_(t-1), to obtain the state output S_t of the output layer at time t:

S_t = f(S_(t-1), y_(t-1), c_t)
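The scoring, normalization, and weighting steps above can be sketched numerically in a few lines. This is a minimal stand-in: the dot product used for scoring replaces the small neural network g, which the patent does not specify further:

```python
import math

def attention_step(s_prev, word_embeddings):
    # e_tj = g(S_{t-1}, h_j): relationship score for each word embedding,
    # here approximated by a dot product with the previous state.
    scores = [sum(s * h for s, h in zip(s_prev, hj)) for hj in word_embeddings]
    # a_tj = exp(e_tj) / sum_k exp(e_tk): softmax normalization
    # (shifted by the max score for numerical stability).
    m = max(scores)
    exps = [math.exp(e - m) for e in scores]
    total = sum(exps)
    weights = [x / total for x in exps]
    # c_t = sum_j a_tj * h_j: weighted combination of the word embeddings.
    dim = len(word_embeddings[0])
    context = [sum(w * hj[i] for w, hj in zip(weights, word_embeddings))
               for i in range(dim)]
    return weights, context
```

The returned `context` plays the role of c_t, which a recurrent cell f would then combine with S_(t-1) and y_(t-1).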
Optionally, in an embodiment of the present application, the document evaluation model includes an input layer, a word embedding layer, a neural network layer, an attention mechanism layer, and an output layer. Illustratively, as shown in fig. 4, the document evaluation model includes an input layer 41, a word embedding layer 42, a neural network layer 43, an attention mechanism layer 44, and an output layer 45. The input layer 41 is used for inputting candidate pushed documents; the word embedding layer 42 is used for acquiring the word embedding corresponding to each participle of a candidate pushed document; the neural network layer 43 is used for performing a bidirectional traversal over the word embeddings to obtain at least one sentence vector; the attention mechanism layer 44 is configured to weight each sentence vector based on the distribution of important content in the candidate pushed document; and the output layer 45 is configured to output the probability information of the candidate pushed document based on the weighted sentence vectors. The bidirectional traversal comprises a forward traversal and a backward traversal over the word embeddings in the input order of their corresponding participles. In the embodiments of the present application, to ensure the accuracy and efficiency of the document evaluation model, its parameters may be specially set; for example, the length of its input sequence may be set to 25.
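The five-layer pipeline just described (input, word embedding, bidirectional neural network, attention, output) can be illustrated end to end with a toy scorer. Everything below — the embedding table, the running-sum "traversals" standing in for recurrent passes, the sigmoid output — is an illustrative assumption, not the patent's trained model; only the fixed input length of 25 comes from the text:

```python
import math

# Hypothetical word-embedding table (word embedding layer 42).
EMBEDDINGS = {"good": [0.9, 0.1], "food": [0.6, 0.4], "bad": [0.1, 0.9]}
MAX_LEN = 25  # the described model fixes the input-sequence length at 25

def evaluate_copy(tokens):
    tokens = tokens[:MAX_LEN]                                   # input layer 41
    embedded = [EMBEDDINGS.get(t, [0.0, 0.0]) for t in tokens]  # embedding layer 42
    # Neural network layer 43: running sums over the first embedding
    # dimension, in both directions, stand in for the forward and
    # backward recurrent passes of the bidirectional traversal.
    forward = [sum(e[0] for e in embedded[:i + 1]) for i in range(len(embedded))]
    backward = [sum(e[0] for e in embedded[i:]) for i in range(len(embedded))]
    states = [f + b for f, b in zip(forward, backward)]
    # Attention mechanism layer 44: softmax-weighted pooling of the states.
    m = max(states)
    exps = [math.exp(s - m) for s in states]
    weights = [x / sum(exps) for x in exps]
    pooled = sum(w * s for w, s in zip(weights, states))
    # Output layer 45: sigmoid squashes the pooled state into a [0, 1] score.
    return 1.0 / (1.0 + math.exp(-pooled))
```

The point is the data flow between the five layers, not the arithmetic inside each stand-in.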
In the embodiments of the present application, after obtaining the target candidate document set, the computer device inputs each candidate pushed document in the set to the document evaluation model.
Illustratively, a comparison between the attention-based bidirectional neural network model of the embodiments of the present application and other deep learning models is shown in Table 1:
TABLE 1 Comparison between the attention-based bidirectional neural network model and other deep learning models
(Table 1 appears only as an image in the original publication; its contents are not reproduced here.)
2. The probability information output by the document evaluation model is acquired.
In the embodiments of the present application, after inputting each candidate pushed document in the target candidate document set into the document evaluation model, the computer device acquires the probability information output by the model.
In another possible implementation, the computer device obtains probability information of each candidate pushed document in the target candidate document set based on the document evaluation rule. Optionally, after obtaining the target candidate document set, the computer device obtains a document evaluation rule corresponding to the target candidate document set, and evaluates each candidate pushed document in the target candidate document set based on the document evaluation rule, so as to obtain probability information of each candidate pushed document.
In yet another possible implementation, the target candidate document set is manually evaluated by a worker to obtain probability information of each candidate pushed document in the target candidate document set. Optionally, after obtaining the target candidate document set, the computer device sends each candidate pushed document in the target candidate document set to a terminal of a worker, and the worker evaluates the candidate pushed document received by the terminal, so as to obtain probability information of each candidate pushed document.
Step 204, determining, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.

The target pushed document is a text used for representing the main content of the target text description information, and may include text information, picture information, voice information, or the like, which is not limited in the embodiment of the present application.

In the embodiment of the present application, after obtaining the probability information of each candidate pushed document, the computer device compares the probability information and determines, according to it, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information. Optionally, if the probability information includes a probability score, the computer device may determine the candidate pushed document with the highest probability score as the target pushed document.
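Step 204 reduces to an argmax over the probability scores; a minimal sketch, assuming the probability information is a list of scores aligned with the candidate list:

```python
def select_target_document(candidates, probability_scores):
    # Step 204: the candidate pushed document with the highest probability
    # score is taken as the target pushed document.
    best = max(range(len(candidates)), key=probability_scores.__getitem__)
    return candidates[best]
```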
In summary, in the technical solution provided in the embodiment of the present application, the computer device obtains the corresponding target pushed document through the target text description information, so as to avoid the problem of low document generation efficiency caused by manually obtaining the pushed document, and improve the obtaining efficiency of the target pushed document; and after the candidate pushed documents corresponding to the target text description information are obtained, the computer equipment evaluates the accuracy of each candidate pushed document, selects the candidate pushed document with the highest accuracy from the candidate pushed documents as the target pushed document, and ensures the accuracy of the target pushed document.
Next, a method of acquiring the target candidate document set will be described.
In an exemplary embodiment, the above step 202 includes the following steps:
1. Perform syntactic preprocessing on the target text description information, and extract a first candidate document set from the target text description information.

In the embodiment of the present application, after obtaining the target text description information, the computer device performs syntactic preprocessing on it and extracts a first candidate document set from it. The first candidate document set includes at least one candidate pushed document.
Optionally, after obtaining the target text description information, the computer device removes symbols in the target text description information to obtain the first processing information. The symbols may include punctuation marks, numeric symbols, greek letters, emoticons, and the like, which are not limited in this embodiment of the application. Optionally, a symbol library may be stored in the computer device, and when removing the symbols in the target text description information, the computer device may remove the symbols in the target text description information based on the symbols contained in the symbol library.
Optionally, after the computer device obtains the first processing information, the computer device splices the phrases in the first processing information to obtain second processing information. The short sentences are sentences spaced by the symbols in the target text description information, and the second processing information comprises more than one long sentence. Note that, in the phrase splicing, the number of phrases to be spliced may be 2, 3, 4, 5, or the like, and this is not limited in the embodiment of the present application. Certainly, when the phrases are spliced, adjacent phrases can be spliced, and also nonadjacent phrases can be spliced, which is not limited in the embodiment of the present application.
Optionally, after obtaining the second processing information, the computer device removes, based on grammatical logic rules, long sentences that do not satisfy the grammatical logic rules from the second processing information, so as to obtain the first candidate document set. The grammatical logic rules include syntax rules and lexical rules. The syntax rules are used to detect the structure of each long sentence in the second processing information and the dependency relationships between the words in each long sentence. The lexical rules are used to detect the structure and properties of the words themselves in each long sentence in the second processing information.
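The preprocessing pipeline above (symbol removal, short-sentence splicing, grammar filtering) can be sketched as follows. The regular expression and the grammar check are illustrative stand-ins: the patent's symbol library and its syntax/lexical rules would be far richer.

```python
import re

def looks_grammatical(sentence):
    # Placeholder for the syntax and lexical rules; here a long sentence
    # merely needs to contain at least two words.
    return len(sentence.split()) >= 2

def extract_first_candidate_set(description):
    # Short sentences are the runs of text between symbols; splitting on
    # non-word, non-space characters removes the symbols at the same time.
    short_sentences = [s.strip() for s in re.split(r"[^\w\s]+", description)
                       if s.strip()]
    # Splice adjacent short sentences into long sentences (pairs of 2 here;
    # the embodiment also allows 3, 4, 5, or non-adjacent splices).
    long_sentences = [" ".join(short_sentences[i:i + 2])
                      for i in range(len(short_sentences) - 1)]
    # Keep only long sentences satisfying the grammatical logic rules.
    return [s for s in long_sentences if looks_grammatical(s)]
```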
2. Select candidate pushed documents whose quality meets the target condition from the first candidate document set to obtain the target candidate document set.
In this embodiment of the application, after acquiring the first candidate document set, the computer device selects a candidate pushed document with quality meeting a target condition from the first candidate document set, removes a candidate pushed document with quality not meeting the target condition, and further obtains the target candidate document set.
In a possible embodiment, the target condition is that a target requirement is met. The target requirement includes, but is not limited to, at least one of the following: the document conveys positive emotion, the wording of the document is graceful, the document is fluent, the content of the document is rich, and the typesetting of the document is normal. Optionally, after obtaining the first candidate document set, the computer device detects the first candidate document set and removes pushed documents that do not meet the target requirement from it, so as to obtain the target candidate document set. Illustratively, positive emotion refers to a non-negative emotion; graceful wording means that the document does not include any word in a target word bank, where the target word bank stores non-graceful words.
In another possible embodiment, the target condition is that no negative words are included. Negative words are words with a certain special tendency; one or more word banks containing negative words may be stored in the computer device, and different types of negative words may be stored in different word banks. Optionally, after obtaining the first candidate document set, the computer device performs word-segmentation filtering on the first candidate document set and removes pushed documents containing negative words from it, so as to obtain the target candidate document set.
In yet another possible embodiment, the target condition is that the character length is smaller than a target value. The target value may be any value, such as 24, 25, or 26, which is not limited in the embodiment of the present application. Optionally, after obtaining the first candidate document set, the computer device performs sentence filtering on the first candidate document set and removes pushed documents whose character length is greater than the target value, so as to obtain the target candidate document set; or, after obtaining the first candidate document set, the computer device performs character supplementation or character interception on the candidate pushed documents in the first candidate document set based on the target value of the character length, so as to obtain the target candidate document set. Exemplarily, character supplementation refers to padding a candidate pushed document with the character "0"; character interception refers to intercepting the number of characters corresponding to the target value from a candidate pushed document, for example, sequentially intercepting that number of characters starting from the initial character of the candidate pushed document.
It should be noted that, in practical applications, an operator may flexibly set the target conditions according to practical situations, for example, the target conditions may include any one or more of the above conditions, and this is not limited in this embodiment of the application.
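Two of the target conditions above, the negative-word filter and the character-length normalization, can be sketched together. The word bank and the target value of 25 are illustrative assumptions, not values fixed by the patent.

```python
NEGATIVE_WORDS = {"awful", "scam"}  # illustrative negative-word bank
TARGET_LENGTH = 25                  # illustrative target value

def contains_negative_word(document):
    return bool(set(document.split()) & NEGATIVE_WORDS)

def fit_length(document, target=TARGET_LENGTH, pad="0"):
    # Character interception: keep characters from the initial character up
    # to the target value; character supplementation: pad with "0".
    if len(document) >= target:
        return document[:target]
    return document + pad * (target - len(document))

def filter_candidates(candidates):
    # Remove candidates containing negative words, then normalize every
    # surviving candidate to exactly the target character length.
    return [fit_length(c) for c in candidates
            if not contains_negative_word(c)]
```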
Please refer to fig. 5, which shows a flowchart of a document generation method according to another embodiment of the present application. The method can be applied to the computer device 20 of the document generation system shown in fig. 1; for example, the execution subject of each step can be the computer device 20. The method includes the following steps (501-506):
step 501, obtaining target text description information.
Step 502, acquiring a target candidate document set corresponding to the target text description information.

Step 503, acquiring, based on the target candidate document set, probability information of each candidate pushed document in the target candidate document set.

Step 504, determining, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
The steps 501-504 are the same as the steps 201-204 in the embodiment of fig. 2, and refer to the embodiment of fig. 2 specifically, which is not described herein again.
Step 505, acquiring key information corresponding to the target pushed document according to the target text description information.

The key information is information that enriches the display effect of the target pushed document. Optionally, the computer device may use the key information to fill the content of the target pushed document. The filling position of the key information may be any position of the target pushed document, which is not limited in the embodiment of the present application.
In this embodiment of the application, the computer device may obtain, according to the target text description information, key information corresponding to the target pushed document. Optionally, after obtaining the target pushed document, the computer device may obtain key information corresponding to the target pushed document, and further add the key information to the target pushed document; or, the computer device may acquire the key information corresponding to the target pushed document and store the key information before acquiring the target pushed document, and further directly add the key information to the target pushed document after acquiring the target pushed document.
In a possible implementation manner, the key information includes the category to which the target text description information belongs, which indicates the classification corresponding to the target text description information. Optionally, the computer device classifies the target text description information, determines the category to which it belongs, and obtains a recommendation corresponding to the target pushed document from a recommendation library corresponding to that category. The recommendation libraries corresponding to different categories may contain the same or different recommendations. Illustratively, a recommendation may include image information.
In another possible implementation, the key information includes text attribute information. Wherein the text attribute information is used for indicating the content information of the target text description information. Optionally, the computer device performs content detection on the target text description information according to the target text description information, and obtains text attribute information corresponding to the target pushed document. Illustratively, as shown in fig. 6, the text attribute information 60 includes a keyword 61 of the target text description information, an entity name 62, a recommendation reason 63, and an emotion analysis 64. The keywords 61 may include keywords of the target text description information, where the keywords may be titles of the target text description information, or words whose occurrence frequency in the target text description information is within a preset range; the entity name 62 may include a trade name, a store name, a business circle name, an author name of the target text description information, and a category name of a category to which the target text description information belongs; the recommendation reason 63 may include a tag of the target text description information, which may be selected by an author of the target text description information when inputting the target text description information, or may be assigned by the computer device according to the content of the target text description information, for example, the tag may be eating, shopping, entertainment, staying in a store, discount, etc.; the emotion analysis 64 may include the emotion contained in the target text description information, such as like, very like, like but with some problems, etc.
In yet another possible implementation, the key information includes text statistics information. The text statistical information is used to indicate statistical information of user operations for the target text description information, where the user operations may be click operations, browsing operations, forwarding operations, and the like, and this is not limited in this embodiment of the present application. Optionally, the computer device performs statistics on user operations for the target text description information according to the target text description information, and obtains text statistical information corresponding to the target push file. Illustratively, with reference to fig. 7 in conjunction, the textual statistics 70 include content popularity 71, content conversion 72, purchase status 73, and quality index 74. The content popularity 71 comprises the number of praise, the number of browse, the number of collection, the number of author fans and the number of clicks corresponding to the target text description information; the content conversion 72 includes evaluation number and sharing number corresponding to the target text description information; the purchase condition 73 includes sales, discount and average price corresponding to the goods included in the target text description information; the quality index 74 includes the star rating and the number of times of recommendation corresponding to the commodity included in the target text description information.
It should be noted that, in practical applications, a worker may flexibly set the key information according to practical situations, for example, the key information may include any one or more of the above information, and this is not limited in this embodiment of the application.
Step 506, adding the key information to the target pushed document to obtain a pushed document for display.
In the embodiment of the present application, after obtaining the key information, the computer device adds the key information to the target pushed document to obtain a pushed document for display. The pushed document for display is a document which can present the main content of the target text description information to the user and attract the user. Taking the above-mentioned key information as a recommendation, as shown in fig. 8, the pushed document 81 for display includes a recommendation 82, the pushed document 83 for display includes a recommendation 84, and the pushed document 85 for display includes a recommendation 86.
It should be noted that, in the embodiment of the present application, the computer device may also add a title to the pushed document for display. Optionally, the computer device obtains user information of a user account for the pushed document for display and adds a title to it based on the user information. The user account is the account of the user of the terminal on which the pushed document is displayed. Optionally, the user information includes a user name, a user avatar, a character string corresponding to the user account, and the like; the title may include any one or more kinds of the user information, which is not limited in the embodiment of the present application.
Optionally, after adding a title to the pushed document for display, the computer device may send the titled pushed document for display to the terminal; correspondingly, the terminal receives it and shows it to the user in a display interface. As shown in fig. 9, the display interface 90 includes the pushed document 91 for display and its title 92. The pushed document 91 for display includes key information 93.
Of course, in actual use, the above-mentioned title may be added by the terminal. Optionally, after acquiring the pushed document for display, the computer device may send the pushed document for display to the terminal, and then, after acquiring the pushed document for display, the terminal adds a title to the pushed document for display according to the user information of the user account, and displays the pushed document for display with the title to the user in the display interface.
In summary, in the technical solution provided in the embodiment of the present application, the pushed document for display is obtained by adding the key information to the target pushed document, so that the pushed document displayed to the user is rich in content, and the attraction of the pushed document to the user is improved.
In addition, the document generation process of the present application is described in full with reference to fig. 10. After obtaining the target text description information, the computer device performs semantic extraction on it to obtain a plurality of candidate pushed documents, extracts the target pushed document from the candidate pushed documents according to the document evaluation model, and then adds key information to the target pushed document to obtain the pushed document for display. Finally, a title is added to the pushed document for display to obtain the final document for display. Moreover, when determining the title, the computer device can mine user interests from user behaviors and determine the corresponding title according to those interests, so that the final document for display is targeted at and attractive to the user.
Please refer to fig. 11, which illustrates a flowchart of a training method of a document evaluation model according to an embodiment of the present application. The method can be applied to the computer device 20 of the document generation system shown in fig. 1, or to any other computer device. The method may include the following steps (1101-1103):
step 1101, a set of candidate training samples is obtained.
The candidate training sample set is a candidate set of training samples corresponding to the document evaluation model. Optionally, the computer device may obtain, from the pushed document library, pushed documents whose click count is greater than a certain number, so as to obtain the candidate training sample set.

Optionally, the candidate training sample set includes at least one positive sample and at least one negative sample. A positive sample is a pushed document whose click-through rate is greater than a threshold, and a negative sample is a pushed document whose click-through rate is smaller than the threshold. The click-through rate is the ratio of the number of clicks of a pushed document to the number of times it is displayed, and the threshold is any value that can be flexibly set.
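The positive/negative labeling can be sketched directly from the definitions above; the threshold of 0.05 is an illustrative value, since the patent leaves it flexibly settable.

```python
def label_candidate_samples(records, threshold=0.05):
    # records: (pushed document, clicks, display count) triples.
    # Click-through rate = clicks / displays; above the threshold the
    # document is a positive sample, otherwise a negative sample.
    positives, negatives = [], []
    for document, clicks, displays in records:
        ctr = clicks / displays if displays else 0.0
        (positives if ctr > threshold else negatives).append(document)
    return positives, negatives
```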
Step 1102, expanding the candidate training sample set based on the feature words corresponding to each pushed document in the candidate training sample set, to obtain a target training sample set.

Feature words are words indicating the characteristics of a pushed document. Optionally, the feature words may be obtained from the candidate pushed documents. In the embodiment of the present application, after obtaining the candidate training sample set, the computer device obtains the feature words corresponding to each pushed document in the candidate training sample set and expands the candidate training sample set based on the feature words, so as to obtain the target training sample set.
Step 1103, training the document evaluation model by using the target training sample set.

The document evaluation model is a deep learning model for evaluating the accuracy of candidate pushed documents corresponding to text description information, and may be a bidirectional neural network model based on an attention mechanism. In the embodiment of the present application, after obtaining the target training sample set, the computer device trains the document evaluation model by using the target training sample set.

Optionally, after obtaining the target training sample set, the computer device removes the symbols in the pushed documents of the target training sample set to obtain a first processed target training sample set. The symbols may include punctuation marks, numeric symbols, Greek letters, emoticons, and the like, which is not limited in the embodiment of the present application. Further, the computer device performs quality detection on the first processed target training sample set and removes pushed documents whose quality does not meet the target condition, so as to obtain a second processed target training sample set. The target condition includes, but is not limited to, at least one of the following: the document conveys positive emotion, the wording of the document is graceful, the document is fluent, the content of the document is rich, the typesetting of the document is normal, no negative words are included, and the character length is smaller than a target value. Then, the computer device trains the document evaluation model by using the second processed target training sample set.
It should be noted that, in the embodiment of the present application, multiple rounds of training are required to complete the training of the document evaluation model, and since the click-through rate of a pushed document changes in real time, the computer device may update the target training sample set during each round of training. Optionally, the computer device updates the positive samples and the negative samples in the candidate training sample set based on the updated click-through rates, so as to obtain an updated candidate training sample set; as above, a positive sample is a pushed document whose click-through rate is greater than the threshold, and a negative sample is one whose click-through rate is smaller than the threshold. Further, the computer device obtains an updated extended sample set according to the updated candidate training sample set, and performs the next round of training on the document evaluation model by using the updated candidate training sample set and the updated extended sample set.
In summary, in the technical solution provided by the embodiment of the present application, the target training sample set is obtained by expanding the candidate training sample set and serves as the training samples of the document evaluation model, which effectively alleviates the scarcity of model training samples and ensures the reliability and accuracy of the training of the document evaluation model.
Next, a method for acquiring the target training sample set will be described.
In an exemplary embodiment, the above step 1102 includes the following steps:
1. Extract feature words from each pushed document of the candidate training sample set.
In the embodiment of the present application, after acquiring the candidate training sample set, the computer device extracts feature words from each of the pushed documents of the candidate training sample set.
Optionally, after obtaining the candidate training sample set, the computer device removes the symbols in each pushed document of the candidate training sample set to obtain each processed pushed document. The symbols may include punctuation marks, numeric symbols, Greek letters, emoticons, and the like, which is not limited in the embodiment of the present application. Optionally, a symbol library may be stored in the computer device, and when removing the symbols in a pushed document, the computer device may remove them based on the symbols contained in the symbol library.

Optionally, after obtaining each processed pushed document, the computer device performs word segmentation on each processed pushed document to obtain the participles corresponding to it. Further, target participles are removed from the participles corresponding to each processed pushed document to obtain the keywords corresponding to it. The target participles include, but are not limited to, at least one of the following: stop words, participles whose frequency of occurrence in the pushed documents is greater than a first target value, and participles whose frequency of occurrence in the pushed documents is smaller than a second target value, the first target value being greater than the second target value.

Optionally, after obtaining the keywords, the computer device obtains the synonyms corresponding to the keywords and determines the keywords together with their synonyms as the feature words.
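The keyword-extraction step can be sketched with a document-frequency filter. This sketch assumes whitespace-delimited participles, uses an illustrative stop-word list and illustrative frequency bounds, and omits the synonym expansion described above.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "of"}  # illustrative stop-word list

def extract_feature_words(documents, first_target=0.8, second_target=0.3):
    # Document frequency of every participle across the pushed documents.
    n = len(documents)
    freq = Counter()
    for doc in documents:
        freq.update(set(doc.split()))
    # Remove stop words, participles above the first target value, and
    # participles below the second (first_target > second_target).
    return {w for w, c in freq.items()
            if w not in STOP_WORDS and second_target < c / n < first_target}
```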
2. Based on the feature words, select from the pushed document library other pushed documents whose similarity to the feature words is greater than a threshold.
In this embodiment of the application, after acquiring the feature words, the computer device selects, based on the feature words, other pushed documents from the pushed document library, where the similarity between the pushed documents and the feature words is greater than a threshold value. The threshold value may be any value, which is not limited in this embodiment of the application.
Optionally, after obtaining the feature words, the computer device traverses the pushed documents in the pushed document library; when the degree of overlap between the participles of a pushed document and the feature words is greater than a certain value, the similarity between that pushed document and the feature words is determined to be greater than the threshold, that is, the pushed document is one of the other pushed documents.
3. Label the other pushed documents whose similarity to the feature words corresponding to the positive samples is greater than the threshold as positive samples, and label the other pushed documents whose similarity to the feature words corresponding to the negative samples is greater than the threshold as negative samples, so as to obtain an extended sample set.
In this embodiment of the application, after obtaining the other pushed documents, the computer device labels the other pushed documents to obtain an extended sample set. Optionally, the computer device labels, as positive samples, other pushed documents with similarity between the feature words corresponding to the positive samples being greater than a threshold value, and labels, as negative samples, other pushed documents with similarity between the feature words corresponding to the negative samples being greater than the threshold value, thereby obtaining the extended sample set.
4. Add the extended sample set to the candidate training sample set to obtain the target training sample set.
In this embodiment, after obtaining the extended sample set, the computer device adds the extended sample set to the candidate training sample set to obtain a target training sample set.
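Steps 2 and 3 above can be sketched with a simple participle-overlap similarity; the overlap measure, the threshold, and the tie-breaking toward the positive set are illustrative assumptions, since the patent does not fix a particular similarity function.

```python
def word_overlap(document, feature_words):
    # Fraction of the document's participles that are feature words.
    words = document.split()
    return (sum(w in feature_words for w in words) / len(words)
            if words else 0.0)

def expand_sample_set(library, positive_features, negative_features,
                      threshold=0.3):
    # Pull pushed documents from the library whose overlap with a
    # feature-word set exceeds the threshold and label them accordingly
    # (1 = positive sample, 0 = negative sample).
    extended = []
    for document in library:
        pos = word_overlap(document, positive_features)
        neg = word_overlap(document, negative_features)
        if pos > threshold and pos >= neg:
            extended.append((document, 1))
        elif neg > threshold:
            extended.append((document, 0))
    return extended
```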
Illustratively, with reference to fig. 12, the manner of obtaining the target training sample set is described in full. First, pushed documents 121 whose click count is greater than a certain value are obtained from the pushed document library 120 and labeled: pushed documents 121 whose click-through rate is greater than a threshold are labeled as positive samples, and those whose click-through rate is smaller than the threshold are labeled as negative samples, yielding a candidate training sample set 122. Then, the computer device obtains feature words from the candidate training sample set 122 and expands the candidate training sample set 122 according to the feature words to obtain a target training sample set 123.
Compared with pushed documents obtained according to the related art, pushed documents obtained according to the document generation method of the present application show a significant improvement in click-through rate and click count: as shown in fig. 13, the click-through rate is increased by 7.2% and the click count by 8.0%.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 14, a block diagram of a document generation apparatus according to an embodiment of the present application is shown. The apparatus has the function of implementing the above document generation method, and the function may be implemented by hardware or by hardware executing corresponding software. The apparatus may be a computer device or may be arranged in a computer device. The apparatus 1400 may include: an information acquisition module 1410, a candidate acquisition module 1420, a probability acquisition module 1430, and a document determination module 1440.
And an information obtaining module 1410, configured to obtain the target text description information.
A candidate obtaining module 1420, configured to obtain a target candidate document set corresponding to the target text description information, where the target candidate document set includes at least one candidate pushed document.

A probability obtaining module 1430, configured to obtain, based on the target candidate document set, probability information of each candidate pushed document in the target candidate document set, where the probability information is used to represent the accuracy with which a candidate pushed document corresponds to the target text description information.

A document determination module 1440, configured to determine, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
In an exemplary embodiment, the candidate acquisition module 1420 includes a first acquisition unit and a second acquisition unit.
The first acquisition unit is configured to perform syntax preprocessing on the target text description information and extract a first candidate document set from the target text description information, where the first candidate document set includes at least one candidate pushed document.
The second acquisition unit is configured to select, from the first candidate document set, candidate pushed documents whose quality meets a target condition, to obtain the target candidate document set.
In an exemplary embodiment, the first acquisition unit is configured to: remove symbols from the target text description information to obtain first processing information; splice the short phrases in the first processing information to obtain second processing information, where a short phrase is a segment separated by a symbol in the target text description information, and the second processing information includes one or more long sentences; and remove, based on a grammatical logic rule, the long sentences that do not satisfy the grammatical logic rule from the second processing information, to obtain the first candidate document set.
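The preprocessing steps of the first acquisition unit can be sketched as follows. This is an illustrative Python sketch only, not part of the patent: the symbol set, the punctuation-based splitting, the splice length `max_splice`, and the placeholder "grammatical logic rule" (a minimum word count) are all assumptions standing in for details the text leaves unspecified.

```python
import re

def extract_candidates(description, max_splice=2):
    """Sketch of the syntax preprocessing: strip symbols, split into short
    phrases, splice adjacent phrases into long-sentence candidates, and drop
    candidates failing a placeholder grammatical-logic check."""
    # 1) first processing information: remove decorative symbols
    cleaned = re.sub(r"[*#@~|]+", "", description)
    # 2) short phrases: segments separated by punctuation symbols
    phrases = [p.strip() for p in re.split(r"[,;.!?]", cleaned) if p.strip()]
    # 3) second processing information: splice consecutive phrases into
    #    long sentences (up to max_splice phrases per candidate)
    candidates = []
    for i in range(len(phrases)):
        for j in range(i, min(i + max_splice, len(phrases))):
            candidates.append(", ".join(phrases[i:j + 1]))
    # 4) keep only candidates satisfying the grammatical-logic rule
    #    (placeholder rule: at least two words; purely illustrative)
    return [c for c in candidates if len(c.split()) >= 2]

cands = extract_candidates(
    "Fresh seafood daily! Cozy riverside patio; live music on weekends."
)
```

With two-phrase splicing, the three short phrases yield five candidate pushed documents, each a single phrase or a spliced pair.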
In an exemplary embodiment, the second acquisition unit is configured to: detect the first candidate document set and remove from it the pushed documents that do not meet a target requirement, to obtain the target candidate document set, where the target requirement includes at least one of: positive sentiment, elegant wording, fluent text, rich content, and normal typesetting; or perform word-segmentation filtering on the first candidate document set and remove from it the pushed documents containing negative words, to obtain the target candidate document set; or perform sentence filtering on the first candidate document set and remove from it the pushed documents whose character length is greater than a target value, to obtain the target candidate document set; or perform character padding or character truncation on the candidate pushed documents in the first candidate document set based on the target value for character length, to obtain the target candidate document set.
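Two of these filters, the negative-word filter and the length normalization, can be sketched in a few lines. The negative-word list, the target length, and the padding character below are illustrative assumptions; the patent does not specify them.

```python
NEGATIVE_WORDS = {"terrible", "awful", "dirty"}  # illustrative blacklist

def filter_candidates(candidates, target_len=40, pad_char="~"):
    """Sketch of the second acquisition unit: drop candidates containing
    negative words, then truncate or pad each survivor to target_len."""
    kept = []
    for cand in candidates:
        # word-segmentation filter: reject candidates with negative words
        if any(w in NEGATIVE_WORDS for w in cand.lower().split()):
            continue
        kept.append(cand)
    # length normalization: truncate long candidates, pad short ones
    return [c[:target_len].ljust(target_len, pad_char) for c in kept]

out = filter_candidates(
    ["Great views, terrible service", "Great views all day"], target_len=20
)
```

The first candidate is rejected by the negative-word filter; the second (19 characters) is padded to the 20-character target length.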
In an exemplary embodiment, the probability acquisition module 1430 is configured to input each candidate pushed document in the target candidate document set into a document evaluation model, where the document evaluation model is a bidirectional neural network model based on an attention mechanism, and to acquire the probability information output by the document evaluation model.
In an exemplary embodiment, the document evaluation model includes an input layer, a word embedding layer, a neural network layer, an attention mechanism layer, and an output layer. The input layer receives the candidate pushed document. The word embedding layer obtains a word embedding corresponding to each participle of the candidate pushed document. The neural network layer traverses the word embeddings bidirectionally to obtain at least one sentence vector, where the bidirectional traversal includes a forward traversal in the input order of the participles corresponding to the word embeddings and a reverse traversal in the opposite order. The attention mechanism layer weights each sentence vector based on the distribution of important content in the candidate pushed document. The output layer outputs the probability information of the candidate pushed document based on the weighted sentence vectors.
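The last two layers, attention weighting and probability output, can be sketched in plain Python. The word embedding and bidirectional neural network layers are not reproduced here; the sentence vectors, the attention query, and the output weights below are fixed stand-ins for learned parameters, so this is an illustrative sketch of the computation shape only.

```python
import math

def attention_pool(sentence_vectors, query):
    """Sketch of the attention mechanism layer: score each sentence vector
    against a query vector, softmax the scores, and pool by the weights."""
    scores = [sum(a * b for a, b in zip(v, query)) for v in sentence_vectors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]        # softmax over sentence vectors
    dim = len(sentence_vectors[0])
    pooled = [sum(w * v[i] for w, v in zip(weights, sentence_vectors))
              for i in range(dim)]             # attention-weighted combination
    return weights, pooled

def output_probability(pooled, w_out, b_out=0.0):
    """Sketch of the output layer: map the pooled vector to a probability."""
    z = sum(p * w for p, w in zip(pooled, w_out)) + b_out
    return 1.0 / (1.0 + math.exp(-z))          # sigmoid

vecs = [[0.2, -0.1, 0.4], [0.0, 0.3, -0.2], [0.5, 0.1, 0.1]]  # stand-in sentence vectors
weights, pooled = attention_pool(vecs, query=[1.0, 0.0, 1.0])
prob = output_probability(pooled, w_out=[0.5, -0.5, 1.0])
```

The softmax guarantees the attention weights sum to one, and the sigmoid keeps the output in (0, 1), matching its use as the accuracy probability of a candidate pushed document.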
In an exemplary embodiment, as shown in fig. 15, the apparatus 1400 further includes: a key acquisition module 1450 and a document acquisition module 1460.
The key acquisition module 1450 is configured to acquire, according to the target text description information, key information corresponding to the target pushed document.
The document acquisition module 1460 is configured to add the key information to the target pushed document, to obtain a pushed document for display.
In an exemplary embodiment, the key acquisition module 1450 is configured to: determine, according to the target text description information, the category to which the target text description information belongs, and acquire a recommendation phrase corresponding to the target pushed document from a recommendation library of that category; or acquire, according to the target text description information, text attribute information corresponding to the target pushed document, where the text attribute information indicates content information of the target text description information; or acquire, according to the target text description information, text statistical information corresponding to the target pushed document, where the text statistical information indicates statistics of user operations on the target text description information.
In an exemplary embodiment, as shown in fig. 15, the apparatus 1400 further includes: a user acquisition module 1470 and a title addition module 1480.
The user acquisition module 1470 is configured to acquire user information of the user account at which the pushed document for display is directed.
The title addition module 1480 is configured to add a title to the pushed document for display based on the user information.
In summary, in the technical solution provided by the embodiments of the present application, the computer device obtains the corresponding target pushed document from the target text description information, which avoids the low generation efficiency of manually written pushed documents and improves the efficiency of obtaining the target pushed document. Moreover, after obtaining the candidate pushed documents corresponding to the target text description information, the computer device evaluates the accuracy of each candidate pushed document and selects the one with the highest accuracy as the target pushed document, which ensures the accuracy of the target pushed document.
Referring to fig. 16, a block diagram of a training apparatus for a document evaluation model according to an embodiment of the present application is shown. The apparatus has the function of implementing the training method of the document evaluation model described above, and the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be a computer device, or may be provided in a computer device. The apparatus 1600 may include: a sample acquisition module 1610, a sample expansion module 1620, and a model training module 1630.
The sample acquisition module 1610 is configured to acquire a candidate training sample set, where the candidate training sample set includes at least one positive sample and at least one negative sample; a positive sample is a pushed document whose click rate is greater than a threshold, and a negative sample is a pushed document whose click rate is less than the threshold.
The sample expansion module 1620 is configured to expand the candidate training sample set based on the feature words corresponding to each pushed document in the candidate training sample set, to obtain a target training sample set.
The model training module 1630 is configured to train the document evaluation model with the target training sample set.
In an exemplary embodiment, the sample expansion module 1620 includes a feature extraction unit, a document acquisition unit, a document labeling unit, and a sample acquisition unit.
The feature extraction unit is configured to extract feature words from each pushed document in the candidate training sample set.
The document acquisition unit is configured to select, from a pushed document library and based on the feature words, other pushed documents whose similarity to the feature words is greater than a threshold.
The document labeling unit is configured to label as positive samples the other pushed documents whose similarity to the feature words of a positive sample is greater than the threshold, and to label as negative samples the other pushed documents whose similarity to the feature words of a negative sample is greater than the threshold, to obtain an expanded sample set.
The sample acquisition unit is configured to add the expanded sample set to the candidate training sample set, to obtain the target training sample set.
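The expansion step above can be sketched as follows. The patent does not fix a particular similarity measure or threshold; Jaccard similarity over feature-word sets and a 0.5 threshold are assumptions chosen for illustration, as are the example feature-word sets.

```python
def jaccard(a, b):
    """One simple similarity between two feature-word sets (illustrative;
    the patent does not prescribe a specific measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def expand_samples(labeled, library, threshold=0.5):
    """Sketch of the sample expansion: each unlabeled document from the
    pushed-document library inherits the label of the first labeled sample
    whose feature words it resembles closely enough."""
    expanded = []
    for doc_words in library:
        for sample_words, label in labeled:
            if jaccard(doc_words, sample_words) > threshold:
                expanded.append((doc_words, label))
                break
    return expanded

labeled = [({"fresh", "seafood", "patio"}, 1),   # positive: click rate above threshold
           ({"closed", "queue", "wait"}, 0)]     # negative: click rate below threshold
library = [{"fresh", "seafood", "patio", "river"},
           {"closed", "queue", "wait", "long"},
           {"sushi"}]
new_samples = expand_samples(labeled, library)
```

The first two library documents are similar enough to inherit the positive and negative labels respectively; the third matches nothing and is left out of the expanded sample set.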
In an exemplary embodiment, the feature extraction unit is configured to: remove symbols from each pushed document in the candidate training sample set to obtain processed pushed documents; perform word segmentation on each processed pushed document to obtain the participles corresponding to each processed pushed document; remove target participles from these participles to obtain the keywords corresponding to each processed pushed document, where the target participles include at least one of: stop words, participles whose frequency of occurrence in the pushed documents is greater than a first target value, and participles whose frequency of occurrence in the pushed documents is less than a second target value, the first target value being greater than the second target value; and determine the keywords and the synonyms corresponding to the keywords as the feature words.
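The keyword selection part of this unit can be sketched as below. The stop-word list and the two frequency thresholds (expressed here as document-frequency ratios) are illustrative assumptions, and the final synonym-expansion step is omitted since it would require a synonym lexicon the patent does not specify.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "with"}  # illustrative stop list

def extract_feature_words(documents, max_freq=0.9, min_freq=0.1):
    """Sketch of the feature extraction unit: strip symbols, tokenize, then
    drop stop words and words whose document frequency exceeds a first
    target value or falls below a second (max_freq > min_freq).
    Synonym expansion of the surviving keywords is omitted here."""
    tokenized = [re.sub(r"[^\w\s]", "", d).lower().split() for d in documents]
    n_docs = len(tokenized)
    # document frequency: in how many documents each word appears
    doc_freq = Counter(w for doc in tokenized for w in set(doc))
    features = []
    for doc in tokenized:
        kept = [w for w in doc
                if w not in STOP_WORDS
                and min_freq <= doc_freq[w] / n_docs <= max_freq]
        features.append(kept)
    return features

feats = extract_feature_words([
    "Fresh seafood with a river view!",
    "Fresh pastries and coffee.",
])
```

"fresh" appears in every document (frequency above the first target value) and is removed along with the stop words, leaving only the discriminative keywords.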
In an exemplary embodiment, the model training module 1630 is configured to: remove symbols from the pushed documents in the target training sample set to obtain a first processed target training sample set; perform quality detection on the first processed target training sample set and remove the pushed documents whose quality does not meet a target condition, to obtain a second processed target training sample set, where the target condition includes at least one of: positive sentiment, elegant wording, fluent text, rich content, normal typesetting, containing no negative words, and character length less than a target value; and train the document evaluation model with the second processed target training sample set.
In an exemplary embodiment, as shown in fig. 17, the apparatus 1600 further includes: a candidate update module 1640 and a sample update module 1650.
The candidate update module 1640 is configured to update the positive samples and the negative samples in the candidate training sample set as the click rates are updated, to obtain an updated candidate training sample set; a positive sample is a pushed document whose click rate is greater than the threshold, and a negative sample is a pushed document whose click rate is less than the threshold.
The sample update module 1650 is configured to obtain an updated expanded sample set according to the updated candidate training sample set.
The model training module 1630 is further configured to perform the next round of training of the document evaluation model with the updated candidate training sample set and the updated expanded sample set.
In summary, in the technical solution provided by the embodiments of the present application, the target training sample set is obtained by expanding the candidate training sample set, and the document evaluation model is trained on this target training sample set, which effectively alleviates the scarcity of model training samples and ensures the reliability and accuracy of the training of the document evaluation model.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for their specific implementation, reference is made to the method embodiments, and details are not repeated here.
Referring to FIG. 18, a block diagram of a computer device 1800 according to an embodiment of the present application is shown. The computer device may be the computer device 20 shown in fig. 1, and the computer device 20 may implement the document generation method or the training method of the document evaluation model described above. Specifically:
the computer device 1800 includes a Processing Unit (e.g., a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), etc.) 1801, a system Memory 1804 including a RAM (Random Access Memory) 1802 and a ROM (Read Only Memory) 1803, and a system bus 1805 connecting the system Memory 1804 and the Central Processing Unit 1801. The computer device 1800 also includes a basic I/O system (Input/Output) 1806 to facilitate information transfer between various devices within the computer device, and a mass storage device 1807 for storing an operating system 1813, application programs 1814, and other program modules 1815.
The basic input/output system 1806 includes a display 1808 for displaying information and an input device 1809 such as a mouse, keyboard, etc. for user input of information. The display 1808 and the input device 1809 are connected to the central processing unit 1801 via an input/output controller 1810 connected to the system bus 1805. The basic input/output system 1806 may also include an input/output controller 1810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1807 is connected to the central processing unit 1801 through a mass storage controller (not shown) connected to the system bus 1805. The mass storage device 1807 and its associated computer-readable media provide non-volatile storage for the computer device 1800. That is, the mass storage device 1807 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM (Compact disk Read-Only Memory) drive.
Without loss of generality, the computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1804 and mass storage device 1807 described above may be collectively referred to as memory.
The computer device 1800 may also operate as a remote computer connected to a network, such as the internet, according to embodiments of the present application. That is, the computer device 1800 may be connected to the network 1812 through the network interface unit 1811 that is coupled to the system bus 1805, or the network interface unit 1811 may be used to connect to other types of networks or remote computer systems (not shown).
The memory stores a computer program that is loaded and executed by the processor to implement the above-described document generation method on the computer device side, or the above-described training method of the document evaluation model on the configuration terminal side.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer device, implements the above-described document generation method; alternatively, the computer program is configured to be executed by a processor of the terminal to implement the training method of the document evaluation model.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
In an exemplary embodiment, there is also provided a computer program product for performing the above-mentioned document generation method when the computer program product is run on a computer device; when the computer program product runs on a configuration terminal, the training method of the file evaluation model is executed.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the order shown in the figure, which is not limited by the embodiment of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. A method for generating a document, the method comprising:
acquiring target text description information;
acquiring a target candidate document set corresponding to the target text description information, wherein the target candidate document set comprises at least one candidate pushed document;
acquiring, based on the target candidate document set, probability information of each candidate pushed document in the target candidate document set, wherein the probability information represents the accuracy with which the candidate pushed document corresponds to the target text description information; and
determining, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
2. The method of claim 1, wherein the acquiring of the target candidate document set corresponding to the target text description information comprises:
performing syntax preprocessing on the target text description information and extracting a first candidate document set from the target text description information, wherein the first candidate document set comprises at least one candidate pushed document; and
selecting, from the first candidate document set, candidate pushed documents whose quality meets a target condition, to obtain the target candidate document set.
3. The method of claim 2, wherein the performing syntax preprocessing on the target text description information and extracting a first candidate document set comprises:
removing symbols from the target text description information to obtain first processing information;
splicing the short phrases in the first processing information to obtain second processing information, wherein a short phrase is a segment separated by a symbol in the target text description information, and the second processing information comprises one or more long sentences; and
removing, based on a grammatical logic rule, the long sentences that do not satisfy the grammatical logic rule from the second processing information, to obtain the first candidate document set.
4. The method of claim 2, wherein the selecting, from the first candidate document set, candidate pushed documents whose quality meets a target condition comprises at least one of:
detecting the first candidate document set and removing from it the pushed documents that do not meet a target requirement, to obtain the target candidate document set, wherein the target requirement comprises at least one of: positive sentiment, elegant wording, fluent text, rich content, and normal typesetting;
performing word-segmentation filtering on the first candidate document set and removing from it the pushed documents containing negative words, to obtain the target candidate document set;
performing sentence filtering on the first candidate document set and removing from it the pushed documents whose character length is greater than a target value, to obtain the target candidate document set; and
performing character padding or character truncation on the candidate pushed documents in the first candidate document set based on the target value for character length, to obtain the target candidate document set.
5. The method of claim 1, wherein the acquiring, based on the target candidate document set, probability information of each candidate pushed document in the target candidate document set comprises:
inputting each candidate pushed document in the target candidate document set into a document evaluation model, wherein the document evaluation model is a bidirectional neural network model based on an attention mechanism; and
acquiring the probability information output by the document evaluation model.
6. The method of claim 5, wherein the document evaluation model comprises an input layer, a word embedding layer, a neural network layer, an attention mechanism layer, and an output layer; wherein
the input layer is configured to receive the candidate pushed document;
the word embedding layer is configured to obtain a word embedding corresponding to each participle of the candidate pushed document;
the neural network layer is configured to traverse the word embeddings bidirectionally to obtain at least one sentence vector, wherein the bidirectional traversal comprises a forward traversal in the input order of the participles corresponding to the word embeddings and a reverse traversal in the opposite order;
the attention mechanism layer is configured to weight each sentence vector based on the distribution of important content in the candidate pushed document; and
the output layer is configured to output the probability information of the candidate pushed document based on the weighted sentence vectors.
7. The method according to any one of claims 1 to 6, wherein after the determining, according to the probability information, the candidate pushed document with the highest accuracy as the target pushed document corresponding to the target text description information, the method further comprises:
acquiring, according to the target text description information, key information corresponding to the target pushed document; and
adding the key information to the target pushed document, to obtain a pushed document for display.
8. The method according to claim 7, wherein the acquiring, according to the target text description information, of the key information corresponding to the target pushed document comprises at least one of:
determining, according to the target text description information, the category to which the target text description information belongs, and acquiring a recommendation phrase corresponding to the target pushed document from a recommendation library of that category;
acquiring, according to the target text description information, text attribute information corresponding to the target pushed document, wherein the text attribute information indicates content information of the target text description information; and
acquiring, according to the target text description information, text statistical information corresponding to the target pushed document, wherein the text statistical information indicates statistics of user operations on the target text description information.
9. The method of claim 7, wherein after the adding the key information to the target pushed document to obtain a pushed document for display, the method further comprises:
acquiring user information of the user account at which the pushed document for display is directed; and
adding a title to the pushed document for display based on the user information.
10. A method for training a document evaluation model, the method comprising:
acquiring a candidate training sample set, wherein the candidate training sample set comprises at least one positive sample and at least one negative sample, a positive sample being a pushed document whose click rate is greater than a threshold and a negative sample being a pushed document whose click rate is less than the threshold;
expanding the candidate training sample set based on the feature words corresponding to each pushed document in the candidate training sample set, to obtain a target training sample set; and
training the document evaluation model with the target training sample set.
11. The method according to claim 10, wherein the expanding the candidate training sample set based on the feature words corresponding to the respective pushed documents in the candidate training sample set comprises:
extracting feature words from each pushed document in the candidate training sample set;
selecting, from a pushed document library and based on the feature words, other pushed documents whose similarity to the feature words is greater than a threshold;
labeling as positive samples the other pushed documents whose similarity to the feature words of a positive sample is greater than the threshold, and labeling as negative samples the other pushed documents whose similarity to the feature words of a negative sample is greater than the threshold, to obtain an expanded sample set; and
adding the expanded sample set to the candidate training sample set, to obtain the target training sample set.
12. The method of claim 11, wherein the extracting feature words from the respective pushed documents of the candidate training sample set comprises:
removing symbols from each pushed document in the candidate training sample set to obtain processed pushed documents;
performing word segmentation on each processed pushed document to obtain the participles corresponding to each processed pushed document;
removing target participles from the participles corresponding to each processed pushed document to obtain the keywords corresponding to each processed pushed document, wherein the target participles comprise at least one of: stop words, participles whose frequency of occurrence in the pushed documents is greater than a first target value, and participles whose frequency of occurrence in the pushed documents is less than a second target value, the first target value being greater than the second target value; and
determining the keywords and the synonyms corresponding to the keywords as the feature words.
13. The method of any one of claims 10 to 12, wherein the training the document evaluation model with the target training sample set comprises:
removing symbols from the pushed documents in the target training sample set to obtain a first processed target training sample set;
performing quality detection on the first processed target training sample set and removing the pushed documents whose quality does not meet a target condition, to obtain a second processed target training sample set, wherein the target condition comprises at least one of: positive sentiment, elegant wording, fluent text, rich content, normal typesetting, containing no negative words, and character length less than a target value; and
training the document evaluation model with the second processed target training sample set.
14. The method according to any one of claims 10 to 12, further comprising:
updating the positive samples and the negative samples in the candidate training sample set as the click rates are updated, to obtain an updated candidate training sample set, a positive sample being a pushed document whose click rate is greater than the threshold and a negative sample being a pushed document whose click rate is less than the threshold;
acquiring an updated expanded sample set according to the updated candidate training sample set; and
performing the next round of training of the document evaluation model with the updated candidate training sample set and the updated expanded sample set.
15. A document generation apparatus, comprising:
an information acquisition module, configured to acquire target text description information;
a candidate acquisition module, configured to acquire a target candidate document set corresponding to the target text description information, wherein the target candidate document set comprises at least one candidate pushed document;
a probability acquisition module, configured to acquire, based on the target candidate document set, probability information of each candidate pushed document in the target candidate document set, wherein the probability information represents the accuracy with which the candidate pushed document corresponds to the target text description information; and
a document determination module, configured to determine, according to the probability information, the candidate pushed document with the highest accuracy in the target candidate document set as the target pushed document corresponding to the target text description information.
16. An apparatus for training a document evaluation model, the apparatus comprising:
a sample acquisition module, configured to acquire a candidate training sample set, wherein the candidate training sample set comprises at least one positive sample and at least one negative sample, a positive sample being a pushed document whose click rate is greater than a threshold and a negative sample being a pushed document whose click rate is less than the threshold;
a sample expansion module, configured to expand the candidate training sample set based on the feature words corresponding to each pushed document in the candidate training sample set, to obtain a target training sample set; and
a model training module, configured to train the document evaluation model with the target training sample set.
17. A computer device comprising a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the document generation method of any one of claims 1 to 9 or the training method of the document evaluation model of any one of claims 10 to 14.
18. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the document generation method of any one of claims 1 to 9, or the training method of the document evaluation model of any one of claims 10 to 14.
CN202011210227.0A 2020-11-03 2020-11-03 Document generation method, training method, device and equipment of document evaluation model Active CN112232067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011210227.0A CN112232067B (en) 2020-11-03 2020-11-03 Document generation method, training method, device and equipment of document evaluation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011210227.0A CN112232067B (en) 2020-11-03 2020-11-03 Document generation method, training method, device and equipment of document evaluation model

Publications (2)

Publication Number Publication Date
CN112232067A true CN112232067A (en) 2021-01-15
CN112232067B CN112232067B (en) 2024-09-27

Family

ID=74122739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011210227.0A Active CN112232067B (en) 2020-11-03 2020-11-03 Document generation method, training method, device and equipment of document evaluation model

Country Status (1)

Country Link
CN (1) CN112232067B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486260A (en) * 2021-07-15 2021-10-08 北京三快在线科技有限公司 Interactive information generation method and device, computer equipment and storage medium
CN113657113A (en) * 2021-08-24 2021-11-16 北京字跳网络技术有限公司 Text processing method and device and electronic equipment
CN113657113B (en) * 2021-08-24 2024-08-02 北京字跳网络技术有限公司 Text processing method and device and electronic equipment
CN114861621A (en) * 2022-04-21 2022-08-05 阿里巴巴(中国)有限公司 Object description scheme generation method, device, system and computer program product
CN118095293A (en) * 2024-04-24 2024-05-28 卓世未来(天津)科技有限公司 Text extension method and system based on large language model
WO2024217011A1 (en) * 2023-04-19 2024-10-24 北京字跳网络技术有限公司 Video generation method and apparatus, device, storage medium, and program product

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086470A1 (en) * 2016-11-10 2018-05-17 腾讯科技(深圳)有限公司 Keyword extraction method and device, and server
US20190155877A1 (en) * 2017-11-17 2019-05-23 Adobe Inc. Generating a Targeted Summary of Textual Content Tuned to a Target Audience Vocabulary
CN110427617A (en) * 2019-07-22 2019-11-08 阿里巴巴集团控股有限公司 The generation method and device of pushed information
CN110795657A (en) * 2019-09-25 2020-02-14 腾讯科技(深圳)有限公司 Article pushing and model training method and device, storage medium and computer equipment
CN110852793A (en) * 2019-10-28 2020-02-28 北京深演智能科技股份有限公司 Document recommendation method and device and electronic equipment
WO2020107878A1 (en) * 2018-11-30 2020-06-04 平安科技(深圳)有限公司 Method and apparatus for generating text summary, computer device and storage medium
CN111523326A (en) * 2020-04-23 2020-08-11 北京百度网讯科技有限公司 Entity chain finger method, device, equipment and storage medium
CN111581923A (en) * 2020-04-29 2020-08-25 北京字节跳动网络技术有限公司 Method, device and equipment for generating file and computer readable storage medium
US20210174024A1 (en) * 2018-12-07 2021-06-10 Tencent Technology (Shenzhen) Company Limited Method for training keyword extraction model, keyword extraction method, and computer device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
胡宝顺; 王大玲; 于戈; 马婷: "Answer Extraction Algorithm Based on Syntactic Structure Feature Analysis and Classification Technology", 计算机学报 (Chinese Journal of Computers), no. 04, 15 April 2008 (2008-04-15) *
詹飞; 朱艳辉; 梁文桐; 冀相冰: "Entity Linking Method Based on BERT and TextRank Keyword Extraction", 湖南工业大学学报 (Journal of Hunan University of Technology), no. 04, 15 July 2020 (2020-07-15) *
郑雄风; 丁立新; 万润泽: "Hierarchical BGRU Model Based on User and Product Attention Mechanisms", 计算机工程与应用 (Computer Engineering and Applications), no. 11, 23 May 2017 (2017-05-23) *
饶竹一; 张云翔: "Multi-Label Text Classification Model Based on BiGRU and Attention Mechanism", 现代计算机 (Modern Computer), no. 01, 5 January 2020 (2020-01-05) *


Also Published As

Publication number Publication date
CN112232067B (en) 2024-09-27

Similar Documents

Publication Publication Date Title
Kumar et al. Sentiment analysis of multimodal twitter data
Mandloi et al. Twitter sentiments analysis using machine learninig methods
CN112232067B (en) Document generation method, training method, device and equipment of document evaluation model
US11847414B2 (en) Robustness to adversarial behavior for text classification models
CN109657054B (en) Abstract generation method, device, server and storage medium
CN110325986B (en) Article processing method, article processing device, server and storage medium
US10380249B2 (en) Predicting future trending topics
Ritter et al. Open domain event extraction from twitter
US20200134398A1 (en) Determining intent from multimodal content embedded in a common geometric space
CN110309114B (en) Method and device for processing media information, storage medium and electronic device
US20210209289A1 (en) Method and apparatus for generating customized content based on user intent
CN107798622B (en) Method and device for identifying user intention
Setlur et al. Automatic generation of semantic icon encodings for visualizations
Zhang et al. Image clustering: An unsupervised approach to categorize visual data in social science research
US11653071B2 (en) Responsive video content alteration
CN112257452A (en) Emotion recognition model training method, device, equipment and storage medium
Saxena et al. Analysing customers reactions on social media promotional campaigns: A text-mining approach
CN109933793B (en) Text polarity identification method, device and equipment and readable storage medium
Tang et al. Emotion modeling from writer/reader perspectives using a microblog dataset
US20150235243A1 (en) Engagement tool for a website
CN114255067A (en) Data pricing method and device, electronic equipment and storage medium
CN113722487A (en) User emotion analysis method, device and equipment and storage medium
WO2010132062A1 (en) System and methods for sentiment analysis
CN111274384B (en) Text labeling method, equipment and computer storage medium thereof
Yasukochi et al. Analyzing font style usage and contextual factors in real images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant