CN112560458A - Article title generation method based on end-to-end deep learning model - Google Patents
- Publication number
- CN112560458A (application number CN202011451526.3A)
- Authority
- CN
- China
- Prior art keywords
- article
- title
- intention
- text
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F40/258 — Handling natural language data; Natural language analysis; Heading extraction; Automatic titling; Numbering
- G06F16/345 — Information retrieval of unstructured textual data; Browsing; Summarisation for human users
- G06F16/355 — Information retrieval of unstructured textual data; Clustering; Class or cluster creation or modification
- G06F40/279, G06F40/289 — Handling natural language data; Natural language analysis; Recognition of textual entities; Phrasal analysis, e.g. finite state techniques or chunking
Abstract
The invention discloses an article title generation method based on an end-to-end deep learning model. The method classifies articles, identifies title intentions, and supplies the article category and a title intention combination at the input end; this additional information improves the quality of the generated titles, allows the content form of the title to be controlled manually, and yields greater diversity. A popular title intention combination can be selected, making the generated title more attractive; because the content of the generated title depends on the input intention combination, different intention combinations can be given to control the title's content form, which is especially useful when the title needs to highlight particular aspects of the article's content.
Description
Technical Field
The invention relates in particular to an article title generation method based on an end-to-end deep learning model.
Background
For the automatic generation of article titles in a vertical domain, the following solutions are mainly available at present:
(1) Searching the web for vertical-domain article titles according to the user's input.
(2) Collecting vertical-domain article titles, summarizing content rules from them, manually writing title templates, and filling the templates with an article's specific information to generate its title.
The prior art has the following disadvantages:
(1) With these search- and template-based methods, the generated title is only weakly associated with the article body, which can leave readers with the jarring impression that the title does not match the text.
(2) The generated titles lack diversity, and their content form cannot be controlled manually.
Disclosure of Invention
In view of the above situation, in order to overcome the defects of the prior art, the invention provides an article title generation method based on an end-to-end deep learning model.
In order to achieve the purpose, the invention provides the following technical scheme:
The article title generation method based on the end-to-end deep learning model comprises the following steps:
(1) Input the article body into an article classification model to obtain the article category; split the article into short segments and predict the article intentions with an intention classification model; according to the article category and article intentions, preferentially select, from the popular title intention combinations under that article category, a title intention combination composed of the article intentions.
(2) Segment the article body into words and perform abstract selection to obtain the body abstract.
(3) Splice the body abstract, the article category, and the title intention combination into the input data, and feed it into the end-to-end title generation model to obtain the article title.
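The three steps above can be sketched as a small pipeline. This is a minimal illustration, not the patent's implementation: every function name below (`classify_article`, `predict_intents`, `select_combo`, `summarize`, `title_model`) is a hypothetical placeholder for the models described in the rest of the disclosure.

```python
def generate_article_title(body, classify_article, predict_intents,
                           select_combo, summarize, title_model):
    """Hypothetical sketch of the three-step title generation flow."""
    # Step (1): article category, article intentions, title intention combination
    category = classify_article(body)
    intents = predict_intents(body)
    combo = select_combo(category, intents)
    # Step (2): word segmentation and abstract selection
    abstract = summarize(body)
    # Step (3): splice abstract, category, and intention combination with spaces
    model_input = " ".join([category, " ".join(combo), abstract])
    return title_model(model_input)
```

At inference time, the callables would be the trained BERT classifiers and the Transformer generation model described in the embodiments below.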
Further, the construction of the end-to-end title generation model comprises data set establishment and model training, wherein the data set establishment and model training comprises the following steps:
(1.1) collecting article corpora in the vertical field;
(1.2) A vertical-domain expert classifies the articles in the corpus obtained in step (1.1) to obtain article categories. A certain number of articles are selected to form an article data set {A1, A2, A3, …, Ai, …, An}, where n is the number of articles, Ai denotes the i-th article, and 1 ≤ i ≤ n. The articles are classified and labeled to obtain an article category data set {C1, C2, C3, …, Ci, …, Cn}, where Ci is the article category of Ai. A BERT model is trained on this data to obtain the article classification model.
(1.3) Article titles are split into short segments at punctuation marks to obtain a segment title data set {T1, T2, T3, …, Tt, …, Tm}, where Tt denotes the t-th segment title, m is the number of segment titles, and 1 ≤ t ≤ m. The intentions of each segment are classified and labeled to obtain a title intention data set {M1, M2, M3, …, Mt, …, Mm}, where Mt = {I1, I2, I3, …, Ikt} is the set of intentions corresponding to the t-th segment title and kt, a non-zero natural number, is the number of title intentions of the t-th segment. A BERT model is trained on this data to obtain the intention classification model.
(1.4) The full article corpus is processed: the models obtained in steps (1.2) and (1.3) predict the article category and title intention combination; the article body and article title are segmented into words and a body abstract is selected; the article category, title intention combination, and body abstract are spliced as input, the segmented title is taken as the prediction target, and a Transformer model is trained to obtain the end-to-end title generation model.
Further, according to the article categories and title intention combinations obtained in step (1.4), the title intention combinations are grouped by article category and ranked by the articles' reading volume on the network, yielding the popular title intention combinations under each article category.
Further, step (1) is specifically: input the article body into the article classification model of step (1.2) to predict the article category; split the article body into short segments at punctuation marks and predict the article intentions with the intention classification model of step (1.3); then, according to the article category and article intentions, preferentially select, from the popular title intention combinations under the corresponding article category, a title intention combination composed of the article intentions. That is: compute, for each popular title intention combination, the proportion of its intentions that belong to the article intentions, and select the combination with the highest proportion as the title intention combination.
Further, step (2) is specifically: segment the article body with the SentencePiece word segmentation tool and count the total number of words; if the total is no more than 500, take the whole body as the abstract; if it exceeds 500, take the first 400 words and the last 100 words of the article to form the body abstract.
Further, step (3) is specifically: splice the article category, title intention combination, and body abstract with spaces, and input the result into the end-to-end title generation model to generate the corresponding article title.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor being capable of implementing the steps of the article title generation method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, is capable of implementing the steps in the article title generation method described above.
The invention has the following beneficial effects:
(1) The invention uses deep learning to classify articles and identify title intentions, supplying the article category and a title intention combination at the input end; this additional information improves the quality of title generation, allows manual control of the title's content form, and yields greater diversity.
(2) A popular title intention combination can be selected, making the generated title content more attractive; because the content of the generated title depends on the input intention combination, the title's content form can be controlled by giving different intention combinations, which is especially useful when the title needs to highlight particular aspects of the article's content.
(3) The titles obtained by the method show better diversity, and the method can be transferred conveniently and quickly to different vertical domains.
Drawings
FIG. 1 is a flow diagram of title generation model training.
Fig. 2 is a title generation flow chart.
Fig. 3 is a schematic diagram of a computer device.
Detailed Description
The technical solutions of the present invention are further described below for the automotive field with reference to the accompanying drawings. It should be noted that the specific embodiments only describe the invention in detail and should not be construed as limiting it.
Example 1
As shown in fig. 2, the article title generation method based on the end-to-end deep learning model includes the following steps:
(1) Input the article body into an article classification model to obtain the article category; according to the article category and the intentions contained in the article, preferentially select, from the popular title intention combinations under that category, a title intention combination composed of the article intentions.
(2) Segment the article body into words and perform abstract selection to obtain the body abstract.
(3) Splice the body abstract, the article category, and the title intention combination into the input data, and feed it into the end-to-end title generation model to obtain the article title.
In some preferred modes, the construction of the end-to-end title generation model includes data set establishment and model training, as shown in fig. 1, the data set establishment and model training includes the following steps:
(1.1) Collect an article corpus in the vertical domain; in this embodiment, the vertical domain is the automotive field.
(1.2) An automotive-domain expert inductively analyzes the automobile articles in the corpus obtained in step (1.1), yielding 21 article categories such as single-vehicle shopping guide and two-vehicle comparison; the characteristics of each category are summarized into an article classification labeling guide used to train data annotators on the characteristics of each category. Several thousand articles are selected to form an article data set {A1, A2, A3, …, Ai, …, An}, where n is the number of articles, Ai denotes the i-th article, An denotes the n-th article, and 1 ≤ i ≤ n. The annotators judge each automobile article's category from the characteristics its content satisfies, producing an article category data set {C1, C2, C3, …, Ci, …, Cn}, where Ci is the article category of Ai. A BERT model is trained on this data set to obtain the article classification model; the BERT model here is the conventional prior-art model, which the invention does not modify.
(1.3) An automotive-domain expert inductively analyzes automobile article titles, yielding 26 title intention categories such as power, price, and appearance; the characteristics of each intention category are summarized into an intention classification labeling guide that describes the content form or keywords of each category, with examples attached so that annotators can learn from the guide. The titles of several thousand articles are selected and split into short segments at punctuation marks, giving a segment title data set {T1, T2, T3, …, Tt, …, Tm}, where Tt denotes the t-th segment title, Tm denotes the m-th segment title, m is the number of segment titles, and 1 ≤ t ≤ m. The annotators classify the intentions of each segment title from its content; each segment may correspond to one or more intentions, giving a title intention data set {M1, M2, M3, …, Mt, …, Mm}, where Mt = {I1, I2, I3, …, Ikt} is the set of intentions corresponding to the t-th segment title and kt, a non-zero natural number, is the number of title intentions of the t-th segment.
A BERT model is trained on this data set to obtain the intention classification model; again, this is the conventional prior-art BERT model, which the invention does not modify.
(1.4) The full article corpus is processed: the models obtained in steps (1.2) and (1.3) predict the article category a and the title intention combination b. The article body and title are segmented with the SentencePiece word segmentation tool; after segmenting the body, the total number of words is counted, and if it is no more than 500 the whole body is taken as the abstract c, while if it exceeds 500 the first 400 words and the last 100 words of the article form the body abstract c. The article category a, title intention combination b, and body abstract c are spliced with spaces as input, the segmented title d is taken as the prediction target, and a Transformer model is trained to obtain the end-to-end title generation model; the Transformer model here is known in the art and is not modified by the invention.
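The assembly of one training pair in step (1.4) can be sketched as follows. The function and its toy arguments are hypothetical; in practice the inputs would be the predicted category a, the predicted intention combination b, and the SentencePiece-segmented abstract c and title d.

```python
def build_training_pair(category, intent_combo, abstract_tokens, title_tokens):
    """Splice category a, intention combination b, and abstract c with
    spaces as the source sequence; the segmented title d is the target."""
    source = " ".join([category] + list(intent_combo) + list(abstract_tokens))
    target = " ".join(title_tokens)
    return source, target
```

Each (source, target) pair then becomes one training example for the Transformer generation model.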
In some preferred modes, according to the article categories and title intention combinations obtained in step (1.4), the title intention combinations are grouped by article category and ranked by the articles' reading volume on the network, yielding the popular title intention combinations under each article category. Because each article has one article category and one title intention combination, the two are associated with each other, so the title intention combinations can be grouped by article category: once an article's category is determined, the category corresponding to its title intention combination is determined. A popular title intention combination is the intention combination corresponding to the title of a popular article.
In some preferred modes, step (1) is specifically: input the article body into the article classification model of step (1.2) to predict the article category; for example, classifying an article may yield the category "single-vehicle shopping guide".
Split the article body into short segments at punctuation marks and predict the intentions contained in the article with the intention classification model of step (1.3); suppose the article's intentions are {power, price, appearance}. According to the article category and the intentions contained in the article, look up the popular title intention combinations corresponding to that category and preferentially select a title intention combination composed of the article intentions. That is: compute, for each popular title intention combination, the proportion of its intentions that belong to the article intentions, and select the combination with the highest proportion as the title intention combination.
For example, if the popular title intention combinations under "single-vehicle shopping guide" are {"power, price", "price, appearance", "power, interior"}, the intention proportion of each combination is computed against the article intentions: "power, price" and "price, appearance" each score 100%, while "power, interior" scores only 50% because "interior" is not among the article's intentions, so "power, price" and "price, appearance" are selected.
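The proportion-based selection in this example can be written out as a short function. This is an illustrative reading of the description with hypothetical names, not code from the disclosure.

```python
def select_title_intent_combos(popular_combos, article_intents):
    """Return the popular combinations whose intents overlap the
    article's intents in the highest proportion."""
    intents = set(article_intents)
    # proportion of each combination's intents found among the article intents
    scored = [(sum(i in intents for i in combo) / len(combo), combo)
              for combo in popular_combos]
    best = max(score for score, _ in scored)
    return [combo for score, combo in scored if score == best]

popular = [("power", "price"), ("price", "appearance"), ("power", "interior")]
chosen = select_title_intent_combos(popular, ["power", "price", "appearance"])
# "power, price" and "price, appearance" score 100%; "power, interior" scores 50%
```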
Although the intention classification model is trained on title intention data, titles and body text do not differ greatly in content, so the intentions contained in an article can be obtained by applying the intention classification model to the article body.
In some preferred modes, step (2) is specifically: segment the article body with the SentencePiece word segmentation tool and count the total number of words; if the total is no more than 500, take the whole body as the abstract; if it exceeds 500, take the first 400 words and the last 100 words of the article to form the body abstract.
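The 500/400/100 word rule above can be sketched as follows; a plain whitespace split stands in for the SentencePiece tokenizer, which in practice requires a trained segmentation model.

```python
def select_abstract(tokens, limit=500, head=400, tail=100):
    """Whole body if it has no more than `limit` words; otherwise the
    first `head` words plus the last `tail` words."""
    if len(tokens) <= limit:
        return tokens
    return tokens[:head] + tokens[-tail:]

short = "a short article".split()       # stand-in for SentencePiece output
long_doc = [str(i) for i in range(600)]
abstract = select_abstract(long_doc)    # 400 head words + 100 tail words
```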
In some preferred modes, step (3) is specifically: splice the article category, title intention combination, and body abstract with spaces, and input the result into the end-to-end title generation model to generate the corresponding article title, whose content corresponds to the title intention combination.
Example 2, see figure 3.
In the present embodiment, there is provided a computer device 100, which includes a memory 102, a processor 101, and a computer program 103 stored on the memory 102 and operable on the processor 101, and the processor 101, when executing the computer program 103, can implement the steps in the article title generation method provided in embodiment 1.
Example 3
In the present embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, is capable of implementing the steps in the article title generation method provided in the above-described embodiments.
In this embodiment, the computer program may be the computer program in embodiment 2.
In this embodiment, the computer-readable storage medium can be executed by the computer apparatus in embodiment 2.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the related hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The features of the above-mentioned embodiments may be arbitrarily combined, and for the sake of brevity, all possible combinations of the above-mentioned embodiments are not described, but should be construed as being within the scope of the present specification as long as there is no contradiction between the combinations of the features.
The above-mentioned embodiments express only several implementations of the present invention, and while their description is relatively specific and detailed, it should not be construed as limiting the scope of protection. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the appended claims.
Claims (8)
1. The article title generation method based on the end-to-end deep learning model is characterized by comprising the following steps of:
(1) inputting the article body into an article classification model to obtain the article category; splitting the article into short segments and predicting the article intentions with an intention classification model; and, according to the article category and article intentions, preferentially selecting, from the popular title intention combinations under the article category, a title intention combination composed of the article intentions;
(2) segmenting the article body into words and performing abstract selection to obtain a body abstract;
(3) splicing the body abstract, the article category, and the title intention combination to obtain input data, and inputting the input data into an end-to-end title generation model to obtain the article title.
2. The article title generation method based on the end-to-end deep learning model as claimed in claim 1, wherein the construction of the end-to-end title generation model comprises data set establishment and model training, and the data set establishment and model training comprises the following steps:
(1.1) collecting article corpora in the vertical field;
(1.2) classifying the articles in the article corpus obtained in step (1.1) by a vertical-domain expert to obtain article categories; selecting a certain number of articles to obtain an article data set {A1, A2, A3, …, Ai, …, An}, where n is the number of articles, Ai denotes the i-th article, and 1 ≤ i ≤ n; classifying and labeling the articles to obtain an article category data set {C1, C2, C3, …, Ci, …, Cn}, where Ci is the article category of Ai; and training a BERT model to obtain the article classification model;
(1.3) splitting the article titles into short segments at punctuation marks to obtain a segment title data set {T1, T2, T3, …, Tt, …, Tm}, where Tt denotes the t-th segment title, m is the number of segment titles, and 1 ≤ t ≤ m; classifying and labeling the intentions of each segment to obtain a title intention data set {M1, M2, M3, …, Mt, …, Mm}, where Mt = {I1, I2, I3, …, Ikt} is the set of intentions corresponding to the t-th segment title and kt, a non-zero natural number, is the number of title intentions of the t-th segment; and training a BERT model to obtain the intention classification model;
(1.4) processing the full article corpus: predicting the article category and title intention combination with the models obtained in steps (1.2) and (1.3); segmenting the article body and article title into words and selecting a body abstract; splicing the article category, title intention combination, and body abstract as input, taking the segmented title as the prediction target, and training a Transformer model to obtain the end-to-end title generation model.
3. The article title generation method based on the end-to-end deep learning model as claimed in claim 2, wherein, according to the article categories and title intention combinations obtained in step (1.4), the title intention combinations are grouped by article category and ranked by the articles' reading volume on the network, yielding the popular title intention combinations under each article category.
4. The article title generation method based on the end-to-end deep learning model as claimed in claim 3, wherein step (1) is specifically: inputting the article body into the article classification model of step (1.2) to predict the article category; splitting the article body into short segments at punctuation marks and predicting the article intentions with the intention classification model of step (1.3); and, according to the article category and article intentions, preferentially selecting, from the popular title intention combinations under the corresponding article category, a title intention combination composed of the article intentions; that is: computing, for each popular title intention combination, the proportion of its intentions that belong to the article intentions, and selecting the combination with the highest proportion as the title intention combination.
5. The article title generation method based on the end-to-end deep learning model as claimed in claim 1, wherein step (2) is specifically: segmenting the article body with the SentencePiece word segmentation tool and counting the total number of words; if the total is no more than 500, taking the whole body as the abstract; if it exceeds 500, taking the first 400 words and the last 100 words of the article to form the body abstract.
6. The article title generation method based on the end-to-end deep learning model as claimed in claim 1, wherein step (3) is specifically: splicing the article category, title intention combination, and body abstract with spaces, and inputting the result into the end-to-end title generation model to generate the corresponding article title.
7. A computer apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor is capable of implementing the steps of the article title generation method of any one of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, is adapted to carry out the steps of the method for title generation of an article according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011451526.3A CN112560458A (en) | 2020-12-09 | 2020-12-09 | Article title generation method based on end-to-end deep learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112560458A true CN112560458A (en) | 2021-03-26 |
Family
ID=75061690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011451526.3A Pending CN112560458A (en) | 2020-12-09 | 2020-12-09 | Article title generation method based on end-to-end deep learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560458A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107977363A (en) * | 2017-12-20 | 2018-05-01 | 北京百度网讯科技有限公司 | Title generation method, device and electronic equipment |
US20180122369A1 (en) * | 2016-10-28 | 2018-05-03 | Fujitsu Limited | Information processing system, information processing apparatus, and information processing method |
CN108509417A (en) * | 2018-03-20 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Title generation method and equipment, storage medium, server |
CN109299477A (en) * | 2018-11-30 | 2019-02-01 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating text header |
CN110413768A (en) * | 2019-08-06 | 2019-11-05 | 成都信息工程大学 | A kind of title of article automatic generation method |
US20200097563A1 (en) * | 2018-09-21 | 2020-03-26 | Salesforce.Com, Inc. | Intent classification system |
CN111159332A (en) * | 2019-12-03 | 2020-05-15 | 厦门快商通科技股份有限公司 | Text multi-intention identification method based on bert |
CN111401044A (en) * | 2018-12-27 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Title generation method and device, terminal equipment and storage medium |
CN111930929A (en) * | 2020-07-09 | 2020-11-13 | 车智互联(北京)科技有限公司 | Article title generation method and device and computing equipment |
CN111931513A (en) * | 2020-07-08 | 2020-11-13 | 泰康保险集团股份有限公司 | Text intention identification method and device |
- 2020-12-09 CN CN202011451526.3A patent/CN112560458A/en active Pending
Non-Patent Citations (2)
Title |
---|
Li Zengjun: "A Brief Discussion of Article Titles", Heihe Education, no. 02, 15 April 2009 (2009-04-15), page 63 * |
Yang Xiaobo; Jiang Li: "Improving the Soul of a Paper: Constructing Accurate and Expressive Titles", Journal of Hubei University of Education, no. 02, 15 February 2018 (2018-02-15), pages 129-132 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109858041B (en) | Named entity recognition method combining semi-supervised learning with user-defined dictionary | |
CN112711660B (en) | Method for constructing text classification sample and method for training text classification model | |
CN106991085A (en) | The abbreviation generation method and device of a kind of entity | |
CN111783993A (en) | Intelligent labeling method and device, intelligent platform and storage medium | |
CN105630772B (en) | A kind of abstracting method of webpage comment content | |
CN112417132B (en) | New meaning identification method for screening negative samples by using guest information | |
CN114722805B (en) | Little sample emotion classification method based on size instructor knowledge distillation | |
CN115599901A (en) | Machine question-answering method, device, equipment and storage medium based on semantic prompt | |
CN113408287A (en) | Entity identification method and device, electronic equipment and storage medium | |
CN115630156A (en) | Mongolian emotion analysis method and system fusing Prompt and SRU | |
JP2020098592A (en) | Method, device and storage medium of extracting web page content | |
CN114722822A (en) | Named entity recognition method, device, equipment and computer readable storage medium | |
CN108228779B (en) | Score prediction method based on learning community conversation flow | |
CN112214597B (en) | Semi-supervised text classification method and system based on multi-granularity modeling | |
CN112560458A (en) | Article title generation method based on end-to-end deep learning model | |
CN114741512A (en) | Automatic text classification method and system | |
CN115906824A (en) | Text fine-grained emotion analysis method, system, medium and computing equipment | |
CN115526174A (en) | Deep learning model fusion method for finance and economics text emotional tendency classification | |
CN111274404B (en) | Small sample entity multi-field classification method based on man-machine cooperation | |
CN111164589A (en) | Emotion marking method, device and equipment of speaking content and storage medium | |
CN112668344A (en) | Complexity-controllable diversified problem generation method based on hybrid expert model | |
Mas-Candela et al. | Sequential next-symbol prediction for optical music recognition | |
CN116894427B (en) | Data classification method, server and storage medium for Chinese and English information fusion | |
CN114218923B (en) | Text abstract extraction method, device, equipment and storage medium | |
CN116484811B (en) | Text revising method and device for multiple editing intents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||