WO2021135091A1 - Method and device for generating target advertorial based on deep learning - Google Patents

Method and device for generating target advertorial based on deep learning

Info

Publication number
WO2021135091A1
WO2021135091A1 · PCT/CN2020/097007 · CN2020097007W
Authority
WO
WIPO (PCT)
Prior art keywords
target
title
word segmentation
titles
input
Prior art date
Application number
PCT/CN2020/097007
Other languages
English (en)
French (fr)
Inventor
朱景涛
沈艺
齐康
倪合强
梁诗雯
Original Assignee
苏宁易购集团股份有限公司
苏宁云计算有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏宁易购集团股份有限公司, 苏宁云计算有限公司
Priority to CA3166556A1 (en)
Publication of WO2021135091A1 (zh)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities

Definitions

  • The invention relates to the technical field of natural language processing, and in particular to a method and device for generating a target advertorial based on deep learning.
  • Marketing advertorials are often used when new products are promoted in the market.
  • A marketing advertorial usually consists of three parts: a title, a lead, and a marketing body.
  • The title describes the product being marketed in vivid, concise and engaging language.
  • The lead plays a guiding role: it steers the direction of consumption and leads into the marketing body, which introduces and recommends the product.
  • The embodiments of the present invention provide a method and device for generating target advertorials based on deep learning, to overcome problems in the prior art such as the low production efficiency of manually written advertorials and the fixed sentence patterns, dullness and insufficient diversity of template-generated advertorials.
  • The technical solution adopted by the present invention is as follows.
  • A method for generating target advertorials based on deep learning includes the following steps:
  • the target title, the target lead, and the target body are assembled to obtain multiple target advertorials;
  • the input information is input into the second generation model to generate at least one target body.
  • The method also includes a process of building a title library, including:
  • the second word segmentation result and the first keywords are input into a third generation model to obtain multiple new titles, and the title library is composed of the new titles.
  • The construction process of the title library further includes:
  • a third generation model is trained based on a preset algorithm.
  • The method further includes a construction process of the first generation model, including:
  • The title matching module is used to receive related information of the target object and match several adapted target titles from the title library according to the related information.
  • The titles in the title library are expanded from the collected titles through the third generation model.
  • The lead generation module is used to input the target title into the first generation model to generate at least one target lead.
  • A body generation module is configured to generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, input the input information into the second generation model, and generate at least one target body.
  • The information assembly module assembles the target title, the target lead, and the target body to obtain multiple target advertorials.
  • The body generation module includes:
  • a first word segmentation unit, configured to perform word segmentation on the related information and extract, from the obtained first word segmentation result, target segmented words that meet a preset condition;
  • a word recombination unit, used to recombine the target segmented words to obtain at least one piece of input information that conforms to a preset structure;
  • a body generation unit, used to input the input information into the second generation model to generate at least one target body.
  • The device further includes a first construction module, including:
  • a second word segmentation unit, used to perform word segmentation on several collected first sample titles to obtain the second word segmentation result;
  • a first extraction unit, configured to extract the first keywords from the first sample titles using a preset first keyword extraction method;
  • a title generation unit, configured to input the second word segmentation result and the first keywords into a third generation model to obtain multiple new titles, the title library being composed of the new titles.
  • The first construction module further includes:
  • a first intersection unit, configured to take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;
  • a first training unit, configured to take the data of the input data set as input and the target title as output, and train a third generation model based on a preset algorithm.
  • The device further includes a second construction module, including:
  • a third word segmentation unit, used to perform word segmentation on several collected second sample titles and the leads paired with them;
  • a second extraction unit, configured to extract the second keywords from the second sample titles using a preset second keyword extraction method;
  • a second intersection unit, used to take the intersection of the second keyword set and each segmented second sample title to obtain the target keywords;
  • a lead expansion unit, used to traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample title, and take the successfully matched leads as new leads of the current second sample title;
  • a second training unit, configured to take the second sample title as input, the lead corresponding to the second sample title and the new leads as output, and train a first generation model based on a preset algorithm.
  • The method and device for generating target advertorials based on deep learning receive related information of the target object and match several suitable target titles from the title library according to that information, the titles in the title library having been expanded from the collected titles through the third generation model; the target title is input into the first generation model to generate at least one target lead, and at least one piece of input information conforming to the preset structure is generated according to the related information and preset rules.
  • The input information is fed into the second generation model to generate at least one target body, and the target title, target lead and target body are assembled to obtain multiple target advertorials.
  • With deep learning and natural language processing technology, marketing advertorials can be generated automatically, intelligently and in a diversified way, saving operating staff input, improving production efficiency, effectively avoiding the low efficiency of manual writing and avoiding the dullness of template generation.
  • The method and device obtain the second word segmentation result by performing word segmentation on a number of collected first sample titles, extract the first keywords from the first sample titles with a preset first keyword extraction method, and input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, using the existing limited titles to expand the number of titles in the title library.
  • The method and device also perform word segmentation on a number of collected second sample titles and the leads paired with them, extract the second keywords from the second sample titles with a preset second keyword extraction method, take the intersection of the second keyword set and each segmented second sample title to obtain the target keywords, traverse each second sample title, match the target keywords against the fully segmented leads corresponding to it, and take the successfully matched leads as new leads of the current second sample title.
  • The second sample title is used as input, and the lead corresponding to it together with the new leads are used as output; the first generation model is trained on this basis with a preset algorithm, which expands the training data of the lead generation model and avoids problems such as over-fitting and poor generation quality caused by insufficient training data.
  • Fig. 1 is a flowchart showing a method for generating a target advertorial based on deep learning according to an exemplary embodiment;
  • Fig. 2 is a flowchart showing the generation of at least one piece of input information conforming to a preset structure according to related information and preset rules, the input of that information into a second generation model, and the generation of at least one target body, according to an exemplary embodiment;
  • Fig. 3 is a flowchart showing a construction process of a title library according to an exemplary embodiment;
  • Fig. 4 is a flowchart showing a construction process of a title library according to another exemplary embodiment;
  • Fig. 5 is a flowchart showing a construction process of a first generation model according to an exemplary embodiment;
  • Fig. 6 is a schematic structural diagram of a device for generating target advertorials based on deep learning according to an exemplary embodiment.
  • In the method for generating target advertorials based on deep learning provided by the present invention, adapted titles are first retrieved from the title library according to the related information of the target object; leads and marketing copy (i.e., the marketing body) are then generated in turn from the matched titles and the related information; finally the target title, lead and marketing body are assembled, and multiple marketing advertorials are output.
  • The Seq2Seq algorithm is used to generate the lead and the marketing body, which effectively avoids the low efficiency of manual writing while also avoiding the dullness of template generation.
  • Seq2Seq is a generative architecture composed of an encoder and a decoder. It generates an output sequence Y from an input sequence X and is widely used in tasks such as translation, automatic text summarization, and automated question answering. A minimal sketch of such a model is given below.
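The patent names Seq2Seq as an example of the preset algorithm; for concreteness, the sketch below shows a minimal GRU-based encoder-decoder of that general kind. The framework (PyTorch), the layer sizes, and the single-layer architecture are illustrative assumptions, not the applicant's actual model.

```python
# Minimal GRU-based Seq2Seq sketch (illustrative only; vocabulary sizes,
# hidden size and architecture are assumptions, not the patented model).
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                     # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))   # hidden: (1, batch, hid_dim)
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, hidden):             # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden         # logits over the vocabulary

class Seq2Seq(nn.Module):
    """Generates an output sequence Y (e.g. a lead) from an input sequence X (e.g. a title)."""
    def __init__(self, src_vocab, tgt_vocab):
        super().__init__()
        self.encoder = Encoder(src_vocab)
        self.decoder = Decoder(tgt_vocab)

    def forward(self, src, tgt):
        hidden = self.encoder(src)
        logits, _ = self.decoder(tgt, hidden)
        return logits
```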
  • Fig. 1 is a flowchart showing a method for generating a target advertorial based on deep learning according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps:
  • S1: Receive related information of the target object, and match several adapted target titles from the title library according to the related information; the titles in the title library are expanded from the collected titles through the third generation model.
  • A target advertorial generally contains three parts: a title, a lead and a body.
  • The target advertorial in the embodiments of the present invention includes marketing advertorials. Taking a marketing advertorial as an example, it consists of a title, a lead and a body.
  • The related information of the target object in the embodiments of the present invention includes the title of the product for which the advertorial is to be generated, or description information of that target object. The received related information may be input by a user, and the information entered by the user may be one or more titles of products of a certain category.
  • After the related information of the target object input by the user is received, a number of matching target titles are retrieved from the title library according to a preset title matching method (for example, string matching after word segmentation, or similarity matching).
  • The titles in the title library are expanded from the collected titles through the third generation model.
  • The title matching method is not specifically limited; the user can set it according to specific needs. An illustrative matching routine is sketched below.
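As one illustration of the matching step, the sketch below ranks library titles by word overlap with the segmented user input. The `segment()` placeholder, the Jaccard score and the threshold are assumptions; the patent deliberately leaves the concrete matching method to the user.

```python
# Illustrative title matching: segment the user's related information, then rank
# library titles by Jaccard word overlap and keep the best matches.
def segment(text):
    # Placeholder segmentation; a real system would use a Chinese word segmenter.
    return text.split()

def match_titles(related_info, title_library, top_k=5, min_sim=0.3):
    query = set(segment(related_info))
    scored = []
    for title in title_library:
        words = set(segment(title))
        union = query | words
        sim = len(query & words) / len(union) if union else 0.0
        if sim >= min_sim:
            scored.append((sim, title))
    scored.sort(reverse=True)                  # highest similarity first
    return [title for _, title in scored[:top_k]]
```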
  • S2: Input the target title into the first generation model to generate at least one target lead.
  • The first generation model is a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm).
  • The input of the model is the above target title, and the output is the target lead corresponding to that title.
  • The number of target leads output by the first generation model can be one or more; no limitation is imposed here.
  • S3: Generate at least one piece of input information conforming to the preset structure according to the related information and preset rules, and input the input information into the second generation model to generate at least one target body.
  • The second generation model is also a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm).
  • Diversification of the target bodies is achieved by expanding the input of the second generation model. Therefore, before the target body is generated, at least one piece of input information conforming to the preset structure is first generated according to the related information and preset rules, and the obtained input information is then fed into the second generation model to generate at least one target body.
  • "At least one" means that there can be one or more.
  • Generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, inputting the input information into the second generation model, and generating at least one target body includes:
  • S101: Perform word segmentation on the related information, and extract target segmented words meeting a preset condition from the obtained first word segmentation result.
  • The related information usually has a "modifiers + category word" structure, where the modifiers are words describing brand, function, feature, material and the like.
  • The input of the second generation model is expanded by recombining the order of the modifiers, so that the target bodies output by the second generation model are diversified. Therefore, before the target body is generated, word segmentation is performed on the related information to obtain the first word segmentation result, and the target segmented words meeting the preset condition are extracted from it. Since the input is expanded by reordering modifiers, the target segmented words meeting the preset condition are the segmented words in the first word segmentation result that are modifiers.
  • S102: Recombine the target segmented words to obtain at least one piece of input information that conforms to the preset structure.
  • A recombination mechanism can be preset according to actual needs, for example, reordering the modifiers after word segmentation. The target segmented words obtained in the above step are then recombined according to this mechanism, and several pieces of input information conforming to the preset structure are output.
  • The preset structure can be a "modifiers + category word" structure; the user can set and adjust it according to actual needs, and no specific restriction is imposed here.
  • S103: Input the input information into the second generation model to generate at least one target body.
  • The input information obtained through the above steps is input into the second generation model to generate at least one target body. A sketch of the input-expansion step follows.
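A minimal sketch of the input-expansion idea described in S101 and S102, assuming that every non-category token counts as a modifier and that variants are built by permuting the modifier order; the helper name and the variant limit are hypothetical.

```python
# Illustrative expansion of the second model's input: keep the category word fixed
# and permute the modifier order, yielding several "modifiers + category word" strings.
from itertools import permutations

def build_inputs(segmented_info, category_word, max_variants=6):
    modifiers = [w for w in segmented_info if w != category_word]
    variants = []
    for order in permutations(modifiers):
        variants.append("".join(order) + category_word)
        if len(variants) >= max_variants:
            break
    return variants

# e.g. build_inputs(["无线", "降噪", "蓝牙", "耳机"], "耳机")
# -> ["无线降噪蓝牙耳机", "无线蓝牙降噪耳机", ...]
```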
  • Fig. 3 is a flowchart showing the construction process of the title library according to an exemplary embodiment.
  • The construction process of the title library includes:
  • S201: Perform word segmentation on several collected first sample titles to obtain a second word segmentation result.
  • The adapted target titles are obtained by matching against the title library according to the related information, but the number of titles actually collected is often limited.
  • The limited collected titles are therefore expanded in order to increase the number of titles in the title library.
  • S202: Use a preset first keyword extraction method to extract first keywords from the first sample titles.
  • The preset first keyword extraction method is then used to extract the first keywords from the sample titles, where the user can set the extraction ratio of the first keywords (that is, the proportion of a sample title's words taken as keywords) according to actual needs.
  • The first keyword extraction method is not specifically limited; the user can set it according to actual needs, for example, using the TF-IDF algorithm.
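The sketch below shows one way such a ratio-controlled TF-IDF extraction could look, computed over the collected sample titles; the smoothing and the default ratio of 0.4 are assumptions, and any off-the-shelf TF-IDF implementation could be substituted.

```python
# Illustrative TF-IDF keyword extraction with a configurable extraction ratio
# (share of each title's distinct words kept as keywords). Titles are assumed
# to be segmented already; idf is computed over the collected sample titles.
import math
from collections import Counter

def tfidf_keywords(segmented_titles, ratio=0.4):
    doc_freq = Counter()
    for words in segmented_titles:
        doc_freq.update(set(words))
    n_docs = len(segmented_titles)
    keywords_per_title = []
    for words in segmented_titles:
        tf = Counter(words)
        scores = {w: (tf[w] / len(words)) *
                     (math.log((1 + n_docs) / (1 + doc_freq[w])) + 1.0)
                  for w in tf}
        k = max(1, int(len(set(words)) * ratio))
        keywords_per_title.append(sorted(scores, key=scores.get, reverse=True)[:k])
    return keywords_per_title
```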
  • S203: Input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles; the title library is composed of the new titles.
  • The second word segmentation result and the first keywords obtained in the above steps are used as the input of the third generation model, and the resulting output (the new titles) is the set of expanded titles obtained from the target titles.
  • These new titles constitute the title library provided by the embodiment of the present invention.
  • A beam search (BeamSearch) decoder can be used in the third generation model, so that a large number of titles can be generated.
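A generic beam search sketch is shown below; `step_fn` is an assumed hook that asks the third generation model for candidate next tokens with their log-probabilities, and the beam width and length limit are illustrative. Keeping the k best partial sequences at every step is what lets a single decoder emit many distinct titles.

```python
# Illustrative beam search decoder over an assumed step function.
import heapq

def beam_search(step_fn, start_token, end_token, beam_width=5, max_len=20):
    beams = [(0.0, [start_token])]                # (cumulative log-prob, tokens)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq[-1] == end_token:
                finished.append((logp, seq))      # completed title, stop extending
                continue
            for token, token_logp in step_fn(seq):   # model proposals for next token
                candidates.append((logp + token_logp, seq + [token]))
        if not candidates:
            break
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    finished.extend(b for b in beams if b[1][-1] != end_token)
    return [seq for _, seq in heapq.nlargest(beam_width, finished, key=lambda c: c[0])]
```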
  • Fig. 4 is a flowchart of a construction process of a title library according to another exemplary embodiment.
  • The construction process of the title library includes:
  • S301: Perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result;
  • S302: Use a preset first keyword extraction method to extract the first keywords from the sample titles;
  • S303: Take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;
  • S304: Take the data of the input data set as input and the target titles as output, and train a third generation model based on a preset algorithm;
  • S305: Input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles; the title library is composed of the new titles.
  • The third generation model here is also a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm).
  • When training data is prepared for the third generation model, the intersection of the first keyword set and the second word segmentation result obtained in the above steps is taken to obtain the input data set; the data of the input data set is then used as input and the target titles as output, and the third generation model is trained based on the preset algorithm (for example, the Seq2Seq algorithm).
  • For the specific implementation of steps S301, S302 and S305, refer to the implementation of steps S201 to S203 described above, which is not repeated here.
  • In addition, models in different training states (i.e., at different training steps or epochs) can be used to repeat the above steps and further expand the titles.
  • This approach relies only on existing titles (the first sample titles) and uses a specific extraction method to construct the inputs and outputs for training a third generation model, so that a large number of titles with flexible sentence patterns can be obtained in a short time, saving labor costs and increasing productivity.
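A minimal sketch of assembling the third model's training pairs as described in S303 and S304, with `segment` and `extract_keywords` passed in as hypothetical helpers standing for the segmentation and keyword-extraction steps above.

```python
# Illustrative construction of the third model's training set: the input is the
# intersection of a title's keywords with its segmentation result (order kept),
# the output is the title itself.
def build_title_training_set(sample_titles, segment, extract_keywords):
    pairs = []
    for title in sample_titles:
        words = segment(title)                        # second word segmentation result
        keywords = set(extract_keywords(title))       # first keywords
        model_input = [w for w in words if w in keywords]   # intersection
        if model_input:
            pairs.append((model_input, title))        # (input tokens, target title)
    return pairs
```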
  • Fig. 5 is a flowchart showing the construction process of the first generation model according to an exemplary embodiment.
  • The method further includes a construction process of the first generation model, including:
  • S401: Perform word segmentation on several collected second sample titles and the leads paired with the second sample titles.
  • Keyword matching is used to mine the intrinsic relationship between titles and leads and to pair one title with multiple leads.
  • This greatly expands the training data of the first generation model, avoids problems such as over-fitting and poor generation quality caused by insufficient training data, and effectively improves the generation quality of the first generation model.
  • A certain number of title-lead pairs are first collected in advance, that is, a number of second sample titles and the leads corresponding to them; word segmentation is then performed on the second sample titles and on the leads paired with them to obtain their respective segmentation results.
  • S402: Use a preset second keyword extraction method to extract second keywords from the second sample titles.
  • The preset second keyword extraction method is then used to extract the second keywords from the second sample titles, where the user can set the extraction ratio of the second keywords (that is, the proportion of a sample title's words taken as keywords) according to actual needs.
  • The second keyword extraction method is likewise not specifically limited; the user can set it according to actual needs, for example, using the TF-IDF algorithm.
  • S403: Take the intersection of the second keyword set and each segmented second sample title to obtain the target keywords.
  • For the target keywords extracted from each second sample title, the intersection of the second keyword set and each segmented second sample title is taken, and the result of the intersection is used as the target keywords.
  • S404: Traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample titles, and take the successfully matched leads as new leads of the current second sample title.
  • An optimal matching criterion is set in advance according to actual needs, for example sorting by the number of matched keywords and selecting the 10 leads with the most matched keywords as the leads corresponding to the title.
  • Each second sample title is traversed, its target keywords are matched against the fully segmented leads, and several successfully matched leads are obtained according to the preset optimal matching criterion as new leads of the current second sample title, which greatly expands the amount of data.
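The sketch below illustrates this lead-expansion step under the top-10 criterion given as an example above; the scoring by matched-keyword count follows the text, while the data layout (parallel lists of segmented and raw leads) is an assumption.

```python
# Illustrative lead expansion: score every segmented lead by how many of a
# title's target keywords it contains and keep the top matches as extra
# (title, lead) training pairs.
def expand_leads(title_keywords, segmented_leads, raw_leads, top_k=10):
    scored = []
    for words, raw in zip(segmented_leads, raw_leads):
        hits = sum(1 for kw in title_keywords if kw in words)
        if hits > 0:
            scored.append((hits, raw))
    scored.sort(key=lambda x: x[0], reverse=True)   # most matched keywords first
    return [raw for _, raw in scored[:top_k]]
```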
  • S405: Take the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and train a first generation model based on a preset algorithm.
  • The first generation model is also a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm).
  • Finally, the second sample title is used as input, the corresponding lead and the newly expanded leads are used as output, and the first generation model is trained based on that preset algorithm.
  • Fig. 6 is a schematic structural diagram of a device for generating target advertorials based on deep learning according to an exemplary embodiment. As shown in Fig. 6, the device includes:
  • a title matching module, configured to receive related information of the target object and match several adapted target titles from the title library according to the related information, the titles in the title library being expanded from the collected titles;
  • a lead generation module, used to input the target title into the first generation model to generate at least one target lead;
  • a body generation module, configured to generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, and to input the input information into the second generation model to generate at least one target body;
  • an information assembly module, which assembles the target title, the target lead and the target body to obtain multiple target advertorials.
  • The body generation module includes:
  • a first word segmentation unit, configured to perform word segmentation on the related information and extract, from the obtained first word segmentation result, target segmented words that meet a preset condition;
  • a word recombination unit, used to recombine the target segmented words to obtain at least one piece of input information that conforms to a preset structure;
  • a body generation unit, used to input the input information into the second generation model to generate at least one target body.
  • The device further includes a first construction module, including:
  • a second word segmentation unit, used to perform word segmentation on several collected first sample titles to obtain the second word segmentation result;
  • a first extraction unit, configured to extract the first keywords from the first sample titles using a preset first keyword extraction method;
  • a title generation unit, configured to input the second word segmentation result and the first keywords into a third generation model to obtain multiple new titles, the title library being composed of the new titles.
  • The first construction module further includes:
  • a first intersection unit, configured to take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;
  • a first training unit, configured to take the data of the input data set as input and the target title as output, and train a third generation model based on a preset algorithm.
  • The device further includes a second construction module, including:
  • a third word segmentation unit, used to perform word segmentation on several collected second sample titles and the leads paired with them;
  • a second extraction unit, configured to extract the second keywords from the second sample titles using a preset second keyword extraction method;
  • a second intersection unit, used to take the intersection of the second keyword set and each segmented second sample title to obtain the target keywords;
  • a lead expansion unit, used to traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample title, and take the successfully matched leads as new leads of the current second sample title;
  • a second training unit, configured to take the second sample title as input, the lead corresponding to the second sample title and the new leads as output, and train a first generation model based on a preset algorithm.
  • The method and device for generating target advertorials based on deep learning receive related information of the target object and match several suitable target titles from the title library according to that information, the titles in the title library having been expanded from the collected titles through the third generation model; the target title is input into the first generation model to generate at least one target lead, and at least one piece of input information conforming to the preset structure is generated according to the related information and preset rules.
  • The input information is fed into the second generation model to generate at least one target body, and the target title, target lead and target body are assembled to obtain multiple target advertorials.
  • With deep learning and natural language processing technology, marketing advertorials can be generated automatically, intelligently and in a diversified way, saving operating staff input, improving production efficiency, effectively avoiding the low efficiency of manual writing and avoiding the dullness of template generation.
  • The method and device obtain the second word segmentation result by performing word segmentation on a number of collected first sample titles, extract the first keywords from the first sample titles with a preset first keyword extraction method, and input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, using the existing limited titles to expand the number of titles in the title library.
  • The method and device also perform word segmentation on a number of collected second sample titles and the leads paired with them, extract the second keywords from the second sample titles with a preset second keyword extraction method, take the intersection of the second keyword set and each segmented second sample title to obtain the target keywords, traverse each second sample title, match the target keywords against the fully segmented leads corresponding to it, and take the successfully matched leads as new leads of the current second sample title.
  • The second sample title is used as input, and the lead corresponding to it together with the new leads are used as output; the first generation model is trained on this basis with a preset algorithm, which expands the training data of the lead generation model and avoids problems such as over-fitting and poor generation quality caused by insufficient training data.
  • When the device for generating target advertorials based on deep learning triggers the advertorial generation service, the division into the above functional modules is only used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • The device for generating target advertorials based on deep learning provided by the above embodiment belongs to the same concept as the embodiment of the method for generating target advertorials based on deep learning, that is, the device is based on that method; for its specific implementation, refer to the method embodiment, which is not repeated here.
  • All or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium.
  • The storage medium mentioned can be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)
  • Character Discrimination (AREA)

Abstract

A method and device for generating a target advertorial based on deep learning. The method comprises: receiving related information of a target object, and matching several adapted target titles from a title library according to the related information, wherein the titles in the title library are expanded from collected titles by means of a third generation model (S1); inputting the target titles into a first generation model to generate at least one target lead (S2); generating, according to the related information and preset rules, at least one piece of input information conforming to a preset structure, and inputting the input information into a second generation model to generate at least one target body (S3); and assembling the target titles, the target leads and the target bodies to obtain multiple target advertorials (S4). By using deep learning and natural language processing technology, marketing advertorials can be generated automatically, intelligently and in a diversified manner, saving operating staff input, improving the production efficiency of marketing advertorials, effectively avoiding the low efficiency of manual writing and, at the same time, avoiding the dullness of template-based generation.

Description

Method and device for generating target advertorial based on deep learning

Technical Field

The present invention relates to the technical field of natural language processing, and in particular to a method and device for generating a target advertorial based on deep learning.

Background

Marketing advertorials are frequently used when new products are promoted in the market. A marketing advertorial usually consists of three parts: a title, a lead and a marketing body. The title describes the product being marketed in vivid, concise and engaging language; the lead plays a guiding role, steering the direction of consumption and leading into the marketing body; and the marketing body introduces and recommends the product.

At present, marketing advertorials, whether titles, leads or bodies, are mostly written manually by merchants' operating staff, or generated automatically from templates. Both approaches have shortcomings:

For manual writing, staff must compose vivid copy for each category to be marketed; once a large volume of advertorials is needed in a short time, or many categories must be covered, production efficiency tends to be low.

For template generation, batches can be produced quickly, but the generated sentences suffer from fixed patterns, dullness and insufficient diversity.
Summary of the Invention

In order to solve the problems in the prior art, embodiments of the present invention provide a method and device for generating a target advertorial based on deep learning, so as to overcome problems in the prior art such as the low production efficiency of manually written target advertorials and the fixed sentence patterns, dullness and insufficient diversity of template-generated target advertorials.

To solve one or more of the above technical problems, the technical solution adopted by the present invention is as follows:

In one aspect, a method for generating a target advertorial based on deep learning is provided, the method comprising the following steps:

receiving related information of a target object, and matching several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model;

inputting the target titles into a first generation model to generate at least one target lead;

generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, and inputting the input information into a second generation model to generate at least one target body;

assembling the target titles, the target leads and the target bodies to obtain multiple target advertorials.
Further, the generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, and inputting the input information into a second generation model to generate at least one target body comprises:

performing word segmentation on the related information, and extracting, from the obtained first word segmentation result, target segmented words that satisfy a preset condition;

recombining the target segmented words to obtain at least one piece of input information conforming to the preset structure;

inputting the input information into the second generation model to generate at least one target body.

Further, the method further comprises a construction process of the title library, comprising:

performing word segmentation on a number of collected first sample titles to obtain a second word segmentation result;

extracting first keywords from the first sample titles by using a preset first keyword extraction method;

inputting the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, the title library being composed of the new titles.

Further, the construction process of the title library further comprises:

taking the intersection of the first keyword set and the second word segmentation result to obtain an input data set;

taking the data of the input data set as input and the target titles as output, and training the third generation model based on a preset algorithm.

Further, the method further comprises a construction process of the first generation model, comprising:

performing word segmentation on a number of collected second sample titles and the leads paired with the second sample titles;

extracting second keywords from the second sample titles by using a preset second keyword extraction method;

taking the intersection of the second keyword set and each segmented second sample title to obtain target keywords;

traversing each second sample title, matching the target keywords against the fully segmented leads corresponding to the second sample titles, and taking the successfully matched leads as new leads of the current second sample title;

taking the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and training the first generation model based on a preset algorithm.
In another aspect, a device for generating a target advertorial based on deep learning is provided, the device comprising:

a title matching module, configured to receive related information of a target object and match several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model;

a lead generation module, configured to input the target titles into a first generation model to generate at least one target lead;

a body generation module, configured to generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, and to input the input information into a second generation model to generate at least one target body;

an information assembly module, configured to assemble the target titles, the target leads and the target bodies to obtain multiple target advertorials.

Further, the body generation module comprises:

a first word segmentation unit, configured to perform word segmentation on the related information and extract, from the obtained first word segmentation result, target segmented words that satisfy a preset condition;

a word recombination unit, configured to recombine the target segmented words to obtain at least one piece of input information conforming to the preset structure;

a body generation unit, configured to input the input information into the second generation model to generate at least one target body.

Further, the device further comprises a first construction module, comprising:

a second word segmentation unit, configured to perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result;

a first extraction unit, configured to extract first keywords from the first sample titles by using a preset first keyword extraction method;

a title generation unit, configured to input the second word segmentation result and the first keywords into a third generation model to obtain multiple new titles, the title library being composed of the new titles.

Further, the first construction module further comprises:

a first intersection unit, configured to take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;

a first training unit, configured to take the data of the input data set as input and the target titles as output, and train the third generation model based on a preset algorithm.

Further, the device further comprises a second construction module, comprising:

a third word segmentation unit, configured to perform word segmentation on a number of collected second sample titles and the leads paired with the second sample titles;

a second extraction unit, configured to extract second keywords from the second sample titles by using a preset second keyword extraction method;

a second intersection unit, configured to take the intersection of the second keyword set and each segmented second sample title to obtain target keywords;

a lead expansion unit, configured to traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample titles, and take the successfully matched leads as new leads of the current second sample title;

a second training unit, configured to take the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and train the first generation model based on a preset algorithm.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:

1. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, related information of a target object is received; several adapted target titles are matched from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model; the target titles are input into a first generation model to generate at least one target lead; at least one piece of input information conforming to a preset structure is generated according to the related information and preset rules and input into a second generation model to generate at least one target body; and the target titles, target leads and target bodies are assembled to obtain multiple target advertorials. By using deep learning and natural language processing technology, marketing advertorials can be generated automatically, intelligently and in a diversified manner, saving operating staff input, improving the production efficiency of marketing advertorials, effectively avoiding the low efficiency of manual writing and, at the same time, avoiding the dullness of template-based generation;

2. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, word segmentation is performed on a number of collected first sample titles to obtain a second word segmentation result, first keywords are extracted from the first sample titles by a preset first keyword extraction method, and the second word segmentation result and the first keywords are input into the third generation model to obtain multiple new titles, so that the limited existing titles are used to expand the number of titles in the title library;

3. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, word segmentation is performed on a number of collected second sample titles and the leads paired with them, second keywords are extracted from the second sample titles by a preset second keyword extraction method, the intersection of the second keyword set and each segmented second sample title is taken to obtain target keywords, each second sample title is traversed, the target keywords are matched against the fully segmented leads corresponding to it, the successfully matched leads are taken as new leads of the current second sample title, and the first generation model is trained with the second sample title as input and the corresponding lead together with the new leads as output, which expands the training data of the lead generation model and avoids problems such as over-fitting and poor generation quality caused by insufficient training data.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.

Fig. 1 is a flowchart of a method for generating a target advertorial based on deep learning according to an exemplary embodiment;

Fig. 2 is a flowchart of generating at least one piece of input information conforming to a preset structure according to related information and preset rules, inputting the input information into a second generation model, and generating at least one target body, according to an exemplary embodiment;

Fig. 3 is a flowchart of a construction process of a title library according to an exemplary embodiment;

Fig. 4 is a flowchart of a construction process of a title library according to another exemplary embodiment;

Fig. 5 is a flowchart of a construction process of a first generation model according to an exemplary embodiment;

Fig. 6 is a schematic structural diagram of a device for generating a target advertorial based on deep learning according to an exemplary embodiment.
Detailed Description

To make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

In the method for generating a target advertorial based on deep learning provided by the present invention, adapted titles are first retrieved from the title library according to the related information of the target object; leads and marketing copy (i.e., the marketing body) are then generated in turn from the matched titles and the related information; finally the target titles, leads and marketing bodies are assembled, and multiple marketing advertorials are output. In the embodiments of the present invention, the leads and the marketing body are generated with the Seq2Seq algorithm, which effectively avoids the low efficiency of manual writing while also avoiding the dullness of template generation. Seq2Seq is a generative architecture composed of an encoder and a decoder; it generates an output sequence Y from an input sequence X and is widely used in tasks such as translation, automatic text summarization and automated question answering.
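Putting the steps together, a minimal orchestration sketch of the flow just described might look as follows. The component functions are passed in as parameters because their names and signatures are assumptions for the sketch, not a published API.

```python
# Illustrative end-to-end flow: match titles (S1), generate leads (S2),
# generate bodies from expanded inputs (S3), and assemble advertorials (S4).
def generate_advertorials(related_info, title_library,
                          match_titles, generate_leads, build_inputs, generate_body):
    articles = []
    for title in match_titles(related_info, title_library):       # step S1
        for lead in generate_leads(title):                        # step S2: first model
            for model_input in build_inputs(related_info):        # step S3: preset rules
                articles.append({"title": title,
                                 "lead": lead,
                                 "body": generate_body(model_input)})  # second model + S4
    return articles
```

Every combination of title, lead and body yields one candidate advertorial, which is why a single request can return multiple drafts for the user to choose from.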
Fig. 1 is a flowchart of a method for generating a target advertorial based on deep learning according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps:

S1: Receive related information of a target object, and match several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model.

Specifically, a target advertorial generally contains three parts: a title, a lead and a body. The target advertorial in the embodiments of the present invention includes marketing advertorials; taking a marketing advertorial as an example, it contains a title, a lead and a body. The related information of the target object includes the title of the product for which the target advertorial is to be generated, or description information of the target object; the received related information may be input by a user, and the information entered by the user may be one or more titles of products of a certain category. After the related information of the target object input by the user is received, several target titles adapted to the related information are matched from the title library according to a preset title matching method (for example, string matching after word segmentation, similarity matching, etc.), where the titles in the title library are expanded from collected titles through the third generation model. It should be noted that the embodiments of the present invention do not specifically limit the title matching method, and the user can set it according to specific needs.

S2: Input the target titles into a first generation model to generate at least one target lead.

Specifically, in the embodiments of the present invention, the first generation model is a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm). The input of the model is the above target title, and the output is the target lead corresponding to the target title; the number of target leads output by the first generation model may be one or more, which is not limited here.

S3: Generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, and input the input information into a second generation model to generate at least one target body.

Specifically, the second generation model is likewise a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm). In the embodiments of the present invention, diversification of the target bodies output by the second generation model is achieved by expanding the input of the second generation model. Therefore, before the target body is generated, at least one piece of input information conforming to the preset structure is first generated according to the related information and preset rules, and the obtained input information is then input into the second generation model to generate at least one target body. Here, "at least one" means one or more.

S4: Assemble the target titles, the target leads and the target bodies to obtain multiple target advertorials.

Specifically, finally, the target titles, together with the target leads and target bodies obtained through the above steps, are assembled to obtain multiple target advertorials for the user to review and choose from.
Referring to Fig. 2, as a preferred implementation, in the embodiments of the present invention, the generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, inputting the input information into the second generation model, and generating at least one target body includes:

S101: Perform word segmentation on the related information, and extract, from the obtained first word segmentation result, target segmented words that satisfy a preset condition.

Specifically, the related information usually has a "modifiers + category word" structure, where the modifiers are words describing brand, function, feature, material and the like. In the embodiments of the present invention, the input of the second generation model is expanded by recombining the order of the modifiers, so that the target bodies output by the second generation model are diversified. Therefore, before the target body is generated, word segmentation is performed on the related information to obtain the first word segmentation result, and the target segmented words satisfying the preset condition are extracted from the first word segmentation result. Since the input of the second generation model is expanded by recombining the order of the modifiers, the target segmented words satisfying the preset condition here are the segmented words in the first word segmentation result that are modifiers.

S102: Recombine the target segmented words to obtain at least one piece of input information conforming to the preset structure.

Specifically, in the embodiments of the present invention, a recombination mechanism can be preset according to actual needs, for example, reordering the modifiers after word segmentation. The target segmented words obtained in the above step are then recombined according to this mechanism, and multiple pieces of input information conforming to the preset structure are output. Likewise, the preset structure can be a "modifiers + category word" structure, which the user can set and adjust according to actual needs; no specific limitation is imposed here.

S103: Input the input information into the second generation model to generate at least one target body.

Specifically, finally, the input information obtained through the above steps is input into the second generation model to generate at least one target body.
Fig. 3 is a flowchart of a construction process of the title library according to an exemplary embodiment. Referring to Fig. 3, as a preferred implementation, in the embodiments of the present invention, the construction process of the title library includes:

S201: Perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result.

Specifically, in the embodiments of the present invention, after the related information of the target object is received, the adapted target titles are obtained by matching against the title library according to the related information; however, in the process of building the title library, the number of titles actually collected is often limited. To solve this problem, the embodiments of the present invention increase the number of titles in the title library by expanding the limited collected titles. When titles are expanded, word segmentation is first performed on the collected first sample titles to obtain the second word segmentation result.

S202: Use a preset first keyword extraction method to extract first keywords from the first sample titles.

Specifically, the preset first keyword extraction method is then used to extract the first keywords from the sample titles, where the user can set the extraction ratio of the first keywords (that is, the proportion of a sample title's words taken as keywords) according to actual needs. It should be noted that the embodiments of the present invention do not specifically limit the first keyword extraction method; the user can set it according to actual needs, for example, using the TF-IDF algorithm.

S203: Input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, the title library being composed of the new titles.

Specifically, the second word segmentation result and the first keywords obtained in the above steps are used as the input of the third generation model, and the resulting output (the new titles) is the set of expanded titles obtained from the target titles; these new titles constitute the title library provided by the embodiments of the present invention. It should be noted that a beam search (BeamSearch) decoder can be used in the third generation model in the embodiments of the present invention, so that a large number of titles can be generated.
Fig. 4 is a flowchart of a construction process of the title library according to another exemplary embodiment. Referring to Fig. 4, as a preferred implementation, in the embodiments of the present invention, the construction process of the title library includes:

S301: Perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result;

S302: Use a preset first keyword extraction method to extract first keywords from the sample titles;

S303: Take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;

S304: Take the data of the input data set as input and the target titles as output, and train the third generation model based on a preset algorithm;

S305: Input the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, the title library being composed of the new titles.

Specifically, the third generation model here is likewise a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm). When training data is prepared for the third generation model, the intersection of the first keyword set and the second word segmentation result obtained in the above steps can be taken to obtain the input data set; the data of the input data set is then used as input and the target titles as output, and the third generation model is trained based on the preset algorithm (for example, the Seq2Seq algorithm). In addition, for the specific implementation of steps S301, S302 and S305, reference can be made to the specific implementation of steps S201 to S203 above, which is not repeated here.

In addition, models in different training states (i.e., at different training steps or epochs) can be used to repeat the above steps to further expand the titles. This approach relies only on the existing titles (the first sample titles) and uses a specific extraction method to construct the inputs and outputs for training the third generation model, so that a large number of titles with flexible sentence patterns can be obtained in a short time, saving labor costs and improving production efficiency.
Fig. 5 is a flowchart of a construction process of the first generation model according to an exemplary embodiment. Referring to Fig. 5, as a preferred implementation, in the embodiments of the present invention, the method further includes a construction process of the first generation model, including:

S401: Perform word segmentation on a number of collected second sample titles and the leads paired with the second sample titles.

Specifically, the embodiments of the present invention use keyword matching to mine the intrinsic relationship between titles and leads and to pair one title with multiple leads, which greatly expands the training data of the first generation model, avoids problems such as over-fitting and poor generation quality caused by insufficient training data, and effectively improves the generation quality of the first generation model. In implementation, a certain number of title-lead pairs are first collected in advance, that is, a number of second sample titles and the leads corresponding to them are collected; word segmentation is then performed on the second sample titles and the leads paired with them to obtain their respective segmentation results.

S402: Use a preset second keyword extraction method to extract second keywords from the second sample titles.

Specifically, the preset second keyword extraction method is then used to extract the second keywords from the second sample titles, where the user can set the extraction ratio of the second keywords (that is, the proportion of a sample title's words taken as keywords) according to actual needs. It should be noted that the embodiments of the present invention likewise do not specifically limit the second keyword extraction method; the user can set it according to actual needs, for example, using the TF-IDF algorithm.

S403: Take the intersection of the second keyword set and each segmented second sample title to obtain target keywords.

Specifically, for the target keywords extracted from each second sample title, in implementation, the intersection of the second keyword set and each segmented second sample title can be taken, and the result of the intersection is used as the target keywords.

S404: Traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample titles, and take the successfully matched leads as new leads of the current second sample title.

Specifically, an optimal matching criterion is set in advance according to actual needs, for example sorting by the number of matched keywords and selecting the 10 leads with the most matched keywords as the leads corresponding to the title. Each second sample title is traversed, the target keywords of each second sample title are matched against the fully segmented leads, and several successfully matched leads are obtained according to the preset optimal matching criterion as new leads of the current second sample title, which greatly expands the amount of data.

S405: Take the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and train the first generation model based on a preset algorithm.

Specifically, the first generation model is likewise a natural language processing model pre-trained with a preset algorithm (for example, the Seq2Seq algorithm). Finally, the second sample title is used as input, the lead corresponding to the second sample title and the new leads expanded in the above steps are used as output, and the first generation model is trained based on the preset algorithm.
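As an illustration of this training step, the sketch below runs teacher-forced cross-entropy training over (title, lead) pairs, including the newly matched leads, using a Seq2Seq model of the kind sketched earlier in this document; the batching scheme, tokenization and hyperparameters are assumptions for the example.

```python
# Illustrative training loop for the first generation model on tokenized
# (title, lead) tensor pairs; pairs is assumed to include the expanded leads.
import torch.nn as nn

def train_lead_model(model, pairs, optimizer, pad_id=0, epochs=10):
    criterion = nn.CrossEntropyLoss(ignore_index=pad_id)
    for _ in range(epochs):
        for title_ids, lead_ids in pairs:        # each a LongTensor of shape (1, len)
            optimizer.zero_grad()
            # teacher forcing: feed lead[:-1], predict lead[1:]
            logits = model(title_ids, lead_ids[:, :-1])
            loss = criterion(logits.reshape(-1, logits.size(-1)),
                             lead_ids[:, 1:].reshape(-1))
            loss.backward()
            optimizer.step()
    return model
```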
Fig. 6 is a schematic structural diagram of a device for generating a target advertorial based on deep learning according to an exemplary embodiment. Referring to Fig. 6, the device includes:

a title matching module, configured to receive related information of a target object and match several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles;

a lead generation module, configured to input the target titles into a first generation model to generate at least one target lead;

a body generation module, configured to generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, and to input the input information into a second generation model to generate at least one target body;

an information assembly module, configured to assemble the target titles, the target leads and the target bodies to obtain multiple target advertorials.

As a preferred implementation, in the embodiments of the present invention, the body generation module includes:

a first word segmentation unit, configured to perform word segmentation on the related information and extract, from the obtained first word segmentation result, target segmented words that satisfy a preset condition;

a word recombination unit, configured to recombine the target segmented words to obtain at least one piece of input information conforming to the preset structure;

a body generation unit, configured to input the input information into the second generation model to generate at least one target body.

As a preferred implementation, in the embodiments of the present invention, the device further includes a first construction module, including:

a second word segmentation unit, configured to perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result;

a first extraction unit, configured to extract first keywords from the first sample titles by using a preset first keyword extraction method;

a title generation unit, configured to input the second word segmentation result and the first keywords into a third generation model to obtain multiple new titles, the title library being composed of the new titles.

As a preferred implementation, in the embodiments of the present invention, the first construction module further includes:

a first intersection unit, configured to take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;

a first training unit, configured to take the data of the input data set as input and the target titles as output, and train the third generation model based on a preset algorithm.

As a preferred implementation, in the embodiments of the present invention, the device further includes a second construction module, including:

a third word segmentation unit, configured to perform word segmentation on a number of collected second sample titles and the leads paired with the second sample titles;

a second extraction unit, configured to extract second keywords from the second sample titles by using a preset second keyword extraction method;

a second intersection unit, configured to take the intersection of the second keyword set and each segmented second sample title to obtain target keywords;

a lead expansion unit, configured to traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample titles, and take the successfully matched leads as new leads of the current second sample title;

a second training unit, configured to take the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and train the first generation model based on a preset algorithm.
In summary, the technical solutions provided by the embodiments of the present invention bring the following beneficial effects:

1. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, related information of a target object is received; several adapted target titles are matched from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model; the target titles are input into a first generation model to generate at least one target lead; at least one piece of input information conforming to a preset structure is generated according to the related information and preset rules and input into a second generation model to generate at least one target body; and the target titles, target leads and target bodies are assembled to obtain multiple target advertorials. By using deep learning and natural language processing technology, marketing advertorials can be generated automatically, intelligently and in a diversified manner, saving operating staff input, improving the production efficiency of marketing advertorials, effectively avoiding the low efficiency of manual writing and, at the same time, avoiding the dullness of template-based generation;

2. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, word segmentation is performed on a number of collected first sample titles to obtain a second word segmentation result, first keywords are extracted from the first sample titles by a preset first keyword extraction method, and the second word segmentation result and the first keywords are input into the third generation model to obtain multiple new titles, so that the limited existing titles are used to expand the number of titles in the title library;

3. In the method and device for generating a target advertorial based on deep learning provided by the embodiments of the present invention, word segmentation is performed on a number of collected second sample titles and the leads paired with them, second keywords are extracted from the second sample titles by a preset second keyword extraction method, the intersection of the second keyword set and each segmented second sample title is taken to obtain target keywords, each second sample title is traversed, the target keywords are matched against the fully segmented leads corresponding to it, the successfully matched leads are taken as new leads of the current second sample title, and the first generation model is trained with the second sample title as input and the corresponding lead together with the new leads as output, which expands the training data of the lead generation model and avoids problems such as over-fitting and poor generation quality caused by insufficient training data.
It should be noted that when the device for generating a target advertorial based on deep learning provided by the above embodiments triggers the advertorial generation service, the division into the above functional modules is only used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the device for generating a target advertorial based on deep learning provided by the above embodiments belongs to the same concept as the embodiments of the method for generating a target advertorial based on deep learning, that is, the device is based on that method; for its specific implementation, refer to the method embodiments, which are not repeated here.

A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disk, or the like.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A method for generating a target advertorial based on deep learning, characterized in that the method comprises the following steps:
    receiving related information of a target object, and matching several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model;
    inputting the target titles into a first generation model to generate at least one target lead;
    generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, and inputting the input information into a second generation model to generate at least one target body;
    assembling the target titles, the target leads and the target bodies to obtain multiple target advertorials.
  2. The method for generating a target advertorial based on deep learning according to claim 1, characterized in that the generating at least one piece of input information conforming to a preset structure according to the related information and preset rules, and inputting the input information into a second generation model to generate at least one target body comprises:
    performing word segmentation on the related information, and extracting, from the obtained first word segmentation result, target segmented words that satisfy a preset condition;
    recombining the target segmented words to obtain at least one piece of input information conforming to the preset structure;
    inputting the input information into the second generation model to generate at least one target body.
  3. The method for generating a target advertorial based on deep learning according to claim 1 or 2, characterized in that the method further comprises a construction process of the title library, comprising:
    performing word segmentation on a number of collected first sample titles to obtain a second word segmentation result;
    extracting first keywords from the first sample titles by using a preset first keyword extraction method;
    inputting the second word segmentation result and the first keywords into the third generation model to obtain multiple new titles, the title library being composed of the new titles.
  4. The method for generating a target advertorial based on deep learning according to claim 3, characterized in that the construction process of the title library further comprises:
    taking the intersection of the first keyword set and the second word segmentation result to obtain an input data set;
    taking the data of the input data set as input and the target titles as output, and training the third generation model based on a preset algorithm.
  5. The method for generating a target advertorial based on deep learning according to claim 1 or 2, characterized in that the method further comprises a construction process of the first generation model, comprising:
    performing word segmentation on a number of collected second sample titles and the leads paired with the second sample titles;
    extracting second keywords from the second sample titles by using a preset second keyword extraction method;
    taking the intersection of the second keyword set and each segmented second sample title to obtain target keywords;
    traversing each second sample title, matching the target keywords against the fully segmented leads corresponding to the second sample titles, and taking the successfully matched leads as new leads of the current second sample title;
    taking the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and training the first generation model based on a preset algorithm.
  6. A device for generating a target advertorial based on deep learning, characterized in that the device comprises:
    a title matching module, configured to receive related information of a target object and match several adapted target titles from a title library according to the related information, the titles in the title library being expanded from collected titles through a third generation model;
    a lead generation module, configured to input the target titles into a first generation model to generate at least one target lead;
    a body generation module, configured to generate at least one piece of input information conforming to a preset structure according to the related information and preset rules, and to input the input information into a second generation model to generate at least one target body;
    an information assembly module, configured to assemble the target titles, the target leads and the target bodies to obtain multiple target advertorials.
  7. The device for generating a target advertorial based on deep learning according to claim 6, characterized in that the body generation module comprises:
    a first word segmentation unit, configured to perform word segmentation on the related information and extract, from the obtained first word segmentation result, target segmented words that satisfy a preset condition;
    a word recombination unit, configured to recombine the target segmented words to obtain at least one piece of input information conforming to the preset structure;
    a body generation unit, configured to input the input information into the second generation model to generate at least one target body.
  8. The device for generating a target advertorial based on deep learning according to claim 6 or 7, characterized in that the device further comprises a first construction module, comprising:
    a second word segmentation unit, configured to perform word segmentation on a number of collected first sample titles to obtain a second word segmentation result;
    a first extraction unit, configured to extract first keywords from the first sample titles by using a preset first keyword extraction method;
    a title generation unit, configured to input the second word segmentation result and the first keywords into a third generation model to obtain multiple new titles, the title library being composed of the new titles.
  9. The device for generating a target advertorial based on deep learning according to claim 8, characterized in that the first construction module further comprises:
    a first intersection unit, configured to take the intersection of the first keyword set and the second word segmentation result to obtain an input data set;
    a first training unit, configured to take the data of the input data set as input and the target titles as output, and train the third generation model based on a preset algorithm.
  10. The device for generating a target advertorial based on deep learning according to claim 6 or 7, characterized in that the device further comprises a second construction module, comprising:
    a third word segmentation unit, configured to perform word segmentation on a number of collected second sample titles and the leads paired with the second sample titles;
    a second extraction unit, configured to extract second keywords from the second sample titles by using a preset second keyword extraction method;
    a second intersection unit, configured to take the intersection of the second keyword set and each segmented second sample title to obtain target keywords;
    a lead expansion unit, configured to traverse each second sample title, match the target keywords against the fully segmented leads corresponding to the second sample titles, and take the successfully matched leads as new leads of the current second sample title;
    a second training unit, configured to take the second sample title as input and the lead corresponding to the second sample title together with the new leads as output, and train the first generation model based on a preset algorithm.
PCT/CN2020/097007 2019-12-30 2020-06-19 Method and device for generating target advertorial based on deep learning WO2021135091A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3166556A CA3166556A1 (en) 2019-12-30 2020-06-19 Method and device for generating target advertorial based on deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911403246.2A 2019-12-30 Method and device for generating target advertorial based on deep learning
CN201911403246.2 2019-12-30

Publications (1)

Publication Number Publication Date
WO2021135091A1 true WO2021135091A1 (zh) 2021-07-08

Family

ID=70650585

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097007 WO2021135091A1 (zh) 2019-12-30 2020-06-19 一种基于深度学习的目标软文的生成方法及装置

Country Status (3)

Country Link
CN (1) CN111178018B (zh)
CA (1) CA3166556A1 (zh)
WO (1) WO2021135091A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178018B (zh) * 2019-12-30 2024-03-26 苏宁云计算有限公司 Method and device for generating target advertorial based on deep learning
CN115409000B (zh) * 2022-11-02 2023-01-24 浪潮通信信息系统有限公司 Method and device for automatically generating advertorials about trending figures

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246710A (zh) * 2013-04-22 2013-08-14 张经纶 Method and device for automatically generating multimedia travel notes
CN106777193A (zh) * 2016-12-23 2017-05-31 李鹏 Method for automatically writing specific manuscripts
CN109992764A (zh) * 2017-12-29 2019-07-09 阿里巴巴集团控股有限公司 Copywriting generation method and device
US20190236148A1 (en) * 2018-02-01 2019-08-01 Jungle Disk, L.L.C. Generative text using a personality model
CN110162623A (zh) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 Automatic advertorial generation method, device, computer equipment and storage medium
CN111178018A (zh) * 2019-12-30 2020-05-19 苏宁云计算有限公司 Method and device for generating target advertorial based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503255B (zh) * 2016-11-15 2020-05-12 科大讯飞股份有限公司 Method and system for automatically generating articles based on descriptive text
CN109388745A (zh) * 2018-06-15 2019-02-26 云天弈(北京)信息技术有限公司 Automatic batch article writing system
CN109460447A (zh) * 2018-11-29 2019-03-12 上海文军信息技术有限公司 Marketing advertorial recognition method

Also Published As

Publication number Publication date
CN111178018B (zh) 2024-03-26
CA3166556A1 (en) 2021-07-08
CN111178018A (zh) 2020-05-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910378

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3166556

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20910378

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 23/08/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20910378

Country of ref document: EP

Kind code of ref document: A1