CN111209725B - Text information generation method and device and computing equipment - Google Patents


Info

Publication number
CN111209725B
Application number
CN201811377243.1A
Authority
CN (China)
Prior art keywords
text, information, title, commodity, generation model
Legal status
Active (granted)
Other languages
Chinese (zh)
Other versions
CN111209725A
Inventors
严玉良, 王勇臻, 黄恒, 刘晓钟
Assignee
Alibaba Group Holding Ltd (original and current)
Filing and publication
Application filed by Alibaba Group Holding Ltd; priority to CN201811377243.1A; published as CN111209725A, granted as CN111209725B.

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a text information generation method, a text information generation device and computing equipment. The method comprises the following steps: acquiring title information of a commodity; inputting the title information into a text generation model to generate a plurality of descriptive texts for the commodity; acquiring click information of the plurality of descriptive texts; training the text generation model at least according to the acquired click information to adjust network parameters of the text generation model.

Description

Text information generation method and device and computing equipment
Technical Field
The present invention relates to the field of natural language processing, and in particular, to a method and apparatus for generating text information, and a computing device.
Background
At present, in the content production process, the description information of a commodity needs to be written manually, and the text generation efficiency is therefore extremely low. Automatic generation of commodity descriptions is thus desired, to free up manual labor and improve text generation efficiency.
Disclosure of Invention
The present invention has been made in view of the above problems, and it is an object of the present invention to provide a text information generation method, apparatus and computing device that overcome, or at least partially solve, the above problems.
According to an aspect of the present invention, there is provided a text information generating method including:
acquiring title information of a commodity;
inputting the title information into a text generation model to generate a plurality of descriptive texts for the commodity;
acquiring click information of the plurality of descriptive texts;
training the text generation model at least according to the acquired click information to adjust network parameters of the text generation model.
Optionally, in the text information generating method according to the present invention, the text generation model includes a title encoder and a title decoder, the title encoder being adapted to encode the title information into a semantic vector for the title information, and the title decoder being adapted to generate a word distribution vector for each position of the description text at least from the semantic vector, and to generate a plurality of description texts for the commodity from the word distribution vectors.
Optionally, the text information generating method according to the present invention further includes: acquiring attribute information associated with the commodity; generating an attention vector according to the attribute information; and inputting the attention vector to the title decoder to cause the title decoder to generate the word distribution vector from the semantic vector and the attention vector.
Optionally, in the text information generating method according to the present invention, the attribute information associated with the commodity includes at least one of a brand, a color, a size, and a price of the commodity.
Optionally, in the text information generating method according to the present invention, training the text generation model at least according to the obtained click information to adjust network parameters of the text generation model includes: acquiring a target description text of the title information; calculating a first cross entropy loss between the word distribution vector and the target description text; calculating a second cross entropy loss between a predetermined number of the description texts with the highest click rate and the title information; and adjusting network parameters of the text generation model by taking the sum of the first cross entropy loss and the second cross entropy loss as the loss function value.
Optionally, in the text information generating method according to the present invention, the generating a plurality of descriptive texts for the commodity according to the word distribution vector includes: searching the word distribution vector of each position by adopting a beam search algorithm, thereby generating the plurality of descriptive texts for the commodity.
Optionally, in the text information generating method according to the present invention, the title encoder and the title decoder employ at least one of a recurrent neural network (RNN), a gated recurrent unit (GRU), or a long short-term memory network (LSTM).
Optionally, the text information generating method according to the present invention further includes: and sending the description text to a client for display.
According to another aspect of the present invention, there is also provided a text information generating apparatus including:
the first acquisition module is suitable for acquiring title information of the commodity;
a text generation module adapted to input the title information to a text generation model to generate a plurality of descriptive text for the item;
the second acquisition module is suitable for acquiring click information of the plurality of descriptive texts;
and the parameter adjustment module is suitable for training the text generation model according to the acquired click information so as to adjust network parameters of the text generation model.
According to yet another aspect of the present invention, there is also provided a computing device including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods described above.
According to the method, the text generation model first generates description texts in multiple versions, which are put online; the user behavior on the online versions (users' clicks on the description texts) is then obtained and added into the loss function of the text generation model for continued training, so that the performance of the text generation model can be improved.
The foregoing is merely an overview of the technical solution of the present invention. In order that the technical means of the invention may be more clearly understood and implemented in accordance with the specification, and in order to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a schematic diagram of a text information generating system 100 according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a text information generation method 300 according to one embodiment of the invention;
fig. 4 shows a schematic diagram of a text information generating apparatus 400 according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic diagram of a text information generating system 100 according to an embodiment of the invention. As shown in fig. 1, the text information generating system 100 includes a user terminal 110 and a computing device 200.
The user terminal 110, i.e., the terminal device used by a user, may be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, or intelligent wearable device, but is not limited thereto. The computing device 200 provides services to the user terminal 110 and may be implemented as a server, e.g., an application server or a Web server; it may also be, but is not limited to, a desktop computer, notebook computer, processor chip, tablet computer, or the like.
According to one embodiment, the computing device 200 may perform the merchandise information query, and the terminal device 110 may establish a connection with the computing device 200 via the internet, so that the user may perform the merchandise information query via the terminal device 110. For example, a user opens a browser or shopping class Application (APP) on terminal device 110, enters a query phrase (query) in a search box, i.e., initiates a query request to computing device 200. After receiving the query request, the computing device 200 queries the commodity information according to the query phrase input by the user, and returns a query result to the terminal device 110, where the query result may include title information of the commodity and description text for the commodity. The terminal device 110 displays title information of the commodity and commodity description text in an interface, and the user can click on the commodity description text of interest, thereby entering a commodity detail page. At the same time, computing device 200 may record the clicking actions of the user on the merchandise description text. Wherein the article description text is automatically generated by the computing device 200 using a text generation tool (text generation model) based on title information of the article.
In one embodiment, the text information generating system 100 further includes a data storage 120. The data storage 120 may be a relational database such as MySQL, ACCESS, etc., or a non-relational database such as NoSQL, etc.; the data storage device 120 may be a local database residing in the computing device 200, or may be a distributed database, such as HBase, disposed at a plurality of geographic locations, and in any case, the data storage device 120 is used to store data, and the specific deployment and configuration of the data storage device 120 is not limited by the present invention. The computing device 200 may connect with the data storage 120 and retrieve data stored in the data storage 120. For example, the computing device 200 may directly read the data in the data storage device 120 (when the data storage device 120 is a local database of the computing device 200), or may access the internet through a wired or wireless manner, and obtain the data in the data storage device 120 through a data interface.
In an embodiment of the present invention, the data storage 120 is adapted to store merchandise information, such as title information of the merchandise, attribute information associated with the merchandise, detailed description of the merchandise, descriptive text for the merchandise, and click data for the merchandise by a user (including clicking actions of the user on the descriptive text). The description text for the commodity is generated by a text generation model based on the title information of the commodity and click data of a user for the commodity. For training the text generation model, a training data set may also be stored in the data storage device, wherein each training sample of the training data set comprises title information and associated target description text, and the target description text may be manually generated by a person based on the title information, i.e. the description text for the commodity is written as the target description text by reading commodity title information, related attributes of the commodity and detailed description.
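As an illustration of the training data described above, a training sample pairing title information with its manually written target description might be laid out as follows; all field names and values here are hypothetical, not the patent's actual storage schema:

```python
# Hypothetical training-sample layout; field names and values are assumptions
# for illustration only, not taken from the patent.
training_sample = {
    "title": "V-neck chiffon dress 2018 spring/summer new women's wear",
    "target_description": "This dress is made of high-quality lace fabric "
                          "and is cool and comfortable to wear.",
    "attributes": {"brand": "nike", "color": "white", "price": 600},
    "click_count": 0,  # user click data collected once the text is online
}
```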
The text information generating method of the present invention may be executed in a computing device. FIG. 2 illustrates a block diagram of a computing device 200 according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a first level cache 210 and a second level cache 212, a processor core 214, and registers 216. The example processor core 214 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 206 may include an operating system 220, one or more applications 222, and program data 224. The application 222 is in effect a plurality of program instructions for instructing the processor 204 to perform corresponding operations. In some implementations, the application 222 may be arranged to cause the processor 204 to operate with the program data 224 on the operating system.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to basic configuration 202 via bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. The example peripheral interface 244 may include a serial interface controller 254 and a parallel interface controller 256, which may be configured to facilitate communication via one or more I/O ports 258 and external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.). The example communication device 246 may include a network controller 260 that may be arranged to facilitate communication with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media in a modulated data signal, such as a carrier wave or other transport mechanism. A "modulated data signal" may be a signal that has one or more of its data set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or special purpose network, and wireless media such as acoustic, radio Frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In the computing device 200 according to the invention, the application 222 comprises a text information generating apparatus 400, and the apparatus 400 comprises a plurality of program instructions that may instruct the processor 204 to perform the text information generation method 300.
Fig. 3 shows a flow chart of a text information generation method 300 according to an embodiment of the invention. The method 300 is suitable for execution in a computing device, such as the computing device 200 described previously.
As shown in fig. 3, the method 300 starts at step S310, and in step S310, title information of a commodity is acquired. As described above, when a user browses a shopping website or queries the shopping website for merchandise information, the shopping website typically displays information about the merchandise to the user, including title information of the merchandise, descriptive text for the merchandise, and detailed pages of the merchandise. The computing device may obtain the merchandise information from a data store, either locally or in communication therewith. In addition, when training the text generation model, the title information of the commodity may be acquired from the training data set.
The merchandise in embodiments of the present invention includes, but is not limited to, any type of merchandise that may be offered to the marketplace for consumption or use by people. In some embodiments, the merchandise may include physical products such as clothing, coffee, automobiles, etc., and in other embodiments, the merchandise may include intangible products such as services, education, games, virtual resources, etc.
In step S320, the acquired title information is input to a text generation model, and a plurality of descriptive texts for the commodity are generated by the text generation model. The text generation model is a sequence-to-sequence (Seq2Seq) model, which typically includes an encoder that converts source text into a vector and a decoder that converts the vector into target text. In addition, the text generation model may also be an attention-based sequence-to-sequence model, i.e., a sequence-to-sequence model into which the attention mechanism is introduced.
In an embodiment of the invention, the text generation model comprises a title encoder adapted to encode the title information into a semantic vector for the title information, and a title decoder adapted to generate descriptive text for the commodity from the semantic vector. The title encoder and the title decoder may each employ a recurrent neural network (RNN), a gated recurrent unit (GRU), a long short-term memory network (LSTM), or other types of neural networks, as the invention is not limited in this regard.
The title encoder comprises a plurality of encoding units connected in sequence, each encoding unit outputs a state vector of a word in the title information, and the last encoding unit can also output a state vector corresponding to the whole title. The semantic vector may be a state vector corresponding to the whole headline, or a vector sequence including a state vector corresponding to each word and a state vector corresponding to the whole headline. The title decoder comprises a plurality of decoding units connected in sequence, wherein each decoding unit generates a word distribution vector for describing one position of the text, namely, a first decoding unit generates a word distribution vector corresponding to a first word of the text, a second decoding unit generates a word distribution vector corresponding to a second word of the text, and so on, and finally, the word distribution vector for describing each position of the text is obtained.
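The encoder/decoder structure just described can be sketched as a toy numpy model. This is not the patent's implementation: the sizes and the random parameter matrices are assumptions, and a real model would use learned RNN/GRU/LSTM cells rather than a single tanh step per unit.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HID = 20, 8   # toy vocabulary and hidden sizes (assumptions)

# Random stand-in parameters; a real model learns these.
W_enc = rng.normal(scale=0.1, size=(HID, HID + VOCAB))
W_dec = rng.normal(scale=0.1, size=(HID, HID))
W_out = rng.normal(scale=0.1, size=(VOCAB, HID))

def one_hot(idx):
    v = np.zeros(VOCAB)
    v[idx] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode(title_ids):
    """Each encoding unit consumes one title word and emits a state vector;
    the last state doubles as the semantic vector for the whole title."""
    h = np.zeros(HID)
    states = []
    for idx in title_ids:
        h = np.tanh(W_enc @ np.concatenate([h, one_hot(idx)]))
        states.append(h)
    return states, h

def decode(semantic_vec, length):
    """Each decoding unit emits a word distribution vector for one position
    of the description text."""
    h = semantic_vec
    dists = []
    for _ in range(length):
        h = np.tanh(W_dec @ h)
        dists.append(softmax(W_out @ h))
    return dists

states, semantic = encode([3, 7, 1])   # toy title word ids
dists = decode(semantic, length=4)     # 4 word distribution vectors
```

Each element of `dists` is a probability distribution over the vocabulary for one position of the description text, which is what the search step below consumes.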
After generating the word distribution vectors for each location, various search algorithms, such as a beam search (BeamSearch) algorithm, may be used to search the word distribution vectors for each location to generate a plurality of descriptive text for the good.
Assuming the beam search parameter is k, the k best decoding versions are maintained at each time step: at step t, the k highest-probability decodings are computed from the k step t-1 results and the step-t word distribution vector, until decoding reaches an end tag (for example, </s>), yielding k descriptive texts. That is, the beam search yields k descriptive texts and, at the same time, the text probability corresponding to each descriptive text.
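The beam search procedure described above can be sketched as follows; the word distributions in the usage line are toy values assumed for illustration:

```python
import math

def beam_search(dists, k, end_id):
    """Maintain the k highest-probability partial decodings; at step t, extend
    the k step t-1 hypotheses using the step-t word distribution vector."""
    beams = [([], 0.0)]                # (word ids, log probability)
    for dist in dists:
        candidates = []
        for seq, logp in beams:
            if seq and seq[-1] == end_id:   # finished hypotheses carry over
                candidates.append((seq, logp))
                continue
            for w, p in enumerate(dist):
                if p > 0:
                    candidates.append((seq + [w], logp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams                       # k texts with their log probabilities

# toy word distribution vectors for two positions (assumed values)
beams = beam_search([[0.1, 0.9], [0.6, 0.4]], k=2, end_id=-1)
```

Each returned pair carries both a decoded text (as word ids) and its log probability, matching the statement that the search yields the descriptive texts together with their text probabilities.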
In one embodiment, the method 300 further comprises: acquiring attribute information associated with the commodity, generating an attention vector according to the acquired attribute information of the commodity, inputting the generated attention vector to a title decoder, and generating a word distribution vector describing each position in the text by the title decoder according to the semantic vector and the attention vector. Specifically, the attention vector includes a plurality of dimensions corresponding to the attribute and attribute value of the commodity, and the attention vector may be input as an initialization state to the title decoder so that the title decoder focuses more on the related attribute of the commodity in generating the description text of the commodity.
The attribute information associated with the commodity is, for example, the brand, color, size, price, etc. of the commodity. Likewise, the computing device may obtain the attribute information for the commodity from a data store locally or communicatively coupled thereto. The attribute information of the commodity may be expressed in the form of a key-value (key-value), for example: brand-nike, color-red, size: s, m, l, xs, price-600, etc.
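One simple way to turn such key-value attributes into the attention (initialization) vector is a multi-hot encoding in which each dimension corresponds to one (attribute, value) pair, as the text suggests. The vocabulary of pairs below is a hypothetical example, not the patent's actual encoding:

```python
import numpy as np

# Hypothetical (attribute, value) -> dimension mapping; assumed for
# illustration only.
ATTR_DIMS = {("brand", "nike"): 0, ("color", "red"): 1,
             ("size", "s"): 2, ("price", "600"): 3}

def attention_vector(attributes, dim=4):
    """Encode key-value attribute pairs as a multi-hot vector that can be fed
    to the title decoder as its initialization state."""
    v = np.zeros(dim)
    for key, value in attributes.items():
        idx = ATTR_DIMS.get((key, str(value)))
        if idx is not None:
            v[idx] = 1.0
    return v

attn = attention_vector({"brand": "nike", "color": "red", "price": 600})
```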
After a plurality of descriptive texts for the commodity are generated by the text generation model, the descriptive texts may be sent to the client for display, for example, the descriptive text with the highest text probability may be sent to the client for display.
In addition, the text generation model may also be trained in advance before text processing using the text generation model. The training process is as follows:
1) The training data set is acquired, each training sample of the training data set comprises title information and associated target description text, and the target description text can be manually generated by a person based on the title information, namely, the description text for the commodity is written as the target description text by reading commodity title information, related attributes and detailed description of the commodity.
2) The title information in the training sample is input into a text generation model, and word distribution vectors are output by the text generation model.
3) And calculating the cross entropy loss of the word distribution vector and the target description text.
4) And according to the cross entropy loss, adopting a back propagation algorithm to adjust network parameters of the text generation model.
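Steps 1) to 4) above can be illustrated with a minimal numpy sketch. As an assumption for brevity, the decoder is replaced by a single softmax output layer over fixed stand-in hidden states, so the cross entropy loss and the backpropagation update can be shown end to end:

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB, HID = 6, 4                    # toy sizes (assumptions for illustration)
W = rng.normal(scale=0.1, size=(VOCAB, HID))  # output-layer weights to train
H = rng.normal(size=(3, HID))        # stand-in decoder hidden states
target = [2, 0, 5]                   # word ids of the target description text

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(W):
    # step 2): the model outputs a word distribution vector per position
    return [softmax(W @ h) for h in H]

def loss(dists):
    # step 3): cross entropy of the word distribution vectors and the target
    return -sum(np.log(d[t]) for d, t in zip(dists, target))

loss_before = loss(forward(W))
# step 4): adjust parameters by backpropagation; for a softmax output layer
# the gradient of the cross entropy w.r.t. the logits is (p - y)
for _ in range(100):
    for h, t in zip(H, target):
        p = softmax(W @ h)
        y = np.zeros(VOCAB)
        y[t] = 1.0
        W -= 0.2 * np.outer(p - y, h)   # dL/dW = (p - y) h^T
loss_after = loss(forward(W))
```

After the updates, the cross entropy of the predicted distributions against the target text is lower than before training, which is the whole point of step 4).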
In order to further improve the performance index of the text generation model, the embodiment of the present invention further combines the user behavior to train the text generation model, which is specifically described in steps S330 and S340.
Before the model continues to train, a plurality of description texts of the commodity can be put on-line, namely, each description text is returned to a corresponding user or users, and then click information of the user for the plurality of description texts is acquired in step S330. Here, the click information may be the number of clicks or the click rate corresponding to each descriptive text.
In step S340, the text generation model is trained at least according to the obtained click information to adjust network parameters of the text generation model. The training process is as follows:
1) Acquiring a target description text of the title information;
the target descriptive text is descriptive text associated with the title information in the training sample.
2) Calculating a first cross entropy loss of a word distribution vector output by the text generation model and the target description text;
The cross entropy loss between word distribution vectors and text may be calculated using existing techniques. In an embodiment of the present invention, the first cross entropy loss H(seq) may be expressed as:
H(seq) = -∑_{i=1}^{m} log pred_i(gold_i)
where pred_i denotes the distribution vector predicted for the i-th word of the description text, m is the number of words in the predicted description text, and gold denotes the target description text.
3) Calculating a second cross entropy loss of a predetermined number (e.g., t) of description texts with highest click rate and the title information;
how to calculate the cross entropy loss between texts, the prior art can be used. In an embodiment of the present invention, the second cross entropy loss H (ce) may be expressed as:
H(ce) = -a · ∑_{i=1}^{t} log p(desc_i, title)
where desc_i denotes the description text with the i-th highest click rate, title denotes the title information, and a is a preset weighting coefficient (which may be set empirically or experimentally). The probability values in the equation may be calculated based on the similarity between texts.
For example, if a certain title information corresponds to 10 description texts, t=4 description texts with the highest click rate may be obtained therefrom, and then the cross entropy loss of the 4 description texts and the title information is calculated.
4) And adjusting network parameters of the text generation model by taking the sum of the first cross entropy loss and the second cross entropy loss as a loss function value.
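The combined loss of steps 1) to 4) can be sketched as follows. The similarity-based probabilities are stand-in inputs, since the exact probability formula is not given in the text:

```python
import math

def first_loss(pred_dists, gold_ids):
    """H(seq): cross entropy of the word distribution vectors and the target
    description text (step 2)."""
    return -sum(math.log(d[g]) for d, g in zip(pred_dists, gold_ids))

def second_loss(click_rates, sim_probs, t, a):
    """H(ce) sketch (step 3): take the t descriptions with the highest click
    rate; sim_probs stand in for similarity-based probabilities p(desc_i,
    title), an assumption since the exact formula is not specified."""
    top = sorted(range(len(click_rates)),
                 key=lambda i: click_rates[i], reverse=True)[:t]
    return -a * sum(math.log(sim_probs[i]) for i in top)

def total_loss(pred_dists, gold_ids, click_rates, sim_probs, t=4, a=0.5):
    """Step 4): the loss function value is the sum of both losses."""
    return (first_loss(pred_dists, gold_ids)
            + second_loss(click_rates, sim_probs, t, a))

# toy inputs (assumptions for illustration)
L = total_loss(pred_dists=[[0.5, 0.5], [0.2, 0.8]], gold_ids=[0, 1],
               click_rates=[0.1, 0.9, 0.5], sim_probs=[0.5, 0.8, 0.6],
               t=2, a=1.0)
```

With t=2, only the two descriptions with the highest click rates contribute to the second term, mirroring the t=4-out-of-10 example in the text.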
In summary, according to the embodiment of the invention, after the text generation model generates description texts in multiple versions, those versions are put online; the user behavior on the online versions (users' clicks on the description texts) is then obtained and added into the loss function of the model for continued training, so that the model continuously evolves toward the results users need most.
For example, for a certain title, "V-neck chiffon dress 2018 spring/summer new women's wear, Korean-style lace trim, retro white, elegant lace skirt", the text generation model according to the invention generates results in 3 versions:
a) The dress adopts high-quality lace fabric, and is very cool and comfortable after being put on.
b) The dress adopts a high-quality cotton-linen fabric, and is comfortable and fresh to wear.
c) This white dress is suitable for spring and summer wear; it adopts a high-quality lace fabric and has a full retro style when worn.
After these versions are put online, if the click rate of version c) is highest, subsequent model training will continue to be biased primarily towards version c).
Fig. 4 shows a schematic diagram of a text information generating apparatus 400 according to an embodiment of the invention. Referring to fig. 4, the apparatus 400 includes:
a first obtaining module 410, adapted to obtain title information of the commodity;
a text generation module 420 adapted to input the title information into a text generation model to generate a plurality of descriptive text for the item;
a second obtaining module 430, adapted to obtain click information of the plurality of descriptive texts;
the parameter adjustment module 440 is adapted to train the text generation model according to the obtained click information to adjust network parameters of the text generation model.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of carrying out the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (8)

1. A text information generation method, comprising:
acquiring title information of a commodity;
inputting the title information into a text generation model to generate a plurality of descriptive texts for the commodity, wherein the text generation model comprises a title encoder and a title decoder, the title encoder is suitable for encoding the title information into semantic vectors for the title information, the title decoder is suitable for generating word distribution vectors for each position of the descriptive texts at least according to the semantic vectors, and generating a plurality of descriptive texts for the commodity according to the word distribution vectors;
acquiring click information of the plurality of descriptive texts;
training the text generation model at least according to the obtained click information to adjust network parameters of the text generation model, wherein the training comprises:
acquiring a target description text of the title information, wherein the target description text is a description text associated with the title information in a training sample;
calculating a first cross-entropy loss between the word distribution vectors and the target descriptive text;
calculating a second cross-entropy loss between the title information and the descriptive text with the highest click-through rate among a preset number of the descriptive texts; and
adjusting the network parameters of the text generation model using the sum of the first cross-entropy loss and the second cross-entropy loss as the loss function value.
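As one illustrative sketch (not the patent's implementation) of the two-term loss in claim 1, the cross-entropy terms can be computed over per-position word distributions; the toy distributions, token ids, and the reuse of the same distributions for the click-based term are all hypothetical simplifications:

```python
import math

def cross_entropy(dist_seq, target_ids):
    """Sum of -log p(target token) over each position's word distribution."""
    return -sum(math.log(dist[tok]) for dist, tok in zip(dist_seq, target_ids))

# Hypothetical word distribution vectors emitted by the title decoder
# for a two-position output over a toy vocabulary of size 3.
dists = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]

target_text = [0, 1]       # target descriptive text from the training sample
top_clicked_text = [0, 0]  # descriptive text with the highest click rate

first_loss = cross_entropy(dists, target_text)        # vs. target text
second_loss = cross_entropy(dists, top_clicked_text)  # vs. top-clicked text
loss = first_loss + second_loss  # sum used as the loss function value
```

In this reading, the first term supervises the model with annotated training pairs, while the second term reinforces whichever generated text users actually clicked.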
2. The method of claim 1, further comprising:
acquiring attribute information associated with the commodity;
generating an attention vector according to the attribute information;
inputting the attention vector into the title decoder, so that the title decoder generates the word distribution vector according to the semantic vector and the attention vector.
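A minimal, non-authoritative sketch of the attention vector of claim 2: attribute embeddings are scored against the decoder state and combined by their softmax weights. The embeddings, state, and dot-product scoring are illustrative assumptions, not the claimed implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_vector(attr_embeddings, decoder_state):
    """Weighted sum of attribute embeddings, weights from dot-product scores."""
    weights = softmax([dot(e, decoder_state) for e in attr_embeddings])
    dim = len(decoder_state)
    return [sum(w * e[i] for w, e in zip(weights, attr_embeddings))
            for i in range(dim)]

# Hypothetical 2-d embeddings for two attributes (e.g. brand, color).
attrs = [[1.0, 0.0], [0.0, 1.0]]
state = [2.0, 0.0]  # current decoder hidden state
ctx = attention_vector(attrs, state)  # attention vector fed to the decoder
```

Here the attribute most aligned with the decoder state dominates the attention vector, which is the intuition behind conditioning generation on commodity attributes.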
3. The method of claim 2, wherein the attribute information associated with the commodity comprises at least one of a brand, a color, a size, and a price of the commodity.
4. The method of claim 1 or 2, wherein the generating a plurality of descriptive text for the article from the word distribution vector comprises:
searching the word distribution vector at each position using a beam search algorithm, thereby generating the plurality of descriptive texts for the commodity.
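One way to read claim 4 is as beam search over the per-position word distribution vectors. This toy sketch keeps the two most probable prefixes at each step; the distributions and beam width are illustrative, not from the patent:

```python
import math

def beam_search(dist_seq, beam_width=2):
    """Keep the beam_width highest log-probability prefixes at each position."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for dist in dist_seq:
        candidates = [(seq + [tok], lp + math.log(p))
                      for seq, lp in beams
                      for tok, p in enumerate(dist) if p > 0]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams  # several candidate descriptive texts, best first

# Hypothetical word distributions for two positions over a vocabulary of 3.
dists = [[0.6, 0.3, 0.1], [0.1, 0.5, 0.4]]
results = beam_search(dists, beam_width=2)
```

Because the search retains several beams rather than one greedy path, it naturally yields the "plurality of descriptive texts" the claim requires.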
5. The method of claim 1 or 2, wherein the title encoder and the title decoder employ at least one of a recurrent neural network (RNN), a gated recurrent unit (GRU), or a long short-term memory network (LSTM).
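As a toy illustration of one cell type named in claim 5, a scalar GRU step can be written out directly; the weights are arbitrary constants and the scalar state is a deliberate simplification, not a trained encoder:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One GRU step for scalar input/state; w maps weight names to floats."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])          # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])          # reset gate
    n = math.tanh(w["wn"] * x + w["un"] * (r * h) + w["bn"])  # candidate state
    return (1 - z) * h + z * n

# Encode a title as a sequence of scalar token features; the final hidden
# state plays the role of the semantic vector from claim 1.
weights = {k: 0.5 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wn", "un", "bn")}
h = 0.0
for tok in [1.0, -1.0, 0.5]:
    h = gru_cell(tok, h, weights)
```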
6. The method of claim 1, further comprising:
and sending the description text to a client for display.
7. A text information generating apparatus comprising:
the first acquisition module is suitable for acquiring title information of the commodity;
a text generation module adapted to input the title information into a text generation model to generate a plurality of descriptive texts for the commodity, the text generation model comprising a title encoder adapted to encode the title information into a semantic vector for the title information and a title decoder adapted to generate a word distribution vector for each position of the descriptive text at least according to the semantic vector and to generate the plurality of descriptive texts for the commodity according to the word distribution vectors;
the second acquisition module is suitable for acquiring click information of the plurality of descriptive texts;
a parameter adjustment module adapted to acquire a target descriptive text of the title information, wherein the target descriptive text is a descriptive text associated with the title information in a training sample; calculate a first cross-entropy loss between the word distribution vectors and the target descriptive text; calculate a second cross-entropy loss between the title information and the descriptive text with the highest click-through rate among a preset number of the descriptive texts; and adjust network parameters of the text generation model using the sum of the first cross-entropy loss and the second cross-entropy loss as the loss function value.
8. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-6.
CN201811377243.1A 2018-11-19 2018-11-19 Text information generation method and device and computing equipment Active CN111209725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811377243.1A CN111209725B (en) 2018-11-19 2018-11-19 Text information generation method and device and computing equipment


Publications (2)

Publication Number Publication Date
CN111209725A CN111209725A (en) 2020-05-29
CN111209725B true CN111209725B (en) 2023-04-25

Family

ID=70787604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811377243.1A Active CN111209725B (en) 2018-11-19 2018-11-19 Text information generation method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN111209725B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157910B (en) * 2021-04-28 2024-05-10 北京小米移动软件有限公司 Commodity description text generation method, commodity description text generation device and storage medium
CN115250365A (en) * 2021-04-28 2022-10-28 京东科技控股股份有限公司 Commodity text generation method and device, computer equipment and storage medium
CN113256379A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for correlating shopping demands for commodities

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017162074A1 (en) * 2016-03-25 2017-09-28 阿里巴巴集团控股有限公司 Method, apparatus and device for mapping products
CN107526725A (en) * 2017-09-04 2017-12-29 北京百度网讯科技有限公司 The method and apparatus for generating text based on artificial intelligence
CN107577763A (en) * 2017-09-04 2018-01-12 北京京东尚科信息技术有限公司 Search method and device
CN107977363A (en) * 2017-12-20 2018-05-01 北京百度网讯科技有限公司 Title generation method, device and electronic equipment
CN108024005A (en) * 2016-11-04 2018-05-11 北京搜狗科技发展有限公司 Information processing method, device, intelligent terminal, server and system
CN108052512A (en) * 2017-11-03 2018-05-18 同济大学 An image description generation method based on a deep attention mechanism
CN108108449A (en) * 2017-12-27 2018-06-01 哈尔滨福满科技有限责任公司 An implementation method and system for a medical-domain question answering system based on multi-source heterogeneous data
CN108319585A (en) * 2018-01-29 2018-07-24 北京三快在线科技有限公司 Data processing method and device, electronic equipment, computer-readable medium
CN108763211A (en) * 2018-05-23 2018-11-06 中国科学院自动化研究所 Automatic abstracting method and system fusing implied knowledge

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10733380B2 (en) * 2017-05-15 2020-08-04 Thomson Reuters Enterprise Center Gmbh Neural paraphrase generator


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Shuai; Zhao Xiang; Li Bo; Ge Bin; Tang Daquan. TP-AS: A Two-Phase Automatic Summarization Method for Long Text. Journal of Chinese Information Processing. 2018, (06), full text. *
Zheng Xiongfeng; Ding Lixin; Wan Runze. A Hierarchical BGRU Model Based on User and Product Attention Mechanisms. Computer Engineering and Applications. 2017, (11), full text. *

Also Published As

Publication number Publication date
CN111209725A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
JP7296387B2 (en) Content generation method and apparatus
US10599731B2 (en) Method and system of determining categories associated with keywords using a trained model
CN106776673B (en) Multimedia document summarization
US11830033B2 (en) Theme recommendation method and apparatus
CN111209725B (en) Text information generation method and device and computing equipment
CN107480158A (en) The method and system of the matching of content item and image is assessed based on similarity score
CN107644036B (en) Method, device and system for pushing data object
US9727906B1 (en) Generating item clusters based on aggregated search history data
US20180268307A1 (en) Analysis device, analysis method, and computer readable storage medium
CN107958385B (en) Bidding based on buyer defined function
CN103268317A (en) System and method for semantically annotating images
JP6698040B2 (en) Generation device, generation method, and generation program
JP6405343B2 (en) Information processing apparatus, information processing method, and program
US10380623B2 (en) System and method for generating an advertisement effectiveness performance score
WO2017107010A1 (en) Information analysis system and method based on event regression test
CN111581926A (en) Method, device and equipment for generating file and computer readable storage medium
WO2023020160A1 (en) Recommendation method and apparatus, training method and apparatus, device, and recommendation system
US20180113919A1 (en) Graphical user interface rendering predicted query results to unstructured queries
JP6037540B1 (en) Search system, search method and program
CN113344648B (en) Advertisement recommendation method and system based on machine learning
US20210233150A1 (en) Trending item recommendations
CN111797622B (en) Method and device for generating attribute information
CN112200614A (en) Advertisement text implanting and displaying method and corresponding device, equipment and medium
US10496698B2 (en) Method and system for determining image-based content styles
CN111309951A (en) Advertisement words obtaining method and device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant