WO2003056451A1 - Text generation method and text generator (Procede de generation de texte et generateur de texte)
- Publication number
- WO2003056451A1 (PCT/JP2002/013185)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- word
- keyword
- dependency
- text generation
- Prior art date
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F40/00—Handling natural language data
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
- G06F40/268—Morphological analysis
- G06F40/53—Processing of non-Latin text
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
Definitions
- the present invention relates to a natural language processing method and apparatus.
- its feature lies in the method of generating text from several keywords.
- technology that generates natural text from several input keywords will help those who are not good at writing sentences, such as non-native speakers.
- conventional techniques for generating natural text from one or more input keywords include template-based sentence generation and similar approaches.
- the present invention was created against this conventional background, and provides a method and an apparatus for generating natural text from one or more keywords.
- a text is generated based on the following steps.
- the method includes an extraction step of extracting texts or phrases related to the keywords from the database.
- the database contains many example sentences; for example, texts and phrases containing the word "her" are searched for and extracted.
- an optimal text using the input keyword is generated.
- in this text generation step, for example, when a text including "she", "to", and "go" is in the database, "She went to the park" is generated.
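The extraction and generation steps can be pictured with a minimal sketch. The example-sentence database, the substring containment test, and the "cover the most keywords" selection rule below are all illustrative assumptions, not the patented method itself:

```python
def extract(keywords, database):
    """Return every stored sentence that contains at least one keyword."""
    return [s for s in database if any(k in s for k in keywords)]

def generate(keywords, database):
    """Pick the matching sentence that covers the most keywords."""
    candidates = extract(keywords, database)
    if not candidates:
        return None
    return max(candidates, key=lambda s: sum(k in s for k in keywords))

# a toy stand-in for the text database (13)
database = [
    "She went to the park",
    "He went to school",
    "She likes music",
]
print(generate(["She", "park"], database))  # -> She went to the park
```

A real system combines fragments from several sentences rather than returning one whole sentence, as the later sections on dependency structures describe.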
- the extracted text may be subjected to morphological analysis and syntax analysis to obtain its dependency structure.
- the dependency probability of the entire text may be obtained using the dependency model, and the structure with the highest probability may be generated as the optimal text.
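As a rough illustration of scoring by a dependency model, a candidate structure can be represented as a list of (modifier, head, probability) arcs whose product gives the whole-text dependency probability; the arcs and probability values below are invented for the sketch:

```python
from math import prod

def tree_score(arcs):
    # dependency probability of the whole text = product over its arcs
    return prod(p for _, _, p in arcs)

# two candidate attachments for the same keywords; probabilities invented
candidates = {
    "She went to the park": [("She", "went", 0.9), ("to the park", "went", 0.8)],
    "To the park went she": [("She", "park", 0.2), ("to the park", "went", 0.8)],
}
best = max(candidates, key=lambda text: tree_score(candidates[text]))
print(best)  # -> She went to the park
```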
- whether or not there is a word to be complemented between any two keywords can be determined using a learning model.
- words are complemented in order of decreasing completion probability under the learning model, and the process is repeated until, between every pair of keywords, the most probable outcome is that no word needs to be complemented. Since complemented words are incorporated into the keyword set, further complementation between them is possible. Suitable complementation can thus be realized, and natural text can be generated even when few keywords are given.
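The complementation loop might be sketched as follows, with a hard-coded table standing in for the learned ME model; the table entries, and the encoding of "no word to insert" as a `None` filler, are assumptions:

```python
# (left, right) -> (most probable filler, its probability); None means the
# most probable outcome is that nothing should be inserted between the pair
FILLER = {
    ("She", "go"): ("to", 0.7),
    ("to", "go"): (None, 0.9),
    ("She", "to"): (None, 0.8),
}

def complement(words):
    """Insert fillers until no adjacent pair wants one."""
    changed = True
    while changed:
        changed = False
        for i in range(len(words) - 1):
            filler, _ = FILLER.get((words[i], words[i + 1]), (None, 1.0))
            if filler is not None:
                words.insert(i + 1, filler)  # the filler joins the keywords
                changed = True
                break
    return words

print(complement(["She", "go"]))  # -> ['She', 'to', 'go']
```

Note how the inserted "to" itself becomes a keyword, so the later pairs ("She", "to") and ("to", "go") are re-examined, matching the iterative behaviour described above.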
- the generated text will be text that conforms to the characteristics.
- the present invention can also be provided as a text generation device that generates a text or a sentence.
- the text generating apparatus includes: an input unit for inputting one or more keywords; a text database composed of a plurality of texts; an extraction unit for searching the text database and extracting texts or phrases related to the keywords; and a text generation unit for generating an optimal text from the input keywords by combining the extracted texts or phrases.
- the text generating means may include an analyzing means for morphologically and syntactically analyzing the extracted text to obtain its dependency structure, and a forming means for forming a dependency structure including the keywords.
- the dependency structure forming means may calculate the dependency probability of the entire text using the dependency model and generate the highest-probability structure as the optimal text.
- the text generation means can also generate an optimal text with a natural sentence order by using the word order model during or after formation of the dependency structure.
- a word interpolating means may be included that repeats interpolation until the most probable outcome is that no word remains to be interpolated.
- the text database may contain texts having a characteristic text pattern, and the text generating means may generate text reflecting that characteristic.
- FIG. 1 is an explanatory diagram of a text generation device according to the present invention.
- FIG. 2 is a partial graph of a dependency structure analyzed by the text generation unit.
- FIG. 3 shows the dependency tree generated by the text generator.
- FIG. 4 shows a dependency structure tree for another example sentence.
- FIG. 5 shows an example of calculating the probability that the order of dependent clauses is appropriate.
- the reference numerals designate the following parts. 1: text generator; 2: input keyword; 3: output text; 10: keyword input section; 11: text/phrase search and extraction section; 12: text generation section; 12a: analysis section; 12b: forming section; 12c: evaluation section; 13: database.

Preferred mode for carrying out the invention
- FIG. 1 is an explanatory diagram of a text generation device (1) according to the present invention.
- the apparatus includes a database (13) together with a keyword input unit (10), a text phrase search and extraction unit (11), and a text generation unit (12).
- the text database (13) is provided with a plurality of texts in advance as tables, and the contents of the tables can be changed as appropriate. Various texts can be generated by changing the contents, as described later.
- the text/phrase search and extraction section (11) searches for texts and phrases containing at least one of the keywords and extracts them.
- the text generator (12) combines the extracted texts and phrases and outputs a natural text, in this case "She went to the park" (3).
- the text phrase search and extraction unit (11) extracts sentences containing n keywords from the database (13). Here, it is sufficient to include at least one keyword.
- the extracted sentence is sent to the text generator (12).
- the text generation unit (12) consists of an analysis unit (12a), a formation unit (12b), and an evaluation unit (12c).
- the analysis unit (12a) first performs morphological analysis and syntax analysis.
- a morphological analysis method using the ME model filed by the applicant of the present invention in Japanese Patent Application No. 2001-139563 may be used.
- the likelihood as a morpheme is represented as a probability.
- morphological analysis can be reduced to the problem of assigning each character string constituting the sentence one of two identification codes, "1" or "0". By subdividing "1" according to the number of grammatical attributes, it becomes the problem of assigning each character string an identification code from "0" to "n".
- the likelihood that a character string is a morpheme with a given grammatical attribute is obtained by applying it to the probability distribution function of the ME model.
- processing is performed by finding regularity in the probabilities representing this likelihood.
- the features used include the character type of the string of interest, whether the string is registered in the dictionary, the change in character type from the previous morpheme, the part of speech of the previous morpheme, and so on. Given a sentence, it is divided into morphemes and grammatical attributes are assigned so that the product of probabilities is maximized over the entire sentence.
- a publicly known algorithm can be appropriately used for searching for the optimal solution.
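The search for the segmentation that maximizes the product of probabilities can be illustrated with a small dynamic program; the lexicon and its probabilities are invented stand-ins for the scores a trained ME model would supply:

```python
# toy lexicon: morpheme -> probability (invented values)
LEXICON = {"she": 0.9, "s": 0.1, "he": 0.2, "went": 0.8, "wen": 0.1, "t": 0.1}

def best_segmentation(s):
    """Segmentation of s maximizing the product of morpheme probabilities."""
    # best[i] = (score, segmentation) for the prefix s[:i]
    best = [(1.0, [])] + [(0.0, None)] * len(s)
    for i in range(1, len(s) + 1):
        for j in range(i):
            piece = s[j:i]
            if piece in LEXICON and best[j][1] is not None:
                score = best[j][0] * LEXICON[piece]
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [piece])
    return best[len(s)][1]

print(best_segmentation("shewent"))  # -> ['she', 'went']
```

Even though "s" + "he" + "went" is also a valid split, its product (0.1 × 0.2 × 0.8) loses to "she" + "went" (0.9 × 0.8), mirroring the whole-sentence maximization described above.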
- the morphological analysis method using the ME model is highly advantageous; for example, effective morphological analysis can be performed even when unknown words are included.
- the above method is particularly effective but not essential; any morphological analysis method can be used.
- an analysis method using the ME model can also be used for syntax analysis in the analysis unit (12a); other parsing methods may also be used. The following method is shown as an example.
- the database (13) can be referenced from the text generator (12), and the ME model can learn from a plurality of texts contained in the database.
- each dependent element has exactly one head element.
- a phrase is divided into a part corresponding to its head and a part corresponding to its final particle and conjugation form; the distance between phrases and the presence or absence of punctuation are also used as features alongside these.
- with this method, accuracy at the same level or better is obtained with learning data about one tenth the size required by the conventional decision tree / maximum likelihood estimation method. This method achieves the highest level of accuracy among learning-based systems.
- the analysis unit (12a) accurately analyzes the text retrieved and extracted from the database (13) and obtains its dependency structure.
- the dependency structure can be represented as a subgraph.
- in this graph structure, each node is a clause and each arc is a dependency.
- the process shifts to the processing in the formation unit (12b) of the text generation unit (12).
- the analysis and formation in the text generation unit (12) are integrated processes as described below, and operate in cooperation with each other.
- a dependency structure tree containing the n input words is generated by combining the above subgraphs.
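Combining subgraphs into a candidate tree might look like the following sketch, where each subgraph is a set of (modifier, head) arcs taken from analysed example sentences; the overlap test and the single-head check are simplifying assumptions:

```python
def merge(sub_a, sub_b):
    """Union two dependency subgraphs if they share a node and the
    result keeps exactly one head per modifier; otherwise None."""
    nodes_a = {n for arc in sub_a for n in arc}
    nodes_b = {n for arc in sub_b for n in arc}
    if nodes_a & nodes_b:  # only merge subgraphs that overlap
        merged = sub_a | sub_b
        modifiers = [m for m, _ in merged]
        if len(modifiers) == len(set(modifiers)):  # one head per modifier
            return merged
    return None

sub1 = {("She", "went")}
sub2 = {("to the park", "went")}
print(sorted(merge(sub1, sub2)))
# -> [('She', 'went'), ('to the park', 'went')]
```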
- one of the two generated trees (FIGS. 3a and 3b) is selected, again using the above dependency model.
- in Japanese, restrictions on word order are relatively lenient, so once the dependency relations are determined, a result close to natural text can be obtained.
- the language targeted by the present invention is not necessarily limited to Japanese; it may be applied to other languages.
- the words can be rearranged as follows.
- a natural sentence order is produced by rearranging the highest-priority tree, and the result is output.
- a word order model using the ME model generates a natural sequence of clauses from the dependency structure.
- the word order model can also be learned by referring to the database (13).
- word order here means the order among dependents, that is, the order of the phrases that modify the same phrase.
- phrases with long modifiers tend to come before shorter ones, and phrases containing demonstratives such as "it" tend to come earlier.
- a method has been devised for learning from a given text the relationship between the above elements and word order tendencies, that is, their regularity.
- this method learns from the training text not only which elements contribute to determining word order but also to what extent, and which combinations of elements correlate with which order tendencies. The degree of contribution of each element is learned efficiently using the ME model. Regardless of the number of dependent clauses, two are picked up at a time and their order is learned.
- when generating a sentence, the learned model takes the clauses in a dependency relationship as input and determines the order of the dependent clauses.
- the word order is determined by the following procedure.
- the probability that the order of two dependent clauses is appropriate is determined using the learned model, by reducing the question to assigning "0" or "1" (inappropriate or appropriate) and applying the ME model's probability distribution function.
- the overall probability is calculated as the product of the probabilities that the order is appropriate when two dependency clauses are picked up.
- Figure 5 shows an example (50) of calculating the probability that the order of clauses is appropriate.
- the probability that the word order "Yesterday", "Taro is" is appropriate is represented as "P*(Yesterday, Taro is)"; this probability is 0.6.
- the probability of the word order (51) in the first row of FIG. 5 is 0.336.
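A small sketch of this calculation, assuming three dependent clauses: the figure supplies P*(Yesterday, Taro is) = 0.6 and the overall value 0.336, while the third clause ("to the park") and the remaining pairwise probabilities (0.7 and 0.8) are assumptions chosen so that the product matches:

```python
from itertools import combinations
from math import prod

# pairwise "this order is appropriate" probabilities; only the first
# value is given in FIG. 5, the other entries are assumed
PAIR_P = {
    ("Yesterday", "Taro is"): 0.6,
    ("Yesterday", "to the park"): 0.7,
    ("Taro is", "to the park"): 0.8,
}

def order_probability(clauses):
    # overall probability = product over all pairs of dependent clauses
    return prod(PAIR_P[pair] for pair in combinations(clauses, 2))

p = order_probability(["Yesterday", "Taro is", "to the park"])
print(round(p, 3))  # -> 0.336
```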
- the word order model includes generalized nodes; by presenting a generalized node as such, places where a person's name, place name, date, or the like is easily inserted can be identified.
- in the above, the dependency structure is input to the word order model, but in an embodiment of the present invention the word order model can also be used during formation of the dependency structure.
- a plurality of texts that are considered optimal by the dependency model, the word order model, and the like are formed as candidates.
- these can be output directly from the text generation device (1), but in the following, an evaluation unit (12c) is further arranged in the text generation unit (12) to evaluate and reorder the text candidates.
- the evaluator (12c) evaluates text candidates based on various information such as the order of the input keywords, the frequency of extracted patterns, and scores calculated from the dependency model and word order model.
- the evaluation section (12c) can also refer to the database (13).
- by the function of this evaluation unit (12c), a plurality of texts considered particularly optimal among the natural-text candidates can be output, for example in ranked order.
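The ranking performed by the evaluation unit could be sketched as below; the particular scores and the multiplicative combination are assumptions, since the text does not fix a weighting scheme:

```python
# each candidate: (text, dependency score, word order score, pattern frequency)
candidates = [
    ("She went to the park", 0.72, 0.9, 0.5),
    ("To the park she went", 0.72, 0.4, 0.1),
]

def combined(candidate):
    _, dep, order, freq = candidate
    return dep * order * freq  # naive multiplicative combination (assumed)

# output candidates in ranked order, best first
ranked = sorted(candidates, key=combined, reverse=True)
print(ranked[0][0])  # -> She went to the park
```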
- the text generation device (1) according to the present invention can also be incorporated into another language processing system; in that case, a plurality of outputs may be produced as described above.
- a configuration may be adopted in which candidates ranked above a certain position, or whose probability or score exceeds a threshold, are output for manual selection.
- in the above configuration, only the candidates formed in the formation unit (12b) are input to the evaluation unit (12c). However, the evaluation unit (12c) may further analyze a whole passage consisting of multiple texts, and select one candidate for each text by evaluating which candidate fits the flow of the whole.
- in this way, natural text can be generated, with reference to the database (13), using a configuration different from the related art.
- the present invention also provides a complementing method for cases where there are not enough keywords. That is, when n keywords are input, the gaps between the words are complemented using the ME model: two of the n keywords are input to the model, and the gap between those two keywords is complemented.
- the above complementation technique, or the presentation of multiple texts with selection by the user, can be sufficiently effective.
- keywords are appropriately extracted from human utterances, a new sentence is created from them, and this is repeated; typical information may also be supplemented.
- a system that automatically creates new stories may be realized by combining the above complementation techniques. For example, when "Grandfather / Grandmother / Mountain / Turtle" is entered and at least the tales of Momotaro and Urashima Taro are prepared in the database, a new story similar to, but different from, the two tales can be created. In this case, new words complemented and reconstructed as keywords might be "river, peach, Ryugujo". The more stories prepared in the database, the more novel the produced stories, and the harder it becomes to recognize their relationship to the original texts.
- given a sentence and its important keywords, a sentence of appropriate length including the keywords can be generated, realizing a composition system. If the result is shorter than the original sentence, it is a summary; typical information may also be added to generate a more detailed sentence. Unlike conventional summarization systems, sentences are generated afresh from the important keywords, so more natural summaries are obtained.
- keywords are extracted from a text, and text is generated anew based on those keywords.
- text can thus be rewritten into expressions specific to a given database; for example, if a writer's novels form the database, sentences can be rewritten in that writer's style.
- the automatically generated text reflects the database's text patterns, so concise and individualized text can also be generated.
- the present invention has the above configuration, and thus has the following effects.
- the extraction step extracts text and phrases from the database. By combining extracted texts or phrases, it is possible to generate an optimal text using the input keyword.
- morphological and syntax analysis of the extracted text to obtain its dependency structure enables more natural and accurate text generation. Further, in the process of forming a dependency structure including the keywords, the dependency probability of the entire text is obtained using the dependency model and the highest-probability text is generated as the optimal one, so even more natural text results. For word order, which was difficult with conventional configurations, the word order model can be used to generate a natural sequence.
- simply by providing texts with a characteristic pattern in the database, text reflecting that characteristic can be generated.
- as described above, the present invention provides an excellent text generation method and apparatus, and contributes to the improvement of natural language processing technology.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Machine Translation (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02786125A EP1469398A4 (en) | 2001-12-27 | 2002-12-17 | TEXT GENERATION METHOD AND TEXT GENERATOR |
US10/500,243 US20050050469A1 (en) | 2001-12-27 | 2002-12-17 | Text generating method and text generator |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001-395618 | 2001-12-27 | ||
JP2001395618A JP3921523B2 (ja) | 2001-12-27 | 2001-12-27 | テキスト生成方法及びテキスト生成装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003056451A1 true WO2003056451A1 (fr) | 2003-07-10 |
Family
ID=19189012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2002/013185 WO2003056451A1 (fr) | 2001-12-27 | 2002-12-17 | Procede de generation de texte et generateur de texte |
Country Status (4)
Country | Link |
---|---|
US (1) | US20050050469A1 (ja) |
EP (1) | EP1469398A4 (ja) |
JP (1) | JP3921523B2 (ja) |
WO (1) | WO2003056451A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642324A (zh) * | 2021-08-20 | 2021-11-12 | 北京百度网讯科技有限公司 | 文本摘要生成方法、装置、电子设备及存储介质 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4085156B2 (ja) * | 2002-03-18 | 2008-05-14 | 独立行政法人情報通信研究機構 | テキスト生成方法及びテキスト生成装置 |
JP3790825B2 (ja) * | 2004-01-30 | 2006-06-28 | 独立行政法人情報通信研究機構 | 他言語のテキスト生成装置 |
US8065154B2 (en) * | 2004-07-30 | 2011-11-22 | The Research Foundation of State Univesity of New York | Augmentative communications device for the speech impaired using commercial-grade technology |
JP4524640B2 (ja) * | 2005-03-31 | 2010-08-18 | ソニー株式会社 | 情報処理装置および方法、並びにプログラム |
US8862591B2 (en) * | 2006-08-22 | 2014-10-14 | Twitter, Inc. | System and method for evaluating sentiment |
US8756527B2 (en) * | 2008-01-18 | 2014-06-17 | Rpx Corporation | Method, apparatus and computer program product for providing a word input mechanism |
US8768852B2 (en) * | 2009-01-13 | 2014-07-01 | Amazon Technologies, Inc. | Determining phrases related to other phrases |
US9569770B1 (en) | 2009-01-13 | 2017-02-14 | Amazon Technologies, Inc. | Generating constructed phrases |
JP5390944B2 (ja) * | 2009-06-08 | 2014-01-15 | アクトーム総合研究所株式会社 | プロジェクト管理情報を用いたドキュメント情報生成装置およびドキュメント情報生成用プログラム |
US9298700B1 (en) * | 2009-07-28 | 2016-03-29 | Amazon Technologies, Inc. | Determining similar phrases |
US10007712B1 (en) | 2009-08-20 | 2018-06-26 | Amazon Technologies, Inc. | Enforcing user-specified rules |
US8799658B1 (en) | 2010-03-02 | 2014-08-05 | Amazon Technologies, Inc. | Sharing media items with pass phrases |
JP5630138B2 (ja) * | 2010-08-12 | 2014-11-26 | 富士ゼロックス株式会社 | 文作成プログラム及び文作成装置 |
US9678993B2 (en) | 2013-03-14 | 2017-06-13 | Shutterstock, Inc. | Context based systems and methods for presenting media file annotation recommendations |
CN105550372A (zh) * | 2016-01-28 | 2016-05-04 | 浪潮软件集团有限公司 | 一种语句训练装置、方法和信息提取系统 |
JP6647713B2 (ja) | 2016-06-03 | 2020-02-14 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 請求項中のキーワードの抽出 |
JP2018010409A (ja) * | 2016-07-12 | 2018-01-18 | Supership株式会社 | 情報処理装置及びプログラム |
US10810260B2 (en) * | 2018-08-28 | 2020-10-20 | Beijing Jingdong Shangke Information Technology Co., Ltd. | System and method for automatically generating articles of a product |
CN109800421A (zh) * | 2018-12-19 | 2019-05-24 | 武汉西山艺创文化有限公司 | 一种游戏剧本生成方法及其装置、设备、存储介质 |
WO2020139865A1 (en) * | 2018-12-24 | 2020-07-02 | Conversica, Inc. | Systems and methods for improved automated conversations |
JP2021047817A (ja) * | 2019-09-20 | 2021-03-25 | 富士ゼロックス株式会社 | 出力装置、及び出力プログラム |
JP7345034B1 (ja) | 2022-10-11 | 2023-09-14 | 株式会社ビズリーチ | 文書作成支援装置、文書作成支援方法及び文書作成支援プログラム |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05250407A (ja) | 1992-03-10 | 1993-09-28 | Hitachi Ltd | 手話変換装置および方法 |
JPH08249331A (ja) * | 1995-03-09 | 1996-09-27 | Sharp Corp | 文書処理装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5699441A (en) * | 1992-03-10 | 1997-12-16 | Hitachi, Ltd. | Continuous sign-language recognition apparatus and input apparatus |
US5887069A (en) * | 1992-03-10 | 1999-03-23 | Hitachi, Ltd. | Sign recognition apparatus and method and sign translation system using same |
JP3385146B2 (ja) * | 1995-06-13 | 2003-03-10 | シャープ株式会社 | 会話文翻訳装置 |
KR100318573B1 (ko) * | 1996-10-16 | 2001-12-28 | 마찌다 가쯔히꼬 | 문자 입력 장치 및 문자 입력 프로그램을 기억한 기록 매체 |
US6862566B2 (en) * | 2000-03-10 | 2005-03-01 | Matushita Electric Industrial Co., Ltd. | Method and apparatus for converting an expression using key words |
US7177797B1 (en) * | 2000-08-31 | 2007-02-13 | Semantic Compaction Systems | Linguistic retrieval system and method |
US7027974B1 (en) * | 2000-10-27 | 2006-04-11 | Science Applications International Corporation | Ontology-based parser for natural language processing |
US6904428B2 (en) * | 2001-04-18 | 2005-06-07 | Illinois Institute Of Technology | Intranet mediator |
US7003444B2 (en) * | 2001-07-12 | 2006-02-21 | Microsoft Corporation | Method and apparatus for improved grammar checking using a stochastic parser |
US6820075B2 (en) * | 2001-08-13 | 2004-11-16 | Xerox Corporation | Document-centric system with auto-completion |
-
2001
- 2001-12-27 JP JP2001395618A patent/JP3921523B2/ja not_active Expired - Lifetime
-
2002
- 2002-12-17 US US10/500,243 patent/US20050050469A1/en not_active Abandoned
- 2002-12-17 EP EP02786125A patent/EP1469398A4/en not_active Withdrawn
- 2002-12-17 WO PCT/JP2002/013185 patent/WO2003056451A1/ja active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05250407A (ja) | 1992-03-10 | 1993-09-28 | Hitachi Ltd | 手話変換装置および方法 |
JPH08249331A (ja) * | 1995-03-09 | 1996-09-27 | Sharp Corp | 文書処理装置 |
Non-Patent Citations (4)
Title |
---|
ADAM BERGER, JOHN LAFFERTY: "Information Retrieval as Statistical Translation", School of Computer Science, Carnegie Mellon University |
KIYOTAKA UCHIMOTO, HITOSHI ISAHARA: "Saidai entropy model o mochiita nihongo text no ikkan shori" (Consistent processing of Japanese text using the maximum entropy model), THE SOCIETY FOR ARTIFICIAL INTELLIGENCE KENKYUKAI SHIRYO SIG-CII-2000-NOV-09, 14 November 2000 (2000-11-14), XP002966845 * |
KOJI KAKIGAHARA, TERUAKI AIZAWA: "Completion of Japanese Sentences by Inferring Function Words from Content Words", PROCEEDINGS OF THE 12TH CONFERENCE ON COMPUTATIONAL LINGUISTICS, vol. 1, 22 August 1988 (1988-08-22), pages 291-296 |
See also references of EP1469398A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642324A (zh) * | 2021-08-20 | 2021-11-12 | 北京百度网讯科技有限公司 | 文本摘要生成方法、装置、电子设备及存储介质 |
CN113642324B (zh) * | 2021-08-20 | 2024-02-09 | 北京百度网讯科技有限公司 | 文本摘要生成方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP2003196280A (ja) | 2003-07-11 |
US20050050469A1 (en) | 2005-03-03 |
JP3921523B2 (ja) | 2007-05-30 |
EP1469398A1 (en) | 2004-10-20 |
EP1469398A4 (en) | 2008-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2003056451A1 (fr) | Procede de generation de texte et generateur de texte | |
RU2336552C2 (ru) | Лингвистически информированные статистические модели структуры составляющих для упорядочения в реализации предложений для системы генерирования естественного языка | |
Charniak | Statistical language learning | |
Nivre et al. | The CoNLL 2007 shared task on dependency parsing | |
CN103970798B (zh) | 数据的搜索和匹配 | |
JP3768205B2 (ja) | 形態素解析装置、形態素解析方法及び形態素解析プログラム | |
JP2000353161A (ja) | 自然言語生成における文体制御方法及び装置 | |
CN112307171B (zh) | 一种基于电力知识库的制度标准检索方法及系统和可读存储介质 | |
CN110245349B (zh) | 一种句法依存分析方法、装置及一种电子设备 | |
JP3992348B2 (ja) | 形態素解析方法および装置、並びに日本語形態素解析方法および装置 | |
Stratica et al. | Using semantic templates for a natural language interface to the CINDI virtual library | |
CN113407697A (zh) | 深度百科学习的中文医疗问句分类系统 | |
Dmytriv et al. | The Speech Parts Identification for Ukrainian Words Based on VESUM and Horokh Using | |
CN110750967B (zh) | 一种发音的标注方法、装置、计算机设备和存储介质 | |
Bhat | Morpheme segmentation for kannada standing on the shoulder of giants | |
JPH09319767A (ja) | 類義語辞書登録方法 | |
Sridhar et al. | Automatic Tamil lyric generation based on ontological interpretation for semantics | |
JP2004246491A (ja) | テキストマイニング装置及びテキストマイニングプログラム | |
JP2014191484A (ja) | 文末表現変換装置、方法、及びプログラム | |
JPH02181261A (ja) | 自動抄録生成装置 | |
Gong et al. | Improved word list ordering for text entry on ambiguous keypads | |
Rajendran et al. | Deep Learning Speech Synthesis Model for Word/Character-Level Recognition in the Tamil Language. | |
Sankaravelayuthan et al. | A Comprehensive Study of Shallow Parsing and Machine Translation in Malaylam | |
CN111414459A (zh) | 人物关系获取方法、装置、电子设备及存储介质 | |
Kimura et al. | Spoken dialogue processing method using inductive learning with genetic algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA CN KR SG US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): FR GB |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
REEP | Request for entry into the european phase |
Ref document number: 2002786125 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002786125 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2002786125 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10500243 Country of ref document: US |