WO2019146837A1 - Method and mobile apparatus for performing word prediction - Google Patents
- Publication number
- WO2019146837A1
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
Definitions
- One or more embodiments relate to a method and mobile apparatus for performing word prediction.
- One or more embodiments include a method and mobile apparatus for performing word prediction, in which a word succeeding input words is estimated by applying, to the input words, a first N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words, and a second N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence including undecided words tagged with word attributes.
- a method of performing word prediction includes: receiving an input of words; performing first estimation of a word succeeding the input words by applying a first N-gram language model to the input words, wherein the first N-gram language model indicates, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words; when the estimated word fails to satisfy a predetermined condition, performing second estimation of a word succeeding the input words by applying a second N-gram language model to the input words, some of which are replaced with undecided words tagged with word attributes of some of the input words, wherein the second N-gram language model indicates a frequency of occurrence regarding a word sequence of consecutive words including undecided words tagged with word attributes; and recommending a word succeeding the input words, based on a result of estimation using the language models.
- a mobile apparatus for performing word prediction includes: a user interface; a processor; and a memory storing instructions executable by the processor, wherein the processor executes the instructions: to receive an input of words via the user interface; to perform first estimation of a word succeeding the input words by applying a first N-gram language model to the input words, wherein the first N-gram language model indicates, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words; when the estimated word fails to satisfy a predetermined condition, to perform second estimation of a word succeeding the input words by applying a second N-gram language model to the input words, some of which are replaced with undecided words tagged with word attributes of some of the input words, wherein the second N-gram language model indicates a frequency of occurrence regarding a word sequence of consecutive words including undecided words tagged with word attributes; and to recommend a word succeeding the input words, based on a result of estimation using the language models.
- FIG. 1 illustrates a mobile apparatus for performing word prediction and a server for providing a language model, according to an embodiment.
- FIG. 2 is a flowchart illustrating a method of building a language model for use in word prediction, according to an embodiment.
- FIG. 3 is a detailed flowchart of a process of generating a second N-gram language model, according to an embodiment.
- FIG. 4 illustrates an example in which a first N-gram language model and a second N-gram language model are generated when a corpus is provided, according to an embodiment.
- FIG. 5 is a flowchart illustrating a method of performing word prediction, according to an embodiment.
- FIG. 6 is a detailed flowchart of a process of second estimation of a word succeeding input words by using a second N-gram language model, according to an embodiment.
- One or more embodiments relate to a method and mobile apparatus for performing word prediction. Detailed descriptions about elements well known to one of ordinary skill in the art to which the embodiments herein pertain will be omitted.
- FIG. 1 illustrates a mobile apparatus 100 for performing word prediction and a server 200 for providing a language model, according to an embodiment.
- the mobile apparatus 100 may be a smartphone, a tablet personal computer (PC), a laptop computer, etc. In one or more embodiments according to the present disclosure, the mobile apparatus 100 may also be a wearable device such as a smartwatch.
- the mobile apparatus 100 for performing word prediction may include a memory 110, a processor 120, and a user interface 130.
- a memory 110 may be included in the mobile apparatus 100.
- a processor 120 may be included in the mobile apparatus 100.
- a user interface 130 may be further included in the mobile apparatus 100.
- the mobile apparatus 100 is an electronic apparatus having an operating system (OS) installed and capable of displaying a processing result according to a user input by executing an application installed thereon.
- the mobile apparatus 100 may be a smartphone, a tablet PC, a laptop computer, a digital camera, etc.
- the term "application" refers to an application program or a mobile application. A user may select and execute an application from among various kinds of applications installed on the mobile apparatus 100.
- the memory 110 may store software and/or a program.
- the memory 110 may store an application, a program such as an application programming interface (API), and various kinds of data.
- the processor 120 may access and use the data stored in the memory 110 or may store new data in the memory 110. Also, the processor 120 may execute the program installed in the memory 110. Also, the processor 120 may install, on the memory 110, an application received from outside.
- the processor 120 may include at least one processor.
- the processor 120 may control other elements included in the mobile apparatus 100 to perform an operation corresponding to the user input received via the user interface 130.
- the processor 120 may include at least one specialized processor corresponding to each function or may be an integrated-type processor.
- the processor 120 may execute the program stored in the memory 110, may read data or a file stored in the memory 110, or may store a new file on the memory 110.
- the user interface 130 may receive the user input, etc. from the user.
- the user interface 130 may display information such as a result of executing an application on the mobile apparatus 100, a processing result corresponding to the user input, and a status of the mobile apparatus 100.
- the user interface 130 may include hardware units for receiving an input from the user or providing an output from the mobile apparatus 100, and may also include a dedicated software module for driving the hardware units.
- the user interface 130 may include an operation panel such as a touch panel for receiving the user input, a display panel for displaying a screen, etc.
- the user interface 130 may be a touch screen in which the operation panel and the display panel are coupled to each other, but is not limited thereto.
- the memory 110 may store instructions that are executable by the processor 120.
- the processor 120 may execute the instructions stored in the memory 110.
- the processor 120 may execute the application installed on the mobile apparatus 100 according to the user input.
- the processor 120 may display a virtual keyboard and an input field for receiving the user input on a screen of the mobile apparatus 100, via the user interface 130.
- the processor 120 may receive an input of words via the user interface 130, and may predict a word succeeding the input words.
- the processor 120 may estimate the word succeeding the input words by applying a language model to the input words, and may recommend the estimated word to the user via the user interface 130.
- the language model may be a model that has learned rules for constructing a sentence, based on the order of words in word sequences of consecutive words, from a number of corpora collected as samples.
- the language model may calculate a frequency of occurrence regarding a word sequence of consecutive words as a probability value to learn the rule of constructing a sentence.
- an N-gram language model may be a probabilistic language model in which a series of N words is represented by a probability.
- the N-gram language model may be a model indicating a probability of appearance of N consecutive words.
- the N-gram language model may indicate a frequency of occurrence as a probability value according to a statistical method with respect to each of various combinations of N words.
- the N-gram language model may be used to estimate an N-th word from a sequence of (N-1) words and output a probability thereof.
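The estimation mechanism described above can be illustrated with a minimal sketch. The function name and toy corpus below are assumptions for illustration; the source specifies only that frequencies of occurrence of word sequences are stored as probability values.

```python
from collections import Counter

def ngram_model(tokens, n):
    """Map each n-gram to P(last word | preceding n-1 words),
    estimated as count(n-gram) / count(context)."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    # Count each (n-1)-word context once per n-gram occurrence.
    contexts = Counter(g[:-1] for g in ngrams.elements())
    return {g: c / contexts[g[:-1]] for g, c in ngrams.items()}

tokens = "the cat sat on the mat".split()
bigram_model = ngram_model(tokens, 2)
# P("cat" | "the") = count("the cat") / count("the" as a context) = 1/2
```

Given the (N-1)-word context, the candidate with the highest such probability would be offered as the predicted N-th word.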
- the processor 120 may estimate a word succeeding input words by using a first N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words.
- the processor 120 may estimate a word succeeding the input words, some of which are replaced with undecided words tagged with word attributes of some of the input words.
- the processor 120 may estimate the word succeeding the input words by using a second N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words including undecided words tagged with word attributes.
- the processor 120 may select some of input words, determine word attributes of the selected words, and replace the selected words with undecided words tagged with the determined word attributes.
- the processor 120 may apply the second N-gram language model to the input words having the selected words replaced with the undecided words tagged with the determined word attributes.
- the word attributes may be at least one of a part of speech, a tense, a physical quantity, a place name, an organization's name, and a person's name.
- the processor 120 may recommend a word succeeding the input words, based on a result of estimation using the language models.
- the language models may be built in the mobile apparatus 100 and be stored in the memory 110.
- the language models may be provided to the mobile apparatus 100 as the mobile apparatus 100 communicates with an external apparatus.
- the mobile apparatus 100 may include a communication interface supporting at least one of various wired/wireless communication methods.
- the mobile apparatus 100 may perform wired/wireless communication with another device or a network.
- Examples of the wireless communication may include, for example, wireless fidelity (Wi-Fi), Bluetooth, long term evolution (LTE), etc.
- Examples of the wired communication may include, for example, universal serial bus (USB), high-definition multimedia interface (HDMI), etc.
- the mobile apparatus 100 may be connected to an external apparatus located outside the mobile apparatus 100 to transmit/receive signals or data. As shown in FIG. 1, the mobile apparatus 100 may communicate with the server 200 for providing a language model.
- the server 200 may include a memory storing various kinds of databases to build a language model and provide the language model to the mobile apparatus 100, a processor for generating a language model, a communication interface, etc.
- the mobile apparatus 100 may receive a language model trained in the server 200 and use the received language model for performing word prediction.
- FIG. 2 is a flowchart illustrating a method of building a language model for use in word prediction, according to an embodiment.
- the language model for use in word prediction may be preinstalled in the mobile apparatus 100 or may be built in the mobile apparatus 100.
- a case of providing a language model from the server 200 to the mobile apparatus 100 will be hereinafter described as an example.
- the mobile apparatus 100 may build the language model by the same method as described with reference to FIGS. 2 to 4.
- the server 200 may divide a corpus into words.
- the server 200 may generate a first N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of N consecutive words from among a plurality of words constituting a corpus.
- the server 200 may generate a second N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of N consecutive words from among a plurality of words constituting a corpus, the word sequence of N consecutive words having some selected words replaced with undecided words tagged with word attributes of the selected words.
- FIG. 3 is a detailed flowchart of a process of generating a second N-gram language model, according to an embodiment.
- the server 200 may select some words in a word sequence of N consecutive words from among a plurality of words constituting a corpus.
- the server 200 may use an exceptional word database storing irreplaceable exceptional words to select some words determined as not corresponding to the irreplaceable exceptional words.
- the exceptional word database may store consecutive words whose relationship is irreplaceable, such as an idiom.
- the server 200 may replace the selected words with undecided words tagged with word attributes of the selected words.
- the server 200 may determine word attributes of the selected words by using a word attribute database storing word attributes of all words, for example, words registered in a dictionary.
- the word attribute database may store at least one word attribute with respect to each word, and may store a word attribute regarding a part of speech or a tense type of each word, or a word attribute related to a physical quantity, a place name, an organization’s name, or a person’s name.
- the server 200 may obtain, as a probability value, a frequency of occurrence regarding a word sequence of N consecutive words of which selected words are replaced with undecided words tagged with word attributes of the selected words.
- the server 200 may store the obtained probability value and the word sequence of N consecutive words of which the selected words are replaced with undecided words tagged with word attributes of the selected words, so as to correspond to each other.
- the server 200 may generate an integrated language model of the first N-gram language model and the second N-gram language model.
- the server 200 may store the integrated language model for use in word prediction or may store each of the first N-gram language model and the second N-gram language model.
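A sketch of the integration step, assuming both models are stored as plain lookup tables from word sequences to probability values (the storage format is not specified in the source, and the probability values below are assumed):

```python
# Toy entries standing in for the two generated models.
first_model = {("Today", "we"): 0.4}
second_model = {("[noun]", "we"): 0.7, ("Today", "[personal pronoun]"): 0.5}

# The integrated model is the union of the two tables; keys cannot collide
# here because second-model entries always contain a tagged undecided word.
integrated_model = {**first_model, **second_model}
```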
- FIG. 4 illustrates an example in which a first N-gram language model and a second N-gram language model are generated when a corpus is provided, according to an embodiment.
- FIG. 4 illustrates a process in which a first N-gram language model and a second N-gram language model are generated when a corpus of "Today we know that between 1948 and 1990" is provided.
- the first N-gram language model may generate a 1-gram or unigram list with respect to each word constituting a corpus, and may calculate each frequency of occurrence as a probability value.
- the first N-gram language model may generate a 2-gram list with respect to two consecutive words from among words constituting the corpus, and may calculate each frequency of occurrence as a probability value. For example, a list of two consecutive words such as "Today we", "we know", ..., and "and 1990" may be generated, and each frequency of occurrence may be calculated as a probability value.
- frequencies of occurrence of word sequences of three, four, ..., and N consecutive words from among a plurality of words constituting the corpus may be calculated as probability values, and thus, the first N-gram language model may be generated.
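The 1-gram and 2-gram lists for the sample corpus can be enumerated as follows. This sketch shows only the enumeration step, not the probability calculation:

```python
corpus = "Today we know that between 1948 and 1990".split()

# 1-gram list: each individual word.
unigrams = [(w,) for w in corpus]

# 2-gram list: each pair of consecutive words, "Today we" through "and 1990".
bigrams = list(zip(corpus, corpus[1:]))
```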
- the second N-gram language model may indicate, as a probability value, a frequency of occurrence regarding a word sequence of N consecutive words from among a plurality of words constituting a corpus, the word sequence of N consecutive words having some selected words replaced with undecided words tagged with word attributes of the selected words. Since the second N-gram language model selects and replaces K words from among the N words, the second N-gram language model may be referred to as K-skip-N-gram.
- For 1-skip-2-gram, one word may be selected from each word sequence of two consecutive words from among words constituting the corpus, that is, from two consecutive words such as "Today we", "we know", ..., and "and 1990"; a word attribute of the selected word may be determined; and a frequency of occurrence regarding the word sequence of which the selected word is replaced with an undecided word tagged with the determined word attribute may be calculated as a probability value.
- For example, the word 'Today' may be selected first, a word attribute of the word 'Today' may be determined as 'noun', and a frequency of occurrence regarding a word sequence "[noun] we" of which the word 'Today' is replaced with an undecided word '[noun]' tagged with the word attribute 'noun' may be calculated as a probability value.
- Next, the word 'we' may be selected, a word attribute of the word 'we' may be determined as 'personal pronoun', and a frequency of occurrence regarding a word sequence "Today [personal pronoun]" of which the word 'we' is replaced with an undecided word '[personal pronoun]' tagged with the word attribute 'personal pronoun' may be calculated as a probability value.
- Likewise, frequencies of occurrence of word sequences of N consecutive words of which selected words are replaced with undecided words tagged with word attributes of the selected words may be calculated as probability values, and thus, the second N-gram language model may be generated.
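The 1-skip-2-gram replacement described above can be sketched as follows. The attribute dictionary stands in for the word attribute database, and its entries are assumptions:

```python
# Hypothetical stand-in for the word attribute database.
WORD_ATTRS = {"Today": "[noun]", "we": "[personal pronoun]"}

def skip_one_bigram(bigram, attrs):
    """Yield each variant of a 2-gram in which one word is replaced
    with an undecided word tagged with its word attribute."""
    for i, word in enumerate(bigram):
        if word in attrs:
            variant = list(bigram)
            variant[i] = attrs[word]
            yield tuple(variant)

variants = list(skip_one_bigram(("Today", "we"), WORD_ATTRS))
# variants: [("[noun]", "we"), ("Today", "[personal pronoun]")]
```

A full model would additionally skip entries found in the exceptional word database (e.g. idioms) before counting frequencies.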
- a language model for word prediction may be an integrated language model of the first N-gram language model and the second N-gram language model.
- FIG. 5 is a flowchart illustrating a method of performing word prediction, according to an embodiment.
- the mobile apparatus 100 may receive an input of words.
- a user may consecutively input the words via the user interface 130.
- the mobile apparatus 100 may estimate a word succeeding the input words by applying a first N-gram language model to the input words, the first N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words.
- the mobile apparatus 100 may estimate an N-th word succeeding the input (N-1) words, based on an N-gram list and the probability value.
- For example, when two consecutive words "Today I" are input, a probability value corresponding to a 3-gram list in the first N-gram language model may be looked up to estimate a third word: whether there is a word sequence starting with "Today I" in the 3-gram list may be checked, and a third word of the checked word sequence may be estimated as the third word succeeding the input two words.
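The first-estimation lookup might be sketched as below; the 3-gram table and its probability values are toy data, not taken from the source:

```python
# Toy 3-gram table mapping word sequences to assumed probability values.
TRIGRAMS = {
    ("Today", "I", "went"): 0.6,
    ("Today", "I", "feel"): 0.4,
    ("we", "know", "that"): 0.9,
}

def first_estimation(w1, w2, table):
    """Return third-word candidates whose sequences start with the
    two input words, most probable first."""
    candidates = [(seq[2], p) for seq, p in table.items() if seq[:2] == (w1, w2)]
    return sorted(candidates, key=lambda c: -c[1])

candidates = first_estimation("Today", "I", TRIGRAMS)
# candidates[0] is ("went", 0.6)
```

If the best candidate fails the predetermined condition (e.g. no match, or a probability below some threshold), the method falls back to the second estimation.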
- the mobile apparatus 100 may estimate a word succeeding the input words by applying, to the input words of which selected words are replaced with undecided words tagged with word attributes of the selected words, a second N-gram language model indicating, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words including undecided words tagged with word attributes.
- FIG. 6 is a detailed flowchart of a process of second estimation of a word succeeding input words by using a second N-gram language model, according to an embodiment.
- the mobile apparatus 100 may select some words from among input words. For example, when two consecutive words "Today I" are input, 'Today' or 'I' may be selected.
- the mobile apparatus 100 may determine word attributes of the selected words. For example, when two consecutive words "Today I" are input, a word attribute of the selected 'Today' or 'I' may be determined as 'noun' or 'personal pronoun', respectively. In this regard, the same word attribute database used to build a language model may be used.
- the mobile apparatus 100 may replace the selected words with undecided words tagged with the determined word attributes. For example, when two consecutive words "Today I" are input, they may be replaced by "Today [personal pronoun]" or "[noun] I".
- the mobile apparatus 100 may estimate a word succeeding the input words by applying a second N-gram language model to the input words of which the selected words are replaced with the undecided words tagged with the determined word attributes.
- a probability value corresponding to a 1-skip-3-gram list in the second N-gram language model may be looked up to estimate a third word, and thus, whether there is a word sequence starting with "Today [personal pronoun]" or "[noun] I" in the 1-skip-3-gram list may be checked, and a third word of the checked word sequence may be estimated as the third word succeeding the input two words "Today I".
- In this way, the range of candidates that may be estimated as a word succeeding the input words may be broadened, but with the limitation that at least the input words and the word attributes match each other.
- When a proper word fails to be estimated even by the 1-skip-3-gram list, it may be checked whether there is a word sequence starting with "[personal pronoun]", which is a word attribute of "I" in "Today I", in a 1-skip-2-gram list, and a second word of the checked word sequence may be estimated as the third word succeeding the two input words "Today I". Nevertheless, when a proper word succeeding the input words still fails to be estimated, the word may be estimated from the unigram list.
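The fallback order described above (1-skip-3-gram, then 1-skip-2-gram, then unigram) might be sketched as follows; the tables and the first-match policy are assumptions for illustration:

```python
# Toy tables; probability values are assumed.
SKIP3 = {("Today", "[personal pronoun]", "will"): 0.3}
SKIP2 = {("[personal pronoun]", "think"): 0.2}
UNIGRAMS = {("the",): 0.06, ("a",): 0.05}

def predict_with_backoff(two_word_variants, one_word_variant):
    """Try the 1-skip-3-gram list, fall back to the 1-skip-2-gram
    list, and finally to the most probable unigram."""
    for seq in SKIP3:
        if seq[:2] in two_word_variants:
            return seq[2]
    for seq in SKIP2:
        if seq[:1] == one_word_variant:
            return seq[1]
    return max(UNIGRAMS, key=UNIGRAMS.get)[0]

word = predict_with_backoff({("Today", "[personal pronoun]")}, ("[personal pronoun]",))
# word is "will" (found in the 1-skip-3-gram list)
```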
- the mobile apparatus 100 may recommend a word succeeding the input words, based on a result of estimation using the language models.
- the embodiments described above may be provided as applications stored on a non-transitory computer-readable storage medium to perform the method of performing word prediction.
- the embodiments described above may be provided as applications or computer programs stored on a non-transitory computer-readable storage medium to allow the mobile apparatus 100 to execute each operation of the method of performing word prediction.
- the embodiments described above may be implemented as a non-transitory computer-readable storage medium storing instructions or data executable by a computer or a processor. At least one of the instructions and the data may be stored in the form of program code, and when executed by a processor, a predetermined program module may be generated to perform a predetermined operation.
- the non-transitory computer-readable storage medium may be read-only memory (ROM), random-access memory (RAM), flash memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, a magnetic tape, a floppy disc, a magneto-optical data storage device, an optical data storage device, a hard disc, a solid-state disk (SSD), or any type of device that is capable of storing instructions or software, related data, data files, and data structures, and providing instructions or software, related data, data files, and data structures to a processor or a computer to allow the processor or the computer to execute the instructions.
Claims (12)
- A method of performing word prediction, the method comprising: receiving an input of words; performing first estimation of a word succeeding the input words by applying a first N-gram language model to the input words, wherein the first N-gram language model indicates, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words; when the estimated word fails to satisfy a predetermined condition, performing second estimation of a word succeeding the input words by applying a second N-gram language model to the input words, some of which are replaced with undecided words tagged with word attributes of some of the input words, wherein the second N-gram language model indicates a frequency of occurrence regarding a word sequence of consecutive words comprising undecided words tagged with word attributes; and recommending a word succeeding the input words, based on a result of estimation using the language models.
- The method of claim 1, wherein the word attributes comprise at least one of a part of speech, a tense, a physical quantity, a place name, an organization’s name, and a person’s name.
- The method of claim 1, wherein the second N-gram language model is generated by: selecting some words in a word sequence of N consecutive words from among a plurality of words constituting a corpus, replacing the selected words with undecided words tagged with word attributes of the selected words, obtaining, as a probability value, a frequency of occurrence regarding a word sequence of the N consecutive words of which the selected words are replaced with the undecided words tagged with the word attributes of the selected words, and storing the obtained probability value and the word sequence of the N consecutive words of which the selected words are replaced with the undecided words tagged with the word attributes of the selected words, so as to correspond to each other.
- The method of claim 3, wherein the selected words are determined as not corresponding to irreplaceable exceptional words by using an exceptional word database storing irreplaceable exceptional words.
- The method of claim 3, wherein the word attributes of the selected words are determined by a word attribute database storing word attributes regarding words registered in a dictionary.
- The method of claim 1, wherein the performing of the second estimation comprises: selecting some of the input words; determining word attributes of the selected words; replacing the selected words with the undecided words tagged with the determined word attributes; and estimating the word succeeding the input words by applying the second N-gram language model to the input words of which the selected words are replaced with the undecided words tagged with the determined word attributes.
- A mobile apparatus comprising: a user interface; a processor; and a memory storing instructions executable by the processor, wherein the processor executes the instructions: to receive an input of words via the user interface; to perform first estimation of a word succeeding the input words by applying a first N-gram language model to the input words, wherein the first N-gram language model indicates, as a probability value, a frequency of occurrence regarding a word sequence of consecutive words; when the estimated word fails to satisfy a predetermined condition, to perform second estimation of a word succeeding the input words by applying a second N-gram language model to the input words, some of which are replaced with undecided words tagged with word attributes of some of the input words, wherein the second N-gram language model indicates a frequency of occurrence regarding a word sequence of consecutive words comprising undecided words tagged with word attributes; and to recommend a word succeeding the input words, based on a result of estimation using the language models.
- The mobile apparatus of claim 7, wherein the word attributes comprise at least one of a part of speech, a tense, a physical quantity, a place name, an organization’s name, and a person’s name.
- The mobile apparatus of claim 7, wherein the second N-gram language model is provided by a server capable of communicating with the mobile apparatus, wherein the server selects some words in a word sequence of N consecutive words from among a plurality of words constituting a corpus, replaces the selected words with undecided words tagged with word attributes of the selected words, obtains, as a probability value, a frequency of occurrence regarding a word sequence of the N consecutive words of which the selected words are replaced with the undecided words tagged with the word attributes of the selected words, and stores the obtained probability value and the word sequence of the N consecutive words of which the selected words are replaced with the undecided words tagged with the word attributes of the selected words, so as to correspond to each other.
- The mobile apparatus of claim 9, wherein the selected words are determined as not corresponding to irreplaceable exceptional words by using an exceptional word database storing irreplaceable exceptional words.
- The mobile apparatus of claim 9, wherein the word attributes of the selected words are determined by a word attribute database storing word attributes regarding words registered in a dictionary.
- The mobile apparatus of claim 7, wherein, when the second estimation is performed, the processor selects some of the input words, determines word attributes of the selected words, replaces the selected words with the undecided words tagged with the determined word attributes, and estimates the word succeeding the input words by applying the second N-gram language model to the input words of which the selected words are replaced with the undecided words tagged with the determined word attributes.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0009608 | 2018-01-25 | ||
KR1020180009608A KR20190090646A (en) | 2018-01-25 | 2018-01-25 | Method and mobile apparatus for performing word prediction |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019146837A1 true WO2019146837A1 (en) | 2019-08-01 |
Family
ID=67395775
Family Applications (1)
Application Number | Filing Date | Title |
---|---|---|
PCT/KR2018/002869 WO2019146837A1 (en) | 2018-01-25 | 2018-03-12 | Method and mobile apparatus for performing word prediction |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20190090646A (en) |
WO (1) | WO2019146837A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000099085A (en) * | 1998-09-18 | 2000-04-07 | Atr Interpreting Telecommunications Res Lab | Statistical language model generating device and voice recognition device |
JP2009223560A (en) * | 2008-03-14 | 2009-10-01 | Sanyo Electric Co Ltd | Document processor, electronic medical chart device and document processing program |
KR20140026772A (en) * | 2012-08-23 | 2014-03-06 | 주식회사 다음커뮤니케이션 | System and method of managing document |
KR101482430B1 (en) * | 2013-08-13 | 2015-01-15 | 포항공과대학교 산학협력단 | Method for correcting error of preposition and apparatus for performing the same |
KR20160066441A (en) * | 2014-12-02 | 2016-06-10 | 삼성전자주식회사 | Voice recognizing method and voice recognizing apparatus |
2018
- 2018-01-25: KR application KR1020180009608A filed; published as KR20190090646A; status: active, IP Right Grant
- 2018-03-12: PCT application PCT/KR2018/002869 filed; published as WO2019146837A1; status: active, Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113177402A (en) * | 2021-04-26 | 2021-07-27 | 平安科技(深圳)有限公司 | Word replacement method and device, electronic equipment and storage medium |
CN113177402B (en) * | 2021-04-26 | 2024-03-01 | 平安科技(深圳)有限公司 | Word replacement method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR20190090646A (en) | 2019-08-02 |
Similar Documents
Publication | Title |
---|---|
US9460085B2 (en) | Testing and training a question-answering system | |
WO2020027540A1 (en) | Apparatus and method for personalized natural language understanding | |
CN111090641B (en) | Data processing method and device, electronic equipment and storage medium | |
WO2020262788A1 (en) | System and method for natural language understanding | |
US20100107114A1 (en) | In context web page localization | |
CN110399722B (en) | Virus family generation method, device, server and storage medium | |
WO2020262800A1 (en) | System and method for automating natural language understanding (nlu) in skill development | |
CN108563645B (en) | Metadata translation method and device of HIS (hardware-in-the-system) | |
CN103678371B (en) | Word library updating device, data integration device and method and electronic equipment | |
US10019511B2 (en) | Biology-related data mining | |
WO2019146837A1 (en) | Method and mobile apparatus for performing word prediction | |
US20180018315A1 (en) | Information processing device, program, and information processing method | |
JP7172187B2 (en) | INFORMATION DISPLAY METHOD, INFORMATION DISPLAY PROGRAM AND INFORMATION DISPLAY DEVICE | |
WO2019045185A1 (en) | Mobile device and method for correcting character string entered through virtual keyboard | |
JP7194759B2 (en) | Translation data generation system | |
CN112052661A (en) | Article analysis method, recording medium, and article analysis system | |
JP2009048455A (en) | Device for estimating interclause relationship and computer program | |
CN112836057A (en) | Knowledge graph generation method, device, terminal and storage medium | |
CN115732052A (en) | Case report table generation method and device based on structured clinical project | |
CN107403352B (en) | Prioritizing topics of interest determined from product evaluations | |
JP2021086362A (en) | Information processing device, information processing method, and program | |
CN109522542A (en) | A kind of method and device identifying vehicle failure sentence | |
CN111310016A (en) | Label mining method, device, server and storage medium | |
CN110297825B (en) | Data processing method, device, computer equipment and storage medium | |
CN108509057A (en) | Input method and relevant device |
Legal Events

Code | Title | Description |
---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18903028; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | EP: PCT application non-entry in European phase | Ref document number: 18903028; Country of ref document: EP; Kind code of ref document: A1 |
32PN | EP: public notification in the EP bulletin as the address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.02.2021) |