CN1206581C - Mixed input method - Google Patents
Mixed input method
- Publication number
- CN1206581C (grant) · CNB011195444A, CN01119544A (application)
- Authority
- CN
- China
- Prior art keywords
- letter
- input
- user
- group
- writing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Character Discrimination (AREA)
Abstract
The present invention provides an input method that combines speech input and handwriting input. A first group of characters is generated by a speech recognition program, a second group of characters is generated by a handwriting recognition program, and a third group of characters is formed from the characters that are present in both the first and second groups.
Description
Technical field
The present invention relates to an input method, and more particularly to an input method that integrates simultaneous speech input and handwriting input.
Background art
Whether on a desktop computer, a personal digital assistant (PDA), or a palmtop computer, an input method is needed at the interface between the user and the computer system. Two of the more convenient input methods in current computer systems are speech input and handwriting input, yet each has its drawbacks. Speech input methods, such as speech recognition programs, often run into serious difficulty with tonal languages such as Chinese. Even in English, existing speech recognition systems cannot always identify exactly which word the user has spoken; for a single spoken word, the computer system may present a string of candidate words and require the user to click on the correct one. Handwriting recognition programs have similar shortcomings, particularly for more complex scripts such as Chinese characters: after writing a character, the user may likewise be asked to pick the correct one from a string of candidates. Alternatively, when Chinese characters are entered, the computer system may require each character to be decomposed into several parts according to strokes or pronunciation. Learning such an input system is not easy for many users, and their input speed is correspondingly slow.
Summary of the invention
It is therefore a primary objective of the present invention to provide an input method that integrates a speech input method with a handwriting recognition method, so as to solve the above problems.
To achieve this objective, the present invention provides an input method that combines speech input and handwriting input, comprising: using a speech recognition program to generate a first group of characters according to the speech input; using a handwriting recognition program to generate a second group of characters according to the handwriting input; generating a third group of characters, each character of which is present in both the first and second groups; and displaying at least one character of the third group.
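As an illustrative sketch only (not part of the claims or description), the step of forming the third group as the characters common to both candidate groups could look like the following; the function and variable names are hypothetical:

```python
def combine_candidates(speech_candidates, handwriting_candidates):
    """Form the third group: characters proposed by both recognizers.

    The order of the speech recognizer's candidates is preserved, so
    higher-ranked candidates appear first in the result.
    """
    handwriting_set = set(handwriting_candidates)
    return [c for c in speech_candidates if c in handwriting_set]

# Example corresponding to Fig. 4 of the description:
first_group = ["A", "B", "C", "D", "E", "F", "G"]   # from speech recognition
second_group = ["B", "D", "J", "H", "K", "M"]        # from handwriting recognition
third_group = combine_candidates(first_group, second_group)
print(third_group)  # ['B', 'D']
```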
Description of drawings
Fig. 1 is a schematic diagram of a computer system that uses the input method of the present invention.
Fig. 2 is a functional block diagram of the computer system shown in Fig. 1.
Fig. 3 is another functional block diagram of the computer system shown in Fig. 1.
Fig. 4 is a block diagram of a first group of characters, a second group of characters, and a third group of characters.
Embodiment
Please refer to Fig. 1, which is a schematic diagram of a computer system 10 that uses the input method of the present invention. The computer system 10 includes a display 12, a keyboard 14, a processing unit 16 containing the related application software, a microphone 17, and a tablet 18 for handwriting input. The display 12, keyboard 14, microphone 17, and tablet 18 are all connected to the processing unit 16. The user can speak into the microphone 17, and speech recognition software enters the corresponding text into the processing unit 16. Likewise, the user can write on the tablet 18, and handwriting recognition software enters the corresponding text into the processing unit 16. The microphone 17 is designed to be operated at the same time as the tablet 18, and the application software running in the processing unit 16 produces at least one closest-matching character according to both the speech input from the microphone 17 and the handwriting input from the tablet 18. The matching characters are then shown on the display 12, allowing the user to select the desired character.
Please refer to Fig. 1 and Fig. 2 together. Fig. 2 is a functional block diagram of the processing unit 16 of the first embodiment shown in Fig. 1. The processing unit 16 includes at least a central processing unit (CPU) 22 and a memory 24 for storing application programs and digital data. The memory 24 includes a speech input module 25, a handwriting input module 27, and a database 29. The speech input module 25 includes a speech recognition program 26, and the handwriting input module 27 includes a handwriting recognition program 28. The speech input module 25 obtains speech data from the microphone 17 and, according to this speech data, uses the speech recognition program 26 to produce a character (or word string). The handwriting input module 27 obtains handwriting data from the tablet 18, and the handwriting recognition program 28 uses the handwriting data to produce a corresponding character (or word string). Therefore, apart from the microphone 17 and the tablet 18, most of the input method shown in Fig. 2 is carried out in the processing unit 16, that is, performed by software. Both the speech recognition program 26 and the handwriting recognition program 28 make use of the database 29 in performing their work.
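Purely as an illustrative sketch of how the software modules described above might be organized (the class and method names are hypothetical and not taken from the patent):

```python
class SpeechInputModule:
    """Stand-in for speech input module 25: wraps speech recognition program 26."""
    def __init__(self, recognizer, database):
        self.recognizer = recognizer   # speech recognition program 26
        self.database = database       # shared database 29

    def candidates(self, audio):
        # First group of characters (or a word string) matching the speech data.
        return self.recognizer.recognize(audio, self.database)


class HandwritingInputModule:
    """Stand-in for handwriting input module 27: wraps handwriting recognition program 28."""
    def __init__(self, recognizer, database):
        self.recognizer = recognizer   # handwriting recognition program 28
        self.database = database       # shared database 29

    def candidates(self, strokes):
        # Second group of characters (or a word string) matching the handwriting data.
        return self.recognizer.recognize(strokes, self.database)
```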
The input method of the present invention must adapt itself to the particularities of the user's pronunciation and handwriting. This adaptation is carried out by the speech recognition program 26 and the handwriting recognition program 28. The speech recognition program 26 initially recognizes spoken characters according to a first standard 26a, which models the most common speech characteristics of a specific language. The characteristics of the first standard 26a are stored in the database 29. During a training process, the characteristics of the first standard 26a are gradually modified and accumulated, so that the first standard 26a ultimately reflects the user's own speaking style. When the speech recognition program 26 cannot recognize a spoken word, the user can enter the corresponding character with the keyboard 14; the unrecognized word is then attached to this character in the database 29 and becomes part of the first standard 26a. Similarly, the handwriting recognition program 28 initially recognizes written characters according to a second standard 28a. In a procedure similar to that of the speech recognition program 26, the user can train the handwriting recognition program 28 to recognize the user's unique writing style; as the handwriting recognition program 28 is trained, the second standard 28a is adjusted according to the user's handwriting characteristics. Unrecognized handwritten characters can be entered manually with the keyboard 14 to assist the training process, and the characteristics of these handwritten characters are then added to the second standard 28a.
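A minimal sketch of the fallback step in which an unrecognized sample is attached to the character the user typed, assuming a simple dictionary-style database (the helper name and storage format are hypothetical; the patent does not prescribe either):

```python
def add_training_sample(database, character, sample):
    """Attach an unrecognized speech or handwriting sample to the character the
    user typed on keyboard 14, so that it becomes part of the first standard 26a
    or the second standard 28a during further training."""
    database.setdefault(character, []).append(sample)


# Usage: after the user types the character that the recognizer missed.
database = {}                                   # stands in for database 29
add_training_sample(database, "好", "captured-audio-or-stroke-data")
```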
Please refer to Fig. 1 and Fig. 3 together. Fig. 3 is a functional block diagram of the processing unit 16 of the second embodiment shown in Fig. 1. In contrast to the first embodiment shown in Fig. 2, most of the recognition in the second embodiment is performed in hardware. The processing unit 16 includes at least a central processing unit (CPU) 22 and a memory 24 for storing application programs and digital data, as well as a speech input module 35 and a handwriting input module 37, where the speech input module 35 contains a speech recognition program 36 and the handwriting input module 37 contains a handwriting recognition program 38. The memory 24 includes the database 29 for storing information for the speech input module 35 and the handwriting input module 37. The central processing unit 22, the memory 24, the speech input module 35, and the handwriting input module 37 are all electrically connected to one another. As in the previous embodiment, the speech input module 35 and the handwriting input module 37 both make use of the database 29 in performing their work, and can adapt to the user's particular speech and handwriting characteristics.
Please refer to Fig. 1 through Fig. 4. Fig. 4 is a schematic diagram of a first group of characters 53, a second group of characters 54, and a third group of characters 55 produced according to the method of the present invention. Once the database 29 has been built and trained, the computer system 10, adapted to the input method of the present invention, is ready for use. When the user inputs characters through the microphone 17 and the tablet 18, the speech input module 25, 35 uses the speech recognition program 26, 36 to produce the first group of characters 53, a group of characters 56 that may match the user's speech input. Likewise, the handwriting input module 27, 37 uses the handwriting recognition program 28, 38 to produce the second group of characters 54, a group of characters 56 that may match the user's handwriting input. The computer system 10 then uses the first group of characters 53 and the second group of characters 54 to produce the third group of characters 55; every character 56 in the third group 55 is present in both the first group 53 and the second group 54. For example, the first group 53 produced by the speech recognition program 26, 36 might contain the characters 56 A, B, C, D, E, F, and G, while the second group 54 produced by the handwriting recognition program 28, 38 might contain the characters 56 B, D, J, H, K, and M. The computer system 10 then produces the third group 55 containing the characters B and D. The third group 55 is displayed to the user, allowing the user to select either B or D. In this way, the options offered to the user are greatly reduced, simplifying the input process. If the third group 55 contains only a single character 56, that single character 56 can be selected automatically without any further clicking by the user; with this automatic selection, the whole input process is accelerated. If the third group 55 is empty, that is, if no character 56 is found in both the first group 53 and the second group 54, the user must manually enter the desired character with the keyboard 14. The speech and handwriting characteristics of this missed character are then entered into the database 29, so that the training of the speech recognition programs 26, 36 and the handwriting recognition programs 28, 38 continues. Although in this embodiment the contents of the first group 53, the second group 54, and the third group 55 are single characters 56, the contents of these groups may also be word strings; that is, the input method can handle whole sentences rather than only single characters.
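As a purely illustrative sketch (not part of the patent text), one input cycle as described for Fig. 4 could be organized as follows; `speech_module`, `handwriting_module`, `display`, and `keyboard` are hypothetical stand-ins for components 25/35, 27/37, 12, and 14, and the final database update mirrors the training step sketched earlier:

```python
def handle_input(speech_audio, strokes, speech_module, handwriting_module,
                 database, display, keyboard):
    """One input cycle: intersect the candidate groups, auto-select, or fall back."""
    first_group = speech_module.candidates(speech_audio)              # group 53
    second_group = handwriting_module.candidates(strokes)             # group 54
    third_group = [c for c in first_group if c in set(second_group)]  # group 55

    if len(third_group) == 1:
        return third_group[0]                          # automatic selection, no click needed
    if third_group:
        return display.ask_user_to_pick(third_group)   # user picks, e.g. B or D

    # Empty third group: the user types the character on the keyboard, and the
    # missed speech/handwriting samples are stored so that training continues.
    character = keyboard.read_character()
    database.setdefault(character, []).append((speech_audio, strokes))
    return character
```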
Compared with the prior art, the input method of the present invention integrates a speech input method and a handwriting input method. The two input methods are used at the same time to produce a group of text containing single characters or word strings, in which each character is present in the output of both the speech input method and the handwriting input method. As a result, the input method of the present invention saves much of the time spent selecting among the candidates produced by the speech input method and the handwriting input method, and allows people who seldom type to reduce the amount of typing required. Because the handwriting input method is incorporated, the present invention also reduces the time needed to recognize the speech input. Since the speech input method and the handwriting input method each have their own shortcomings, combining them is more advantageous than using either one alone.
The above are only the preferred embodiments of the present invention; all equivalent variations and modifications made within the scope of the claims of the present invention shall fall within the scope of the present patent.
Claims (8)
1. An input method combining speech input and handwriting input, comprising:
using a speech recognition program to generate a first group of characters according to the speech input;
using a handwriting recognition program to generate a second group of characters according to the handwriting input;
generating a third group of characters, each character of which is present in both the first and second groups of characters; and
displaying at least one character of the third group.
2. The method of claim 1, further comprising providing at least one database, wherein the speech recognition program and the handwriting recognition program select the first group of characters and the second group of characters from the database.
3. The method of claim 2, further comprising adding a first character to the database, the first character being produced by using another input method.
4. The method of claim 3, wherein the speech recognition program uses a first standard for recognizing speech input to recognize the user's speech input, and the first standard is made to conform to the user's speech input style.
5. The method of claim 4, wherein the input method comprises adding to the database the speech data input by the user that corresponds to the first character.
6. The method of claim 3, wherein the handwriting recognition program uses a second standard for recognizing handwriting input to recognize the user's handwriting input, and the second standard is made to conform to the user's handwriting input style.
7. The method of claim 6, wherein the input method comprises adding to the database the handwriting data input by the user that corresponds to the first character.
8. The method of claim 3, wherein the input method comprises using a keyboard to produce the first character.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB011195444A CN1206581C (en) | 2001-05-29 | 2001-05-29 | Mixed input method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1388434A CN1388434A (en) | 2003-01-01 |
CN1206581C true CN1206581C (en) | 2005-06-15 |
Family
ID=4663669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB011195444A Expired - Fee Related CN1206581C (en) | 2001-05-29 | 2001-05-29 | Mixed input method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1206581C (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722263A (en) * | 2012-05-29 | 2012-10-10 | 李晶 | Character input method and device thereof |
- 2001-05-29: CN application CNB011195444A filed; granted as CN1206581C; status: not active (Expired - Fee Related)
Similar Documents
Publication | Title |
---|---|
KR100714769B1 (en) | Scalable neural network-based language identification from written text | |
CN102272827B (en) | Method and apparatus utilizing voice input to resolve ambiguous manually entered text input | |
US20080180283A1 (en) | System and method of cross media input for chinese character input in electronic equipment | |
CN1742273A (en) | Multimodal speech-to-speech language translation and display | |
JP2003015803A (en) | Japanese input mechanism for small keypad | |
US20020120651A1 (en) | Natural language search method and system for electronic books | |
CN1758211A (en) | Multimodal method to provide input to a computing device | |
CN1731511A (en) | Method and system for performing speech recognition on multi-language name | |
CN1359514A (en) | Multimodal data input device | |
US20020152075A1 (en) | Composite input method | |
CN101137979A (en) | Phrase constructor for translator | |
CN103324607A (en) | Method and device for word segmentation of Thai texts | |
KR100917552B1 (en) | Method and system for improving the fidelity of a dialog system | |
CN1129837C (en) | Mounting device for universal Chinese phonetic alphabet keyboard | |
CN1206581C (en) | Mixed input method | |
CN1275174C (en) | Chinese language input method possessing speech sound identification auxiliary function and its system | |
CN1854997A (en) | Numbers and alphabets inputting method | |
CN100561469C (en) | Create and use the method and system of Chinese language data and user-corrected data | |
CN1965349A (en) | Multimodal disambiguation of speech recognition | |
JP2007535692A (en) | System and method for computer recognition and interpretation of arbitrarily spoken characters | |
Shakil et al. | Cognitive Devanagari (Marathi) text-to-speech system | |
CN1808354A (en) | Chinese character input method using phrase association and voice prompt for mobile information terminal | |
EP1617635A2 (en) | Speech recognition by a portable terminal for voice dialing | |
EP1729284A1 (en) | Method and systems for a accessing data by spelling discrimination letters of link names | |
Phaiboon et al. | Isarn Dharma Alphabets lexicon for natural language processing |
Legal Events
Code | Title | Description |
---|---|---|
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C14 | Grant of patent or utility model | |
GR01 | Patent grant | |
PP01 | Preservation of patent right | Effective date of registration: 20060925; Pledge (preservation): Preservation |
PD01 | Discharge of preservation of patent | Date of registration: 20070325; Pledge (preservation): Preservation |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20050615; Termination date: 20150529 |
EXPY | Termination of patent right or utility model | |