KR20130128172A - Mobile terminal and inputting keying method for the disabled - Google Patents
- Publication number
- KR20130128172A (application KR1020120052026A)
- Authority
- KR
- South Korea
- Prior art keywords
- representative word
- representative
- word
- command
- display unit
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L2021/065—Aids for the handicapped in understanding
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Telephone Function (AREA)
Abstract
Disclosed are a mobile communication terminal for the disabled and a text generation method performed by it. According to an aspect of the present invention, the terminal comprises: a voice input unit; a keypad display unit divided into initial, medial, and final jamo sections (choseong, jungseong, and jongseong), with a representative word matched to each jamo; a character memory unit storing the initial, medial, and final jamo character data corresponding to the voice data of each representative word; a character processing unit outputting the character data of the corresponding jamo when the voice data of a representative word is detected by the voice input unit; and a character display unit displaying the character data under the control of the character processing unit.
Description
The present invention relates to a mobile communication terminal and a text generation method for persons with speech and motor disabilities. More specifically, it relates to a mobile communication terminal and a text generation method in which a representative word is matched to each initial, medial, and final jamo, and a text message is generated by voice recognition of the representative words.
Recently, the use of mobile communication terminals has spread rapidly owing to their portability and convenience, and manufacturers are competitively developing terminals with more convenient functions to secure more users.
The most commonly used additional function of the mobile communication terminal is the text message; in recent years, users increasingly prefer communicating by text message over the call function.
To write a text message, the user must touch the keypad keys for the consonants and vowels that make up the message. Although the layout differs somewhat by character input method, a typical keypad consists of character keys for consonants and vowels, numeric keys, and a number of function keys such as navigation, menu, cancel, and confirmation keys.
Thus, when composing a text message on a conventional keypad, the user completes the message by touching each corresponding key. For many disabled users, however, touching each key of the keypad to complete a message is not easy.
As mobile communication terminals for the disabled, Korean Patent Application Nos. 2002-64317 and 2008-12578 proposed terminals for the visually impaired in which a braille arrangement and structure are added to the keypad, and Korean Patent Application Nos. 2001-6951 and 2002-27305 proposed terminals for the visually impaired that output a voice when a keypad key is touched.
However, it is still not easy for such users to complete a text message by touching each key of a compact keypad, nor is it easy for them to use the voice recognition function built into existing mobile communication terminals.
Therefore, an object of the present invention is to provide a mobile communication terminal and a text generation method that enable users with speech and motor disabilities to write text messages without touching the keypad.
According to an aspect of the present invention, a mobile communication terminal for the disabled includes: a voice input unit; a keypad display unit divided into initial, medial, and final jamo sections, with a representative word matched to each jamo; a character memory unit storing the initial, medial, and final jamo character data corresponding to the voice data of each representative word; a character processing unit outputting the character data of the corresponding jamo when the voice data of a representative word is detected by the voice input unit; and a character display unit displaying the character data under the control of the character processing unit.
The mobile communication terminal may further include: a command processing unit that outputs a command list corresponding to the character data output on the display unit; a command display unit that displays the command list output from the command processing unit; and a command memory unit in which the command list is stored.
The mobile communication terminal may further include: a representative word memory unit storing a list of representative words for each initial, medial, and final jamo; a representative word processing unit that outputs the representative word list from the representative word memory unit and determines the representative word with the highest voice recognition rate in the list; and a representative word display unit that displays the representative word list.
Meanwhile, the text generation method for the disabled of the present invention comprises: a representative word voice input step; a step of searching for the initial, medial, or final jamo corresponding to the representative word; and a step of displaying the jamo corresponding to the representative word.
Further, the step of searching for the jamo corresponding to the representative word may additionally include searching for commands corresponding to the syllables, words, and phrases completed by that jamo.
In addition, the step of displaying the jamo corresponding to the representative word may further include displaying a list of commands corresponding to the syllables, words, and phrases formed by the displayed jamo.
In addition, the representative word voice input step may include: selecting an initial, medial, or final jamo; retrieving the representative words for the selected jamo; displaying the retrieved representative word list for the selected jamo; inputting a voice for each representative word in the list; and displaying the representative word with the highest voice recognition rate.
Therefore, according to the present invention, the mobile communication terminal and text generation method for the disabled match each initial, medial, and final jamo to a representative word that the disabled user can utter easily, so that a text message is formed easily by combining the jamo recognized from those utterances.
In addition, by providing a list of commonly used commands for the syllables, words, and phrases completed by the jamo entered through representative words, the mobile communication terminal and text generation method of the present invention allow a text message to be formed easily.
Moreover, because the representative word with the highest voice recognition rate can be specified and changed for each user, the mobile terminal and text generation method of the present invention have the advantage of reducing errors in text message generation.
FIG. 1 is a schematic configuration diagram of the mobile communication terminal for the disabled of the present invention.
FIGS. 2A and 2B are block diagrams showing the flow of the text generation method for the disabled of the present invention.
FIGS. 3A to 3K are diagrams showing the basic use state for generating a text message in the present invention.
FIGS. 4A to 4D are diagrams showing the use state for generating a text message using a command in the present invention.
FIGS. 6A to 6D are diagrams showing the use state for changing a representative word in the present invention.
Hereinafter, the mobile communication terminal and text generation method for the disabled of the present invention will be described in detail with reference to the accompanying drawings.
As shown in FIG. 1, the mobile communication terminal for the disabled of the present invention is largely composed of a display unit 100, a control unit 200, a memory unit 300, and a voice input unit 400.
The term "disabled person" here refers to a user for whom vocalization and keypad touch are not easy, including users with brain lesion disorders. The present invention allows such a user to generate a text message by uttering representative words that are easy to speak, without requiring any keypad touch.
Here, the term "representative word" denotes a word that is matched to an initial, medial, or final jamo, that the user can utter easily, and that is readily recognized by the voice input unit.
First, as shown in FIG. 3A, the keypad display unit 110 is divided into an initial display unit 111, a medial display unit 112, and a final display unit 113, and a representative word is matched to each jamo. When a representative word is voice-recognized through the voice input unit 400, the character data of the corresponding jamo is displayed on the character display unit 120.
Based on this configuration, the present invention allows a text message to be generated by representative word recognition, as shown in FIGS. 3A to 3K. FIGS. 3A to 3K illustrate the flow of completing the text message "a cold winter day with snow" using the present invention.
Referring to FIG. 3A, the initial display unit 111 of the keypad display unit 110 is activated first. When the user utters the representative word matched to the initial "ㄴ" and the utterance is recognized, "ㄴ" is displayed as the initial jamo on the character display unit.
Then, the medial display unit 112 is automatically activated, and the user utters "Hangang" as the representative word corresponding to the medial "ㅜ"; when the utterance is recognized, "ㅜ" is displayed as the medial jamo on the character display unit.
Next, the final display unit 113 is automatically activated, and the user utters "carp" as the representative word corresponding to the final "ㄴ"; when this utterance is recognized, the final "ㄴ" is displayed on the character display unit, completing the first syllable "눈" (snow).
Next, as shown in FIG. 3B, the initial "ㅇ" and the medial "ㅗ" of the second syllable are recognized and displayed through their representative words in the same manner as described above. Subsequently, the final display unit 113 is automatically activated, and when the user utters the representative word "carp" corresponding to "ㄴ", "ㄴ" is recognized as the final jamo and displayed on the character display unit 120.
Subsequently, as shown in FIG. 3C, the initial display unit 111 is automatically activated; when the user utters "skip", the display switches to the medial display unit 112, "skip" being the representative word for the corresponding function key. When the user then utters "grape" as the representative word for the medial "ㅡ" of the third syllable, the "ㄴ" previously recognized as a final jamo is automatically converted into the initial consonant of the new syllable, and "snowy" is displayed on the character display unit 120.
Next, as shown in FIG. 3D, when the user utters "crosswalk", the representative word for the "offset" function key, the cursor of the character display unit 120 is moved accordingly.
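The jamo-by-jamo assembly illustrated in this walkthrough corresponds to the standard Unicode Hangul composition rule, under which a syllable's code point is computed arithmetically from the indices of its initial, medial, and final jamo. The following sketch is illustrative only and is not part of the patent disclosure:

```python
# Unicode composes a Hangul syllable arithmetically from jamo indices:
#   code point = 0xAC00 + (initial_index * 21 + medial_index) * 28 + final_index
INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 choseong
MEDIALS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")        # 21 jungseong
FINALS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 jongseong slots

def compose(initial, medial, final=""):
    """Combine an initial, a medial, and an optional final jamo into one syllable."""
    return chr(0xAC00
               + (INITIALS.index(initial) * 21 + MEDIALS.index(medial)) * 28
               + FINALS.index(final))

# "눈" (snow) from the walkthrough: initial ㄴ, medial ㅜ, final ㄴ
print(compose("ㄴ", "ㅜ", "ㄴ"))  # 눈
```

This is why the terminal can render each syllable as soon as its jamo are recognized: no lookup table of complete syllables is needed, only the three jamo indices.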
The operation relationship between the command processing unit 220 and the command display unit 130 is described below with reference to FIGS. 4A to 4D.
First, as shown in FIG. 4A, the user utters the representative words for the initial and medial jamo of the first syllable, so that "누" (nu) is displayed on the character display unit 120, and the command list corresponding to it is displayed on the command display unit 130.
Meanwhile, the operation relationship between the representative word processing unit 230 and the representative word display unit 140 is described below with reference to FIGS. 6A to 6D.
First, as shown in FIG. 6A, when the user selects the environment setting (representative word change) on the terminal interface, the selection keys 141, 142, and 143 and an automatic setting key are displayed on the representative word display unit 140.
Meanwhile, as shown in FIG. 2A, the text generation method for the disabled according to the present invention comprises a representative word voice input step (S10), a step of searching for the initial, medial, or final jamo corresponding to the representative word (S20), and a step of displaying the jamo corresponding to the representative word (S30). The text message is completed by repeating these steps (S10, S20, S30).
Here, the step S20 of searching for the jamo corresponding to the representative word includes a step S21 of searching for commands matching the syllables, words, and phrases completed by that jamo. When commands are found, the display step S30 includes a step S31 of displaying the list of commands corresponding to the syllables, words, and phrases formed by the displayed jamo. Thus, the user does not need to complete every jamo of the intended message by uttering representative words; frequently used syllables, words, and phrases can instead be replaced by the corresponding commands, which makes texting easier.
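The repeated S10→S20→S30 loop with the optional command search (S21/S31) can be sketched as below. The representative-word table and command table here are hypothetical placeholders, since the patent's actual Korean word lists are not given in this translation:

```python
# Hypothetical representative-word table (spoken word -> jamo); illustrative only.
REP_WORDS = {"butterfly": "ㄴ", "hangang": "ㅜ", "grape": "ㅡ"}

# Hypothetical command table: a prefix of entered jamo -> frequently used words.
COMMANDS = {"ㄴㅜ": ["눈", "누나"]}

def process_utterances(utterances):
    """Repeat S10 (voice input) -> S20 (jamo search) -> S30 (display),
    looking up a command list for the accumulated jamo (S21/S31)."""
    typed = []
    suggestions = []
    for word in utterances:         # S10: representative word voice input
        jamo = REP_WORDS.get(word)  # S20: search the jamo for the word
        if jamo is None:
            continue                # unrecognized utterance: ignore it
        typed.append(jamo)          # S30: display the recognized jamo
        suggestions = COMMANDS.get("".join(typed), [])  # S21/S31: command list
    return "".join(typed), suggestions
```

With this structure, choosing a suggested command short-circuits the remaining jamo utterances for that word, which is the labor-saving effect the paragraph above describes.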
In addition, the representative word voice input step (S10) allows the representative word for a given jamo to be selected or changed. That is, the user changes the representative word for a jamo when the matched word is hard to vocalize or when errors occur because of its low voice recognition rate. To this end, as shown in FIG. 2B, a jamo is first selected (S31). The representative words for the selected jamo are then retrieved and the retrieved representative word list is displayed (S32). Next, a voice is input for each representative word in the displayed list (S33), and the representative word with the highest voice recognition rate is determined and displayed (S34). As mentioned above, determining and presenting the representative word with the highest voice recognition rate among the list is performed by the representative word processing unit 230.
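The selection logic of FIG. 2B reduces to scoring one trial utterance per candidate and keeping the best scorer. A minimal sketch, with illustrative candidate words and scores that are not from the patent:

```python
# Sketch of the representative-word change flow (FIG. 2B): the user utters each
# candidate in the displayed list, a recognizer scores each trial utterance,
# and the candidate with the highest recognition rate is kept for the jamo.

def pick_representative(candidates, recognition_rate):
    """Return the candidate whose trial utterance scored highest."""
    return max(candidates, key=recognition_rate)

# Illustrative scores in [0, 1], one per trial utterance (S33).
trial_scores = {"hangang": 0.92, "river": 0.65, "harbor": 0.71}
best = pick_representative(list(trial_scores), trial_scores.get)
print(best)  # hangang
```

Per-user selection like this is what the document credits with reducing recognition errors: the word bound to each jamo is the one that particular user's voice is recognized on most reliably.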
While the present invention has been shown and described in connection with preferred embodiments, those skilled in the art will readily appreciate that various modifications and changes can be made without departing from the spirit and scope of the invention as defined by the following claims.
100: display unit 110: keypad display unit
120: character display unit 130: command display unit
140: representative word display unit 200: control unit
210: character processing unit 220: command processing unit
230: representative word processing unit 300: memory unit
310: character memory unit 320: command memory unit
330: representative word memory unit 400: voice input unit
Claims (7)
1. A mobile communication terminal for the disabled, comprising:
a voice input unit;
a keypad display unit divided into initial, medial, and final jamo sections, with a representative word matched to each jamo;
a character memory unit for storing the initial, medial, and final jamo character data corresponding to the voice data of each representative word;
a character processing unit for outputting the character data of the corresponding jamo when the voice data of a representative word is detected from the voice input unit; and
a character display unit for displaying the character data under the control of the character processing unit.
2. The mobile communication terminal for the disabled of claim 1, further comprising:
a command processing unit for searching for commands corresponding to the character data output on the display unit and outputting the searched command list;
a command display unit for displaying the command list output from the command processing unit; and
a command memory unit for storing the command list.
3. The mobile communication terminal for the disabled of claim 1, further comprising:
a representative word memory unit for storing a representative word list for each initial, medial, and final jamo;
a representative word processing unit for searching the representative word memory unit, outputting the searched representative word list, and determining the representative word with the highest voice recognition rate in the list; and
a representative word display unit for displaying the representative word list and the representative word with the highest voice recognition rate.
4. A text generation method for the disabled, comprising:
a representative word voice input step;
searching for the initial, medial, or final jamo corresponding to the representative word; and
displaying the jamo corresponding to the representative word.
5. The method of claim 4, wherein the step of searching for the jamo corresponding to the representative word includes searching for commands matching the syllables, words, and phrases completed by that jamo.
6. The method of claim 5, wherein the step of displaying the jamo corresponding to the representative word includes separately displaying a list of commands corresponding to the syllables, words, and phrases formed by the displayed jamo.
7. The method of claim 4, wherein the representative word voice input step comprises:
selecting an initial, medial, or final jamo;
retrieving representative words for the selected jamo;
displaying the retrieved representative word list for the selected jamo;
inputting a voice for each representative word in the displayed list; and
displaying the representative word with the highest voice recognition rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120052026A KR20130128172A (en) | 2012-05-16 | 2012-05-16 | Mobile terminal and inputting keying method for the disabled |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120052026A KR20130128172A (en) | 2012-05-16 | 2012-05-16 | Mobile terminal and inputting keying method for the disabled |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20130128172A true KR20130128172A (en) | 2013-11-26 |
Family
ID=49855422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020120052026A KR20130128172A (en) | 2012-05-16 | 2012-05-16 | Mobile terminal and inputting keying method for the disabled |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20130128172A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104536644A (en) * | 2014-12-15 | 2015-04-22 | 深圳市金立通信设备有限公司 | Terminal |
KR20150043272A (en) * | 2015-04-03 | 2015-04-22 | 박남태 | The method of voice control for display device |
CN105100081A (en) * | 2015-07-02 | 2015-11-25 | 惠州Tcl移动通信有限公司 | Mobile terminal based on voice services and method for realizing voice services thereof |
2012-05-16: Application KR1020120052026A filed in KR; published as KR20130128172A; status: active, IP Right Grant.
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right |