KR20130128172A - Mobile terminal and inputting keying method for the disabled - Google Patents

Mobile terminal and inputting keying method for the disabled Download PDF

Info

Publication number
KR20130128172A
Authority
KR
South Korea
Prior art keywords
representative word
representative
word
command
display unit
Prior art date
Application number
KR1020120052026A
Other languages
Korean (ko)
Inventor
신대진
공병구
Original Assignee
(주)인피니티텔레콤
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)인피니티텔레콤 filed Critical (주)인피니티텔레콤
Priority to KR1020120052026A priority Critical patent/KR20130128172A/en
Publication of KR20130128172A publication Critical patent/KR20130128172A/en

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L2021/065 Aids for the handicapped in understanding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

According to an aspect of the present invention, there is provided a mobile communication terminal comprising a voice input unit; a keypad display unit which is divided into initial, medial, and final jamo keys, with a representative word matched to each initial, medial, and final jamo; a character memory unit which stores character data for the initial, medial, and final jamo corresponding to the voice data of the representative words; a character processing unit which outputs the character data for the corresponding initial, medial, or final jamo when the voice data of a representative word is detected by the voice input unit; and a character display unit which displays the character data under the control of the character processing unit. The present invention relates to a mobile communication terminal for the disabled configured in this way, and to a text generation method performed by such a configuration.

Description

Mobile communication terminal and text generation method for the disabled {MOBILE TERMINAL AND INPUTTING KEYING METHOD FOR THE DISABLED}

The present invention relates to a mobile communication terminal and a text generation method for persons with speech and motor disabilities. More specifically, it relates to a mobile communication terminal and a text generation method in which a representative word is matched to each initial, medial, and final jamo, so that a text message is generated by voice recognition of the representative words.

Recently, the use of mobile communication terminals has spread rapidly owing to the convenience of portable terminals, and manufacturers are competitively developing terminals with ever more convenient functions in order to attract users.

The most commonly used additional function of a mobile communication terminal is the text message; in recent years, there has been a growing tendency to prefer communication by text message over the call function.

To write a text message, the user must touch the keypad keys for the consonants and vowels that make up the message. Although general keypads differ somewhat depending on the character input method, they consist of character keys for consonants and vowels, a plurality of numeric keys, and a plurality of function keys such as navigation, menu, cancel, and confirmation keys.

As described above, when a text message is created with a conventional keypad, the user completes the message by touching each corresponding key. For the disabled, however, it is not easy to complete a text message by touching the individual keys of the keypad.

As mobile communication terminals for the handicapped, Korean Patent Application Nos. 2002-64317 and 2008-12578 propose terminals for the visually impaired in which a braille arrangement and structure are added to the keypad, while Application Nos. 2001-6951 and 2002-27305 propose terminals for the visually impaired that output a voice when a keypad key is touched.

However, it is still not easy for persons with speech and motor disabilities to complete a text message by touching each key of a compactly arranged keypad, nor is it easy for them to use the voice recognition function built into existing mobile communication terminals.

Therefore, an object of the present invention is to provide a mobile communication terminal and a text generation method that enable text messages to be written without touching the keypad, even by persons with speech and motor disabilities.

According to an aspect of the present invention, there is provided a mobile communication terminal for the disabled, comprising a voice input unit; a keypad display unit which is divided into initial, medial, and final jamo keys, with a representative word matched to each initial, medial, and final jamo; a character memory unit for storing character data for the initial, medial, and final jamo corresponding to the voice data of the representative words; a character processing unit for outputting the character data for the corresponding initial, medial, or final jamo when the voice data of a representative word is detected by the voice input unit; and a character display unit for displaying the character data under the control of the character processing unit.

The mobile communication terminal may further include a command processing unit for outputting a command list corresponding to the character data output on the display unit; a command display unit for displaying the command list output from the command processing unit; and a command memory unit in which the command list is stored.

The mobile communication terminal may further include a representative word memory unit for storing a representative word list for each initial, medial, and final jamo; a representative word processing unit for outputting the representative word list from the representative word memory unit and determining the representative word with the highest voice recognition rate in the list; and a representative word display unit for displaying the representative word list.

Meanwhile, the text generation method for the disabled of the present invention comprises a representative word voice input step; a step of searching for the initial, medial, or final jamo corresponding to the representative word; and a step of displaying the initial, medial, or final jamo corresponding to the representative word.

Further, the step of searching for the initial, medial, or final jamo corresponding to the representative word may additionally include searching for commands matching the syllables, words, and phrases completed by that jamo.

In addition, the step of displaying the initial, medial, or final jamo corresponding to the representative word may further include displaying a command list corresponding to the syllables, words, and phrases formed by the displayed jamo.

In addition, the representative word voice input step may include selecting an initial, medial, or final jamo; retrieving the representative words for the selected jamo; displaying the retrieved representative word list for the selected jamo; a voice input step for each representative word in the list; and displaying the representative word with the highest speech recognition rate.

Therefore, according to the present invention, the mobile communication terminal and text generation method for the disabled match each initial, medial, and final jamo to a representative word that is easy for the disabled user to speak, so that a text message is easily formed by combining the initial, medial, and final jamo.

In addition, by providing a list of commonly used commands for the syllables, words, and phrases completed by the jamo entered through representative words, the mobile communication terminal and text generation method of the present invention make text messages easy to compose.

In addition, since a representative word with a high speech recognition rate can be assigned and changed for each user, the mobile terminal and text generation method for the disabled according to the present invention have the advantage of reducing errors in text message generation.

1 is a schematic configuration diagram of the mobile communication terminal for the disabled of the present invention.
2a and 2b are block diagrams showing the flow of the text generation method for the disabled of the present invention.
3a to 3k are basic use state diagrams for generating a text message in the present invention.
4a to 4d are use state diagrams for generating a text message using a command in the present invention.
5 is a use state diagram for the case where a representative word is not recognized in the present invention.
6a to 6d are use state diagrams for changing a representative word in the present invention.

Hereinafter, the mobile terminal and text generation method for the disabled of the present invention will be described in detail with reference to the accompanying drawings.

As shown in FIG. 1, the mobile communication terminal for the disabled of the present invention is characterized by being largely composed of a display unit 100, a controller 200, a memory unit 300, and a voice input unit 400.

The term "disabled person" here refers to a user for whom vocalization and keypad touch are not easy, including users with brain lesion disorders. The present invention allows such a user to generate a text message by speaking representative words that are easy to pronounce, without requiring any keypad touch.

Here, a "representative word" is a word that the voice input unit 400 recognizes easily from the speech of a disabled user, defined for each initial, medial, and final jamo. Through experiment, the applicant collected groups of words that are easy for the disabled to pronounce and matched a group to each initial, medial, and final jamo. A plurality of candidate words is stored for each jamo, and the word actually matched to each jamo is determined automatically or selected by the user.

The display unit 100 is composed of a keypad display unit 110, a character display unit 120, a command display unit 130, and a representative word display unit 140, and displays the keypad, characters, commands, and representative words under the control of the controller 200.

The controller 200 includes a character processor 210, a command processor 220, and a representative word processor 230. When a representative word is input from the voice input unit 400, the controller searches the memory unit 300 for the corresponding character, command, and representative word, respectively, and outputs them to the display unit 100.

The memory unit 300 includes a character memory unit 310, a command memory unit 320, and a representative word memory unit 330, which store data for characters, commands, and representative words, respectively.

The voice input unit 400 recognizes the voice of a representative word and transmits the corresponding signal to the controller 200.

First, as shown in FIG. 3A, the keypad display unit 110 includes an initial display unit, a medial display unit, and a final display unit. Although not shown in the drawings, a number display unit and a symbol display unit may also be included; the following description covers the initial, medial, and final display units.

In particular, the keypad display unit 110 displays the representative word matched to each jamo key of the initial display unit 111, the medial display unit 112, and the final display unit 113. The user reads and speaks the representative word for the desired jamo key, and the corresponding initial, medial, or final jamo is then displayed on the character display unit 120. The keypad display unit 110 is also configured to accept touch input; however, since the present invention targets the handicapped, for whom the keypad is not easily touched, the initial display unit 111, the medial display unit 112, and the final display unit 113 are switched automatically or manually, unlike an existing keypad arrangement.
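The automatic switching between the three display units amounts to a small state cycle. As a minimal sketch (not the patent's implementation; names are illustrative), the keypad state after each recognized jamo could advance as:

```python
# Hypothetical sketch of the automatic keypad switching: after each
# recognized jamo the display cycles initial -> medial -> final -> initial.
STATES = ["initial", "medial", "final"]

def next_state(state):
    """Return the display unit shown after a jamo is accepted in `state`."""
    return STATES[(STATES.index(state) + 1) % len(STATES)]

print(next_state("initial"))  # medial
print(next_state("final"))    # initial
```

Manual switching, and the skip function key described later, would simply override this cycle.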

When a representative word is voice-recognized through the voice input unit 400, the character processing unit 210 searches the character data stored in the character memory unit 310 for the initial, medial, or final jamo matching the input representative word, and displays the found jamo on the character display unit 120. Meanwhile, as shown in FIG. 5, when the representative word input from the voice input unit 400 is not recognized, the character processing unit 210 may display a guide on the character display unit 120 asking the user to repeat the representative word.
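The lookup performed by the character processing unit can be sketched as a table from recognized representative words to jamo, keyed by the current display unit. The Korean entries below are assumptions inferred from the translation ("Seoul", "Hangang", "carp"); the patent does not publish the actual word lists:

```python
# Hypothetical representative-word tables, one per keypad state.
# The same jamo may have different representative words as initial vs. final.
REPRESENTATIVE_WORDS = {
    "initial": {"서울": "ㄴ"},   # "Seoul"   -> initial consonant
    "medial":  {"한강": "ㅜ"},   # "Hangang" -> medial vowel
    "final":   {"잉어": "ㄴ"},   # "carp"    -> final consonant
}

def lookup_jamo(state, recognized_word):
    """Return the jamo matched to the recognized word in the current state,
    or None so the UI can show the re-utterance guide (cf. FIG. 5)."""
    return REPRESENTATIVE_WORDS[state].get(recognized_word)

print(lookup_jamo("medial", "한강"))  # ㅜ
```

Keying the table by keypad state mirrors the automatic display switching: only the words for the currently shown display unit need to be matched.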

Based on such a configuration, the present invention allows the text message to be generated by the representative word recognition as shown in FIGS. 3A to 3K. Figures 3A-3K illustrate a flow for completing a text message of "a cold winter day with snow" using the present invention.

Referring to FIG. 3A, the initial display unit 111 of the keypad display unit 110 is automatically displayed at the bottom of the character display unit 120. The user utters the representative word "Seoul" matched to the initial consonant "ㄴ". When the voice of the representative word reaches the character processing unit 210 through the voice input unit 400, the character processing unit 210 finds the "ㄴ" matched to the representative word in the character memory unit 310 and displays it on the character display unit 120. Hereinafter, since the operations of the voice input unit 400, the character processing unit 210, the character memory unit 310, and the character display unit 120 are the same, their description is omitted.

Then the keypad automatically switches to the medial display unit 112; the user utters "Hangang", the representative word matched to the medial vowel "ㅜ", and when the utterance is recognized, "ㅜ" is displayed as the medial on the character display unit.

Next, the keypad automatically switches to the final display unit 113; the user utters "carp", the representative word matched to the final consonant "ㄴ", and when the utterance is recognized, "ㄴ" is displayed as the final on the character display unit, completing the first syllable "눈" ("snow").

Next, as shown in FIG. 3B, the initial "ㅇ" and the medial "ㅗ" of the second syllable are recognized and displayed through their representative words in the same manner as above. The final display unit 113 then appears automatically; when the user utters "carp", the representative word matched to "ㄴ", "ㄴ" is recognized as the final consonant and the character display unit 120 shows "눈 온".

Subsequently, as shown in FIG. 3C, the initial display unit 111 appears automatically; when the user utters the skip command, the keypad switches to the medial display unit 112 (here "skip" is the representative word matched to that function key). When the user then utters "grape", the representative word for the medial "ㅡ" of the third syllable, the "ㄴ" previously recognized as a final consonant is automatically carried over to become the initial consonant of the new syllable, and "눈 오느" is displayed on the character display unit 120. The keypad display unit 110 then switches automatically to the final display unit 113, and when the user utters "carp" for the final "ㄴ" of the third syllable, the character display unit 120 displays "눈 오는" ("snowy") and the third syllable is complete.

Next, as shown in FIG. 3D, when the user utters "crosswalk", the representative word matched to the spacing function key, the cursor of the character display unit 120 moves so that a space is inserted. By uttering the representative words for each initial, medial, final, and function key in this way, the user eventually completes the text message "a cold winter day with snow".
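The jamo gathered through this flow are ultimately combined into complete Hangul syllables. The patent does not specify the composition mechanism, but the standard Unicode formula for Hangul syllables gives a minimal sketch:

```python
# Standard Unicode Hangul composition:
#   syllable = 0xAC00 + (initial_index * 21 + medial_index) * 28 + final_index
INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
MEDIALS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
FINALS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def compose(initial, medial, final=""):
    """Combine an initial, a medial, and an optional final jamo into one syllable."""
    code = (0xAC00
            + (INITIALS.index(initial) * 21 + MEDIALS.index(medial)) * 28
            + FINALS.index(final))
    return chr(code)

print(compose("ㄴ", "ㅜ", "ㄴ"))  # 눈 -- the first syllable of the FIG. 3 example
print(compose("ㅇ", "ㅗ"))        # 오
```

The final-consonant carry-over shown in FIG. 3C corresponds to recomposing the previous syllable without its final and starting a new syllable with that consonant as the initial.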

The command processor 220 searches the command memory unit 320 for commands corresponding to the syllables, words, or phrases output on the character display unit 120 and displays them on the command display unit 130. That is, commands corresponding to the syllables, words, and phrases that the user has produced on the character display unit 120 by voicing representative words are searched, and the resulting command list is displayed on the command display unit 130; when the user selects a command by touching the command display unit 130, the command replaces the corresponding syllables, words, or phrases on the character display unit 120. In addition, the command processor 220 transmits each text message the user writes through representative words to the command memory unit 320: the message is divided into words and phrases, and the divided data is stored in the command memory unit 320. The command memory unit 320 thus accumulates the words, phrases, and sentences the user uses frequently, improving ease and accuracy when writing future text messages. Of course, the commands stored in the command memory unit 320 may also be data downloaded from the service provider.

An operation relationship between the command processor 220, the command memory 320, and the command display 130 will be described with reference to FIGS. 4A to 4D. 4A to 4D illustrate a case where a user wants to generate a text message "let's make a snowman".

First, as shown in FIG. 4A, the user utters the initial and medial of the first syllable so that "누" is displayed on the character display unit 120. When "누" is displayed, the command processing unit 220 automatically searches the command memory unit 320 for commands containing "누" and displays the command list on the command display unit 130. The displayed list is sorted on the command display unit 130 by the user's usage frequency or by a previously stored ranking. If the displayed commands contain no suitable word or phrase, the keypad automatically switches to the final display unit 113 after a certain time; when the user utters "carp" for the final consonant "ㄴ", "눈" is displayed on the character display unit 120. The command processing unit 220 then automatically searches the command memory unit 320 for commands containing "눈" and displays the command list on the command display unit 130. When the user selects "눈사람" ("snowman") from the list, as shown in FIG. 4B, the command processor 220 replaces "눈" on the character display unit 120 with the selected command. Although not shown in the drawing, a representative word may also be matched to each entry in the command list so that a command can be selected by voice. By repeating this flow, the user can easily complete a text message from the command list.
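The automatic command search in FIGS. 4A and 4B can be sketched as a substring match over stored commands, ranked by usage count. The commands and counts below are hypothetical; the patent only states that the list is sorted by frequency or a stored ranking:

```python
# Hypothetical command memory: command text -> how often the user has sent it.
COMMAND_MEMORY = {"눈사람": 7, "눈 오는": 3, "눈싸움": 1, "안녕": 9}

def search_commands(partial_text):
    """Return commands containing the partial text, most frequently used first."""
    hits = [cmd for cmd in COMMAND_MEMORY if partial_text in cmd]
    return sorted(hits, key=lambda cmd: COMMAND_MEMORY[cmd], reverse=True)

print(search_commands("눈"))  # ['눈사람', '눈 오는', '눈싸움']
```

A real implementation would likely use a prefix index rather than a linear scan, but the ranking-by-frequency behavior is the same.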

Meanwhile, the representative word processing unit 230 searches the representative word memory unit 330, which stores the representative word list for each initial, medial, and final jamo, for the representative words of the jamo selected by the user, and displays the retrieved list on the representative word display unit 140. In addition, the representative word processing unit 230 recognizes, through the voice input unit 400, the user's utterance of each representative word in the displayed list, and shows on the representative word display unit 140 the representative word with the highest speech recognition rate, or the representative words in order of recognition rate. The user then selects the representative word to be matched to the corresponding jamo, and the representative word processing unit 230 updates the keypad display unit 110 with the representative word selected for that jamo.

An operation relationship between the representative word processing unit 230, the representative word memory unit 330, and the representative word display unit 140 will be described with reference to FIGS. 6A to 6D.

First, as shown in FIG. 6A, when the user selects the environment setting (representative word change) on the terminal interface, selection keys 141, 142, and 143, an automatic setting key 144, and an advanced setting key 145 are displayed on the representative word display unit 140. When the user selects the initial consonant key 141, the representative word processing unit 230 searches the representative word memory unit 330 for the representative words matched to the corresponding initial consonant and sorts the representative word list on the representative word display unit 140. Here it is reasonable to sort the representative words in order of speech recognition rate according to a preset. When the user then selects the automatic setting key 144, the representative words matching the initial consonant are activated automatically in sorted order. Next, as shown in FIGS. 6B and 6C, the user speaks each activated representative word in turn, and the representative word processor 230 presents, on the representative word display unit 140, the spoken representative word with the highest voice recognition rate. When the user selects the presented representative word, as shown in FIG. 6D, it is automatically applied to the initial consonant on the keypad display unit 110. As a result, the user can assign to each initial, medial, and final jamo a representative word that his or her own voice recognizes well, reducing the errors and retries caused by a low voice recognition rate when writing text messages.
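The selection step in FIGS. 6B and 6C reduces to picking, per jamo, the candidate whose trial utterance scored highest. A minimal sketch, where recognition scores would come from the speech recognizer (the words and values here are made up):

```python
# Pick the representative word with the highest recognition rate for a jamo.
def best_representative(trials):
    """trials: list of (word, recognition_rate) pairs from test utterances."""
    return max(trials, key=lambda t: t[1])[0]

trials = [("서울", 0.72), ("소나무", 0.91), ("수박", 0.85)]  # hypothetical scores
print(best_representative(trials))  # 소나무
```

Presenting the full list in descending score order, as the patent also allows, is the same computation with `sorted(..., reverse=True)` in place of `max`.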

Meanwhile, as shown in FIG. 2A, the text generation method for the disabled according to the present invention comprises a representative word voice input step (S10); a step of searching for the initial, medial, or final jamo corresponding to the representative word (S20); and a step of displaying the initial, medial, or final jamo corresponding to the representative word (S30). By repeating steps S10, S20, and S30, the desired text message is completed.
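The repeated S10, S20, S30 cycle can be sketched as a loop that maps each recognized representative word to a jamo and accumulates it. The lookup table here is a hypothetical stand-in for the character memory unit 310:

```python
# Sketch of the S10-S20-S30 loop: each recognized representative word is
# looked up and the resulting jamo is appended to the message under way.
def generate_message(utterances, word_to_jamo):
    """utterances: recognized representative words, in order;
    word_to_jamo: hypothetical stand-in for the character memory unit 310."""
    jamo_sequence = []
    for word in utterances:                  # S10: voice input
        jamo = word_to_jamo.get(word)        # S20: search for the jamo
        if jamo is not None:
            jamo_sequence.append(jamo)       # S30: display (here: accumulate)
    return jamo_sequence

print(generate_message(["서울", "한강"], {"서울": "ㄴ", "한강": "ㅜ"}))
```

Unrecognized utterances fall through without appending anything, which is where the re-utterance guide of FIG. 5 would be shown.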

Here, the step S20 of searching for the jamo corresponding to the representative word includes a step S21 of searching for commands matching the syllables, words, and phrases completed by that jamo. When commands are found, the display step S30 includes a step S31 of displaying the list of commands corresponding to the syllables, words, and phrases formed by the displayed jamo. Thereby, the user does not need to complete every jamo of the intended text message by uttering representative words; replacing frequently used syllables, words, and phrases with the corresponding commands makes texting easier.

In addition, the representative word voice input step (S10) may include selecting or changing the representative word for a given jamo. That is, the user changes the representative word for a jamo when the matched representative word is hard to pronounce or causes errors due to a low voice recognition rate. To this end, as shown in FIG. 2B, a step of selecting the jamo is performed first (S31), followed by a step of searching for the representative words for the selected jamo (S32) and a step of displaying the retrieved representative word list for the selected jamo (S33). Next, a voice input step is performed for each representative word in the displayed list (S34). When the voices for the representative words have been input, the representative word with the highest voice recognition rate in the list is determined and displayed (S35); as mentioned above, this determination and presentation is performed by the representative word processing unit 230. Finally, when the user selects the suggested representative word, the representative word for the corresponding jamo is changed to the one with the highest recognition rate for the user's voice.

Although the present invention has been shown and described with reference to preferred embodiments, those skilled in the art will readily appreciate that various modifications and changes may be made without departing from the spirit and scope of the present invention as defined by the claims below.

100: display unit 110: keypad display unit
120: character display unit 130: command display unit
140: representative word display unit 200: control unit
210: character processing unit 220: command processing unit
230: representative word processing unit 300: memory unit
310: character memory unit 320: command memory unit
330: representative word memory unit 400: voice input unit

Claims (7)

In the mobile communication terminal comprising a voice input unit,
A keypad display unit which is divided into initial, medial, and final jamo keys and in which a representative word is matched to each initial, medial, and final jamo;
A character memory unit for storing character data for the initial, medial, and final jamo corresponding to the voice data of the representative words;
A character processing unit for outputting the character data for the corresponding initial, medial, or final jamo when the voice data of a representative word is detected by the voice input unit;
And a character display unit for displaying the character data under the control of the character processing unit.
The method of claim 1,
A command processor for searching for a command corresponding to the character data output on the display unit and outputting a searched command list;
A command display unit displaying a command list output from the command processing unit;
And a command memory unit in which the command list is stored; the mobile communication terminal for the disabled, characterized in that it is further configured as above.
The method of claim 1,
A representative word memory unit for storing a representative word list for each initial, medial, and final jamo;
A representative word processing unit for searching the representative word memory unit for representative words, outputting the retrieved representative word list, and determining the representative word with the highest voice recognition rate in the list;
And a representative word display unit for displaying the representative word list and the representative word with the highest voice recognition rate; the mobile communication terminal for the disabled, characterized in that it is further configured as above.
A representative word voice input step;
Searching for the initial, medial, or final jamo corresponding to the representative word;
And displaying the initial, medial, or final jamo corresponding to the representative word; a text generation method for the disabled, characterized in that it comprises the above steps.
5. The method of claim 4,
The text generation method for the disabled, characterized in that the step of searching for the initial, medial, or final jamo corresponding to the representative word includes searching for the commands matching the syllables, words, and phrases completed by that jamo.
6. The method of claim 5,
The text generation method for the disabled, characterized in that the step of displaying the initial, medial, or final jamo corresponding to the representative word includes separately displaying a list of commands corresponding to the syllables, words, and phrases formed by the displayed jamo.
The representative word voice input step comprises:
Selecting the initial, medial, and final jamo;
Retrieving representative words for the selected initial, medial, and final jamo;
Displaying the retrieved representative word list for the selected initial, medial, and final jamo;
A voice input step for each representative word in the representative word list;
Displaying the representative word with the highest voice recognition rate; the character generation method for the disabled characterized in that it comprises the above steps.
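Once a representative word has fixed each jamo, the claimed method completes a syllable from the selected initial, medial, and final components. A sketch of that final composition step using the standard Unicode Hangul syllable formula (the patent does not specify an encoding; this is an illustrative assumption):

```python
# Jamo tables in Unicode composition order: 19 initials, 21 medials,
# and 28 finals (index 0 = no final consonant).
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def compose(cho, jung, jong=""):
    """Combine initial, medial, and optional final jamo into one
    precomposed Hangul syllable (U+AC00 block)."""
    code = (0xAC00
            + (CHOSEONG.index(cho) * 21 + JUNGSEONG.index(jung)) * 28
            + JONGSEONG.index(jong))
    return chr(code)
```

For example, `compose("ㅎ", "ㅏ", "ㄴ")` yields the syllable 한, and `compose("ㄱ", "ㅏ")` yields 가.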

KR1020120052026A 2012-05-16 2012-05-16 Mobile terminal and inputting keying method for the disabled KR20130128172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120052026A KR20130128172A (en) 2012-05-16 2012-05-16 Mobile terminal and inputting keying method for the disabled

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120052026A KR20130128172A (en) 2012-05-16 2012-05-16 Mobile terminal and inputting keying method for the disabled

Publications (1)

Publication Number Publication Date
KR20130128172A true KR20130128172A (en) 2013-11-26

Family

ID=49855422

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120052026A KR20130128172A (en) 2012-05-16 2012-05-16 Mobile terminal and inputting keying method for the disabled

Country Status (1)

Country Link
KR (1) KR20130128172A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104536644A (en) * 2014-12-15 2015-04-22 深圳市金立通信设备有限公司 Terminal
KR20150043272A (en) * 2015-04-03 2015-04-22 박남태 The method of voice control for display device
CN105100081A (en) * 2015-07-02 2015-11-25 惠州Tcl移动通信有限公司 Mobile terminal based on voice services and method for realizing voice services thereof

Similar Documents

Publication Publication Date Title
RU2379767C2 (en) Error correction for speech recognition systems
CN105283914B (en) The system and method for voice for identification
AU2012227212B2 (en) Consolidating speech recognition results
KR101819458B1 (en) Voice recognition apparatus and system
KR100593589B1 (en) Multilingual Interpretation / Learning System Using Speech Recognition
KR20130128172A (en) Mobile terminal and inputting keying method for the disabled
KR102091684B1 (en) Voice recognition text correction method and a device implementing the method
JP2012226220A (en) Speech recognition device, speech recognition method, and speech recognition program
KR20160138613A (en) Method for auto interpreting using emoticon and apparatus using the same
KR20190057493A (en) Keypad system for foreigner korean alphabet learner
KR20110017600A (en) Apparatus for word entry searching in a portable electronic dictionary and method thereof
KR100655720B1 (en) Alphabet input apparatus in a keypad and method thereof
KR20160054751A (en) System for editing a text and method thereof
KR102112059B1 (en) Method for making hangul mark for chinese pronunciation on the basis of listening, and method for displaying the same, learning foreign language using the same
KR20090000858A (en) Apparatus and method for searching information based on multimodal
KR101447388B1 (en) Method of automatic word suggestion generation system to transcribe Korean characters into Latin script on input software
KR100625357B1 (en) Alphabet input apparatus in a keypad and method thereof
JP2006078829A (en) Speech recognition device and speech recognition method
KR20150084186A (en) Method for Hanja word suggestion list automatic generation and entry system by sounds of Korean letter entry
JP5083811B2 (en) Electronic device, control program, and control method
KR101704501B1 (en) Method, apparatus and computer-readable recording medium for improving a set of at least one semantic unit
CN115904172A (en) Electronic device, learning support system, learning processing method, and program
EP3489952A1 (en) Speech recognition apparatus and system
KR20100038853A (en) Method for inputting characters in terminal
KR100848727B1 (en) Alphabet input apparatus in a keypad and method thereof

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right